Principles of Accounting, Volume 1: Financial Accounting
Summary

1.1 Explain the Importance of Accounting and Distinguish between Financial and Managerial Accounting
Accounting is the process of organizing, analyzing, and communicating financial information that is used for decision-making. Accounting is often called the “language of business.” Financial accounting measures performance using financial reports and communicates results to those outside of the organization who may have an interest in the company’s performance, such as investors and creditors. Managerial accounting uses both financial and nonfinancial information to aid in decision-making.

1.2 Identify Users of Accounting Information and How They Apply Information
The primary goal of accounting is to provide accurate, timely information to decision makers. Accountants provide information to internal and external users. Financial accounting measures an organization’s performance in monetary terms. Accountants use common conventions to prepare and convey financial information. Financial accounting is historical in nature, but a series of historical events can be useful in establishing predictions. Financial accounting is intended for use by both internal and external users. Managerial accounting is primarily intended for internal users.

1.3 Describe Typical Accounting Activities and the Role Accountants Play in Identifying, Recording, and Reporting Financial Activities
Accountants play a vital role in many types of organizations. Organizations can be placed into three categories: for profit, governmental, and not for profit. For-profit organizations have a primary purpose of earning a profit. Governmental entities provide services to the general public, both individuals and organizations. Governmental agencies exist at the federal, state, and local levels. Not-for-profit entities have the primary purpose of serving a particular interest or need in communities. For-profit businesses can be further categorized into manufacturing, retail (or merchandising), and service. Manufacturing businesses are for-profit businesses that are designed to make a specific product or products. Retail firms purchase products and resell the products without altering the products. Service-oriented businesses provide services to customers.

1.4 Explain Why Accounting Is Important to Business Stakeholders
Stakeholders are persons or groups that rely on financial information to make decisions. Stakeholders include stockholders, creditors, governmental and regulatory agencies, customers, and managers and other employees. Stockholders are owners of a business. Publicly traded companies sell stock (ownership) to the general public. Privately held companies offer stock to employees or to select individuals or groups outside the organization. Creditors sometimes grant extended payment terms to other businesses, normally for short periods of time, such as thirty to forty-five days. Lenders are banks and other institutions that have a primary purpose of lending money for long periods of time. Businesses generally have three ways to raise capital (money): profitable operations, selling ownership (called equity financing), and borrowing from lenders (called debt financing). In business, profit means the inflows of resources are greater than the outflows of resources. Publicly traded companies are required to file with the Securities and Exchange Commission (SEC), a federal government agency charged with protecting the investing public.
Guidelines for the accounting profession are called accounting standards or generally accepted accounting principles (GAAP). The Securities and Exchange Commission (SEC) is responsible for establishing accounting standards for companies whose stocks are traded publicly on a national or regional stock exchange, such as the New York Stock Exchange (NYSE). Governmental and regulatory agencies at the federal, state, and local levels use financial information to accomplish the mission of protecting the public interest. Customers, employees, and the local community benefit when businesses are financially successful.

1.5 Describe the Varied Career Paths Open to Individuals with an Accounting Education
It is important for accountants to be well versed in written and verbal communication and to possess other nonaccounting skill sets. A bachelor’s degree is typically required for entry-level work in the accounting profession. Advanced degrees and/or professional certifications are beneficial for advancement within the accounting profession. Career paths within the accounting profession include auditing, taxation, financial accounting, consulting, accounting information systems, cost and managerial accounting, financial planning, and entrepreneurship. Internal control systems help ensure the company’s goals are being met and company assets are protected. Internal auditors work inside businesses and evaluate the effectiveness of internal control systems. Accountants help ensure that taxes are paid properly and in a timely manner. Accountants prepare financial statements that are used by decision makers inside and outside of the organization. Accountants can advise managers and other decision makers. Accountants are often an integral part of managing a company’s computerized accounting and information system. Cost accounting determines the costs involved with providing goods and services. Managerial accounting incorporates financial and nonfinancial information to make decisions for a business. Training in accounting is helpful for financial planning services for businesses and individuals. Accounting helps entrepreneurs understand the financial implications of their business. Accountants have opportunities to work for many types of organizations, including public accounting firms, corporations, governmental entities, and not-for-profit entities. Professional certifications offer many benefits to those in the accounting and related professions. Common professional certifications include Certified Public Accountant (CPA), Certified Management Accountant (CMA), Certified Internal Auditor (CIA), Certified Fraud Examiner (CFE), Chartered Financial Analyst (CFA), and Certified Financial Planner (CFP).
Chapter Outline
1.1 Explain the Importance of Accounting and Distinguish between Financial and Managerial Accounting
1.2 Identify Users of Accounting Information and How They Apply Information
1.3 Describe Typical Accounting Activities and the Role Accountants Play in Identifying, Recording, and Reporting Financial Activities
1.4 Explain Why Accounting Is Important to Business Stakeholders
1.5 Describe the Varied Career Paths Open to Individuals with an Accounting Education

Why It Matters
Jennifer has been in the social work profession for over 25 years. After graduating college, she started working at an agency that provided services to homeless women and children. Part of her role was to work directly with the homeless women and children to help them acquire adequate shelter and other necessities. Jennifer currently serves as the director of an organization that provides mentoring services to local youth.

Looking back on her career in the social work field, Jennifer indicates that there are two things that surprised her. The first thing that surprised her was that as a trained social worker she would ultimately become a director of a social work agency and would be required to make financial decisions about programs and how the money is spent. As a college student, she thought social workers would spend their entire careers providing direct support to their clients. The second thing that surprised her was how valuable it is for directors to have an understanding of accounting. She notes, “The best advice I received in college was when my advisor suggested I take an accounting course. As a social work student, I was reluctant to do so because I did not see the relevance. I didn’t realize so much of an administrator’s role involves dealing with financial issues. I’m thankful that I took the advice and studied accounting. For example, I was surprised that I would be expected to routinely present to the board our agency’s financial performance. The board includes several business professionals and leaders from other agencies. Knowing the accounting terms and having a good understanding of the information contained in the financial reports gives me a lot of confidence when answering their questions. In addition, understanding what influences the financial performance of our agency better prepares me to plan for the future.”
[ { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> A traditional adage states that “ accounting is the language of business . ” While that is true , you can also say that “ accounting is the language of life . ” At some point , most people will make a decision that relies on accounting information . <hl> For example , you may have to decide whether it is better to lease or buy a vehicle . Likewise , a college graduate may have to decide whether it is better to take a higher-paying job in a bigger city ( where the cost of living is also higher ) or a job in a smaller community where both the pay and cost of living may be lower .", "hl_sentences": "A traditional adage states that “ accounting is the language of business . ” While that is true , you can also say that “ accounting is the language of life . ” At some point , most people will make a decision that relies on accounting information .", "question": { "cloze_format": "Accounting is sometimes called the “language of _____.”", "normal_format": "Accounting is sometimes called the “language of which of the following”?", "question_choices": [ "Wall Street", "business", "Main Street", "financial statements" ], "question_id": "fs-idm169728896", "question_text": "Accounting is sometimes called the “language of _____.”" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "summarizes what has already occurred" }, "bloom": null, "hl_context": "Financial accounting information is mostly historical in nature , although companies and other entities also incorporate estimates into their accounting processes . For example , you will learn how to use estimates to determine bad debt expenses or depreciation expenses for assets that will be used over a multiyear lifetime . <hl> That is , accountants prepare financial reports that summarize what has already occurred in an organization . <hl> This information provides what is called feedback value . The benefit of reporting what has already occurred is the reliability of the information . Accountants can , with a fair amount of confidence , accurately report the financial performance of the organization related to past activities . The feedback value offered by the accounting information is particularly useful to internal users . That is , reviewing how the organization performed in the past can help managers and other employees make better decisions about and adjustments to future activities .", "hl_sentences": "That is , accountants prepare financial reports that summarize what has already occurred in an organization .", "question": { "cloze_format": "Financial accounting information ________.", "normal_format": "What is a characteristic of Financial accounting information?", "question_choices": [ "should be incomplete in order to confuse competitors", "should be prepared differently by each company", "provides investors guarantees about the future", "summarizes what has already occurred" ], "question_id": "fs-idm181308384", "question_text": "Financial accounting information ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> External users are those outside of the organization who use the financial information to make decisions or to evaluate an entity ’ s performance . 
<hl> <hl> For example , investors , financial analysts , loan officers , governmental auditors , such as IRS agents , and an assortment of other stakeholders are classified as external users , while still having an interest in an organization ’ s financial information . <hl> ( Stakeholders are addressed in greater detail in Explain Why Accounting Is Important to Business Stakeholders . )", "hl_sentences": "External users are those outside of the organization who use the financial information to make decisions or to evaluate an entity ’ s performance . For example , investors , financial analysts , loan officers , governmental auditors , such as IRS agents , and an assortment of other stakeholders are classified as external users , while still having an interest in an organization ’ s financial information .", "question": { "cloze_format": "External users of financial accounting information include all of the following except ________.", "normal_format": "Which of the following is not included in external users of financial accounting information?", "question_choices": [ "lenders such as bankers", "governmental agencies such as the IRS", "employees of a business", "potential investors" ], "question_id": "fs-idm170667104", "question_text": "External users of financial accounting information include all of the following except ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "managers" }, "bloom": null, "hl_context": "Since most managerial accounting activities are conducted for internal uses and applications , managerial accounting is not prepared using a comprehensive , prescribed set of conventions similar to those required by financial accounting . This is because managerial accountants provide managerial accounting information that is intended to serve the needs of internal , rather than external , users . In fact , managerial accounting information is rarely shared with those outside of the organization . <hl> Since the information often includes strategic or competitive decisions , managerial accounting information is often closely protected . <hl> <hl> The business environment is constantly changing , and managers and decision makers within organizations need a variety of information in order to view or assess issues from multiple perspectives . <hl> Financial accounting is also a foundation for understanding managerial accounting , which uses both financial and nonfinancial information as a basis for making decisions within an organization with the purpose of equipping decision makers to set and evaluate business goals by determining what information they need to make a particular decision and how to analyze and communicate this information . <hl> Managerial accounting information tends to be used internally , for such purposes as budgeting , pricing , and determining production costs . <hl> <hl> Since the information is generally used internally , you do not see the same need for financial oversight in an organization ’ s managerial data . <hl>", "hl_sentences": "Since the information often includes strategic or competitive decisions , managerial accounting information is often closely protected . The business environment is constantly changing , and managers and decision makers within organizations need a variety of information in order to view or assess issues from multiple perspectives . Managerial accounting information tends to be used internally , for such purposes as budgeting , pricing , and determining production costs . 
Since the information is generally used internally , you do not see the same need for financial oversight in an organization ’ s managerial data .", "question": { "cloze_format": "The group that would have access to managerial accounting information is ___.", "normal_format": "Which of the following groups would have access to managerial accounting information?", "question_choices": [ "bankers", "investors", "competitors of the business", "managers" ], "question_id": "fs-idm202187392", "question_text": "Which of the following groups would have access to managerial accounting information?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "An example may be helpful in clarifying the difference between cost and managerial accounting . Manufacturing companies often face the decision of whether to make certain components or purchase the components from an outside supplier . Cost accounting would calculate the cost of each alternative . <hl> Managerial accounting would use that cost and supplement the cost with nonfinancial information to arrive at a decision . <hl> Let ’ s say the cost accountants determine that a company would save $ 0.50 per component if the units were purchased from an outside supplier rather than being produced by the company . <hl> Managers would use the $ 0.50 per piece savings as well as nonfinancial considerations , such as the impact on the morale of current employees and the supplier ’ s ability to produce a quality product , to make a decision whether or not to purchase the component from the outside supplier . <hl> <hl> Examples of other decisions that require management accounting information include whether an organization should repair or replace equipment , make products internally or purchase the items from outside vendors , and hire additional workers or use automation . <hl>", "hl_sentences": "Managerial accounting would use that cost and supplement the cost with nonfinancial information to arrive at a decision . Managers would use the $ 0.50 per piece savings as well as nonfinancial considerations , such as the impact on the morale of current employees and the supplier ’ s ability to produce a quality product , to make a decision whether or not to purchase the component from the outside supplier . Examples of other decisions that require management accounting information include whether an organization should repair or replace equipment , make products internally or purchase the items from outside vendors , and hire additional workers or use automation .", "question": { "cloze_format": "All of the following are examples of managerial accounting activities except ________.", "normal_format": "All of the following are examples of managerial accounting activities except which?", "question_choices": [ "preparing external financial statements in compliance with GAAP", "deciding whether or not to use automation", "making equipment repair or replacement decisions", "measuring costs of production for each product produced" ], "question_id": "fs-idm170219888", "question_text": "All of the following are examples of managerial accounting activities except ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "Organizations share a common purpose or mission." }, "bloom": null, "hl_context": "We can classify organizations into three categories : for profit , governmental , and not for profit . These organizations are similar in several aspects . 
<hl> For example , each of these organizations has inflows and outflows of cash and other resources , such as equipment , furniture , and land , that must be managed . <hl> In addition , all of these organizations are formed for a specific purpose or mission and want to use the available resources in an efficient manner — the organizations strive to be good stewards , with the underlying premise of being profitable . <hl> Finally , each of the organizations makes a unique and valuable contribution to society . <hl> <hl> Given the similarities , it is clear that all of these organizations have a need for accounting information and for accountants to provide that information . <hl>", "hl_sentences": "For example , each of these organizations has inflows and outflows of cash and other resources , such as equipment , furniture , and land , that must be managed . Finally , each of the organizations makes a unique and valuable contribution to society . Given the similarities , it is clear that all of these organizations have a need for accounting information and for accountants to provide that information .", "question": { "cloze_format": "A false statement is that ___ .", "normal_format": "Which of the following is not true?", "question_choices": [ "Organizations share a common purpose or mission.", "Organizations have inflows and outflows of resources.", "Organizations add value to society.", "Organizations need accounting information." ], "question_id": "fs-idm391804128", "question_text": "Which of the following is not true?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> But in the case of a nonprofit ( not-for-profit ) organization the primary purpose or mission is to serve a particular interest or need in the community . <hl> <hl> A not-for-profit entity tends to depend on financial longevity based on donations , grants , and revenues generated . <hl> It may be helpful to think of not-for-profit entities as “ mission-based ” entities . It is important to note that not-for-profit entities , while having a primary purpose of serving a particular interest , also have a need for financial sustainability . An adage in the not-for-profit sector states that “ being a not-for-profit organization does not mean it is for-loss . ” That is , not-for-profit entities must also ensure that resources are used efficiently , allowing for inflows of resources to be greater than ( or , at a minimum , equal to ) outflows of resources . This allows the organization to continue and perhaps expand its valuable mission .", "hl_sentences": "But in the case of a nonprofit ( not-for-profit ) organization the primary purpose or mission is to serve a particular interest or need in the community . A not-for-profit entity tends to depend on financial longevity based on donations , grants , and revenues generated .", "question": { "cloze_format": "The primary purpose of a ___ business is to serve a particular need in the community.", "normal_format": "The primary purpose of what type of business is to serve a particular need in the community?", "question_choices": [ "for-profit", "not-for-profit", "manufacturing", "retail" ], "question_id": "fs-idm364000736", "question_text": "The primary purpose of what type of business is to serve a particular need in the community?" 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "computer manufacturer" }, "bloom": null, "hl_context": "<hl> Examples of retail firms are plentiful . <hl> <hl> Automobile dealerships , clothes , cell phones , and computers are all examples of everyday products that are purchased and sold by retail firms . <hl> <hl> What distinguishes a manufacturing firm from a retail firm is that in a retail firm , the products are sold in the same condition as when the products were purchased — no further alterations were made on the products . <hl>", "hl_sentences": "Examples of retail firms are plentiful . Automobile dealerships , clothes , cell phones , and computers are all examples of everyday products that are purchased and sold by retail firms . What distinguishes a manufacturing firm from a retail firm is that in a retail firm , the products are sold in the same condition as when the products were purchased — no further alterations were made on the products .", "question": { "cloze_format": "A ___ is not an example of a retailer.", "normal_format": "Which of the following is not an example of a retailer?", "question_choices": [ "electronics store", "grocery store", "car dealership", "computer manufacturer", "jewelry store" ], "question_id": "fs-idm396193680", "question_text": "Which of the following is not an example of a retailer?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "Accountants in governmental entities perform many of the same functions as accountants in public accounting firms and corporations . <hl> The primary goal of governmental accounting is to ensure proper tracking of the inflows and outflows of taxpayer funds using the proscribed standards . <hl> Some governmental accountants also prepare and may also audit the work of other governmental agencies to ensure the funds are properly accounted for . The major difference between accountants in governmental entities and accountants working in public accounting firms and corporations relates to the specific rules by which the financial reporting must be prepared . Whereas as accountants in public accounting firms and corporations use GAAP , governmental accounting is prepared under a different set of rules that are specific to governmental agencies , as previously referred to as the Governmental Accounting Standards Board ( GASB ) . Students continuing their study of accounting may take specific courses related to governmental accounting . <hl> A governmental entity provides services to the general public ( taxpayers ) . <hl> Governmental agencies exist at the federal , state , and local levels . <hl> These entities are funded through the issuance of taxes and other fees . <hl>", "hl_sentences": "The primary goal of governmental accounting is to ensure proper tracking of the inflows and outflows of taxpayer funds using the proscribed standards . A governmental entity provides services to the general public ( taxpayers ) . 
These entities are funded through the issuance of taxes and other fees .", "question": { "cloze_format": "A suitable description of a governmental agency is that it ___.", "normal_format": "A governmental agency can best be described by which of the following statements?", "question_choices": [ "has a primary purpose of making a profit", "has a primary purpose of using taxpayer funds to provide services", "produces goods for sale to the public", "has regular shareholder meetings" ], "question_id": "fs-idm388900560", "question_text": "A governmental agency can best be described by which of the following statements?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "local movie theater" }, "bloom": null, "hl_context": "Not-for-profit entities include charitable organizations , foundations , and universities . Unlike for-profit entities , not-for-profit organizations have a primary focus of a particular mission . Therefore , not-for-profit ( NFP ) accounting helps ensure that donor funds are used for the intended mission . <hl> Much like accountants in governmental entities , accountants in not-for-profit entities use a slightly different type of accounting than other types of businesses , with the primary difference being that not-for-profit entities typically do not pay income taxes . <hl> Examples of not-for-profit entities are numerous . Food banks have as a primary purpose the collection , storage , and distribution of food to those in need . Charitable foundations have as a primary purpose the provision of funding to local agencies that support specific community needs , such as reading and after-school programs . <hl> Many colleges and universities are structured as not-for-profit entities because the primary purpose is to provide education and research opportunities . <hl>", "hl_sentences": "Much like accountants in governmental entities , accountants in not-for-profit entities use a slightly different type of accounting than other types of businesses , with the primary difference being that not-for-profit entities typically do not pay income taxes . Many colleges and universities are structured as not-for-profit entities because the primary purpose is to provide education and research opportunities .", "question": { "cloze_format": "The ___ is likely not a type of not-for-profit entity.", "normal_format": "Which of the following is likely not a type of not-for-profit entity?", "question_choices": [ "public library", "community foundation", "university", "local movie theater" ], "question_id": "fs-idm370707920", "question_text": "Which of the following is likely not a type of not-for-profit entity?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 4, "ans_text": "E" }, "bloom": null, "hl_context": "It is no different when it comes to financial decisions . Decision makers rely on unbiased , relevant , and timely financial information in order to make sound decisions . In this context , the term stakeholder refers to a person or group who relies on financial information to make decisions , since they often have an interest in the economic viability of an organization or business . <hl> Stakeholders may be stockholders , creditors , governmental and regulatory agencies , customers , management and other employees , and various other parties and entities . 
<hl>", "hl_sentences": "Stakeholders may be stockholders , creditors , governmental and regulatory agencies , customers , management and other employees , and various other parties and entities .", "question": { "cloze_format": "___ are not considered a stakeholder of an organization.", "normal_format": "Which of the following is not considered a stakeholder of an organization?", "question_choices": [ "creditors", "lenders", "employees", "community residents", "a business in another industry" ], "question_id": "fs-idm167897584", "question_text": "Which of the following is not considered a stakeholder of an organization?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "investors who purchase an ownership in the business" }, "bloom": null, "hl_context": "<hl> A stockholder is an owner of stock in a business . <hl> Owners are called stockholders because in exchange for cash , they are given an ownership interest in the business , called stock . Stock is sometimes referred to as “ shares . ” Historically , stockholders received paper certificates reflecting the number of stocks owned in the business . Now , many stock transactions are recorded electronically . Introduction to Financial Statements discusses stock in more detail . Corporation Accounting offers a more extensive exploration of the types of stock as well as the accounting related to stock transactions .", "hl_sentences": "A stockholder is an owner of stock in a business .", "question": { "cloze_format": "Stockholders can best be defined as ___ .", "normal_format": "Stockholders can best be defined as which of the following?", "question_choices": [ "investors who lend money to a business for a short period of time", "investors who lend money to a business for a long period of time", "investors who purchase an ownership in the business", "analysts who rate the financial performance of the business" ], "question_id": "fs-idm604769552", "question_text": "Stockholders can best be defined as which of the following?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "For-profit businesses are organized into three categories : manufacturing , retail ( or merchandising ) , and service . Another way to categorize for-profit businesses is based on the availability of the company stock ( see Table 1.1 ) . <hl> A publicly traded company is one whose stock is traded ( bought and sold ) on an organized stock exchange such as the New York Stock Exchange ( NYSE ) or the National Association of Securities Dealers Automated Quotation ( NASDAQ ) system . <hl> Most large , recognizable companies are publicly traded , meaning the stock is available for sale on these exchanges . A privately held company , in contrast , is one whose stock is not available to the general public . Privately held companies , while accounting for the largest number of businesses and employment in the United States , are often smaller ( based on value ) than publicly traded companies . 
Whereas financial information and company stock of publicly traded companies are available to those inside and outside of the organization , financial information and company stock of privately held companies are often limited exclusively to employees at a certain level within the organization as a part of compensation and incentive packages or selectively to individuals or groups ( such as banks or other lenders ) outside the organization .", "hl_sentences": "A publicly traded company is one whose stock is traded ( bought and sold ) on an organized stock exchange such as the New York Stock Exchange ( NYSE ) or the National Association of Securities Dealers Automated Quotation ( NASDAQ ) system .", "question": { "cloze_format": "___ sell stock on an organized stock exchange such as the New York Stock Exchange.", "normal_format": "Which of the following sell stock on an organized stock exchange such as the New York Stock Exchange?", "question_choices": [ "publicly traded companies", "not-for-profit businesses", "governmental agencies", "privately held companies", "government-sponsored entities" ], "question_id": "fs-idm168051792", "question_text": "Which of the following sell stock on an organized stock exchange such as the New York Stock Exchange?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "tax refunds" }, "bloom": null, "hl_context": "<hl> There are two advantages to raising money by borrowing from lenders . <hl> <hl> One advantage is that the process , relative to profitable operations and selling ownership , is quicker . <hl> As you ’ ve learned , lenders ( and creditors ) review financial information provided by the business in order to make assessments on whether or not to lend money to the business , how much money to lend , and the acceptable length of time to lend . A second , and related , advantage of raising capital through borrowing is that it is fairly inexpensive . A disadvantage of borrowing money from lenders is the repayment commitments . Because lenders require the funds to be repaid within a specific time frame , the risk to the business ( and , in turn , to the lender ) increases . Besides borrowing , there are other options for businesses to obtain or raise additional funding ( also often labeled as capital ) . <hl> It is important for the business student to understand that businesses generally have three ways to raise capital : profitable operations is the first option ; selling ownership — stock — which is also called equity financing , is the second option ; and borrowing from lenders ( called debt financing ) is the final option . <hl>", "hl_sentences": "There are two advantages to raising money by borrowing from lenders . One advantage is that the process , relative to profitable operations and selling ownership , is quicker . 
It is important for the business student to understand that businesses generally have three ways to raise capital : profitable operations is the first option ; selling ownership — stock — which is also called equity financing , is the second option ; and borrowing from lenders ( called debt financing ) is the final option .", "question": { "cloze_format": "All of the following are sustainable methods businesses can use to raise capital (funding) except for ________.", "normal_format": "Which of the following is not a sustainable method businesses can use to raise capital (funding)?", "question_choices": [ "borrowing from lenders", "selling ownership shares", "profitable operations", "tax refunds" ], "question_id": "fs-idm205363680", "question_text": "All of the following are sustainable methods businesses can use to raise capital (funding) except for ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "For-profit businesses are organized into three categories : manufacturing , retail ( or merchandising ) , and service . Another way to categorize for-profit businesses is based on the availability of the company stock ( see Table 1.1 ) . A publicly traded company is one whose stock is traded ( bought and sold ) on an organized stock exchange such as the New York Stock Exchange ( NYSE ) or the National Association of Securities Dealers Automated Quotation ( NASDAQ ) system . Most large , recognizable companies are publicly traded , meaning the stock is available for sale on these exchanges . A privately held company , in contrast , is one whose stock is not available to the general public . Privately held companies , while accounting for the largest number of businesses and employment in the United States , are often smaller ( based on value ) than publicly traded companies . <hl> Whereas financial information and company stock of publicly traded companies are available to those inside and outside of the organization , financial information and company stock of privately held companies are often limited exclusively to employees at a certain level within the organization as a part of compensation and incentive packages or selectively to individuals or groups ( such as banks or other lenders ) outside the organization . <hl>", "hl_sentences": "Whereas financial information and company stock of publicly traded companies are available to those inside and outside of the organization , financial information and company stock of privately held companies are often limited exclusively to employees at a certain level within the organization as a part of compensation and incentive packages or selectively to individuals or groups ( such as banks or other lenders ) outside the organization .", "question": { "cloze_format": "The accounting information of a privately held company is generally available to all of the following except for ________.", "normal_format": "The accounting information of a privately held company is not generally available to which of the following except?", "question_choices": [ "governmental agencies", "investors", "creditors and lenders", "competitors" ], "question_id": "fs-idm184895296", "question_text": "The accounting information of a privately held company is generally available to all of the following except for ________." 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 4, "ans_text": "extensive computer programing background" }, "bloom": null, "hl_context": "The Association of Chartered Certified Accountants ( ACCA ) , the governing body of the global Chartered Certified Accountant ( CCA ) designation , and the Institute of Management Accountants ( IMA ) , the governing body of the Certified Management Accountant ( CMA ) designation , conducted a study to research the skills accountants will need given a changing economic and technological context . <hl> The findings indicate that , in addition to the traditional personal attributes , accountants should possess “ traits such as entrepreneurship , curiosity , creativity , and strategic thinking . ” 4 4 The Association of Chartered Certified Accountants ( ACCA ) and The Association of Accountants and Financial Professionals in Business ( IMA ) . <hl> “ 100 Drivers of Change for the Global Accountancy Profession . ” September 2012 . https://www.imanet.org/insights-and-trends/the-future-of-management-accounting/100-drivers-of-change-for-the-global-accountancy-profession?ssopc=1 <hl> While it is true that accountants often work independently , much of the work that accountants undertake involves interactions with other people . <hl> <hl> In fact , accountants frequently need to gather information from others and explain complex financial concepts to others , making excellent written and verbal communication skills a must . <hl> In addition , accountants often deal with strict deadlines such as tax filings , making prioritizing work commitments and being goal oriented necessities . In addition to these skills , traditionally , an accountant can be described as someone who", "hl_sentences": "The findings indicate that , in addition to the traditional personal attributes , accountants should possess “ traits such as entrepreneurship , curiosity , creativity , and strategic thinking . ” 4 4 The Association of Chartered Certified Accountants ( ACCA ) and The Association of Accountants and Financial Professionals in Business ( IMA ) . While it is true that accountants often work independently , much of the work that accountants undertake involves interactions with other people . In fact , accountants frequently need to gather information from others and explain complex financial concepts to others , making excellent written and verbal communication skills a must .", "question": { "cloze_format": "The skill/attribute that is not a primary skill for accountants to possess is ___.", "normal_format": "Which of the following skills/attributes is not a primary skill for accountants to possess?", "question_choices": [ "written communication", "verbal communication", "ability to work independently", "analytical thinking", "extensive computer programing background" ], "question_id": "fs-idm252705888", "question_text": "Which of the following skills/attributes is not a primary skill for accountants to possess?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> Entry-level positions in the accounting profession usually require a minimum of a bachelor ’ s degree . <hl> For advanced positions , firms may consider factors such as years of experience , professional development , certifications , and advanced degrees , such as a master ’ s or doctorate . 
The specific factors regarding educational requirements depend on the industry and the specific business .", "hl_sentences": "Entry-level positions in the accounting profession usually require a minimum of a bachelor ’ s degree .", "question": { "cloze_format": "(A) ___ is typically required for entry-level positions in the accounting profession.", "normal_format": "Which of the following is typically required for entry-level positions in the accounting profession?", "question_choices": [ "bachelor’s degree", "master’s degree", "Certified Public Accountant (CPA)", "Certified Management Accountant (CMA)", "only a high school diploma" ], "question_id": "fs-idm265454224", "question_text": "Which of the following is typically required for entry-level positions in the accounting profession?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 4, "ans_text": "purchasing direct materials" }, "bloom": null, "hl_context": "<hl> Public accounting firms offer a wide range of accounting , auditing , consulting , and tax preparation services to their clients . <hl> A small business might use a public accounting firm to prepare the monthly or quarterly financial statements and / or the payroll . A business ( of any size ) might hire the public accounting firm to audit the company financial statements or verify that policies and procedures are being followed properly . Public accounting firms may also offer consulting services to their clients to advise them on implementing computerized systems or strengthening the internal control system . ( Note that you will learn in your advanced study of accounting that accountants have legal limitations on what consulting services they can provide to their clients . ) Public accounting firms also offer tax preparation services for their business and individual clients . Public accounting firms may also offer business valuation , forensic accounting ( financial crimes ) , and other services . Cost accounting and managerial accounting are related , but different , types of accounting . In essence , a primary distinction between the two functions is that cost accounting takes a primarily quantitative approach , whereas managerial accounting takes both quantitative and qualitative approaches . <hl> The goal of cost accounting is to determine the costs involved with providing goods and services . <hl> <hl> In a manufacturing business , cost accounting is the recording and tracking of costs such as direct materials , employee wages , and supplies used in the manufacturing process . <hl> <hl> Many businesses find it necessary to employ accountants to work on tax compliance and planning on a full-time basis . <hl> Other businesses need these services on a periodic ( quarterly or annual ) basis and hire external accountants accordingly .", "hl_sentences": "Public accounting firms offer a wide range of accounting , auditing , consulting , and tax preparation services to their clients . The goal of cost accounting is to determine the costs involved with providing goods and services . In a manufacturing business , cost accounting is the recording and tracking of costs such as direct materials , employee wages , and supplies used in the manufacturing process . 
Many businesses find it necessary to employ accountants to work on tax compliance and planning on a full-time basis .", "question": { "cloze_format": "Typical accounting tasks include all of the following tasks except ________.", "normal_format": "Which of the following task is NOT a typical accounting task?", "question_choices": [ "auditing", "recording and tracking costs", "tax compliance and planning", "consulting", "purchasing direct materials" ], "question_id": "fs-idm259486352", "question_text": "Typical accounting tasks include all of the following tasks except ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> Public accounting firms offer a wide range of accounting , auditing , consulting , and tax preparation services to their clients . <hl> A small business might use a public accounting firm to prepare the monthly or quarterly financial statements and / or the payroll . A business ( of any size ) might hire the public accounting firm to audit the company financial statements or verify that policies and procedures are being followed properly . Public accounting firms may also offer consulting services to their clients to advise them on implementing computerized systems or strengthening the internal control system . ( Note that you will learn in your advanced study of accounting that accountants have legal limitations on what consulting services they can provide to their clients . ) Public accounting firms also offer tax preparation services for their business and individual clients . Public accounting firms may also offer business valuation , forensic accounting ( financial crimes ) , and other services .", "hl_sentences": "Public accounting firms offer a wide range of accounting , auditing , consulting , and tax preparation services to their clients .", "question": { "cloze_format": "The type of organization that primarily offers tax compliance, auditing, and consulting services is ___.", "normal_format": "What type of organization primarily offers tax compliance, auditing, and consulting services?", "question_choices": [ "corporations", "public accounting firms", "governmental entities", "universities" ], "question_id": "fs-idm255741232", "question_text": "What type of organization primarily offers tax compliance, auditing, and consulting services?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "Certified Public Accountant (CPA)" }, "bloom": null, "hl_context": "Since each state determines the requirements for CPA licenses , students are encouraged to check the state board of accountancy for specific requirements . <hl> In Ohio , for example , candidates for the CPA exam must have 150 hours of college credit . <hl> Of those , thirty semester hours ( or equivalent quarter hours ) must be in accounting . Once the CPA designation is earned in Ohio , 120 hours of continuing education must be taken over a three-year period in order to maintain the certification . The requirements for the Ohio CPA exam are similar to the requirements for other states . Even though states issue CPA licenses , a CPA will not lose the designation should he or she move to another state . Each state has mobility or reciprocity requirements that allow CPAs to transfer licensure from one state to another . 
Reciprocity requirements can be obtained by contacting the respective state board of accountancy .", "hl_sentences": "In Ohio , for example , candidates for the CPA exam must have 150 hours of college credit .", "question": { "cloze_format": "Most states require 150 semester hours of college credit for the professional certification of a ___.", "normal_format": "Most states require 150 semester hours of college credit for which professional certification?", "question_choices": [ "Certified Management Accountant (CMA)", "Certified Internal Auditor (CIA)", "Certified Public Accountant (CPA)", "Certified Financial Planner (CFP)" ], "question_id": "fs-idm247394848", "question_text": "Most states require 150 semester hours of college credit for which professional certification?" }, "references_are_paraphrase": null } ]
Chapter 1
1.1 Explain the Importance of Accounting and Distinguish between Financial and Managerial Accounting

Accounting is the process of organizing, analyzing, and communicating financial information that is used for decision-making. Financial information is typically prepared by accountants—those trained in the specific techniques and practices of the profession. This course explores many of the topics and techniques related to the accounting profession. While many students will directly apply the knowledge gained in this course to continue their education and become accountants and business professionals, others might pursue different career paths. However, a solid understanding of accounting can still serve as a useful resource for many. In fact, it is hard to think of a profession where a foundation in the principles of accounting would not be beneficial. Therefore, one of the goals of this course is to provide a solid understanding of how financial information is prepared and used in the workplace, regardless of your particular career path.

Think It Through: Expertise
Every job or career requires a certain level of technical expertise and an understanding of the key aspects necessary to be successful. The time required to develop the expertise for a particular job or career varies from several months to much longer. For instance, doctors, in addition to the many years invested in the classroom, invest a significant amount of time providing care to patients under the supervision of more experienced doctors. This helps medical professionals develop the necessary skills to quickly and effectively diagnose and treat the various medical conditions they spent so many years learning about.

Accounting also typically takes specialized training. Top accounting managers often invest many years and have a significant amount of experience mastering complex financial transactions. Also, in addition to attending college, earning professional certifications and investing in continuing education are necessary to develop a skill set sufficient to become an expert in a professional accounting field. The level and type of training in accounting are often dependent on which of the myriad options of accounting fields the potential accountant chooses to enter. To familiarize you with some potential opportunities, Describe the Varied Career Paths Open to Individuals with an Accounting Education examines many of these career options. In addition to covering an assortment of possible career opportunities, we address some of the educational and experiential certifications that are available. Why do you think accountants (and doctors) need to be certified and secure continuing education? In your response, defend your position with examples. In addition to doctors and accountants, what other professions can you think of that might require a significant investment of time and effort in order to develop an expertise?

A traditional adage states that “accounting is the language of business.” While that is true, you can also say that “accounting is the language of life.” At some point, most people will make a decision that relies on accounting information. For example, you may have to decide whether it is better to lease or buy a vehicle. Likewise, a college graduate may have to decide whether it is better to take a higher-paying job in a bigger city (where the cost of living is also higher) or a job in a smaller community where both the pay and cost of living may be lower.
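To make the lease-or-buy arithmetic concrete, here is a minimal sketch in Python. All figures are hypothetical illustrations, not real quotes, and, as the next paragraphs discuss, a real decision would also weigh nonfinancial factors such as maintenance and reliability.

```python
# A minimal sketch of the lease-or-buy comparison described above.
# All figures are hypothetical; a real decision would also weigh
# nonfinancial factors such as maintenance and reliability.

def total_lease_cost(monthly_payment: float, months: int) -> float:
    """Total cash paid over the lease term."""
    return monthly_payment * months

def net_purchase_cost(price: float, resale_value: float) -> float:
    """Purchase price less the expected resale value at the end."""
    return price - resale_value

lease = total_lease_cost(monthly_payment=350.0, months=36)       # 12,600
buy = net_purchase_cost(price=28_000.0, resale_value=17_500.0)   # 10,500

print(f"Lease for 3 years:         ${lease:,.0f}")
print(f"Buy, resell after 3 years: ${buy:,.0f}")
print("Buying is cheaper" if buy < lease else "Leasing is cheaper")
```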
In a professional setting, a theater manager may want to know if the most recent play was profitable. Similarly, the owner of the local plumbing business may want to know whether it is worthwhile to pay an employee to be “on call” for emergencies during off-hours and weekends. Whether personal or professional, accounting information plays a vital role in all of these decisions.

You may have noticed that the decisions in these scenarios would be based on factors that include both financial and nonfinancial information. For instance, when deciding whether to lease or buy a vehicle, you would consider not only the monthly payments but also such factors as vehicle maintenance and reliability. The college graduate considering two job offers might weigh factors such as working hours, ease of commuting, and options for shopping and entertainment. The theater manager would analyze the proceeds from ticket sales and sponsorships as well as the expenses for production of the play and operating the concessions. In addition, the theater manager should consider how the financial performance of the play might have been influenced by the marketing of the play, the weather during the performances, and other factors such as competing events during the time of the play. All of these factors, both financial and nonfinancial, are relevant to the financial performance of the play.

Beyond the additional cost of having an employee “on call” during evenings and weekends, the owner of the local plumbing business would consider nonfinancial factors in the decision. For instance, if there are no other plumbing businesses that offer services during evenings and weekends, offering emergency service might give the business a strategic advantage that could increase overall sales by attracting new customers.

This course explores the role that accounting plays in society. You will learn about financial accounting, which measures the financial performance of an organization using standard conventions to prepare and distribute financial reports. Financial accounting is used to generate information for stakeholders outside of an organization, such as owners, stockholders, lenders, and governmental entities such as the Securities and Exchange Commission (SEC) and the Internal Revenue Service (IRS).

Financial accounting is also a foundation for understanding managerial accounting, which uses both financial and nonfinancial information as a basis for making decisions within an organization. Its purpose is to equip decision makers to set and evaluate business goals by determining what information they need to make a particular decision and how to analyze and communicate this information. Managerial accounting information tends to be used internally, for such purposes as budgeting, pricing, and determining production costs. Since the information is generally used internally, you do not see the same need for financial oversight in an organization’s managerial data.

You will also note in your financial accounting studies that there are governmental and organizational entities that oversee the accounting processes and systems that are used in financial accounting. These entities include organizations such as the Securities and Exchange Commission (SEC), the Financial Accounting Standards Board (FASB), the American Institute of Certified Public Accountants (AICPA), and the Public Company Accounting Oversight Board (PCAOB).
The PCAOB was created by the Sarbanes-Oxley Act of 2002, known as SOX, which was passed after several major cases of corporate fraud. If you choose to pursue more advanced accounting courses, especially auditing courses, you will address SOX in much greater detail. For now, it is not necessary to go into greater detail about the mechanics of these organizations or other accounting and financial legislation. You just need to have a basic understanding that they function to provide a degree of protection for those outside of the organization who rely on the financial information.

Whether or not you aspire to become an accountant, understanding financial and managerial accounting is valuable and necessary for practically any career you will pursue. Management of a car manufacturer, for example, would use both financial and managerial accounting information to help improve the business. Financial accounting information is valuable as it measures whether or not the company was financially successful. Knowing this provides management with an opportunity to repeat activities that have proven effective and to make adjustments in areas in which the company has underperformed. Managerial accounting information is likewise valuable. Managers of the car manufacturer may want to know, for example, how much scrap is generated from a particular area in the manufacturing process. While identifying and improving the manufacturing process (i.e., reducing scrap) helps the company financially, it may also alleviate indirectly related problems in the production process, such as poor quality and shipping delays.

1.2 Identify Users of Accounting Information and How They Apply Information

The ultimate goal of accounting is to provide information that is useful for decision-making. Users of accounting information are generally divided into two categories: internal and external. Internal users are those within an organization who use financial information to make day-to-day decisions. Internal users include managers and other employees who use financial information to confirm past results and help make adjustments for future activities. External users are those outside of the organization who use the financial information to make decisions or to evaluate an entity’s performance. For example, investors, financial analysts, loan officers, governmental auditors such as IRS agents, and an assortment of other stakeholders are classified as external users, all of whom have an interest in an organization’s financial information. (Stakeholders are addressed in greater detail in Explain Why Accounting Is Important to Business Stakeholders.)

Characteristics, Users, and Sources of Financial Accounting Information

Organizations measure financial performance in monetary terms. In the United States, the dollar is used as the standard measurement basis. Measuring financial performance in monetary terms allows managers to compare the organization’s performance to previous periods, to expectations, and to other organizations or industry standards. Financial accounting is one of the broad categories in the study of accounting. While some industries and types of organizations have variations in how the financial information is prepared and communicated, accountants generally use the same methodologies—called accounting standards—to prepare the financial information.
You learn in Introduction to Financial Statements that financial information is primarily communicated through financial statements, which include the Income Statement, Statement of Owner’s Equity, Balance Sheet, and Statement of Cash Flows, along with the related disclosures. Preparing these statements according to common conventions ensures the information is consistent from period to period and generally comparable between organizations. The conventions also ensure that the information provided is both reliable and relevant to the user.

Virtually every activity and event that occurs in a business has an associated cost or value and is known as a transaction. Part of an accountant’s responsibility is to quantify these activities and events. In this course you will learn about the many types of transactions that occur within a business. You will also examine the effects of these transactions, including their impact on the financial position of the entity.

Accountants often use computerized accounting systems to record and summarize transactions and to prepare financial reports; these systems offer many benefits. The primary benefit of a computerized accounting system is the efficiency with which transactions can be recorded and summarized, and financial reports prepared. In addition, computerized accounting systems store data, which allows organizations to easily extract historical financial information. Common computerized accounting systems include QuickBooks, which is designed for small organizations, and SAP, which is designed for large and/or multinational organizations. QuickBooks is popular with smaller, less complex entities. It is less expensive than more sophisticated software packages, such as Oracle or SAP, and the QuickBooks skills that accountants developed at previous employers tend to be applicable to the needs of new employers, which can reduce both training time and costs spent on acclimating new employees to an employer’s software system. Also, being familiar with a common software package such as QuickBooks helps provide employment mobility when workers wish to reenter the job market. While QuickBooks has many advantages, once a company’s operations reach a certain level of complexity, it will need a more robust base software package or platform, such as Oracle or SAP, which is then customized to meet the unique informational needs of the entity.

Financial accounting information is mostly historical in nature; that is, accountants prepare financial reports that summarize what has already occurred in an organization, although companies and other entities also incorporate estimates into their accounting processes. For example, you will learn how to use estimates to determine bad debt expenses or depreciation expenses for assets that will be used over a multiyear lifetime. This information provides what is called feedback value. The benefit of reporting what has already occurred is the reliability of the information. Accountants can, with a fair amount of confidence, accurately report the financial performance of the organization related to past activities. The feedback value offered by the accounting information is particularly useful to internal users. That is, reviewing how the organization performed in the past can help managers and other employees make better decisions about and adjustments to future activities. Financial information has limitations, however, as a predictive tool. Business involves a large amount of uncertainty, and accountants cannot predict how the organization will perform in the future.
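To make the idea of recording and summarizing transactions concrete, here is a deliberately simplified Python sketch of a transaction store and a summary report. This is only an illustration of the general idea, not how QuickBooks, SAP, or any real accounting package works internally; the account names and amounts are invented, and a real system would enforce double-entry bookkeeping, with every transaction affecting at least two accounts.

```python
from collections import defaultdict

# A deliberately simplified ledger: each transaction is stored as a
# date/account/amount entry, and reports are built by summarizing the
# stored history. Account names and amounts are invented examples.
ledger = []

def record_transaction(date, account, amount):
    """Store one transaction so it can be summarized later."""
    ledger.append({"date": date, "account": account, "amount": amount})

def summarize_by_account():
    """Total the historical activity in each account."""
    totals = defaultdict(float)
    for entry in ledger:
        totals[entry["account"]] += entry["amount"]
    return dict(totals)

# Quantifying a few everyday business events as transactions:
record_transaction("2024-03-01", "Sales Revenue", 1200.00)
record_transaction("2024-03-02", "Rent Expense", -850.00)
record_transaction("2024-03-05", "Sales Revenue", 975.00)

print(summarize_by_account())
# {'Sales Revenue': 2175.0, 'Rent Expense': -850.0}
```

Note that a system like this can only report on events that have already been recorded, which is exactly the historical limitation just described.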
However, by observing historical financial information, users of the information can detect patterns or trends that may be useful for estimating the company’s future financial performance. Collecting and analyzing a series of historical financial data is useful to both internal and external users. For example, internal users can use financial information as a predictive tool to assess whether the long-term financial performance of the organization aligns with its long-term strategic goals.

External users also use the historical pattern of an organization’s financial performance as a predictive tool. For example, when deciding whether to loan money to an organization, a bank may require a certain number of years of financial statements and other financial information from the organization. The bank will assess the historical performance in order to make an informed decision about the organization’s ability to repay the loan and interest (the cost of borrowing money). Similarly, a potential investor may look at a business’s past financial performance in order to assess whether or not to invest money in the company. In this scenario, the investor wants to know if the organization will provide a sufficient and consistent return on the investment. In these scenarios, the financial information provides value to the process of allocating scarce resources (money). If potential lenders and investors determine the organization is a worthwhile investment, money will be provided, and, if all goes well, those funds will be used by the organization to generate additional value at a rate greater than the alternate uses of the money.

Characteristics, Users, and Sources of Managerial Accounting Information

As you’ve learned, managerial accounting information differs from financial accounting information in several respects. Accountants use formal accounting standards in financial accounting. These accounting standards are referred to as generally accepted accounting principles (GAAP) and are the common set of rules, standards, and procedures that publicly traded companies must follow when preparing their financial statements. The previously mentioned Financial Accounting Standards Board (FASB), an independent, nonprofit organization that sets financial accounting and reporting standards for both public and private sector businesses in the United States, uses the GAAP guidelines as the foundation for its system of accepted accounting methods and practices, reports, and other documents.

Since most managerial accounting activities are conducted for internal uses and applications, managerial accounting is not prepared using a comprehensive, prescribed set of conventions similar to those required by financial accounting. This is because managerial accountants provide managerial accounting information that is intended to serve the needs of internal, rather than external, users. In fact, managerial accounting information is rarely shared with those outside of the organization. Since the information often includes strategic or competitive decisions, managerial accounting information is often closely protected. The business environment is constantly changing, and managers and decision makers within organizations need a variety of information in order to view or assess issues from multiple perspectives. Accountants must be adaptable and flexible in their ability to generate the necessary information for management decision-making.
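As a minimal illustration of the kind of formula-driven analysis accountants build, whether in a spreadsheet or a script, the following Python sketch estimates a trend from historical data and projects it forward one period. The quarterly revenue figures are invented for illustration, and the method shown, a simple average growth rate, is one naive approach among many.

```python
# Hypothetical quarterly revenues; the figures are invented for illustration.
revenues = [100_000, 104_000, 109_200, 113_500]

# Average quarter-over-quarter growth rate observed in the historical data.
growth_rates = [
    (curr - prev) / prev
    for prev, curr in zip(revenues, revenues[1:])
]
avg_growth = sum(growth_rates) / len(growth_rates)

# Naive projection: assume the historical trend continues one more quarter.
projected = revenues[-1] * (1 + avg_growth)

print(f"Average quarterly growth: {avg_growth:.1%}")
print(f"Projected next quarter: {projected:,.0f}")
```

As the discussion above cautions, a projection like this is an estimate, not a guarantee; it simply extends an observed historical pattern.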
In practice, information derived from a computerized accounting system is often the starting point for obtaining managerial accounting information. But accountants must also be able to extract information from other sources (internal and external) and analyze the data using mathematical, formula-driven software (such as Microsoft Excel). The term management accounting information encompasses many activities within an organization. Preparing a budget, for example, allows an organization to estimate the financial performance for the upcoming year or years and plan for adjustments to scale operations according to the projections. Accountants often lead the budgeting process by gathering information from internal (estimates from the sales and engineering departments, for example) and external (trade groups and economic forecasts, for example) sources. These data are then compiled and presented to decision makers within the organization. Examples of other decisions that require management accounting information include whether an organization should repair or replace equipment, make products internally or purchase the items from outside vendors, and hire additional workers or use automation.

As you have learned, management accounting information uses both financial and nonfinancial information. This is important because there are situations in which a purely financial analysis might lead to one decision, while considering nonfinancial information might lead to a different decision. For example, suppose a financial analysis indicates that a particular product is unprofitable and should no longer be offered by a company. If the company fails to consider that customers also purchase a complementary good (you might recall that term from your study of economics), the company may be making the wrong decision. For example, assume that you have a company that produces and sells both computer printers and the replacement ink cartridges. If the company decided to eliminate the printers, then it would also lose the cartridge sales. In the past, in some cases, the elimination of one component, such as printers, led to customers switching to a different producer for their computers and other peripheral hardware. In the end, an organization needs to consider both the financial and nonfinancial aspects of a decision, and sometimes the effects are not intuitively obvious at the time of the decision. Figure 1.3 offers an overview of some of the differences between financial and managerial accounting.

1.3 Describe Typical Accounting Activities and the Role Accountants Play in Identifying, Recording, and Reporting Financial Activities

We can classify organizations into three categories: for profit, governmental, and not for profit. These organizations are similar in several aspects. For example, each of these organizations has inflows and outflows of cash and other resources, such as equipment, furniture, and land, that must be managed. In addition, all of these organizations are formed for a specific purpose or mission and want to use the available resources in an efficient manner—the organizations strive to be good stewards of the resources entrusted to them. Finally, each of the organizations makes a unique and valuable contribution to society. Given the similarities, it is clear that all of these organizations have a need for accounting information and for accountants to provide that information. There are also several differences.
The main difference that distinguishes these organizations is the primary purpose or mission of the organization, discussed in the following sections. For-Profit Businesses As the name implies, the primary purpose or mission of a for-profit business is to earn a profit by selling goods and services. There are many reasons why a for-profit business seeks to earn a profit. The profits generated by these organizations might be used to create value for employees in the form of pay raises for existing employees as well as hiring additional workers. In addition, profits can be reinvested in the business to create value in the form of research and development, equipment upgrades, facilities expansions, and many other activities that make the business more competitive. Many companies also engage in charitable activities, such as donating money, donating products, or allowing employees to volunteer in the communities. Finally, profits can also be shared with employees in the form of either bonuses or commissions as well as with owners of the business as a reward for the owners’ investment in the business. These issues, along with others, and the associated accounting conventions will be explored throughout this course. In for-profit businesses, accounting information is used to measure the financial performance of the organization and to help ensure that resources are being used efficiently. Efficiently using existing resources allows the businesses to improve quality of the products and services offered, remain competitive in the marketplace, expand when appropriate, and ensure longevity of the business. For-profit businesses can be further categorized by the types of products or services the business provides. Let’s examine three types of for-profit businesses: manufacturing, retail (or merchandising), and service. Manufacturing Businesses A manufacturing business is a for-profit business that is designed to make a specific product or products. Manufacturers specialize in procuring components in the most basic form (often called direct or raw materials) and transforming the components into a finished product that is often drastically different from the original components. As you think about the products you use every day, you are probably already familiar with products made by manufacturing firms. Examples of products made by manufacturing firms include automobiles, clothes, cell phones, computers, and many other products that are used every day by millions of consumers. In Job Order Costing , you will examine the process of job costing, learning how manufacturing firms transform basic components into finished, sellable products and the techniques accountants use to record the costs associated with these activities. Concepts In Practice Manufacturing Think about the items you have used today. Make a list of the products that were created by manufacturing firms. How many can you think of? Think of the many components that went into some of the items you use. Do you think the items were made by machines or by hand? If you are in a classroom with other students, see who has used the greatest number of items today. Or, see who used the item that would be the most complex to manufacture. If you are able, you might consider arranging a tour of a local manufacturer. Many manufacturers are happy to give tours of the facilities and describe the many complex processes that are involved in making the products. 
On your tour, take note of the many job functions that are required to make those items—from ordering the materials to delivering to the customer.

Retail Businesses

Manufacturing businesses and retail (or merchandising) businesses are similar in that both are for-profit businesses that sell products to consumers. In the case of manufacturing firms, by adding direct labor, manufacturing overhead (such as utilities, rent, and depreciation), and other direct materials, raw components are converted into a finished product that is sold to consumers. A retail business (or merchandising business), on the other hand, is a for-profit business that purchases products (called inventory) and then resells the products without altering them—that is, the products are sold directly to the consumer in the same condition (production state) as purchased. Examples of retail firms are plentiful. Automobiles, clothes, cell phones, and computers are all examples of everyday products that are purchased and sold by retail firms. What distinguishes a manufacturing firm from a retail firm is that in a retail firm, the products are sold in the same condition as when the products were purchased—no further alterations were made on the products.

Did you happen to notice that the product examples listed in the preceding paragraph (automobiles, clothes, cell phones, and computers) for manufacturing firms and retail firms are identical? If so, congratulations, because you are paying close attention to the details. These products are used as examples in two different contexts—that is, manufacturing firms make these products, and retail firms sell these products. These products are relevant to both manufacturing and retail because they are examples of goods that are both manufactured and sold directly to the consumer. While there are instances when a manufacturing firm also serves as the retail firm (Dell computers, for example), it is often the case that products will be manufactured and sold by separate firms.

Concepts In Practice: NIKEiD

NIKEiD is a program that allows consumers to design and purchase customized equipment, clothes, and shoes. In 2007, Nike opened its first NIKEiD studio at Niketown in New York City. 1 Since its debut in 1999, the NIKEiD concept has flourished, and Nike has partnered with professional athletes to showcase their designs that, along with featured consumer designs, are available for purchase on the NIKEiD website.

1 Nike. “Nike Opens New NIKEiD Studio in New York.” October 4, 2007. https://news.nike.com/news/nike-opens-new-nikeid-studio-in-new-york

Assume you are the manager of a sporting goods store that sells Nike shoes. Think about the concept of NIKEiD, and consider the impact that this concept might have on your store sales. Would this positively or negatively impact the sale of Nike shoes in your store? What are steps you could take to leverage the NIKEiD concept to help increase your own store’s sales? Considerations like this are examples of what marketing professionals would address. Nike wants to ensure this concept does not negatively impact the existing relationships it has, and Nike works to ensure this program is also beneficial to its existing distribution partners.

In Merchandising Transactions you will learn about merchandising transactions, which include concepts and specific accounting practices for retail firms.
You will learn, among other things, how to account for purchasing products from suppliers and selling the products to customers, and how to prepare the financial reports for retail firms.

Service Businesses

As the term implies, service businesses are businesses that provide services to customers. A major difference between manufacturing and retail firms, on the one hand, and service firms, on the other, is that service firms do not sell a tangible product to customers; instead, they provide intangible benefits (services). A service business can be either a for-profit or a not-for-profit business. Figure 1.5 illustrates the distinction between manufacturing, retail, and service businesses. Examples of service-oriented businesses include hotels, cab services, entertainment, and tax preparers.

Efficiency is one advantage service businesses offer to their customers. For example, while taxpayers can certainly read the tax code, read the instructions, and complete the forms necessary to file their annual tax returns, many choose to take their tax returns to a person who has specialized training and experience with preparing tax returns. Although it is more expensive to do so, many feel it is a worthwhile investment because the tax professional has invested the time and has the knowledge to prepare the forms properly and in a timely manner. Hiring a tax preparer is efficient for the taxpayer because it allows the taxpayer to file the required forms without having to invest numerous hours researching and preparing the forms.

The accounting conventions for service businesses are similar to the accounting conventions for manufacturing and retail businesses. In fact, the accounting for service businesses is easier in one respect. Because service businesses do not sell tangible products, there is no need to account for products that are being held for sale (inventory). Therefore, while we briefly discuss service businesses, we’ll focus mostly on accounting for manufacturing and retail businesses.

Your Turn: Categorizing Restaurants

So far, you’ve learned about three types of for-profit businesses: manufacturing, retail, and service. Previously, you saw how some firms such as Dell serve as both manufacturer and retailer. Now, think of the last restaurant where you ate. Of the three business types (manufacturer, retailer, or service provider), how would you categorize the restaurant? Is it a manufacturer? A retailer? A service provider? Can you think of examples of how a restaurant has characteristics of all three types of businesses?

Solution: Answers will vary. Responses may initially consider a restaurant to be only a service provider. Students may also recognize that a restaurant possesses aspects of a manufacturer (by preparing the meals), retailer (by selling merchandise and/or gift cards), and service provider (by waiting on customers).

Governmental Entities

A governmental entity provides services to the general public (taxpayers). Governmental agencies exist at the federal, state, and local levels. These entities are funded through the collection of taxes and other fees. Accountants working in governmental entities perform the same function as accountants working at for-profit businesses. Accountants help to serve the public interest by providing to the public an accounting for the receipts and disbursements of taxpayer dollars.
Governmental leaders are accountable to taxpayers, and accountants help assure the public that tax dollars are being utilized in an efficient manner. Examples of governmental entities that require financial reporting include federal agencies such as the Social Security Administration, state agencies such as the Department of Transportation, and local agencies such as county engineers. Students continuing their study of accounting may take a specific course or courses related to governmental accounting. While the specific accounting used in governmental entities differs from traditional accounting conventions, the goal of providing accurate and unbiased financial information useful for decision-making remains the same, regardless of the type of entity. Government accounting standards are governed by the Governmental Accounting Standards Board (GASB). This organization creates standards that are specifically appropriate for state and local governments in the United States.

Not-for-Profit Entities

To be fair, the name “not-for-profit” can be somewhat confusing. As with “for-profit” entities, the name refers to the primary purpose or mission of the organization. In the case of for-profit organizations, the primary purpose is to generate a profit. The profits, then, can be used to sustain and improve the business through investments in employees, research and development, and other measures intended to help ensure the long-term success of the business. But in the case of a nonprofit (not-for-profit) organization, the primary purpose or mission is to serve a particular interest or need in the community. A not-for-profit entity’s financial longevity tends to depend on donations, grants, and the revenues it generates. It may be helpful to think of not-for-profit entities as “mission-based” entities. It is important to note that not-for-profit entities, while having a primary purpose of serving a particular interest, also have a need for financial sustainability. An adage in the not-for-profit sector states that “being a not-for-profit organization does not mean it is for-loss.” That is, not-for-profit entities must also ensure that resources are used efficiently, allowing for inflows of resources to be greater than (or, at a minimum, equal to) outflows of resources. This allows the organization to continue and perhaps expand its valuable mission.

Examples of not-for-profit entities are numerous. Food banks have as a primary purpose the collection, storage, and distribution of food to those in need. Charitable foundations have as a primary purpose the provision of funding to local agencies that support specific community needs, such as reading and after-school programs. Many colleges and universities are structured as not-for-profit entities because the primary purpose is to provide education and research opportunities. Similar to accounting for governmental entities, students continuing their study of accounting may take a specific course or courses related to not-for-profit accounting. While the specific accounting used in not-for-profit entities differs slightly from traditional accounting conventions, the goal of providing reliable and unbiased financial information useful for decision-making is vitally important. Some of the governmental and regulatory entities involved in maintaining the rules and principles in accounting are discussed in Explain Why Accounting Is Important to Business Stakeholders.

Your Turn: Types of Organizations

Think of the various organizations discussed so far.
Now try to identify people in your personal and professional network who work for these types of agencies. Can you think of someone in a career at each of these types of organizations? One way to explore career paths is to talk with professionals who work in the areas that interest you. You may consider reaching out to the individuals you identified and learning more about the work that they do. Find out about the positive and negative aspects of the work. Find out what advice they have relating to education. Try to gain as much information as you can to determine whether that is a career you can envision yourself pursuing. Also, ask about opportunities for job shadowing, co-ops, or internships. Solution Answers will vary, but this should be an opportunity to learn about careers in a variety of organizations (for-profit including manufacturing, retail, and services; not-for-profit; and governmental agencies). You may have an assumption about a career that is based only on the positive aspects. Learning from experienced professionals may help you understand all aspects of the careers. In addition, this exercise may help you confirm or alter your potential career path, including the preparation required (based on advice given from those you talk with). 1.4 Explain Why Accounting Is Important to Business Stakeholders The number of decisions we make in a single day is staggering. For example, think about what you had for breakfast this morning. What pieces of information factored into that decision? A short list might include the foods that were available in your home, the amount of time you had to prepare and eat the food, and what sounded good to eat this morning. Let’s say you do not have much food in your home right now because you are overdue on a trip to the grocery store. Deciding to grab something at a local restaurant involves an entirely new set of choices. Can you think of some of the factors that might influence the decision to grab a meal at a local restaurant? Your Turn Daily Decisions Many academic studies have been conducted on the topic of consumer behavior and decision-making. It is a fascinating topic of study that attempts to learn what type of advertising works best, the best place to locate a business, and many other business-related activities. One such study, conducted by researchers at Cornell University, concluded that people make more than 200 food-related decisions per day. 2 2 B. Wansink and J. Sobal. “Mindless Eating: The 200 Daily Food Decisions We Overlook.” 2007. Environment & Behavior , 39[1], 106–123. This is astonishing considering the number of decisions found in this particular study related only to decisions involving food. Imagine how many day-to-day decisions involve other issues that are important to us, such as what to wear and how to get from point A to point B. For this exercise, provide and discuss some of the food-related decisions that you recently made. Solution In consideration of food-related decisions, there are many options you can consider. For example, what types, in terms of ethnic groups or styles, do you prefer? Do you want a dining experience or just something inexpensive and quick? Do you have allergy-related food issues? These are just a few of the myriad potential decisions you might make. It is no different when it comes to financial decisions. Decision makers rely on unbiased, relevant, and timely financial information in order to make sound decisions. 
In this context, the term stakeholder refers to a person or group who relies on financial information to make decisions, since they often have an interest in the economic viability of an organization or business. Stakeholders may be stockholders, creditors, governmental and regulatory agencies, customers, management and other employees, and various other parties and entities.

Stockholders

A stockholder is an owner of stock in a business. Owners are called stockholders because in exchange for cash, they are given an ownership interest in the business, called stock. Stock is sometimes referred to as “shares.” Historically, stockholders received paper certificates reflecting the number of shares owned in the business. Now, many stock transactions are recorded electronically. Introduction to Financial Statements discusses stock in more detail. Corporation Accounting offers a more extensive exploration of the types of stock as well as the accounting related to stock transactions.

Recall that organizations can be classified as for-profit, governmental, or not-for-profit entities. Stockholders are associated with for-profit businesses. While governmental and not-for-profit entities have constituents, there is no direct ownership associated with these entities. For-profit businesses are organized into three categories: manufacturing, retail (or merchandising), and service. Another way to categorize for-profit businesses is based on the availability of the company stock (see Table 1.1). A publicly traded company is one whose stock is traded (bought and sold) on an organized stock exchange such as the New York Stock Exchange (NYSE) or the National Association of Securities Dealers Automated Quotation (NASDAQ) system. Most large, recognizable companies are publicly traded, meaning the stock is available for sale on these exchanges. A privately held company, in contrast, is one whose stock is not available to the general public. Privately held companies, while accounting for the largest number of businesses and employment in the United States, are often smaller (based on value) than publicly traded companies. Whereas financial information and company stock of publicly traded companies are available to those inside and outside of the organization, financial information and company stock of privately held companies are often limited exclusively to employees at a certain level within the organization as a part of compensation and incentive packages or selectively to individuals or groups (such as banks or other lenders) outside the organization.

Table 1.1 Publicly Held versus Privately Held Companies
Publicly held company: stock available to the general public; financial information public; typically larger in value.
Privately held company: stock not available to the general public; financial information private; typically smaller in value.

Whether they hold stock in a publicly traded or a privately held company, owners use financial information to make decisions. Owners use the financial information to assess the financial performance of the business and make decisions such as whether or not to purchase additional stock, sell existing stock, or maintain the current level of stock ownership. Other decisions stockholders make may be influenced by the type of company. For example, stockholders of privately held companies often are also employees of the company, and the decisions they make may be related to day-to-day activities as well as longer-term strategic decisions.
Owners of publicly traded companies, on the other hand, will usually only focus on strategic issues such as the company leadership, purchases of other businesses, and executive compensation arrangements. In essence, stockholders predominantly focus on profitability, expected increase in stock value, and corporate stability.

Creditors and Lenders

In order to provide goods and services to their customers, businesses make purchases from other businesses. These purchases come in the form of materials used to make finished goods or to resell, office equipment such as copiers and telephones, utility services such as heating and cooling, and many other products and services that are vital to run the business efficiently and effectively. It is rare that payment is required at the time of the purchase or when the service is provided. Instead, businesses usually extend “credit” to other businesses. Selling and purchasing on credit, which is explored further in Merchandising Transactions and Accounting for Receivables, means the payment is expected after a certain period of time following receipt of the goods or provision of the service. The term creditor refers to a business that grants extended payment terms to other businesses. The time frame for credit extended to other businesses for purchases of goods and services is usually very short; periods of thirty to forty-five days are common.

When businesses need to borrow larger amounts of money and/or for longer periods of time, they will often borrow money from a lender, a bank or other institution that has the primary purpose of lending money with a specified repayment period and stated interest rate. If you or your family own a home, you may already be familiar with lending institutions. The time frame for borrowing from lenders is typically measured in years rather than days, as was the case with creditors. While lending arrangements vary, typically the borrower is required to make periodic, scheduled payments with the full amount being repaid by a certain date. In addition, since the borrowing is for a long period of time, lending institutions require the borrower to pay a fee (called interest) for the use of the borrowed funds. These concepts and the related accounting practices are covered in Long-Term Liabilities. Table 1.2 summarizes the differences between creditors and lenders.

Table 1.2 Creditor versus Lender
Creditor: a business that grants extended payment terms to other businesses; shorter time frame.
Lender: a bank or other institution that lends money; longer time frame.

Both creditors and lenders use financial information to make decisions. The ultimate decision that both creditors and lenders have to make is whether or not the funds will be repaid by the borrower. This is important because lending money involves risk. The type of risk creditors and lenders assess is repayment risk—the risk the funds will not be repaid. As a rule, the longer the money is borrowed, the higher the risk involved. Recall that accounting information is historical in nature. While historical performance is no guarantee of future performance (repayment of borrowed funds, in this case), an established pattern of financial performance using historical accounting information does help creditors and lenders to assess the likelihood the funds will be repaid, which, in turn, helps them to determine how much money to lend, how long to lend the money for, and how much interest (in the case of lenders) to charge the borrower.
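To see how a lender's terms translate into a borrower's repayment commitment, here is a short Python sketch using the standard fixed-payment (amortization) formula. The principal, interest rate, and term are invented for illustration; actual loan terms would reflect the lender's assessment of repayment risk.

```python
def periodic_payment(principal, annual_rate, years, payments_per_year=12):
    """Fixed payment that fully repays the loan, using the standard
    amortization formula: principal * r / (1 - (1 + r) ** -n)."""
    r = annual_rate / payments_per_year   # periodic interest rate
    n = years * payments_per_year         # total number of payments
    return principal * r / (1 - (1 + r) ** -n)

# Invented example: a five-year, $50,000 loan at 6% annual interest.
payment = periodic_payment(50_000, 0.06, 5)
total_paid = payment * 5 * 12
interest_cost = total_paid - 50_000

print(f"Monthly payment: {payment:,.2f}")       # about 966.64
print(f"Total interest:  {interest_cost:,.2f}") # about 7,998
```

Holding the rate constant, stretching the same loan over more years lowers each payment but raises the total interest paid, which is one reason lenders weigh the length of the loan so heavily when assessing risk.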
Sources of Funding

Besides borrowing, there are other options for businesses to obtain or raise additional funding (also often labeled as capital). It is important for the business student to understand that businesses generally have three ways to raise capital: profitable operations; selling ownership (stock), which is also called equity financing; and borrowing from lenders, which is called debt financing.

In Introduction to Financial Statements, you’ll learn more about the business concept called “profit.” You are already aware of the concept of profit. In short, profit means the inflows of resources are greater than the outflows of resources, or, stated in more business-like terms, the revenues that the company generates are larger or greater than the expenses. For example, if a retailer buys a printer for $150 and sells it for $320, then from the sale it would have revenue of $320 and expenses of $150, for a profit of $170. (Actually, the process is a little more complicated because there would typically be other expenses for the operation of the store. However, to keep the example simple, those were not included. You’ll learn more about this later in the course.) Developing and maintaining profitable operations (selling goods and services) typically provides businesses with resources to use for future projects such as hiring additional workers, maintaining equipment, or expanding a warehouse.

While profitable operations are valuable to businesses, companies often want to engage in projects that are very expensive and/or are time sensitive. Businesses, then, have other options to raise funds quickly, such as selling stock and borrowing from lenders, as previously discussed. An advantage of selling stock to raise capital is that the business is not committed to a specific payback schedule. A disadvantage of issuing new stock is that the administrative costs (legal and compliance) are high, which makes it an expensive way to raise capital.

There are two advantages to raising money by borrowing from lenders. One advantage is that the process, relative to profitable operations and selling ownership, is quicker. As you’ve learned, lenders (and creditors) review financial information provided by the business in order to make assessments on whether or not to lend money to the business, how much money to lend, and the acceptable length of time to lend. A second, and related, advantage of raising capital through borrowing is that it is fairly inexpensive. A disadvantage of borrowing money from lenders is the repayment commitment. Because lenders require the funds to be repaid within a specific time frame, the risk to the business (and, in turn, to the lender) increases.

These topics are covered extensively in the area of study called corporate finance. While finance and accounting are similar in many aspects, in practice they are separate disciplines that frequently work in coordination in a business setting. Students may be interested to learn more about the educational and career options in the field of corporate finance. Because there are many similarities in the study of finance and accounting, many college students double major in a combination of finance, accounting, economics, and information systems.

Concepts In Practice: Profit

What is profit? In accounting, there is general consensus on the definition of profit.
A typical definition of profit is, in effect, when inflows of cash or other resources are greater than outflows of resources. Ken Blanchard provides another way to define profit. Blanchard is the author of The One Minute Manager, a popular leadership book published in 1982. He is often quoted as saying, “profit is the applause you get for taking care of your customers and creating a motivating environment for your people [employees].” Blanchard’s definition recognizes the multidimensional aspect of profit, which requires successful businesses to focus on their customers, employees, and the community. Check out this short video of Blanchard’s definition of profit for more information. What are alternative approaches to defining profit?

Governmental and Regulatory Agencies

Publicly traded companies are required to file financial and other informational reports with the Securities and Exchange Commission (SEC), a federal regulatory agency that regulates corporations with shares listed and traded on security exchanges through required periodic filings (Figure 1.6). The SEC accomplishes this in two primary ways: issuing regulations and providing oversight of financial markets. The goal of these actions is to help ensure that businesses provide investors with access to transparent and unbiased financial information. As an example of its responsibility to issue regulations, you learn in Introduction to Financial Statements that the SEC is responsible for establishing guidelines for the accounting profession. These are called accounting standards or generally accepted accounting principles (GAAP). Although the SEC has the authority to establish accounting standards, it has largely delegated this responsibility to the Financial Accounting Standards Board (FASB). In addition, you will learn in Describe the Varied Career Paths Open to Individuals with an Accounting Education that auditors are accountants charged with providing reasonable assurance to users that financial statements are prepared according to accounting standards. Oversight of the auditors of publicly traded companies is administered through the Public Company Accounting Oversight Board (PCAOB), which was established in 2002.

The SEC also has responsibility for regulating firms that issue and trade (buy and sell) securities—stocks, bonds, and other investment instruments. Enforcement by the SEC takes many forms. According to the SEC website, “Each year the SEC brings hundreds of civil enforcement actions against individuals and companies for violation of the securities laws. Typical infractions include insider trading, accounting fraud, and providing false or misleading information about securities and the companies that issue them.” 3 Financial information is a valuable tool that is part of the investigatory and enforcement activities of the SEC.

3 U.S. Securities and Exchange Commission. “What We Do.” June 10, 2013. https://www.sec.gov/Article/whatwedo.html

Concepts In Practice: Financial Professionals and Fraud

You may have heard the name Bernard “Bernie” Madoff. Madoff (Figure 1.7) was the founder of an investment firm, Bernard L. Madoff Investment Securities. The original mission of the firm was to provide financial advice and investment services to clients. This is a valuable service to many people because of the complexity of financial investments and retirement planning. Many people rely on financial professionals, like Bernie Madoff, to help them create wealth and be in a position to retire comfortably.
Unfortunately, Madoff took advantage of the trust of his investors and was ultimately convicted of stealing (embezzling) over $50 billion (a low amount by some estimates). Madoff’s embezzlement remains one of the biggest financial frauds in US history.

The fraud scheme was initially uncovered by a financial analyst named Harry Markopolos. Markopolos became suspicious because Madoff’s firm purported to achieve for its investors abnormally high rates of return for an extended period of time. After analyzing the investment returns, Markopolos reported the suspicious activity to the Securities and Exchange Commission (SEC), which has enforcement responsibility for firms providing investment services. While Madoff was initially able to stay a few steps ahead of the SEC, he was arrested in late 2008 and, after pleading guilty, was sentenced in 2009 to 150 years in prison. There are many resources to explore the Madoff scandal. You might be interested in reading the book No One Would Listen: A True Financial Thriller, written by Harry Markopolos. A movie and a TV series have also been made about the Madoff scandal.

In addition to governmental and regulatory agencies at the federal level, many state and local agencies use financial information to accomplish the mission of protecting the public interest. The primary goals are to ensure the financial information is prepared according to the relevant rules or practices as well as to ensure funds are being used in an efficient and transparent manner. For example, local school district administrators should ensure that financial information is available to the residents and is presented in an unbiased manner. The residents want to know their tax dollars are not being wasted. Likewise, the school district administrators want to demonstrate they are using the funding in an efficient and effective manner. This helps ensure a good relationship with the community that fosters trust and support for the school system.

Customers

Depending on the perspective, the term customers can have different meanings. Consider for a moment a retail store that sells electronics. That business has customers that purchase its electronics. These customers are considered the end users of the product. The customers, knowingly or unknowingly, have a stake in the financial performance of the business. The customers benefit when the business is financially successful. Profitable businesses will continue to sell the products the customers want, maintain and improve the business facilities, provide employment for community members, and undertake many other activities that contribute to a vibrant and thriving community.

Businesses are also customers. In the example of the electronics store, the business purchases its products from other businesses, including the manufacturers of the electronics. Just as end-user customers have a vested interest in the financial success of the business, business customers also benefit from suppliers that have financial success. A supplier that is financially successful will help ensure the electronics will continue to be available to purchase and resell to the end-use customer, investments in emerging technologies will be made, and improvements in delivery and customer service will result. This, in turn, helps the retail electronics store remain cost competitive while being able to offer its customers a wide variety of products.

Managers and Other Employees

Employees have a strong interest in the financial performance of the organizations for which they work.
At the most basic level, employees want to know their jobs will be secure so they can continue to be paid for their work. In addition, employees increase their value to the organization through their years of service, improving knowledge and skills, and accepting positions of increased responsibility. An organization that is financially successful is able to reward employees for that commitment to the organization through bonuses and increased pay. In addition to promotional and compensation considerations, managers and others in the organization have the responsibility to make day-to-day and long-term (strategic) decisions for the organization. Understanding financial information is vital to making good organizational decisions. Not all decisions, however, are based on strictly financial information. Recall that managers and other decision makers often use nonfinancial, or managerial, information. These decisions take into account other relevant factors that may not have an immediate and direct link to the financial reports. It is important to understand that sound organizational decisions are often (and should be) based on both financial and nonfinancial information. In addition to exploring managerial accounting concepts, you will also learn some of the common techniques that are used to analyze the financial reports of businesses. Appendix A further explores these techniques and how stakeholders can use these techniques for making financial decisions. IFRS Connection Introduction to International Financial Reporting Standards (IFRS) In the past fifty years, rapid advances in communications and technology have led the economy to become more global with companies buying, selling, and providing services to customers all over the world. This increase in globalization creates a greater need for users of financial information to be able to compare and evaluate global companies. Investors, creditors, and management may encounter a need to assess a company that operates outside of the United States. For many years, the ability to compare financial statements and financial ratios of a company headquartered in the United States with a similar company headquartered in another country, such as Japan, was challenging, and only those educated in the accounting rules of both countries could easily handle the comparison. Discussions about creating a common set of international accounting standards that would apply to all publicly traded companies have been occurring since the 1950s and post–World War II economic growth, but only minimal progress was made. In 2002, the Financial Accounting Standards Board (FASB) and the International Accounting Standards Board (IASB) began working more closely together to create a common set of accounting rules. Since 2002, the two organizations have released many accounting standards that are identical or similar, and they continue to work toward unifying or aligning standards, thus improving financial statement comparability between countries. Why create a common set of international standards? As previously mentioned, the global nature of business has increased the need for comparability across companies in different countries. Investors in the United States may want to choose between investing in a US-based company or one based in France. A US company may desire to buy out a company located in Brazil. A Mexican-based company may desire to borrow money from a bank in London. These types of activities require knowledge of financial statements. 
Prior to the creation of IFRS, most countries had their own form of generally accepted accounting principles (GAAP). This made it difficult for an investor in the United States to analyze or understand the financials of a France-based company or for a bank in London to know all of the nuances of financial statements from a Mexican company. Another reason common international rules are important is the need for similar reporting for similar business models. For example, Nestlé and the Hershey Company are in different countries yet have similar business models; the same applies to Daimler and Ford Motor Company. In these and other instances, despite the similar business models, for many years these companies reported their results differently because they were governed by different GAAP—Nestlé by Swiss GAAP, Daimler by German GAAP, and both the Hershey Company and Ford Motor Company by US GAAP. Wouldn’t it make sense that these companies should report the results of their operations in a similar manner since their business models are similar? The globalization of the economy and the need for similar reporting across business models are just two of the reasons why the push for unified standards took a leap forward in the early twenty-first century.

Today, more than 120 countries have adopted all or most of IFRS or permit the use of IFRS for financial reporting. The United States, however, has not adopted IFRS as an acceptable method of GAAP for financial statement preparation and presentation purposes but has worked closely with the IASB. Thus, many US standards are very comparable to the international standards. Interestingly, the Securities and Exchange Commission (SEC) allows foreign companies that are traded on US exchanges to present their statements under IFRS rules without restating to US GAAP. This change occurred in 2007 and was an important move by the SEC to show solidarity toward creating financial statement comparability across countries. Throughout this text, “IFRS Connection” feature boxes will discuss the important similarities and most significant differences between reporting using US GAAP as created by FASB and IFRS as created by IASB. For now, know that it is important for anyone in business, not just accountants, to be aware of some of the primary similarities and differences between IFRS and US GAAP, because these differences can impact analysis and decision-making.

1.5 Describe the Varied Career Paths Open to Individuals with an Accounting Education

There are often misunderstandings about what exactly accountants do or what attributes are necessary for a successful career in accounting. Often, people perceive accountants as “number-crunchers” or “bean counters” who sit behind desks, work with numbers, and have little interaction with others. The fact is that this perception could not be further from the truth.

Personal Attributes

While it is true that accountants often work independently, much of the work that accountants undertake involves interactions with other people. In fact, accountants frequently need to gather information from others and explain complex financial concepts to others, making excellent written and verbal communication skills a must. In addition, accountants often deal with strict deadlines such as tax filings, making prioritizing work commitments and being goal oriented necessities.
In addition to these skills, traditionally, an accountant can be described as someone who is goal oriented, is a problem solver, is organized and analytical, has good interpersonal skills, pays attention to detail, has good time-management skills, and is outgoing. The Association of Chartered Certified Accountants (ACCA), the governing body of the global Chartered Certified Accountant (CCA) designation, and the Institute of Management Accountants (IMA), the governing body of the Certified Management Accountant (CMA) designation, conducted a study to research the skills accountants will need given a changing economic and technological context. The findings indicate that, in addition to the traditional personal attributes, accountants should possess “traits such as entrepreneurship, curiosity, creativity, and strategic thinking.” 4

4 The Association of Chartered Certified Accountants (ACCA) and The Association of Accountants and Financial Professionals in Business (IMA). “100 Drivers of Change for the Global Accountancy Profession.” September 2012. https://www.imanet.org/insights-and-trends/the-future-of-management-accounting/100-drivers-of-change-for-the-global-accountancy-profession?ssopc=1

Education

Entry-level positions in the accounting profession usually require a minimum of a bachelor’s degree. For advanced positions, firms may consider factors such as years of experience, professional development, certifications, and advanced degrees, such as a master’s or doctorate. The specific factors regarding educational requirements depend on the industry and the specific business.

After earning a bachelor’s degree, many students decide to continue their education by earning a master’s degree. A common question for students is when to begin a master’s program: either entering a master’s program immediately after earning a bachelor’s degree, or first entering the profession and pursuing a master’s at a later point. On one hand, there are benefits to entering a master’s program immediately after earning a bachelor’s degree, mainly because students are still in the rhythm of being a full-time student, so an additional year or so in a master’s program is appealing. On the other hand, entering the profession directly after earning a bachelor’s degree allows the student to gain valuable professional experience that may enrich the graduate education experience. When to enter a graduate program is not an easy decision. There are pros and cons to either position. In essence, the final decision depends on the personal perspective and alternatives available to the individual student. For example, one student might not have the financial resources to continue immediately on to graduate school and will first need to work to fund additional education, while another student might have outside sources of financial support or be considering taking on additional student loan debt. The best recommendation for these students is to consider all of the factors and realize that they must make the final decision as to their own best alternative. It is also important to note that if one makes the decision to enter public accounting, as all states require 150 hours of education to earn a Certified Public Accountant (CPA) license, it is customary for regional and national public accounting firms to require a master’s degree or 150 hours earned by other means as a condition for employment; this may influence your decision to enter a master’s degree program as soon as the bachelor’s degree is complete.
Related Careers An accounting degree is a valuable tool for other professions too. A thorough understanding of accounting provides the student with a comprehensive understanding of business activity and the importance of financial information to make informed decisions. While an accounting degree is a necessity to work in the accounting profession, it also provides a solid foundation for other careers, such as financial analysts, personal financial planners, and business executives. The number of career options may seem overwhelming at this point, and a career in the accounting profession is no exception. The purpose of this section is to simply highlight the vast number of options that an accounting degree offers. In the workforce, accounting professionals can find a career that best fits their interests. Students may also be interested in learning more about professional certifications in the areas of financial analysis (Chartered Financial Analyst) and personal financial planning (Certified Financial Planner), which are discussed later in this section. Major Categories of Accounting Functions It is a common perception that an accounting career means preparing tax returns. While numerous accountants do prepare tax returns, many people are surprised to learn of the variety of career paths that are available within the accounting profession. An accounting degree is a valuable tool that gives accountants a high level of flexibility and many options. Often individual accountants apply skills in several of the following career paths simultaneously. Figure 1.8 illustrates some of the many career paths open to accounting students. Auditing Auditing , which is performed by accountants with a specialized skill set, is the process of ensuring activities are carried out as intended or designed. There are many examples of the functions that auditors perform. For example, in a manufacturing company, auditors may sample products and assess whether or not the products conform to the customer specifications. As another example, county auditors may test pumps at gas stations to ensure the pumps are delivering the correct amount of gasoline and charging customers correctly. Companies should develop policies and procedures to help ensure the company’s goals are being met and the assets are protected. This is called the internal control system. To help maintain the effectiveness of the internal control system, companies often hire internal auditors, who evaluate internal controls through reviews and tests. For example, internal auditors may review the process of how cash is handled within a business. In this scenario, the goal of the company is to ensure that all cash payments are properly applied to customer accounts and that all funds are properly deposited into the company’s bank account. As another example, internal auditors may review the shipping and receiving process to ensure that all products shipped or received have the proper paperwork and the product is handled and stored properly. While internal auditors also often work to ensure compliance with external regulations, the primary goal of internal auditors is to help ensure the company policies are followed, which helps the company attain its strategic goals and protect its assets. The professional certification most relevant to a career in internal audit is the Certified Internal Auditor (CIA). Financial fraud occurs when an individual or individuals act with intent to deceive for a financial gain. 
A Certified Fraud Examiner (CFE) is trained to prevent fraud from occurring and to detect fraud when it has occurred. A thorough discussion of the internal control system and the role of accountants occurs in Fraud, Internal Controls, and Cash.

Companies also want to ensure that the financial statements provided to outside parties such as banks, governmental agencies, and the investing public are reliable and consistent. That is, companies have a desire to provide financial statements that are free of errors or fraud. Since internal auditors are committed to providing unbiased financial information, it would be possible for the company to use internal auditors to attest to the integrity of the company’s financial statements. Doing so, however, presents the appearance of a conflict of interest and could call into question the validity of the financial statements. Therefore, companies hire external auditors to review and attest to the integrity of the financial statements. External auditors typically work for a public accounting firm. Although the public accounting firm is hired by the company to attest to the fairness of the financial statements, the external auditors are independent of the company and provide an unbiased opinion.

Taxation

There are many taxes that businesses are required to pay. Examples include income taxes; payroll and related taxes such as workers’ compensation and unemployment; property and inventory taxes; and sales and use taxes. In addition to making the tax payments, many of the taxes require tax returns and other paperwork to be completed. Making things even more complicated is the fact that taxes are levied at the federal, state, and local levels. For larger worldwide companies, the work needed to meet their international tax compliance requirements can take literally thousands of hours of accountants’ time. In short, the goal of tax accountants is to help ensure that taxes are paid properly and in a timely manner, from the individual level all the way up to the level of large companies such as Apple and Walmart.

Since accountants have an understanding of various tax laws and filing deadlines, they are also well positioned to offer tax planning advice. Tax laws are complex and change frequently; therefore, it is helpful for businesses to include tax considerations in their short- and long-term planning, and accountants are a valuable resource in helping businesses minimize their tax liability. Many businesses find it necessary to employ accountants to work on tax compliance and planning on a full-time basis. Other businesses need these services on a periodic (quarterly or annual) basis and hire external accountants accordingly.

Financial Accounting

Financial accounting measures, in dollars, the activities of an organization. Financial accounting is historical in nature and is prepared using standard conventions, called accounting standards or GAAP. Because nearly every activity in an organization has a financial implication, financial accounting might be thought of as a “monetary scorecard.” Financial accounting is used internally by managers and other decision makers to validate activities that were done well and to highlight areas that need to be adjusted in the future. Businesses often use discretion as to how much financial accounting information is shared and with whom. Financial accounting is also provided to those outside the organization.
For a publicly traded company, issuing financial statements is required by the SEC. Sharing financial information for a privately held company is usually reserved for those instances where the information is required, such as for audits or obtaining loans.

Consulting

Because nearly every activity within an organization has a financial implication, accountants have a unique opportunity to gain a comprehensive view of an organization. Accountants are able to see how one area of a business affects a different aspect of the business. As accountants gain experience in the profession, this unique perspective allows them to build a “knowledge database” that is valuable to businesses. In this capacity, accountants can provide consulting services, which means giving advice or guidance to managers and other decision makers on the impact (both financial and nonfinancial) of a potential course of action. This role allows the organization to gain knowledge from the accountants in a way that minimizes risk and/or financial investment. As discussed previously, accountants may advise a business on tax-related issues. Other examples of consultative services that accountants perform include selection and installation of computer software applications and other technology considerations, review of internal controls, determination of compliance with relevant laws and regulations, review of compensation and incentive arrangements, and consideration of operational efficiencies within the production process.

Accounting Information Services

Computers are an integral part of business. Computers and related software programs allow companies to efficiently record, store, and process valuable data and information relevant to the business. Accountants are often an integral part of the selection and maintenance of the company’s computerized accounting and information system. The goal of the accounting information system is to efficiently provide relevant information to internal decision makers, and it is important for businesses to stay abreast of advances in technology and invest in those technologies that help the business remain efficient and competitive. Significant growth is expected in accounting information systems careers. According to the US Bureau of Labor Statistics, in 2010 there were over 130,000 jobs in the accounting information systems sector, with over 49% growth expected through 2024. Median earnings in this field were over $73,000 in 2011. 5 For those interested in both accounting and computer information systems, there are tremendous career opportunities.

5 Lauren Csorny. “Careers in the Growing Field of Information Technology Services.” Bureau of Labor Statistics/U.S. Department of Labor. April 2013. https://www.bls.gov/opub/btn/volume-2/careers-in-growing-field-of-information-technology-services.htm

Concepts In Practice

Enterprise Resource Planning

As companies grow in size and expand geographically, it is important to assess whether or not a current computerized system is the right size and fit for the organization. For example, a company with a single location can easily manage its business activities with a small, off-the-shelf software package such as QuickBooks and software applications such as Microsoft Excel. A company’s computer system becomes more complex when additional locations are added. As companies continue to grow, larger integrated computer systems, called enterprise resource planning (ERP) systems, may be implemented.
Enterprise resource planning systems are designed to maintain the various aspects of the business within a single integrated computer system. For example, a leading ERP system is Microsoft Dynamics GP, an integrated system with the capability to handle human resource management, production, accounting, manufacturing, and many other aspects of a business. ERP systems, like Microsoft Dynamics GP, are also designed to accommodate companies that have international locations. The benefit of ERP systems is that information is efficiently stored and utilized across the entire business in real time.

Cost and Managerial Accounting

Cost accounting and managerial accounting are related, but different, types of accounting. In essence, a primary distinction between the two functions is that cost accounting takes a primarily quantitative approach, whereas managerial accounting takes both quantitative and qualitative approaches. The goal of cost accounting is to determine the costs involved with providing goods and services. In a manufacturing business, cost accounting is the recording and tracking of costs such as direct materials, employee wages, and supplies used in the manufacturing process. Managerial accounting uses cost accounting and other financial accounting information, as well as nonfinancial information, to make short-term as well as strategic and other long-term decisions for a business. Both cost and managerial accounting are intended to be used inside a business. Along with financial accounting information, managers and other decision makers within a business use the information to facilitate decision-making, develop long-term plans, and perform other functions necessary for the success of the business.

There are two major differences between cost and managerial accounting on the one hand and financial accounting on the other. Whereas financial accounting requires the use of standard accounting conventions (also called accounting standards or GAAP), there are no such requirements for cost and managerial accounting; in practice, management’s particular needs determine what cost and managerial accounting information is prepared. In addition, financial information is prepared at specific intervals of time, usually monthly. The same is not true of cost and managerial accounting, which is prepared on an as-needed basis and is not reported for specific periods of time.

An example may be helpful in clarifying the difference between cost and managerial accounting. Manufacturing companies often face the decision of whether to make certain components or purchase the components from an outside supplier. Cost accounting would calculate the cost of each alternative. Managerial accounting would use that cost and supplement it with nonfinancial information to arrive at a decision. Let’s say the cost accountants determine that a company would save $0.50 per component if the units were purchased from an outside supplier rather than being produced by the company. Managers would use the $0.50 per piece savings as well as nonfinancial considerations, such as the impact on the morale of current employees and the supplier’s ability to produce a quality product, to decide whether or not to purchase the component from the outside supplier. In summary, it may be helpful to think of cost accounting as a subset of managerial accounting. Another way to think about cost and managerial accounting is that the result of cost accounting is a number, whereas the result of managerial accounting is a decision.
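The make-or-buy comparison above boils down to simple arithmetic once the cost accountants have done their work. The following sketch illustrates only that calculation; the $0.50 savings matches the example in the text, while the specific make cost, buy price, and annual volume are hypothetical numbers chosen for illustration.

```python
# Minimal make-or-buy sketch based on the example above.
# Cost accounting supplies the per-unit numbers; managerial
# accounting weighs the resulting savings against nonfinancial
# factors (employee morale, supplier quality) before deciding.

def make_or_buy_savings(cost_to_make, cost_to_buy, annual_volume):
    """Return (per-unit, annual) savings from buying instead of
    making; a negative result means making is cheaper."""
    per_unit = cost_to_make - cost_to_buy
    return per_unit, per_unit * annual_volume

# Hypothetical figures that reproduce the $0.50-per-component
# savings from the text, at an assumed 100,000 units per year:
per_unit, annual = make_or_buy_savings(4.00, 3.50, 100_000)
print(f"Buying saves ${per_unit:.2f} per unit, ${annual:,.2f} per year")
# Buying saves $0.50 per unit, $50,000.00 per year
```

Note that the number is where cost accounting stops; whether to act on the $50,000 annual savings is the managerial accounting decision.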
Financial Planning

While accountants spend much of their time interacting with other people, a large component of their work involves numbers and finances. As mentioned previously, many people who go into the accounting profession have an interest in data and a natural inclination toward solving problems. In addition, accountants gain a comprehensive view of business: they understand how the diverse aspects of the business are connected and how those activities ultimately have a financial impact on the organization. These attributes allow accountants to offer expertise in financial planning, which takes many forms. Within a business, making estimates and establishing a plan for the future—called a budget—are vital. These actions allow the business to determine the appropriate level of activity and make any adjustments accordingly. Training in accounting is also helpful for those who offer financial planning for individuals. When it comes to investing and saving for the future, there are many options available to individuals. Investing is complicated, and many people want help from someone who understands the complexities of the investment options, the tax implications, and ways to invest and build wealth. Accountants are well trained to offer financial planning services both to the businesses they work with and to individuals investing for their future.

Entrepreneurship

Many people have an idea for a product or service and decide to start their own business—they are often labeled entrepreneurs. These individuals have a passion for their product or service and are experts at what they do. But that is not enough. In order for the business to be successful, the entrepreneur must understand all aspects of the business, including and especially the financial aspect. It is important for the entrepreneur to understand how to obtain the funding to start the business, how to measure the financial performance of the business, and what adjustments are necessary to improve the performance of the business and when to make them. Understanding accounting, or hiring accountants who can perform these activities, is valuable to the entrepreneur. An entrepreneur works extremely hard and has often taken a great risk in starting his or her own business. Understanding the financial performance of the business helps ensure the business is successful.

Concepts In Practice

Entrepreneurship

Entrepreneurs do not have to develop a brand-new product or service in order to open their own business. Often entrepreneurs decide to purchase the right to operate a store under the banner of an existing business. This is called a franchise arrangement. In these arrangements, the business owner (the franchisee) typically pays the franchisor (the business offering the franchise opportunity) a lump sum at the beginning of the arrangement. This lump sum payment allows the franchisee to use the store logos and to receive training, consulting, and other support from the franchisor. A series of scheduled payments is also common; the ongoing payments are often based on a percentage of the franchise store’s sales. The franchise arrangement is beneficial to both parties. For the franchisee, there is less risk involved because they are purchasing a franchise from a business with an established track record of success. For the franchisor, it is an opportunity to build the brand without the responsibility of direct oversight of individual stores—each franchise is independently owned and operated (a phrase you might see on franchise stores).
The downside of the franchise arrangement is the amount of money paid to the franchisor through the initial lump sum as well as the continued payments. These costs, however, are necessary for the ongoing support from the franchisor. In addition, franchisees often face restrictions relative to product pricing and offerings, geographic locations, and approved suppliers. According to Entrepreneur.com, based on factors such as costs and fees, support, and brand strength, the number one–ranking franchise in 2017 was 7-Eleven, Inc. According to the website, 7-Eleven has been franchising since 1964 and has 61,086 franchise stores worldwide (7,025 of which are located in the United States). In addition, 7-Eleven has 1,019 company-owned stores. 6

6 “7-Eleven.” Entrepreneur.com. n.d. https://www.entrepreneur.com/franchises/7eleveninc/282052

Major Categories of Employers

Now that you’ve learned about the various career paths that accountants can take, let’s briefly look at the types of organizations that accountants can work for. Figure 1.10 illustrates some common types of employers that require accountants. While this is not an all-inclusive list, most accountants in the profession are employed by these types of organizations.

Public Accounting Firms

Public accounting firms offer a wide range of accounting, auditing, consulting, and tax preparation services to their clients. A small business might use a public accounting firm to prepare the monthly or quarterly financial statements and/or the payroll. A business of any size might hire the public accounting firm to audit the company financial statements or verify that policies and procedures are being followed properly. Public accounting firms may also offer consulting services to their clients, advising them on implementing computerized systems or strengthening the internal control system. (Note that you will learn in your advanced study of accounting that accountants have legal limitations on what consulting services they can provide to their clients.) Public accounting firms also offer tax preparation services for their business and individual clients, and may offer business valuation, forensic accounting (financial crimes), and other services.

Public accounting firms are often categorized based on size (revenue). The biggest firms are referred to as the “Big Four” and include Deloitte Touche Tohmatsu Limited (DTTL), PricewaterhouseCoopers (PwC), Ernst & Young (EY), and KPMG. Following the Big Four in size are firms such as RSM US, Grant Thornton, BDO USA, Crowe, and CliftonLarsonAllen (CLA). 7 There are also many other regional and local public accounting firms.

7 “2017 Top 100 Firms.” Accounting Today. 2017. https://lscpagepro.mydigitalpublication.com/publication/?i=390208#{%22issue_id%22:390208,%22page%22:0}

Public accounting firms often expect the accountants they employ to have earned (or to be working toward) the Certified Public Accountant (CPA) designation. It is not uncommon for public accounting firms to specialize. For example, some public accounting firms may specialize in serving clients in the banking or aerospace industries. In addition to specializing in specific industries, public accounting firms may also specialize in areas of accounting such as tax compliance and planning. Hiring public accounting firms to perform various services is an attractive option for many businesses.
The primary benefit is that the business has access to experts in the profession without needing to hire accounting specialists on a full-time basis.

Corporations

Corporations hire accountants to perform various functions within the business. The primary responsibility of corporate accountants (which include cost and managerial accountants) is to provide information for internal users and decision makers, as well as to implement and monitor internal controls. The information provided by corporate accountants takes many forms. For example, some of the common responsibilities of corporate accountants include calculating and tracking the costs of providing goods and services, analyzing the financial performance of the business in comparison to expectations, and developing budgets, which help the company plan for future operations and make any necessary adjustments. In addition, many corporate accountants have responsibility for, or help with, the company’s payroll and computer network. In smaller corporations, an accountant may be responsible for or assist with several of these activities. In larger firms, however, accountants may specialize in one area of responsibility and may rotate responsibilities throughout their career. Many larger firms also use accountants as part of the internal audit function. In addition, many large companies are able to dedicate resources to making the organization more efficient. Programs such as Lean Manufacturing and Six Sigma focus on reducing waste and eliminating cost within the organization, and accountants trained in these techniques receive specialized training that focuses on the cost impact of the activities of the business. As with many organizations, professional certifications are highly valued in corporations. The primary certification for corporate accounting is the Certified Management Accountant (CMA). Because corporations also undertake financial reporting and related activities, such as tax compliance, corporations often hire CPAs as well.

Governmental Entities

Accountants in governmental entities perform many of the same functions as accountants in public accounting firms and corporations. The primary goal of governmental accounting is to ensure proper tracking of the inflows and outflows of taxpayer funds using the prescribed standards. Some governmental accountants prepare financial reports, and some audit the work of other governmental agencies to ensure the funds are properly accounted for. The major difference between accountants in governmental entities and accountants working in public accounting firms and corporations relates to the specific rules by which the financial reporting must be prepared. Whereas accountants in public accounting firms and corporations use GAAP, governmental accounting is prepared under a different set of rules specific to governmental agencies, established by the previously mentioned Governmental Accounting Standards Board (GASB). Students continuing their study of accounting may take specific courses related to governmental accounting.

Accountants in the governmental sector may also work in specialized areas. For example, many accountants work for tax agencies at the federal, state, and local levels to ensure that the tax returns prepared by businesses and individuals comply with the tax code appropriate for the particular jurisdiction.
As another example, accountants employed by the SEC may investigate instances where financial crimes occur, as in the case of Bernie Madoff, which was discussed in Concepts in Practice: Financial Professionals and Fraud.

Concepts In Practice

Bringing Down Capone

Al Capone was one of the most notorious criminals in American history. Born in 1899 in Brooklyn, New York, Al Capone rose to fame as a gangster in Chicago during the era of Prohibition. By the late 1920s and early 1930s, Capone controlled a syndicate with a reported annual income of $100 million. Al Capone was credited with many murders, including masterminding the famous 1929 St. Valentine’s Day Massacre, which killed seven rival gang members. But law enforcement was unable to convict Capone of the murders he committed or orchestrated. Through bribes and extortion, Capone was able to evade severe punishment, being charged at one point with gun possession and serving a year in jail.

Capone’s luck ran out in 1931, when he was convicted of federal tax evasion. In 1927, the United States Supreme Court had ruled that earnings from illegal activities were taxable. Capone, however, did not claim the illegal earnings on his 1928 and 1929 income tax returns and was subsequently sentenced to eleven years in prison. Up to that point, it was the longest-ever sentence for tax evasion. Al Capone was paroled from prison in November 1939 and died on January 25, 1947. His life has been the subject of many articles, books, movies including Scarface (1932), and the TV series The Untouchables (1993).

Those interested in stories like this might consider working for the Federal Bureau of Investigation (FBI). According to the FBI, as of 2012, approximately 15% of FBI agents are special agent accountants.

Not-for-Profit Entities, Including Charities, Foundations, and Universities

Not-for-profit entities include charitable organizations, foundations, and universities. Unlike for-profit entities, not-for-profit organizations have as their primary focus a particular mission. Therefore, not-for-profit (NFP) accounting helps ensure that donor funds are used for the intended mission. Much like accountants in governmental entities, accountants in not-for-profit entities use a slightly different type of accounting than other types of businesses, with the primary difference being that not-for-profit entities typically do not pay income taxes. However, even if a not-for-profit organization is not subject to income taxes in a particular year, it generally must file informational returns, such as a Form 990, with the Internal Revenue Service (IRS). Information such as sources and amounts of funding and major types and amounts of expenditures is documented by the not-for-profit entities to provide information for potential and current donors. Once filed with the IRS, Form 990 is available for public view, so that the public can monitor how the specific charity uses proceeds as well as its operational efficiency.

Potential Certifications for Accountants

As previously discussed, the study of accounting serves as a foundation for other careers that are similar to accounting, and the certifications described here reflect that relationship. There are many benefits to attaining a professional certification (or multiple certifications) in addition to a college degree. Certifications often cover material at a deeper and more complex level than might typically be covered in a college program.
Those earning a professional certification demonstrate their willingness to invest the additional time and energy into becoming experts in their particular field. Because of this, employees with professional certifications are often in higher demand and earn higher salaries than those without professional certifications. Companies also benefit by having employees with professional certifications: a well-trained staff with demonstrated expertise conveys a level of professionalism that gives the organization a competitive advantage. In addition, professional certifications often require a certain number of hours of ongoing training. This helps ensure that the certificate holder remains informed of current advances within the profession, benefiting both the employee and the employer.

Each certification is developed and governed by its respective issuing body, which establishes the areas of content and the requirements for the specific certification. Links to the particular websites are provided so you can easily gain additional information. It is also important to note that many of the certifications have review courses available. The review courses help students prepare for the exam by offering test-taking strategies, practice questions and exams, and other materials that help students efficiently and effectively prepare.

Ethical Considerations

Accounting Codes of Ethics

In the United States, accountants can obtain a number of different certifications and can be licensed by each state to practice as a Certified Public Accountant (CPA). Accountants can also belong to professional organizations that have their own codes of conduct. As the online Stanford Encyclopedia of Philosophy explains, “many people engaged in business activity, including accountants and lawyers, are professionals. As such, they are bound by codes of conduct promulgated by professional societies. Many firms also have detailed codes of conduct, developed and enforced by teams of ethics and compliance personnel.” 8 CPAs can find a code of ethics in each state of practice and with the AICPA. 9 Certifications such as the CMA, CIA, CFE, CFA, and CFP each have their own codes of ethics.

8 Jeffrey Moriarty. “Business Ethics.” Stanford Encyclopedia of Philosophy. November 17, 2016. https://plato.stanford.edu/entries/ethics-business/

9 American Institute of Certified Public Accountants (AICPA). “AICPA Code of Professional Conduct.” n.d. https://www.aicpa.org/research/standards/codeofconduct.html

To facilitate cross-border business activities and accounting, an attempt has been made to set international standards. To this end, accounting standards organizations in more than 100 countries use the International Federation of Accountants’ (IFAC) Code of Ethics for Professional Accountants. 10 When auditing a public company, CPAs may also have to follow a special code of ethics created by the Public Company Accounting Oversight Board (PCAOB), and when performing federal tax work, the US Treasury Department’s Circular No. 230 code of ethics. These are just some examples of the ethical codes that are covered in more detail in this course. Each area of accounting work has its own set of ethical rules, but they all require that a professional accountant perform his or her work with integrity.

10 Catherine Allen and Robert Bunting. “A Global Standard for Professional Ethics: Cross-Border Business Concerns.” May 2008. https://www.ifrs.com/overview/Accounting_Firms/Global_Standard.html
Certified Public Accountant (CPA)

The Certified Public Accountant (CPA) designation is earned after passing a uniform exam issued by the American Institute of Certified Public Accountants (AICPA). While the exam is uniform and nationally administered, each state issues and governs CPA licenses. The CPA exam has four parts: Auditing and Attestation (AUD), Business Environment and Concepts (BEC), Financial Accounting and Reporting (FAR), and Regulation (REG). A score of at least 75% must be earned in order to earn the CPA designation.

Since each state determines the requirements for CPA licenses, students are encouraged to check their state board of accountancy for specific requirements. In Ohio, for example, candidates for the CPA exam must have 150 hours of college credit, of which thirty semester hours (or equivalent quarter hours) must be in accounting. Once the CPA designation is earned in Ohio, 120 hours of continuing education must be taken over a three-year period in order to maintain the certification. The requirements for the Ohio CPA exam are similar to the requirements of other states. Even though states issue CPA licenses, a CPA will not lose the designation should he or she move to another state. Each state has mobility or reciprocity requirements that allow CPAs to transfer licensure from one state to another; the reciprocity requirements can be obtained by contacting the respective state board of accountancy.

The majority of states require 150 hours of college credit, while students often graduate with a bachelor’s degree having earned approximately 120–130 credit hours. To reach the 150-hour requirement, students have a couple of options: the extra hours can be earned either by taking additional classes in their undergraduate program or by entering a graduate program and earning a master’s degree. The master’s degrees most beneficial in accounting or a related field are the master of accountancy, the master in taxation, and the master in analytics, which is rapidly increasing in demand.

Link to Learning

Information about the Certified Public Accountant (CPA) exam is provided by the following: the American Institute of Certified Public Accountants (AICPA), the National Association of State Boards of Accountancy (NASBA), and This Way to CPA.

Certified Management Accountant (CMA)

The Certified Management Accountant (CMA) exam is developed and administered by the Institute of Management Accountants (IMA). There are many benefits to earning the CMA designation, including career advancement and earnings potential. Management accountants, among other activities, prepare budgets, perform analysis of financial and operational variances, and determine the cost of providing goods and services. Earning the certification enables the management accountant to advance to management and executive positions within the organization. The CMA exam has two parts: Financial Reporting, Planning, Performance, and Control (part 1) and Financial Decision-Making (part 2). A score of at least 72% must be earned in order to earn the CMA designation. A minimum of a bachelor’s degree is required to take the CMA exam, but an accounting degree or a specific number of credit hours in accounting is not required. Once the CMA designation is earned, thirty hours of continuing education, with two of the hours focusing on ethics, must be taken annually in order to maintain the certification.
Link to Learning

Visit the Institute of Management Accountants (IMA)’s page on the Certified Management Accountant (CMA) exam and certification to learn more.

Certified Internal Auditor (CIA)

The Certified Internal Auditor (CIA) exam is developed and administered by the Institute of Internal Auditors (IIA). According to the IIA website, the four-part CIA exam tests “candidates’ grasp of internal auditing’s role in governance, risk, and control; conducting an audit engagement; business analysis and information technology; and business management skills.” 11 If a candidate does not have a bachelor’s degree, eligibility to take the CIA exam is based on a combination of work experience and education experience. In order to earn the CIA designation, a passing score of 80% is required. After successful passage of the CIA exam, certificate holders are required to earn eighty hours of continuing education credit every two years. 12

11 The Institute of Internal Auditors. “What Does It Take to Be a Professional?” n.d. https://na.theiia.org/about-ia/PublicDocuments/WDIT_Professional-WEB.pdf

12 The Institute of Internal Auditors. “What Does It Take to Be a Professional?” n.d. https://na.theiia.org/about-ia/PublicDocuments/WDIT_Professional-WEB.pdf

Link to Learning

Information about the Certified Internal Auditor (CIA) exam is provided by the following: the Institute of Internal Auditors (IIA), Global, and the Institute of Internal Auditors (IIA), North America.

Certified Fraud Examiner (CFE)

The Certified Fraud Examiner (CFE) exam is developed and administered by the Association of Certified Fraud Examiners (ACFE). Eligibility to take the CFE exam is based on a points system that weighs education and work experience. Candidates with forty points may sit for the CFE exam, and official certification is earned with fifty points or more; a bachelor’s degree, for example, is worth forty points toward the fifty-point requirement for certification. The CFE offers an attractive supplement for students interested in pursuing a career in accounting fraud detection. Students might also consider studying forensic accounting in college; these courses are often offered at the graduate level. The CFE exam has four parts: Fraud Prevention and Deterrence, Financial Transactions and Fraud Schemes, Investigation, and Law. Candidates must earn a minimum score of 75%. Once the CFE is earned, certificate holders must complete at least twenty hours of continuing education annually. The CFE certification is valued in many organizations, including governmental agencies at the local, state, and federal levels.

Link to Learning

Visit the Association of Certified Fraud Examiners (ACFE) page on the Certified Fraud Examiner (CFE) exam to learn more.

Chartered Financial Analyst (CFA)

The Chartered Financial Analyst (CFA) certification is developed and administered by the CFA Institute. The CFA exam contains three levels (level I, level II, and level III), testing expertise in Investment Tools, Asset Classes, and Portfolio Management. Those with a bachelor’s degree are eligible to take the CFA exam; in lieu of a bachelor’s degree, work experience or a combination of work experience and education is considered satisfactory for eligibility. After taking the exam, candidates receive a “Pass” or “Did Not Pass” result. A passing score is determined by the CFA Institute once the examination has been administered.
The passing score threshold is established after considering factors such as exam content and current best practices. After successful passage of all three levels of the CFA examination, chartered members must earn at least twenty hours of continuing education annually, of which two hours must be in Standards, Ethics, and Regulations (SER).

Link to Learning

Visit the CFA Institute’s page on the Chartered Financial Analyst (CFA) exam to learn more.

Certified Financial Planner (CFP)

The Certified Financial Planner (CFP) certification is developed and administered by the Certified Financial Planner (CFP) Board of Standards. The CFP exam consists of 170 multiple-choice questions taken over two three-hour sessions. There are several ways in which the eligibility requirements can be met in order to take the CFP exam, which students can explore using the CFP Board of Standards website. As with the Chartered Financial Analyst (CFA) exam, the CFP Board of Standards does not predetermine a passing score but establishes the pass/fail threshold through a deliberative evaluation process. Upon successful completion of the exam, CFPs must obtain thirty hours of continuing education every two years, with two of the hours focused on ethics.

Link to Learning

Visit the Certified Financial Planner (CFP) Board of Standards page on the Certified Financial Planner (CFP) exam to learn more.
[ { "answer": { "ans_choice": 2, "ans_text": "valid" }, "bloom": null, "hl_context": "But just because sociological studies use scientific methods does not make the results less human . Sociological topics are not reduced to right or wrong facts . In this field , results of studies tend to provide people with access to knowledge they did not have before — knowledge of other cultures , knowledge of rituals and beliefs , knowledge of trends and attitudes . No matter what research approach is used , researchers want to maximize the study ’ s reliability ( how likely research results are to be replicated if the study is reproduced ) . Reliability increases the likelihood that what happens to one person will happen to all people in a group . <hl> Researchers also strive for validity , which refers to how well the study measures what it was designed to measure . <hl> Returning to the Disney World topic , reliability of a study would reflect how well the resulting experience represents the average experience of theme park-goers . Validity would ensure that the study ’ s design accurately examined what it was designed to study , so an exploration of adults ’ interactions with costumed mascots should address that issue and not veer into other age groups ’ interactions with them or into adult interactions with staff or other guests .", "hl_sentences": "Researchers also strive for validity , which refers to how well the study measures what it was designed to measure .", "question": { "cloze_format": "A measurement is considered ______­ if it actually measures what it is intended to measure, according to the topic of the study.", "normal_format": "What is a measurement considered to be if it actually measures what it is intended to measure, according to the topic of the study?", "question_choices": [ "reliable", "sociological", "valid", "quantitative" ], "question_id": "fs-id2654855", "question_text": "A measurement is considered ______­ if it actually measures what it is intended to measure, according to the topic of the study." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "variable" }, "bloom": null, "hl_context": "<hl> A hypothesis is an assumption about how two or more variables are related ; it makes a conjectural statement about the relationship between those variables . <hl> <hl> In sociology , the hypothesis will often predict how one form of human behavior influences another . <hl> In research , independent variables are the cause of the change . The dependent variable is the effect , or thing that is changed .", "hl_sentences": "A hypothesis is an assumption about how two or more variables are related ; it makes a conjectural statement about the relationship between those variables . In sociology , the hypothesis will often predict how one form of human behavior influences another .", "question": { "cloze_format": "Sociological studies test relationships in which change in one ______ causes change in another.", "normal_format": "Sociological studies test relationships in which change in which of the following causes change in another?", "question_choices": [ "test subject", "behavior", "variable", "operational definition" ], "question_id": "fs-id1169033138764", "question_text": "Sociological studies test relationships in which change in one ______ causes change in another." 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "The weight gained " }, "bloom": "4", "hl_context": "<hl> As the chart shows , an independent variable is the one that causes a dependent variable to change . <hl> For example , a researcher might hypothesize that teaching children proper hygiene ( the independent variable ) will boost their sense of self-esteem ( the dependent variable ) . Or rephrased , a child ’ s sense of self-esteem depends , in part , on the quality and availability of hygienic resources . <hl> For example , in a basic study , the researcher would establish one form of human behavior as the independent variable and observe the influence it has on a dependent variable . <hl> How does gender ( the independent variable ) affect rate of income ( the dependent variable ) ? How does one ’ s religion ( the independent variable ) affect family size ( the dependent variable ) ? How is social class ( the dependent variable ) affected by level of education ( the independent variable ) ?", "hl_sentences": "As the chart shows , an independent variable is the one that causes a dependent variable to change . For example , in a basic study , the researcher would establish one form of human behavior as the independent variable and observe the influence it has on a dependent variable .", "question": { "cloze_format": "In a study, a group of 10-year-old boys are fed doughnuts every morning for a week and then weighed to see how much weight they gained. The factor that is the dependent variable is ___.", "normal_format": "In a study, a group of 10-year-old boys are fed doughnuts every morning for a week and then weighed to see how much weight they gained. Which factor is the dependent variable?", "question_choices": [ "The doughnuts", "The boys", "The duration of a week", "The weight gained " ], "question_id": "fs-id1169033153856", "question_text": "In a study, a group of 10-year-old boys are fed doughnuts every morning for a week and then weighed to see how much weight they gained. Which factor is the dependent variable?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "Body weight at least 20% higher than a healthy weight for a child of that height" }, "bloom": null, "hl_context": "Table 2.1 Examples of Dependent and Independent Variables Typically , the independent variable causes the dependent variable to change in some way . At this point , a researcher ’ s operational definitions help measure the variables . <hl> In a study asking how tutoring improves grades , for instance , one researcher might define “ good ” grades as a C or better , while another uses a B + as a starting point for “ good . ” Another operational definition might describe “ tutoring ” as “ one-on-one assistance by an expert in the field , hired by an educational institution . ” Those definitions set limits and establish cut-off points , ensuring consistency and replicability in a study . <hl> That is why sociologists are careful to define their terms . In a hygiene study , for instance , hygiene could be defined as “ personal habits to maintain physical appearance ( as opposed to health ) , ” and a researcher might ask , “ How do differing personal hygiene habits reflect the cultural value placed on appearance ? ” When forming these basic research questions , sociologists develop an operational definition , that is , they define the concept in terms of the physical or concrete steps it takes to objectively measure it . 
<hl> The operational definition identifies an observable condition of the concept . <hl> <hl> By operationalizing a variable of the concept , all researchers can collect data in a systematic or replicable manner . <hl>", "hl_sentences": "In a study asking how tutoring improves grades , for instance , one researcher might define “ good ” grades as a C or better , while another uses a B + as a starting point for “ good . ” Another operational definition might describe “ tutoring ” as “ one-on-one assistance by an expert in the field , hired by an educational institution . ” Those definitions set limits and establish cut-off points , ensuring consistency and replicability in a study . The operational definition identifies an observable condition of the concept . By operationalizing a variable of the concept , all researchers can collect data in a systematic or replicable manner .", "question": { "cloze_format": "____ provides the best operational definition of \"childhood obesity\".", "normal_format": "Which statement provides the best operational definition of “childhood obesity”?", "question_choices": [ "Children who eat unhealthy foods and spend too much time watching television and playing video games", "A distressing trend that can lead to health issues including type 2 diabetes and heart disease", "Body weight at least 20% higher than a healthy weight for a child of that height", "The tendency of children today to weigh more than children of earlier generations" ], "question_id": "fs-id1169033106284", "question_text": "Which statement provides the best operational definition of “childhood obesity”?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "Books and articles written by other authors about their studies" }, "bloom": "1", "hl_context": "While sociologists often engage in original research studies , they also contribute knowledge to the discipline through secondary data analysis . <hl> Secondary data don ’ t result from firsthand research collected from primary sources , but are the already completed work of other researchers . <hl> Sociologists might study works written by historians , economists , teachers , or early sociologists . They might search through periodicals , newspapers , or magazines from any period in history .", "hl_sentences": "Secondary data don ’ t result from firsthand research collected from primary sources , but are the already completed work of other researchers .", "question": { "cloze_format": "___ are materials that are considered secondary data.", "normal_format": "Which materials are considered secondary data?", "question_choices": [ "Photos and letters given to you by another person", "Books and articles written by other authors about their studies", "Information that you have gathered and now have included in your results", "Responses from participants whom you both surveyed and interviewed" ], "question_id": "fs-id2680643", "question_text": "Which materials are considered secondary data?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "Ethnography" }, "bloom": "1", "hl_context": "Mihelich and Papineau gathered much of their information online . <hl> Referring to their study as a “ Web ethnography , ” they collected extensive narrative material from fans who joined Parrothead clubs and posted their experiences on websites . 
<hl> “ We do not claim to have conducted a complete ethnography of Parrothead fans , or even of the Parrothead Web activity , ” state the authors , “ but we focused on particular aspects of Parrothead practice as revealed through Web research ” ( 2005 ) . Fan narratives gave them insight into how individuals identify with Buffett ’ s world and how fans used popular music to cultivate personal and collective meaning .", "hl_sentences": "Referring to their study as a “ Web ethnography , ” they collected extensive narrative material from fans who joined Parrothead clubs and posted their experiences on websites .", "question": { "cloze_format": "Researches John Mihelich and John Papineau use ___ to study Parrotheads.", "normal_format": "What method did researchers John Mihelich and John Papineau use to study Parrotheads?", "question_choices": [ "Survey", "Experiment", "Ethnography", "Case study" ], "question_id": "fs-id2276044", "question_text": "What method did researchers John Mihelich and John Papineau use to study Parrotheads?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "Everyone has the same chance of being part of the study" }, "bloom": "2", "hl_context": "A survey targets a specific population , people who are the focus of a study , such as college athletes , international students , or teenagers living with type 1 ( juvenile-onset ) diabetes . Most researchers choose to survey a small sector of the population , or a sample : that is , a manageable number of subjects who represent a larger population . The success of a study depends on how well a population is represented by the sample . <hl> In a random sample , every person in a population has the same chance of being chosen for the study . <hl> According to the laws of probability , random samples represent the population as a whole . For instance , a Gallup Poll , if conducted as a nationwide random sampling , should be able to provide an accurate estimate of public opinion whether it contacts 2,000 or 10,000 people .", "hl_sentences": "In a random sample , every person in a population has the same chance of being chosen for the study .", "question": { "cloze_format": "___ is an effective way to select participants.", "normal_format": "Why is choosing a random sample an effective way to select participants?", "question_choices": [ "Participants do not know they are part of a study", "The researcher has no control over who is in the study", "It is larger than an ordinary sample", "Everyone has the same chance of being part of the study" ], "question_id": "fs-id2198308", "question_text": "Why is choosing a random sample an effective way to select participants?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "Participant observation" }, "bloom": null, "hl_context": "In a study of small-town America conducted by sociological researchers John S . Lynd and Helen Merrell Lynd , the team altered their purpose as they gathered data . They initially planned to focus their study on the role of religion in American towns . <hl> As they gathered observations , they realized that the effect of industrialization and urbanization was the more relevant topic of this social group . <hl> The Lynds did not change their methods , but they revised their purpose . 
This shaped the structure of Middletown : A Study in Modern American Culture , their published results ( Lynd and Lynd 1959 ) .", "hl_sentences": "As they gathered observations , they realized that the effect of industrialization and urbanization was the more relevant topic of this social group .", "question": { "cloze_format": "John S. Lynd and Helen Merrell Lynd mainly used ___ in their Middletown study.", "normal_format": "What research method did John S. Lynd and Helen Merrell Lynd mainly use in their Middletown study?", "question_choices": [ "Secondary data", "Survey", "Participant observation", "Experiment" ], "question_id": "fs-id2362333", "question_text": "What research method did John S. Lynd and Helen Merrell Lynd mainly use in their Middletown study?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "Questionnaire" }, "bloom": "1", "hl_context": "<hl> As a research method , a survey collects data from subjects who respond to a series of questions about behaviors and opinions , often in the form of a questionnaire . <hl> <hl> The survey is one of the most widely used scientific research methods . <hl> The standard survey format allows individuals a level of anonymity in which they can express personal ideas .", "hl_sentences": "As a research method , a survey collects data from subjects who respond to a series of questions about behaviors and opinions , often in the form of a questionnaire . The survey is one of the most widely used scientific research methods .", "question": { "cloze_format": "____ is best suited to the scientific method.", "normal_format": "Which research approach is best suited to the scientific method?", "question_choices": [ "Questionnaire", "Case study", "Ethnography", "Secondary data analysis" ], "question_id": "fs-id1381268", "question_text": "Which research approach is best suited to the scientific method?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "Its results are not generally applicable" }, "bloom": "1", "hl_context": "Researchers might use this method to study a single case of , for example , a foster child , drug lord , cancer patient , criminal , or rape victim . <hl> However , a major criticism of the case study as a method is that a developed study of a single case , while offering depth on a topic , does not provide enough evidence to form a generalized conclusion . <hl> In other words , it is difficult to make universal claims based on just one person , since one person does not verify a pattern . This is why most sociologists do not use case studies as a primary research method .", "hl_sentences": "However , a major criticism of the case study as a method is that a developed study of a single case , while offering depth on a topic , does not provide enough evidence to form a generalized conclusion .", "question": { "cloze_format": "____ best describes the results of a case study.", "normal_format": "Which best describes the results of a case study?", "question_choices": [ "It produces more reliable results than other methods because of its depth", "Its results are not generally applicable", "It relies solely on secondary data analysis", "All of the above" ], "question_id": "fs-id2868965", "question_text": "Which best describes the results of a case study?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "Non-reactive" }, "bloom": "2", "hl_context": "<hl> One of the advantages of secondary data is that it is nonreactive ( or unobtrusive ) research , meaning that it does not include direct contact with subjects and will not alter or influence people ’ s behaviors . <hl> Unlike studies requiring direct contact with people , using previously published data doesn ’ t require entering a population and the investment and risks inherent in that research process .", "hl_sentences": "One of the advantages of secondary data is that it is nonreactive ( or unobtrusive ) research , meaning that it does not include direct contact with subjects and will not alter or influence people ’ s behaviors .", "question": { "cloze_format": "Using secondary data is considered an unobtrusive or ________ research method.", "normal_format": "Using secondary data is considered an unobtrusive or which research method?", "question_choices": [ "Non-reactive", "non-participatory", "non-restrictive", "non-confrontive" ], "question_id": "fs-id612003", "question_text": "Using secondary data is considered an unobtrusive or ________ research method." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "Max Weber" }, "bloom": "1", "hl_context": "<hl> Pioneer German sociologist Max Weber ( 1864 – 1920 ) identified another crucial ethical concern . <hl> Weber understood that personal values could distort the framework for disclosing study results . While he accepted that some aspects of research design might be influenced by personal values , he declared it was entirely inappropriate to allow personal values to shape the interpretation of the responses . <hl> Sociologists , he stated , must establish value neutrality , a practice of remaining impartial , without bias or judgment , during the course of a study and in publishing results ( 1949 ) . <hl> Sociologists are obligated to disclose research findings without omitting or distorting significant data .", "hl_sentences": "Pioneer German sociologist Max Weber ( 1864 – 1920 ) identified another crucial ethical concern . Sociologists , he stated , must establish value neutrality , a practice of remaining impartial , without bias or judgment , during the course of a study and in publishing results ( 1949 ) .", "question": { "cloze_format": "___ defined the concept of value neutrality.", "normal_format": "Which person or organization defined the concept of value neutrality?", "question_choices": [ "Institutional Review Board (IRB)", "Peter Rossi", "American Sociological Association (ASA)", "Max Weber" ], "question_id": "fs-id1662114", "question_text": "Which person or organization defined the concept of value neutrality?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A fast-food restaurant" }, "bloom": null, "hl_context": "Researchers are required to protect the privacy of research participants whenever possible . Even if pressured by authorities , such as police or courts , researchers are not ethically allowed to release confidential information . <hl> Researchers must make results available to other sociologists , must make public all sources of financial support , and must not accept funding from any organization that might cause a conflict of interest or seek to influence the research results for its own purposes . 
<hl> The ASA ’ s ethical considerations shape not only the study but also the publication of results .", "hl_sentences": "Researchers must make results available to other sociologists , must make public all sources of financial support , and must not accept funding from any organization that might cause a conflict of interest or seek to influence the research results for its own purposes .", "question": { "cloze_format": "To study the effects of fast food on lifestyle, health, and culture, a researcher ethically could not accept funding from a ___.", "normal_format": "To study the effects of fast food on lifestyle, health, and culture, from which group would a researcher ethically be unable to accept funding?", "question_choices": [ "A fast-food restaurant", "A nonprofit health organization", "A private hospital", "A governmental agency like Health and Social Services" ], "question_id": "fs-id1902216", "question_text": "To study the effects of fast food on lifestyle, health, and culture, from which group would a researcher ethically be unable to accept funding?" }, "references_are_paraphrase": 0 } ]
Chapter 2
2.1 Approaches to Sociological Research

When sociologists apply the sociological perspective and begin to ask questions, no topic is off limits. Every aspect of human behavior is a source of possible investigation. Sociologists question the world that humans have created and live in. They notice patterns of behavior as people move through that world. Using sociological methods and systematic research within the framework of the scientific method and a scholarly interpretive perspective, sociologists have discovered workplace patterns that have transformed industries, family patterns that have enlightened parents, and education patterns that have aided structural changes in classrooms.

The students in that college cafeteria discussion put forth a few loosely stated opinions. If the human behaviors around those claims were tested systematically, a student could write a report and offer the findings to fellow sociologists and the world in general. The new perspective could help people understand themselves and their neighbors and help people make better decisions about their lives. It might seem strange to use scientific practices to study social trends, but, as we shall see, it's extremely helpful to rely on the systematic approaches that research methods provide.

Sociologists often begin the research process by asking a question about how or why things happen in this world. It might be a unique question about a new trend or an old question about a common aspect of life. Once a question is formed, a sociologist proceeds through an in-depth process to answer it. In deciding how to design that process, the researcher may adopt a scientific approach or an interpretive framework. The following sections describe these approaches to knowledge.

The Scientific Method

Sociologists make use of tried-and-true methods of research, such as experiments, surveys, and field research. But humans and their social interactions are so diverse that they can seem impossible to chart or explain. It might seem that science is about discoveries and chemical reactions or about proving ideas right or wrong rather than about exploring the nuances of human behavior. However, this is exactly why scientific models work for studying human behavior. A scientific process of research establishes parameters that help make sure results are objective and accurate. Scientific methods provide limitations and boundaries that focus a study and organize its results.

The scientific method involves developing and testing theories about the world based on empirical evidence. It is defined by its commitment to systematic observation of the empirical world and strives to be objective, critical, skeptical, and logical. It involves a series of prescribed steps that have been established over centuries of scholarship. But just because sociological studies use scientific methods does not make the results less human. Sociological topics are not reduced to right or wrong facts. In this field, results of studies tend to provide people with access to knowledge they did not have before—knowledge of other cultures, knowledge of rituals and beliefs, knowledge of trends and attitudes.

No matter what research approach is used, researchers want to maximize the study's reliability (how likely research results are to be replicated if the study is reproduced). Reliability increases the likelihood that what happens to one person will happen to all people in a group.
Researchers also strive for validity, which refers to how well the study measures what it was designed to measure. Returning to the Disney World topic, reliability of a study would reflect how well the resulting experience represents the average experience of theme park-goers. Validity would ensure that the study's design accurately examined what it was designed to study, so an exploration of adults' interactions with costumed mascots should address that issue and not veer into other age groups' interactions with them or into adult interactions with staff or other guests.

In general, sociologists tackle questions about the role of social characteristics in outcomes. For example, how do different communities fare in terms of psychological well-being, community cohesiveness, range of vocation, wealth, crime rates, and so on? Are communities functioning smoothly? Sociologists look between the cracks to discover obstacles to meeting basic human needs. They might study environmental influences and patterns of behavior that lead to crime, substance abuse, divorce, poverty, unplanned pregnancies, or illness. And, because sociological studies are not all focused on negative behaviors or challenging situations, researchers might study vacation trends, healthy eating habits, neighborhood organizations, higher education patterns, games, parks, and exercise habits.

Sociologists can use the scientific method not only to collect but also to interpret and analyze data. They deliberately apply scientific logic and objectivity. They are interested in, but not attached to, the results. They work outside of their own political or social agendas. This doesn't mean researchers do not have their own personalities, complete with preferences and opinions. But sociologists deliberately use the scientific method to maintain as much objectivity, focus, and consistency as possible in a particular study.

With its systematic approach, the scientific method has proven useful in shaping sociological studies. The scientific method provides a systematic, organized series of steps that help ensure objectivity and consistency in exploring a social problem. These steps provide the means for accuracy, reliability, and validity. In the end, the scientific method provides a shared basis for discussion and analysis (Merton 1963). Typically, the scientific method starts with these steps—1) ask a question, 2) research existing sources, 3) formulate a hypothesis—described below.

Ask a Question

The first step of the scientific method is to ask a question, describe a problem, and identify the specific area of interest. The topic should be narrow enough to study within a geography and timeframe. "Are societies capable of sustained happiness?" would be too vague. The question should also be broad enough to have universal merit. "What do personal hygiene habits reveal about the values of students at XYZ High School?" would be too narrow. That said, happiness and hygiene are worthy topics to study. Sociologists do not rule out any topic, but would strive to frame these questions in better research terms. That is why sociologists are careful to define their terms.
In a hygiene study, for instance, hygiene could be defined as "personal habits to maintain physical appearance (as opposed to health)," and a researcher might ask, "How do differing personal hygiene habits reflect the cultural value placed on appearance?" When forming these basic research questions, sociologists develop an operational definition; that is, they define the concept in terms of the physical or concrete steps it takes to objectively measure it. The operational definition identifies an observable condition of the concept. By operationalizing a variable of the concept, all researchers can collect data in a systematic or replicable manner. The operational definition must be valid, appropriate, and meaningful. And it must be reliable, meaning that results will be close to uniform when tested on more than one person. For example, "good drivers" might be defined in many ways: those who use their turn signals, those who don't speed, or those who courteously allow others to merge. But these driving behaviors could be interpreted differently by different researchers and could be difficult to measure. Alternatively, "a driver who has never received a traffic violation" is a specific description that will lead researchers to obtain the same information, so it is an effective operational definition.

Research Existing Sources

The next step researchers undertake is to conduct background research through a literature review, which is a review of any existing similar or related studies. A visit to the library and a thorough online search will uncover existing research about the topic of study. This step helps researchers gain a broad understanding of work previously conducted on the topic at hand and enables them to position their own research to build on prior knowledge. Researchers—including student researchers—are responsible for correctly citing existing sources they use in a study or that inform their work. While it is fine to borrow previously published material (as long as it enhances a unique viewpoint), it must be referenced properly and never plagiarized.

To study hygiene and its value in a particular society, a researcher might sort through existing research and unearth studies about child-rearing, vanity, obsessive-compulsive behaviors, and cultural attitudes toward beauty. It's important to sift through this information and determine what is relevant. Using existing sources educates a researcher and helps refine and improve a study's design.

Formulate a Hypothesis

A hypothesis is an assumption about how two or more variables are related; it makes a conjectural statement about the relationship between those variables. In sociology, the hypothesis will often predict how one form of human behavior influences another. In research, independent variables are the cause of the change. The dependent variable is the effect, or the thing that is changed. For example, in a basic study, the researcher would establish one form of human behavior as the independent variable and observe the influence it has on a dependent variable. How does gender (the independent variable) affect rate of income (the dependent variable)? How does one's religion (the independent variable) affect family size (the dependent variable)? How is social class (the dependent variable) affected by level of education (the independent variable)?

Table 2.1 Examples of Dependent and Independent Variables

Hypothesis | Independent Variable | Dependent Variable
The greater the availability of affordable housing, the lower the homeless rate. | Affordable Housing | Homeless Rate
The greater the availability of math tutoring, the higher the math grades. | Math Tutoring | Math Grades
The greater the police patrol presence, the safer the neighborhood. | Police Patrol Presence | Safer Neighborhood
The greater the factory lighting, the higher the productivity. | Factory Lighting | Productivity
The greater the amount of observation, the higher the public awareness. | Observation | Public Awareness

Typically, the independent variable causes the dependent variable to change in some way. At this point, a researcher's operational definitions help measure the variables. In a study asking how tutoring improves grades, for instance, one researcher might define "good" grades as a C or better, while another uses a B+ as a starting point for "good." Another operational definition might describe "tutoring" as "one-on-one assistance by an expert in the field, hired by an educational institution." Those definitions set limits and establish cut-off points, ensuring consistency and replicability in a study.

As the table shows, an independent variable is the one that causes a dependent variable to change. For example, a researcher might hypothesize that teaching children proper hygiene (the independent variable) will boost their sense of self-esteem (the dependent variable). Or rephrased, a child's sense of self-esteem depends, in part, on the quality and availability of hygienic resources. Of course, this hypothesis can also work the other way around. Perhaps a sociologist believes that increasing a child's sense of self-esteem (the independent variable) will automatically increase or improve habits of hygiene (now the dependent variable). Identifying the independent and dependent variables is very important. As the hygiene example shows, simply identifying two topics, or variables, is not enough: their prospective relationship must be part of the hypothesis.

Just because a sociologist forms an educated prediction of a study's outcome doesn't mean data contradicting the hypothesis aren't welcome. Sociologists analyze general patterns in response to a study, but they are equally interested in exceptions to patterns. In a study of education, a researcher might predict that high school dropouts have a hard time finding a rewarding career. While it has become at least a cultural assumption that the higher the education, the higher the salary and degree of career happiness, there are certainly exceptions. People with little education have had stunning careers, and people with advanced degrees have had trouble finding work. A sociologist prepares a hypothesis knowing that results will vary.

Once the preliminary work is done, it's time for the next research steps: designing and conducting a study, and drawing conclusions. These research methods are discussed below.

Interpretive Framework

While many sociologists rely on the scientific method as a research approach, others operate from an interpretive framework. While systematic, this approach doesn't follow the hypothesis-testing model that seeks to find generalizable results. Instead, an interpretive framework, sometimes referred to as an interpretive perspective, seeks to understand social worlds from the point of view of participants, leading to in-depth knowledge. Interpretive research is generally more descriptive or narrative in its findings.
Rather than formulating a hypothesis and method for testing it, an interpretive researcher will develop approaches to explore the topic at hand that may involve lots of direct observation or interaction with subjects. This type of researcher also learns as he or she proceeds, sometimes adjusting the research methods or processes midway to optimize findings as they evolve.

2.2 Research Methods

Sociologists examine the world, see a problem or interesting pattern, and set out to study it. They use research methods to design a study—perhaps a detailed, systematic, scientific method for conducting research and obtaining data, or perhaps an ethnographic study utilizing an interpretive framework. Planning the research design is a key step in any sociological study.

When entering a particular social environment, a researcher must be careful. There are times to remain anonymous and times to be overt. There are times to conduct interviews and times to simply observe. Some participants need to be thoroughly informed; others should not know they are being observed. A researcher wouldn't stroll into a crime-ridden neighborhood at midnight, calling out, "Any gang members around?" And if a researcher walked into a coffee shop and told the employees they would be observed as part of a study on work efficiency, the self-conscious, intimidated baristas might not behave naturally.

In the 1920s, leaders of a Chicago factory called Hawthorne Works commissioned a study to determine whether or not lighting could increase or decrease worker productivity. Sociologists were brought in. Changes were made. Productivity increased. Results were published. But when the study was over, productivity dropped again. Why did this happen? In 1953, Henry A. Landsberger analyzed the study results to answer this question. He realized that employee productivity increased because sociologists were paying attention to them. The sociologists' presence influenced the study results. Worker behaviors were altered not by the lighting but by the study itself. From this, sociologists learned the importance of carefully planning their roles as part of their research design (Franke and Kaul 1978). Landsberger called the workers' response the Hawthorne effect—people changing their behavior because they know they are being watched as part of a study.

The Hawthorne effect is unavoidable in some research. In many cases, sociologists have to make the purpose of the study known. Subjects must be aware that they are being observed, and a certain amount of artificiality may result (Sonnenfeld 1985). Making sociologists' presence invisible is not always realistic for other reasons. That option is not available to a researcher studying prison behaviors, early education, or the Ku Klux Klan. Researchers can't just stroll into prisons, kindergarten classrooms, or Klan meetings and unobtrusively observe behaviors. In situations like these, other methods are needed. All studies shape the research design, while research design simultaneously shapes the study. Researchers choose methods that best suit their study topic and that fit with their overall approach to research.

In planning a study's design, sociologists generally choose from four widely used methods of social investigation: survey, field research, experiment, and secondary data analysis (or use of existing sources). Every research method comes with plusses and minuses, and the topic of study strongly influences which method or methods are put to use.
Surveys

As a research method, a survey collects data from subjects who respond to a series of questions about behaviors and opinions, often in the form of a questionnaire. The survey is one of the most widely used scientific research methods. The standard survey format allows individuals a level of anonymity in which they can express personal ideas.

At some point or another, everyone responds to some type of survey. The United States Census is an excellent example of a large-scale survey intended to gather sociological data. Customers fill out questionnaires at stores or promotional events, responding to questions such as "How did you hear about the event?" and "Were the staff helpful?" You've probably picked up the phone and heard a caller ask you to participate in a political poll or similar type of survey. "Do you eat hot dogs? If yes, how many per month?"

Not all surveys would be considered sociological research. Marketing polls help companies refine marketing goals and strategies; they are generally not conducted as part of a scientific study, meaning they are not designed to test a hypothesis or to contribute knowledge to the field of sociology. The results are not published in a refereed scholarly journal, where design, methodology, results, and analyses are vetted. Often, polls on TV do not reflect a general population, but are merely answers from a specific show's audience. Polls conducted by programs such as American Idol or So You Think You Can Dance represent the opinions of fans but are not particularly scientific. A good contrast to these is the Nielsen Ratings, which determine the popularity of television programming through scientific market research.

Sociologists conduct surveys under controlled conditions for specific purposes. Surveys gather different types of information from people. While surveys are not great at capturing the ways people really behave in social situations, they are a great method for discovering how people feel and think—or at least how they say they feel and think. Surveys can track preferences for presidential candidates or reported individual behaviors (such as sleeping, driving, or texting habits), or factual information such as employment status, income, and education levels.

A survey targets a specific population, people who are the focus of a study, such as college athletes, international students, or teenagers living with type 1 (juvenile-onset) diabetes. Most researchers choose to survey a small sector of the population, or a sample: that is, a manageable number of subjects who represent a larger population. The success of a study depends on how well a population is represented by the sample. In a random sample, every person in a population has the same chance of being chosen for the study. According to the laws of probability, random samples represent the population as a whole. For instance, a Gallup Poll, if conducted as a nationwide random sampling, should be able to provide an accurate estimate of public opinion whether it contacts 2,000 or 10,000 people.

After selecting subjects, the researcher develops a specific plan to ask questions and record responses. It is important to inform subjects of the nature and purpose of the study up front. If they agree to participate, researchers thank subjects and offer them a chance to see the results of the study if they are interested. The researcher presents the subjects with an instrument, a means of gathering the information.
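The random-sample idea just described lends itself to a short illustration in code, and it also previews the tabulation step discussed in the next paragraphs: counting yes/no answers and converting them to percentages. This is a minimal sketch under invented assumptions; the population, sample size, and simulated responses are all hypothetical, and Python's standard random module stands in for the more careful procedures a real polling organization would use.

```python
import random
from collections import Counter

# A toy population. In a real survey this might be a registry of
# millions of people; the names here are invented placeholders.
population = [f"person_{i}" for i in range(10_000)]

# random.sample gives every member of the population the same chance
# of being chosen, which is the defining property of a random sample.
sample = random.sample(population, k=500)

# Simulate a yes/no answer from each sampled person. In a real study
# these would come from the questionnaire instrument, not a coin flip.
responses = [random.choice(["yes", "no"]) for _ in sample]

# Quantitative data are easy to tabulate: count each response and
# chart the counts as percentages.
counts = Counter(responses)
for answer, count in counts.most_common():
    print(f"{answer}: {count} ({100 * count / len(responses):.1f}%)")
```

Run repeatedly, the percentages hover near an even split, which is what the laws of probability predict for a truly random draw from this toy population.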
A common instrument is a questionnaire, in which subjects answer a series of questions. For some topics, the researcher might ask yes-or-no or multiple-choice questions, allowing subjects to choose possible responses to each question. This kind of quantitative data—research collected in numerical form that can be counted—are easy to tabulate. Just count up the number of "yes" and "no" responses or correct answers and chart them into percentages.

Questionnaires can also ask more complex questions with more complex answers—beyond "yes," "no," or the option next to a checkbox. In those cases, the answers are subjective, varying from person to person. How do you plan to use your college education? Why do you follow Jimmy Buffett around the country and attend every concert? Those types of questions require short essay responses, and participants willing to take the time to write those answers will convey personal information about religious beliefs, political views, and morals.

Some topics that reflect internal thought are impossible to observe directly and are difficult to discuss honestly in a public forum. People are more likely to share honest answers if they can respond to questions anonymously. This type of information is qualitative data—results that are subjective and often based on what is seen in a natural setting. Qualitative information is harder to organize and tabulate. The researcher will end up with a wide range of responses, some of which may be surprising. The benefit of written opinions, though, is the wealth of material that they provide.

An interview is a one-on-one conversation between the researcher and the subject, and is a way of conducting surveys on a topic. Interviews are similar to the short-answer questions on surveys in that the researcher asks subjects a series of questions. However, participants are free to respond as they wish, without being limited by predetermined choices. In the back-and-forth conversation of an interview, a researcher can ask for clarification, spend more time on a subtopic, or ask additional questions. In an interview, a subject will ideally feel free to open up and answer questions that are often complex. There are no right or wrong answers. The subject might not even know how to answer the questions honestly. Questions such as "How did society's view of alcohol consumption influence your decision whether or not to take your first sip of alcohol?" or "Did you feel that the divorce of your parents would put a social stigma on your family?" involve so many factors that the answers are difficult to categorize. A researcher needs to avoid steering or prompting the subject to respond in a specific way; otherwise, the results will prove to be unreliable. And, obviously, a sociological interview is not an interrogation. The researcher will benefit from gaining a subject's trust, from empathizing or commiserating with a subject, and from listening without judgment.

Field Research

The work of sociology rarely happens in limited, confined spaces. Sociologists seldom study subjects in their own offices or laboratories. Rather, sociologists go out into the world. They meet subjects where they live, work, and play. Field research refers to gathering primary data from a natural environment without doing a lab experiment or a survey. It is a research method suited to an interpretive framework rather than to the scientific method.
To conduct field research, the sociologist must be willing to step into new environments and observe, participate, or experience those worlds. In field work, the sociologists, rather than the subjects, are the ones out of their element. The researcher interacts with or observes a person or people, gathering data along the way. The key point in field research is that it takes place in the subject's natural environment, whether it's a coffee shop or tribal village, a homeless shelter or the DMV, a hospital, airport, mall, or beach resort.

While field research often begins in a specific setting, the study's purpose is to observe specific behaviors in that setting. Field work is optimal for observing how people behave. It is less useful, however, for understanding why they behave that way. You can't really narrow down cause and effect when there are so many variables floating around in a natural environment. Much of the data gathered in field research are based not on cause and effect but on correlation. And while field research looks for correlation, its small sample size does not allow for establishing a causal relationship between two variables.

Sociology in the Real World
Parrotheads as Sociological Subjects

Some sociologists study small groups of people who share an identity in one aspect of their lives. Almost everyone belongs to a group of like-minded people who share an interest or hobby. Scientologists, folk dancers, or members of Mensa (an organization for people with exceptionally high IQs) express a specific part of their identity through their affiliation with a group. Those groups are often of great interest to sociologists.

Jimmy Buffett, an American musician who built a career from his single top-10 song "Margaritaville," has a following of devoted groupies called Parrotheads. Some of them have taken fandom to the extreme, making Parrothead culture a lifestyle. In 2005, Parrotheads and their subculture caught the attention of researchers John Mihelich and John Papineau. The two saw the way Jimmy Buffett fans collectively created an artificial reality. They wanted to know how fan groups shape culture. The result was a study and resulting article called "Parrotheads in Margaritaville: Fan Practice, Oppositional Culture, and Embedded Cultural Resistance in Buffett Fandom."

What Mihelich and Papineau found was that Parrotheads, for the most part, do not seek to challenge or even change society, as many sub-groups do. In fact, most Parrotheads live successfully within society, holding upper-level jobs in the corporate world. What they seek is escape from the stress of daily life. They get it from Jimmy Buffett's concerts and from the public image he projects. Buffett fans collectively keep their version of an alternate reality alive.

At Jimmy Buffett concerts, Parrotheads engage in a form of role play. They paint their faces and dress for the tropics in grass skirts, Hawaiian leis, and Parrot hats. These fans don't generally play the part of Parrotheads outside of these concerts; you are not likely to see a lone Parrothead in a bank or library. In that sense, Parrothead culture is less about individualism and more about conformity. Being a Parrothead means sharing a specific identity. Parrotheads feel connected to each other: it's a group identity, not an individual one.
On fan websites, followers conduct polls calling for responses to message-board prompts such as "Why are you a Parrothead?" and "Where is your Margaritaville?" To the latter question, fans define the place as anywhere from a beach to a bar to a peaceful state of mind. Ultimately, however, "Margaritaville" is an imaginary place.

In their study, Mihelich and Papineau quote from a recent book by sociologist Richard Butsch, who writes, "un-self-conscious acts, if done by many people together, can produce change, even though the change may be unintended" (2000). Many Parrothead fan groups have performed good works in the name of Jimmy Buffett culture, donating to charities and volunteering their services.

However, the authors suggest that what really drives Parrothead culture is commercialism. Jimmy Buffett's popularity was dying out in the 1980s until being reinvigorated after he signed a sponsorship deal with a beer company. These days, his concert tours alone generate nearly $30 million a year. Buffett made a lucrative career for himself by partnering with product companies and marketing Margaritaville in the form of T-shirts, restaurants, casinos, and an expansive line of products. Some fans accuse Buffett of selling out, while others admire his financial success. Buffett makes no secret of his commercial exploitations; from the stage, he's been known to tell his fans, "Just remember, I am spending your money foolishly."

Mihelich and Papineau gathered much of their information online. Referring to their study as a "Web ethnography," they collected extensive narrative material from fans who joined Parrothead clubs and posted their experiences on websites. "We do not claim to have conducted a complete ethnography of Parrothead fans, or even of the Parrothead Web activity," state the authors, "but we focused on particular aspects of Parrothead practice as revealed through Web research" (2005). Fan narratives gave them insight into how individuals identify with Buffett's world and how fans used popular music to cultivate personal and collective meaning.

In conducting studies about pockets of culture, most sociologists seek to discover a universal appeal. Mihelich and Papineau stated, "Although Parrotheads are a relative minority of the contemporary US population, an in-depth look at their practice and conditions illuminate [sic] cultural practices and conditions many of us experience and participate in" (2005).

Here, we will look at three types of field research: participant observation, ethnography, and the case study.

Participant Observation

In 2000, a comic writer named Rodney Rothman wanted an insider's view of white-collar work. He slipped into the sterile, high-rise offices of a New York "dot com" agency. Every day for two weeks, he pretended to work there. His main purpose was simply to see if anyone would notice him or challenge his presence. No one did. The receptionist greeted him. The employees smiled and said good morning. Rothman was accepted as part of the team. He even went so far as to claim a desk, inform the receptionist of his whereabouts, and attend a meeting. He published an article about his experience in The New Yorker called "My Fake Job" (2000). Later, he was discredited for allegedly fabricating some details of the story, and The New Yorker issued an apology. However, Rothman's entertaining article still offered fascinating descriptions of the inside workings of a "dot com" company and exemplified the lengths to which a sociologist will go to uncover material.
Rothman had conducted a form of study called participant observation, in which researchers join people and participate in a group's routine activities for the purpose of observing them within that context. This method lets researchers experience a specific aspect of social life. A researcher might go to great lengths to get a firsthand look into a trend, institution, or behavior. Researchers temporarily put themselves into roles and record their observations. A researcher might work as a waitress in a diner, or live as a homeless person for several weeks, or ride along with police officers as they patrol their regular beat. Often, these researchers try to blend in seamlessly with the population they study, and they may not disclose their true identity or purpose if they feel it would compromise the results of their research.

At the beginning of a field study, researchers might have a question: "What really goes on in the kitchen of the most popular diner on campus?" or "What is it like to be homeless?" Participant observation is a useful method if the researcher wants to explore a certain environment from the inside. Field researchers simply want to observe and learn. In such a setting, the researcher will be alert and open minded to whatever happens, recording all observations accurately. Soon, as patterns emerge, questions will become more specific, observations will lead to hypotheses, and hypotheses will guide the researcher in shaping data into results.

In a study of small-town America conducted by sociological researchers Robert S. Lynd and Helen Merrell Lynd, the team altered their purpose as they gathered data. They initially planned to focus their study on the role of religion in American towns. As they gathered observations, they realized that the effect of industrialization and urbanization was the more relevant topic of this social group. The Lynds did not change their methods, but they revised their purpose. This shaped the structure of Middletown: A Study in Modern American Culture, their published results (Lynd and Lynd 1959).

The Lynds were upfront about their mission. The townspeople of Muncie, Indiana, knew why the researchers were in their midst. But some sociologists prefer not to alert people to their presence. The main advantage of covert participant observation is that it allows the researcher access to authentic, natural behaviors of a group's members. The challenge, however, is gaining access to a setting without disrupting the pattern of others' behavior. Becoming an inside member of a group, organization, or subculture takes time and effort. Researchers must pretend to be something they are not. The process could involve role playing, making contacts, networking, or applying for a job.

Once inside a group, some researchers spend months or even years pretending to be one of the people they are observing. However, as observers, they cannot get too involved. They must keep their purpose in mind and apply the sociological perspective. That way, they illuminate social patterns that are often unrecognized. Because information gathered during participant observation is mostly qualitative, rather than quantitative, the end results are often descriptive or interpretive. The researcher might present findings in an article or book, describing what he or she witnessed and experienced.

This type of research is what journalist Barbara Ehrenreich conducted for her book Nickel and Dimed. One day over lunch with her editor, as the story goes, Ehrenreich mentioned an idea.
How can people exist on minimum-wage work? How do low-income workers get by? she wondered. Someone should do a study. To her surprise, her editor responded, Why don't you do it?

That's how Ehrenreich found herself joining the ranks of the working class. For several months, she left her comfortable home and lived and worked among people who lacked, for the most part, higher education and marketable job skills. Undercover, she applied for and worked minimum-wage jobs as a waitress, a cleaning woman, a nursing home aide, and a retail chain employee. During her participant observation, she used only her income from those jobs to pay for food, clothing, transportation, and shelter.

She discovered the obvious, that it's almost impossible to get by on minimum-wage work. She also experienced and observed attitudes many middle- and upper-class people never think about. She witnessed firsthand the treatment of working-class employees. She saw the extreme measures people take to make ends meet and to survive. She described fellow employees who held two or three jobs, worked seven days a week, lived in cars, could not pay to treat chronic health conditions, got randomly fired, submitted to drug tests, and moved in and out of homeless shelters. She brought aspects of that life to light, describing difficult working conditions and the poor treatment that low-wage workers suffer.

Nickel and Dimed: On (Not) Getting By in America, the book she wrote upon her return to her real life as a well-paid writer, has been widely read and used in many college classrooms.

Ethnography

Ethnography is the extended observation of the social perspective and cultural values of an entire social setting. Ethnographies involve objective observation of an entire community. The heart of an ethnographic study focuses on how subjects view their own social standing and how they understand themselves in relation to a community.

An ethnographic study might observe, for example, a small American fishing town, an Inuit community, a village in Thailand, a Buddhist monastery, a private boarding school, or Disney World. These places all have borders. People live, work, study, or vacation within those borders. People are there for a certain reason and therefore behave in certain ways and respect certain cultural norms. An ethnographer would commit to spending a determined amount of time studying every aspect of the chosen place, taking in as much as possible. A sociologist studying a tribe in the Amazon might watch the way villagers go about their daily lives and then write a paper about it. To observe a spiritual retreat center, an ethnographer might sign up for a retreat and attend as a guest for an extended stay, observe and record data, and collate the material into results.

Sociological Research
The Making of Middletown: A Study in Modern American Culture

In 1924, a young married couple named Robert and Helen Lynd undertook an unprecedented ethnography: to apply sociological methods to the study of one US city in order to discover what "ordinary" Americans did and believed. Choosing Muncie, Indiana (population about 30,000), as their subject, they moved to the small town and lived there for eighteen months. Ethnographers had been examining other cultures for decades—groups considered minority or outsider—like gangs, immigrants, and the poor. But no one had studied the so-called average American. Recording interviews and using surveys to gather data, the Lynds did not sugarcoat or idealize American life (PBS).
They objectively stated what they observed. Researching existing sources, they compared Muncie in 1890 to the Muncie they observed in 1924. Most Muncie adults, they found, had grown up on farms but now lived in homes inside the city. From that discovery, the Lynds focused their study on the impact of industrialization and urbanization.

They observed that Muncie was divided into business-class and working-class groups. They defined the business class as dealing with abstract concepts and symbols, while working-class people used tools to create concrete objects. The two classes led different lives with different goals and hopes. However, the Lynds observed, mass production offered both classes the same amenities. Like wealthy families, the working class was now able to own radios, cars, washing machines, telephones, vacuum cleaners, and refrigerators. This was an emerging new material reality of the 1920s.

As the Lynds worked, they divided their manuscript into six sections: Getting a Living, Making a Home, Training the Young, Using Leisure, Engaging in Religious Practices, and Engaging in Community Activities. Each chapter included subsections such as "The Long Arm of the Job" and "Why Do They Work So Hard?" in the "Getting a Living" chapter.

When the study was completed, the Lynds encountered a big problem. The Rockefeller Foundation, which had commissioned the book, claimed it was useless and refused to publish it. The Lynds asked if they could seek a publisher themselves. Middletown: A Study in Modern American Culture was not only published in 1929 but became an instant bestseller, a status unheard of for a sociological study. The book sold out six printings in its first year of publication and has never gone out of print (PBS).

Nothing like it had ever been done before. Middletown was reviewed on the front page of the New York Times. Readers in the 1920s and 1930s identified with the citizens of Muncie, Indiana, but they were equally fascinated by the sociological methods and the use of scientific data to define ordinary Americans. The book was proof that social data was important—and interesting—to the American public.

Case Study

Sometimes a researcher wants to study one specific person or event. A case study is an in-depth analysis of a single event, situation, or individual. To conduct a case study, a researcher examines existing sources like documents and archival records, conducts interviews, engages in direct observation, and even participant observation, if possible.

Researchers might use this method to study a single case of, for example, a foster child, drug lord, cancer patient, criminal, or rape victim. However, a major criticism of the case study as a method is that a developed study of a single case, while offering depth on a topic, does not provide enough evidence to form a generalized conclusion. In other words, it is difficult to make universal claims based on just one person, since one person does not verify a pattern. This is why most sociologists do not use case studies as a primary research method.

However, case studies are useful when the single case is unique. In these instances, a single case study can add tremendous knowledge to a certain discipline. For example, a feral child, also called a "wild child," is one who grows up isolated from human beings. Feral children grow up without social contact and language, elements crucial to a "civilized" child's development. These children mimic the behaviors and movements of animals, and often invent their own language.
There are only about one hundred cases of "feral children" in the world. As you may imagine, a feral child is a subject of great interest to researchers. Feral children provide unique information about child development because they have grown up outside of the parameters of "normal" child development. And since there are very few feral children, the case study is the most appropriate method for researchers to use in studying the subject.

At age 3, a Ukrainian girl named Oxana Malaya suffered severe parental neglect. She lived in a shed with dogs, eating raw meat and scraps. Five years later, a neighbor called authorities and reported seeing a girl who ran on all fours, barking. Officials brought Oxana into society, where she was cared for and taught some human behaviors, but she never became fully socialized. She has been designated as unable to support herself and now lives in a mental institution (Grice 2011). Case studies like this offer a way for sociologists to collect data that may not be collectable by any other method.

Experiments

You've probably tested personal social theories. "If I study at night and review in the morning, I'll improve my retention skills." Or, "If I stop drinking soda, I'll feel better." Cause and effect. If this, then that. When you test the theory, your results either prove or disprove your hypothesis.

One way researchers test social theories is by conducting an experiment, meaning they investigate relationships to test a hypothesis—a scientific approach. There are two main types of experiments: lab-based experiments and natural or field experiments. In a lab setting, the research can be controlled so that perhaps more data can be recorded in a certain amount of time. In a natural or field-based experiment, the generation of data cannot be controlled, but the information might be considered more accurate since it was collected without interference or intervention by the researcher. As a research method, either type of sociological experiment is useful for testing if-then statements: if a particular thing happens, then another particular thing will result.

To set up a lab-based experiment, sociologists create artificial situations that allow them to manipulate variables. Classically, the sociologist selects a set of people with similar characteristics, such as age, class, race, or education. Those people are divided into two groups. One is the experimental group and the other is the control group. The experimental group is exposed to the independent variable(s) and the control group is not. To test the benefits of tutoring, for example, the sociologist might expose the experimental group of students to tutoring but not the control group. Then both groups would be tested for differences in performance to see if tutoring had an effect on the experimental group of students. As you can imagine, in a case like this, the researcher would not want to jeopardize the accomplishments of either group of students, so the setting would be somewhat artificial. The test would not be for a grade reflected on their permanent record, for example.

Sociological Research
An Experiment in Action

A real-life example will help illustrate the experiment process. In 1971, Frances Heussenstamm, a sociology professor at California State University at Los Angeles, had a theory about police prejudice. To test her theory, she conducted an experiment. She chose fifteen students from three ethnic backgrounds: black, white, and Hispanic.
She chose students who routinely drove to and from campus along Los Angeles freeway routes, and who'd had perfect driving records for longer than a year. Those were her constants—students, good driving records, same commute route. Next, she placed a Black Panther bumper sticker on each car. That sticker, a representation of a social value, was the independent variable. In the 1970s, the Black Panthers were a revolutionary group actively fighting racism. Heussenstamm asked the students to follow their normal driving patterns. She wanted to see if seeming support of the Black Panthers would change how these good drivers were treated by the police patrolling the highways.

The first arrest, for an incorrect lane change, was made two hours after the experiment began. One participant was pulled over three times in three days. He quit the study. After seventeen days, the fifteen drivers had collected a total of thirty-three traffic citations. The experiment was halted. The funding to pay traffic fines had run out, and so had the enthusiasm of the participants (Heussenstamm 1971).

Secondary Data Analysis

While sociologists often engage in original research studies, they also contribute knowledge to the discipline through secondary data analysis. Secondary data don't result from firsthand research collected from primary sources, but are the already completed work of other researchers. Sociologists might study works written by historians, economists, teachers, or early sociologists. They might search through periodicals, newspapers, or magazines from any period in history.

Using available information not only saves time and money but can also add depth to a study. Sociologists often interpret findings in a new way, a way that was not part of an author's original purpose or intention. To study how women were encouraged to act and behave in the 1960s, for example, a researcher might watch movies, television shows, and situation comedies from that period. Or to research changes in behavior and attitudes due to the emergence of television in the late 1950s and early 1960s, a sociologist would rely on new interpretations of secondary data. Decades from now, researchers will most likely conduct similar studies on the advent of mobile phones, the Internet, or Facebook.

Social scientists also learn by analyzing the research of a variety of agencies. Governmental departments and global groups, like the U.S. Bureau of Labor Statistics or the World Health Organization, publish studies with findings that are useful to sociologists. A public statistic like the foreclosure rate might be useful for studying the effects of the 2008 recession; a racial demographic profile might be compared with data on education funding to examine the resources accessible by different groups.

One of the advantages of secondary data is that it is nonreactive (or unobtrusive) research, meaning that it does not include direct contact with subjects and will not alter or influence people's behaviors. Unlike studies requiring direct contact with people, using previously published data doesn't require entering a population or taking on the investment and risks inherent in that research process.

Using available data does have its challenges. Public records are not always easy to access. A researcher will need to do some legwork to track them down and gain access to records.
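Once records are in hand, much of the analysis is systematic counting and coding. The sketch below is a hypothetical illustration rather than a procedure from the text: the archived snippets and the list of theme words are invented, and the program simply tallies how often each theme appears across the documents, a bare-bones version of the content analysis introduced next.

```python
import re
from collections import Counter

# Hypothetical archived documents. Real secondary data might be
# digitized newspapers, government reports, or old study transcripts.
documents = [
    "Factory lighting was improved and productivity rose sharply.",
    "Workers reported that the new lighting made long shifts easier.",
    "Productivity fell after the study ended, despite the lighting.",
]

# Researcher-chosen codes: the theme words to record. This set is an
# assumption made for the example, not a standard coding scheme.
codes = {"lighting", "productivity", "workers"}

counts = Counter()
for doc in documents:
    # Lowercase and split on non-letter characters so punctuation
    # doesn't create spurious tokens like "lighting."
    words = re.findall(r"[a-z]+", doc.lower())
    counts.update(word for word in words if word in codes)

for code in sorted(codes):
    print(f"{code}: {counts[code]}")
```

A real coding scheme would be far richer, with defined categories and multiple coders checking one another's work, but the mechanical core is the same: record, count, compare.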
To guide the search through a vast library of materials and avoid wasting time reading unrelated sources, sociologists employ content analysis, applying a systematic approach to record and value information gleaned from secondary data as it relates to the study at hand.

But, in some cases, there is no way to verify the accuracy of existing data. It is easy to count how many drunk drivers, for example, are pulled over by the police. But how many are not? While it's possible to discover the percentage of teenage students who drop out of high school, it might be more challenging to determine the number who return to school or get their GED later.

Another problem arises when data are unavailable in the exact form needed or do not include the precise angle the researcher seeks. For example, the average salaries paid to professors at a public school are public record. But the separate figures don't necessarily reveal how long it took each professor to reach the salary range, what their educational backgrounds are, or how long they've been teaching.

To write some of his books, sociologist Richard Sennett used secondary data to shed light on current trends. In The Craftsman (2008), he studied the human desire to perform quality work, from carpentry to computer programming. He studied the line between craftsmanship and skilled manual labor. He also studied changes in attitudes toward craftsmanship that occurred not only during and after the Industrial Revolution but also in ancient times. Obviously, he could not have firsthand knowledge of periods of ancient history; he had to rely on secondary data for part of his study.

When conducting content analysis, it is important to consider the date of publication of an existing source and to take into account attitudes and common cultural ideals that may have influenced the research. For example, Robert S. Lynd and Helen Merrell Lynd gathered research for their book Middletown: A Study in Modern American Culture in the 1920s. Attitudes and cultural norms were vastly different then than they are now. Beliefs about gender roles, race, education, and work have changed significantly since then. At the time, the study's purpose was to reveal the truth about small American communities. Today, it is an illustration of 1920s attitudes and values.

2.3 Ethical Concerns

Sociologists conduct studies to shed light on human behaviors. Knowledge is a powerful tool that can be used toward positive change. And while a sociologist's goal is often simply to uncover knowledge rather than to spur action, many people use sociological studies to help improve people's lives. In that sense, conducting a sociological study comes with a tremendous amount of responsibility. Like any researchers, sociologists must consider their ethical obligation to avoid harming subjects or groups while conducting their research.

The American Sociological Association, or ASA, is the major professional organization of sociologists in North America. The ASA is a great resource for students of sociology as well. The ASA maintains a code of ethics—formal guidelines for conducting sociological research—consisting of principles and ethical standards to be used in the discipline. It also describes procedures for filing, investigating, and resolving complaints of unethical conduct.

Practicing sociologists and sociology students have a lot to consider. Some of the guidelines state that researchers must try to be skillful and fair-minded in their work, especially as it relates to their human subjects.
Researchers must obtain participants' informed consent and inform subjects of the responsibilities and risks of research before they agree to partake. During a study, sociologists must ensure the safety of participants and immediately stop work if a subject becomes potentially endangered on any level. Researchers are required to protect the privacy of research participants whenever possible. Even if pressured by authorities, such as police or courts, researchers are not ethically allowed to release confidential information. Researchers must make results available to other sociologists, must make public all sources of financial support, and must not accept funding from any organization that might cause a conflict of interest or seek to influence the research results for its own purposes. The ASA's ethical considerations shape not only the study but also the publication of results.

Pioneer German sociologist Max Weber (1864–1920) identified another crucial ethical concern. Weber understood that personal values could distort the framework for disclosing study results. While he accepted that some aspects of research design might be influenced by personal values, he declared it was entirely inappropriate to allow personal values to shape the interpretation of the responses. Sociologists, he stated, must establish value neutrality, a practice of remaining impartial, without bias or judgment, during the course of a study and in publishing results (1949). Sociologists are obligated to disclose research findings without omitting or distorting significant data.

Is value neutrality possible? Many sociologists believe it is impossible to set aside personal values and retain complete objectivity. They caution readers, rather, to understand that sociological studies may, by necessity, contain a certain amount of value bias. Such bias does not discredit the results but allows readers to view them as one form of truth rather than a singular fact. Some sociologists attempt to remain uncritical and as objective as possible when studying cultural institutions. Value neutrality does not mean having no opinions. It means striving to overcome personal biases, particularly subconscious biases, when analyzing data. It means avoiding skewing data in order to match a predetermined outcome that aligns with a particular agenda, such as a political or moral point of view. Investigators are ethically obligated to report results, even when they contradict personal views, predicted outcomes, or widely accepted beliefs.
Learning Objectives

13.1 Who Are the Elderly? Aging in Society
Understand the difference between senior age groups (young-old, middle-old, and old-old)
Describe the "graying of the United States" as the population experiences increased life expectancies
Examine aging as a global issue

13.2 The Process of Aging
Consider the biological, social, and psychological changes in aging
Learn about the birth of the field of geriatrics
Examine attitudes toward death and dying and how they affect the elderly
Name the five stages of grief developed by Dr. Elisabeth Kübler-Ross

13.3 Challenges Facing the Elderly
Understand the historical and current trends of poverty among elderly populations
Recognize ageist thinking and ageist attitudes in individuals and in institutions
Learn about elderly individuals' risks of being mistreated and abused

13.4 Theoretical Perspectives on Aging
Compare and contrast sociological theoretical perspectives on aging

Introduction to Aging and the Elderly

At age 52, Bridget Fisher became a first-time grandmother. She worked in human resources at a scientific research company, a job she'd held for 20 years. She had raised two children, divorced her first husband, remarried, and survived a cancer scare. Her fast-paced job required her to travel around the country, setting up meetings and conferences. The company did not offer retirement benefits. Bridget had seen many employees put in 10, 15, or 20 years of service only to get laid off when they were considered too old. Because of laws against age discrimination, the company executives were careful to prevent any records from suggesting age as the reason for the layoffs.

Seeking to avoid the crisis she would face if she were laid off, Bridget went into action. She took advantage of the company's policy of putting its employees through college if they continued to work two years past graduation. Completing evening classes in nursing at the local technical school, she became a registered nurse after four years. She worked two more years, then quit her job in HR and accepted a part-time nursing job at a family clinic. Her new job offered retirement benefits. Bridget no longer had to travel for work, and she was able to spend more time with her family and to cultivate new hobbies.

Today, Bridget Fisher, 62, is a wife, mother of two, grandmother of three, part-time nurse, master gardener, and quilt club member. She enjoys golfing and camping with her husband and taking her terriers to the local dog park. She does not expect to retire from the workforce for five or ten more years, and though the government officially considers her a senior citizen, she doesn't feel old. In fact, while bouncing her grandchild on her knee, Bridget tells her daughter, 38, "I never felt younger."

Age is not merely a number; it represents a wealth of life experiences that shape who we become. With medical advancements that prolong human life, old age has taken on a new meaning in societies with the means to provide high-quality medical care. However, many aspects of the aging experience depend on social class, race, gender, and other social factors.
[ { "answer": { "ans_choice": 1, "ans_text": "live a few years longer" }, "bloom": "1", "hl_context": "It is interesting to note that not all Americans age equally . <hl> Most glaring is the difference between men and women ; as the graph below shows , women have longer life expectancies than men . <hl> In 2010 , there were ninety 65 - year-old men per one hundred 65 - year-old women . However , there were only eighty 75 - year-old men per one hundred 75 - year-old women , and only sixty 85 - year-old men per one hundred 85 - year-old women . Nevertheless , as the graph shows , the sex ratio actually increased over time , indicating that men are closing the gap between their life spans and those of women ( U . S . Census Bureau 2010 ) .", "hl_sentences": "Most glaring is the difference between men and women ; as the graph below shows , women have longer life expectancies than men .", "question": { "cloze_format": "In most countries, elderly women ______ than elderly men.", "normal_format": "In most countries, what are elderly women in comparison to elderly men?", "question_choices": [ "are mistreated less", "live a few years longer", "suffer fewer health problems ", "deal with issues of aging better" ], "question_id": "sq1301_ex01", "question_text": "In most countries, elderly women ______ than elderly men." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "Medicaid being in danger of going bankrupt" }, "bloom": "3", "hl_context": "<hl> Certainly , as boomers age , they will put increasing burdens on the entire U . S . health care system . <hl> A study from 2008 indicates that medical schools are not producing enough medical professionals who specialize in treating geriatric patients ( Gerontological Society of America 2008 ) . <hl> However , other studies indicate that aging boomers will bring economic growth to the health care industries , particularly in areas like pharmaceutical manufacturing and home health care services ( Bierman 2011 ) . <hl> Further , some argue that many of our medical advances of the past few decades are a result of boomers ’ health requirements . Unlike the elderly of previous generations , boomers do not expect that turning 65 means their active lives are over . <hl> They are not willing to abandon work or leisure activities , but they may need more medical support to keep living vigorous lives . <hl> <hl> This desire of a large group of over - 65 - year-olds wanting to continue with a high activity level is driving innovation in the medical industry ( Shaw ) . <hl>", "hl_sentences": "Certainly , as boomers age , they will put increasing burdens on the entire U . S . health care system . However , other studies indicate that aging boomers will bring economic growth to the health care industries , particularly in areas like pharmaceutical manufacturing and home health care services ( Bierman 2011 ) . They are not willing to abandon work or leisure activities , but they may need more medical support to keep living vigorous lives . 
This desire of a large group of over - 65 - year-olds wanting to continue with a high activity level is driving innovation in the medical industry ( Shaw ) .", "question": { "cloze_format": "America’s baby boomer generation has contributed to all of the following except ___", "normal_format": "America’s baby boomer generation has contributed to all of the following except which one?", "question_choices": [ "Social Security’s vulnerability", "improved medical technology ", "Medicaid being in danger of going bankrupt", "rising Medicare budgets" ], "question_id": "sq1301_ex02", "question_text": "America’s baby boomer generation has contributed to all of the following except:" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "sex ratio" }, "bloom": "1", "hl_context": "It is interesting to note that not all Americans age equally . <hl> Most glaring is the difference between men and women ; as the graph below shows , women have longer life expectancies than men . <hl> In 2010 , there were ninety 65 - year-old men per one hundred 65 - year-old women . However , there were only eighty 75 - year-old men per one hundred 75 - year-old women , and only sixty 85 - year-old men per one hundred 85 - year-old women . <hl> Nevertheless , as the graph shows , the sex ratio actually increased over time , indicating that men are closing the gap between their life spans and those of women ( U . S . Census Bureau 2010 ) . <hl>", "hl_sentences": "Most glaring is the difference between men and women ; as the graph below shows , women have longer life expectancies than men . Nevertheless , as the graph shows , the sex ratio actually increased over time , indicating that men are closing the gap between their life spans and those of women ( U . S . Census Bureau 2010 ) .", "question": { "cloze_format": "The measure that compares the number of men to women in a population is ______.", "normal_format": "Which is the measure that compares the number of men to women in a population?", "question_choices": [ "cohort", "sex ratio", "baby boomer", "disengagement" ], "question_id": "sq1301_ex03", "question_text": "The measure that compares the number of men to women in a population is ______." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "the increasing percentage of the population over 65" }, "bloom": "1", "hl_context": "Demographically , the U . S . population over age 65 increased from 3 million in 1900 to 33 million in 1994 ( Hobbs 1994 ) and to 36.8 million in 2010 ( U . S . Census Bureau 2011c ) . <hl> This is a greater than tenfold increase in the elderly population , compared to a mere tripling of both the total population and of the population under 65 ( Hobbs 1994 ) . <hl> <hl> This increase has been called “ the graying of America , ” a term that describes the phenomenon of a larger and larger percentage of the population getting older and older . <hl> There are several reasons why America is graying so rapidly . One of these is life expectancy : the average number of years a person born today may expect to live . When reviewing Census Bureau statistics grouping the elderly by age , it is clear that in the United States , at least , we are living longer . Between 2000 and 2012 , the number of elderly citizens between 90 and 94 increased by more than 30 percent , and the number of elderly citizens 95 to 99 increased by almost 30 percent . 
Finally , the number of centenarians ( those 100 years or older ) increased by 2,910 : a mere 5.8 percent , but impressive nonetheless ( Werner 2011 ) .", "hl_sentences": "This is a greater than tenfold increase in the elderly population , compared to a mere tripling of both the total population and of the population under 65 ( Hobbs 1994 ) . This increase has been called “ the graying of America , ” a term that describes the phenomenon of a larger and larger percentage of the population getting older and older .", "question": { "cloze_format": "The “graying of the United States” refers to ________.", "normal_format": "What does the “graying of the United States” refer to?", "question_choices": [ "the increasing percentage of the population over 65", "faster aging due to stress", "dissatisfaction with retirement plans", "increased health problems such as Alzheimer’s" ], "question_id": "sq1301_ex04", "question_text": "The “graying of the United States” refers to ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "37" }, "bloom": null, "hl_context": "<hl> Statisticians use data to calculate the median age of a population , that is , the number that marks the halfway point in a group ’ s age range . <hl> <hl> In the United States , the median age is about 40 ( U . S . Census Bureau 2010 ) . <hl> <hl> That means that about half of Americans are under 40 and about half are over 40 . <hl> This median age has been increasing , indicating the population as a whole is growing older .", "hl_sentences": "Statisticians use data to calculate the median age of a population , that is , the number that marks the halfway point in a group ’ s age range . In the United States , the median age is about 40 ( U . S . Census Bureau 2010 ) . That means that about half of Americans are under 40 and about half are over 40 .", "question": { "cloze_format": "___ is the approximate median age of the United States.", "normal_format": "What is the approximate median age of the United States?", "question_choices": [ "85", "65", "37", "18" ], "question_id": "sq1301_ex05", "question_text": "What is the approximate median age of the United States?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "death and dying" }, "bloom": "1", "hl_context": "The work of Kübler-Ross was eye-opening when it was introduced . It broke new ground and opened the doors for sociologists , social workers , health practitioners , and therapists to study death and help those who were facing death . <hl> Kübler-Ross ’ s work is generally considered a major contribution to thanatology : the systematic study of death and dying . <hl>", "hl_sentences": "Kübler-Ross ’ s work is generally considered a major contribution to thanatology : the systematic study of death and dying .", "question": { "cloze_format": "Thanatology is the study of _____.", "normal_format": "Of what is thanatology the study?", "question_choices": [ "life expectancy", "biological aging", "death and dying", "adulthood" ], "question_id": "sq1302_ex01", "question_text": "Thanatology is the study of _____." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "Overcoming despair to achieve integrity" }, "bloom": "1", "hl_context": "Each phase of life has challenges that come with the potential for fear . Erik H . Erikson ( 1902 – 1994 ) , in his view of socialization , broke the typical life span into eight phases . 
<hl> Each phase presents a particular challenge that must be overcome . <hl> <hl> In the final stage , old age , the challenge is to embrace integrity over despair . <hl> Some people are unable to successfully overcome the challenge . They may have to confront regrets , such as being disappointed in their children ’ s lives or perhaps their own . They may have to accept that they will never reach certain career goals . Or they must come to terms with what their career success has cost them , such as time with their family or declining personal health . Others , however , are able to achieve a strong sense of integrity , embracing the new phase in life . When that happens , there is tremendous potential for creativity . They can learn new skills , practice new activities , and peacefully prepare for the end of life .", "hl_sentences": "Each phase presents a particular challenge that must be overcome . In the final stage , old age , the challenge is to embrace integrity over despair .", "question": { "cloze_format": "In Erik Erikson’s developmental stages of life, the challenge with which older people must struggle is ___ .", "normal_format": "In Erik Erikson’s developmental stages of life, with which challenge must older people struggle?", "question_choices": [ "Overcoming despair to achieve integrity", "Overcoming role confusion to achieve identity", "Overcoming isolation to achieve intimacy", "Overcoming shame to achieve autonomy" ], "question_id": "sq1302_ex02", "question_text": "In Erik Erikson’s developmental stages of life, with which challenge must older people struggle?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "Elisabeth Kübler-Ross" }, "bloom": "1", "hl_context": "What may be surprising is how few studies were conducted on death and dying prior to the 1960s . <hl> Death and dying were fields that had received little attention until a psychologist named Elisabeth Kübler-Ross began observing people who were in the process of dying . <hl> As Kübler-Ross witnessed people ’ s transition toward death , she found some common threads in their experiences . She observed that the process had five distinct stages : denial , anger , bargaining , depression , and acceptance . She published her findings in a 1969 book called On Death and Dying . The book remains a classic on the topic today .", "hl_sentences": "Death and dying were fields that had received little attention until a psychologist named Elisabeth Kübler-Ross began observing people who were in the process of dying .", "question": { "cloze_format": "___ wrote the book On Death and Dying, outlining the five stages of grief.", "normal_format": "Who wrote the book On Death and Dying, outlining the five stages of grief?", "question_choices": [ "Ignatz Nascher", "Erik Erikson", "Elisabeth Kübler-Ross", "Carol Gilligan" ], "question_id": "sq1302_ex03", "question_text": "Who wrote the book On Death and Dying, outlining the five stages of grief?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "the typical sequence of events in their lives" }, "bloom": "2", "hl_context": "As human beings grow older , they go through different phases or stages of life . It is helpful to understand aging in the context of these phases . <hl> A life course is the period from birth to death , including a sequence of predictable life events such as physical maturation . 
<hl> Each phase comes with different responsibilities and expectations , which of course vary by individual and culture . Children love to play and learn , looking forward to becoming preteens . As preteens begin to test their independence , they are eager to become teenagers . Teenagers anticipate the promises and challenges of adulthood . Adults become focused on creating families , building careers , and experiencing the world as an independent person . Finally , many adults look forward to old age as a wonderful time to enjoy life without as much pressure from work and family life . In old age , grandparenthood can provide many of the joys of parenthood without all the hard work that parenthood entails . And as work responsibilities abate , old age may be a time to explore hobbies and activities that there was no time for earlier in life . But for other people , old age is not a phase looked forward to . Some people fear old age and do anything to “ avoid ” it , seeking medical and cosmetic fixes for the natural effects of age . These differing views on the life course are the result of the cultural values and norms into which people are socialized .", "hl_sentences": "A life course is the period from birth to death , including a sequence of predictable life events such as physical maturation .", "question": { "cloze_format": "For individual people of a certain culture, the life course is ________.", "normal_format": "For individual people of a certain culture, what is the life course?", "question_choices": [ "the average age they will die", "the lessons they must learn", "the length of a typical bereavement period ", "the typical sequence of events in their lives" ], "question_id": "sq1302_ex04", "question_text": "For individual people of a certain culture, the life course is ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "continued to gradually rise" }, "bloom": null, "hl_context": "Two factors contribute significantly to this country ’ s aging prison population . One is the tough-on-crime reforms of the 1980s and 1990s , when mandatory minimum sentencing and “ three strikes ” policies sent many people to jail for 30 years to life , even when the third strike was a relatively minor offense ( Leadership Conference N . d . ) . Many of today ’ s elderly prisoners are those who were incarcerated 30 years ago for life sentences . The other factor influencing today ’ s aging prison population is the aging of the overall population . <hl> As discussed in the section on aging in the United States , the percentage of people over 65 is increasing each year due to rising life expectancies and the aging of the baby boom generation . <hl>", "hl_sentences": "As discussed in the section on aging in the United States , the percentage of people over 65 is increasing each year due to rising life expectancies and the aging of the baby boom generation .", "question": { "cloze_format": "In the United States, life expectancy rates in recent decades have ______.", "normal_format": "In the United States, what have life expectancy rates in recent decades done?", "question_choices": [ "continued to gradually rise", "gone up and down due to global issues such as military conflicts", "lowered as health care improves", "stayed the same since the mid-1960s" ], "question_id": "sq1302_ex05", "question_text": "In the United States, life expectancy rates in recent decades have ______." 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "increasing" }, "bloom": "1", "hl_context": "At the start of the 21st century , the older population was putting an end to that trend . Among people over 65 , the poverty rate fell from 30 percent in 1967 to 9.7 percent in 2008 , well below the national average of 13.2 percent ( U . S . Census Bureau 2009 ) . However , with the subsequent recession , which severely reduced the retirement savings of many while taxing public support systems , how are the elderly affected ? <hl> According to the Kaiser Commission on Medicaid and the Uninsured , the national poverty rate among the elderly had risen to 14 percent by 2010 ( Urban Institute and Kaiser Commission 2010 ) . <hl> Globally , the United States and other core nations are fairly well equipped to handle the demands of an exponentially increasing elderly population . However , peripheral and semi-peripheral nations face similar increases without comparable resources . <hl> Poverty among elders is a concern , especially among elderly women . <hl> The feminization of the aging poor , evident in peripheral nations , is directly due to the number of elderly women in those countries who are single , illiterate , and not a part of the labor force ( Mujahid 2006 ) .", "hl_sentences": "According to the Kaiser Commission on Medicaid and the Uninsured , the national poverty rate among the elderly had risen to 14 percent by 2010 ( Urban Institute and Kaiser Commission 2010 ) . Poverty among elders is a concern , especially among elderly women .", "question": { "cloze_format": "Today in the United States the poverty rate of the elderly is ______.", "normal_format": "What is the poverty rate of the elderly today in the United States?", "question_choices": [ "lower than at any point in history", "increasing", "decreasing", "the same as that of the general population" ], "question_id": "sq1303_ex01", "question_text": "Today in the United States the poverty rate of the elderly is ______." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "Speaking slowly and loudly when talking to someone over age 65" }, "bloom": "3", "hl_context": "Ageism can vary in severity . <hl> Peter ’ s attitudes are probably seen as fairly mild , but relating to the elderly in ways that are patronizing can be offensive . <hl> <hl> When ageism is reflected in the workplace , in health care , and in assisted-living facilities , the effects of discrimination can be more severe . <hl> <hl> Ageism can make older people fear losing a job , feel dismissed by a doctor , or feel a lack of power and control in their daily living situations . <hl> Responses like Peter ’ s toward older people are fairly common . He didn ’ t intend to treat people differently based on personal or cultural biases , but he did . <hl> Ageism is discrimination ( when someone acts on a prejudice ) based on age . <hl> Dr . Robert Butler coined the term in 1968 , noting that ageism exists in all cultures ( Brownell ) . <hl> Ageist attitudes and biases based on stereotypes reduce elderly people to inferior or limited positions . <hl>", "hl_sentences": "Peter ’ s attitudes are probably seen as fairly mild , but relating to the elderly in ways that are patronizing can be offensive . When ageism is reflected in the workplace , in health care , and in assisted-living facilities , the effects of discrimination can be more severe . 
Ageism can make older people fear losing a job , feel dismissed by a doctor , or feel a lack of power and control in their daily living situations . Ageism is discrimination ( when someone acts on a prejudice ) based on age . Ageist attitudes and biases based on stereotypes reduce elderly people to inferior or limited positions .", "question": { "cloze_format": "The action that reflects ageism is ___.", "normal_format": "Which action reflects ageism?", "question_choices": [ "Enabling WWII veterans to visit war memorials", "Speaking slowly and loudly when talking to someone over age 65", "Believing that older people drive too slowly", "Living in a culture where elders are respected" ], "question_id": "sq1303_ex02", "question_text": "Which action reflects ageism?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "Being frail to the point of dependency on care" }, "bloom": "1", "hl_context": "<hl> Mistreatment and Abuse Mistreatment and abuse of the elderly is a major social problem . <hl> As expected , with the biology of aging , the elderly sometimes become physically frail . <hl> This frailty renders them dependent on others for care — sometimes for small needs like household tasks , and sometimes for assistance with basic functions like eating and toileting . <hl> Unlike a child , who also is dependent on another for care , an elder is an adult with a lifetime of experience , knowledge , and opinions — a more fully developed person . This makes the care providing situation more complex .", "hl_sentences": "Mistreatment and Abuse Mistreatment and abuse of the elderly is a major social problem . This frailty renders them dependent on others for care — sometimes for small needs like household tasks , and sometimes for assistance with basic functions like eating and toileting .", "question": { "cloze_format": "The factor that most increases the risk of an elderly person suffering mistreatment is ___.", "normal_format": "Which factor most increases the risk of an elderly person suffering mistreatment?", "question_choices": [ "Bereavement due to widowhood", "Having been abusive as a younger adult", "Being frail to the point of dependency on care", "The ability to bestow a large inheritance on survivors" ], "question_id": "sq1303_ex03", "question_text": "Which factor most increases the risk of an elderly person suffering mistreatment?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "caregivers" }, "bloom": "1", "hl_context": "A history of depression in the caregiver was also found to increase the likelihood of elder abuse . Neglect was more likely when care was provided by paid caregivers . Many of the caregivers who physically abused elders were themselves abused — in many cases , when they were children . Family members with some sort of dependency on the elder in their care were more likely to physically abuse that elder . <hl> For example , an adult child caring for an elderly parent while , at the same time , depending on some form of income from that parent , would be considered more likely to perpetrate physical abuse ( Kohn and Verhoek-Oftedahl 2011 ) . <hl> <hl> Other studies have focused on the caregivers to the elderly in an attempt to discover the causes of elder abuse . <hl> <hl> Researchers identified factors that increased the likelihood of caregivers perpetrating abuse against those in their care . 
<hl> Those factors include inexperience , having other demands such as jobs ( for those who weren ’ t professionally employed as caregivers ) , caring for children , living full time with the dependent elder , and experiencing high stress , isolation , and lack of support ( Kohn and Verhoek-Oftedahl 2011 ) .", "hl_sentences": "For example , an adult child caring for an elderly parent while , at the same time , depending on some form of income from that parent , would be considered more likely to perpetrate physical abuse ( Kohn and Verhoek-Oftedahl 2011 ) . Other studies have focused on the caregivers to the elderly in an attempt to discover the causes of elder abuse . Researchers identified factors that increased the likelihood of caregivers perpetrating abuse against those in their care .", "question": { "cloze_format": "If elderly people suffer abuse, it is most often perpetrated by ______.", "normal_format": "If elderly people suffer abuse, it is most often perpetrated by whom?", "question_choices": [ "spouses", "caregivers", "lawyers", "strangers" ], "question_id": "sq1303_ex04", "question_text": "If elderly people suffer abuse, it is most often perpetrated by ______." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "commit suicide" }, "bloom": "1", "hl_context": "<hl> Research has found that veterans of any conflict are more than twice as likely as non-veterans to commit suicide , with rates highest among the oldest veterans . <hl> <hl> Reports show that WWII-era veterans are four times as likely to take their own lives as people of the same age with no military service ( Glantz 2010 ) . <hl>", "hl_sentences": "Research has found that veterans of any conflict are more than twice as likely as non-veterans to commit suicide , with rates highest among the oldest veterans . Reports show that WWII-era veterans are four times as likely to take their own lives as people of the same age with no military service ( Glantz 2010 ) .", "question": { "cloze_format": "Veterans are two to four times more likely to ______ as people who did not serve in the military.", "normal_format": "What are veterans two to four times more likely than people who did not serve in the military?", "question_choices": [ "be a victim of elder abuse", "commit suicide", "be concerned about financial stresses", "be abusive toward care providers" ], "question_id": "sq1303_ex05", "question_text": "Veterans are two to four times more likely to ______ as people who did not serve in the military." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "Men tend to have better retirement plans than women." }, "bloom": "1", "hl_context": "<hl> Functionalists find that people with better resources who stay active in other roles adjust better to old age ( Crosnoe and Elder 2002 ) . <hl> <hl> Three social theories within the functional perspective were developed to explain how older people might deal with later-life experiences . <hl>", "hl_sentences": "Functionalists find that people with better resources who stay active in other roles adjust better to old age ( Crosnoe and Elder 2002 ) . 
Three social theories within the functional perspective were developed to explain how older people might deal with later-life experiences .", "question": { "cloze_format": "The assertion about aging in men that would be made by a sociologist following the functionalist perspective is that ___.", "normal_format": "Which assertion about aging in men would be made by a sociologist following the functionalist perspective?", "question_choices": [ "Men view balding as representative of a loss of strength.", "Men tend to have better retirement plans than women.", "Men have life expectancies three to five years shorter than women. ", "Men who remain active after retirement play supportive community roles." ], "question_id": "sq1304_ex01", "question_text": "Which assertion about aging in men would be made by a sociologist following the functionalist perspective?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "Industrialization" }, "bloom": null, "hl_context": "There are three classic theories of aging within the conflict perspective . <hl> Modernization theory ( Cowgill and Holmes 1972 ) suggests that the primary cause of the elderly losing power and influence in society are the parallel forces of industrialization and modernization . <hl> As societies modernize , the status of elders decreases , and they are increasingly likely to experience social exclusion . Before industrialization , strong social norms bound the younger generation to care for the older . Now , as societies industrialize , the nuclear family replaces the extended family . Societies become increasingly individualistic , and norms regarding the care of older people change . In an individualistic industrial society , caring for an elderly relative is seen as a voluntary obligation that may be ignored without fear of social censure .", "hl_sentences": "Modernization theory ( Cowgill and Holmes 1972 ) suggests that the primary cause of the elderly losing power and influence in society are the parallel forces of industrialization and modernization .", "question": { "cloze_format": "The primary driver of modernization theory is ___.", "normal_format": "What is the primary driver of modernization theory?", "question_choices": [ "Industrialization", "Aging", "Conflict", "Interactions" ], "question_id": "sq1304_ex04", "question_text": "What is the primary driver of modernization theory?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "Age stratification" }, "bloom": null, "hl_context": "<hl> Age stratification theory has been criticized for its broadness and its inattention to other sources of stratification and how these might intersect with age . <hl> For example , one might argue that an older white male occupies a more powerful role , and is far less limited in his choices , compared to an older white female based on his historical access to political and economic power . <hl> Thanks to amendments to the Age Discrimination in Employment Act ( ADEA ) , which drew attention to some of the ways in which our society is stratified based on age , U . S . workers no longer must retire upon reaching a specified age . <hl> As first passed in 1967 , the ADEA provided protection against a broad range of age discrimination and specifically addressed termination of employment due to age , age specific layoffs , advertised positions specifying age limits or preferences , and denial of health care benefits to those over 65 ( U . S . 
EEOC 2012 ) .", "hl_sentences": "Age stratification theory has been criticized for its broadness and its inattention to other sources of stratification and how these might intersect with age . Thanks to amendments to the Age Discrimination in Employment Act ( ADEA ) , which drew attention to some of the ways in which our society is stratified based on age , U . S . workers no longer must retire upon reaching a specified age .", "question": { "cloze_format": "The Age Discrimination in Employment Act counteracts the theory of ___ .", "normal_format": "The Age Discrimination in Employment Act counteracts which theory?", "question_choices": [ "Modernization", "Conflict", "Disengagement", "Age stratification" ], "question_id": "sq1304_ex05", "question_text": "The Age Discrimination in Employment Act counteracts which theory?" }, "references_are_paraphrase": 0 } ]
13.1 Who Are the Elderly? Aging in Society

Think of American movies and television shows you have watched recently. Did any of them feature older actors and actresses? What roles did they play? How were these older actors portrayed? Were they cast as main characters in a love story? Grouchy old people? Many media portrayals of the elderly reflect negative cultural attitudes toward aging. In the United States, society tends to glorify youth, associating it with beauty and sexuality. In comedies, the elderly are often associated with grumpiness or hostility. Rarely do the roles of older people convey the fullness of life experienced by seniors—as employees, lovers, or the myriad roles they have in real life. What values does this reflect?

One hindrance to society's fuller understanding of aging is that people rarely understand the process of aging until they reach old age themselves. (As opposed to childhood, for instance, which we can all look back on.) Therefore, myths and assumptions about the elderly and aging are common. Many stereotypes exist surrounding the realities of being an older adult. While individuals often encounter stereotypes associated with race and gender and are thus more likely to think critically about them, many people accept age stereotypes without question (Levy 2002).

Each culture has a certain set of expectations and assumptions about aging, all of which are part of our socialization. While the landmarks of maturing into adulthood are a source of pride, signs of natural aging can be cause for shame or embarrassment. Some people try to fight off the appearance of aging with cosmetic surgery. Although many seniors report that their lives are more satisfying than ever, and their self-esteem is stronger than when they were young, they are still subject to cultural attitudes that make them feel invisible and devalued.

Gerontology is a field of science that seeks to understand the process of aging and the challenges encountered as seniors grow older. Gerontologists investigate age, aging, and the aged. Gerontologists study what it is like to be an older adult in a society and the ways that aging affects members of a society. As a multidisciplinary field, gerontology includes the work of medical and biological scientists, social scientists, and even financial and economic scholars.

Social gerontology refers to a specialized field of gerontology that examines the social (and sociological) aspects of aging. Researchers focus on developing a broad understanding of the experiences of people at specific ages, such as mental and physical wellbeing, plus age-specific concerns such as the process of dying. Social gerontologists work as social researchers, counselors, community organizers, and service providers for older adults. Because of their specialization, social gerontologists are in a strong position to advocate for older adults.

Scholars in these disciplines have learned that "aging" reflects not just the physiological process of growing older, but also our attitudes and beliefs about the aging process. You've likely seen online calculators that promise to determine your "real age" as opposed to your chronological age. These ads target the notion that people may "feel" a different age than their actual years. Some 60-year-olds feel frail and elderly, while some 80-year-olds feel sprightly. Equally revealing is that as people grow older they define "old age" in terms of greater years than their current age (Logan 1992).
Many people want to postpone old age, regarding it as a phase that will never arrive. Some older adults even succumb to stereotyping their own age group (Rothbaum 1983).

In the United States, the experience of being elderly has changed greatly over the past century. In the late 1800s and early 1900s, many U.S. households were home to multigenerational families, and the experiences and wisdom of elders were respected. Elders offered wisdom and support to their children and often helped raise their grandchildren (Sweetser 1984). But in today's society, with most households confined to the nuclear family, attitudes toward the elderly have changed. In 2000, the U.S. Census Bureau reported that, of the 105.5 million households in the country, only about 4 million (3.7 percent) were multigenerational (U.S. Census Bureau 2001). It is no longer typical for older relatives to live with their children and grandchildren.

Attitudes toward the elderly have also been affected by large societal changes that have happened over the past 100 years. Researchers believe industrialization and modernization have contributed greatly to lowering the power, influence, and prestige the elderly once held. The elderly have both benefitted and suffered from these rapid social changes. In modern societies, a strong economy created new levels of prosperity for many people. Health care has become more widely accessible and medicine has advanced, allowing the elderly to live longer. However, older people are not as essential to the economic survival of their families and communities as they were in the past.

Studying Aging Populations

Since its creation in 1790, the U.S. Census Bureau has been tracking age in the population. Age is an important factor to analyze alongside accompanying demographic figures, such as income and health. The population pyramid below shows projected age distribution patterns for the next several decades. Statisticians use data to calculate the median age of a population, that is, the number that marks the halfway point in a group's age range. In the United States, the median age is about 40 (U.S. Census Bureau 2010). That means that about half of Americans are under 40 and about half are over 40. This median age has been increasing, indicating the population as a whole is growing older.

A cohort is a group of people who share a statistical or demographic trait. People belonging to the same age cohort were born in the same time frame. Understanding a population's age composition can point to certain social and cultural factors and help governments and societies plan for future social and economic challenges. The graph below compares the age distribution of the United States as a whole to that of the indigenous population. Sociological studies on aging might help explain the difference between Native American age cohorts and the general population. While Native American societies have a strong tradition of revering their elders, they also have a lower life expectancy because of lack of access to health care and high levels of mercury in fish, a traditional part of their diet.
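To make the two measures just described concrete, here is a minimal Python sketch. The survey year and respondent ages are hypothetical, chosen only for illustration; real studies would draw on census microdata.

```python
# Minimal sketch with hypothetical data: computing a population's median age
# and grouping respondents into ten-year birth cohorts.
from collections import Counter
from statistics import median

SURVEY_YEAR = 2010  # hypothetical survey year

# Hypothetical respondent ages
ages = [8, 15, 22, 29, 34, 41, 47, 53, 60, 68, 74, 82, 91]

# The median marks the halfway point: half the sample is younger, half older.
print("Median age:", median(ages))

def birth_cohort(age):
    """Return the decade of birth, e.g. age 34 in 2010 -> '1970s'."""
    decade = ((SURVEY_YEAR - age) // 10) * 10
    return f"{decade}s"

# Tally how many respondents fall into each birth cohort.
print(Counter(birth_cohort(a) for a in ages))
```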
Phases of Aging: The Young-Old, Middle-Old, and Old-Old

In the United States, all people over age 18 are considered adults, but there is a large difference between a person aged 21 and a person who is 45. More specific breakdowns, such as "young adult" and "middle-aged adult," are helpful. In the same way, groupings are helpful in understanding the elderly. The elderly are often lumped together, grouping everyone over the age of 65. But a 65-year-old's experience of life is much different from a 90-year-old's. The United States' older adult population can be divided into three life-stage subgroups: the young-old (approximately 65–74), the middle-old (ages 75–84), and the old-old (over age 85).

Today's young-old age group is generally happier, healthier, and financially better off than the young-old of previous generations. In the United States, people are better able to prepare for aging because resources are more widely available. Also, many people are making proactive quality-of-life decisions about their old age while they are still young. In the past, family members made care decisions when an elderly person reached a health crisis, often leaving the elderly person with little choice about what would happen. The elderly are now able to choose housing, for example, that allows them some independence while still providing care when it is needed. Living wills, retirement planning, and medical power of attorney are other concerns that are increasingly handled in advance.

The Graying of the United States

What does it mean to be elderly? Some define it as an issue of physical health, while others simply define it by chronological age. The U.S. government, for example, typically classifies people aged 65 years old as elderly, at which point citizens are eligible for federal benefits such as Social Security and Medicare. The World Health Organization has no standard, other than noting that 65 years old is the commonly accepted definition in most core nations, but it suggests a cut-off somewhere between 50 and 55 years old for semi-peripheral nations, such as those in Africa (World Health Organization 2012). AARP (formerly the American Association of Retired Persons) cites 50 as the eligible age of membership. It is interesting to note AARP's name change; by taking the word "retired" out of its name, the organization can broaden its base to any older Americans, not just retirees. This is especially important now that many people are working to age 70 and beyond.

There is an element of social construction, both local and global, in the way individuals and nations define who is elderly; that is, the shared meaning of the concept of elderly is created through interactions among people in society. This is exemplified by the truism that you are only as old as you feel.

Demographically, the U.S. population over age 65 increased from 3 million in 1900 to 33 million in 1994 (Hobbs 1994) and to 36.8 million in 2010 (U.S. Census Bureau 2011c). This is a greater than tenfold increase in the elderly population, compared to a mere tripling of both the total population and of the population under 65 (Hobbs 1994). This increase has been called "the graying of America," a term that describes the phenomenon of a larger and larger percentage of the population getting older and older. There are several reasons why America is graying so rapidly. One of these is life expectancy: the average number of years a person born today may expect to live. When reviewing Census Bureau statistics grouping the elderly by age, it is clear that in the United States, at least, we are living longer. Between 2000 and 2012, the number of elderly citizens between 90 and 94 increased by more than 30 percent, and the number of elderly citizens 95 to 99 increased by almost 30 percent. Finally, the number of centenarians (those 100 years or older) increased by 2,910: a mere 5.8 percent, but impressive nonetheless (Werner 2011).

It is interesting to note that not all Americans age equally. Most glaring is the difference between men and women; as the graph below shows, women have longer life expectancies than men. In 2010, there were ninety 65-year-old men per one hundred 65-year-old women. However, there were only eighty 75-year-old men per one hundred 75-year-old women, and only sixty 85-year-old men per one hundred 85-year-old women. Nevertheless, as the graph shows, the sex ratio actually increased over time, indicating that men are closing the gap between their life spans and those of women (U.S. Census Bureau 2010).
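As a quick illustration of how the sex ratio is computed across these age groups, here is a minimal Python sketch; the cohort counts are hypothetical, scaled only to reproduce the 2010 ratios quoted above.

```python
# Minimal sketch: the sex ratio is conventionally reported as the number
# of men per one hundred women. Counts below are hypothetical, chosen to
# match the 2010 ratios quoted in the text (90, 80, and 60 per 100).
def sex_ratio(men, women):
    """Men per 100 women."""
    return 100 * men / women

cohorts = {
    65: (90_000, 100_000),
    75: (80_000, 100_000),
    85: (60_000, 100_000),
}

for age, (men, women) in cohorts.items():
    print(f"Age {age}: {sex_ratio(men, women):.0f} men per 100 women")
```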
Baby Boomers

Of particular interest to gerontologists right now is the population of baby boomers, the cohort born between 1946 and 1964 and just now reaching age 65. Coming of age in the 1960s and early 1970s, the baby boom generation was the first group of children and teenagers with their own spending power and therefore their own marketing power (Macunovich 2000). As this group has aged, it has redefined what it means to be young, middle aged, and, now, old. People in the boomer generation do not want to grow old the way their grandparents did; the result is a wide range of products designed to ward off the effects—or the signs—of aging. Previous generations of people over 65 were "old." Baby boomers are in "later life" or "the third age" (Gilleard and Higgs 2007).

The baby boom generation is the cohort driving much of the dramatic increase in the over-65 population. The figure below shows a comparison of the U.S. population by age and gender between 2000 and 2010. The biggest bulge in the pyramid (representing the largest population group) moves up the pyramid over the course of the decade; in 2000, the largest population group was age 35 to 55. In 2010, that group was age 45 to 65, meaning the oldest baby boomers are just reaching the age at which the U.S. Census considers them elderly. In 2020, we can predict, the baby boom bulge will continue to rise up the pyramid, making the largest U.S. population group between 65 and 85 years old.

This aging of the baby boom cohort has serious implications for our society. Health care is one of the areas most impacted by this trend. For years, hand-wringing has abounded about the additional burden the boomer cohort will place on Medicare, a government-funded program that provides health care services to people over 65. And indeed, the Congressional Budget Office's 2008 long-term outlook report shows that Medicare spending is expected to increase from 3 percent of gross domestic product (GDP) in 2009 to 8 percent of GDP in 2030, and to 15 percent in 2080 (Congressional Budget Office 2008).

Certainly, as boomers age, they will put increasing burdens on the entire U.S. health care system. A study from 2008 indicates that medical schools are not producing enough medical professionals who specialize in treating geriatric patients (Gerontological Society of America 2008). However, other studies indicate that aging boomers will bring economic growth to the health care industries, particularly in areas like pharmaceutical manufacturing and home health care services (Bierman 2011). Further, some argue that many of our medical advances of the past few decades are a result of boomers' health requirements. Unlike the elderly of previous generations, boomers do not expect that turning 65 means their active lives are over. They are not willing to abandon work or leisure activities, but they may need more medical support to keep living vigorous lives.
This desire of a large group of over-65-year-olds wanting to continue with a high activity level is driving innovation in the medical industry (Shaw).

The economic impact of aging boomers is also an area of concern for many observers. Although the baby boom generation earned more than previous generations and enjoyed a higher standard of living, they also spent their money lavishly and did not adequately prepare for retirement. According to a 2008 report from the McKinsey Global Institute, approximately two-thirds of early boomer households have not accumulated enough savings to maintain their lifestyles. This will have a ripple effect on the economy as boomers work and spend less (Farrel et al. 2008).

Just as some observers are concerned about the possibility of Medicare being overburdened, Social Security is considered to be at risk. Social Security is a government-run retirement program funded primarily through payroll taxes. With enough people paying into the program, there should be enough money for retirees to take out. But with the aging boomer cohort starting to receive Social Security benefits, and with fewer workers paying into the Social Security trust fund, economists warn that the system will collapse by the year 2037. A similar warning came in the 1980s; in response to recommendations from the Greenspan Commission, the retirement age (the age at which people could start receiving Social Security benefits) was raised from 62 to 67 and the payroll tax was increased. A similar hike in the retirement age, perhaps to 70, is a possible solution to the current threat to Social Security (Reuteman 2010).

Aging around the World

From 1950 to approximately 2010, the global population of individuals age 65 and older increased by a range of 5–7 percent (Lee 2009). This percentage is expected to increase and will have a huge impact on the dependency ratio: the ratio of non-productive citizens (young, disabled, elderly) to productive working citizens (Bartram and Roe 2005).

One country that will soon face a serious aging crisis is China, which is on the cusp of an "aging boom": a period when its elderly population will dramatically increase. The number of people above age 60 in China today is about 178 million, which amounts to 13.3 percent of its total population (Xuequan 2011). By 2050, nearly a third of the Chinese population will be age 60 or older, putting a significant burden on the labor force and impacting China's economic growth (Bannister, Bloom, and Rosenberg 2010).
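To make the dependency ratio defined above concrete, here is a minimal Python sketch. The population counts are hypothetical (the text does not give working-age figures), and the young/elderly cut-offs follow a common demographic convention rather than any official definition.

```python
# Minimal sketch with hypothetical counts (in millions): the dependency
# ratio compares non-working-age citizens (young and elderly) to
# working-age citizens, often expressed per 100 workers.
def dependency_ratio(young, elderly, working_age):
    """Dependents per 100 working-age citizens."""
    return 100 * (young + elderly) / working_age

# Hypothetical population, in millions
ratio = dependency_ratio(young=60, elderly=40, working_age=200)
print(f"{ratio:.0f} dependents per 100 working-age citizens")  # -> 50
```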
As health care improves and life expectancy increases across the world, elder care will be an emerging issue. Wienclaw (2009) suggests that with fewer working-age citizens available to provide home care and long-term assisted care to the elderly, the costs of elder care will increase. Worldwide, the expectation governing the amount and type of elder care varies from culture to culture. For example, in Asia the responsibility for elder care lies firmly on the family (Yap, Thang, and Traphagan 2005). This is different from the approach in most Western countries, where the elderly are considered independent and are expected to tend to their own care. It is not uncommon for family members to intervene only if the elderly relative requires assistance, often due to poor health. Even then, caring for the elderly is considered voluntary. In the United States, decisions to care for an elderly relative are often conditionally based on the promise of future returns, such as inheritance or, in some cases, the amount of support the elderly provided to the caregiver in the past (Hashimoto 1996).

These differences are based on cultural attitudes toward aging. In China, several studies have noted the attitude of filial piety (deference and respect to one's parents and ancestors in all things) as defining all other virtues (Hsu 1971; Hamilton 1990). Cultural attitudes in Japan prior to approximately 1986 supported the idea that the elderly deserve assistance (Ogawa and Retherford 1993). However, seismic shifts in major social institutions (like family and economy) have created an increased demand for community and government care. For example, the increase in women working outside the home has made it more difficult to provide in-home care to aging parents, leading to an increase in the need for government-supported institutions (Raikhola and Kuroki 2009).

In the United States, by contrast, many people view caring for the elderly as a burden. Even when there is a family member able and willing to provide for an elderly family member, 60 percent of family caregivers are employed outside the home and are unable to provide the needed support. At the same time, however, many middle-class families are unable to bear the financial burden of "outsourcing" professional health care, resulting in gaps in care (Bookman and Kimbrel 2011). It is important to note that even within the United States not all demographic groups treat aging the same way. While most Americans are reluctant to place their elderly members into out-of-home assisted care, demographically speaking, the groups least likely to do so are Latinos, African Americans, and Asians (Bookman and Kimbrel 2011).

Globally, the United States and other core nations are fairly well equipped to handle the demands of an exponentially increasing elderly population. However, peripheral and semi-peripheral nations face similar increases without comparable resources. Poverty among elders is a concern, especially among elderly women. The feminization of the aging poor, evident in peripheral nations, is directly due to the number of elderly women in those countries who are single, illiterate, and not a part of the labor force (Mujahid 2006).

In 2002, the Second World Assembly on Aging was held in Madrid, Spain, resulting in the Madrid Plan, an internationally coordinated effort to create comprehensive social policies to address the needs of the worldwide aging population. The plan identifies three themes to guide international policy on aging: 1) publicly acknowledging the global challenges caused by, and the global opportunities created by, a rising global population; 2) empowering the elderly; and 3) linking international policies on aging to international policies on development (Zelenev 2008).

The Madrid Plan has not yet been successful in achieving all its aims. However, it has increased awareness of the various issues associated with a global aging population, as well as raising international consciousness of the ways that the factors influencing the vulnerability of the elderly (social exclusion, prejudice and discrimination, and a lack of socio-legal protection) overlap with other developmental issues (basic human rights, empowerment, and participation), leading to an increase in legal protections (Zelenev 2008).
13.2 The Process of Aging

As human beings grow older, they go through different phases or stages of life. It is helpful to understand aging in the context of these phases. A life course is the period from birth to death, including a sequence of predictable life events such as physical maturation. Each phase comes with different responsibilities and expectations, which of course vary by individual and culture. Children love to play and learn, looking forward to becoming preteens. As preteens begin to test their independence, they are eager to become teenagers. Teenagers anticipate the promises and challenges of adulthood. Adults become focused on creating families, building careers, and experiencing the world as independent people. Finally, many adults look forward to old age as a wonderful time to enjoy life without as much pressure from work and family life. In old age, grandparenthood can provide many of the joys of parenthood without all the hard work that parenthood entails. And as work responsibilities abate, old age may be a time to explore hobbies and activities that there was no time for earlier in life. But for other people, old age is not a phase to look forward to. Some people fear old age and do anything to "avoid" it, seeking medical and cosmetic fixes for the natural effects of age. These differing views on the life course are the result of the cultural values and norms into which people are socialized.

Through the phases of the life course, dependence and independence levels change. At birth, newborns are dependent on caregivers for everything. As babies become toddlers and toddlers become adolescents and then teenagers, they assert their independence more and more. Gradually, children come to be considered adults, responsible for their own lives, although the point at which this occurs varies widely among individuals, families, and cultures.

As Riley (1978) notes, the process of aging is a lifelong one, entailing maturation and change on physical, psychological, and social levels. Age, much like race, class, and gender, is a hierarchy in which some categories are more highly valued than others. For example, while many children look forward to gaining independence, Packer and Chasteen (2006) suggest that even in children, age prejudice leads both society and the young to view aging in a negative light. This, in turn, can lead to a widespread segregation between the old and the young at the institutional, societal, and cultural levels (Hagestad and Uhlenberg 2006).

Sociological Research: Dr. Ignatz Nascher and the Birth of Geriatrics

In the early 1900s, a New York physician named Dr. Ignatz Nascher coined the term geriatrics, a medical specialty focusing on the elderly. He created the word by combining two Greek words: geron (old man) and iatrikos (medical treatment). Nascher based his work on what he observed as a young medical student, when he saw many acutely ill elderly people who were diagnosed simply as "being old." There was nothing medicine could do, his professors declared, about the syndrome of "old age." Nascher refused to accept this dismissive view, seeing it as medical neglect. He believed it was a doctor's duty to prolong life and relieve suffering whenever possible. In 1914, he published his views in his book Geriatrics: The Diseases of Old Age and Their Treatment (Clarfield 1990).
Nascher saw the practice of caring for the elderly as separate from the practice of caring for the young, just as pediatrics (caring for children) is different from caring for grown adults (Clarfield 1990). Nascher had high hopes for his pioneering work. He wanted to treat the aging, especially those who were poor and had no one to care for them. Many of the elderly poor were sent to live in “almshouses,” or public old-age homes (Cole 1993). Conditions were often terrible in these almshouses, where the aging were often sent and just forgotten. As hard as it might be to believe today, Nascher’s approach was considered unique. At the time of Nascher’s death, in 1944, he was disappointed that the field of geriatrics had not made greater strides. In what ways are the elderly better off today than they were before Nascher’s ideas? Biological Changes Each person experiences age-related changes based on many factors. Biological factors such as molecular and cellular changes are called primary aging, while aging that occurs due to controllable factors such as lack of physical exercise and poor diet is called secondary aging (Whitbourne and Whitbourne 2010). Most people begin to see signs of aging after age 50, when they notice the physical markers of age. Skin becomes thinner, drier, and less elastic. Wrinkles form. Hair begins to thin and gray. Men prone to balding start losing hair. The difficulty or relative ease with which people adapt to these changes is dependent in part on the meaning given to aging by their particular culture. A culture that values youthfulness and beauty above all else leads to a negative perception of growing old. Conversely, a culture that reveres the elderly for their life experience and wisdom contributes to a more positive perception of what it means to grow old. The effects of aging can feel daunting, and sometimes the fear of physical changes (like declining energy, food sensitivity, and loss of hearing and vision) is more challenging to deal with than the changes themselves. The way people perceive physical aging is largely dependent on how they were socialized. If people can accept the changes in their bodies as a natural process of aging, the changes will not seem as frightening. According to the federal Administration on Aging (2011), in 2009 fewer people over 65 assessed their health as “excellent” or “very good” (41.6 percent) compared to those aged 18–64 (64.4 percent). Evaluating data from the National Center for Health Statistics and the U.S. Bureau of Labor Statistics, the Administration on Aging found that from 2006 to 2008, the most frequently reported health issues for those over 65 included arthritis (50 percent), hypertension (38 percent), heart disease (32 percent), and cancer (22 percent). Additionally, about 27 percent of people age 60 and older are considered obese by current medical standards. Parker and Thorslund (2006) found that while the trend is toward steady improvement in most disability measures, there is a concomitant increase in functional impairments (disability) and chronic diseases. At the same time, medical advances have reduced some of the disabling effects of those diseases (Crimmins 2004). Some impacts of aging are gender-specific. Some of the disadvantages that aging women face arise from long-standing social gender roles. For example, Social Security favors men over women, inasmuch as women do not earn Social Security benefits for the unpaid labor they perform as an extension of their gender roles. 
In the health care field, elderly female patients are more likely than elderly men to see their health care concerns trivialized (Sharp 1995) and are more likely to have their health issues labeled psychosomatic (Munch 2004). Another female-specific aspect of aging is that mass-media outlets often depict elderly females in terms of negative stereotypes and as less successful than older men (Bazzini and McIntosh 1997). For men, the process of aging—and society’s response to and support of the experience—may be quite different. The gradual decrease in male sexual performance that occurs as a result of primary aging is medicalized and constructed as needing treatment (Marshall and Katz 2002) so that a man may maintain a sense of youthful masculinity. On the other hand, aging men have fewer opportunities to assert their masculine identities in the company of other men (e.g., sports participation) (Drummond 1998). And some social scientists have observed that the aging male body is depicted in the Western world as genderless (Spector-Mersel 2006). Social and Psychological Changes Male or female, growing older means confronting the psychological issues that come with entering the last phase of life. Young people moving into adulthood take on new roles and responsibilities as their lives expand, but an opposite arc can be observed in old age. What are the hallmarks of social and psychological change? Retirement—the idea that one may stop working at a certain age—is a relatively recent development. Up until the late 19th century, people worked about 60 hours a week and did so until they were physically incapable of continuing. Following the American Civil War, veterans receiving pensions were able to withdraw from the workforce, and the number of working older men began declining. A second large decline in the number of working men began in the post-World War II era, probably due to the availability of Social Security, and a third large decline in the 1960s and 1970s was probably due to the social support offered by Medicare and the increase in Social Security benefits (Munnell 2011). In the 21st century, most people hope that at some point they will be able to stop working and enjoy the fruits of their labor. But do people look forward to this time or do they fear it? When people retire from familiar work routines, some easily seek new hobbies, interests, and forms of recreation. Many find new groups and explore new activities, but others may find it more difficult to adapt to new routines and loss of social roles, losing their sense of self-worth in the process. Each phase of life has challenges that come with the potential for fear. Erik H. Erikson (1902–1994), in his view of socialization, broke the typical life span into eight phases. Each phase presents a particular challenge that must be overcome. In the final stage, old age, the challenge is to embrace integrity over despair. Some people are unable to successfully overcome the challenge. They may have to confront regrets, such as being disappointed in their children’s lives or perhaps their own. They may have to accept that they will never reach certain career goals. Or they must come to terms with what their career success has cost them, such as time with their family or declining personal health. Others, however, are able to achieve a strong sense of integrity, embracing the new phase in life. When that happens, there is tremendous potential for creativity. 
They can learn new skills, practice new activities, and peacefully prepare for the end of life. For some, overcoming despair might entail remarriage after the death of a spouse. A study conducted by Kate Davidson (2002) reviewed demographic data that asserted men were more likely to remarry after the death of a spouse, and suggested that widows (the surviving female spouse of a deceased male partner) and widowers (the surviving male spouse of a deceased female partner) experience their postmarital lives differently. Many surviving women enjoyed a new sense of freedom, as many were living alone for the first time. On the other hand, for surviving men, there was a greater sense of having lost something, as they were now deprived of a constant source of care as well as the focus of their emotional life. Aging and Sexuality It is no secret that Americans are squeamish about the subject of sex. And when the subject is the sexuality of elderly people? No one wants to think about it or even talk about it. That fact is part of what makes 1971’s Harold and Maude so provocative. In this cult favorite film, Harold, an alienated young man, meets and falls in love with Maude, a 79-year-old woman. What is so telling about the film is the reaction of his family, priest, and psychologist, who exhibit disgust and horror at such a match. Although it is difficult to have an open, public national dialogue about aging and sexuality, the reality is that our sexual selves do not disappear after age 65. People continue to enjoy sex—and not always safe sex—well into their later years. In fact, some research suggests that as many as one in five new cases of AIDS occur in adults over 65 (Hillman 2011). In some ways, old age may be a time to enjoy sex more, not less. For women, the elder years can bring a sense of relief as the fear of an unwanted pregnancy is removed and the children are grown and taking care of themselves. However, while we have expanded the number of psycho-pharmaceuticals to address sexual dysfunction in men, it was not until very recently that the medical field acknowledged the existence of female sexual dysfunctions (Bryant 2004). Social Policy and Debate Aging “Out:” LGBT Seniors How do different groups in our society experience the aging process? Are there any experiences that are universal, or do different populations have different experiences? An emerging field of study looks at how lesbian, gay, bisexual, and transgendered (LGBT) people experience the aging process and how their experience differs from that of other groups or the dominant group. This issue is expanding with the aging of the baby boom generation; not only will aging boomers represent a huge bump in the general elderly population, but the number of LGBT seniors is expected to double by 2030 (Fredriksen-Goldsen et al. 2011). A recent study titled The Aging and Health Report: Disparities and Resilience among Lesbian, Gay, Bisexual, and Transgender Older Adults finds that LGBT older adults have higher rates of disability and depression than their heterosexual peers. They are also less likely to have a support system that might provide elder care: a partner and supportive children (Fredriksen-Goldsen et al. 2011). Even for those LGBT seniors who are partnered, some states do not recognize a legal relationship between two people of the same sex, reducing their legal protection and financial options. 
As they transition to assisted-living facilities, LGBT people have the added burden of “disclosure management:” the way they share their sexual and relationship identity. In one case study, a 78-year-old lesbian lived alone in a long-term care facility. She had been in a long-term relationship of 32 years and had been visibly active in the gay community earlier in her life. However, in the long-term care setting, she was much quieter about her sexual orientation. She “selectively disclosed” her sexual identity, feeling safer with anonymity and silence (Jenkins et al. 2010). A study from the National Senior Citizens Law Center reports that only 22 percent of LGBT older adults expect they could be open about their sexual orientation or gender identity in a long-term care facility. Even more telling is the finding that only 16 percent of non-LGBT older adults expected that LGBT people could be open with facility staff (National Senior Citizens Law Center 2011). Same-sex marriage—a civil rights battle being fought in many states—can have major implications for the way the LGBT community ages. With marriage comes the legal and financial protection afforded to opposite-sex couples, as well as less fear of exposure and a reduction in the need to “retreat to the closet” (Jenkins et al. 2010). Changes in this area are coming slowly, and in the meantime, advocates have many policy recommendations for how to improve the aging process for LGBT individuals. These recommendations include increasing federal research on LGBT elders, increasing (and enforcing existing) laws against discrimination, and amending the federal Family and Medical Leave Act to cover LGBT caregivers (Grant 2009). Death and Dying For most of human history, the standard of living was significantly lower than it is now. Humans struggled to survive with few amenities and very limited medical technology. The risk of death due to disease or accident was high in any life stage, and life expectancy was low. As people began to live longer, death became associated with old age. For many teenagers and young adults, losing a grandparent or another older relative can be the first loss of a loved one they experience. It may be their first encounter with grief, a psychological, emotional, and social response to the feelings of loss that accompany death or a similar event. People tend to perceive death, their own and that of others, based on the values of their culture. While some may look upon death as the natural conclusion to a long, fruitful life, others may find the prospect of dying frightening to contemplate. People tend to have strong resistance to the idea of their own death, and strong emotional reactions of loss to the death of loved ones. Viewing death as a loss, as opposed to a natural or tranquil transition, is often considered normal in the United States. What may be surprising is how few studies were conducted on death and dying prior to the 1960s. Death and dying were fields that had received little attention until a psychologist named Elisabeth Kübler-Ross began observing people who were in the process of dying. As Kübler-Ross witnessed people’s transition toward death, she found some common threads in their experiences. She observed that the process had five distinct stages: denial, anger, bargaining, depression, and acceptance. She published her findings in a 1969 book called On Death and Dying. The book remains a classic on the topic today. 
Kübler-Ross found that a person’s first reaction to the prospect of dying is denial: not wanting to believe that he or she is dying, with common thoughts such as “I feel fine” or “This is not really happening to me.” The second stage is anger, when loss of life is seen as unfair and unjust. A person then resorts to the third stage, bargaining: trying to negotiate with a higher power to postpone the inevitable by reforming or changing the way he or she lives. The fourth stage, psychological depression, allows for resignation as the situation begins to seem hopeless. In the final stage, a person adjusts to the idea of death and reaches acceptance. At this point, the person can face death honestly, regarding it as a natural and inevitable part of life, and can make the most of their remaining time. The work of Kübler-Ross was eye-opening when it was introduced. It broke new ground and opened the doors for sociologists, social workers, health practitioners, and therapists to study death and help those who were facing death. Kübler-Ross’s work is generally considered a major contribution to thanatology: the systematic study of death and dying. Of special interest to thanatologists is the concept of “dying with dignity.” Modern medicine includes advanced medical technology that may prolong life without a parallel improvement to the quality of life one may have. In some cases, people may not want to continue living when they are in constant pain and no longer enjoying life. Should patients have the right to choose to die with dignity? Dr. Jack Kevorkian was a staunch advocate for physician-assisted suicide: the voluntary use of lethal medication, provided by a medical doctor, to end one’s life. This right to have a doctor help a patient die with dignity is controversial. In the United States, Oregon was the first state to pass a law allowing physician-assisted suicides. In 1997, Oregon instituted the Death with Dignity Act, which required the presence of two physicians for a legal assisted suicide. This law was challenged by U.S. Attorney General John Ashcroft in 2001, but the appeals process ultimately upheld the Oregon law. Subsequently, both Montana and Washington have passed similar laws. The controversy surrounding death with dignity laws is emblematic of the way our society tries to separate itself from death. Health institutions have built facilities to comfortably house those who are terminally ill. This is seen as a compassionate act, helping relieve the surviving family members of the burden of caring for the dying relative. But studies almost universally show that people prefer to die in their own homes (Lloyd, White, and Sutton 2011). Is it our social responsibility to care for elderly relatives up until their death? How do we balance the responsibility for caring for an elderly relative with our other responsibilities and obligations? As our society grows older, and as new medical technology can prolong life even further, the answers to these questions will develop and change. The changing concept of hospice is an indicator of our society’s changing view of death. Hospice is a type of health care that treats terminally ill people when “cure-oriented treatments” are no longer an option (Hospice Foundation of America 2012b). Hospice doctors, nurses, and therapists receive special training in the care of the dying. 
The focus is not on getting better or curing the illness, but on passing out of this life in comfort and peace. Hospice centers exist as a place where people can go to die in comfort, and increasingly, hospice services encourage at-home care so that someone has the comfort of dying in a familiar environment, surrounded by family (Hospice Foundation of America 2012a). While many of us would probably prefer to avoid thinking of the end of our lives, it may be possible to take comfort in the idea that when we do approach death in a hospice setting, it is in a familiar, relatively controlled place. 13.3 Challenges Facing the Elderly Aging comes with many challenges. The loss of independence is one potential part of the process, as are diminished physical ability and age discrimination. The term senescence refers to the aging process, including biological, emotional, intellectual, social, and spiritual changes. This section discusses some of the challenges we encounter during this process. As already observed, many older adults remain highly self-sufficient. Others require more care. Because the elderly typically no longer hold jobs, finances can be a challenge. And due to cultural misconceptions, older people can be targets of ridicule and stereotypes. The elderly face many challenges in later life, but they do not have to enter old age without dignity. Poverty For many people in the United States, growing older once meant living with less income. In 1960, almost 35 percent of the elderly existed on poverty-level incomes. A generation ago, the nation’s oldest populations had the highest risk of living in poverty. At the start of the 21st century, the older population was putting an end to that trend. Among people over 65, the poverty rate fell from 30 percent in 1967 to 9.7 percent in 2008, well below the national average of 13.2 percent (U.S. Census Bureau 2009). However, with the subsequent recession, which severely reduced the retirement savings of many while taxing public support systems, how are the elderly affected? According to the Kaiser Commission on Medicaid and the Uninsured, the national poverty rate among the elderly had risen to 14 percent by 2010 (Urban Institute and Kaiser Commission 2010). Before the recession hit, what had changed to cause a reduction in poverty among the elderly? What social patterns contributed to the shift? For several decades, a greater number of women joined the workforce. More married couples earned double incomes during their working years and saved more money for their retirement. Private employers and governments began offering better retirement programs. By 1990, senior citizens reported earning 36 percent more income on average than they did in 1980; that was five times the rate of increase for people under age 35 (U.S. Census Bureau 2009). In addition, many people were gaining access to better health care. New trends encouraged people to live more healthful lifestyles, placing an emphasis on exercise and nutrition. There was also greater access to information about the health risks of behaviors such as cigarette smoking, alcohol consumption, and drug use. Because they were healthier, many older people continued to work past the typical retirement age, providing more opportunity to save for retirement. Will these patterns return once the recession ends? Sociologists will be watching to see. In the meantime, they are observing the immediate impact of the recession on elderly poverty. 
During the recession, older people lost some of the financial advantages that they’d gained in the 1980s and 1990s. From October 2007 to October 2009, retirement accounts of people over age 50 lost 18 percent of their value. The sharp decline in the stock market also forced many to delay their retirement (Administration on Aging 2009). Ageism Driving to the grocery store, Peter, 23, got stuck behind a car on a four-lane main artery through his city’s business district. The speed limit was 35 miles per hour, and while most drivers sped along at 40 to 45 mph, the driver in front of him was going the minimum speed. Peter tapped on his horn. He tailgated the driver. Finally, Peter had a chance to pass the car. He glanced over. Sure enough, Peter thought, a gray-haired old man guilty of “DWE,” driving while elderly. At the grocery store, Peter waited in the checkout line behind an older woman. She paid for her groceries, lifted her bags of food into her cart, and toddled toward the exit. Peter, guessing her to be about 80, was reminded of his grandmother. He paid for his groceries and caught up with her. “Can I help you with your cart?” he asked. “No, thank you. I can get it myself,” she said and marched off toward her car. Peter’s responses to both older people, the driver and the shopper, were prejudiced. In both cases, he made unfair assumptions. He assumed the driver drove cautiously simply because the man was a senior citizen, and he assumed the shopper needed help carrying her groceries just because she was an older woman. Responses like Peter’s toward older people are fairly common. He didn’t intend to treat people differently based on personal or cultural biases, but he did. Ageism is discrimination (when someone acts on a prejudice) based on age. Dr. Robert Butler coined the term in 1968, noting that ageism exists in all cultures (Brownell). Ageist attitudes and biases based on stereotypes reduce elderly people to inferior or limited positions. Ageism can vary in severity. Peter’s attitudes are probably seen as fairly mild, but relating to the elderly in ways that are patronizing can be offensive. When ageism is reflected in the workplace, in health care, and in assisted-living facilities, the effects of discrimination can be more severe. Ageism can make older people fear losing a job, feel dismissed by a doctor, or feel a lack of power and control in their daily living situations. In early societies, the elderly were respected and revered. Many preindustrial societies observed gerontocracy, a type of social structure wherein the power is held by a society’s oldest members. In some countries today, the elderly still have influence and power and their vast knowledge is respected. In many modern nations, however, industrialization contributed to the diminished social standing of the elderly. Today wealth, power, and prestige are also held by those in younger age brackets. The average age of corporate executives was 59 in 1980. In 2008, the average age had fallen to 54 (Stuart 2008). Some older members of the workforce felt threatened by this trend and grew concerned that younger employees in higher-level positions would push them out of the job market. Rapid advancements in technology and media have required new skill sets that older members of the workforce are less likely to have. Changes happened not only in the workplace but also at home. In agrarian societies, a married couple cared for their aging parents. 
The oldest members of the family contributed to the household by doing chores, cooking, and helping with child care. As economies shifted from agrarian to industrial, younger generations moved to cities to work in factories. The elderly began to be seen as an expensive burden. They did not have the strength and stamina to work outside the home. What began during industrialization, a trend toward older people living apart from their grown children, has become commonplace. Mistreatment and Abuse Mistreatment and abuse of the elderly is a major social problem. As expected, with the biology of aging, the elderly sometimes become physically frail. This frailty renders them dependent on others for care—sometimes for small needs like household tasks, and sometimes for assistance with basic functions like eating and toileting. Unlike a child, who also is dependent on another for care, an elder is an adult with a lifetime of experience, knowledge, and opinions—a more fully developed person. This makes the caregiving situation more complex. Elder abuse occurs when a caretaker intentionally deprives an older person of care or harms the person in their charge. Caregivers may be family members, relatives, friends, health professionals, or employees of senior housing or nursing care. The elderly may be subject to many different types of abuse. In a 2009 study on the topic led by Dr. Ron Acierno, the team of researchers identified five major categories of elder abuse: 1) physical abuse, such as hitting or shaking, 2) sexual abuse, including rape and coerced nudity, 3) psychological or emotional abuse, such as verbal harassment or humiliation, 4) neglect or failure to provide adequate care, and 5) financial abuse or exploitation (Acierno 2010). The National Center on Elder Abuse (NCEA), a division of the U.S. Administration on Aging, also identifies abandonment and self-neglect as types of abuse. Table 13.1 shows some of the signs and symptoms that the NCEA encourages people to notice.
Table 13.1 Signs of Elder Abuse. The National Center on Elder Abuse encourages people to watch for these signs of mistreatment. (Chart courtesy of the National Center on Elder Abuse)
Physical abuse: bruises, untreated wounds, sprains, broken glasses, lab findings of medication overdosage
Sexual abuse: bruises around breasts or genitals, torn or bloody underclothing, unexplained venereal disease
Emotional/psychological abuse: being upset or withdrawn, unusual dementia-like behavior (rocking, sucking)
Neglect: poor hygiene, untreated bed sores, dehydration, soiled bedding
Financial abuse: sudden changes in banking practices, inclusion of additional names on bank cards, abrupt changes to a will
Self-neglect: untreated medical conditions, unclean living area, lack of medical items like dentures or glasses
How prevalent is elder abuse? Two recent U.S. studies found that roughly 1 in 10 elderly people surveyed had suffered at least one form of elder abuse. Some social researchers believe elder abuse is underreported and that the number may be higher. The risk of abuse also increases in people with health issues such as dementia (Kohn and Verhoek-Oftedahl 2011). Older women were found to be victims of verbal abuse more often than their male counterparts. In Acierno’s study, which included a sample of 5,777 respondents age 60 and older, 5.2 percent of respondents reported financial abuse, 5.1 percent said they’d been neglected, and 4.6 percent endured emotional abuse (Acierno 2010). 
The prevalence of physical and sexual abuse was lower at 1.6 and 0.6 percent, respectively (Acierno 2010). Other studies have focused on the caregivers to the elderly in an attempt to discover the causes of elder abuse. Researchers identified factors that increased the likelihood of caregivers perpetrating abuse against those in their care. Those factors include inexperience, having other demands such as jobs (for those who weren’t professionally employed as caregivers), caring for children, living full time with the dependent elder, and experiencing high stress, isolation, and lack of support (Kohn and Verhoek-Oftedahl 2011). A history of depression in the caregiver was also found to increase the likelihood of elder abuse. Neglect was more likely when care was provided by paid caregivers. Many of the caregivers who physically abused elders were themselves abused—in many cases, when they were children. Family members with some sort of dependency on the elder in their care were more likely to physically abuse that elder. For example, an adult child caring for an elderly parent while, at the same time, depending on some form of income from that parent, would be considered more likely to perpetrate physical abuse (Kohn and Verhoek-Oftedahl 2011). A survey in Florida found that 60.1 percent of caregivers reported verbal aggression as a style of conflict resolution. Paid caregivers in nursing homes were at a high risk of becoming abusive if they had low job satisfaction, treated the elderly like children, or felt burnt out (Kohn and Verhoek-Oftedahl 2011). Caregivers who tended to be verbally abusive were found to have had less training, lower education, and higher likelihood of depression or other psychiatric disorders. Based on the results of these studies, many housing facilities for seniors have increased their screening procedures for caregiver applicants. Big Picture World War II Veterans World War II veterans are aging. Many are in their 80s and 90s. They are dying at an estimated rate of about 740 per day, according to the U.S. Veterans Administration (National Center for Veterans Analysis and Statistics 2011). Data suggest that by 2036, there will be no living veterans of WWII (U.S. Department of Veterans Affairs). When these veterans came home from the war and ended their service, little was known about posttraumatic stress disorder (PTSD). These heroes did not receive the mental and physical health care that could have helped them. As a result, many of them, now in old age, are dealing with the effects of PTSD. Research suggests a high percentage of World War II veterans are plagued by flashback memories and isolation, and that many “self-medicate” with alcohol. Research has found that veterans of any conflict are more than twice as likely as non-veterans to commit suicide, with rates highest among the oldest veterans. Reports show that WWII-era veterans are four times as likely to take their own lives as people of the same age with no military service (Glantz 2010). In May 2004, the National World War II Memorial in Washington, D.C., was completed and dedicated to honor those who served during the conflict. Dr. Earl Morse, a physician and retired Air Force captain, treated many WWII veterans. He encouraged them to visit the memorial, knowing it could help them heal. Many WWII veterans expressed interest in seeing the memorial. Unfortunately, many were in their 80s and were neither physically nor financially able to travel on their own. Dr. 
Morse arranged to personally escort some of the veterans and enlisted volunteer pilots who would pay for the flights themselves. He also raised money, insisting the veterans pay nothing. By the end of 2005, 137 veterans, many in wheelchairs, had made the trip. The Honor Flight Network was up and running. As of 2010, the Honor Flight Network had flown more than 120,000 U.S. veterans of World War II, and some veterans of the Korean War, to Washington. The round-trip flights leave for day-long trips from airports in 30 states, staffed by volunteers who care for the needs of the elderly travelers (Honor Flight Network 2011). 13.4 Theoretical Perspectives on Aging What roles do individual senior citizens play in your life? How do you relate to and interact with older people? What role do they play in neighborhoods and communities, in cities and in states? Sociologists are interested in exploring the answers to questions such as these through three different perspectives: functionalism, symbolic interactionism, and conflict theory. Functionalism Functionalists analyze how the parts of society work together to keep society running smoothly. How does this perspective address aging? The elderly, as a group, are one of society’s vital parts. Functionalists find that people with better resources who stay active in other roles adjust better to old age (Crosnoe and Elder 2002). Three social theories within the functional perspective were developed to explain how older people might deal with later-life experiences. The earliest gerontological theory in the functionalist perspective is disengagement theory, which suggests that withdrawing from society and social relationships is a natural part of growing old. There are several main points to the theory. First, because everyone expects to die one day, and because we experience physical and mental decline as we approach death, it is natural to withdraw from individuals and society. Second, as the elderly withdraw, they receive less reinforcement to conform to social norms. Therefore, this withdrawal allows a greater freedom from the pressure to conform. Finally, social withdrawal is gendered, meaning it is experienced differently by men and women. Because men focus on work and women focus on marriage and family, when they withdraw they will be unhappy and directionless until they adopt a role to replace their accustomed role that is compatible with the disengaged state (Cumming and Henry 1961). The suggestion that old age was a distinct state in the life course, characterized by a distinct change in roles and activities, was groundbreaking when it was first introduced. However, the theory is no longer accepted in its classic form. Criticisms typically focus on the application of the idea that seniors universally and naturally withdraw from society as they age, and that it does not allow for a wide variation in the way people experience aging (Hochschild 1975). The social withdrawal that Cumming and Henry recognized (1961), and the related notion that elderly people need to find replacement roles for those they’ve lost, is addressed anew in activity theory. According to this theory, activity levels and social involvement are key to this process, and key to happiness (Havighurst 1961; Neugarten 1964; Havighurst, Neugarten, and Tobin 1968). The more active and involved an elderly person is, the happier he or she will be. 
Critics of this theory point out that access to social opportunities and activity are not equally available to all. Moreover, not everyone finds fulfillment in the presence of others or participation in activities. Reformulations of this theory suggest that participation in informal activities, such as hobbies, is what most affects later-life satisfaction (Lemon, Bengtson, and Petersen 1972). According to continuity theory, the elderly make specific choices to maintain consistency in internal (personality structure, beliefs) and external structures (relationships), remaining active and involved throughout their elder years. This is an attempt to maintain social equilibrium and stability by making future decisions on the basis of already developed social roles (Atchley 1971; Atchley 1989). One criticism of this theory is its emphasis on so-called “normal” aging, which marginalizes those with chronic diseases such as Alzheimer’s. Sociological Research The Graying of American Prisons Earl Grimes is a 79-year-old inmate at a state prison. He has undergone two cataract surgeries and takes about $1,000 a month worth of medication to manage a heart condition. He needs significant help moving around, which he obtains by bribing younger inmates. He is serving a life prison term for a murder he committed 38 years—half a lifetime—ago (Warren 2002). Grimes’ situation exemplifies the problems facing prisons today. According to a recent report released by Human Rights Watch (2012), there are now more than 124,000 prisoners age 55 or older and over 26,000 prisoners age 65 or older in the U.S. prison population. These numbers represent an exponential rise over the last two decades. Why are American prisons graying so rapidly? Two factors contribute significantly to this country’s aging prison population. One is the tough-on-crime reforms of the 1980s and 1990s, when mandatory minimum sentencing and “three strikes” policies sent many people to jail for 30 years to life, even when the third strike was a relatively minor offense (Leadership Conference N.d.). Many of today’s elderly prisoners are those who were incarcerated 30 years ago for life sentences. The other factor influencing today’s aging prison population is the aging of the overall population. As discussed in the section on aging in the United States, the percentage of people over 65 is increasing each year due to rising life expectancies and the aging of the baby boom generation. So why should it matter that the elderly prison population is growing so swiftly? As discussed in the section on the process of aging, growing older is accompanied by a host of physical problems, like failing vision, mobility, and hearing. Chronic illnesses like heart disease, arthritis, and diabetes also become increasingly common as people age, whether they are in prison or not. In many cases, elderly prisoners are physically incapable of committing a violent—or possibly any—crime. Is it ethical to keep them locked up for the short remainder of their lives? There seem to be a lot of reasons, both financial and ethical, to release some elderly prisoners to live the rest of their lives—and die—in freedom. However, few lawmakers are willing to appear soft on crime by releasing convicted felons from prison, especially if their sentence was “life without parole” (Warren 2002). Conflict Perspective Theorists working in the conflict perspective view society as inherently unstable, an institution that privileges the powerful wealthy few while marginalizing everyone else. 
According to the guiding principle of conflict theory, social groups compete with other groups for power and scarce resources. Applied to society’s aging population, the principle means that the elderly struggle with other groups—for example, younger society members—to retain a certain share of resources. At some point, this competition may become conflict. For example, some people complain that the elderly get more than their fair share of society’s resources. In hard economic times, there is great concern about the huge costs of Social Security and Medicare. More than one of every four tax dollars, about 28 percent, is spent on these two programs. In 1950, the federal government paid $781 million in Social Security payments; by 2008, it paid $296 billion, roughly 380 times as much (Statistical Abstract 2011). The medical bills of the nation’s elderly population are rising dramatically. While there is more care available to certain segments of the senior community, it must be noted that the financial resources available to the aging can vary tremendously by race, social class, and gender. There are three classic theories of aging within the conflict perspective. Modernization theory (Cowgill and Holmes 1972) suggests that the primary causes of the elderly losing power and influence in society are the parallel forces of industrialization and modernization. As societies modernize, the status of elders decreases, and they are increasingly likely to experience social exclusion. Before industrialization, strong social norms bound the younger generation to care for the older. Now, as societies industrialize, the nuclear family replaces the extended family. Societies become increasingly individualistic, and norms regarding the care of older people change. In an individualistic industrial society, caring for an elderly relative is seen as a voluntary obligation that may be ignored without fear of social censure. The central reasoning of modernization theory is that as long as the extended family is the standard family, as in preindustrial economies, elders will have a place in society and a clearly defined role. As societies modernize, the elderly, unable to work outside of the home, have less to offer economically and are seen as a burden. This model may be applied to both the developed and the developing world, and it suggests that as people age they will be abandoned and lose much of their familial support since they become a nonproductive economic burden. Another theory in the conflict perspective is age stratification theory (Riley, Johnson, and Foner 1972). Though it may seem obvious now, with our awareness of ageism, age stratification theorists were the first to suggest that members of society might be stratified by age, just as they are stratified by race, class, and gender. Because age serves as a basis of social control, different age groups will have varying access to social resources such as political and economic power. Within societies, behavioral age norms, including norms about roles and appropriate behavior, dictate what members of age cohorts may reasonably do. For example, it might be considered deviant for an elderly woman to wear a bikini because it violates norms denying the sexuality of older females. 
These norms are specific to each age stratum, developing from culturally based ideas about how people should “act their age.” Thanks to amendments to the Age Discrimination in Employment Act (ADEA), which drew attention to some of the ways in which our society is stratified based on age, U.S. workers no longer must retire upon reaching a specified age. As first passed in 1967, the ADEA provided protection against a broad range of age discrimination and specifically addressed termination of employment due to age, age-specific layoffs, advertised positions specifying age limits or preferences, and denial of health care benefits to those over 65 (U.S. EEOC 2012). Age stratification theory has been criticized for its broadness and its inattention to other sources of stratification and how these might intersect with age. For example, one might argue that an older white male occupies a more powerful role, and is far less limited in his choices, compared to an older white female based on his historical access to political and economic power. Finally, exchange theory (Dowd 1975), a rational choice approach, suggests we experience an increased dependence as we age and must increasingly submit to the will of others because we have fewer ways of compelling others to submit to us. Indeed, inasmuch as relationships are based on mutual exchanges, as the elderly become less able to exchange resources, they will see their social circles diminish. In this model, the only means to avoid being discarded is to engage in resource management, like maintaining a large inheritance or participating in social exchange systems via child care. In fact, the theory may depend too much on the assumption that individuals are calculating. It is often criticized for placing too much emphasis on material exchange and devaluing nonmaterial assets such as love and friendship. Symbolic Interactionism Generally, theories within the symbolic interactionist perspective focus on how society is created through the day-to-day interaction of individuals, as well as the way people perceive themselves and others based on cultural symbols. This microanalytic perspective assumes that if people develop a sense of identity through their social interactions, their sense of self is dependent on those interactions. A woman whose main interactions with society make her feel old and unattractive may lose her sense of self. But a woman whose interactions make her feel valued and important will have a stronger sense of self and a happier life. Symbolic interactionists stress that the changes associated with old age, in and of themselves, have no inherent meaning. Nothing in the nature of aging creates any particular, defined set of attitudes. Rather, attitudes toward the elderly are rooted in society. One microanalytical theory is Rose’s (1962) subculture of aging theory, which focuses on the shared community created by the elderly when they are excluded (due to age), voluntarily or involuntarily, from participating in other groups. This theory suggests that elders will disengage from society and develop new patterns of interaction with peers who share common backgrounds and interests. For example, a group consciousness may develop within such groups as AARP around issues specific to the elderly like the Medicare “doughnut hole,” focused on creating social and political pressure to fix those issues. 
Whether brought together by social or political interests, or even geographic regions, elders may find a strong sense of community with their new group. Another theory within the symbolic interaction perspective is selective optimization with compensation theory. Baltes and Baltes (1990) built their theory on the idea that successful personal development throughout the life course, and subsequent mastery of the challenges associated with everyday life, rests on the components of selection, optimization, and compensation. Though this happens at all stages in the life course, in the field of gerontology, researchers focus attention on balancing the losses associated with aging with the gains stemming from the same. Here, aging is a process and not an outcome, and the goals (compensation) are specific to the individual. According to this theory, our energy diminishes as we age, and we select (selection) personal goals to get the most (optimize) for the effort we put into activities, in this way making up for (compensation) the loss of a wider range of goals and activities. In this theory, the physical decline postulated by disengagement theory may result in more dependence, but that is not necessarily negative, as it allows aging individuals to save their energy for the most meaningful activities. For example, a professor who values teaching sociology may participate in a phased retirement, never entirely giving up teaching, but acknowledging personal physical limitations that allow teaching only one or two classes per year. Swedish sociologist Lars Tornstam developed a symbolic interactionist theory called gerotranscendence: the idea that as people age, they transcend the limited views of life they held in earlier times. Tornstam believes that throughout the aging process, the elderly become less self-centered and feel more peaceful and connected to the natural world. Wisdom comes to the elderly, Tornstam’s theory states, and as the elderly tolerate ambiguities and seeming contradictions, they let go of conflict, and develop softer views of right and wrong (Tornstam 2005). Tornstam does not claim that everyone will achieve wisdom in aging. Some elderly people might still grow bitter and isolated, feel ignored and left out, or become grumpy and judgmental. Symbolic interactionists believe that, just as in other phases of life, individuals must struggle to overcome their own failings and turn them into strengths.
principles_of_accounting,_volume_2:_managerial_accounting
Summary 2.1 Distinguish between Merchandising, Manufacturing, and Service Organizations Merchandising, manufacturing, and service organizations differ in what they provide to consumers; however, all three types of firms must control costs in order to remain profitable. The types of costs they incur are primarily determined by the product, good, or service they provide. As the type of organization differs, so does the way it accounts for costs. Some of these differences are reflected in the income statement. 2.2 Identify and Apply Basic Cost Behavior Patterns Costs can be broadly classified as either fixed or variable costs. However, in order for managers to manage effectively, these two cost classifications are often further expanded to include mixed, step, prime, and conversion costs. For manufacturing firms, it is essential that they differentiate among direct materials, direct labor, and manufacturing overhead in order to identify and manage their total product costs. For planning purposes, managers must be careful to consider the relevant range because it is only within this relevant range that total fixed costs remain constant. 2.3 Estimate a Variable and Fixed Cost Equation and Predict Future Costs In order to make business decisions, managers can utilize past cost data to predict future costs employing three methods: scatter graphs, the high-low method, and least-squares regression analysis. Scatter graphs are used as a diagnostic tool to determine if the relationship between activity and cost is a linear relationship. Both the high-low method and the least-squares regression method separate mixed costs into their fixed and variable components to allow managers to predict future costs from historical costs.
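To make the high-low method from Summary 2.3 concrete, here is a minimal sketch in Python; the function name, the sample observations, and the dollar figures are hypothetical illustrations, not taken from the text. The method picks the periods with the highest and lowest activity, uses the change in cost between them to estimate the variable rate, and then backs out the fixed component.

def high_low(observations):
    # observations: list of (activity, total_cost) pairs for past periods
    low = min(observations, key=lambda p: p[0])   # period with the lowest activity
    high = max(observations, key=lambda p: p[0])  # period with the highest activity
    # Variable rate = change in cost / change in activity between the two periods
    variable_rate = (high[1] - low[1]) / (high[0] - low[0])
    # Fixed cost = total cost minus the variable portion (either point gives the same result)
    fixed_cost = high[1] - variable_rate * high[0]
    return fixed_cost, variable_rate

# Hypothetical monthly (machine-hours, total maintenance cost) observations
data = [(1000, 9000), (1500, 11000), (2000, 13000), (1200, 9800)]
fixed, rate = high_low(data)
print(fixed, rate)          # 5000.0 4.0
print(fixed + rate * 1800)  # predicted total cost at 1,800 hours: 12200.0

Because the high-low method uses only two observations, it is sensitive to outliers; least-squares regression, which fits a line to all the data points, generally yields a more reliable cost equation.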
Chapter Outline 2.1 Distinguish between Merchandising, Manufacturing, and Service Organizations 2.2 Identify and Apply Basic Cost Behavior Patterns 2.3 Estimate a Variable and Fixed Cost Equation and Predict Future Costs Why It Matters Many 16-year-olds in the United States eagerly anticipate having a car of their own and the freedom that comes from having their own means of transportation. For many, this means not having to bum a ride from a friend, take a bus, hire Uber or Lyft, or worse, borrow the parents’ car. However, as appealing as having one’s own set of wheels sounds, it comes with an array of costs that many young drivers do not anticipate. Some of the costs associated with buying and owning a car are fixed, and some vary with the level of activity. For example, a driver pays car payments and insurance premiums every month whether or not the car is driven, but the cost of maintenance and gas can be controlled by driving less. A driver cannot control the price of gasoline or the mechanic’s hourly wage but can control how much of each is used each month. Just as car owners incur a variety of costs—fixed, variable, controllable, and uncontrollable—businesses incur these types of costs as well. The goal of managerial accountants is to use this cost information to assist management in both long- and short-term decision-making. Managerial accounting follows standards and best practices for reporting cost data that are less formal than those used for financial accounting. This means management often has the discretion to determine how costs are used internally. Since businesses collect and analyze cost data for internal use, there may be distinct differences among businesses in how they estimate and treat certain costs. What does not change, regardless of how cost data is used, are the generally agreed-upon cost classifications managers use for decision-making. In short, most businesses incur the same types of costs, but how each firm classifies and manages these costs can vary widely.
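As a rough sketch of the cost behavior described above, the car example can be written as a simple cost equation: total cost = fixed costs + (variable rate x activity). All dollar amounts below are invented for illustration only.

# Hypothetical monthly car costs (all figures invented for illustration)
CAR_PAYMENT = 310.00         # fixed: owed whether or not the car is driven
INSURANCE = 95.00            # fixed: premium is due every month
GAS_PER_MILE = 0.12          # variable: grows with every mile driven
MAINTENANCE_PER_MILE = 0.05  # variable: wear and tear accrues with use

def monthly_cost(miles_driven):
    # Total cost = fixed costs + variable rate x activity level (miles)
    fixed = CAR_PAYMENT + INSURANCE
    variable_rate = GAS_PER_MILE + MAINTENANCE_PER_MILE
    return fixed + variable_rate * miles_driven

print(monthly_cost(0))     # 405.0 -- fixed costs are incurred even if the car sits idle
print(monthly_cost(1000))  # 575.0 -- only the variable portion rose with activity

The same structure, fixed costs plus a variable rate times an activity level, is the cost equation managers estimate with the scatter graph, high-low, and least-squares regression methods covered in this chapter.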
[ { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "For example , Whichard & Klein , LLP , is a full-service accounting firm with their primary offices in Baltimore , Maryland . With two senior partners and a small staff of accountants and payroll specialists , the majority of the costs they incur are related to personnel . <hl> The value of the accounting and payroll services they provide to their clients is intangible in comparison to goods sold by a merchandiser or produced by a manufacturer but has value and is the primary source of revenue for the firm . <hl> At the end of 2019 , Whichard and Klein reported the following revenue and expenses :", "hl_sentences": "The value of the accounting and payroll services they provide to their clients is intangible in comparison to goods sold by a merchandiser or produced by a manufacturer but has value and is the primary source of revenue for the firm .", "question": { "cloze_format": "___ is the primary source of revenue for a service business.", "normal_format": "Which of the following is the primary source of revenue for a service business?", "question_choices": [ "the production of products from raw materials", "the purchase and resale of finished products", "providing intangible goods and services", "the sale of raw materials to manufacturing firms" ], "question_id": "fs-idm227199056", "question_text": "Which of the following is the primary source of revenue for a service business?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "the purchase and resale of finished products" }, "bloom": null, "hl_context": "A merchandising firm is one of the most common types of businesses . <hl> A merchandising firm is a business that purchases finished products and resells them to consumers . <hl> Consider your local grocery store or retail clothing store . Both of these are merchandising firms . Often , merchandising firms are referred to as resellers or retailers since they are in the business of reselling a product to the consumer at a profit .", "hl_sentences": "A merchandising firm is a business that purchases finished products and resells them to consumers .", "question": { "cloze_format": "___ is the primary source of revenue for a merchandising business.", "normal_format": "Which of the following is the primary source of revenue for a merchandising business?", "question_choices": [ "the production of products from raw materials", "the purchase and resale of finished products", "the provision of intangible goods and services", "the sale of raw materials to manufacturing firms" ], "question_id": "fs-idm231959328", "question_text": "Which of the following is the primary source of revenue for a merchandising business?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "For example , Whichard & Klein , LLP , is a full-service accounting firm with their primary offices in Baltimore , Maryland . With two senior partners and a small staff of accountants and payroll specialists , the majority of the costs they incur are related to personnel . <hl> The value of the accounting and payroll services they provide to their clients is intangible in comparison to goods sold by a merchandiser or produced by a manufacturer but has value and is the primary source of revenue for the firm . 
<hl> At the end of 2019 , Whichard and Klein reported the following revenue and expenses : <hl> A manufacturing organization is a business that uses parts , components , or raw materials to produce finished goods ( Figure 2.6 ) . <hl> <hl> These finished goods are sold either directly to the consumer or to other manufacturing firms that use them as a component part to produce a finished product . <hl> For example , Diehard manufactures automobile batteries that are sold directly to consumers by retail outlets such as AutoZone , Costco , and Advance Auto . However , these batteries are also sold to automobile manufacturers such as Ford , Chevrolet , or Toyota to be installed in cars during the manufacturing process . Regardless of who the final consumer of the final product is , Diehard must control its costs so that the sale of batteries generates revenue sufficient to keep the organization profitable .", "hl_sentences": "The value of the accounting and payroll services they provide to their clients is intangible in comparison to goods sold by a merchandiser or produced by a manufacturer but has value and is the primary source of revenue for the firm . A manufacturing organization is a business that uses parts , components , or raw materials to produce finished goods ( Figure 2.6 ) . These finished goods are sold either directly to the consumer or to other manufacturing firms that use them as a component part to produce a finished product .", "question": { "cloze_format": "___ is the primary source of revenue for a manufacturing business.", "normal_format": "Which of the following is the primary source of revenue for a manufacturing business?", "question_choices": [ "the production of products from raw materials", "the purchase and resale of finished products", "the provision of intangible goods and services", "both the provision of services and the sale of finished goods" ], "question_id": "fs-idm228315696", "question_text": "Which of the following is the primary source of revenue for a manufacturing business?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "Service Revenue – Operating Expenses = operating income" }, "bloom": null, "hl_context": "This simplified income statement demonstrates how merchandising firms account for their sales cycle or process . Sales revenue is the income generated from the sale of finished goods to consumers rather than from the manufacture of goods or provision of services . Since a merchandising firm has to purchase goods for resale , they account for this cost as cost of goods sold — what it cost them to acquire the goods that are then sold to the customer . The difference between what the drug store paid for the toothpaste and the revenue generated by selling the toothpaste to consumers is their gross profit . <hl> However , in order to generate sales revenue , merchandising firms incur expenses related to the process of operating their business and selling the merchandise . <hl> <hl> These costs are called operating expenses , and the business must deduct them from the gross profit to determine the operating profit . <hl> <hl> ( Note that while the terms “ operating profit ” and “ operating income ” are often used interchangeably , in real-world interactions you should confirm exactly what the user means in using those terms . 
) <hl> Operating expenses incurred by a merchandising firm include insurance , marketing , administrative salaries , and rent .", "hl_sentences": "However , in order to generate sales revenue , merchandising firms incur expenses related to the process of operating their business and selling the merchandise . These costs are called operating expenses , and the business must deduct them from the gross profit to determine the operating profit . ( Note that while the terms “ operating profit ” and “ operating income ” are often used interchangeably , in real-world interactions you should confirm exactly what the user means in using those terms . )", "question": { "cloze_format": "The components of the income statement for a service business are represented by the expression ___", "normal_format": "Which of the following represents the components of the income statement for a service business?", "question_choices": [ "Sales Revenue – Cost of Goods Sold = gross profit", "Service Revenue – Operating Expenses = operating income", "Sales Revenue – Cost of Goods Manufactured = gross profit", "Service Revenue – Cost of Goods Purchased = gross profit" ], "question_id": "fs-idm224438608", "question_text": "Which of the following represents the components of the income statement for a service business?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "This simplified income statement demonstrates how merchandising firms account for their sales cycle or process . <hl> Sales revenue is the income generated from the sale of finished goods to consumers rather than from the manufacture of goods or provision of services . <hl> <hl> Since a merchandising firm has to purchase goods for resale , they account for this cost as cost of goods sold — what it cost them to acquire the goods that are then sold to the customer . <hl> <hl> The difference between what the drug store paid for the toothpaste and the revenue generated by selling the toothpaste to consumers is their gross profit . <hl> However , in order to generate sales revenue , merchandising firms incur expenses related to the process of operating their business and selling the merchandise . These costs are called operating expenses , and the business must deduct them from the gross profit to determine the operating profit . ( Note that while the terms “ operating profit ” and “ operating income ” are often used interchangeably , in real-world interactions you should confirm exactly what the user means in using those terms . ) Operating expenses incurred by a merchandising firm include insurance , marketing , administrative salaries , and rent .", "hl_sentences": "Sales revenue is the income generated from the sale of finished goods to consumers rather than from the manufacture of goods or provision of services . Since a merchandising firm has to purchase goods for resale , they account for this cost as cost of goods sold — what it cost them to acquire the goods that are then sold to the customer . 
The difference between what the drug store paid for the toothpaste and the revenue generated by selling the toothpaste to consumers is their gross profit .", "question": { "cloze_format": "___ represents the components of the income statement for a manufacturing business.", "normal_format": "Which of the following represents the components of the income statement for a manufacturing business?", "question_choices": [ "Sales Revenue – Cost of Goods Sold = gross profit", "Service Revenue – Operating Expenses = gross profit", "Service Revenue – Cost of Goods Manufactured = gross profit", "Sales Revenue – Cost of Goods Manufactured = gross profit" ], "question_id": "fs-idm226901216", "question_text": "Which of the following represents the components of the income statement for a manufacturing business?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "Sales Revenue – Cost of Goods Sold = gross profit" }, "bloom": null, "hl_context": "This simplified income statement demonstrates how merchandising firms account for their sales cycle or process . <hl> Sales revenue is the income generated from the sale of finished goods to consumers rather than from the manufacture of goods or provision of services . <hl> <hl> Since a merchandising firm has to purchase goods for resale , they account for this cost as cost of goods sold — what it cost them to acquire the goods that are then sold to the customer . <hl> <hl> The difference between what the drug store paid for the toothpaste and the revenue generated by selling the toothpaste to consumers is their gross profit . <hl> However , in order to generate sales revenue , merchandising firms incur expenses related to the process of operating their business and selling the merchandise . These costs are called operating expenses , and the business must deduct them from the gross profit to determine the operating profit . ( Note that while the terms “ operating profit ” and “ operating income ” are often used interchangeably , in real-world interactions you should confirm exactly what the user means in using those terms . ) Operating expenses incurred by a merchandising firm include insurance , marketing , administrative salaries , and rent .", "hl_sentences": "Sales revenue is the income generated from the sale of finished goods to consumers rather than from the manufacture of goods or provision of services . Since a merchandising firm has to purchase goods for resale , they account for this cost as cost of goods sold — what it cost them to acquire the goods that are then sold to the customer . The difference between what the drug store paid for the toothpaste and the revenue generated by selling the toothpaste to consumers is their gross profit .", "question": { "cloze_format": "___ represents the components of the income statement for a merchandising business.", "normal_format": "Which of the following represents the components of the income statement for a merchandising business?", "question_choices": [ "Sales Revenue – Cost of Goods Sold = gross profit", "Service Revenue – Operating Expenses = gross profit", "Sales Revenue – Cost of Goods Manufactured = gross profit", "Service Revenue – Cost of Goods Purchased = gross profit" ], "question_id": "fs-idm215551504", "question_text": "Which of the following represents the components of the income statement for a merchandising business?" 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> In certain production environments , once a business has separated the costs of the product into direct materials , direct labor , and overhead , the costs can then be gathered into two broader categories : prime costs and conversion costs . <hl> <hl> Prime costs are the direct material expenses and direct labor costs , while conversion costs are direct labor and general factory overhead combined . <hl> Please note that these two categories of costs are examples of cost categories where a particular cost can be included in both . In this case , direct labor is included in both prime costs and conversion costs .", "hl_sentences": "In certain production environments , once a business has separated the costs of the product into direct materials , direct labor , and overhead , the costs can then be gathered into two broader categories : prime costs and conversion costs . Prime costs are the direct material expenses and direct labor costs , while conversion costs are direct labor and general factory overhead combined .", "question": { "cloze_format": "Conversion costs include all of the following except ___", "normal_format": "Conversion costs include all of the following except which one?", "question_choices": [ "wages of production workers", "depreciation on factory equipment", "factory utilities", "direct materials purchased" ], "question_id": "fs-idm194431552", "question_text": "Conversion costs include all of the following except :" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "selling expense" }, "bloom": null, "hl_context": "<hl> In certain production environments , once a business has separated the costs of the product into direct materials , direct labor , and overhead , the costs can then be gathered into two broader categories : prime costs and conversion costs . <hl> <hl> Prime costs are the direct material expenses and direct labor costs , while conversion costs are direct labor and general factory overhead combined . <hl> Please note that these two categories of costs are examples of cost categories where a particular cost can be included in both . In this case , direct labor is included in both prime costs and conversion costs . <hl> Similarly , not all materials used in the production process can be traced back to a specific unit of production . <hl> <hl> When this is the case , they are classified as indirect material costs . <hl> Although needed to produce the product , these indirect material costs are not traceable to a specific unit of production . For Carolina Yachts , their indirect materials include supplies like tools , glue , wax , and cleaning supplies . These materials are required to build a boat , but management cannot easily track how much of a bottle of glue they use or how often they use a particular drill to build a specific boat . These indirect materials and their associated cost represent a small fraction of the total materials needed to complete a unit of production . Like direct materials , indirect materials are classified as a variable cost since they vary with the level of production . 
Table 2.8 provides some examples of manufacturing costs and their classifications .", "hl_sentences": "In certain production environments , once a business has separated the costs of the product into direct materials , direct labor , and overhead , the costs can then be gathered into two broader categories : prime costs and conversion costs . Prime costs are the direct material expenses and direct labor costs , while conversion costs are direct labor and general factory overhead combined . Similarly , not all materials used in the production process can be traced back to a specific unit of production . When this is the case , they are classified as indirect material costs .", "question": { "cloze_format": "___ is/are not considered a product cost.", "normal_format": "Which of the following is not considered a product cost?", "question_choices": [ "direct materials", "direct labor", "indirect materials", "selling expense" ], "question_id": "fs-idm197848320", "question_text": "Which of the following is not considered a product cost?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "We have spent considerable time identifying and describing the various ways that businesses categorize costs . However , categorization itself is not enough . It is important not only to understand the categorization of costs but to understand the relationships between changes in activity levels and the changes in costs in total . It is worth repeating that when a cost is considered to be fixed , that cost is only fixed for the relevant range . Once the boundary of the relevant range has been reached or moved beyond , fixed costs will change and then remain fixed for the new relevant range . <hl> Remember that , within a relevant range of activity , where the relevant range refers to a specific activity level that is bounded by a minimum and maximum amount , total fixed costs are constant , but costs change on a per-unit basis . <hl> Let ’ s examine an example that demonstrates how changes in activity can affect costs . <hl> Unlike fixed costs that remain fixed in total but change on a per-unit basis , variable costs remain the same per unit , but change in total relative to the level of activity in the business . <hl> Revisiting Tony ’ s T-Shirts , Figure 2.16 shows how the variable cost of ink behaves as the level of activity changes . <hl> We have established that fixed costs do not change in total as the level of activity changes , but what about fixed costs on a per-unit basis ? <hl> Let ’ s examine Tony ’ s screen-printing company to illustrate how costs can remain fixed in total but change on a per-unit basis .", "hl_sentences": "Remember that , within a relevant range of activity , where the relevant range refers to a specific activity level that is bounded by a minimum and maximum amount , total fixed costs are constant , but costs change on a per-unit basis . Unlike fixed costs that remain fixed in total but change on a per-unit basis , variable costs remain the same per unit , but change in total relative to the level of activity in the business . 
We have established that fixed costs do not change in total as the level of activity changes , but what about fixed costs on a per-unit basis ?", "question": { "cloze_format": "Fixed costs are expenses that ________.", "normal_format": "Which of the following is correct about fixed costs that are expenses?", "question_choices": [ "vary in response to changes in activity level", "remain constant on a per-unit basis", "increase on a per-unit basis as activity increases", "remain constant as activity changes" ], "question_id": "fs-idm194442496", "question_text": "Fixed costs are expenses that ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "remain constant on a per-unit basis but change in total based on activity level" }, "bloom": null, "hl_context": "In each of the examples , managers are able to trace the cost of the materials directly to a specific unit ( cake , car , or chair ) produced . <hl> Since the amount of direct materials required will change based on the number of units produced , direct materials are almost always classified as a variable cost . <hl> <hl> They remain fixed per unit of production but change in total based on the level of activity within the business . <hl> <hl> Unlike fixed costs that remain fixed in total but change on a per-unit basis , variable costs remain the same per unit , but change in total relative to the level of activity in the business . <hl> <hl> Revisiting Tony ’ s T-Shirts , Figure 2.16 shows how the variable cost of ink behaves as the level of activity changes . <hl>", "hl_sentences": "Since the amount of direct materials required will change based on the number of units produced , direct materials are almost always classified as a variable cost . They remain fixed per unit of production but change in total based on the level of activity within the business . Unlike fixed costs that remain fixed in total but change on a per-unit basis , variable costs remain the same per unit , but change in total relative to the level of activity in the business . Revisiting Tony ’ s T-Shirts , Figure 2.16 shows how the variable cost of ink behaves as the level of activity changes .", "question": { "cloze_format": "Variable costs are expenses that ________.", "normal_format": "What expenses are variable costs?", "question_choices": [ "remain constant on a per-unit basis but change in total based on activity level", "remain constant on a per-unit basis and remain constant in total regardless of activity level", "decrease on a per-unit basis as activity level increases", "remain constant in total regardless of activity level within a relevant range" ], "question_id": "fs-idm219699584", "question_text": "Variable costs are expenses that ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "direct labor" }, "bloom": null, "hl_context": "As you have learned , much of the power of managerial accounting is its ability to break costs down into the smallest possible trackable unit . <hl> This also applies to manufacturing overhead . <hl> <hl> In many cases , businesses have a need to further refine their overhead costs and will track indirect labor and indirect materials . <hl> Before examining the typical manufacturing firm ’ s process to track cost of goods manufactured , you need basic definitions of three terms in the schedule of Costs of Goods Manufactured : direct materials , direct labor , and manufacturing overhead . 
Direct materials are the components used in the production process whose costs can be identified on a per item-produced basis . For example , if you are producing cars , the engine would be a direct material item . The direct material cost would be the cost of one engine . Direct labor represents production labor costs that can be identified on a per item-produced basis . Referring to the car production example , assume that the engines are placed in the car by individuals rather than by an automated process . The direct labor cost would be the amount of labor in hours multiplied by the hourly labor cost . <hl> Manufacturing overhead generally includes those costs incurred in the production process that are not economically feasible to measure as direct material or direct labor costs . <hl> Examples include the department manager ’ s salary , the production factory ’ s utilities , or glue used to attach rubber molding in the auto production process . Since there are so many possible costs that can be classified as manufacturing overhead , they tend to be grouped and then allocated in a predetermined manner to the production process .", "hl_sentences": "This also applies to manufacturing overhead . In many cases , businesses have a need to further refine their overhead costs and will track indirect labor and indirect materials . Manufacturing overhead generally includes those costs incurred in the production process that are not economically feasible to measure as direct material or direct labor costs .", "question": { "cloze_format": "___ would not be classified as manufacturing overhead.", "normal_format": "Which of the following would not be classified as manufacturing overhead?", "question_choices": [ "indirect materials", "indirect labor", "direct labor", "property taxes on factory" ], "question_id": "fs-idm218400800", "question_text": "Which of the following would not be classified as manufacturing overhead?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "In certain production environments , once a business has separated the costs of the product into direct materials , direct labor , and overhead , the costs can then be gathered into two broader categories : prime costs and conversion costs . <hl> Prime costs are the direct material expenses and direct labor costs , while conversion costs are direct labor and general factory overhead combined . <hl> Please note that these two categories of costs are examples of cost categories where a particular cost can be included in both . In this case , direct labor is included in both prime costs and conversion costs . <hl> Before examining the typical manufacturing firm ’ s process to track cost of goods manufactured , you need basic definitions of three terms in the schedule of Costs of Goods Manufactured : direct materials , direct labor , and manufacturing overhead . <hl> Direct materials are the components used in the production process whose costs can be identified on a per item-produced basis . For example , if you are producing cars , the engine would be a direct material item . The direct material cost would be the cost of one engine . Direct labor represents production labor costs that can be identified on a per item-produced basis . Referring to the car production example , assume that the engines are placed in the car by individuals rather than by an automated process . The direct labor cost would be the amount of labor in hours multiplied by the hourly labor cost . 
Manufacturing overhead generally includes those costs incurred in the production process that are not economically feasible to measure as direct material or direct labor costs . Examples include the department manager ’ s salary , the production factory ’ s utilities , or glue used to attach rubber molding in the auto production process . Since there are so many possible costs that can be classified as manufacturing overhead , they tend to be grouped and then allocated in a predetermined manner to the production process .", "hl_sentences": "Prime costs are the direct material expenses and direct labor costs , while conversion costs are direct labor and general factory overhead combined . Before examining the typical manufacturing firm ’ s process to track cost of goods manufactured , you need basic definitions of three terms in the schedule of Costs of Goods Manufactured : direct materials , direct labor , and manufacturing overhead .", "question": { "cloze_format": "___ are prime costs.", "normal_format": "Which of the following are prime costs?", "question_choices": [ "indirect materials, indirect labor, and direct labor", "direct labor, indirect materials, and indirect labor", "direct labor and indirect labor", "direct labor and direct materials" ], "question_id": "fs-idm219267424", "question_text": "Which of the following are prime costs?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "Average fixed costs per unit fall as the level of activity rises." }, "bloom": null, "hl_context": "We see that total fixed costs remain unchanged , but the average fixed cost per unit goes up and down with the number of boats produced . <hl> As more units are produced , the fixed costs are spread out over more units , making the fixed cost per unit fall . <hl> Likewise , as fewer boats are manufactured , the average fixed costs per unit rises . We can use a similar approach with variable costs .", "hl_sentences": "As more units are produced , the fixed costs are spread out over more units , making the fixed cost per unit fall .", "question": { "cloze_format": "Regarding average fixed costs, the true statement is that ___ .", "normal_format": "Which of the following statements is true regarding average fixed costs?", "question_choices": [ "Average fixed costs per unit remain fixed regardless of level of activity.", "Average fixed costs per unit rise as the level of activity rises.", "Average fixed costs per unit fall as the level of activity rises.", "Average fixed costs per unit cannot be determined." ], "question_id": "fs-idm197905568", "question_text": "Which of the following statements is true regarding average fixed costs?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "Sometimes , a business will need to use cost estimation techniques , particularly in the case of mixed costs , so that they can separate the fixed and variable components , since only the variable components change in the short run . <hl> Estimation is also useful for using current data to predict the effects of future changes in production on total costs . <hl> <hl> Three estimation techniques that can be used include the scatter graph , the high-low method , and regression analysis . 
<hl> Here we will demonstrate the scatter graph and the high-low methods ( you will learn the regression analysis technique in advanced managerial accounting courses .", "hl_sentences": "Estimation is also useful for using current data to predict the effects of future changes in production on total costs . Three estimation techniques that can be used include the scatter graph , the high-low method , and regression analysis .", "question": { "cloze_format": "The high-low method and least-squares regression are used by managers to ________.", "normal_format": "Why are the high-low method and least-squares regression used by managers?", "question_choices": [ "decide whether to make or buy a component part", "minimize corporate tax liability", "maximize output", "estimate costs" ], "question_id": "fs-idm219733328", "question_text": "The high-low method and least-squares regression are used by managers to ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "the high-low method" }, "bloom": null, "hl_context": "<hl> The first step in analyzing mixed costs with the high-low method is to identify the periods with the highest and lowest levels of activity . <hl> In this case , it would be February and May , as shown in Figure 2.33 . We always choose the highest and lowest activity and the costs that correspond with those levels of activity , even if they are not the highest and lowest costs .", "hl_sentences": "The first step in analyzing mixed costs with the high-low method is to identify the periods with the highest and lowest levels of activity .", "question": { "cloze_format": "The method of cost estimation that relies on only two data points is ___.", "normal_format": "Which of the following methods of cost estimation relies on only two data points?", "question_choices": [ "the high-low method", "account analysis", "least-squares regression", "SWOT analysis." ], "question_id": "fs-idm209383360", "question_text": "Which of the following methods of cost estimation relies on only two data points?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> where Y is the total mixed cost , a is the fixed cost , b is the variable cost per unit , and x is the level of activity . <hl> <hl> The cost equation is a linear equation that takes into consideration total fixed costs , the fixed component of mixed costs , and variable cost per unit . <hl> Cost equations can use past data to determine patterns of past costs that can then project future costs , or they can use estimated or expected future data to estimate future costs . Recall the mixed cost equation : As you can see , Tony has both fixed and variable costs associated with his business . His one screen-printing machine can only produce 2,000 T-shirts per month and his current employee can produce 20 shirts per hour ( 160 per 8 - hour work day ) . The space that Tony leases is large enough that he could add an additional screen-printing machine and 1 additional employee . If he expands beyond that , he will need to lease a larger space , and presumably his rent would increase at that point . <hl> It is easy for Tony to predict his costs as long as he operates within the relevant ranges by applying the total cost equation Y = a + bx . 
<hl> So , for Tony , as long as he produces 2,000 or fewer T-shirts , his total cost will be found by Y = $ 6,000 + $ 0.75 x , where the variable cost of $ 0.75 is the $ 0.25 cost of the ink per shirt and $ 0.50 per shirt for labor ( $ 10 per hour wage / 20 shirts per hour ) . As soon as his production passes the 2,000 T-shirts that his one employee and one machine can handle , he will have to add a second employee and lease a second screen-printing machine . In other words , his fixed costs will rise from $ 6,000 to $ 8,000 , and his variable cost per T-shirt will rise from $ 0.75 to $ 1.25 ( ink plus 2 workers ) . Thus , his new cost equation is Y = $ 8,000 + $ 1.25 x until he “ steps up ” again and adds a third machine and moves to a new location with a presumably higher rent . Let ’ s take a look at this in chart form to better illustrate the “ step ” in cost Tony will experience as he steps past 2,000 T-shirts .", "hl_sentences": "where Y is the total mixed cost , a is the fixed cost , b is the variable cost per unit , and x is the level of activity . The cost equation is a linear equation that takes into consideration total fixed costs , the fixed component of mixed costs , and variable cost per unit . It is easy for Tony to predict his costs as long as he operates within the relevant ranges by applying the total cost equation Y = a + bx .", "question": { "cloze_format": "In the cost equation Y = a + bx , Y represents ___ .", "normal_format": "In the cost equation Y = a + bx , Y represents which of the following?", "question_choices": [ "fixed costs", "variable costs", "total costs", "units of production" ], "question_id": "fs-idm213524336", "question_text": "In the cost equation Y = a + bx , Y represents which of the following?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "linear" }, "bloom": null, "hl_context": "A scatter graph shows plots of points that represent actual costs incurred for various levels of activity . Once the scatter graph is constructed , we draw a line ( often referred to as a trend line ) that appears to best fit the pattern of dots . Because the trend line is somewhat subjective , the scatter graph is often used as a preliminary tool to explore the possibility that the relationship between cost and activity is generally a linear relationship . When interpreting a scatter graph , it is important to remember that different people would likely draw different lines , which would lead to different estimations of fixed and variable costs . No one person ’ s line and cost estimates would necessarily be right or wrong compared to another ; they would just be different . <hl> After using a scatter graph to determine whether cost and activity have a linear relationship , managers often move on to more precise processes for cost estimation , such as the high-low method or least-squares regression analysis . <hl> <hl> One of the assumptions that managers must make in order to use the cost equation is that the relationship between activity and costs is linear . <hl> In other words , costs rise in direct proportion to activity . A diagnostic tool that is used to verify this assumption is a scatter graph .", "hl_sentences": "After using a scatter graph to determine whether cost and activity have a linear relationship , managers often move on to more precise processes for cost estimation , such as the high-low method or least-squares regression analysis . 
One of the assumptions that managers must make in order to use the cost equation is that the relationship between activity and costs is linear .", "question": { "cloze_format": "A scatter graph is used to test the assumption that the relationship between cost and activity level is ________.", "normal_format": "A scatter graph is used to test the assumption that the relationship between cost and activity level is what?", "question_choices": [ "curvilinear", "cyclical", "unpredictable", "linear" ], "question_id": "fs-idm219708464", "question_text": "A scatter graph is used to test the assumption that the relationship between cost and activity level is ________." }, "references_are_paraphrase": null } ]
2
2.1 Distinguish between Merchandising, Manufacturing, and Service Organizations

Most businesses can be classified into one or more of these three categories: manufacturing, merchandising, or service. Stated in broad terms, manufacturing firms typically produce a product that is then sold to a merchandising entity (a retailer). For example, Procter and Gamble produces a variety of shampoos that it sells to retailers, such as Walmart, Target, or Walgreens. A service entity provides a service such as accounting or legal services or cable television and internet connections.

Some companies combine aspects of two or all three of these categories within a single business. If it chooses, the same company can both produce and market its products directly to consumers. For example, Nike produces products that it sells directly to consumers and products that it sells to retailers. An example of a company that fits all three categories is Apple, which produces phones, sells them directly to consumers, and also provides services, such as extended warranties.

Regardless of whether a business is a manufacturer of products, a retailer selling to the customer, a service provider, or some combination, all businesses set goals and have strategic plans that guide their operations. Strategic plans look very different from one company to another. For example, a retailer such as Walmart may have a strategic plan that focuses on increasing same-store sales. Facebook’s strategic plan may focus on increasing subscribers and attracting new advertisers. An accounting firm may have long-term goals to open offices in neighboring cities in order to serve more clients. Although the goals differ, the process all companies use to achieve their goals is the same. First, they must develop a plan for how they will achieve the goal, and then management will gather, analyze, and use information regarding costs to make decisions, implement plans, and achieve goals. Table 2.1 lists examples of these costs. Some of these are similar across different types of businesses; others are unique to a particular business.

Costs
Type of Business | Costs Incurred
Manufacturing Business | Direct labor, plant and equipment, manufacturing overhead, raw materials
Merchandising Business | Lease on retail space, merchandise inventory, retail sales staff
Service Business | Billing and collections, computer network equipment, professional staff
Table 2.1

Some costs, such as raw materials, are unique to a particular type of business. Other costs, such as billing and collections, are common to most businesses, regardless of the type. Knowing the basic characteristics of each cost category is important to understanding how businesses measure, classify, and control costs.

Merchandising Organizations

A merchandising firm is one of the most common types of businesses. A merchandising firm is a business that purchases finished products and resells them to consumers. Consider your local grocery store or retail clothing store. Both of these are merchandising firms. Often, merchandising firms are referred to as resellers or retailers since they are in the business of reselling a product to the consumer at a profit. Think about purchasing toothpaste from your local drug store. The drug store purchases tens of thousands of tubes of toothpaste from a wholesale distributor or manufacturer in order to get a better per-tube cost. Then, they add their mark-up (or profit margin) to the toothpaste and offer it for sale to you.
The drug store did not manufacture the toothpaste; instead, they are reselling a toothpaste that they purchased. Virtually all of your daily purchases are made from merchandising firms such as Walmart, Target, Macy’s, Walgreens, and AutoZone. Merchandising firms account for their costs in a different way from other types of business organizations. To understand merchandising costs, Figure 2.2 shows a simplified income statement for a merchandising firm.

This simplified income statement demonstrates how merchandising firms account for their sales cycle or process. Sales revenue is the income generated from the sale of finished goods to consumers rather than from the manufacture of goods or provision of services. Since a merchandising firm has to purchase goods for resale, they account for this cost as cost of goods sold: what it cost them to acquire the goods that are then sold to the customer. The difference between what the drug store paid for the toothpaste and the revenue generated by selling the toothpaste to consumers is their gross profit. However, in order to generate sales revenue, merchandising firms incur expenses related to the process of operating their business and selling the merchandise. These costs are called operating expenses, and the business must deduct them from the gross profit to determine the operating profit. (Note that while the terms “operating profit” and “operating income” are often used interchangeably, in real-world interactions you should confirm exactly what is meant when those terms are used.) Operating expenses incurred by a merchandising firm include insurance, marketing, administrative salaries, and rent.

Concepts In Practice: Balancing Revenue and Expenses

Plum Crazy is a small boutique selling the latest in fashion trends. They purchase clothing and fashion accessories from several distributors and manufacturers for resale. In 2017, they reported these revenue and expenses: Before examining the income statement, let’s look at Cost of Goods Sold in more detail. Merchandising companies have to account for inventory, a topic covered in Inventory. As you recall, merchandising companies carry inventory from one period to another. When they prepare their income statement, a crucial step is identifying the actual cost of goods that were sold for the period. For Plum Crazy, their Cost of Goods Sold was calculated as shown in Figure 2.4. Once the calculation of the Cost of Goods Sold has been completed, Plum Crazy can now construct their income statement, which would appear as shown in Figure 2.5.

Since merchandising firms must pass the cost of goods on to the consumer to earn a profit, they are extremely cost-sensitive. Large merchandising businesses like Walmart, Target, and Best Buy manage costs by buying in bulk and negotiating with manufacturers and suppliers to drive down the per-unit cost.

Continuing Application: Introduction to the Gearhead Outfitters Story

Gearhead Outfitters, founded by Ted Herget in 1997 in Jonesboro, AR, is a retail chain that sells outdoor gear for men, women, and children. The company’s inventory includes clothing, footwear for hiking and running, camping gear, backpacks, and accessories, by brands such as The North Face, Birkenstock, Wolverine, Yeti, Altra, Mizuno, and Patagonia. Ted fell in love with the outdoor lifestyle while working as a ski instructor in Colorado and wanted to bring that feeling back home to Arkansas. And so, Gearhead was born in a small downtown location in Jonesboro.
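The merchandiser’s sales cycle described above reduces to three subtractions. The short Python sketch below walks through them with hypothetical amounts; these are illustrative figures, not Plum Crazy’s actual numbers from Figure 2.4 and Figure 2.5:

```python
# Hypothetical amounts for illustration only; not Plum Crazy's actual figures.
sales_revenue = 320_000
beginning_inventory = 40_000
purchases = 190_000
ending_inventory = 35_000
operating_expenses = 85_000

# Cost of goods sold: what the merchandiser paid for the goods it actually sold.
cost_of_goods_sold = beginning_inventory + purchases - ending_inventory

# Gross profit: sales revenue less the cost of acquiring the goods sold.
gross_profit = sales_revenue - cost_of_goods_sold

# Operating income: gross profit less the costs of running the business.
operating_income = gross_profit - operating_expenses

print(f"Cost of goods sold: ${cost_of_goods_sold:,}")  # $195,000
print(f"Gross profit:       ${gross_profit:,}")        # $125,000
print(f"Operating income:   ${operating_income:,}")    # $40,000
```

Whatever the dollar amounts, the structure mirrors Figure 2.2: revenue, less cost of goods sold, less operating expenses; only the numbers change from firm to firm.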
The company has had great success over the years, expanding to numerous locations in Ted’s home state, as well as Louisiana, Oklahoma, and Missouri. While Ted knew his industry when starting Gearhead, like many entrepreneurs he faced regulatory and financial issues that were new to him. Several of these issues were related to accounting and the wealth of decision-making information that accounting systems provide: measuring revenue and expenses, providing information about cash flow to potential lenders, analyzing whether profit and positive cash flow are sustainable enough to allow for expansion, and managing inventory levels. Accounting, or the preparation of financial statements (balance sheet, income statement, and statement of cash flows), provides the mechanism for business owners such as Ted to make fundamentally sound business decisions.

Link to Learning

Walmart is inarguably a retail giant, but how did the company become so successful? Read the article about how low costs have allowed Walmart to keep prices low while still making a large profit to learn more.

Manufacturing Organizations

A manufacturing organization is a business that uses parts, components, or raw materials to produce finished goods (Figure 2.6). These finished goods are sold either directly to the consumer or to other manufacturing firms that use them as a component part to produce a finished product. For example, Diehard manufactures automobile batteries that are sold directly to consumers by retail outlets such as AutoZone, Costco, and Advance Auto. However, these batteries are also sold to automobile manufacturers such as Ford, Chevrolet, or Toyota to be installed in cars during the manufacturing process. Regardless of who the final consumer of the product is, Diehard must control its costs so that the sale of batteries generates revenue sufficient to keep the organization profitable.

Manufacturing firms are more complex organizations than merchandising firms and therefore have a larger variety of costs to control. For example, a merchandising firm may purchase furniture to sell to consumers, whereas a manufacturing firm must acquire raw materials such as lumber, paint, hardware, glue, and varnish that they transform into furniture. The manufacturer incurs additional costs, such as direct labor, to convert the raw materials into furniture. Operating a physical plant where the production process takes place also generates costs. Some of these costs are tied directly to production, while others are general expenses necessary to operate the business. Because the manufacturing process can be highly complex, manufacturing firms constantly evaluate their production processes to determine where cost savings are possible.

Concepts In Practice: Cost Control

Controlling costs is an integral function of all managers, but companies often hire personnel to specifically oversee cost control. As you’ve learned, controlling costs is vital in all industries, but at Hilton Hotels, they translate this into the position of Cost Controller. Here is an excerpt from one of Hilton’s recent job postings.

Position Title: Cost Controller

Job Description: “A Cost Controller will work with all Heads of Departments to effectively control all products that enter and exit the hotel.” 1

1 Hilton. “Cost Controller: Job Description.” Hosco.
https://www.hosco.com/en/job/hilton-istanbul-bomonti-hotel-conference-center/cost-controller

Job Requirements: “As Cost Controller, you will work with all Heads of Departments to effectively control all products that enter and exit the hotel. Specifically, you will be responsible for performing the following tasks to the highest standards:

- Review the daily intake of products into the hotel and ensure accurate pricing and quantity of goods received
- Control the stores by ensuring accuracy of inventory and stock control and the pricing of goods received
- Alert relevant parties of slow-moving goods and goods nearing expiry dates to reduce waste and alter product purchasing to accommodate
- Manage cost reporting on a weekly basis
- Attend finance meetings, as required
- Maintain good communication and working relationships with all hotel areas
- Act in accordance with fire, health and safety regulations and follow the correct procedures when required” 2

2 Hilton. “Cost Controller: Job Description.” Hosco. https://www.hosco.com/en/job/hilton-istanbul-bomonti-hotel-conference-center/cost-controller

As you can see, the individual in this position will interact with others across the organization to find ways to control costs for the benefit of the company. Some of the benefits of cost control include:

- Lowering overall company expenses, thereby increasing net income
- Freeing up financial resources for investment in research & development of new or improved products, goods, or services
- Providing funding for employee development and training, benefits, and bonuses
- Allowing corporate earnings to be used to support humanitarian and charitable causes

Manufacturing organizations account for costs in a way that is similar to that of merchandising firms. However, as you will learn, there is a significant difference in the calculation of cost of goods sold. Figure 2.7 shows a simplification of the income statement for a manufacturing firm. At first it appears that there is no difference between the income statements of the merchandising firm and the manufacturing firm. However, the difference is in how these two types of firms account for the cost of goods sold. Merchandising firms determine their cost of goods sold by accounting for both existing inventory and new purchases, as shown in the Plum Crazy example. It is typically easy for merchandising firms to calculate their costs because they know exactly what they paid for their merchandise. Unlike merchandising firms, manufacturing firms must calculate their cost of goods sold based on how much they manufacture and how much it costs them to manufacture those goods. This requires manufacturing firms to prepare an additional statement before they can prepare their income statement. This additional statement is the Cost of Goods Manufactured statement. Once the cost of goods manufactured is calculated, the cost is then incorporated into the manufacturing firm’s income statement to calculate its cost of goods sold. One thing manufacturing firms must consider in their cost of goods manufactured is that, at any given time, they have products at varying levels of production: some are finished and others are still in process. The cost of goods manufactured statement measures the cost of the goods actually finished during the period, whether or not they were started during that period.
Before examining the typical manufacturing firm’s process to track cost of goods manufactured, you need basic definitions of three terms in the schedule of Costs of Goods Manufactured: direct materials, direct labor, and manufacturing overhead. Direct materials are the components used in the production process whose costs can be identified on a per-item-produced basis. For example, if you are producing cars, the engine would be a direct material item. The direct material cost would be the cost of one engine. Direct labor represents production labor costs that can be identified on a per-item-produced basis. Referring to the car production example, assume that the engines are placed in the car by individuals rather than by an automated process. The direct labor cost would be the amount of labor in hours multiplied by the hourly labor cost. Manufacturing overhead generally includes those costs incurred in the production process that are not economically feasible to measure as direct material or direct labor costs. Examples include the department manager’s salary, the production factory’s utilities, or glue used to attach rubber molding in the auto production process. Since there are so many possible costs that can be classified as manufacturing overhead, they tend to be grouped and then allocated in a predetermined manner to the production process.

Figure 2.8 is an example of the calculation of the Cost of Goods Manufactured for Koeller Manufacturing. It demonstrates the relationship between cost of goods manufactured and cost of goods in progress and includes the three main types of manufacturing costs. As you can see, the manufacturing firm takes into account its work-in-process (WIP) inventory as well as the costs incurred during the current period to finish not only the units that were in the beginning WIP inventory, but also a portion of any production that was started but not finished during the month. Notice that the current manufacturing costs, or the additional costs incurred during the month, include direct materials, direct labor, and manufacturing overhead. Direct materials are calculated as:

Beginning Raw Materials Inventory + Raw Materials Purchases − Ending Raw Materials Inventory = Direct Materials Used

All of these costs are carefully tracked and classified because the cost of manufacturing is a vital component of the schedule of cost of goods sold. To continue with the example, Koeller Manufacturing calculated that the cost of goods sold was $95,000, which is carried through to the Schedule of Cost of Goods Sold (Figure 2.9). Now when Koeller Manufacturing prepares its income statement, the simplified statement will appear as shown in Figure 2.10. So, even though the income statements for the merchandising firm and the manufacturing firm appear very similar at first glance, there are many more costs to be captured by the manufacturing firm. Figure 2.11 compares and contrasts the methods merchandising and manufacturing firms use to calculate the cost of goods sold in their income statement.

Concepts In Practice: Calculating Cost of Goods Sold in Manufacturing

Just Desserts is a bakery that produces and sells cakes and pies to grocery stores for resale. Although they are a small manufacturer, they incur many of the costs of a much larger organization. In 2017, they reported these revenue and expenses: Their income statement is shown in Figure 2.12. You’ll learn more about the flow of manufacturing costs in Identify and Apply Basic Cost Behavior Patterns.
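To tie the whole flow together, here is a minimal Python sketch of the schedules in Figure 2.8 and Figure 2.9. The component amounts are hypothetical, chosen only so that the result matches the $95,000 cost of goods sold from the Koeller example; they are not Koeller Manufacturing’s actual figures:

```python
# Hypothetical component amounts; only the final $95,000 matches the
# Koeller Manufacturing example from the chapter.

# Step 1: direct materials used during the period.
beginning_raw_materials = 12_000
raw_materials_purchases = 50_000
ending_raw_materials = 10_000
direct_materials_used = (beginning_raw_materials
                         + raw_materials_purchases
                         - ending_raw_materials)          # 52,000

# Step 2: total manufacturing costs added this period.
direct_labor = 30_000
manufacturing_overhead = 18_000
current_manufacturing_costs = (direct_materials_used
                               + direct_labor
                               + manufacturing_overhead)  # 100,000

# Step 3: cost of goods manufactured (goods finished this period),
# adjusting for work-in-process (WIP) inventory.
beginning_wip = 8_000
ending_wip = 6_000
cost_of_goods_manufactured = (beginning_wip
                              + current_manufacturing_costs
                              - ending_wip)               # 102,000

# Step 4: cost of goods sold, adjusting for finished goods inventory.
beginning_finished_goods = 15_000
ending_finished_goods = 22_000
cost_of_goods_sold = (beginning_finished_goods
                      + cost_of_goods_manufactured
                      - ending_finished_goods)            # 95,000

print(f"Cost of goods sold: ${cost_of_goods_sold:,}")
```

The key structural difference from the merchandiser is visible in steps 1 through 3: the manufacturer must build up its cost of goods manufactured from materials, labor, and overhead before it can compute cost of goods sold.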
For now, recognize that, unlike in a merchandising firm, calculating cost of goods sold in a manufacturing firm can be a complex task for management.

Service Organizations

A service organization is a business that earns revenue by providing intangible products, those that have no physical substance. The service industry is a vital sector of the U.S. economy, providing 65% of the U.S. private-sector gross domestic product and more than 79% of U.S. private-sector jobs. 3 If tangible products, physical goods that customers can handle and see, are provided by a service organization, they are considered ancillary sources of revenue. Large service organizations such as airlines, insurance companies, and hospitals incur a variety of costs in the provision of their services. Costs such as labor, supplies, equipment, advertising, and facility maintenance can quickly spiral out of control if management is not careful. Therefore, although their cost drivers are sometimes not as complex as those of other types of firms, cost identification and control are every bit as important in the service industry.

3 John Ward. “The Services Sector: How Best to Measure It?” International Trade Administration. Oct. 2010. https://2016.trade.gov/publications/ita-newsletter/1010/services-sector-how-best-to-measure-it.asp. “United States GDP from Private Services Producing Industries.” Trading Economics / U.S. Bureau of Economic Analysis. July 2018. https://tradingeconomics.com/united-states/gdp-from-services. “Employment in Services (% of Total Employment) (Modeled ILO Estimate).” International Labour Organization, ILOSTAT database. The World Bank. Sept. 2018. https://data.worldbank.org/indicator/SL.SRV.EMPL.ZS.

For example, consider the services that a law firm provides its clients. What clients pay for are services such as representation in legal proceedings, contract negotiations, and preparation of wills. Although the true value of these services is not contained in their physical form, they are of value to the client and the source of revenue to the firm. The managing partners in the firm must be as cost-conscious as their counterparts in merchandising and manufacturing firms. Accounting for costs in service firms differs from merchandising and manufacturing firms in that they do not purchase or produce goods. For example, consider a medical practice. Although some services provided are tangible products, such as medications or medical devices, the primary benefits the physicians provide their patients are the intangible services composed of their knowledge, experience, and expertise. Service providers have some costs (or revenue) derived from tangible goods that must be taken into account when pricing their services, but their largest cost categories are more likely to be administrative and personnel costs rather than product costs.

For example, Whichard & Klein, LLP, is a full-service accounting firm with their primary offices in Baltimore, Maryland. With two senior partners and a small staff of accountants and payroll specialists, the majority of the costs they incur are related to personnel. The value of the accounting and payroll services they provide to their clients is intangible in comparison to goods sold by a merchandiser or produced by a manufacturer, but it has value and is the primary source of revenue for the firm. At the end of 2019, Whichard and Klein reported the following revenue and expenses: Their Income Statement for the period is shown in Figure 2.13.
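Because a service firm has no cost of goods sold line, its income statement collapses to a single subtraction. A minimal sketch with hypothetical amounts, not Whichard & Klein’s actual figures:

```python
# Hypothetical amounts; not Whichard & Klein's actual figures from Figure 2.13.
service_revenue = 500_000
operating_expenses = 410_000  # chiefly personnel and administrative costs

# No cost of goods sold line: revenue minus operating expenses is operating income.
operating_income = service_revenue - operating_expenses
print(f"Operating income: ${operating_income:,}")  # $90,000
```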
The bulk of the expenses incurred by Whichard & Klein are in personnel and administrative/office costs, which are very common among businesses that have services as their primary source of revenue.

Concepts In Practice: Revenue and Expenses for a Law Office

The revenue and expenses for a law firm illustrate how the income statement for a service firm differs from that of a merchandising or manufacturing firm. Welch & Graham is a well-established law firm that provides legal services in the areas of criminal law, real estate transactions, and personal injury. The firm employs several attorneys, paralegals, and office support staff. In 2017, they reported the following revenue and expenses: Their income statement is shown in Figure 2.14. As you can see, the majority of the costs incurred by the law firm are personnel-related. They may also incur costs from equipment and materials such as computer networks, phone and switchboard equipment, rent, insurance, and law library materials necessary to support the practice, but these costs represent a much smaller percentage of total cost than the administrative and personnel costs.

Think It Through: Expanding a Business

Margo is the owner of a small retail business that sells gifts and home decorating accessories. Her business is well established, and she is now considering taking over additional retail space to expand her business to include gourmet foods and gift baskets. Based on customer feedback, she is confident that there is a demand for these items, but she is unsure how large that demand really is. Expanding her business this way will require that she incur not only new costs but also increases in existing costs. Margo has asked for your help in identifying the impact of her decision to expand in terms of her costs. When discussing these cost increases, be sure to specifically identify those costs that are directly tied to her products and that would be considered overhead expenses.

2.2 Identify and Apply Basic Cost Behavior Patterns

Now that we have identified the three key types of businesses, let’s identify cost behaviors and apply them to the business environment. In managerial accounting, different companies use the term cost in different ways depending on how they will use the cost information. Different decisions require different costs classified in different ways. For instance, a manager may need cost information to plan for the coming year or to make decisions about expanding or discontinuing a product or service. In practice, the classification of costs changes as the use of the cost data changes. In fact, a single cost, such as rent, may be classified by one company as a fixed cost, by another company as a committed cost, and by yet another company as a period cost. Understanding different cost classifications and how certain costs can be used in different ways is critical to managerial accounting.

Ethical Considerations: Institute of Management Accountants and Certified Management Accountant Certification

Managerial accountants provide businesses with clear and direct insight into the monetary effects of any particular operational action under consideration. They are expected to report financial information in a transparent and ethical fashion. The Institute of Management Accountants (IMA) offers the Certified Management Accountant (CMA) certification. IMA members and CMAs agree to uphold a set of ethical principles that includes honesty, fairness, objectivity, and responsibility.
Any managerial accountant, even if not an IMA member or certified CMA, should act in accordance with these principles and encourage coworkers to follow ethical principles for reporting financial results and the monetary effects of financial decisions related to their organization. The IMA Committee on Ethics encourages organizations and individuals to adopt, promote, and execute business practices consistent with high ethical standards. 4

4 “Ethics Center.” Institute of Management Accountants. https://www.imanet.org/career-resources/ethics-center?ssopc=1

Major Cost Behavior Patterns

Any discussion of costs begins with the understanding that most costs will be classified in one of three ways: fixed costs, variable costs, or mixed costs. The costs that don’t fall into one of these three categories are hybrid costs, which are examined only briefly because they are addressed in more advanced accounting courses. Because fixed and variable costs are the foundation of all other cost classifications, understanding whether a cost is a fixed cost or a variable cost is very important.

Fixed versus Variable Costs

A fixed cost is an unavoidable operating expense that does not change in total over the short term, even if a business experiences variation in its level of activity. Table 2.2 illustrates the types of fixed costs for merchandising, service, and manufacturing organizations.

Examples of Fixed Costs
Type of Business | Fixed Cost
Merchandising | Rent, insurance, managers’ salaries
Manufacturing | Property taxes, insurance, equipment leases
Service | Rent, straight-line depreciation, administrative salaries, and insurance
Table 2.2

We have established that fixed costs do not change in total as the level of activity changes, but what about fixed costs on a per-unit basis? Let’s examine Tony’s screen-printing company to illustrate how costs can remain fixed in total but change on a per-unit basis. Tony operates a screen-printing company, specializing in custom T-shirts. One of his fixed costs is his monthly rent of $1,000. Regardless of whether he produces and sells any T-shirts, he is obligated under his lease to pay $1,000 per month. However, he can consider this fixed cost on a per-unit basis, as shown in Figure 2.15. Tony’s information illustrates that, despite the unchanging fixed cost of rent, as the level of activity increases, the per-unit fixed cost falls. In other words, fixed costs remain fixed in total but can increase or decrease on a per-unit basis.

Two specialized types of fixed costs are committed fixed costs and discretionary fixed costs. These classifications are generally used for long-range planning purposes and are covered in upper-level managerial accounting courses, so they are only briefly described here. Committed fixed costs are fixed costs that typically cannot be eliminated if the company is going to continue to function. An example would be the lease of factory equipment for a production company. Discretionary fixed costs generally are fixed costs that can be incurred during some periods and postponed during other periods but that cannot normally be eliminated permanently. Examples could include advertising campaigns and employee training. Both of these costs could potentially be postponed temporarily, but the company would probably incur negative effects if the costs were permanently eliminated.
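The behavior shown in Figure 2.15 is easy to verify numerically. The sketch below divides Tony’s $1,000 rent across several hypothetical production levels to show the per-unit fixed cost falling as activity rises:

```python
# A minimal sketch of Figure 2.15's idea: Tony's $1,000 monthly rent is
# fixed in total, so the rent cost per T-shirt falls as output rises.
# The production levels below are hypothetical illustrations.
monthly_rent = 1_000  # total fixed cost, unchanged within the relevant range

for shirts_produced in (250, 500, 1_000, 2_000):
    rent_per_shirt = monthly_rent / shirts_produced
    print(f"{shirts_produced:>5} shirts -> ${rent_per_shirt:.2f} rent per shirt")
# 250 shirts -> $4.00; 500 -> $2.00; 1,000 -> $1.00; 2,000 -> $0.50
```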
In addition to understanding fixed costs, it is critical to understand variable costs, the second fundamental cost classification. A variable cost is one that varies in direct proportion to the level of activity within the business. Typical costs that are classified as variable costs are the cost of raw materials used to produce a product, labor applied directly to the production of the product, and overhead expenses that change based upon activity. For each variable cost, there is some activity that drives the variable cost up or down. A cost driver is defined as any activity that causes the organization to incur a variable cost. Examples of cost drivers are direct labor hours, machine hours, units produced, and units sold. Table 2.3 provides examples of variable costs and their associated cost drivers.

Variable Costs and Associated Cost Drivers

Type of Business | Variable Cost | Cost Driver
Merchandising | Total monthly hourly wages for sales staff | Hours business is open during month
Manufacturing | Direct materials used to produce one unit of product | Number of units produced
Service | Cost of laundering linens and towels | Number of hotel rooms occupied

Table 2.3

Unlike fixed costs that remain fixed in total but change on a per-unit basis, variable costs remain the same per unit, but change in total relative to the level of activity in the business. Revisiting Tony's T-Shirts, Figure 2.16 shows how the variable cost of ink behaves as the level of activity changes. As Figure 2.16 shows, the variable cost per unit (per T-shirt) does not change as the number of T-shirts produced increases or decreases. However, the variable costs change in total as the number of units produced increases or decreases. In short, total variable costs rise and fall as the level of activity (the cost driver) rises and falls.

Distinguishing between fixed and variable costs is critical because the total cost is the sum of all fixed costs (the total fixed costs) and all variable costs (the total variable costs). For every unit produced, every customer served, or every hotel room rented, for example, managers can determine their total costs both per unit of activity and in total by combining their fixed and variable costs together. The graphic in Figure 2.17 illustrates the concept of total costs.

Remember that the reason organizations take the time and effort to classify costs as either fixed or variable is to be able to control costs. When they classify costs properly, managers can use cost data to make decisions and plan for the future of the business.

Concepts In Practice

Boeing 5

If you've ever flown on an airplane, there's a good chance you know Boeing. The Boeing Company generates around $90 billion each year from selling thousands of airplanes to commercial and military customers around the world. It employs around 200,000 people, and it's indirectly responsible for more than a million jobs through its suppliers, contractors, regulators, and others. Its main assembly line in Everett, WA, is housed in the largest building in the world, a colossal facility that covers nearly a half-billion cubic feet. Boeing is, simply put, a massive enterprise. And yet, Boeing's managers know the exact cost of everything the company uses to produce its airplanes: every propeller, flap, seat belt, welder, computer programmer, and so forth. Moreover, they know how those costs would change if they produced more airplanes or fewer. They also know the price at which they sold each plane and the profit the company made on each sale.
Boeing's executives expect their managers to know this information, in real time, if the company is to remain profitable.

5 Attribution: Modification of work by Sharon Kioko and Justin Marlowe. "Cost Analysis." Financial Strategy for Public Managers. CC BY 4.0. https://press.rebus.community/financialstrategy/chapter/cost-analysis/

Link between Business Decision and Cost Information Utilized

Decision | Cost Information
Discontinue a product line | Variable costs, overhead directly tied to product, potential reduction in fixed costs
Add second production shift | Labor costs, cost of fringe benefits, potential overhead increases (utilities, security personnel)
Open additional retail outlets | Fixed costs, variable operating costs, potential increases in administrative expenses at corporate headquarters

Table 2.4

Average Fixed Costs versus Average Variable Costs

Another way management may want to consider their costs is as average costs. Under this approach, managers can calculate both average fixed and average variable costs. Average fixed cost (AFC) is the total fixed costs divided by the total number of units produced, which results in a per-unit cost. The formula is:

AFC = Total fixed costs ÷ Total number of units produced

To show how a company would use AFC to make business decisions, consider Carolina Yachts, a company that manufactures sportfishing boats that are sold to consumers through a network of marinas and boat dealerships. Carolina Yachts produces 625 boats per year, and their total annual fixed costs are $1,560,000. If they want to determine an average fixed cost per unit, they will find it using the formula for AFC:

AFC = $1,560,000 ÷ 625 = $2,496 per boat

When they produce 625 boats, Carolina Yachts has an AFC of $2,496 per boat. What happens to the AFC if they increase or decrease the number of boats produced? Figure 2.18 shows the AFC for different numbers of boats. We see that total fixed costs remain unchanged, but the average fixed cost per unit goes up and down with the number of boats produced. As more units are produced, the fixed costs are spread out over more units, making the fixed cost per unit fall. Likewise, as fewer boats are manufactured, the average fixed cost per unit rises.

We can use a similar approach with variable costs. Average variable cost (AVC) is the total variable costs divided by the total number of units produced, which results in a per-unit cost. Like AFC, we can use this formula:

AVC = Total variable costs ÷ Total number of units produced

To demonstrate AVC, let's return to Carolina Yachts, which incurs total variable costs of $6,875,000 when they produce 625 boats per year. They can express this as an average variable cost per unit:

AVC = $6,875,000 ÷ 625 = $11,000 per boat

Because average variable costs are the average of all costs that change with production levels on a per-unit basis and include both direct materials and direct labor, managers often use AVC to determine if production should continue or not in the short run. As long as the price Carolina Yachts receives for their boats is greater than the per-unit AVC, they know that they are not only covering the variable cost of production, but each boat is also making a contribution toward covering fixed costs. If, at any point, the average variable cost per boat rises to the point that the price no longer covers the AVC, Carolina Yachts may consider halting production until the variable costs fall again.
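Both averages reduce to one-line formulas, so they translate directly into code. The following sketch is a Python illustration of ours, using only the Carolina Yachts figures quoted above:

def average_fixed_cost(total_fixed_costs, units_produced):
    # AFC = total fixed costs / total number of units produced
    return total_fixed_costs / units_produced

def average_variable_cost(total_variable_costs, units_produced):
    # AVC = total variable costs / total number of units produced
    return total_variable_costs / units_produced

print(average_fixed_cost(1_560_000, 625))     # 2496.0  -> $2,496 per boat
print(average_variable_cost(6_875_000, 625))  # 11000.0 -> $11,000 per boat

Changing the 625 to a higher or lower boat count reproduces the behavior shown in Figure 2.18: AFC moves, while total fixed costs do not.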
These changes in variable costs per unit could be caused by circumstances beyond their control, such as a shortage of raw materials or an increase in shipping costs due to high gas prices. In any case, average variable cost can be useful for managers to get a big picture look at their variable costs per unit.

Link to Learning

Watch the video from Khan Academy that uses the scenario of computer programming to teach fixed, variable, and marginal cost to learn more.

Mixed Costs and Stepped Costs

Not all costs can be classified as purely fixed or purely variable. Mixed costs are those that have both a fixed and variable component. It is important, however, to be able to separate mixed costs into their fixed and variable components because, typically, in the short run, we can only change variable costs but not most fixed costs. To examine how these mixed costs actually work, consider the Ocean Breeze hotel.

The Ocean Breeze is located in a resort area where the county assesses an occupancy tax that has both a fixed and a variable component. Ocean Breeze pays $2,000 per month, regardless of the number of rooms rented. Even if it does not rent a single room during the month, Ocean Breeze still must remit this tax to the county. The hotel treats this $2,000 as a fixed cost. However, for every night that a room is rented, Ocean Breeze must remit an additional tax amount of $5.00 per room per night. As a result, the occupancy tax is a mixed cost. Figure 2.19 further illustrates how this mixed cost behaves.

Notice that Ocean Breeze cannot control the fixed portion of this cost and that it remains fixed in total, regardless of the activity level. On the other hand, the variable component is fixed per unit, but changes in total based upon the level of activity. The fixed portion of this cost plus the variable portion of this cost combine to make the total cost. As a result, the formula for total cost looks like this:

Y = a + bx

where Y is the total mixed cost, a is the fixed cost, b is the variable cost per unit, and x is the level of activity.

Graphically, mixed costs can be explained as shown in Figure 2.20. The graph shows that mixed costs are typically both fixed and linear in nature. In other words, they will often have an initial cost, in Ocean Breeze's case, the $2,000 fixed component of the occupancy tax, and a variable component, the $5 per night occupancy tax. Note that the Ocean Breeze mixed cost graph starts at an initial $2,000 for the fixed component and then increases by $5 for each night their rooms are occupied.

Some costs behave less linearly. A cost that changes with the level of activity but is not linear is classified as a stepped cost. Step costs remain constant at a fixed amount over a range of activity. The range over which these costs remain unchanged (fixed) is referred to as the relevant range, which is defined as a specific activity level that is bounded by a minimum and maximum amount. Within this relevant range, managers can predict revenue or cost levels. Then, at certain points, the step costs increase to a higher amount. Both fixed and variable costs can take on this stair-step behavior. For instance, wages often act as a stepped variable cost when employees are paid a flat salary and a commission or when the company pays overtime. Further, when additional machinery or equipment is placed into service, businesses will see their fixed costs stepped up. The "trigger" for a cost to step up is the relevant range. Graphically, step costs appear like stair steps (Figure 2.21).
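Before working through a step cost example, it is worth seeing how compact the mixed cost equation is in practice. This sketch, an illustration of ours, plugs the Ocean Breeze numbers into Y = a + bx:

def occupancy_tax(room_nights):
    # Mixed cost Y = a + bx for the Ocean Breeze:
    # a = $2,000 fixed monthly tax, b = $5 per room-night rented
    return 2_000 + 5 * room_nights

print(occupancy_tax(0))    # 2000 -> the tax is owed even with no rentals
print(occupancy_tax(300))  # 3500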
For example, suppose a quality inspector can inspect a maximum of 80 units in a regular 8-hour shift and his salary is a fixed cost. Then the relevant range for QA inspection is from 0–80 units per shift. If demand for these units increases and more than 80 inspections are needed per shift, the relevant range has been exceeded and the business will have one of two choices:

(1) Pay the quality inspector overtime in order to have the additional units inspected. This overtime will "step up" the variable cost per unit. The advantage of handling the increased cost in this way is that when demand falls, the cost can quickly be "stepped down" again. Because these types of step costs can be adjusted quickly and often, they are often still treated as variable costs for planning purposes.

(2) "Step up" fixed costs. If the company hires a second quality inspector, they would be stepping up their fixed costs. In effect, they will double the relevant range to allow for a maximum of 160 inspections per shift, assuming the second QA inspector can inspect an additional 80 units per shift. The downside to this approach is that once the new QA inspector is hired, if demand falls again, the company will be incurring fixed costs that are unnecessary. For this reason, adding salaried personnel to address a short-term increase in demand is not a decision most businesses make.

Step costs are best explained in the context of a business experiencing increases in activity beyond the relevant range. As an example, let's return to Tony's T-Shirts. Tony's cost of operations and the associated relevant ranges are shown in Table 2.5.

Tony's T-Shirts Cost Options

Item | Cost | Type of Cost | Relevant Range
Lease on screen-printing machine | $2,000 per month | Fixed | 0–2,000 T-shirts per month
Employee | $10 per hour | Variable | 20 shirts per hour
Tony's salary | $2,500 per month | Fixed | N/A
Screen-printing ink | $0.25 per shirt | Variable | N/A
Building rent | $1,500 per month | Fixed | 2 screen-printing machines and 2 employees

Table 2.5

As you can see, Tony has both fixed and variable costs associated with his business. His one screen-printing machine can only produce 2,000 T-shirts per month and his current employee can produce 20 shirts per hour (160 per 8-hour work day). The space that Tony leases is large enough that he could add an additional screen-printing machine and 1 additional employee. If he expands beyond that, he will need to lease a larger space, and presumably his rent would increase at that point. It is easy for Tony to predict his costs as long as he operates within the relevant ranges by applying the total cost equation Y = a + bx. So, for Tony, as long as he produces 2,000 or fewer T-shirts, his total cost will be found by Y = $6,000 + $0.75x, where the variable cost of $0.75 is the $0.25 cost of the ink per shirt and $0.50 per shirt for labor ($10 per hour wage / 20 shirts per hour). As soon as his production passes the 2,000 T-shirts that his one employee and one machine can handle, he will have to add a second employee and lease a second screen-printing machine. In other words, his fixed costs will rise from $6,000 to $8,000, and his variable cost per T-shirt will rise from $0.75 to $1.25 (ink plus 2 workers). Thus, his new cost equation is Y = $8,000 + $1.25x until he "steps up" again and adds a third machine and moves to a new location with a presumably higher rent. Let's take a look at this in chart form to better illustrate the "step" in cost Tony will experience as he steps past 2,000 T-shirts.
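Tony's two cost equations can also be written as a single piecewise function, which previews the chart discussed next. This sketch (ours) is only a restatement of the equations above; the assumption that the second relevant range ends at 4,000 shirts follows from the stated capacity of two machines.

def tonys_total_cost(shirts):
    # Total monthly cost, valid only within the relevant ranges in the text.
    if shirts <= 2_000:
        return 6_000 + 0.75 * shirts   # one machine, one employee
    if shirts <= 4_000:
        return 8_000 + 1.25 * shirts   # second machine and employee added
    raise ValueError("beyond the relevant ranges covered in the text")

for volume in (500, 2_000, 2_001, 4_000):
    print(volume, tonys_total_cost(volume))  # note the jump just past 2,000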
Tony's cost information is shown in the chart for volumes between 500 and 4,000 shirts. When presented graphically, notice what happens when Tony steps outside of his original relevant range and has to add a second employee and a second screen-printing machine:

It is important to remember that even though Tony's costs stepped up when he exceeded his original capacity (relevant range), the behavior of the costs did not change. His fixed costs still remained fixed in total and his total variable cost rose as the number of T-shirts he produced rose. Table 2.6 summarizes how costs behave within their relevant ranges.

Summary of Fixed and Variable Cost Behaviors

Cost | In Total | Per Unit
Variable cost | Changes in response to the level of activity | Remains fixed per unit regardless of the level of activity
Fixed cost | Does not change with the level of activity, within the relevant range, but does change when the relevant range changes | Changes based upon activity within the relevant range: increased activity decreases per-unit cost; decreased activity increases per-unit cost

Table 2.6

Product versus Period Costs

Many businesses can make decisions by dividing their costs into fixed and variable costs, but there are some business decisions that require grouping costs differently. Sometimes companies need to consider how those costs are reported in the financial statements. At other times, companies group costs based on functions within the business. For example, a business would group administrative and selling expenses by the period (monthly or quarterly) so that they can be reported on an income statement. However, a manufacturing firm may carry product costs such as materials from one period to the other in order to have the costs "travel" with the units being produced. It is possible that both the selling and administrative costs and the materials costs have fixed and variable components. As a result, it may be necessary to analyze some fixed costs together with some variable costs. Ultimately, businesses strategically group costs in order to make them more useful for decision-making and planning. Two of the broadest and most common groupings of costs are product costs and period costs.

Product costs are all those associated with the acquisition or production of goods and products. When products are purchased for resale, the cost of goods is recorded as an asset on the company's balance sheet. It is not until the products are sold that they become an expense on the income statement. By moving product costs to the expense account for the cost of goods sold, they are easily matched to the sales revenue income account. For example, Bert's Bikes is a bicycle retailer that purchases bikes from several wholesale distributors and manufacturers. When Bert purchases bicycles for resale, he places the cost of the bikes into his inventory account, because that is what those bikes are: his inventory available for sale. It is not until someone purchases a bike that it creates sales revenue, and in order to fulfill the requirements of double-entry accounting, he must match that income with an expense: the cost of goods sold (Figure 2.23).

Some product costs have both a fixed and variable component. For example, Bert purchases 10 bikes for $100 each. The distributor charges $10 per bike for shipping for 1 to 10 bikes but $8 per bike for 11 to 20 bikes. This shipping cost is fixed per unit but varies in total.
If Bert wants to save money and control his cost of goods sold, he can order an 11th bike and drop his shipping cost by $2 per bike. It is important for Bert to know what is fixed and what is variable so that he can control his costs as much as possible.

What about the costs Bert incurs that are not product costs? Period costs are simply all of the expenses that are not product costs, such as all selling and administrative expenses. It is important to remember that period costs are treated as expenses in the period in which they occur. In other words, they follow the rules of accrual accounting practice by recognizing the cost (expense) in the period in which it occurs, regardless of when the cash changes hands. For example, Bert pays his business insurance in January of each year. Bert's annual insurance premium is $10,800, which is $900 per month. Each month, Bert will recognize 1/12 of this insurance cost as an expense in the period in which it is incurred (Figure 2.24).

Why is it so important for Bert to know which costs are product costs and which are period costs? Bert may have little control over his product costs, but he maintains a great deal of control over many of his period costs. For this reason, it is important that Bert be able to identify his period costs and then determine which of them are fixed and which are variable. Remember that fixed costs are fixed over the relevant range, but variable costs change with the level of activity. If Bert wants to control his costs to make his bike business more profitable, he must be able to differentiate between the costs he can and cannot control.

Just like a merchandising business such as Bert's Bikes, manufacturers also classify their costs as either product costs or period costs. For a manufacturing business, product costs are the costs associated with making the product, and period costs are all other costs. For the purposes of external reporting, separating costs into period and product costs is all that is necessary. However, for management decision-making activities, refinement of the types of product costs is helpful. In a manufacturing firm, the need for management to be aware of the types of costs that make up the cost of a product is of paramount importance. Let's look at Carolina Yachts again and examine how they can classify the product costs associated with building their sportfishing boats.

Just like automobiles, every year, Carolina Yachts makes changes to their boats, introducing new models to their product line. When the engineers begin to redesign boats for the next year, they must be careful not to make changes that would drive the selling price of their boats too high, making them less attractive to the customer. The engineers need to know exactly what the addition of another feature will do to the cost of production. It is not enough for them to get total product cost data; instead, they need specific information about the three classes of product costs: materials, labor, and overhead.

As you've learned, direct materials are the raw materials and component parts that are directly economically traceable to a unit of production. Table 2.7 provides some examples of direct materials.
Examples of Direct Materials

Manufacturing Business | Product | Direct Materials
Bakery | Birthday cakes | Flour, sugar, eggs, milk
Automobile manufacturer | Cars | Glass, steel, tires, carpet
Furniture manufacturer | Recliners | Wood, fabric, cotton batting

Table 2.7

In each of the examples, managers are able to trace the cost of the materials directly to a specific unit (cake, car, or chair) produced. Since the amount of direct materials required will change based on the number of units produced, direct materials are almost always classified as a variable cost. They remain fixed per unit of production but change in total based on the level of activity within the business.

It takes more than materials for Carolina Yachts to build a boat. It requires the application of labor to the raw materials and component parts. You've also learned that direct labor is the work of the employees who are directly involved in the production of goods or services. In fact, for many industries, the largest cost incurred in the production process is labor. For Carolina Yachts, their direct labor would include the wages paid to the carpenters, painters, electricians, and welders who build the boats. Like direct materials, direct labor is typically treated as a variable cost because it varies with the level of activity. However, there are some companies that pay a flat weekly or monthly salary for production workers, and for these employees, their compensation could be classified as a fixed cost. For example, many auto mechanics are now paid a flat weekly or monthly salary. While Carolina Yachts in our example is dependent upon direct labor, the production process in many industries is moving from human labor to more automated processes, and for companies in those industries, direct labor is becoming less significant. As an example, you can research the current production process for the automobile industry.

The third major classification of product costs for a manufacturing business is overhead. Manufacturing overhead (sometimes referred to as factory overhead) includes all of the costs that a manufacturing business incurs, other than the variable costs of direct materials and direct labor required to build products. These overhead costs are not directly attributable to a specific unit of production, but they are incurred to support the production of goods. Some of the items included in manufacturing overhead include supervisor salaries, depreciation on the factory, maintenance, insurance, and utilities. It is important to note that manufacturing overhead does not include any of the selling or administrative functions of a business. For Carolina Yachts, costs like the sales, marketing, CEO, and clerical staff salaries will not be included in the calculation of manufacturing overhead costs but will instead be allocated to selling and administrative expenses.

As you have learned, much of the power of managerial accounting is its ability to break costs down into the smallest possible trackable unit. This also applies to manufacturing overhead. In many cases, businesses have a need to further refine their overhead costs and will track indirect labor and indirect materials. When labor costs are incurred but are not directly involved in the active conversion of materials into finished products, they are classified as indirect labor costs. For example, Carolina Yachts has production supervisors who oversee the manufacturing process but do not actively participate in the construction of the boats.
Their wages generally support the production process but cannot be traced back to a single unit. For this reason, the production supervisors' salary would be classified as indirect labor. Similar to direct labor, on a product or department basis, indirect labor, such as the supervisor's salary, is often treated as a fixed cost, assuming that it does not vary with the level of activity or number of units produced. However, if you are considering the supervisor's salary cost on a per unit of production basis, then it could be considered a variable cost.

Similarly, not all materials used in the production process can be traced back to a specific unit of production. When this is the case, they are classified as indirect material costs. Although needed to produce the product, these indirect material costs are not traceable to a specific unit of production. For Carolina Yachts, their indirect materials include supplies like tools, glue, wax, and cleaning supplies. These materials are required to build a boat, but management cannot easily track how much of a bottle of glue they use or how often they use a particular drill to build a specific boat. These indirect materials and their associated cost represent a small fraction of the total materials needed to complete a unit of production. Like direct materials, indirect materials are classified as a variable cost since they vary with the level of production. Table 2.8 provides some examples of manufacturing costs and their classifications.

Examples of Classifications of Manufacturing Costs

Cost | Classification | Fixed or Variable
Production supervisor salary | Indirect labor | Fixed
Raw materials used in production | Direct materials | Variable
Wages of production employees | Direct labor | Variable
Straight-line depreciation on factory equipment | General manufacturing overhead | Fixed
Glue and adhesives | Indirect materials | Variable

Table 2.8

Prime Costs versus Conversion Costs

In certain production environments, once a business has separated the costs of the product into direct materials, direct labor, and overhead, the costs can then be gathered into two broader categories: prime costs and conversion costs. Prime costs are the direct material expenses and direct labor costs, while conversion costs are direct labor and general factory overhead combined. Note that these two categories are not mutually exclusive; a particular cost can be included in both. In this case, direct labor is included in both prime costs and conversion costs. These cost classifications are common in businesses that produce large quantities of an item that is then packaged into smaller, sellable quantities, such as soft drinks or cereal. In these types of production environments, it is easier to lump the costs of direct labor and overhead into one category, since together these are the costs needed to convert raw materials into a finished product. This method of costing is termed process costing and is covered in Process Costing.

Although it seems as if there are many classifications or labels associated with costs, remember that the purpose of cost classification is to assist managers in the decision-making process. Since this type of data is not used for external reporting purposes, it is important to understand that (1) a single cost can have many different labels; (2) the terms are used independently, not simultaneously; and (3) each classification is important to understand in order to make business decisions.
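Because prime and conversion costs are just two overlapping groupings of the same three product cost pools, they reduce to simple sums. In the sketch below, the dollar amounts are hypothetical placeholders of ours, not figures from the chapter:

def prime_cost(direct_materials, direct_labor):
    # Prime costs = direct materials + direct labor
    return direct_materials + direct_labor

def conversion_cost(direct_labor, factory_overhead):
    # Conversion costs = direct labor + factory overhead;
    # note that direct labor appears in BOTH groupings.
    return direct_labor + factory_overhead

dm, dl, oh = 40_000, 25_000, 15_000  # hypothetical monthly cost pools
print(prime_cost(dm, dl))        # 65000
print(conversion_cost(dl, oh))   # 40000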
Figure 2.25 uses some example costs to demonstrate these principles.

Effects of Changes in Activity Level on Unit Costs and Total Costs

We have spent considerable time identifying and describing the various ways that businesses categorize costs. However, categorization itself is not enough. It is important not only to understand the categorization of costs but to understand the relationships between changes in activity levels and the changes in costs in total. It is worth repeating that when a cost is considered to be fixed, that cost is only fixed for the relevant range. Once the boundary of the relevant range has been reached or moved beyond, fixed costs will change and then remain fixed for the new relevant range. Remember that, within a relevant range of activity, where the relevant range refers to a specific activity level that is bounded by a minimum and maximum amount, total fixed costs are constant, but costs change on a per-unit basis. Let's examine an example that demonstrates how changes in activity can affect costs.

Ethical Considerations

Cost Accounting Helps Reduce Fraud and Promotes Ethical Behavior

Managerial and related cost accounting systems assist managers in making ethical and sound business decisions. Managerial accountants implement accounting reporting systems to minimize or prevent fraud and promote ethical decision-making. For example, tracking changes in costing activity and ensuring that activity remains in a relevant range helps ensure that an organization's business activity is properly bounded within a reasonable range of expense. If the minimum or maximum expense range is exceeded, this can indicate that management is acting without authority or is pursuing unauthorized activities. Excessive costs may even be a red flag that possible fraud is occurring. Cost accounting helps ensure that financial costs are within an acceptable range and helps an organization make reliable forward-looking financial decisions.

Comprehensive Example of the Effect of Changes in Activity Level on Costs

Pat is planning a three-day ski trip on his spring break after he works on a Habitat for Humanity project in Dallas. The costs for the trip are as follows:

He is considering his costs for the trip if he goes alone, or if he takes one, two, three, or four friends. However, before he can begin his analysis, he needs to consider the characteristics of the costs. Some of the costs will stay the same no matter how many people go, and some of the costs will fluctuate, based on the number of participants.

Those costs that do not change are the fixed costs. Once you incur a fixed cost, it does not change within a given range. For example, Pat can take up to five people in one car, so the cost of the car is fixed for up to five people. However, if he took more friends, then he would need more cars. The condo rental and the gasoline expenses would also be considered fixed costs, because they are not going to change in the relevant range.

The costs that do change as the number of participants change are the variable costs. The food and lift ticket expenses are examples of variable costs, since they fluctuate based upon the number of participants and the number of days of activities. In analyzing the costs, Pat also needs to consider the total costs and average costs. The analysis will calculate the average fixed costs, the total fixed costs, the average variable costs, and the total variable costs.
In the analysis of total costs versus average costs, both total and average fixed costs will stay the same and total and average variable costs will change. Here are the total fixed costs:

The total fixed costs for the trip will be $720.00, no matter whether Pat goes alone or takes up to 4 friends. However, the average fixed costs will be the total fixed costs divided by the number of participants. The average fixed cost could range from $720 (720/1) to $144 (720/5).

Here are the variable costs:

The average variable cost will be $70.00 per person per day, no matter how many people go on the trip. However, the total variable costs per day will range from $70.00, if Pat goes alone, to $350.00, if five people go. Figure 2.26 shows the relationships of the various costs, based on the number of participants.

Looking at this analysis, it is clear that, if there is an activity that you think that you cannot afford, it can become less expensive if you are creative in your cost-sharing techniques.

Your Turn

Spring Break Trip Planning

Margo is planning an 8-day spring break trip from Atlanta, Georgia, to Tampa, Florida, leaving on Sunday and returning the following Sunday. She has located a condominium on the beach and has put a deposit down on the unit. The rental company has a maximum occupancy for the condominium of seven adults. There is an amusement park that she plans to visit. She is going to use her parents' car, an SUV that can carry up to six people and their luggage. The SUV can travel an average of 20 miles per gallon, the total distance is approximately 1,250 miles (550 miles each way plus driving around Tampa every day), and the average price of gas is $3 per gallon. A season pass for an amusement park she wants to visit is $168 per person. Margo estimates spending $40 per day per person for food. She estimates the costs for the trip as follows:

Now that she has cost estimates, she is trying to decide how many of her friends she wants to invite. Since the car can only seat six people, Margo made a list of five other girls to invite. Use her data to answer the following questions and fill out the cost table:

What are the total variable costs for the trip?
What are the average variable costs for the trip?
What are the total fixed costs for the trip?
What are the average fixed costs for the trip?
What are the average costs per person for the trip?
What would the trip cost Margo if she were to go alone?
What additional costs would be incurred if a seventh girl were invited on the trip? Would this be a wise decision (from a cost perspective)? Why or why not?
Which cost will not be affected if a seventh girl were invited on the trip?

Solution

Answers will vary. All responses should recognize that there is no room in the car for the seventh girl and her luggage, although the condominium will accommodate the extra person. This means they will have to either find a larger vehicle and incur higher gas expenses or take a second car, which will at least double the fixed gas cost.

2.3 Estimate a Variable and Fixed Cost Equation and Predict Future Costs

Sometimes, a business will need to use cost estimation techniques, particularly in the case of mixed costs, so that it can separate the fixed and variable components, since only the variable components change in the short run. Estimation is also useful for using current data to predict the effects of future changes in production on total costs.
Three estimation techniques that can be used include the scatter graph, the high-low method, and regression analysis. Here we will demonstrate the scatter graph and the high-low methods (you will learn the regression analysis technique in advanced managerial accounting courses).

Functions of Cost Equations

The cost equation is a linear equation that takes into consideration total fixed costs, the fixed component of mixed costs, and variable cost per unit. Cost equations can use past data to determine patterns of past costs that can then project future costs, or they can use estimated or expected future data to estimate future costs. Recall the mixed cost equation:

Y = a + bx

where Y is the total mixed cost, a is the fixed cost, b is the variable cost per unit, and x is the level of activity.

Let's take a more in-depth look at the cost equation by examining the costs incurred by Eagle Electronics in the manufacture of home security systems, as shown in Table 2.9.

Cost Information for Eagle Electronics

Cost Incurred | Fixed or Variable | Cost
Lease on manufacturing equipment | Fixed | $50,000 per year
Supervisor salary | Fixed | $75,000 per year
Direct materials | Variable | $50 per unit
Direct labor | Variable | $20 per unit

Table 2.9

By applying the cost equation, Eagle Electronics can predict its costs at any level of activity (x) as follows:

Determine total fixed costs: $50,000 + $75,000 = $125,000
Determine variable costs per unit: $50 + $20 = $70
Complete the cost equation: Y = $125,000 + $70x

Using this equation, Eagle Electronics can now predict its total costs (Y) for any given level of activity (x), as in Figure 2.27. When using this approach, Eagle Electronics must be certain that it is only predicting costs for its relevant range. For example, if they must hire a second supervisor in order to produce 12,000 units, they must go back and adjust the total fixed costs used in the equation. Likewise, if variable costs per unit change, these must also be adjusted.

This same approach can be used to predict costs for service and merchandising firms, as shown by examining the costs incurred by J&L Accounting to prepare a corporate income tax return, shown in Table 2.10.

Cost Information for J&L Accounting

Cost Incurred | Fixed or Variable | Cost
Building rent | Fixed | $1,000 per month
Direct labor (for CPAs) | Variable | $250 per tax return
Secretarial staff | Fixed | $2,000 per month
Accounting clerks | Variable | $100 per return

Table 2.10

J&L wants to predict their total costs if they complete 25 corporate tax returns in the month of February.

Determine total fixed costs: $1,000 + $2,000 = $3,000
Determine variable costs per tax return: $250 + $100 = $350
Complete the cost equation: Y = $3,000 + $350x

Using this equation, J&L can now predict its total costs (Y) for the month of February, when they anticipate preparing 25 corporate tax returns:

Y = $3,000 + ($350 × 25)
Y = $3,000 + $8,750
Y = $11,750

J&L can now use this predicted total cost figure of $11,750 to make decisions regarding how much to charge clients or how much cash they need to cover expenses. Again, J&L must be careful to try not to predict costs outside of the relevant range without adjusting the corresponding total cost components. J&L can make predictions for their costs because they have the data they need, but what happens when a business wants to estimate total costs but has not collected data regarding per-unit costs?
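Before turning to that question, note that the cost equation itself is easy to automate. This sketch (ours, using only figures already given above) encodes Y = a + bx and replays the J&L and Eagle Electronics predictions; the 1,000-unit activity level for Eagle is an assumed example value:

def predict_total_cost(fixed_costs, variable_cost_per_unit, activity_level):
    # Y = a + bx; only valid within the relevant range
    return fixed_costs + variable_cost_per_unit * activity_level

print(predict_total_cost(3_000, 350, 25))      # 11750  -> J&L, 25 returns
print(predict_total_cost(125_000, 70, 1_000))  # 195000 -> Eagle, assumed 1,000 units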
This is the case for the managers at the Beach Inn, a small hotel on the coast of South Carolina. They know what their costs were for June, but now they want to predict their costs for July. They have gathered the information in Figure 2.28. In June, they had an occupancy of 75 nights. For the Beach Inn, occupancy (rooms rented) is the cost driver. Since they know what is driving their costs, they can determine their per-unit variable costs in order to forecast future costs:

Front desk staff: $3,800 ÷ 75 nights = $50.67 variable front desk staff cost per night
Cleaning staff: $2,500 ÷ 75 nights = $33.33 variable cleaning staff cost per night
Laundry service: $1,200 ÷ 75 nights = $16.00 variable laundry service cost per night

Now, the Beach Inn can apply the cost equation in order to forecast total costs for any number of nights, within the relevant range.

Determine total fixed costs: $700 + $2,500 = $3,200
Determine variable costs per night of occupancy: $50.67 + $33.33 + $16.00 = $100
Complete the cost equation: Y = $3,200 + $100x

Using this equation, the Beach Inn can now predict its total costs (Y) for the month of July, when they anticipate an occupancy of 93 nights:

Y = $3,200 + ($100 × 93)
Y = $3,200 + $9,300
Y = $12,500

In all three examples, managers used cost data they have collected to forecast future costs at various activity levels.

Your Turn

Waymaker Furniture

Waymaker Furniture has collected cost information from its production process and now wants to predict costs for various levels of activity. They plan to use the cost equation to formulate these predictions. Information gathered from March is presented in Table 2.11.

March Cost Information for Waymaker Furniture

Cost Incurred | Fixed or Variable | March Cost
Plant supervisor salary | Fixed | $12,000 per month
Lumber (direct materials) | Variable | $75,000 total
Production worker wages | Variable | $11.00 per hour
Machine maintenance | Variable | $5.00 per unit produced
Lease on factory | Fixed | $15,000 per month

Table 2.11

In March, Waymaker produced 1,000 units and used 2,000 hours of production labor. Using this information and the cost equation, predict Waymaker's total costs for the levels of production in Table 2.12.

Waymaker's Levels of Production

Month | Activity Level
April | 1,500 units
May | 2,000 units
June | 2,500 units

Table 2.12

Solution

Total fixed cost = $12,000 + $15,000 = $27,000. Direct materials per unit = $75,000 ÷ 1,000 units = $75 per unit. Direct labor per unit = $11.00 per hour × 2 hours per unit (2,000 hours ÷ 1,000 units) = $22 per unit. Machine maintenance = $5.00 per unit. Total variable cost per unit = $75 + $22 + $5 = $102 per unit. The cost equation is therefore Y = $27,000 + $102x, which predicts total costs of $180,000 for April (1,500 units), $231,000 for May (2,000 units), and $282,000 for June (2,500 units).

Demonstration of the Scatter Graph Method to Calculate Future Costs at Varying Activity Levels

One of the assumptions that managers must make in order to use the cost equation is that the relationship between activity and costs is linear. In other words, costs rise in direct proportion to activity. A diagnostic tool that is used to verify this assumption is a scatter graph. A scatter graph shows plots of points that represent actual costs incurred for various levels of activity.
Once the scatter graph is constructed, we draw a line (often referred to as a trend line) that appears to best fit the pattern of dots. Because the trend line is somewhat subjective, the scatter graph is often used as a preliminary tool to explore the possibility that the relationship between cost and activity is generally a linear relationship. When interpreting a scatter graph, it is important to remember that different people would likely draw different lines, which would lead to different estimations of fixed and variable costs. No one person's line and cost estimates would necessarily be right or wrong compared to another; they would just be different. After using a scatter graph to determine whether cost and activity have a linear relationship, managers often move on to more precise processes for cost estimation, such as the high-low method or least-squares regression analysis.

To demonstrate how a company would use a scatter graph, let's turn to the data for Regent Airlines, which operates a fleet of regional jets serving the northeast United States. The Federal Aviation Administration establishes guidelines for routine aircraft maintenance based upon the number of flight hours. As a result, Regent finds that its maintenance costs vary from month to month with the number of flight hours, as depicted in Figure 2.29.

When creating the scatter graph, each point will represent a pair of activity and cost values. Maintenance costs are plotted on the vertical axis (Y), while flight hours are plotted on the horizontal axis (X). For instance, one point will represent 21,000 hours and $84,000 in costs. The next point on the graph will represent 23,000 hours and $90,000 in costs, and so forth, until all of the pairs of data have been plotted. Finally, a trend line is added to the chart in order to assist managers in seeing if there is a positive, negative, or zero relationship between the activity level and cost. Figure 2.30 shows a scatter graph for Regent Airlines.

In scatter graphs, cost is considered the dependent variable because cost depends upon the level of activity. The activity is considered the independent variable since it is the cause of the variation in costs. Regent's scatter graph shows a positive relationship between flight hours and maintenance costs because, as flight hours increase, maintenance costs also increase. This is referred to as a positive linear relationship or a linear cost behavior.

Will all cost and activity relationships be linear? Only when there is a relationship between the activity and that particular cost. What if, instead, the cost of snow removal for the runways is plotted against flight hours? Suppose the snow removal costs are as listed in Table 2.13.

Snow Removal Costs

Month | Activity Level: Flight Hours | Snow Removal Costs
January | 21,000 | $40,000
February | 23,000 | $50,000
March | 14,000 | $8,000
April | 17,000 | $0
May | 10,000 | $0
June | 19,000 | $0

Table 2.13

As you can see from the scatter graph, there is really not a linear relationship between how many flight hours are flown and the costs of snow removal. This makes sense, as snow removal costs are linked to the amount of snow and the number of flights taking off and landing but not to how many hours the planes fly. Using a scatter graph to determine if this linear relationship exists is an essential first step in cost behavior analysis.
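For readers who want to reproduce a plot like this, the sketch below charts the Table 2.13 data. The use of matplotlib is our assumption; the chapter does not prescribe any particular software. The scattered, trendless points are the visual cue that the relationship is not linear.

import matplotlib.pyplot as plt

# Snow removal data from Table 2.13
flight_hours = [21_000, 23_000, 14_000, 17_000, 10_000, 19_000]
snow_removal_costs = [40_000, 50_000, 8_000, 0, 0, 0]

plt.scatter(flight_hours, snow_removal_costs)
plt.xlabel("Flight hours (independent variable)")
plt.ylabel("Snow removal cost in dollars (dependent variable)")
plt.title("Snow removal cost versus flight hours")
plt.show()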
If the scatter graph reveals a linear cost behavior, then managers can proceed with more sophisticated analyses to separate mixed costs into their fixed and variable components. However, if this linear relationship is not present, then other methods of analysis are not appropriate. Let's examine the cost data from Regent Airlines using the high-low method.

Demonstration of the High-Low Method to Calculate Future Costs at Varying Activity Levels

As you've learned, the purpose of identifying costs is to control them, and managers regularly use past costs to predict future costs. Since we know that variable costs change with the level of activity, we can conclude that there is usually a positive relationship between cost and activity: As one rises, so does the other. Ideally, this can be confirmed on a scatter graph. One of the simplest ways to analyze costs is to use the high-low method, a technique for separating the fixed and variable cost components of mixed costs. Using the highest and lowest levels of activity and their associated costs, we are able to estimate the variable cost components of mixed costs.

Once we have established that there is linear cost behavior, we can equate variable costs with the slope of the line, expressed as the rise of the line over the run. The steeper the slope of the line, the faster costs rise in response to a change in activity. Recall from the scatter graph that costs are the dependent Y variable and activity is the independent X variable. By examining the change in Y relative to the change in X, we can estimate the variable cost:

Variable cost per unit = (Y2 – Y1) ÷ (X2 – X1)

where Y2 is the total cost at the highest level of activity; Y1 is the total cost at the lowest level of activity; X2 is the number of units, labor hours, etc., at the highest level of activity; and X1 is the number of units, labor hours, etc., at the lowest level of activity.

Using the maintenance cost data from Regent Airlines shown in Figure 2.32, we will examine how this method works in practice. The first step in analyzing mixed costs with the high-low method is to identify the periods with the highest and lowest levels of activity. In this case, it would be February and May, as shown in Figure 2.33. We always choose the highest and lowest activity and the costs that correspond with those levels of activity, even if they are not the highest and lowest costs.

We are now able to estimate the variable costs by dividing the difference between the costs of the high and the low periods by the change in activity. For Regent Airlines, this is:

Variable cost = ($90,000 – $64,500) ÷ (23,000 – 10,000) = $1.96 per flight hour

Having determined that the variable cost per flight hour is $1.96, we can now determine the amount of fixed costs. We can determine these fixed costs by taking the total costs at either the high or the low level of activity and subtracting this variable component. You will recall that total cost = fixed costs + variable costs, so the fixed cost component for Regent Airlines can be isolated as shown:

Fixed cost = total cost – variable cost
Fixed cost = $90,000 – (23,000 × $1.96)
Fixed cost = $44,920

Notice that if we had chosen the other data point, the low cost and activity, we would still get the same fixed cost of $44,920 = [$64,500 – (10,000 × $1.96)].
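The high-low computation is mechanical enough to capture in a short function. This sketch (ours) selects the observations with the highest and lowest activity, as the method requires, and reproduces the Regent Airlines numbers; it rounds the variable rate to two decimals, as the text does, so the fixed cost comes out to the same $44,920:

def high_low(observations):
    # observations: (activity, cost) pairs; select by activity, not cost
    x_hi, y_hi = max(observations, key=lambda pair: pair[0])
    x_lo, y_lo = min(observations, key=lambda pair: pair[0])
    variable_rate = round((y_hi - y_lo) / (x_hi - x_lo), 2)
    fixed_cost = y_hi - variable_rate * x_hi
    return fixed_cost, variable_rate

# February (high) and May (low) observations from Figure 2.33
print(high_low([(23_000, 90_000), (10_000, 64_500)]))  # (44920.0, 1.96)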
Now that we have isolated both the fixed and the variable components, we can express Regent Airlines' cost of maintenance using the total cost equation:

Y = $44,920 + $1.96x

where Y is total cost and x is flight hours. Because we confirmed that the relationship between cost and activity at Regent exhibits linear cost behavior on the scatter graph, this equation allows managers at Regent Airlines to conclude that for every one unit increase in activity, there will be a corresponding rise in variable cost of $1.96. When put into practice, the managers at Regent Airlines can now predict their total costs at any level of activity, as shown in Figure 2.34.

Although managers frequently use this method, it is not the most accurate approach to predicting future costs because it is based on only two pieces of cost data: the highest and the lowest levels of activity. Actual costs can vary significantly from these estimates, especially when the high or low activity levels are not representative of the usual level of activity within the business. For a more accurate model, the least-squares regression method would be used to separate mixed costs into their fixed and variable components. The least-squares regression method is a statistical technique that may be used to estimate the total cost at the given level of activity based on past cost data. Least-squares regression minimizes the errors of trying to fit a line between the data points and thus fits the line more closely to all the data points.

Understanding the various labels used for costs is the first step toward using costs to evaluate business decisions. You will learn more about these various labels and how they are applied in decision-making processes as you continue your study of managerial accounting in this course.
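As a closing illustration of the least-squares idea, the following sketch fits a line with numpy. Only three Regent maintenance observations are quoted in this section (the full dataset is in Figure 2.32, which is not reproduced here), so the fitted coefficients are illustrative rather than the textbook's regression answer:

import numpy as np

# The three Regent Airlines maintenance observations quoted in this section
hours = np.array([21_000, 23_000, 10_000])  # Jan, Feb, May flight hours
costs = np.array([84_000, 90_000, 64_500])  # corresponding maintenance costs

slope, intercept = np.polyfit(hours, costs, 1)  # degree-1 least-squares fit
print(f"Y = {intercept:,.0f} + {slope:.2f}x")   # close to the high-low estimate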
Summary 6.1 The Workplace Environment and Working Conditions A company and its managers need to provide a workplace at which employees want to work, free of safety hazards and all types of harassment. Perks and benefits also make the company an attractive place to work. Yet another factor is managers who make employees feel valued and respected. A company can use all these tools to attract and retain top talent, helping to reach the goals of having a well-run company with a satisfied workforce. Philosophers Aristotle and Immanuel Kant said taking ethical action is the right thing to do. The decision to create an environment in which employees want to come to work each day is, in large part, an ethical choice, because it creates a healthy environment for all to encounter. However, the bonus comes when a satisfied workforce fosters increased quality and productivity, which leads to appreciative customers or clients and increased profitability. There is a financial payoff in that a well-treated workforce is also a productive one. 6.2 What Constitutes a Fair Wage? The concept of paying people fairly can become complicated. It includes trying to allocate and compensate workers in the most effective manner for the company, but it takes judgement, wisdom, and a moral imperative to do it fairly. Managers must balance issues of compensation equity, employee morale, motivation, and profits—all of which may have legal, ethical, and business elements. The issue of a fair wage is particularly salient for those earning the minimum wage, which, in real terms, has declined by 23 percent since 1960, and for women, who continue to experience a significant pay gap as compared with their male counterparts. 6.3 An Organized Workforce Employees seek fair treatment in the workplace and sometimes gain a negotiating advantage with management by choosing to be represented by a labor union. Union membership in the United States has fallen in recent years as federal and state law have expanded to include worker protections unions fought for, and as the nation has shifted from a manufacturing to a service economy. Public-sector employee groups such as teachers, professors, first responders, and nurses are unionized in some cities and states. U.S. workers have contributed to a long rise in productivity over the last forty years but have not generally shared in wage gains. 6.4 Privacy in the Workplace Monitoring of employees, whether electronically or through drug testing, is a complex area of workforce management. Numerous state and federal legal restrictions apply, and employers must decide not only what they are legally allowed to do but also what they should do ethically, keeping in mind the individual privacy concerns of their employees.
Chapter Outline 6.1 The Workplace Environment and Working Conditions 6.2 What Constitutes a Fair Wage? 6.3 An Organized Workforce 6.4 Privacy in the Workplace Introduction The 2020 Gender Diversity Index shows that many Fortune 1000 boards of directors still lack diversity. 1 Women and minorities continue to be underrepresented at the chief executive officer (CEO) level too. 2 Does this sameness merely look bad, or are there ethical and business reasons why top U.S. management should be more diverse ( Figure 6.1 )? A demographic disconnect between leadership and workforce influences working conditions in many ways. For example, if more women held leadership roles, would workplace sexual harassment have come to light before the #MeToo movement? Would more companies offer paid family leave? If minorities were better represented at the executive level, would corporate lobbyists advocate differently for immigration and health care policies? When 70 percent of boardroom seats are occupied by White men, 3 who make up only 30 percent of the population, many people’s views, ideas, and opinions will go unheard in decisions that affect their lives and livelihoods. We have seen progress, but much remains to be accomplished. Does management have an ethical duty to try to diversify top leadership? Whatever individual responses we might offer to each of these questions, a significant theme in this chapter is that ethical behavior in the workplace is most effectively instituted when it is modeled by senior leadership.
6.1 The Workplace Environment and Working Conditions

Learning Objectives
By the end of this section, you will be able to:
- Identify specific ethical duties managers owe employees
- Describe the provisions of the Occupational Safety and Health Act
- Identify Equal Employment Opportunity Commission protections, including those against sexual harassment at work
- Describe how employees’ expectations of work have changed

All employees want and deserve a workplace that is physically and emotionally safe, where they can focus on their job responsibilities and obtain some fulfillment, rather than worrying about dangerous conditions, harassment, or discrimination. Workers also expect fair pay and respect for their privacy. This section will explore the ethical and legal duties of employers to provide a workplace in which employees want to work.

Ethical Decision-Making and Leadership in the Workplace
A contemporary corporation always owes an ethical, and in some cases legal, duty to employees to be a responsible employer. In a business context, the definition of this responsibility includes providing a safe workplace, compensating workers fairly, and treating them with a sense of dignity and equality while respecting at least a minimum of their privacy.

Managers should be ethical leaders who serve as role models and mentors for all employees. A manager’s job, perhaps the most important one, is to give people a reason to come back to work tomorrow. Good managers model ethical behavior. If a corporation expects its employees to act ethically, that behavior must start at the top, where managers hold themselves to a high standard of conduct and can rightly say, “Follow my lead, do as I do.” At a minimum, leaders model ethical behavior by not violating the law or company policy. One who says, “Get this deal done, I don’t care what it takes,” may very well be sending a message that unethical tactics and violating the spirit, if not the letter, of the law are acceptable. A manager who abuses company property by taking home office supplies or using the company’s computers for personal business but then disciplines any employee who does the same is not modeling ethical behavior. Likewise, a manager who consistently leaves early but expects all other employees to stay until the last minute is not demonstrating fairness.

Another responsibility business owes the workforce is transparency. This duty begins during the hiring process, when the company communicates to potential employees exactly what is expected of them. Once hired, employees should receive training on the company rules and expectations. Management should explain how an employee’s work contributes to the achievement of company-wide goals. In other words, a company owes it to its employees to keep them in the loop about significant matters that affect them and their job, whether good or bad, formal or informal. A more complete understanding of all relevant information usually results in a better working relationship.

That said, some occasions do arise when full transparency may not be warranted. If a company is in the midst of confidential negotiations to acquire, or be acquired by, another firm, this information must be kept secret until a deal has been completed (or abandoned); regulatory statutes and criminal law may require this. Similarly, any internal personnel performance issues or employee criminal investigations should normally be kept confidential within the ranks of management.
Transparency can be especially important to workers in circumstances that involve major changes, such as layoffs, reductions in the workforce, plant closings, and other consequential events. These kinds of events typically have a psychological and financial impact on the entire workforce. However, some businesses fail to show leadership at the most crucial times. A leader who is honest and open with the employees should be able to say, “This is a very difficult decision, but one that I made and will stand behind and accept responsibility for.” To workers, euphemisms such as “right sizing” to describe layoffs and job loss sound like corporate doublespeak designed to help managers justify their (or the company’s) decisions, feel better about them, and minimize guilt. An ethical company will give workers advance notice, a severance package, and assistance with the employment search, without being forced to do so by law. Proactive rather than reactive behavior is the ethical and just thing to do.

Historically, however, a significant number of companies and managers failed to demonstrate ethical leadership in downsizing, eventually leading Congress to take action. The Worker Adjustment and Retraining Notification (WARN) Act of 1988 has now been in effect for almost three decades, protecting workers and their families (as well as their communities) by mandating that employers provide sixty days’ advance notice of mass layoffs and plant closings (Figure 6.2). This law was enacted precisely because companies were not behaving ethically. A report by the Cornell University Institute of Labor Relations indicated that, prior to passage of WARN, only 20 percent of displaced workers received written advance notice, and those who did received very short notice, usually a few days. Only 7 percent had two months’ notice of their impending displacement. 4 Employers typically preferred to get as many days of work as possible from their workforces before a mass layoff or closing, figuring that workers might reduce productivity or look for other jobs sooner if the company were transparent and open about its situation. In other words, when companies put their own interests and needs ahead of the workforce, we can hardly call that ethical leadership.

Other management actions covered by WARN include outsourcing, automation, and artificial intelligence in the workplace. Arguably, a company has an ethical duty to notify workers who might be adversely affected even if the WARN law does not apply, demonstrating that the appropriate ethical standard for management often exceeds the minimum requirements of the law. Put another way, the law is often slow to keep up with ethical reflection on best management practices.

Workplace Safety under the Occupational Safety and Health Act
The primary federal law ensuring physical safety on the job is the Occupational Safety and Health Act (OSHA), which was passed in 1970. 5 The goal of the law is to ensure that employers provide a workplace environment free of risk to employees’ safety and health, such as mechanical or electrical dangers, toxic chemicals, severe heat or cold, unsanitary conditions, and dangerous equipment. OSHA also refers to the Occupational Safety and Health Administration, which operates as a division of the Department of Labor and oversees enforcement of the law.
This act created the National Institute for Occupational Safety and Health (NIOSH), which serves as the research institute for OSHA and enunciates appropriate standards for safety and health on the job. Employer obligations under OSHA include the duty to provide a safe workplace free of serious hazards, to identify and eliminate health and safety hazards (Figure 6.3), to inform employees of hazards present on the job and institute training protocols sufficient to address them, to extend to employees protective gear and appropriate safeguards at no cost to them, and to publicly post and maintain records of worker injuries and OSHA citations.

OSHA and related regulations give employees several important rights, including the right to make a confidential complaint with OSHA that might result in an inspection of the workplace, to obtain information about the hazards of the workplace and ways to avoid harm, to obtain and review documentation of work-related illnesses and injuries at the job site, to obtain copies of tests done to measure workplace hazards, and protection against any employer sanctions as a consequence of complaining to OSHA about workplace conditions or hazards. 6 A worker who believes his or her OSHA rights are being violated can make an anonymous report. OSHA will then establish whether there are reasonable grounds for believing a violation exists. If so, OSHA will conduct an inspection of the workplace and report any findings to the employer and employee, or their representatives, including any steps needed to correct safety and health issues.

OSHA has the authority to levy significant fines against companies that commit serious violations. The largest imposed to date were against BP, the oil company responsible for the largest oil spill in U.S. history, discussed in the feature box on BP Deepwater Horizon Oil Spill and Government Regulation. OSHA took into account that eleven workers died on BP’s rig, Deepwater Horizon, as a result of the initial explosion and fire in April 2010. Consequently, rig-worker safety was upgraded by statute. Total OSHA penalties issued to BP from 2005 to 2009 exceeded $102 million. 7 Other large fines issued over the last thirty years include $2.8 million against Union Carbide for violations related to an explosion and fire at its plant in Seadrift, Texas, in March 1991; $8.2 million levied against Samsung Guam in the wake of numerous worksite accidents at Guam’s International Airport in 1995; and $8.7 million against Imperial Sugar in connection with an explosion at the company’s plant in Port Wentworth, Georgia, in February 2008. 8 More recently, OSHA fined the producers of The Walking Dead $12,675 (the maximum allowable for a single citation) in the wake of the death of a stuntman working on an episode of the television show in Georgia in July 2017. 9 These fines demonstrate that the agency is serious about trying to protect workers. However, for some, the question remains whether it is more profitable for a business to gamble on cutting corners on safety and pay the fine if caught than to spend the money ahead of time to make workplaces completely safe. OSHA fines do not really tell the whole story of the penalties for workplace safety issues. There can also be significant civil liability exposure and public relations damage, as well as worker compensation payments and adverse media coverage, making an unsafe workplace a very expensive risk on multiple levels.
Link to Learning
Read this OSHA Fact Sheet about fines and penalties to learn more.

A Workplace Free of Harassment
Employers have an ethical and a legal duty to provide a workplace free of harassment of all types. This includes harassment based on sex, race, religion, national origin, and any other protected status, including disability. Employees should not be expected to work in an atmosphere where they feel harassed, subject to prejudice, or disadvantaged. The two complaints most frequently filed with the Equal Employment Opportunity Commission (EEOC), which strives to eliminate racial, gender, and religious discrimination in the workplace, are sexual harassment and racial harassment. Together, these categories made up two-thirds of all cases filed during 2017. More than thirty thousand complaints of sexual, gender, racial, or creedal harassment are filed each year, illustrating the frequency of the problem. 10

The EEOC enforces Title VII of the Civil Rights Act (CRA) of 1964, which prohibits workplace discrimination including sexual harassment. 11 (As discussed elsewhere in the text, the CRA also protects employees from discrimination based on race, gender, religion, and national origin.) According to EEOC guidelines, it is unlawful to sexually harass a person because of that person’s sex, either through explicit offers in exchange for sexual favors (known as quid pro quo) or through actions at a broader, more systemic level that create a “hostile working environment.” Sexual harassment includes unwelcome touching, requests for sexual favors, any other verbal or physical harassment of a sexual nature, offensive remarks based on a person’s sex, and off-color jokes. The harasser can be the victim’s supervisor (which creates company liability the first time it happens) or a peer coworker (which usually creates liability after the second time it happens, assuming the company had notice of the first occurrence). It can even be someone who is not an employee, such as a client or customer, and the law applies to men and women alike. Thus, the victim and the harasser can each be either a woman or a man, and offenses include both opposite-sex and same-sex harassment. Although the law does not prohibit mild teasing, offhand comments, or isolated incidents that are not serious, harassment does become illegal when, according to the law, it is so frequent “that it creates a hostile or offensive work environment or when it is so severe that it results in an adverse employment decision (such as the victim being fired or demoted).” 12 It is management’s responsibility to prevent harassment through education, training, and enforcement of a policy against it, and failure to do so will result in legal liability for the company.

Two relatively recent examples of workplace environments that descended into the worst excesses of sexist and other inappropriate behavior occurred at American Apparel and Uber. In both cases, principal leaders were mostly men who engaged in ruthless, no-holds-barred management practices that benefitted only those subordinates who most resembled the leaders themselves. Such environments may thrive for a while, but the long-term consequences can include criminal violations that produce hefty fines and imprisonment, bankruptcy, and radical upheaval in corporate management.
At American Apparel and at Uber, these events resulted in the dismissal of each company’s CEO, Dov Charney (who also was the founder of the company) and Travis Kalanick (who was one of the corporation’s founders), respectively. 13 In 2017 and 2018, a renewed focus on sexual harassment in the workplace and other inappropriate sexual behaviors brought a stream of accusations against high-profile men in politics, entertainment, sports, and business. They included entertainment industry mogul Harvey Weinstein; Pixar’s John Lasseter; on-air personalities Matt Lauer and Charlie Rose; politicians such as Roy Moore, John Conyers, and Al Franken; and Uber’s Kalanick, to name just a few (Figure 6.4). The workplace harassment problem has continued for many decades despite the EEOC’s enforcement efforts; it remains to be seen whether new public scrutiny will prompt a permanent change in the workplace.

The Ford Motor Company serves as a relevant example. Decades after Ford tried to address sexual harassment at two Chicago-area assembly plants, the abuse at the plants evidently continues. According to legal action filed with the EEOC in the early 1990s, conditions for women working at some Ford auto assembly plants were hostile. Female employees alleged they were groped, that men pressed against them and simulated sex acts, and that men even masturbated in front of them. They further asserted that men would routinely make crude comments about the figures of female coworkers, and graffiti depictions of penises were everywhere: carved into tables, spray painted onto floors, and scribbled on walls. Managers and floor supervisors were accused of giving women better assignments in return for sex and punishing those who refused. 14 In the 1990s, lawsuits and an EEOC action led to a $22 million settlement in which Ford admitted to widespread misconduct and committed to crack down on the offenders. However, it seems Ford still did not learn its lesson, or, after almost three decades, the memory dimmed and the company slipped right back into old habits. In August 2017, the EEOC reached a new $10 million settlement with Ford for sexual and racial harassment at the two Chicago plants. Though Ford did not admit any wrongdoing in the recent settlement, it appears that neither millions of dollars in earlier damages nor promises by management led to any serious change. The New York Times interviewed some of the women at Ford, 15 and Sharon Dunn, who was a party to the first case and is now again a party to the second, said, “For all the good that was supposed to come out of what happened to us, it seems like Ford did nothing. If I had that choice today, I wouldn’t say a damn word.”

A Satisfied Workforce
Although the workplace should be free of harassment and intimidation of every sort, and management should provide a setting where all employees are treated with dignity and respect, ideally, employers should go much further. Most people spend at least one-third and possibly as much as one-half of their waking hours at work. Management, therefore, should make work a place where people can thrive, one that fosters an atmosphere in which they can be engaged and productive. Workers are happier when they like where they work and when they do not have to worry about childcare, health insurance, or being able to leave early on occasion to attend a child’s school play, for example. For our grandparents’ generation, a good job was dependably steady, and employees tended to stay with the same employer for years.
There were not many extras other than a secure job, health insurance, and a pension plan. However, today’s workers expect these traditional benefits and more. They may even be willing to set aside some salary demands in exchange for an environment featuring perquisites (or “perks”; nonmonetary benefits) such as a park-like campus, an on-the-premises gym or recreational center, flextime schedules, on-site day care and dry cleaning, a gourmet coffee house or café, and more time off. This section will explore how savvy managers establish a harmonious, compassionate workplace while still setting expectations of top performance.

Happy employees are more productive and more focused, which enhances their performance and leads to better customer treatment, fewer sick days, fewer on-the-job accidents, and less stress and burnout. They are more engaged in their work, more creative, and better team players, and they are more likely to help others and demonstrate more leadership qualities. How, then, does an employer go about the process of making workers happy?

Research has identified several pitfalls that managers should avoid if they want to have a good working relationship with their direct reports and, indeed, all their employees. 16 One is making employees feel like they are just employees. To be happy at work, employees, instead, need to feel like they know each other, have friends at work, are valued, and belong. Another pitfall is remaining aloof or above your employees. Taking an authentic interest in who they are as people really does matter. When surveys ask employees, “Do you feel like your boss cares about you?,” too frequently the answer is no. One way to show caring and interest is to recognize when employees are making progress; another might be to take a personal interest in their lives and families. Asking employees to share their ideas and implementing these ideas whenever possible is another form of acknowledgement and recognition. Pause and highlight important milestones people achieve, and ensure that they feel their contributions are noticed by saying thank you.

Good advice to new managers includes making work fun. Allow people to joke around as appropriate so that when mistakes occur, they can find humor in the situation and move forward without fixating simply on the downside. Celebrate accomplishments. Camaraderie and the right touch of humor can build a stronger workplace culture. Encourage exercise and sleep rather than long work hours, because those two factors improve employees’ health, focus, attention, creativity, energy, and mood. In the long run, expecting or encouraging people to regularly work long hours because leaving on time looks bad is counterproductive to the goals of a firm. Accept that employees need to disengage sometimes. People who feel they are always working because their management team expects they must remain in touch via e-mail or mobile phone can become tremendously stressed. To combat this, companies should not expect their workers to be available around the clock, and workers should not feel compelled to be so available. Rather, employers should allow employees to completely disengage regularly so they can focus on their friends and families and tend to their own personal priorities.
By way of international comparison, according to a recent article in Fortune, Germany and France have actually gone as far as banning work-related e-mails from employers on the weekends, which is a step in the right direction, even if only because disconnecting from work is now mandated by law. 17

Employers must decide exactly how to spend the resources they have allocated to labor, and it can be challenging to make the right decision about what to provide workers (Figure 6.5). Should managers ask employees what they want? Benchmark the competition? Follow the founder’s or the board’s recommendations? How does a company make lifestyle benefits fair and act ethically when there is backlash against family-friendly policies from people who do not have their own families? Unlike the purchase of raw materials, utilities, and other budgetary items, which is driven primarily by cost and may present only a few choices, management’s offering of employee benefits can present dozens of options, with costs ranging from minimal to very high. Work-at-home programs may actually cost the company very little, for example, whereas health insurance benefits may cost significantly more. In many other industrialized countries, the government provides (i.e., subsidizes) benefits such as health insurance and retirement plans, so a company does not have to weigh the pros and cons (i.e., do a cost-benefit analysis) of what to offer in this area. In the United States, employee benefits become part of a cost-benefit analysis, especially for small and mid-sized companies. Even larger companies today are debating what benefits to offer.

Management has to decide not only how much money to spend on benefits and perks but precisely what to spend the money on. Another decision is what benefit choices management should allow each employee to make, and which choices to make for the workforce as a whole. The best managers communicate regularly with their workforce; as a result, they are more likely to know (and be able to inform top management about) the types of perks most desired and most likely to attract and keep good workers.

Figure 6.6 shows that men and women do not always want the same benefits, which presents a challenge for management. For instance, many women place about twice as much value as many men do on day care (23 percent vs. 11 percent) and on paid family leave (24 percent vs. 14 percent). Also generally valued more highly by women than by men are better health insurance, work-from-home options, and flexible hours, whereas more men value an on-site gym and free coffee more than women typically do. Age and generation also play a role in the types of perks that employees value. Workers aged eighteen to thirty-five rank career advancement opportunities (32 percent) and work-life balance (33 percent) as most important to them at work. However, 42 percent of workers older than thirty-five say work-life balance is the most important feature. This is likely because Generation X (born in the years 1965–1980) places a high value on opportunities for work-life balance, although, like Baby Boomers (born in the years 1946–1964), its members also value salary and a solid retirement plan. On the other hand, Millennials (born in the years 1981–1997) appreciate flexibility: having a choice of benefits, paid time off, the ability to telecommute, flexible hours, and opportunities for professional development. 18
The menu of benefits and perks thus depends on several variables, such as what the company can afford, whether employees value perks over the more direct benefit of higher pay, what the competition offers, what the industry norm is, and the company’s geographic location. For example, Google is constantly searching for ways to improve the health, well-being, and morale of its “Googlers.” The company is famous for offering unusual perks, like bicycles and electric cars to get staff around its sprawling California campus. Additional benefits are generous paid parental leave for new parents, on-site childcare centers at one location, paid leaves of absence to pursue further education with tuition covered, and on-site physicians, nurses, and health care. Other perks are gaming centers, organic gardens, eco-friendly furnishings, a pets-at-work policy, meditation and mindfulness training, and travel insurance and emergency assistance on personal and work-related travel. On the death of a Google employee, his or her spouse or domestic partner is compensated with a check for 50 percent of the employee’s salary each year for a decade. In addition, all a deceased employee’s stock options vest immediately for the surviving spouse or domestic partner. Furthermore, a deceased employee’s children receive $1,000 per month until they reach the age of nineteen, or until the age of twenty-three if they are full-time students. 19

Link to Learning
Of course, Google is not the only company that offers good perks. Another is the software giant SAS. Glassdoor has an article describing some interesting benefits offered by other companies in 2017. Do a quick comparison of a few of these companies. Do the perks influence your choice? Would you be willing to work for any of them?

In addition to offering benefits and perks, managers can foster a healthy workplace by applying good “people skills” as well. Managers who are respectful, open, transparent, and approachable can achieve two goals simultaneously: a workforce that is happier and also one that is more productive. Good management requires constant awareness that each team member is also an individual working to meet both personal and company goals. Effective managers act on this by regularly meeting with employees to recognize strengths, identify constructive ways to improve on weaknesses, and help workers realize collective and individual goals. Ethical businesses and good managers also invest in efforts like performance management and employee training and development. These commitments call for giving employees frequent and honest feedback about what they do well and where they need improvement, thereby enabling them to develop the skills they need, not only to succeed in the current job but to move on to the next level. Fostering teamwork by treating people fairly and acknowledging their strengths is also an important responsibility of management. Ethical managers, therefore, demonstrate most, if not all, of the following qualities: cultural awareness, a positive attitude, warmth and empathy, authenticity, emotional intelligence, patience, competence, accountability, respectfulness, and honesty.

6.2 What Constitutes a Fair Wage?
Learning Objectives
By the end of this section, you will be able to:
- Explain why compensation is a controversial issue in the United States
- Discuss statistics about the gender pay gap
- Identify possible ways to achieve equal pay for equal work
- Discuss the ethics of some innovative compensation methods

The Center for Financial Services Innovation (CFSI) is a nonprofit, nonpartisan organization funded by many of the largest American companies to research issues affecting workers and their employers. Findings of CFSI studies indicate that employee financial stress permeates the workplaces of virtually all industries and professions. This stress eats away at morale and affects business profits. A recent CFSI report details data showing that “85% of Americans are anxious about their personal financial situation, and admit that their anxiety interferes with work. Furthermore, this financial stress leads to productivity losses and increased absenteeism, healthcare claims, turnover and costs affecting workers who cannot afford to retire.” 20 The report also indicates that employees with high financial anxiety are twice as likely to take unnecessary sick time, which can be expensive for an employer. The CFSI report makes clear that ensuring workers are paid a fair wage is not only an ethical practice; it is also an effective way to achieve employees’ highest and most productive level of performance, which is what every manager wants. In the process, it also makes workers more loyal to the company and less likely to jump ship at the first sign of a slightly better wage somewhere else.

The concept of a fair wage has a greater significance than simply one worker’s pay or one company’s policy. It is an economic concept critical to the nation as a whole in an economic system like capitalism, in which individuals pay for most of what they need in life rather than receiving government benefits funded by taxes. The ethical issues for the business community and for society at large are to identify democratic systems that can effectively eradicate the financial suffering of the poorest citizens and to generate sufficient wages to support the economic sustainability of all workers in the United States. Put another way, has the real income of average American workers declined so much over the past few decades that it now threatens the productivity of the largest economy in the world?

Economic Data as an Indicator of Fair Wages
The Pew Research Center indicates that over the thirty-five years between 1980 and 2014, the inflation-adjusted hourly wages of most middle-income American workers were nearly stagnant, rising just 6 percent, or an average of less than 0.2 percent per year. 21 (The Pew Research Center defines middle-class adults as those living in households with disposable incomes ranging from 65 percent to 200 percent of the national median, which is approximately $60,000.) The data collected by the Economic Policy Institute, a nonprofit, nonpartisan think tank, show the same stagnant trend. 22 Contrast this picture with the wages of high-income workers, which rose 41 percent over the same years. Many economists, political leaders, and even business leaders admit that increasing wage and wealth disparities are not a sustainable pattern if the U.S. economy is to succeed in the long term. 23 Wage growth for all workers must be fair, which, in most cases, means higher wages for low- and middle-income workers.
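The Pew figures above lend themselves to a quick sanity check. The following minimal sketch (in Python; its only inputs are the 6 percent total and the thirty-five-year span taken from the text) confirms that such growth compounds to well under 0.2 percent per year:

```python
# Back-of-the-envelope check: 6% total real-wage growth over 35 years.
total_growth = 0.06   # rise in inflation-adjusted hourly wages (Pew)
years = 35

compound_annual = (1 + total_growth) ** (1 / years) - 1
print(f"{compound_annual:.3%} per year")  # ~0.167% -- under 0.2% per year
```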
Figure 6.7 presents evidence of the growth of the income gap in the United States since the start of the Great Recession in 2007. No reasonable person, regardless of profession or political party, would dispute that employees are entitled to a fair or just wage. Rather, it is in the calculation of a fair wage that the debate begins. Economists, sociologists, psychologists, and politicians all have opinions about this, as do most workers. Some of the factors that feature in calculations are federal and state minimum-wage standards, the cost of living, and the rate of inflation. Should a fair wage include enough money to raise a family, too, if the wage earner is the sole or principal support of a family?

Figure 6.8 shows the growth, or lack of growth, in the buying power of a minimum-wage earner since 1940. Compare the twenty-year period of 1949 through 1968 with the fifty-year period from 1968 through 2017. The difference has created a sobering reality for many workers. In the nearly six decades since 1960, the inflation-adjusted real minimum wage actually declined by 23 percent. Minimum-wage workers thus did not even break even; the purchasing power of their wages fell, meaning they have effectively worked more than half a century without a raise. In the chart, nominal wage represents the actual amount of money a worker earns per hour; real wage represents the nominal wage adjusted for inflation. We consider real wages because nominal wages do not take into account changes in prices and, therefore, do not measure workers’ actual purchasing power.

One positive development for minimum-wage workers is that state governments have taken the lead in what was once viewed primarily as a federal issue. Today, most states have a higher minimum hourly wage than the federal minimum of $7.25. States with the highest minimum hourly wages are Washington ($11.50), California and Massachusetts ($11.00), Arizona and Vermont ($10.50), New York and Colorado ($10.40), and Connecticut ($10.00). Some cities have even higher minimum hourly wages than under state law; for example, San Francisco and Seattle are at $15.00. As of the end of 2017, twenty-nine states had higher minimum hourly wages than the federal rate, according to Bankrate.com (Figure 6.9).

Unfair Wages: The Gender Pay Gap
Even after all possible qualifiers have been added, it remains true that women earn less than men. Managers sometimes offer multiple excuses to justify pay inequities between women and men, such as, “Women take time off for having babies” or “Women have less experience,” but these usually do not explain away the differences. The data show that a woman with the same education, experience, and skills, doing the same job as a man, is still likely to earn less, at all levels from bottom to top. According to a study by the Institute for Women’s Policy Research, even women in top positions such as CEO, vice president, and general counsel often earn only about 80 percent of what men with the same job titles earn. 24 Data from the EEOC over the five years from 2011 through 2015 for salaries of senior-level officials and managers (defined by the EEOC as those who set broad policy and are responsible for overseeing execution of those policies) show women in these roles earned an average of about $600,000 per year, compared with their male counterparts, who earned more than $800,000 per year. 25 That $200,000 difference amounts to a wage gap of about 35 percent each year.
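Because the size of a pay gap depends on the baseline against which it is measured, a minimal sketch using the EEOC senior-manager averages quoted above may help. Note that the male figure is a lower bound (the text says “more than $800,000”), so both gap measures below are conservative:

```python
# Gender pay gap from the EEOC senior-manager averages cited above.
# men_avg is a lower bound ("more than $800,000"), so these gaps are conservative.
women_avg, men_avg = 600_000, 800_000

print(f"Women earn {women_avg / men_avg:.0%} of men's pay")                           # 75%
print(f"Gap measured against men's pay:   {(men_avg - women_avg) / men_avg:.0%}")     # 25%
print(f"Gap measured against women's pay: {(men_avg - women_avg) / women_avg:.0%}")   # 33%
```

Read this way, the chapter’s figure of about 35 percent is closest to the gap expressed relative to women’s earnings.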
The same is true in mid-level jobs as well. In a long-term study of compensation in the energy industry, researchers looked at the job of a land professional (who negotiates with property owners to lease land on which the oil companies then drill wells) and found evidence of women consistently getting paid less than men for doing the same job. Median salaries were compared for male and female land professionals with similar experience (one to five years) and educational background (bachelor’s degree), and men earned $7,000 more per year than their female counterparts. 26

Doesn’t the law require men and women to be paid the same? The answer is yes and no. Compensation discrimination has been illegal for more than fifty years under a U.S. law called the Equal Pay Act, passed in 1963. But the problem persists. Women earned about 60 percent of what men earned in 1960, and that value had risen to only 80 percent by 2016. Given these historic rates, women are not projected to reach pay equity until at least 2059, with projections based on recent trends predicting dates as late as 2119. 27 These are aggregate data; thus, they include women and men with the same job, or similar jobs, or jobs considered to fall in the same general category, but the data do not compare the salary of a secretary to that of a CEO, which would be an unrealistic comparison.

Equal pay under the law means equal pay for the “same” job, but not for the “equivalent” job. Those companies wishing to avoid strict compliance with the law may use several devices to justify unequal pay, including using slightly different job titles, slightly different lists of job duties, and other techniques that lead to different pay for different employees doing essentially the same job. Women have taken employers to court for decades, only to find their lawsuits unsuccessful because proving individual compensation discrimination is very difficult, especially given that multiple factors can come into play in compensation decisions. Sometimes class-action lawsuits have been more successful, but even then plaintiffs often lose. Can anything be done to achieve equal pay? One step would be to pass a new law strengthening the rules on equal pay, but two recent attempts to pass the Paycheck Fairness Act (S.84, H.R.377) and the Fair Pay Act (S.168, H.R.438) narrowly failed. 28 These or similar bills, if ever enacted into law, would significantly reduce wage discrimination against those who work in similar job categories by establishing equal pay for “equivalent” work, rather than the current law’s narrower standard of the “same” job. The idea of pay equivalency is closely related to comparable worth, a concept that has been put into action on a limited basis over the years, but never on a large scale. Comparable worth holds that workers should be paid on the basis of the worth of their job to the organization. Equivalent work and comparable worth can be important next steps in the path to equal pay, but they are challenging to implement because they require rethinking the entire basis for pay decisions.

Link to Learning
Though the federal government has not yet passed the Paycheck Fairness Act, some states have taken action on their own. The website for the National Conference of State Legislatures’ section on state equal pay laws provides a chart listing states that go beyond the current federal law to mandate equal pay for comparable or equivalent work. Look up your state in the chart. How does it compare with others in this regard?
If a woman’s starting salary for the first job of her career is less than that of a man, the initial difference, even if small, tends to cause a systemic, career-long problem in terms of pay equity. Researchers at Temple University and George Mason University found that if a new hire gets $5,000 more than another worker hired at the same time, the difference is significantly magnified over time. Assuming an average annual pay increase of 5 percent, an employee starting with a $55,000 salary will earn at least $600,000 more over a forty-year career than an employee who starts an equivalent job with a $50,000 salary (a result verified in the sketch following the feature box below). This significantly affects many personal decisions, including retirement, because, all other things being equal, a lower-paid woman will have to work three years longer than a man to earn the same amount of money over the course of her career. 29

Ethics Across Time and Cultures
European Approaches to the Gender Pay Gap
The policies of other nations can offer some insight into how to address pay inequality. Iceland, for example, has consistently been at the top of the world rankings for workplace gender equality in the World Economic Forum survey. 30 A new Icelandic law went into effect on January 1, 2018, that makes it illegal to pay men more than women, gauged not by specific job category but rather across all jobs collectively at any employer with twenty-five or more employees, a concept known as an aggregate salary data approach. 31 The burden of proof is on employers to show that men and women are paid equally or they face a fine. The ultimate goal is to eliminate all pay inequities in Iceland by the year 2022. The United Kingdom has taken a first step toward addressing this issue by mandating pay transparency, which requires employers with 250 workers or more to publish details on the gaps in average pay between their male and female employees. 32

Policies not directly linked to salary can help as well. German children have a legal right to a place in kindergarten from the age of three years, which has allowed one-third of mothers who could not otherwise afford nursery school or kindergarten to join the workforce. 33 In the United Kingdom, the government offers up to thirty hours weekly of free care for three- and four-year-old children to help mothers get back in the workforce. Laws such as these allow women, who are often the primary caregivers in a household, to experience fewer interruptions in their careers, a factor often blamed for the wage gap in the United States. The World Economic Forum reports that about 65 percent of all Organization for Economic Cooperation and Development (OECD) countries have introduced new policies on pay equality, including requiring many employers to publish calculations every year showing the gender pay gap. 34 Steps such as the collection and reporting of aggregate salary data, or some form of early education or subsidized childcare, are positive steps toward eventually achieving the goal of wage equality.

Critical Thinking
Which of these policies do you think would be the most likely to be implemented in the United States and why? How would each of the normative theories of ethical behavior (virtue ethics, utilitarianism, deontology, and justice theory) view this issue and these proposed solutions?
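As noted before the feature box, a small starting-salary difference compounds dramatically. This minimal sketch (assuming, per the study cited above, 5 percent average annual raises over a forty-year career) reproduces the roughly $600,000 career gap:

```python
# Career earnings gap from a $5,000 starting-salary difference,
# assuming 5% annual raises over a 40-year career (per the study cited above).

def career_earnings(start_salary: float, annual_raise: float = 0.05,
                    years: int = 40) -> float:
    """Total nominal earnings over a career with compounding raises."""
    return sum(start_salary * (1 + annual_raise) ** t for t in range(years))

gap = career_earnings(55_000) - career_earnings(50_000)
print(f"Career earnings gap: ${gap:,.0f}")  # ~$604,000
```

The result, about $604,000, matches the study’s “at least $600,000” figure.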
Part of the reason that initial pay disparity is heightened over a career is that when a worker changes jobs, the new employer usually asks what the employee was making in his or her last job and uses that as a baseline for pay in the new job. To combat the problem of history-based pay, which often hurts women, eight states (and numerous municipalities) in the United States now ban employers from asking job applicants to name their last salary. 35 Although this restriction will not solve the entire problem, it could have a positive effect if it spreads nationally. In a survey by the executive search firm Korn Ferry, forty-six of one hundred companies said they usually comply with the legal requirements in force in the strictest of the locations in which they operate, meaning workers in states without this law might not be asked about their salary history during new-job negotiations either. 36

Experiments in Compensation
Whether we are discussing fair wages, minimum wages, or equal wages, the essence of the debate often boils down to ethics. What should people get paid, who should determine that, and should managers and upper management do only what is required by law or go above and beyond if that means doing what they think is right?

Organizational pay structures are set by a variety of methods, including internal policies, the advice of outside compensation consultants, and external data, such as market salaries. An innovative compensation decision in Seattle may provide some insight. In 2011, a young man earning $35,000 a year told his boss at Gravity, a credit-card payments business, that his earnings were not sufficient for a decent life in expensive Seattle. The boss, Dan Price, who cofounded the company in 2004, was somewhat surprised, as he had always taken pride in treating employees well. Nevertheless, he decided his employee was right. For the next three years, Gravity gave every employee a 20 percent annual raise. Still, profit continued to outgrow wages. So Price announced that over the next three years, Gravity would phase in a minimum salary of $70,000 for all employees. He reduced his own salary from $1 million to $70,000 to demonstrate the point and help fund it. The following week, five thousand people applied for jobs at Gravity, including a Yahoo executive who took a pay cut to transfer to a company she considered fun and meaningful to work for.

Price’s decision started a national debate: How much should people be paid? Since 2000, U.S. productivity has increased 22 percent, yet inflation-adjusted median wages have increased only 2 percent. That means a larger share of capitalism’s rewards is going to shareholders and top executives (who already earn an average of three hundred times more than typical workers, up from seventy times more just a decade ago), and a smaller share is going to workers. If Gravity profits while sharing the benefits of capitalism more broadly, Price’s actions will be seen as demonstrating that underpaying the workforce hurts employers. If it fails, it may look like proof that companies should not overpay.

Price recognized that low starting salaries were antithetical to his values and felt that struggling employees would not be motivated to maintain the high quality that made his company successful. He calls the $70,000 minimum wage an ethical and moral imperative rather than a business strategy, and, though it will cost Gravity about $2 million per year, he has ruled out price increases and layoffs. More than half the initial cost was offset by his own pay cut, the rest by profit. Revenue continues to grow at Gravity, along with the customer base and the workforce. Currently, the firm has a retention rate of 91 percent. 37
Yet Price says managers’ scorecards should measure purpose, impact, and service as much as profit. Michael Wheeler, a professor at Harvard Business School who teaches a course called “Negotiation and The Moral Leader,” recently discussed the aftermath of Dan Price’s decision at Gravity. He interviewed other entrepreneurs about their plans for creative compensation to help develop a happy and motivated workforce, and it appears that some other companies are taking notice of how successful Gravity has been since Price made the decision to pay his workers more. 38

One of these entrepreneurs was Megan Driscoll, the CEO of Pharmalogics Recruiting, who, after hearing Dan Price speak to a group of executives, was inspired to raise the starting base pay of her employees by 33 percent. When Driscoll put her plan to work, her business had forty-six employees and $6.7 million in revenue. A year later, staff and revenues had jumped to seventy-two and $15 million, respectively. Driscoll points to data showing her people are working harder and smarter after the pay raise than before. There has been a 32 percent increase in clients, and the client retention rate doubled to 80 percent. 39 Stephan Aarstol, CEO of Tower Paddle Boards, wanted to give his workers a raise, but his company did not have the cash. Instead, Aarstol boldly cut the work day to five hours from the ten hours most employees had been working. Because salaries stayed the same, that essentially doubled employees’ hourly pay, and as a result, he says, employee focus and engagement have skyrocketed, as have company profits. 40 Managers must carefully balance short-term results, such as quarterly profits, against long-term sustainability as a successful company. This requires recognizing the value of work that each person contributes and devising a fair, and sometimes creative, compensation plan.

6.3 An Organized Workforce

Learning Objectives
By the end of this section, you will be able to:
- Discuss trends in U.S. labor union membership
- Define codetermination
- Compare labor union membership in the United States with that in other nations
- Explain the relationship between labor productivity gains and the pay ratio in the United States

The issue of worker representation in the United States is a century-old debate, with economic, ethical, and political aspects. Are unions good for workers, good for companies, good for the nation? There is no single correct response. Your answer depends upon your perspective: whether you are a worker, a manager, an executive, a shareholder, or an economist. How might an ethical leader address the gap between labor’s productivity gains and workers’ relatively stagnant wages, compared with management’s?

Organized Labor
Americans’ longstanding belief in individualism makes some managers wonder why employees would want or need to be represented by a labor union. The answer is, for the same reasons a CEO wants to be represented by an attorney when negotiating an employment contract, or that an entertainer wants to be represented by an agent. Unions act as the agent/lawyer/negotiator for employees during collective bargaining, a negotiation process aimed at getting management’s agreement to a fair employment contract for members of the union. Everyone wants to be successful in any important negotiation, and people often turn to professionals to help them in such a situation. However, in the United States, as elsewhere around the globe, the concept of worker organization has been about more than simply good representation.
Unionization and worker rights have often been at the core of debates related to class economics, political power, and ethical values. There are legitimate points on each side of the union debate (Table 6.1).

Table 6.1: Pros and Cons of Unions
Pros of Unions | Cons of Unions
Unions negotiate increased pay and benefits for workers. | Unions can make it harder to fast-track promotions for high-performing workers and/or get rid of low-performing ones.
Unions create a formal dispute resolution process for workers. | Workers are required to pay union dues/fees that some might rather not pay.
Unions act as an organized lobbying group for worker rights. | Unions sometimes lead to a closed culture that makes it harder to diversify the workforce.
Collective bargaining agreements often set norms for employment for an entire industry, benefiting all workers, including those who are not at a union company. | Collective bargaining contracts can drive up costs for employers and lead to an adversarial relationship between management and workers.

The value of unions is a topic that produces significant disagreement. Historically, unions have attained many improvements for workers in terms of wages and benefits, standardized employment practices, labor protections (e.g., child labor laws), workplace environment, and on-the-job safety. Nevertheless, sometimes unions have acted in their own interests to sustain their own existence, without primary concern for the workers they represent.

The history of the worker movement (summarized in the video in the following Link to Learning) reveals that in the first half of the twentieth century, wages were abysmally low, few workplace safety laws existed, and exploitive working conditions allowed businesses to use child labor. Unions stepped in and played an important role in leveling the playing field by representing the interests of the workers. Union membership grew to a relatively high level (33% of wage and salary workers) in the 1950s, and unions became a force in politics. However, their dominance was relatively short-lived, not least because in the 1960s, the federal government started to enact employment laws that codified many of the worker protections unions had championed. In the 1980s and 1990s, the U.S. economy gradually evolved from manufacturing, where unions were strong, to services, where unions were not as prevalent. The service sector is more difficult to organize, due to a variety of factors such as the historical absence of unions in the sector, workers’ widely differing work functions and schedules, challenging organizational status, and white-collar bias against unions.

Link to Learning
This three-minute video entitled “The Rise and Fall of U.S. Labor Unions” summarizes the history of the union movement. It is based on information from University of California Santa Cruz Professor William Domhoff and the University of Houston Bauer College of Business.

These developments, along with the appearance of state right-to-work laws, have led to a decline in unions and their membership. Right-to-work laws give workers the option of not joining the union, even at companies where the majority has voted to be represented by a union, resulting in lower membership. Right-to-work laws attempt to counter the concept of a union shop or closed shop, under which all new hires are automatically enrolled in the labor union appropriate to their job function and union dues are automatically deducted from their pay.
Some question the fairness of right-to-work laws, because they allow those who do not join the union to get the same pay and benefits as those who do join and who pay union dues for their representation. On the other hand, right-to-work laws provide workers the right of choice; those who do not want to join a union are not forced to do so. Those who do not choose to join may end up having a strained relationship with union workers, however, when a union-mandated strike occurs. Some non-union members, and even union members, elect to cross the picket line and continue to work. Traditionally, these “scabs,” as they are derisively labeled by unions, have faced both overt and subtle retaliation at the hands of their coworkers, who prioritize loyalty to the union. Twenty-eight states have right-to-work laws (Figure 6.10). Notice that many right-to-work states, such as Michigan, Missouri, Indiana, Wisconsin, Kentucky, Tennessee, Alabama, and Mississippi, are among the top ten states where automobiles are manufactured and unions once were strong.

According to the U.S. Bureau of Labor Statistics, total union membership in the United States dropped to 20 percent of the workforce in 1980; by 2016, it was down to about half that (Figure 6.11). 41 Public sector (government) workers have a relatively high union membership rate of 35 percent, more than five times that of private-sector workers, which is at an all-time low of 6.5 percent. White-collar workers in education and training, as well as first responders such as police and firefighters, now have some of the highest unionization rates, also 35 percent. Among states, New York continues to have the highest union membership rate at 23 percent, whereas South Carolina has the lowest, at slightly more than 1 percent.

Codetermination is a workplace concept that goes beyond unionization to embrace shared governance, in which management and workers cooperate in decision-making and workers have the right to participate on the board of directors of their company. Board-level representation by employees is widespread in European Union countries. Most codetermination laws apply to companies over a certain size. For example, in Germany, they apply to companies with more than five hundred employees. 42 The labor union movement never has been quite as strong in the United States as in Europe (the trade-union movement began in Europe and remains more vibrant there even today), and codetermination is thus not common in U.S. companies (Table 6.2).

Table 6.2: Unionization as Percentage of Workforce in Nine Industrialized Nations
Country | Workforce in Unions, %
Australia | 25
Canada | 30
France | 9
Germany | 26
Italy | 35
Japan | 22
Sweden | 82
United Kingdom | 29
United States | 12

Labor union membership remains much higher in Europe and other Group of Seven (G7) countries than in the United States. Only France has a lower percentage of union membership. 43

Codetermination has worked relatively well in some countries. For example, in Germany, workers, managers, and the public at large support the system, and it has often resulted in workers who are more engaged and have a real voice in their workplaces. Management and labor have cooperated, which, in turn, has led to higher productivity, fewer strikes, better pay, and safer working conditions for employees, which is a classic win-win for both sides.

Pay and Productivity in the United States
Some managers, politicians, and even members of the general public believe unions are a big part of the reason that U.S. companies have difficulty competing in the global economy.
companies have difficulty competing in the global economy. The conservative think tank Heritage Foundation conducted a study that concluded unions may be responsible, in part, for a slower work process and reduced productivity. 44 However, multiple other studies indicate that U.S. productivity is up. 45 Productivity in the United States increased 74 percent in the period 1973 to 2016, according to the OECD. In global productivity rankings, most studies indicate the United States ranks quite high, among the top five or six countries in the world and number two on the list compiled by the OECD (Table 6.3).

Table 6.3 Productivity in 2015 by Country (Sample of Eight Industrialized Nations)

Country | Productivity (output/hours worked)
Australia | 102.20
Canada | 109.45
Germany | 105.90
Japan | 103.90
Mexico | 105.10
South Korea | 97.60
United Kingdom | 100.80
United States | 108.87

This table compares 2015 productivity among several industrialized nations. U.S. productivity ranks high on the list. 46

During the same period as the productivity gains discussed in the preceding paragraph, 1973 to 2016, wages for U.S. workers increased only 12 percent. In other words, productivity has grown six times more than pay. Taken together, these facts mean that American workers, union members or not, should not shoulder the blame for competitive challenges faced by U.S. companies. Instead, they are a relative bargain for most companies. Figure 6.12 compares productivity and pay and demonstrates the growing disparity between the two, based on data collected by the Economic Policy Institute.

Is Management Compensation Fair?
We gain yet another perspective on labor by looking at management compensation relative to that of employees. Between 1978 and 2014, inflation-adjusted CEO pay increased by almost 1,000 percent in the United States, while worker pay rose 11 percent. 47 A popular way to compare the fairness of a company's compensation system with that in other countries is the widely reported pay ratio, which measures how many times greater CEO pay is than the pay of the average employee. The average multiplier in the United States is in the range of three hundred; that is, CEO pay is, on average, three hundred times as high as the pay of the average worker in the same company. In the United Kingdom, the multiplier is twenty-two; in France, it is fifteen; and in Germany, it is twelve. 48 The 1965 U.S. ratio was only twenty to one, which raises the question, why and how did CEO pay rise so dramatically high in the United States compared with the rest of the world? Are CEOs in the United States that much better than CEOs in Germany or Japan? Do American companies perform that much better? Is this ratio fair to investors and employees? A large part of executive compensation is in the form of stock options, which frequently are included in the calculation of an executive's salary and benefits, rather than direct salary. However, this, in turn, raises the question of whether all or a portion of the general workforce should also share in some form of stock options.
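Both comparisons above reduce to simple division. The short Python sketch below shows the arithmetic; the salary figures and function names are illustrative assumptions for demonstration, not data drawn from the studies cited in this chapter.

```python
# Illustrative sketch only: the salary figures below are hypothetical,
# chosen to land near the averages reported in the text.

def pay_ratio(ceo_pay: float, average_worker_pay: float) -> float:
    """How many times greater CEO pay is than average employee pay."""
    return ceo_pay / average_worker_pay

def growth_gap(productivity_growth_pct: float, wage_growth_pct: float) -> float:
    """How many times faster productivity grew than pay over the same period."""
    return productivity_growth_pct / wage_growth_pct

# A hypothetical U.S. company near the reported average ratio of about 300:
print(pay_ratio(ceo_pay=15_000_000, average_worker_pay=50_000))  # 300.0

# The 1973-2016 figures cited above: productivity +74%, wages +12%:
print(round(growth_gap(74, 12), 1))  # 6.2, i.e., roughly "six times more"
```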
Link to Learning
Some corporate boards claim executive pay is performance based; others claim it is a retention strategy to prevent CEOs from going to another company for more money. This video shows former CEO Steven Clifford discussing CEO pay and claiming that U.S. executives often dramatically, and in many cases unjustifiably, boost their own pay to astronomical levels, leaving shareholders and workers wondering why. He also discusses how it can be stopped.

Everyone wants to be paid fairly for their work. Whether CEO or administrative assistant, engineer or assembly-line worker, we naturally look out for our own best interest. Thus, management compensation is a topic that often causes resentment among the rank and file, especially when organized workers go on strike. From the employee viewpoint, the question is why management often wants to hold the line when it comes to everyone's wages but their own.

Cases from the Real World
Verizon Strike
More than forty thousand Verizon workers went on strike in 2016 (Figure 6.13). The strike was eventually settled, with workers getting a raise, but bitter feelings and distrust remained on both sides. Workers thought management salaries were too high; management thought workers were seeking excessive raises. To continue basic phone services for its customers during the strike, Verizon called on thousands of non-union employees to perform the strikers' work. Non-union staff had to cross picket lines formed by fellow employees to go to work each day during the strike. Enmity toward these picket-line crossers was exceptionally high among some union members.

Critical Thinking
How does management reintroduce civility to the workplace to keep peace between different factions? How could Verizon please union workers after the strike without firing the picket-line crossers, some of whom were Verizon union employees who consciously chose to cross the picket line?

6.4 Privacy in the Workplace

Learning Objectives
By the end of this section, you will be able to:
Explain what constitutes a reasonable right to privacy on the job
Identify management's responsibilities when monitoring employee behavior at work

Employers are justifiably concerned about threats to and in the workplace, such as theft of property, breaches of data security, identity theft, viewing of pornography, inappropriate and/or offensive behavior, violence, drug use, and others. They seek to minimize these risks, and that often requires monitoring employees at work. Employers might also be concerned about the productivity loss resulting from employees using office technology for personal matters while on the job. At the same time, however, organizations must balance the valid business interests of the company with employees' reasonable expectations of privacy. Magnifying ethical and legal questions in the area of privacy is the availability of new technology that lets employers track all employee Internet, e-mail, social media, and telephone use. What kind and extent of monitoring do you believe should be allowed? What basic rights to privacy ought a person have at work? Does your view align more closely with the employer's or the employee's?

Legal and Ethical Aspects of Electronic Monitoring
Monitored workstations, cameras, microphones, and other electronic monitoring devices permit employers to oversee virtually every aspect of employees' at-work behavior (Figure 6.14). Technology also allows employers to monitor every aspect of computer use by employees, such as downloads of software and documents, Internet use, images displayed, time a computer has been idle, number of keystrokes per hour, words typed, and the content of e-mails. According to a survey by the American Management Association, 48 percent of employers used a form of video monitoring in the workplace, and 67 percent monitored employee Internet use.
In 30 percent of the organizations responding to the survey, this electronic monitoring had ultimately led to an employee's termination. 49

The laws and regulations governing electronic monitoring are somewhat indirect and inconsistent. Very few specific federal statutes directly regulate private employers when it comes to broad workplace privacy issues. However, monitoring is subject to various state rules under both statutory and common law, and sometimes federal and state constitutional provisions as well. The two primary areas of the law related to workplace monitoring are a federal statute called the Electronic Communications Privacy Act of 1986 (ECPA) and various state common law protections against invasion of privacy. 50 Although the ECPA may appear to prohibit an employer from monitoring its employees' oral, wire, and electronic communications, it contains two big exceptions that weaken its protection of employees' rights. One is the business purpose exception, which allows employers to monitor electronic and oral communications on the basis of a legitimate business purpose, and employers can generally assert that such a purpose is present. The other widely used exception is the consent exception, which allows employers to monitor employee communications provided employees have given their consent.

According to the Society for Human Resource Management, the ECPA definition of electronic communication applies to the electronic transmission of communications but not to their electronic storage. Therefore, courts have distinguished between monitoring electronic communications such as e-mail during transmission and viewing e-mails in storage. Viewing e-mails during transmission is broadly allowed, whereas viewing stored e-mail is considered similar to searching an employee's private papers and thus is not routinely allowed under the ECPA unless certain circumstances apply (e.g., the e-mails are stored in the employer's computer systems). 51

In general, it is legal for a company to monitor the use of its own property, including but not limited to computers, laptops, and cell phones. According to the ECPA, an employer-provided computer system is the property of the employer, and when the employer provides employees with a laptop they can take home, the employer likely violates no laws when it monitors everything employees do with that computer, whether business-related or personal. The same is true of an employer-provided cell phone or tablet, and it is always true when an employer gives employees notice of a written policy regarding electronic monitoring of equipment supplied by the company. Generally, the same is not true of equipment owned by the employee, such as a personal cell phone. However, an important distinction is based on the issue of consent. The consent provision in the ECPA is not limited to business communications only; therefore, a company might be able to assert the right to monitor personal electronic communications if it can show employee consent (although this is very likely to worry employees, as discussed in the next section). Another consideration is whose e-mail server is being used. The ECPA and some state laws generally make it illegal for employers to intercept private e-mail by using an employee's personal log-on/user ID/password information.

Although the ECPA and National Labor Relations Act are both federal laws, individual states are free to pass laws that impose greater limitations, and several states have done so.
Some require employers to provide employees advance written notice that specifies the types or methods of monitoring to which they will be subjected. Examples of state laws creating some degree of protection for workers include laws in California and Pennsylvania that require consent of both parties before any conversation can be monitored or recorded.

Employees can bring common law privacy claims to challenge employer monitoring. (Common laws are those based on prior court decisions rather than on legislatively enacted statutes.) To prevail on a common law claim of invasion of privacy, which is a tort, the employee must demonstrate a right to privacy with respect to the information being monitored. Several state constitutions, such as those in Louisiana, Florida, South Carolina, and California, expressly provide citizens a right to privacy, which may protect employees with respect to monitoring of their personal electronic information and personal communication in the workplace.

One additional regulatory consideration applicable to electronic monitoring is whether the company's workforce is unionized. The National Labor Relations Board, the federal labor law agency, has ruled that video surveillance of any portion of the workplace is a condition of employment subject to collective bargaining and must be agreed to by the union before implementation, so employees have notice. If a workplace is not unionized (the majority are not), then this federal regulation requiring notice does not apply, and as stated previously in this chapter, if there is any protection at all, it would have to be given by state regulation (which is rare in the private [nongovernmental] sector).

What Constitutes a Reasonable Monitoring Policy?
Many employees are not familiar with the specific details of the law. They may feel offended by monitoring, especially of their own equipment. Companies must also consider the effect on workplace morale if everyone feels spied upon, and the risk that some high-performing employees may decide to look elsewhere for career opportunities. Employers should develop a clear, specific, and reasonable monitoring policy. The policy should limit monitoring to that which is directly work related. For example, if a company is concerned about productivity and the goal of monitoring is to keep tabs on employee performance, then neither keystroke logging nor screenshot recording is necessary; software designed to show idle time or personal Internet use would be more helpful in identifying wasted time, which is the ultimate goal. Employers should always remember their business goals when monitoring employees. It is not only a matter of treating employees ethically; it also makes good business sense to ensure that monitoring pertains only to business matters and does not unnecessarily intrude into the privacy of employees. Perhaps most importantly, in the interest of fairness, the monitoring policy must be communicated to the employees. When, if ever, is it acceptable to monitor without notice to the employee and without his or her knowledge?
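To make the idle-time idea above concrete, here is a minimal Python sketch that flags long gaps between activity events instead of recording keystrokes or screenshots. It is illustrative only: the event timestamps and the thirty-minute threshold are assumptions, not features of any particular monitoring product or legal standard.

```python
from datetime import datetime, timedelta

# Hypothetical input: timestamps of a worker's computer activity events.
# A real system would collect these from the operating system; here we
# simply assume a small list for illustration.
events = [
    datetime(2024, 1, 8, 9, 0),
    datetime(2024, 1, 8, 9, 5),
    datetime(2024, 1, 8, 10, 45),  # 100-minute gap before this event
    datetime(2024, 1, 8, 10, 50),
]

IDLE_THRESHOLD = timedelta(minutes=30)  # assumed policy threshold

def idle_periods(timestamps, threshold):
    """Yield (start, end) pairs where the gap between consecutive
    activity events exceeds the threshold."""
    for earlier, later in zip(timestamps, timestamps[1:]):
        if later - earlier > threshold:
            yield earlier, later

for start, end in idle_periods(events, IDLE_THRESHOLD):
    print(f"Idle from {start:%H:%M} to {end:%H:%M} ({end - start})")
```

Note that this design records only gap boundaries, not content, which fits the chapter's advice that monitoring be limited to what is directly work related.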
Link to Learning
This notice by the State of Connecticut mandates that all employers inform employees of the kinds of electronic monitoring of their activities and communications that may be undertaken at work, and the responsibilities of an employer. Read the notice and decide whether you think it is a reasonable policy. Would it make sense to the average worker? Do you think it is unfair to either party?

The Connecticut policy in the preceding Link to Learning applies to all employers (i.e., in state and in private-sector workplaces). However, many states have policies that apply only to employees who work for the government. State employees hold a special status that conveys certain state constitutional rights with regard to due process, reasonable searches, and related legal doctrines. The same is true for federal government employees and the U.S. Constitution, which means the government has a duty of fairness in employee surveillance. It does not mean, however, that the government cannot monitor its employees at all, as demonstrated by an incident involving a California police officer.

In a unanimous decision in Ontario v. Quon, 52 the U.S. Supreme Court in 2010 ruled in favor of a police chief in Ontario, California, who read nearly five hundred text messages sent by one of his sergeants on a police-issued pager. Many of the text messages were personal and some were sexually explicit. Only a few dozen were work related. The justices agreed that constitutional limits on unreasonable searches by public employers (under the Fourth Amendment) were minimal given a work-related purpose. This decision creates precedent for more than 25 million employees of federal, state, and local governments and limits their expectation of privacy when using employer-issued tools. "Because the search [by the police chief] was motivated by a legitimate work-related purpose and because it was not excessive in scope, the search was reasonable," said Justice Anthony M. Kennedy.

In the private sector, where employees are not working for the government and the constitutional prohibition on unreasonable searches and seizures has very little applicability, if any, employers have even more latitude in terms of employee monitoring than in a government setting. The Ontario v. Quon case in all likelihood would never even have made it to court if the employer were a private-sector company, because the issue of whether obtaining the text messages was a reasonable search and seizure under the Fourth Amendment does not apply in a nongovernment employment setting. The Constitution acts to limit government intrusions but does not generally restrict private companies in this type of situation. However, ethical considerations may encourage private-sector employers to treat their workers respectfully, even if not required by law.

What Would You Do?
Security versus Privacy
You manage a large, high-end jewelry store with an international clientele. Your workforce of 150 is demographically diverse, and your employees are trustworthy as a rule. However, you have experienced some unexplained loss of inventory and suspect a couple of employees are stealing valuable pieces, removing them from backroom storage safes and handing them off to another person somewhere in the store who leaves with them or to a third person pretending to be a customer. To prevent this, your assistant managers are urging you to place discreet cameras in the restrooms and break rooms, where these exchanges are likely occurring. Some managers might be concerned about using cameras at all due to privacy issues; others might want to use them without notifying employees or putting up signs because they do not want to tip off the suspects or deal with the negative reaction of the workforce (although that brings up invasion-of-privacy issues). You are weighing the pros of catching the thieves against the possible loss of other employees' trust.
Critical Thinking
What issues must you confront as you decide whether you will take the recommendation of your assistant managers? What, ultimately, will you do? Explain your decision.

Drug Testing in the Workplace
Key issues that arise about a drug testing or monitoring program begin with whether an employer wants or needs to do it. Is it required by law for a particular job, under state or local regulations? Is it for pre-employment clearance? Does the employer need employees' permission? Does a failed test require mandatory termination? With the exception of employers in industries regulated by the federal government, such as airlines, trucking companies, rail lines, and national security-related firms, federal law is not controlling on the issue of drug testing in the workplace; it is largely a state issue. At the federal level, the Department of Transportation does mandate drug testing for workers such as airline crews and railway conductors and has a specific procedure that must be followed. However, for the most part, drug testing is not mandatory and depends on whether the employer wants to do it. Multiple states do regulate drug testing, but to varying degrees, and there is no common standard to be followed.

Testing of job applicants is the most common form of drug testing. State laws typically allow it, but the employer must follow state rules, if they exist, about providing notice and following standard procedures intended to prevent inaccurate samples. Testing current employees is much less common, primarily due to cost; however, companies that do use drug testing include some in the pharmaceutical and financial services industries. Some states put legal constraints on drug testing of private-sector employees. For example, in a few states, the job must include the possibility of property damage or injury to others, or the employer must believe the employee is using drugs.

Challenging a drug test is difficult because tests are considered highly accurate. An applicant or employee can refuse to take the test, but that often means not being hired or losing the job, assuming the worker is an employee at will. The concept of employment at will affirms that either the employee or the employer may dissolve an employment arrangement at will (i.e., without cause and at any time, unless an employment contract is in effect that stipulates differently). Most workers are considered employees at will because neither the employer nor the employee is obligated to the other; the worker can quit or be fired at any time for any reason because there is no contractual obligation. In some states, the employee risks not only job loss but also the denial of unemployment benefits if fired for refusing to take a drug test. Thus, the key concept that makes drug testing possible is employment at will, which covers approximately 85 percent of the employees in the private sector (unionized workers and top executives have contracts and thus are not at will, nor are government employees, who have due process rights). The only legal limitation is that, in some states, the drug testing procedure must be fair, accurate, and designed to minimize errors and false-positive results.

The drug testing process, however, raises some difficult privacy issues. Employers want and are allowed to protect against specimen tampering by taking such steps as requiring subjects to wear a hospital gown. Some employers use test monitors who check the temperature of the urine and/or listen as a urine sample is collected.
According to the Cornell University Law School Legal Information Institute, some state courts (e.g., Georgia, Louisiana, Hawaii) have found it an unreasonable invasion of privacy for the monitor to watch an employee in the restroom; however, in other states (e.g., Texas, Nevada), this is allowed. 53 Case examples abound of challenges based on privacy concerns. In an article in the Harvard Journal of Law and Technology, University of Houston Law School professor Mark Rothstein, who is director of the Health Law and Policy Institute, summarized examples of legal challenges. 54 In one case, the court ruled that an employer engaged in unlawful retaliation as defined by the Mine Safety and Health Act when it dismissed two employees who were required to urinate in the presence of others but found themselves unable to do so. In a different case, $125,000 in tort damages was awarded to a worker for invasion of privacy and negligent infliction of emotional distress after he was forced to submit a urine sample under direct observation.
Chapter Outline
24.1 Characteristics of Fungi
24.2 Classifications of Fungi
24.3 Ecology of Fungi
24.4 Fungal Parasites and Pathogens
24.5 Importance of Fungi in Human Life

Introduction
The word fungus comes from the Latin word for mushrooms. Indeed, the familiar mushroom is a reproductive structure used by many types of fungi. However, there are also many fungal species that do not produce mushrooms at all. Because fungi are eukaryotes, a typical fungal cell contains a true nucleus and many membrane-bound organelles. The kingdom Fungi includes an enormous variety of living organisms collectively referred to as Eumycota, or true fungi. While scientists have identified about 100,000 species of fungi, this is only a fraction of the 1.5 million species of fungi likely present on Earth. Edible mushrooms, yeasts, black mold, and the producer of the antibiotic penicillin, Penicillium notatum, are all members of the kingdom Fungi, which belongs to the domain Eukarya.

Fungi, once considered plant-like organisms, are more closely related to animals than plants. Fungi are not capable of photosynthesis: they are heterotrophic because they use complex organic compounds as sources of energy and carbon. Some fungal organisms multiply only asexually, whereas others undergo both asexual reproduction and sexual reproduction with alternation of generations. Most fungi produce a large number of spores, which are haploid cells that can undergo mitosis to form multicellular, haploid individuals. Like bacteria, fungi play an essential role in ecosystems because they are decomposers and participate in the cycling of nutrients by breaking down organic materials into simple molecules.

Fungi often interact with other organisms, forming beneficial or mutualistic associations. For example, most terrestrial plants form symbiotic relationships with fungi. The roots of the plant connect with the underground parts of the fungus, forming mycorrhizae. Through mycorrhizae, the fungus and plant exchange nutrients and water, greatly aiding the survival of both species. Alternatively, lichens are an association between a fungus and its photosynthetic partner (usually an alga). Fungi also cause serious infections in plants and animals. For example, Dutch elm disease, which is caused by the fungus Ophiostoma ulmi, is a particularly devastating type of fungal infestation that destroys many native species of elm (Ulmus sp.) by infecting the tree's vascular system. The elm bark beetle acts as a vector, transmitting the disease from tree to tree. Accidentally introduced in the 1900s, the fungus decimated elm trees across the continent. Many European and Asiatic elms are less susceptible to Dutch elm disease than American elms.

In humans, fungal infections are generally considered challenging to treat. Unlike bacteria, fungi do not respond to traditional antibiotic therapy, since they are eukaryotes. Fungal infections may prove deadly for individuals with compromised immune systems. Fungi have many commercial applications. The food industry uses yeasts in baking, brewing, and cheese and wine making. Many industrial compounds are byproducts of fungal fermentation. Fungi are the source of many commercial enzymes and antibiotics.
[ { "answer": { "ans_choice": 2, "ans_text": "chitin" }, "bloom": "2", "hl_context": "The only class in the Phylum Chytridiomycota is the Chytridiomycetes . The chytrids are the simplest and most primitive Eumycota , or true fungi . The evolutionary record shows that the first recognizable chytrids appeared during the late pre-Cambrian period , more than 500 million years ago . <hl> Like all fungi , chytrids have chitin in their cell walls , but one group of chytrids has both cellulose and chitin in the cell wall . <hl> Most chytrids are unicellular ; a few form multicellular organisms and hyphae , which have no septa between cells ( coenocytic ) . They produce gametes and diploid zoospores that swim with the help of a single flagellum . <hl> Like plant cells , fungal cells have a thick cell wall . <hl> <hl> The rigid layers of fungal cell walls contain complex polysaccharides called chitin and glucans . <hl> Chitin , also found in the exoskeleton of insects , gives structural strength to the cell walls of fungi . The wall protects the cell from desiccation and predators . Fungi have plasma membranes similar to other eukaryotes , except that the structure is stabilized by ergosterol : a steroid molecule that replaces the cholesterol found in animal cell membranes . Most members of the kingdom Fungi are nonmotile . Flagella are produced only by the gametes in the primitive Phylum Chytridiomycota .", "hl_sentences": "Like all fungi , chytrids have chitin in their cell walls , but one group of chytrids has both cellulose and chitin in the cell wall . Like plant cells , fungal cells have a thick cell wall . The rigid layers of fungal cell walls contain complex polysaccharides called chitin and glucans .", "question": { "cloze_format": "The polysaccharide that is usually found in the cell wall of fungi is ___.", "normal_format": "Which polysaccharide is usually found in the cell wall of fungi?", "question_choices": [ "starch", "glycogen", "chitin", "cellulose" ], "question_id": "fs-idp69376608", "question_text": "Which polysaccharide is usually found in the cell wall of fungi?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "chloroplast" }, "bloom": "3", "hl_context": "<hl> Unlike plant cells , fungal cells do not have chloroplasts or chlorophyll . <hl> Many fungi display bright colors arising from other cellular pigments , ranging from red to green to black . The poisonous Amanita muscaria ( fly agaric ) is recognizable by its bright red cap with white patches ( Figure 24.2 ) . Pigments in fungi are associated with the cell wall and play a protective role against ultraviolet radiation . Some fungal pigments are toxic .", "hl_sentences": "Unlike plant cells , fungal cells do not have chloroplasts or chlorophyll .", "question": { "cloze_format": "The ___ organelle is not found in a fungal cell.", "normal_format": "Which of these organelles is not found in a fungal cell?", "question_choices": [ "chloroplast", "nucleus", "mitochondrion", "Golgi apparatus" ], "question_id": "fs-idm11390416", "question_text": "Which of these organelles is not found in a fungal cell?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "septum" }, "bloom": "2", "hl_context": "<hl> Most fungal hyphae are divided into separate cells by endwalls called septa ( singular , septum ) ( Figure 24.5 a , c ) . <hl> In most phyla of fungi , tiny holes in the septa allow for the rapid flow of nutrients and small molecules from cell to cell along the hypha . 
They are described as perforated septa . The hyphae in bread molds ( which belong to the Phylum Zygomycota ) are not separated by septa . Instead , they are formed by large cells containing many nuclei , an arrangement described as coenocytic hyphae ( Figure 24.5 b ) .", "hl_sentences": "Most fungal hyphae are divided into separate cells by endwalls called septa ( singular , septum ) ( Figure 24.5 a , c ) .", "question": { "cloze_format": "The wall dividing individual cells in a fungal filament is called a ___ .", "normal_format": "What is called the wall that dividing individual cells in a fungal filament?", "question_choices": [ "thallus", "hypha", "mycelium", "septum" ], "question_id": "fs-idp69580656", "question_text": "The wall dividing individual cells in a fungal filament is called a" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "both mating types" }, "bloom": null, "hl_context": "Sexual reproduction introduces genetic variation into a population of fungi . In fungi , sexual reproduction often occurs in response to adverse environmental conditions . <hl> During sexual reproduction , two mating types are produced . <hl> <hl> When both mating types are present in the same mycelium , it is called homothallic , or self-fertile . <hl> Heterothallic mycelia require two different , but compatible , mycelia to reproduce sexually .", "hl_sentences": "During sexual reproduction , two mating types are produced . When both mating types are present in the same mycelium , it is called homothallic , or self-fertile .", "question": { "cloze_format": "During sexual reproduction, a homothallic mycelium contains ___.", "normal_format": "What a homothallic mycelium contains during sexual reproduction?", "question_choices": [ "all septated hyphae", "all haploid nuclei", "both mating types", "none of the above" ], "question_id": "fs-idm41599056", "question_text": "During sexual reproduction, a homothallic mycelium contains" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "Chytridiomycota" }, "bloom": null, "hl_context": "<hl> The only class in the Phylum Chytridiomycota is the Chytridiomycetes . <hl> <hl> The chytrids are the simplest and most primitive Eumycota , or true fungi . <hl> The evolutionary record shows that the first recognizable chytrids appeared during the late pre-Cambrian period , more than 500 million years ago . Like all fungi , chytrids have chitin in their cell walls , but one group of chytrids has both cellulose and chitin in the cell wall . Most chytrids are unicellular ; a few form multicellular organisms and hyphae , which have no septa between cells ( coenocytic ) . They produce gametes and diploid zoospores that swim with the help of a single flagellum . Like plant cells , fungal cells have a thick cell wall . The rigid layers of fungal cell walls contain complex polysaccharides called chitin and glucans . Chitin , also found in the exoskeleton of insects , gives structural strength to the cell walls of fungi . The wall protects the cell from desiccation and predators . Fungi have plasma membranes similar to other eukaryotes , except that the structure is stabilized by ergosterol : a steroid molecule that replaces the cholesterol found in animal cell membranes . <hl> Most members of the kingdom Fungi are nonmotile . <hl> <hl> Flagella are produced only by the gametes in the primitive Phylum Chytridiomycota . <hl>", "hl_sentences": "The only class in the Phylum Chytridiomycota is the Chytridiomycetes . 
The chytrids are the simplest and most primitive Eumycota , or true fungi . Most members of the kingdom Fungi are nonmotile . Flagella are produced only by the gametes in the primitive Phylum Chytridiomycota .", "question": { "cloze_format": "The most primitive phylum of fungi is the ________.", "normal_format": "What is the most primitive phylum of fungi?", "question_choices": [ "Chytridiomycota", "Zygomycota", "Glomeromycota", "Ascomycota" ], "question_id": "fs-idm2671184", "question_text": "The most primitive phylum of fungi is the ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "Basidiomycota" }, "bloom": "2", "hl_context": "<hl> The fungi in the Phylum Basidiomycota are easily recognizable under a light microscope by their club-shaped fruiting bodies called basidia ( singular , basidium ) , which are the swollen terminal cell of a hypha . <hl> <hl> The basidia , which are the reproductive organs of these fungi , are often contained within the familiar mushroom , commonly seen in fields after rain , on the supermarket shelves , and growing on your lawn ( Figure 24.15 ) . <hl> These mushroom-producing basidiomyces are sometimes referred to as “ gill fungi ” because of the presence of gill-like structures on the underside of the cap . The “ gills ” are actually compacted hyphae on which the basidia are borne .", "hl_sentences": "The fungi in the Phylum Basidiomycota are easily recognizable under a light microscope by their club-shaped fruiting bodies called basidia ( singular , basidium ) , which are the swollen terminal cell of a hypha . The basidia , which are the reproductive organs of these fungi , are often contained within the familiar mushroom , commonly seen in fields after rain , on the supermarket shelves , and growing on your lawn ( Figure 24.15 ) .", "question": { "cloze_format": "Members of the ___ phylum produce a club-shaped structure that contains spores.", "normal_format": "Members of which phylum produce a club-shaped structure that contains spores?", "question_choices": [ "Chytridiomycota", "Basidiomycota", "Glomeromycota", "Ascomycota" ], "question_id": "fs-idm56776016", "question_text": "Members of which phylum produce a club-shaped structure that contains spores?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "Glomeromycota" }, "bloom": "2", "hl_context": "<hl> The Glomeromycota is a newly established phylum which comprises about 230 species that all live in close association with the roots of trees . <hl> Fossil records indicate that trees and their root symbionts share a long evolutionary history . It appears that all members of this family form arbuscular mycorrhizae : the hyphae interact with the root cells forming a mutually beneficial association where the plants supply the carbon source and energy in the form of carbohydrates to the fungus , and the fungus supplies essential minerals from the soil to the plant . The glomeromycetes do not reproduce sexually and do not survive without the presence of plant roots . Although they have coenocytic hyphae like the zygomycetes , they do not form zygospores . DNA analysis shows that all glomeromycetes probably descended from a common ancestor , making them a monophyletic lineage . 
24.3 Ecology of Fungi Learning Objectives By the end of this section , you will be able to :", "hl_sentences": "The Glomeromycota is a newly established phylum which comprises about 230 species that all live in close association with the roots of trees .", "question": { "cloze_format": "Members of the ___ phylum establish a successful symbiotic relationship with the roots of trees.", "normal_format": "Members of which phylum establish a successful symbiotic relationship with the roots of trees?", "question_choices": [ "Ascomycota", "Deuteromycota", "Basidiomycota", "Glomeromycota" ], "question_id": "fs-idp7754752", "question_text": "Members of which phylum establish a successful symbiotic relationship with the roots of trees?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "Deuteromycota" }, "bloom": null, "hl_context": "<hl> Asexual Ascomycota and Basidiomycota Imperfect fungi — those that do not display a sexual phase — use to be classified in the form phylum Deuteromycota , , a classification group no longer used in the present , ever-developing classification of organisms . <hl> <hl> While Deuteromycota use to be a classification group , recent moleclular analysis has shown that the members classified in this group belong to the Ascomycota or the Basidiomycota classifications . <hl> <hl> Since they do not possess the sexual structures that are used to classify other fungi , they are less well described in comparison to other members . <hl> Most members live on land , with a few aquatic exceptions . They form visible mycelia with a fuzzy appearance and are commonly known as mold . Reproduction of the fungi in this group is strictly asexual and occurs mostly by production of asexual conidiospores ( Figure 24.17 ) . Some hyphae may recombine and form heterokaryotic hyphae . Genetic recombination is known to take place between the different nuclei . The fungi in this group have a large impact on everyday human life . The food industry relies on them for ripening some cheeses . The blue veins in Roquefort cheese and the white crust on Camembert are the result of fungal growth . The antibiotic penicillin was originally discovered on an overgrown Petri plate , on which a colony of Penicillium fungi killed the bacterial growth surrounding it . Other fungi in this group cause serious diseases , either directly as parasites ( which infect both plants and humans ) , or as producers of potent toxic compounds , as seen in the aflatoxins released by fungi of the genus Aspergillus . The kingdom Fungi contains five major phyla that were established according to their mode of sexual reproduction or using molecular data . Polyphyletic , unrelated fungi that reproduce without a sexual cycle , are placed for convenience in a sixth group called a “ form phylum ” . Not all mycologists agree with this scheme . Rapid advances in molecular biology and the sequencing of 18S rRNA ( a part of RNA ) continue to show new and different relationships between the various categories of fungi . <hl> The five true phyla of fungi are the Chytridiomycota ( Chytrids ) , the Zygomycota ( conjugated fungi ) , the Ascomycota ( sac fungi ) , the Basidiomycota ( club fungi ) and the recently described Phylum Glomeromycota . <hl> <hl> An older classification scheme grouped fungi that strictly use asexual reproduction into Deuteromycota , a group that is no longer in use . 
<hl> Note : “ - mycota ” is used to designate a phylum while “ - mycetes ” formally denotes a class or is used informally to refer to all members of the phylum . Chytridiomycota : The Chytrids", "hl_sentences": "Asexual Ascomycota and Basidiomycota Imperfect fungi — those that do not display a sexual phase — use to be classified in the form phylum Deuteromycota , , a classification group no longer used in the present , ever-developing classification of organisms . While Deuteromycota use to be a classification group , recent moleclular analysis has shown that the members classified in this group belong to the Ascomycota or the Basidiomycota classifications . Since they do not possess the sexual structures that are used to classify other fungi , they are less well described in comparison to other members . The five true phyla of fungi are the Chytridiomycota ( Chytrids ) , the Zygomycota ( conjugated fungi ) , the Ascomycota ( sac fungi ) , the Basidiomycota ( club fungi ) and the recently described Phylum Glomeromycota . An older classification scheme grouped fungi that strictly use asexual reproduction into Deuteromycota , a group that is no longer in use .", "question": { "cloze_format": "The fungi that do not reproduce sexually use to be classified as ________.", "normal_format": "What did the fungi that do not reproduce sexually use to be classified as?", "question_choices": [ "Ascomycota", "Deuteromycota", "Basidiomycota", "Glomeromycota" ], "question_id": "fs-idm45775568", "question_text": "The fungi that do not reproduce sexually use to be classified as ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "a mycorrhiza" }, "bloom": "2", "hl_context": "<hl> Mycorrhizae are the mutually beneficial symbiotic association between roots of vascular plants and fungi . <hl> A well-accepted theory proposes that fungi were instrumental in the evolution of the root system in plants and contributed to the success of Angiosperms . The bryophytes ( mosses and liverworts ) , which are considered the most primitive plants and the first to survive on dry land , do not have a true root system ; some have vesicular – arbuscular mycorrhizae and some do not . They depend on a simple rhizoid ( an underground organ ) and cannot survive in dry areas . True roots appeared in vascular plants . Vascular plants that developed a system of thin extensions from the rhizoids ( found in mosses ) are thought to have had a selective advantage because they had a greater surface area of contact with the fungal partners than the mosses and liverworts , thus availing themselves of more nutrients in the ground . Fossil records indicate that fungi preceded plants on dry land . The first association between fungi and photosynthetic organisms on land involved moss-like plants and endophytes . These early associations developed before roots appeared in plants . Slowly , the benefits of the endophyte and rhizoid interactions for both partners led to present-day mycorrhizae ; up to about 90 percent of today ’ s vascular plants have associations with fungi in their rhizosphere . The fungi involved in mycorrhizae display many characteristics of primitive fungi ; they produce simple spores , show little diversification , do not have a sexual reproductive cycle , and cannot live outside of a mycorrhizal association . 
The plants benefited from the association because mycorrhizae allowed them to move into new habitats because of increased uptake of nutrients , and this gave them a selective advantage over plants that did not establish symbiotic relationships . <hl> One of the most remarkable associations between fungi and plants is the establishment of mycorrhizae . <hl> <hl> Mycorrhiza , which comes from the Greek words myco meaning fungus and rhizo meaning root , refers to the association between vascular plant roots and their symbiotic fungi . <hl> Somewhere between 80 and 90 percent of all plant species have mycorrhizal partners . In a mycorrhizal association , the fungal mycelia use their extensive network of hyphae and large surface area in contact with the soil to channel water and minerals from the soil into the plant . In exchange , the plant supplies the products of photosynthesis to fuel the metabolism of the fungus .", "hl_sentences": "Mycorrhizae are the mutually beneficial symbiotic association between roots of vascular plants and fungi . One of the most remarkable associations between fungi and plants is the establishment of mycorrhizae . Mycorrhiza , which comes from the Greek words myco meaning fungus and rhizo meaning root , refers to the association between vascular plant roots and their symbiotic fungi .", "question": { "cloze_format": "The term that describes the close association of a fungus with the root of a tree is ___.", "normal_format": "What term describes the close association of a fungus with the root of a tree?", "question_choices": [ "a rhizoid", "a lichen", "a mycorrhiza", "an endophyte" ], "question_id": "fs-idm67316192", "question_text": "What term describes the close association of a fungus with the root of a tree?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "They recycle carbon and inorganic minerals by the process of decomposition." }, "bloom": "4", "hl_context": "Chapter Outline 24.1 Characteristics of Fungi 24.2 Classifications of Fungi 24.3 Ecology of Fungi 24.4 Fungal Parasites and Pathogens 24.5 Importance of Fungi in Human Life Introduction The word fungus comes from the Latin word for mushrooms . Indeed , the familiar mushroom is a reproductive structure used by many types of fungi . However , there are also many fungi species that don't produce mushrooms at all . Being eukaryotes , a typical fungal cell contains a true nucleus and many membrane-bound organelles . The kingdom Fungi includes an enormous variety of living organisms collectively referred to as Eucomycota , or true Fungi . While scientists have identified about 100,000 species of fungi , this is only a fraction of the 1.5 million species of fungus likely present on Earth . Edible mushrooms , yeasts , black mold , and the producer of the antibiotic penicillin , Penicillium notatum , are all members of the kingdom Fungi , which belongs to the domain Eukarya . Fungi , once considered plant-like organisms , are more closely related to animals than plants . Fungi are not capable of photosynthesis : they are heterotrophic because they use complex organic compounds as sources of energy and carbon . Some fungal organisms multiply only asexually , whereas others undergo both asexual reproduction and sexual reproduction with alternation of generations . Most fungi produce a large number of spores , which are haploid cells that can undergo mitosis to form multicellular , haploid individuals . 
<hl> Like bacteria , fungi play an essential role in ecosystems because they are decomposers and participate in the cycling of nutrients by breaking down organic materials to simple molecules . <hl> Fungi often interact with other organisms , forming beneficial or mutualistic associations . For example most terrestrial plants form symbiotic relationships with fungi . The roots of the plant connect with the underground parts of the fungus forming mycorrhizae . Through mycorrhizae , the fungus and plant exchange nutrients and water , greatly aiding the survival of both species Alternatively , lichens are an association between a fungus and its photosynthetic partner ( usually an alga ) . Fungi also cause serious infections in plants and animals . For example , Dutch elm disease , which is caused by the fungus Ophiostoma ulmi , is a particularly devastating type of fungal infestation that destroys many native species of elm ( Ulmus sp . ) by infecting the tree ’ s vascular system . The elm bark beetle acts as a vector , transmitting the disease from tree to tree . Accidentally introduced in the 1900s , the fungus decimated elm trees across the continent . Many European and Asiatic elms are less susceptible to Dutch elm disease than American elms . In humans , fungal infections are generally considered challenging to treat . Unlike bacteria , fungi do not respond to traditional antibiotic therapy , since they are eukaryotes . Fungal infections may prove deadly for individuals with compromised immune systems . Fungi have many commercial applications . The food industry uses yeasts in baking , brewing , and cheese and wine making . Many industrial compounds are byproducts of fungal fermentation . Fungi are the source of many commercial enzymes and antibiotics .", "hl_sentences": "Like bacteria , fungi play an essential role in ecosystems because they are decomposers and participate in the cycling of nutrients by breaking down organic materials to simple molecules .", "question": { "cloze_format": "Fungi are important decomposers because ___.", "normal_format": "Why are fungi important decomposers?", "question_choices": [ "They produce many spores.", "They can grow in many different environments.", "They produce mycelia.", "They recycle carbon and inorganic minerals by the process of decomposition." ], "question_id": "fs-idm61794080", "question_text": "Why are fungi important decomposers?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "superficial mycosis" }, "bloom": null, "hl_context": "<hl> Fungi can affect animals , including humans , in several ways . <hl> <hl> A mycosis is a fungal disease that results from infection and direct damage . <hl> Fungi attack animals directly by colonizing and destroying tissues . Mycotoxicosis is the poisoning of humans ( and other animals ) by foods contaminated by fungal toxins ( mycotoxins ) . Mycetismus describes the ingestion of preformed toxins in poisonous mushrooms . In addition , individuals who display hypersensitivity to molds and spores develop strong and dangerous allergic reactions . <hl> Fungal infections are generally very difficult to treat because , unlike bacteria , fungi are eukaryotes . <hl> <hl> Antibiotics only target prokaryotic cells , whereas compounds that kill fungi also harm the eukaryotic animal host . <hl> <hl> Many fungal infections are superficial ; that is , they occur on the animal ’ s skin . <hl> <hl> Termed cutaneous ( “ skin ” ) mycoses , they can have devastating effects . 
<hl> For example , the decline of the world ’ s frog population in recent years may be caused by the chytrid fungus Batrachochytrium dendrobatidis , which infects the skin of frogs and presumably interferes with gaseous exchange . Similarly , more than a million bats in the United States have been killed by white-nose syndrome , which appears as a white ring around the mouth of the bat . It is caused by the cold-loving fungus Pseudogymnoascus destructans , which disseminates its deadly spores in caves where bats hibernate . Mycologists are researching the transmission , mechanism , and control of P . destructans to stop its spread . <hl> Fungi that cause the superficial mycoses of the epidermis , hair , and nails rarely spread to the underlying tissue ( Figure 24.26 ) . <hl> These fungi are often misnamed “ dermatophytes ” , from the Greek words dermis meaning skin and phyte meaning plant , although they are not plants . Dermatophytes are also called “ ringworms ” because of the red ring they cause on skin . They secrete extracellular enzymes that break down keratin ( a protein found in hair , skin , and nails ) , causing conditions such as athlete ’ s foot and jock itch . These conditions are usually treated with over-the-counter topical creams and powders , and are easily cleared . More persistent superficial mycoses may require prescription oral medications .", "hl_sentences": "Fungi can affect animals , including humans , in several ways . A mycosis is a fungal disease that results from infection and direct damage . Fungal infections are generally very difficult to treat because , unlike bacteria , fungi are eukaryotes . Antibiotics only target prokaryotic cells , whereas compounds that kill fungi also harm the eukaryotic animal host . Many fungal infections are superficial ; that is , they occur on the animal ’ s skin . Termed cutaneous ( “ skin ” ) mycoses , they can have devastating effects . Fungi that cause the superficial mycoses of the epidermis , hair , and nails rarely spread to the underlying tissue ( Figure 24.26 ) .", "question": { "cloze_format": "A fungal infection that affects nails and skin is classified as ________.", "normal_format": "How is a fungal infection that affects nails and skin classified?", "question_choices": [ "systemic mycosis", "mycetismus", "superficial mycosis", "mycotoxicosis" ], "question_id": "fs-idp5013264", "question_text": "A fungal infection that affects nails and skin is classified as ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "the atmosphere does not contain oxygen" }, "bloom": null, "hl_context": "Fungi thrive in environments that are moist and slightly acidic , and can grow with or without light . They vary in their oxygen requirement . Most fungi are obligate aerobes , requiring oxygen to survive . Other species , such as the Chytridiomycota that reside in the rumen of cattle , are are obligate anaerobes , in that they only use anaerobic respiration because oxygen will disrupt their metabolism or kill them . <hl> Yeasts are intermediate , being faculative anaerobes . <hl> <hl> This means that they grow best in the presence of oxygen using aerobic respiration , but can survive using anaerobic respiration when oxygen is not available . <hl> The alcohol produced from yeast fermentation is used in wine and beer production .", "hl_sentences": "Yeasts are intermediate , being faculative anaerobes . 
This means that they grow best in the presence of oxygen using aerobic respiration , but can survive using anaerobic respiration when oxygen is not available .", "question": { "cloze_format": "Yeast is a facultative anaerobe. This means that alcohol fermentation takes place only if ___ .", "normal_format": "Yeast is a facultative anaerobe. When does alcohol fermentation take place?", "question_choices": [ "the temperature is close to 37°C", "the atmosphere does not contain oxygen", "sugar is provided to the cells", "light is provided to the cells" ], "question_id": "fs-idp82473248", "question_text": "Yeast is a facultative anaerobe. This means that alcohol fermentation takes place only if:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "yeast cells are eukaryotic and modify proteins similarly to human cells" }, "bloom": null, "hl_context": "As simple eukaryotic organisms , fungi are important model research organisms . Many advances in modern genetics were achieved by the use of the red bread mold Neurospora crassa . Additionally , many important genes originally discovered in S . cerevisiae served as a starting point in discovering analogous human genes . <hl> As a eukaryotic organism , the yeast cell produces and modifies proteins in a manner similar to human cells , as opposed to the bacterium Escherichia coli , which lacks the internal membrane structures and enzymes to tag proteins for export . <hl> This makes yeast a much better organism for use in recombinant DNA technology experiments . Like bacteria , yeasts grow easily in culture , have a short generation time , and are amenable to genetic modification .", "hl_sentences": "As a eukaryotic organism , the yeast cell produces and modifies proteins in a manner similar to human cells , as opposed to the bacterium Escherichia coli , which lacks the internal membrane structures and enzymes to tag proteins for export .", "question": { "cloze_format": "The advantage of yeast cells over bacterial cells to express human proteins is that ___ .", "normal_format": "What is the advantage of yeast cells over bacterial cells to express human proteins?", "question_choices": [ "yeast cells grow faster", "yeast cells are easier to manipulate genetically", "yeast cells are eukaryotic and modify proteins similarly to human cells", "yeast cells are easily lysed to purify the proteins" ], "question_id": "fs-idm24109536", "question_text": "The advantage of yeast cells over bacterial cells to express human proteins is that:" }, "references_are_paraphrase": null } ]
24.1 Characteristics of Fungi

Learning Objectives
By the end of this section, you will be able to:
List the characteristics of fungi
Describe the composition of the mycelium
Describe the mode of nutrition of fungi
Explain sexual and asexual reproduction in fungi

Although humans have used yeasts and mushrooms since prehistoric times, until recently, the biology of fungi was poorly understood. Up until the mid-20th century, many scientists classified fungi as plants. Like plants, fungi are mostly sessile and seemingly rooted in place. They possess a stem-like structure similar to that of plants, as well as a root-like fungal mycelium in the soil. In addition, their mode of nutrition was poorly understood. Progress in the field of fungal biology was the result of mycology: the scientific study of fungi. Based on fossil evidence, fungi appeared in the pre-Cambrian era, about 450 million years ago. Molecular biology analysis of the fungal genome demonstrates that fungi are more closely related to animals than plants. They are a polyphyletic group of organisms that share characteristics, rather than sharing a single common ancestor.

Career Connection
Mycologist
Mycologists are biologists who study fungi. Mycology is a branch of microbiology, and many mycologists start their careers with a degree in microbiology. To become a mycologist, a bachelor's degree in a biological science (preferably majoring in microbiology) and a master's degree in mycology are minimally necessary. Mycologists can specialize in taxonomy and fungal genomics, molecular and cellular biology, plant pathology, biotechnology, or biochemistry. Some medical microbiologists concentrate on the study of infectious diseases caused by fungi (mycoses). Mycologists collaborate with zoologists and plant pathologists to identify and control difficult fungal infections, such as the devastating chestnut blight, the mysterious decline in frog populations in many areas of the world, or the deadly epidemic called white nose syndrome, which is decimating bats in the Eastern United States. Government agencies hire mycologists as research scientists and technicians to monitor the health of crops, national parks, and national forests. Mycologists are also employed in the private sector by companies that develop chemical and biological control products or new agricultural products, and by companies that provide disease control services. Because of the key role played by fungi in the fermentation of alcohol and the preparation of many important foods, scientists with a good understanding of fungal physiology routinely work in the food technology industry. Oenology, the science of wine making, relies not only on the knowledge of grape varietals and soil composition, but also on a solid understanding of the characteristics of the wild yeasts that thrive in different wine-making regions. It is possible to purchase yeast strains isolated from specific grape-growing regions. The great French chemist and microbiologist Louis Pasteur made many of his essential discoveries working on the humble brewer's yeast, thus discovering the process of fermentation.

Cell Structure and Function
Fungi are eukaryotes, and as such, have a complex cellular organization. As eukaryotes, fungal cells contain a membrane-bound nucleus. The DNA in the nucleus is wrapped around histone proteins, as is observed in other eukaryotic cells.
A few types of fungi have structures comparable to bacterial plasmids (loops of DNA); however, unlike the frequent horizontal transfer of genetic information from one mature bacterium to another, such transfer rarely occurs in fungi. Fungal cells also contain mitochondria and a complex system of internal membranes, including the endoplasmic reticulum and Golgi apparatus. Unlike plant cells, fungal cells do not have chloroplasts or chlorophyll. Many fungi display bright colors arising from other cellular pigments, ranging from red to green to black. The poisonous Amanita muscaria (fly agaric) is recognizable by its bright red cap with white patches ( Figure 24.2 ). Pigments in fungi are associated with the cell wall and play a protective role against ultraviolet radiation. Some fungal pigments are toxic. Like plant cells, fungal cells have a thick cell wall. The rigid layers of fungal cell walls contain complex polysaccharides called chitin and glucans. Chitin, also found in the exoskeleton of insects, gives structural strength to the cell walls of fungi. The wall protects the cell from desiccation and predators. Fungi have plasma membranes similar to other eukaryotes, except that the structure is stabilized by ergosterol: a steroid molecule that replaces the cholesterol found in animal cell membranes. Most members of the kingdom Fungi are nonmotile. Flagella are produced only by the gametes in the primitive Phylum Chytridiomycota. Growth The vegetative body of a fungus is a unicellular or multicellular thallus . Dimorphic fungi can change from the unicellular to the multicellular state depending on environmental conditions. Unicellular fungi are generally referred to as yeasts . Saccharomyces cerevisiae (baker’s yeast) and Candida species (the agents of thrush, a common fungal infection) are examples of unicellular fungi ( Figure 24.3 ). Most fungi are multicellular organisms. They display two distinct morphological stages: the vegetative and reproductive. The vegetative stage consists of a tangle of slender thread-like structures called hyphae (singular, hypha ), whereas the reproductive stage can be more conspicuous. The mass of hyphae is a mycelium ( Figure 24.4 ). It can grow on a surface, in soil or decaying material, in a liquid, or even on living tissue. Although individual hyphae must be observed under a microscope, the mycelium of a fungus can be very large, with some species truly being “the humongous fungus.” The giant Armillaria solidipes (honey mushroom) is considered the largest organism on Earth, spreading across more than 2,000 acres of underground soil in eastern Oregon; it is estimated to be at least 2,400 years old. Most fungal hyphae are divided into separate cells by end walls called septa (singular, septum ) ( Figure 24.5 a, c ). In most phyla of fungi, tiny holes in the septa allow for the rapid flow of nutrients and small molecules from cell to cell along the hypha. They are described as perforated septa. The hyphae in bread molds (which belong to the Phylum Zygomycota) are not separated by septa. Instead, they are formed by large cells containing many nuclei, an arrangement described as coenocytic hyphae ( Figure 24.5 b ). Fungi thrive in environments that are moist and slightly acidic, and can grow with or without light. They vary in their oxygen requirement. Most fungi are obligate aerobes , requiring oxygen to survive. 
Other species, such as the Chytridiomycota that reside in the rumen of cattle, are obligate anaerobes , in that they use only anaerobic respiration because oxygen will disrupt their metabolism or kill them. Yeasts are intermediate, being facultative anaerobes . This means that they grow best in the presence of oxygen using aerobic respiration, but can survive using anaerobic respiration when oxygen is not available. The alcohol produced from yeast fermentation is used in wine and beer production (a summary equation for this reaction appears at the end of this section). Nutrition Like animals, fungi are heterotrophs; they use complex organic compounds as a source of carbon, rather than fix carbon dioxide from the atmosphere as do some bacteria and most plants. In addition, fungi do not fix nitrogen from the atmosphere. Like animals, they must obtain it from their diet. However, unlike most animals, which ingest food and then digest it internally in specialized organs, fungi perform these steps in the reverse order; digestion precedes ingestion. First, exoenzymes are transported out of the hyphae, where they process nutrients in the environment. Then, the smaller molecules produced by this external digestion are absorbed through the large surface area of the mycelium. As in animal cells, the storage polysaccharide is glycogen, rather than the starch found in plants. Fungi are mostly saprobes (saprophyte is an equivalent term): organisms that derive nutrients from decaying organic matter. They obtain their nutrients from dead or decomposing organic matter: mainly plant material. Fungal exoenzymes are able to break down insoluble polysaccharides, such as the cellulose and lignin of dead wood, into readily absorbable glucose molecules. The carbon, nitrogen, and other elements are thus released into the environment. Because of their varied metabolic pathways, fungi fulfill an important ecological role and are being investigated as potential tools in bioremediation. For example, some species of fungi can be used to break down diesel oil and polycyclic aromatic hydrocarbons (PAHs). Other species take up heavy metals, such as cadmium and lead. Some fungi are parasitic, infecting either plants or animals. Smut and Dutch elm disease affect plants, whereas athlete’s foot and candidiasis (thrush) are medically important fungal infections in humans. In environments poor in nitrogen, some fungi resort to predation of nematodes (small non-segmented roundworms). Species of Arthrobotrys fungi have a number of mechanisms to trap nematodes. One mechanism involves constricting rings within the network of hyphae. The rings swell when they touch the nematode, gripping it in a tight hold. The fungus then penetrates the tissue of the worm by extending specialized hyphae called haustoria . Many parasitic fungi possess haustoria, as these structures penetrate the tissues of the host, release digestive enzymes within the host's body, and absorb the digested nutrients. 
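For reference, the alcohol fermentation by yeast mentioned earlier in this section can be summarized in a single balanced equation. This overall equation is standard biochemistry supplied here for clarity; it is not spelled out in the passage itself:

\[
\mathrm{C_6H_{12}O_6} \;\longrightarrow\; 2\,\mathrm{C_2H_5OH} + 2\,\mathrm{CO_2}
\]

Each molecule of glucose yields two molecules of ethanol and two molecules of carbon dioxide: the ethanol is the product exploited in wine and beer making, while the carbon dioxide leavens bread and carbonates beer.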
Asexual Reproduction Fungi reproduce asexually by fragmentation, budding, or producing spores. Fragments of hyphae can grow new colonies. Somatic cells in yeast form buds. During budding (a type of cytokinesis), a bulge forms on the side of the cell, the nucleus divides mitotically, and the bud ultimately detaches itself from the mother cell ( Figure 24.7 ). The most common mode of asexual reproduction is through the formation of asexual spores, which are produced by one parent only (through mitosis) and are genetically identical to that parent ( Figure 24.8 ). Spores allow fungi to expand their distribution and colonize new environments. They may be released from the parent thallus either outside or within a special reproductive sac called a sporangium . There are many types of asexual spores. Conidiospores are unicellular or multicellular spores that are released directly from the tip or side of the hypha. Other asexual spores originate in the fragmentation of a hypha to form single cells that are released as spores; some of these have a thick wall surrounding the fragment. Yet others bud off the vegetative parent cell. Sporangiospores are produced in a sporangium ( Figure 24.9 ). Sexual Reproduction Sexual reproduction introduces genetic variation into a population of fungi. In fungi, sexual reproduction often occurs in response to adverse environmental conditions. During sexual reproduction, two mating types are produced. When both mating types are present in the same mycelium, it is called homothallic , or self-fertile. Heterothallic mycelia require two different, but compatible, mycelia to reproduce sexually. Although there are many variations in fungal sexual reproduction, all include the following three stages ( Figure 24.8 ). First, during plasmogamy (literally, “marriage or union of cytoplasm”), two haploid cells fuse, leading to a dikaryotic stage where two haploid nuclei coexist in a single cell. During karyogamy (“nuclear marriage”), the haploid nuclei fuse to form a diploid zygote nucleus. Finally, meiosis takes place in the gametangia (singular, gametangium) organs, in which gametes of different mating types are generated. At this stage, spores are disseminated into the environment. Link to Learning Review the characteristics of fungi by visiting this interactive site from Wisconsin-online. 24.2 Classifications of Fungi Learning Objectives By the end of this section, you will be able to: Classify fungi into the five major phyla Describe each phylum in terms of major representative species and patterns of reproduction The kingdom Fungi contains five major phyla that were established according to their mode of sexual reproduction or using molecular data. Polyphyletic, unrelated fungi that reproduce without a sexual cycle are placed for convenience in a sixth group called a “form phylum”. Not all mycologists agree with this scheme. Rapid advances in molecular biology and the sequencing of 18S rRNA (a part of RNA) continue to reveal new and different relationships between the various categories of fungi. The five true phyla of fungi are the Chytridiomycota (chytrids), the Zygomycota (conjugated fungi), the Ascomycota (sac fungi), the Basidiomycota (club fungi), and the recently described Phylum Glomeromycota. An older classification scheme grouped fungi that strictly use asexual reproduction into Deuteromycota, a group that is no longer in use. 
Note: “-mycota” is used to designate a phylum while “-mycetes” formally denotes a class or is used informally to refer to all members of the phylum. Chytridiomycota: The Chytrids The only class in the Phylum Chytridiomycota is the Chytridiomycetes . The chytrids are the simplest and most primitive Eumycota, or true fungi. The evolutionary record shows that the first recognizable chytrids appeared during the late pre-Cambrian period, more than 500 million years ago. Like all fungi, chytrids have chitin in their cell walls, but one group of chytrids has both cellulose and chitin in the cell wall. Most chytrids are unicellular; a few form multicellular organisms and hyphae, which have no septa between cells (coenocytic). They produce gametes and diploid zoospores that swim with the help of a single flagellum. The ecological habitat and cell structure of chytrids have much in common with protists. Chytrids usually live in aquatic environments, although some species live on land. Some species thrive as parasites on plants, insects, or amphibians ( Figure 24.10 ), while others are saprobes. The chytrid species Allomyces is well characterized as an experimental organism. Its reproductive cycle includes both asexual and sexual phases. Allomyces produces diploid or haploid flagellated zoospores in a sporangium. Zygomycota: The Conjugated Fungi The zygomycetes are a relatively small group of fungi belonging to the Phylum Zygomycota . They include the familiar bread mold, Rhizopus stolonifer , which rapidly propagates on the surfaces of breads, fruits, and vegetables. Most species are saprobes, living off decaying organic material; a few are parasites, particularly of insects. Zygomycetes play a considerable commercial role. The metabolic products of other species of Rhizopus are intermediates in the synthesis of semi-synthetic steroid hormones. Zygomycetes have a thallus of coenocytic hyphae in which the nuclei are haploid when the organism is in the vegetative stage. The fungi usually reproduce asexually by producing sporangiospores ( Figure 24.11 ). The black tips of bread mold are the swollen sporangia packed with black spores ( Figure 24.12 ). When spores land on a suitable substrate, they germinate and produce a new mycelium. Sexual reproduction starts when conditions become unfavorable. Two opposing mating strains (type + and type –) must be in close proximity for gametangia from the hyphae to be produced and fuse, leading to karyogamy. The developing diploid zygospores have thick coats that protect them from desiccation and other hazards. They may remain dormant until environmental conditions are favorable. When the zygospore germinates, it undergoes meiosis and produces haploid spores, which will, in turn, grow into a new organism. This form of sexual reproduction in fungi is called conjugation (although it differs markedly from conjugation in bacteria and protists), giving rise to the name “conjugated fungi”. Ascomycota: The Sac Fungi The majority of known fungi belong to the Phylum Ascomycota , which is characterized by the formation of an ascus (plural, asci), a sac-like structure that contains haploid ascospores. Many ascomycetes are of commercial importance. Some play a beneficial role, such as the yeasts used in baking, brewing, and wine fermentation, plus truffles and morels, which are held as gourmet delicacies. Aspergillus oryzae is used in the fermentation of rice to produce sake. Other ascomycetes parasitize plants and animals, including humans. 
For example, fungal pneumonia poses a significant threat to AIDS patients who have a compromised immune system. Ascomycetes not only infest and destroy crops directly; they also produce poisonous secondary metabolites that make crops unfit for consumption. Filamentous ascomycetes produce hyphae divided by perforated septa, allowing streaming of cytoplasm from one cell to the other. Conidia and asci, which are used respectively for asexual and sexual reproduction, are usually separated from the vegetative hyphae by blocked (non-perforated) septa. Asexual reproduction is frequent and involves the production of conidiophores that release haploid conidiospores ( Figure 24.13 ). Sexual reproduction starts with the development of special hyphae from either one of two types of mating strains ( Figure 24.13 ). The “male” strain produces an antheridium and the “female” strain develops an ascogonium. At fertilization, the antheridium and the ascogonium combine in plasmogamy without nuclear fusion. Special ascogenous hyphae arise, in which pairs of nuclei migrate: one from the “male” strain and one from the “female” strain. In each ascus, the two haploid nuclei fuse in karyogamy. During sexual reproduction, thousands of asci fill a fruiting body called the ascocarp . The diploid nucleus gives rise to haploid nuclei by meiosis. The ascospores are then released, germinate, and form hyphae that are disseminated in the environment and start new mycelia ( Figure 24.14 ). Visual Connection Which of the following statements is true? A dikaryotic ascus that forms in the ascocarp undergoes karyogamy, meiosis, and mitosis to form eight ascospores. A diploid ascus that forms in the ascocarp undergoes karyogamy, meiosis, and mitosis to form eight ascospores. A haploid zygote that forms in the ascocarp undergoes karyogamy, meiosis, and mitosis to form eight ascospores. A dikaryotic ascus that forms in the ascocarp undergoes plasmogamy, meiosis, and mitosis to form eight ascospores. Basidiomycota: The Club Fungi The fungi in the Phylum Basidiomycota are easily recognizable under a light microscope by their club-shaped fruiting bodies called basidia (singular, basidium ), each of which is the swollen terminal cell of a hypha. The basidia, which are the reproductive organs of these fungi, are often contained within the familiar mushroom, commonly seen in fields after rain, on supermarket shelves, and growing on your lawn ( Figure 24.15 ). These mushroom-producing basidiomycetes are sometimes referred to as “gill fungi” because of the presence of gill-like structures on the underside of the cap. The “gills” are actually compacted hyphae on which the basidia are borne. This group also includes shelf fungi, which cling to the bark of trees like small shelves. In addition, the Basidiomycota includes smuts and rusts, which are important plant pathogens, as well as toadstools and the shelf fungi stacked on tree trunks. Most edible fungi belong to the Phylum Basidiomycota; however, some basidiomycetes produce deadly toxins. For example, Cryptococcus neoformans causes severe respiratory illness. The lifecycle of basidiomycetes includes alternation of generations ( Figure 24.16 ). Spores are generally produced through sexual reproduction, rather than asexual reproduction. The club-shaped basidium carries spores called basidiospores. In the basidium, nuclei of two different mating strains fuse (karyogamy), giving rise to a diploid zygote that then undergoes meiosis. 
The haploid nuclei migrate into basidiospores, which germinate and generate monokaryotic hyphae. The mycelium that results is called a primary mycelium. Mycelia of different mating strains can combine and produce a secondary mycelium that contains haploid nuclei of two different mating strains. This is the dikaryotic stage of the basidiomycete lifecycle, and it is the dominant stage. Eventually, the secondary mycelium generates a basidiocarp , which is a fruiting body that protrudes from the ground—this is what we think of as a mushroom. The basidiocarp bears the developing basidia on the gills under its cap. Visual Connection Which of the following statements is true? A basidium is the fruiting body of a mushroom-producing fungus, and it forms four basidiocarps. The result of the plasmogamy step is four basidiospores. Karyogamy results directly in the formation of mycelia. A basidiocarp is the fruiting body of a mushroom-producing fungus. Asexual Ascomycota and Basidiomycota Imperfect fungi—those that do not display a sexual phase—used to be classified in the form phylum Deuteromycota, a group no longer recognized in the present, ever-developing classification of organisms. Although Deuteromycota is no longer a valid classification group, recent molecular analysis has shown that the members once placed in this group belong to the Ascomycota or the Basidiomycota. Since they do not possess the sexual structures that are used to classify other fungi, they are less well described in comparison to other members. Most members live on land, with a few aquatic exceptions. They form visible mycelia with a fuzzy appearance and are commonly known as mold . Reproduction of the fungi in this group is strictly asexual and occurs mostly by production of asexual conidiospores ( Figure 24.17 ). Some hyphae may recombine and form heterokaryotic hyphae. Genetic recombination is known to take place between the different nuclei. The fungi in this group have a large impact on everyday human life. The food industry relies on them for ripening some cheeses. The blue veins in Roquefort cheese and the white crust on Camembert are the result of fungal growth. The antibiotic penicillin was originally discovered on an overgrown Petri plate, on which a colony of Penicillium fungi killed the bacterial growth surrounding it. Other fungi in this group cause serious diseases, either directly as parasites (which infect both plants and humans), or as producers of potent toxic compounds, as seen in the aflatoxins released by fungi of the genus Aspergillus . Glomeromycota The Glomeromycota is a newly established phylum that comprises about 230 species, all of which live in close association with the roots of trees. Fossil records indicate that trees and their root symbionts share a long evolutionary history. It appears that all members of this group form arbuscular mycorrhizae : the hyphae interact with the root cells, forming a mutually beneficial association in which the plants supply the carbon source and energy in the form of carbohydrates to the fungus, and the fungus supplies essential minerals from the soil to the plant. The glomeromycetes do not reproduce sexually and do not survive without the presence of plant roots. Although they have coenocytic hyphae like the zygomycetes, they do not form zygospores. DNA analysis shows that all glomeromycetes probably descended from a common ancestor, making them a monophyletic lineage. 
24.3 Ecology of Fungi Learning Objectives By the end of this section, you will be able to: Describe the role of fungi in the ecosystem Describe mutualistic relationships of fungi with plant roots and photosynthetic organisms Describe the beneficial relationship between some fungi and insects Fungi play a crucial role in the balance of ecosystems. They colonize most habitats on Earth, preferring dark, moist conditions. They can thrive in seemingly hostile environments, such as the tundra, thanks to highly successful symbioses with photosynthetic organisms such as algae, forming lichens. Fungi are not as conspicuous as large animals or tall trees. Yet, like bacteria, they are the major decomposers of nature. With their versatile metabolism, fungi break down organic matter that would not otherwise be recycled. Habitats Although fungi are primarily associated with humid and cool environments that provide a supply of organic matter, they colonize a surprising diversity of habitats, from seawater to human skin and mucous membranes. Chytrids are found primarily in aquatic environments. Other fungi, such as Coccidioides immitis , which causes pneumonia when its spores are inhaled, thrive in the dry and sandy soil of the southwestern United States. Fungi that parasitize coral reefs live in the ocean. However, most members of the Kingdom Fungi grow on the forest floor, where the dark and damp environment is rich in decaying debris from plants and animals. In these environments, fungi play a major role as decomposers and recyclers, supplying the nutrients that members of the other kingdoms need to live. Decomposers and Recyclers The food web would be incomplete without organisms that decompose organic matter ( Figure 24.18 ). Some elements—such as nitrogen and phosphorus—are required in large quantities by biological systems, and yet are not abundant in the environment. The action of fungi releases these elements from decaying matter, making them available to other living organisms. Trace elements present in low amounts in many habitats are essential for growth, and would remain tied up in rotting organic matter if fungi and bacteria did not return them to the environment via their metabolic activity. The ability of fungi to degrade many large and insoluble molecules is due to their mode of nutrition. As seen earlier, digestion precedes ingestion. Fungi produce a variety of exoenzymes to digest nutrients. The enzymes are either released into the substrate or remain bound to the outside of the fungal cell wall. Large molecules are broken down into small molecules, which are transported into the cell by a system of protein carriers embedded in the cell membrane. Because the movement of small molecules and enzymes is dependent on the presence of water, active growth depends on a relatively high percentage of moisture in the environment. As saprobes, fungi help maintain a sustainable ecosystem for the animals and plants that share the same habitat. In addition to replenishing the environment with nutrients, fungi interact directly with other organisms in beneficial, and sometimes damaging, ways ( Figure 24.19 ). Mutualistic Relationships Symbiosis is the ecological interaction between two organisms that live together. The definition does not describe the quality of the interaction. When both members of the association benefit, the symbiotic relationship is called mutualistic. 
Fungi form mutualistic associations with many types of organisms, including cyanobacteria, algae, plants, and animals. Fungus/Plant Mutualism One of the most remarkable associations between fungi and plants is the establishment of mycorrhizae. Mycorrhiza , which comes from the Greek words myco meaning fungus and rhizo meaning root, refers to the association between vascular plant roots and their symbiotic fungi. Somewhere between 80 and 90 percent of all plant species have mycorrhizal partners. In a mycorrhizal association, the fungal mycelia use their extensive network of hyphae and large surface area in contact with the soil to channel water and minerals from the soil into the plant. In exchange, the plant supplies the products of photosynthesis to fuel the metabolism of the fungus. There are a number of types of mycorrhizae. Ectomycorrhizae (“outside” mycorrhiza) depend on fungi enveloping the roots in a sheath (called a mantle) and a Hartig net of hyphae that extends into the roots between cells ( Figure 24.20 ). The fungal partner can belong to the Ascomycota, Basidiomycota or Zygomycota. In a second type, the Glomeromycete fungi form vesicular–arbuscular interactions with arbuscular mycorrhiza (sometimes called endomycorrhizae). In these mycorrhizae, the fungi form arbuscules that penetrate root cells and are the site of the metabolic exchanges between the fungus and the host plant ( Figure 24.20 and Figure 24.21 ). The arbuscules (from the Latin for little trees) have a shrub-like appearance. Orchids rely on a third type of mycorrhiza. Orchids are epiphytes that form small seeds without much storage to sustain germination and growth. Their seeds will not germinate without a mycorrhizal partner (usually a Basidiomycete). After nutrients in the seed are depleted, fungal symbionts support the growth of the orchid by providing necessary carbohydrates and minerals. Some orchids continue to be mycorrhizal throughout their lifecycle. Visual Connection If symbiotic fungi are absent from the soil, what impact do you think this would have on plant growth? Other examples of fungus–plant mutualism include the endophytes: fungi that live inside tissue without damaging the host plant. Endophytes release toxins that repel herbivores, or confer resistance to environmental stress factors, such as infection by microorganisms, drought, or heavy metals in soil. Evolution Connection Coevolution of Land Plants and Mycorrhizae Mycorrhizae are the mutually beneficial symbiotic association between roots of vascular plants and fungi. A well-accepted theory proposes that fungi were instrumental in the evolution of the root system in plants and contributed to the success of angiosperms. The bryophytes (mosses and liverworts), which are considered the most primitive plants and the first to survive on dry land, do not have a true root system; some have vesicular–arbuscular mycorrhizae and some do not. They depend on a simple rhizoid (an underground organ) and cannot survive in dry areas. True roots appeared in vascular plants. Vascular plants that developed a system of thin extensions from the rhizoids (found in mosses) are thought to have had a selective advantage because they had a greater surface area of contact with the fungal partners than the mosses and liverworts, thus availing themselves of more nutrients in the ground. Fossil records indicate that fungi preceded plants on dry land. The first association between fungi and photosynthetic organisms on land involved moss-like plants and endophytes. 
These early associations developed before roots appeared in plants. Slowly, the benefits of the endophyte and rhizoid interactions for both partners led to present-day mycorrhizae; up to about 90 percent of today’s vascular plants have associations with fungi in their rhizosphere. The fungi involved in mycorrhizae display many characteristics of primitive fungi; they produce simple spores, show little diversification, do not have a sexual reproductive cycle, and cannot live outside of a mycorrhizal association. The plants benefited from the association because the increased uptake of nutrients provided by mycorrhizae allowed them to move into new habitats, and this gave them a selective advantage over plants that did not establish symbiotic relationships. Lichens Lichens display a range of colors and textures ( Figure 24.22 ) and can survive in the most unusual and hostile habitats. They cover rocks, gravestones, tree bark, and the ground in the tundra where plant roots cannot penetrate. Lichens can survive extended periods of drought, when they become completely desiccated, and then rapidly become active once water is available again. Link to Learning Explore the world of lichens using this site from Oregon State University. Lichens are not a single organism, but rather an example of a mutualism, in which a fungus (usually a member of the Ascomycota or Basidiomycota phyla) lives in close contact with a photosynthetic organism (a eukaryotic alga or a prokaryotic cyanobacterium) ( Figure 24.23 ). Generally, neither the fungus nor the photosynthetic organism can survive alone outside of the symbiotic relationship. The body of a lichen, referred to as a thallus, is formed of hyphae wrapped around the photosynthetic partner. The photosynthetic organism provides carbon and energy in the form of carbohydrates. Some cyanobacteria fix nitrogen from the atmosphere, contributing nitrogenous compounds to the association. In return, the fungus supplies minerals and protection from dryness and excessive light by encasing the algae in its mycelium. The fungus also attaches the symbiotic organism to the substrate. The thallus of lichens grows very slowly, expanding its diameter a few millimeters per year. Both the fungus and the alga participate in the formation of dispersal units for reproduction. Lichens produce soredia , clusters of algal cells surrounded by mycelia. Soredia are dispersed by wind and water and form new lichens. Lichens are extremely sensitive to air pollution, especially to abnormal levels of nitrogen and sulfur. The U.S. Forest Service and National Park Service can monitor air quality by measuring the relative abundance and health of the lichen population in an area. Lichens fulfill many ecological roles. Caribou and reindeer eat lichens, and lichens also provide cover for small invertebrates that hide in the mycelium. In the production of textiles, weavers used lichens to dye wool for many centuries until the advent of synthetic dyes. Link to Learning Lichens are used to monitor the quality of air. Read more on this site from the United States Forest Service. Fungus/Animal Mutualism Fungi have evolved mutualisms with numerous insects in Phylum Arthropoda: jointed, legged invertebrates. Arthropods depend on the fungus for protection from predators and pathogens, while the fungus obtains nutrients and a way to disseminate spores into new environments. The association between species of Basidiomycota and scale insects is one example. The fungal mycelium covers and protects the insect colonies. 
The scale insects foster a flow of nutrients from the parasitized plant to the fungus. In a second example, leaf-cutting ants of Central and South America literally farm fungi. They cut disks of leaves from plants and pile them up in gardens ( Figure 24.24 ). Fungi are cultivated in these disk gardens, digesting the cellulose in the leaves that the ants cannot break down. Once smaller sugar molecules are produced and consumed by the fungi, the fungi in turn become a meal for the ants. The insects also patrol their garden, preying on competing fungi. Both ants and fungi benefit from the association. The fungus receives a steady supply of leaves and freedom from competition, while the ants feed on the fungi they cultivate. Fungivores Animal dispersal is important for some fungi because an animal may carry spores considerable distances from the source. Fungal spores are rarely completely degraded in the gastrointestinal tract of an animal, and many are able to germinate when they are passed in the feces. Some dung fungi actually require passage through the digestive system of herbivores to complete their lifecycle. The black truffle—a prized gourmet delicacy—is the fruiting body of an underground mushroom. Almost all truffles are ectomycorrhizal, and are usually found in close association with trees. Animals eat truffles and disperse the spores. In Italy and France, truffle hunters use female pigs to sniff out truffles. Female pigs are attracted to truffles because the fungus releases a volatile compound closely related to a pheromone produced by male pigs. 24.4 Fungal Parasites and Pathogens Learning Objectives By the end of this section, you will be able to: Describe fungal parasites and pathogens of plants Describe the different types of fungal infections in humans Explain why antifungal therapy is hampered by the similarity between fungal and animal cells Parasitism describes a symbiotic relationship in which one member of the association benefits at the expense of the other. Both parasites and pathogens harm the host; however, the pathogen causes a disease, whereas the parasite usually does not. Commensalism occurs when one member benefits without affecting the other. Plant Parasites and Pathogens The production of sufficient good-quality crops is essential to human existence. Plant diseases have ruined crops, bringing widespread famine. Many plant pathogens are fungi that cause tissue decay and eventual death of the host ( Figure 24.25 ). In addition to destroying plant tissue directly, some plant pathogens spoil crops by producing potent toxins. Fungi are also responsible for food spoilage and the rotting of stored crops. For example, the fungus Claviceps purpurea causes ergot, a disease of cereal crops (especially of rye). Although the fungus reduces the yield of cereals, the effects of its alkaloid toxins on humans and animals are of much greater significance. In animals, the disease is referred to as ergotism. The most common signs and symptoms are convulsions, hallucinations, gangrene, and loss of milk in cattle. The active ingredient of ergot is lysergic acid, which is a precursor of the drug LSD. Smuts, rusts, and powdery or downy mildew are other examples of common fungal pathogens that affect crops. Aflatoxins are toxic, carcinogenic compounds released by fungi of the genus Aspergillus . Periodically, harvests of nuts and grains are tainted by aflatoxins, leading to massive recalls of produce. 
This sometimes ruins producers and causes food shortages in developing countries. Animal and Human Parasites and Pathogens Fungi can affect animals, including humans, in several ways. A mycosis is a fungal disease that results from infection and direct damage. Fungi attack animals directly by colonizing and destroying tissues. Mycotoxicosis is the poisoning of humans (and other animals) by foods contaminated by fungal toxins (mycotoxins). Mycetismus describes the ingestion of preformed toxins in poisonous mushrooms. In addition, individuals who display hypersensitivity to molds and spores develop strong and dangerous allergic reactions. Fungal infections are generally very difficult to treat because, unlike bacteria, fungi are eukaryotes. Antibiotics only target prokaryotic cells, whereas compounds that kill fungi also harm the eukaryotic animal host. Many fungal infections are superficial; that is, they occur on the animal’s skin. Termed cutaneous (“skin”) mycoses, they can have devastating effects. For example, the decline of the world’s frog population in recent years may be caused by the chytrid fungus Batrachochytrium dendrobatidis, which infects the skin of frogs and presumably interferes with gaseous exchange. Similarly, more than a million bats in the United States have been killed by white-nose syndrome, which appears as a white ring around the mouth of the bat. It is caused by the cold-loving fungus Pseudogymnoascus destructans , which disseminates its deadly spores in caves where bats hibernate. Mycologists are researching the transmission, mechanism, and control of P. destructans to stop its spread. Fungi that cause the superficial mycoses of the epidermis, hair, and nails rarely spread to the underlying tissue ( Figure 24.26 ). These fungi are often misnamed “dermatophytes”, from the Greek words dermis meaning skin and phyte meaning plant, although they are not plants. Dermatophytes are also called “ringworms” because of the red ring they cause on skin. They secrete extracellular enzymes that break down keratin (a protein found in hair, skin, and nails), causing conditions such as athlete’s foot and jock itch. These conditions are usually treated with over-the-counter topical creams and powders, and are easily cleared. More persistent superficial mycoses may require prescription oral medications. Systemic mycoses spread to internal organs, most commonly entering the body through the respiratory system. For example, coccidioidomycosis (valley fever) is commonly found in the southwestern United States, where the fungus resides in the dust. Once inhaled, the spores develop in the lungs and cause symptoms similar to those of tuberculosis. Histoplasmosis is caused by the dimorphic fungus Histoplasma capsulatum. It also causes pulmonary infections, and in rarer cases, swelling of the membranes of the brain and spinal cord. Treatment of these and many other fungal diseases requires the use of antifungal medications that have serious side effects. Opportunistic mycoses are fungal infections that are either common in all environments, or part of the normal biota. They mainly affect individuals who have a compromised immune system. Patients in the late stages of AIDS suffer from opportunistic mycoses that can be life threatening. The yeast Candida sp., a common member of the natural biota, can grow unchecked and infect the vagina or mouth (oral thrush) if the pH of the surrounding environment, the person’s immune defenses, or the normal population of bacteria are altered. 
Mycetismus can occur when poisonous mushrooms are eaten. It causes a number of human fatalities during mushroom-picking season. Many edible fruiting bodies of fungi resemble highly poisonous relatives, and amateur mushroom hunters are cautioned to carefully inspect their harvest and avoid eating mushrooms of doubtful origin. The adage “there are bold mushroom pickers and old mushroom pickers, but there are no old, bold mushroom pickers” is unfortunately true. Scientific Method Connection Dutch Elm Disease Question : Do trees resistant to Dutch elm disease secrete antifungal compounds? Hypothesis : Construct a hypothesis that addresses this question. Background : Dutch elm disease is a fungal infestation that affects many species of elm ( Ulmus ) in North America. The fungus infects the vascular system of the tree, which blocks water flow within the plant and mimics drought stress. Accidentally introduced to the United States in the early 1930s, it decimated shade trees across the continent. It is caused by the fungus Ophiostoma ulmi . The elm bark beetle acts as a vector and transmits the disease from tree to tree. Many European and Asiatic elms are less susceptible to the disease than are American elms. Test the hypothesis : A researcher testing this hypothesis might do the following. Inoculate several Petri plates containing a medium that supports the growth of fungi with fragments of Ophiostoma mycelium. Cut (with a metal punch) several disks from the vascular tissue of susceptible varieties of American elms and resistant European and Asiatic elms. Include control Petri plates inoculated with mycelia without plant tissue to verify that the medium and incubation conditions do not interfere with fungal growth. As a positive control, add paper disks impregnated with a known fungicide to Petri plates inoculated with the mycelium. Incubate the plates for a set number of days to allow fungal growth and spreading of the mycelium over the surface of the plate. Record the diameter of the zone of clearing, if any, around the tissue samples and the fungicide control disk. Record your observations in the following table.
Table 24.1 Results of Antifungal Testing of Vascular Tissue from Different Species of Elm
Disk | Zone of Inhibition (mm)
Distilled Water |
Fungicide |
Tissue from Susceptible Elm #1 |
Tissue from Susceptible Elm #2 |
Tissue from Resistant Elm #1 |
Tissue from Resistant Elm #2 |
Analyze the data and report the results. Compare the effect of distilled water to the fungicide. These are negative and positive controls that validate the experimental setup. The fungicide should be surrounded by a clear zone where the fungus growth was inhibited. Is there a difference among different species of elm? Draw a conclusion : Was there antifungal activity as expected from the fungicide? Did the results support the hypothesis? If not, how can this be explained? There are several possible explanations for resistance to a pathogen. Active deterrence of infection is only one of them. A short sketch of how the recorded measurements might be tabulated and compared appears below. 
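The analysis step above amounts to comparing each disk’s zone of inhibition against the negative (distilled water) and positive (fungicide) controls. The following Python sketch shows one minimal way to organize that comparison; the measurements are invented placeholders rather than data from the text, and the criterion for calling a disk “active” (any zone larger than the water control) is a simplifying assumption.

```python
# Hypothetical zone-of-inhibition readings (mm) for the disks in Table 24.1.
# These values are placeholders for illustration; a real experiment supplies them.
zones_mm = {
    "Distilled Water": 0.0,                # negative control
    "Fungicide": 12.0,                     # positive control
    "Tissue from Susceptible Elm #1": 1.0,
    "Tissue from Susceptible Elm #2": 0.5,
    "Tissue from Resistant Elm #1": 7.0,
    "Tissue from Resistant Elm #2": 8.5,
}

negative = zones_mm["Distilled Water"]
positive = zones_mm["Fungicide"]

# Validate the experimental setup: the fungicide disk must show a clearly
# larger zone than the water disk, or the controls have failed.
assert positive > negative, "Controls failed: check medium and incubation"

print(f"{'Disk':32} {'Zone (mm)':>9}  Antifungal activity?")
for disk, zone in zones_mm.items():
    # Call a disk "active" if its clear zone exceeds the negative control.
    active = "yes" if zone > negative else "no"
    print(f"{disk:32} {zone:9.1f}  {active}")

# Group comparison: do resistant elms show larger zones than susceptible ones?
susceptible = [z for d, z in zones_mm.items() if "Susceptible" in d]
resistant = [z for d, z in zones_mm.items() if "Resistant" in d]
print(f"Mean zone, susceptible elms: {sum(susceptible) / len(susceptible):.1f} mm")
print(f"Mean zone, resistant elms:   {sum(resistant) / len(resistant):.1f} mm")
```

In a real analysis, each tissue type would be replicated on several plates and the group means compared with a significance test before concluding that the hypothesis is supported.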
As we have seen, they influence the well-being of human populations on a large scale because they are part of the nutrient cycle in ecosystems. They have other ecosystem roles as well. As animal pathogens, fungi help to control the population of damaging pests. These fungi are very specific to the insects they attack, and do not infect other animals or plants. Fungi are currently under investigation as potential microbial insecticides, with several already on the market. For example, the fungus Beauveria bassiana is being tested as a possible biological control agent for the recent spread of the emerald ash borer. It has been released in Michigan, Illinois, Indiana, Ohio, West Virginia, and Maryland ( Figure 24.27 ). The mycorrhizal relationship between fungi and plant roots is essential for the productivity of farmland. Without the fungal partner in root systems, 80–90 percent of trees and grasses would not survive. Mycorrhizal fungal inoculants are available as soil amendments from gardening supply stores and are promoted by supporters of organic agriculture. We also eat some types of fungi. Mushrooms figure prominently in the human diet. Morels, shiitake mushrooms, chanterelles, and truffles are considered delicacies ( Figure 24.28 ). The humble meadow mushroom, Agaricus campestris , appears in many dishes. Molds of the genus Penicillium ripen many cheeses. They originate in natural environments such as the caves of Roquefort, France, where wheels of sheep milk cheese are stacked in order to capture the molds responsible for the blue veins and pungent taste of the cheese. Fermentation—of grains to produce beer, and of fruits to produce wine—is an ancient art that humans in most cultures have practiced for millennia. Wild yeasts are acquired from the environment and used to ferment sugars into CO 2 and ethyl alcohol under anaerobic conditions. It is now possible to purchase isolated strains of wild yeasts from different wine-making regions. Louis Pasteur was instrumental in developing a reliable strain of brewer’s yeast, Saccharomyces cerevisiae , for the French brewing industry in the late 1850s. This was one of the first examples of biotechnology patenting. Many secondary metabolites of fungi are of great commercial importance. Antibiotics are naturally produced by fungi to kill or inhibit the growth of bacteria, limiting their competition in the natural environment. Important antibiotics, such as penicillin and the cephalosporins, are isolated from fungi. Valuable drugs isolated from fungi include the immunosuppressant drug cyclosporine (which reduces the risk of rejection after organ transplant), the precursors of steroid hormones, and ergot alkaloids used to stop bleeding. Psilocybin is a compound found in fungi such as Psilocybe semilanceata and Gymnopilus junonius, which have been used for their hallucinogenic properties by various cultures for thousands of years. As simple eukaryotic organisms, fungi are important model research organisms. Many advances in modern genetics were achieved by the use of the red bread mold Neurospora crassa . Additionally, many important genes originally discovered in S. cerevisiae served as a starting point in discovering analogous human genes. As a eukaryotic organism, the yeast cell produces and modifies proteins in a manner similar to human cells, as opposed to the bacterium Escherichia coli, which lacks the internal membrane structures and enzymes to tag proteins for export. 
This makes yeast a much better organism for use in recombinant DNA technology experiments. Like bacteria, yeasts grow easily in culture, have a short generation time, and are amenable to genetic modification.
business_ethics
Summary 8.1 Diversity and Inclusion in the Workforce A diverse workforce yields many positive outcomes for a company. Access to a deep pool of talent, positive customer experiences, and strong performance are all documented positives. Diversity may also bring some initial challenges, and some employees can be reluctant to see its advantages, but committed managers can deal with these obstacles effectively and make diversity a success through inclusion. 8.2 Accommodating Different Abilities and Faiths To accommodate religious beliefs, the absence of formal religious faith, or disabilities, businesses should make every reasonable accommodation they can to allow workers to contribute to the company. This may require scheduling flexibility, the use of special devices, or simply an understanding manager. 8.3 Sexual Identification and Orientation Although about half the states prohibit sexual orientation discrimination in private and public workplaces and a few do so in public workplaces only, federal statutory law does not expressly address it; however, the Supreme Court held in Bostock v. Clayton County (2020) that Title VII's prohibition of sex discrimination covers sexual orientation and gender identity. Successful companies will not only follow the applicable law but also develop ethical policies to send a clear message that they are interested in job skills and abilities, not sexual orientation or personal life choices. 8.4 Income Inequalities Income inequality has grown sharply while the U.S. middle class, though vital to economic growth, has continued to shrink. Currently, the federal minimum wage is $7.25 per hour, and many states simply follow the federal lead in establishing their own minimums. Though some economists dispute the existence of a simple, direct link between a shrinking middle class and governmental failure to raise the minimum wage at a sufficiently rapid pace, no one denies that businesses themselves could take the lead here by paying a higher minimum wage. Companies also can commit to hire workers as employees rather than as independent contractors and pay the cost of their benefits, and to pay women the same as men for similar work. 8.5 Animal Rights and the Implications for Business Mainstream businesses from pharmaceutical and medical companies to grocers and restaurants must all consider the growing public awareness of the ethical treatment of nonhuman animals. This evolving concern has particular consequences for agribusiness in terms of what creatures we consider appropriate to cultivate and eat. Cosmetic companies are increasingly subject to legislative mandates in the global marketplace and to consumer pressure at home to adopt ethical policies with regard to animal testing. An aware consuming public can continue to force improvements in our treatment of animals.
Chapter Outline 8.1 Diversity and Inclusion in the Workforce 8.2 Accommodating Different Abilities and Faiths 8.3 Sexual Identification and Orientation 8.4 Income Inequalities 8.5 Animal Rights and the Implications for Business Introduction Effective business managers in the twenty-first century need to be aware of a broad array of ethical choices they can make that affect their employees, their customers, and society as a whole. What these decisions have in common is the need for managers to recognize and respect the rights of all. Actively supporting human diversity at work, for instance, benefits the business organization as well as society on a broader level ( Figure 8.1 ). Thus, ethical managers recognize and accommodate the special needs of some employees, show respect for workers’ different faiths, appreciate and accept their differing sexual orientations and identification, and ensure pay equity for all. Ethical managers are also tuned in to public sentiment, such as calls by stakeholders to respect the rights of animals, and they monitor trends in these social attitudes, especially on social media. How would you, as a manager, ensure a workplace that values inclusion and diversity? How would you respond to employees who resisted such a workplace? How would you approach broader social concerns such as income inequality or animal rights? This chapter introduces the potential impacts on business of some of the most pressing social themes of our time, and it discusses ways managers can respect the rights of all and improve business results by choosing an ethical path.
[ { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> Diversity and inclusion are positive steps for business organizations , and despite their sometimes slow pace , the majority are moving in the right direction . <hl> Diversity strengthens the company ’ s internal relationships with employees and improves employee morale , as well as its external relationships with customer groups . Communication , a core value of most successful businesses , becomes more effective with a diverse workforce . Performance improves for multiple reasons , not the least of which is that acknowledging diversity and respecting differences is the ethical thing to do . 10", "hl_sentences": "Diversity and inclusion are positive steps for business organizations , and despite their sometimes slow pace , the majority are moving in the right direction .", "question": { "cloze_format": "Diversity and inclusion at all levels of a private-sector company is ________.", "normal_format": "Which of the following is correct about diversity and inclusion at all levels of a private-sector company?", "question_choices": [ "mandated by federal law", "the approach preferred by many companies", "required by state law in thirty states", "contrary to the company’s fiduciary duty to stockholders" ], "question_id": "fs-idm253727888", "question_text": "Diversity and inclusion at all levels of a private-sector company is ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Google is not alone in coming up short on diversity . <hl> <hl> Recruiting and hiring a diverse workforce has been a challenge for most major technology companies , including Facebook , Apple , and Yahoo ( now owned by Verizon ); all have reported gender and ethnic shortfalls in their workforces . <hl>", "hl_sentences": "Google is not alone in coming up short on diversity . Recruiting and hiring a diverse workforce has been a challenge for most major technology companies , including Facebook , Apple , and Yahoo ( now owned by Verizon ); all have reported gender and ethnic shortfalls in their workforces .", "question": { "cloze_format": "Google ________.", "normal_format": "Which of the following is correct about Google?", "question_choices": [ "has the most diverse workforce of any major U.S. company", "uses a strict quota system in its hiring practices", "is similar to other technology companies, most of which lag on diversity", "promotes women at higher rates than men" ], "question_id": "fs-idm265593920", "question_text": "Google ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Cases from the Real World The Abercrombie & Fitch Religious Discrimination Case The U . S . Supreme Court , in a 2015 case involving Abercrombie & Fitch , ruled that that “ an employer may not refuse to hire an applicant for work if the employer was motivated by avoiding the need to accommodate a religious practice , ” and that doing so violates the prohibition against religious discrimination contained in the CRA of 1964 , Title VII . <hl> According to the EEOC general counsel David Lopez , “ This case is about defending the American principles of religious freedom and tolerance . This decision is a victory for our increasingly diverse society . ” 22", "hl_sentences": "Cases from the Real World The Abercrombie & Fitch Religious Discrimination Case The U . S . 
Supreme Court , in a 2015 case involving Abercrombie & Fitch , ruled that “ an employer may not refuse to hire an applicant for work if the employer was motivated by avoiding the need to accommodate a religious practice , ” and that doing so violates the prohibition against religious discrimination contained in the CRA of 1964 , Title VII .", "question": { "cloze_format": "The primary law prohibiting religious discrimination in the private sector workplace is ________.", "normal_format": "What is the primary law prohibiting religious discrimination in the private sector workplace?", "question_choices": [ "the First Amendment of the Constitution", "state law", "Title VII of the Civil Rights Act", "the Declaration of Independence" ], "question_id": "fs-idm203558432", "question_text": "The primary law prohibiting religious discrimination in the private sector workplace is ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> Cases from the Real World The ADA and Verizon Attendance Policy Managers are usually sticklers about attendance , but Verizon recently learned an expensive lesson about its mandatory attendance policies from a 2011 class action lawsuit by employees and the EEOC . <hl> <hl> The suit asserted that Verizon denied reasonable accommodations to several hundred employees , disciplining or firing them for missing too many days of work and refusing to make exceptions for those whose absences were caused by their disabilities . <hl> <hl> According to the EEOC , Verizon violated the ADA because its no-fault attendance policy was an inflexible and “ unreasonable ” one-size-fits-all rule . <hl>", "hl_sentences": "Cases from the Real World The ADA and Verizon Attendance Policy Managers are usually sticklers about attendance , but Verizon recently learned an expensive lesson about its mandatory attendance policies from a 2011 class action lawsuit by employees and the EEOC . The suit asserted that Verizon denied reasonable accommodations to several hundred employees , disciplining or firing them for missing too many days of work and refusing to make exceptions for those whose absences were caused by their disabilities . According to the EEOC , Verizon violated the ADA because its no-fault attendance policy was an inflexible and “ unreasonable ” one-size-fits-all rule .", "question": { "cloze_format": "If an ADA accommodation is significantly expensive, ________.", "normal_format": "What if an ADA accommodation is significantly expensive?", "question_choices": [ "the courts may rule that it is not reasonable", "the courts may rule that it must be provided anyway", "the EEOC guidelines do not apply", "the federal government must subsidize the expense" ], "question_id": "fs-idm176660384", "question_text": "If an ADA accommodation is significantly expensive, ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Although the U . S . Supreme Court ruled in United States v . Windsor ( 2013 ) that Section 3 of the 1996 Defense of Marriage Act ( which had restricted the federal interpretations of “ marriage ” and “ spouse ” to opposite-sex unions ) was unconstitutional , and guaranteed same-sex couples the right to marry in Obergefell v . Hodges ( 2015 ) , 24 marital status has little or no direct applicability to the circumstances of someone ’ s employment . 
<hl> In terms of legal protections at work , the LGBTQ community had been at a disadvantage because Title VII of the CRA was not interpreted to address sexual orientation and federal law did not prohibit discrimination based on this characteristic . <hl> In the 2020 Supreme Court case Bostock v . Clayton County , the Court held that discrimination based on \" sex \" includes discrimination based on sexual orientation and gender identity . While the 2020 Supreme Court decision extended protection in terms of employment considerations , discrimination in other forms remains . <hl> For example , a proposed law named the Equality Act is a federal LGBTQ nondiscrimination bill that would provide protections for LGBTQ individuals in employment , housing , credit , and education . <hl> <hl> But unless and until it passes , it remains up to the business community to provide protections consistent with those provided under federal law for other employees or applicants . <hl>", "hl_sentences": "In terms of legal protections at work , the LGBTQ community had been at a disadvantage because Title VII of the CRA was not interpreted to address sexual orientation and federal law did not prohibit discrimination based on this characteristic . For example , a proposed law named the Equality Act is a federal LGBTQ nondiscrimination bill that would provide protections for LGBTQ individuals in employment , housing , credit , and education . But unless and until it passes , it remains up to the business community to provide protections consistent with those provided under federal law for other employees or applicants .", "question": { "cloze_format": "The answer to the question of whether individual states are allowed to have laws protecting LGBTQ applicant or employee rights is ___.", "normal_format": "Are individual states allowed to have laws protecting LGBTQ applicant or employee rights?", "question_choices": [ "Yes, but it is not really necessary because federal law already protects them.", "No, because it would violate federal law, which prohibits it.", "Yes, some states extend this protection because there is no law at the federal level.", "No, because the Supreme Court ruling in Obergefell v. Hodges now protects these rights." ], "question_id": "fs-idm265752848", "question_text": "Are individual states allowed to have laws protecting LGBTQ applicant or employee rights?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> Some business leaders , such as Bill Gross , chair of the world ’ s largest bond-trading firm , suggest raising the federal minimum wage , currently $ 7.25 per hour for all employers doing any type of business in interstate commerce ( e . g . , sending or receiving mail out of state ) or for any company with more than $ 500,000 in sales . <hl> Many business leaders and economists agree that a higher minimum wage would help address at least part of the problem of income inequality ; industrialized economies function best when income inequality is minimal , according to Gross and others who advocate for policies that bring the power of workers and corporations back into balance . 34 A hike in the minimum wage affects middle-class workers in two ways . First , it is a direct help to those who are part of a two-earner family at the lower end of the middle class , giving them more income to spend on necessities . Second , many higher-paid workers earn a wage that is tied to the minimum wage . 
Their salaries would increase as well .", "hl_sentences": "Some business leaders , such as Bill Gross , chair of the world ’ s largest bond-trading firm , suggest raising the federal minimum wage , currently $ 7.25 per hour for all employers doing any type of business in interstate commerce ( e . g . , sending or receiving mail out of state ) or for any company with more than $ 500,000 in sales .", "question": { "cloze_format": "As of 2018, the current federal minimum wage is ________.", "normal_format": "As of 2018, what is the current federal minimum wage?", "question_choices": [ "$7.25 per hour", "$10 per hour", "$12.50 per hour", "$15 per hour" ], "question_id": "fs-idm248457568", "question_text": "As of 2018, the current federal minimum wage is ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "The middle class is not a homogenous group , however . For example , split fairly evenly between Democratic and Republican parties , the middle class helped elect Republican George W . Bush in 2004 and Democrat Barack Obama in 2008 and 2012 . And , of course , a suburban house with a white picket fence represents a consumption economy , which is not everyone ’ s idea of utopia , nor should it be . More importantly , not everyone had equal access to this ideal . But one thing almost everyone agrees on is that a shrinking middle class is not good for the economy . <hl> Data from the International Monetary Fund indicate the U . S . middle class is going in the wrong direction . <hl> <hl> 30 Only one-quarter of 1 percent of all U . S . households have moved up from the middle - to the upper-income bracket since 2000 , while twelve times that many have slid to the lower-income bracket . <hl> That is a complete reversal from the period between 1970 and 2000 , when middle-income households were more likely to move up than down . According to Business Insider , the U . S . middle class is “ hollowing out , and it ’ s hurting U . S . economic growth . ” 31", "hl_sentences": "Data from the International Monetary Fund indicate the U . S . middle class is going in the wrong direction . 30 Only one-quarter of 1 percent of all U . S . households have moved up from the middle - to the upper-income bracket since 2000 , while twelve times that many have slid to the lower-income bracket .", "question": { "cloze_format": "The middle class in the United States ________.", "normal_format": "Which of the following is correct about the middle class in the United States?", "question_choices": [ "has steadily increased every year since World War II", "has steadily declined every year since 1990", "shows a significant decline since 2000", "has grown since the recession of 2008" ], "question_id": "fs-idm250297184", "question_text": "The middle class in the United States ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> In cosmetic testing , the United States has relatively few laws protecting animals , whereas about forty other nations have taken more direct action . <hl> <hl> In 2013 , the European Union banned animal testing for cosmetics and the marketing and sale of cosmetics tested on animals . <hl> Norway and Switzerland passed similar laws . Outside Europe , a variety of other nations , including Guatemala , India , Israel , New Zealand , South Korea , Taiwan , and Turkey , have also passed laws to ban or limit cosmetic animal testing . U . S . 
cosmetic companies will not be able to sell their products in any of these countries unless they change their practices . <hl> The Humane Cosmetics Act has been introduced but not yet passed by Congress . <hl> <hl> If enacted , it would end cosmetics testing on animals in the United States and ban the import of animal-tested cosmetics . <hl> 46 However , in the current antiregulatory environment , passage seems unlikely .", "hl_sentences": "In cosmetic testing , the United States has relatively few laws protecting animals , whereas about forty other nations have taken more direct action . In 2013 , the European Union banned animal testing for cosmetics and the marketing and sale of cosmetics tested on animals . The Humane Cosmetics Act has been introduced but not yet passed by Congress . If enacted , it would end cosmetics testing on animals in the United States and ban the import of animal-tested cosmetics .", "question": { "cloze_format": "Laws protecting animal rights in cosmetic testing are ________.", "normal_format": "What are laws protecting animal rights in cosmetic testing?", "question_choices": [ "more advanced in the United States than in the European Union", "more advanced in the European Union than in the United States", "more advanced in Asia than in the United States", "more advanced in Asia than in the European Union" ], "question_id": "fs-idm192406064", "question_text": "Laws protecting animal rights in cosmetic testing are ________." }, "references_are_paraphrase": null } ]
8
8.1 Diversity and Inclusion in the Workforce Learning Objectives By the end of this section, you will be able to: Explain the benefits of employee diversity in the workplace Discuss the challenges presented by workplace diversity Diversity is not simply a box to be checked; rather, it is an approach to business that unites ethical management and high performance. Business leaders in the global economy recognize the benefits of a diverse workforce and see it as an organizational strength, not as a mere slogan or a form of regulatory compliance with the law. They recognize that diversity can enhance performance and drive innovation; conversely, adhering to the traditional business practices of the past can cost them talented employees and loyal customers. A study by global management consulting firm McKinsey & Company indicates that businesses with gender and ethnic diversity outperform others. According to Mike Dillon, chief diversity and inclusion officer for PwC in San Francisco, “attracting, retaining and developing a diverse group of professionals stirs innovation and drives growth.” 1 Living this goal means not only recruiting, hiring, and training talent from a wide demographic spectrum but also including all employees in every aspect of the organization. Workplace Diversity The twenty-first century workplace features much greater diversity than was common even a couple of generations ago. Individuals who might once have faced employment challenges because of religious beliefs, ability differences, or sexual orientation now regularly join their peers in interview pools and on the job. Each may bring a new outlook and different information to the table; employees can no longer take for granted that their coworkers think the same way they do. This pushes them to question their own assumptions, expand their understanding, and appreciate alternate viewpoints. The result is more creative ideas, approaches, and solutions. Thus, diversity may also enhance corporate decision-making. Communicating with those who differ from us may require us to make an extra effort and even change our viewpoint, but it leads to better collaboration and more favorable outcomes overall, according to David Rock, director of the Neuro-Leadership Institute in New York City, who says diverse coworkers “challenge their own and others’ thinking.” 2 According to the Society for Human Resource Management (SHRM), organizational diversity now includes more than just racial, gender, and religious differences. It also encompasses different thinking styles and personality types, as well as other factors such as physical and cognitive abilities and sexual orientation, all of which influence the way people perceive the world. “Finding the right mix of individuals to work on teams, and creating the conditions in which they can excel, are key business goals for today’s leaders, given that collaboration has become a paradigm of the twenty-first century workplace,” according to an SHRM article. 3 Attracting workers who are not all alike is an important first step in the process of achieving greater diversity. However, managers cannot stop there. Their goals must also encompass inclusion, or the engagement of all employees in the corporate culture. “The far bigger challenge is how people interact with each other once they’re on the job,” says Howard J. Ross, founder and chief learning officer at Cook Ross, a consulting firm specializing in diversity. “Diversity is being invited to the party; inclusion is being asked to dance.
Diversity is about the ingredients, the mix of people and perspectives. Inclusion is about the container—the place that allows employees to feel they belong, to feel both accepted and different.” 4 Workplace diversity is not a new policy idea; its origins date back at least to the passage of the Civil Rights Act of 1964 (CRA). Census figures show that women made up less than 29 percent of the civilian workforce when Congress passed Title VII of the CRA prohibiting workplace discrimination. After passage of the law, gender diversity in the workplace expanded significantly. According to the U.S. Bureau of Labor Statistics (BLS), the percentage of women in the labor force increased from 48 percent in 1977 to a peak of 60 percent in 1999. Over the last five years, the percentage has held relatively steady at 57 percent. Over the past forty years, the total number of women in the labor force has risen from 41 million in 1977 to 71 million in 2017. 5 The BLS projects that the number of women in the U.S. labor force will reach 92 million in 2050 (an increase that far outstrips population growth). The statistical data show a similar trend for African American, Asian American, and Hispanic workers (Figure 8.2). Just before passage of the CRA in 1964, the percentages of minorities in the official on-the-books workforce were relatively small compared with their representation in the total population. In 1966, Asians accounted for just 0.5 percent of private-sector employment, with Hispanics at 2.5 percent and African Americans at 8.2 percent. 6 However, Hispanic employment numbers have significantly increased since the CRA became law; they are expected to more than double from 15 percent in 2010 to 30 percent of the labor force in 2050. Similarly, Asian Americans are projected to increase their share from 5 to 8 percent between 2010 and 2050. Much more progress remains to be made, however. For example, many people think of the technology sector as the workplace of open-minded millennials. Yet Google, as one example of a large and successful company, revealed in its latest diversity statistics that its progress toward a more inclusive workforce may be steady but it is very slow. Men still account for the great majority of employees at the corporation; only about 30 percent are women, and women fill fewer than 20 percent of Google’s technical roles (Figure 8.3). The company has shown a similar lack of gender diversity in leadership roles, where women hold fewer than 25 percent of positions. Despite modest progress, an ocean-sized gap remains to be narrowed. When it comes to ethnicity, approximately 56 percent of Google employees are White. About 35 percent are Asian, 3.5 percent are Latino, and 2.4 percent are Black, and of the company’s management and leadership roles, 68 percent are held by White people. 7 Google is not alone in coming up short on diversity. Recruiting and hiring a diverse workforce has been a challenge for most major technology companies, including Facebook, Apple, and Yahoo (now owned by Verizon); all have reported gender and ethnic shortfalls in their workforces. The Equal Employment Opportunity Commission (EEOC) has made available 2014 data comparing the participation of women and minorities in the high-technology sector with their participation in U.S. private-sector employment overall, and the results show the technology sector still lags.
8 Compared with all private-sector industries, the high-technology industry employs a larger share of Whites (68.5%), Asian Americans (14%), and men (64%), and a smaller share of African Americans (7.4%), Latinos (8%), and women (36%). Whites also represent a much higher share of those in the executive category (83.3%), whereas other groups hold a significantly lower share, including African Americans (2%), Latinos (3.1%), and Asian Americans (10.6%). In addition, and perhaps not surprisingly, 80 percent of executives are men and only 20 percent are women. This compares negatively with all other private-sector industries, in which 70 percent of executives are men and 30 percent women. Technology companies are generally not trying to hide the problem. Many have been publicly releasing diversity statistics since 2014, and they have been vocal about their intentions to close diversity gaps. More than thirty technology companies, including Intel, Spotify, Lyft, Airbnb, and Pinterest, each signed a written pledge to increase workforce diversity and inclusion, and Google pledged to spend more than $100 million to address diversity issues. 9 Diversity and inclusion are positive steps for business organizations, and despite their sometimes slow pace, the majority are moving in the right direction. Diversity strengthens the company’s internal relationships with employees and improves employee morale, as well as its external relationships with customer groups. Communication, a core value of most successful businesses, becomes more effective with a diverse workforce. Performance improves for multiple reasons, not the least of which is that acknowledging diversity and respecting differences is the ethical thing to do. 10 Adding Value through Diversity Diversity need not be a financial drag on a company, measured as a cost of compliance with no return on the investment. A recent McKinsey & Company study concluded that companies that adopt diversity policies do well financially, realizing what is sometimes called a diversity dividend. The study results demonstrated a statistically significant relationship of better financial performance from companies with a more diverse leadership team, as indicated in Figure 8.4. Companies in the top 25 percent in terms of gender diversity were 15 percent more likely to post financial returns above their industry median in the United States. Likewise, companies in the top 25 percent of racial and/or ethnic diversity were 35 percent more likely to show returns exceeding their respective industry median. 11 These results demonstrate a positive correlation between diversity and performance, rebutting any claim that affirmative action and other such programs are social engineering that constitutes a financial drag on earnings. In fact, the results reveal a negative correlation between performance and lack of diversity, with companies in the bottom 25 percent for gender and ethnicity or race proving to be statistically less likely to achieve above-average financial returns than the average companies. Non-diverse companies were not leaders in performance indicators. Positive correlations do not equal causation, of course, and greater gender and ethnic diversity do not automatically translate into profit. Rather, as this chapter shows, they enhance creativity and decision-making, employee satisfaction, an ethical work environment, and customer goodwill, all of which, in turn, improve operations and boost performance.
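To make the likelihood figures above concrete, here is a minimal Python sketch that converts a claim such as “15 percent more likely to post above-median returns” into an implied probability. The 50 percent baseline is our own simplifying assumption for illustration (by definition, roughly half of companies fall above their industry median); it is not a number reported in the McKinsey study.

    # Illustrative sketch: convert "X percent more likely to beat the
    # industry median" into an implied probability of doing so.
    # The 50 percent baseline is an assumption for illustration,
    # not a figure from the McKinsey study.
    BASELINE = 0.50  # assumed chance of above-median returns

    def implied_probability(relative_lift_pct):
        """Scale the baseline by a relative lift expressed in percent."""
        return BASELINE * (1 + relative_lift_pct / 100)

    for group, lift in [("Top quartile, gender diversity", 15),
                        ("Top quartile, racial/ethnic diversity", 35)]:
        print(f"{group}: {implied_probability(lift):.1%}")
    # Prints 57.5% and 67.5%, respectively.

Read this way, a 35 percent relative lift moves a company from even odds to roughly a two-in-three chance of beating its industry median, which is why the racial and ethnic diversity result is the more striking of the two.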
Diversity is not a concept that matters only for the rank-and-file workforce; it makes a difference at all levels of an organization. The McKinsey & Company study, which examined twenty thousand firms in ninety countries, also found that companies in the top 25 percent for executive and/or board diversity had returns on equity more than 50 percent higher than those companies that ranked in the lowest 25 percent. Companies with a higher percentage of female executives tended to be more profitable. 12 Link to Learning Read the working paper “Is Gender Diversity Profitable? Evidence from a Global Survey,” from the Peterson Institute for International Economics for a closer look at the profitability of gender diversity. Achieving equal representation in employment based on demographic data is the ethical thing to do because it represents the essential American ideal of equal opportunity for all. It is a basic assumption of an egalitarian society that all have the same chance without being hindered by immutable characteristics. However, there are also directly relevant business reasons to do it. More diverse companies perform better, as we saw earlier in this chapter, but why? The reasons are intriguing and complex. Among them are that diversity improves a company’s chances of attracting top talent and that considering all points of view may lead to better decision-making. Diversity also improves customer experience and employee satisfaction. To achieve improved results, companies need to expand their definition of diversity beyond race and gender. For example, differences in age, experience, and country of residence may result in a more refined global mind-set and cultural fluency, which can help companies succeed in international business. A salesperson may know the language of customers or potential customers from a specific region or country, for example, or a customer service representative may understand the norms of another culture. Diverse product-development teams can grasp what a group of customers may want that is not currently being offered. Resorting to the same approaches repeatedly is not likely to result in breakthrough solutions. Diversity, however, provides usefully divergent perspectives on the business challenges companies face. New ideas help solve old problems—another way diversity makes a positive contribution to the bottom line. The Challenges of a Diverse Workforce Diversity is not always an instant success; it can sometimes introduce workplace tensions and lead to significant challenges for a business to address. Some employees simply are slow to come around to a greater appreciation of the value of diversity because they may never have considered this perspective before. Others may be prejudiced and consequently attempt to undermine the success of diversity initiatives in general. In 2017, for example, a senior software engineer’s memo criticizing Google’s diversity initiatives was leaked, creating significant protests on social media and adverse publicity in national news outlets. 13 The memo asserted “biological causes” and “men’s higher drive for status” to account for women’s unequal representation in Google’s technology departments and leadership. Google’s response was quick. The engineer was fired, and statements were released emphasizing the company’s commitment to diversity. 
14 Although Google was applauded for its quick response, some argued that an employee should be free to express personal opinions without punishment (despite the fact that there is no right of free speech while at work in the private sector). In the latest development, the fired engineer and a coworker filed a class-action lawsuit against Google on behalf of three specific groups of employees who claim they have been discriminated against by Google: Whites, conservatives, and men. 15 This is not just the standard “reverse discrimination” lawsuit; it goes to the heart of the culture of diversity and one of its greatest challenges for management—the backlash against change. In February 2018, the National Labor Relations Board ruled that Google’s termination of the engineer did not violate federal labor law 16 and that Google had discharged the employee only for inappropriate but unprotected conduct or speech that demeaned women and had no relationship to any terms of employment. Although this ruling settles the administrative labor law aspect of the case, it has no effect on the private wrongful termination lawsuit filed by the engineer, which is still proceeding. Yet other employees are resistant to change in whatever form it takes. As inclusion initiatives and considerations of diversity become more prominent in employment practices, wise leaders should be prepared to fully explain the advantages to the company of greater diversity in the workforce as well as to make the appropriate accommodations to support it. Accommodations can take various forms. For example, if you hire more women, should you change the way you run meetings so everyone has a chance to be heard? Have you recognized that women returning to work after childrearing may bring improved skills such as time management or the ability to work well under pressure? If you are hiring more people of different faiths, should you set aside a prayer room? Should you give out tickets to football games as incentives? Or build team spirit with trips to a local bar? Your managers may need to accept that these initiatives may not suit everyone. Adherents of some faiths may abstain from alcohol, and some people prefer cultural events to sports. Many might welcome a menu of perquisites (“perks”) from which to choose, and these will not necessarily be the ones that were valued in the past. Mentoring new and diverse peers can help erase bias and overcome preconceptions about others. However, all levels of a company must be engaged in achieving diversity, and all must work together to overcome resistance. Link to Learning Read this article for strategies on overcoming gendered meeting dynamics in the workplace from the Harvard Business Review. Cases from the Real World Companies with Diverse Workforces Texas Health Resources, a Dallas-area healthcare and hospital company, ranked No. 1 among Fortune’s Best Workplaces for Diversity and No. 2 for Best Workplaces for African Americans. 17 Texas Health employs a diverse workforce that is about 75 percent female and 40 percent minority. The company goes above and beyond by offering English classes for Hispanic workers and hosting several dozen social and professional events each year to support networking and connections among peers with different backgrounds. It also offers same-sex partner benefits; approximately 3 percent of its workforce identifies as LGBTQ (lesbian, gay, bisexual, transgender, queer or questioning).
Another company receiving recognition is Marriott International, ranked No. 6 among Best Workplaces for Diversity and No. 7 among Best Workplaces for African Americans and for Latinos. African American, Latino, and other ethnic minorities account for about 65 percent of Marriott’s 100,000 employees, and 15 percent of its executives are minorities. Marriott’s president and CEO, Arne Sorenson, recognized as an advocate for LGBTQ equality in the workplace, published an open letter on LinkedIn expressing his support for diversity and entreating then president-elect Donald Trump to use his position to advocate for inclusiveness. “Everyone, no matter their sexual orientation or identity, gender, race, religion disability or ethnicity should have an equal opportunity to get a job, start a business or be served by a business,” Sorenson wrote. “Use your leadership to minimize divisiveness around these areas by letting people live their lives and by ensuring that they are treated equally in the public square.” 18 Critical Thinking Is it possible that Texas Health and Marriott rank highly for diversity because the hospitality and healthcare industries tend to hire more women and minorities in general? Why or why not? 8.2 Accommodating Different Abilities and Faiths Learning Objectives By the end of this section, you will be able to: Identify workplace accommodations often provided for persons with differing abilities Describe workplace accommodations made for religious reasons The traditional definition of diversity is broad, encompassing not only race, ethnicity, and gender but also religious beliefs, national origin, and cognitive and physical abilities as well as sexual preference or orientation. This section examines two of these categories, religion and ability, looking at how an ethical manager handles them as part of an overall diversity policy. In both cases, the concept of reasonable accommodation means an employer must try to allow for differences among the workforce. Protections for People with Disabilities In the United States, the Americans with Disabilities Act (ADA), passed in 1990, stipulates that a person has a disability if he or she has a physical or mental impairment that reduces participation in “a major life activity,” such as work. An employer may not discriminate in offering employment to an individual who is diagnosed as having such a disability. Furthermore, if employment is offered, the employer is obliged to make reasonable accommodations to enable him or her to carry out normal job tasks. Making reasonable accommodations may include altering the physical workplace so it is readily accessible, restructuring a job, providing or modifying equipment or devices, or offering part-time or modified work schedules. Other accommodations could include providing readers, interpreters, or other necessary forms of assistance such as an assistive animal (Figure 8.5). The ADA also prohibits discriminating against individuals with disabilities in providing access to government services, public accommodations, transportation, telecommunications, and other essential services. 19 Access and accommodation for employees with physical or mental disabilities are good for business because they expand the potential pool of good workers. It is also ethical to have compassion for those who want to work and be contributing members of society. This principle holds for customers as well as employees.
Recognizing the need for protection in this area, the federal government has enacted several laws to provide it. The Disability Rights Division of the U.S. Department of Justice lists ten different federal laws protecting people with disabilities, including not only the ADA but also laws such as the Rehabilitation Act, the Air Carrier Access Act, and the Architectural Barriers Act. Link to Learning The EEOC is the primary federal agency responsible for enforcing the ADA (as well as Title VII of the Civil Rights Act of 1964, mentioned earlier in the chapter). It hears complaints, tries to settle cases through administrative action, and, if cases cannot be settled, works with the Department of Justice to file lawsuits against violators. Visit the EEOC website to learn more. A key part of complying with the law is understanding and applying the concept of reasonableness: “An employer is required to provide a reasonable accommodation to a qualified applicant or employee with a disability unless the employer can show that the accommodation would be an undue hardship—that is, that it would require significant difficulty or expense.” 20 The law does not require an employee to refer to the ADA or to “disability” or “reasonable accommodation” when requesting some type of assistance. Managers need to be able to recognize the variety of ways in which a request for an accommodation is communicated. For example, an employee might not specifically say, “I need a reasonable accommodation for my disability” but rather, “I’m having a hard time getting to work on time because of the medical treatments I’m undergoing.” This example demonstrates a challenge employers may face under the ADA in properly identifying requests for accommodation. Cases from the Real World The ADA and Verizon Attendance Policy Managers are usually sticklers about attendance, but Verizon recently learned an expensive lesson about its mandatory attendance policies from a 2011 class action lawsuit by employees and the EEOC. The suit asserted that Verizon denied reasonable accommodations to several hundred employees, disciplining or firing them for missing too many days of work and refusing to make exceptions for those whose absences were caused by their disabilities. According to the EEOC, Verizon violated the ADA because its no-fault attendance policy was an inflexible and “unreasonable” one-size-fits-all rule. The EEOC required Verizon to pay $20 million to settle the suit, the largest single disability discrimination settlement in the agency’s history. The settlement also forced Verizon to change its attendance policy to include reasonable accommodations for persons with disabilities. A third requirement was that Verizon provide regular training on ADA requirements to all managers responsible for administering attendance policies. Critical Thinking What are some specific rules that would fit within a fair and reasonable attendance policy? How would you decide whether an employee was taking advantage of an absenteeism policy? Managing Religious Diversity in the Workplace Title VII of the CRA, which governs nondiscrimination, applies the same rules to the religious beliefs (or nonbeliefs) of employees and job applicants as it does to race, gender, and other categories. The essence of the law mandates four tenets that all employers should follow: nondiscrimination, nonharassment, nonretaliation, and reasonable accommodation.
Regulations require that an employee notify the employer of a bona fide religious belief for which he or she wants protection, but the employee need not expressly request a specific accommodation. The employer must consider all possible accommodations that do not require violating the individual’s beliefs and/or practices, such as allowing time off (Figure 8.6). However, the accommodation need not pose undue hardship on the firm, in terms of either scheduling or financial sacrifice. The employer must present proof of hardship if it decides it cannot offer an accommodation. Some cases of accommodation are based on cultural heritage rather than religion. What Would You Do? Can Everyone’s Wishes Be Accommodated? You are a manager in a large Texas-based oil and gas company planning an annual summer company picnic and barbecue on the weekend of June 19. The oil industry has a long tradition of outdoor barbecues, and this one is a big morale-building event. However, June 19 is “Juneteenth,” the day on which news of the Emancipation Proclamation reached slaves in Texas in 1865. Several African American employees always attend the barbecue event and are looking forward to it, but they also want to celebrate Emancipation Day, rich in history and culture and accompanied by its own official event. The picnic date cannot be easily rescheduled because of all the catering arrangements that had to be made. Critical Thinking Is there a way to permit some employees to celebrate both occasions without inconveniencing others who will be attending only one? What would you do as the manager, keeping in mind that you do not want to offend anyone? Reasonable accommodation may require more than just a couple of hours off to go to weekly worship or to celebrate a holiday. It may extend to dress and uniform requirements, grooming rules, work rules and responsibilities, religious expression and displays, prayer or meditation rooms, and dietary issues. Link to Learning The Sikh faith dates to roughly the fifteenth century in India. Its practitioners have made their way to many Western nations, including the United Kingdom, Canada, Italy, and the United States. Sikhs in the West have experienced discrimination due to the distinctive turbans adult males wear, which are sometimes mistaken for Islamic apparel. Men are also required to wear a dagger called a kirpan. California law permits religious observers to wear a sheathed dagger openly, but not hidden away. Watch this video showing a San Joaquin County Sheriff’s sergeant explaining the accommodation given to Sikhs to wear a kirpan in public to learn more. How comfortable are you with permitting daggers to be carried openly in the workplace? The law also protects those who do not have traditional beliefs. In Welsh v. United States (1970), the Supreme Court ruled that any belief occupying “a place parallel to that filled by the God of those admittedly qualifying for the exemption” is covered by the law. 21 A nontheistic value system consisting of personal, moral, or ethical beliefs that is sincerely held with the strength of traditional religious views is deserving of protection. Protected individuals need not have a religion; indeed, if atheist or agnostic, they may have no religion at all. Religion has become a hot-button issue for some political groups in the United States. Religious tolerance is the official national policy enshrined in the Constitution, but it has come under attack by some who want to label the United States an exclusively Christian nation.
Cases from the Real World The Abercrombie & Fitch Religious Discrimination Case The U.S. Supreme Court, in a 2015 case involving Abercrombie & Fitch, ruled that “an employer may not refuse to hire an applicant for work if the employer was motivated by avoiding the need to accommodate a religious practice,” and that doing so violates the prohibition against religious discrimination contained in the CRA of 1964, Title VII. According to the EEOC general counsel David Lopez, “This case is about defending the American principles of religious freedom and tolerance. This decision is a victory for our increasingly diverse society.” 22 The case arose when, as part of her Muslim faith, a teenage girl named Samantha Elauf wore a hijab (headscarf) to a job interview with Abercrombie & Fitch. Elauf was denied a job because she did not conform to the company’s “Look Policy,” which Abercrombie claimed banned head coverings. Elauf filed a complaint with the EEOC alleging religious discrimination, and the EEOC, in turn, filed suit against Abercrombie & Fitch, alleging it refused to hire Elauf because of her religious beliefs and failed to accommodate her by making an exception to its “Look Policy.” “I was a teenager who loved fashion and was eager to work for Abercrombie & Fitch,” said Elauf. “Observance of my faith should not have prevented me from getting a job. I am glad that I stood up for my rights, and happy that the EEOC was there for me and took my complaint to the courts. I am grateful to the Supreme Court for the decision and hope that other people realize that this type of discrimination is wrong and the EEOC is there to help.” 23 Critical Thinking Does a retail clothing store have an interest in employee appearance that it can justify in terms of customer sales? Does it matter to you what a sales associate looks like when you shop for clothes? Why or why not? 8.3 Sexual Identification and Orientation Learning Objectives By the end of this section, you will be able to: Explain how sexual identification and orientation are protected by law Discuss the ethical issues raised in the workplace by differences in sexual identification and orientation As society expands its understanding and appreciation of sexual orientation and identity, companies and managers must adopt a more inclusive perspective that keeps pace with evolving norms. Successful managers are those who are willing to create a more welcoming work environment for all employees, given the wide array of sexual orientations and identities evident today. Legal Protections Workplace discrimination in this area means treating someone differently solely because of his or her sexual identification or sexual orientation, which can include, but is not limited to, identification as gay or lesbian (homosexual), bisexual, transsexual, or straight (heterosexual). Discrimination may also be based on an individual’s association with someone of a different sexual orientation. Forms that such discrimination may take in the workplace include denial of opportunities, termination, and sexual assault, as well as the use of offensive terms, stereotyping, and other harassment. Although the U.S. Supreme Court ruled in United States v. Windsor (2013) that Section 3 of the 1996 Defense of Marriage Act (which had restricted the federal interpretations of “marriage” and “spouse” to opposite-sex unions) was unconstitutional, and guaranteed same-sex couples the right to marry in Obergefell v.
Hodges (2015), 24 marital status has little or no direct applicability to the circumstances of someone’s employment. In terms of legal protections at work, the LGBTQ community had been at a disadvantage because Title VII of the CRA was not interpreted to address sexual orientation and federal law did not prohibit discrimination based on this characteristic. In the 2020 Supreme Court case Bostock v. Clayton County, the Court held that discrimination based on “sex” includes discrimination based on sexual orientation and gender identity. While the 2020 Supreme Court decision extended protection in terms of employment considerations, discrimination in other forms remains. For example, a proposed law named the Equality Act is a federal LGBTQ nondiscrimination bill that would provide protections for LGBTQ individuals in employment, housing, credit, and education. But unless and until it passes, it remains up to the business community to provide protections consistent with those provided under federal law for other employees or applicants. Ethical Considerations In the absence of a specific law, LGBTQ issues present a unique opportunity for ethical leadership. Many companies choose to do the ethically and socially responsible thing and treat all workers equally, for example, by extending the same benefits to same-sex partners that they extend to opposite-sex spouses. Ethical leaders are also willing to listen and be considerate when dealing with employees who may still be coming to an understanding of their sexual identification. Financial and performance-related considerations come into play as well. Denver Investments recently analyzed the stock performance of companies before and after their adoption of LGBTQ-inclusive workplace policies. 25 The number of companies outperforming their peers in various industries increased after companies adopted LGBTQ-inclusive workplace policies. Once again, being ethical does not mean losing money or performing poorly. In fact, states that have passed legislation considered anti-LGBTQ by the wider U.S. community, such as the Religious Freedom Restoration Act in Indiana or North Carolina’s H.B. 2, the infamous “bathroom bill” that would require transgender individuals to use the restroom corresponding with their birth certificate, have experienced significant economic pushback. These states have seen statewide and targeted boycotts by consumers, major corporations, national organizations such as the National Collegiate Athletic Association, and even other cities and states. 26 In 2016, in response to H.B. 2, nearly seventy large U.S. companies, including American Airlines, Apple, DuPont, General Electric, IBM, Morgan Stanley, and Wal-Mart, signed an amicus (“friend of the court”) brief in opposition to the unpopular North Carolina bill. 27 In 2017, the North Carolina legislature replaced the law, and a 2019 court settlement substantially altered it; however, the remaining North Carolina law limits local municipalities' protections for LGBTQ people. Indiana’s Religious Freedom Restoration Act evoked a similar backlash in 2015 and public criticism from U.S. businesses. To assess LGBTQ equality policies at a corporate level, the Human Rights Campaign foundation publishes an annual Corporate Equality Index (CEI) of approximately one thousand large U.S. companies and scores each on a scale of 0 to 100 on the basis of how LGBTQ-friendly its benefits and employment policies are (Figure 8.8).
More than six hundred companies recently earned a perfect score in the 2018 CEI, including such household names as AT&T, Boeing, Coca-Cola, Gap Inc., General Motors, Johnson & Johnson, Kellogg, United Parcel Service, and Xerox. 28 Link to Learning Read the Human Rights Campaign’s 2018 report for more on the Human Rights Campaign’s CEI and its criteria. Another organization tracking LGBTQ equality and inclusion in the workplace is the National LGBT Chamber of Commerce, which issues third-party certification for businesses that are majority-owned by LGBT individuals. There are currently more than one thousand LGBT-certified business enterprises across the country, although California, New York, Texas, Florida, and Georgia account for approximately 50 percent of them. Although these are all top-ranked states for new business startups in general, they are also home to multiple Fortune 500 companies whose diversity programs encourage LGBT-certified businesses to become part of their supply chains. Examples of large LGBT-friendly companies with headquarters in these states are American Airlines, JPMorgan Chase, SunTrust Bank, and Pacific Gas & Electric. 8.4 Income Inequalities Learning Objectives By the end of this section, you will be able to: Explain why income inequality is a problem for the United States and the world Analyze the effects of income inequality on the middle class Describe possible solutions to the problem of income inequality The gap in earnings between the United States’ affluent upper class and the rest of the country continues to grow every year. The imbalance in the distribution of income among the participants of an economy, or income inequality, is an enormous challenge for U.S. businesses and for society. The middle class, often called the engine of growth and prosperity, is shrinking, and new ethical, cultural, and economic problems are following from that change. Some identify income inequality as an ethical problem, some as an economic problem. Perhaps it is both. This section will address income inequality and the way it affects U.S. businesses and consumers. The Middle Class in the United States Data collected by economic researchers at the University of California show that income disparities have become more pronounced over the past thirty-five years, with the top 10 percent of income earners averaging ten times as much income as the bottom 90 percent, and the top 1 percent making more than forty times what the bottom 90 percent does. 29 The percentage of total U.S. income earned by the top 1 percent increased from 8 percent to 22 percent during this period. Figure 8.9 indicates the disparity as of 2015. The U.S. economy was built largely on the premise of an expanding and prosperous middle class to which everyone had a chance of belonging. This ideal set the United States apart from other countries, in its own eyes and those of the world. In the years after World War II, the GI Bill and returning prosperity provided veterans with money for education, home mortgages, and even small businesses, all of which helped the economy grow. For the first time, many people could afford homes of their own, and residential home construction reached record rates. Families bought cars and opened credit card accounts. The culture of the middle class with picket fences, backyard barbecues, and black-and-white televisions had arrived. Television shows such as Leave it to Beaver and Father Knows Best reflected the “good life” desired by many in this newly emerging group.
By the mid-1960s, middle-class wage earners were fast becoming the engine of the world’s largest economy. The middle class is not a homogeneous group, however. For example, split fairly evenly between Democratic and Republican parties, the middle class helped elect Republican George W. Bush in 2004 and Democrat Barack Obama in 2008 and 2012. And, of course, a suburban house with a white picket fence represents a consumption economy, which is not everyone’s idea of utopia, nor should it be. More importantly, not everyone had equal access to this ideal. But one thing almost everyone agrees on is that a shrinking middle class is not good for the economy. Data from the International Monetary Fund indicate the U.S. middle class is going in the wrong direction. 30 Only one-quarter of 1 percent of all U.S. households have moved up from the middle- to the upper-income bracket since 2000, while twelve times that many have slid to the lower-income bracket. That is a complete reversal from the period between 1970 and 2000, when middle-income households were more likely to move up than down. According to Business Insider, the U.S. middle class is “hollowing out, and it’s hurting U.S. economic growth.” 31 Not only has the total wealth of middle-income families remained flat (Figure 8.10), but the overall percentage of middle-income households in the United States has shrunk from almost 60 percent in 1970 to only 47 percent in 2014, a very significant drop. Because consumers of comfortable means are a huge driver of the U.S. economy, with their household consumption of goods and services like food, energy, and education making up more than two-thirds of the nation’s gross domestic product (GDP), the downward trend is an economic challenge for corporate America and the government. Business must be part of the solution. But exactly what can U.S. companies do to help address income inequality? Addressing Income Inequality Robert Reich was U.S. Secretary of Labor from 1993 to 1997 and served in the administrations of three presidents (Gerald Ford, Jimmy Carter, and Bill Clinton). He is one of the nation’s leading experts on the labor market and the economy and is currently Chancellor’s Professor of Public Policy at the University of California, Berkeley, and a senior fellow at the Blum Center for Developing Economies. Reich recently told this story: “I was visited in my office by the chairman of one of the country’s biggest high-tech firms. He wanted to talk about the causes and consequences of widening inequality and the shrinking middle class, and what to do about it.” Reich asked the chairman why he was concerned. “Because the American middle class is the core of our customer base. If they can’t afford our products in the years ahead, we’re in deep trouble.” 32 Reich is hearing a similar concern from a growing number of business leaders, who see an economy that is leaving out too many people. Business leaders know the U.S. economy cannot grow when wages are declining, nor can their businesses succeed over the long term without a growing or at least a stable middle class. Other business leaders, such as Lloyd Blankfein, CEO of Goldman Sachs, have also said that income inequality is a negative development. Reich quoted Blankfein: “It is destabilizing the nation and is responsible for the divisions in the country . . .
too much of the GDP over the last generation has gone to too few of the people.” 33 Some business leaders, such as Bill Gross, chair of the world’s largest bond-trading firm, suggest raising the federal minimum wage, currently $7.25 per hour for all employers doing any type of business in interstate commerce (e.g., sending or receiving mail out of state) or for any company with more than $500,000 in sales. Many business leaders and economists agree that a higher minimum wage would help address at least part of the problem of income inequality; industrialized economies function best when income inequality is minimal, according to Gross and others who advocate for policies that bring the power of workers and corporations back into balance. 34 A hike in the minimum wage affects middle-class workers in two ways. First, it is a direct help to those who are part of a two-earner family at the lower end of the middle class, giving them more income to spend on necessities. Second, many higher-paid workers earn a wage that is tied to the minimum wage. Their salaries would increase as well. Without congressional action to raise the minimum wage, states have taken the lead, along with businesses that are voluntarily raising their own minimum wage. Twenty-nine states have minimum wages that exceed the federal rate of $7.25 per hour. Costco, T.J. Maxx, Marshalls, Ikea, Starbucks, Gap, In-N-Out Burger, Whole Foods, Ben & Jerry’s, Shake Shack, and McDonald’s have also raised minimum wages in the past two years. Target recently announced a rise in its minimum wage to eleven dollars per hour, and banks, including Wells Fargo, PNC Financial Services, and Fifth Third Bank, announced a fifteen-dollar minimum wage. 35 Link to Learning Go to the National Conference of State Legislatures website for information about various laws in each state and to look up the minimum wage law in your state. The American Sustainable Business Council, in conjunction with Business for a Fair Wage, surveyed more than five hundred small businesses, and the results were surprising. A clear majority (58%–66%, depending on region) supported raising the minimum wage to at least ten dollars per hour. 36 Business owners were not simply being ethical; most understand that their business would benefit from an increase in consumers’ purchasing power, and that this, in turn, would help the general economy. Frank Knapp, CEO of the South Carolina Small Business Chamber of Commerce representing five thousand business owners, said a higher minimum wage “will put more money in the hands of 300,000 South Carolinians who make less than ten dollars per hour and they will spend it here in our local economies. This minimum wage increase will also benefit another 150,000 employees who will have their wages adjusted. The resulting net $500 million increase in state GDP will be good for small businesses and good for the economy of South Carolina.” 37 In addition to paying a higher wage, businesses can help workers move to, or stay in, the middle class in other ways. For decades, some companies have hired many full-time workers as independent contractors because it saves them money on a variety of employee benefits they do not have to offer. However, that practice shifts the burden to the workers, who now have to pay the full cost of their health insurance, workers’ compensation, unemployment benefits, time off, and payroll taxes.
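The arithmetic behind that burden shift is worth making explicit. The minimal Python sketch below uses the September 2017 Department of Labor hourly averages cited in the next paragraph; treating the employer-paid benefits as a cost a reclassified contractor must cover entirely out of pocket is our simplification for illustration.

    # Minimal sketch of the employee-versus-contractor cost shift,
    # using the Department of Labor hourly averages cited below.
    total_compensation = 35.64  # employer cost per hour worked (Sept. 2017)
    wages = 24.33               # cash wages and salaries per hour

    benefits = total_compensation - wages  # 11.31: employer-paid benefits
    # A contractor paid only the cash wage must buy equivalent benefits
    # out of pocket, effectively losing the benefits share of the total.
    print(f"Wages as share of total compensation: {wages / total_compensation:.0%}")  # 68%
    print(f"Effective cut if reclassified: {benefits / total_compensation:.0%}")      # 32%

That 32 percent result is where the “about one-third less” figure in the next paragraph comes from.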
A recent Department of Labor study indicates that employer costs for employee compensation averaged $35.64 per hour worked in September 2017; wages and salaries averaged $24.33 per hour worked and accounted for 68 percent of these costs, whereas benefit costs averaged $11.31 and accounted for the remaining 32 percent. 38 That means if employees on the payroll were paid as independent contractors, their pay would effectively be about one-third less, assuming they purchased benefits on their own. That roughly one-third difference companies save by hiring independent contractors is often the margin between being in the middle class and falling below it. Ethics Across Time and Cultures Falling Out of the Middle Class Imagine a child living in a house with no power for lights, heat, or cooking, embarrassed to invite friends over to play or study, and not understanding what happened to a once-normal life. This is a story many middle-class families in the United States think could happen only to someone else, never to them. However, an HBO documentary entitled American Winter suggests the opposite is true; many seemingly solid middle-class families can slip all too easily into the lower class, into poverty, in houses that are dark with empty refrigerators. The film, set in Portland, Oregon, tells the story of an economic tragedy. Families that were once financially stable are now barely keeping their heads above water. A needed job was outsourced or given to an independent contractor, or a raise failed to come even as necessities kept getting more expensive. The families had to try to pay for healthcare or make a mortgage payment when their bank account was overdrawn. Once-proud middle-class workers talk about the shame of having to ask friends for help or turn to public assistance as a last resort. The fall of the U.S. middle class is more than a line on an economic chart; it is a cold reality for many families who never saw it coming. Critical Thinking Does a company have an ethical duty to find a balance between remaining profitable and paying all workers a decent living wage? Why or why not? Who decides what constitutes a fair wage? How would you explain to a board of directors your decision to pay entry-level workers a higher wage than required by law? Yet sympathy for raising the minimum wage at either the federal or state level to sustain the middle class or reduce poverty in general has not been unanimous. Indeed, some economists have questioned whether a positive correlation exists between greater wages and a lowering of the poverty rate. Representative of such thought is the work of David Neumark, an economist at the University of California, Irvine, and William L. Wascher, a long-time economic researcher on the staff of the Board of Governors of the Federal Reserve System. They argue that, however well-meaning such efforts might be, simply raising the minimum wage can be counterproductive in driving down poverty. Rather, they maintain, the right calculus for achieving this goal is much more complex. As they put it, “we are hard-pressed to imagine a compelling argument for a higher minimum wage when it neither helps low-income families nor reduces poverty.” Instead, the federal and state governments should consider a series of steps, such as the Earned Income Tax Credit, that would be more effective in mitigating poverty. 39 Pay Equity as a Corollary of Income Equality The issue of income inequality is of particular significance as it relates to women.
According to the World Economic Forum (WEF), gender inequality is strongly associated with income inequality. 40 The WEF studied the association between the two phenomena in 140 countries over the past twenty years and discovered they are linked virtually everywhere, not only in developing nations. The issue of pay discrimination is addressed elsewhere in this textbook; however, the issue merits mention here as a part of the bigger picture of equality in the workplace. Adding to the disparity in income between men and women is the reality that many women are single mothers with dependent children and sometimes grandchildren. Hence, any reduction in their earning power has direct implications for their dependents, too, constituting injustice to multiple generations. According to multiple studies, including those by the American Association of University Women and the Pew Research Center, on average, women are paid approximately 80 percent of what men are paid. 41 Laws that attempt to address this issue have not eradicated the problem. A recent trend is to take legislative action at the state rather than the federal level. A New Jersey law, for example, was named the Diane B. Allen Equal Pay Act to honor a retired state senator who experienced pay discrimination. 42 It will be the strongest such law in the country, allowing victims of discrimination to seek redress for up to six years of underpayment, and monetary damages for a prevailing plaintiff will be tripled. The most significant part of the law, however, is a seemingly small change in wording that will have a big impact. Rather than requiring “equal pay for equal work,” as does the federal law and most state laws aimed at the gender wage gap, the Diane B. Allen Equal Pay Act will require “equal pay for substantially similar work.” This means that if a New Jersey woman has a different title than her male colleague but performs the same kinds of tasks and has the same level of responsibility, she must be paid the same. The new law recognizes that slight differences in job titles are sometimes used to justify pay differences but in reality are often arbitrary. Minnesota recently passed a similar law, but it applies only to state government employees, not private-sector workers. It mandates that women be paid the same for comparable jobs and analyzes the work performed on the basis of how much knowledge, problem solving, and responsibility is required, and on working conditions rather than merely on job titles. Ethical business managers will see this trend as an effort to address an ethical issue that has existed for well over a century and will follow the lead of states such as New Jersey and Minnesota. A company can help solve this problem by changing the way it uses job titles and creating a compensation system built on the ideas behind these two laws, which focus on job characteristics and not titles. 8.5 Animal Rights and the Implications for Business Learning Objectives By the end of this section, you will be able to: Explain rising concerns about corporate treatment of animals Explain the concept of agribusiness ethics Describe the financial implications of animal ethics for business Ethical questions about our treatment of animals arise in several different industries, such as agriculture, medicine, and cosmetics. This section addresses these questions because they form part of the larger picture of the way society treats all living things—including nonhuman animals as well as the environment.
All states in the United States have some form of laws to protect animals; some violations carry criminal penalties and some carry civil penalties. Consumer groups and the media have also applied pressure to the business community to take animal ethics seriously, and businesses have discovered there is money to be made in the booming business of pets. Of course, as always, we should acknowledge that culture and geography influence our understanding of ethical issues at a personal and a business level.

A Brief History of the Animal Rights Movement
Rhode Island, along with Boulder, Colorado, and Berkeley, California, led the way in enacting legislation recognizing individuals as guardians, not owners, of their animals, thus giving animals legal status beyond being just items of property. Many U.S. colleges now teach courses on animal rights law, there is strong support for granting fundamental legal rights to animals, and some attorneys, scientists, and ethicists dedicate their careers to animal rights. The animal welfare movement started in the late nineteenth century, when the American Society for the Prevention of Cruelty to Animals (ASPCA) was formed, along with the American Humane Association. The Animal Welfare Institute and the Humane Society of the United States (HSUS) were established in the 1950s. The first federal animal protection law, the Humane Slaughter Act, was passed in the 1950s to prevent unnecessary suffering to farm animals (ten billion of which are killed every year). The most important U.S. law forbidding cruelty to animals in laboratory settings was enacted in 1966; the Animal Welfare Act requires basic humane conditions to be maintained for animals in testing facilities. Finally, in the 1970s and 1980s, the modern animal rights social movement emerged. It has led to an increased awareness of animal ethics by consumers and businesses. However, despite significant progress, research using animals for product testing continues to be controversial in the United States, particularly because improved technology has offered humane and effective alternatives. The use of animals in biomedical research has drawn slightly less negative reaction than their use in consumer product testing, because of the more critical nature of the research. Though animal welfare laws have ameliorated some of the pain of animals used in biomedical research, ethical concerns remain, and veterinarians and physicians are demanding change, as are animal rights groups and policy and ethics experts. Increased integration of ethics in business conduct is operating alongside the desire to recognize animal rights, the entitlement of nonhuman animals to ethical treatment.

The Ethics of What We Eat
Concern for the welfare of animals beyond pets brings us to the agribusiness industry. This is where groups such as the ASPCA and HSUS have been particularly active. Agribusiness is a huge industry that provides us with the food we eat, including plant-based and animal-based foodstuffs. The industry has changed significantly over the past century, evolving from one consisting primarily of family and/or small businesses to a much larger one dominated mostly by large corporations. Aspects of this business with relevant and interrelated ethical questions range from ecology, animal rights, and economics to food safety and long-term sustainability (Figure 8.11).
To achieve a high level of sustainability in the world’s food supply chain, all stakeholders—the political sector, the business sector, the finance sector, the academic sector, and the consumer—must work in concert to achieve an optimal result, and a cost-benefit analysis of ethics in the food industry should include a recognition of all their concerns. Experts predict that to meet the food needs of the world’s population, we will need to double food production over the next fifty years. Given this, a high priority in the agribusiness industry ought to be meeting this demand for food at a reasonable price with products that are not a threat to human health and safety, animal health, or the limited resources in Earth’s environment. However, doing so requires attention to factors such as soil and surface water conservation and protection of natural land and water areas. Furthermore, the treatment of animals by everyone in the livestock chain (e.g., livestock farmers, dealers, fish farmers, animal transporters, slaughterhouses) must be appropriate for a society with high legal and ethical standards. The food chain can be truly sustainable only when it safeguards the social welfare and living environment of the people working in it. This means eliminating corruption, human rights violations (including forced labor and child labor), and poor working conditions. We must also encourage and empower consumers to make informed choices, which includes enforcing labeling regulations and the posting of relevant and accurate dietary information. Finally, an analysis of the food supply chain must also include an awareness of people’s food needs and preferences. For example, the fact that growing numbers of consumers are adopting vegetarian, vegan, gluten-free, or non–genetically modified organism diets is now apparent at responsive restaurants, grocery stores, and employer-provided cafés. For many, the ethical treatment of animals remains a philosophic issue; however, some rules about what foods are morally acceptable and how they are prepared for consumption (e.g., halal or kosher) are also grounded in faith, so animal rights have religious implications, too. All in all, consumers’ growing ethical sensitivity about what we eat could ultimately transform agribusiness. More acreage might be assigned to growing fruits and vegetables relative to the acreage given over to livestock grazing, for instance. Or revelations about slaughterhouse processes might reduce our acceptance of the ways in which meat is processed for consumption. The economic consequences of such changes for agribusiness are difficult to overstate.

Link to Learning
Peter Singer is an Australian-born philosopher who has teaching appointments at Princeton University and Monash University in Australia. His book Animal Liberation, originally published in 1975 but revised many times since, serves as a sort of bible for the animal rights movement. Yet Singer is highly controversial because he argues that some humans have fewer cognitive skills than some animals. Therefore, if we determine what we eat on the basis of sentience (the ability to think and/or feel pain), then many animals we eat should be off limits. Watch Singer’s talk, “The Ethics of What We Eat,” which was recorded at Williams College in December 2009, as an introduction to Singer’s philosophy.
The Use of Animals in Medical and Cosmetic Research
Viewpoints about animals used in medical research are changing in very significant ways and have resulted in a variety of initiatives seeking alternatives to animal testing. As an example, in conjunction with professionals from human and veterinary medicine and the law, the Yale University Hastings Program in Ethics and Health Policy, a bioethics research institute, is seeking alternatives to animal testing that focus on animal welfare. Animals such as monkeys and dogs are used in medical research ranging from the study of Parkinson disease to toxicity testing and studies of drug interactions and allergies. There is no question that medical research is a valuable and important practice. The question is whether the use of animals is a necessary, or even the best, practice for producing the most reliable results. Alternatives include the use of patient-drug databases, virtual drug trials, computer models and simulations, and noninvasive imaging techniques such as magnetic resonance imaging and computed tomography scans. 43 Other techniques, such as microdosing, use humans not as test animals but as a means to improve the accuracy and reliability of test results. In vitro methods based on human cell and tissue cultures, stem cells, and genetic testing methods are also increasingly available. As for consumer product testing, which produces the loudest outcry, the Federal Food, Drug, and Cosmetic Act does not require that animal tests be conducted to demonstrate the safety of cosmetics. Rather, companies test formulations on animals in an attempt to protect themselves from liability if a consumer is harmed by a product. However, a significant amount of new research shows that consumer products such as cosmetics can be accurately tested for safety without the abuse of animals. Some companies may resist altering their methods of conducting research, but a growing number are now realizing that their customers are demanding a change.

Regulating the Use of Animals in Research and Testing
Like virtually every other industrialized nation, the United States permits medical experimentation on animals, with few limitations (assuming sufficient scientific justification). The goal of any laws that exist is not to ban such tests but rather to limit unnecessary animal suffering by establishing standards for the humane treatment and housing of animals in laboratories. As explained by Stephen Latham, the director of the Interdisciplinary Center for Bioethics at Yale, 44 possible legal and regulatory approaches to animal testing vary on a continuum from strong government regulation and monitoring of all experimentation at one end, to a self-regulated approach that depends on the ethics of the researchers at the other end. The United Kingdom has the most significant regulatory scheme, whereas Japan uses the self-regulation approach. The U.S. approach is somewhere in the middle, the result of a gradual blending of the two approaches. A movement has begun to win legal recognition of chimpanzees as the near-equivalent of humans and, therefore, as “persons” with legal rights. This movement is analogous to the effort called environmental justice, an attempt to do the same for the environment (discussed in the section on Environmental Justice in Three Special Stakeholders: Society, the Environment, and Government).
A nonprofit organization in Florida, the Nonhuman Rights Project, is an animal advocacy group that has hired attorneys to argue in court that two chimpanzees (Tommy and Kiko) have the legal standing and right to be freed from cages to live in an outdoor sanctuary (Figure 8.12). In this case, the attorneys have been trying for years to get courts to grant the chimps habeas corpus (Latin for “you shall have the body”), a right people have under the U.S. Constitution when held against their will. To date, this effort has been unsuccessful. 45 The courts have extended certain constitutional rights to corporations, such as the First Amendment right to free speech (in the 2010 Citizens United case). Some therefore reason that a logical extension of that concept would hold that animals and the environment have rights as well. In cosmetic testing, the United States has relatively few laws protecting animals, whereas about forty other nations have taken more direct action. In 2013, the European Union banned animal testing for cosmetics and the marketing and sale of cosmetics tested on animals. Norway and Switzerland passed similar laws. Outside Europe, a variety of other nations, including Guatemala, India, Israel, New Zealand, South Korea, Taiwan, and Turkey, have also passed laws to ban or limit cosmetic animal testing. U.S. cosmetic companies will not be able to sell their products in any of these countries unless they change their practices. The Humane Cosmetics Act has been introduced but not yet passed by Congress. If enacted, it would end cosmetics testing on animals in the United States and ban the import of animal-tested cosmetics. 46 However, in the current antiregulatory environment, passage seems unlikely.

Cases from the Real World
Beagle Freedom Project
Beagles are popular pets because—like most dogs—they are people pleasers, plus they are obedient and easy to care for (Figure 8.13). These same qualities make them the primary breed for animal testing: 96 percent of all dogs used in testing are beagles, leading animal rights groups like the Beagle Freedom Project to make rescuing them a priority. 47 Even animal activists have to compromise to make progress, however, as the director of Beagle Freedom explains: “We have a policy position against animal testing. We don’t like it philosophically, scientifically, even personally. . . . But that doesn’t mean we can’t find common ground, a common-sense solution, to bridge two sides of a very controversial and polarizing debate, which is animal testing, and find this area in the middle where we can get together to help animals.” 48 Dogs used as subjects in laboratory experiments live in stacked metal cages with only fluorescent light, never walk on grass, and associate humans with pain. In toxicology testing, they are exposed to toxins at increasing levels to determine at what point they become ill. Before a beagle can be rescued, the laboratory has to agree to release it, which can be a challenge. If the laboratory is willing, the Beagle Freedom Project still has to negotiate, which usually means paying for all costs, including veterinary care and transportation, absolving the laboratory of all liability, and then finding the dog a home. Alternatives to testing on beagles include three-dimensional human-skin-equivalent systems and a variety of advanced computer-based models for measuring skin irritation, for instance.
According to the New England Anti-Vivisection Society, nonanimal tests are often more cost-effective, practical, and expedient; some produce results in a significantly shorter time. 49

Critical Thinking
Why have U.S. cosmetics companies continued to use beagles for testing when there are more humane alternatives at lower costs?

According to the Humane Society of the United States, a more realistic alternative approach is to develop nonanimal tests that could provide more human safety data, including information about cancer and birth defects related to new products. Consumer pressure can also influence change. If consumer purchases demonstrate a preference for cruelty-free cosmetics and support for ending cosmetics animal testing, businesses will get the message. Almost one hundred companies have already ceased testing cosmetics on animals, including The Body Shop, Burt’s Bees, E.L.F. Cosmetics, Lush, and Tom’s of Maine. Lists of such firms are maintained by People for the Ethical Treatment of Animals and similar organizations. 50

Link to Learning
Cruelty Free International is an organization working to end animal experiments worldwide. It provides information about products that are not tested on animals in an effort to help consumers become more aware of the issues. Take a look at the Cruelty Free International website to learn more.

Companies will be wise to adapt to the increasing level of public awareness and consumer expectations, not least because U.S. culture now incorporates pets in almost every aspect of life. Dogs, cats, and other animals function as therapy pets for patients and those experiencing stress; an Uber-style dog service will bring dogs to work or school for a few minutes of companionship. Pets visit hospitals and act as service animals, appearing in restaurants, on campuses, and in workplaces where they would have been prohibited as recently as ten years ago. According to the American Pet Products Association (APPA), a trade group, two-thirds of U.S. households own a pet, and pet industry sales have tripled in the past fifteen years. 51 The APPA estimates U.S. spending on pets will reach almost $70 billion a year by 2018. “People are fascinated by pets. We act and spend on them as if they were our children,” says New York University sociology professor Colin Jerolmack, who studies animals in society. 52 As people increasingly want to include pets in all aspects of life, new and different industries have emerged and will continue to do so, such as tourism centered on the presence of pets and retail opportunities such as health insurance for animals, upscale stores, and new products specifically tailored for pets. With interest in pets at an all-time high, businesses cannot ignore the trend, either in terms of revenue to be earned or in terms of the ethical treatment of their fellow animals in laboratories.
anatomy_and_physiology
Chapter Objectives
After studying this chapter, you will be able to:
- Distinguish between anatomy and physiology, and identify several branches of each
- Describe the structure of the body, from simplest to most complex, in terms of the six levels of organization
- Identify the functional characteristics of human life
- Identify the four requirements for human survival
- Define homeostasis and explain its importance to normal human functioning
- Use appropriate anatomical terminology to identify key body structures, body regions, and directions in the body
- Compare and contrast at least four medical imaging techniques in terms of their function and use in medicine

Introduction
Though you may approach a course in anatomy and physiology strictly as a requirement for your field of study, the knowledge you gain in this course will serve you well in many aspects of your life. An understanding of anatomy and physiology is not only fundamental to any career in the health professions, but it can also benefit your own health. Familiarity with the human body can help you make healthful choices and prompt you to take appropriate action when signs of illness arise. Your knowledge in this field will help you understand news about nutrition, medications, medical devices, and procedures, and it will help you understand genetic or infectious diseases. At some point, everyone will have a problem with some aspect of his or her body, and your knowledge can help you to be a better parent, spouse, partner, friend, colleague, or caregiver. This chapter begins with an overview of anatomy and physiology and a preview of the body regions and functions. It then covers the characteristics of life and how the body works to maintain stable conditions. It introduces a set of standard terms for body structures and for planes and positions in the body that will serve as a foundation for more comprehensive information covered later in the text. It ends with examples of medical imaging used to see inside the living body.
[ { "answer": { "ans_choice": 2, "ans_text": "regional anatomy" }, "bloom": null, "hl_context": "<hl> is the study of the interrelationships of all of the structures in a specific body region , such as the abdomen . <hl> <hl> Studying regional anatomy helps us appreciate the interrelationships of body structures , such as how muscles , nerves , blood vessels , and <hl> <hl> Regional anatomy <hl>", "hl_sentences": "is the study of the interrelationships of all of the structures in a specific body region , such as the abdomen . Studying regional anatomy helps us appreciate the interrelationships of body structures , such as how muscles , nerves , blood vessels , and Regional anatomy", "question": { "cloze_format": "The specialty that might focus on studying all of the structures of the ankle and foot is ___.", "normal_format": "Which of the following specialties might focus on studying all of the structures of the ankle and foot?", "question_choices": [ "microscopic anatomy", "muscle anatomy", "regional anatomy", "systemic anatomy" ], "question_id": "fs-id2662649", "question_text": "Which of the following specialties might focus on studying all of the structures of the ankle and foot?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "cell" }, "bloom": "1", "hl_context": "Before you begin to study the different structures and functions of the human body , it is helpful to consider its basic architecture ; that is , how its smallest parts are assembled into larger structures . It is convenient to consider the structures of the body in terms of fundamental levels of organization that increase in complexity : subatomic particles , atoms , molecules , organelles , cells , tissues , organs , organ systems , organisms and biosphere ( Figure 1.3 ) . The Levels of Organization To study the chemical level of organization , scientists consider the simplest building blocks of matter : subatomic particles , atoms and molecules . All matter in the universe is composed of one or more unique pure substances called elements , familiar examples of which are hydrogen , oxygen , carbon , nitrogen , calcium , and iron . The smallest unit of any of these pure substances ( elements ) is an atom . Atoms are made up of subatomic particles such as the proton , electron and neutron . Two or more atoms combine to form a molecule , such as the water molecules , proteins , and sugars found in living things . Molecules are the chemical building blocks of all body structures . <hl> A cell is the smallest independently functioning unit of a living organism . <hl> Even bacteria , which are extremely small , independently-living organisms , have a cellular structure . Each bacterium is a single cell . All living structures of human anatomy contain cells , and almost all functions of human physiology are performed in cells or are initiated by cells .", "hl_sentences": "A cell is the smallest independently functioning unit of a living organism .", "question": { "cloze_format": "The smallest independently functioning biological unit of an organism is a(n) ________.", "normal_format": "Which is the smallest independently functioning biological unit of an organism?", "question_choices": [ "cell", "molecule", "organ", "tissue" ], "question_id": "fs-id2070158", "question_text": "The smallest independently functioning biological unit of an organism is a(n) ________." 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "organ" }, "bloom": "1", "hl_context": "A human cell typically consists of flexible membranes that enclose cytoplasm , a water-based cellular fluid together with a variety of tiny functioning units called organelles . In humans , as in all organisms , cells perform all functions of life . A tissue is a group of many similar cells ( though sometimes composed of a few related types ) that work together to perform a specific function . An organ is an anatomically distinct structure of the body composed of two or more tissue types . Each organ performs one or more specific physiological functions . <hl> An organ system is a group of organs that work together to perform major functions or meet physiological needs of the body . <hl> This book covers eleven distinct organ systems in the human body ( Figure 1.4 and Figure 1.5 ) . Assigning organs to organ systems can be imprecise since organs that “ belong ” to one system can also have functions integral to another system . In fact , most organs contribute to more than one system .", "hl_sentences": "An organ system is a group of organs that work together to perform major functions or meet physiological needs of the body .", "question": { "cloze_format": "A collection of similar tissues that performs a specific function is an ________.", "normal_format": "What is a collection of similar tissues that performs a specific function?", "question_choices": [ "organ", "organelle", "organism", "organ system" ], "question_id": "fs-id1894304", "question_text": "A collection of similar tissues that performs a specific function is an ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "sum of all chemical reactions in an organism" }, "bloom": null, "hl_context": "Taken together , these two processes are called metabolism . <hl> Metabolism is the sum of all anabolic and catabolic reactions that take place in the body ( Figure 1.6 ) . <hl> Both anabolism and catabolism occur simultaneously and continuously to keep you alive .", "hl_sentences": "Metabolism is the sum of all anabolic and catabolic reactions that take place in the body ( Figure 1.6 ) .", "question": { "cloze_format": "Metabolism can be defined as the ________.", "normal_format": "What can metabolism be defined as?", "question_choices": [ "adjustment by an organism to external or internal changes", "process whereby all unspecialized cells become specialized to perform distinct functions", "process whereby new cells are formed to replace worn-out cells", "sum of all chemical reactions in an organism" ], "question_id": "fs-id2250378", "question_text": "Metabolism can be defined as the ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "stores energy for use by body cells" }, "bloom": "1", "hl_context": "<hl> Every cell in your body makes use of a chemical compound , adenosine triphosphate ( ATP ) , to store and release energy . <hl> <hl> The cell stores energy in the synthesis ( anabolism ) of ATP , then moves the ATP molecules to the location where energy is needed to fuel cellular activities . <hl> Then the ATP is broken down ( catabolism ) and a controlled amount of energy is released , which is used by the cell to perform a particular job . Interactive Link", "hl_sentences": "Every cell in your body makes use of a chemical compound , adenosine triphosphate ( ATP ) , to store and release energy . 
The cell stores energy in the synthesis ( anabolism ) of ATP , then moves the ATP molecules to the location where energy is needed to fuel cellular activities .", "question": { "cloze_format": "Adenosine triphosphate (ATP) is an important molecule because it ________.", "normal_format": "Why is adenosine triphosphate (ATP) an important molecule?", "question_choices": [ "is the result of catabolism", "release energy in uncontrolled bursts", "stores energy for use by body cells", "All of the above" ], "question_id": "fs-id1933014", "question_text": "Adenosine triphosphate (ATP) is an important molecule because it ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "differentiation" }, "bloom": null, "hl_context": "Development , growth and reproduction Development is all of the changes the body goes through in life . <hl> Development includes the process of differentiation , in which unspecialized cells become specialized in structure and function to perform certain tasks in the body . <hl> Development also includes the processes of growth and repair , both of which involve cell differentiation . Growth is the increase in body size . Humans , like all multicellular organisms , grow by increasing the number of existing cells , increasing the amount of non-cellular material around cells ( such as mineral deposits in bone ) , and , within very narrow limits , increasing the size of existing cells .", "hl_sentences": "Development includes the process of differentiation , in which unspecialized cells become specialized in structure and function to perform certain tasks in the body .", "question": { "cloze_format": "Cancer cells can be characterized as “generic” cells that perform no specialized body function. Thus cancer cells lack ________.", "normal_format": "Cancer cells can be characterized as “generic” cells that perform no specialized body function. What do cancer cells thus lack?", "question_choices": [ "differentiation", "reproduction", "responsiveness", "both reproduction and responsiveness" ], "question_id": "fs-id1897403", "question_text": "Cancer cells can be characterized as “generic” cells that perform no specialized body function. Thus cancer cells lack ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "oxygen" }, "bloom": "1", "hl_context": "<hl> Atmospheric air is only about 20 percent oxygen , but that oxygen is a key component of the chemical reactions that keep the body alive , including the reactions that produce ATP . <hl> Brain cells are especially sensitive to lack of oxygen because of their requirement for a high-and-steady production of ATP . Brain damage is likely within five minutes without oxygen , and death is likely within ten minutes .", "hl_sentences": "Atmospheric air is only about 20 percent oxygen , but that oxygen is a key component of the chemical reactions that keep the body alive , including the reactions that produce ATP .", "question": { "cloze_format": "Humans have the most urgent need for a continuous supply of ________.", "normal_format": "Humans have the most urgent need for a continuous supply of which?", "question_choices": [ "food", "nitrogen", "oxygen", "water" ], "question_id": "fs-id2303679", "question_text": "Humans have the most urgent need for a continuous supply of ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "All classes of nutrients are essential to human survival." 
}, "bloom": "1", "hl_context": "<hl> A nutrient is a substance in foods and beverages that is essential to human survival . <hl> <hl> The three basic classes of nutrients are water , the energy-yielding and body-building nutrients , and the micronutrients ( vitamins and minerals ) . <hl>", "hl_sentences": "A nutrient is a substance in foods and beverages that is essential to human survival . The three basic classes of nutrients are water , the energy-yielding and body-building nutrients , and the micronutrients ( vitamins and minerals ) .", "question": { "cloze_format": "A true statement about nutrients is that ___ .", "normal_format": "Which of the following statements about nutrients is true?", "question_choices": [ "All classes of nutrients are essential to human survival.", "Because the body cannot store any micronutrients, they need to be consumed nearly every day.", "Carbohydrates, lipids, and proteins are micronutrients.", "Macronutrients are vitamins and minerals." ], "question_id": "fs-id2378896", "question_text": "Which of the following statements about nutrients is true?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "breaking down stored energy" }, "bloom": "3", "hl_context": "The body can also respond effectively to short-term exposure to cold . One response to cold is shivering , which is random muscle movement that generates heat . <hl> Another response is increased breakdown of stored energy to generate heat . <hl> When that energy reserve is depleted , however , and the core temperature begins to drop significantly , red blood cells will lose their ability to give up oxygen , denying the brain of this critical component of ATP production . This lack of oxygen can cause confusion , lethargy , and eventually loss of consciousness and death . The body responds to cold by reducing blood circulation to the extremities , the hands and feet , in order to prevent blood from cooling there and so that the body ’ s core can stay warm . Even when core body temperature remains stable , however , tissues exposed to severe cold , especially the fingers and toes , can develop frostbite when blood flow to the extremities has been much reduced . This form of tissue damage can be permanent and lead to gangrene , requiring amputation of the affected region .", "hl_sentences": "Another response is increased breakdown of stored energy to generate heat .", "question": { "cloze_format": "C.J. is stuck in her car during a bitterly cold blizzard. Her body responds to the cold by ________.", "normal_format": "C.J. is stuck in her car during a bitterly cold blizzard. How does her body responds to the cold?", "question_choices": [ "increasing the blood to her hands and feet", "becoming lethargic to conserve heat", "breaking down stored energy", "significantly increasing blood oxygen levels" ], "question_id": "fs-id2570988", "question_text": "C.J. is stuck in her car during a bitterly cold blizzard. Her body responds to the cold by ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "a control center" }, "bloom": "3", "hl_context": "A negative feedback system has three basic components ( Figure 1.10 a ) . <hl> A sensor , also referred to a receptor , is a component of a feedback system that monitors a physiological value . <hl> <hl> This value is reported to the control center . <hl> <hl> The control center is the component in a feedback system that compares the value to the normal range . 
<hl> <hl> If the value deviates too much from the set point , then the control center activates an effector . <hl> An effector is the component in a feedback system that causes a change to reverse the situation and return the value to the normal range .", "hl_sentences": "A sensor , also referred to a receptor , is a component of a feedback system that monitors a physiological value . This value is reported to the control center . The control center is the component in a feedback system that compares the value to the normal range . If the value deviates too much from the set point , then the control center activates an effector .", "question": { "cloze_format": "After you eat lunch, nerve cells in your stomach respond to the distension (the stimulus) resulting from the food. They relay this information to ________.", "normal_format": "After you eat lunch, nerve cells in your stomach respond to the distension (the stimulus) resulting from the food. What do they relay this information to?", "question_choices": [ "a control center", "a set point", "effectors", "sensors" ], "question_id": "fs-id2200529", "question_text": "After you eat lunch, nerve cells in your stomach respond to the distension (the stimulus) resulting from the food. They relay this information to ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "sweat glands to increase their output" }, "bloom": "1", "hl_context": "<hl> As blood flow to the skin increases , sweat glands are activated to increase their output . <hl> <hl> As the sweat evaporates from the skin surface into the surrounding air , it takes heat with it . <hl>", "hl_sentences": "As blood flow to the skin increases , sweat glands are activated to increase their output . As the sweat evaporates from the skin surface into the surrounding air , it takes heat with it .", "question": { "cloze_format": "Stimulation of the heat-loss center causes ________.", "normal_format": "What does stimulation of the heat-loss center cause?", "question_choices": [ "blood vessels in the skin to constrict", "breathing to become slow and shallow", "sweat glands to increase their output", "All of the above" ], "question_id": "fs-id2097087", "question_text": "Stimulation of the heat-loss center causes ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "childbirth" }, "bloom": "1", "hl_context": "Positive feedback intensifies a change in the body ’ s physiological condition rather than reversing it . A deviation from the normal range results in more change , and the system moves farther away from the normal range . Positive feedback in the body is normal only when there is a definite end point . <hl> Childbirth and the body ’ s response to blood loss are two examples of positive feedback loops that are normal but are activated only when needed . <hl> Childbirth at full term is an example of a situation in which the maintenance of the existing body state is not desired . Enormous changes in the mother ’ s body are required to expel the baby at the end of pregnancy . And the events of childbirth , once begun , must progress rapidly to a conclusion or the life of the mother and the baby are at risk . 
The extreme muscular work of labor and delivery are the result of a positive feedback system ( Figure 1.11 ) .", "hl_sentences": "Childbirth and the body ’ s response to blood loss are two examples of positive feedback loops that are normal but are activated only when needed .", "question": { "cloze_format": "___ is an example of a normal physiologic process that uses a positive feedback loop.", "normal_format": "Which of the following is an example of a normal physiologic process that uses a positive feedback loop?", "question_choices": [ "blood pressure regulation", "childbirth", "regulation of fluid balance", "temperature regulation" ], "question_id": "fs-id1284438", "question_text": "Which of the following is an example of a normal physiologic process that uses a positive feedback loop?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "None of the above" }, "bloom": null, "hl_context": "To further increase precision , anatomists standardize the way in which they view the body . <hl> Just as maps are normally oriented with north at the top , the standard body “ map , ” or anatomical position , is that of the body standing upright , with the feet at shoulder width and parallel , toes forward . <hl> <hl> The upper limbs are held out to each side , and the palms of the hands face forward as illustrated in Figure 1.12 . <hl> Using this standard position reduces confusion . It does not matter how the body being described is oriented , the terms are used as if it is in anatomical position . For example , a scar in the “ anterior ( front ) carpal ( wrist ) region ” would be present on the palm side of the wrist . The term “ anterior ” would be used even if the hand were palm down on a table .", "hl_sentences": "Just as maps are normally oriented with north at the top , the standard body “ map , ” or anatomical position , is that of the body standing upright , with the feet at shoulder width and parallel , toes forward . The upper limbs are held out to each side , and the palms of the hands face forward as illustrated in Figure 1.12 .", "question": { "cloze_format": "When the body is in the “normal anatomical position?”, ___ . ", "normal_format": "What is the position of the body when it is in the “normal anatomical position?”", "question_choices": [ "The person is prone with upper limbs, including palms, touching sides and lower limbs touching at sides.", "The person is standing facing the observer, with upper limbs extended out at a ninety-degree angle from the torso and lower limbs in a wide stance with feet pointing laterally", "The person is supine with upper limbs, including palms, touching sides and lower limbs touching at sides.", "None of the above" ], "question_id": "fs-id1488272", "question_text": "What is the position of the body when it is in the “normal anatomical position?”" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "midsagittal plane" }, "bloom": null, "hl_context": "The sagittal plane is the plane that divides the body or an organ vertically into right and left sides . <hl> If this vertical plane runs directly down the middle of the body , it is called the midsagittal or median plane . 
<hl> If it divides the body into unequal right and left sides , it is called a parasagittal plane or less commonly a longitudinal section .", "hl_sentences": "If this vertical plane runs directly down the middle of the body , it is called the midsagittal or median plane .", "question": { "cloze_format": "To make a banana split, you halve a banana into two long, thin, right and left sides along the ________.", "normal_format": "To make a banana split, you halve a banana into two long, thin, right and left sides along which of the following?", "question_choices": [ "coronal plane", "longitudinal plane", "midsagittal plane", "transverse plane" ], "question_id": "fs-id2697370", "question_text": "To make a banana split, you halve a banana into two long, thin, right and left sides along the ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "mediastinum" }, "bloom": "1", "hl_context": "The anterior ( ventral ) cavity has two main subdivisions : the thoracic cavity and the abdominopelvic cavity ( see Figure 1.15 ) . The thoracic cavity is the more superior subdivision of the anterior cavity , and it is enclosed by the rib cage . <hl> The thoracic cavity contains the lungs and the heart , which is located in the mediastinum . <hl> The diaphragm forms the floor of the thoracic cavity and separates it from the more inferior abdominopelvic cavity . The abdominopelvic cavity is the largest cavity in the body . Although no membrane physically divides the abdominopelvic cavity , it can be useful to distinguish between the abdominal cavity , the division that houses the digestive organs , and the pelvic cavity , the division that houses the organs of reproduction .", "hl_sentences": "The thoracic cavity contains the lungs and the heart , which is located in the mediastinum .", "question": { "cloze_format": "The heart is within the ________.", "normal_format": "The heart is within which of the following? ", "question_choices": [ "cranial cavity", "mediastinum", "posterior (dorsal) cavity", "All of the above" ], "question_id": "fs-id2433961", "question_text": "The heart is within the ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "X-rays" }, "bloom": null, "hl_context": "German physicist Wilhelm Röntgen ( 1845 – 1923 ) was experimenting with electrical current when he discovered that a mysterious and invisible “ ray ” would pass through his flesh but leave an outline of his bones on a screen coated with a metal compound . In 1895 , Röntgen made the first durable record of the internal parts of a living human : an “ X-ray ” image ( as it came to be called ) of his wife ’ s hand . <hl> Scientists around the world quickly began their own experiments with X-rays , and by 1900 , X-rays were widely used to detect a variety of injuries and diseases . <hl> <hl> In 1901 , Röntgen was awarded the first Nobel Prize for physics for his work in this field . <hl> The X-ray is a form of high energy electromagnetic radiation with a short wavelength capable of penetrating solids and ionizing gases . As they are used in medicine , X-rays are emitted from an X-ray machine and directed toward a specially treated metallic plate placed behind the patient ’ s body . The beam of radiation results in darkening of the X-ray plate . X-rays are slightly impeded by soft tissues , which show up as gray on the X-ray plate , whereas hard tissues , such as bone , largely block the rays , producing a light-toned “ shadow . 
” Thus , X-rays are best used to visualize hard body structures such as teeth and bones ( Figure 1.18 ) . Like many forms of high energy radiation , however , X-rays are capable of damaging cells and initiating changes that can lead to cancer . This danger of excessive exposure to X-rays was not fully appreciated for many years after their widespread use . Refinements and enhancements of X-ray techniques have continued throughout the twentieth and twenty-first centuries . Although often supplanted by more sophisticated imaging techniques , the X-ray remains a “ workhorse ” in medical imaging , especially for viewing fractures and for dentistry . The disadvantage of irradiation to the patient and the operator is now attenuated by proper shielding and by limiting exposure .", "hl_sentences": "Scientists around the world quickly began their own experiments with X-rays , and by 1900 , X-rays were widely used to detect a variety of injuries and diseases . In 1901 , Röntgen was awarded the first Nobel Prize for physics for his work in this field .", "question": { "cloze_format": "In 1901, Wilhelm Röntgen was the first person to win the Nobel Prize for physics. The discovery he won for was ___.", "normal_format": "In 1901, Wilhelm Röntgen was the first person to win the Nobel Prize for physics. For what discovery did he win?", "question_choices": [ "nuclear physics", "radiopharmaceuticals", "the link between radiation and cancer", "X-rays" ], "question_id": "fs-id22465801", "question_text": "In 1901, Wilhelm Röntgen was the first person to win the Nobel Prize for physics. For what discovery did he win?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "PET" }, "bloom": "3", "hl_context": "<hl> Positron emission tomography ( PET ) is a medical imaging technique involving the use of so-called radiopharmaceuticals , substances that emit radiation that is short-lived and therefore relatively safe to administer to the body . <hl> Although the first PET scanner was introduced in 1961 , it took 15 more years before radiopharmaceuticals were combined with the technique and revolutionized its potential . <hl> The main advantage is that PET ( see Figure 1.19 c ) can illustrate physiologic activity — including nutrient metabolism and blood flow — of the organ or organs being targeted , whereas CT and MRI scans can only show static images . <hl> <hl> PET is widely used to diagnose a multitude of conditions , such as heart disease , the spread of cancer , certain forms of infection , brain abnormalities , bone disease , and thyroid disease . <hl>", "hl_sentences": "Positron emission tomography ( PET ) is a medical imaging technique involving the use of so-called radiopharmaceuticals , substances that emit radiation that is short-lived and therefore relatively safe to administer to the body . The main advantage is that PET ( see Figure 1.19 c ) can illustrate physiologic activity — including nutrient metabolism and blood flow — of the organ or organs being targeted , whereas CT and MRI scans can only show static images . 
PET is widely used to diagnose a multitude of conditions , such as heart disease , the spread of cancer , certain forms of infection , brain abnormalities , bone disease , and thyroid disease .", "question": { "cloze_format": "The ___ imaging technique would be best to use to study the uptake of nutrients by rapidly multiplying cancer cells.", "normal_format": "Which of the following imaging techniques would be best to use to study the uptake of nutrients by rapidly multiplying cancer cells?", "question_choices": [ "CT", "MRI", "PET", "ultrasonography" ], "question_id": "fs-id2607216", "question_text": "Which of the following imaging techniques would be best to use to study the uptake of nutrients by rapidly multiplying cancer cells?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "ultrasounds" }, "bloom": "1", "hl_context": "Ultrasonography is an imaging technique that uses the transmission of high-frequency sound waves into the body to generate an echo signal that is converted by a computer into a real-time image of anatomy and physiology ( see Figure 1.19 d ) . <hl> Ultrasonography is the least invasive of all imaging techniques , and it is therefore used more freely in sensitive situations such as pregnancy . <hl> The technology was first developed in the 1940s and 1950s . Ultrasonography is used to study heart function , blood flow in the neck or extremities , certain conditions such as gallbladder disease , and fetal growth and development . The main disadvantages of ultrasonography are that the image quality is heavily operator-dependent and that it is unable to penetrate bone and gas .", "hl_sentences": "Ultrasonography is the least invasive of all imaging techniques , and it is therefore used more freely in sensitive situations such as pregnancy .", "question": { "cloze_format": "The imaging study that can be used most safely during pregnancy is___.", "normal_format": "Which of the following imaging studies can be used most safely during pregnancy?", "question_choices": [ "CT scans", "PET scans", "ultrasounds", "X-rays" ], "question_id": "fs-id1227190", "question_text": "Which of the following imaging studies can be used most safely during pregnancy?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "high cost and the need for shielding from the magnetic signals" }, "bloom": "1", "hl_context": "<hl> Drawbacks of MRI scans include their much higher cost , and patient discomfort with the procedure . <hl> <hl> The MRI scanner subjects the patient to such powerful electromagnets that the scan room must be shielded . <hl> The patient must be enclosed in a metal tube-like device for the duration of the scan ( see Figure 1.19 b ) , sometimes as long as thirty minutes , which can be uncomfortable and impractical for ill patients . The device is also so noisy that , even with earplugs , patients can become anxious or even fearful . These problems have been overcome somewhat with the development of “ open ” MRI scanning , which does not require the patient to be entirely enclosed in the metal tube . Patients with iron-containing metallic implants ( internal sutures , some prosthetic devices , and so on ) cannot undergo MRI scanning because it can dislodge these implants .", "hl_sentences": "Drawbacks of MRI scans include their much higher cost , and patient discomfort with the procedure . 
The MRI scanner subjects the patient to such powerful electromagnets that the scan room must be shielded .", "question": { "cloze_format": "Two major disadvantages of MRI scans (are) ___.", "normal_format": "What are two major disadvantages of MRI scans?", "question_choices": [ "release of radiation and poor quality images", "high cost and the need for shielding from the magnetic signals", "can only view metabolically active tissues and inadequate availability of equipment", "release of radiation and the need for a patient to be confined to metal tube for up to 30 minutes" ], "question_id": "fs-id1435474", "question_text": "What are two major disadvantages of MRI scans?" }, "references_are_paraphrase": null } ]
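Several of the review questions above hinge on the three components of a negative feedback system quoted from the chapter: a sensor (receptor) that reports a physiological value, a control center that compares that value to the normal range, and an effector that reverses deviations. As a purely illustrative aid, and not part of the chapter itself, the loop can be sketched as a small Python simulation; the set point, tolerance, and step sizes are invented numbers:

```python
# Toy negative feedback loop: sensor -> control center -> effector.
# All numeric values are invented for illustration; temperature in Celsius.

SET_POINT = 37.0
TOLERANCE = 0.5  # the "normal range" is SET_POINT +/- TOLERANCE

def sensor(body_temp):
    """A sensor (receptor) simply reports the monitored value."""
    return body_temp

def control_center(value):
    """Compare the reported value to the normal range; pick a response."""
    if value > SET_POINT + TOLERANCE:
        return "cool"   # e.g., sweating, dilation of skin blood vessels
    if value < SET_POINT - TOLERANCE:
        return "warm"   # e.g., shivering, breakdown of stored energy
    return None         # within range: no effector activated

def effector(body_temp, command):
    """An effector nudges the value back toward the normal range."""
    if command == "cool":
        return body_temp - 0.3
    if command == "warm":
        return body_temp + 0.3
    return body_temp

temp = 38.4  # start above the normal range
for step in range(6):
    command = control_center(sensor(temp))
    temp = effector(temp, command)
    print(f"step {step}: temp={temp:.1f}, response={command}")
```

Note that the effector runs only while the value lies outside the normal range; once the value re-enters the range, the loop holds steady. That damping behavior is what distinguishes negative feedback from positive feedback, which intensifies change until a definite end point such as childbirth.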
1
1.1 Overview of Anatomy and Physiology
Learning Objectives
By the end of this section, you will be able to:
- Compare and contrast anatomy and physiology, including their specializations and methods of study
- Discuss the fundamental relationship between anatomy and physiology

Human anatomy is the scientific study of the body’s structures. Some of these structures are very small and can be observed and analyzed only with the assistance of a microscope. Other larger structures can readily be seen, manipulated, measured, and weighed. The word “anatomy” comes from a Greek root that means “to cut apart.” Human anatomy was first studied by observing the exterior of the body and observing the wounds of soldiers and others who were injured. Later, physicians were allowed to dissect bodies of the dead to augment their knowledge. When a body is dissected, its structures are cut apart in order to observe their physical attributes and their relationships to one another. Dissection is still used in medical schools, anatomy courses, and pathology labs. In order to observe structures in living people, however, a number of imaging techniques have been developed. These techniques allow clinicians to visualize structures inside the living body, such as a cancerous tumor or a fractured bone.

Like most scientific disciplines, anatomy has areas of specialization. Gross anatomy is the study of the larger structures of the body, those visible without the aid of magnification (Figure 1.2a). Macro- means “large”; thus, gross anatomy is also referred to as macroscopic anatomy. In contrast, micro- means “small,” and microscopic anatomy is the study of structures that can be observed only with the use of a microscope or other magnification devices (Figure 1.2b). Microscopic anatomy includes cytology, the study of cells, and histology, the study of tissues. As the technology of microscopes has advanced, anatomists have been able to observe smaller and smaller structures of the body, from slices of large structures like the heart, to the three-dimensional structures of large molecules in the body.

Anatomists take two general approaches to the study of the body’s structures: regional and systemic. Regional anatomy is the study of the interrelationships of all of the structures in a specific body region, such as the abdomen. Studying regional anatomy helps us appreciate the interrelationships of body structures, such as how muscles, nerves, blood vessels, and other structures work together to serve a particular body region. In contrast, systemic anatomy is the study of the structures that make up a discrete body system—that is, a group of structures that work together to perform a unique body function. For example, a systemic anatomical study of the muscular system would consider all of the skeletal muscles of the body.

Whereas anatomy is about structure, physiology is about function. Human physiology is the scientific study of the chemistry and physics of the structures of the body and the ways in which they work together to support the functions of life. Much of the study of physiology centers on the body’s tendency toward homeostasis. Homeostasis is the state of steady internal conditions maintained by living things. The study of physiology certainly includes observation, both with the naked eye and with microscopes, as well as manipulations and measurements.
However, current advances in physiology usually depend on carefully designed laboratory experiments that reveal the functions of the many structures and chemical compounds that make up the human body. Like anatomists, physiologists typically specialize in a particular branch of physiology. For example, neurophysiology is the study of the brain, spinal cord, and nerves and how these work together to perform functions as complex and diverse as vision, movement, and thinking. Physiologists may work from the organ level (exploring, for example, what different parts of the brain do) to the molecular level (such as exploring how an electrochemical signal travels along nerves).

Form is closely related to function in all living things. For example, the thin flap of your eyelid can snap down to clear away dust particles and almost instantaneously slide back up to allow you to see again. At the microscopic level, the arrangement and function of the nerves and muscles that serve the eyelid allow for its quick action and retreat. At a smaller level of analysis, the function of these nerves and muscles likewise relies on the interactions of specific molecules and ions. Even the three-dimensional structure of certain molecules is essential to their function.

Your study of anatomy and physiology will make more sense if you continually relate the form of the structures you are studying to their function. In fact, it can be somewhat frustrating to attempt to study anatomy without an understanding of the physiology that a body structure supports. Imagine, for example, trying to appreciate the unique arrangement of the bones of the human hand if you had no conception of the function of the hand. Fortunately, your understanding of how the human hand manipulates tools—from pens to cell phones—helps you appreciate the unique alignment of the thumb in opposition to the four fingers, making your hand a structure that allows you to pinch and grasp objects and type text messages.

1.2 Structural Organization of the Human Body
Learning Objectives
By the end of this section, you will be able to:
- Describe the structure of the human body in terms of six levels of organization
- List the eleven organ systems of the human body and identify at least one organ and one major function of each

Before you begin to study the different structures and functions of the human body, it is helpful to consider its basic architecture; that is, how its smallest parts are assembled into larger structures. It is convenient to consider the structures of the body in terms of fundamental levels of organization that increase in complexity: subatomic particles, atoms, molecules, organelles, cells, tissues, organs, organ systems, organisms, and biosphere (Figure 1.3).

The Levels of Organization
To study the chemical level of organization, scientists consider the simplest building blocks of matter: subatomic particles, atoms, and molecules. All matter in the universe is composed of one or more unique pure substances called elements, familiar examples of which are hydrogen, oxygen, carbon, nitrogen, calcium, and iron. The smallest unit of any of these pure substances (elements) is an atom. Atoms are made up of subatomic particles such as the proton, electron, and neutron. Two or more atoms combine to form a molecule, such as the water molecules, proteins, and sugars found in living things. Molecules are the chemical building blocks of all body structures. A cell is the smallest independently functioning unit of a living organism.
Even bacteria, which are extremely small, independently living organisms, have a cellular structure. Each bacterium is a single cell. All living structures of human anatomy contain cells, and almost all functions of human physiology are performed in cells or are initiated by cells.

A human cell typically consists of flexible membranes that enclose cytoplasm, a water-based cellular fluid, together with a variety of tiny functioning units called organelles. In humans, as in all organisms, cells perform all functions of life. A tissue is a group of many similar cells (though sometimes composed of a few related types) that work together to perform a specific function. An organ is an anatomically distinct structure of the body composed of two or more tissue types. Each organ performs one or more specific physiological functions. An organ system is a group of organs that work together to perform major functions or meet physiological needs of the body. This book covers eleven distinct organ systems in the human body (Figure 1.4 and Figure 1.5). Assigning organs to organ systems can be imprecise, since organs that “belong” to one system can also have functions integral to another system. In fact, most organs contribute to more than one system.

The organism level is the highest level of organization. An organism is a living being that has a cellular structure and that can independently perform all physiologic functions necessary for life. In multicellular organisms, including humans, all cells, tissues, organs, and organ systems of the body work together to maintain the life and health of the organism. (This compositional nesting is sketched in code below.)

1.3 Functions of Human Life
Learning Objectives
By the end of this section, you will be able to:
- Explain the importance of organization to the function of the human organism
- Distinguish between metabolism, anabolism, and catabolism
- Provide at least two examples of human responsiveness and human movement
- Compare and contrast growth, differentiation, and reproduction

The different organ systems each have different functions and therefore unique roles to perform in physiology. These many functions can be summarized in terms of a few that we might consider definitive of human life: organization, metabolism, responsiveness, movement, development, and reproduction.

Organization
A human body consists of trillions of cells organized in a way that maintains distinct internal compartments. These compartments keep body cells separated from external environmental threats and keep the cells moist and nourished. They also separate internal body fluids from the countless microorganisms that grow on body surfaces, including the lining of certain passageways that connect to the outer surface of the body. The intestinal tract, for example, is home to more bacterial cells than the total of all human cells in the body, yet these bacteria are outside the body and cannot be allowed to circulate freely inside the body. Cells, for example, have a cell membrane (also referred to as the plasma membrane) that keeps the intracellular environment—the fluids and organelles—separate from the extracellular environment. Blood vessels keep blood inside a closed circulatory system, and nerves and muscles are wrapped in connective tissue sheaths that separate them from surrounding structures. In the chest and abdomen, a variety of internal membranes keep major organs such as the lungs, heart, and kidneys separate from others.
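The compositional hierarchy described above (cells form tissues, tissues form organs, organs form organ systems) can be pictured as nested containers. The Python sketch below is only an illustration of that nesting; the example names are ours, and the model is deliberately oversimplified, since real organs contribute to more than one system:

```python
# Illustrative nesting of the levels of organization described above:
# cells -> tissues -> organs -> organ systems -> organism.

class Tissue:
    def __init__(self, name):
        self.name = name  # a tissue is a group of similar cells

class Organ:
    def __init__(self, name, tissues):
        self.name = name
        self.tissues = tissues  # two or more tissue types

class OrganSystem:
    def __init__(self, name, organs):
        self.name = name
        self.organs = organs  # organs working together on major functions

heart = Organ("heart", [Tissue("cardiac muscle"), Tissue("connective tissue")])
cardiovascular = OrganSystem("cardiovascular system", [heart])

for organ in cardiovascular.organs:
    print(organ.name, "->", [t.name for t in organ.tissues])
# prints: heart -> ['cardiac muscle', 'connective tissue']
```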
1.3 Functions of Human Life Learning Objectives By the end of this section, you will be able to: Explain the importance of organization to the function of the human organism Distinguish between metabolism, anabolism, and catabolism Provide at least two examples of human responsiveness and human movement Compare and contrast growth, differentiation, and reproduction The different organ systems each have different functions and therefore unique roles to perform in physiology. These many functions can be summarized in terms of a few that we might consider definitive of human life: organization, metabolism, responsiveness, movement, development, and reproduction. Organization A human body consists of trillions of cells organized in a way that maintains distinct internal compartments. These compartments keep body cells separated from external environmental threats and keep the cells moist and nourished. They also separate internal body fluids from the countless microorganisms that grow on body surfaces, including the lining of certain passageways that connect to the outer surface of the body. The intestinal tract, for example, is home to more bacterial cells than the total of all human cells in the body, yet these bacteria are outside the body and cannot be allowed to circulate freely inside the body. Cells, for example, have a cell membrane (also referred to as the plasma membrane) that keeps the intracellular environment—the fluids and organelles—separate from the extracellular environment. Blood vessels keep blood inside a closed circulatory system, and nerves and muscles are wrapped in connective tissue sheaths that separate them from surrounding structures. In the chest and abdomen, a variety of internal membranes keep major organs such as the lungs, heart, and kidneys separate from others. The body’s largest organ system is the integumentary system, which includes the skin and its associated structures, such as hair and nails. The surface tissue of skin is a barrier that protects internal structures and fluids from potentially harmful microorganisms and other toxins. Metabolism The first law of thermodynamics holds that energy can neither be created nor destroyed—it can only change form. Your basic function as an organism is to consume (ingest) energy and molecules in the foods you eat, convert some of it into fuel for movement, sustain your body functions, and build and maintain your body structures. There are two types of reactions that accomplish this: anabolism and catabolism. Anabolism is the process whereby smaller, simpler molecules are combined into larger, more complex substances. Your body can assemble, by utilizing energy, the complex chemicals it needs by combining small molecules derived from the foods you eat. Catabolism is the process by which larger, more complex substances are broken down into smaller, simpler molecules. Catabolism releases energy. The complex molecules found in foods are broken down so the body can use their parts to assemble the structures and substances needed for life. Taken together, these two processes are called metabolism. Metabolism is the sum of all anabolic and catabolic reactions that take place in the body (Figure 1.6). Both anabolism and catabolism occur simultaneously and continuously to keep you alive. Every cell in your body makes use of a chemical compound, adenosine triphosphate (ATP), to store and release energy. The cell stores energy in the synthesis (anabolism) of ATP, then moves the ATP molecules to the location where energy is needed to fuel cellular activities. Then the ATP is broken down (catabolism) and a controlled amount of energy is released, which is used by the cell to perform a particular job. Interactive Link View this animation to learn more about metabolic processes. Which organs of the body likely carry out anabolic processes? What about catabolic processes? Responsiveness Responsiveness is the ability of an organism to adjust to changes in its internal and external environments. An example of responsiveness to external stimuli is moving toward sources of food and water and away from perceived dangers. Changes in an organism’s internal environment, such as increased body temperature, can cause the responses of sweating and the dilation of blood vessels in the skin in order to decrease body temperature, as shown by the runners in Figure 1.7. Movement Human movement includes not only actions at the joints of the body, but also the motion of individual organs and even individual cells. As you read these words, red and white blood cells are moving throughout your body, muscle cells are contracting and relaxing to maintain your posture and to focus your vision, and glands are secreting chemicals to regulate body functions. Your body is coordinating the action of entire muscle groups to enable you to move air into and out of your lungs, to push blood throughout your body, and to propel the food you have eaten through your digestive tract. Consciously, of course, you contract your skeletal muscles to move the bones of your skeleton to get from one place to another (as the runners are doing in Figure 1.7), and to carry out all of the activities of your daily life. Development, Growth, and Reproduction Development is all of the changes the body goes through in life.
Development includes the process of differentiation, in which unspecialized cells become specialized in structure and function to perform certain tasks in the body. Development also includes the processes of growth and repair, both of which involve cell differentiation. Growth is the increase in body size. Humans, like all multicellular organisms, grow by increasing the number of existing cells, increasing the amount of non-cellular material around cells (such as mineral deposits in bone), and, within very narrow limits, increasing the size of existing cells. Reproduction is the formation of a new organism from parent organisms. In humans, reproduction is carried out by the male and female reproductive systems. Because death will come to all complex organisms, the line of organisms would end without reproduction. 1.4 Requirements for Human Life Learning Objectives By the end of this section, you will be able to: Discuss the role of oxygen and nutrients in maintaining human survival Explain why extreme heat and extreme cold threaten human survival Explain how the pressure exerted by gases and fluids influences human survival Humans have been adapting to life on Earth for at least the past 200,000 years. Earth and its atmosphere have provided us with air to breathe, water to drink, and food to eat, but these are not the only requirements for survival. Although you may rarely think about it, you also cannot live outside of a certain range of temperature and pressure that the surface of our planet and its atmosphere provides. The next sections explore these four requirements of life. Oxygen Atmospheric air is only about 20 percent oxygen, but that oxygen is a key component of the chemical reactions that keep the body alive, including the reactions that produce ATP. Brain cells are especially sensitive to lack of oxygen because of their requirement for a high and steady production of ATP. Brain damage is likely within five minutes without oxygen, and death is likely within ten minutes. Nutrients A nutrient is a substance in foods and beverages that is essential to human survival. The three basic classes of nutrients are water, the energy-yielding and body-building nutrients, and the micronutrients (vitamins and minerals). The most critical nutrient is water. Depending on the environmental temperature and our state of health, we may be able to survive for only a few days without water. The body’s functional chemicals are dissolved and transported in water, and the chemical reactions of life take place in water. Moreover, water is the largest component of cells, blood, and the fluid between cells, and water makes up about 70 percent of an adult’s body mass. Water also helps regulate our internal temperature and cushions, protects, and lubricates joints and many other body structures. The energy-yielding nutrients are primarily carbohydrates and lipids, while proteins mainly supply the amino acids that are the building blocks of the body itself. You ingest these in plant and animal foods and beverages, and the digestive system breaks them down into molecules small enough to be absorbed. The breakdown products of carbohydrates and lipids can then be used in the metabolic processes that convert them to ATP. Although you might feel as if you are starving after missing a single meal, you can survive without consuming the energy-yielding nutrients for at least several weeks. Water and the energy-yielding nutrients are also referred to as macronutrients because the body needs them in large amounts.
In contrast, micronutrients are vitamins and minerals. These elements and compounds participate in many essential chemical reactions and processes, such as nerve impulses, and some, such as calcium, also contribute to the body’s structure. Your body can store some of the micronutrients in its tissues, and draw on those reserves if you fail to consume them in your diet for a few days or weeks. Some other micronutrients, such as vitamin C and most of the B vitamins, are water-soluble and cannot be stored, so you need to consume them every day or two. Narrow Range of Temperature You have probably seen news stories about athletes who died of heat stroke, or hikers who died of exposure to cold. Such deaths occur because the chemical reactions upon which the body depends can only take place within a narrow range of body temperature, from just below to just above 37°C (98.6°F). When body temperature rises well above or drops well below normal, certain proteins (enzymes) that facilitate chemical reactions lose their normal structure and their ability to function, and the chemical reactions of metabolism cannot proceed. That said, the body can respond effectively to short-term exposure to heat (Figure 1.8) or cold. One of the body’s responses to heat is, of course, sweating. As sweat evaporates from skin, it removes some thermal energy from the body, cooling it. Adequate water (from the extracellular fluid in the body) is necessary to produce sweat, so adequate fluid intake is essential to balance that loss during the sweat response. Not surprisingly, the sweat response is much less effective in a humid environment because the air is already saturated with water. Thus, the sweat on the skin’s surface is not able to evaporate, and internal body temperature can get dangerously high. The body can also respond effectively to short-term exposure to cold. One response to cold is shivering, which is random muscle movement that generates heat. Another response is increased breakdown of stored energy to generate heat. When that energy reserve is depleted, however, and the core temperature begins to drop significantly, red blood cells will lose their ability to give up oxygen, depriving the brain of this critical component of ATP production. This lack of oxygen can cause confusion, lethargy, and eventually loss of consciousness and death. The body responds to cold by reducing blood circulation to the extremities, the hands and feet, in order to prevent blood from cooling there and so that the body’s core can stay warm. Even when core body temperature remains stable, however, tissues exposed to severe cold, especially the fingers and toes, can develop frostbite when blood flow to the extremities has been much reduced. This form of tissue damage can be permanent and lead to gangrene, requiring amputation of the affected region. Everyday Connection Controlled Hypothermia As you have learned, the body continuously engages in coordinated physiological processes to maintain a stable temperature. In some cases, however, overriding this system can be useful, or even life-saving. Hypothermia is the clinical term for an abnormally low body temperature (hypo- = “below” or “under”). Controlled hypothermia is clinically induced hypothermia performed in order to reduce the metabolic rate of an organ or of a person’s entire body. Controlled hypothermia often is used, for example, during open-heart surgery because it decreases the metabolic needs of the brain, heart, and other organs, reducing the risk of damage to them.
When controlled hypothermia is used clinically, the patient is given medication to prevent shivering. The body is then cooled to 25–32°C (77–90°F). The heart is stopped and an external heart-lung pump maintains circulation to the patient’s body. The heart is cooled further and is maintained at a temperature below 15°C (59°F) for the duration of the surgery. This very cold temperature helps the heart muscle to tolerate its lack of blood supply during the surgery. Some emergency department physicians use controlled hypothermia to reduce damage to the heart in patients who have suffered a cardiac arrest. In the emergency department, the physician induces coma and lowers the patient’s body temperature to approximately 91°F (33°C). This condition, which is maintained for 24 hours, slows the patient’s metabolic rate. Because the patient’s organs require less blood to function, the heart’s workload is reduced. Narrow Range of Atmospheric Pressure Pressure is a force exerted by a substance that is in contact with another substance. Atmospheric pressure is pressure exerted by the mixture of gases (primarily nitrogen and oxygen) in the Earth’s atmosphere. Although you may not perceive it, atmospheric pressure is constantly pressing down on your body. This pressure keeps gases within your body, such as the gaseous nitrogen in body fluids, dissolved. If you were suddenly ejected from a spaceship above Earth’s atmosphere, you would go from a situation of normal pressure to one of very low pressure. The pressure of the nitrogen gas in your blood would be much higher than the pressure of nitrogen in the space surrounding your body. As a result, the nitrogen gas in your blood would expand, forming bubbles that could block blood vessels and even cause cells to break apart. Atmospheric pressure does more than just keep blood gases dissolved. Your ability to breathe—that is, to take in oxygen and release carbon dioxide—also depends upon a precise atmospheric pressure. Altitude sickness occurs in part because the atmosphere at high altitudes exerts less pressure, reducing the exchange of these gases and causing shortness of breath, confusion, headache, lethargy, and nausea. Mountain climbers carry oxygen to reduce the effects of both low oxygen levels and low barometric pressure at higher altitudes (Figure 1.9). Homeostatic Imbalances Decompression Sickness Decompression sickness (DCS) is a condition in which gases dissolved in the blood or in other body tissues are no longer dissolved following a reduction in pressure on the body. This condition affects underwater divers who surface from a deep dive too quickly, and it can affect pilots flying at high altitudes in planes with unpressurized cabins. Divers often call this condition “the bends,” a reference to the joint pain that is a symptom of DCS. In all cases, DCS is brought about by a reduction in barometric pressure. At high altitude, barometric pressure is much less than on Earth’s surface because pressure is produced by the weight of the column of air above the body pressing down on the body. The very great pressures on divers in deep water are likewise from the weight of a column of water pressing down on the body. For divers, DCS occurs at normal barometric pressure (at sea level), but it is brought on by the relatively rapid decrease of pressure as divers rise from the high-pressure conditions of deep water to the comparatively low pressure at sea level.
Not surprisingly, diving in deep mountain lakes, where barometric pressure at the surface of the lake is less than that at sea level, is more likely to result in DCS than diving in water at sea level. In DCS, gases dissolved in the blood (primarily nitrogen) come rapidly out of solution, forming bubbles in the blood and in other body tissues. This occurs because when the pressure of a gas over a liquid is decreased, the amount of gas that can remain dissolved in the liquid also is decreased. It is air pressure that keeps your normal blood gases dissolved in the blood. When pressure is reduced, less gas remains dissolved. You have seen this effect when opening a carbonated drink. Removing the seal of the bottle reduces the pressure of the gas over the liquid. This in turn causes bubbles as dissolved gases (in this case, carbon dioxide) come out of solution in the liquid. The most common symptom of DCS is pain in the joints, with headache and disturbances of vision occurring in 10 percent to 15 percent of cases. Left untreated, very severe DCS can result in death. Immediate treatment is with pure oxygen. The affected person is then moved into a hyperbaric chamber. A hyperbaric chamber is a reinforced, closed chamber that is pressurized to greater than atmospheric pressure. It treats DCS by repressurizing the body so that pressure can then be removed much more gradually. Because the hyperbaric chamber introduces oxygen to the body at high pressure, it increases the concentration of oxygen in the blood. This has the effect of replacing some of the nitrogen in the blood with oxygen, which is easier to tolerate out of solution.
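The pressure–solubility relationship that drives DCS can be stated compactly as Henry’s law: the amount of a gas that stays dissolved in a liquid is proportional to the pressure of that gas above the liquid. The Python sketch below is a back-of-the-envelope illustration, not a dive-planning tool; the proportionality constant and the "units" are arbitrary, and the rule of thumb that each 10 m of seawater adds about 1 atm is an approximation.

```python
# Illustrative sketch of Henry's law: dissolved gas is proportional to the
# ambient pressure of that gas. The constant is an arbitrary teaching value.

K_HENRY = 10.0  # arbitrary "units of dissolved nitrogen per atm"

def dissolved_nitrogen(pressure_atm):
    """Nitrogen held in solution at a given ambient pressure (arbitrary units)."""
    return K_HENRY * pressure_atm

# Roughly 1 extra atmosphere of pressure per 10 m of seawater.
for depth_m in (0, 10, 30):
    pressure = 1.0 + depth_m / 10.0
    print(f"at {depth_m:>2} m: ~{dissolved_nitrogen(pressure):.0f} units of N2 stay dissolved")

# A diver ascending quickly from 30 m (4 atm) to the surface (1 atm) carries
# about four times the surface nitrogen load; the excess comes out of solution
# as bubbles, the same effect as opening the carbonated drink described above.
```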
The dynamic pressure of body fluids is also important to human survival. For example, blood pressure, which is the pressure exerted by blood as it flows within blood vessels, must be great enough to enable blood to reach all body tissues, and yet low enough to ensure that the delicate blood vessels can withstand the friction and force of the pulsating flow of pressurized blood. 1.5 Homeostasis Learning Objectives By the end of this section, you will be able to: Discuss the role of homeostasis in healthy functioning Contrast negative and positive feedback, giving one physiologic example of each mechanism Maintaining homeostasis requires that the body continuously monitor its internal conditions. From body temperature to blood pressure to levels of certain nutrients, each physiological condition has a particular set point. A set point is the physiological value around which the normal range fluctuates. A normal range is the restricted set of values that is optimally healthful and stable. For example, the set point for normal human body temperature is approximately 37°C (98.6°F). Physiological parameters, such as body temperature and blood pressure, tend to fluctuate within a normal range a few degrees above and below that point. Control centers in the brain and other parts of the body monitor and react to deviations from homeostasis using negative feedback. Negative feedback is a mechanism that reverses a deviation from the set point. Therefore, negative feedback maintains body parameters within their normal range. The maintenance of homeostasis by negative feedback goes on throughout the body at all times, and an understanding of negative feedback is thus fundamental to an understanding of human physiology. Negative Feedback A negative feedback system has three basic components (Figure 1.10a). A sensor, also referred to as a receptor, is a component of a feedback system that monitors a physiological value. This value is reported to the control center. The control center is the component in a feedback system that compares the value to the normal range. If the value deviates too much from the set point, then the control center activates an effector. An effector is the component in a feedback system that causes a change to reverse the situation and return the value to the normal range. In order to set the system in motion, a stimulus must drive a physiological parameter beyond its normal range (that is, beyond homeostasis). This stimulus is “heard” by a specific sensor. For example, in the control of blood glucose, specific endocrine cells in the pancreas detect excess glucose (the stimulus) in the bloodstream. These pancreatic beta cells respond to the increased level of blood glucose by releasing the hormone insulin into the bloodstream. The insulin signals skeletal muscle fibers, fat cells (adipocytes), and liver cells to take up the excess glucose, removing it from the bloodstream. As glucose concentration in the bloodstream drops, the decrease in concentration—the actual negative feedback—is detected by the pancreatic beta cells, and insulin release stops. This prevents blood sugar levels from continuing to drop below the normal range. Humans have a similar temperature regulation feedback system that works by promoting either heat loss or heat gain (Figure 1.10b). When the brain’s temperature regulation center receives data from the sensors indicating that the body’s temperature exceeds its normal range, it stimulates a cluster of brain cells referred to as the “heat-loss center.” This stimulation has three major effects:

- Blood vessels in the skin dilate, allowing more blood from the body core to flow to the surface of the skin, where the heat can radiate into the environment.
- As blood flow to the skin increases, sweat glands are activated to increase their output. As the sweat evaporates from the skin surface into the surrounding air, it takes heat with it.
- The depth of respiration increases, and a person may breathe through an open mouth instead of through the nasal passageways. This further increases heat loss from the lungs.

In contrast, activation of the brain’s heat-gain center by exposure to cold reduces blood flow to the skin, and blood returning from the limbs is diverted into a network of deep veins. This arrangement traps heat closer to the body core and restricts heat loss. If heat loss is severe, the brain triggers an increase in random signals to skeletal muscles, causing them to contract and producing shivering. The muscle contractions of shivering release heat while using up ATP. The brain triggers the thyroid gland in the endocrine system to release thyroid hormone, which increases metabolic activity and heat production in cells throughout the body. The brain also signals the adrenal glands to release epinephrine (adrenaline), a hormone that causes the breakdown of glycogen into glucose, which can be used as an energy source. The breakdown of glycogen into glucose also results in increased metabolism and heat production. Interactive Link Water concentration in the body is critical for proper functioning. A person’s body retains very tight control on water levels without conscious control by the person. Watch this video to learn more about water concentration in the body. Which organ has primary control over the amount of water in the body?
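Because a negative feedback loop is essentially a control algorithm, it can be summarized in a few lines of code. The sketch below is a toy model, not physiology: the 37°C set point comes from the text, but the tolerance, the 0.3-degree correction steps, and the function names are illustrative assumptions.

```python
# Toy model of the sensor -> control center -> effector loop for body
# temperature. Set point from the text; all other numbers are illustrative.

SET_POINT = 37.0   # degrees Celsius
TOLERANCE = 0.5    # assumed half-width of the normal range

def control_center(reading):
    """Compare the sensor's reading to the normal range; choose an effector."""
    if reading > SET_POINT + TOLERANCE:
        return "heat-loss"   # e.g., dilated skin vessels, sweating
    if reading < SET_POINT - TOLERANCE:
        return "heat-gain"   # e.g., shivering, reduced skin blood flow
    return "none"

def effector(temperature, action):
    """Nudge the value back toward the set point: the negative feedback."""
    if action == "heat-loss":
        return temperature - 0.3
    if action == "heat-gain":
        return temperature + 0.3
    return temperature

temperature = 39.0  # stimulus: a value driven above the normal range
action = control_center(temperature)
while action != "none":
    print(f"sensor reads {temperature:.1f} C -> {action} center active")
    temperature = effector(temperature, action)
    action = control_center(temperature)
print(f"{temperature:.1f} C is back in the normal range; effectors switch off")
```

Note the defining feature: the effector’s output feeds into the sensor’s next reading with a sign that opposes the original deviation, so the loop converges on the normal range instead of amplifying the change.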
Positive Feedback Positive feedback intensifies a change in the body’s physiological condition rather than reversing it. A deviation from the normal range results in more change, and the system moves farther away from the normal range. Positive feedback in the body is normal only when there is a definite end point. Childbirth and the body’s response to blood loss are two examples of positive feedback loops that are normal but are activated only when needed. Childbirth at full term is an example of a situation in which the maintenance of the existing body state is not desired. Enormous changes in the mother’s body are required to expel the baby at the end of pregnancy. The events of childbirth, once begun, must progress rapidly to a conclusion or the lives of the mother and the baby are at risk. The extreme muscular work of labor and delivery is the result of a positive feedback system (Figure 1.11). The first contractions of labor (the stimulus) push the baby toward the cervix (the lowest part of the uterus). The cervix contains stretch-sensitive nerve cells that monitor the degree of stretching (the sensors). These nerve cells send messages to the brain, which in turn causes the pituitary gland at the base of the brain to release the hormone oxytocin into the bloodstream. Oxytocin causes stronger contractions of the smooth muscles of the uterus (the effectors), pushing the baby further down the birth canal. This causes even greater stretching of the cervix. The cycle of stretching, oxytocin release, and increasingly more forceful contractions stops only when the baby is born. At this point, the stretching of the cervix halts, stopping the release of oxytocin. A second example of positive feedback centers on reversing extreme damage to the body. Following a penetrating wound, the most immediate threat is excessive blood loss. Less blood circulating means reduced blood pressure and reduced perfusion (penetration of blood) to the brain and other vital organs. If perfusion is severely reduced, vital organs will shut down and the person will die. The body responds to this potential catastrophe by releasing substances in the injured blood vessel wall that begin the process of blood clotting. As each step of clotting occurs, it stimulates the release of more clotting substances. This accelerates the processes of clotting and sealing off the damaged area. Clotting is contained in a local area based on the tightly controlled availability of clotting proteins. This is an adaptive, life-saving cascade of events.
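For contrast, the childbirth loop can be sketched the same way. Again this is a toy model with made-up numbers; the point it illustrates comes from the text: each pass through a positive feedback loop amplifies the deviation, and the loop stops only at a definite end point (birth).

```python
# Toy positive feedback loop modeled loosely on the oxytocin example above.
# All quantities and coefficients are arbitrary illustrative values.

stretch = 1.0           # cervical stretch detected by the sensors (arbitrary units)
BIRTH_THRESHOLD = 10.0  # the definite end point that terminates the loop

cycle = 0
while stretch < BIRTH_THRESHOLD:
    cycle += 1
    oxytocin = stretch * 0.5          # more stretch -> more oxytocin released
    stretch *= 1.0 + oxytocin * 0.4   # stronger contractions -> more stretch
    print(f"cycle {cycle}: stretch={stretch:.1f}, oxytocin={oxytocin:.1f}")

print("end point reached: the baby is born, stretching stops, oxytocin release stops")
```

Compare this with the negative feedback sketch: there the effector subtracted from the deviation, so the value converged; here each cycle multiplies the deviation, so the value runs away until an external event ends the loop.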
1.6 Anatomical Terminology Learning Objectives By the end of this section, you will be able to: Demonstrate the anatomical position Describe the human body using directional and regional terms Identify three planes most commonly used in the study of anatomy Distinguish between the posterior (dorsal) and the anterior (ventral) body cavities, identifying their subdivisions and representative organs found in each Describe serous membrane and explain its function Anatomists and health care providers use terminology that can be bewildering to the uninitiated. However, the purpose of this language is not to confuse, but rather to increase precision and reduce medical errors. For example, is a scar “above the wrist” located on the forearm two or three inches away from the hand? Or is it at the base of the hand? Is it on the palm-side or back-side? By using precise anatomical terminology, we eliminate ambiguity. Anatomical terms derive from ancient Greek and Latin words. Because these languages are no longer used in everyday conversation, the meaning of their words does not change. Anatomical terms are made up of roots, prefixes, and suffixes. The root of a term often refers to an organ, tissue, or condition, whereas the prefix or suffix often describes the root. For example, in the disorder hypertension, the prefix “hyper-” means “high” or “over,” and the root word “tension” refers to pressure, so the word “hypertension” refers to abnormally high blood pressure. Anatomical Position To further increase precision, anatomists standardize the way in which they view the body. Just as maps are normally oriented with north at the top, the standard body “map,” or anatomical position, is that of the body standing upright, with the feet at shoulder width and parallel, toes forward. The upper limbs are held out to each side, and the palms of the hands face forward as illustrated in Figure 1.12. Using this standard position reduces confusion. No matter how the body being described is oriented, the terms are used as if it were in anatomical position. For example, a scar in the “anterior (front) carpal (wrist) region” would be present on the palm side of the wrist. The term “anterior” would be used even if the hand were palm down on a table. A body that is lying down is described as either prone or supine. Prone describes a face-down orientation, and supine describes a face-up orientation. These terms are sometimes used in describing the position of the body during specific physical examinations or surgical procedures. Regional Terms The human body’s numerous regions have specific terms to help increase precision (see Figure 1.12). Notice that the term “brachium” or “arm” is reserved for the “upper arm” and “antebrachium” or “forearm” is used rather than “lower arm.” Similarly, “femur” or “thigh” is correct, and “leg” or “crus” is reserved for the portion of the lower limb between the knee and the ankle. You will be able to describe the body’s regions using the terms from the figure. Directional Terms Certain directional anatomical terms appear throughout this and any other anatomy textbook (Figure 1.13). These terms are essential for describing the relative locations of different body structures. For instance, an anatomist might describe one band of tissue as “inferior to” another or a physician might describe a tumor as “superficial to” a deeper body structure. Commit these terms to memory to avoid confusion when you are studying or describing the locations of particular body parts.

- Anterior (or ventral) describes the front or direction toward the front of the body. The toes are anterior to the foot.
- Posterior (or dorsal) describes the back or direction toward the back of the body. The popliteus is posterior to the patella.
- Superior (or cranial) describes a position above or higher than another part of the body proper. The orbits are superior to the oris.
- Inferior (or caudal) describes a position below or lower than another part of the body proper; near or toward the tail (in humans, the coccyx, or lowest part of the spinal column). The pelvis is inferior to the abdomen.
- Lateral describes the side or direction toward the side of the body. The thumb (pollex) is lateral to the digits.
- Medial describes the middle or direction toward the middle of the body. The hallux is the medial toe.
- Proximal describes a position in a limb that is nearer to the point of attachment or the trunk of the body. The brachium is proximal to the antebrachium.
- Distal describes a position in a limb that is farther from the point of attachment or the trunk of the body. The crus is distal to the femur.
- Superficial describes a position closer to the surface of the body. The skin is superficial to the bones.
- Deep describes a position farther from the surface of the body. The brain is deep to the skull.

Body Planes A section is a two-dimensional surface of a three-dimensional structure that has been cut. Modern medical imaging devices enable clinicians to obtain “virtual sections” of living bodies. We call these scans. Body sections and scans can be correctly interpreted, however, only if the viewer understands the plane along which the section was made. A plane is an imaginary two-dimensional surface that passes through the body. There are three planes commonly referred to in anatomy and medicine, as illustrated in Figure 1.14. The sagittal plane is the plane that divides the body or an organ vertically into right and left sides. If this vertical plane runs directly down the middle of the body, it is called the midsagittal or median plane. If it divides the body into unequal right and left sides, it is called a parasagittal plane or, less commonly, a longitudinal section. The frontal plane is the plane that divides the body or an organ into an anterior (front) portion and a posterior (rear) portion. The frontal plane is often referred to as a coronal plane. (“Corona” is Latin for “crown.”) The transverse plane is the plane that divides the body or organ horizontally into upper and lower portions. Transverse planes produce images referred to as cross sections. Body Cavities and Serous Membranes The body maintains its internal organization by means of membranes, sheaths, and other structures that separate compartments. The dorsal (posterior) cavity and the ventral (anterior) cavity are the largest body compartments (Figure 1.15). These cavities contain and protect delicate internal organs, and the ventral cavity allows for significant changes in the size and shape of the organs as they perform their functions. The lungs, heart, stomach, and intestines, for example, can expand and contract without distorting other tissues or disrupting the activity of nearby organs. Subdivisions of the Posterior (Dorsal) and Anterior (Ventral) Cavities The posterior (dorsal) and anterior (ventral) cavities are each subdivided into smaller cavities. In the posterior (dorsal) cavity, the cranial cavity houses the brain, and the spinal cavity (or vertebral cavity) encloses the spinal cord. Just as the brain and spinal cord make up a continuous, uninterrupted structure, the cranial and spinal cavities that house them are also continuous. The brain and spinal cord are protected by the bones of the skull and vertebral column and by cerebrospinal fluid, a colorless fluid produced by the brain, which cushions the brain and spinal cord within the posterior (dorsal) cavity. The anterior (ventral) cavity has two main subdivisions: the thoracic cavity and the abdominopelvic cavity (see Figure 1.15). The thoracic cavity is the more superior subdivision of the anterior cavity, and it is enclosed by the rib cage. The thoracic cavity contains the lungs and the heart, which is located in the mediastinum. The diaphragm forms the floor of the thoracic cavity and separates it from the more inferior abdominopelvic cavity. The abdominopelvic cavity is the largest cavity in the body.
Although no membrane physically divides the abdominopelvic cavity, it can be useful to distinguish between the abdominal cavity, the division that houses the digestive organs, and the pelvic cavity, the division that houses the organs of reproduction. Abdominal Regions and Quadrants To promote clear communication, for instance about the location of a patient’s abdominal pain or a suspicious mass, health care providers typically divide the cavity into either nine regions or four quadrants (Figure 1.16). The more detailed regional approach subdivides the cavity with one horizontal line immediately inferior to the ribs and one immediately superior to the pelvis, and two vertical lines drawn as if dropped from the midpoint of each clavicle (collarbone), resulting in nine regions. The simpler quadrants approach, which is more commonly used in medicine, subdivides the cavity with one horizontal and one vertical line that intersect at the patient’s umbilicus (navel). Membranes of the Anterior (Ventral) Body Cavity A serous membrane (also referred to as a serosa) is one of the thin membranes that cover the walls and organs in the thoracic and abdominopelvic cavities. The parietal layers of the membranes line the walls of the body cavity (pariet- refers to a cavity wall). The visceral layer of the membrane covers the organs (the viscera). Between the parietal and visceral layers is a very thin, fluid-filled serous space, or cavity (Figure 1.17). There are three serous cavities and their associated membranes. The pleura is the serous membrane that encloses the pleural cavity; the pleural cavity surrounds the lungs. The pericardium is the serous membrane that encloses the pericardial cavity; the pericardial cavity surrounds the heart. The peritoneum is the serous membrane that encloses the peritoneal cavity; the peritoneal cavity surrounds several organs in the abdominopelvic cavity. The serous membranes form fluid-filled sacs, or cavities, that cushion and reduce friction on internal organs when they move, such as when the lungs inflate or the heart beats. Both the parietal and visceral serosa secrete the thin, slippery serous fluid located within the serous cavities. The pleural cavity reduces friction between the lungs and the body wall. Likewise, the pericardial cavity reduces friction between the heart and the wall of the pericardium. The peritoneal cavity reduces friction between the abdominal and pelvic organs and the body wall. Therefore, serous membranes provide additional protection to the viscera they enclose by reducing friction that could lead to inflammation of the organs. 1.7 Medical Imaging Learning Objectives By the end of this section, you will be able to: Discuss the uses and drawbacks of X-ray imaging Identify four modern medical imaging techniques and how they are used For thousands of years, fear of the dead and legal sanctions limited the ability of anatomists and physicians to study the internal structures of the human body. An inability to control bleeding, infection, and pain made surgeries infrequent, and those that were performed—such as wound suturing, amputations, tooth and tumor removals, skull drilling, and cesarean births—did not greatly advance knowledge about internal anatomy. Theories about the function of the body and about disease were therefore largely based on external observations and imagination.
During the fifteenth and sixteenth centuries, however, the detailed anatomical drawings of Italian artist and anatomist Leonardo da Vinci and Flemish anatomist Andreas Vesalius were produced, and interest in human anatomy began to increase. Medical schools began to teach anatomy using human dissection, although some resorted to grave robbing to obtain corpses. Laws were eventually passed that enabled students to dissect the corpses of criminals and those who donated their bodies for research. Still, it was not until the late nineteenth century that medical researchers discovered non-surgical methods to look inside the living body. X-Rays German physicist Wilhelm Röntgen (1845–1923) was experimenting with electrical current when he discovered that a mysterious and invisible “ray” would pass through his flesh but leave an outline of his bones on a screen coated with a metal compound. In 1895, Röntgen made the first durable record of the internal parts of a living human: an “X-ray” image (as it came to be called) of his wife’s hand. Scientists around the world quickly began their own experiments with X-rays, and by 1900, X-rays were widely used to detect a variety of injuries and diseases. In 1901, Röntgen was awarded the first Nobel Prize in Physics for his work in this field. The X-ray is a form of high-energy electromagnetic radiation with a short wavelength capable of penetrating solids and ionizing gases. As they are used in medicine, X-rays are emitted from an X-ray machine and directed toward a specially treated metallic plate placed behind the patient’s body. The beam of radiation results in darkening of the X-ray plate. X-rays are slightly impeded by soft tissues, which show up as gray on the X-ray plate, whereas hard tissues, such as bone, largely block the rays, producing a light-toned “shadow.” Thus, X-rays are best used to visualize hard body structures such as teeth and bones (Figure 1.18). Like many forms of high-energy radiation, however, X-rays are capable of damaging cells and initiating changes that can lead to cancer. This danger of excessive exposure to X-rays was not fully appreciated for many years after their widespread use. Refinements and enhancements of X-ray techniques have continued throughout the twentieth and twenty-first centuries. Although often supplanted by more sophisticated imaging techniques, the X-ray remains a “workhorse” in medical imaging, especially for viewing fractures and for dentistry. The danger of irradiation to the patient and the operator is now reduced by proper shielding and by limiting exposure.
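The gray-versus-light contrast described above can be quantified with the standard exponential attenuation model for radiation passing through matter, I = I0 · e^(−μx), where μ is a tissue-dependent attenuation coefficient and x is the thickness traversed. This model is a common physics approximation, not something defined in this chapter, and the coefficients in the sketch below are made-up teaching values chosen only so that bone blocks far more of the beam than soft tissue.

```python
# Illustrative sketch: why bone leaves a light "shadow" on the plate while
# soft tissue shows up gray. Coefficients are made-up, not measured values.
import math

MU_PER_CM = {"soft tissue": 0.2, "bone": 3.0}  # illustrative attenuation per cm

def transmitted_fraction(tissue, thickness_cm):
    """Fraction of the beam reaching the plate: I/I0 = exp(-mu * x)."""
    return math.exp(-MU_PER_CM[tissue] * thickness_cm)

for tissue in MU_PER_CM:
    frac = transmitted_fraction(tissue, thickness_cm=2.0)
    look = "gray (most of the beam gets through)" if frac > 0.5 else "light shadow (beam mostly blocked)"
    print(f"2 cm of {tissue}: {frac:.1%} transmitted -> plate looks {look}")
```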
Modern Medical Imaging X-rays can depict a two-dimensional image of a body region, and only from a single angle. In contrast, more recent medical imaging technologies produce data that is integrated and analyzed by computers to produce three-dimensional images or images that reveal aspects of body functioning. Computed Tomography Tomography refers to imaging by sections. Computed tomography (CT) is a noninvasive imaging technique that uses computers to analyze several cross-sectional X-rays in order to reveal minute details about structures in the body (Figure 1.19a). The technique was invented in the 1970s and is based on the principle that, as X-rays pass through the body, they are absorbed or scattered to different degrees by different tissues. In the technique, a patient lies on a motorized platform while a computerized axial tomography (CAT) scanner rotates 360 degrees around the patient, taking X-ray images. A computer combines these images into a two-dimensional view of the scanned area, or “slice.” Since the 1970s, the development of more powerful computers and more sophisticated software has made CT scanning routine for many types of diagnostic evaluations. It is especially useful for soft tissue scanning, such as of the brain and the thoracic and abdominal viscera. Its level of detail is so precise that it can allow physicians to measure the size of a mass down to a millimeter. The main disadvantage of CT scanning is that it exposes patients to a dose of radiation many times higher than that of X-rays. In fact, children who undergo CT scans are at increased risk of developing cancer, as are adults who have multiple CT scans. Interactive Link A CT or CAT scan relies on a circling scanner that revolves around the patient’s body. Watch this video to learn more about CT and CAT scans. What type of radiation does a CT scanner use? Magnetic Resonance Imaging Magnetic resonance imaging (MRI) is a noninvasive medical imaging technique based on a phenomenon of nuclear physics discovered in the 1930s, in which matter exposed to magnetic fields and radio waves was found to emit radio signals. In 1970, a physician and researcher named Raymond Damadian noticed that malignant (cancerous) tissue gave off different signals than normal body tissue. He applied for a patent for the first MRI scanning device, which was in use clinically by the early 1980s. The early MRI scanners were crude, but advances in digital computing and electronics soon made MRI superior to other techniques for precise imaging, especially for discovering tumors. MRI also has the major advantage of not exposing patients to radiation. Drawbacks of MRI scans include their much higher cost and patient discomfort with the procedure. The MRI scanner subjects the patient to such powerful electromagnets that the scan room must be shielded. The patient must be enclosed in a metal tube-like device for the duration of the scan (see Figure 1.19b), sometimes as long as thirty minutes, which can be uncomfortable and impractical for ill patients. The device is also so noisy that, even with earplugs, patients can become anxious or even fearful. These problems have been overcome somewhat with the development of “open” MRI scanning, which does not require the patient to be entirely enclosed in the metal tube. Patients with iron-containing metallic implants (internal sutures, some prosthetic devices, and so on) cannot undergo MRI scanning because it can dislodge these implants. Functional MRIs (fMRIs), which detect changes in blood flow in certain parts of the body, are increasingly being used to study the activity in parts of the brain during various body activities. This has helped scientists learn more about the locations of different brain functions and more about brain abnormalities and diseases. Interactive Link A patient undergoing an MRI is surrounded by a tube-shaped scanner. Watch this video to learn more about MRIs. What is the function of magnets in an MRI? Positron Emission Tomography Positron emission tomography (PET) is a medical imaging technique involving the use of so-called radiopharmaceuticals, substances that emit radiation that is short-lived and therefore relatively safe to administer to the body. Although the first PET scanner was introduced in 1961, it took 15 more years before radiopharmaceuticals were combined with the technique and revolutionized its potential.
The main advantage is that PET (see Figure 1.19c) can illustrate physiologic activity—including nutrient metabolism and blood flow—of the organ or organs being targeted, whereas CT and MRI scans can only show static images. PET is widely used to diagnose a multitude of conditions, such as heart disease, the spread of cancer, certain forms of infection, brain abnormalities, bone disease, and thyroid disease. Interactive Link PET relies on radioactive substances administered several minutes before the scan. Watch this video to learn more about PET. How is PET used in chemotherapy? Ultrasonography Ultrasonography is an imaging technique that uses the transmission of high-frequency sound waves into the body to generate an echo signal that is converted by a computer into a real-time image of anatomy and physiology (see Figure 1.19d). Ultrasonography is the least invasive of all imaging techniques, and it is therefore used more freely in sensitive situations such as pregnancy. The technology was first developed in the 1940s and 1950s. Ultrasonography is used to study heart function, blood flow in the neck or extremities, certain conditions such as gallbladder disease, and fetal growth and development. The main disadvantages of ultrasonography are that the image quality is heavily operator-dependent and that it is unable to penetrate bone and gas.
Chapter Objectives After studying this chapter, you will be able to: Describe the major sections of the neurological exam Outline the benefits of rapidly assessing neurological function Relate anatomical structures of the nervous system to specific functions Diagram the connections of the nervous system to the musculature and integument involved in primary sensorimotor responses Compare and contrast the somatic and visceral reflexes with respect to how they are assessed through the neurological exam Introduction A man arrives at the hospital after feeling faint and complaining of a “pins-and-needles” feeling all along one side of his body. The most likely explanation is that he has suffered a stroke, which has caused a loss of oxygen to a particular part of the central nervous system (CNS). The problem is finding where in the entire nervous system the stroke has occurred. By checking reflexes, sensory responses, and motor control, a health care provider can focus on what abilities the patient may have lost as a result of the stroke and can use this information to determine where the injury occurred. In the emergency department of the hospital, this kind of rapid assessment of neurological function is key to treating trauma to the nervous system. In the classroom, the neurological exam is a valuable tool for learning the anatomy and physiology of the nervous system because it allows you to relate the functions of the system to particular locations in the nervous system. As a student of anatomy and physiology, you may be planning to go into an allied health field, perhaps nursing or physical therapy. You could be in the emergency department treating a patient such as the one just described. An important part of this course is to understand the nervous system. This can be especially challenging because you need to learn about the nervous system using your own nervous system. The first chapter in this unit about the nervous system began with a quote: “If the human brain were simple enough for us to understand, we would be too simple to understand it.” However, you are being asked to understand aspects of it. A health care provider can pinpoint problems with the nervous system in minutes by running through the series of tasks that test neurological function described in this chapter. You can use the same approach, though not as quickly, to learn about neurological function and its relationship to the structures of the nervous system. Nervous tissue is different from other tissues in that it is not classified into separate tissue types. It does contain two types of cells, neurons and glia, but it is all just nervous tissue. White matter and gray matter are not types of nervous tissue, but indications of different specializations within the nervous tissue. However, not all nervous tissue performs the same function. Furthermore, specific functions are not wholly localized to individual brain structures in the way that other bodily functions occur strictly within specific organs. In the CNS, we must consider the connections between cells over broad areas, not just the function of cells in one particular nucleus or region. In a broad sense, the nervous system is responsible for the majority of electrochemical signaling in the body, but the use of those signals is different in various regions. The nervous system is made up of the brain and spinal cord as the central organs, and the ganglia and nerves as organs in the periphery.
The brain and spinal cord can be thought of as a collection of smaller organs, most of which are the nuclei (such as the oculomotor nuclei), though white matter structures (such as the corpus callosum) also play important roles. Studying the nervous system requires an understanding of its varied physiology. For example, the hypothalamus plays a very different role than the visual cortex. The neurological exam provides a way to elicit behavior that represents those varied functions.
Review Questions

1. Which major section of the neurological exam is most likely to reveal damage to the cerebellum?
a. cranial nerve exam
b. mental status exam
c. sensory exam
d. coordination exam

2. What function would most likely be affected by a restriction of a blood vessel in the cerebral cortex?
a. language
b. gait
c. facial expressions
d. knee-jerk reflex

3. Which major section of the neurological exam includes subtests that are sometimes considered a separate set of tests concerned with walking?
a. mental status exam
b. cranial nerve exam
c. coordination exam
d. sensory exam

4. Which of the following could be elements of cytoarchitecture, as related to Brodmann’s microscopic studies of the cerebral cortex?
a. connections to the cerebellum
b. activation by visual stimuli
c. number of neurons per square millimeter
d. number of gyri or sulci

5. Which of the following could be a multimodal integrative area?
a. primary visual cortex
b. premotor cortex
c. hippocampus
d. Wernicke’s area

6. Which is an example of episodic memory?
a. how to bake a cake
b. your last birthday party
c. how old you are
d. needing to wear an oven mitt to take a cake out of the oven

7. Which type of aphasia is more like hearing a foreign language spoken?
a. receptive aphasia
b. expressive aphasia
c. conductive aphasia
d. Broca’s aphasia

Answers: 1. d; 2. a; 3. c; 4. c; 5. d; 6. b; 7. a
<hl>", "hl_sentences": "In the majority of individuals , language function is localized to the left hemisphere between the superior temporal lobe and the posterior frontal lobe , including the intervening connections through the inferior parietal lobe .", "question": { "cloze_format": "The region of the cerebral cortex that is associated with understanding language, both from another person and the language a person generates himself or herself is the ___.", "normal_format": "What region of the cerebral cortex is associated with understanding language, both from another person and the language a person generates himself or herself?", "question_choices": [ "medial temporal lobe", "ventromedial prefrontal cortex", "superior temporal gyrus", "postcentral gyrus" ], "question_id": "fs-id1959011", "question_text": "What region of the cerebral cortex is associated with understanding language, both from another person and the language a person generates himself or herself?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "salt" }, "bloom": "1", "hl_context": "Olfaction is not the pre-eminent sense , but its loss can be quite detrimental . The enjoyment of food is largely based on our sense of smell . Anosmia means that food will not seem to have the same taste , though the gustatory sense is intact , and food will often be described as being bland . <hl> However , the taste of food can be improved by adding ingredients ( e . g . , salt ) that stimulate the gustatory sense . <hl>", "hl_sentences": "However , the taste of food can be improved by adding ingredients ( e . g . , salt ) that stimulate the gustatory sense .", "question": { "cloze_format": "Without olfactory sensation to complement gustatory stimuli, food will taste bland unless it is seasoned with the substance ___ .", "normal_format": "Without olfactory sensation to complement gustatory stimuli, food will taste bland unless it is seasoned with which substance?", "question_choices": [ "salt", "thyme", "garlic", "olive oil" ], "question_id": "fs-id2094567", "question_text": "Without olfactory sensation to complement gustatory stimuli, food will taste bland unless it is seasoned with which substance?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "optic" }, "bloom": null, "hl_context": "A crucial function of the cranial nerves is to keep visual stimuli centered on the fovea of the retina . <hl> The vestibulo-ocular reflex ( VOR ) coordinates all of the components ( Figure 16.10 ) , both sensory and motor , that make this possible . <hl> If the head rotates in one direction — for example , to the right — the horizontal pair of semicircular canals in the inner ear indicate the movement by increased activity on the right and decreased activity on the left . The information is sent to the abducens nuclei and oculomotor nuclei on either side to coordinate the lateral and medial rectus muscles . The left lateral rectus and right medial rectus muscles will contract , rotating the eyes in the opposite direction of the head , while nuclei controlling the right lateral rectus and left medial rectus muscles will be inhibited to reduce antagonism of the contracting muscles . These actions stabilize the visual field by compensating for the head rotation with opposite rotation of the eyes in the orbits . 
Deficits in the VOR may be related to vestibular damage , such as in Ménière ’ s disease , or from dorsal brain stem damage that would affect the eye movement nuclei or their connections through the MLF .", "hl_sentences": "The vestibulo-ocular reflex ( VOR ) coordinates all of the components ( Figure 16.10 ) , both sensory and motor , that make this possible .", "question": { "cloze_format": "The ___ cranial nerve that is not part of the VOR.", "normal_format": "Which of the following cranial nerves is not part of the VOR?", "question_choices": [ "optic", "oculomotor", "abducens", "vestibulocochlear" ], "question_id": "fs-id2019800", "question_text": "Which of the following cranial nerves is not part of the VOR?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "vagus" }, "bloom": null, "hl_context": "The trigeminal nerve controls the muscles of chewing , which are tested for stretch reflexes . Motor functions of the facial nerve are usually obvious if facial expressions are compromised , but can be tested by having the patient raise their eyebrows , smile , and frown . Movements of the tongue , soft palate , or superior pharynx can be observed directly while the patient swallows , while the gag reflex is elicited , or while the patient says repetitive consonant sounds . <hl> The motor control of the gag reflex is largely controlled by fibers in the vagus nerve and constitutes a test of that nerve because the parasympathetic functions of that nerve are involved in visceral regulation , such as regulating the heartbeat and digestion . <hl> The facial and glossopharyngeal nerves convey gustatory stimulation to the brain . Testing this is as simple as introducing salty , sour , bitter , or sweet stimuli to either side of the tongue . The patient should respond to the taste stimulus before retracting the tongue into the mouth . Stimuli applied to specific locations on the tongue will dissolve into the saliva and may stimulate taste buds connected to either the left or right of the nerves , masking any lateral deficits . Along with taste , the glossopharyngeal nerve relays general sensations from the pharyngeal walls . These sensations , along with certain taste stimuli , can stimulate the gag reflex . If the examiner moves the tongue depressor to contact the lateral wall of the fauces , this should elicit the gag reflex . Stimulation of either side of the fauces should elicit an equivalent response . <hl> The motor response , through contraction of the muscles of the pharynx , is mediated through the vagus nerve . <hl> Normally , the vagus nerve is considered autonomic in nature . The vagus nerve directly stimulates the contraction of skeletal muscles in the pharynx and larynx to contribute to the swallowing and speech functions . Further testing of vagus motor function has the patient repeating consonant sounds that require movement of the muscles around the fauces . The patient is asked to say “ lah-kah-pah ” or a similar set of alternating sounds while the examiner observes the movements of the soft palate and arches between the palate and tongue .", "hl_sentences": "The motor control of the gag reflex is largely controlled by fibers in the vagus nerve and constitutes a test of that nerve because the parasympathetic functions of that nerve are involved in visceral regulation , such as regulating the heartbeat and digestion . 
The motor response , through contraction of the muscles of the pharynx , is mediated through the vagus nerve .", "question": { "cloze_format": "The nerve ___ is responsible for controlling the muscles that result in the gag reflex.", "normal_format": "Which nerve is responsible for controlling the muscles that result in the gag reflex?", "question_choices": [ "trigeminal", "facial", "glossopharyngeal", "vagus" ], "question_id": "fs-id1363683", "question_text": "Which nerve is responsible for controlling the muscles that result in the gag reflex?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "facial" }, "bloom": "1", "hl_context": "<hl> The facial and glossopharyngeal nerves are also responsible for the initiation of salivation . <hl> Neurons in the salivary nuclei of the medulla project through these two nerves as preganglionic fibers , and synapse in ganglia located in the head . The parasympathetic fibers of the facial nerve synapse in the pterygopalatine ganglion , which projects to the submandibular gland and sublingual gland . The parasympathetic fibers of the glossopharyngeal nerve synapse in the otic ganglion , which projects to the parotid gland . Salivation in response to food in the oral cavity is based on a visceral reflex arc within the facial or glossopharyngeal nerves . Other stimuli that stimulate salivation are coordinated through the hypothalamus , such as the smell and sight of food . <hl> The facial and glossopharyngeal nerves convey gustatory stimulation to the brain . <hl> Testing this is as simple as introducing salty , sour , bitter , or sweet stimuli to either side of the tongue . The patient should respond to the taste stimulus before retracting the tongue into the mouth . Stimuli applied to specific locations on the tongue will dissolve into the saliva and may stimulate taste buds connected to either the left or right of the nerves , masking any lateral deficits . Along with taste , the glossopharyngeal nerve relays general sensations from the pharyngeal walls . These sensations , along with certain taste stimuli , can stimulate the gag reflex . If the examiner moves the tongue depressor to contact the lateral wall of the fauces , this should elicit the gag reflex . Stimulation of either side of the fauces should elicit an equivalent response . The motor response , through contraction of the muscles of the pharynx , is mediated through the vagus nerve . Normally , the vagus nerve is considered autonomic in nature . The vagus nerve directly stimulates the contraction of skeletal muscles in the pharynx and larynx to contribute to the swallowing and speech functions . Further testing of vagus motor function has the patient repeating consonant sounds that require movement of the muscles around the fauces . The patient is asked to say “ lah-kah-pah ” or a similar set of alternating sounds while the examiner observes the movements of the soft palate and arches between the palate and tongue .", "hl_sentences": "The facial and glossopharyngeal nerves are also responsible for the initiation of salivation . 
The facial and glossopharyngeal nerves convey gustatory stimulation to the brain .", "question": { "cloze_format": "The ___ nerve is responsible for taste, as well as salivation, in the anterior oral cavity.", "normal_format": "Which nerve is responsible for taste, as well as salivation, in the anterior oral cavity?", "question_choices": [ "facial", "glossopharyngeal", "vagus", "hypoglossal" ], "question_id": "fs-id1637999", "question_text": "Which nerve is responsible for taste, as well as salivation, in the anterior oral cavity?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "spinal accessory" }, "bloom": null, "hl_context": "<hl> The accessory nerve , also referred to as the spinal accessory nerve , innervates the sternocleidomastoid and trapezius muscles ( Figure 16.11 ) . <hl> <hl> When both the sternocleidomastoids contract , the head flexes forward ; individually , they cause rotation to the opposite side . <hl> <hl> The trapezius can act as an antagonist , causing extension and hyperextension of the neck . <hl> These two superficial muscles are important for changing the position of the head . Both muscles also receive input from cervical spinal nerves . Along with the spinal accessory nerve , these nerves contribute to elevating the scapula and clavicle through the trapezius , which is tested by asking the patient to shrug both shoulders , and watching for asymmetry . For the sternocleidomastoid , those spinal nerves are primarily sensory projections , whereas the trapezius also has lateral insertions to the clavicle and scapula , and receives motor input from the spinal cord . Calling the nerve the spinal accessory nerve suggests that it is aiding the spinal nerves . Though that is not precisely how the name originated , it does help make the association between the function of this nerve in controlling these muscles and the role these muscles play in movements of the trunk or shoulders .", "hl_sentences": "The accessory nerve , also referred to as the spinal accessory nerve , innervates the sternocleidomastoid and trapezius muscles ( Figure 16.11 ) . When both the sternocleidomastoids contract , the head flexes forward ; individually , they cause rotation to the opposite side . The trapezius can act as an antagonist , causing extension and hyperextension of the neck .", "question": { "cloze_format": "___ is the nerve that controls movements of the neck.", "normal_format": "Which of the following nerves controls movements of the neck?", "question_choices": [ "oculomotor", "vestibulocochlear", "spinal accessory", "hypoglossal" ], "question_id": "fs-id2302721", "question_text": "Which of the following nerves controls movements of the neck?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "cerebellar deep white matter" }, "bloom": "1", "hl_context": "The cerebellum is located in apposition to the dorsal surface of the brain stem , centered on the pons . The name of the pons is derived from its connection to the cerebellum . The word means “ bridge ” and refers to the thick bundle of myelinated axons that form a bulge on its ventral surface . Those fibers are axons that project from the gray matter of the pons into the contralateral cerebellar cortex . <hl> These fibers make up the middle cerebellar peduncle ( MCP ) and are the major physical connection of the cerebellum to the brain stem ( Figure 16.14 ) . <hl> <hl> Two other white matter bundles connect the cerebellum to the other regions of the brain stem . 
<hl> <hl> The superior cerebellar peduncle ( SCP ) is the connection of the cerebellum to the midbrain and forebrain . <hl> <hl> The inferior cerebellar peduncle ( ICP ) is the connection to the medulla . <hl> The skeletomotor system is largely based on the simple , two-cell projection from the precentral gyrus of the frontal lobe to the skeletal muscles . <hl> The corticospinal tract represents the neurons that send output from the primary motor cortex . <hl> <hl> These fibers travel through the deep white matter of the cerebrum , then through the midbrain and pons , into the medulla where most of them decussate , and finally through the spinal cord white matter in the lateral ( crossed fibers ) or anterior ( uncrossed fibers ) columns . <hl> These fibers synapse on motor neurons in the ventral horn . The ventral horn motor neurons then project to skeletal muscle and cause contraction . These two cells are termed the upper motor neuron ( UMN ) and the lower motor neuron ( LMN ) . Voluntary movements require these two cells to be active .", "hl_sentences": "These fibers make up the middle cerebellar peduncle ( MCP ) and are the major physical connection of the cerebellum to the brain stem ( Figure 16.14 ) . Two other white matter bundles connect the cerebellum to the other regions of the brain stem . The superior cerebellar peduncle ( SCP ) is the connection of the cerebellum to the midbrain and forebrain . The inferior cerebellar peduncle ( ICP ) is the connection to the medulla . The corticospinal tract represents the neurons that send output from the primary motor cortex . These fibers travel through the deep white matter of the cerebrum , then through the midbrain and pons , into the medulla where most of them decussate , and finally through the spinal cord white matter in the lateral ( crossed fibers ) or anterior ( uncrossed fibers ) columns .", "question": { "cloze_format": "___ is not part of the corticospinal pathway.", "normal_format": "Which of the following is not part of the corticospinal pathway?", "question_choices": [ "cerebellar deep white matter", "midbrain", "medulla", "lateral column" ], "question_id": "fs-id1493090", "question_text": "Which of the following is not part of the corticospinal pathway?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "Romberg test" }, "bloom": "1", "hl_context": "Gait can either be considered a separate part of the neurological exam or a subtest of the coordination exam that addresses walking and balance . Testing posture and gait addresses functions of the spinocerebellum and the vestibulocerebellum because both are part of these activities . A subtest called station begins with the patient standing in a normal position to check for the placement of the feet and balance . The patient is asked to hop on one foot to assess the ability to maintain balance and posture during movement . Though the station subtest appears to be similar to the Romberg test , the difference is that the patient ’ s eyes are open during station . The Romberg test has the patient stand still with the eyes closed . <hl> Any changes in posture would be the result of proprioceptive deficits , and the patient is able to recover when they open their eyes . <hl> <hl> A final subtest of sensory perception that concentrates on the sense of proprioception is known as the Romberg test . <hl> The patient is asked to stand straight with feet together . 
Once the patient has achieved their balance in that position , they are asked to close their eyes . Without visual feedback that the body is in a vertical orientation relative to the surrounding environment , the patient must rely on the proprioceptive stimuli of joint and muscle position , as well as information from the inner ear , to maintain balance . This test can indicate deficits in dorsal column pathway proprioception , as well as problems with proprioceptive projections to the cerebellum through the spinocerebellar tract .", "hl_sentences": "Any changes in posture would be the result of proprioceptive deficits , and the patient is able to recover when they open their eyes . A final subtest of sensory perception that concentrates on the sense of proprioception is known as the Romberg test .", "question": { "cloze_format": "The subtest that is directed at proprioceptive sensation is the ___.", "normal_format": "Which subtest is directed at proprioceptive sensation?", "question_choices": [ "two-point discrimination", "tactile movement", "vibration", "Romberg test" ], "question_id": "fs-id2528969", "question_text": "Which subtest is directed at proprioceptive sensation?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "paresis" }, "bloom": "1", "hl_context": "<hl> A lesion on the LMN would result in paralysis , or at least partial loss of voluntary muscle control , which is known as paresis . <hl> The paralysis observed in LMN diseases is referred to as flaccid paralysis , referring to a complete or partial loss of muscle tone , in contrast to the loss of control in UMN lesions in which tone is retained and spasticity is exhibited . Other signs of an LMN lesion are fibrillation , fasciculation , and compromised or lost reflexes resulting from the denervation of the muscle fibers . The motor exam tests the function of these neurons and the muscles they control . First , the muscles are inspected and palpated for signs of structural irregularities . Movement disorders may be the result of changes to the muscle tissue , such as scarring , and these possibilities need to be ruled out before testing function . <hl> Along with this inspection , muscle tone is assessed by moving the muscles through a passive range of motion . <hl> The arm is moved at the elbow and wrist , and the leg is moved at the knee and ankle . Skeletal muscle should have a resting tension representing a slight contraction of the fibers . The lack of muscle tone , known as hypotonicity or flaccidity , may indicate that the LMN is not conducting action potentials that will keep a basal level of acetylcholine in the neuromuscular junction . The skeletomotor system is largely based on the simple , two-cell projection from the precentral gyrus of the frontal lobe to the skeletal muscles . The corticospinal tract represents the neurons that send output from the primary motor cortex . These fibers travel through the deep white matter of the cerebrum , then through the midbrain and pons , into the medulla where most of them decussate , and finally through the spinal cord white matter in the lateral ( crossed fibers ) or anterior ( uncrossed fibers ) columns . These fibers synapse on motor neurons in the ventral horn . The ventral horn motor neurons then project to skeletal muscle and cause contraction . <hl> These two cells are termed the upper motor neuron ( UMN ) and the lower motor neuron ( LMN ) . <hl> <hl> Voluntary movements require these two cells to be active . 
<hl>", "hl_sentences": "A lesion on the LMN would result in paralysis , or at least partial loss of voluntary muscle control , which is known as paresis . Along with this inspection , muscle tone is assessed by moving the muscles through a passive range of motion . These two cells are termed the upper motor neuron ( UMN ) and the lower motor neuron ( LMN ) . Voluntary movements require these two cells to be active .", "question": { "cloze_format": "The term that describes the inability to lift the arm above the level of the shoulder is ___.", "normal_format": "What term describes the inability to lift the arm above the level of the shoulder?", "question_choices": [ "paralysis", "paresis", "fasciculation", "fibrillation" ], "question_id": "fs-id2051709", "question_text": "What term describes the inability to lift the arm above the level of the shoulder?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "stretch reflex" }, "bloom": null, "hl_context": "The trigeminal system of the head and neck is the equivalent of the ascending spinal cord systems of the dorsal column and the spinothalamic pathways . Somatosensation of the face is conveyed along the nerve to enter the brain stem at the level of the pons . Synapses of those axons , however , are distributed across nuclei found throughout the brain stem . The mesencephalic nucleus processes proprioceptive information of the face , which is the movement and position of facial muscles . <hl> It is the sensory component of the jaw-jerk reflex , a stretch reflex of the masseter muscle . <hl> The chief nucleus , located in the pons , receives information about light touch as well as proprioceptive information about the mandible , which are both relayed to the thalamus and , ultimately , to the postcentral gyrus of the parietal lobe . The spinal trigeminal nucleus , located in the medulla , receives information about crude touch , pain , and temperature to be relayed to the thalamus and cortex . Essentially , the projection through the chief nucleus is analogous to the dorsal column pathway for the body , and the projection through the spinal trigeminal nucleus is analogous to the spinothalamic pathway .", "hl_sentences": "It is the sensory component of the jaw-jerk reflex , a stretch reflex of the masseter muscle .", "question": { "cloze_format": "The type of reflex that is the jaw-jerk reflex that is part of the cranial nerve exam for the vestibulocochlear nerve is the ___.", "normal_format": "Which type of reflex is the jaw-jerk reflex that is part of the cranial nerve exam for the vestibulocochlear nerve?", "question_choices": [ "visceral reflex", "withdrawal reflex", "stretch reflex", "superficial reflex" ], "question_id": "fs-id1246774", "question_text": "Which type of reflex is the jaw-jerk reflex that is part of the cranial nerve exam for the vestibulocochlear nerve?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "involves an axon in the ventral nerve root" }, "bloom": "1", "hl_context": "<hl> The general senses are distributed throughout the body , relying on nervous tissue incorporated into various organs . <hl> <hl> Somatic senses are incorporated mostly into the skin , muscles , or tendons , whereas the visceral senses come from nervous tissue incorporated into the majority of organs such as the heart or stomach . <hl> The somatic senses are those that usually make up the conscious perception of the how the body interacts with the environment . 
The visceral senses are most often below the limit of conscious perception because they are involved in homeostatic regulation through the autonomic nervous system . The facial and glossopharyngeal nerves are also responsible for the initiation of salivation . Neurons in the salivary nuclei of the medulla project through these two nerves as preganglionic fibers , and synapse in ganglia located in the head . <hl> The parasympathetic fibers of the facial nerve synapse in the pterygopalatine ganglion , which projects to the submandibular gland and sublingual gland . <hl> <hl> The parasympathetic fibers of the glossopharyngeal nerve synapse in the otic ganglion , which projects to the parotid gland . <hl> <hl> Salivation in response to food in the oral cavity is based on a visceral reflex arc within the facial or glossopharyngeal nerves . <hl> Other stimuli that stimulate salivation are coordinated through the hypothalamus , such as the smell and sight of food . The olfactory , optic , and vestibulocochlear nerves ( cranial nerves I , II , and VIII ) are dedicated to four of the special senses : smell , vision , equilibrium , and hearing , respectively . <hl> Taste sensation is relayed to the brain stem through fibers of the facial and glossopharyngeal nerves . <hl> <hl> The trigeminal nerve is a mixed nerve that carries the general somatic senses from the head , similar to those coming through spinal nerves from the rest of the body . <hl>", "hl_sentences": "The general senses are distributed throughout the body , relying on nervous tissue incorporated into various organs . Somatic senses are incorporated mostly into the skin , muscles , or tendons , whereas the visceral senses come from nervous tissue incorporated into the majority of organs such as the heart or stomach . The parasympathetic fibers of the facial nerve synapse in the pterygopalatine ganglion , which projects to the submandibular gland and sublingual gland . The parasympathetic fibers of the glossopharyngeal nerve synapse in the otic ganglion , which projects to the parotid gland . Salivation in response to food in the oral cavity is based on a visceral reflex arc within the facial or glossopharyngeal nerves . Taste sensation is relayed to the brain stem through fibers of the facial and glossopharyngeal nerves . The trigeminal nerve is a mixed nerve that carries the general somatic senses from the head , similar to those coming through spinal nerves from the rest of the body .", "question": { "cloze_format": "It is a feature of both somatic and visceral senses that it ___.", "normal_format": "Which of the following is a feature of both somatic and visceral senses?", "question_choices": [ "requires cerebral input", "causes skeletal muscle contraction", "projects to a ganglion near the target effector", "involves an axon in the ventral nerve root" ], "question_id": "fs-id2297726", "question_text": "Which of the following is a feature of both somatic and visceral senses?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "middle cerebellar peduncle" }, "bloom": null, "hl_context": "These connections can also be broadly described by their functions . The ICP conveys sensory input to the cerebellum , partially from the spinocerebellar tract , but also through fibers of the inferior olive . <hl> The MCP is part of the cortico-ponto-cerebellar pathway that connects the cerebral cortex with the cerebellum and preferentially targets the lateral regions of the cerebellum . 
<hl> It includes a copy of the motor commands sent from the precentral gyrus through the corticospinal tract , arising from collateral branches that synapse in the gray matter of the pons , along with input from other regions such as the visual cortex . The SCP is the major output of the cerebellum , divided between the red nucleus in the midbrain and the thalamus , which will return cerebellar processing to the motor cortex . These connections describe a circuit that compares motor commands and sensory feedback to generate a new output . These comparisons make it possible to coordinate movements . If the cerebral cortex sends a motor command to initiate walking , that command is copied by the pons and sent into the cerebellum through the MCP . Sensory feedback in the form of proprioception from the spinal cord , as well as vestibular sensations from the inner ear , enters through the ICP . If you take a step and begin to slip on the floor because it is wet , the output from the cerebellum — through the SCP — can correct for that and keep you balanced and moving . The red nucleus sends new motor commands to the spinal cord through the rubrospinal tract . The cerebellum is located in apposition to the dorsal surface of the brain stem , centered on the pons . The name of the pons is derived from its connection to the cerebellum . <hl> The word means “ bridge ” and refers to the thick bundle of myelinated axons that form a bulge on its ventral surface . <hl> <hl> Those fibers are axons that project from the gray matter of the pons into the contralateral cerebellar cortex . <hl> <hl> These fibers make up the middle cerebellar peduncle ( MCP ) and are the major physical connection of the cerebellum to the brain stem ( Figure 16.14 ) . <hl> Two other white matter bundles connect the cerebellum to the other regions of the brain stem . The superior cerebellar peduncle ( SCP ) is the connection of the cerebellum to the midbrain and forebrain . The inferior cerebellar peduncle ( ICP ) is the connection to the medulla .", "hl_sentences": "The MCP is part of the cortico-ponto-cerebellar pathway that connects the cerebral cortex with the cerebellum and preferentially targets the lateral regions of the cerebellum . The word means “ bridge ” and refers to the thick bundle of myelinated axons that form a bulge on its ventral surface . Those fibers are axons that project from the gray matter of the pons into the contralateral cerebellar cortex . These fibers make up the middle cerebellar peduncle ( MCP ) and are the major physical connection of the cerebellum to the brain stem ( Figure 16.14 ) .", "question": { "cloze_format": "The white matter structure that carries information from the cerebral cortex to the cerebellum is called a/an ___.", "normal_format": "Which white matter structure carries information from the cerebral cortex to the cerebellum?", "question_choices": [ "cerebral peduncle", "superior cerebellar peduncle", "middle cerebellar peduncle", "inferior cerebellar peduncle" ], "question_id": "fs-id2081476", "question_text": "Which white matter structure carries information from the cerebral cortex to the cerebellum?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "vermis" }, "bloom": "1", "hl_context": "The cerebellum is divided into regions that are based on the particular functions and connections involved . 
<hl> The midline regions of the cerebellum , the vermis and flocculonodular lobe , are involved in comparing visual information , equilibrium , and proprioceptive feedback to maintain balance and coordinate movements such as walking , or gait , through the descending output of the red nucleus ( Figure 16.15 ) . <hl> The lateral hemispheres are primarily concerned with planning motor functions through frontal lobe inputs that are returned through the thalamic projections back to the premotor and motor cortices . Processing in the midline regions targets movements of the axial musculature , whereas the lateral regions target movements of the appendicular musculature . The vermis is referred to as the spinocerebellum because it primarily receives input from the dorsal columns and spinocerebellar pathways . The flocculonodular lobe is referred to as the vestibulocerebellum because of the vestibular projection into that region . Finally , the lateral cerebellum is referred to as the cerebrocerebellum , reflecting the significant input from the cerebral cortex through the cortico-ponto-cerebellar pathway . These connections can also be broadly described by their functions . The ICP conveys sensory input to the cerebellum , partially from the spinocerebellar tract , but also through fibers of the inferior olive . The MCP is part of the cortico-ponto-cerebellar pathway that connects the cerebral cortex with the cerebellum and preferentially targets the lateral regions of the cerebellum . It includes a copy of the motor commands sent from the precentral gyrus through the corticospinal tract , arising from collateral branches that synapse in the gray matter of the pons , along with input from other regions such as the visual cortex . The SCP is the major output of the cerebellum , divided between the red nucleus in the midbrain and the thalamus , which will return cerebellar processing to the motor cortex . These connections describe a circuit that compares motor commands and sensory feedback to generate a new output . These comparisons make it possible to coordinate movements . If the cerebral cortex sends a motor command to initiate walking , that command is copied by the pons and sent into the cerebellum through the MCP . <hl> Sensory feedback in the form of proprioception from the spinal cord , as well as vestibular sensations from the inner ear , enters through the ICP . <hl> If you take a step and begin to slip on the floor because it is wet , the output from the cerebellum — through the SCP — can correct for that and keep you balanced and moving . The red nucleus sends new motor commands to the spinal cord through the rubrospinal tract . The cerebellum is located in apposition to the dorsal surface of the brain stem , centered on the pons . The name of the pons is derived from its connection to the cerebellum . The word means “ bridge ” and refers to the thick bundle of myelinated axons that form a bulge on its ventral surface . Those fibers are axons that project from the gray matter of the pons into the contralateral cerebellar cortex . These fibers make up the middle cerebellar peduncle ( MCP ) and are the major physical connection of the cerebellum to the brain stem ( Figure 16.14 ) . Two other white matter bundles connect the cerebellum to the other regions of the brain stem . The superior cerebellar peduncle ( SCP ) is the connection of the cerebellum to the midbrain and forebrain . <hl> The inferior cerebellar peduncle ( ICP ) is the connection to the medulla . 
<hl>", "hl_sentences": "The midline regions of the cerebellum , the vermis and flocculonodular lobe , are involved in comparing visual information , equilibrium , and proprioceptive feedback to maintain balance and coordinate movements such as walking , or gait , through the descending output of the red nucleus ( Figure 16.15 ) . Sensory feedback in the form of proprioception from the spinal cord , as well as vestibular sensations from the inner ear , enters through the ICP . The inferior cerebellar peduncle ( ICP ) is the connection to the medulla .", "question": { "cloze_format": "The ___ region of the cerebellum receives proprioceptive input from the spinal cord.", "normal_format": "Which region of the cerebellum receives proprioceptive input from the spinal cord?", "question_choices": [ "vermis", "left hemisphere", "flocculonodular lobe", "right hemisphere" ], "question_id": "fs-id1418043", "question_text": "Which region of the cerebellum receives proprioceptive input from the spinal cord?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "station" }, "bloom": "1", "hl_context": "<hl> Gait can either be considered a separate part of the neurological exam or a subtest of the coordination exam that addresses walking and balance . <hl> Testing posture and gait addresses functions of the spinocerebellum and the vestibulocerebellum because both are part of these activities . <hl> A subtest called station begins with the patient standing in a normal position to check for the placement of the feet and balance . <hl> The patient is asked to hop on one foot to assess the ability to maintain balance and posture during movement . Though the station subtest appears to be similar to the Romberg test , the difference is that the patient ’ s eyes are open during station . The Romberg test has the patient stand still with the eyes closed . Any changes in posture would be the result of proprioceptive deficits , and the patient is able to recover when they open their eyes .", "hl_sentences": "Gait can either be considered a separate part of the neurological exam or a subtest of the coordination exam that addresses walking and balance . A subtest called station begins with the patient standing in a normal position to check for the placement of the feet and balance .", "question": { "cloze_format": "___ tests cerebellar function related to gait.", "normal_format": "Which of the following tests cerebellar function related to gait?", "question_choices": [ "toe-to-finger", "station", "lah-kah-pah", "finger-to-nose" ], "question_id": "fs-id1478128", "question_text": "Which of the following tests cerebellar function related to gait?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "antibiotics" }, "bloom": "1", "hl_context": "Ataxia is often the result of exposure to exogenous substances , focal lesions , or a genetic disorder . <hl> Focal lesions include strokes affecting the cerebellar arteries , tumors that may impinge on the cerebellum , trauma to the back of the head and neck , or MS . Alcohol intoxication or drugs such as ketamine cause ataxia , but it is often reversible . <hl> <hl> Mercury in fish can cause ataxia as well . <hl> <hl> Hereditary conditions can lead to degeneration of the cerebellum or spinal cord , as well as malformation of the brain , or the abnormal accumulation of copper seen in Wilson ’ s disease . <hl> <hl> Ataxia A movement disorder of the cerebellum is referred to as ataxia . 
<hl> It presents as a loss of coordination in voluntary movements . Ataxia can also refer to sensory deficits that cause balance problems , primarily in proprioception and equilibrium . When the problem is observed in movement , it is ascribed to cerebellar damage . Sensory and vestibular ataxia would likely also present with problems in gait and station .", "hl_sentences": "Focal lesions include strokes affecting the cerebellar arteries , tumors that may impinge on the cerebellum , trauma to the back of the head and neck , or MS . Alcohol intoxication or drugs such as ketamine cause ataxia , but it is often reversible . Mercury in fish can cause ataxia as well . Hereditary conditions can lead to degeneration of the cerebellum or spinal cord , as well as malformation of the brain , or the abnormal accumulation of copper seen in Wilson ’ s disease . Ataxia A movement disorder of the cerebellum is referred to as ataxia .", "question": { "cloze_format": "___ is not a cause of cerebellar ataxia.", "normal_format": "Which of the following is not a cause of cerebellar ataxia?", "question_choices": [ "mercury from fish", "drinking alcohol", "antibiotics", "hereditary degeneration of the cerebellum" ], "question_id": "fs-id1881166", "question_text": "Which of the following is not a cause of cerebellar ataxia?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "processing visual information" }, "bloom": "1", "hl_context": "<hl> The role of the cerebellum is a subject of debate . <hl> <hl> There is an obvious connection to motor function based on the clinical implications of cerebellar damage . <hl> <hl> There is also strong evidence of the cerebellar role in procedural memory . <hl> <hl> The two are not incompatible ; in fact , procedural memory is motor memory , such as learning to ride a bicycle . <hl> <hl> Significant work has been performed to describe the connections within the cerebellum that result in learning . <hl> A model for this learning is classical conditioning , as shown by the famous dogs from the physiologist Ivan Pavlov ’ s work . This classical conditioning , which can be related to motor learning , fits with the neural connections of the cerebellum . The cerebellum is 10 percent of the mass of the brain and has varied functions that all point to a role in the motor system . Those parts of the brain involved in the reception and interpretation of sensory stimuli are referred to collectively as the sensorium . The cerebral cortex has several regions that are necessary for sensory perception . <hl> From the primary cortical areas of the somatosensory , visual , auditory , and gustatory senses to the association areas that process information in these modalities , the cerebral cortex is the seat of conscious sensory perception . <hl> <hl> In contrast , sensory information can also be processed by deeper brain regions , which we may vaguely describe as subconscious — for instance , we are not constantly aware of the proprioceptive information that the cerebellum uses to maintain balance . <hl> Several of the subtests can reveal activity associated with these sensory modalities , such as being able to hear a question or see a picture . Two subtests assess specific functions of these cortical areas . The five major sections of the neurological exam are related to the major regions of the CNS ( Figure 16.2 ) . The mental status exam assesses functions related to the cerebrum . 
The cranial nerve exam is for the nerves that connect to the diencephalon and brain stem ( as well as the olfactory connections to the forebrain ) . <hl> The coordination exam and the related gait exam primarily assess the functions of the cerebellum . <hl> <hl> The motor and sensory exams are associated with the spinal cord and its connections through the spinal nerves . <hl>", "hl_sentences": "The role of the cerebellum is a subject of debate . There is an obvious connection to motor function based on the clinical implications of cerebellar damage . There is also strong evidence of the cerebellar role in procedural memory . The two are not incompatible ; in fact , procedural memory is motor memory , such as learning to ride a bicycle . Significant work has been performed to describe the connections within the cerebellum that result in learning . From the primary cortical areas of the somatosensory , visual , auditory , and gustatory senses to the association areas that process information in these modalities , the cerebral cortex is the seat of conscious sensory perception . In contrast , sensory information can also be processed by deeper brain regions , which we may vaguely describe as subconscious — for instance , we are not constantly aware of the proprioceptive information that the cerebellum uses to maintain balance . The coordination exam and the related gait exam primarily assess the functions of the cerebellum . The motor and sensory exams are associated with the spinal cord and its connections through the spinal nerves .", "question": { "cloze_format": "The function that cannot be attributed to the cerebellum is ___.", "normal_format": "Which of the following functions cannot be attributed to the cerebellum?", "question_choices": [ "comparing motor commands and sensory feedback", "associating sensory stimuli with learned behavior", "coordinating complex movements", "processing visual information" ], "question_id": "fs-id1481522", "question_text": "Which of the following functions cannot be attributed to the cerebellum?" }, "references_are_paraphrase": 0 } ]
Chapter 16
16.1 Overview of the Neurological Exam

Learning Objectives
By the end of this section, you will be able to:
List the major sections of the neurological exam
Explain the connection between location and function in the nervous system
Explain the benefit of a rapid assessment for neurological function in a clinical setting
List the causes of neurological deficits
Describe the different ischemic events in the nervous system

The neurological exam is a clinical assessment tool used to determine which specific parts of the CNS are affected by damage or disease. It can be performed in a short time—sometimes as quickly as 5 minutes—to establish neurological function. In the emergency department, this rapid assessment can make the difference with respect to proper treatment and the extent of recovery that is possible.

The exam is a series of subtests separated into five major sections. The first of these is the mental status exam, which assesses the higher cognitive functions such as memory, orientation, and language. Then there is the cranial nerve exam, which tests the function of the 12 cranial nerves and, therefore, the central and peripheral structures associated with them. The cranial nerve exam tests the sensory and motor functions of each of the nerves, as applicable. Two major sections, the sensory exam and the motor exam, test the sensory and motor functions associated with spinal nerves. Finally, the coordination exam tests the ability to perform complex and coordinated movements. The gait exam, which is often considered a sixth major exam, specifically assesses the motor function of walking and can be considered part of the coordination exam because walking is a coordinated movement.

Neuroanatomy and the Neurological Exam

Localization of function is the concept that circumscribed locations are responsible for specific functions. The neurological exam highlights this relationship. For example, the cognitive functions that are assessed in the mental status exam are based on functions in the cerebrum, mostly in the cerebral cortex. Several of the subtests examine language function. Deficits in neurological function uncovered by these examinations usually point to damage to the left cerebral cortex. In the majority of individuals, language function is localized to the left hemisphere between the superior temporal lobe and the posterior frontal lobe, including the intervening connections through the inferior parietal lobe.

The five major sections of the neurological exam are related to the major regions of the CNS (Figure 16.2). The mental status exam assesses functions related to the cerebrum. The cranial nerve exam is for the nerves that connect to the diencephalon and brain stem (as well as the olfactory connections to the forebrain). The coordination exam and the related gait exam primarily assess the functions of the cerebellum. The motor and sensory exams are associated with the spinal cord and its connections through the spinal nerves.

Part of the power of the neurological exam is this link between structure and function. Testing the various functions represented in the exam allows an accurate estimation of where the nervous system may be damaged. Consider the patient described in the chapter introduction. In the emergency department, he is given a quick exam to find where the deficit may be localized. Knowledge of where the damage occurred will lead to the most effective therapy.
In rapid succession, he is asked to smile, raise his eyebrows, stick out his tongue, and shrug his shoulders. The doctor tests muscular strength by providing resistance against his arms and legs while he tries to lift them. With his eyes closed, he has to indicate when he feels the tip of a pen touch his legs, arms, fingers, and face. He follows the tip of a pen as the doctor moves it through the visual field and finally toward his face.

A formal mental status exam is not needed at this point; the patient will demonstrate any possible deficits in that area during normal interactions with the interviewer. If cognitive or language deficits are apparent, the interviewer can pursue mental status in more depth. All of this takes place in less than 5 minutes. The patient reports that he feels pins and needles in his left arm and leg, and has trouble feeling the tip of the pen when he is touched on those limbs. This suggests a problem with the sensory systems between the spinal cord and the brain. The emergency department has a lead to follow before a CT scan is performed. He is put on aspirin therapy to limit the possibility of blood clots forming, in case the cause is an embolus—an obstruction such as a blood clot that blocks the flow of blood in an artery or vein.

Interactive Link
Watch this video to see a demonstration of the neurological exam—a series of tests that can be performed rapidly when a patient is initially brought into an emergency department. The exam can be repeated on a regular basis to keep a record of how and if neurological function changes over time. In what order were the sections of the neurological exam tested in this video, and which section seemed to be left out?

Causes of Neurological Deficits

Damage to the nervous system can be limited to individual structures or can be distributed across broad areas of the brain and spinal cord. Localized, limited injury to the nervous system is most often the result of circulatory problems. Neurons are very sensitive to oxygen deprivation and will start to deteriorate within 1 or 2 minutes, and permanent damage (cell death) could result within a few hours. The loss of blood flow to part of the brain is known as a stroke, or a cerebrovascular accident (CVA).

There are two main types of stroke, depending on how the blood supply is compromised: ischemic and hemorrhagic. An ischemic stroke is the loss of blood flow to an area because vessels are blocked or narrowed. This is often caused by an embolus, which may be a blood clot or fat deposit. Ischemia may also be the result of thickening of the blood vessel wall, or a drop in blood volume in the brain known as hypovolemia. A related type of CVA is known as a transient ischemic attack (TIA), which is similar to a stroke although it does not last as long. The diagnostic definition of a stroke includes effects that last at least 24 hours; any stroke symptoms that resolve within a 24-hour period because of restoration of adequate blood flow are classified as a TIA.

A hemorrhagic stroke is bleeding into the brain because of a damaged blood vessel. Accumulated blood fills a region of the cranial vault and presses against the tissue in the brain (Figure 16.3). Physical pressure on the brain can cause loss of function, as well as the squeezing of local arteries, resulting in compromised blood flow beyond the site of the hemorrhage.
As blood pools in the nervous tissue and the vasculature is damaged, the blood-brain barrier can break down and allow additional fluid to accumulate in the region, a condition known as edema.

Whereas hemorrhagic stroke may involve bleeding into a large region of the CNS, such as into the deep white matter of a cerebral hemisphere, other events can cause widespread damage and loss of neurological functions. Infectious diseases can lead to loss of function throughout the CNS as components of nervous tissue, specifically astrocytes and microglia, react to the disease. Blunt force trauma, such as from a motor vehicle accident, can physically damage the CNS.

A class of disorders that affect the nervous system are the neurodegenerative diseases: Alzheimer’s disease, Parkinson’s disease, Huntington’s disease, amyotrophic lateral sclerosis (ALS), Creutzfeldt–Jakob disease, multiple sclerosis (MS), and other disorders that are the result of nervous tissue degeneration. In diseases like Alzheimer’s, Parkinson’s, or ALS, neurons die; in diseases like MS, myelin is affected. Some of these disorders affect motor function, and others present with dementia. How patients with these disorders perform in the neurological exam varies, but the effects are often broad, such as memory deficits that compromise many aspects of the mental status exam, or movement deficits that compromise aspects of the cranial nerve exam, the motor exam, or the coordination exam. The causes of these disorders are also varied. Some are the result of genetics, such as Huntington’s disease, or the result of autoimmunity, such as MS; others are not entirely understood, such as Alzheimer’s and Parkinson’s diseases. Current research suggests that many of these diseases are related in how the degeneration takes place and may be treated by common therapies.

Finally, a common cause of neurological changes is observed in developmental disorders. Whether the result of genetic factors or the environment during development, certain situations result in neurological functions that differ from the expected norms. Developmental disorders are difficult to define because they are caused by defects that existed in the past and disrupted the normal development of the CNS. These defects probably involve multiple environmental and genetic factors; most of the time, we do not know the cause other than that it is more complex than just one factor. Furthermore, each defect on its own may not be a problem, but when several are added together, they can disrupt growth processes that are not well understood in the first place.

For instance, it is possible for a stroke to damage a specific region of the brain and lead to the loss of the ability to recognize faces (prosopagnosia). The link between cell death in the fusiform gyrus and the symptom is relatively easy to understand. In contrast, similar deficits can be seen in children with the developmental disorder autism spectrum disorder (ASD). However, these children do not lack a fusiform gyrus, nor is there any damage or defect visible in this brain region. We can conclude only, imprecisely, that this brain region is not connected properly to other brain regions.

Infection, trauma, and congenital disorders can all lead to significant signs, as identified through the neurological exam. It is important to differentiate between an acute event, such as a stroke, and a chronic or global condition such as blunt force trauma. Responses seen in the neurological exam can help.
A loss of language function observed in all its aspects is more likely a global event as opposed to a discrete loss of one function, such as not being able to say certain types of words. A concern, however, is that a specific function—such as controlling the muscles of speech—may mask other language functions. The various subtests within the mental status exam can address these finer points and help clarify the underlying cause of the neurological loss.

Interactive Link
Watch this video for an introduction to the neurological exam. Studying the neurological exam can give insight into how structure and function in the nervous system are interdependent. This is a tool both in the clinic and in the classroom, but for different reasons. In the clinic, this is a powerful but simple tool to assess a patient's neurological function. In the classroom, it is a different way to think about the nervous system. Though medical technology provides noninvasive imaging and real-time functional data, the presenter says these cannot replace the history at the core of the medical examination. What does history mean in the context of medical practice?

16.2 The Mental Status Exam

Learning Objectives
By the end of this section, you will be able to:
- Describe the relationship of mental status exam results to cerebral functions
- Explain the categorization of regions of the cortex based on anatomy and physiology
- Differentiate between primary, association, and integration areas of the cerebral cortex
- Provide examples of localization of function related to the cerebral cortex

In the clinical setting, the set of subtests known as the mental status exam helps us understand the relationship of the brain to the body. Ultimately, this is accomplished by assessing behavior. Tremors related to intentional movements, incoordination, or the neglect of one side of the body can be indicative of failures of the connections of the cerebrum either within the hemispheres, or from the cerebrum to other portions of the nervous system. There is no strict test for what the cerebrum does alone; rather, its function is assessed through what it does in controlling the rest of the CNS, the peripheral nervous system (PNS), and the musculature. Sometimes eliciting a behavior is as simple as asking a question. Asking a patient to state his or her name is not only to verify that the file folder in a health care provider's hands is the correct one, but also to be sure that the patient is aware, oriented, and capable of interacting with another person. If the answer to "What is your name?" is "Santa Claus," the person may have a problem understanding reality. If the person just stares at the examiner with a confused look on their face, the person may have a problem understanding or producing speech.

Functions of the Cerebral Cortex
The cerebrum is the seat of many of the higher mental functions, such as memory and learning, language, and conscious perception, which are the subjects of subtests of the mental status exam. The cerebral cortex is the thin layer of gray matter on the outside of the cerebrum. On average, it is approximately 2.55 mm thick and highly folded to fit within the limited space of the cranial vault. These higher functions are distributed across various regions of the cortex, and specific locations can be said to be responsible for particular functions. There is a limited set of regions, for example, that are involved in language function, and they can be subdivided on the basis of the particular part of language function that each governs.
The basis for parceling out areas of the cortex and attributing them to various functions has its root in anatomical evidence. The German neurologist and histologist Korbinian Brodmann, who made a careful study of the cytoarchitecture of the cerebrum around the turn of the twentieth century, described approximately 50 regions of the cortex that differed enough from each other to be considered separate areas (Figure 16.4). Brodmann made preparations of many different regions of the cerebral cortex to view with a microscope. He compared the size, shape, and number of neurons to find anatomical differences in the various parts of the cerebral cortex. Continued investigation into these anatomical areas over the subsequent 100 or more years has demonstrated a strong correlation between the structures and the functions attributed to those structures. For example, the first three areas in Brodmann's list—which are in the postcentral gyrus—compose the primary somatosensory cortex. Within this area, finer separation can be made on the basis of the concept of the sensory homunculus, as well as the different submodalities of somatosensation such as touch, vibration, pain, temperature, or proprioception. Today, we more frequently refer to these regions by their function (i.e., primary sensory cortex) than by the number Brodmann assigned to them, but in some situations the use of Brodmann numbers persists. Area 17, as Brodmann described it, is also known as the primary visual cortex. Adjacent to that are areas 18 and 19, which constitute subsequent regions of visual processing. Area 41 is the primary auditory cortex, and it is followed by area 22, which further processes auditory information. Area 4 is the primary motor cortex in the precentral gyrus, whereas area 6 is the premotor cortex. These areas suggest some specialization within the cortex for functional processing, both in sensory and motor regions. The fact that Brodmann's areas correlate so closely to functional localization in the cerebral cortex demonstrates the strong link between structure and function in these regions. Areas 1, 2, 3, 4, 17, and 41 are each described as primary cortical areas. The adjoining regions are each referred to as association areas. Primary areas are where sensory information is initially received from the thalamus for conscious perception, or—in the case of the primary motor cortex—where descending commands are sent down to the brain stem or spinal cord to execute movements (Figure 16.5). A number of other regions, which extend beyond these primary or association areas of the cortex, are referred to as integrative areas. These areas are found in the spaces between the domains for particular sensory or motor functions, and they integrate multisensory information, or process sensory or motor information in more complex ways. Consider, for example, the posterior parietal cortex that lies between the somatosensory cortex and visual cortex regions. This region has been ascribed to the coordination of visual and motor functions, such as reaching to pick up a glass. The somatosensory function that would be part of this is the proprioceptive feedback from moving the arm and hand. The weight of the glass, based on what it contains, will influence how those movements are executed.

Cognitive Abilities
Assessment of cerebral functions is directed at cognitive abilities.
The abilities assessed through the mental status exam can be separated into four groups: orientation and memory, language and speech, sensorium, and judgment and abstract reasoning.

Orientation and Memory
Orientation is the patient's awareness of his or her immediate circumstances. It is awareness of time, not in terms of the clock, but of the date and what is occurring around the patient. It is awareness of place, such that a patient should know where he or she is and why. It is also awareness of who the patient is—recognizing personal identity and being able to relate that to the examiner. The initial tests of orientation are based on the questions, "Do you know what the date is?" or "Do you know where you are?" or "What is your name?" Further understanding of a patient's awareness of orientation can come from questions that address remote memory, such as "Who is the President of the United States?", or asking what happened on a specific date. There are also specific tasks to address memory. One is the three-word recall test. The patient is given three words to recall, such as book, clock, and shovel. After a short interval, during which other parts of the interview continue, the patient is asked to recall the three words. Other tasks that assess memory—aside from those related to orientation—have the patient recite the months of the year in reverse order (to avoid relying on the overlearned forward sequence), spell common words backward, or repeat a list of numbers back to the examiner. Memory is largely a function of the temporal lobe, along with structures beneath the cerebral cortex such as the hippocampus and the amygdala. The storage of memory requires these structures of the medial temporal lobe. A famous case of a man who had both medial temporal lobes removed to treat intractable epilepsy provided insight into the relationship between the structures of the brain and the function of memory. Henry Molaison, who was referred to as patient HM when he was alive, had epilepsy localized to both of his medial temporal lobes. In 1953, a bilateral lobectomy was performed that alleviated the epilepsy but left HM unable to form new memories—a condition called anterograde amnesia. HM was able to recall most events from before his surgery, although there was a partial loss of earlier memories, which is referred to as retrograde amnesia. HM became the subject of extensive studies into how memory works. What he was unable to do was form new memories of what happened to him, what is now called episodic memory. Episodic memory is autobiographical in nature, such as remembering riding a bicycle as a child around the neighborhood, as opposed to the procedural memory of how to ride a bike. HM also retained his short-term memory, such as what is tested by the three-word task described above. After a brief period, those memories would dissipate or decay and not be stored in the long term because the medial temporal lobe structures were removed. The difference in short-term, procedural, and episodic memory, as evidenced by patient HM, suggests that there are different parts of the brain responsible for those functions. The long-term storage of episodic memory requires the hippocampus and related medial temporal structures, and the location of those memories is in the multimodal integration areas of the cerebral cortex. However, short-term memory—also called working or active memory—is localized to the prefrontal lobe.
Because patient HM lost only his medial temporal lobes—retaining most of his previous memories and the ability to form new short-term memories—it was concluded that the function of the hippocampus, and adjacent structures in the medial temporal lobe, is to move (or consolidate) short-term memories (in the prefrontal lobe) to long-term memory (in the temporal lobe). The prefrontal cortex can also be tested for the ability to organize information. In one subtest of the mental status exam called set generation, the patient is asked to generate a list of words that all start with the same letter, but not to include proper nouns or names. The expectation is that a person can generate such a list of at least 10 words within 1 minute. Many people can likely do this much more quickly, but the standard separates the accepted normal from those with compromised prefrontal cortices.

Interactive Link
Read this article to learn about a young man who texts his fiancée in a panic as he finds that he is having trouble remembering things. At the hospital, a neurologist administers the mental status exam, which is mostly normal except for the three-word recall test. The young man could not recall the three words even 30 seconds after hearing them and repeating them back to the doctor. An undiscovered mass in the mediastinum region was found to be Hodgkin's lymphoma, a type of cancer that affects the immune system and likely caused antibodies to attack the nervous system. The patient eventually regained his ability to remember, though the events in the hospital were always elusive. Considering that the effects on memory were temporary, but resulted in the loss of the specific events of the hospital stay, what regions of the brain were likely to have been affected by the antibodies and what type of memory does that represent?

Language and Speech
Language is, arguably, a very human aspect of neurological function. There are certainly strides being made in understanding communication in other species, but much of what makes the human experience seemingly unique is its basis in language. Any understanding of our species is necessarily reflective, as suggested by the question "What am I?" And the fundamental answer to this question is suggested by the famous quote by René Descartes: "Cogito, ergo sum" (translated from Latin as "I think, therefore I am"). Formulating an understanding of yourself is largely describing who you are to yourself. It is a confusing topic to delve into, but language is certainly at the core of what it means to be self-aware. The neurological exam has two specific subtests that address language. One measures the ability of the patient to understand language by asking them to follow a set of instructions to perform an action, such as "touch your right finger to your left elbow and then to your right knee." Another subtest assesses the fluency and coherency of language by having the patient generate descriptions of objects or scenes depicted in drawings, and by reciting sentences or explaining a written passage. Language, however, is important in so many ways in the neurological exam. The patient needs to know what to do, whether it is as simple as explaining how the knee-jerk reflex is going to be performed, or asking a question such as "What is your name?" Often, language deficits can be determined without specific subtests; if a person cannot reply to a question properly, there may be a problem with the reception of language.
An important example of multimodal integrative areas is associated with language function (Figure 16.6). Adjacent to the auditory association cortex, at the end of the lateral sulcus just anterior to the visual cortex, is Wernicke's area. In the lateral aspect of the frontal lobe, just anterior to the region of the motor cortex associated with the head and neck, is Broca's area. Both regions were originally described on the basis of losses of speech and language, which is called aphasia. The aphasia associated with Broca's area is known as an expressive aphasia, which means that speech production is compromised. This type of aphasia is often described as non-fluent because speech comes out broken and halting, with only some words successfully produced. Grammar can also appear to be lost. The aphasia associated with Wernicke's area is known as a receptive aphasia, which is not a loss of speech production, but a loss of understanding of content. Patients, after recovering from acute forms of this aphasia, report not being able to understand what is said to them or what they are saying themselves, but they often cannot keep from talking. The two regions are connected by white matter tracts that run between the posterior temporal lobe and the lateral aspect of the frontal lobe. Conduction aphasia, associated with damage to this connection, refers to the problem of connecting the understanding of language to the production of speech. This is a very rare condition, but is likely to present as an inability to faithfully repeat spoken language.

Sensorium
Those parts of the brain involved in the reception and interpretation of sensory stimuli are referred to collectively as the sensorium. The cerebral cortex has several regions that are necessary for sensory perception. From the primary cortical areas of the somatosensory, visual, auditory, and gustatory senses to the association areas that process information in these modalities, the cerebral cortex is the seat of conscious sensory perception. In contrast, sensory information can also be processed by deeper brain regions, which we may vaguely describe as subconscious—for instance, we are not constantly aware of the proprioceptive information that the cerebellum uses to maintain balance. Several of the subtests can reveal activity associated with these sensory modalities, such as being able to hear a question or see a picture. Two subtests assess specific functions of these cortical areas. The first is praxis, a practical exercise in which the patient performs a task completely on the basis of verbal description without any demonstration from the examiner. For example, the patient can be told to take their left hand and place it palm down on their left thigh, then flip it over so the palm is facing up, and then repeat this four times. The examiner describes the activity without any movements on their part to suggest how the movements are to be performed. The patient needs to understand the instructions, transform them into movements, and use sensory feedback, both visual and proprioceptive, to perform the movements correctly. The second subtest for sensory perception is gnosis, which involves two tasks. The first task, known as stereognosis, involves the naming of objects strictly on the basis of the somatosensory information that comes from manipulating them. The patient keeps their eyes closed and is given a common object, such as a coin, that they have to identify.
The patient should be able to indicate the particular type of coin, such as a dime versus a penny, or a nickel versus a quarter, on the basis of the sensory cues involved. For example, the size, thickness, or weight of the coin may be an indication, or, to differentiate the pairs of coins suggested here, the smooth or corrugated edge of the coin will correspond to the particular denomination. The second task, graphesthesia, is to recognize numbers or letters written on the palm of the hand with a dull pointer, such as a pen cap. Praxis and gnosis are related to the conscious perception and cortical processing of sensory information: being able to transform verbal commands into a sequence of motor responses, or to manipulate and recognize a common object and associate it with a name for that object. Both subtests have language components because language function is integral to these functions. The concepts represented by the words that describe actions, or the nouns that represent objects, are suggested to be localized to particular cortical areas. Certain aphasias can be characterized by a deficit of verbs or nouns, known as V impairment or N impairment, or may be classified as V–N dissociation. Patients have difficulty using one type of word over the other. To describe what is happening in a photograph as part of the expressive language subtest, a patient will use active- or image-based language. The lack of one or the other of these components of language can relate to the ability to use verbs or nouns. Damage to the region at which the frontal and temporal lobes meet, including the region known as the insula, is associated with V impairment; damage to the middle and inferior temporal lobe is associated with N impairment.

Judgment and Abstract Reasoning
Planning and producing responses requires an ability to make sense of the world around us. Making judgments and reasoning in the abstract are necessary to produce movements as part of larger responses. For example, when your alarm goes off, do you hit the snooze button or jump out of bed? Is 10 extra minutes in bed worth the extra rush to get ready for your day? Will hitting the snooze button multiple times lead to feeling more rested or result in a panic as you run late? How you mentally process these questions can affect your whole day. The prefrontal cortex is responsible for the functions involved in planning and making decisions. In the mental status exam, the subtest that assesses judgment and reasoning is directed at three aspects of frontal lobe function. First, the examiner asks questions about problem solving, such as "If you see a house on fire, what would you do?" The patient is also asked to interpret common proverbs, such as "Don't look a gift horse in the mouth." Additionally, pairs of words are compared for similarities, such as apple and orange, or lamp and cabinet. The prefrontal cortex is composed of the regions of the frontal lobe that are not directly related to specific motor functions. The most posterior region of the frontal lobe, the precentral gyrus, is the primary motor cortex. Anterior to that are the premotor cortex, Broca's area, and the frontal eye fields, which are all related to planning certain types of movements. Anterior to what could be described as motor association areas are the regions of the prefrontal cortex. They are the regions in which judgment, abstract reasoning, and working memory are localized.
The antecedents to planning certain movements are judging whether those movements should be made, as in the example of deciding whether to hit the snooze button. To an extent, the prefrontal cortex may be related to personality. The neurological exam does not necessarily assess personality, but it can be within the realm of neurology or psychiatry. A clinical situation that suggests this link between the prefrontal cortex and personality comes from the story of Phineas Gage, the railroad worker from the mid-1800s who had a metal spike impale his prefrontal cortex. There are suggestions that the spike led to changes in his personality. A man who was a quiet, dependable railroad worker became a raucous, irritable drunkard. Later anecdotal evidence from his life suggests that he was able to support himself, although he had to relocate and take on a different career as a stagecoach driver. One psychiatric practice once used to deal with various disorders was the prefrontal lobotomy, in which the connections between the prefrontal cortex and other regions of the brain were severed. This procedure was common in the 1940s and early 1950s, until antipsychotic drugs became available. The disorders treated with this procedure included some aspects of what are now referred to as personality disorders, but also included mood disorders and psychoses. Depictions of lobotomies in popular media suggest a link between cutting the white matter of the prefrontal cortex and changes in a patient's mood and personality, though this correlation is not well understood.

Everyday Connection: Left Brain, Right Brain
Popular media often refer to right-brained and left-brained people, as if the brain were two independent halves that work differently for different people. This is a popular misinterpretation of an important neurological phenomenon. As an extreme measure to deal with a debilitating condition, the corpus callosum may be sectioned to overcome intractable epilepsy. When the connections between the two cerebral hemispheres are cut, interesting effects can be observed. If a person with an intact corpus callosum is asked to put their hands in their pockets and describe what is there on the basis of what their hands feel, they might say that they have keys in their right pocket and loose change in the left. They may even be able to count the coins in their pocket and say if they can afford to buy a candy bar from the vending machine. If a person with a sectioned corpus callosum is given the same instructions, they will do something quite peculiar. They will only put their right hand in their pocket and say they have keys there. They will not even move their left hand, much less report that there is loose change in the left pocket. The reason for this is that the language functions of the cerebral cortex are localized to the left hemisphere in 95 percent of the population. Additionally, the left hemisphere is connected to the right side of the body through the corticospinal tract and the ascending tracts of the spinal cord. Motor commands from the precentral gyrus control the opposite side of the body, whereas sensory information processed by the postcentral gyrus is received from the opposite side of the body. For a verbal command to initiate movement of the left arm and hand, the two sides of the brain need to be connected by the corpus callosum.
Language is processed in the left side of the brain and directly influences the left brain and right arm motor functions, but is sent to influence the right brain and left arm motor functions through the corpus callosum. Likewise, the left-handed sensory perception of what is in the left pocket must travel across the corpus callosum from the right brain to the verbal left hemisphere, so when the callosum is sectioned, no verbal report on those contents is possible even if the left hand happens to be in the pocket.

Interactive Link
Watch the video titled "The Man With Two Brains" to see the neuroscientist Michael Gazzaniga introduce a patient he has worked with for years who has had his corpus callosum cut, separating his two cerebral hemispheres. A few tests are run to demonstrate how this manifests in tests of cerebral function. Unlike people with an intact corpus callosum, this patient can perform two independent tasks at the same time because the lines of communication between the right and left sides of his brain have been removed. Whereas a person with an intact corpus callosum cannot overcome the dominance of one hemisphere over the other, this patient can. If the left cerebral hemisphere is dominant in the majority of people, why would right-handedness be most common?

The Mental Status Exam
The cerebrum, particularly the cerebral cortex, is the location of important cognitive functions that are the focus of the mental status exam. The regionalization of the cortex, initially described on the basis of anatomical evidence of cytoarchitecture, reveals the distribution of functionally distinct areas. Cortical regions can be described as primary sensory or motor areas, association areas, or multimodal integration areas. The functions attributed to these regions include attention, memory, language, speech, sensation, judgment, and abstract reasoning. The mental status exam addresses these cognitive abilities through a series of subtests designed to elicit particular behaviors ascribed to these functions. The loss of neurological function can illustrate the location of damage to the cerebrum. Memory functions are attributed to the temporal lobe, particularly the medial temporal lobe structures known as the hippocampus and amygdala, along with the adjacent cortex. Evidence of the importance of these structures comes from the side effects of a bilateral temporal lobectomy that were studied in detail in patient HM. Losses of language and speech functions, known as aphasias, are associated with damage to the important integration areas in the left hemisphere known as Broca's or Wernicke's areas, as well as the connections in the white matter between them. Different types of aphasia are named for the particular structures that are damaged. Assessment of the functions of the sensorium includes praxis and gnosis. The subtests related to these functions depend on multimodal integration, as well as language-dependent processing. The prefrontal cortex contains structures important for planning, judgment, reasoning, and working memory. Damage to these areas can result in changes to personality, mood, and behavior. The famous case of Phineas Gage suggests a role for this cortex in personality, as does the outdated practice of prefrontal lobotomy.
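The mental status subtests described in this section come with simple numeric conventions: all three words recalled after a delay, and at least 10 same-letter words (excluding proper nouns) generated within one minute for set generation. The sketch below is only an illustration of those norms as stated above; the function names, and the crude capitalized-word check for proper nouns, are hypothetical simplifications rather than any standard clinical scoring tool.

```python
# A minimal sketch of scoring two mental status subtests, assuming the norms
# described in the text: recall of the three presented words after a delay,
# and generation of >= 10 same-letter words (no proper nouns) in 60 seconds.
# All names here are hypothetical illustrations, not clinical software.

def score_three_word_recall(presented, recalled):
    """Return how many of the presented words were recalled after the delay."""
    recalled_set = {word.lower() for word in recalled}
    return sum(1 for word in presented if word.lower() in recalled_set)

def score_set_generation(words, letter, seconds_elapsed):
    """Count words starting with the target letter, excluding proper nouns
    (crudely approximated as capitalized words). Norm: >= 10 words in <= 60 s."""
    valid = [w for w in words
             if w.lower().startswith(letter.lower()) and not w[0].isupper()]
    return len(valid), (len(valid) >= 10 and seconds_elapsed <= 60)

# Example: two of three words recalled; eleven "f" words generated in 48 seconds.
print(score_three_word_recall(["book", "clock", "shovel"], ["shovel", "book"]))
print(score_set_generation(["fish", "fog", "farm", "fall", "fan", "fig",
                            "foam", "fort", "fuel", "fun", "face"], "f", 48))
```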
16.3 The Cranial Nerve Exam

Learning Objectives
By the end of this section, you will be able to:
- Describe the functional grouping of cranial nerves
- Match the regions of the forebrain and brain stem that are connected to each cranial nerve
- Suggest diagnoses that would explain certain losses of function in the cranial nerves
- Relate cranial nerve deficits to damage of adjacent, unrelated structures

The twelve cranial nerves are typically covered in introductory anatomy courses, and memorizing their names is facilitated by numerous mnemonics developed by students over the years of this practice. But knowing the names of the nerves in order often leaves much to be desired in understanding what the nerves do. The nerves can be categorized by functions, and subtests of the cranial nerve exam can clarify these functional groupings. Three of the nerves are strictly responsible for special senses, whereas four others contain fibers for special and general senses. Three nerves are connected to the extraocular muscles, resulting in control of gaze. Four nerves connect to muscles of the face, oral cavity, and pharynx, controlling facial expressions, mastication, swallowing, and speech. Four nerves make up the cranial component of the parasympathetic nervous system responsible for pupillary constriction, salivation, and the regulation of the organs of the thoracic and upper abdominal cavities. Finally, one nerve controls the muscles of the neck, assisting with spinal control of the movement of the head and neck. The cranial nerve exam allows directed tests of forebrain and brain stem structures. The twelve cranial nerves serve the head and neck. The vagus nerve (cranial nerve X) has autonomic functions in the thoracic and superior abdominal cavities. The special senses are served through the cranial nerves, as well as the general senses of the head and neck. The movements of the eyes, face, tongue, throat, and neck are all under the control of cranial nerves. Preganglionic parasympathetic nerve fibers that control pupillary size, salivary glands, and the thoracic and upper abdominal viscera are found in four of the nerves. Tests of these functions can provide insight into damage to specific regions of the brain stem and may uncover deficits in adjacent regions.

Sensory Nerves
The olfactory, optic, and vestibulocochlear nerves (cranial nerves I, II, and VIII) are dedicated to four of the special senses: smell, vision, equilibrium, and hearing, respectively. Taste sensation is relayed to the brain stem through fibers of the facial and glossopharyngeal nerves. The trigeminal nerve is a mixed nerve that carries the general somatic senses from the head, similar to those coming through spinal nerves from the rest of the body. Testing smell is straightforward, as common smells are presented to one nostril at a time. The patient should be able to recognize the smell of coffee or mint, indicating the proper functioning of the olfactory system. Loss of the sense of smell is called anosmia; it can follow blunt trauma to the head or develop with aging. The short axons of the first cranial nerve regenerate on a regular basis. The neurons in the olfactory epithelium have a limited life span, and new cells grow to replace the ones that die off. The axons from these neurons grow back into the CNS by following the existing axons—representing one of the few examples of such growth in the mature nervous system.
If all of the fibers are sheared when the brain moves within the cranium, such as in a motor vehicle accident, then no axons can find their way back to the olfactory bulb to re-establish connections. If the nerve is not completely severed, the anosmia may be temporary as new neurons can eventually reconnect. Olfaction is not the pre-eminent sense, but its loss can be quite detrimental. The enjoyment of food is largely based on our sense of smell. Anosmia means that food will not seem to have the same taste, though the gustatory sense is intact, and food will often be described as being bland. However, the taste of food can be improved by adding ingredients (e.g., salt) that stimulate the gustatory sense. Testing vision relies on the tests that are common in an optometry office. The Snellen chart (Figure 16.7) demonstrates visual acuity by presenting standard Roman letters in a variety of sizes. The result of this test is a rough measure of a person's acuity relative to the accepted norm, in which a letter that subtends a visual angle of 5 minutes of arc can be seen at 20 feet. To have 20/60 vision, for example, means that the smallest letters that a person can see at a 20-foot distance could be seen by a person with normal acuity from 60 feet away. Testing the extent of the visual field means that the examiner can establish the boundaries of peripheral vision as simply as holding their hands out to either side and asking the patient when the fingers are no longer visible without moving the eyes to track them. If it is necessary, further tests can establish the perceptions in the visual fields. Physical inspection of the optic disk, or where the optic nerve emerges from the eye, can be accomplished by looking through the pupil with an ophthalmoscope. The optic nerves from both sides enter the cranium through the respective optic canals and meet at the optic chiasm, at which fibers sort such that the two halves of the visual field are processed by the opposite sides of the brain. Deficits in visual field perception often suggest damage along the length of the optic pathway between the orbit and the diencephalon. For example, loss of peripheral vision may be the result of a pituitary tumor pressing on the optic chiasm (Figure 16.8). The pituitary, seated in the sella turcica of the sphenoid bone, is directly inferior to the optic chiasm. The axons that decussate in the chiasm are from the medial retinae of either eye, and therefore carry information from the peripheral visual field. The vestibulocochlear nerve (CN VIII) carries both equilibrium and auditory sensations from the inner ear to the medulla. Though the two senses are not directly related, their anatomy is mirrored in the two systems. Problems with balance, such as vertigo, and deficits in hearing may both point to problems with the inner ear. Within the petrous region of the temporal bone is the bony labyrinth of the inner ear. The vestibule is the portion for equilibrium, composed of the utricle, saccule, and the three semicircular canals. The cochlea is responsible for transducing sound waves into a neural signal. The sensory nerves from these two structures travel side-by-side as the vestibulocochlear nerve, though they are really separate divisions. They both emerge from the inner ear, pass through the internal auditory meatus, and synapse in nuclei of the superior medulla. Though they are part of distinct sensory systems, the vestibular nuclei and the cochlear nuclei are close neighbors with adjacent inputs.
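Returning to the Snellen chart discussed above, the acuity arithmetic can be checked directly: the 20/X line is simply the letter size that subtends the standard 5 minutes of arc at X feet, so a 20/60 letter is three times the height of a 20/20 letter. A minimal sketch of that computation, with an illustrative function name rather than anything from an optometry library:

```python
import math

FOOT_IN_METERS = 0.3048

def snellen_letter_height_mm(denominator_ft, angle_arcmin=5):
    """Height of the letter on the 20/denominator_ft line: by convention it
    subtends 5 minutes of arc at denominator_ft feet for a normal eye."""
    angle_rad = math.radians(angle_arcmin / 60)
    return denominator_ft * FOOT_IN_METERS * math.tan(angle_rad) * 1000

print(f"20/20 letter: {snellen_letter_height_mm(20):.1f} mm")  # ~8.9 mm
print(f"20/60 letter: {snellen_letter_height_mm(60):.1f} mm")  # ~26.6 mm, 3x larger
```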
Because the vestibular and cochlear nuclei are close neighbors, damage that encompasses structures near both can result in deficits to one or both systems. Balance or hearing deficits may be the result of damage to the middle or inner ear structures. Ménière's disease is a disorder that can affect both equilibrium and audition in a variety of ways. The patient can suffer from vertigo, a low-frequency ringing in the ears, or a loss of hearing. From patient to patient, the exact presentation of the disease can be different. Additionally, within a single patient, the symptoms and signs may change as the disease progresses. Use of the neurological exam subtests for the vestibulocochlear nerve illuminates the changes a patient may go through. The disease appears to be the result of accumulation, or over-production, of fluid in the inner ear, in either the vestibule or cochlea. Tests of equilibrium are important for coordination and gait and are related to other aspects of the neurological exam. The vestibulo-ocular reflex involves the cranial nerves for gaze control. Balance and equilibrium, as tested by the Romberg test, are part of spinal and cerebellar processes and involved in those components of the neurological exam, as discussed later. Hearing is tested by using a tuning fork in a couple of different ways. The Rinne test involves using a tuning fork to distinguish between conductive hearing and sensorineural hearing. Conductive hearing relies on vibrations being conducted through the ossicles of the middle ear. Sensorineural hearing is the transmission of sound stimuli through the neural components of the inner ear and cranial nerve. A vibrating tuning fork is placed on the mastoid process, and the patient indicates when the sound produced from this is no longer present. Then the fork is immediately moved to just next to the ear canal so the sound travels through the air. If the sound is not heard through the ear, meaning the sound is conducted better through the temporal bone than through the ossicles, a conductive hearing deficit is present. The Weber test also uses a tuning fork to differentiate between conductive and sensorineural hearing loss. In this test, the tuning fork is placed at the top of the skull, and the sound of the tuning fork reaches both inner ears by traveling through bone. In a healthy patient, the sound would appear equally loud in both ears. With unilateral conductive hearing loss, however, the tuning fork sounds louder in the ear with hearing loss. This is because the sound of the tuning fork has to compete with background noise coming from the outer ear, but in conductive hearing loss, the background noise is blocked in the damaged ear, allowing the tuning fork to sound relatively louder in that ear. With unilateral sensorineural hearing loss, however, damage to the cochlea or associated nervous tissue means that the tuning fork sounds quieter in that ear. The trigeminal system of the head and neck is the equivalent of the ascending spinal cord systems of the dorsal column and the spinothalamic pathways. Somatosensation of the face is conveyed along the nerve to enter the brain stem at the level of the pons. Synapses of those axons, however, are distributed across nuclei found throughout the brain stem. The mesencephalic nucleus processes proprioceptive information of the face, which is the movement and position of facial muscles. It is the sensory component of the jaw-jerk reflex, a stretch reflex of the masseter muscle.
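Returning to the tuning-fork tests above, the Rinne and Weber results combine into a small decision table. The sketch below is a deliberately simplified illustration of that reasoning for a suspected single-sided deficit; the names are hypothetical, and real interpretation requires testing each ear and clinical judgment.

```python
def interpret_tuning_fork(air_better_in_bad_ear, weber_louder_side, symptomatic_side):
    """Combine simplified Rinne and Weber findings for a one-sided deficit.

    air_better_in_bad_ear: True if the symptomatic ear still hears the fork in
        air after bone conduction fades (air > bone, the normal Rinne pattern).
    weber_louder_side: 'left', 'right', or 'equal' for the fork at the skull vertex.
    symptomatic_side: the ear the patient reports trouble with.
    """
    if weber_louder_side == "equal" and air_better_in_bad_ear:
        return "no lateralized deficit detected"
    if weber_louder_side == symptomatic_side and not air_better_in_bad_ear:
        # Bone beats air in the bad ear, and the midline tone lateralizes to it.
        return f"consistent with conductive loss on the {symptomatic_side}"
    if weber_louder_side != symptomatic_side and air_better_in_bad_ear:
        # Air conduction pattern is preserved, but the midline tone moves away.
        return f"consistent with sensorineural loss on the {symptomatic_side}"
    return "mixed or inconsistent findings; retest"

print(interpret_tuning_fork(False, "left", "left"))   # conductive, left ear
print(interpret_tuning_fork(True, "right", "left"))   # sensorineural, left ear
```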
The chief nucleus, located in the pons, receives information about light touch as well as proprioceptive information about the mandible, which are both relayed to the thalamus and, ultimately, to the postcentral gyrus of the parietal lobe. The spinal trigeminal nucleus, located in the medulla, receives information about crude touch, pain, and temperature to be relayed to the thalamus and cortex. Essentially, the projection through the chief nucleus is analogous to the dorsal column pathway for the body, and the projection through the spinal trigeminal nucleus is analogous to the spinothalamic pathway. Subtests for the sensory component of the trigeminal system are the same as those for the sensory exam targeting the spinal nerves. The primary sensory subtest for the trigeminal system is sensory discrimination. A cotton-tipped applicator, which is cotton attached to the end of a thin wooden stick, can be used easily for this. The wood of the applicator can be snapped so that a pointed end is opposite the soft cotton-tipped end. The cotton end provides a touch stimulus, while the pointed end provides a painful, or sharp, stimulus. While the patient's eyes are closed, the examiner touches the two ends of the applicator to the patient's face, alternating randomly between them. The patient must identify whether the stimulus is sharp or dull. These stimuli are processed by the trigeminal system separately. Contact with the cotton tip of the applicator is a light touch, relayed by the chief nucleus, but contact with the pointed end of the applicator is a painful stimulus relayed by the spinal trigeminal nucleus. Failure to discriminate these stimuli can localize problems within the brain stem. If a patient cannot recognize a painful stimulus, that might indicate damage to the spinal trigeminal nucleus in the medulla. The medulla also contains important regions that regulate the cardiovascular, respiratory, and digestive systems, as well as being the pathway for ascending and descending tracts between the brain and spinal cord. Damage, such as a stroke, that results in changes in sensory discrimination may indicate that these unrelated regions are affected as well.

Gaze Control
The three nerves that control the extraocular muscles are the oculomotor, trochlear, and abducens nerves, which are the third, fourth, and sixth cranial nerves. As the name suggests, the abducens nerve is responsible for abducting the eye, which it controls through contraction of the lateral rectus muscle. The trochlear nerve controls the superior oblique muscle to rotate the eye along its axis in the orbit medially, which is called intorsion, and is a component of focusing the eyes on an object close to the face. The oculomotor nerve controls all the other extraocular muscles, as well as a muscle of the upper eyelid. Movements of the two eyes need to be coordinated to locate and track visual stimuli accurately. When moving the eyes to locate an object in the horizontal plane, or to track movement horizontally in the visual field, the lateral rectus muscle of one eye and the medial rectus muscle of the other eye are both active. The lateral rectus is controlled by neurons of the abducens nucleus in the superior medulla, whereas the medial rectus is controlled by neurons in the oculomotor nucleus of the midbrain. Coordinated movement of both eyes through different nuclei requires integrated processing through the brain stem.
In the midbrain, the superior colliculus integrates visual stimuli with motor responses to initiate eye movements. The paramedian pontine reticular formation (PPRF) will initiate a rapid eye movement, or saccade, to bring the eyes to bear on a visual stimulus quickly. These areas are connected to the oculomotor, trochlear, and abducens nuclei by the medial longitudinal fasciculus (MLF) that runs through the majority of the brain stem. The MLF allows for conjugate gaze, or the movement of the eyes in the same direction, during horizontal movements that require the lateral and medial rectus muscles. Control of conjugate gaze strictly in the vertical direction is contained within the oculomotor complex. To elevate the eyes, the oculomotor nerve on either side stimulates the contraction of both superior rectus muscles; to depress the eyes, the oculomotor nerve on either side stimulates the contraction of both inferior rectus muscles. Purely vertical movements of the eyes are not very common. Movements are often at an angle, so some horizontal components are necessary, adding the medial and lateral rectus muscles to the movement. Notice that the saccade paths traced in Figure 16.9 are not strictly vertical. The movements between the nose and the mouth come closest, but still have a slant to them. Also, the superior and inferior rectus muscles are not perfectly oriented with the line of sight. The origin for both muscles is medial to their insertions, so elevation and depression may require the lateral rectus muscles to compensate for the slight adduction inherent in the contraction of those muscles, requiring MLF activity as well. Testing eye movement is simply a matter of having the patient track the tip of a pen as it is passed through the visual field. This may appear similar to testing visual field deficits related to the optic nerve, but the difference is that in the visual field test the patient is asked not to move the eyes while the examiner moves a stimulus into the peripheral visual field. Here, the extent of movement is the point of the test. The examiner is watching for conjugate movements representing proper function of the related nuclei and the MLF. Failure of one eye to abduct while the other adducts in a horizontal movement is referred to as internuclear ophthalmoplegia. When this occurs, the patient will experience diplopia, or double vision, as the two eyes are temporarily pointed at different stimuli. Diplopia is not restricted to failure of the lateral rectus, because any of the extraocular muscles may fail to move one eye in perfect conjugation with the other. The final aspect of testing eye movements is to move the tip of the pen in toward the patient's face. As visual stimuli move closer to the face, the two medial recti muscles cause the eyes to move in the one nonconjugate movement that is part of gaze control. When the two eyes move to look at something closer to the face, they both adduct, which is referred to as convergence. To keep the stimulus in focus, the eye also needs to change the shape of the lens, which is controlled through the parasympathetic fibers of the oculomotor nerve. The change in focal power of the eye is referred to as accommodation. Accommodation ability changes with age; focusing on nearer objects, such as the written text of a book or a computer screen, may require corrective lenses later in life.
Coordination of the skeletal muscles for convergence and coordination of the smooth muscles of the ciliary body for accommodation are referred to as the accommodation–convergence reflex. A crucial function of the cranial nerves is to keep visual stimuli centered on the fovea of the retina. The vestibulo-ocular reflex (VOR) coordinates all of the components (Figure 16.10), both sensory and motor, that make this possible. If the head rotates in one direction—for example, to the right—the horizontal pair of semicircular canals in the inner ear indicate the movement by increased activity on the right and decreased activity on the left. The information is sent to the abducens nuclei and oculomotor nuclei on either side to coordinate the lateral and medial rectus muscles. The left lateral rectus and right medial rectus muscles will contract, rotating the eyes in the opposite direction of the head, while nuclei controlling the right lateral rectus and left medial rectus muscles will be inhibited to reduce antagonism of the contracting muscles. These actions stabilize the visual field by compensating for the head rotation with opposite rotation of the eyes in the orbits. Deficits in the VOR may be related to vestibular damage, such as in Ménière's disease, or from dorsal brain stem damage that would affect the eye movement nuclei or their connections through the MLF.

Nerves of the Face and Oral Cavity
An iconic part of a doctor's visit is the inspection of the oral cavity and pharynx, suggested by the directive to "open your mouth and say 'ah.'" This is followed by inspection, with the aid of a tongue depressor, of the back of the mouth, or the opening of the oral cavity into the pharynx known as the fauces. Whereas this portion of a medical exam inspects for signs of infection, such as in tonsillitis, it is also the means to test the functions of the cranial nerves that are associated with the oral cavity. The facial and glossopharyngeal nerves convey gustatory stimulation to the brain. Testing this is as simple as introducing salty, sour, bitter, or sweet stimuli to either side of the tongue. The patient should respond to the taste stimulus before retracting the tongue into the mouth; once the tongue is retracted, the stimulus dissolves into the saliva and spreads, and may stimulate taste buds connected to either the left or right nerves, masking any lateral deficits. Along with taste, the glossopharyngeal nerve relays general sensations from the pharyngeal walls. These sensations, along with certain taste stimuli, can stimulate the gag reflex. If the examiner moves the tongue depressor to contact the lateral wall of the fauces, this should elicit the gag reflex. Stimulation of either side of the fauces should elicit an equivalent response. The motor response, through contraction of the muscles of the pharynx, is mediated through the vagus nerve. Normally, the vagus nerve is considered autonomic in nature; however, it also directly stimulates the contraction of skeletal muscles in the pharynx and larynx to contribute to the swallowing and speech functions. Further testing of vagus motor function has the patient repeating consonant sounds that require movement of the muscles around the fauces. The patient is asked to say "lah-kah-pah" or a similar set of alternating sounds while the examiner observes the movements of the soft palate and arches between the palate and tongue. The facial and glossopharyngeal nerves are also responsible for the initiation of salivation.
Neurons in the salivary nuclei of the medulla project through these two nerves as preganglionic fibers, and synapse in ganglia located in the head. The parasympathetic fibers of the facial nerve synapse in the submandibular ganglion, which projects to the submandibular gland and sublingual gland. The parasympathetic fibers of the glossopharyngeal nerve synapse in the otic ganglion, which projects to the parotid gland. Salivation in response to food in the oral cavity is based on a visceral reflex arc within the facial or glossopharyngeal nerves. Salivation in response to other stimuli, such as the smell and sight of food, is coordinated through the hypothalamus. The hypoglossal nerve is the motor nerve that controls the muscles of the tongue, except for the palatoglossus muscle, which is controlled by the vagus nerve. There are two sets of muscles of the tongue. The extrinsic muscles of the tongue are connected to other structures, whereas the intrinsic muscles of the tongue are completely contained within the lingual tissues. While examining the oral cavity, movement of the tongue will indicate whether hypoglossal function is impaired. The test for hypoglossal function is the "stick out your tongue" part of the exam. The genioglossus muscle is responsible for protrusion of the tongue. If the hypoglossal nerves on both sides are working properly, then the tongue will stick straight out. If the nerve on one side has a deficit, the tongue will stick out to that side—pointing to the side with damage. Loss of function of the tongue can interfere with speech and swallowing. Additionally, because the location of the hypoglossal nerve and nucleus is near the cardiovascular center, inspiratory and expiratory areas for respiration, and the vagus nuclei that regulate digestive functions, a tongue that protrudes incorrectly can suggest damage in adjacent structures that have nothing to do with controlling the tongue.

Interactive Link
Watch this short video to see an examination of the facial nerve using some simple tests. The facial nerve controls the muscles of facial expression. Severe deficits will be obvious in watching someone use those muscles for normal control. One side of the face might not move like the other side. But directed tests, especially for contraction against resistance, require a formal testing of the muscles. The muscles of the upper and lower face need to be tested. The strength test in this video involves the patient squeezing her eyes shut and the examiner trying to pry her eyes open. Why does the examiner ask her to try a second time?

Motor Nerves of the Neck
The accessory nerve, also referred to as the spinal accessory nerve, innervates the sternocleidomastoid and trapezius muscles (Figure 16.11). When both sternocleidomastoids contract, the head flexes forward; individually, they cause rotation to the opposite side. The trapezius can act as an antagonist, causing extension and hyperextension of the neck. These two superficial muscles are important for changing the position of the head. Both muscles also receive input from cervical spinal nerves. Along with the spinal accessory nerve, these nerves contribute to elevating the scapula and clavicle through the trapezius, which is tested by asking the patient to shrug both shoulders while the examiner watches for asymmetry. For the sternocleidomastoid, those spinal nerves are primarily sensory projections, whereas the trapezius also has lateral insertions to the clavicle and scapula, and receives motor input from the spinal cord.
Calling the nerve the spinal accessory nerve suggests that it is aiding the spinal nerves. Though that is not precisely how the name originated, it does help make the association between the function of this nerve in controlling these muscles and the role these muscles play in movements of the trunk or shoulders. To test these muscles, the patient is asked to flex and extend the neck or shrug the shoulders against resistance, testing the strength of the muscles. Lateral flexion of the neck toward the shoulder tests both at the same time. Any difference on one side versus the other would suggest damage on the weaker side. These strength tests are common for the skeletal muscles controlled by spinal nerves and are a significant component of the motor exam. Deficits associated with the accessory nerve may have an effect on orienting the head, as described with the VOR.

Homeostatic Imbalances: The Pupillary Light Response
The autonomic control of pupillary size in response to a bright light involves the sensory input of the optic nerve and the parasympathetic motor output of the oculomotor nerve. When light hits the retina, specialized photosensitive ganglion cells send a signal along the optic nerve to the pretectal nucleus in the superior midbrain. A neuron from this nucleus projects to the Edinger–Westphal nuclei in the oculomotor complex on both sides of the midbrain. Neurons in this nucleus give rise to the preganglionic parasympathetic fibers that project through the oculomotor nerve to the ciliary ganglion in the posterior orbit. The postganglionic parasympathetic fibers from the ganglion project to the iris, where they release acetylcholine onto circular fibers that constrict the pupil to reduce the amount of light hitting the retina. The sympathetic nervous system is responsible for dilating the pupil when light levels are low. The efferent limb of the pupillary light reflex is bilateral: light shined in one eye elicits constriction of that pupil (the direct reflex) as well as constriction of the contralateral pupil (the consensual reflex). Shining a penlight in the eye of a patient is a very artificial situation, as both eyes are normally exposed to the same light sources. Testing this reflex can illustrate whether the optic nerve or the oculomotor nerve is damaged. If shining the light in one eye results in no changes in pupillary size but shining light in the opposite eye elicits a normal, bilateral response, the damage is associated with the optic nerve on the nonresponsive side. If light in either eye elicits a response in only one eye, the problem is with the oculomotor system. If light in the right eye only causes the left pupil to constrict, the direct reflex is lost and the consensual reflex is intact, which means that the right oculomotor nerve (or Edinger–Westphal nucleus) is damaged. Damage to the right oculomotor connections will also be evident when light is shined in the left eye: in that case, the direct reflex is intact but the consensual reflex is lost, meaning that the left pupil will constrict while the right does not.
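The localization reasoning above is essentially a small decision table: for light in each eye, note whether the direct and consensual responses are present. The sketch below captures that logic under a deliberately simplified single-lesion assumption; the function and data names are hypothetical, not a clinical algorithm.

```python
def localize_pupil_lesion(responses):
    """responses maps the lit eye ('left'/'right') to a pair of booleans:
    (direct_constricts, consensual_constricts), as described in the text.

    Simplified single-lesion logic: no response at all from one eye, with a
    normal bilateral response from the other, implicates that eye's optic
    nerve (afferent limb); a lost direct reflex with an intact consensual
    reflex implicates the lit eye's oculomotor output (efferent limb).
    """
    for lit_eye, (direct, consensual) in responses.items():
        other = "right" if lit_eye == "left" else "left"
        if not direct and not consensual:
            other_direct, other_consensual = responses[other]
            if other_direct and other_consensual:
                return f"optic nerve damage on the {lit_eye}"
        if not direct and consensual:
            return f"oculomotor (efferent) damage on the {lit_eye}"
    return "no single-lesion pattern identified"

# Light in the right eye constricts only the left pupil, and light in the
# left eye constricts only the left pupil: right oculomotor damage.
print(localize_pupil_lesion({"right": (False, True), "left": (True, False)}))

# Light in the left eye does nothing, but light in the right eye gives a
# normal bilateral response: left optic nerve damage.
print(localize_pupil_lesion({"left": (False, False), "right": (True, True)}))
```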
The Cranial Nerve Exam
The cranial nerves can be separated into four major groups associated with the subtests of the cranial nerve exam. First are the sensory nerves, then the nerves that control eye movement, the nerves of the oral cavity and superior pharynx, and the nerve that controls movements of the neck. The olfactory, optic, and vestibulocochlear nerves are strictly sensory nerves for smell, sight, and balance and hearing, whereas the trigeminal, facial, and glossopharyngeal nerves carry somatosensation of the face, and taste—separated between the anterior two-thirds of the tongue and the posterior one-third. Special senses are tested by presenting the particular stimuli to each receptive organ. General senses can be tested through sensory discrimination of touch versus painful stimuli. The oculomotor, trochlear, and abducens nerves control the extraocular muscles and are connected by the medial longitudinal fasciculus to coordinate gaze. Testing conjugate gaze is as simple as having the patient follow a visual target, like a pen tip, through the visual field, ending with an approach toward the face to test convergence and accommodation. Along with the vestibular functions of the eighth nerve, the vestibulo-ocular reflex stabilizes gaze during head movements by coordinating equilibrium sensations with the eye movement systems. The trigeminal nerve controls the muscles of chewing, which are tested for stretch reflexes. Motor functions of the facial nerve are usually obvious if facial expressions are compromised, but can be tested by having the patient raise their eyebrows, smile, and frown. Movements of the tongue, soft palate, or superior pharynx can be observed directly while the patient swallows, while the gag reflex is elicited, or while the patient says repetitive consonant sounds. The motor component of the gag reflex is largely controlled by fibers in the vagus nerve and constitutes a practical test of that nerve, because its other, parasympathetic functions are involved in visceral regulation, such as regulating the heartbeat and digestion. Movement of the head and neck using the sternocleidomastoid and trapezius muscles is controlled by the accessory nerve. Flexing the neck and strength testing of those muscles review the function of that nerve.

16.4 The Sensory and Motor Exams

Learning Objectives
By the end of this section, you will be able to:
- Describe the arrangement of sensory and motor regions in the spinal cord
- Relate damage in the spinal cord to sensory or motor deficits
- Differentiate between upper motor neuron and lower motor neuron diseases
- Describe the clinical indications of common reflexes

Connections between the body and the CNS occur through the spinal cord. The cranial nerves connect the head and neck directly to the brain, but the spinal cord receives sensory input and sends motor commands out to the body through the spinal nerves. Whereas the brain develops into a complex series of nuclei and fiber tracts, the spinal cord remains relatively simple in its configuration (Figure 16.12). From the initial neural tube early in embryonic development, the spinal cord retains a tube-like structure with gray matter surrounding the small central canal and white matter on the surface in three columns. The dorsal, or posterior, horns of the gray matter are mainly devoted to sensory functions, whereas the ventral, or anterior, and lateral horns are associated with motor functions. In the white matter, the dorsal column relays sensory information to the brain, and the anterior column almost exclusively relays motor commands to the ventral horn motor neurons. The lateral column, however, conveys both sensory and motor information between the spinal cord and brain.
Sensory Modalities and Location The general senses are distributed throughout the body, relying on nervous tissue incorporated into various organs. Somatic senses are incorporated mostly into the skin, muscles, or tendons, whereas the visceral senses come from nervous tissue incorporated into the majority of organs such as the heart or stomach. The somatic senses are those that usually make up the conscious perception of how the body interacts with the environment. The visceral senses are most often below the limit of conscious perception because they are involved in homeostatic regulation through the autonomic nervous system. The sensory exam tests the somatic senses, meaning those that are consciously perceived. Testing of the senses begins with examining the regions known as dermatomes that connect to the cortical region where somatosensation is perceived in the postcentral gyrus. To test the sensory fields, a simple stimulus of the light touch of the soft end of a cotton-tipped applicator is applied at various locations on the skin. The spinal nerves, which contain sensory fibers with dendritic endings in the skin, connect with the skin in a topographically organized manner, illustrated as dermatomes ( Figure 16.13 ). For example, the fibers of the eighth cervical nerve innervate the medial surface of the forearm and extend out to the fingers. In addition to testing perception at different positions on the skin, it is necessary to test sensory perception within the dermatome from distal to proximal locations in the appendages, or lateral to medial locations in the trunk. In testing the eighth cervical nerve, the patient would be asked if the touch of the cotton to the fingers or the medial forearm was perceptible, and whether there were any differences in the sensations. Other modalities of somatosensation can be tested using a few simple tools. The perception of pain can be tested using the broken end of the cotton-tipped applicator. The perception of vibratory stimuli can be tested using an oscillating tuning fork placed against prominent bone features such as the distal head of the ulna on the medial aspect of the wrist. When the tuning fork is still, the metal against the skin can be perceived as a cold stimulus. Using the cotton tip of the applicator, or even just a fingertip, the perception of tactile movement can be assessed as the stimulus is drawn across the skin for approximately 2–3 cm. The patient would be asked in what direction the stimulus is moving. All of these tests are repeated in distal and proximal locations and for different dermatomes to assess the spatial specificity of perception. The sense of position and motion, proprioception, is tested by moving the fingers or toes and asking the patient if they sense the movement. If the distal locations are not perceived, the test is repeated at increasingly proximal joints. The various stimuli used to test sensory input assess the function of the major ascending tracts of the spinal cord. The dorsal column pathway conveys fine touch, vibration, and proprioceptive information, whereas the spinothalamic pathway primarily conveys pain and temperature. Testing these stimuli provides information about whether these two major ascending pathways are functioning properly. Within the spinal cord, the two systems are segregated.
The dorsal column information ascends ipsilateral to the source of the stimulus and decussates in the medulla, whereas the spinothalamic pathway decussates at the level of entry and ascends contralaterally. The differing sensory stimuli are segregated in the spinal cord so that the various subtests for these stimuli can distinguish which ascending pathway may be damaged in certain situations. Whereas the basic sensory stimuli are assessed in the subtests directed at each submodality of somatosensation, testing the ability to discriminate sensations is important. Pairing the light touch and pain subtests together makes it possible to compare the two submodalities, and therefore the two major ascending tracts, at the same time. Mistaking painful stimuli for light touch, or vice versa, may point to errors in ascending projections, such as in a hemisection of the spinal cord that might come from a motor vehicle accident. Another issue of sensory discrimination is not distinguishing between different submodalities, but rather location. The two-point discrimination subtest highlights the density of sensory endings, and therefore receptive fields in the skin. The sensitivity to fine touch, which can give indications of the texture and detailed shape of objects, is highest in the fingertips. To assess the limit of this sensitivity, two-point discrimination is measured by simultaneously touching the skin in two locations, such as could be accomplished with a pair of forceps. Specialized calipers for precisely measuring the distance between points are also available. The patient is asked to indicate whether one or two stimuli are present while keeping their eyes closed. The examiner will switch between using the two points and a single point as the stimulus. Failure to recognize two points may be an indication of a dorsal column pathway deficit. Similar to two-point discrimination, but assessing laterality of perception, is double simultaneous stimulation. Two stimuli, such as the cotton tips of two applicators, are touched to the same position on both sides of the body. If one side is not perceived, this may indicate damage to the contralateral posterior parietal lobe. Because there is one of each pathway on either side of the spinal cord, they are not likely to interact. If none of the other subtests suggest particular deficits with the pathways, the deficit is likely to be in the cortex where conscious perception is based. The mental status exam contains subtests that assess other functions that are primarily localized to the parietal cortex, such as stereognosis and graphesthesia. A final subtest of sensory perception that concentrates on the sense of proprioception is known as the Romberg test. The patient is asked to stand straight with feet together. Once the patient has achieved their balance in that position, they are asked to close their eyes. Without visual feedback that the body is in a vertical orientation relative to the surrounding environment, the patient must rely on the proprioceptive stimuli of joint and muscle position, as well as information from the inner ear, to maintain balance. This test can indicate deficits in dorsal column pathway proprioception, as well as problems with proprioceptive projections to the cerebellum through the spinocerebellar tract. Interactive Link Watch this video to see a quick demonstration of two-point discrimination.
Touching a specialized caliper to the surface of the skin will measure the distance between two points that are perceived as distinct stimuli versus a single stimulus. The patient keeps their eyes closed while the examiner switches between using both points of the caliper or just one. The patient then must indicate whether one or two stimuli are in contact with the skin. Why is the distance between the caliper points closer on the fingertips as opposed to the palm of the hand? And what do you think the distance would be on the arm, or the shoulder? Muscle Strength and Voluntary Movement The skeletomotor system is largely based on the simple, two-cell projection from the precentral gyrus of the frontal lobe to the skeletal muscles. The corticospinal tract represents the neurons that send output from the primary motor cortex. These fibers travel through the deep white matter of the cerebrum, then through the midbrain and pons, into the medulla where most of them decussate, and finally through the spinal cord white matter in the lateral (crossed fibers) or anterior (uncrossed fibers) columns. These fibers synapse on motor neurons in the ventral horn. The ventral horn motor neurons then project to skeletal muscle and cause contraction. These two cells are termed the upper motor neuron (UMN) and the lower motor neuron (LMN). Voluntary movements require these two cells to be active. The motor exam tests the function of these neurons and the muscles they control. First, the muscles are inspected and palpated for signs of structural irregularities. Movement disorders may be the result of changes to the muscle tissue, such as scarring, and these possibilities need to be ruled out before testing function. Along with this inspection, muscle tone is assessed by moving the muscles through a passive range of motion. The arm is moved at the elbow and wrist, and the leg is moved at the knee and ankle. Skeletal muscle should have a resting tension representing a slight contraction of the fibers. The lack of muscle tone, known as hypotonicity or flaccidity, may indicate that the LMN is not conducting action potentials that will keep a basal level of acetylcholine in the neuromuscular junction. If muscle tone is present, muscle strength is tested by having the patient contract muscles against resistance. The examiner will ask the patient to lift the arm, for example, while the examiner is pushing down on it. This is done for both limbs, including shrugging the shoulders. Lateral differences in strength—being able to push against resistance with the right arm but not the left—would indicate a deficit in one corticospinal tract versus the other. An overall loss of strength, without laterality, could indicate a global problem with the motor system. Diseases that result in UMN lesions include cerebral palsy and MS, and UMN damage may also be the result of a stroke. A sign of a UMN lesion is a positive result in the subtest for pronator drift. The patient is asked to extend both arms in front of the body with the palms facing up. If, while keeping the eyes closed, the patient unconsciously allows one or the other arm to slowly drift toward the pronated position, this could indicate a failure of the motor system to maintain the supinated position. Reflexes Reflexes combine the spinal sensory and motor components with a sensory input that directly generates a motor response. The reflexes that are tested in the neurological exam are classified into two groups.
A deep tendon reflex is commonly known as a stretch reflex, and is elicited by a strong tap to a tendon, such as in the knee-jerk reflex. A superficial reflex is elicited through gentle stimulation of the skin and causes contraction of the associated muscles. For the arm, the common reflexes to test are of the biceps, brachioradialis, triceps, and flexors for the digits. For the leg, the knee-jerk reflex of the quadriceps is common, as is the ankle reflex for the gastrocnemius and soleus. The tendon at the insertion for each of these muscles is struck with a rubber mallet. The muscle is quickly stretched, resulting in activation of the muscle spindle that sends a signal into the spinal cord through the dorsal root. The fiber synapses directly on the ventral horn motor neuron that activates the muscle, causing contraction. The reflexes are physiologically useful for stability. If a muscle is stretched, it reflexively contracts to compensate for the change in length. In the context of the neurological exam, reflexes indicate that the LMN is functioning properly. The most common superficial reflex in the neurological exam is the plantar reflex that tests for the Babinski sign on the basis of the extension or flexion of the toes at the plantar surface of the foot. The plantar reflex is commonly tested in newborn infants to establish the presence of neuromuscular function. To elicit this reflex, an examiner brushes a stimulus, usually the examiner’s fingertip, along the plantar surface of the infant’s foot. An infant would present a positive Babinski sign, meaning the foot dorsiflexes and the toes extend and splay out. As a person learns to walk, the plantar reflex changes to cause curling of the toes and a moderate plantar flexion. If superficial stimulation of the sole of the foot caused extension of the foot, keeping one’s balance would be harder. The descending input of the corticospinal tract modifies the response of the plantar reflex, meaning that a negative Babinski sign is the expected response in testing the reflex. Other superficial reflexes are not commonly tested, though a series of abdominal reflexes can target function in the lower thoracic spinal segments. Interactive Link Watch this video to see how to test reflexes in the abdomen. Testing reflexes of the trunk is not commonly performed in the neurological exam, but if findings suggest a problem with the thoracic segments of the spinal cord, a series of superficial reflexes of the abdomen can localize function to those segments. If contraction is not observed when the skin lateral to the umbilicus (belly button) is stimulated, what level of the spinal cord may be damaged? Comparison of Upper and Lower Motor Neuron Damage Many of the tests of motor function can indicate differences that will address whether damage to the motor system is in the upper or lower motor neurons. Signs that suggest a UMN lesion include muscle weakness, strong deep tendon reflexes, decreased control of movement or slowness, pronator drift, a positive Babinski sign, spasticity, and the clasp-knife response. Spasticity is an excess contraction in resistance to stretch. It can result in hyperflexia, in which joints are overly flexed. The clasp-knife response occurs when the patient initially resists movement, but then releases, and the joint will quickly flex like a pocket knife closing. A lesion on the LMN would result in paralysis, or at least partial loss of voluntary muscle control, which is known as paresis.
The paralysis observed in LMN diseases is referred to as flaccid paralysis, referring to a complete or partial loss of muscle tone, in contrast to the loss of control in UMN lesions in which tone is retained and spasticity is exhibited. Other signs of an LMN lesion are fibrillation, fasciculation, and compromised or lost reflexes resulting from the denervation of the muscle fibers. Disorders of the... Spinal Cord In certain situations, such as a motorcycle accident, only half of the spinal cord may be damaged in what is known as a hemisection. Forceful trauma to the trunk may cause ribs or vertebrae to fracture, and debris can crush or section through part of the spinal cord. A full section of the spinal cord would result in paraplegia, or loss of voluntary motor control of the lower body, as well as loss of sensations from that point down. A hemisection, however, will leave spinal cord tracts intact on one side. The resulting condition would be hemiplegia on the side of the trauma—one leg would be paralyzed. The sensory results are more complicated. The ascending tracts in the spinal cord are segregated between the dorsal column and spinothalamic pathways. This means that the sensory deficits will be based on the particular sensory information each pathway conveys. Sensory discrimination between touch and painful stimuli will illustrate the difference in how these pathways divide these functions. On the paralyzed leg, a patient will acknowledge painful stimuli, but not fine touch or proprioceptive sensations. On the functional leg, the opposite is true. The reason for this is that the dorsal column pathway ascends ipsilateral to the sensation, so it would be damaged the same way as the lateral corticospinal tract. The spinothalamic pathway decussates immediately upon entering the spinal cord and ascends contralateral to the source; it would therefore bypass the hemisection. The motor system can indicate the loss of input to the ventral horn in the lumbar enlargement where motor neurons to the leg are found, but motor function in the trunk is less clear. The left and right anterior corticospinal tracts are directly adjacent to each other. Trauma to the spinal cord that results in a hemisection affecting one anterior column, but not the other, is very unlikely. Either the axial musculature will not be affected at all, or there will be bilateral losses in the trunk. Sensory discrimination can pinpoint the level of damage in the spinal cord. Below the hemisection, pain stimuli will be perceived on the damaged side, but not fine touch. The opposite is true on the other side. The pain fibers on the side with motor function cross the midline in the spinal cord and ascend in the contralateral lateral column as far as the hemisection. The dorsal column will be intact ipsilateral to the source on the intact side and reach the brain for conscious perception. The trauma would be at the level just before sensory discrimination returns to normal, helping to pinpoint the injury. Whereas imaging technology, like magnetic resonance imaging (MRI) or computed tomography (CT) scanning, could localize the injury as well, nothing more complicated than a cotton-tipped applicator can localize the damage. That may be all that is available on the scene when moving the victim requires crucial decisions be made.
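The bedside reasoning described above, comparing pain and fine-touch perception on each side below the injury, amounts to a small decision table. The Python sketch below is purely illustrative, assuming idealized all-or-none findings; the function name and string encoding are hypothetical, not a clinical tool.

```python
# Below a hemisection, fine touch (dorsal column, ascending ipsilaterally) is
# lost on the side of the lesion, while pain (spinothalamic, crossing at the
# level of entry) is lost on the opposite side.

def suspected_hemisection_side(fine_touch_lost, pain_lost):
    """Infer the side of a suspected hemisection from sensory findings
    below the injury. Each argument is 'left' or 'right'."""
    if fine_touch_lost == pain_lost:
        # A hemisection predicts dissociated findings; matching sides
        # point to something other than a simple hemisection.
        return "findings not consistent with a simple hemisection"
    # The dorsal column pathway is damaged ipsilateral to the lesion,
    # so the side that lost fine touch is the side of the hemisection.
    return fine_touch_lost

# Example: fine touch lost on the right and pain lost on the left suggests a
# right-sided hemisection, with paralysis expected in the right leg.
print(suspected_hemisection_side("right", "left"))  # -> right
```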
16.5 The Coordination and Gait Exams Learning Objectives By the end of this section, you will be able to: Explain the relationship between the location of the cerebellum and its function in movement Chart the major divisions of the cerebellum List the major connections of the cerebellum Describe the relationship of the cerebellum to axial and appendicular musculature Explain the prevalent causes of cerebellar ataxia The role of the cerebellum is a subject of debate. There is an obvious connection to motor function based on the clinical implications of cerebellar damage. There is also strong evidence of the cerebellar role in procedural memory. The two are not incompatible; in fact, procedural memory is motor memory, such as learning to ride a bicycle. Significant work has been performed to describe the connections within the cerebellum that result in learning. A model for this learning is classical conditioning, as shown by the famous dogs from the physiologist Ivan Pavlov’s work. This classical conditioning, which can be related to motor learning, fits with the neural connections of the cerebellum. The cerebellum is 10 percent of the mass of the brain and has varied functions that all point to a role in the motor system. Location and Connections of the Cerebellum The cerebellum is located in apposition to the dorsal surface of the brain stem, centered on the pons. The name of the pons is derived from its connection to the cerebellum. The word means “bridge” and refers to the thick bundle of myelinated axons that form a bulge on its ventral surface. Those fibers are axons that project from the gray matter of the pons into the contralateral cerebellar cortex. These fibers make up the middle cerebellar peduncle (MCP) and are the major physical connection of the cerebellum to the brain stem ( Figure 16.14 ). Two other white matter bundles connect the cerebellum to the other regions of the brain stem. The superior cerebellar peduncle (SCP) is the connection of the cerebellum to the midbrain and forebrain. The inferior cerebellar peduncle (ICP) is the connection to the medulla. These connections can also be broadly described by their functions. The ICP conveys sensory input to the cerebellum, partially from the spinocerebellar tract, but also through fibers of the inferior olive. The MCP is part of the cortico-ponto-cerebellar pathway that connects the cerebral cortex with the cerebellum and preferentially targets the lateral regions of the cerebellum. It includes a copy of the motor commands sent from the precentral gyrus through the corticospinal tract, arising from collateral branches that synapse in the gray matter of the pons, along with input from other regions such as the visual cortex. The SCP is the major output of the cerebellum, divided between the red nucleus in the midbrain and the thalamus, which will return cerebellar processing to the motor cortex. These connections describe a circuit that compares motor commands and sensory feedback to generate a new output. These comparisons make it possible to coordinate movements. If the cerebral cortex sends a motor command to initiate walking, that command is copied by the pons and sent into the cerebellum through the MCP. Sensory feedback in the form of proprioception from the spinal cord, as well as vestibular sensations from the inner ear, enters through the ICP. If you take a step and begin to slip on the floor because it is wet, the output from the cerebellum—through the SCP—can correct for that and keep you balanced and moving.
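This comparator arrangement, an efference copy arriving through the MCP, sensory feedback arriving through the ICP, and a corrective output leaving through the SCP, behaves like a simple error-correcting feedback loop. The sketch below is a loose engineering analogy in Python, not a biological model; the function name, units, and gain value are hypothetical.

```python
# A minimal feedback-correction analogy: compare the copied motor command
# (arriving via the MCP) with the sensed result (arriving via the ICP) and
# emit a correction (leaving via the SCP) proportional to the mismatch.

def cerebellar_correction(command_copy, sensed_movement, gain=0.5):
    """Return a corrective signal proportional to the error between the
    intended and the sensed movement (arbitrary units)."""
    error = command_copy - sensed_movement
    return gain * error

# Example: the intended step was 1.0 unit, but the foot slipped and only 0.6
# was achieved; a positive correction nudges the ongoing movement.
print(cerebellar_correction(1.0, 0.6))  # -> 0.2
```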
The red nucleus sends new motor commands to the spinal cord through the rubrospinal tract. The cerebellum is divided into regions that are based on the particular functions and connections involved. The midline regions of the cerebellum, the vermis and flocculonodular lobe, are involved in comparing visual information, equilibrium, and proprioceptive feedback to maintain balance and coordinate movements such as walking, or gait, through the descending output of the red nucleus ( Figure 16.15 ). The lateral hemispheres are primarily concerned with planning motor functions through frontal lobe inputs that are returned through the thalamic projections back to the premotor and motor cortices. Processing in the midline regions targets movements of the axial musculature, whereas the lateral regions target movements of the appendicular musculature. The vermis is referred to as the spinocerebellum because it primarily receives input from the dorsal columns and spinocerebellar pathways. The flocculonodular lobe is referred to as the vestibulocerebellum because of the vestibular projection into that region. Finally, the lateral cerebellum is referred to as the cerebrocerebellum, reflecting the significant input from the cerebral cortex through the cortico-ponto-cerebellar pathway. Coordination and Alternating Movement Testing for cerebellar function is the basis of the coordination exam. The subtests target appendicular musculature, controlling the limbs, and axial musculature for posture and gait. The assessment of cerebellar function will depend on the normal functioning of other systems addressed in previous sections of the neurological exam. Motor control from the cerebrum, as well as sensory input from somatic, visual, and vestibular senses, is important to cerebellar function. The subtests that address appendicular musculature, and therefore the lateral regions of the cerebellum, begin with a check for tremor. The patient extends their arms in front of them and holds the position. The examiner watches for the presence of tremors that would not be present if the muscles are relaxed. By pushing down on the arms in this position, the examiner can check for the rebound response, in which the arms are automatically brought back to the extended position. The extension of the arms is an ongoing motor process, and the tap or push on the arms presents a change in the proprioceptive feedback. The cerebellum compares the cerebral motor command with the proprioceptive feedback and adjusts the descending input to correct. The red nucleus would send an additional signal to the LMN for the arm to increase contraction momentarily to overcome the change and regain the original position. The check reflex depends on cerebellar input to keep increased contraction from continuing after the removal of resistance. The patient flexes the elbow against resistance from the examiner, who is attempting to extend it. When the examiner releases the arm, the patient should be able to stop the increased contraction and keep the arm from moving. A similar response would be seen if you try to pick up a coffee mug that you believe to be full but turns out to be empty. Without checking the contraction, the mug would be thrown from the overexertion of the muscles expecting to lift a heavier object. Several subtests of the cerebellum assess the ability to alternate movements, or switch between muscle groups that may be antagonistic to each other.
In the finger-to-nose test, the patient touches their finger to the examiner’s finger and then to their nose, and then back to the examiner’s finger, and back to the nose. The examiner moves the target finger to assess a range of movements. A similar test for the lower extremities has the patient touch their toe to a moving target, such as the examiner’s finger. Both of these tests involve flexion and extension around a joint—the elbow or the knee and the shoulder or hip—as well as movements of the wrist and ankle. The patient must switch between the opposing muscles, like the biceps and triceps brachii, to move their finger from the target to their nose. Coordinating these movements involves the motor cortex communicating with the cerebellum through the pons and feedback through the thalamus to plan the movements. Visual cortex information is also part of the processing that occurs in the cerebrocerebellum while it is involved in guiding movements of the finger or toe. Rapid, alternating movements are tested for the upper and lower extremities. The patient is asked to touch each finger to their thumb, or to pat the palm of one hand on the back of the other, and then flip that hand over and alternate back-and-forth. To test similar function in the lower extremities, the patient touches their heel to their shin near the knee and slides it down toward the ankle, and then back again, repetitively. Rapid, alternating movements are part of speech as well. A patient is asked to repeat the nonsense consonants “lah-kah-pah” to alternate movements of the tongue, lips, and palate. All of these rapid alternations require planning from the cerebrocerebellum to coordinate the movement commands that drive them. Posture and Gait Gait can either be considered a separate part of the neurological exam or a subtest of the coordination exam that addresses walking and balance. Testing posture and gait addresses functions of the spinocerebellum and the vestibulocerebellum because both are part of these activities. A subtest called station begins with the patient standing in a normal position to check for the placement of the feet and balance. The patient is asked to hop on one foot to assess the ability to maintain balance and posture during movement. Though the station subtest appears to be similar to the Romberg test, the difference is that the patient’s eyes are open during station. The Romberg test has the patient stand still with the eyes closed. Any changes in posture would be the result of proprioceptive deficits, and the patient is able to recover when they open their eyes. Subtests of walking begin with having the patient walk normally for a distance away from the examiner, and then turn and return to the starting position. The examiner watches for abnormal placement of the feet and the movement of the arms relative to the movement. The patient is then asked to walk with a few different variations. Tandem gait is when the patient places the heel of one foot against the toe of the other foot and walks in a straight line in that manner. Walking only on the heels or only on the toes will test additional aspects of balance. Ataxia A movement disorder of the cerebellum is referred to as ataxia. It presents as a loss of coordination in voluntary movements. Ataxia can also refer to sensory deficits that cause balance problems, primarily in proprioception and equilibrium. When the problem is observed in movement, it is ascribed to cerebellar damage.
Sensory and vestibular ataxia would likely also present with problems in gait and station. Ataxia is often the result of exposure to exogenous substances, focal lesions, or a genetic disorder. Focal lesions include strokes affecting the cerebellar arteries, tumors that may impinge on the cerebellum, trauma to the back of the head and neck, or MS. Alcohol intoxication or drugs such as ketamine cause ataxia, but these effects are often reversible. Mercury in fish can cause ataxia as well. Hereditary conditions can lead to degeneration of the cerebellum or spinal cord, as well as malformation of the brain, or the abnormal accumulation of copper seen in Wilson’s disease. Interactive Link Watch this short video to see a test for station. Station refers to the position a person adopts when they are standing still. The examiner would look for issues with balance, which depends on the cerebellum coordinating proprioceptive, vestibular, and visual information. To test the ability of a subject to maintain balance, asking them to stand or hop on one foot can be more demanding. The examiner may also push the subject to see if they can maintain balance. An abnormal finding in the test of station is the feet being placed far apart. Why would a wide stance suggest problems with cerebellar function? Everyday Connection The Field Sobriety Test The neurological exam has been described as a clinical tool throughout this chapter. It is also useful in other ways. A variation of the coordination exam is the Field Sobriety Test (FST) used to assess whether drivers are under the influence of alcohol. The cerebellum is crucial for coordinated movements such as keeping balance while walking, or moving appendicular musculature on the basis of proprioceptive feedback. The cerebellum is also very sensitive to ethanol, the particular type of alcohol found in beer, wine, and liquor. Walking in a straight line involves comparing the motor command from the primary motor cortex to the proprioceptive and vestibular sensory feedback, as well as following the visual guide of the white line on the side of the road. When the cerebellum is compromised by alcohol, it cannot coordinate these movements effectively, and maintaining balance becomes difficult. Another common aspect of the FST is to have the driver extend their arms out wide and touch their fingertip to their nose, usually with their eyes closed. The point of this is to remove the visual feedback for the movement and force the driver to rely just on proprioceptive information about the movement and position of their fingertip relative to their nose. With eyes open, the corrections to the movement of the arm might be so small as to be hard to see, but proprioceptive feedback is not as immediate and broader movements of the arm will probably be needed, particularly if the cerebellum is affected by alcohol. Reciting the alphabet backwards is not always a component of the FST, but its relationship to neurological function is interesting. There is a cognitive aspect to remembering how the alphabet goes and how to recite it backwards. That is actually a variation of the mental status subtest of repeating the months backwards. However, the cerebellum is important because speech production is a coordinated activity. The speech rapid alternating movement subtest specifically uses the consonant changes of “lah-kah-pah” to assess coordinated movements of the lips, tongue, pharynx, and palate.
But the entire alphabet, especially in the unrehearsed backwards order, pushes this type of coordinated movement quite far. This demand is related to the reason that speech becomes slurred when a person is intoxicated.
american_government
Summary 7.1 Voter Registration Voter registration varies from state to state, depending on local culture and concerns. In an attempt to stop the disenfranchisement of black voters, Congress passed the Voting Rights Act (1965), which prohibited states from denying voting rights based on race, and the Supreme Court determined grandfather clauses and other restrictions were unconstitutional. Some states only require that a citizen be over eighteen and reside in the state. Others include additional requirements. Some states require registration to occur thirty days prior to an election, and some allow voters to register the same day as the election. Following the passage of the Help America Vote Act (2002), states are required to maintain accurate voter registration rolls and are working harder to register citizens and update records. Registering has become easier over the years; the National Voter Registration Act (1993) requires states to add voter registration to government applications, while an increasing number of states are implementing novel approaches such as online voter registration and automatic registration. 7.2 Voter Turnout Some believe a healthy democracy needs many participating citizens, while others argue that only informed citizens should vote. When turnout is calculated as a percentage of the voting-age population (VAP), it often appears that just over half of U.S. citizens vote. Using the voting-eligible population (VEP) yields a slightly higher number, and the highest turnout, 87 percent, is calculated as a percentage of registered voters. Citizens older than sixty-five and those with a high income and advanced education are very likely to vote. Those younger than thirty years old, especially if still in school and earning low income, are less likely to vote. Hurdles in a state’s registration system and a high number of yearly elections may also decrease turnout. Some states have turned to early voting and mail-only ballots as ways to combat the limitations of one-day and weekday voting. The Supreme Court’s decision in Shelby County v. Holder led to states’ removal from the Voting Rights Act’s preclearance list. Many of these states implemented changes to their election laws, including the requirement to show photo identification before voting. Globally, the United States experiences lower turnout than other nations; some countries automatically register citizens or require citizens to vote. 7.3 Elections The Federal Election Commission was created in an effort to control federal campaign donations and create transparency in campaign finance. Individuals and organizations have contribution limits, and candidates must disclose the source of their funds. However, decisions by the Supreme Court, such as Citizens United , have voided sections of the campaign finance law, and businesses and organizations may now run campaign ads and support candidates for office. The cases also resulted in the creation of super PACs, which can raise unlimited funds, provided they do not coordinate with candidates’ campaigns. The first stage in the election cycle is nomination, where parties determine who the party nominee will be. State political parties choose to hold either primaries or caucuses, depending on whether they want a fast and private ballot election or an informal, public caucus. Delegates from the local primaries and caucuses will go to state or national conventions to vote on behalf of local and state voters.
During the general election, candidates debate one another and run campaigns. Election Day is in early November, but the Electoral College formally elects the president in mid-December. Congressional incumbents often win or lose seats based on the popularity of their party’s president or presidential candidate. 7.4 Campaigns and Voting Campaigns must try to convince undecided voters to vote for a candidate and get the party voters to the polls. Early money allows candidates to start a strong campaign and attract other donations. The election year starts with primary campaigns, in which multiple candidates compete for each party’s nomination, and the focus is on name recognition and issue positions. General election campaigns focus on getting party members to the polls. Shadow campaigns and super PACs may run negative ads to influence voters. Modern campaigns use television to create emotions and the Internet to interact with supporters and fundraise. Most voters will cast a ballot for the candidate from their party. Others will consider the issues a candidate supports. Some voters care about what candidates have done in the past, or what they may do in the future, while others are concerned only about their personal finances. Lastly, some citizens will be concerned with the candidate’s physical characteristics. Incumbents have many advantages, including war chests, franking privileges, and gerrymandering. 7.5 Direct Democracy Direct democracy allows the voters in a state to write laws, amend constitutions, remove politicians from office, and approve decisions made by government. Initiatives are laws or constitutional amendments on the ballot. Referendums ask voters to approve a decision by the government. The process for ballot measures requires the collection of signatures from voters, approval of the measure by state government, and a ballot election. Recalls allow citizens to remove politicians from office. While direct democracy does give citizens a say in the policies and laws of their state, it can also be used by businesses and the wealthy to pass policy goals. Initiatives can also lead to bad policy if voters do not research the measure or misunderstand the law.
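The turnout arithmetic summarized in 7.2 is easy to make concrete: the same number of ballots yields very different turnout rates depending on the denominator. Here is a minimal Python sketch, using hypothetical round numbers (in millions) that only loosely echo the 2012 figures cited in the chapter; they are not official statistics.

```python
# Hypothetical round numbers, in millions, chosen only to show how the choice
# of denominator (total population, VAP, VEP, or registered voters) changes
# the reported turnout rate for the same number of ballots.
ballots_cast     = 129
total_population = 312  # everyone, including children and non-citizens
voting_age       = 235  # VAP: residents eighteen and older
voting_eligible  = 222  # VEP: VAP minus ineligible residents
registered       = 148  # citizens currently registered to vote

for label, denominator in [
    ("total population", total_population),
    ("voting-age population (VAP)", voting_age),
    ("voting-eligible population (VEP)", voting_eligible),
    ("registered voters", registered),
]:
    rate = 100 * ballots_cast / denominator
    print(f"turnout as a share of {label}: {rate:.0f}%")
```

Running this prints rates of roughly 41, 55, 58, and 87 percent, which illustrates why advocates of different views about democratic health reach for different denominators.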
Chapter Outline 7.1 Voter Registration 7.2 Voter Turnout 7.3 Elections 7.4 Campaigns and Voting 7.5 Direct Democracy Introduction The first Republican candidate to throw his hat into the ring for 2016, Ted Cruz had been preparing for his presidential run since 2013, when he went hunting in Iowa and vacationed in New Hampshire, both key states in the nomination process. 1 He had also strongly opposed the Affordable Care Act while showcasing his family side by reading Green Eggs and Ham aloud in a filibuster attack on the act. 2 If Cruz had been campaigning all along, why make a grand announcement at Liberty University in 2015? First, by officially declaring his candidacy at Liberty University, whose stated mission is to provide “a world-class education with a solid Christian foundation,” Cruz sought to demonstrate that his values were the same as those of the Christian students before him ( Figure 7.1 ). 3 Second, the speech reminded Christians to vote. As Cruz told the students, “imagine millions of young people coming together and standing together, saying ‘we will stand for liberty.’” 4 Like candidates for office at all levels of U.S. government, Cruz understood that campaigns must reach out to the voters and compel them to vote or the candidate will fail miserably. But what brings voters to the polls, and how do they make their voting decisions? Those are just two of the questions about voting and elections this chapter will explore.
[ { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Some attempts have been made to streamline voter registration . <hl> The National Voter Registration Act ( 1993 ) , often referred to as Motor Voter , was enacted to expedite the registration process and make it as simple as possible for voters . <hl> <hl> The act required states to allow citizens to register to vote when they sign up for driver ’ s licenses and Social Security benefits . <hl> On each government form , the citizen need only mark an additional box to also register to vote . Unfortunately , while increasing registrations by 7 percent between 1992 and 2012 , Motor Voter did not dramatically increase voter turnout . 13 In fact , for two years following the passage of the act , voter turnout decreased slightly . 14 It appears that the main users of the expedited system were those already intending to vote . One study , however , found that preregistration may have a different effect on youth than on the overall voter pool ; in Florida , it increased turnout of young voters by 13 percent . 15", "hl_sentences": "The National Voter Registration Act ( 1993 ) , often referred to as Motor Voter , was enacted to expedite the registration process and make it as simple as possible for voters . The act required states to allow citizens to register to vote when they sign up for driver ’ s licenses and Social Security benefits .", "question": { "cloze_format": "The ___ makes it easy for a citizen to register to vote.", "normal_format": "Which of the following makes it easy for a citizen to register to vote?", "question_choices": [ "grandfather clause", "lengthy residency requirement", "National Voter Registration Act", "competency requirement" ], "question_id": "fs-id1171472161832", "question_text": "Which of the following makes it easy for a citizen to register to vote?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "decrease election fraud" }, "bloom": null, "hl_context": "<hl> Other states have decided against online registration due to concerns about voter fraud and security . <hl> <hl> Legislators also argue that online registration makes it difficult to ensure that only citizens are registering and that they are registering in the correct precincts . <hl> As technology continues to update other areas of state recordkeeping , online registration may become easier and safer . In some areas , citizens have pressured the states and pushed the process along . A bill to move registration online in Florida stalled for over a year in the legislature , based on security concerns . With strong citizen support , however , it was passed and signed in 2015 , despite the governor ’ s lingering concerns . In other states , such as Texas , both the government and citizens are concerned about identity fraud , so traditional paper registration is still preferred .", "hl_sentences": "Other states have decided against online registration due to concerns about voter fraud and security . 
Legislators also argue that online registration makes it difficult to ensure that only citizens are registering and that they are registering in the correct precincts .", "question": { "cloze_format": "A reason to make voter registration more difficult is to ___ .", "normal_format": "Which of the following is a reason to make voter registration more difficult?", "question_choices": [ "increase voter turnout", "decrease election fraud", "decrease the cost of elections", "make the registration process faster" ], "question_id": "fs-id1171472100477", "question_text": "Which of the following is a reason to make voter registration more difficult?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> In 2015 , Oregon made news when it took the concept of Motor Voter further . <hl> <hl> When citizens turn eighteen , the state now automatically registers most of them using driver ’ s license and state identification information . <hl> When a citizen moves , the voter rolls are updated when the license is updated . While this policy has been controversial , with some arguing that private information may become public or that Oregon is moving toward mandatory voting , automatic registration is consistent with the state ’ s efforts to increase registration and turnout . 16", "hl_sentences": "In 2015 , Oregon made news when it took the concept of Motor Voter further . When citizens turn eighteen , the state now automatically registers most of them using driver ’ s license and state identification information .", "question": { "cloze_format": "An unusual step Oregon took to increase voter registration was that ___.", "normal_format": "What unusual step did Oregon take to increase voter registration?", "question_choices": [ "The state automatically registers all citizens over eighteen to vote.", "The state ended voter registration.", "The state sends every resident a voter registration ballot.", "The state allows online voter registration." ], "question_id": "fs-id1171472208064", "question_text": "What unusual step did Oregon take to increase voter registration?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "voting-age population" }, "bloom": null, "hl_context": "The last and smallest population is registered voters , who , as the name implies , are citizens currently registered to vote . Now we can appreciate how reports of voter turnout can vary . As Figure 7.6 shows , although 87 percent of registered voters voted in the 2012 presidential election , this represents only 42 percent of the total U . S . population . While 42 percent is indeed low and might cause alarm , some people included in it are under eighteen , not citizens , or unable to vote due to competency or prison status . The next number shows that just over 57 percent of the voting-age population voted , and 60 percent of the voting-eligible population . The best turnout ratio is calculated using the smallest population : 87 percent of registered voters voted . <hl> Those who argue that a healthy democracy needs high voter turnout will look at the voting-age population or voting-eligible population as proof that the United States has a problem . 
<hl> Those who believe only informed and active citizens should vote point to the registered voter turnout numbers instead .", "hl_sentences": "Those who argue that a healthy democracy needs high voter turnout will look at the voting-age population or voting-eligible population as proof that the United States has a problem .", "question": { "cloze_format": "If you wanted to prove the United States is suffering from low voter turnout, a calculation based on ___ would yield the lowest voter turnout rate.", "normal_format": "If you wanted to prove the United States is suffering from low voter turnout, a calculation based on which population would yield the lowest voter turnout rate?", "question_choices": [ "registered voters", "voting-eligible population", "voting-age population", "voters who voted in the last election" ], "question_id": "fs-id1171473148721", "question_text": "If you wanted to prove the United States is suffering from low voter turnout, a calculation based on which population would yield the lowest voter turnout rate?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "compulsory voting laws" }, "bloom": null, "hl_context": "One prominent reason for low national turnout is that participation is not mandated . <hl> Some countries , such as Belgium and Turkey , have compulsory voting laws , which require citizens to vote in elections or pay a fine . <hl> <hl> This helps the two countries attain VAP turnouts of 87 percent and 86 percent , respectively , compared to the U . S . turnout of 54 percent . <hl> Sweden and Germany automatically register their voters , and 83 percent and 66 percent vote , respectively . Chile ’ s decision to move from compulsory voting to voluntary voting caused a drop in participation from 87 percent to 46 percent . 46", "hl_sentences": "Some countries , such as Belgium and Turkey , have compulsory voting laws , which require citizens to vote in elections or pay a fine . This helps the two countries attain VAP turnouts of 87 percent and 86 percent , respectively , compared to the U . S . turnout of 54 percent .", "question": { "cloze_format": "The reason Belgium, Turkey, and Australia have higher voter turnout rates than the United States is because of ___ .", "normal_format": "Why do Belgium, Turkey, and Australia have higher voter turnout rates than the United States?", "question_choices": [ "compulsory voting laws", "more elections", "fewer registration laws", "more polling locations" ], "question_id": "fs-id1171471011874", "question_text": "Why do Belgium, Turkey, and Australia have higher voter turnout rates than the United States?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "faster and has higher turnout" }, "bloom": null, "hl_context": "The caucus has its proponents and opponents . Many argue that it is more interesting than the primary and brings out more sophisticated voters , who then benefit from the chance to debate the strengths and weaknesses of the candidates . The caucus system is also more transparent than ballots . The local party members get to see the election outcome and pick the delegates who will represent them at the national convention . There is less of a possibility for deception or dishonesty . <hl> Opponents point out that caucuses take two to three hours and are intimidating to less experienced voters . <hl> <hl> These factors , they argue , lead to lower voter turnout . 
<hl> <hl> And they have a point — voter turnout for a caucus is generally 20 percent lower than for a primary . <hl> 85", "hl_sentences": "Opponents point out that caucuses take two to three hours and are intimidating to less experienced voters . These factors , they argue , lead to lower voter turnout . And they have a point — voter turnout for a caucus is generally 20 percent lower than for a primary .", "question": { "cloze_format": "A state might hold a primary instead of a caucus because a primary is ________.", "normal_format": "Why might a state hold a primary instead of a caucus?", "question_choices": [ "inexpensive and simple", "transparent and engages local voters", "faster and has higher turnout", "highly active and promotes dialog during voting" ], "question_id": "fs-id1171474215415", "question_text": "A state might hold a primary instead of a caucus because a primary is ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "When candidates run for office , they are most likely to choose local or state office first . <hl> For women , studies have shown that family obligations rather than desire or ambition account for this choice . <hl> <hl> Further , women are more likely than men to wait until their children are older before entering politics , and women say that they struggle to balance campaigning and their workload with parenthood . <hl> 64 Because higher office is often attained only after service in lower office , there are repercussions to women waiting so long . If they do decide to run for the U . S . House of Representatives or Senate , they are often older , and fewer in number , than their male colleagues ( Figure 7.11 ) . As of 2015 , only 24.4 percent of state legislators and 20 percent of U . S . Congress members are women . 65 The number of women in executive office is often lower as well . It is thus no surprise that 80 percent of members of Congress are male , 90 percent have at least a bachelor ’ s degree , and their average age is sixty . 66 Despite these problems , most elections will have at least one candidate per party on the ballot . In states or districts where one party holds a supermajority , such as Georgia , candidates from the other party may be discouraged from running because they don ’ t think they have a chance to win . <hl> 62 Candidates are likely to be moving up from prior elected office or are professionals , like lawyers , who can take time away from work to campaign and serve in office . <hl> 63", "hl_sentences": "For women , studies have shown that family obligations rather than desire or ambition account for this choice . Further , women are more likely than men to wait until their children are older before entering politics , and women say that they struggle to balance campaigning and their workload with parenthood . 
62 Candidates are likely to be moving up from prior elected office or are professionals , like lawyers , who can take time away from work to campaign and serve in office .", "question": { "cloze_format": "The citizen that is most likely to run for office is ___ .", "normal_format": "Which of the following citizens is most likely to run for office?", "question_choices": [ "Maria Trejo, a 28-year-old part-time sonogram technician and mother of two", "Jeffrey Lyons, a 40-year-old lawyer and father of one", "Linda Tepsett, a 40-year-old full-time orthopedic surgeon", "Mark Forman, a 70-year-old retired steelworker" ], "question_id": "fs-id1171474344759", "question_text": "Which of the following citizens is most likely to run for office?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "in their state capitol, in December" }, "bloom": null, "hl_context": "Once the voters have cast ballots in November and all the election season madness comes to a close , races for governors and local representatives may be over , but the constitutional process of electing a president has only begun . <hl> The electors of the Electoral College travel to their respective state capitols and cast their votes in mid-December , often by signing a certificate recording their vote . <hl> In most cases , electors cast their ballots for the candidate who won the majority of votes in their state . The states then forward the certificates to the U . S . Senate .", "hl_sentences": "The electors of the Electoral College travel to their respective state capitols and cast their votes in mid-December , often by signing a certificate recording their vote .", "question": { "cloze_format": "The place and time Electoral College electors vote, are ___.", "normal_format": "Where and when do Electoral College electors vote?", "question_choices": [ "at their precinct, on Election Day", "at their state capitol, on Election Day", "in their state capitol, in December", "in Washington D.C., in December" ], "question_id": "fs-id1171474496407", "question_text": "Where and when do Electoral College electors vote?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> During a presidential election year , members of Congress often experience the coattail effect , which gives members of a popular presidential candidate ’ s party an increase in popularity and raises their odds of retaining office . <hl> During a midterm election year , however , the president ’ s party often is blamed for the president ’ s actions or inaction . Representatives and senators from the sitting president ’ s party are more likely to lose their seats during a midterm election year . Many recent congressional realignments , in which the House or Senate changed from Democratic to Republican control , occurred because of this reverse-coattail effect during midterm elections . The most recent example is the 2010 election , in which control of the House returned to the Republican Party after two years of a Democratic presidency . 
7.4 Campaigns and Voting", "hl_sentences": "During a presidential election year , members of Congress often experience the coattail effect , which gives members of a popular presidential candidate ’ s party an increase in popularity and raises their odds of retaining office .", "question": { "cloze_format": "The type of election in which you are most likely to see coattail effects is the ___ election.", "normal_format": "In which type of election are you most likely to see coattail effects?", "question_choices": [ "presidential", "midterm", "special", "caucuses" ], "question_id": "fs-id1171472135911", "question_text": "In which type of election are you most likely to see coattail effects?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> Another incumbent advantage is gerrymandering , the drawing of district lines to guarantee a desired electoral outcome . <hl> Every ten years , following the U . S . Census , the number of House of Representatives members allotted to each state is determined based on a state ’ s population . If a state gains or loses seats in the House , the state must redraw districts to ensure each district has an equal number of citizens . States may also choose to redraw these districts at other times and for other reasons . 108 If the district is drawn to ensure that it includes a majority of Democratic or Republican Party members within its boundaries , for instance , then candidates from those parties will have an advantage .", "hl_sentences": "Another incumbent advantage is gerrymandering , the drawing of district lines to guarantee a desired electoral outcome .", "question": { "cloze_format": "The factor that is most likely to lead to the incumbency advantage for a candidate is the ___ .", "normal_format": "Which factor is most likely to lead to the incumbency advantage for a candidate?", "question_choices": [ "candidate’s socioeconomic status", "gerrymandering of the candidate’s district", "media’s support of the candidate", "candidate’s political party" ], "question_id": "fs-id1171472242504", "question_text": "Which factor is most likely to lead to the incumbency advantage for a candidate?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "signature or veto by state governor" }, "bloom": null, "hl_context": "<hl> Once the petition has enough signatures from registered voters , it is approved by a state agency or the secretary of state for placement on the ballot . <hl> Signatures are verified by the state or a county elections office to ensure the signatures are valid . <hl> If the petition is approved , the initiative is then placed on the next ballot , and the organization campaigns to voters . <hl> The most common form of direct democracy is the initiative , or proposition . An initiative is normally a law or constitutional amendment proposed and passed by the citizens of a state . Initiatives completely bypass the legislatures and governor , but they are subject to review by the state courts if they are not consistent with the state or national constitution . <hl> The process to pass an initiative is not easy and varies from state to state . <hl> <hl> Most states require that a petitioner or the organizers supporting an initiative file paperwork with the state and include the proposed text of the initiative . <hl> This allows the state or local office to determine whether the measure is legal , as well as estimate the cost of implementing it . 
<hl> This approval may come at the beginning of the process or after organizers have collected signatures . <hl> The initiative may be reviewed by the state attorney general , as in Oregon ’ s procedures , or by another state official or office . In Utah , the lieutenant governor reviews measures to ensure they are constitutional .", "hl_sentences": "Once the petition has enough signatures from registered voters , it is approved by a state agency or the secretary of state for placement on the ballot . If the petition is approved , the initiative is then placed on the next ballot , and the organization campaigns to voters . The process to pass an initiative is not easy and varies from state to state . Most states require that a petitioner or the organizers supporting an initiative file paperwork with the state and include the proposed text of the initiative . This approval may come at the beginning of the process or after organizers have collected signatures .", "question": { "cloze_format": "The ___ is not a step in the initiative process.", "normal_format": "Which of the following is not a step in the initiative process?", "question_choices": [ "approval of initiative petition by state or local government", "collection of signatures", "state-wide vote during a ballot election", "signature or veto by state governor" ], "question_id": "fs-id1171470891054", "question_text": "Which of the following is not a step in the initiative process?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "There are three forms of direct democracy used in the United States . <hl> A referendum asks citizens to confirm or repeal a decision made by the government . <hl> A legislative referendum occurs when a legislature passes a law or a series of constitutional amendments and presents them to the voters to ratify with a yes or no vote . A judicial appointment to a state supreme court may require voters to confirm whether the judge should remain on the bench . Popular referendums occur when citizens petition to place a referendum on a ballot to repeal legislation enacted by their state government . This form of direct democracy gives citizens a limited amount of power , but it does not allow them to overhaul policy or circumvent the government . <hl> Direct democracy occurs when policy questions go directly to the voters for a decision . <hl> These decisions include funding , budgets , candidate removal , candidate approval , policy changes , and constitutional amendments . Not all states allow direct democracy , nor does the United States government .", "hl_sentences": "A referendum asks citizens to confirm or repeal a decision made by the government . Direct democracy occurs when policy questions go directly to the voters for a decision .", "question": { "cloze_format": "A referendum is not purely direct democracy because the ________.", "normal_format": "Why is a referendum not purely direct democracy?", "question_choices": [ "voters propose something but the governor approves it", "voters propose and approve something but the legislature also approves it", "government proposes something and the voters approve it", "government proposes something and the legislature approves it" ], "question_id": "fs-id1171469700980", "question_text": "A referendum is not purely direct democracy because the ________." }, "references_are_paraphrase": 0 } ]
7
7.1 Voter Registration Learning Objectives By the end of this section, you will be able to: Identify ways the U.S. government has promoted voter rights and registration Summarize similarities and differences in states’ voter registration methods Analyze ways states increase voter registration and decrease fraud Before most voters are allowed to cast a ballot, they must register to vote in their state. This process may be as simple as checking a box on a driver’s license application or as difficult as filling out a long form with complicated questions. Registration allows governments to determine which citizens are allowed to vote and, in some cases, from which list of candidates they may select a party nominee. Ironically, while government wants to increase voter turnout, the registration process may prevent various groups of citizens from participating in the electoral process. VOTER REGISTRATION ACROSS THE UNITED STATES Elections are state-by-state contests. They include general elections for president and statewide offices (e.g., governor and U.S. senator), and they are often organized and paid for by the states. Because political cultures vary from state to state, the process of voter registration similarly varies. For example, suppose an 85-year-old retiree with an expired driver’s license wants to register to vote. He or she might be able to register quickly in California or Florida, but a current government ID might be required prior to registration in Texas or Indiana. The varied registration and voting laws across the United States have long caused controversy. In the aftermath of the Civil War, southern states enacted literacy tests, grandfather clauses, and other requirements intended to disenfranchise black voters in Alabama, Georgia, and Mississippi. Literacy tests were long and detailed exams on local and national politics, history, and more. They were often administered arbitrarily with more black people required to take them than white people. 5 Poll taxes required voters to pay a fee to vote. Grandfather clauses exempted individuals from taking literacy tests or paying poll taxes if they or their fathers or grandfathers had been permitted to vote prior to a certain point in time. While the Supreme Court determined that grandfather clauses were unconstitutional in 1915, states continued to use poll taxes and literacy tests to deter potential voters from registering. 6 States also ignored instances of violence and intimidation against African Americans wanting to register or vote. 7 The ratification of the Twenty-Fourth Amendment in 1964 ended poll taxes, but the passage of the Voting Rights Act (VRA) in 1965 had a more profound effect ( Figure 7.2 ). The act protected the rights of minority voters by prohibiting state laws that denied voting rights based on race. The VRA gave the attorney general of the United States authority to order federal examiners to areas with a history of discrimination. These examiners had the power to oversee and monitor voter registration and elections. States found to violate provisions of the VRA were required to get any changes in their election laws approved by the U.S. attorney general or by going through the court system. However, in Shelby County v. Holder (2013), the Supreme Court, in a 5–4 decision, threw out the standards and process of the VRA, effectively gutting the landmark legislation.
8 This decision effectively pushed decision-making and discretion for election policy in VRA states to the state and local level. Several such states subsequently made changes to their voter ID laws and North Carolina changed its plans for how many polling places were available in certain areas. The extent to which such changes will violate equal protection is unknown in advance, but such changes often do not have a neutral effect. The effects of the VRA were visible almost immediately. In Mississippi, only 6.7 percent of black people were registered to vote in 1965; however, by the fall of 1967, nearly 60 percent were registered. Alabama experienced similar effects, with African American registration increasing from 19.3 percent to 51.6 percent. Voter turnout across these two states similarly increased. Mississippi went from 33.9 percent turnout to 53.2 percent, while Alabama increased from 35.9 percent to 52.7 percent between the 1964 and 1968 presidential elections. 9 Following the implementation of the VRA, many states have sought other methods of increasing voter registration. Several states make registering to vote relatively easy for citizens who have government documentation. Oregon has few requirements for registering and registers many of its voters automatically. North Dakota has no registration at all. In 2002, Arizona was the first state to offer online voter registration, which allowed citizens with a driver’s license to register to vote without any paper application or signature. The system matches the information on the application to information stored at the Department of Motor Vehicles, to ensure each citizen is registering to vote in the right precinct. Citizens without a driver’s license still need to file a paper application. More than eighteen states have moved to online registration or passed laws to begin doing so. The National Conference of State Legislatures estimates, however, that adopting an online voter registration system can initially cost a state between $250,000 and $750,000. 10 Other states have decided against online registration due to concerns about voter fraud and security. Legislators also argue that online registration makes it difficult to ensure that only citizens are registering and that they are registering in the correct precincts. As technology continues to update other areas of state recordkeeping, online registration may become easier and safer. In some areas, citizens have pressured the states and pushed the process along. A bill to move registration online in Florida stalled for over a year in the legislature, based on security concerns. With strong citizen support, however, it was passed and signed in 2015, despite the governor’s lingering concerns. In other states, such as Texas, both the government and citizens are concerned about identity fraud, so traditional paper registration is still preferred. HOW DOES SOMEONE REGISTER TO VOTE? The National Commission on Voting Rights completed a study in September 2015 that found state registration laws can either raise or reduce voter turnout rates, especially among citizens who are young or whose income falls below the poverty line. States with simple voter registration had more registered citizens. 11 In all states except North Dakota, a citizen wishing to vote must complete an application. 
Whether the form is online or on paper, the prospective voter will list his or her name, residency address, and in many cases party identification (with Independent as an option), and affirm that he or she is competent to vote. States may also have a residency requirement, which establishes how long a citizen must live in a state before becoming eligible to register: it is often thirty days. Beyond these requirements, there may be an oath to administer or additional questions to answer, such as whether the applicant has a felony conviction. If the application is completely online and the citizen has government documents (e.g., driver’s license or state identification card), the system will compare the application to other state records and accept an online signature or affidavit if everything matches up correctly. Citizens who do not have these state documents are often required to complete paper applications. States without online registration often allow a citizen to fill out an application on a website, but the citizen will receive a paper copy in the mail to sign and mail back to the state. Another aspect of registering to vote is the timeline. States may require registration to take place as much as thirty days before voting, or they may allow same-day registration. Maine first implemented same-day registration in 1973. Fourteen states and the District of Columbia now allow voters to register the day of the election if they have proof of residency, such as a driver’s license or utility bill. Many of the more populous states (e.g., Michigan and Texas) require registration forms to be mailed thirty days before an election. Moving means citizens must re-register or update addresses ( Figure 7.3 ). College students, for example, may have to re-register or update addresses each year as they move. States that use same-day registration had a 4 percent higher voter turnout in the 2012 presidential election than states that did not. 12 Yet another consideration is how far in advance of an election one must apply to change one’s political party affiliation. In states with closed primaries, it is important for voters to be allowed to register into whichever party they prefer. This issue came up during the 2016 presidential primaries in New York, where there is a lengthy timeline for changing one’s party affiliation. Some attempts have been made to streamline voter registration. The National Voter Registration Act (1993), often referred to as Motor Voter, was enacted to expedite the registration process and make it as simple as possible for voters. The act required states to allow citizens to register to vote when they sign up for driver’s licenses and Social Security benefits. On each government form, the citizen need only mark an additional box to also register to vote. Unfortunately, while increasing registrations by 7 percent between 1992 and 2012, Motor Voter did not dramatically increase voter turnout. 13 In fact, for two years following the passage of the act, voter turnout decreased slightly. 14 It appears that the main users of the expedited system were those already intending to vote. One study, however, found that preregistration may have a different effect on youth than on the overall voter pool; in Florida, it increased turnout of young voters by 13 percent. 15 In 2015, Oregon made news when it took the concept of Motor Voter further. When citizens turn eighteen, the state now automatically registers most of them using driver’s license and state identification information.
When a citizen moves, the voter rolls are updated when the license is updated. While this policy has been controversial, with some arguing that private information may become public or that Oregon is moving toward mandatory voting, automatic registration is consistent with the state’s efforts to increase registration and turnout. 16 Oregon’s example offers a possible solution to a recurring problem for states—maintaining accurate voter registration rolls. During the 2000 election, in which George W. Bush won Florida’s electoral votes by a slim majority, attention turned to the state’s election procedures and voter registration rolls. Journalists found that many states, including Florida, had large numbers of phantom voters on their rolls: voters who had moved or died but remained on the states’ voter registration rolls. 17 The Help America Vote Act of 2002 (HAVA) was passed in order to reform voting across the states and reduce these problems. As part of the Act, states were required to update voting equipment, make voting more accessible to the disabled, and maintain computerized voter rolls that could be updated regularly. 18 Over a decade later, there has been some progress. In Louisiana, voters are placed on ineligible lists if a voting registrar is notified that they have moved or become ineligible to vote. If the voter remains on this list for two general elections, his or her registration is canceled. In Oklahoma, the registrar receives a list of deceased residents from the Department of Health. 19 Twenty-nine states now participate in the Interstate Voter Registration Crosscheck Program, which allows states to check for duplicate registrations. 20 At the same time, Florida’s use of the federal Systematic Alien Verification for Entitlements (SAVE) database has proven to be controversial, because county elections supervisors are allowed to remove voters deemed ineligible to vote. 21 Despite these efforts, a study commissioned by the Pew Charitable Trusts found twenty-four million voter registrations nationwide were no longer valid. 22 Pew is now working with eight states to update their voter registration rolls and encouraging more states to share their rolls in an effort to find duplicates. 23 Link to Learning The National Association of Secretaries of State maintains a website that directs users to their state’s information regarding voter registration, identification policies, and polling locations. WHO IS ALLOWED TO REGISTER? In order to be eligible to vote in the United States, a person must be a citizen, resident, and eighteen years old. But states often place additional requirements on the right to vote. The most common requirement is that voters must be mentally competent and not currently serving time in jail. Some states enforce more stringent or unusual requirements on citizens who have committed crimes. Florida and Kentucky permanently bar felons and ex-felons from voting unless they obtain a pardon from the governor, while Mississippi and Nevada allow former felons to apply to have their voting rights restored. 24 On the other end of the spectrum, Vermont does not limit voting based on incarceration unless the crime was election fraud. 25 Maine citizens serving in Maine prisons also may vote in elections. Beyond those jailed, some citizens have additional expectations placed on them when they register to vote.
Wisconsin requires that voters “not wager on an election,” and Vermont citizens must recite the “Voter’s Oath” before they register, swearing to cast votes with a conscience and “without fear or favor of any person.” 26 Get Connected! Where to Register? Across the United States, over twenty million college and university students begin classes each fall, many away from home. The simple act of moving away to college presents a voter registration problem. Elections are local. Each citizen lives in a district with state legislators, city council or other local elected representatives, a U.S. House of Representatives member, and more. State and national laws require voters to reside in their districts, but students are an unusual case. They often hold temporary residency while at school and return home for the summer. Therefore, they have to decide whether to register to vote near campus or vote back in their home district. What are the pros and cons of each option? Maintaining voter registration back home is legal in most states, assuming a student holds only temporary residency at school. This may be the best plan, because students are likely more familiar with local politicians and issues. But it requires the student to either go home to vote or apply for an absentee ballot. With classes, clubs, work, and more, it may be difficult to remember this task. One study found that students living more than two hours from home were less likely to vote than students living within thirty minutes of campus, which is not surprising. 27 Registering to vote near campus makes it easier to vote, but it requires an extra step that students may forget ( Figure 7.4 ). And in many states, registration to vote in a November election takes place in October, just when students are acclimating to the semester. They must also become familiar with local candidates and issues, which takes time and effort they may not have. But they will not have to travel to vote, and their vote is more likely to affect their college and local town. Have you registered to vote in your college area, or will you vote back home? What factors influenced your decision about where to vote? 7.2 Voter Turnout Learning Objectives By the end of this section, you will be able to: Identify factors that motivate registered voters to vote Discuss circumstances that prevent citizens from voting Analyze reasons for low voter turnout in the United States Campaign managers worry about who will show up at the polls on Election Day. Will more Republicans come? More Democrats? Will a surge in younger voters occur this year, or will an older population cast ballots? We can actually predict with strong accuracy who is likely to vote each year, based on identified influence factors such as age, education, and income. Campaigns will often target each group of voters in different ways, spending precious campaign dollars on the groups already most likely to show up at the polls rather than trying to persuade citizens who are highly unlikely to vote. COUNTING VOTERS Low voter turnout has long caused the media and others to express concern and frustration. A healthy democratic society is expected to be filled with citizens who vote regularly and participate in the electoral process. Organizations like Rock the Vote and Project Vote Smart ( Figure 7.5 ) work alongside MTV to increase voter turnout in all age groups across the United States. But just how low is voter turnout? The answer depends on who is calculating it and how. 
There are several methods, each of which highlights a different problem with the electoral system in the United States. Link to Learning Interested in mobilizing voters? Explore Rock the Vote and The Voter Participation Center for more information. Calculating voter turnout begins by counting how many ballots were cast in a particular election. These votes must be cast on time, either by mail or in person. The next step is to count how many people could have voted in the same election. This is the number that causes different people to calculate different turnout rates. The complete population of the country includes all people, regardless of age, nationality, mental capacity, or freedom. We can count subsections of this population to calculate voter turnout. For instance, the next largest population in the country is the voting-age population (VAP), which consists of persons who are eighteen and older. Some of these persons may not be eligible to vote in their state, but they are included because they are of age to do so. 28 An even smaller group is the voting-eligible population (VEP), citizens eighteen and older who, whether they have registered or not, are eligible to vote because they are citizens, mentally competent, and not imprisoned. If a state has more stringent requirements, such as not having a felony conviction, citizens counted in the VEP must meet those criteria as well. This population is much harder to measure, but statisticians who use the VEP will generally take the VAP and subtract the state’s prison population and any other known group that cannot vote. This results in a number that is somewhat theoretical; however, in a way, it is more accurate when determining voter turnout. 29 The last and smallest population is registered voters, who, as the name implies, are citizens currently registered to vote. Now we can appreciate how reports of voter turnout can vary. As Figure 7.6 shows, although 87 percent of registered voters voted in the 2012 presidential election, this represents only 42 percent of the total U.S. population. While 42 percent is indeed low and might cause alarm, some people included in that total population are under eighteen, not citizens, or unable to vote due to competency or prison status. The next number shows that just over 57 percent of the voting-age population voted, and 60 percent of the voting-eligible population. The best turnout ratio is calculated using the smallest population: 87 percent of registered voters voted. Those who argue that a healthy democracy needs high voter turnout will look at the voting-age population or voting-eligible population as proof that the United States has a problem. Those who believe only informed and active citizens should vote point to the registered voter turnout numbers instead. (A short sketch below works through this denominator arithmetic.) WHAT FACTORS DRIVE VOTER TURNOUT? Political parties and campaign managers approach every population of voters differently, based on what they know about factors that influence turnout. Everyone targets likely voters, the category of registered voters who vote regularly. Most campaigns also target registered voters in general, because they are more likely to vote than unregistered citizens. For this reason, many polling agencies ask respondents whether they are already registered and whether they voted in the last election. Those who are registered and did vote in the last election are likely to have a strong interest in politics and elections and will vote again, provided they are not angry with the political system or politicians.
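As promised above, here is a minimal Python sketch of the denominator arithmetic behind the competing turnout figures. Every count in it is a round, hypothetical number chosen only to mirror the pattern in Figure 7.6; none of these are official census statistics.

# Illustrative sketch: how the choice of denominator changes "voter turnout."
# Every count below is a hypothetical round number, not an official statistic.

ballots_cast = 129_000_000  # ballots counted in a presidential election

populations = {
    "total population": 310_000_000,                  # everyone, including children and non-citizens
    "voting-age population (VAP)": 235_000_000,       # residents eighteen and older
    "voting-eligible population (VEP)": 215_000_000,  # VAP minus prisoners and other ineligible groups
    "registered voters": 150_000_000,                 # citizens actually on the rolls
}

for name, count in populations.items():
    turnout = ballots_cast / count * 100
    print(f"{name}: {turnout:.1f}% turnout")

The same 129 million hypothetical ballots yield a turnout rate anywhere from roughly 42 percent (total population) to 86 percent (registered voters), which is why advocates on opposite sides of the turnout debate can honestly cite very different numbers for the same election.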
Some campaigns and civic groups target members of the voting-eligible population who are not registered, especially in states that are highly contested during a particular election. The Association of Community Organizations for Reform Now (ACORN), which is now defunct, was both lauded and criticized for its efforts to get voters in low socioeconomic areas registered during the 2008 election. 30 Similarly, interest groups in Los Angeles were criticized for registering homeless citizens as a part of an effort to gather signatures to place propositions on the ballot. 31 These potential voters may not think they can vote, but they might be persuaded to register and then vote if the process is simplified or the information they receive encourages them to do so. Campaigns also target different age groups with different intensity, because age is a relatively consistent factor in predicting voting behavior. Those between eighteen and twenty-five are least likely to vote, while those sixty-five to seventy-four are most likely. One reason for lower voter turnout among younger citizens may be that they move frequently. 32 Another reason may be circular: Youth are less active in government and politics, leading the parties to neglect them. When people are neglected, they are in turn less likely to become engaged in government. 33 They may also be unaware of what a government provides. Younger people are often still in college, perhaps working part-time and earning low wages. They are unlikely to be receiving government benefits beyond Pell Grants or government-subsidized tuition and loans. They are also unlikely to be paying taxes at a high rate. Government is a distant concept rather than a daily concern, which may drive down turnout. In 2012, for example, the Census Bureau reported that only 53.6 percent of eligible voters between the ages of eighteen and twenty-four registered and 41.2 percent voted, while 79.7 percent of sixty-five to seventy-four-year-olds registered and 73.5 percent voted. 34 Once a person has retired, reliance on the government will grow if he or she draws income from Social Security, receives health care from Medicare, and enjoys benefits such as transportation and social services from state and local governments ( Figure 7.7 ). Due to consistently low turnout among the young, several organizations have made special efforts to demonstrate to younger citizens that voting is an important activity. Rock the Vote began in 1990, with the goal of bringing music, art, and pop culture together to encourage the youth to participate in government. The organization hosts rallies, festivals, and concerts that also register voters and promote voter awareness, bringing celebrities and musicians to set examples of civic involvement. Rock the Vote also maintains a website that helps young adults find out how to register in their state. Citizen Change, started by Sean “Diddy” Combs and other hip hop artists, pushed slogans such as “Vote or Die” during the 2004 presidential election in an effort to increase youth voting turnout. These efforts may have helped in 2004 and 2008, when the number of youth voting in the presidential elections increased ( Figure 7.8 ). 35 Milestone Making a Difference In 2008, for the first time since 1972, a presidential candidate intrigued America’s youth and persuaded them to flock to the polls in record numbers.
Barack Obama not only spoke to young people’s concerns but his campaign also connected with them via technology, wielding texts and tweets to bring together a new generation of voters ( Figure 7.9 ). The high level of interest Obama inspired among college-aged voters was a milestone in modern politics. Since the 1971 passage of the Twenty-Sixth Amendment, which lowered the voting age from 21 to 18, voter turnout in the under-25 range has been low. While opposition to the Vietnam War and the military draft sent 50.9 percent of 21- to 24-year-old voters to the polls in 1964, after 1972, turnout in that same age group dropped to below 40 percent as youth became disenchanted with politics. In 2008, however, it briefly increased to 45 percent from only 32 percent in 2000. Yet, despite high interest in Obama’s candidacy in 2008, younger voters were less enchanted in 2012—only 38 percent showed up to vote that year. 36 What qualities should a presidential or congressional candidate show in order to get college students excited and voting? Why? A citizen’s socioeconomic status—the combination of education, income, and social status—may also predict whether he or she will vote. Among those who have completed college, the 2012 voter turnout rate jumps to 75 percent of eligible voters, compared to about 52.6 percent for those who have completed only high school. 37 This is due in part to the powerful effect of education, one of the strongest predictors of voting turnout. Income also has a strong effect on the likelihood of voting. Citizens earning $100,000 to $149,999 a year are very likely to vote and 76.9 percent of them do, while only 50.4 percent of those who earn $15,000 to $19,999 vote. 38 Once high income and college education are combined, the resulting high socioeconomic status strongly predicts the likelihood that a citizen will vote. Race is also a factor. Caucasians turn out to vote in the highest numbers, with 63 percent of white citizens voting in 2012. In comparison, 62 percent of African Americans, 31.3 percent of Asian Americans, and 31.8 percent of Hispanic citizens voted in 2012. Voting turnout can increase or decrease based upon the political culture of a state, however. Hispanics, for example, often vote in higher numbers in states where there has historically been higher Hispanic involvement and representation, such as New Mexico, where 49 percent of Hispanic voters turned out in 2012. 39 In 2016, while Donald Trump rode a wave of discontent among white voters to the presidency, the fact that Hillary Clinton nearly beat him may have had as much to do with the record turnout of Latinos in response to numerous remarks on immigration that Trump made throughout his campaign. Latinos made up 11 percent of the electorate in 2016, up from 10 percent in 2012 and 9 percent in 2008. 40 While less of a factor today, gender has historically been a factor in voter turnout. After 1920, when the Nineteenth Amendment gave women the right to vote, women began slowly turning out to vote, and now they do so in high numbers. Today, more women vote than men. In 2012, 59.7 percent of men and 63.7 percent of women reported voting. 41 While women do not vote exclusively for one political party, 41 percent are likely to identify as Democrats and only 25 percent are likely to identify as Republicans. 42 In 2016, a record 73.7 percent of women reported voting, 43 while a record 63.8 percent of men reported voting. In 2012, these numbers were 71.4 percent for women and 61.6 percent for men.
The margin by which Hillary Clinton won the women’s vote in Florida was narrower than many presumed it would be, which may have helped Donald Trump win that state. Even after allegations of sexual assault and revelations of several instances of sexism by Mr. Trump, Clinton won only 54 percent of the women’s vote in Florida. In contrast, rural voters voted overwhelmingly for Trump, at much higher rates than they had for Mitt Romney in 2012. Link to Learning Check out this website to find out who is voting and who isn’t. WHAT FACTORS DECREASE VOTER TURNOUT? Just as political scientists and campaign managers worry about who does vote, they also look at why people choose to stay home on Election Day. Over the years, studies have explored why a citizen might not vote. The reasons range from the obvious excuse of being too busy (19 percent) to more complex answers, such as transportation problems (3.3 percent) and restrictive registration laws (5.5 percent). 44 With only 57 percent of our voting-age population (VAP) voting in the presidential election of 2012, 45 however, we should examine why the rest do not participate. One prominent reason for low national turnout is that participation is not mandated. Some countries, such as Belgium and Turkey, have compulsory voting laws, which require citizens to vote in elections or pay a fine. This helps the two countries attain VAP turnouts of 87 percent and 86 percent, respectively, compared to the U.S. turnout of 54 percent. Sweden and Germany automatically register their voters, and 83 percent and 66 percent vote, respectively. Chile’s decision to move from compulsory voting to voluntary voting caused a drop in participation from 87 percent to 46 percent. 46 Link to Learning Do you wonder what voter turnout looks like in other developed countries? Visit the Pew Research Center report on international voting turnout to find out. Low turnout also occurs when some citizens are not allowed to vote. One method of limiting voter access is the requirement to show identification at polling places. In 2005, the Indiana legislature passed the first strict photo identification law. Voters must provide photo identification that shows their names match the voter registration records, clearly displays an expiration date, is current or has expired only since the last general election, and was issued by the state of Indiana or the U.S. government. Student identification cards that meet the standards and are from an Indiana state school are allowed. 47 Indiana’s law allows voters without an acceptable identification to obtain a free state identification card. 48 The state also extended service hours for state offices that issue identification in the days leading up to elections. 49 The photo identification law was quickly contested. The American Civil Liberties Union and other groups argued that it placed an unfair burden on people who were poor, older, or had limited finances, while the state argued that it would prevent fraud. In Crawford v. Marion County Election Board (2008), the Supreme Court decided that Indiana’s voter identification requirement was constitutional, although the decision left open the possibility that another case might meet the burden of proof required to overturn the law. 50 In 2011, Texas passed a strict photo identification law for voters, allowing concealed-handgun permits as identification but not student identification.
The Texas law was blocked by the Obama administration before it could be implemented, because Texas was on the Voting Rights Act’s preclearance list. Other states, such as Alabama, Alaska, Arizona, Georgia, and Virginia, similarly had laws and districting changes blocked. 51 As a result, Shelby County, Alabama, and several other states sued the U.S. attorney general, arguing the Voting Rights Act’s preclearance list was unconstitutional and that the formula that determined whether states had violated the VRA was outdated. In Shelby County v. Holder (2013), the Supreme Court agreed. In a 5–4 decision, the justices in the majority said the formula for placing states on the VRA preclearance list was outdated and reached into the states’ authority to oversee elections. 52 States and counties on the preclearance list were released, and Congress was told to design new guidelines for placing states on the list. Following the Shelby decision, Texas implemented its photo identification law, leading plaintiffs to bring cases against the state, charging that the law disproportionately affects minority voters. 53 Alabama, Georgia, and Virginia similarly implemented their photo identification laws, joining Kansas, South Carolina, Tennessee, and Wisconsin. Some of these states offer low-cost or free identification for the purposes of voting or will offer help with the completion of registration applications, but citizens must provide birth certificates or other forms of identification, which can be difficult and/or costly to obtain. Opponents of photo identification laws argue that these restrictions are unfair because they have an unusually strong effect on some demographics. One study, done by Reuters, found that requiring a photo ID would disproportionately prevent citizens aged 18–24, Hispanics, and those without a college education from voting. These groups are unlikely to have the right paperwork or identification, unlike citizens who have graduated from college. The same study found that 4 percent of households with yearly incomes under $25,000 said they did not have an ID that would be considered valid for voting. 54 Another reason for not voting is that polling places may be open only on Election Day. This makes it difficult for voters juggling school, work, and child care during polling hours ( Figure 7.10 ). Many states have tried to address this problem with early voting, which opens polling places as much as two weeks early. Texas opened polling places on weekdays and weekends in 1988 and initially saw an increase in voting in gubernatorial and presidential elections, although the impact tapered off over time. 55 Other states with early voting, however, showed a decline in turnout, possibly because there is less social pressure to vote when voting is spread over several days. 56 Early voting was used in a widespread manner across most states in 2016, including Nevada, where 60 percent of votes were cast prior to Election Day. In a similar effort, Oregon, Colorado, and Washington have moved to a mail-only voting system in which there are no polling locations, only mailed ballots. These states have seen a rise in turnout, with Colorado’s numbers increasing from 1.8 million votes in the 2010 congressional elections to 2 million votes in the 2014 congressional elections.
57 One argument against early and mail-only voting is that those who vote early cannot change their minds during the final days of the campaign, such as in response to an “October surprise,” a highly negative story about a candidate that leaks right before Election Day in November. (For example, a week before the 2000 election, a Dallas Morning News journalist reported that George W. Bush had lied about whether he had been arrested for driving under the influence. 58 ) In 2016, two such stories, one for each nominee, broke just prior to Election Day. First, the Billy Bush Access Hollywood tape showed a braggadocious Donald Trump detailing his ability to do what he pleases with women, including grabbing at their genitals. This tape led some Republican officeholders, such as Senator Jeff Flake (R-AZ), to disavow Trump. However, perhaps eclipsing this episode was the release by then-FBI director James Comey of a letter to Congress reopening the Hillary Clinton email investigation a mere eleven days prior to the election. It is impossible to know the exact dynamics of how someone decides to vote, but one theory is that women jumped from Trump after the Access Hollywood tape emerged, only to go back to supporting him when the FBI seemed to reopen its investigation. Apathy may also play a role. Some people avoid voting because their vote is unlikely to make a difference or the election is not competitive. If one party has a clear majority in a state or district, for instance, members of the minority party may see no reason to vote. Democrats in Utah and Republicans in California are so outnumbered that they are unlikely to affect the outcome of an election, and they may opt to stay home. Because the presidential candidate with the highest number of popular votes receives all of Utah’s and California’s electoral votes, there is little incentive for some citizens to vote: they will never change the outcome of the state-level election. These citizens, as well as those who vote for third parties like the Green Party or the Libertarian Party, are sometimes referred to as the chronic minority. While third-party candidates sometimes win local or state office or even dramatize an issue for national discussion, such as when Ross Perot discussed the national debt during his campaign as an independent presidential candidate in 1992, they never win national elections. Finally, some voters may view non-voting as a means of social protest or may see volunteering as a better way to spend their time. Younger voters are more likely to volunteer their time rather than vote, believing that serving others is more important than voting. 59 Possibly related to this choice is voter fatigue. In many states, due to our federal structure with elections at many levels of government, voters may vote many times per year on ballots filled with candidates and issues to research. The less time there is between elections, the lower the turnout. 60 7.3 Elections Learning Objectives By the end of this section, you will be able to: Describe the stages in the election process Compare the primary and caucus systems Summarize how primary election returns lead to the nomination of the party candidates Elections offer American voters the opportunity to participate in their government with little investment of time or personal effort. Yet voters should make decisions carefully. The electoral system allows them the chance to pick party nominees as well as office-holders, although not every citizen will participate in every step.
The presidential election is often criticized as a choice between two evils, yet citizens can play a prominent part in every stage of the race and influence who the final candidates actually are. DECIDING TO RUN Running for office can be as easy as collecting one hundred signatures on a city election form or paying a registration fee of several thousand dollars to run for governor of a state. However, a potential candidate still needs to meet state-specific requirements covering length of residency, voting status, and age. Potential candidates must also consider competitors, family obligations, and the likelihood of drawing financial backing. A candidate’s spouse, children, work history, health, financial history, and business dealings also become part of the media’s focus, along with many other personal details about the past. Candidates for office are slightly more diverse than the representatives serving in legislative and executive bodies, but the realities of elections drive many eligible and desirable candidates away from running. 61 Despite these problems, most elections will have at least one candidate per party on the ballot. In states or districts where one party holds a supermajority, such as Georgia, candidates from the other party may be discouraged from running because they don’t think they have a chance to win. 62 Candidates are likely to be moving up from prior elected office or are professionals, like lawyers, who can take time away from work to campaign and serve in office. 63 When candidates run for office, they are most likely to choose local or state office first. For women, studies have shown that family obligations rather than desire or ambition account for this choice. Further, women are more likely than men to wait until their children are older before entering politics, and women say that they struggle to balance campaigning and their workload with parenthood. 64 Because higher office is often attained only after service in lower office, there are repercussions to women waiting so long. If they do decide to run for the U.S. House of Representatives or Senate, they are often older, and fewer in number, than their male colleagues ( Figure 7.11 ). As of 2015, only 24.4 percent of state legislators and 20 percent of U.S. Congress members are women. 65 The number of women in executive office is often lower as well. It is thus no surprise that 80 percent of members of Congress are male, 90 percent have at least a bachelor’s degree, and their average age is sixty. 66 Another factor for potential candidates is whether the seat they are considering is competitive or open. A competitive seat describes a race where a challenger runs against the incumbent—the current office holder. An open seat is one whose incumbent is not running for reelection. Incumbents who run for reelection are very likely to win for a number of reasons, which are discussed later in this chapter. In fact, in the U.S. Congress, 95 percent of representatives and 82 percent of senators were reelected in 2014. 67 But when an incumbent retires, the seat is open and more candidates will run for that seat. Many potential candidates will also decline to run if their opponent has a lot of money in a campaign war chest. War chests are campaign accounts registered with the Federal Election Commission, and candidates are allowed to keep earlier donations if they intend to run for office again. Incumbents and candidates trying to move from one office to another very often have money in their war chests.
Those with early money are hard to beat because they have an easier time showing they are a viable candidate (one likely to win). They can woo potential donors, which brings in more donations and strengthens the campaign. A challenger who does not have money, name recognition, or another way to appear viable will have fewer campaign donations and will be less competitive against the incumbent. CAMPAIGN FINANCE LAWS In the 2012 presidential election cycle, candidates for all parties raised a total of over $1.3 billion for campaigns. 68 Congressional candidates running in the 2014 Senate elections raised $634 million, while candidates running for the House of Representatives raised $1.03 billion. 69 This, however, pales in comparison to the amounts raised by political action committees (PACs), which are organizations created to raise and spend money to influence politics and contribute to candidates’ campaigns. In the 2014 congressional elections, PACs raised over $1.7 billion to help candidates and political parties. 70 How does the government monitor the vast amounts of money that are now a part of the election process? The history of campaign finance monitoring has its roots in a federal law written in 1867, which prohibited government employees from asking Naval Yard employees for donations. 71 In 1896, the Republican Party spent about $16 million overall, which included William McKinley’s $6–7 million campaign expenses. 72 This raised enough eyebrows that several key politicians, including Theodore Roosevelt, took note. After becoming president in 1901, Roosevelt pushed Congress to look for political corruption and influence in government and elections. 73 Shortly after, Congress passed the Tillman Act (1907), which prohibited corporations from contributing money to candidates running in federal elections. Other congressional acts followed, limiting how much money individuals could contribute to candidates, how candidates could spend contributions, and what information would be disclosed to the public. 74 While these laws were intended to create transparency in campaign funding, government did not have the power to stop the high levels of money entering elections, and little was done to enforce the laws. In 1971, Congress again tried to fix the situation by passing the Federal Election Campaign Act (FECA), which outlined how candidates would report all contributions and expenditures related to their campaigns. The FECA also created rules governing the way organizations and companies could contribute to federal campaigns, which allowed for the creation of political action committees. 75 Finally, a 1974 amendment to the act created the Federal Election Commission (FEC), which operates independently of government and enforces the election laws. While some portions of the FECA were ruled unconstitutional by the courts in Buckley v. Valeo (1976), such as limits on personal spending on campaigns by candidates not using federal money, the FEC began enforcing campaign finance laws in 1976. 76 Even with the new laws and the FEC, money continued to flow into elections. By using loopholes in the laws, political parties and political action committees donated large sums of money to candidates, and new reforms were soon needed. Senators John McCain (R-AZ) and Russ Feingold (former D-WI) cosponsored the Bipartisan Campaign Reform Act of 2002 (BCRA), also referred to as the McCain–Feingold Act.
McCain–Feingold restricts the amount of money given to political parties, which had become a way for companies and PACs to exert influence. It placed limits on total contributions to political parties, prohibited coordination between candidates and PAC campaigns, and required candidates to include personal endorsements on their political ads. It also limited advertisements run by unions and corporations thirty days before a primary election and sixty days before a general election. 77 Soon after the passage of the McCain–Feingold Act, the FEC’s enforcement of the law spurred court cases challenging it. The first, McConnell v. Federal Election Commission (2003), resulted in the Supreme Court’s upholding the act’s restrictions on how candidates and parties could spend campaign contributions. But later court challenges led to the removal of limits on personal spending and ended the ban on ads run by interest groups in the days leading up to an election. 78 In 2010, the Supreme Court’s ruling on Citizens United v. Federal Election Commission led to the removal of spending limits on corporations. Justices in the majority argued that the BCRA violated a corporation’s free speech rights. 79 The court ruling also allowed corporations to place unlimited money into super PACs, or Independent Expenditure-Only Committees. These organizations cannot contribute directly to a candidate, nor can they strategize with a candidate’s campaign. They can, however, raise and spend as much money as they please to support or attack a candidate, including running advertisements and hosting events. 80 In 2012, the super PAC “Restore Our Future” raised $153 million and spent $142 million supporting conservative candidates, including Mitt Romney. “Priorities USA Action” raised $79 million and spent $65 million supporting liberal candidates, including Barack Obama. The total expenditure by super PACs alone was $609 million in the 2012 election and $345 million in the 2014 congressional elections. 81 Several limits on campaign contributions have been upheld by the courts and remain in place. Individuals may contribute up to $2,700 per candidate per election. This means a teacher living in Nebraska may contribute $2,700 to Bernie Sanders for his campaign to become the Democratic presidential nominee, and if Sanders becomes the nominee, the teacher may contribute another $2,700 to his general election campaign. Individuals may also give $5,000 to political action committees and $33,400 to a national party committee. PACs that contribute to more than one candidate are permitted to contribute $5,000 per candidate per election, and up to $15,000 to a national party. PACs created to give money to only one candidate are limited to only $2,700 per candidate, however ( Figure 7.12 ). 82 The amounts are adjusted every two years, based on inflation. These limits are intended to create a more equal playing field for the candidates, so that candidates must raise their campaign funds from a broad pool of contributors. (A short sketch below works through the per-election cap.) NOMINATION STAGE Although the Constitution explains how candidates for national office are elected, it is silent on how those candidates are nominated. Political parties have taken on the role of promoting nominees for offices, such as the presidency and seats in the Senate and the House of Representatives. Because there are no national guidelines, there is much variation in the nomination process.
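Before going further into the nomination process, here is the short sketch promised above of the per-candidate, per-election contribution cap. The dollar figure is the limit quoted in the text for that cycle (the FEC adjusts the amounts for inflation every two years), and the function and ledger names are hypothetical, introduced only for illustration.

# Minimal sketch of the individual per-candidate contribution cap described
# above: $2,700 per candidate per election, where the primary and the general
# count as separate elections. The limit shown is the one quoted in the text.

INDIVIDUAL_PER_CANDIDATE = 2_700

def remaining_allowance(ledger: dict, candidate: str, election: str) -> int:
    """Return how much more a donor may give to one candidate for one election."""
    given_so_far = ledger.get((candidate, election), 0)
    return max(INDIVIDUAL_PER_CANDIDATE - given_so_far, 0)

# The Nebraska teacher from the example above, having maxed out in the
# primary, may still give the full amount again for the general election.
ledger = {("Sanders", "primary"): 2_700}
print(remaining_allowance(ledger, "Sanders", "primary"))  # 0
print(remaining_allowance(ledger, "Sanders", "general"))  # 2700

Because the primary and the general count separately, the donor who has already given the maximum for the primary may give the full amount again for the general election, exactly as in the teacher example in the text.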
States pass election laws and regulations, choose the selection method for party nominees, and schedule the election, but the process also greatly depends on the candidates and the political parties. States, through their legislatures, often influence the nomination method by paying for an election to help parties identify the nominee the voters prefer. Many states fund elections because they can hold several nomination races at once. In 2012, many voters had to choose a presidential nominee, U.S. Senate nominee, House of Representatives nominee, and state-level legislature nominee for their parties. The most common method of picking a party nominee for state, local, and presidential contests is the primary. Party members use a ballot to indicate which candidate they desire for the party nominee. Despite the ease of voting using a ballot, primary elections have a number of rules and variations that can still cause confusion for citizens. In a closed primary, only members of the political party selecting nominees may vote. A registered Green Party member, for example, is not allowed to vote in the Republican or Democratic primary. Parties prefer this method, because it ensures the nominee is picked by voters who legitimately support the party. An open primary allows all voters to vote. In this system, a Green Party member is allowed to pick either a Democratic or Republican ballot when voting. For state-level office nominations, or the nomination of a U.S. Senator or House member, some states use the top-two primary method. A top-two primary, sometimes called a jungle primary, pits all candidates against each other, regardless of party affiliation. The two candidates with the most votes become the final candidates for the general election. Thus, two candidates from the same party could run against each other in the general election. In one California congressional district, for example, four Democrats and two Republicans all ran against one another in the June 2012 primary. The two Republicans received the most votes, so they ran against one another in the general election in November. 83 In 2016, thirty-four candidates filed to run to replace Senator Barbara Boxer (D-CA). In the end, two Democratic women of color emerged to compete head-to-head in the general election. California attorney general Kamala Harris eventually won the seat on Election Day, helping to quadruple the number of women of color in the U.S. Senate overnight. More often than not, however, the top-two system is used in nonpartisan state-level elections, in which none of the candidates are allowed to declare a political party. In general, parties do not like nominating methods that allow non-party members to participate in the selection of party nominees. In 2000, the Supreme Court heard a case brought by the California Democratic Party, the California Republican Party, and the California Libertarian Party. 84 The parties argued that they had a right to determine who associated with the party and who participated in choosing the party nominee. The Supreme Court agreed, limiting the states’ choices for nomination methods to closed and open primaries. Despite the common use of the primary system, at least five states (Alaska, Hawaii, Idaho, Colorado, and Iowa) regularly use caucuses for presidential, state, and local-level nominations. A caucus is a meeting of party members in which nominees are selected informally.
Caucuses are less expensive than primaries because they rely on voting methods such as dropping marbles in a jar, placing names in a hat, standing under a sign bearing the candidate’s name, or taking a voice vote. Volunteers record the votes and no poll workers need to be trained or compensated. The party members at the caucus also help select delegates, who represent their choice at the party’s state- or national-level nominating convention. The Iowa Democratic Caucus is well-known for its spirited nature. The party’s voters are asked to align themselves into preference groups, which often means standing in a room or part of a room that has been designated for the candidate of choice. The voters then get to argue and discuss the candidates, sometimes in a very animated and forceful manner. After a set time, party members are allowed to realign before the final count is taken. The caucus leader then determines how many members support each candidate, which determines how many delegates each candidate will receive (a simplified sketch of this kind of proportional count appears after the convention discussion below). The caucus has its proponents and opponents. Many argue that it is more interesting than the primary and brings out more sophisticated voters, who then benefit from the chance to debate the strengths and weaknesses of the candidates. The caucus system is also more transparent than ballots. The local party members get to see the election outcome and pick the delegates who will represent them at the national convention. There is less of a possibility for deception or dishonesty. Opponents point out that caucuses take two to three hours and are intimidating to less experienced voters. These factors, they argue, lead to lower voter turnout. And they have a point—voter turnout for a caucus is generally 20 percent lower than for a primary. 85 Regardless of which nominating system the states and parties choose, states must also determine which day they wish to hold their nomination. When the nominations are for state-level office, such as governor, the state legislatures receive little to no input from the national political parties. In presidential election years, however, the national political parties pressure most states to hold their primaries or caucuses in March or later. Only Iowa, New Hampshire, and South Carolina are given express permission by the national parties to hold presidential primaries or caucuses in January or February ( Figure 7.13 ). Both political parties protect the three states’ status as the first states to host caucuses and primaries, due to tradition and the relative ease of campaigning in these smaller states. Other states, especially large states like California, Florida, Michigan, and Wisconsin, often are frustrated that they must wait to hold their presidential primary elections later in the season. Their frustration is reasonable: candidates who do poorly in the first few primaries often drop out entirely, leaving fewer candidates to run in caucuses and primaries held in February and later. In 2008, California, New York, and several other states disregarded the national parties’ guidelines and scheduled their primaries the first week of February. In response, Florida and Michigan moved their primaries to January and many other states moved forward to March. This was not the first time states participated in frontloading and scheduled the majority of the primaries and caucuses at the beginning of the primary season. It was, however, one of the worst occurrences.
States have been frontloading since the 1976 presidential election, with the problem becoming more severe in the 1992 election and later. 86 Political parties allot delegates to their national nominating conventions based on the number of registered party voters in each state. California, the state with the most Democrats, sent 548 delegates to the 2016 Democratic National Convention, while Wyoming, with far fewer Democrats, sent only 18 delegates. When the national political parties want to prevent states from frontloading, or doing anything else they deem detrimental, they can change the state's delegate count, which in essence increases or reduces the state's say in who becomes the presidential nominee. In 1996, the Republicans offered bonus delegates to states that held their primaries and caucuses later in the nominating season. 87 In 2008, the national parties ruled that only Iowa, South Carolina, and New Hampshire could hold primaries or caucuses in January. Both parties also reduced the number of delegates from Michigan and Florida as punishment for those states' holding early primaries. 88 Despite these efforts, candidates in 2008 had a very difficult time campaigning during the tight window caused by frontloading.

One of the criticisms of the modern nominating system is that parties today have less influence over who becomes their nominee. In the era of party "bosses," candidates who hoped to run for president needed the blessing and support of party leadership and a strong connection with the party's values. Now, anyone can run for a party's nomination. Candidates with enough money to campaign the longest can build media attention, momentum, and voter support, making them more likely to become the nominee than candidates without these resources, regardless of what the party leadership wants. This new reality has dramatically increased the number of politically inexperienced candidates running for national office. In 2012, for example, eleven candidates ran multistate campaigns for the Republican nomination. Dozens more had their names on one or two state ballots. With a long list of challengers, candidates must find more ways to stand out, leading them to espouse extreme positions or display high levels of charisma. Add to this that primary and caucus voters are often more extreme in their political beliefs, and it is easy to see why fewer moderates become party nominees. The 2016 primary campaign of Donald Trump shows that grabbing the media's attention with fiery partisan rhetoric can get a campaign started strong. This does not guarantee a candidate will make it through the primaries, however.

Link to Learning
Take a look at Campaigns & Elections to see what hopeful candidates are reading.

CONVENTION SEASON
Once it is clear who the parties' nominees will be, presidential and gubernatorial campaigns enter a quiet period. Candidates run fewer ads and concentrate on raising funds for the fall. This is a crucial time because a lack of money can harm their chances. The media spend much of the summer keeping track of the fundraising totals while the political parties plan their conventions. State parties host state-level conventions during gubernatorial elections, while national parties host national conventions during presidential election years. Party conventions are typically held between June and September, with state-level conventions earlier in the summer and national conventions later.
Conventions normally last four to five days, with days devoted to platform discussion and planning and nights reserved for speeches (Figure 7.14). Local media cover state-level conventions, broadcasting the speeches given by the party nominees for governor and lieutenant governor, and perhaps those of important guests or the state's U.S. senators. The national media cover the Democratic and Republican conventions during presidential election years, mainly showing the speeches. Some cable networks broadcast delegate voting and voting on party platforms. Members of the candidate's family and important party members generally speak during the first few days of a national convention, with the vice presidential nominee speaking on the next-to-last night and the presidential candidate on the final night. The two chosen candidates then hit the campaign trail for the general election. The party with the incumbent president holds the later convention, so in 2016, the Democrats held their convention after the Republicans.

There are rarely surprises at the modern convention. Thanks to party rules, the nominee for each party is generally already clear. In 2008, John McCain had locked up the Republican nomination in March by winning enough delegates, while in 2012, President Obama was an unchallenged incumbent whose nomination was never in doubt. In 2016, both apparent nominees (Democrat Hillary Clinton and Republican Donald Trump) faced primary opponents, Democrat Bernie Sanders and Republican Ted Cruz, who stayed in the race even when the nominations were effectively sewn up, though no "convention surprise" took place.

The naming of the vice president is generally not a surprise either. Even if a presidential nominee tries to keep it a secret, the news often leaks out before the party convention or official announcement. In 2004, the media announced John Edwards was John Kerry's running mate. The Kerry campaign had not made a formal announcement, but an amateur photographer had taken a picture of Edwards' name being added to the candidate's plane and posted it to an aviation message board.

Despite the lack of surprises, there are several reasons to host traditional conventions. First, the parties require that the delegates officially cast their ballots. Delegates from each state come to the national party convention to publicly state who their state's voters selected as the nominee. Second, delegates bring state-level concerns and issues to the national convention for discussion, while local-level delegates bring concerns and issues to state-level conventions. The individual issues that concern local party members, like limiting abortions in a state or removing restrictions on gun ownership, are called planks, and they are discussed and voted upon by the delegates and party leadership at the convention. Just as wooden planks make up a platform, issues important to the party and party delegates make up the party platform. The parties take the cohesive list of issues and concerns and frame the election around the platform. Candidates will try to keep to the platform when campaigning, and outside groups that support them, such as super PACs, may also try to keep to these issues. Third, conventions are covered by most news networks and cable programs. This helps the party nominee get positive attention while surrounded by loyal delegates, family members, friends, and colleagues.
For presidential candidates, this positivity often leads to a convention bump, a small increase in the candidate's favorability ratings. If a candidate does not get the bump, however, the campaign manager has to evaluate whether the candidate is connecting well with the voters or is out of step with the party faithful. In 2004, John Kerry spent the Democratic convention talking about getting U.S. troops out of the war in Iraq and increasing spending at home. Yet after his patriotic and positive convention, Gallup recorded no convention bump, and the voters did not appear more likely to vote for him.

GENERAL ELECTIONS AND ELECTION DAY
The general election campaign period occurs between mid-August and early November. These elections are simpler than primaries and conventions, because there are only two major party candidates and a few minor party candidates. About 50 percent of voters will make their decisions based on party membership, so the candidates will focus on winning over independent voters and visiting states where the election is close. 89 In 2016, both candidates sensed shifts in the electorate that led them to visit states that had not recently been battlegrounds. Clinton visited Republican stronghold Arizona as Latino voter interest surged. Defying conventional campaign movements, Trump spent many hours over the last days of the campaign in the Democratic Rust Belt states, namely Michigan and Wisconsin. Trump ended up winning both states and industrial Pennsylvania as well.

Debates are an important element of the general election season, allowing voters to see candidates answer questions on policy and prior decisions. While most voters think only of presidential debates, the general election season sees many debates. In a number of states, candidates for governor are expected to participate in televised debates, as are candidates running for the U.S. Senate. Debates give voters a chance not only to hear answers, but also to see how candidates hold up under stress. Because television and the Internet make it possible to stream footage to a wide audience, modern campaign managers understand the importance of a debate (Figure 7.15).

In 1960, the first televised presidential debate showed that answering questions well is not the only way to impress voters. Senator John F. Kennedy, the Democratic nominee, and Vice President Richard Nixon, the Republican nominee, prepared in slightly different ways for the first of their four debates. Although both studied answers to possible questions, Kennedy also worked on the delivery of his answers, including accent, tone, facial displays, and body movements, as well as overall appearance. Nixon, however, was ill in the days before the debate and appeared sweaty and gaunt. He also chose not to wear makeup, a decision that left his pale, unshaven face vulnerable. 90 Interestingly, while people who watched the debate thought Kennedy won, those listening on the radio saw the debate as more of a draw.

Insider Perspective
Inside the Debate
Debating an opponent in front of sixty million television viewers is intimidating. Most presidential candidates spend days, if not weeks, preparing. Newspapers and cable news programs proclaim winners and losers, and debates can change the tide of a campaign. Yet Paul Begala, a strategist with Bill Clinton's 1992 campaign, saw debates differently. In one of his columns for CNN, Begala recommends that candidates relax and have a little fun.
First, Begala says, debates are relatively easy, more like a scripted program than an interview that puts candidates on the spot. Candidates can memorize answers and deliver them convincingly, making sure they hit their mark. Second, a candidate needs a clear message explaining why the voters should pick him or her. Is he or she a needed change? Or the only experienced candidate? If the candidate's debate answers reinforce this message, the voters will remember. Third, candidates should be humorous, witty, and comfortable with their knowledge. Trying to be too formal or cramming information at the last minute will cause the candidate to be awkward or get overwhelmed. Finally, a candidate is always on camera. Making faces, sighing at an opponent, or simply making a mistake gives the media something to discuss and can cause a loss. In essence, Begala argues that if candidates wish to do well, preparation and confidence are key factors. 91

Is Begala's advice good? Why or why not? What positives or negatives would make a candidate's debate performance stand out for you as a voter?

While debates are not just about a candidate's looks, most debate rules contain language that prevents candidates from artificially enhancing their physical qualities. For example, prior rules have prohibited shoes that increase a candidate's height, banned prosthetic devices that change a candidate's physical appearance, and limited camera angles to prevent unflattering side and back shots. Candidates and their campaign managers are aware that visuals matter.

Debates are generally over by the end of October, just in time for Election Day. Beginning with the election of 1792, presidential elections were to be held in the thirty-four days prior to the "first Wednesday in December." 92 In 1845, Congress passed legislation that moved the presidential Election Day to the first Tuesday after the first Monday in November, and in 1872, elections for the House of Representatives were also moved to that same Tuesday. 93 The United States was then an agricultural country, and because a number of states restricted voting to property-owning males over twenty-one, farmers made up nearly 74 percent of voters. 94 Scheduling Election Day in November allowed time for the lucrative fall harvest to be brought in and the farming season to end. And, while not all members of government were of the same religion, many wanted to ensure that voters were not kept from the polls by a weekend religious observance. Finally, business and mercantile concerns often closed their books on the first of the month. Rather than let accounting get in the way of voting, the bill's language forces Election Day to fall between the second and eighth of the month.

THE ELECTORAL COLLEGE
Once the voters have cast ballots in November and all the election season madness comes to a close, races for governors and local representatives may be over, but the constitutional process of electing a president has only begun. The electors of the Electoral College travel to their respective state capitols and cast their votes in mid-December, often by signing a certificate recording their vote. In most cases, electors cast their ballots for the candidate who won the most votes in their state. The states then forward the certificates to the U.S. Senate. The number of Electoral College votes granted to each state equals the total number of representatives and senators that state has in the U.S. Congress or, in the case of Washington, DC, as many electors as it would have if it were a state.
The number of representatives may fluctuate based on state population, which is determined every ten years by the U.S. Census, mandated by Article I, Section 2, of the Constitution. For the 2016 and 2020 presidential elections, there are a total of 538 electors in the Electoral College, and a majority of 270 electoral votes is required to win the presidency.

Once the electoral votes have been read by the president of the Senate (i.e., the vice president of the United States) during a special joint session of Congress in January, the presidential candidate who received the majority of electoral votes is officially named president. Should a tie occur, the sitting House of Representatives elects the president, with each state receiving one vote. While this rarely occurs, both the 1800 and the 1824 elections were decided by the House of Representatives. As election night 2016 played out after the polls closed, one such tie scenario was in play. However, the states that Hillary Clinton needed to make that tie went narrowly to Trump. Had the tie occurred, the Republican House would likely have selected Trump as president anyway.

As political parties became stronger and the Progressive Era's influence shaped politics from the 1890s to the 1920s, states began to allow state parties rather than legislators to nominate a slate of electors. Electors cannot be elected officials, nor can they work for the federal government. Since the Republican and Democratic parties choose faithful party members who have worked hard for their candidates, the modern system decreases the chance electors will vote differently from the state's voters. There is no guarantee of this, however. Occasionally there are examples of faithless electors. In 2000, the majority of the District of Columbia's voters cast ballots for Al Gore, and all three electoral votes should have been cast for him. Yet one of the electors cast a blank ballot, denying Gore a precious electoral vote, reportedly to protest the unequal representation of the District in the Electoral College. In 2004, one of the Minnesota electors voted for John Edwards, the vice presidential nominee, to be president (Figure 7.16) and misspelled the candidate's last name in the process. Some believe this was a result of confusion rather than a political statement. The electors' names and votes are publicly available on the electoral certificates, which are scanned and documented by the National Archives and easily available for viewing online.

In forty-eight states and the District of Columbia, the candidate who wins the most votes in November receives all the state's electoral votes, and only the electors from that party will vote. This is often called the winner-take-all system. In two states, Nebraska and Maine, the electoral votes are divided. The candidate who wins the state gets two electoral votes, but the winner of each congressional district also receives an electoral vote. In 2008, for example, Republican John McCain won two congressional districts and the majority of the voters across the state of Nebraska, earning him four electoral votes from Nebraska. Obama won in one congressional district and earned one electoral vote from Nebraska. 95 In 2016, Republican Donald Trump won one congressional district in Maine, even though Hillary Clinton won the state overall. This Electoral College voting method is referred to as the district system.
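The arithmetic of the district system can be made concrete with a short sketch. The Python snippet below is a simplified illustration, not an official allocation algorithm; the function name and input format are invented for this example. It reproduces the 2008 Nebraska outcome described above: two statewide electoral votes go to the statewide winner, and each congressional district awards one vote to its own winner.

```python
# A minimal sketch of the district system used by Nebraska and Maine
# (hypothetical helper, not an official algorithm).

def district_system(statewide_winner, district_winners):
    """Return each candidate's electoral votes under the district system."""
    votes = {}
    votes[statewide_winner] = votes.get(statewide_winner, 0) + 2  # two "senatorial" votes statewide
    for winner in district_winners:                               # one vote per congressional district
        votes[winner] = votes.get(winner, 0) + 1
    return votes

# Nebraska, 2008: McCain won statewide and two of three districts;
# Obama carried the Omaha-area district.
print(district_system("McCain", ["McCain", "McCain", "Obama"]))
# {'McCain': 4, 'Obama': 1}
```

The totals match Nebraska's five electoral votes (three districts plus two senators), with four going to McCain and one to Obama, as described above.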
MIDTERM ELECTIONS
Presidential elections garner the most attention from the media and political elites. Yet they are not the only important elections. The even-numbered years between presidential years, like 2014 and 2018, are reserved for congressional elections, sometimes referred to as midterm elections because they fall in the middle of the president's term. Midterm elections are held because all members of the House of Representatives and one-third of the senators come up for reelection every two years. During a presidential election year, members of Congress often experience the coattail effect, which gives members of a popular presidential candidate's party an increase in popularity and raises their odds of retaining office. During a midterm election year, however, the president's party is often blamed for the president's actions or inaction. Representatives and senators from the sitting president's party are more likely to lose their seats during a midterm election year. Many recent congressional realignments, in which the House or Senate changed from Democratic to Republican control, occurred because of this reverse-coattail effect during midterm elections. The most recent example is the 2010 election, in which control of the House returned to the Republican Party after two years of a Democratic presidency.

7.4 Campaigns and Voting

Learning Objectives
By the end of this section, you will be able to:
Compare campaign methods for elections
Identify strategies campaign managers use to reach voters
Analyze the factors that typically affect a voter's decision

Campaign managers know that to win an election, they must do two things: reach voters with their candidate's information and get voters to show up at the polls. To accomplish these goals, candidates and their campaigns will often try to target those most likely to vote. Unfortunately, these voters change from election to election and sometimes from year to year. Primary and caucus voters are different from voters who vote only during presidential general elections. Some years see an increase in younger voters turning out to vote. Elections are unpredictable, and campaigns must adapt to be effective.

FUNDRAISING
Even with a carefully planned and orchestrated presidential run, early fundraising is vital for candidates. Money helps them win, and the ability to raise money identifies those who are viable. In fact, the more money a candidate raises, the more he or she will continue to raise. EMILY's List, a political action group, was founded on this principle; its name is an acronym for "Early Money Is Like Yeast" (it makes the dough rise). This group helps progressive women candidates gain early campaign contributions, which in turn helps them get further donations (Figure 7.17). Early in the 2016 election season, several candidates had fundraised well ahead of their opponents. Hillary Clinton, Jeb Bush, and Ted Cruz were the top fundraisers by July 2015. Clinton reported $47 million, Cruz $14 million, and Bush $11 million in contributions. In comparison, Bobby Jindal and George Pataki (who both dropped out relatively early) each reported less than $1 million in contributions during the same period. Bush later reported over $100 million in contributions, while the other Republican candidates continued to report lower contributions. Media stories about Bush's fundraising discussed his powerful financial networking, while coverage of the other candidates focused on their lack of money.
Donald Trump, the eventual Republican nominee and president, showed a comparatively low fundraising amount in the primary phase, as he enjoyed much free press coverage because of his notoriety. He also flirted with the idea of being an entirely self-funded candidate.

COMPARING PRIMARY AND GENERAL CAMPAIGNS
Although candidates have the same goal for primary and general elections, which is to win, these elections differ greatly from each other and require very different strategies. Primary elections are more difficult for the voter. There are more candidates vying to become their party's nominee, and party identification is not a useful cue because each party has many candidates rather than just one. In the 2016 presidential election, Republican voters in the early primaries were presented with a number of options, including Mike Huckabee, Donald Trump, Jeb Bush, Ted Cruz, Marco Rubio, John Kasich, Chris Christie, Carly Fiorina, Ben Carson, and more. (Huckabee, Christie, and Fiorina dropped out relatively early.) Democrats had to decide between Hillary Clinton, Bernie Sanders, and Martin O'Malley (who soon dropped out). Voters must find more information about each candidate to decide which is closest to their preferred issue positions. Due to time limitations, voters may not research all the candidates. Nor will all the candidates get enough media or debate time to reach the voters. These issues make campaigning in a primary election difficult, so campaign managers tailor their strategy.

First, name recognition is extremely important. Voters are unlikely to cast a vote for an unknown. Some candidates, like Hillary Clinton and Jeb Bush, have held national office themselves or are related to someone who has, but most candidates will be governors, senators, or local politicians who are less well-known nationally. Barack Obama was a junior senator from Illinois and Bill Clinton was governor of Arkansas prior to running for president. Voters across the country had little information about them, and both candidates needed media time to become known. While well-known candidates have longer records that can be attacked by the opposition, they also have an easier time raising campaign funds because their odds of winning are better. Newer candidates face the challenge of proving themselves during the short primary season and are more likely to lose. In 2016, both eventual party nominees had massive name recognition. Hillary Clinton was known from having been First Lady, a U.S. senator from New York, and secretary of state. Donald Trump had name recognition as an iconic real estate tycoon with Trump buildings all over the world, plus his role as a reality TV star on shows like The Apprentice. With Arnold Schwarzenegger having successfully campaigned for California governor, perhaps it should not have surprised the country when Trump was elected president.

Second, visibility is crucial when a candidate is one in a long parade of faces. Given that voters will want to find quick, useful information about each, candidates will try to get the media's attention and pick up momentum. Media attention is especially important for newer candidates. Most voters assume a candidate's website and other campaign material will be skewed, showing only the most positive information.
The media, on the other hand, are generally considered more reliable and unbiased than a candidate's campaign materials, so voters turn to news networks and journalists to pick up information about the candidates' histories and issue positions. Candidates are aware of voters' preference for quick information and news and try to get interviews or news coverage for themselves. Candidates also benefit because news coverage is longer than a campaign ad and costs the campaign nothing. For all these reasons, campaign ads in primary elections rarely mention political parties and instead focus on issue positions or name recognition. Many of the best primary ads help the voters identify issue positions they have in common with the candidate. In 2008, for example, Hillary Clinton ran a holiday ad in which she was seen wrapping presents. Each present had a card with an issue position listed, such as "bring back the troops" or "universal pre-kindergarten." In a similar, more humorous vein, Mike Huckabee gained name recognition and issue placement with his 2008 primary ad. The "HuckChuck" spot had Chuck Norris repeat Huckabee's name several times while listing the candidate's issue positions. Norris's line, "Mike Huckabee wants to put the IRS out of business," was one of many statements that repeatedly used Huckabee's name, increasing voters' recognition of it (Figure 7.18). While neither of these candidates won the nomination, the ads were viewed by millions and were successful as primary ads.

By the general election, each party has only one candidate, and campaign ads must accomplish a different goal with different voters. Because most party-affiliated voters will cast a ballot for their party's candidate, the campaigns must try to reach the independent and undecided, as well as convince their own party members to get out and vote. Some ads will focus on issue and policy positions, comparing the two main party candidates. Other ads will remind party loyalists why it is important to vote. President Lyndon B. Johnson used the infamous "Daisy Girl" ad, which cut from a little girl counting daisy petals to an atomic bomb being dropped, to explain why voters needed to turn out and vote for him. If the voters stayed home, Johnson implied, his opponent, Republican Barry Goldwater, might start an atomic war. The ad aired once as a paid spot on NBC before it was pulled, but the footage appeared on other news stations as newscasters discussed the controversy over it. 96 More recently, Mitt Romney used the economy in 2012 to remind moderates and independents that household incomes had dropped and the national debt had increased. The ad's goal was to reach voters who had not already decided on a candidate and who would use the economy as a primary deciding factor.

Part of the reason Johnson's campaign ad worked is that more voters turn out for a general election than for other elections. These additional voters are often less ideological and more independent, making them harder to target but possible to win over. They are also less likely to complete a lot of research on the candidates, so campaigns often try to create emotion-based negative ads. While negative ads may decrease voter turnout by making voters more cynical about politics and the election, voters watch and remember them. 97

Another source of negative ads is groups outside the campaigns. Sometimes, shadow campaigns, run by political action committees and other organizations without the coordination or guidance of candidates, also use negative ads to reach voters.
Even before the Citizens United decision allowed corporations and interest groups to run ads supporting candidates, shadow campaigns existed. In 2004, the Swift Boat Veterans for Truth organization ran ads attacking John Kerry's military service record, and MoveOn attacked George W. Bush's decision to commit to the wars in Afghanistan and Iraq. In 2014, super PACs poured more than $300 million into supporting candidates. 98

Link to Learning
Want to know how much money federal candidates and PACs are raising? Visit the Campaign Finance Disclosure Portal at the Federal Election Commission website.

General campaigns also try to get voters to the polls in closely contested states. In 2004, realizing that it would be difficult to convince Ohio Democrats to vote Republican, George W. Bush's campaign focused on getting the state's Republican voters to the polls. The volunteers walked through precincts and knocked on Republican doors to raise interest in Bush and the election. Volunteers also called Republican and former Republican households to remind them when and where to vote. 99 The strategy worked, and it reminded future campaigns that an organized effort to get out the vote is still a viable way to win an election.

TECHNOLOGY
Campaigns have always been expensive, and they have often been negative and nasty. The 1828 "Coffin Handbill" that John Quincy Adams's supporters circulated, for instance, listed the names and circumstances of the executions his opponent Andrew Jackson had ordered (Figure 7.19). This was in addition to gossip and verbal attacks against Jackson's wife, who had accidentally committed bigamy when she married him without a proper divorce. Campaigns and candidates have not become more amicable in the years since then.

Once television became a fixture in homes, campaign advertising moved to the airwaves. Television let candidates connect with voters through video, appealing to them directly and engaging them emotionally. While Adlai Stevenson and Dwight D. Eisenhower were the first to use television in their 1952 and 1956 campaigns, the ads were more like jingles with images. Stevenson's "Let's Not Forget the Farmer" ad had a catchy tune, but its animated images were not serious and contributed little to the message. The "Eisenhower Answers America" spots allowed Eisenhower to answer policy questions, but his answers were glib rather than helpful. John Kennedy's campaign was the first to use images to show voters that the candidate was the choice for everyone. His ad, "Kennedy," combined the jingle "Kennedy for me" and photographs of a diverse population dealing with life in the United States.

Link to Learning
The Museum of the Moving Image has collected presidential campaign ads from 1952 through today, including the "Kennedy for Me" spot mentioned above. Take a look and see how candidates have created ads to get the voters' attention and votes over time.

Over time, however, ads became more negative and manipulative. In reaction, the Bipartisan Campaign Reform Act of 2002, or McCain-Feingold, included a requirement that candidates stand by their ads and include a recorded statement within each ad stating that they approved the message. Although ads, especially those run by super PACs, continue to be negative, candidates can no longer dodge responsibility for them.

Candidates also frequently use interviews on late-night television to get their messages out. Soft news, or infotainment, is a new type of news that combines entertainment and information.
Shows like The Daily Show and Last Week Tonight make the news humorous or satirical while helping viewers become more educated about events around the nation and the world. 100 In 2008, Huckabee, Obama, and McCain visited popular programs like The Daily Show, The Colbert Report, and Late Night with Conan O'Brien to target informed voters in the under-45 age bracket. The candidates were able to show their funny sides and appear like average Americans, while talking a bit about their policy preferences. By the fall of 2015, The Late Show with Stephen Colbert had already interviewed most of the potential presidential candidates, including Hillary Clinton, Bernie Sanders, Jeb Bush, Ted Cruz, and Donald Trump.

The Internet has given candidates a new platform and a new way to target voters. In the 2000 election, campaigns moved online and created websites to distribute information. They also began using search engine results to target voters with ads. In 2004, Democratic candidate Howard Dean used the Internet to reach out to potential donors. Rather than host expensive dinners to raise funds, his campaign posted footage on his website of the candidate eating a turkey sandwich. The gimmick brought in over $200,000 in campaign donations and reiterated Dean's commitment to being a down-to-earth candidate. Candidates also use social media, such as Facebook, Twitter, and YouTube, to interact with supporters and get the attention of younger voters.

VOTER DECISION MAKING
When citizens do vote, how do they make their decisions? The election environment is complex, and most voters don't have time to research everything about the candidates and issues. Yet they will need to make a fully rational assessment of the choices for an elected office. To meet this goal, they tend to take shortcuts.

One popular shortcut is simply to vote using party affiliation. Many political scientists consider party-line voting to be rational behavior because citizens register for parties based upon either position preference or socialization. Similarly, candidates align with parties based upon their issue positions. A Democrat who votes for a Democrat is very likely selecting the candidate closest to his or her personal ideology. While party identification is a voting cue, it also makes for a logical decision.

Citizens also use party identification to make decisions via straight-ticket voting, choosing every Republican or Democratic Party member on the ballot. In some states, such as Texas or Michigan, selecting one box at the top of the ballot gives a single party all the votes on the ballot (Figure 7.20). Straight-ticket voting does cause problems in states that include non-partisan positions on the ballot. In Michigan, for example, the top of the ballot (presidential, gubernatorial, senatorial, and representative seats) will be partisan, and a straight-ticket vote will give a vote to all the candidates in the selected party. But the middle or bottom of the ballot includes seats for local offices or judicial seats, which are non-partisan. These offices would receive no vote, because straight-ticket votes go only to partisan seats. In 2012, actors from the former political drama The West Wing came together to create an advertisement for Mary McCormack's sister Bridget, who was running for a non-partisan seat on the Michigan Supreme Court. The ad reminded straight-ticket voters to cast a ballot for the court seats as well; otherwise, they would miss an important election. McCormack won the seat.
Straight-ticket voting does have the advantage of reducing ballot fatigue. Ballot fatigue occurs when someone votes only for the top or most important ballot positions, such as president or governor, and stops voting rather than continuing to the bottom of a long ballot. In 2012, for example, 70 percent of registered voters in Colorado cast a ballot for the presidential seat, yet only 54 percent voted yes or no on retaining Nathan B. Coats for the state supreme court. 101

Voters also make decisions based upon candidates' physical characteristics, such as attractiveness or facial features. 102 They may also vote based on gender or race, because they assume the elected official will make policy decisions based on a demographic shared with the voters. Candidates are very aware of voters' focus on these non-political traits. In 2008, a sizable portion of the electorate wanted to vote for either Hillary Clinton or Barack Obama because each offered a new demographic: the first woman or the first black president. Demographics hurt John McCain that year, because many people believed that at 71 he was too old to be president. 103 Hillary Clinton faced this situation again in 2016 as she became the first female nominee from a major party. In essence, attractiveness can make a candidate appear more competent, which in turn can help him or her ultimately win. 104

Aside from party identification and demographics, voters will also look at issues or the economy when making a decision. For some single-issue voters, a candidate's stance on abortion rights will be a major factor, while other voters may look at the candidates' beliefs on the Second Amendment and gun control. Single-issue voting may not require much more effort by the voter than simply using party identification; however, many voters are likely to seek out a candidate's position on a multitude of issues before making a decision. They will use the information they find in several ways.

Retrospective voting occurs when the voter looks at the candidate's past actions and the past economic climate and makes a decision using only these factors. This behavior may occur during economic downturns or after political scandals, when voters hold politicians accountable and do not wish to give the representative a second chance. Pocketbook voting occurs when the voter looks at his or her personal finances and circumstances to decide how to vote. Someone having a harder time finding employment or seeing investments suffer during a particular candidate's or party's control of government will vote for a different candidate or party than the incumbent. Prospective voting occurs when the voter applies information about a candidate's past behavior to decide how the candidate will act in the future. For example, will the candidate's voting record or actions help the economy and better prepare him or her to be president during an economic downturn? The challenge of this voting method is that the voters must use a lot of information, which might be conflicting or unrelated, to make an educated guess about how the candidate will perform in the future. Voters do appear to rely on prospective and retrospective voting more often than on pocketbook voting.

In some cases, a voter may cast a ballot strategically, choosing a second- or third-choice candidate either because his or her preferred candidate cannot win or in the hope of preventing another candidate from winning.
This type of voting is likely to happen when there are multiple candidates for one position or multiple parties running for one seat. 105 In Florida and Oregon, for example, Green Party voters (who tend to be liberal) may choose to vote for a Democrat if the Democrat might otherwise lose to a Republican. Similarly, in Georgia, while a Libertarian may be the preferred candidate, the voter would rather have the Republican candidate win over the Democrat and will vote accordingly. 106

One other way voters make decisions is through incumbency. In essence, this is retrospective voting, but it requires little of the voter. In congressional and local elections, incumbents win reelection up to 90 percent of the time, a result called the incumbency advantage. What contributes to this advantage, and what often persuades competent challengers not to run? First, incumbents have name recognition and voting records. The media are more likely to interview them because they have advertised their name over several elections and have voted on legislation affecting the state or district. Incumbents have also won election before, which increases the odds that political action committees and interest groups will give them money; most interest groups will not give money to a candidate destined to lose. Incumbents also have franking privileges, which allow them a limited amount of free mail to communicate with the voters in their district. While these mailings may not be sent in the days leading up to an election (sixty days for a senator and ninety days for a House member), congressional representatives are able to build a free relationship with voters through them. 107 Moreover, incumbents have existing campaign organizations, while challengers must build new organizations from the ground up. Lastly, incumbents have more money in their war chests than most challengers.

Another incumbent advantage is gerrymandering, the drawing of district lines to guarantee a desired electoral outcome. Every ten years, following the U.S. Census, the number of House of Representatives members allotted to each state is determined based on the state's population. If a state gains or loses seats in the House, the state must redraw districts to ensure each district has an equal number of citizens. States may also choose to redraw these districts at other times and for other reasons. 108 If a district is drawn to ensure that it includes a majority of Democratic or Republican Party members within its boundaries, for instance, then candidates from that party will have an advantage. Gerrymandering helps local legislative candidates and members of the House of Representatives, who win reelection over 90 percent of the time. Senators and presidents do not benefit from gerrymandering because they do not run in districts. Presidents and senators win states, so they benefit only from war chests and name recognition. This is one reason why senators running in 2014, for example, won reelection only 82 percent of the time. 109

Link to Learning
Since 1960, the American National Election Studies has been asking a random sample of voters a battery of questions about how they voted. The data are available at the Inter-university Consortium for Political and Social Research at the University of Michigan.
7.5 Direct Democracy

Learning Objectives
By the end of this section, you will be able to:
Identify the different forms of and reasons for direct democracy
Summarize the steps needed to place initiatives on a ballot
Explain why some policies are made by elected representatives and others by voters

The majority of elections in the United States are held to facilitate indirect democracy. Elections allow the people to pick representatives to serve in government and make decisions on the citizens' behalf. Representatives pass laws, implement taxes, and carry out decisions. Although direct democracy had been used in some of the colonies, the framers of the Constitution granted voters no legislative or executive powers, because they feared the masses would make poor decisions and be susceptible to whims. During the Progressive Era, however, governments began granting citizens more direct political power. States that formed and joined the United States after the Civil War often assigned their citizens some methods of directly implementing laws or removing corrupt politicians. Citizens now use these powers at the ballot to change laws and direct public policy in their states.

DIRECT DEMOCRACY DEFINED
Direct democracy occurs when policy questions go directly to the voters for a decision. These decisions include funding, budgets, candidate removal, candidate approval, policy changes, and constitutional amendments. Not all states allow direct democracy, nor does the United States government.

Direct democracy takes many forms. It may occur locally or statewide. Local direct democracy allows citizens to propose and pass laws that affect local towns or counties. Towns in Massachusetts, for example, may choose to use town meetings, gatherings of the town's eligible voters, to make decisions on budgets, salaries, and local laws. 110

Link to Learning
To learn more about what type of direct democracy is practiced in your state, visit the University of Southern California's Initiative & Referendum Institute. This site also allows you to look up initiatives and measures that have appeared on state ballots.

Statewide direct democracy allows citizens to propose and pass laws that affect state constitutions, state budgets, and more. Most states in the western half of the country allow citizens all forms of direct democracy, while most states in the eastern and southern regions allow few or none of these forms (Figure 7.21). States that joined the United States after the Civil War are more likely to have direct democracy, possibly due to the influence of Progressives during the late 1800s and early 1900s. Progressives believed citizens should be more active in government and democracy, a hallmark of direct democracy.

There are three forms of direct democracy used in the United States. A referendum asks citizens to confirm or repeal a decision made by the government. A legislative referendum occurs when a legislature passes a law or a series of constitutional amendments and presents them to the voters to ratify with a yes or no vote. A judicial appointment to a state supreme court may require voters to confirm whether the judge should remain on the bench. Popular referendums occur when citizens petition to place a referendum on a ballot to repeal legislation enacted by their state government. This form of direct democracy gives citizens a limited amount of power, but it does not allow them to overhaul policy or circumvent the government.
The most common form of direct democracy is the initiative, or proposition. An initiative is normally a law or constitutional amendment proposed and passed by the citizens of a state. Initiatives completely bypass the legislature and governor, but they are subject to review by the state courts if they are not consistent with the state or national constitution. The process to pass an initiative is not easy and varies from state to state. Most states require that a petitioner or the organizers supporting an initiative file paperwork with the state and include the proposed text of the initiative. This allows the state or local office to determine whether the measure is legal, as well as estimate the cost of implementing it. This approval may come at the beginning of the process or after organizers have collected signatures. The initiative may be reviewed by the state attorney general, as in Oregon's procedures, or by another state official or office. In Utah, the lieutenant governor reviews measures to ensure they are constitutional.

Next, organizers gather registered voters' signatures on a petition. The number of signatures required is often a percentage of the number of votes from a past election. In California, for example, the required numbers are 5 percent (for a law) and 8 percent (for an amendment) of the votes cast in the last gubernatorial election. This means that, through 2018, it will take 365,880 signatures to place a law on the ballot and 585,407 to place a constitutional amendment on the ballot. 111

Once the petition has enough signatures from registered voters, it is approved by a state agency or the secretary of state for placement on the ballot. Signatures are verified by the state or a county elections office to ensure the signatures are valid. If the petition is approved, the initiative is then placed on the next ballot, and the organization campaigns to voters.

While the process is relatively clear, each step can take a lot of time and effort. First, most states place a time limit on the signature collection period. Organizations may have only 150 days to collect signatures, as in California, or as long as two years, as in Arizona. For larger states, the time limit may pose a dilemma if the organization is trying to collect more than 500,000 signatures from registered voters. Second, the state may limit who may circulate the petition and collect signatures. Some states, like Colorado, restrict what a signature collector may earn, while Oregon bans payments to signature-collecting groups. Third, the minimum number of signatures required affects the number of ballot measures. Arizona had more than sixty ballot measures on the 2000 general election ballot, because the state requires so few signatures to get an initiative on the ballot. Oklahomans see far fewer ballot measures because the number of required signatures is higher.

Another consideration is that, as we've seen, voters in primaries are more ideological and more likely to research the issues. Measures that are complex or require a lot of research, such as a lend-lease bond or changes in the state's eminent-domain language, may do better on a primary ballot. Measures that deal with social policy, such as laws preventing animal cruelty, may do better on a general election ballot, when more of the general population comes out to vote. Proponents of the amendments or laws will take this into consideration as they plan.
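California's signature thresholds follow directly from the percentages above, and a back-of-the-envelope check makes the arithmetic concrete. The Python sketch below is illustrative only: the vote total is the roughly 7.32 million votes cast in the 2014 gubernatorial election implied by the text's figures, and rounding up to the next whole signature is an assumption, not a statutory rule cited by the text.

```python
# Back-of-the-envelope check of California's initiative signature thresholds.
# Assumptions: thresholds are a percentage of the last gubernatorial vote,
# rounded up to the next whole signature; the vote total is inferred from
# the figures in the text, not taken from an official source.
import math

votes_cast_2014 = 7_317_581  # assumed total votes for governor in 2014

def signatures_required(rate, votes):
    """Signature threshold as a percentage of the last gubernatorial vote."""
    return math.ceil(rate * votes)

print(signatures_required(0.05, votes_cast_2014))  # 365880 -> statutory initiative
print(signatures_required(0.08, votes_cast_2014))  # 585407 -> constitutional amendment
```

Under these assumptions the computed thresholds match the 365,880 and 585,407 signatures quoted above.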
Finally, the recall is one of the more unusual forms of direct democracy; it allows voters to decide whether to remove a government official from office. All states have ways to remove officials, but removal by voters is less common. The 2003 recall of California Governor Gray Davis and his replacement by Arnold Schwarzenegger is perhaps the most famous such recall. The recent attempt by voters in Wisconsin to recall Governor Scott Walker shows how contentious and expensive a recall can be. Walker spent over $60 million in the election to retain his seat. 112

POLICYMAKING THROUGH DIRECT DEMOCRACY
Politicians are often unwilling to wade into highly political waters if they fear it will harm their chances for reelection. When a legislature refuses to act or change current policy, initiatives allow citizens to take part in the policy process and end the impasse. In Colorado, Amendment 64 allowed the recreational use of marijuana by adults, despite concerns that state law would then conflict with national law. Colorado and Washington's legalization of recreational marijuana use started a trend, leading more states to adopt similar laws.

Finding a Middle Ground
Too Much Democracy?
How much direct democracy is too much? When citizens want one policy direction and government prefers another, who should prevail? Consider recent laws and decisions about marijuana. California was the first state to allow the use of medical marijuana, after the passage of Proposition 215 in 1996. Several years later, however, in Gonzales v. Raich (2005), the Supreme Court ruled that the U.S. government had the authority to criminalize the use of marijuana. In 2009, Attorney General Eric Holder said the federal government would not seek to prosecute patients using marijuana medically, citing limited resources and other priorities. Perhaps emboldened by the national government's stance, Colorado voters approved recreational marijuana use in 2012. Since then, other states have followed. Twenty-three states and the District of Columbia now have laws in place that legalize the use of marijuana to varying degrees. In a number of these cases, the decision was made by voters through initiatives and direct democracy (Figure 7.22).

So where is the problem? First, while citizens of these states believe smoking or consuming marijuana should be legal, the U.S. government does not. The Controlled Substances Act (CSA), passed by Congress in 1970, declares marijuana a dangerous drug and makes its sale a prosecutable act. And despite Holder's statement, a 2013 memo by James Cole, the deputy attorney general, reminded states that marijuana use is still illegal. 113 But the federal government cannot enforce the CSA on its own; it relies on the states' help. And while Congress has decided not to prosecute patients using marijuana for medical reasons, it has not waived the Justice Department's right to prosecute recreational use. 114

Direct democracy has placed the states and their citizens in an interesting position. States have a legal obligation to enforce state laws and the state constitution, yet they also must follow the laws of the United States. Citizens who use marijuana legally in their state are not using it legally in their country. This leads many to question whether direct democracy gives citizens too much power. Is it a good idea to give citizens the power to pass laws? Or should this power be subjected to checks and balances, as legislative bills are? Why or why not?
Direct democracy has drawbacks, however. One is that it requires more of voters. Instead of voting based on party, the voter is expected to read and become informed to make smart decisions. Initiatives can fundamentally change a constitution or raise taxes. Recalls remove politicians from office. These are not small decisions. Most citizens, however, do not have the time to perform a lot of research before voting. Given the high number of measures on some ballots, this may explain why many citizens simply skip ballot measures they do not understand. Direct democracy ballot items regularly earn fewer votes than the choice of a governor or president.

When citizens rely on television ads, initiative titles, or advice from others in determining how to vote, they can become confused and make the wrong decisions. In 2008, Californians voted on Proposition 8, titled "Eliminates Rights of Same-Sex Couples to Marry." A yes vote meant a voter wanted to define marriage as only between a woman and a man. Even though the information was clear and the law was one of the shortest in memory, many voters were confused. Some thought of the amendment as the same-sex marriage amendment. In short, some people voted for the initiative because they thought they were voting for same-sex marriage. Others voted against it because they were against same-sex marriage. 115

Direct democracy also opens the door to special interests funding personal projects. Any group can create an organization to spearhead an initiative or referendum. And because the cost of collecting signatures can be high in many states, signature collection may be backed by interest groups or wealthy individuals wishing to use the initiative to pass pet projects. The 2003 recall of California governor Gray Davis faced difficulties during the signature collection phase, but $2 million in donations by Representative Darrell Issa (R-CA) helped the organization attain nearly one million signatures. 116 Many commentators argued that this example showed direct democracy is not always a process driven by the people, but rather one that can be used by the wealthy and by business interests.
psychology
Summary

4.1 What Is Consciousness?
States of consciousness vary over the course of the day and throughout our lives. Important factors in these changes are the biological rhythms and, more specifically, the circadian rhythms generated by the suprachiasmatic nucleus (SCN). Typically, our biological clocks are aligned with our external environment, and light tends to be an important cue in setting this clock. When people travel across multiple time zones or work rotating shifts, they can experience disruptions of their circadian cycles that can lead to insomnia, sleepiness, and decreased alertness. Bright light therapy has shown promise in dealing with circadian disruptions. If people go extended periods of time without sleep, they will accrue a sleep debt and potentially experience a number of adverse psychological and physiological consequences.

4.2 Sleep and Why We Sleep
We devote a very large portion of time to sleep, and our brains have complex systems that control various aspects of sleep. Several hormones important for physical growth and maturation are secreted during sleep. While the reason we sleep remains something of a mystery, there is some evidence to suggest that sleep is very important to learning and memory.

4.3 Stages of Sleep
The different stages of sleep are characterized by the patterns of brain waves associated with each stage. As a person transitions from being awake to falling asleep, alpha waves are replaced by theta waves. Sleep spindles and K-complexes emerge in stage 2 sleep. Stage 3 and stage 4 are described as slow-wave sleep, marked by a predominance of delta waves. REM sleep involves rapid movements of the eyes, paralysis of voluntary muscles, and dreaming. Both NREM and REM sleep appear to play important roles in learning and memory. Dreams may represent life events that are important to the dreamer. Alternatively, dreaming may represent a state of protoconsciousness, or a virtual reality, in the mind that helps a person during consciousness.

4.4 Sleep Problems and Disorders
Many individuals suffer from some type of sleep disorder or disturbance at some point in their lives. Insomnia is a common experience in which people have difficulty falling or staying asleep. Parasomnias involve unwanted motor behavior or experiences throughout the sleep cycle; they include REM sleep behavior disorder (RBD), sleepwalking, restless leg syndrome, and night terrors. Sleep apnea occurs when individuals stop breathing during their sleep; in the case of sudden infant death syndrome, infants stop breathing during sleep and die. Narcolepsy involves an irresistible urge to fall asleep during waking hours and is often associated with cataplexy and hallucinations.

4.5 Substance Use and Abuse
Substance use disorder is defined in DSM-5 as a compulsive pattern of drug use despite negative consequences. Both physical and psychological dependence are important parts of this disorder. Alcohol, barbiturates, and benzodiazepines are central nervous system depressants that affect GABA neurotransmission. Cocaine, amphetamine, cathinones, and MDMA are all central nervous system stimulants that agonize dopamine neurotransmission, while nicotine and caffeine affect acetylcholine and adenosine, respectively. Opiate drugs serve as powerful analgesics through their effects on the endogenous opioid neurotransmitter system, and hallucinogenic drugs cause pronounced changes in sensory and perceptual experiences. The hallucinogens vary with regard to the specific neurotransmitter systems they affect.
4.6 Other States of Consciousness
Hypnosis is a focus on the self that involves suggested changes of behavior and experience. Meditation involves relaxed, yet focused, awareness. Both hypnotic and meditative states may involve altered states of consciousness that have potential application for the treatment of a variety of physical and psychological disorders.
Chapter Outline
4.1 What Is Consciousness?
4.2 Sleep and Why We Sleep
4.3 Stages of Sleep
4.4 Sleep Problems and Disorders
4.5 Substance Use and Abuse
4.6 Other States of Consciousness

Introduction
Our lives involve regular, dramatic changes in the degree to which we are aware of our surroundings and our internal states. While awake, we feel alert and aware of the many important things going on around us. Our experiences change dramatically while we are in deep sleep and once again when we are dreaming. This chapter will discuss states of consciousness with a particular emphasis on sleep. The different stages of sleep will be identified, and sleep disorders will be described. The chapter will close with discussions of altered states of consciousness produced by psychoactive drugs, hypnosis, and meditation.
[ { "answer": { "ans_choice": 2, "ans_text": "hypothalamus" }, "bloom": null, "hl_context": "Sleep-wake cycles seem to be controlled by multiple brain areas acting in conjunction with one another . Some of these areas include the thalamus , the hypothalamus , and the pons . <hl> As already mentioned , the hypothalamus contains the SCN — the biological clock of the body — in addition to other nuclei that , in conjunction with the thalamus , regulate slow-wave sleep . <hl> The pons is important for regulating rapid eye movement ( REM ) sleep ( National Institutes of Health , n . d . ) . <hl> The brain ’ s clock mechanism is located in an area of the hypothalamus known as the suprachiasmatic nucleus ( SCN ) . <hl> The axons of light-sensitive neurons in the retina provide information to the SCN based on the amount of light present , allowing this internal clock to be synchronized with the outside world ( Klein , Moore , & Reppert , 1991 ; Welsh , Takahashi , & Kay , 2010 ) ( Figure 4.3 ) . <hl> If we have biological rhythms , then is there some sort of biological clock ? <hl> <hl> In the brain , the hypothalamus , which lies above the pituitary gland , is a main center of homeostasis . <hl> Homeostasis is the tendency to maintain a balance , or optimal level , within a biological system .", "hl_sentences": "As already mentioned , the hypothalamus contains the SCN — the biological clock of the body — in addition to other nuclei that , in conjunction with the thalamus , regulate slow-wave sleep . The brain ’ s clock mechanism is located in an area of the hypothalamus known as the suprachiasmatic nucleus ( SCN ) . If we have biological rhythms , then is there some sort of biological clock ? In the brain , the hypothalamus , which lies above the pituitary gland , is a main center of homeostasis .", "question": { "cloze_format": "The body’s biological clock is located in the ________.", "normal_format": "Where is the body’s biological clock located?", "question_choices": [ "hippocampus", "thalamus", "hypothalamus", "pituitary gland" ], "question_id": "fs-idm52639440", "question_text": "The body’s biological clock is located in the ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "sleep debt" }, "bloom": null, "hl_context": "When people have difficulty getting sleep due to their work or the demands of day-to-day life , they accumulate a sleep debt . <hl> A person with a sleep debt does not get sufficient sleep on a chronic basis . <hl> The consequences of sleep debt include decreased levels of alertness and mental efficiency . Interestingly , since the advent of electric light , the amount of sleep that people get has declined . While we certainly welcome the convenience of having the darkness lit up , we also suffer the consequences of reduced amounts of sleep because we are more active during the nighttime hours than our ancestors were . As a result , many of us sleep less than 7 – 8 hours a night and accrue a sleep debt . While there is tremendous variation in any given individual ’ s sleep needs , the National Sleep Foundation ( n . d . 
) cites research to estimate that newborns require the most sleep ( between 12 and 18 hours a night ) and that this amount declines to just 7 – 9 hours by the time we are adults .", "hl_sentences": "A person with a sleep debt does not get sufficient sleep on a chronic basis .", "question": { "cloze_format": "________ occurs when there is a chronic deficiency in sleep.", "normal_format": "What occurs when there is a chronic deficiency in sleep?", "question_choices": [ "jet lag", "rotating shift work", "circadian rhythm", "sleep debt" ], "question_id": "fs-idm57322112", "question_text": "________ occurs when there is a chronic deficiency in sleep." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "circadian" }, "bloom": null, "hl_context": "This pattern of temperature fluctuation , which repeats every day , is one example of a circadian rhythm . <hl> A circadian rhythm is a biological rhythm that takes place over a period of about 24 hours . <hl> Our sleep-wake cycle , which is linked to our environment ’ s natural light-dark cycle , is perhaps the most obvious example of a circadian rhythm , but we also have daily fluctuations in heart rate , blood pressure , blood sugar , and body temperature . Some circadian rhythms play a role in changes in our state of consciousness .", "hl_sentences": "A circadian rhythm is a biological rhythm that takes place over a period of about 24 hours .", "question": { "cloze_format": "________ cycles occur roughly once every 24 hours.", "normal_format": "Which cycles occur roughly once every 24 hours?", "question_choices": [ "biological", "circadian", "rotating", "conscious" ], "question_id": "fs-idp30059616", "question_text": "________ cycles occur roughly once every 24 hours." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "Light-dark exposure" }, "bloom": null, "hl_context": "<hl> While disruptions in circadian rhythms can have negative consequences , there are things we can do to help us realign our biological clocks with the external environment . <hl> Some of these approaches , such as using a bright light as shown in Figure 4.4 , have been shown to alleviate some of the problems experienced by individuals suffering from jet lag or from the consequences of rotating shift work . <hl> Because the biological clock is driven by light , exposure to bright light during working shifts and dark exposure when not working can help combat insomnia and symptoms of anxiety and depression ( Huang , Tsai , Chen , & Hsu , 2013 ) . <hl>", "hl_sentences": "While disruptions in circadian rhythms can have negative consequences , there are things we can do to help us realign our biological clocks with the external environment . Because the biological clock is driven by light , exposure to bright light during working shifts and dark exposure when not working can help combat insomnia and symptoms of anxiety and depression ( Huang , Tsai , Chen , & Hsu , 2013 ) .", "question": { "cloze_format": "________ is one way in which people can help reset their biological clocks.", "normal_format": "What is one way in which people can help reset their biological clocks?", "question_choices": [ "Light-dark exposure", "coffee consumption", "alcohol consumption", "napping" ], "question_id": "fs-idp26678096", "question_text": "________ is one way in which people can help reset their biological clocks." 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "pituitary gland" }, "bloom": null, "hl_context": "Sleep is also associated with the secretion and regulation of a number of hormones from several endocrine glands including : melatonin , follicle stimulating hormone ( FSH ) , luteinizing hormone ( LH ) , and growth hormone ( National Institutes of Health , n . d . ) . You have read that the pineal gland releases melatonin during sleep ( Figure 4.7 ) . Melatonin is thought to be involved in the regulation of various biological rhythms and the immune system ( Hardeland et al . , 2006 ) . During sleep , the pituitary gland secretes both FSH and LH which are important in regulating the reproductive system ( Christensen et al . , 2012 ; Sofikitis et al . , 2008 ) . <hl> The pituitary gland also secretes growth hormone , during sleep , which plays a role in physical growth and maturation as well as other metabolic processes ( Bartke , Sun , & Longo , 2013 ) . <hl>", "hl_sentences": "The pituitary gland also secretes growth hormone , during sleep , which plays a role in physical growth and maturation as well as other metabolic processes ( Bartke , Sun , & Longo , 2013 ) .", "question": { "cloze_format": "Growth hormone is secreted by the ________ while we sleep.", "normal_format": "By which is the growth hormone is secreted while we sleep?", "question_choices": [ "pineal gland", "thyroid", "pituitary gland", "pancreas" ], "question_id": "fs-idp41971648", "question_text": "Growth hormone is secreted by the ________ while we sleep." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "both a and b" }, "bloom": null, "hl_context": "Sleep-wake cycles seem to be controlled by multiple brain areas acting in conjunction with one another . Some of these areas include the thalamus , the hypothalamus , and the pons . <hl> As already mentioned , the hypothalamus contains the SCN — the biological clock of the body — in addition to other nuclei that , in conjunction with the thalamus , regulate slow-wave sleep . <hl> The pons is important for regulating rapid eye movement ( REM ) sleep ( National Institutes of Health , n . d . ) .", "hl_sentences": "As already mentioned , the hypothalamus contains the SCN — the biological clock of the body — in addition to other nuclei that , in conjunction with the thalamus , regulate slow-wave sleep .", "question": { "cloze_format": "The ________ plays a role in controlling slow-wave sleep.", "normal_format": "What plays a role in controlling slow-wave sleep?", "question_choices": [ "hypothalamus", "thalamus", "pons", "both a and b" ], "question_id": "fs-idm27557440", "question_text": "The ________ plays a role in controlling slow-wave sleep." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "melatonin" }, "bloom": null, "hl_context": "Sleep is also associated with the secretion and regulation of a number of hormones from several endocrine glands including : melatonin , follicle stimulating hormone ( FSH ) , luteinizing hormone ( LH ) , and growth hormone ( National Institutes of Health , n . d . ) . You have read that the pineal gland releases melatonin during sleep ( Figure 4.7 ) . <hl> Melatonin is thought to be involved in the regulation of various biological rhythms and the immune system ( Hardeland et al . , 2006 ) . <hl> During sleep , the pituitary gland secretes both FSH and LH which are important in regulating the reproductive system ( Christensen et al . 
, 2012 ; Sofikitis et al . , 2008 ) . The pituitary gland also secretes growth hormone , during sleep , which plays a role in physical growth and maturation as well as other metabolic processes ( Bartke , Sun , & Longo , 2013 ) . Generally , and for most people , our circadian cycles are aligned with the outside world . For example , most people sleep during the night and are awake during the day . One important regulator of sleep-wake cycles is the hormone melatonin . <hl> The pineal gland , an endocrine structure located inside the brain that releases melatonin , is thought to be involved in the regulation of various biological rhythms and of the immune system during sleep ( Hardeland , Pandi-Perumal , & Cardinali , 2006 ) . <hl> Melatonin release is stimulated by darkness and inhibited by light .", "hl_sentences": "Melatonin is thought to be involved in the regulation of various biological rhythms and the immune system ( Hardeland et al . , 2006 ) . The pineal gland , an endocrine structure located inside the brain that releases melatonin , is thought to be involved in the regulation of various biological rhythms and of the immune system during sleep ( Hardeland , Pandi-Perumal , & Cardinali , 2006 ) .", "question": { "cloze_format": "________ is a hormone secreted by the pineal gland that plays a role in regulating biological rhythms and immune function.", "normal_format": "What is a hormone secreted by the pineal gland that plays a role in regulating biological rhythms and immune function?", "question_choices": [ "growth hormone", "melatonin", "LH", "FSH" ], "question_id": "fs-idm124856416", "question_text": "________ is a hormone secreted by the pineal gland that plays a role in regulating biological rhythms and immune function." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "slow-wave sleep" }, "bloom": null, "hl_context": "Another theory regarding why we sleep involves sleep ’ s importance for cognitive function and memory formation ( Rattenborg , Lesku , Martinez-Gonzalez , & Lima , 2007 ) . Indeed , we know sleep deprivation results in disruptions in cognition and memory deficits ( Brown , 2012 ) , leading to impairments in our abilities to maintain attention , make decisions , and recall long-term memories . Moreover , these impairments become more severe as the amount of sleep deprivation increases ( Alhola & Polo-Kantola , 2007 ) . <hl> Furthermore , slow-wave sleep after learning a new task can improve resultant performance on that task ( Huber , Ghilardi , Massimini , & Tononi , 2004 ) and seems essential for effective memory formation ( Stickgold , 2005 ) . 
<hl> Understanding the impact of sleep on cognitive function should help you understand that cramming all night for a test may be not effective and can even prove counterproductive .", "hl_sentences": "Furthermore , slow-wave sleep after learning a new task can improve resultant performance on that task ( Huber , Ghilardi , Massimini , & Tononi , 2004 ) and seems essential for effective memory formation ( Stickgold , 2005 ) .", "question": { "cloze_format": "________ appears to be especially important for enhanced performance on recently learned tasks.", "normal_format": "What appears to be especially important for enhanced performance on recently learned tasks?", "question_choices": [ "melatonin", "slow-wave sleep", "sleep deprivation", "growth hormone" ], "question_id": "fs-idm129498528", "question_text": "________ appears to be especially important for enhanced performance on recently learned tasks." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "stage 3 and stage 4" }, "bloom": null, "hl_context": "<hl> Stage 3 and stage 4 of sleep are often referred to as deep sleep or slow-wave sleep because these stages are characterized by low frequency ( up to 4 Hz ) , high amplitude delta waves ( Figure 4.11 ) . <hl> During this time , an individual ’ s heart rate and respiration slow dramatically . It is much more difficult to awaken someone from sleep during stage 3 and stage 4 than during earlier stages . Interestingly , individuals who have increased levels of alpha brain wave activity ( more often associated with wakefulness and transition into stage 1 sleep ) during stage 3 and stage 4 often report that they do not feel refreshed upon waking , regardless of how long they slept ( Stone , Taylor , McCrae , Kalsekar , & Lichstein , 2008 ) .", "hl_sentences": "Stage 3 and stage 4 of sleep are often referred to as deep sleep or slow-wave sleep because these stages are characterized by low frequency ( up to 4 Hz ) , high amplitude delta waves ( Figure 4.11 ) .", "question": { "cloze_format": "________ is(are) described as slow-wave sleep.", "normal_format": "What is(are) described as slow-wave sleep?", "question_choices": [ "stage 1", "stage 2", "stage 3 and stage 4", "REM sleep" ], "question_id": "fs-idm105179072", "question_text": "________ is(are) described as slow-wave sleep." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "stage 2" }, "bloom": null, "hl_context": "<hl> As we move into stage 2 sleep , the body goes into a state of deep relaxation . <hl> <hl> Theta waves still dominate the activity of the brain , but they are interrupted by brief bursts of activity known as sleep spindles ( Figure 4.10 ) . <hl> A sleep spindle is a rapid burst of higher frequency brain waves that may be important for learning and memory ( Fogel & Smith , 2011 ; Poe , Walsh , & Bjorness , 2010 ) . <hl> In addition , the appearance of K-complexes is often associated with stage 2 sleep . <hl> A K-complex is a very high amplitude pattern of brain activity that may in some cases occur in response to environmental stimuli . Thus , K-complexes might serve as a bridge to higher levels of arousal in response to what is going on in our environments ( Halász , 1993 ; Steriade & Amzica , 1998 ) .", "hl_sentences": "As we move into stage 2 sleep , the body goes into a state of deep relaxation . Theta waves still dominate the activity of the brain , but they are interrupted by brief bursts of activity known as sleep spindles ( Figure 4.10 ) . 
In addition , the appearance of K-complexes is often associated with stage 2 sleep .", "question": { "cloze_format": "Sleep spindles and K-complexes are most often associated with ________ sleep.", "normal_format": "Sleep spindles and K-complexes are most often associated with which type of sleep?", "question_choices": [ "stage 1", "stage 2", "stage 3 and stage 4", "REM" ], "question_id": "fs-idm88813632", "question_text": "Sleep spindles and K-complexes are most often associated with ________ sleep." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "depression" }, "bloom": null, "hl_context": "<hl> It should be pointed out that some reviews of the literature challenge this finding , suggesting that sleep deprivation that is not limited to REM sleep is just as effective or more effective at alleviating depressive symptoms among some patients suffering from depression . <hl> In either case , why sleep deprivation improves the mood of some patients is not entirely understood ( Giedke & Schwärzler , 2002 ) . Recently , however , some have suggested that sleep deprivation might change emotional processing so that various stimuli are more likely to be perceived as positive in nature ( Gujar , Yoo , Hu , & Walker , 2011 ) . The hypnogram below ( Figure 4.13 ) shows a person ’ s passage through the stages of sleep . Link to Learning While sleep deprivation in general is associated with a number of negative consequences ( Brown , 2012 ) , the consequences of REM deprivation appear to be less profound ( as discussed in Siegel , 2001 ) . In fact , some have suggested that REM deprivation can actually be beneficial in some circumstances . <hl> For instance , REM sleep deprivation has been demonstrated to improve symptoms of people suffering from major depression , and many effective antidepressant medications suppress REM sleep ( Riemann , Berger , & Volderholzer , 2001 ; Vogel , 1975 ) . <hl>", "hl_sentences": "It should be pointed out that some reviews of the literature challenge this finding , suggesting that sleep deprivation that is not limited to REM sleep is just as effective or more effective at alleviating depressive symptoms among some patients suffering from depression . For instance , REM sleep deprivation has been demonstrated to improve symptoms of people suffering from major depression , and many effective antidepressant medications suppress REM sleep ( Riemann , Berger , & Volderholzer , 2001 ; Vogel , 1975 ) .", "question": { "cloze_format": "Symptoms of ________ may be improved by REM deprivation.", "normal_format": "Which symptoms may be improved by REM deprivation?", "question_choices": [ "schizophrenia", "Parkinson’s disease", "depression", "generalized anxiety disorder" ], "question_id": "fs-idp34384432", "question_text": "Symptoms of ________ may be improved by REM deprivation." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "latent" }, "bloom": null, "hl_context": "The meaning of dreams varies across different cultures and periods of time . By the late 19th century , German psychiatrist Sigmund Freud had become convinced that dreams represented an opportunity to gain access to the unconscious . By analyzing dreams , Freud thought people could increase self-awareness and gain valuable insight to help them deal with the problems they faced in their lives . Freud made distinctions between the manifest content and the latent content of dreams . Manifest content is the actual content , or storyline , of a dream . 
<hl> Latent content , on the other hand , refers to the hidden meaning of a dream . <hl> For instance , if a woman dreams about being chased by a snake , Freud might have argued that this represents the woman ’ s fear of sexual intimacy , with the snake serving as a symbol of a man ’ s penis .", "hl_sentences": "Latent content , on the other hand , refers to the hidden meaning of a dream .", "question": { "cloze_format": "The ________ content of a dream refers to the true meaning of the dream.", "normal_format": "Which content of a dream refers to the true meaning of the dream?", "question_choices": [ "latent", "manifest", "collective unconscious", "important" ], "question_id": "fs-idp7972992", "question_text": "The ________ content of a dream refers to the true meaning of the dream." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "cataplexy" }, "bloom": null, "hl_context": "<hl> Unlike the other sleep disorders described in this section , a person with narcolepsy cannot resist falling asleep at inopportune times . <hl> <hl> These sleep episodes are often associated with cataplexy , which is a lack of muscle tone or muscle weakness , and in some cases involves complete paralysis of the voluntary muscles . <hl> This is similar to the kind of paralysis experienced by healthy individuals during REM sleep ( Burgess & Scammell , 2012 ; Hishikawa & Shimizu , 1995 ; Luppi et al . , 2011 ) . Narcoleptic episodes take on other features of REM sleep . For example , around one third of individuals diagnosed with narcolepsy experience vivid , dream-like hallucinations during narcoleptic attacks ( Chokroverty , 2010 ) .", "hl_sentences": "Unlike the other sleep disorders described in this section , a person with narcolepsy cannot resist falling asleep at inopportune times . These sleep episodes are often associated with cataplexy , which is a lack of muscle tone or muscle weakness , and in some cases involves complete paralysis of the voluntary muscles .", "question": { "cloze_format": "________ is loss of muscle tone or control that is often associated with narcolepsy.", "normal_format": "Which is loss of muscle tone or control that is often associated with narcolepsy?", "question_choices": [ "RBD", "CPAP", "cataplexy", "insomnia" ], "question_id": "fs-idp166258352", "question_text": "________ is loss of muscle tone or control that is often associated with narcolepsy." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "central sleep apnea" }, "bloom": null, "hl_context": "There are two types of sleep apnea : obstructive sleep apnea and central sleep apnea . Obstructive sleep apnea occurs when an individual ’ s airway becomes blocked during sleep , and air is prevented from entering the lungs . <hl> In central sleep apnea , disruption in signals sent from the brain that regulate breathing cause periods of interrupted breathing ( White , 2005 ) . <hl> One of the most common treatments for sleep apnea involves the use of a special device during sleep . A continuous positive airway pressure ( CPAP ) device includes a mask that fits over the sleeper ’ s nose and mouth , which is connected to a pump that pumps air into the person ’ s airways , forcing them to remain open , as shown in Figure 4.14 . Some newer CPAP masks are smaller and cover only the nose . This treatment option has proven to be effective for people suffering from mild to severe cases of sleep apnea ( McDaid et al . , 2009 ) . 
However , alternative treatment options are being explored because consistent compliance by users of CPAP devices is a problem . Recently , a new EPAP ( expiratory positive air pressure ) device has shown promise in double-blind trials as one such alternative ( Berry , Kryger , & Massie , 2011 ) .", "hl_sentences": "In central sleep apnea , disruption in signals sent from the brain that regulate breathing cause periods of interrupted breathing ( White , 2005 ) .", "question": { "cloze_format": "An individual may suffer from ________ if there is a disruption in the brain signals that are sent to the muscles that regulate breathing.", "normal_format": "If there is a disruption in the brain signals that are sent to the muscles that regulate breathing, an individual may suffer from what?", "question_choices": [ "central sleep apnea", "obstructive sleep apnea", "narcolepsy", "SIDS" ], "question_id": "fs-idp50398976", "question_text": "An individual may suffer from ________ if there is a disruption in the brain signals that are sent to the muscles that regulate breathing." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "narcolepsy" }, "bloom": null, "hl_context": "<hl> Generally , narcolepsy is treated using psychomotor stimulant drugs , such as amphetamines ( Mignot , 2012 ) . <hl> These drugs promote increased levels of neural activity . Narcolepsy is associated with reduced levels of the signaling molecule hypocretin in some areas of the brain ( De la Herrán-Arita & Drucker-Colín , 2012 ; Han , 2012 ) , and the traditional stimulant drugs do not have direct effects on this system . Therefore , it is quite likely that new medications that are developed to treat narcolepsy will be designed to target the hypocretin system .", "hl_sentences": "Generally , narcolepsy is treated using psychomotor stimulant drugs , such as amphetamines ( Mignot , 2012 ) .", "question": { "cloze_format": "The most common treatment for ________ involves the use of amphetamine-like medications.", "normal_format": "The use of amphetamine-like medications is the most common treatment for what? ", "question_choices": [ "sleep apnea", "RBD", "SIDS", "narcolepsy" ], "question_id": "fs-idp59045744", "question_text": "The most common treatment for ________ involves the use of amphetamine-like medications." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "somnambulism" }, "bloom": null, "hl_context": "<hl> Historically , somnambulism has been treated with a variety of pharmacotherapies ranging from benzodiazepines to antidepressants . <hl> However , the success rate of such treatments is questionable . Guilleminault et al . <hl> ( 2005 ) found that sleepwalking was not alleviated with the use of benzodiazepines . <hl> However , all of their somnambulistic patients who also suffered from sleep-related breathing problems showed a marked decrease in sleepwalking when their breathing problems were effectively treated . <hl> In sleepwalking , or somnambulism , the sleeper engages in relatively complex behaviors ranging from wandering about to driving an automobile . <hl> During periods of sleepwalking , sleepers often have their eyes open , but they are not responsive to attempts to communicate with them . 
Sleepwalking most often occurs during slow-wave sleep , but it can occur at any time during a sleep period in some affected individuals ( Mahowald & Schenck , 2000 ) .", "hl_sentences": "Historically , somnambulism has been treated with a variety of pharmacotherapies ranging from benzodiazepines to antidepressants . ( 2005 ) found that sleepwalking was not alleviated with the use of benzodiazepines . In sleepwalking , or somnambulism , the sleeper engages in relatively complex behaviors ranging from wandering about to driving an automobile .", "question": { "cloze_format": "________ is another word for sleepwalking.", "normal_format": "What is another word for sleepwalking?", "question_choices": [ "insomnia", "somnambulism", "cataplexy", "narcolepsy" ], "question_id": "fs-idp46013312", "question_text": "________ is another word for sleepwalking." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "tolerance" }, "bloom": null, "hl_context": "Drug withdrawal includes a variety of negative symptoms experienced when drug use is discontinued . These symptoms usually are opposite of the effects of the drug . For example , withdrawal from sedative drugs often produces unpleasant arousal and agitation . <hl> In addition to withdrawal , many individuals who are diagnosed with substance use disorders will also develop tolerance to these substances . <hl> <hl> Psychological dependence , or drug craving , is a recent addition to the diagnostic criteria for substance use disorder in DSM - 5 . <hl> This is an important factor because we can develop tolerance and experience withdrawal from any number of drugs that we do not abuse . In other words , physical dependence in and of itself is of limited utility in determining whether or not someone has a substance use disorder . The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders , Fifth Edition ( DSM - 5 ) is used by clinicians to diagnose individuals suffering from various psychological disorders . Drug use disorders are addictive disorders , and the criteria for specific substance ( drug ) use disorders are described in DSM - 5 . A person who has a substance use disorder often uses more of the substance than they originally intended to and continues to use that substance despite experiencing significant adverse consequences . In individuals diagnosed with a substance use disorder , there is a compulsive pattern of drug use that is often associated with both physical and psychological dependence . Physical dependence involves changes in normal bodily functions — the user will experience withdrawal from the drug upon cessation of use . In contrast , a person who has psychological dependence has an emotional , rather than physical , need for the drug and may use the drug to relieve psychological distress . <hl> Tolerance is linked to physiological dependence , and it occurs when a person requires more and more drug to achieve effects previously experienced at lower doses . <hl> Tolerance can cause the user to increase the amount of drug used to a dangerous level — even to the point of overdose and death .", "hl_sentences": "In addition to withdrawal , many individuals who are diagnosed with substance use disorders will also develop tolerance to these substances . Psychological dependence , or drug craving , is a recent addition to the diagnostic criteria for substance use disorder in DSM - 5 . 
Tolerance is linked to physiological dependence , and it occurs when a person requires more and more drug to achieve effects previously experienced at lower doses .", "question": { "cloze_format": "________ occurs when a drug user requires more and more of a given drug in order to experience the same effects of the drug.", "normal_format": "Which occurs when a drug user requires more and more of a given drug in order to experience the same effects of the drug?", "question_choices": [ "withdrawal", "psychological dependence", "tolerance", "reuptake" ], "question_id": "fs-idp191755232", "question_text": "________ occurs when a drug user requires more and more of a given drug in order to experience the same effects of the drug." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "dopamine" }, "bloom": null, "hl_context": "<hl> Amphetamines have a mechanism of action quite similar to cocaine in that they block the reuptake of dopamine in addition to stimulating its release ( Figure 4.19 ) . <hl> While amphetamines are often abused , they are also commonly prescribed to children diagnosed with attention deficit hyperactivity disorder ( ADHD ) . It may seem counterintuitive that stimulant medications are prescribed to treat a disorder that involves hyperactivity , but the therapeutic effect comes from increases in neurotransmitter activity within certain areas of the brain associated with impulse control . Cocaine can be taken in multiple ways . While many users snort cocaine , intravenous injection and ingestion are also common . The freebase version of cocaine , known as crack , is a potent , smokable version of the drug . <hl> Like many other stimulants , cocaine agonizes the dopamine neurotransmitter system by blocking the reuptake of dopamine in the neuronal synapse . <hl> Stimulants are drugs that tend to increase overall levels of neural activity . Many of these drugs act as agonists of the dopamine neurotransmitter system . <hl> Dopamine activity is often associated with reward and craving ; therefore , drugs that affect dopamine neurotransmission often have abuse liability . <hl> <hl> Drugs in this category include cocaine , amphetamines ( including methamphetamine ) , cathinones ( i . e . , bath salts ) , MDMA ( ecstasy ) , nicotine , and caffeine . <hl>", "hl_sentences": "Amphetamines have a mechanism of action quite similar to cocaine in that they block the reuptake of dopamine in addition to stimulating its release ( Figure 4.19 ) . Like many other stimulants , cocaine agonizes the dopamine neurotransmitter system by blocking the reuptake of dopamine in the neuronal synapse . Dopamine activity is often associated with reward and craving ; therefore , drugs that affect dopamine neurotransmission often have abuse liability . Drugs in this category include cocaine , amphetamines ( including methamphetamine ) , cathinones ( i . e . , bath salts ) , MDMA ( ecstasy ) , nicotine , and caffeine .", "question": { "cloze_format": "Cocaine blocks the reuptake of ________.", "normal_format": "Cocaine blocks the reuptake of which of the following?", "question_choices": [ "GABA", "glutamate", "acetylcholine", "dopamine" ], "question_id": "fs-idp83498720", "question_text": "Cocaine blocks the reuptake of ________." 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "psychological dependence" }, "bloom": null, "hl_context": "<hl> With repeated use of many central nervous system depressants , such as alcohol , a person becomes physically dependent upon the substance and will exhibit signs of both tolerance and withdrawal . <hl> Psychological dependence on these drugs is also possible . Therefore , the abuse potential of central nervous system depressants is relatively high . Drug withdrawal includes a variety of negative symptoms experienced when drug use is discontinued . These symptoms usually are opposite of the effects of the drug . For example , withdrawal from sedative drugs often produces unpleasant arousal and agitation . In addition to withdrawal , many individuals who are diagnosed with substance use disorders will also develop tolerance to these substances . <hl> Psychological dependence , or drug craving , is a recent addition to the diagnostic criteria for substance use disorder in DSM - 5 . <hl> This is an important factor because we can develop tolerance and experience withdrawal from any number of drugs that we do not abuse . In other words , physical dependence in and of itself is of limited utility in determining whether or not someone has a substance use disorder . The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders , Fifth Edition ( DSM - 5 ) is used by clinicians to diagnose individuals suffering from various psychological disorders . Drug use disorders are addictive disorders , and the criteria for specific substance ( drug ) use disorders are described in DSM - 5 . A person who has a substance use disorder often uses more of the substance than they originally intended to and continues to use that substance despite experiencing significant adverse consequences . <hl> In individuals diagnosed with a substance use disorder , there is a compulsive pattern of drug use that is often associated with both physical and psychological dependence . <hl> <hl> Physical dependence involves changes in normal bodily functions — the user will experience withdrawal from the drug upon cessation of use . <hl> In contrast , a person who has psychological dependence has an emotional , rather than physical , need for the drug and may use the drug to relieve psychological distress . Tolerance is linked to physiological dependence , and it occurs when a person requires more and more drug to achieve effects previously experienced at lower doses . Tolerance can cause the user to increase the amount of drug used to a dangerous level — even to the point of overdose and death .", "hl_sentences": "With repeated use of many central nervous system depressants , such as alcohol , a person becomes physically dependent upon the substance and will exhibit signs of both tolerance and withdrawal . Psychological dependence , or drug craving , is a recent addition to the diagnostic criteria for substance use disorder in DSM - 5 . In individuals diagnosed with a substance use disorder , there is a compulsive pattern of drug use that is often associated with both physical and psychological dependence . 
Physical dependence involves changes in normal bodily functions — the user will experience withdrawal from the drug upon cessation of use .", "question": { "cloze_format": "________ refers to drug craving.", "normal_format": "Which refers to drug craving?", "question_choices": [ "psychological dependence", "antagonism", "agonism", "physical dependence" ], "question_id": "fs-idp8084416", "question_text": "________ refers to drug craving." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "serotonin" }, "bloom": null, "hl_context": "As a group , hallucinogens are incredibly varied in terms of the neurotransmitter systems they affect . <hl> Mescaline and LSD are serotonin agonists , and PCP ( angel dust ) and ketamine ( an animal anesthetic ) act as antagonists of the NMDA glutamate receptor . <hl> In general , these drugs are not thought to possess the same sort of abuse potential as other classes of drugs discussed in this section .", "hl_sentences": "Mescaline and LSD are serotonin agonists , and PCP ( angel dust ) and ketamine ( an animal anesthetic ) act as antagonists of the NMDA glutamate receptor .", "question": { "cloze_format": "LSD affects ________ neurotransmission.", "normal_format": "LSD affects which neurotransmission?", "question_choices": [ "dopamine", "serotonin", "acetylcholine", "norepinephrine" ], "question_id": "fs-idp47645952", "question_text": "LSD affects ________ neurotransmission." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "hypnosis" }, "bloom": null, "hl_context": "<hl> Some scientists are working to determine whether the power of suggestion can affect cognitive processes such as learning , with a view to using hypnosis in educational settings ( Wark , 2011 ) . <hl> Furthermore , there is some evidence that hypnosis can alter processes that were once thought to be automatic and outside the purview of voluntary control , such as reading ( Lifshitz , Aubert Bonn , Fischer , Kashem , & Raz , 2013 ; Raz , Shapiro , Fan , & Posner , 2002 ) . However , it should be noted that others have suggested that the automaticity of these processes remains intact ( Augustinova & Ferrand , 2012 ) . Hypnosis is a state of extreme self-focus and attention in which minimal attention is given to external stimuli . In the therapeutic setting , a clinician may use relaxation and suggestion in an attempt to alter the thoughts and perceptions of a patient . Hypnosis has also been used to draw out information believed to be buried deeply in someone ’ s memory . <hl> For individuals who are especially open to the power of suggestion , hypnosis can prove to be a very effective technique , and brain imaging studies have demonstrated that hypnotic states are associated with global changes in brain functioning ( Del Casale et al . , 2012 ; Guldenmund , Vanhaudenhuyse , Boly , Laureys , & Soddu , 2012 ) . <hl> Historically , hypnosis has been viewed with some suspicion because of its portrayal in popular media and entertainment ( Figure 4.23 ) . Therefore , it is important to make a distinction between hypnosis as an empirically based therapeutic approach versus as a form of entertainment . Contrary to popular belief , individuals undergoing hypnosis usually have clear memories of the hypnotic experience and are in control of their own behaviors . 
While hypnosis may be useful in enhancing memory or a skill , such enhancements are very modest in nature ( Raz , 2011 ) .", "hl_sentences": "Some scientists are working to determine whether the power of suggestion can affect cognitive processes such as learning , with a view to using hypnosis in educational settings ( Wark , 2011 ) . For individuals who are especially open to the power of suggestion , hypnosis can prove to be a very effective technique , and brain imaging studies have demonstrated that hypnotic states are associated with global changes in brain functioning ( Del Casale et al . , 2012 ; Guldenmund , Vanhaudenhuyse , Boly , Laureys , & Soddu , 2012 ) .", "question": { "cloze_format": "________ is most effective in individuals that are very open to the power of suggestion.", "normal_format": "What is most effective in individuals that are very open to the power of suggestion?", "question_choices": [ "hypnosis", "meditation", "mindful awareness", "cognitive therapy" ], "question_id": "fs-idp25759632", "question_text": "________ is most effective in individuals that are very open to the power of suggestion." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "meditation" }, "bloom": null, "hl_context": "<hl> Meditative techniques have their roots in religious practices ( Figure 4.24 ) , but their use has grown in popularity among practitioners of alternative medicine . <hl> Research indicates that meditation may help reduce blood pressure , and the American Heart Association suggests that meditation might be used in conjunction with more traditional treatments as a way to manage hypertension , although there is not sufficient data for a recommendation to be made ( Brook et al . , 2013 ) . Like hypnosis , meditation also shows promise in stress management , sleep quality ( Caldwell , Harrison , Adams , Quin , & Greeson , 2010 ) , treatment of mood and anxiety disorders ( Chen et al . , 2013 ; Freeman et al . , 2010 ; Vøllestad , Nielsen , & Nielsen , 2012 ) , and pain management ( Reiner , Tibi , & Lipsitz , 2013 ) .", "hl_sentences": "Meditative techniques have their roots in religious practices ( Figure 4.24 ) , but their use has grown in popularity among practitioners of alternative medicine .", "question": { "cloze_format": "________ has its roots in religious practice.", "normal_format": "What has its roots in religious practice?", "question_choices": [ "hypnosis", "meditation", "cognitive therapy", "behavioral therapy" ], "question_id": "fs-idp8414096", "question_text": "________ has its roots in religious practice." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "both a and b" }, "bloom": null, "hl_context": "Meditative techniques have their roots in religious practices ( Figure 4.24 ) , but their use has grown in popularity among practitioners of alternative medicine . Research indicates that meditation may help reduce blood pressure , and the American Heart Association suggests that meditation might be used in conjunction with more traditional treatments as a way to manage hypertension , although there is not sufficient data for a recommendation to be made ( Brook et al . , 2013 ) . <hl> Like hypnosis , meditation also shows promise in stress management , sleep quality ( Caldwell , Harrison , Adams , Quin , & Greeson , 2010 ) , treatment of mood and anxiety disorders ( Chen et al . , 2013 ; Freeman et al . 
, 2010 ; Vøllestad , Nielsen , & Nielsen , 2012 ) , and pain management ( Reiner , Tibi , & Lipsitz , 2013 ) . <hl>", "hl_sentences": "Like hypnosis , meditation also shows promise in stress management , sleep quality ( Caldwell , Harrison , Adams , Quin , & Greeson , 2010 ) , treatment of mood and anxiety disorders ( Chen et al . , 2013 ; Freeman et al . , 2010 ; Vøllestad , Nielsen , & Nielsen , 2012 ) , and pain management ( Reiner , Tibi , & Lipsitz , 2013 ) .", "question": { "cloze_format": "Meditation may be helpful in ________.", "normal_format": "Where may meditation be helpful?", "question_choices": [ "pain management", "stress control", "treating the flu", "both a and b" ], "question_id": "fs-idp68569264", "question_text": "Meditation may be helpful in ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "hypnosis" }, "bloom": null, "hl_context": "<hl> Some scientists are working to determine whether the power of suggestion can affect cognitive processes such as learning , with a view to using hypnosis in educational settings ( Wark , 2011 ) . <hl> Furthermore , there is some evidence that hypnosis can alter processes that were once thought to be automatic and outside the purview of voluntary control , such as reading ( Lifshitz , Aubert Bonn , Fischer , Kashem , & Raz , 2013 ; Raz , Shapiro , Fan , & Posner , 2002 ) . However , it should be noted that others have suggested that the automaticity of these processes remains intact ( Augustinova & Ferrand , 2012 ) .", "hl_sentences": "Some scientists are working to determine whether the power of suggestion can affect cognitive processes such as learning , with a view to using hypnosis in educational settings ( Wark , 2011 ) .", "question": { "cloze_format": "Research suggests that cognitive processes, such as learning, may be affected by ________.", "normal_format": "According to research, what may be affected cognitive processes, such as learning?", "question_choices": [ "hypnosis", "meditation", "mindful awareness", "progressive relaxation" ], "question_id": "fs-idp67628048", "question_text": "Research suggests that cognitive processes, such as learning, may be affected by ________." }, "references_are_paraphrase": 0 } ]
4.1 What Is Consciousness?

Learning Objectives
By the end of this section, you will be able to:
Understand what is meant by consciousness
Explain how circadian rhythms are involved in regulating the sleep-wake cycle, and how circadian cycles can be disrupted
Discuss the concept of sleep debt

Consciousness describes our awareness of internal and external stimuli. Awareness of internal stimuli includes feeling pain, hunger, thirst, sleepiness, and being aware of our thoughts and emotions. Awareness of external stimuli includes seeing the light from the sun, feeling the warmth of a room, and hearing the voice of a friend. We experience different states of consciousness and different levels of awareness on a regular basis. We might even describe consciousness as a continuum that ranges from full awareness to a deep sleep. Sleep is a state marked by relatively low levels of physical activity and reduced sensory awareness that is distinct from periods of rest that occur during wakefulness. Wakefulness is characterized by high levels of sensory awareness, thought, and behavior. In between these extremes are states of consciousness related to daydreaming, intoxication as a result of alcohol or other drug use, meditative states, hypnotic states, and altered states of consciousness following sleep deprivation. We might also experience unconscious states of being via drug-induced anesthesia for medical purposes. Often, we are not completely aware of our surroundings, even when we are fully awake. For instance, have you ever daydreamed while driving home from work or school without really thinking about the drive itself? You were capable of engaging in all of the complex tasks involved with operating a motor vehicle even though you were not aware of doing so. Many of these processes, like much of psychological behavior, are rooted in our biology.

Biological Rhythms
Biological rhythms are internal rhythms of biological activity. A woman's menstrual cycle is an example of a biological rhythm—a recurring, cyclical pattern of bodily changes. One complete menstrual cycle takes about 28 days—a lunar month—but many biological cycles are much shorter. For example, body temperature fluctuates cyclically over a 24-hour period (Figure 4.2). Alertness is associated with higher body temperatures, and sleepiness with lower body temperatures. This pattern of temperature fluctuation, which repeats every day, is one example of a circadian rhythm. A circadian rhythm is a biological rhythm that takes place over a period of about 24 hours. Our sleep-wake cycle, which is linked to our environment's natural light-dark cycle, is perhaps the most obvious example of a circadian rhythm, but we also have daily fluctuations in heart rate, blood pressure, blood sugar, and body temperature. Some circadian rhythms play a role in changes in our state of consciousness.

If we have biological rhythms, then is there some sort of biological clock? In the brain, the hypothalamus, which lies above the pituitary gland, is a main center of homeostasis. Homeostasis is the tendency to maintain a balance, or optimal level, within a biological system. The brain's clock mechanism is located in an area of the hypothalamus known as the suprachiasmatic nucleus (SCN). The axons of light-sensitive neurons in the retina provide information to the SCN based on the amount of light present, allowing this internal clock to be synchronized with the outside world (Klein, Moore, & Reppert, 1991; Welsh, Takahashi, & Kay, 2010) (Figure 4.3).
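To make the idea of a roughly 24-hour cycle concrete, the daily body-temperature fluctuation described above can be sketched as a simple oscillation. The toy model below is ours, not part of the text; the mean, amplitude, and peak hour are rough illustrative values rather than measured data, and real circadian data are noisier than a pure sine wave.

```python
import math

# Illustrative values only (assumptions, not measured data):
MEAN_TEMP_C = 36.8   # assumed average core body temperature
AMPLITUDE_C = 0.4    # assumed half-range of the daily fluctuation
PEAK_HOUR = 17       # assumed hour of day when temperature peaks

def body_temperature(hour_of_day: float) -> float:
    """Toy estimate of core body temperature (degrees Celsius) at a given
    hour, using a cosine with a 24-hour period -- the defining feature of
    a circadian rhythm."""
    phase = 2 * math.pi * (hour_of_day - PEAK_HOUR) / 24
    return MEAN_TEMP_C + AMPLITUDE_C * math.cos(phase)

# Temperature is highest near the assumed late-afternoon peak, lowest
# roughly 12 hours later, and the pattern repeats every 24 hours:
for hour in (5, 11, 17, 23):
    print(f"{hour:02d}:00 -> {body_temperature(hour):.2f} C")
```

In this framing, alertness tracking higher body temperatures and sleepiness tracking lower ones simply corresponds to reading the peaks and troughs of the curve.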
Problems With Circadian Rhythms
Generally, and for most people, our circadian cycles are aligned with the outside world. For example, most people sleep during the night and are awake during the day. One important regulator of sleep-wake cycles is the hormone melatonin. The pineal gland, an endocrine structure located inside the brain that releases melatonin, is thought to be involved in the regulation of various biological rhythms and of the immune system during sleep (Hardeland, Pandi-Perumal, & Cardinali, 2006). Melatonin release is stimulated by darkness and inhibited by light.

There are individual differences with regard to our sleep-wake cycle. For instance, some people would say they are morning people, while others would consider themselves to be night owls. These individual differences in circadian patterns of activity are known as a person's chronotype, and research demonstrates that morning larks and night owls differ with regard to sleep regulation (Taillard, Philip, Coste, Sagaspe, & Bioulac, 2003). Sleep regulation refers to the brain's control of switching between sleep and wakefulness as well as coordinating this cycle with the outside world.

Link to Learning
Watch this brief video describing circadian rhythms and how they affect sleep.

Disruptions of Normal Sleep
Whether lark, owl, or somewhere in between, there are situations in which a person's circadian clock gets out of synchrony with the external environment. One way that this happens involves traveling across multiple time zones. When we do this, we often experience jet lag. Jet lag is a collection of symptoms that results from the mismatch between our internal circadian cycles and our environment. These symptoms include fatigue, sluggishness, irritability, and insomnia (i.e., a consistent difficulty in falling or staying asleep for at least three nights a week over a month's time) (Roth, 2007).

Individuals who do rotating shift work are also likely to experience disruptions in circadian cycles. Rotating shift work refers to a work schedule that changes from early to late on a daily or weekly basis. For example, a person may work from 7:00 a.m. to 3:00 p.m. on Monday, 3:00 p.m. to 11:00 p.m. on Tuesday, and 11:00 p.m. to 7:00 a.m. on Wednesday. In such instances, the individual's schedule changes so frequently that it becomes difficult for a normal circadian rhythm to be maintained. This often results in sleeping problems, and it can lead to signs of depression and anxiety. These kinds of schedules are common for individuals working in health care professions and service industries, and they are associated with persistent feelings of exhaustion and agitation that can make someone more prone to making mistakes on the job (Gold et al., 1992; Presser, 1995).

Rotating shift work has pervasive effects on the lives and experiences of individuals engaged in that kind of work, as clearly illustrated in stories reported in a qualitative study of the experiences of middle-aged nurses who worked rotating shifts (West, Boughton & Byrnes, 2009). Several of the nurses interviewed commented that their work schedules affected their relationships with their families. One of the nurses said, "If you've had a partner who does work regular job 9 to 5 office hours . . . the ability to spend time, good time with them when you're not feeling absolutely exhausted . . . that would be one of the problems that I've encountered." (West et al., 2009, p. 114)
While disruptions in circadian rhythms can have negative consequences, there are things we can do to help realign our biological clocks with the external environment. Some of these approaches, such as using a bright light as shown in Figure 4.4, have been shown to alleviate some of the problems experienced by individuals suffering from jet lag or from the consequences of rotating shift work. Because the biological clock is driven by light, exposure to bright light during working shifts and dark exposure when not working can help combat insomnia and symptoms of anxiety and depression (Huang, Tsai, Chen, & Hsu, 2013).

Link to Learning
Watch this video to hear tips on how to overcome jet lag.

Insufficient Sleep
When people have difficulty getting sleep due to their work or the demands of day-to-day life, they accumulate a sleep debt. A person with a sleep debt does not get sufficient sleep on a chronic basis. The consequences of sleep debt include decreased levels of alertness and mental efficiency. Interestingly, since the advent of electric light, the amount of sleep that people get has declined. While we certainly welcome the convenience of having the darkness lit up, we also suffer the consequences of reduced amounts of sleep because we are more active during the nighttime hours than our ancestors were. As a result, many of us sleep less than 7–9 hours a night and accrue a sleep debt. While there is tremendous variation in any given individual's sleep needs, the National Sleep Foundation (n.d.) cites research to estimate that newborns require the most sleep (between 12 and 18 hours a night) and that this amount declines to just 7–9 hours by the time we are adults. If you lie down to take a nap and fall asleep very easily, chances are you may have a sleep debt. Given that college students are notorious for suffering from significant sleep debt (Hicks, Fernandez, & Pelligrini, 2001; Hicks, Johnson, & Pelligrini, 1992; Miller, Shattuck, & Matsangas, 2010), chances are you and your classmates deal with sleep debt-related issues on a regular basis.

In 2015, the National Sleep Foundation updated its recommended sleep durations to better accommodate individual differences. Table 4.1 shows the new recommendations, which describe durations that are "recommended," "may be appropriate," and "not recommended."

Table 4.1 Sleep Needs at Different Ages
0–3 months: recommended 14–17 hours; may be appropriate 11–13 hours or 18–19 hours; not recommended less than 11 hours or more than 19 hours
4–11 months: recommended 12–15 hours; may be appropriate 10–11 hours or 16–18 hours; not recommended less than 10 hours or more than 18 hours
1–2 years: recommended 11–14 hours; may be appropriate 9–10 hours or 15–16 hours; not recommended less than 9 hours or more than 16 hours
3–5 years: recommended 10–13 hours; may be appropriate 8–9 hours or 14 hours; not recommended less than 8 hours or more than 14 hours
6–13 years: recommended 9–11 hours; may be appropriate 7–8 hours or 12 hours; not recommended less than 7 hours or more than 12 hours
14–17 years: recommended 8–10 hours; may be appropriate 7 hours or 11 hours; not recommended less than 7 hours or more than 11 hours
18–25 years: recommended 7–9 hours; may be appropriate 6 hours or 10–11 hours; not recommended less than 6 hours or more than 11 hours
26–64 years: recommended 7–9 hours; may be appropriate 6 hours or 10 hours; not recommended less than 6 hours or more than 10 hours
65 years and older: recommended 7–8 hours; may be appropriate 5–6 hours or 9 hours; not recommended less than 5 hours or more than 9 hours

Sleep debt and sleep deprivation have significant negative psychological and physiological consequences (Figure 4.5). As mentioned earlier, lack of sleep can result in decreased mental alertness and cognitive function. In addition, sleep deprivation often results in depression-like symptoms.
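Where the ranges in Table 4.1 need to be applied systematically (for instance, in a sleep-tracking exercise), they can be encoded as a simple lookup. The sketch below is ours, not OpenStax code; the function and variable names are hypothetical, it covers only a few age groups for brevity, and it treats the table's discrete "may be appropriate" values as continuous ranges for simplicity.

```python
# Map each age group to (min_recommended, max_recommended,
# min_appropriate, max_appropriate), in hours, per Table 4.1.
SLEEP_RANGES = {
    "14-17 years": (8, 10, 7, 11),
    "18-25 years": (7, 9, 6, 11),
    "26-64 years": (7, 9, 6, 10),
    "65+ years":   (7, 8, 5, 9),
}

def classify_sleep(age_group: str, hours: float) -> str:
    """Classify a nightly sleep duration against Table 4.1's ranges."""
    rec_lo, rec_hi, ok_lo, ok_hi = SLEEP_RANGES[age_group]
    if rec_lo <= hours <= rec_hi:
        return "recommended"
    if ok_lo <= hours <= ok_hi:
        return "may be appropriate"
    return "not recommended"

# A college student averaging 6 hours a night falls short of the 7-9 hour
# recommendation for 18-25-year-olds, accruing roughly 1-3 hours of sleep
# debt per night relative to that range.
print(classify_sleep("18-25 years", 6))   # may be appropriate
print(classify_sleep("18-25 years", 5))   # not recommended
```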
Decreased alertness, reduced mental efficiency, and depression-like symptoms can occur as a function of accumulated sleep debt or in response to more acute periods of sleep deprivation. It may surprise you to know that sleep deprivation is associated with obesity, increased blood pressure, increased levels of stress hormones, and reduced immune functioning (Banks & Dinges, 2007). A sleep-deprived individual generally will fall asleep more quickly than one who is not sleep deprived. Some sleep-deprived individuals have difficulty staying awake when they stop moving (for example, while sitting and watching television or driving a car). That is why individuals suffering from sleep deprivation can also put themselves and others at risk when they get behind the wheel of a car or work with dangerous machinery. Some research suggests that sleep deprivation affects cognitive and motor function as much as, if not more than, alcohol intoxication (Williamson & Feyer, 2000).

Link to Learning: To assess your own sleeping habits, read this article about sleep needs.

The amount of sleep we get varies across the lifespan. When we are very young, we spend up to 16 hours a day sleeping. As we grow older, we sleep less. In fact, a meta-analysis (a study that combines the results of many related studies) conducted within the last decade indicates that by the time we are 65 years old, we average fewer than 7 hours of sleep per day (Ohayon, Carskadon, Guilleminault, & Vitiello, 2004). As the amount of time we sleep varies over our lifespan, presumably the sleep debt would adjust accordingly.

4.2 Sleep and Why We Sleep

Learning Objectives
By the end of this section, you will be able to:
- Describe areas of the brain involved in sleep
- Understand hormone secretions associated with sleep
- Describe several theories aimed at explaining the function of sleep

We spend approximately one-third of our lives sleeping. Given that the average life expectancy for U.S. citizens falls between 73 and 79 years old (Singh & Siahpush, 2006), we can expect to spend approximately 25 years of our lives sleeping (one-third of roughly 75 years is about 25 years). Some animals never sleep (e.g., several fish and amphibian species); other animals can go extended periods of time without sleep and without apparent negative consequences (e.g., dolphins); yet some animals (e.g., rats) die after two weeks of sleep deprivation (Siegel, 2008). Why do we devote so much time to sleeping? Is it absolutely essential that we sleep? This section will consider these questions and explore various explanations for why we sleep.

What Is Sleep?

You have read that sleep is distinguished by low levels of physical activity and reduced sensory awareness. As discussed by Siegel (2008), a definition of sleep must also include mention of the interplay of the circadian and homeostatic mechanisms that regulate sleep. Homeostatic regulation of sleep is evidenced by sleep rebound following sleep deprivation. Sleep rebound refers to the fact that a sleep-deprived individual will tend to take a shorter time to fall asleep during subsequent opportunities for sleep. Sleep is characterized by certain patterns of brain activity that can be visualized using electroencephalography (EEG), and different phases of sleep can be differentiated using EEG as well (Figure 4.6). Sleep-wake cycles seem to be controlled by multiple brain areas acting in conjunction with one another. Some of these areas include the thalamus, the hypothalamus, and the pons.
As already mentioned, the hypothalamus contains the SCN (the biological clock of the body) in addition to other nuclei that, in conjunction with the thalamus, regulate slow-wave sleep. The pons is important for regulating rapid eye movement (REM) sleep (National Institutes of Health, n.d.).

Sleep is also associated with the secretion and regulation of a number of hormones from several endocrine glands, including melatonin, follicle-stimulating hormone (FSH), luteinizing hormone (LH), and growth hormone (National Institutes of Health, n.d.). You have read that the pineal gland releases melatonin during sleep (Figure 4.7). Melatonin is thought to be involved in the regulation of various biological rhythms and the immune system (Hardeland et al., 2006). During sleep, the pituitary gland secretes both FSH and LH, which are important in regulating the reproductive system (Christensen et al., 2012; Sofikitis et al., 2008). During sleep, the pituitary gland also secretes growth hormone, which plays a role in physical growth and maturation as well as other metabolic processes (Bartke, Sun, & Longo, 2013).

Why Do We Sleep?

Given the central role that sleep plays in our lives and the number of adverse consequences that have been associated with sleep deprivation, one would think that we would have a clear understanding of why it is that we sleep. Unfortunately, this is not the case; however, several hypotheses have been proposed to explain the function of sleep.

Adaptive Function of Sleep

One popular hypothesis of sleep incorporates the perspective of evolutionary psychology. Evolutionary psychology is a discipline that studies how universal patterns of behavior and cognitive processes have evolved over time as a result of natural selection. Variations and adaptations in cognition and behavior make individuals more or less successful in reproducing and passing their genes to their offspring. One hypothesis from this perspective might argue that sleep is essential to restore resources that are expended during the day. Just as bears hibernate in the winter when resources are scarce, perhaps people sleep at night to reduce their energy expenditures. While this is an intuitive explanation of sleep, there is little research that supports it. In fact, it has been suggested that there is no reason to think that energetic demands could not be addressed with periods of rest and inactivity (Frank, 2006; Rial et al., 2007), and some research has actually found a negative correlation between energetic demands and the amount of time spent sleeping (Capellini, Barton, McNamara, Preston, & Nunn, 2008).

Another evolutionary hypothesis of sleep holds that our sleep patterns evolved as an adaptive response to predatory risks, which increase in darkness. Thus, we sleep in safe areas to reduce the chance of harm. Again, this is an intuitive and appealing explanation for why we sleep. Perhaps our ancestors spent extended periods of time asleep to reduce attention to themselves from potential predators. Comparative research indicates, however, that the relationship that exists between predatory risk and sleep is very complex and equivocal. Some research suggests that species that face higher predatory risks sleep fewer hours than other species (Capellini et al., 2008), while other researchers suggest there is no relationship between the amount of time a given species spends in deep sleep and its predation risk (Lesku, Roth, Amlaner, & Lima, 2006).
It is quite possible that sleep serves no single universally adaptive function, and different species have evolved different patterns of sleep in response to their unique evolutionary pressures. While we have discussed the negative outcomes associated with sleep deprivation, it should be pointed out that there are many benefits associated with adequate amounts of sleep. A few such benefits listed by the National Sleep Foundation (n.d.) include maintaining a healthy weight, lowering stress levels, improving mood, and increasing motor coordination, as well as a number of benefits related to cognition and memory formation.

Cognitive Function of Sleep

Another theory regarding why we sleep involves sleep's importance for cognitive function and memory formation (Rattenborg, Lesku, Martinez-Gonzalez, & Lima, 2007). Indeed, we know sleep deprivation results in disruptions in cognition and memory deficits (Brown, 2012), leading to impairments in our abilities to maintain attention, make decisions, and recall long-term memories. Moreover, these impairments become more severe as the amount of sleep deprivation increases (Alhola & Polo-Kantola, 2007). Furthermore, slow-wave sleep after learning a new task can improve resultant performance on that task (Huber, Ghilardi, Massimini, & Tononi, 2004) and seems essential for effective memory formation (Stickgold, 2005). Understanding the impact of sleep on cognitive function should make it clear that cramming all night for a test may not be effective and can even prove counterproductive.

Link to Learning: Watch this brief video describing sleep deprivation in college students. Here's another brief video describing sleep tips for college students.

Sleep has also been associated with other cognitive benefits. Research indicates that these possible benefits include increased capacities for creative thinking (Cai, Mednick, Harrison, Kanady, & Mednick, 2009; Wagner, Gais, Haider, Verleger, & Born, 2004), language learning (Fenn, Nusbaum, & Margoliash, 2003; Gómez, Bootzin, & Nadel, 2006), and inferential judgments (Ellenbogen, Hu, Payne, Titone, & Walker, 2007). It is possible that even the processing of emotional information is influenced by certain aspects of sleep (Walker, 2009).

Link to Learning: Watch this brief video describing the relationship between sleep and memory.

4.3 Stages of Sleep

Learning Objectives
By the end of this section, you will be able to:
- Differentiate between REM and non-REM sleep
- Describe the differences between the four stages of non-REM sleep
- Understand the role that REM and non-REM sleep play in learning and memory

Sleep is not a uniform state of being. Instead, sleep is composed of several different stages that can be differentiated from one another by the patterns of brain wave activity that occur during each stage. These changes in brain wave activity can be visualized using EEG and are distinguished from one another by both the frequency and amplitude of brain waves (Figure 4.8). Sleep can be divided into two different general phases: REM sleep and non-REM (NREM) sleep. Rapid eye movement (REM) sleep is characterized by darting movements of the eyes under closed eyelids. Brain waves during REM sleep appear very similar to brain waves during wakefulness. In contrast, non-REM (NREM) sleep is subdivided into four stages distinguished from each other and from wakefulness by characteristic patterns of brain waves.
The first four stages of sleep are NREM sleep, while the fifth and final stage of sleep is REM sleep. In this section, we will discuss each of these stages of sleep and their associated patterns of brain wave activity.

NREM Stages of Sleep

The first stage of NREM sleep is known as stage 1 sleep. Stage 1 sleep is a transitional phase that occurs between wakefulness and sleep, the period during which we drift off to sleep. During this time, there is a slowdown in both the rates of respiration and heartbeat. In addition, stage 1 sleep involves a marked decrease in both overall muscle tension and core body temperature. In terms of brain wave activity, stage 1 sleep is associated with both alpha and theta waves. The early portion of stage 1 sleep produces alpha waves, which are relatively low frequency (8–13 Hz), high amplitude patterns of electrical activity (waves) that become synchronized (Figure 4.9). This pattern of brain wave activity resembles that of someone who is very relaxed, yet awake. As an individual continues through stage 1 sleep, there is an increase in theta wave activity. Theta waves are even lower frequency (4–7 Hz), higher amplitude brain waves than alpha waves. It is relatively easy to wake someone from stage 1 sleep; in fact, people often report that they have not been asleep if they are awoken during stage 1 sleep.

As we move into stage 2 sleep, the body goes into a state of deep relaxation. Theta waves still dominate the activity of the brain, but they are interrupted by brief bursts of activity known as sleep spindles (Figure 4.10). A sleep spindle is a rapid burst of higher frequency brain waves that may be important for learning and memory (Fogel & Smith, 2011; Poe, Walsh, & Bjorness, 2010). In addition, the appearance of K-complexes is often associated with stage 2 sleep. A K-complex is a very high amplitude pattern of brain activity that may in some cases occur in response to environmental stimuli. Thus, K-complexes might serve as a bridge to higher levels of arousal in response to what is going on in our environments (Halász, 1993; Steriade & Amzica, 1998).

Stage 3 and stage 4 of sleep are often referred to as deep sleep or slow-wave sleep because these stages are characterized by low frequency (up to 4 Hz), high amplitude delta waves (Figure 4.11). During this time, an individual's heart rate and respiration slow dramatically. It is much more difficult to awaken someone from sleep during stage 3 and stage 4 than during earlier stages. Interestingly, individuals who have increased levels of alpha brain wave activity (more often associated with wakefulness and transition into stage 1 sleep) during stage 3 and stage 4 often report that they do not feel refreshed upon waking, regardless of how long they slept (Stone, Taylor, McCrae, Kalsekar, & Lichstein, 2008).

REM Sleep

As mentioned earlier, REM sleep is marked by rapid movements of the eyes. The brain waves associated with this stage of sleep are very similar to those observed when a person is awake, as shown in Figure 4.12, and this is the period of sleep in which dreaming occurs. It is also associated with paralysis of muscle systems in the body with the exception of those that make circulation and respiration possible. Therefore, no movement of voluntary muscles occurs during REM sleep in a normal individual; REM sleep is often referred to as paradoxical sleep because of this combination of high brain activity and lack of muscle tone.
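Since the stages are distinguished largely by dominant frequency bands, the boundaries given above (delta up to 4 Hz, theta 4–7 Hz, alpha 8–13 Hz) can be captured in a few lines. The following sketch simply restates those numbers; the function name and the handling of frequencies that fall between the stated bands (e.g., 7.5 Hz) are arbitrary choices for illustration.

```python
# Bin a dominant EEG frequency into the bands described in this section.

def classify_band(freq_hz):
    if freq_hz < 4:
        return "delta: slow-wave sleep (stages 3 and 4)"
    if freq_hz <= 7:
        return "theta: late stage 1 and stage 2 sleep"
    if 8 <= freq_hz <= 13:
        return "alpha: relaxed wakefulness and early stage 1"
    return "outside the bands discussed in this section"

for freq in (2.0, 5.5, 10.0, 25.0):
    print(f"{freq} Hz -> {classify_band(freq)}")
```

A real sleep-staging system would, of course, score whole epochs of EEG (along with eye movements and muscle tone) rather than a single frequency; the sketch only organizes the band boundaries named in the text.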
Like NREM sleep, REM has been implicated in various aspects of learning and memory (Wagner, Gais, & Born, 2001), although there is disagreement within the scientific community about how important both NREM and REM sleep are for normal learning and memory (Siegel, 2001). If people are deprived of REM sleep and then allowed to sleep without disturbance, they will spend more time in REM sleep in what would appear to be an effort to recoup the lost time in REM. This is known as the REM rebound, and it suggests that REM sleep is also homeostatically regulated. Aside from the role that REM sleep may play in processes related to learning and memory, REM sleep may also be involved in emotional processing and regulation. In such instances, REM rebound may actually represent an adaptive response to stress in nondepressed individuals by suppressing the emotional salience of aversive events that occurred in wakefulness (Suchecki, Tiba, & Machado, 2012).

While sleep deprivation in general is associated with a number of negative consequences (Brown, 2012), the consequences of REM deprivation appear to be less profound (as discussed in Siegel, 2001). In fact, some have suggested that REM deprivation can actually be beneficial in some circumstances. For instance, REM sleep deprivation has been demonstrated to improve symptoms of people suffering from major depression, and many effective antidepressant medications suppress REM sleep (Riemann, Berger, & Voderholzer, 2001; Vogel, 1975). It should be pointed out that some reviews of the literature challenge this finding, suggesting that sleep deprivation that is not limited to REM sleep is just as effective or more effective at alleviating depressive symptoms among some patients suffering from depression. In either case, why sleep deprivation improves the mood of some patients is not entirely understood (Giedke & Schwärzler, 2002). Recently, however, some have suggested that sleep deprivation might change emotional processing so that various stimuli are more likely to be perceived as positive in nature (Gujar, Yoo, Hu, & Walker, 2011). The hypnogram below (Figure 4.13) shows a person's passage through the stages of sleep.

Link to Learning: View this video that describes the various stages of sleep.

Dreams

The meaning of dreams varies across different cultures and periods of time. By the late 19th century, Austrian psychiatrist Sigmund Freud had become convinced that dreams represented an opportunity to gain access to the unconscious. By analyzing dreams, Freud thought people could increase self-awareness and gain valuable insight to help them deal with the problems they faced in their lives. Freud made distinctions between the manifest content and the latent content of dreams. Manifest content is the actual content, or storyline, of a dream. Latent content, on the other hand, refers to the hidden meaning of a dream. For instance, if a woman dreams about being chased by a snake, Freud might have argued that this represents the woman's fear of sexual intimacy, with the snake serving as a symbol of a man's penis.

Freud was not the only theorist to focus on the content of dreams. The 20th-century Swiss psychiatrist Carl Jung believed that dreams allowed us to tap into the collective unconscious. The collective unconscious, as described by Jung, is a theoretical repository of information he believed to be shared by everyone.
According to Jung, certain symbols in dreams reflected universal archetypes with meanings that are similar for all people regardless of culture or location. The sleep and dreaming researcher Rosalind Cartwright, however, believes that dreams simply reflect life events that are important to the dreamer. Unlike those of Freud and Jung, Cartwright's ideas about dreaming have found empirical support. For example, she and her colleagues published a study in which women going through divorce were asked several times over a five-month period to report the degree to which their former spouses were on their minds. These same women were awakened during REM sleep in order to provide a detailed account of their dream content. There was a significant positive correlation between the degree to which women thought about their former spouses during waking hours and the number of times their former spouses appeared as characters in their dreams (Cartwright, Agargun, Kirkby, & Friedman, 2006). Recent research (Horikawa, Tamaki, Miyawaki, & Kamitani, 2013) has uncovered new techniques by which researchers may effectively detect and classify the visual images that occur during dreaming by using fMRI for neural measurement of brain activity patterns, opening the way for additional research in this area.

Recently, neuroscientists have also become interested in understanding why we dream. For example, Hobson (2009) suggests that dreaming may represent a state of protoconsciousness. In other words, dreaming involves constructing a virtual reality in our heads that we might use to help us during wakefulness. Among a variety of neurobiological evidence, Hobson cites research on lucid dreams as an opportunity to better understand dreaming in general. Lucid dreams are dreams in which certain aspects of wakefulness are maintained during a dream state. In a lucid dream, a person becomes aware of the fact that they are dreaming, and as such, they can control the dream's content (LaBerge, 1990).

4.4 Sleep Problems and Disorders

Learning Objectives
By the end of this section, you will be able to:
- Describe the symptoms and treatments of insomnia
- Recognize the symptoms of several parasomnias
- Describe the symptoms and treatments for sleep apnea
- Recognize risk factors associated with sudden infant death syndrome (SIDS) and steps to prevent it
- Describe the symptoms and treatments for narcolepsy

Many people experience disturbances in their sleep at some point in their lives. Depending on the population and sleep disorder being studied, between 30% and 50% of the population suffers from a sleep disorder at some point in their lives (Bixler, Kales, Soldatos, Kales, & Healey, 1979; Hossain & Shapiro, 2002; Ohayon, 1997, 2002; Ohayon & Roth, 2002). This section will describe several sleep disorders as well as some of their treatment options.

Insomnia

Insomnia, a consistent difficulty in falling or staying asleep, is the most common of the sleep disorders. Individuals with insomnia often experience long delays between the times that they go to bed and actually fall asleep. In addition, these individuals may wake up several times during the night only to find that they have difficulty getting back to sleep. As mentioned earlier, one of the criteria for insomnia involves experiencing these symptoms for at least three nights a week for at least one month's time (Roth, 2007). It is not uncommon for people suffering from insomnia to experience increased levels of anxiety about their inability to fall asleep.
This becomes a self-perpetuating cycle because increased anxiety leads to increased arousal, and higher levels of arousal make the prospect of falling asleep even more unlikely. Chronic insomnia is almost always associated with feeling overtired and may be associated with symptoms of depression. There may be many factors that contribute to insomnia, including age, drug use, exercise, mental status, and bedtime routines. Not surprisingly, insomnia treatment may take one of several different approaches. People who suffer from insomnia might limit their use of stimulant drugs (such as caffeine) or increase their amount of physical exercise during the day. Some people might turn to over-the-counter (OTC) or prescribed sleep medications to help them sleep, but this should be done sparingly because many sleep medications result in dependence and alter the nature of the sleep cycle, and they can increase insomnia over time. Those who continue to have insomnia, particularly if it affects their quality of life, should seek professional treatment.

Some forms of psychotherapy, such as cognitive-behavioral therapy, can help sufferers of insomnia. Cognitive-behavioral therapy is a type of psychotherapy that focuses on cognitive processes and problem behaviors. The treatment of insomnia likely would include stress management techniques and changes in problematic behaviors that could contribute to insomnia (e.g., spending more waking time in bed). Cognitive-behavioral therapy has been demonstrated to be quite effective in treating insomnia (Savard, Simard, Ivers, & Morin, 2005; Williams, Roth, Vatthauer, & McCrae, 2013).

Parasomnias

A parasomnia is one of a group of sleep disorders in which unwanted, disruptive motor activity and/or experiences during sleep play a role. Parasomnias can occur in either REM or NREM phases of sleep. Sleepwalking, restless leg syndrome, and night terrors are all examples of parasomnias (Mahowald & Schenck, 2000).

Sleepwalking

In sleepwalking, or somnambulism, the sleeper engages in relatively complex behaviors ranging from wandering about to driving an automobile. During periods of sleepwalking, sleepers often have their eyes open, but they are not responsive to attempts to communicate with them. Sleepwalking most often occurs during slow-wave sleep, but it can occur at any time during a sleep period in some affected individuals (Mahowald & Schenck, 2000).

Historically, somnambulism has been treated with a variety of pharmacotherapies ranging from benzodiazepines to antidepressants. However, the success rate of such treatments is questionable. Guilleminault et al. (2005) found that sleepwalking was not alleviated with the use of benzodiazepines. However, all of their somnambulistic patients who also suffered from sleep-related breathing problems showed a marked decrease in sleepwalking when their breathing problems were effectively treated.

Dig Deeper: A Sleepwalking Defense?

On January 16, 1997, Scott Falater sat down to dinner with his wife and children and told them about difficulties he was experiencing on a project at work. After dinner, he prepared some materials to use in leading a church youth group the following morning, and then he attempted to repair the family's swimming pool pump before retiring to bed. The following morning, he awoke to barking dogs and unfamiliar voices from downstairs. As he went to investigate what was going on, he was met by a group of police officers who arrested him for the murder of his wife (Cartwright, 2004; CNN, 1999).
Yarmila Falater's body was found in the family's pool with 44 stab wounds. A neighbor called the police after witnessing Falater standing over his wife's body before dragging her into the pool. Upon a search of the premises, police found blood-stained clothes and a bloody knife in the trunk of Falater's car, and he had blood stains on his neck. Remarkably, Falater insisted that he had no recollection of hurting his wife in any way. His children and his wife's parents all agreed that Falater had an excellent relationship with his wife and they couldn't think of a reason that would provide any sort of motive to murder her (Cartwright, 2004).

Scott Falater had a history of regular episodes of sleepwalking as a child, and he had even behaved violently toward his sister once when she tried to prevent him from leaving their home in his pajamas during a sleepwalking episode. He suffered from no apparent anatomical brain anomalies or psychological disorders. It appeared that Scott Falater had killed his wife in his sleep, or at least, that is the defense he used when he was tried for his wife's murder (Cartwright, 2004; CNN, 1999). In Falater's case, a jury found him guilty of first-degree murder in June of 1999 (CNN, 1999); however, there are other murder cases where the sleepwalking defense has been used successfully. As scary as it sounds, many sleep researchers believe that homicidal sleepwalking is possible in individuals suffering from the types of sleep disorders described below (Broughton et al., 1994; Cartwright, 2004; Mahowald, Schenck, & Cramer Bornemann, 2005; Pressman, 2007).

REM Sleep Behavior Disorder (RBD)

REM sleep behavior disorder (RBD) occurs when the muscle paralysis associated with the REM sleep phase does not occur. Individuals who suffer from RBD have high levels of physical activity during REM sleep, especially during disturbing dreams. These behaviors vary widely, but they can include kicking, punching, scratching, yelling, and behaving like an animal that has been frightened or attacked. People who suffer from this disorder can injure themselves or their sleeping partners when engaging in these behaviors. Furthermore, these types of behaviors ultimately disrupt sleep, although affected individuals have no memories that these behaviors have occurred (Arnulf, 2012). This disorder is associated with a number of neurodegenerative diseases such as Parkinson's disease. In fact, this relationship is so robust that some view the presence of RBD as a potential aid in the diagnosis and treatment of a number of neurodegenerative diseases (Ferini-Strambi, 2011). Clonazepam, an anti-anxiety medication with sedative properties, is most often used to treat RBD. It is administered alone or in conjunction with doses of melatonin (the hormone secreted by the pineal gland). As part of treatment, the sleeping environment is often modified to make it a safer place for those suffering from RBD (Zangini, Calandra-Buonaura, Grimaldi, & Cortelli, 2011).

Other Parasomnias

A person with restless leg syndrome has uncomfortable sensations in the legs during periods of inactivity or when trying to fall asleep. This discomfort is relieved by deliberately moving the legs, which, not surprisingly, contributes to difficulty in falling or staying asleep. Restless leg syndrome is quite common and has been associated with a number of other medical diagnoses, such as chronic kidney disease and diabetes (Mahowald & Schenck, 2000).
There are a variety of drugs that treat restless leg syndrome: benzodiazepines, opiates, and anticonvulsants (Restless Legs Syndrome Foundation, n.d.).

Night terrors result in a sense of panic in the sufferer and are often accompanied by screams and attempts to escape from the immediate environment (Mahowald & Schenck, 2000). Although individuals suffering from night terrors appear to be awake, they generally have no memories of the events that occurred, and attempts to console them are ineffective. Typically, individuals suffering from night terrors will fall back asleep again within a short time. Night terrors apparently occur during the NREM phase of sleep (Provini, Tinuper, Bisulli, & Lugaresi, 2011). Generally, treatment for night terrors is unnecessary unless there is some underlying medical or psychological condition that is contributing to the night terrors (Mayo Clinic, n.d.).

Sleep Apnea

Sleep apnea is defined by episodes during which a sleeper's breathing stops. These episodes can last 10–20 seconds or longer and often are associated with brief periods of arousal. While individuals suffering from sleep apnea may not be aware of these repeated disruptions in sleep, they do experience increased levels of fatigue. Many individuals diagnosed with sleep apnea first seek treatment because their sleeping partners indicate that they snore loudly and/or stop breathing for extended periods of time while sleeping (Henry & Rosenthal, 2013). Sleep apnea is much more common in overweight people and is often associated with loud snoring. Surprisingly, sleep apnea may exacerbate cardiovascular disease (Sánchez-de-la-Torre, Campos-Rodriguez, & Barbé, 2012). While sleep apnea is less common in thin people, anyone, regardless of their weight, who snores loudly or gasps for air while sleeping should be checked for sleep apnea.

While people are often unaware of their sleep apnea, they are keenly aware of some of the adverse consequences of insufficient sleep. Consider a patient who believed that as a result of his sleep apnea he "had three car accidents in six weeks. They were ALL my fault. Two of them I didn't even know I was involved in until afterwards" (Henry & Rosenthal, 2013, p. 52). It is not uncommon for people suffering from undiagnosed or untreated sleep apnea to fear that their careers will be affected by the lack of sleep, illustrated by this statement from another patient: "I'm in a job where there's a premium on being mentally alert. I was really sleepy… and having trouble concentrating…. It was getting to the point where it was kind of scary" (Henry & Rosenthal, 2013, p. 52).

There are two types of sleep apnea: obstructive sleep apnea and central sleep apnea. Obstructive sleep apnea occurs when an individual's airway becomes blocked during sleep, and air is prevented from entering the lungs. In central sleep apnea, disruption in signals sent from the brain that regulate breathing causes periods of interrupted breathing (White, 2005).

One of the most common treatments for sleep apnea involves the use of a special device during sleep. A continuous positive airway pressure (CPAP) device includes a mask that fits over the sleeper's nose and mouth and is connected to a pump that pumps air into the person's airways, forcing them to remain open, as shown in Figure 4.14. Some newer CPAP masks are smaller and cover only the nose. This treatment option has proven to be effective for people suffering from mild to severe cases of sleep apnea (McDaid et al., 2009).
However, alternative treatment options are being explored because consistent compliance by users of CPAP devices is a problem. Recently, a new EPAP (expiratory positive air pressure) device has shown promise in double-blind trials as one such alternative (Berry, Kryger, & Massie, 2011).

SIDS

In sudden infant death syndrome (SIDS), an infant stops breathing during sleep and dies. Infants younger than 12 months appear to be at the highest risk for SIDS, and boys have a greater risk than girls. A number of risk factors have been associated with SIDS, including premature birth, smoking within the home, and hyperthermia. There may also be differences in both brain structure and function in infants that die from SIDS (Berkowitz, 2012; Mage & Donner, 2006; Thach, 2005).

The substantial amount of research on SIDS has led to a number of recommendations to parents to protect their children (Figure 4.15). For one, research suggests that infants should be placed on their backs when put down to sleep, and their cribs should not contain any items that pose suffocation threats, such as blankets, pillows, or padded crib bumpers (cushions that cover the bars of a crib). To prevent overheating, infants should not have caps placed on their heads when put down to sleep, and people in the child's household should abstain from smoking in the home. Recommendations like these have helped to decrease the number of infant deaths from SIDS in recent years (Mitchell, 2009; Task Force on Sudden Infant Death Syndrome, 2011).

Narcolepsy

Unlike the other sleep disorders described in this section, a person with narcolepsy cannot resist falling asleep at inopportune times. These sleep episodes are often associated with cataplexy, which is a lack of muscle tone or muscle weakness, and in some cases involves complete paralysis of the voluntary muscles. This is similar to the kind of paralysis experienced by healthy individuals during REM sleep (Burgess & Scammell, 2012; Hishikawa & Shimizu, 1995; Luppi et al., 2011). Narcoleptic episodes take on other features of REM sleep. For example, around one-third of individuals diagnosed with narcolepsy experience vivid, dream-like hallucinations during narcoleptic attacks (Chokroverty, 2010).

Surprisingly, narcoleptic episodes are often triggered by states of heightened arousal or stress. The typical episode can last from a minute or two to half an hour. Once awakened from a narcoleptic attack, people report that they feel refreshed (Chokroverty, 2010). Obviously, regular narcoleptic episodes could interfere with the ability to perform one's job or complete schoolwork, and in some situations, narcolepsy can result in significant harm and injury (e.g., while driving a car or operating machinery or other potentially dangerous equipment).

Generally, narcolepsy is treated using psychomotor stimulant drugs, such as amphetamines (Mignot, 2012). These drugs promote increased levels of neural activity. Narcolepsy is associated with reduced levels of the signaling molecule hypocretin in some areas of the brain (De la Herrán-Arita & Drucker-Colín, 2012; Han, 2012), and the traditional stimulant drugs do not have direct effects on this system. Therefore, it is quite likely that new medications that are developed to treat narcolepsy will be designed to target the hypocretin system. There is a tremendous amount of variability among sufferers, both in terms of how symptoms of narcolepsy manifest and in the effectiveness of currently available treatment options.
This is illustrated by McCarty's (2010) case study of a 50-year-old woman who sought help for the excessive sleepiness during normal waking hours that she had experienced for several years. She indicated that she had fallen asleep at inappropriate or dangerous times, including while eating, while socializing with friends, and while driving her car. During periods of emotional arousal, the woman complained that she felt some weakness in the right side of her body. Although she did not experience any dream-like hallucinations, she was diagnosed with narcolepsy as a result of sleep testing. In her case, the fact that her cataplexy was confined to the right side of her body was quite unusual. Early attempts to treat her condition with a stimulant drug alone were unsuccessful. However, when a stimulant drug was used in conjunction with a popular antidepressant, her condition improved dramatically.

4.5 Substance Use and Abuse

Learning Objectives
By the end of this section, you will be able to:
- Describe the diagnostic criteria for substance use disorders
- Identify the neurotransmitter systems impacted by various categories of drugs
- Describe how different categories of drugs affect behavior and experience

While we all experience altered states of consciousness in the form of sleep on a regular basis, some people use drugs and other substances that result in altered states of consciousness as well. This section will present information relating to the use of various psychoactive drugs and problems associated with such use. This will be followed by brief descriptions of the effects of some of the more well-known drugs commonly used today.

Substance Use Disorders

The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) is used by clinicians to diagnose individuals suffering from various psychological disorders. Drug use disorders are addictive disorders, and the criteria for specific substance (drug) use disorders are described in DSM-5. A person who has a substance use disorder often uses more of the substance than they originally intended to and continues to use that substance despite experiencing significant adverse consequences. In individuals diagnosed with a substance use disorder, there is a compulsive pattern of drug use that is often associated with both physical and psychological dependence.

Physical dependence involves changes in normal bodily functions; the user will experience withdrawal from the drug upon cessation of use. In contrast, a person who has psychological dependence has an emotional, rather than physical, need for the drug and may use the drug to relieve psychological distress. Tolerance is linked to physiological dependence, and it occurs when a person requires more and more drug to achieve effects previously experienced at lower doses. Tolerance can cause the user to increase the amount of drug used to a dangerous level, even to the point of overdose and death.

Drug withdrawal includes a variety of negative symptoms experienced when drug use is discontinued. These symptoms usually are opposite of the effects of the drug. For example, withdrawal from sedative drugs often produces unpleasant arousal and agitation. In addition to withdrawal, many individuals who are diagnosed with substance use disorders will also develop tolerance to these substances. Psychological dependence, or drug craving, is a recent addition to the diagnostic criteria for substance use disorder in DSM-5.
This is an important factor because we can develop tolerance and experience withdrawal from any number of drugs that we do not abuse. In other words, physical dependence in and of itself is of limited utility in determining whether or not someone has a substance use disorder.

Drug Categories

The effects of all psychoactive drugs occur through their interactions with our endogenous neurotransmitter systems. Many of these drugs, and their relationships, are shown in Figure 4.16. As you have learned, drugs can act as agonists or antagonists of a given neurotransmitter system. An agonist facilitates the activity of a neurotransmitter system, and an antagonist impedes neurotransmitter activity.

Alcohol and Other Depressants

Ethanol, which we commonly refer to as alcohol, is in a class of psychoactive drugs known as depressants (Figure 4.17). A depressant is a drug that tends to suppress central nervous system activity. Other depressants include barbiturates and benzodiazepines. These drugs share in common their ability to serve as agonists of the gamma-aminobutyric acid (GABA) neurotransmitter system. Because GABA has a quieting effect on the brain, GABA agonists also have a quieting effect; these types of drugs are often prescribed to treat both anxiety and insomnia.

Acute alcohol administration results in a variety of changes to consciousness. At rather low doses, alcohol use is associated with feelings of euphoria. As the dose increases, people report feeling sedated. Generally, alcohol is associated with slowed reaction time, reduced visual acuity, lowered levels of alertness, and reduced behavioral control. With excessive alcohol use, a person might experience a complete loss of consciousness and/or difficulty remembering events that occurred during a period of intoxication (McKim & Hancock, 2013). In addition, if a pregnant woman consumes alcohol, her infant may be born with a cluster of birth defects and symptoms collectively called fetal alcohol spectrum disorder (FASD) or fetal alcohol syndrome (FAS).

With repeated use of many central nervous system depressants, such as alcohol, a person becomes physically dependent upon the substance and will exhibit signs of both tolerance and withdrawal. Psychological dependence on these drugs is also possible. Therefore, the abuse potential of central nervous system depressants is relatively high. Drug withdrawal is usually an aversive experience, and it can be a life-threatening process in individuals who have a long history of very high doses of alcohol and/or barbiturates. This is of such concern that people who are trying to overcome addiction to these substances should only do so under medical supervision.

Stimulants

Stimulants are drugs that tend to increase overall levels of neural activity. Many of these drugs act as agonists of the dopamine neurotransmitter system. Dopamine activity is often associated with reward and craving; therefore, drugs that affect dopamine neurotransmission often have abuse liability. Drugs in this category include cocaine, amphetamines (including methamphetamine), cathinones (i.e., bath salts), MDMA (ecstasy), nicotine, and caffeine.

Cocaine can be taken in multiple ways. While many users snort cocaine, intravenous injection and ingestion are also common. The freebase version of cocaine, known as crack, is a potent, smokable version of the drug. Like many other stimulants, cocaine agonizes the dopamine neurotransmitter system by blocking the reuptake of dopamine in the neuronal synapse.
Dig Deeper: Crack Cocaine

Crack (Figure 4.18) is often considered to be more addictive than cocaine itself because it is smokable and reaches the brain very quickly. Crack is often less expensive than other forms of cocaine; therefore, it tends to be a more accessible drug for individuals from impoverished segments of society. During the 1980s, many drug laws were rewritten to punish crack users more severely than cocaine users. This led to discriminatory sentencing, with low-income, inner-city minority populations receiving the harshest punishments. The wisdom of these laws has recently been called into question, especially given research that suggests crack may not be more addictive than other forms of cocaine, as previously thought (Haasen & Krausz, 2001; Reinarman, 2007).

Link to Learning: Read this interesting newspaper article describing myths about crack cocaine.

Amphetamines have a mechanism of action quite similar to cocaine in that they block the reuptake of dopamine in addition to stimulating its release (Figure 4.19). While amphetamines are often abused, they are also commonly prescribed to children diagnosed with attention deficit hyperactivity disorder (ADHD). It may seem counterintuitive that stimulant medications are prescribed to treat a disorder that involves hyperactivity, but the therapeutic effect comes from increases in neurotransmitter activity within certain areas of the brain associated with impulse control.

In recent years, methamphetamine (meth) use has become increasingly widespread. Methamphetamine is a type of amphetamine that can be made from ingredients that are readily available (e.g., medications containing pseudoephedrine, a compound found in many over-the-counter cold and flu remedies). Despite recent changes in laws designed to make obtaining pseudoephedrine more difficult, methamphetamine continues to be an easily accessible and relatively inexpensive drug option (Shukla, Crump, & Chrisco, 2012).

Users of cocaine, amphetamines, cathinones, and MDMA seek a euphoric high (feelings of intense elation and pleasure), especially those who take the drug via intravenous injection or smoking. Repeated use of these stimulants can have significant adverse consequences. Users can experience physical symptoms that include nausea, elevated blood pressure, and increased heart rate. In addition, these drugs can cause feelings of anxiety, hallucinations, and paranoia (Fiorentini et al., 2011). Normal brain functioning is altered after repeated use of these drugs. For example, repeated use can lead to overall depletion among the monoamine neurotransmitters (dopamine, norepinephrine, and serotonin). People may engage in compulsive use of these stimulant substances in part to try to reestablish normal levels of these neurotransmitters (Jayanthi & Ramamoorthy, 2005; Rothman, Blough, & Baumann, 2007).

Caffeine is another stimulant drug. While it is probably the most commonly used drug in the world, the potency of this particular drug pales in comparison to the other stimulant drugs described in this section. Generally, people use caffeine to maintain increased levels of alertness and arousal. Caffeine is found in many common medicines (such as weight loss drugs), beverages, foods, and even cosmetics (Herman & Herman, 2013). While caffeine may have some indirect effects on dopamine neurotransmission, its primary mechanism of action involves antagonizing adenosine activity (Porkka-Heiskanen, 2011).
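At this point three mechanisms have been named: depressants act as GABA agonists, cocaine and amphetamines act as dopamine agonists (cocaine blocks reuptake; amphetamines block reuptake and stimulate release), and caffeine acts as an adenosine antagonist. A small lookup structure can keep these straight; the sketch below is for orientation only and simply restates the text, not a pharmacology reference.

```python
# Mechanisms described so far: drug class -> (neurotransmitter system, action).
# An agonist facilitates a system's activity; an antagonist impedes it.

MECHANISMS = {
    "depressants (alcohol, barbiturates, benzodiazepines)": ("GABA", "agonist"),
    "stimulants (cocaine, amphetamines)": ("dopamine", "agonist (reuptake blockade)"),
    "caffeine": ("adenosine", "antagonist"),
}

for drug_class, (system, action) in MECHANISMS.items():
    effect = "facilitates" if action.startswith("agonist") else "impedes"
    print(f"{drug_class}: {action} of {system} ({effect} {system} activity)")
```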
While caffeine is generally considered a relatively safe drug, high blood levels of caffeine can result in insomnia, agitation, muscle twitching, nausea, irregular heartbeat, and even death (Reissig, Strain, & Griffiths, 2009; Wolk, Ganetsky, & Babu, 2012). In 2012, Kromann and Nielsen reported on a case study of a 40-year-old woman who suffered significant ill effects from her use of caffeine. The woman used caffeine in the past to boost her mood and to provide energy, but over the course of several years, she increased her caffeine consumption to the point that she was consuming three liters of soda each day. Although she had been taking a prescription antidepressant, her symptoms of depression continued to worsen and she began to suffer physically, displaying significant warning signs of cardiovascular disease and diabetes. Upon admission to an outpatient clinic for treatment of mood disorders, she met all of the diagnostic criteria for substance dependence and was advised to dramatically limit her caffeine intake. Once she was able to limit her use to less than 12 ounces of soda a day, both her mental and physical health gradually improved. Despite the prevalence of caffeine use and the large number of people who confess to suffering from caffeine addiction, this was the first description of soda dependence to appear in the scientific literature.

Nicotine is highly addictive, and the use of tobacco products is associated with increased risks of heart disease, stroke, and a variety of cancers. Nicotine exerts its effects through its interaction with acetylcholine receptors. Acetylcholine functions as a neurotransmitter in motor neurons. In the central nervous system, it plays a role in arousal and reward mechanisms. Nicotine is most commonly used in the form of tobacco products like cigarettes or chewing tobacco; therefore, there is a tremendous interest in developing effective smoking cessation techniques. To date, people have used a variety of nicotine replacement therapies in addition to various psychotherapeutic options in an attempt to discontinue their use of tobacco products. In general, smoking cessation programs may be effective in the short term, but it is unclear whether these effects persist (Cropley, Theadom, Pravettoni, & Webb, 2008; Levitt, Shaw, Wong, & Kaczorowski, 2007; Smedslund, Fisher, Boles, & Lichtenstein, 2004).

Opioids

An opioid is one of a category of drugs that includes heroin, morphine, methadone, and codeine. Opioids have analgesic properties; that is, they decrease pain. Humans have an endogenous opioid neurotransmitter system: the body makes small quantities of opioid compounds that bind to opioid receptors, reducing pain and producing euphoria. Thus, opioid drugs, which mimic this endogenous painkilling mechanism, have an extremely high potential for abuse. Natural opioids, called opiates, are derivatives of opium, which is a naturally occurring compound found in the poppy plant. There are now several synthetic versions of opiate drugs (correctly called opioids) that have very potent painkilling effects, and they are often abused. For example, the National Institute on Drug Abuse has sponsored research that suggests the misuse and abuse of the prescription painkillers hydrocodone and oxycodone are significant public health concerns (Maxwell, 2006). In 2013, the U.S. Food and Drug Administration recommended tighter controls on their medical use. Historically, heroin has been a major opioid drug of abuse (Figure 4.20).
Heroin can be snorted, smoked, or injected intravenously. Like the stimulants described earlier, the use of heroin is associated with an initial feeling of euphoria followed by periods of agitation. Because heroin is often administered via intravenous injection, users often bear needle track marks on their arms and, like all abusers of intravenous drugs, have an increased risk for contraction of both tuberculosis and HIV. Aside from their utility as analgesic drugs, opioid-like compounds are often found in cough suppressants and in anti-nausea and anti-diarrhea medications. Given that withdrawal from a drug often involves an experience opposite to the effect of the drug, it should be no surprise that opioid withdrawal resembles a severe case of the flu. While opioid withdrawal can be extremely unpleasant, it is not life-threatening (Julien, 2005). Still, people experiencing opioid withdrawal may be given methadone to make withdrawal from the drug less difficult. Methadone is a synthetic opioid that is less euphorigenic than heroin and similar drugs. Methadone clinics help people who previously struggled with opioid addiction manage withdrawal symptoms through the use of methadone. Other drugs, including the opioid buprenorphine, have also been used to alleviate symptoms of opiate withdrawal.

Codeine is an opioid with relatively low potency. It is often prescribed for minor pain, and it is available over the counter in some other countries. Like all opioids, codeine does have abuse potential. In fact, abuse of prescription opioid medications is becoming a major concern worldwide (Aquina, Marques-Baptista, Bridgeman, & Merlin, 2009; Casati, Sedefov, & Pfeiffer-Gerschel, 2012).

Hallucinogens

A hallucinogen is one of a class of drugs that results in profound alterations in sensory and perceptual experiences (Figure 4.21). In some cases, users experience vivid visual hallucinations. It is also common for these types of drugs to cause hallucinations of body sensations (e.g., feeling as if you are a giant) and a skewed perception of the passage of time. As a group, hallucinogens are incredibly varied in terms of the neurotransmitter systems they affect. Mescaline and LSD are serotonin agonists, and PCP (angel dust) and ketamine (an animal anesthetic) act as antagonists of the NMDA glutamate receptor. In general, these drugs are not thought to possess the same sort of abuse potential as other classes of drugs discussed in this section.

Link to Learning: To learn more about some of the most commonly abused prescription and street drugs, check out the Commonly Abused Drugs Chart and the Commonly Abused Prescription Drugs Chart from the National Institute on Drug Abuse.

Dig Deeper: Medical Marijuana

While the possession and use of marijuana is illegal in most states, it is now legal in Washington and Colorado to possess limited quantities of marijuana for recreational use (Figure 4.22). In contrast, medical marijuana use is now legal in nearly half of the United States and in the District of Columbia. Medical marijuana is marijuana that is prescribed by a doctor for the treatment of a health condition. For example, people who undergo chemotherapy will often be prescribed marijuana to stimulate their appetites and prevent excessive weight loss resulting from the side effects of chemotherapy treatment. Marijuana may also have some promise in the treatment of a variety of medical conditions (Mather, Rauwendaal, Moxham-Hall, & Wodak, 2013; Robson, 2014; Schicho & Storr, 2014).
While medical marijuana laws have been passed on a state-by-state basis, federal laws still classify marijuana as an illicit substance, making it difficult to conduct research on its potentially beneficial medicinal uses. There is quite a bit of controversy within the scientific community as to the extent to which marijuana might have medicinal benefits due to a lack of large-scale, controlled research (Bostwick, 2012). As a result, many scientists have urged the federal government to allow for relaxation of current marijuana laws and classifications in order to facilitate a more widespread study of the drug's effects (Aggarwal et al., 2009; Bostwick, 2012; Kogan & Mechoulam, 2007). Until recently, the United States Department of Justice routinely arrested people involved with, and seized, marijuana used in medicinal settings. In the latter part of 2013, however, the United States Department of Justice issued statements indicating that it would not continue to challenge state medical marijuana laws. This shift in policy may be in response to the scientific community's recommendations and/or may reflect changing public opinion regarding marijuana.

4.6 Other States of Consciousness

Learning Objectives
By the end of this section, you will be able to:
- Define hypnosis and meditation
- Understand the similarities and differences of hypnosis and meditation

Our states of consciousness change as we move from wakefulness to sleep. We also alter our consciousness through the use of various psychoactive drugs. This final section will consider hypnotic and meditative states as additional examples of altered states of consciousness experienced by some individuals.

Hypnosis

Hypnosis is a state of extreme self-focus and attention in which minimal attention is given to external stimuli. In the therapeutic setting, a clinician may use relaxation and suggestion in an attempt to alter the thoughts and perceptions of a patient. Hypnosis has also been used to draw out information believed to be buried deeply in someone's memory. For individuals who are especially open to the power of suggestion, hypnosis can prove to be a very effective technique, and brain imaging studies have demonstrated that hypnotic states are associated with global changes in brain functioning (Del Casale et al., 2012; Guldenmund, Vanhaudenhuyse, Boly, Laureys, & Soddu, 2012).

Historically, hypnosis has been viewed with some suspicion because of its portrayal in popular media and entertainment (Figure 4.23). Therefore, it is important to make a distinction between hypnosis as an empirically based therapeutic approach and hypnosis as a form of entertainment. Contrary to popular belief, individuals undergoing hypnosis usually have clear memories of the hypnotic experience and are in control of their own behaviors. While hypnosis may be useful in enhancing memory or a skill, such enhancements are very modest in nature (Raz, 2011).

How exactly does a hypnotist bring a participant to a state of hypnosis? While there are variations, there are four parts that appear consistent in bringing people into the state of suggestibility associated with hypnosis (National Research Council, 1994). These components include:

- The participant is guided to focus on one thing, such as the hypnotist's words or a ticking watch.
- The participant is made comfortable and is directed to be relaxed and sleepy.
- The participant is told to be open to the process of hypnosis, trust the hypnotist, and let go.
- The participant is encouraged to use his or her imagination.
These steps are conducive to being open to the heightened suggestibility of hypnosis. People vary in terms of their ability to be hypnotized, but a review of available research suggests that most people are at least moderately hypnotizable (Kihlstrom, 2013). Hypnosis in conjunction with other techniques is used for a variety of therapeutic purposes and has been shown to be at least somewhat effective for pain management, treatment of depression and anxiety, smoking cessation, and weight loss (Alladin, 2012; Elkins, Johnson, & Fisher, 2012; Golden, 2012; Montgomery, Schnur, & Kravits, 2012).

Some scientists are working to determine whether the power of suggestion can affect cognitive processes such as learning, with a view to using hypnosis in educational settings (Wark, 2011). Furthermore, there is some evidence that hypnosis can alter processes that were once thought to be automatic and outside the purview of voluntary control, such as reading (Lifshitz, Aubert Bonn, Fischer, Kashem, & Raz, 2013; Raz, Shapiro, Fan, & Posner, 2002). However, it should be noted that others have suggested that the automaticity of these processes remains intact (Augustinova & Ferrand, 2012).

How does hypnosis work? Two theories attempt to answer this question: one views hypnosis as dissociation and the other views it as the performance of a social role. According to the dissociation view, hypnosis is effectively a dissociated state of consciousness, much like our earlier example in which you may drive to work but are only minimally aware of the process of driving because your attention is focused elsewhere. This theory is supported by Ernest Hilgard's research into hypnosis and pain. In Hilgard's experiments, he induced participants into a state of hypnosis and placed their arms into ice water. Participants were told they would not feel pain, but they could press a button if they did; while they reported not feeling pain, they did, in fact, press the button, suggesting a dissociation of consciousness while in the hypnotic state (Hilgard & Hilgard, 1994).

Taking a different approach to explain hypnosis, the social-cognitive theory of hypnosis sees people in hypnotic states as performing the social role of a hypnotized person. As you will learn when you study social roles, people's behavior can be shaped by their expectations of how they should act in a given situation. Some view a hypnotized person's behavior not as an altered or dissociated state of consciousness, but as their fulfillment of the social expectations for that role.

Meditation

Meditation is the act of focusing on a single target (such as the breath or a repeated sound) to increase awareness of the moment. While hypnosis is generally achieved through the interaction of a therapist and the person being treated, an individual can perform meditation alone. Often, however, people wishing to learn to meditate receive some training in techniques to achieve a meditative state. A meditative state, as shown by EEG recordings of newly practicing meditators, is not an altered state of consciousness per se; however, patterns of brain waves exhibited by expert meditators may represent a unique state of consciousness (Fell, Axmacher, & Haupt, 2010).

Although there are a number of different techniques in use, the central feature of all meditation is clearing the mind in order to achieve a state of relaxed awareness and focus (Chen et al., 2013; Lang et al., 2012). Mindfulness meditation has recently become popular.
In this variation of meditation, the meditator’s attention is focused on some internal process or an external object (Zeidan, Grant, Brown, McHaffie, & Coghill, 2012). Meditative techniques have their roots in religious practices (Figure 4.24), but their use has grown in popularity among practitioners of alternative medicine. Research indicates that meditation may help reduce blood pressure, and the American Heart Association suggests that meditation might be used in conjunction with more traditional treatments as a way to manage hypertension, although there is not yet sufficient data for a stronger recommendation to be made (Brook et al., 2013). Like hypnosis, meditation also shows promise in stress management, sleep quality (Caldwell, Harrison, Adams, Quin, & Greeson, 2010), treatment of mood and anxiety disorders (Chen et al., 2013; Freeman et al., 2010; Vøllestad, Nielsen, & Nielsen, 2012), and pain management (Reiner, Tibi, & Lipsitz, 2013).

Link to Learning
Feeling stressed? Think meditation might help? This instructional video teaches how to use Buddhist meditation techniques to alleviate stress.

Link to Learning
Watch this video describing the results of a brain imaging study in individuals who underwent specific mindfulness-meditative techniques.
american_government
Summary

10.1 Interest Groups Defined
Some interest groups represent a broad set of interests, while others focus on only a single issue. Some interests are organizations, like businesses, corporations, or governments, which register to lobby, typically to obtain some benefit from the legislature. Other interest groups consist of dues-paying members who join a group, usually voluntarily. Some organizations band together, often joining trade associations that represent their industry or field. Interest groups represent either the public interest or private interests. Private interests often lobby government for particularized benefits, which are narrowly distributed. These benefits usually accrue to wealthier members of society. Public interests, on the other hand, try to represent a broad segment of society or even all persons.

10.2 Collective Action and Interest Group Formation
Interest groups often have to contend with disincentives to participate, particularly when individuals realize their participation is not critical to a group’s success. People often free ride when they can obtain benefits without contributing to the costs of obtaining these benefits. To overcome these challenges, group leaders may offer incentives to members or potential members to help them mobilize. Groups that are small, wealthy, and/or better organized are sometimes better able to overcome collective action problems. Sometimes external political, social, or economic disturbances result in interest group mobilization.

10.3 Interest Groups as Political Participation
Interest groups afford people the opportunity to become more civically engaged. Socioeconomic status is an important predictor of who will likely join groups. The number and types of groups actively lobbying to get what they want from government have been increasing rapidly. Many business and public interest groups have arisen, and many new interests have developed due to technological advances, increased specialization of industry, and fragmentation of interests. Lobbying has also become more sophisticated in recent years, and many interests now hire lobbying firms to represent them. Some scholars assume that groups will compete for access to decision-makers and that most groups have the potential to be heard. Critics suggest that some groups are advantaged by their access to economic resources. Yet others acknowledge these resource advantages but suggest that the political environment is equally important in determining who gets heard.

10.4 Pathways of Interest Group Influence
Interest groups support candidates sympathetic to their views in hopes of gaining access to them once they are in office. PACs and super PACs collect money from donors and distribute it to political groups that they support. Lawmakers rely on interest groups and lobbyists to provide them with information about the technical details of policy proposals, as well as about fellow lawmakers’ stands and constituents’ perceptions, for cues about how to vote on issues, particularly those with which they are unfamiliar. Lobbyists also target the executive and judiciary branches.

10.5 Free Speech and the Regulation of Interest Groups
Some argue that contributing to political candidates is a form of free speech. According to this view, the First Amendment protects the right of interest groups to give money to politicians.
However, others argue that monetary contributions should not be protected by the First Amendment and that corporations and unions should not be treated as individuals, although the Supreme Court has disagreed. Currently, lobbyists and interest groups are restricted by laws that require them to register with the federal government and abide by a waiting period when moving between lobbying and lawmaking positions. Interest groups and their lobbyists are also prohibited from undertaking certain activities and are required to disclose their lobbying activities. Violation of the law can, and sometimes does, result in prison sentences for lobbyists and lawmakers alike.
Chapter Outline
10.1 Interest Groups Defined
10.2 Collective Action and Interest Group Formation
10.3 Interest Groups as Political Participation
10.4 Pathways of Interest Group Influence
10.5 Free Speech and the Regulation of Interest Groups

Introduction
The 2010 Patient Protection and Affordable Care Act (ACA), also known as Obamacare, represented a substantial overhaul of the U.S. healthcare system. 1 Given its potential impact, interest group representatives (lobbyists) from the insurance industry, hospitals, medical device manufacturers, and organizations representing doctors, patients, and employers all tried to influence what the law would look like and the way it would operate. Ordinary people took to the streets to voice their opinion (Figure 10.1). Some state governors sued to prevent a requirement in the law that their states expand Medicaid coverage. A number of interest groups challenged the law in court, where two Supreme Court decisions have left it largely intact.

Interest groups like those for and against the ACA play a fundamental role in representing individuals, corporate interests, and the public before the government. They help inform the public and lawmakers about issues, monitor government actions, and promote policies that benefit their interests, using all three branches of government at the federal, state, and local levels.

In this chapter, we answer several key questions about interest groups. What are they, and why and how do they form? How do they provide avenues for political participation? Why are some groups advantaged by the lobbying of government representatives, while others are disadvantaged? Finally, how do interest groups try to achieve their objectives, and how are they regulated?
[ { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "Interest groups may also form to represent companies , corporate organizations , and governments . These groups do not have individual members but rather are offshoots of corporate or governmental entities with a compelling interest to be represented in front of one or more branches of government . Verizon and Coca-Cola will register to lobby in order to influence policy in a way that benefits them . <hl> These corporations will either have one or more in-house lobbyist s , who work for one interest group or firm and represent their organization in a lobbying capacity , and / or will hire a contract lobbyist , individuals who work for firms that represent a multitude of clients and are often hired because of their resources and their ability to contact and lobby lawmakers , to represent them before the legislature . <hl>", "hl_sentences": "These corporations will either have one or more in-house lobbyist s , who work for one interest group or firm and represent their organization in a lobbying capacity , and / or will hire a contract lobbyist , individuals who work for firms that represent a multitude of clients and are often hired because of their resources and their ability to contact and lobby lawmakers , to represent them before the legislature .", "question": { "cloze_format": "Someone who lobbies on behalf of a company that he or she works for as part of his or her job is ________.", "normal_format": "Who is a person which lobbies on behalf of a company that he or she works for as part of his or her job?", "question_choices": [ "an in-house lobbyist", "a volunteer lobbyist", "a contract lobbyist", "a legislative liaison" ], "question_id": "fs-id1171474222415", "question_text": "Someone who lobbies on behalf of a company that he or she works for as part of his or her job is ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "Collective goods offer broadly distributed benefits, while private goods offer particularized benefits." }, "bloom": null, "hl_context": "On the other hand , public interest group s attempt to promote public , or collective , goods . <hl> Such collective good s are benefits — tangible or intangible — that help most or all citizens . <hl> These goods are often produced collectively , and because they may not be profitable and everyone may not agree on what public goods are best for society , they are often underfunded and thus will be underproduced unless there is government involvement . The Tennessee Valley Authority , a government corporation , provides electricity in some places where it is not profitable for private firms to do so . Other examples of collective goods are public safety , highway safety , public education , and environmental protection . With some exceptions , if an environmental interest promotes clean air or water , most or all citizens are able to enjoy the result . So if the Sierra Club encourages Congress to pass legislation that improves national air quality , citizens receive the benefit regardless of whether they are members of the organization or even support the legislation . Many environmental groups are public interest groups that lobby for and raise awareness of issues that affect large segments of the population . 16 Interest groups and organizations represent both private and public interests in the United States . 
<hl> Private interests usually seek particularized benefit s from government that favor either a single interest or a narrow set of interests . <hl> For example , corporations and political institutions may lobby government for tax exemptions , fewer regulations , or favorable laws that benefit individual companies or an industry more generally . Their goal is to promote private goods . <hl> Private goods are items individuals can own , including corporate profits . <hl> An automobile is a private good ; when you purchase it , you receive ownership . <hl> Wealthy individuals are more likely to accumulate private goods , and they can sometimes obtain private goods from governments , such as tax benefits , government subsidies , or government contracts . <hl>", "hl_sentences": "Such collective good s are benefits — tangible or intangible — that help most or all citizens . Private interests usually seek particularized benefit s from government that favor either a single interest or a narrow set of interests . Private goods are items individuals can own , including corporate profits . Wealthy individuals are more likely to accumulate private goods , and they can sometimes obtain private goods from governments , such as tax benefits , government subsidies , or government contracts .", "question": { "cloze_format": "Collective goods are different from private goods because ___.", "normal_format": "How are collective goods different from private goods?", "question_choices": [ "Collective goods offer particularized benefits, while private goods are broadly distributed.", "Collective goods and private goods both offer particularized benefits.", "Collective goods and private goods both offer broadly distributed benefits.", "Collective goods offer broadly distributed benefits, while private goods offer particularized benefits." ], "question_id": "fs-id1171474312581", "question_text": "How are collective goods different from private goods?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> Interest groups also include association s , which are typically groups of institutions that join with others , often within the same trade or industry ( trade associations ) , and have similar concerns . <hl> The American Beverage Association 10 includes Coca-Cola , Red Bull North America , ROCKSTAR , and Kraft Foods . <hl> Despite the fact that these companies are competitors , they have common interests related to the manufacturing , bottling , and distribution of beverages , as well as the regulation of their business activities . <hl> <hl> The logic is that there is strength in numbers , and if members can lobby for tax breaks or eased regulations for an entire industry , they may all benefit . <hl> These common goals do not , however , prevent individual association members from employing in-house lobbyists or contract lobbying firms to represent their own business or organization as well . Indeed , many members of associations are competitors who also seek representation individually before the legislature .", "hl_sentences": "Interest groups also include association s , which are typically groups of institutions that join with others , often within the same trade or industry ( trade associations ) , and have similar concerns . 
Despite the fact that these companies are competitors , they have common interests related to the manufacturing , bottling , and distribution of beverages , as well as the regulation of their business activities . The logic is that there is strength in numbers , and if members can lobby for tax breaks or eased regulations for an entire industry , they may all benefit .", "question": { "cloze_format": "Several competing corporations might join together in an association ___ .", "normal_format": "Why might several competing corporations join together in an association?", "question_choices": [ "because there is often strength in numbers", "because they often have common issues that may affect an entire industry", "because they can all benefit from governmental policies", "all the above" ], "question_id": "fs-id1171472075563", "question_text": "Why might several competing corporations join together in an association?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> Similarly , purposive incentives focus on the issues or causes promoted by the group . <hl> Someone concerned about protecting individual rights might join a group like the American Civil Liberties Union ( ACLU ) because it supports the liberties guaranteed in the U . S . Constitution , even the free expression of unpopular views . 22 Members of the ACLU sometimes find the messages of those they defend ( including Nazis and the Ku Klux Klan ) deplorable , but they argue that the principle of protecting civil liberties is critical to U . S . democracy . In many ways , the organization ’ s stance is analogous to James Madison ’ s defense of factions mentioned earlier in this chapter . A commitment to protecting rights and liberties can serve as an incentive in overcoming collective action problems , because members or potential members care enough about the issues to join or participate . Thus , interest groups and their leadership will use whatever incentives they have at their disposal to overcome collective action problems and mobilize their members .", "hl_sentences": "Similarly , purposive incentives focus on the issues or causes promoted by the group .", "question": { "cloze_format": "___ are a type of incentives that appeal to someone’s concern about a cause.", "normal_format": "What type of incentives appeal to someone’s concern about a cause?", "question_choices": [ "solidary incentives", "purposive incentives", "material incentives", "negative incentives" ], "question_id": "fs-id1171474369273", "question_text": "What type of incentives appeal to someone’s concern about a cause?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "joining a group to be with others like you" }, "bloom": null, "hl_context": "Group leaders also play an important role in overcoming collective action problems . For instance , political scientist Robert Salisbury suggests that group leaders will offer incentives to induce activity among individuals . 20 Some offer material incentives , which are tangible benefits of joining a group . AARP , for example , offers discounts on hotel accommodations and insurance rates for its members , while dues are very low , so they can actually save money by joining . <hl> Group leaders may also offer solidary incentives , which provide the benefit of joining with others who have the same concerns or are similar in other ways . 
<hl> Some scholars suggest that people are naturally drawn to others with similar concerns . The NAACP is a civil rights groups concerned with promoting equality and eliminating discrimination based on race , and members may join to associate with others who have dealt with issues of inequality . 21", "hl_sentences": "Group leaders may also offer solidary incentives , which provide the benefit of joining with others who have the same concerns or are similar in other ways .", "question": { "cloze_format": "___ is the best example of a solidary benefit.", "normal_format": "Which of the following is the best example of a solidary benefit?", "question_choices": [ "joining a group to be with others like you", "joining a group to obtain a monetary benefit", "joining a group because you care about a cause", "joining a group because it is a requirement of your job" ], "question_id": "fs-id1171472084270", "question_text": "Which of the following is the best example of a solidary benefit?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> Over the last few decades , we have also witnessed an increase in professionalization in lobbying and in the sophistication of lobbying techniques . <hl> This was not always the case , because lobbying was not considered a serious profession in the mid-twentieth century . <hl> Over the past three decades , there has been an increase in the number of contract lobbying firms . <hl> These firms are often effective because they bring significant resources to the table , their lobbyists are knowledgeable about the issues on which they lobby , and they may have existing relationships with lawmakers . In fact , relationships between lobbyists and legislators are often ongoing , and these are critical if lobbyists want access to lawmakers . However , not every interest can afford to hire high-priced contract lobbyists to represent it . As Table 10.1 suggests , a great deal of money is spent on lobbying activities . <hl> A number of changes in interest groups have taken place over the last three or four decades in the United States . <hl> <hl> The most significant change is the tremendous increase in both the number and type of groups . <hl> 31 Political scientists often examine the diversity of registered groups , in part to determine how well they reflect the variety of interests in society . Some areas may be dominated by certain industries , while others may reflect a multitude of interests . <hl> Some interests appear to have increased at greater rates than others . <hl> For example , the number of institutions and corporate interests has increased both in Washington and in the states . <hl> Telecommunication companies like Verizon and AT & T will lobby Congress for laws beneficial to their businesses , but they also target the states because state legislatures make laws that can benefit or harm their activities . <hl> There has also been an increase in the number of public interest groups that represent the public as opposed to economic interests . U . S . PIRG is a public interest group that represents the public on issues including public health , the environment , and consumer protection . 32", "hl_sentences": "Over the last few decades , we have also witnessed an increase in professionalization in lobbying and in the sophistication of lobbying techniques . Over the past three decades , there has been an increase in the number of contract lobbying firms . 
A number of changes in interest groups have taken place over the last three or four decades in the United States . The most significant change is the tremendous increase in both the number and type of groups . Some interests appear to have increased at greater rates than others . Telecommunication companies like Verizon and AT & T will lobby Congress for laws beneficial to their businesses , but they also target the states because state legislatures make laws that can benefit or harm their activities .", "question": { "cloze_format": "The changes that have occurred in the lobbying environment over the past three or four decades are that ___ .", "normal_format": "What changes have occurred in the lobbying environment over the past three or four decades?", "question_choices": [ "There is more professional lobbying.", "Many interests lobby both the national government and the states.", "A fragmentation of interests has taken place.", "all the above" ], "question_id": "fs-id1171470962894", "question_text": "What changes have occurred in the lobbying environment over the past three or four decades?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "a symbiotic relationship among Congressional committees, executive agencies, and interest groups" }, "bloom": null, "hl_context": "Interest group politics are often characterized by whether the groups have access to decision-makers and can participate in the policy-making process . <hl> The iron triangle is a hypothetical arrangement among three elements ( the corners of the triangle ): an interest group , a congressional committee member or chair , and an agency within the bureaucracy . <hl> <hl> 49 Each element has a symbiotic relationship with the other two , and it is difficult for those outside the triangle to break into it . <hl> The congressional committee members , including the chair , rely on the interest group for campaign contributions and policy information , while the interest group needs the committee to consider laws favorable to its view . The interest group and the committee need the agency to implement the law , while the agency needs the interest group for information and the committee for funding and autonomy in implementing the law . 50", "hl_sentences": "The iron triangle is a hypothetical arrangement among three elements ( the corners of the triangle ): an interest group , a congressional committee member or chair , and an agency within the bureaucracy . 49 Each element has a symbiotic relationship with the other two , and it is difficult for those outside the triangle to break into it .", "question": { "cloze_format": "___ is an aspect of iron triangles.", "normal_format": "Which of the following is an aspect of iron triangles?", "question_choices": [ "fluid participation among interests", "a great deal of competition for access to decision-makers", "a symbiotic relationship among Congressional committees, executive agencies, and interest groups", "three interest groups that have formed a coalition" ], "question_id": "fs-id1171470933814", "question_text": "Which of the following is an aspect of iron triangles?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "The Supreme Court has yet to address the issue of money in politics." }, "bloom": null, "hl_context": "Insider Perspective The Koch Brothers Conservative billionaires Charles and David Koch have become increasingly active in U . S . elections in recent years . 
These brothers run Koch Industries , a multinational corporation that manufactures and produces a number of products including paper , plastics , petroleum-based products , and chemicals . In the 2012 election , the Koch brothers and their affiliates spent nearly $ 400 million supporting Republican candidates . Many people have suggested that this spending helped put many Republicans in office . The Kochs and their related organizations planned to raise and spend nearly $ 900 million on the 2016 elections . Critics have accused them and other wealthy donors of attempting to buy elections . <hl> However , others point out that their activities are legal according to current campaign finance laws and recent Supreme Court decisions , and that these individuals , their companies , and their affiliates should be able to spend what they want politically . <hl> As you might expect , there are wealthy donors on both the political left and the right who will continue to spend money on U . S . elections . <hl> Some critics have called for a constitutional amendment restricting spending that would overturn recent Supreme Court decisions . <hl> 69", "hl_sentences": "However , others point out that their activities are legal according to current campaign finance laws and recent Supreme Court decisions , and that these individuals , their companies , and their affiliates should be able to spend what they want politically . Some critics have called for a constitutional amendment restricting spending that would overturn recent Supreme Court decisions .", "question": { "cloze_format": "___ is true of spending in politics.", "normal_format": "Which of the following is true of spending in politics?", "question_choices": [ "The Supreme Court has yet to address the issue of money in politics.", "The Supreme Court has restricted spending on politics.", "The Supreme Court has opposed restrictions on spending on politics.", "The Supreme Court has ruled that corporations may spend unlimited amounts of money but unions may not." ], "question_id": "fs-id1171474316506", "question_text": "Which of the following is true of spending in politics?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "Joining interest groups can help facilitate civic engagement , which allows people to feel more connected to the political and social community . Some interest groups develop as grassroots movement s , which often begin from the bottom up among a small number of people at the local level . Interest groups can amplify the voices of such individuals through proper organization and allow them to participate in ways that would be less effective or even impossible alone or in small numbers . The Tea Party is an example of a so-called astroturf movement , because it is not , strictly speaking , a grassroots movement . Many trace the party ’ s origins to groups that champion the interests of the wealthy such as Americans for Prosperity and Citizens for a Sound Economy . Although many ordinary citizens support the Tea Party because of its opposition to tax increases , it attracts a great deal of support from elite and wealthy sponsors , some of whom are active in lobbying . <hl> The FreedomWorks political action committee ( PAC ) , for example , is a conservative advocacy group that has supported the Tea Party movement . <hl> FreedomWorks is an offshoot of the interest group Citizens for a Sound Economy , which was founded by billionaire industrialists David H . 
and Charles G . Koch in 1984 .", "hl_sentences": "The FreedomWorks political action committee ( PAC ) , for example , is a conservative advocacy group that has supported the Tea Party movement .", "question": { "cloze_format": "A difference between a PAC and a super PAC is that ___ .", "normal_format": "What is a difference between a PAC and a super PAC?", "question_choices": [ "PACs can contribute directly to candidates, but super PACs cannot.", "Conservative interests favor PACs over super PACs.", "Contributions to PACs are unlimited, but restrictions have been placed on how much money can be contributed to super PACs.", "Super PACS are much more likely to support incumbent candidates than are PACs." ], "question_id": "fs-id1171472179314", "question_text": "What is a difference between a PAC and a super PAC?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "prevent lawmakers from utilizing their legislative relationships by becoming lobbyists immediately after leaving office" }, "bloom": null, "hl_context": "Second , the federal and state governments prohibit certain activities like providing gifts to lawmakers and compensating lobbyists with commissions for successful lobbying . Many activities are prohibited to prevent accusations of vote buying or currying favor with lawmakers . Some states , for example , have strict limits on how much money lobbyists can spend on lobbying lawmakers , or on the value of gifts lawmakers can accept from lobbyists . According to the Honest Leadership and Open Government Act , lobbyists must certify that they have not violated the law regarding gift giving , and the penalty for knowingly violating the law increased from a fine of $ 50,000 to one of $ 200,000 . <hl> Also , revolving door laws also prevent lawmakers from lobbying government immediately after leaving public office . <hl> Members of the House of Representatives cannot register to lobby for a year after they leave office , while senators have a two-year “ cooling off ” period before they can officially lobby . Former cabinet secretaries must wait the same period of time after leaving their positions before lobbying the department of which they had been the head . <hl> These laws are designed to restrict former lawmakers from using their connections in government to give them an advantage when lobbying . <hl> Still , many former lawmakers do become lobbyists , including former Senate majority leader Trent Lott and former House minority leader Richard Gephardt .", "hl_sentences": "Also , revolving door laws also prevent lawmakers from lobbying government immediately after leaving public office . These laws are designed to restrict former lawmakers from using their connections in government to give them an advantage when lobbying .", "question": { "cloze_format": "Revolving door laws are designed to ___ .", "normal_format": "Revolving door laws are designed to do which of the following?", "question_choices": [ "prevent lawmakers from utilizing their legislative relationships by becoming lobbyists immediately after leaving office", "help lawmakers find work after they leave office", "restrict lobbyists from running for public office", "all the above" ], "question_id": "fs-id1171474201926", "question_text": "Revolving door laws are designed to do which of the following?" 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "Second , the federal and state governments prohibit certain activities like providing gifts to lawmakers and compensating lobbyists with commissions for successful lobbying . Many activities are prohibited to prevent accusations of vote buying or currying favor with lawmakers . Some states , for example , have strict limits on how much money lobbyists can spend on lobbying lawmakers , or on the value of gifts lawmakers can accept from lobbyists . According to the Honest Leadership and Open Government Act , lobbyists must certify that they have not violated the law regarding gift giving , and the penalty for knowingly violating the law increased from a fine of $ 50,000 to one of $ 200,000 . <hl> Also , revolving door laws also prevent lawmakers from lobbying government immediately after leaving public office . <hl> Members of the House of Representatives cannot register to lobby for a year after they leave office , while senators have a two-year “ cooling off ” period before they can officially lobby . Former cabinet secretaries must wait the same period of time after leaving their positions before lobbying the department of which they had been the head . These laws are designed to restrict former lawmakers from using their connections in government to give them an advantage when lobbying . Still , many former lawmakers do become lobbyists , including former Senate majority leader Trent Lott and former House minority leader Richard Gephardt . While the Supreme Court has paved the way for increased spending in politics , lobbying is still regulated in many ways . 70 The 1995 Lobbying Disclosure Act defined who can and cannot lobby , and requires lobbyists and interest groups to register with the federal government . <hl> 71 The Honest Leadership and Open Government Act of 2007 further increased restrictions on lobbying . <hl> <hl> For example , the act prohibited contact between members of Congress and lobbyists who were the spouses of other Congress members . <hl> <hl> The laws broadened the definition of lobbyist and require detailed disclosure of spending on lobbying activity , including who is lobbied and what bills are of interest . <hl> <hl> In addition , President Obama ’ s Executive Order 13490 prohibited appointees in the executive branch from accepting gifts from lobbyists and banned them from participating in matters , including the drafting of any contracts or regulations , involving the appointee ’ s former clients or employer for a period of two years . <hl> The states also have their own registration requirements , with some defining lobbying broadly and others more narrowly .", "hl_sentences": "Also , revolving door laws also prevent lawmakers from lobbying government immediately after leaving public office . 71 The Honest Leadership and Open Government Act of 2007 further increased restrictions on lobbying . For example , the act prohibited contact between members of Congress and lobbyists who were the spouses of other Congress members . The laws broadened the definition of lobbyist and require detailed disclosure of spending on lobbying activity , including who is lobbied and what bills are of interest . 
In addition , President Obama ’ s Executive Order 13490 prohibited appointees in the executive branch from accepting gifts from lobbyists and banned them from participating in matters , including the drafting of any contracts or regulations , involving the appointee ’ s former clients or employer for a period of two years .", "question": { "cloze_format": "Lobbyists are regulated by ___ .", "normal_format": "In what ways are lobbyists regulated?", "question_choices": [ "Certain activities are prohibited.", "Contributions must be disclosed.", "Lobbying is prohibited immediately after leaving office.", "all the above" ], "question_id": "fs-id1171472222639", "question_text": "In what ways are lobbyists regulated?" }, "references_are_paraphrase": 0 } ]
10
10.1 Interest Groups Defined

Learning Objectives
By the end of this section, you will be able to:
Explain how interest groups differ from political parties
Evaluate the different types of interests and what they do
Compare public and private interest groups

While the term interest group is not mentioned in the U.S. Constitution, the framers were aware that individuals would band together in an attempt to use government in their favor. In Federalist No. 10, James Madison warned of the dangers of “factions,” minorities who would organize around issues they felt strongly about, possibly to the detriment of the majority. But Madison believed limiting these factions was worse than facing the evils they might produce, because such limitations would violate individual freedoms. Instead, the natural way to control factions was to let them flourish and compete against each other. The sheer number of interests in the United States suggests that many have, indeed, flourished. They compete with similar groups for membership, and with opponents for access to decision-makers. Some people suggest there may be too many interests in the United States. Others argue that some have gained a disproportionate amount of influence over public policy, whereas many others are underrepresented.

Madison’s definition of factions can apply to both interest groups and political parties. But unlike political parties, interest groups do not function primarily to elect candidates under a certain party label or to directly control the operation of the government. Political parties in the United States are generally much broader coalitions that represent a significant proportion of citizens. In the American two-party system, the Democratic and Republican Parties spread relatively wide nets to try to encompass large segments of the population. In contrast, while interest groups may support or oppose political candidates, their goals are usually more issue-specific and narrowly focused on areas like taxes, the environment, and gun rights or gun control, or their membership is limited to specific professions. They may represent interests ranging from well-known organizations, such as the Sierra Club, IBM, or the American Lung Association, to obscure ones, such as the North Carolina Gamefowl Breeders Association. Thus, with some notable exceptions, specific interest groups have much more limited membership than do political parties.

Political parties and interest groups both work together and compete for influence, although in different ways. While interest group activity often transcends party lines, many interests are perceived as being more supportive of one party than the other. The American Conservative Union, Citizens United, the National Rifle Association, and National Right to Life are more likely to have relationships with Republican lawmakers than with Democratic ones. Americans for Democratic Action, Moveon.org, and the Democratic Governors Association all have stronger relationships with the Democratic Party. Parties and interest groups do compete with each other, however, often for influence. At the state level, we typically observe an inverse relationship between them in terms of power. Interest groups tend to have greater influence in states where political parties are comparatively weaker.

WHAT ARE INTEREST GROUPS AND WHAT DO THEY WANT?
Definitions abound when it comes to interest groups, which are sometimes referred to as special interests, interest organizations, pressure groups, or just interests.
Most definitions specify that interest group indicates any formal association of individuals or organizations that attempt to influence government decision-making and/or the making of public policy. Often, this influence is exercised by a lobbyist or a lobbying firm. Formally, a lobbyist is someone who represents the interest organization before government, is usually compensated for doing so, and is required to register with the government in which he or she lobbies, whether state or federal. The lobbyist’s primary goal is usually to influence policy. Most interest organizations engage in lobbying activity to achieve their objectives. As you might expect, the interest hires a lobbyist, employs one internally, or has a member volunteer to lobby on its behalf. For present purposes, we might restrict our definition to the relatively broad one in the Lobbying Disclosure Act. 2 This act requires the registration of lobbyists representing any interest group and devoting more than 20 percent of their time to it. 3 Clients and lobbying firms must also register with the federal government based on similar requirements. Moreover, campaign finance laws require disclosure of campaign contributions given to political candidates by organizations.

Link to Learning
Visit this site to research donations and campaign contributions given to political candidates by organizations.

Lobbying is not limited to Washington, DC, however, and many interests lobby there as well as in one or more states. Each state has its own laws describing which individuals and entities must register, so the definitions of lobbyists and interests, and of what lobbying is and who must register to do it, also vary from state to state. Therefore, while a citizen contacting a lawmaker to discuss an issue is generally not viewed as lobbying, an organization that devotes a certain amount of time and resources to contacting lawmakers may be classified as lobbying, depending on local, state, or federal law. Largely for this reason, there is no comprehensive list of all interest groups to tell us how many there are in the United States. Estimates of the number vary widely, suggesting that if we use a broad definition and include all interests at all levels of government, there may be more than 200,000. 4 Following the passage of the Lobbying Disclosure Act in 1995, we had a much better understanding of the number of interests registered in Washington, DC; however, it was not until several years later that we had a complete count and categorization of the interests registered in each of the fifty states. 5

Political scientists have categorized interest groups in a number of ways. 6 First, interest groups may take the form of membership organizations, which individuals join voluntarily and to which they usually pay dues. Membership groups often consist of people who have common issues or concerns, or who want to be with others who share their views. The National Rifle Association (NRA) is a membership group consisting of members who promote gun rights (Figure 10.2). For those who advocate greater regulation of access to firearms, such as background checks prior to gun purchases, the Brady Campaign to Prevent Gun Violence is a membership organization that weighs in on the other side of the issue. 7 Interest groups may also form to represent companies, corporate organizations, and governments.
These groups do not have individual members but rather are offshoots of corporate or governmental entities with a compelling interest to be represented in front of one or more branches of government. Verizon and Coca-Cola will register to lobby in order to influence policy in a way that benefits them. These corporations will either have one or more in-house lobbyists, who work for one interest group or firm and represent their organization in a lobbying capacity, and/or will hire contract lobbyists to represent them before the legislature; contract lobbyists work for firms that represent a multitude of clients and are often hired because of their resources and their ability to contact and lobby lawmakers. Governments such as municipalities and executive departments such as the Department of Education register to lobby in an effort to maximize their share of budgets or increase their level of autonomy. These government institutions are represented by a legislative liaison, whose job is to present issues to decision-makers. For example, a state university usually employs a lobbyist, legislative liaison, or government affairs person to represent its interests before the legislature. This includes lobbying for a given university’s share of the budget or for its continued autonomy from lawmakers and other state-level officials who may attempt to play a greater oversight role. In 2015, thirteen states had their higher education budgets cut from the previous year, and nearly all states have seen some cuts to higher education funding since the recession began in 2008. 8 In 2015, as in many states, universities and community colleges in Mississippi lobbied the legislature over pending budget cuts. 9 These examples highlight the need for universities and state university systems to have representation before the legislature. On the federal level, universities may lobby for research funds from government departments. For example, the Departments of Defense and Homeland Security may be willing to fund scientific research that might better enable them to defend the nation.

Interest groups also include associations, which are typically groups of institutions that join with others, often within the same trade or industry (trade associations), and have similar concerns. The American Beverage Association 10 includes Coca-Cola, Red Bull North America, ROCKSTAR, and Kraft Foods. Despite the fact that these companies are competitors, they have common interests related to the manufacturing, bottling, and distribution of beverages, as well as the regulation of their business activities. The logic is that there is strength in numbers, and if members can lobby for tax breaks or eased regulations for an entire industry, they may all benefit. These common goals do not, however, prevent individual association members from employing in-house lobbyists or contract lobbying firms to represent their own business or organization as well. Indeed, many members of associations are competitors who also seek representation individually before the legislature.

Link to Learning
Visit the website of an association like the American Beverage Association or the American Bankers Association and look over the key issues it addresses. Do any of the issues it cares about surprise you? What areas do you think members can agree about? Are there issues on which the membership might disagree? Why would competitors join together when they normally compete for business?
Finally, sometimes individuals volunteer to represent an organization. They are called amateur or volunteer lobbyists, and are typically not compensated for their lobbying efforts. In some cases, citizens may lobby for pet projects because they care about some issue or cause. They may or may not be members of an interest group, but if they register to lobby, they are sometimes nicknamed “hobbyists.”

Lobbyists representing a variety of organizations employ different techniques to achieve their objectives. One method is inside lobbying or direct lobbying, which takes the interest group’s message directly to a government official such as a lawmaker. 11 Inside lobbying tactics include testifying in legislative hearings and helping to draft legislation. Numerous surveys of lobbyists have confirmed that the vast majority rely on these inside strategies. For example, nearly all report that they contact lawmakers, testify before the legislature, help draft legislation, and contact executive agencies. Trying to influence government appointments or providing favors to members of government are somewhat less common insider tactics.

Many lobbyists also use outside lobbying or indirect lobbying tactics, whereby the interest attempts to get its message out to the public. 12 These tactics include issuing press releases, placing stories and articles in the media, entering coalitions with other groups, and contacting interest group members, hoping that they will individually pressure lawmakers to support or oppose legislation. An environmental interest group like the Sierra Club, for example, might issue a press release or encourage its members to contact their representatives in Congress about legislation of concern to the group. It might also use outside tactics if there is a potential threat to the environment and the group wants to raise awareness among its members and the public (Figure 10.3). Members of Congress are likely to pay attention when many constituents contact them about an issue or proposed bill. Many interest groups, including the Sierra Club, will use a combination of inside and outside tactics in their lobbying efforts, choosing whatever strategy is most likely to help them achieve their goals.

The primary goal of most interests, no matter their lobbying approach, is to influence decision-makers and public policies. For example, National Right to Life, an anti-abortion interest group, lobbies to encourage government to enact laws that restrict abortion access, while NARAL Pro-Choice America lobbies to promote the right of women to have safe choices about abortion. Environmental interests like the Sierra Club lobby for laws designed to protect natural resources and minimize the use of pollutants. On the other hand, some interests lobby to reduce regulations that an organization might view as burdensome. Air and water quality regulations designed to improve or protect the environment may be viewed as onerous by industries that pollute as a byproduct of their production or manufacturing process. Other interests lobby for budgetary allocations; the farm lobby, for example, pressures Congress to secure new farm subsidies or maintain existing ones. Farm subsidies are given to some farmers because they grow certain crops and to other farmers so they will not grow certain crops. 13 As expected, any bill that might attempt to alter these subsidies raises the antennae of many agricultural interests.
INTEREST GROUP FUNCTIONS
While influencing policy is the primary goal, interest groups also monitor government activity, serve as a means of political participation for members, and provide information to the public and to lawmakers. According to the National Conference of State Legislatures, by November 2015, thirty-six states had laws requiring that voters provide identification at the polls. 14 A civil rights group like the National Association for the Advancement of Colored People (NAACP) will keep track of proposed voter-identification bills in state legislatures that might have an effect on voting rights. This organization will contact lawmakers to voice approval or disapproval of proposed legislation (inside lobbying) and encourage group members to take action by either donating money to it or contacting lawmakers about the proposed bill (outside lobbying). Thus, a member of the organization or a citizen concerned about voting rights need not be an expert on the legislative process or the technical or legal details of a proposed bill to be informed about potential threats to voting rights. Other interest groups function in similar ways. For example, the NRA monitors attempts by state legislatures to tighten gun control laws.

Interest groups facilitate political participation in a number of ways. Some members become active within a group, working on behalf of the organization to promote its agenda. Some interests work to increase membership, inform the public about issues the group deems important, or organize rallies and promote get-out-the-vote efforts. Sometimes groups will utilize events to mobilize existing members or encourage new members to join. For example, following Barack Obama’s presidential victory in 2008, the NRA used the election as a rallying cry for its supporters, and it continues to attack the president on the issue of guns, despite the fact that gun rights have in some ways expanded over the course of the Obama presidency. Interest groups also organize letter-writing campaigns, stage protests, and sometimes hold fundraisers for their cause or even for political campaigns.

Some interests are more broadly focused than others. AARP (formerly the American Association of Retired Persons) has approximately thirty-seven million members and advocates for individuals fifty and over on a variety of issues including health care, insurance, employment, financial security, and consumer protection (Figure 10.4). 15 This organization represents both liberals and conservatives, Democrats and Republicans, and many who do not identify with these categorizations. On the other hand, the Association of Black Cardiologists is a much smaller and far-narrower organization. Over the last several decades, some interest groups have sought greater specialization and have even fragmented. As you may imagine, the Association of Black Cardiologists is more specialized than the American Medical Association, which tries to represent all physicians regardless of race or specialty.

PUBLIC VS. PRIVATE INTEREST GROUPS
Interest groups and organizations represent both private and public interests in the United States. Private interests usually seek particularized benefits from government that favor either a single interest or a narrow set of interests. For example, corporations and political institutions may lobby government for tax exemptions, fewer regulations, or favorable laws that benefit individual companies or an industry more generally. Their goal is to promote private goods.
Private goods are items individuals can own, including corporate profits. An automobile is a private good; when you purchase it, you receive ownership. Wealthy individuals are more likely to accumulate private goods, and they can sometimes obtain private goods from governments, such as tax benefits, government subsidies, or government contracts.

On the other hand, public interest groups attempt to promote public, or collective, goods. Such collective goods are benefits—tangible or intangible—that help most or all citizens. These goods are often produced collectively, and because they may not be profitable and everyone may not agree on what public goods are best for society, they are often underfunded and thus will be underproduced unless there is government involvement. The Tennessee Valley Authority, a government corporation, provides electricity in some places where it is not profitable for private firms to do so. Other examples of collective goods are public safety, highway safety, public education, and environmental protection. With some exceptions, if an environmental interest promotes clean air or water, most or all citizens are able to enjoy the result. So if the Sierra Club encourages Congress to pass legislation that improves national air quality, citizens receive the benefit regardless of whether they are members of the organization or even support the legislation. Many environmental groups are public interest groups that lobby for and raise awareness of issues that affect large segments of the population. 16

As the clean air example above suggests, collective goods are generally nonexcludable, meaning all or most people are entitled to the public good and cannot be prevented from enjoying it. Furthermore, collective goods are generally not subject to crowding, so that even as the population increases, people still have access to the entire public good. Thus, the military does not protect citizens only in Texas and Maryland while neglecting those in New York and Idaho, but instead it provides the collective good of national defense equally to citizens in all states. As another example, even as more cars use a public roadway, under most circumstances, additional drivers still have the option of using the same road. (High-occupancy vehicle lanes may restrict some lanes of a highway for drivers who do not car pool.)

10.2 Collective Action and Interest Group Formation

Learning Objectives
By the end of this section, you will be able to:
Explain the concept of collective action and its effect on interest group formation
Describe free riding and the reasons it occurs
Discuss ways to overcome collective action problems

In any group project in which you have participated, you may have noticed that a small number of students did the bulk of the work while others did very little. Yet everyone received the same grade. Why do some do all the work, while others do little or none? How is it possible to get people to work when there is a disincentive to do so? This situation is an example of a collective action problem, and it exists in government as well as in public and private organizations. Whether it is Congress trying to pass a budget or an interest group trying to motivate members to contact lawmakers, organizations must overcome collective action problems to be productive. This is especially true of interest groups, whose formation and survival depend on members doing the necessary work to keep the group funded and operating.
COLLECTIVE ACTION AND FREE RIDING
Collective action problems exist when people have a disincentive to take action. 17 In his classic work, The Logic of Collective Action, economist Mancur Olson discussed the conditions under which collective action problems would exist, and he noted that they were prevalent among organized interests. People tend not to act when the perceived benefit is insufficient to justify the costs associated with engaging in the action. Many citizens may have concerns about the appropriate level of taxation, gun control, or environmental protection, but these concerns are not necessarily strong enough for them to become politically active. In fact, most people take no action on most issues, either because they do not feel strongly enough or because their action will likely have little bearing on whether a given policy is adopted. Thus, there is a disincentive to call your member of Congress, because rarely will a single phone call sway a politician on an issue.

Why do some students elect to do little on a group project? The answer is that they likely prefer to do something else and realize they can receive the same grade as the rest of the group without contributing to the effort. This result is often termed the free rider problem, because some individuals can receive benefits (get a free ride) without helping to bear the cost. When National Public Radio (NPR) engages in a fund-raising effort to help maintain the station, many listeners will not contribute. Since it is unlikely that any one listener’s donation will be decisive in whether NPR has adequate funding to continue to operate, most listeners will not contribute to the costs but instead will free ride and continue to receive the benefits of listening.

Collective action problems and free riding occur in many other situations as well. If union membership is optional and all workers will receive a salary increase regardless of whether they make the time and money commitment to join, some workers may free ride. The benefits sought by unions, such as higher wages, collective bargaining rights, and safer working conditions, are often enjoyed by all workers regardless of whether they are members. Therefore, free riders can receive the benefit of the pay increase without helping defray the cost by paying dues, attending meetings or rallies, or joining protests, like that shown in Figure 10.5.

If free riding is so prevalent, why are there so many interest groups and why is interest group membership so high in the United States? One reason is that free riding can be overcome in a variety of ways. Olson argued, for instance, that some groups are better able than others to surmount collective action problems. 18 They can sometimes maintain themselves by obtaining financial support from patrons outside the group. 19 Groups with financial resources have an advantage in mobilizing in that they can offer incentives or hire a lobbyist. Smaller, well-organized groups also have an advantage. For one thing, opinions within smaller groups may be more similar, making it easier to reach consensus. It is also more difficult for members to free ride in a smaller group. In comparison, larger groups have a greater number of individuals and therefore more viewpoints to consider, making consensus more difficult. It may also be easier to free ride because it is less obvious in a large group when any single person does not contribute.
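Olson’s argument is ultimately a cost-benefit comparison, and a short worked example may help make it concrete. The sketch below uses purely hypothetical numbers; they are illustrative assumptions, not figures from Olson’s analysis or from this chapter.

% A minimal free rider sketch with illustrative, hypothetical numbers.
% Suppose successful lobbying would win a collective (nonexcludable)
% benefit B shared equally by N group members, and that participating
% (dues, time, contacting lawmakers) costs each member c. A self-interested
% member compares:
\[
\text{individual share of benefit} = \frac{B}{N},
\qquad \text{individual cost of acting} = c .
\]
% With the hypothetical values B = $1,000,000, N = 100,000, and c = $50:
\[
\frac{B}{N} = \frac{\$1{,}000{,}000}{100{,}000} = \$10 \;<\; \$50 = c .
\]
% Because the benefit is nonexcludable, each member receives the $10 share
% whether or not he or she pays the $50 cost, so every member prefers to
% free ride. If all members reason this way, the group may fail to act at
% all, even on a goal most members support.

Notice that the individual share B/N shrinks as N grows, which is consistent with the point above that free riding is both easier and less visible in large groups than in small ones.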
However, if people do not lobby for their own interests, they may find that they are ignored, especially if smaller but more active groups with interests opposed to theirs lobby on behalf of themselves. Even though the United States is a democracy, policy is often made to suit the interests of the few instead of the needs of the many. Group leaders also play an important role in overcoming collective action problems. For instance, political scientist Robert Salisbury suggests that group leaders will offer incentives to induce activity among individuals. 20 Some offer material incentives, which are tangible benefits of joining a group. AARP, for example, offers discounts on hotel accommodations and insurance rates for its members, while dues are very low, so members can actually save money by joining. Group leaders may also offer solidary incentives, which provide the benefit of joining with others who have the same concerns or are similar in other ways. Some scholars suggest that people are naturally drawn to others with similar concerns. The NAACP is a civil rights group concerned with promoting equality and eliminating discrimination based on race, and members may join to associate with others who have dealt with issues of inequality. 21 Similarly, purposive incentives focus on the issues or causes promoted by the group. Someone concerned about protecting individual rights might join a group like the American Civil Liberties Union (ACLU) because it supports the liberties guaranteed in the U.S. Constitution, even the free expression of unpopular views. 22 Members of the ACLU sometimes find the messages of those they defend (including Nazis and the Ku Klux Klan) deplorable, but they argue that the principle of protecting civil liberties is critical to U.S. democracy. In many ways, the organization's stance is analogous to James Madison's defense of factions mentioned earlier in this chapter. A commitment to protecting rights and liberties can serve as an incentive in overcoming collective action problems, because members or potential members care enough about the issues to join or participate. Thus, interest groups and their leadership will use whatever incentives they have at their disposal to overcome collective action problems and mobilize their members. Finally, sometimes collective action problems are overcome because there is little choice about whether to join an organization. For example, some organizations may require membership in order to participate in a profession. To practice law, individuals may be required to join the American Bar Association or a state bar association. In the past, union membership could be required of workers, particularly in urban areas controlled by political machines consisting of a combination of parties, elected representatives, and interest groups. Link to Learning Visit the Free Rider Problem for a closer look at free riding as a philosophical problem. Think of a situation you have been in where a collective action problem existed or someone engaged in free riding behavior. Why did the collective action problem or free riding occur? What could have been done to overcome the problem? How will knowledge of these problems affect the way you act in future group settings? DISTURBANCE THEORY AND COLLECTIVE ACTION In addition to the factors discussed above that can help overcome collective action problems, external events can sometimes help mobilize groups and potential members.
Some scholars argue that disturbance theory can explain why groups mobilize due to an event in the political, economic, or social environment. 23 For example, in 1962, Rachel Carson published Silent Spring, a book exposing the dangers posed by pesticides such as DDT. 24 The book served as a catalyst for individuals worried about the environment and the potential dangers of pesticides. The result was an increase in both the number of environmental interest groups, such as Greenpeace and American Rivers, and the number of members within them. More recently, several shooting deaths of unarmed young African American men have raised awareness of racial issues in the United States and potential problems in policing practices. In 2014, Ferguson, Missouri, erupted in protests and riots following a decision not to indict Darren Wilson, a white police officer, in the fatal shooting of Michael Brown, who had allegedly been involved in a theft at a local convenience store and ended up in a dispute with the officer. 25 The incident mobilized groups representing civil rights, such as the protestors in Figure 10.6, as well as others supporting the interests of police officers. Both the Silent Spring and Ferguson examples demonstrate the idea that people will naturally join groups in response to disturbances. Some mobilization efforts develop more slowly and may require the efforts of group leaders. Sometimes political candidates can push issues to the forefront, which may result in interest group mobilization. The recent focus on immigration, for example, has resulted in the mobilization of those in support of restrictive policies as well as those opposed to them (Figure 10.7). Rather than being a single disturbance, debate about immigration policy has ebbed and flowed in recent years, creating what might best be described as a series of minor disturbances. When, during his presidential candidacy, Donald Trump made controversial statements about immigrants, many rallied both for and against him. 26 Finding a Middle Ground Student Activism and Apathy Student behavior is somewhat paradoxical when it comes to political participation. On one hand, students have been very active on college campuses at various times over the past half-century. Many became politically active in the 1960s as part of the civil rights movement, with some joining campus groups that promoted civil rights, while others supported groups that opposed these rights. In the late 1960s and early 1970s, college campuses were very active in opposition to the Vietnam War. More recently, in 2015, students at the University of Missouri protested against the university system president, who was accused of not taking racial issues at the university seriously. The student protests were supported by civil rights groups like the NAACP, and their efforts culminated in the president's resignation. 27 Yet at the same time, students participate by voting and joining groups at lower rates than members of other age cohorts. Why is it the case that students can play such an important role in facilitating political change in some cases, while at the same time they are typically less active than other demographic groups? Are there groups on campus that represent issues important to you? If not, find out what you could do to start such a group.
10.3 Interest Groups as Political Participation Learning Objectives By the end of this section, you will be able to: Analyze how interest groups provide a means for political participation Discuss recent changes to interest groups and the way they operate in the United States Explain why lower socioeconomic status citizens are not well represented by interest groups Identify the barriers to interest group participation in the United States Interest groups offer individuals an important avenue for political participation. Tea Party protests, for instance, gave individuals all over the country the opportunity to voice their opposition to government actions and control. Likewise, the Occupy Wall Street movement also gave a voice to those individuals frustrated with economic inequality and the influence of large corporations on the public sector. Individually, the protestors would likely have received little notice, but by joining with others, they drew substantial attention in the media and from lawmakers (Figure 10.8). While the Tea Party movement might not meet the definition of interest groups presented earlier, its aims have been promoted by established interest groups. Other opportunities for participation that interest groups offer or encourage include voting, campaigning, contacting lawmakers, and informing the public about causes. GROUP PARTICIPATION AS CIVIC ENGAGEMENT Joining interest groups can help facilitate civic engagement, which allows people to feel more connected to the political and social community. Some interest groups develop as grassroots movements, which often begin from the bottom up among a small number of people at the local level. Interest groups can amplify the voices of such individuals through proper organization and allow them to participate in ways that would be less effective or even impossible alone or in small numbers. The Tea Party is an example of a so-called astroturf movement, because it is not, strictly speaking, a grassroots movement. Many trace the party's origins to groups that champion the interests of the wealthy, such as Americans for Prosperity and Citizens for a Sound Economy. Although many ordinary citizens support the Tea Party because of its opposition to tax increases, it attracts a great deal of support from elite and wealthy sponsors, some of whom are active in lobbying. The FreedomWorks political action committee (PAC), for example, is a conservative advocacy group that has supported the Tea Party movement. FreedomWorks is an offshoot of the interest group Citizens for a Sound Economy, which was founded by billionaire industrialists David H. and Charles G. Koch in 1984. According to political scientists Jeffrey Berry and Clyde Wilcox, interest groups provide a means of representing people and serve as a link between them and government. 28 Interest groups also allow people to actively work on an issue in an effort to influence public policy. Another function of interest groups is to help educate the public. Someone concerned about the environment may not need to know what an acceptable level of sulfur dioxide is in the air, but by joining an environmental interest group, he or she can remain informed when air quality is poor or threatened by legislative action. A number of education-related interests have been very active following cuts to education spending in many states, including North Carolina, Mississippi, and Wisconsin, to name a few. Interest groups also help frame issues, usually in a way that best benefits their cause.
Abortion rights advocates often use the term "pro-choice" to frame abortion as an individual's private choice to be made free of government interference, while an anti-abortion group might use the term "pro-life" to frame its position as protecting the life of the unborn. "Pro-life" groups often label their opponents as "pro-abortion," rather than "pro-choice," a distinction that can affect the way the public perceives the issue. Similarly, scientists and others who believe that human activity has had a negative effect on the earth's temperature and weather patterns attribute such phenomena as the increasing frequency and severity of storms to "climate change." Industrialists and their supporters refer to alterations in the earth's climate as "global warming." Those who dispute that such a change is taking place can thus point to blizzards and low temperatures as evidence that the earth is not becoming warmer. Interest groups also try to get issues on the government agenda and to monitor a variety of government programs. Following the passage of the ACA, numerous interest groups have been monitoring the implementation of the law, hoping to use successes and failures to justify their positions for and against the legislation. Those opposed have utilized the court system to try to alter or eliminate the law, or have lobbied executive agencies or departments that have a role in the law's implementation. Similarly, teachers' unions, parent-teacher organizations, and other education-related interests have monitored implementation of the No Child Left Behind Act promoted and signed into law by President George W. Bush. Milestone Interest Groups as a Response to Riots The LGBT (lesbian, gay, bisexual, and transgender) movement owes a great deal to the gay rights movement of the 1960s and 1970s, and in particular to the 1969 riots at the Stonewall Inn in New York's Greenwich Village. These were a series of violent responses to a police raid on the bar, a popular gathering place for members of the LGBT community. The riots culminated in a number of arrests but also raised awareness of the struggles faced by members of the gay and lesbian community. 29 The Stonewall Inn has recently been granted landmark status by New York City's Landmarks Preservation Commission (Figure 10.9). The Castro district in San Francisco, California, was also home to a significant LGBT community during the same time period. In 1978, the community was shocked when Harvey Milk, a gay local activist and sitting member of San Francisco's Board of Supervisors, was assassinated by a former city supervisor due to political differences. 30 This resulted in protests in San Francisco and other cities across the country and the mobilization of interests concerned about gay and lesbian rights. Today, advocacy interest organizations like Human Rights Watch and the Human Rights Campaign are at the forefront in supporting members of the LGBT community and popularizing a number of relevant issues. They played an active role in the effort to legalize same-sex marriage in individual states and later nationwide. Now that same-sex marriage is legal, these organizations and others are dealing with issues related to continuing discrimination against members of this community. One current debate centers around whether an individual's religious freedom allows him or her to deny services to members of the LGBT community. What do you feel are lingering issues for the LGBT community?
What approaches could you take to help increase attention and support for gay, lesbian, bisexual, and transgender rights? Do you think someone’s religious beliefs should allow them the freedom to discriminate against members of the LGBT community? Why or why not? TRENDS IN PUBLIC INTEREST GROUP FORMATION AND ACTIVITY A number of changes in interest groups have taken place over the last three or four decades in the United States. The most significant change is the tremendous increase in both the number and type of groups. 31 Political scientists often examine the diversity of registered groups, in part to determine how well they reflect the variety of interests in society. Some areas may be dominated by certain industries, while others may reflect a multitude of interests. Some interests appear to have increased at greater rates than others. For example, the number of institutions and corporate interests has increased both in Washington and in the states. Telecommunication companies like Verizon and AT&T will lobby Congress for laws beneficial to their businesses, but they also target the states because state legislatures make laws that can benefit or harm their activities. There has also been an increase in the number of public interest groups that represent the public as opposed to economic interests. U.S. PIRG is a public interest group that represents the public on issues including public health, the environment, and consumer protection. 32 Get Connected! Public Interest Research Groups Public interest research groups (PIRGs) have increased in recent years, and many now exist nationally and at the state level. PIRGs represent the public in a multitude of issue areas, ranging from consumer protection to the environment, and like other interests, they provide opportunities for people to make a difference in the political process. PIRGs try to promote the common or public good, and most issues they favor affect many or even all citizens. Student PIRGs focus on issues that are important to students, including tuition costs, textbook costs, new voter registration, sustainable universities, and homelessness. Consider the cost of a college education. You may want to research how education costs have increased over time. Are cost increases similar across universities and colleges? Are they similar across states? What might explain similarities and differences in tuition costs? What solutions might help address the rising costs of higher education? How can you get involved in the drive for affordable college education? Consider why students might become engaged in it and why they might not do so. A number of countries have made tuition free or nearly free. 33 Is this feasible or desirable in the United States? Why or why not? Link to Learning Take a look at the website for Student PIRGs. What issues does this interest group address? Are these issues important to you? How can you get involved? Visit this section of their site to learn more about their position on financing higher education. What are the reasons for the increase in the number of interest groups? In some cases, it simply reflects new interests in society. Forty years ago, stem cell research was not an issue on the government agenda, but as science and technology advanced, its techniques and possibilities became known to the media and the public, and a number of interests began lobbying for and against this type of research. 
Medical research firms and medical associations will lobby in favor of greater spending on, and expansion of, stem cell research, while some religious organizations and anti-abortion groups will oppose it. As societal attitudes change and new issues develop, and as the public becomes aware of them, we can expect to see the rise of interests addressing them. The devolution of power also explains some of the increase in the number and type of interests, at least at the state level. As power and responsibility shifted to state governments in the 1980s, the states began to handle responsibilities that had been under the jurisdiction of the federal government. A number of federal welfare programs, for example, are generally administered at the state level. This means interests might be better served targeting their lobbying efforts in Albany, Raleigh, Austin, or Sacramento, rather than only in Washington, DC. As the states have become more active in more policy areas, they have become prime targets for interests wanting to influence policy in their favor. 34 We have also seen increased specialization by some interests and even fragmentation of existing interests. While the American Medical Association may take a stand on stem cell research, the issue is not critical to the everyday activities of many of its members. On the other hand, stem cell research is highly salient to members of the American Neurological Association, an interest organization that represents academic neurologists and neuroscientists. Accordingly, different interests represent the more specialized needs of different specialties within the medical community, but fragmentation can occur when a large interest like this has diverging needs. Such was also the case when several unions split from the AFL-CIO (American Federation of Labor-Congress of Industrial Organizations), the nation's largest federation of unions, in 2005. 35 Improved technology and the development of social media have made it easier for smaller groups to form and to attract and communicate with members. The use of the Internet to raise money has also made it possible for even small groups to receive funding. None of this suggests that an unlimited number of interests can exist in society. The size of the economy has a bearing on the number of interests, but only up to a certain point, after which the number increases at a declining rate. As we will see below, the limit on the number of interests depends on the available resources and levels of competition. Over the last few decades, we have also witnessed an increase in professionalization in lobbying and in the sophistication of lobbying techniques. This was not always the case, because lobbying was not considered a serious profession in the mid-twentieth century. Over the past three decades, there has been an increase in the number of contract lobbying firms. These firms are often effective because they bring significant resources to the table, their lobbyists are knowledgeable about the issues on which they lobby, and they may have existing relationships with lawmakers. In fact, relationships between lobbyists and legislators are often ongoing, and these are critical if lobbyists want access to lawmakers. However, not every interest can afford to hire high-priced contract lobbyists to represent it. As Table 10.1 suggests, a great deal of money is spent on lobbying activities.
Top Lobbying Firms in 2014

Lobbying Firm                      Total Lobbying Annual Income
Akin, Gump et al.                  $35,550,000
Squire Patton Boggs                $31,540,000
Podesta Group                      $25,070,000
Brownstein, Hyatt et al.           $23,400,000
Van Scoyoc Assoc.                  $21,420,000
Holland & Knight                   $19,250,000
Capitol Counsel                    $17,930,000
K&L Gates                          $17,420,000
Williams & Jensen                  $16,430,000
BGR Group                          $15,470,000
Peck Madigan Jones                 $13,395,000
Cornerstone Government Affairs     $13,380,000
Ernst & Young                      $12,440,000
Hogan Lovells                      $12,410,000
Capitol Tax Partners               $12,390,000
Cassidy & Assoc.                   $12,090,000
Fierce, Isakowitz & Blalock        $11,970,000
Covington & Burling                $11,537,000
Mehlman, Castagnetti et al.        $11,180,000
Alpine Group                       $10,950,000

Table 10.1 This table lists the top twenty U.S. lobbying firms in 2014 as determined by total lobbying income. 36

We have also seen greater limits on inside lobbying activities. In the past, many lobbyists were described as "good ol' boys" who often provided gifts or other favors in exchange for political access or other considerations. Today, restrictions limit the types of gifts and benefits lobbyists can bestow on lawmakers. There are certainly fewer "good ol' boy" lobbyists, and many lobbyists are now full-time professionals. The regulation of lobbying is addressed in greater detail below. HOW REPRESENTATIVE IS THE INTEREST GROUP SYSTEM? Participation in the United States has never been equal; wealth and education, components of socioeconomic status, are strong predictors of political engagement. 37 We already discussed how wealth can help overcome collective action problems, but lack of wealth also serves as a barrier to participation more generally. These types of barriers pose challenges, making it less likely for some groups than others to participate. 38 Some institutions, including large corporations, are more likely to participate in the political process than others, simply because they have tremendous resources. And with these resources, they can write a check to a political campaign or hire a lobbyist to represent their organization. Writing a check and hiring a lobbyist are unlikely options for a disadvantaged group (Figure 10.10). Individually, the poor may not have the same opportunities to join groups. 39 They may work two jobs to make ends meet and lack the free time necessary to participate in politics. Further, there are often financial barriers to participation. For someone who punches a time clock, spending time with political groups may be costly and paying dues may be a hardship. Certainly, the poor are unable to hire expensive lobbying firms to represent them. Structural barriers like voter identification laws may also disproportionately affect people with low socioeconomic status, although the effects of these laws may not be fully understood for some time. The poor may also have low levels of efficacy, which refers to the conviction that you can make a difference or that government cares about you and your views. People with low levels of efficacy are less likely to participate in politics, including voting and joining interest groups. Therefore, they are often underrepresented in the political arena. Minorities may also participate less often than the majority population, although when we control for wealth and education levels, we see fewer differences in participation rates. Still, there is a bias in participation and representation, and this bias extends to interest groups as well.
For example, when fast food workers across the United States went on strike to demand an increase in their wages, they could do little more than take to the streets bearing signs, like the protestors shown in Figure 10.11. Their opponents, the owners of restaurant chains and others who pay their employees minimum wage, could hire groups such as the Employment Policies Institute, which paid for billboard ads in Times Square in New York City. The billboards implied that raising the minimum wage was an insult to people who worked hard and discouraged people from getting an education to better their lives. 40 Finally, people do not often participate because they lack the political skill to do so or believe that it is impossible to influence government actions. 41 They might also lack interest or could be apathetic. Participation usually requires some knowledge of the political system, the candidates, or the issues. Younger people in particular are often cynical about government's response to the needs of non-elites. How do these observations translate into the way different interests are represented in the political system? Some pluralist scholars like David Truman suggest that people naturally join groups and that there will be a great deal of competition for access to decision-makers. 42 Scholars who subscribe to this pluralist view assume this competition among diverse interests is good for democracy. Political theorist Robert Dahl argued that "all active and legitimate groups had the potential to make themselves heard." 43 In many ways, this is an optimistic assessment of representation in the United States. However, not all scholars accept the premise that mobilization is natural and that all groups have the potential for access to decision-makers. The elite critique suggests that certain interests, typically businesses and the wealthy, are advantaged and that policies more often reflect their wishes than anyone else's. Political scientist E. E. Schattschneider noted that "the flaw in the pluralist heaven is that the heavenly chorus sings with a strong upper-class accent." 44 A number of scholars have suggested that businesses and other wealthy interests are often overrepresented before government, and that poorer interests are at a comparative disadvantage. 45 For example, as we've seen, wealthy corporate interests have the means to hire in-house lobbyists or high-priced contract lobbyists to represent them. They can also afford to make financial contributions to politicians, which at least may grant them access. The ability to overcome collective action problems is not equally distributed across groups; as Mancur Olson noted, small groups and those with economic advantages were better off in this regard. 46 Disadvantaged interests face many challenges including shortages of resources, time, and skills. A study of almost eighteen hundred policy decisions made over a twenty-year period revealed that the interests of the wealthy have much greater influence on the government than those of average citizens. The approval or disapproval of proposed policy changes by average voters had relatively little effect on whether the changes took place. When wealthy voters disapproved of a particular policy, it almost never was enacted. When wealthy voters favored a particular policy, the odds of the policy proposal's passing increased to more than 50 percent. 47
Indeed, the preferences of those in the top 10 percent of the population in terms of income had an impact fifteen times greater than the preferences of average-income citizens. In terms of the effect of interest groups on policy, Gilens and Page found that business interest groups had twice the influence of public interest groups. 48 Figure 10.12 shows contributions by interests from a variety of different sectors. We can draw a few notable observations from the figure. First, large sums of money are spent by different interests. Second, many of these interests are business sectors, including the real estate sector, the insurance industry, businesses, and law firms. Interest group politics are often characterized by whether the groups have access to decision-makers and can participate in the policy-making process. The iron triangle is a hypothetical arrangement among three elements (the corners of the triangle): an interest group, a congressional committee member or chair, and an agency within the bureaucracy. 49 Each element has a symbiotic relationship with the other two, and it is difficult for those outside the triangle to break into it. The congressional committee members, including the chair, rely on the interest group for campaign contributions and policy information, while the interest group needs the committee to consider laws favorable to its view. The interest group and the committee need the agency to implement the law, while the agency needs the interest group for information and the committee for funding and autonomy in implementing the law. 50 An alternate explanation of the arrangement of duties carried out in a given policy area by interest groups, legislators, and agency bureaucrats is that these actors are the experts in that given policy area. Hence, perhaps they are the ones most qualified to make policy in that area. Some view the iron triangle idea as outdated. Hugh Heclo of George Mason University has sketched a more open pattern he calls an issue network that includes a number of different interests and political actors that work together in support of a single issue or policy. 51 Some interest group scholars have studied the relationship among a multitude of interest groups and political actors, including former elected officials, the way some interests form coalitions with other interests, and the way they compete for access to decision-makers. 52 Some coalitions are long-standing, while others are temporary. Joining coalitions does come with a cost, because it can dilute preferences and split potential benefits that the groups attempt to accrue. Some interest groups will even align themselves with opposing interests if the alliance will achieve their goals. For example, left-leaning groups might oppose a state lottery system because it disproportionately hurts the poor (who participate in this form of gambling at higher rates), while right-leaning groups might oppose it because they view gambling as a sinful activity. These opposing groups might actually join forces in an attempt to defeat the lottery. While most scholars agree that some interests do have advantages, others have questioned the overwhelming dominance of certain interests. Additionally, neopluralist scholars argue that while some interests are certainly in a privileged position, these interests do not always get what they want. 53
Instead, their influence depends on a number of factors in the political environment such as public opinion, political culture, competition for access, and the relevance of the issue. Even wealthy interests do not always win if their position is at odds with the wishes of an attentive public. And if the public cares about the issue, politicians may be reluctant to defy their constituents. If a prominent manufacturing firm wants fewer regulations on environmental pollutants, and environmental protection is a salient issue to the public, the manufacturing firm may not win in every exchange, despite its resource advantage. We also know that when interests mobilize, opposing interests often counter-mobilize, which can reduce the advantages of some interests. Thus, the conclusion that businesses, the wealthy, and elites win in every situation is overstated. 54 A good example is the recent dispute between fast food chains and their employees. During the spring of 2015, workers at McDonald's restaurants across the country went on strike and marched in protest of the low wages the fast food giant paid its employees. Despite the opposition of restaurant chains and claims by the National Restaurant Association that increasing the minimum wage would result in the loss of jobs, in September 2015, the state of New York raised the minimum wage for fast food employees to $15 per hour, an amount to be phased in over time. Buoyed by this success, fast food workers in other cities continued to campaign for a pay increase, and many low-paid workers have promised to vote for politicians who plan to boost the federal minimum wage. 55 Link to Learning Visit the websites for the California or Michigan secretary of state, state boards of elections, or relevant governmental entity and ethics websites where lobbyists and interest groups must register. Several examples are provided, but feel free to examine the comparable web page in your own state. Spend some time looking over the lists of interest groups registered in these states. Do the registered interests appear to reflect the important interests within the states? Are there patterns in the types of interests registered? Are certain interests over- or underrepresented? 10.4 Pathways of Interest Group Influence Learning Objectives By the end of this section, you will be able to: Describe how interest groups influence the government through elections Explain how interest groups influence the government through the governance processes Many people criticize the huge amounts of money spent in politics. Some argue that interest groups have too much influence on who wins elections, while others suggest influence is also problematic when interests try to sway politicians in office. There is little doubt that interest groups often try to achieve their objectives by influencing elections and politicians, but discovering whether they have succeeded in changing minds is actually challenging because they tend to support those who already agree with them. INFLUENCE IN ELECTIONS Interest groups support candidates who are sympathetic to their views in hopes of gaining access to them once they are in office. 56 For example, an organization like the NRA will back candidates who support Second Amendment rights. Both the NRA and the Brady Campaign to Prevent Gun Violence (an interest group that favors background checks for firearm purchases) have grading systems that evaluate candidates and states based on their records of supporting these organizations. 57
To garner the support of the NRA, candidates must receive an A+ rating from the group. In much the same way, Americans for Democratic Action, a liberal interest group, and the American Conservative Union, a conservative interest group, both rate politicians based on their voting records on issues these organizations view as important. 58 These ratings, and those of many other groups, are useful for interests and the public in deciding which candidates to support and which to oppose. Incumbents have electoral advantages in terms of name recognition, experience, and fundraising abilities, and they often receive support because interest groups want access to the candidate who is likely to win. Some interest groups will offer support to the challenger, particularly if the challenger better aligns with the interest's views or the incumbent is vulnerable. Sometimes, interest groups even hedge their bets and give to both major party candidates for a particular office in the hopes of having access regardless of who wins. Some interest groups form political action committees (PACs), groups that collect funds from donors and distribute them to candidates who support their issues. As Figure 10.13 makes apparent, many large corporations like Honeywell International, AT&T, and Lockheed Martin form PACs to distribute money to candidates. 59 Other PACs are either politically or ideologically oriented. For example, the MoveOn.org PAC is a progressive group that formed following the impeachment trial of President Bill Clinton, whereas GOPAC is a Republican PAC that promotes state and local candidates of that party. PACs are limited in the amount of money that they can contribute to individual candidates or to national party organizations; they can contribute no more than $5,000 per candidate per election and no more than $15,000 a year to a national political party. Individual contributions to PACs are also limited to $5,000 a year (a short compliance sketch of these caps appears below). PACs through which corporations and unions can spend virtually unlimited amounts of money on behalf of political candidates are called super PACs. 60 As a result of a 2010 Supreme Court decision, Citizens United v. Federal Election Commission, there is no limit to how much money unions or corporations can donate to super PACs. Unlike PACs, however, super PACs cannot contribute money directly to individual candidates. If the 2014 elections were any indication, super PACs will continue to spend large sums of money in an attempt to influence future election results. INFLUENCING GOVERNMENTAL POLICY Interest groups support candidates in order to have access to lawmakers once they are in office. Lawmakers, for their part, lack the time and resources to pursue every issue; they are policy generalists. Therefore, they (and their staff members) rely on interest groups and lobbyists to provide them with information about the technical details of policy proposals, as well as about fellow lawmakers' stands and constituents' perceptions. These voting cues give lawmakers an indication of how to vote on issues, particularly those with which they are unfamiliar. But lawmakers also rely on lobbyists for information about ideas they can champion and that will benefit them when they run for reelection. 61 Interest groups likely cannot target all 535 lawmakers in both the House and the Senate, nor would they wish to do so. There is little reason for the Brady Campaign to Prevent Gun Violence to lobby members of Congress who vehemently oppose any restrictions on gun access.
Instead, the organization will often contact lawmakers who are amenable to some restrictions on access to firearms. Thus, interest groups first target lawmakers they think will consider introducing or sponsoring legislation. Second, they target members of relevant committees. 62 If a company that makes weapons systems wants to influence a defense bill, it will lobby members of the Armed Services Committees in the House and the Senate or the House and Senate appropriations committees if the bill requires new funding. Many members of these committees represent congressional districts with military bases, so they often sponsor or champion bills that allow them to promote policies popular with their districts or states. Interest groups attempt to use this to their advantage. But they also conduct strategic targeting because legislatures function by respectfully considering fellow lawmakers' positions. Since lawmakers cannot possess expertise on every issue, they defer to their trusted colleagues on issues with which they are unfamiliar. So targeting committee members also allows the lobbyist to inform other lawmakers indirectly. Third, interest groups target lawmakers when legislation is on the floor of the House and/or Senate, but again, they rely on the fact that many members will defer to their colleagues who are more familiar with a given issue. Finally, since legislation must pass both chambers in identical form, interest groups may target members of the conference committees whose job it is to iron out differences across the chambers. At this negotiation stage, a 1 percent difference in, say, the corporate income tax rate could mean millions of dollars in increased or decreased revenue or taxation for various interests. Interest groups also target the budgetary process in order to maximize benefits to their group. In some cases, their aim is to influence the portion of the budget allocated to a given policy, program, or policy area. For example, interest groups that represent the poor may lobby for additional appropriations for various welfare programs; those interests opposed to government assistance to the poor may lobby for reduced funding to certain programs. It is likely that the legislative liaison for your university or college spends time trying to advocate for budgetary allocations in your state. Interest groups also try to defeat legislation that may be detrimental to their views. For example, when Congress considers legislation to improve air quality, it is not unusual for some industries to oppose it if it requires additional regulations on factory emissions. In some cases, proposed legislation may serve as a disturbance, resulting in group formation or mobilization to help defeat the bill. For example, a proposed tax increase may result in the formation or mobilization of anti-tax groups that will lobby the legislature and try to encourage the public to oppose the proposed legislation. Prior to the election in 2012, political activist Grover Norquist, the founder of Americans for Tax Reform (ATR), asked all Republican members of Congress to sign a "Taxpayer Protection Pledge" that they would fight efforts to raise taxes or to eliminate any deductions that were not accompanied by tax cuts. Ninety-five percent of the Republicans in Congress signed the pledge. 63 Some interests arise solely to defeat legislation and go dormant after they achieve their immediate objectives.
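The PAC contribution caps cited earlier are simple enough to state in code. In the sketch below, the dollar limits come from this chapter; the constant and function names are invented for illustration.

```python
# Contribution caps as described in this chapter (2014-era figures).
PAC_TO_CANDIDATE_PER_ELECTION = 5_000    # dollars
PAC_TO_NATIONAL_PARTY_PER_YEAR = 15_000
INDIVIDUAL_TO_PAC_PER_YEAR = 5_000

def within_limit(proposed: float, already_given: float, cap: float) -> bool:
    """True if a further contribution would stay within the cap."""
    return already_given + proposed <= cap

# A PAC that has already given a candidate $3,500 this election
# may give at most $1,500 more.
print(within_limit(1_500, 3_500, PAC_TO_CANDIDATE_PER_ELECTION))  # True
print(within_limit(2_000, 3_500, PAC_TO_CANDIDATE_PER_ELECTION))  # False
```

Super PACs, by contrast, face no such caps on their spending, which is why the Citizens United ruling mattered so much to groups on both sides.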
Once legislation has been passed, interest groups may target the executive branch of government, whose job is to implement the law. The U.S. Department of Veterans Affairs has some leeway in providing care for military veterans, and interests representing veterans’ needs may pressure this department to address their concerns or issues. Other entities within the executive branch, like the Securities and Exchange Commission, which maintains and regulates financial markets, are not designed to be responsive to the interests they regulate, because to make such a response would be a conflict of interest. Interest groups may lobby the executive branch on executive, judicial, and other appointments that require Senate confirmation. As a result, interest group members may be appointed to positions in which they can influence proposed regulation of the industry of which they are a part. In addition to lobbying the legislative and executive branches of government, many interest groups also lobby the judicial branch. Lobbying the judiciary takes two forms, the first of which was mentioned above. This is lobbying the executive branch about judicial appointments the president makes and lobbying the Senate to confirm these appointments. The second form of lobbying consists of filing amicus briefs, which are also known as “friend of the court” briefs. These documents present legal arguments stating why a given court should take a case and/or why a court should rule a certain way. In Obergefell v. Hodges (2015), the Supreme Court case that legalized same-sex marriage nationwide, numerous interest groups filed amicus briefs. 64 For example, the Human Rights Campaign, shown demonstrating in Figure 10.14 , filed a brief arguing that the Fourteenth Amendment’s due process and equal protection clauses required that same-sex couples be afforded the same rights to marry as opposite-sex couples. In a 5–4 decision, the U.S. Supreme Court agreed. Link to Learning The briefs submitted in Obergefell v. Hodges are available on the website of the U.S. Supreme Court. What arguments did the authors of these briefs make, other than those mentioned in this chapter, in favor of Obergefell’s position? Measuring the effect of interest groups’ influence is somewhat difficult because lobbyists support lawmakers who would likely have supported them in the first place. Thus, National Right to Life, an anti-abortion interest group, does not generally lobby lawmakers who favor abortion rights; instead, it supports lawmakers and candidates who have professed “pro-life” positions. While some scholars note that lobbyists sometimes try to influence those on the fence or even their enemies, most of the time, they support like-minded individuals. Thus, contributions are unlikely to sway lawmakers to change their views; what they do buy is access, including time with lawmakers. The problem for those trying to assess whether interest groups influence lawmakers, then, is that we are uncertain what would happen in the absence of interest group contributions. For example, we can only speculate what the ACA might have looked like had lobbyists from a host of interests not lobbied on the issue. Link to Learning Examine websites for the American Conservative Union and Americans for Democratic Action that compile legislative ratings and voting records. On what issues do these organizations choose to take positions? Where do your representatives and senators rank according to these groups? Are these rankings surprising? 
10.5 Free Speech and the Regulation of Interest Groups Learning Objectives By the end of this section, you will be able to: Identify the various court cases, policies, and laws that outline what interest groups can and cannot do Evaluate the arguments for and against whether contributions are a form of freedom of speech How are lobbying and interest group activity regulated? As we noted earlier in the chapter, James Madison viewed factions as a necessary evil and thought preventing people from joining together would be worse than any ills groups might cause. The First Amendment guarantees, among other things, freedom of speech, petition, and assembly. However, people have different views on how far this freedom extends. For example, should freedom of speech as afforded to individuals in the U.S. Constitution also apply to corporations and unions? To what extent can and should government restrict the activities of lobbyists and lawmakers, limiting who may lobby and how they may do it? INTEREST GROUPS AND FREE SPEECH Most people would agree that interest groups have a right under the Constitution to promote a particular point of view. What people do not necessarily agree upon, however, is the extent to which certain interest group and lobbying activities are protected under the First Amendment. In addition to free speech rights, the First Amendment grants people the right to assemble. We saw above that pluralists even argued that assembling in groups is natural and that people will gravitate toward others with similar views. Most people acknowledge the right of others to assemble to voice unpopular positions, but this was not always the case. At various times, groups representing racial and religious minorities, communists, and members of the LGBT community have had their First Amendment rights to speech and assembly curtailed. And as noted above, organizations like the ACLU support free speech rights regardless of whether the speech is popular. Today, the debate about interest groups often revolves around whether the First Amendment protects the rights of individuals and groups to give money, and whether government can regulate the use of this money. In 1971, the Federal Election Campaign Act was passed, setting limits on how much presidential and vice-presidential candidates and their families could donate to their own campaigns. 65 The law also allowed corporations and unions to form PACs and required public disclosure of campaign contributions and their sources. In 1974, the act was amended in an attempt to limit the amount of money spent on congressional campaigns. The amended law banned the transfer of union, corporate, and trade association money to parties for distribution to campaigns. In Buckley v. Valeo (1976), the Supreme Court upheld Congress's right to regulate elections by restricting contributions to campaigns and candidates. However, at the same time, it overturned restrictions on expenditures by candidates and their families, as well as total expenditures by campaigns. 66 In 1979, an exemption was granted to get-out-the-vote and grassroots voter registration drives, creating what has become known as the soft-money loophole; soft money was a way in which interests could spend money on behalf of candidates without being restricted by federal law. To close this loophole, Senators John McCain and Russell Feingold sponsored the Bipartisan Campaign Reform Act in 2002 to ban parties from collecting and distributing unregulated money.
Some continued to argue that campaign expenditures are a form of speech, a position with which two recent Supreme Court decisions are consistent. The Citizens United v. Federal Election Commission 67 and the McCutcheon v. Federal Election Commission 68 cases opened the door for a substantially greater flow of money into elections. Citizens United overturned the soft money ban of the Bipartisan Campaign Reform Act and allowed corporations and unions to spend unlimited amounts of money on elections. Essentially, the Supreme Court argued in a 5–4 decision that these entities had free speech rights, much like individuals, and that free speech included campaign spending. The McCutcheon decision further extended spending allowances based on the First Amendment by striking down aggregate contribution limits. These limits had capped the total contributions an individual could make, and some say their removal has contributed to a subsequent increase in groups and lobbying activities (Figure 10.15). Link to Learning Read about the rights that corporations share with people. Should corporations have the same rights as people? Insider Perspective The Koch Brothers Conservative billionaires Charles and David Koch have become increasingly active in U.S. elections in recent years. These brothers run Koch Industries, a multinational corporation that manufactures and produces a number of products including paper, plastics, petroleum-based products, and chemicals. In the 2012 election, the Koch brothers and their affiliates spent nearly $400 million supporting Republican candidates. Many people have suggested that this spending helped put many Republicans in office. The Kochs and their related organizations planned to raise and spend nearly $900 million on the 2016 elections. Critics have accused them and other wealthy donors of attempting to buy elections. However, others point out that their activities are legal according to current campaign finance laws and recent Supreme Court decisions, and that these individuals, their companies, and their affiliates should be able to spend what they want politically. As you might expect, there are wealthy donors on both the political left and the right who will continue to spend money on U.S. elections. Some critics have called for a constitutional amendment restricting spending that would overturn recent Supreme Court decisions. 69 Do you agree, as some have argued, that the Constitution protects the ability to donate unlimited amounts of money to political candidates as a First Amendment right? Is spending money a form of exercising free speech? If so, does a PAC have this right? Why or why not? REGULATING LOBBYING AND INTEREST GROUP ACTIVITY While the Supreme Court has paved the way for increased spending in politics, lobbying is still regulated in many ways. 70 The 1995 Lobbying Disclosure Act defined who can and cannot lobby and required lobbyists and interest groups to register with the federal government. 71 The Honest Leadership and Open Government Act of 2007 further increased restrictions on lobbying. For example, the act prohibited contact between members of Congress and lobbyists who were the spouses of other Congress members. The laws broadened the definition of lobbyist and required detailed disclosure of spending on lobbying activity, including who is lobbied and what bills are of interest.
In addition, President Obama's Executive Order 13490 prohibited appointees in the executive branch from accepting gifts from lobbyists and banned them from participating in matters, including the drafting of any contracts or regulations, involving the appointee's former clients or employer for a period of two years. The states also have their own registration requirements, with some defining lobbying broadly and others more narrowly. Second, the federal and state governments prohibit certain activities like providing gifts to lawmakers and compensating lobbyists with commissions for successful lobbying. Many activities are prohibited to prevent accusations of vote buying or currying favor with lawmakers. Some states, for example, have strict limits on how much money lobbyists can spend on lobbying lawmakers, or on the value of gifts lawmakers can accept from lobbyists. According to the Honest Leadership and Open Government Act, lobbyists must certify that they have not violated the law regarding gift giving, and the penalty for knowingly violating the law increased from a fine of $50,000 to one of $200,000. Revolving door laws also prevent lawmakers from lobbying the government immediately after leaving public office. Members of the House of Representatives cannot register to lobby for a year after they leave office, while senators have a two-year "cooling off" period before they can officially lobby. Former cabinet secretaries must wait the same period of time after leaving their positions before lobbying the department of which they had been the head (a short sketch of these waiting periods appears at the end of this section). These laws are designed to restrict former lawmakers from using their connections in government to give them an advantage when lobbying. Still, many former lawmakers do become lobbyists, including former Senate majority leader Trent Lott and former House minority leader Richard Gephardt. Third, governments require varying levels of disclosure about the amount of money spent on lobbying efforts. The logic here is that lawmakers will think twice about accepting money from controversial donors. The other advantage to disclosure requirements is that they promote transparency. Many have argued that the public has a right to know where candidates get their money. Candidates may be reluctant to accept contributions from donors affiliated with unpopular interests such as hate groups. This was one of the key purposes of the Lobbying Disclosure Act and comparable laws at the state level. Finally, there are penalties for violating the law. Lobbyists and, in some cases, government officials can be fined, banned from lobbying, or even sentenced to prison. While state and federal laws spell out what activities are legal and illegal, the attorneys general and prosecutors responsible for enforcing lobbying regulations may be understaffed, have limited budgets, or face backlogs of work, making it difficult for them to investigate or prosecute alleged transgressions. While most lobbyists do comply with the law, exactly how the laws alter behavior is not completely understood. We know the laws prevent lobbyists from engaging in certain behaviors, such as by limiting campaign contributions or preventing the provision of certain gifts to lawmakers, but how they alter lobbyists' strategies and tactics remains unclear. The need to strictly regulate the actions of lobbyists became especially relevant after the activities of lobbyist Jack Abramoff were brought to light (Figure 10.16).
A prominent lobbyist with ties to many of the Republican members of Congress, Abramoff used funds provided by his clients to fund reelection campaigns, pay for trips, and hire the spouses of members of Congress. Between 1994 and 2001, Abramoff, who then worked as a lobbyist for a prominent law firm, paid for eighty-five members of Congress to travel to the Northern Mariana Islands, a U.S. territory in the Pacific. The territory’s government was a client of the firm for which he worked. At the time, Abramoff was lobbying Congress to exempt the Northern Mariana Islands from paying the federal minimum wage and to allow the territory to continue to operate sweatshops in which people worked in deplorable conditions. In 2000, while representing Native American casino interests who sought to defeat anti-gambling legislation, Abramoff paid for a trip to Scotland for Tom DeLay, the majority whip in the House of Representatives, and an aide. Shortly thereafter, DeLay helped to defeat anti-gambling legislation in the House. He also hired DeLay’s wife Christine to research the favorite charity of each member of Congress and paid her $115,000 for her efforts. 72 In 2008, Jack Abramoff was sentenced to four years in prison for tax evasion, fraud, and corruption of public officials. 73 He was released early, in December 2010.
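As a closing illustration, the cooling-off periods described above can be expressed as a small lookup. This is a sketch only: the one-year House and two-year Senate figures come from this chapter, while treating cabinet secretaries' "same period of time" as the two-year period is an assumption, as are all names in the code.

```python
# Sketch of revolving-door "cooling off" periods (the cabinet figure is
# an assumption about what "the same period of time" refers to).
from datetime import date

COOLING_OFF_YEARS = {
    "house": 1,     # House members: one year
    "senate": 2,    # senators: two years
    "cabinet": 2,   # assumed equal to the Senate period
}

def earliest_lobbying_date(role: str, left_office: date) -> date:
    """First date a former official could register to lobby.

    Naive year arithmetic; a February 29 departure date would need
    special handling.
    """
    years = COOLING_OFF_YEARS[role]
    return left_office.replace(year=left_office.year + years)

print(earliest_lobbying_date("senate", date(2015, 1, 3)))  # 2017-01-03
```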
microbiology
Summary 9.1 How Microbes Grow Most bacterial cells divide by binary fission. Generation time in bacterial growth is defined as the doubling time of the population. Cells in a closed system follow a pattern of growth with four phases: lag, logarithmic (exponential), stationary, and death. Cells can be counted by direct viable cell count. The pour plate and spread plate methods are used to plate serial dilutions into or onto, respectively, agar to allow counting of viable cells that give rise to colony-forming units. Membrane filtration is used to count live cells in dilute solutions. The most probable cell number (MPN) method allows estimation of cell numbers in cultures without using solid media. Indirect methods can be used to estimate culture density by measuring turbidity of a culture or live cell density by measuring metabolic activity. Other patterns of cell division include multiple nucleoid formation in cells; asymmetric division, as in budding; and the formation of hyphae and terminal spores. Biofilms are communities of microorganisms enmeshed in a matrix of extracellular polymeric substance. The formation of a biofilm occurs when planktonic cells attach to a substrate and become sessile. Cells in biofilms coordinate their activity by communicating through quorum sensing. Biofilms are commonly found on surfaces in nature and in the human body, where they may be beneficial or cause severe infections. Pathogens associated with biofilms are often more resistant to antibiotics and disinfectants. 9.2 Oxygen Requirements for Microbial Growth Aerobic and anaerobic environments can be found in diverse niches throughout nature, including different sites within and on the human body. Microorganisms vary in their requirements for molecular oxygen. Obligate aerobes depend on aerobic respiration and use oxygen as a terminal electron acceptor. They cannot grow without oxygen. Obligate anaerobes cannot grow in the presence of oxygen. They depend on fermentation and anaerobic respiration using a final electron acceptor other than oxygen. Facultative anaerobes show better growth in the presence of oxygen but will also grow without it. Although aerotolerant anaerobes do not perform aerobic respiration, they can grow in the presence of oxygen. Most aerotolerant anaerobes test negative for the enzyme catalase. Microaerophiles need oxygen to grow, albeit at a lower concentration than 21% oxygen in air. Optimum oxygen concentration for an organism is the oxygen level that promotes the fastest growth rate. The minimum permissive oxygen concentration and the maximum permissive oxygen concentration are, respectively, the lowest and the highest oxygen levels that the organism will tolerate. Peroxidase, superoxide dismutase, and catalase are the main enzymes involved in the detoxification of the reactive oxygen species. Superoxide dismutase is usually present in a cell that can tolerate oxygen. All three enzymes are usually detectable in cells that perform aerobic respiration and produce more ROS. A capnophile is an organism that requires a higher than atmospheric concentration of CO2 to grow. 9.3 The Effects of pH on Microbial Growth Bacteria are generally neutrophiles. They grow best at neutral pH close to 7.0. Acidophiles grow optimally at a pH near 3.0. Alkaliphiles are organisms that grow optimally between a pH of 8 and 10.5. Extreme acidophiles and alkaliphiles grow slowly or not at all near neutral pH. Microorganisms grow best at their optimum growth pH.
Growth occurs slowly or not at all below the minimum growth pH and above the maximum growth pH. 9.4 Temperature and Microbial Growth Microorganisms thrive at a wide range of temperatures; they have colonized different natural environments and have adapted to extreme temperatures. Both extreme cold and hot temperatures require evolutionary adjustments to macromolecules and biological processes. Psychrophiles grow best in the temperature range of 0–15 °C, whereas psychrotrophs thrive between 4 °C and 25 °C. Mesophiles grow best at moderate temperatures in the range of 20 °C to about 45 °C. Pathogens are usually mesophiles. Thermophiles and hyperthermophiles are adapted to life at temperatures above 50 °C. Adaptations to cold and hot temperatures require changes in the composition of membrane lipids and proteins. 9.5 Other Environmental Conditions that Affect Growth Halophiles require a high salt concentration in the medium, whereas halotolerant organisms can grow and multiply in the presence of high salt but do not require it for growth. Halotolerant pathogens are an important source of foodborne illnesses because they contaminate foods preserved in salt. Photosynthetic bacteria depend on visible light for energy. Most bacteria, with few exceptions, require high moisture to grow. 9.6 Media Used for Bacterial Growth Chemically defined media contain only chemically known components. Selective media favor the growth of some microorganisms while inhibiting others. Enriched media contain added essential nutrients that a specific organism needs to grow. Differential media help distinguish bacteria by the color of the colonies or the change in the medium.
Chapter Outline 9.1 How Microbes Grow 9.2 Oxygen Requirements for Microbial Growth 9.3 The Effects of pH on Microbial Growth 9.4 Temperature and Microbial Growth 9.5 Other Environmental Conditions that Affect Growth 9.6 Media Used for Bacterial Growth Introduction We are all familiar with the slimy layer on the surface of a pond, or the one that makes rocks slippery. These are examples of biofilms—microorganisms embedded in thin layers of matrix material (Figure 9.1). Biofilms were long considered random assemblages of cells and attracted little attention from researchers. Recently, progress in visualization and biochemical methods has revealed that biofilms are organized ecosystems within which many cells, usually of different species of bacteria, fungi, and algae, interact through cell signaling and coordinated responses. The biofilm provides a protected environment in harsh conditions and aids colonization by microorganisms. Biofilms also have clinical importance. They form on medical devices, resist routine cleaning and sterilization, and cause healthcare-acquired infections. Within the body, biofilms form on the teeth as plaque, in the lungs of patients with cystic fibrosis, and on the cardiac tissue of patients with endocarditis. The slime layer helps protect the cells from host immune defenses and antibiotic treatments. Studying biofilms requires new approaches. Because of the cells’ adhesion properties, many of the methods for culturing and counting cells that are explored in this chapter are not easily applied to biofilms. This is the beginning of a new era of challenges and rewarding insight into the ways that microorganisms grow and thrive in nature.
[ { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "Measuring dry weight of a culture sample is another indirect method of evaluating culture density without directly measuring cell counts . The cell suspension used for weighing must be concentrated by filtration or centrifugation , washed , and then dried before the measurements are taken . The degree of drying must be standardized to account for residual water content . <hl> This method is especially useful for filamentous microorganisms , which are difficult to enumerate by direct or viable plate count . <hl>", "hl_sentences": "This method is especially useful for filamentous microorganisms , which are difficult to enumerate by direct or viable plate count .", "question": { "cloze_format": "The method that would be used to measure the concentration of bacterial contamination in processed peanut butter is ___.", "normal_format": "Which of the following methods would be used to measure the concentration of bacterial contamination in processed peanut butter?", "question_choices": [ "turbidity measurement", "total plate count", "dry weight measurement", "direct counting of bacteria on a calibrated slide under the microscope" ], "question_id": "fs-id1172099705617", "question_text": "Which of the following methods would be used to measure the concentration of bacterial contamination in processed peanut butter?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "As a culture medium accumulates toxic waste and nutrients are exhausted , cells die in greater and greater numbers . Soon , the number of dying cells exceeds the number of dividing cells , leading to an exponential decrease in the number of cells ( Figure 9.5 ) . <hl> This is the aptly named death phase , sometimes called the decline phase . <hl> <hl> Many cells lyse and release nutrients into the medium , allowing surviving cells to maintain viability and form endospores . <hl> A few cells , the so-called persisters , are characterized by a slow metabolic rate . Persister cells are medically important because they are associated with certain chronic infections , such as tuberculosis , that do not respond to antibiotic treatment .", "hl_sentences": "This is the aptly named death phase , sometimes called the decline phase . Many cells lyse and release nutrients into the medium , allowing surviving cells to maintain viability and form endospores .", "question": { "cloze_format": "The phase in which you would expect to observe the most endospores in a Bacillus cell culture is the ___.", "normal_format": "In which phase would you expect to observe the most endospores in a Bacillus cell culture?", "question_choices": [ "death phase", "lag phase", "log phase", "log, lag, and death phases would all have roughly the same number of endospores." ], "question_id": "fs-id1172101685442", "question_text": "In which phase would you expect to observe the most endospores in a Bacillus cell culture?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Cells in the log phase show constant growth rate and uniform metabolic activity . For this reason , cells in the log phase are preferentially used for industrial applications and research work . <hl> The log phase is also the stage where bacteria are the most susceptible to the action of disinfectants and common antibiotics that affect protein , DNA , and cell-wall synthesis . 
<hl>", "hl_sentences": "The log phase is also the stage where bacteria are the most susceptible to the action of disinfectants and common antibiotics that affect protein , DNA , and cell-wall synthesis .", "question": { "cloze_format": "The phase during which penicillin, an antibiotic that inhibits cell-wall synthesis, would be most effective is the ___.", "normal_format": "During which phase would penicillin, an antibiotic that inhibits cell-wall synthesis, be most effective?", "question_choices": [ "death phase", "lag phase", "log phase", "stationary phase" ], "question_id": "fs-id1172101759753", "question_text": "During which phase would penicillin, an antibiotic that inhibits cell-wall synthesis, be most effective?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "In eukaryotic organisms , the generation time is the time between the same points of the life cycle in two successive generations . For example , the typical generation time for the human population is 25 years . This definition is not practical for bacteria , which may reproduce rapidly or remain dormant for thousands of years . <hl> In prokaryotes ( Bacteria and Archaea ) , the generation time is also called the doubling time and is defined as the time it takes for the population to double through one round of binary fission . <hl> Bacterial doubling times vary enormously . Whereas Escherichia coli can double in as little as 20 minutes under optimal growth conditions in the laboratory , bacteria of the same species may need several days to double in especially harsh environments . Most pathogens grow rapidly , like E . coli , but there are exceptions . For example , Mycobacterium tuberculosis , the causative agent of tuberculosis , has a generation time of between 15 and 20 hours . On the other hand , M . leprae , which causes Hansen ’ s disease ( leprosy ) , grows much more slowly , with a doubling time of 14 days .", "hl_sentences": "In prokaryotes ( Bacteria and Archaea ) , the generation time is also called the doubling time and is defined as the time it takes for the population to double through one round of binary fission .", "question": { "cloze_format": "___ is the best definition of generation time in a bacterium.", "normal_format": "Which of the following is the best definition of generation time in a bacterium?", "question_choices": [ "the length of time it takes to reach the log phase", "the length of time it takes for a population of cells to double", "the time it takes to reach stationary phase", "the length of time of the exponential phase" ], "question_id": "fs-id1172099739651", "question_text": "Which of the following is the best definition of generation time in a bacterium?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "The center of the enlarged cell constricts until two daughter cells are formed , each offspring receiving a complete copy of the parental genome and a division of the cytoplasm ( cytokinesis ) . This process of cytokinesis and cell division is directed by a protein called FtsZ . <hl> FtsZ assembles into a Z ring on the cytoplasmic membrane ( Figure 9.3 ) . <hl> <hl> The Z ring is anchored by FtsZ-binding proteins and defines the division plane between the two daughter cells . <hl> <hl> Additional proteins required for cell division are added to the Z ring to form a structure called the divisome . 
<hl> <hl> The divisome activates to produce a peptidoglycan cell wall and build a septum that divides the two daughter cells . <hl> <hl> The daughter cells are separated by the division septum , where all of the cells ’ outer layers ( the cell wall and outer membranes , if present ) must be remodeled to complete division . <hl> For example , we know that specific enzymes break bonds between the monomers in peptidoglycans and allow addition of new subunits along the division septum .", "hl_sentences": "FtsZ assembles into a Z ring on the cytoplasmic membrane ( Figure 9.3 ) . The Z ring is anchored by FtsZ-binding proteins and defines the division plane between the two daughter cells . Additional proteins required for cell division are added to the Z ring to form a structure called the divisome . The divisome activates to produce a peptidoglycan cell wall and build a septum that divides the two daughter cells . The daughter cells are separated by the division septum , where all of the cells ’ outer layers ( the cell wall and outer membranes , if present ) must be remodeled to complete division .", "question": { "cloze_format": "The function of the Z ring in binary fission is that ___ .", "normal_format": "What is the function of the Z ring in binary fission?", "question_choices": [ "It controls the replication of DNA.", "It forms a contractile ring at the septum.", "It separates the newly synthesized DNA molecules.", "It mediates the addition of new peptidoglycan subunits." ], "question_id": "fs-id1172102011174", "question_text": "What is the function of the Z ring in binary fission?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> N n = N 0 2 n N n = N 0 2 n <hl> <hl> The number of cells increases exponentially and can be expressed as 2 n , where n is the number of generations . <hl> <hl> If cells divide every 30 minutes , after 24 hours , 48 divisions would have taken place . <hl> <hl> If we apply the formula 2 n , where n is equal to 48 , the single cell would give rise to 2 48 or 281,474 , 976,710 , 656 cells at 48 generations ( 24 hours ) . <hl> When dealing with such huge numbers , it is more practical to use scientific notation . Therefore , we express the number of cells as 2.8 × 10 14 cells .", "hl_sentences": "N n = N 0 2 n N n = N 0 2 n The number of cells increases exponentially and can be expressed as 2 n , where n is the number of generations . If cells divide every 30 minutes , after 24 hours , 48 divisions would have taken place . If we apply the formula 2 n , where n is equal to 48 , the single cell would give rise to 2 48 or 281,474 , 976,710 , 656 cells at 48 generations ( 24 hours ) .", "question": { "cloze_format": "If a culture starts with 50 cells, the number of cells that will be present after five generations with no cell death is ___ .", "normal_format": "If a culture starts with 50 cells, how many cells will be present after five generations with no cell death?", "question_choices": [ "200", "400", "1600", "3200" ], "question_id": "fs-id1172099498850", "question_text": "If a culture starts with 50 cells, how many cells will be present after five generations with no cell death?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "In some cyanobacteria , many nucleoids may accumulate in an enlarged round cell or along a filament , leading to the generation of many new cells at once . 
The new cells often split from the parent filament and float away in a process called fragmentation ( Figure 9.16 ) . <hl> Fragmentation is commonly observed in the Actinomycetes , a group of gram-positive , anaerobic bacteria commonly found in soil . <hl> Another curious example of cell division in prokaryotes , reminiscent of live birth in animals , is exhibited by the giant bacterium Epulopiscium . Several daughter cells grow fully in the parent cell , which eventually disintegrates , releasing the new cells to the environment . Other species may form a long narrow extension at one pole in a process called budding . The tip of the extension swells and forms a smaller cell , the bud that eventually detaches from the parent cell . Budding is most common in yeast ( Figure 9.16 ) , but it is also observed in prosthecate bacteria and some cyanobacteria .", "hl_sentences": "Fragmentation is commonly observed in the Actinomycetes , a group of gram-positive , anaerobic bacteria commonly found in soil .", "question": { "cloze_format": "Filamentous cyanobacteria often divide by ___ .", "normal_format": "Filamentous cyanobacteria often divide by which of the following?", "question_choices": [ "budding", "mitosis", "fragmentation", "formation of endospores" ], "question_id": "fs-id1172099908439", "question_text": "Filamentous cyanobacteria often divide by which of the following?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Pathogens embedded within biofilms exhibit a higher resistance to antibiotics than their free-floating counterparts . <hl> Several hypotheses have been proposed to explain why . <hl> Cells in the deep layers of a biofilm are metabolically inactive and may be less susceptible to the action of antibiotics that disrupt metabolic activities . <hl> The EPS may also slow the diffusion of antibiotics and antiseptics , preventing them from reaching cells in the deeper layers of the biofilm . Phenotypic changes may also contribute to the increased resistance exhibited by bacterial cells in biofilms . For example , the increased production of efflux pumps , membrane-embedded proteins that actively extrude antibiotics out of bacterial cells , have been shown to be an important mechanism of antibiotic resistance among biofilm-associated bacteria . Finally , biofilms provide an ideal environment for the exchange of extrachromosomal DNA , which often includes genes that confer antibiotic resistance .", "hl_sentences": "Pathogens embedded within biofilms exhibit a higher resistance to antibiotics than their free-floating counterparts . Cells in the deep layers of a biofilm are metabolically inactive and may be less susceptible to the action of antibiotics that disrupt metabolic activities .", "question": { "cloze_format": "It is a reason for antimicrobial resistance being higher in a biofilm than in free-floating bacterial cells that ___.", "normal_format": "Which is a reason for antimicrobial resistance being higher in a biofilm than in free-floating bacterial cells?", "question_choices": [ "The EPS allows faster diffusion of chemicals in the biofilm.", "Cells are more metabolically active at the base of a biofilm.", "Cells are metabolically inactive at the base of a biofilm.", "The structure of a biofilm favors the survival of antibiotic resistant cells." 
], "question_id": "fs-id1172099693365", "question_text": "Which is a reason for antimicrobial resistance being higher in a biofilm than in free-floating bacterial cells?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "The mechanism by which cells in a biofilm coordinate their activities in response to environmental stimuli is called quorum sensing . <hl> Quorum sensing — which can occur between cells of different species within a biofilm — enables microorganisms to detect their cell density through the release and binding of small , diffusible molecules called autoinducers . <hl> When the cell population reaches a critical threshold ( a quorum ) , these autoinducers initiate a cascade of reactions that activate genes associated with cellular functions that are beneficial only when the population reaches a critical density . For example , in some pathogens , synthesis of virulence factors only begins when enough cells are present to overwhelm the immune defenses of the host . Although mostly studied in bacterial populations , quorum sensing takes place between bacteria and eukaryotes and between eukaryotic cells such as the fungus Candida albicans , a common member of the human microbiota that can cause infections in immunocompromised individuals .", "hl_sentences": "Quorum sensing — which can occur between cells of different species within a biofilm — enables microorganisms to detect their cell density through the release and binding of small , diffusible molecules called autoinducers .", "question": { "cloze_format": "Quorum sensing is used by bacterial cells to determine ___.", "normal_format": "Quorum sensing is used by bacterial cells to determine which of the following?", "question_choices": [ "the size of the population", "the availability of nutrients", "the speed of water flow", "the density of the population" ], "question_id": "fs-id1172101797052", "question_text": "Quorum sensing is used by bacterial cells to determine which of the following?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "The signaling molecules in quorum sensing belong to two major classes . Gram-negative bacteria communicate mainly using N-acylated homoserine lactones , whereas gram-positive bacteria mostly use small peptides ( Figure 9.18 ) . <hl> In all cases , the first step in quorum sensing consists of the binding of the autoinducer to its specific receptor only when a threshold concentration of signaling molecules is reached . <hl> <hl> Once binding to the receptor takes place , a cascade of signaling events leads to changes in gene expression . <hl> <hl> The result is the activation of biological responses linked to quorum sensing , notably an increase in the production of signaling molecules themselves , hence the term autoinducer . <hl>", "hl_sentences": "In all cases , the first step in quorum sensing consists of the binding of the autoinducer to its specific receptor only when a threshold concentration of signaling molecules is reached . Once binding to the receptor takes place , a cascade of signaling events leads to changes in gene expression . 
The result is the activation of biological responses linked to quorum sensing , notably an increase in the production of signaling molecules themselves , hence the term autoinducer .", "question": { "cloze_format": "The statement about autoinducers that is incorrect is that ___.", "normal_format": "Which of the following statements about autoinducers is incorrect?", "question_choices": [ "They bind directly to DNA to activate transcription.", "They can activate the cell that secreted them.", "N-acylated homoserine lactones are autoinducers in gram-negative cells.", "Autoinducers may stimulate the production of virulence factors." ], "question_id": "fs-id1172101689049", "question_text": "Which of the following statements about autoinducers is incorrect?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> The growth of bacteria with varying oxygen requirements in thioglycolate tubes is illustrated in Figure 9.20 . <hl> In tube A , all the growth is seen at the top of the tube . The bacteria are obligate ( strict ) aerobe s that cannot grow without an abundant supply of oxygen . Tube B looks like the opposite of tube A . Bacteria grow at the bottom of tube B . Those are obligate anaerobe s , which are killed by oxygen . <hl> Tube C shows heavy growth at the top of the tube and growth throughout the tube , a typical result with facultative anaerobe s . Facultative anaerobes are organisms that thrive in the presence of oxygen but also grow in its absence by relying on fermentation or anaerobic respiration , if there is a suitable electron acceptor other than oxygen and the organism is able to perform anaerobic respiration . <hl> The aerotolerant anaerobe s in tube D are indifferent to the presence of oxygen . They do not use oxygen because they usually have a fermentative metabolism , but they are not harmed by the presence of oxygen as obligate anaerobes are . Tube E on the right shows a “ Goldilocks ” culture . The oxygen level has to be just right for growth , not too much and not too little . These microaerophile s are bacteria that require a minimum level of oxygen for growth , about 1 % – 10 % , well below the 21 % found in the atmosphere .", "hl_sentences": "The growth of bacteria with varying oxygen requirements in thioglycolate tubes is illustrated in Figure 9.20 . Tube C shows heavy growth at the top of the tube and growth throughout the tube , a typical result with facultative anaerobe s . Facultative anaerobes are organisms that thrive in the presence of oxygen but also grow in its absence by relying on fermentation or anaerobic respiration , if there is a suitable electron acceptor other than oxygen and the organism is able to perform anaerobic respiration .", "question": { "cloze_format": "An inoculated thioglycolate medium culture tube shows dense growth at the surface and turbidity throughout the rest of the tube. Your conclusion is that ___ .", "normal_format": "An inoculated thioglycolate medium culture tube shows dense growth at the surface and turbidity throughout the rest of the tube. What is your conclusion?", "question_choices": [ "The organisms die in the presence of oxygen", "The organisms are facultative anaerobes.", "The organisms should be grown in an anaerobic chamber.", "The organisms are obligate aerobes." ], "question_id": "fs-id1172100830414", "question_text": "An inoculated thioglycolate medium culture tube shows dense growth at the surface and turbidity throughout the rest of the tube. 
What is your conclusion?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> The growth of bacteria with varying oxygen requirements in thioglycolate tubes is illustrated in Figure 9.20 . <hl> In tube A , all the growth is seen at the top of the tube . The bacteria are obligate ( strict ) aerobe s that cannot grow without an abundant supply of oxygen . Tube B looks like the opposite of tube A . Bacteria grow at the bottom of tube B . Those are obligate anaerobe s , which are killed by oxygen . Tube C shows heavy growth at the top of the tube and growth throughout the tube , a typical result with facultative anaerobe s . Facultative anaerobes are organisms that thrive in the presence of oxygen but also grow in its absence by relying on fermentation or anaerobic respiration , if there is a suitable electron acceptor other than oxygen and the organism is able to perform anaerobic respiration . <hl> The aerotolerant anaerobe s in tube D are indifferent to the presence of oxygen . <hl> <hl> They do not use oxygen because they usually have a fermentative metabolism , but they are not harmed by the presence of oxygen as obligate anaerobes are . <hl> Tube E on the right shows a “ Goldilocks ” culture . The oxygen level has to be just right for growth , not too much and not too little . These microaerophile s are bacteria that require a minimum level of oxygen for growth , about 1 % – 10 % , well below the 21 % found in the atmosphere .", "hl_sentences": "The growth of bacteria with varying oxygen requirements in thioglycolate tubes is illustrated in Figure 9.20 . The aerotolerant anaerobe s in tube D are indifferent to the presence of oxygen . They do not use oxygen because they usually have a fermentative metabolism , but they are not harmed by the presence of oxygen as obligate anaerobes are .", "question": { "cloze_format": "An inoculated thioglycolate medium culture tube is clear throughout the tube except for dense growth at the bottom of the tube. The conclusion is that ___", "normal_format": "An inoculated thioglycolate medium culture tube is clear throughout the tube except for dense growth at the bottom of the tube. What is your conclusion?", "question_choices": [ "The organisms are obligate anaerobes.", "The organisms are facultative anaerobes.", "The organisms are aerotolerant.", "The organisms are obligate aerobes." ], "question_id": "fs-id1172098359319", "question_text": "An inoculated thioglycolate medium culture tube is clear throughout the tube except for dense growth at the bottom of the tube. What is your conclusion?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Bacteria that grow best in a higher concentration of CO 2 and a lower concentration of oxygen than present in the atmosphere are called capnophiles . <hl> One common approach to grow capnophiles is to use a candle jar . A candle jar consists of a jar with a tight-fitting lid that can accommodate the cultures and a candle . After the cultures are added to the jar , the candle is lit and the lid closed . 
As the candle burns , it consumes most of the oxygen present and releases CO 2 .", "hl_sentences": "Bacteria that grow best in a higher concentration of CO 2 and a lower concentration of oxygen than present in the atmosphere are called capnophiles .", "question": { "cloze_format": "The reason that the instructions for the growth of Neisseria gonorrhoeae recommend a CO2-enriched atmosphere is that ___ .", "normal_format": "Why do the instructions for the growth of Neisseria gonorrhoeae recommend a CO2-enriched atmosphere?", "question_choices": [ "It uses CO2 as a final electron acceptor in respiration.", "It is an obligate anaerobe.", "It is a capnophile.", "It fixes CO2 through photosynthesis." ], "question_id": "fs-id1172098560868", "question_text": "Why do the instructions for the growth of Neisseria gonorrhoeae recommend a CO2-enriched atmosphere?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> Microorganisms that grow optimally at pH less than 5.55 are called acidophile s . For example , the sulfur-oxidizing Sulfolobus spp . <hl> isolated from sulfur mud fields and hot springs in Yellowstone National Park are extreme acidophiles . These archaea survive at pH values of 2.5 – 3.5 . Species of the archaean genus Ferroplasma live in acid mine drainage at pH values of 0 – 2.9 . Lactobacillus bacteria , which are an important part of the normal microbiota of the vagina , can tolerate acidic environments at pH values 3.5 – 6.8 and also contribute to the acidity of the vagina ( pH of 4 , except at the onset of menstruation ) through their metabolic production of lactic acid . The vagina ’ s acidity plays an important role in inhibiting other microbes that are less tolerant of acidity . Acidophilic microorganisms display a number of adaptations to survive in strong acidic environments . For example , proteins show increased negative surface charge that stabilizes them at low pH . Pumps actively eject H + ions out of the cells . The changes in the composition of membrane phospholipids probably reflect the need to maintain membrane fluidity at low pH .", "hl_sentences": "Microorganisms that grow optimally at pH less than 5.55 are called acidophile s . For example , the sulfur-oxidizing Sulfolobus spp .", "question": { "cloze_format": "Bacteria that grow in mine drainage at pH 1–2 are probably ___.", "normal_format": "Bacteria that grow in mine drainage at pH 1–2 are probably which of the following?", "question_choices": [ "alkaliphiles", "acidophiles", "neutrophiles", "obligate anaerobes" ], "question_id": "fs-id1172099736093", "question_text": "Bacteria that grow in mine drainage at pH 1–2 are probably which of the following?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> At the other end of the spectrum are alkaliphile s , microorganisms that grow best at pH between 8.0 and 10.5 . <hl> Vibrio cholerae , the pathogenic agent of cholera , grows best at the slightly basic pH of 8.0 ; it can survive pH values of 11.0 but is inactivated by the acid of the stomach . When it comes to survival at high pH , the bright pink archaean Natronobacterium , found in the soda lakes of the African Rift Valley , may hold the record at a pH of 10.5 ( Figure 9.27 ) . 
Extreme alkaliphiles have adapted to their harsh environment through evolutionary modification of lipid and protein structure and compensatory mechanisms to maintain the proton motive force in an alkaline environment . For example , the alkaliphile Bacillus firmus derives the energy for transport reactions and motility from a Na + ion gradient rather than a proton motive force . Many enzymes from alkaliphiles have a higher isoelectric point , due to an increase in the number of basic amino acids , than homologous enzymes from neutrophiles .", "hl_sentences": "At the other end of the spectrum are alkaliphile s , microorganisms that grow best at pH between 8.0 and 10.5 .", "question": { "cloze_format": "Bacteria isolated from Lake Natron, where the water pH is close to 10, are ___.", "normal_format": "Bacteria isolated from Lake Natron, where the water pH is close to 10, are which of the following?", "question_choices": [ "alkaliphiles", "facultative anaerobes", "neutrophiles", "obligate anaerobes" ], "question_id": "fs-id1172101736995", "question_text": "Bacteria isolated from Lake Natron, where the water pH is close to 10, are which of the following?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> Microorganisms that grow optimally at pH less than 5.55 are called acidophile s . For example , the sulfur-oxidizing Sulfolobus spp . <hl> isolated from sulfur mud fields and hot springs in Yellowstone National Park are extreme acidophiles . These archaea survive at pH values of 2.5 – 3.5 . Species of the archaean genus Ferroplasma live in acid mine drainage at pH values of 0 – 2.9 . Lactobacillus bacteria , which are an important part of the normal microbiota of the vagina , can tolerate acidic environments at pH values 3.5 – 6.8 and also contribute to the acidity of the vagina ( pH of 4 , except at the onset of menstruation ) through their metabolic production of lactic acid . The vagina ’ s acidity plays an important role in inhibiting other microbes that are less tolerant of acidity . Acidophilic microorganisms display a number of adaptations to survive in strong acidic environments . For example , proteins show increased negative surface charge that stabilizes them at low pH . Pumps actively eject H + ions out of the cells . The changes in the composition of membrane phospholipids probably reflect the need to maintain membrane fluidity at low pH .", "hl_sentences": "Microorganisms that grow optimally at pH less than 5.55 are called acidophile s . For example , the sulfur-oxidizing Sulfolobus spp .", "question": { "cloze_format": "___ is the environment in which you are most likely to encounter an acidophile.", "normal_format": "In which environment are you most likely to encounter an acidophile?", "question_choices": [ "human blood at pH 7.2", "a hot vent at pH 1.5", "human intestine at pH 8.5", "milk at pH 6.5" ], "question_id": "fs-id1172101802674", "question_text": "In which environment are you most likely to encounter an acidophile?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> Organisms called psychrotrophs , also known as psychrotolerant , prefer cooler environments , from a high temperature of 25 ° C to refrigeration temperature about 4 ° C . <hl> They are found in many natural environments in temperate climates . 
They are also responsible for the spoilage of refrigerated food .", "hl_sentences": "Organisms called psychrotrophs , also known as psychrotolerant , prefer cooler environments , from a high temperature of 25 ° C to refrigeration temperature about 4 ° C .", "question": { "cloze_format": "A soup container was forgotten in the refrigerator and shows contamination. The contaminants are probably ___ .", "normal_format": "A soup container was forgotten in the refrigerator and shows contamination. The contaminants are probably which of the following?", "question_choices": [ "thermophiles", "acidophiles", "mesophiles", "psychrotrophs" ], "question_id": "fs-id11721017369950", "question_text": "A soup container was forgotten in the refrigerator and shows contamination. The contaminants are probably which of the following?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Organisms categorized as mesophile s ( “ middle loving ” ) are adapted to moderate temperatures , with optimal growth temperatures ranging from room temperature ( about 20 ° C ) to about 45 ° C . <hl> As would be expected from the core temperature of the human body , 37 ° C ( 98.6 ° F ) , normal human microbiota and pathogens ( e . g . , E . coli , Salmonella spp . , and Lactobacillus spp . ) are mesophiles .", "hl_sentences": "Organisms categorized as mesophile s ( “ middle loving ” ) are adapted to moderate temperatures , with optimal growth temperatures ranging from room temperature ( about 20 ° C ) to about 45 ° C .", "question": { "cloze_format": "Bacteria isolated from a hot tub at 39 °C are probably ___.", "normal_format": "Bacteria isolated from a hot tub at 39 °C are probably which of the following?", "question_choices": [ "thermophiles", "psychrotrophs", "mesophiles", "hyperthermophiles" ], "question_id": "fs-id1172099896238", "question_text": "Bacteria isolated from a hot tub at 39 °C are probably which of the following?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "The organisms retrieved from arctic lakes such as Lake Whillans are considered extreme psychrophile s ( cold loving ) . Psychrophiles are microorganisms that can grow at 0 ° C and below , have an optimum growth temperature close to 15 ° C , and usually do not survive at temperatures above 20 ° C . They are found in permanently cold environments such as the deep waters of the oceans . Because they are active at low temperature , psychrophiles and psychrotrophs are important decomposers in cold climates . Organisms that grow at optimum temperatures of 50 ° C to a maximum of 80 ° C are called thermophiles ( “ heat loving ” ) . They do not multiply at room temperature . Thermophiles are widely distributed in hot springs , geothermal soils , and manmade environments such as garden compost piles where the microbes break down kitchen scraps and vegetal material . Examples of thermophiles include Thermus aquaticus and Geobacillus spp . <hl> Higher up on the extreme temperature scale we find the hyperthermophiles , which are characterized by growth ranges from 80 ° C to a maximum of 110 ° C , with some extreme examples that survive temperatures above 121 ° C , the average temperature of an autoclave . <hl> <hl> The hydrothermal vents at the bottom of the ocean are a prime example of extreme environments , with temperatures reaching an estimated 340 ° C ( Figure 9.28 ) . 
<hl> Microbes isolated from the vents achieve optimal growth at temperatures higher than 100 ° C . Noteworthy examples are Pyrobolus and Pyrodictium , archaea that grow at 105 ° C and survive autoclaving . Figure 9.29 shows the typical skewed curves of temperature-dependent growth for the categories of microorganisms we have discussed .", "hl_sentences": "Higher up on the extreme temperature scale we find the hyperthermophiles , which are characterized by growth ranges from 80 ° C to a maximum of 110 ° C , with some extreme examples that survive temperatures above 121 ° C , the average temperature of an autoclave . The hydrothermal vents at the bottom of the ocean are a prime example of extreme environments , with temperatures reaching an estimated 340 ° C ( Figure 9.28 ) .", "question": { "cloze_format": "The environment in which you are most likely to encounter a hyperthermophile is a ___ .", "normal_format": "In which environment are you most likely to encounter a hyperthermophile?", "question_choices": [ "hot tub", "warm ocean water in Florida", "hydrothermal vent at the bottom of the ocean", "human body" ], "question_id": "fs-id1172099387798", "question_text": "In which environment are you most likely to encounter a hyperthermophile?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "The organisms retrieved from arctic lakes such as Lake Whillans are considered extreme psychrophile s ( cold loving ) . <hl> Psychrophiles are microorganisms that can grow at 0 ° C and below , have an optimum growth temperature close to 15 ° C , and usually do not survive at temperatures above 20 ° C . <hl> <hl> They are found in permanently cold environments such as the deep waters of the oceans . <hl> Because they are active at low temperature , psychrophiles and psychrotrophs are important decomposers in cold climates . Organisms that grow at optimum temperatures of 50 ° C to a maximum of 80 ° C are called thermophiles ( “ heat loving ” ) . They do not multiply at room temperature . Thermophiles are widely distributed in hot springs , geothermal soils , and manmade environments such as garden compost piles where the microbes break down kitchen scraps and vegetal material . Examples of thermophiles include Thermus aquaticus and Geobacillus spp . Higher up on the extreme temperature scale we find the hyperthermophiles , which are characterized by growth ranges from 80 ° C to a maximum of 110 ° C , with some extreme examples that survive temperatures above 121 ° C , the average temperature of an autoclave . The hydrothermal vents at the bottom of the ocean are a prime example of extreme environments , with temperatures reaching an estimated 340 ° C ( Figure 9.28 ) . Microbes isolated from the vents achieve optimal growth at temperatures higher than 100 ° C . Noteworthy examples are Pyrobolus and Pyrodictium , archaea that grow at 105 ° C and survive autoclaving . Figure 9.29 shows the typical skewed curves of temperature-dependent growth for the categories of microorganisms we have discussed .", "hl_sentences": "Psychrophiles are microorganisms that can grow at 0 ° C and below , have an optimum growth temperature close to 15 ° C , and usually do not survive at temperatures above 20 ° C . 
They are found in permanently cold environments such as the deep waters of the oceans .", "question": { "cloze_format": "The environment that would harbor psychrophiles is (a) ___.", "normal_format": "Which of the following environments would harbor psychrophiles?", "question_choices": [ "mountain lake with a water temperature of 12 °C", "contaminated plates left in a 35 °C incubator", "yogurt cultured at room temperature", "salt pond in the desert with a daytime temperature of 34 °C" ], "question_id": "fs-id1172099536151", "question_text": "Which of the following environments would harbor psychrophiles?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> Microorganisms depend on available water to grow . <hl> <hl> Available moisture is measured as water activity ( a w ) , which is the ratio of the vapor pressure of the medium of interest to the vapor pressure of pure distilled water ; therefore , the a w of water is equal to 1.0 . <hl> <hl> Bacteria require high a w ( 0.97 – 0.99 ) , whereas fungi can tolerate drier environments ; for example , the range of a w for growth of Aspergillus spp . <hl> is 0.8 – 0.75 . <hl> Decreasing the water content of foods by drying , as in jerky , or through freeze-drying or by increasing osmotic pressure , as in brine and jams , are common methods of preventing spoilage . <hl>", "hl_sentences": "Microorganisms depend on available water to grow . Available moisture is measured as water activity ( a w ) , which is the ratio of the vapor pressure of the medium of interest to the vapor pressure of pure distilled water ; therefore , the a w of water is equal to 1.0 . Bacteria require high a w ( 0.97 – 0.99 ) , whereas fungi can tolerate drier environments ; for example , the range of a w for growth of Aspergillus spp . Decreasing the water content of foods by drying , as in jerky , or through freeze-drying or by increasing osmotic pressure , as in brine and jams , are common methods of preventing spoilage .", "question": { "cloze_format": "___ is the reason jams and dried meats often do not require refrigeration to prevent spoilage.", "normal_format": "Which of the following is the reason jams and dried meats often do not require refrigeration to prevent spoilage?", "question_choices": [ "low pH", "toxic alkaline chemicals", "naturally occurring antibiotics", "low water activity" ], "question_id": "fs-id1172101005919", "question_text": "Which of the following is the reason jams and dried meats often do not require refrigeration to prevent spoilage?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Dunaliella spp . counters the tremendous osmotic pressure of the environment with a high cytoplasmic concentration of glycerol and by actively pumping out salt ions . Halobacterium spp . accumulates large concentrations of K + and other ions in its cytoplasm . <hl> Its proteins are designed for high salt concentrations and lose activity at salt concentrations below 1 – 2 M . Although most halotolerant organisms , for example Halomonas spp . <hl> <hl> in salt marshes , do not need high concentrations of salt for growth , they will survive and divide in the presence of high salt . <hl> Not surprisingly , the staphylococci , micrococci , and corynebacteria that colonize our skin tolerate salt in their environment . 
Halotolerant pathogens are an important cause of food-borne illnesses because they survive and multiply in salty food . For example , the halotolerant bacteria S . aureus , Bacillus cereus , and V . cholerae produce dangerous enterotoxins and are major causes of food poisoning .", "hl_sentences": "Its proteins are designed for high salt concentrations and lose activity at salt concentrations below 1 – 2 M . Although most halotolerant organisms , for example Halomonas spp . in salt marshes , do not need high concentrations of salt for growth , they will survive and divide in the presence of high salt .", "question": { "cloze_format": "Bacteria living in salt marshes are most likely ___.", "normal_format": "Bacteria living in salt marshes are most likely which of the following?", "question_choices": [ "acidophiles", "barophiles", "halotolerant", "thermophiles" ], "question_id": "fs-id1172100754712", "question_text": "Bacteria living in salt marshes are most likely which of the following?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> The number of available media to grow bacteria is considerable . <hl> Some media are considered general all-purpose media and support growth of a large variety of organisms . A prime example of an all-purpose medium is tryptic soy broth ( TSB ) . Specialized media are used in the identification of bacteria and are supplemented with dyes , pH indicators , or antibiotics . <hl> One type , enriched media , contains growth factors , vitamins , and other essential nutrients to promote the growth of fastidious organisms , organisms that cannot make certain nutrients and require them to be added to the medium . <hl> When the complete chemical composition of a medium is known , it is called a chemically defined medium . For example , in EZ medium , all individual chemical components are identified and the exact amounts of each is known . In complex media , which contain extracts and digests of yeasts , meat , or plants , the precise chemical composition of the medium is not known . Amounts of individual components are undetermined and variable . Nutrient broth , tryptic soy broth , and brain heart infusion , are all examples of complex media .", "hl_sentences": "The number of available media to grow bacteria is considerable . One type , enriched media , contains growth factors , vitamins , and other essential nutrients to promote the growth of fastidious organisms , organisms that cannot make certain nutrients and require them to be added to the medium .", "question": { "cloze_format": "Haemophilus influenzae must be grown on chocolate agar, which is blood agar treated with heat to release growth factors in the medium. H. influenzae is described as ________.", "normal_format": "Haemophilus influenzae must be grown on chocolate agar, which is blood agar treated with heat to release growth factors in the medium. How is H. influenzae described as?", "question_choices": [ "an acidophile", "a thermophile", "an obligate anaerobe", "fastidious" ], "question_id": "fs-id1172101913252", "question_text": "Haemophilus influenzae must be grown on chocolate agar, which is blood agar treated with heat to release growth factors in the medium. H. influenzae is described as ________." }, "references_are_paraphrase": 0 } ]
9
9.1 How Microbes Grow Learning Objectives Define the generation time for growth based on binary fission Identify and describe the activities of microorganisms undergoing typical phases of binary fission (simple cell division) in a growth curve Explain several laboratory methods used to determine viable and total cell counts in populations undergoing exponential growth Describe examples of cell division not involving binary fission, such as budding or fragmentation Describe the formation and characteristics of biofilms Identify health risks associated with biofilms and how they are addressed Describe quorum sensing and its role in cell-to-cell communication and coordination of cellular activities Clinical Focus Part 1 Jeni, a 24-year-old pregnant woman in her second trimester, visits a clinic with complaints of high fever, 38.9 °C (102 °F), fatigue, and muscle aches—typical flu-like signs and symptoms. Jeni exercises regularly and follows a nutritious diet with emphasis on organic foods, including raw milk that she purchases from a local farmer’s market. All of her immunizations are up to date. However, the health-care provider who sees Jeni is concerned and orders a blood sample to be sent for testing by the microbiology laboratory. Why is the health-care provider concerned about Jeni’s signs and symptoms? Jump to the next Clinical Focus box. The bacterial cell cycle involves the formation of new cells through the replication of DNA and partitioning of cellular components into two daughter cells. In prokaryotes, reproduction is always asexual, although extensive genetic recombination in the form of horizontal gene transfer takes place, as will be explored in a different chapter. Most bacteria have a single circular chromosome; however, some exceptions exist. For example, Borrelia burgdorferi, the causative agent of Lyme disease, has a linear chromosome. Binary Fission The most common mechanism of cell replication in bacteria is a process called binary fission, which is depicted in Figure 9.2. Before dividing, the cell grows and increases its number of cellular components. Next, the replication of DNA starts at a location on the circular chromosome called the origin of replication, where the chromosome is attached to the inner cell membrane. Replication continues in opposite directions along the chromosome until the terminus is reached. The center of the enlarged cell constricts until two daughter cells are formed, each offspring receiving a complete copy of the parental genome and a division of the cytoplasm (cytokinesis). This process of cytokinesis and cell division is directed by a protein called FtsZ. FtsZ assembles into a Z ring on the cytoplasmic membrane (Figure 9.3). The Z ring is anchored by FtsZ-binding proteins and defines the division plane between the two daughter cells. Additional proteins required for cell division are added to the Z ring to form a structure called the divisome. The divisome activates to produce a peptidoglycan cell wall and build a septum that divides the two daughter cells. The daughter cells are separated by the division septum, where all of the cells’ outer layers (the cell wall and outer membranes, if present) must be remodeled to complete division. For example, we know that specific enzymes break bonds between the monomers in peptidoglycans and allow addition of new subunits along the division septum. Check Your Understanding What is the name of the protein that assembles into a Z ring to initiate cytokinesis and cell division?
Generation Time In eukaryotic organisms, the generation time is the time between the same points of the life cycle in two successive generations. For example, the typical generation time for the human population is 25 years. This definition is not practical for bacteria, which may reproduce rapidly or remain dormant for thousands of years. In prokaryotes (Bacteria and Archaea), the generation time is also called the doubling time and is defined as the time it takes for the population to double through one round of binary fission. Bacterial doubling times vary enormously. Whereas Escherichia coli can double in as little as 20 minutes under optimal growth conditions in the laboratory, bacteria of the same species may need several days to double in especially harsh environments. Most pathogens grow rapidly, like E. coli, but there are exceptions. For example, Mycobacterium tuberculosis, the causative agent of tuberculosis, has a generation time of between 15 and 20 hours. On the other hand, M. leprae, which causes Hansen’s disease (leprosy), grows much more slowly, with a doubling time of 14 days. Micro Connections Calculating Number of Cells It is possible to predict the number of cells in a population when they divide by binary fission at a constant rate. As an example, consider what happens if a single cell divides every 30 minutes for 24 hours. The diagram in Figure 9.4 shows the increase in cell numbers for the first three generations. The number of cells increases exponentially and can be expressed as 2^n, where n is the number of generations. If cells divide every 30 minutes, after 24 hours, 48 divisions would have taken place. If we apply the formula 2^n, where n is equal to 48, the single cell would give rise to 2^48 or 281,474,976,710,656 cells at 48 generations (24 hours). When dealing with such huge numbers, it is more practical to use scientific notation. Therefore, we express the number of cells as 2.8 × 10^14 cells. In our example, we used one cell as the initial number of cells. For any number of starting cells, the formula is adapted as follows: N_n = N_0 × 2^n, where N_n is the number of cells at any generation n, N_0 is the initial number of cells, and n is the number of generations. Check Your Understanding With a doubling time of 30 minutes and a starting population size of 1 × 10^5 cells, how many cells will be present after 2 hours, assuming no cell death? The Growth Curve Microorganisms grown in closed culture (also known as a batch culture), in which no nutrients are added and most waste is not removed, follow a reproducible growth pattern referred to as the growth curve. An example of a batch culture in nature is a pond in which a small number of cells grow in a closed environment. The culture density is defined as the number of cells per unit volume. In a closed environment, the culture density is also a measure of the number of cells in the population. Infections of the body do not always follow the growth curve, but correlations can exist depending upon the site and type of infection. When the number of live cells is plotted against time, distinct phases can be observed in the curve (Figure 9.5). The Lag Phase The beginning of the growth curve represents a small number of cells, referred to as an inoculum, that are added to a fresh culture medium, a nutritional broth that supports growth. The initial phase of the growth curve is called the lag phase, during which cells are gearing up for the next phase of growth.
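Returning briefly to the Micro Connections calculation above: the doubling arithmetic is easy to script. The following is a minimal Python sketch of the same calculation, assuming a constant doubling time and no cell death; the function names are illustrative, not from the text.

def generations(elapsed_minutes, doubling_minutes):
    # Completed rounds of binary fission in the elapsed time.
    return elapsed_minutes // doubling_minutes

def cell_count(n0, n):
    # The formula from the text: N_n = N_0 × 2^n.
    return n0 * 2 ** n

# Worked example from the text: one cell dividing every 30 minutes for 24 hours.
n = generations(24 * 60, 30)                       # 48 generations
print(cell_count(1, n))                            # 281474976710656, about 2.8 × 10^14

# Check Your Understanding: 1 × 10^5 starting cells, 30-minute doubling, 2 hours.
print(cell_count(10**5, generations(2 * 60, 30)))  # 1600000, i.e., 1.6 × 10^6 cells

Running the sketch reproduces the 2.8 × 10^14 figure from the example and answers the Check Your Understanding question (1.6 × 10^6 cells).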
The number of cells does not change during the lag phase; however, cells grow larger and are metabolically active, synthesizing proteins needed to grow within the medium. If any cells were damaged or shocked during the transfer to the new medium, repair takes place during the lag phase. The duration of the lag phase is determined by many factors, including the species and genetic make-up of the cells, the composition of the medium, and the size of the original inoculum. The Log Phase In the logarithmic (log) growth phase, sometimes called exponential growth phase, the cells are actively dividing by binary fission and their number increases exponentially. For any given bacterial species, the generation time under specific growth conditions (nutrients, temperature, pH, and so forth) is genetically determined, and this generation time is called the intrinsic growth rate. During the log phase, the relationship between time and number of cells is not linear but exponential; however, the growth curve is often plotted on a semilogarithmic graph, as shown in Figure 9.6, which gives the appearance of a linear relationship. Cells in the log phase show a constant growth rate and uniform metabolic activity. For this reason, cells in the log phase are preferentially used for industrial applications and research work. The log phase is also the stage where bacteria are the most susceptible to the action of disinfectants and common antibiotics that affect protein, DNA, and cell-wall synthesis. Stationary Phase As the number of cells increases through the log phase, several factors contribute to a slowing of the growth rate. Waste products accumulate and nutrients are gradually used up. In addition, gradual depletion of oxygen begins to limit aerobic cell growth. This combination of unfavorable conditions slows and finally stalls population growth. The total number of live cells reaches a plateau referred to as the stationary phase (Figure 9.5). In this phase, the number of new cells created by cell division is now equivalent to the number of cells dying; thus, the total population of living cells is relatively stagnant. The culture density in a stationary culture is constant. The culture’s carrying capacity, or maximum culture density, depends on the types of microorganisms in the culture and the specific conditions of the culture; however, carrying capacity is constant for a given organism grown under the same conditions. During the stationary phase, cells switch to a survival mode of metabolism. As growth slows, so too does the synthesis of peptidoglycans, proteins, and nucleic acids; thus, stationary cultures are less susceptible to antibiotics that disrupt these processes. In bacteria capable of producing endospores, many cells undergo sporulation during the stationary phase. Secondary metabolites, including antibiotics, are synthesized in the stationary phase. In certain pathogenic bacteria, the stationary phase is also associated with the expression of virulence factors, products that contribute to a microbe’s ability to survive, reproduce, and cause disease in a host organism. For example, quorum sensing in Staphylococcus aureus initiates the production of enzymes that can break down human tissue and cellular debris, clearing the way for bacteria to spread to new tissue where nutrients are more plentiful. The Death Phase As a culture medium accumulates toxic waste and nutrients are exhausted, cells die in greater and greater numbers.
Soon, the number of dying cells exceeds the number of dividing cells, leading to an exponential decrease in the number of cells ( Figure 9.5 ). This is the aptly named death phase , sometimes called the decline phase. Many cells lyse and release nutrients into the medium, allowing surviving cells to maintain viability and form endospores. A few cells, the so-called persisters , are characterized by a slow metabolic rate. Persister cells are medically important because they are associated with certain chronic infections, such as tuberculosis, that do not respond to antibiotic treatment. Sustaining Microbial Growth The growth pattern shown in Figure 9.5 takes place in a closed environment; nutrients are not added and waste and dead cells are not removed. In many cases, though, it is advantageous to maintain cells in the logarithmic phase of growth. One example is in industries that harvest microbial products. A chemostat ( Figure 9.7 ) is used to maintain a continuous culture in which nutrients are supplied at a steady rate. A controlled amount of air is mixed in for aerobic processes. Bacterial suspension is removed at the same rate as nutrients flow in to maintain an optimal growth environment. Check Your Understanding During which phase does growth occur at the fastest rate? Name two factors that limit microbial growth. Measurement of Bacterial Growth Estimating the number of bacterial cells in a sample, known as a bacterial count, is a common task performed by microbiologists. The number of bacteria in a clinical sample serves as an indication of the extent of an infection. Quality control of drinking water, food, medication, and even cosmetics relies on estimates of bacterial counts to detect contamination and prevent the spread of disease. Two major approaches are used to measure cell number. The direct methods involve counting cells, whereas the indirect methods depend on the measurement of cell presence or activity without actually counting individual cells. Both direct and indirect methods have advantages and disadvantages for specific applications. Direct Cell Count Direct cell count refers to counting the cells in a liquid culture or colonies on a plate. It is a direct way of estimating how many organisms are present in a sample. Let’s look first at a simple and fast method that requires only a specialized slide and a compound microscope. The simplest way to count bacteria is called the direct microscopic cell count , which involves transferring a known volume of a culture to a calibrated slide and counting the cells under a light microscope. The calibrated slide is called a Petroff-Hausser chamber ( Figure 9.8 ) and is similar to a hemocytometer used to count red blood cells. The central area of the counting chamber is etched into squares of various sizes. A sample of the culture suspension is added to the chamber under a coverslip that is placed at a specific height from the surface of the grid. It is possible to estimate the concentration of cells in the original sample by counting individual cells in a number of squares and determining the volume of the sample observed. The area of the squares and the height at which the coverslip is positioned are specified for the chamber. The concentration must be corrected for dilution if the sample was diluted before enumeration. Cells in several small squares must be counted and the average taken to obtain a reliable measurement. The advantages of the chamber are that the method is easy to use, relatively fast, and inexpensive. 
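As a rough illustration of the chamber arithmetic just described, the sketch below computes a concentration from counts in several squares. The chamber dimensions and counts here are hypothetical examples chosen for the illustration, not specifications of an actual Petroff-Hausser chamber.

```python
def cells_per_ml(total_cells, squares_counted, square_volume_ml, dilution_factor=1):
    """Concentration = (average cells per square) / (volume observed per square),
    corrected for any dilution made before loading the chamber."""
    average_per_square = total_cells / squares_counted
    return average_per_square * dilution_factor / square_volume_ml

# Hypothetical example: 160 cells counted over 8 small squares; each square
# is assumed to hold 5e-8 mL; the sample was diluted 1:100 before counting.
print(cells_per_ml(160, 8, 5e-8, dilution_factor=100))  # 4e10 cells/mL
```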
On the downside, the counting chamber does not work well with dilute cultures because there may not be enough cells to count. Using a counting chamber does not necessarily yield an accurate count of the number of live cells because it is not always possible to distinguish between live cells, dead cells, and debris of the same size under the microscope. However, newly developed fluorescence staining techniques make it possible to distinguish viable and dead bacteria. These viability stains (or live stains) bind to nucleic acids, but the primary and secondary stains differ in their ability to cross the cytoplasmic membrane. The primary stain, which fluoresces green, can penetrate intact cytoplasmic membranes, staining both live and dead cells. The secondary stain, which fluoresces red, can stain a cell only if the cytoplasmic membrane is considerably damaged. Thus, live cells fluoresce green because they only absorb the green stain, whereas dead cells appear red because the red stain displaces the green stain on their nucleic acids (Figure 9.9). Another technique uses an electronic cell counting device (Coulter counter) to detect and count the changes in electrical resistance in a saline solution. A glass tube with a small opening is immersed in an electrolyte solution. A first electrode is suspended in the glass tube. A second electrode is located outside of the tube. As cells are drawn through the small aperture in the glass tube, they briefly change the resistance measured between the two electrodes and the change is recorded by an electronic sensor (Figure 9.10); each resistance change represents a cell. The method is rapid and accurate within a range of concentrations; however, if the culture is too concentrated, more than one cell may pass through the aperture at any given time and skew the results. This method also does not differentiate between live and dead cells. Direct counts provide an estimate of the total number of cells in a sample. However, in many situations, it is important to know the number of live, or viable, cells. Counts of live cells are needed when assessing the extent of an infection, the effectiveness of antimicrobial compounds and medication, or contamination of food and water. Check Your Understanding Why would you count the number of cells in more than one square in the Petroff-Hausser chamber to estimate cell numbers? In the viability staining method, why do dead cells appear red? Plate Count The viable plate count, or simply plate count, is a count of viable or live cells. It is based on the principle that viable cells replicate and give rise to visible colonies when incubated under suitable conditions for the specimen. The results are usually expressed as colony-forming units per milliliter (CFU/mL) rather than cells per milliliter because more than one cell may have landed on the same spot to give rise to a single colony. Furthermore, samples of bacteria that grow in clusters or chains are difficult to disperse and a single colony may represent several cells. Some cells are described as viable but nonculturable and will not form colonies on solid media. For all these reasons, the viable plate count is considered a low estimate of the actual number of live cells. These limitations do not detract from the usefulness of the method, which provides estimates of live bacterial numbers. Microbiologists typically count plates with 30–300 colonies.
Samples with too few colonies (<30) do not give statistically reliable numbers, and overcrowded plates (>300 colonies) make it difficult to accurately count individual colonies. Also, counts in this range minimize occurrences of more than one bacterial cell forming a single colony. Thus, the calculated CFU is closer to the true number of live bacteria in the population. There are two common approaches to inoculating plates for viable counts: the pour plate and the spread plate methods. Although the final inoculation procedure differs between these two methods, they both start with a serial dilution of the culture. Serial Dilution The serial dilution of a culture is an important first step before proceeding to either the pour plate or spread plate method. The goal of the serial dilution process is to obtain plates with CFUs in the range of 30–300, and the process usually involves several dilutions in multiples of 10 to simplify calculation. The number of serial dilutions is chosen according to a preliminary estimate of the culture density. Figure 9.11 illustrates the serial dilution method. A fixed volume of the original culture, 1.0 mL, is added to and thoroughly mixed with the first dilution tube solution, which contains 9.0 mL of sterile broth. This step represents a dilution factor of 10, or 1:10, compared with the original culture. From this first dilution, the same volume, 1.0 mL, is withdrawn and mixed with a fresh tube of 9.0 mL of dilution solution. The dilution factor is now 1:100 compared with the original culture. This process continues until a series of dilutions is produced that will bracket the desired cell concentration for accurate counting. From each tube, a sample is plated on solid medium using either the pour plate method ( Figure 9.12 ) or the spread plate method ( Figure 9.13 ). The plates are incubated until colonies appear. Two to three plates are usually prepared from each dilution and the numbers of colonies counted on each plate are averaged. In all cases, thorough mixing of samples with the dilution medium (to ensure the cell distribution in the tube is random) is paramount to obtaining reliable results. The dilution factor is used to calculate the number of cells in the original cell culture. In our example, an average of 50 colonies was counted on the plates obtained from the 1:10,000 dilution. Because only 0.1 mL of suspension was pipetted on the plate, the multiplier required to reconstitute the original concentration is 10 × 10,000. The number of CFU per mL is equal to 50 × 10 × 10,000 = 5,000,000. The number of bacteria in the culture is estimated as 5 million cells/mL. The colony count obtained from the 1:1000 dilution was 389, well below the expected 500 for a 10-fold difference in dilutions. This highlights the issue of inaccuracy when colony counts are greater than 300 and more than one bacterial cell grows into a single colony. A very dilute sample—drinking water, for example—may not contain enough organisms to use either of the plate count methods described. In such cases, the original sample must be concentrated rather than diluted before plating. This can be accomplished using a modification of the plate count technique called the membrane filtration technique . Known volumes are vacuum-filtered aseptically through a membrane with a pore size small enough to trap microorganisms. The membrane is transferred to a Petri plate containing an appropriate growth medium. Colonies are counted after incubation. 
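Whether the colonies come from a diluted sample or a filtered one, the arithmetic is the same. A minimal sketch follows (the function name is ours; the first example reproduces the worked numbers from the text, and the filtration numbers are hypothetical).

```python
def cfu_per_ml(avg_colonies, dilution_factor, volume_ml):
    """CFU/mL = average colonies * dilution factor / volume plated or filtered (mL)."""
    return avg_colonies * dilution_factor / volume_ml

# Worked example from the text: an average of 50 colonies on plates from
# the 1:10,000 dilution, with 0.1 mL of suspension plated:
print(cfu_per_ml(50, 10_000, 0.1))   # 5000000.0 CFU/mL, i.e., 5 million/mL

# Membrane filtration (hypothetical numbers): 64 colonies after filtering
# 100 mL of undiluted drinking water:
print(cfu_per_ml(64, 1, 100))        # 0.64 CFU/mL
```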
Calculation of the cell density is made by dividing the cell count by the volume of filtered liquid. Link to Learning Watch this video for demonstrations of serial dilutions and spread plate techniques. The Most Probable Number The number of microorganisms in dilute samples is usually too low to be detected by the plate count methods described thus far. For these specimens, microbiologists routinely use the most probable number (MPN) method, a statistical procedure for estimating the number of viable microorganisms in a sample. Often used for water and food samples, the MPN method evaluates detectable growth by observing changes in turbidity or color due to metabolic activity. A typical application of the MPN method is the estimation of the number of coliforms in a sample of pond water. Coliforms are gram-negative rod bacteria that ferment lactose. The presence of coliforms in water is considered a sign of contamination by fecal matter. For the method illustrated in Figure 9.14, a series of three dilutions of the water sample is tested by inoculating five lactose broth tubes with 10 mL of sample, five lactose broth tubes with 1 mL of sample, and five lactose broth tubes with 0.1 mL of sample. The lactose broth tubes contain a pH indicator that changes color from red to yellow when the lactose is fermented. After inoculation and incubation, the tubes are examined for an indication of coliform growth by a color change in media from red to yellow. The first set of tubes (10-mL sample) showed growth in all the tubes; the second set of tubes (1 mL) showed growth in two tubes out of five; in the third set of tubes, no growth is observed in any of the tubes (0.1-mL dilution). The numbers 5, 2, and 0 are compared with Figure B1 in Appendix B, which has been constructed using a probability model of the sampling procedure. From our reading of the table, we conclude that 49 is the most probable number of bacteria per 100 mL of pond water. Check Your Understanding What is a colony-forming unit? What two methods are frequently used to estimate bacterial numbers in water samples? Indirect Cell Counts Besides direct methods of counting cells, other methods, based on an indirect detection of cell density, are commonly used to estimate and compare cell densities in a culture. The foremost approach is to measure the turbidity (cloudiness) of a sample of bacteria in a liquid suspension. The laboratory instrument used to measure turbidity is called a spectrophotometer (Figure 9.15). In a spectrophotometer, a light beam is transmitted through a bacterial suspension, the light passing through the suspension is measured by a detector, and the amount of light passing through the sample and reaching the detector is converted to either percent transmission or a logarithmic value called absorbance (optical density). As the numbers of bacteria in a suspension increase, the turbidity also increases and causes less light to reach the detector. The decrease in light passing through the sample and reaching the detector is associated with a decrease in percent transmission and an increase in absorbance measured by the spectrophotometer. Measuring turbidity is a fast method to estimate cell density as long as there are enough cells in a sample to produce turbidity. It is possible to correlate turbidity readings to the actual number of cells by performing a viable plate count of samples taken from cultures having a range of absorbance values.
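This correlation step, and the calibration curve described in the next paragraph, can be sketched in a few lines. The absorbance/plate-count pairs below are invented for illustration only; a real calibration would use measured values and be validated over its linear range.

```python
import numpy as np

# Hypothetical calibration data: absorbance (OD600) and viable plate
# counts (CFU/mL) measured on the same cultures. Values are invented.
od  = np.array([0.05, 0.10, 0.20, 0.40, 0.80])
cfu = np.array([4e7, 8e7, 1.6e8, 3.2e8, 6.4e8])

# Fit a straight line CFU = m * OD + b over the calibrated range.
m, b = np.polyfit(od, cfu, 1)

def estimate_cfu(od_reading):
    """Valid only within the OD range used to build the curve."""
    return m * od_reading + b

print(f"{estimate_cfu(0.30):.2e}")   # ~2.40e+08 CFU/mL for this fake data
```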
Using these values, a calibration curve is generated by plotting turbidity as a function of cell density. Once the calibration curve has been produced, it can be used to estimate cell counts for all samples obtained or cultured under similar conditions and with densities within the range of values used to construct the curve. Measuring dry weight of a culture sample is another indirect method of evaluating culture density without directly measuring cell counts. The cell suspension used for weighing must be concentrated by filtration or centrifugation, washed, and then dried before the measurements are taken. The degree of drying must be standardized to account for residual water content. This method is especially useful for filamentous microorganisms, which are difficult to enumerate by direct or viable plate count. As we have seen, methods to estimate viable cell numbers can be labor intensive and take time because cells must be grown. Recently, indirect ways of measuring live cells have been developed that are both fast and easy to implement. These methods measure cell activity by following the production of metabolic products or disappearance of reactants. Adenosine triphosphate (ATP) formation, biosynthesis of proteins and nucleic acids, and consumption of oxygen can all be monitored to estimate the number of cells. Check Your Understanding What is the purpose of a calibration curve when estimating cell count from turbidity measurements? What are the newer indirect methods of counting live cells? Alternative Patterns of Cell Division Binary fission is the most common pattern of cell division in prokaryotes, but it is not the only one. Other mechanisms usually involve asymmetrical division (as in budding) or production of spores in aerial filaments. In some cyanobacteria , many nucleoids may accumulate in an enlarged round cell or along a filament, leading to the generation of many new cells at once. The new cells often split from the parent filament and float away in a process called fragmentation ( Figure 9.16 ). Fragmentation is commonly observed in the Actinomycetes , a group of gram-positive, anaerobic bacteria commonly found in soil. Another curious example of cell division in prokaryotes, reminiscent of live birth in animals, is exhibited by the giant bacterium Epulopiscium . Several daughter cells grow fully in the parent cell, which eventually disintegrates, releasing the new cells to the environment. Other species may form a long narrow extension at one pole in a process called budding . The tip of the extension swells and forms a smaller cell, the bud that eventually detaches from the parent cell. Budding is most common in yeast ( Figure 9.16 ), but it is also observed in prosthecate bacteria and some cyanobacteria. The soil bacteria Actinomyces grow in long filaments divided by septa, similar to the mycelia seen in fungi, resulting in long cells with multiple nucleoids. Environmental signals, probably related to low nutrient availability, lead to the formation of aerial filaments. Within these aerial filaments , elongated cells divide simultaneously. The new cells, which contain a single nucleoid, develop into spores that give rise to new colonies. Check Your Understanding Identify at least one difference between fragmentation and budding. Biofilms In nature, microorganisms grow mainly in biofilms , complex and dynamic ecosystems that form on a variety of environmental surfaces, from industrial conduits and water treatment pipelines to rocks in river beds. 
Biofilms are not restricted to solid surface substrates, however. Almost any surface in a liquid environment containing some minimal nutrients will eventually develop a biofilm. Microbial mats that float on water, for example, are biofilms that contain large populations of photosynthetic microorganisms. Biofilms found in the human mouth may contain hundreds of bacterial species. Regardless of the environment where they occur, biofilms are not random collections of microorganisms; rather, they are highly structured communities that provide a selective advantage to their constituent microorganisms. Biofilm Structure Observations using confocal microscopy have shown that environmental conditions influence the overall structure of biofilms. Filamentous biofilms called streamers form in rapidly flowing water, such as freshwater streams, eddies, and specially designed laboratory flow cells that replicate growth conditions in fast-moving fluids. The streamers are anchored to the substrate by a “head” and the “tail” floats downstream in the current. In still or slow-moving water, biofilms mainly assume a mushroom-like shape. The structure of biofilms may also change with other environmental conditions such as nutrient availability. Detailed observations of biofilms under confocal laser and scanning electron microscopes reveal clusters of microorganisms embedded in a matrix interspersed with open water channels. The extracellular matrix consists of extracellular polymeric substances (EPS) secreted by the organisms in the biofilm. The extracellular matrix represents a large fraction of the biofilm, accounting for 50%–90% of the total dry mass. The properties of the EPS vary according to the resident organisms and environmental conditions. EPS is a hydrated gel composed primarily of polysaccharides and containing other macromolecules such as proteins, nucleic acids, and lipids. It plays a key role in maintaining the integrity and function of the biofilm. Channels in the EPS allow movement of nutrients, waste, and gases throughout the biofilm. This keeps the cells hydrated, preventing desiccation. EPS also shelters organisms in the biofilm from predation by other microbes or cells (e.g., protozoans, white blood cells in the human body). Biofilm Formation Free-floating microbial cells that live in an aquatic environment are called planktonic cells. The formation of a biofilm essentially involves the attachment of planktonic cells to a substrate, where they become sessile (attached to a surface). This occurs in stages, as depicted in Figure 9.17 . The first stage involves the attachment of planktonic cells to a surface coated with a conditioning film of organic material. At this point, attachment to the substrate is reversible, but as cells express new phenotypes that facilitate the formation of EPS, they transition from a planktonic to a sessile lifestyle. The biofilm develops characteristic structures, including an extensive matrix and water channels. Appendages such as fimbriae , pili , and flagella interact with the EPS, and microscopy and genetic analysis suggest that such structures are required for the establishment of a mature biofilm. In the last stage of the biofilm life cycle, cells on the periphery of the biofilm revert to a planktonic lifestyle, sloughing off the mature biofilm to colonize new sites. This stage is referred to as dispersal . 
Within a biofilm, different species of microorganisms establish metabolic collaborations in which the waste product of one organism becomes the nutrient for another. For example, aerobic microorganisms consume oxygen, creating anaerobic regions that promote the growth of anaerobes. This occurs in many polymicrobial infections that involve both aerobic and anaerobic pathogens. The mechanism by which cells in a biofilm coordinate their activities in response to environmental stimuli is called quorum sensing . Quorum sensing—which can occur between cells of different species within a biofilm—enables microorganisms to detect their cell density through the release and binding of small, diffusible molecules called autoinducers . When the cell population reaches a critical threshold (a quorum), these autoinducers initiate a cascade of reactions that activate genes associated with cellular functions that are beneficial only when the population reaches a critical density. For example, in some pathogens, synthesis of virulence factors only begins when enough cells are present to overwhelm the immune defenses of the host. Although mostly studied in bacterial populations, quorum sensing takes place between bacteria and eukaryotes and between eukaryotic cells such as the fungus Candida albicans , a common member of the human microbiota that can cause infections in immunocompromised individuals. The signaling molecules in quorum sensing belong to two major classes. Gram-negative bacteria communicate mainly using N-acylated homoserine lactones, whereas gram-positive bacteria mostly use small peptides ( Figure 9.18 ). In all cases, the first step in quorum sensing consists of the binding of the autoinducer to its specific receptor only when a threshold concentration of signaling molecules is reached. Once binding to the receptor takes place, a cascade of signaling events leads to changes in gene expression. The result is the activation of biological responses linked to quorum sensing, notably an increase in the production of signaling molecules themselves, hence the term autoinducer. Biofilms and Human Health The human body harbors many types of biofilms, some beneficial and some harmful. For example, the layers of normal microbiota lining the intestinal and respiratory mucosa play a role in warding off infections by pathogens. However, other biofilms in the body can have a detrimental effect on health. For example, the plaque that forms on teeth is a biofilm that can contribute to dental and periodontal disease. Biofilms can also form in wounds, sometimes causing serious infections that can spread. The bacterium Pseudomonas aeruginosa often colonizes biofilms in the airways of patients with cystic fibrosis , causing chronic and sometimes fatal infections of the lungs. Biofilms can also form on medical devices used in or on the body, causing infections in patients with in-dwelling catheters , artificial joints, or contact lenses . Pathogens embedded within biofilms exhibit a higher resistance to antibiotics than their free-floating counterparts. Several hypotheses have been proposed to explain why. Cells in the deep layers of a biofilm are metabolically inactive and may be less susceptible to the action of antibiotics that disrupt metabolic activities. The EPS may also slow the diffusion of antibiotics and antiseptics, preventing them from reaching cells in the deeper layers of the biofilm. Phenotypic changes may also contribute to the increased resistance exhibited by bacterial cells in biofilms. 
For example, the increased production of efflux pumps, membrane-embedded proteins that actively extrude antibiotics out of bacterial cells, has been shown to be an important mechanism of antibiotic resistance among biofilm-associated bacteria. Finally, biofilms provide an ideal environment for the exchange of extrachromosomal DNA, which often includes genes that confer antibiotic resistance. Check Your Understanding What is the matrix of a biofilm composed of? What is the role of quorum sensing in a biofilm? 9.2 Oxygen Requirements for Microbial Growth Learning Objectives Interpret visual data demonstrating minimum, optimum, and maximum oxygen or carbon dioxide requirements for growth Identify and describe different categories of microbes with requirements for growth with or without oxygen: obligate aerobe, obligate anaerobe, facultative anaerobe, aerotolerant anaerobe, microaerophile, and capnophile Give examples of microorganisms for each category of growth requirements Ask most people “What are the major requirements for life?” and the answers are likely to include water and oxygen. Few would argue about the need for water, but what about oxygen? Can there be life without oxygen? The answer is that molecular oxygen (O2) is not always needed. The earliest signs of life are dated to a period when conditions on earth were highly reducing and free oxygen gas was essentially nonexistent. Only after cyanobacteria started releasing oxygen as a byproduct of photosynthesis and the capacity of iron in the oceans for taking up oxygen was exhausted did oxygen levels increase in the atmosphere. This event, often referred to as the Great Oxygenation Event or the Oxygen Revolution, caused a massive extinction. Most organisms could not survive the powerful oxidative properties of reactive oxygen species (ROS), highly unstable ions and molecules derived from partial reduction of oxygen that can damage virtually any macromolecule or structure with which they come in contact. Singlet oxygen (O2•), superoxide (O2−), peroxides (H2O2), hydroxyl radical (OH•), and hypochlorite ion (OCl−), the active ingredient of household bleach, are all examples of ROS. The organisms that were able to detoxify reactive oxygen species harnessed the high electronegativity of oxygen to produce free energy for their metabolism and thrived in the new environment. Oxygen Requirements of Microorganisms Many ecosystems are still free of molecular oxygen. Some are found in extreme locations, such as deep in the ocean or in earth’s crust; others are part of our everyday landscape, such as marshes, bogs, and sewers. Within the bodies of humans and other animals, regions with little or no oxygen provide an anaerobic environment for microorganisms (Figure 9.19). We can easily observe different requirements for molecular oxygen by growing bacteria in thioglycolate tube cultures. A test-tube culture starts with autoclaved thioglycolate medium containing a low percentage of agar to allow motile bacteria to move throughout the medium. Thioglycolate has strong reducing properties and autoclaving flushes out most of the oxygen. The tubes are inoculated with the bacterial cultures to be tested and incubated at an appropriate temperature. Over time, oxygen slowly diffuses throughout the thioglycolate tube culture from the top. Bacterial density increases in the area where oxygen concentration is best suited for the growth of that particular organism.
The growth of bacteria with varying oxygen requirements in thioglycolate tubes is illustrated in Figure 9.20. In tube A, all the growth is seen at the top of the tube. The bacteria are obligate (strict) aerobes that cannot grow without an abundant supply of oxygen. Tube B looks like the opposite of tube A. Bacteria grow at the bottom of tube B. Those are obligate anaerobes, which are killed by oxygen. Tube C shows heavy growth at the top of the tube and growth throughout the tube, a typical result with facultative anaerobes. Facultative anaerobes are organisms that thrive in the presence of oxygen but also grow in its absence by relying on fermentation or anaerobic respiration, if there is a suitable electron acceptor other than oxygen and the organism is able to perform anaerobic respiration. The aerotolerant anaerobes in tube D are indifferent to the presence of oxygen. They do not use oxygen because they usually have a fermentative metabolism, but they are not harmed by the presence of oxygen as obligate anaerobes are. Tube E on the right shows a “Goldilocks” culture. The oxygen level has to be just right for growth, not too much and not too little. These microaerophiles are bacteria that require a minimum level of oxygen for growth, about 1%–10%, well below the 21% found in the atmosphere. Examples of obligate aerobes are Mycobacterium tuberculosis, the causative agent of tuberculosis, and Micrococcus luteus, a gram-positive bacterium that colonizes the skin. Neisseria meningitidis, the causative agent of severe bacterial meningitis, and N. gonorrhoeae, the causative agent of sexually transmitted gonorrhea, are also obligate aerobes. Many obligate anaerobes are found in the environment where anaerobic conditions exist, such as in deep sediments of soil, still waters, and at the bottom of the deep ocean where there is no photosynthetic life. Anaerobic conditions also exist naturally in the intestinal tract of animals. Obligate anaerobes, mainly Bacteroidetes, represent a large fraction of the microbes in the human gut. Transient anaerobic conditions exist when tissues are not supplied with blood circulation; they die and become an ideal breeding ground for obligate anaerobes. Another type of obligate anaerobe encountered in the human body is the gram-positive, rod-shaped Clostridium spp. Their ability to form endospores allows them to survive in the presence of oxygen. One of the major causes of healthcare-associated infections is C. difficile, known as C. diff. Prolonged use of antibiotics for other infections increases the probability of a patient developing a secondary C. difficile infection. Antibiotic treatment disrupts the balance of microorganisms in the intestine and allows the colonization of the gut by C. difficile, causing a significant inflammation of the colon. Other clostridia responsible for serious infections include C. tetani, the agent of tetanus, and C. perfringens, which causes gas gangrene. In both cases, the infection starts in necrotic tissue (dead tissue that is not supplied with oxygen by blood circulation). This is the reason that deep puncture wounds are associated with tetanus. When tissue death is accompanied by lack of circulation, gangrene is always a danger. The study of obligate anaerobes requires special equipment. Obligate anaerobic bacteria must be grown under conditions devoid of oxygen. The most common approach is culture in an anaerobic jar (Figure 9.21).
Anaerobic jars include chemical packs that remove oxygen and release carbon dioxide (CO2). An anaerobic chamber is an enclosed box from which all oxygen is removed. Gloves sealed to openings in the box allow handling of the cultures without exposing the culture to air (Figure 9.21). Staphylococci and Enterobacteriaceae are examples of facultative anaerobes. Staphylococci are found on the skin and upper respiratory tract. Enterobacteriaceae are found primarily in the gut and upper respiratory tract but can sometimes spread to the urinary tract, where they are capable of causing infections. It is not unusual to see mixed bacterial infections in which the facultative anaerobes use up the oxygen, creating an environment for the obligate anaerobes to flourish. Examples of aerotolerant anaerobes include lactobacilli and streptococci, both found in the oral microbiota. Campylobacter jejuni, which causes gastrointestinal infections, is an example of a microaerophile and is grown under low-oxygen conditions. The optimum oxygen concentration, as the name implies, is the ideal concentration of oxygen for a particular microorganism. The lowest concentration of oxygen that allows growth is called the minimum permissive oxygen concentration. The highest tolerated concentration of oxygen is the maximum permissive oxygen concentration. The organism will not grow outside the range of oxygen levels found between the minimum and maximum permissive oxygen concentrations. Check Your Understanding Would you expect the oldest bacterial lineages to be aerobic or anaerobic? Which bacteria grow at the top of a thioglycolate tube, and which grow at the bottom of the tube? Case in Point An Unwelcome Anaerobe Charles is a retired bus driver who developed type 2 diabetes over 10 years ago. Since his retirement, his lifestyle has become very sedentary and he has put on a substantial amount of weight. Although he has felt tingling and numbness in his left foot for a while, he has not been worried because he thought his foot was simply “falling asleep.” Recently, a scratch on his foot does not seem to be healing and is becoming increasingly ugly. Because the sore did not bother him much, Charles figured it could not be serious until his daughter noticed a purplish discoloration spreading on the skin and oozing (Figure 9.22). When he was finally seen by his physician, Charles was rushed to the operating room. His open sore, or ulcer, is the result of a diabetic foot. The concern here is that gas gangrene may have taken hold in the dead tissue. The most likely agent of gas gangrene is Clostridium perfringens, an endospore-forming, gram-positive bacterium. It is an obligate anaerobe that grows in tissue devoid of oxygen. Since dead tissue is no longer supplied with oxygen by the circulatory system, the dead tissue provides pockets of ideal environment for the growth of C. perfringens. A surgeon examines the ulcer and radiographs of Charles’s foot and determines that the bone is not yet infected. The wound will have to be surgically debrided (debridement refers to the removal of dead and infected tissue) and a sample sent for microbiological lab analysis, but Charles will not have to have his foot amputated. Many diabetic patients are not so lucky. In 2008, nearly 70,000 diabetic patients in the United States lost a foot or limb to amputation, according to statistics from the Centers for Disease Control and Prevention.[1] [1] Centers for Disease Control and Prevention.
“Living With Diabetes: Keep Your Feet Healthy.” http://www.cdc.gov/Features/DiabetesFootHealth/ Which growth conditions would you recommend for the detection of C. perfringens? Detoxification of Reactive Oxygen Species Aerobic respiration constantly generates reactive oxygen species (ROS), byproducts that must be detoxified. Even organisms that do not use aerobic respiration need some way to break down some of the ROS that may form from atmospheric oxygen. Three main enzymes break down those toxic byproducts: superoxide dismutase, peroxidase, and catalase. Each one catalyzes a different reaction. Reactions of the type seen in Reaction 1 are catalyzed by peroxidases.

(1) X–(2H+) + H2O2 → oxidized-X + 2 H2O

In these reactions, an electron donor (reduced compound; e.g., reduced nicotinamide adenine dinucleotide [NADH]) oxidizes hydrogen peroxide, or other peroxides, to water. The enzymes play an important role by limiting the damage caused by peroxidation of membrane lipids. Reaction 2 is mediated by the enzyme superoxide dismutase (SOD) and breaks down the powerful superoxide anions generated by aerobic metabolism:

(2) 2 O2− + 2 H+ → H2O2 + O2

The enzyme catalase converts hydrogen peroxide to water and oxygen as shown in Reaction 3.

(3) 2 H2O2 → 2 H2O + O2

Obligate anaerobes usually lack all three enzymes. Aerotolerant anaerobes do have SOD but no catalase. Reaction 3, shown occurring in Figure 9.23, is the basis of a useful and rapid test to distinguish streptococci, which are aerotolerant and do not possess catalase, from staphylococci, which are facultative anaerobes. A sample of culture rapidly mixed in a drop of 3% hydrogen peroxide will release bubbles if the culture is catalase positive. Bacteria that grow best in a higher concentration of CO2 and a lower concentration of oxygen than present in the atmosphere are called capnophiles. One common approach to grow capnophiles is to use a candle jar. A candle jar consists of a jar with a tight-fitting lid that can accommodate the cultures and a candle. After the cultures are added to the jar, the candle is lit and the lid closed. As the candle burns, it consumes most of the oxygen present and releases CO2. Check Your Understanding What substance is added to a sample to detect catalase? What is the function of the candle in a candle jar? Clinical Focus Part 2 The health-care provider who saw Jeni was concerned primarily because of her pregnancy. Her condition enhances the risk for infections and makes her more vulnerable to those infections. The immune system is downregulated during pregnancy, and pathogens that cross the placenta can be very dangerous for the fetus. A note on the provider’s order to the microbiology lab mentions a suspicion of infection by Listeria monocytogenes, based on the signs and symptoms exhibited by the patient. Jeni’s blood samples are streaked directly on sheep blood agar, a medium containing tryptic soy agar enriched with 5% sheep blood. (Blood is considered sterile; therefore, competing microorganisms are not expected in the medium.) The inoculated plates are incubated at 37 °C for 24 to 48 hours. Small grayish colonies surrounded by a clear zone emerge.
Such colonies are typical of Listeria and other pathogens such as streptococci; the clear zone surrounding the colonies indicates complete lysis of blood in the medium, referred to as beta-hemolysis (Figure 9.24). When tested for the presence of catalase, the colonies give a positive response, eliminating Streptococcus as a possible cause. Furthermore, a Gram stain shows short gram-positive bacilli. Cells from a broth culture grown at room temperature displayed the tumbling motility characteristic of Listeria (Figure 9.24). All of these clues lead the lab to positively confirm the presence of Listeria in Jeni’s blood samples. How serious is Jeni’s condition and what is the appropriate treatment? Jump to the next Clinical Focus box. Go back to the previous Clinical Focus box. 9.3 The Effects of pH on Microbial Growth Learning Objectives Illustrate and briefly describe minimum, optimum, and maximum pH requirements for growth Identify and describe the different categories of microbes with pH requirements for growth: acidophiles, neutrophiles, and alkaliphiles Give examples of microorganisms for each category of pH requirement Yogurt, pickles, sauerkraut, and lime-seasoned dishes all owe their tangy taste to a high acid content (Figure 9.25). Recall that acidity is a function of the concentration of hydrogen ions [H+] and is measured as pH. Environments with pH values below 7.0 are considered acidic, whereas those with pH values above 7.0 are considered basic. Extreme pH affects the structure of all macromolecules. The hydrogen bonds holding together strands of DNA break up at high pH. Lipids are hydrolyzed by an extremely basic pH. The proton motive force responsible for production of ATP in cellular respiration depends on the concentration gradient of H+ across the plasma membrane (see Cellular Respiration). If H+ ions are neutralized by hydroxide ions, the concentration gradient collapses and impairs energy production. But the component most sensitive to pH in the cell is its workhorse, the protein. Moderate changes in pH modify the ionization of amino-acid functional groups and disrupt hydrogen bonding, which, in turn, promotes changes in the folding of the molecule, promoting denaturation and destroying activity. The optimum growth pH is the most favorable pH for the growth of an organism. The lowest pH value that an organism can tolerate is called the minimum growth pH and the highest pH is the maximum growth pH. These values can cover a wide range, which is important for the preservation of food and to microorganisms’ survival in the stomach. For example, the optimum growth pH of Salmonella spp. is 7.0–7.5, but the minimum growth pH is closer to 4.2. Most bacteria are neutrophiles, meaning they grow optimally at a pH within one or two pH units of the neutral pH of 7 (see Figure 9.26). Most familiar bacteria, like Escherichia coli, staphylococci, and Salmonella spp., are neutrophiles and do not fare well in the acidic pH of the stomach. However, there are pathogenic strains of E. coli, S. typhi, and other species of intestinal pathogens that are much more resistant to stomach acid. In comparison, fungi thrive at slightly acidic pH values of 5.0–6.0. Microorganisms that grow optimally at pH less than 5.55 are called acidophiles. For example, the sulfur-oxidizing Sulfolobus spp. isolated from sulfur mud fields and hot springs in Yellowstone National Park are extreme acidophiles. These archaea survive at pH values of 2.5–3.5.
Species of the archaean genus Ferroplasma live in acid mine drainage at pH values of 0–2.9. Lactobacillus bacteria, which are an important part of the normal microbiota of the vagina, can tolerate acidic environments at pH values of 3.5–6.8 and also contribute to the acidity of the vagina (pH of 4, except at the onset of menstruation) through their metabolic production of lactic acid. The vagina’s acidity plays an important role in inhibiting other microbes that are less tolerant of acidity. Acidophilic microorganisms display a number of adaptations to survive in strongly acidic environments. For example, proteins show increased negative surface charge that stabilizes them at low pH. Pumps actively eject H+ ions out of the cells. The changes in the composition of membrane phospholipids probably reflect the need to maintain membrane fluidity at low pH. At the other end of the spectrum are alkaliphiles, microorganisms that grow best at pH between 8.0 and 10.5. Vibrio cholerae, the pathogenic agent of cholera, grows best at the slightly basic pH of 8.0; it can survive pH values of 11.0 but is inactivated by the acid of the stomach. When it comes to survival at high pH, the bright pink archaean Natronobacterium, found in the soda lakes of the African Rift Valley, may hold the record at a pH of 10.5 (Figure 9.27). Extreme alkaliphiles have adapted to their harsh environment through evolutionary modification of lipid and protein structure and compensatory mechanisms to maintain the proton motive force in an alkaline environment. For example, the alkaliphile Bacillus firmus derives the energy for transport reactions and motility from a Na+ ion gradient rather than a proton motive force. Many enzymes from alkaliphiles have a higher isoelectric point, due to an increase in the number of basic amino acids, than homologous enzymes from neutrophiles. Micro Connections Survival at the Low pH of the Stomach Peptic ulcers (or stomach ulcers) are painful sores on the stomach lining. Until the 1980s, they were believed to be caused by spicy foods, stress, or a combination of both. Patients were typically advised to eat bland foods, take anti-acid medications, and avoid stress. These remedies were not particularly effective, and the condition often recurred. This all changed dramatically when the real cause of most peptic ulcers was discovered to be a slim, corkscrew-shaped bacterium, Helicobacter pylori. This organism was identified and isolated by Barry Marshall and Robin Warren, whose discovery earned them the Nobel Prize in Medicine in 2005. The ability of H. pylori to survive the low pH of the stomach would seem to suggest that it is an extreme acidophile. As it turns out, this is not the case. In fact, H. pylori is a neutrophile. So, how does it survive in the stomach? Remarkably, H. pylori creates a microenvironment in which the pH is nearly neutral. It achieves this by producing large amounts of the enzyme urease, which breaks down urea to form NH4+ and CO2. The ammonium ion raises the pH of the immediate environment. This metabolic capability of H. pylori is the basis of an accurate, noninvasive test for infection. The patient is given a solution of urea containing radioactively labeled carbon atoms. If H. pylori is present in the stomach, it will rapidly break down the urea, producing radioactive CO2 that can be detected in the patient’s breath. Because peptic ulcers may lead to gastric cancer, patients who are determined to have H. pylori infections are treated with antibiotics.
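As a summary of the pH categories defined in this section, here is a minimal classifier sketch. The cutoffs are taken from the definitions above; the handling of boundary values is our own simplification, since real organisms do not fall into crisp bins.

```python
def ph_category(optimum_ph):
    """Classify an organism by its optimum growth pH, using the cutoffs
    given in this section: acidophiles grow optimally below pH 5.55,
    alkaliphiles between pH 8.0 and 10.5, neutrophiles near neutral."""
    if optimum_ph < 5.55:
        return "acidophile"
    if 8.0 <= optimum_ph <= 10.5:
        return "alkaliphile"
    return "neutrophile"

print(ph_category(7.2))   # neutrophile  (e.g., most familiar bacteria)
print(ph_category(3.0))   # acidophile   (e.g., Sulfolobus spp.)
print(ph_category(8.0))   # alkaliphile  (e.g., Vibrio cholerae)
```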
Check Your Understanding What effect do extremes of pH have on proteins? What pH-adaptive type of bacteria would most human pathogens be? 9.4 Temperature and Microbial Growth Learning Objectives Illustrate and briefly describe minimum, optimum, and maximum temperature requirements for growth Identify and describe different categories of microbes with temperature requirements for growth: psychrophile, psychrotroph, mesophile, thermophile, hyperthermophile Give examples of microorganisms in each category of temperature tolerance When the exploration of Lake Whillans started in Antarctica, researchers did not expect to find much life. Constant subzero temperatures and lack of obvious sources of nutrients did not seem to be conditions that would support a thriving ecosystem. To their surprise, the samples retrieved from the lake showed abundant microbial life. In a different but equally harsh setting, bacteria grow at the bottom of the ocean in sea vents (Figure 9.28), where temperatures can reach 340 °C (644 °F). Microbes can be roughly classified according to the range of temperature at which they can grow. The growth rates are the highest at the optimum growth temperature for the organism. The lowest temperature at which the organism can survive and replicate is its minimum growth temperature. The highest temperature at which growth can occur is its maximum growth temperature. The following ranges of permissive growth temperatures are approximate only and can vary according to other environmental factors. Organisms categorized as mesophiles (“middle loving”) are adapted to moderate temperatures, with optimal growth temperatures ranging from room temperature (about 20 °C) to about 45 °C. As would be expected from the core temperature of the human body, 37 °C (98.6 °F), normal human microbiota and pathogens (e.g., E. coli, Salmonella spp., and Lactobacillus spp.) are mesophiles. Organisms called psychrotrophs, also known as psychrotolerant, prefer cooler environments, from a high temperature of about 25 °C down to refrigeration temperature, about 4 °C. They are found in many natural environments in temperate climates. They are also responsible for the spoilage of refrigerated food. Clinical Focus Resolution The presence of Listeria in Jeni’s blood suggests that her symptoms are due to listeriosis, an infection caused by L. monocytogenes. Listeriosis is a serious infection with a 20% mortality rate and is a particular risk to Jeni’s fetus. A sample from the amniotic fluid cultured for the presence of Listeria gave negative results. Because the absence of organisms does not rule out the possibility of infection, a molecular test based on the nucleic acid amplification of the 16S ribosomal RNA of Listeria was performed to confirm that no bacteria crossed the placenta. Fortunately, the results from the molecular test were also negative. Jeni was admitted to the hospital for treatment and recovery. She received a high dose of two antibiotics intravenously for 2 weeks. The preferred drugs for the treatment of listeriosis are ampicillin or penicillin G with an aminoglycoside antibiotic. Resistance to common antibiotics is still rare in Listeria and antibiotic treatment is usually successful. She was released to home care after a week and fully recovered from her infection. L. monocytogenes is a gram-positive short rod found in soil, water, and food. It is classified as a psychrotroph (not a true psychrophile, since it also grows well at body temperature) and is halotolerant.
Its ability to multiply at refrigeration temperatures (4–10 °C) and its tolerance for high concentrations of salt (up to 10% sodium chloride [NaCl]) make it a frequent source of food poisoning. Because Listeria can infect animals, it often contaminates food such as meat, fish, or dairy products. Contamination of commercial foods can often be traced to persistent biofilms that form on manufacturing equipment that is not sufficiently cleaned. Listeria infection is relatively common among pregnant women because the elevated levels of progesterone downregulate the immune system, making them more vulnerable to infection. The pathogen can cross the placenta and infect the fetus, often resulting in miscarriage, stillbirth, or fatal neonatal infection. Pregnant women are thus advised to avoid consumption of soft cheeses, refrigerated cold cuts, smoked seafood, and unpasteurized dairy products. Because Listeria bacteria can easily be confused with diphtheroids, another common group of gram-positive rods, it is important to alert the laboratory when listeriosis is suspected. Go back to the previous Clinical Focus box. The organisms retrieved from Antarctic lakes such as Lake Whillans are considered extreme psychrophiles (“cold loving”). Psychrophiles are microorganisms that can grow at 0 °C and below, have an optimum growth temperature close to 15 °C, and usually do not survive at temperatures above 20 °C. They are found in permanently cold environments such as the deep waters of the oceans. Because they are active at low temperature, psychrophiles and psychrotrophs are important decomposers in cold climates. Organisms that grow at optimum temperatures of 50 °C to a maximum of 80 °C are called thermophiles (“heat loving”). They do not multiply at room temperature. Thermophiles are widely distributed in hot springs, geothermal soils, and manmade environments such as garden compost piles where the microbes break down kitchen scraps and vegetal material. Examples of thermophiles include Thermus aquaticus and Geobacillus spp. Higher up on the extreme temperature scale we find the hyperthermophiles, which are characterized by growth ranges from 80 °C to a maximum of 110 °C, with some extreme examples that survive temperatures above 121 °C, the standard operating temperature of an autoclave. The hydrothermal vents at the bottom of the ocean are a prime example of extreme environments, with temperatures reaching an estimated 340 °C (Figure 9.28). Microbes isolated from the vents achieve optimal growth at temperatures higher than 100 °C. Noteworthy examples are Pyrobolus and Pyrodictium, archaea that grow at 105 °C and survive autoclaving. Figure 9.29 shows the typical skewed curves of temperature-dependent growth for the categories of microorganisms we have discussed. Life in extreme environments raises fascinating questions about the adaptation of macromolecules and metabolic processes. Very low temperatures affect cells in many ways. Membranes lose their fluidity and are damaged by ice crystal formation. Chemical reactions and diffusion slow considerably. Proteins become too rigid to catalyze reactions and may undergo denaturation. At the opposite end of the temperature spectrum, heat denatures proteins and nucleic acids. Increased fluidity impairs metabolic processes in membranes. Some of the practical applications of the destructive effects of heat on microbes are sterilization by steam, pasteurization, and incineration of inoculating loops.
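The temperature categories discussed above can likewise be summarized as a rough classifier. The band edges below are the approximate optima given in this section; in reality the categories overlap, so this sketch is a coarse simplification rather than a taxonomic rule.

```python
def temperature_category(optimum_c):
    """Approximate optimum-growth-temperature bands from this section."""
    if optimum_c <= 15:
        return "psychrophile"       # optimum near 15 C; grows at 0 C and below
    if optimum_c < 20:
        return "psychrotroph"       # prefers cool conditions, down to ~4 C
    if optimum_c <= 45:
        return "mesophile"          # ~20-45 C; includes human pathogens
    if optimum_c < 80:
        return "thermophile"        # optima of 50 C up to about 80 C
    return "hyperthermophile"       # 80 C and above

print(temperature_category(37))     # mesophile         (e.g., E. coli)
print(temperature_category(70))     # thermophile       (e.g., Thermus aquaticus)
print(temperature_category(105))    # hyperthermophile  (e.g., Pyrobolus)
```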
Proteins in psychrophiles are, in general, rich in hydrophobic residues, display an increase in flexibility, and have a lower number of secondary stabilizing bonds when compared with homologous proteins from mesophiles. Antifreeze proteins and solutes that decrease the freezing temperature of the cytoplasm are common. The lipids in the membranes tend to be unsaturated to increase fluidity. Growth rates are much slower than those encountered at moderate temperatures. Under appropriate conditions, mesophiles and even thermophiles can survive freezing. Liquid cultures of bacteria are mixed with sterile glycerol solutions and frozen to −80 °C for long-term storage as stocks. Cultures can withstand freeze drying (lyophilization) and then be stored as powders in sealed ampules to be reconstituted with broth when needed. Macromolecules in thermophiles and hyperthermophiles show some notable structural differences from what is observed in the mesophiles. The ratio of saturated to polyunsaturated lipids increases to limit the fluidity of the cell membranes. Their DNA sequences show a higher proportion of guanine–cytosine nitrogenous bases, which are held together by three hydrogen bonds in contrast to adenine and thymine, which are connected in the double helix by two hydrogen bonds. Additional secondary structures, ionic and covalent bonds, as well as the replacement of key amino acids to stabilize folding, contribute to the resistance of proteins to denaturation. The so-called thermoenzymes purified from thermophiles have important practical applications. For example, amplification of nucleic acids in the polymerase chain reaction (PCR) depends on the thermal stability of Taq polymerase , an enzyme isolated from T. aquaticus . Degradation enzymes from thermophiles are added as ingredients in hot-water detergents, increasing their effectiveness. Check Your Understanding What temperature requirements do most bacterial human pathogens have? What DNA adaptation do thermophiles exhibit? Eye on Ethics Feeding the World…and the World’s Algae Artificial fertilizers have become an important tool in food production around the world. They are responsible for many of the gains of the so-called green revolution of the 20th century, which has allowed the planet to feed many of its more than 7 billion people. Artificial fertilizers provide nitrogen and phosphorus, key limiting nutrients, to crop plants, removing the normal barriers that would otherwise limit the rate of growth. Thus, fertilized crops grow much faster, and farms that use fertilizer produce higher crop yields. However, careless use and overuse of artificial fertilizers have been demonstrated to have significant negative impacts on aquatic ecosystems, both freshwater and marine. Fertilizers that are applied at inappropriate times or in too-large quantities allow nitrogen and phosphorus compounds to escape use by crop plants and enter drainage systems. Inappropriate use of fertilizers in residential settings can also contribute to nutrient loads, which find their way to lakes and coastal marine ecosystems. As water warms and nutrients are plentiful, microscopic algae bloom, often changing the color of the water because of the high cell density. Most algal blooms are not directly harmful to humans or wildlife; however, they can cause harm indirectly. As the algal population expands and then dies, it provides a large increase in organic matter to the bacteria that live in deep water. 
With this large supply of nutrients, the population of nonphotosynthetic microorganisms explodes, consuming available oxygen and creating “dead zones” where animal life has virtually disappeared. Depletion of oxygen in the water is not the only damaging consequence of some algal blooms. The algae that produce red tides in the Gulf of Mexico, Karenia brevis, secrete potent toxins that can kill fish and other organisms and also accumulate in shellfish. Consumption of contaminated shellfish can cause severe neurological and gastrointestinal symptoms in humans. Shellfish beds must be regularly monitored for the presence of the toxins, and harvests are often shut down when it is present, incurring economic costs to the fishery. Cyanobacteria, which can form blooms in marine and freshwater ecosystems, produce toxins called microcystins, which can cause allergic reactions and liver damage when ingested in drinking water or during swimming. Recurring cyanobacterial algal blooms in Lake Erie (Figure 9.30) have forced municipalities to issue drinking water bans for days at a time because of unacceptable toxin levels. This is just a small sampling of the negative consequences of algal blooms, red tides, and dead zones. Yet the benefits of crop fertilizer—the main cause of such blooms—are difficult to dispute. There is no easy solution to this dilemma, as a ban on fertilizers is not politically or economically feasible. In lieu of this, we must advocate for responsible use and regulation in agricultural and residential contexts, as well as the restoration of wetlands, which can absorb excess fertilizers before they reach lakes and oceans. Link to Learning This video discusses algal blooms and dead zones in more depth. 9.5 Other Environmental Conditions that Affect Growth Learning Objectives Identify and describe different categories of microbes with specific growth requirements other than oxygen, pH, and temperature, such as altered barometric pressure, osmotic pressure, humidity, and light Give at least one example microorganism for each category of growth requirement Microorganisms interact with their environment along more dimensions than pH, temperature, and free oxygen levels, although these factors require significant adaptations. We also find microorganisms adapted to varying levels of salinity, barometric pressure, humidity, and light. Osmotic and Barometric Pressure Most natural environments tend to have lower solute concentrations than the cytoplasm of most microorganisms. Rigid cell walls protect the cells from bursting in a dilute environment. Not much protection is available against high osmotic pressure. In this case, water, following its concentration gradient, flows out of the cell. This results in plasmolysis (the shrinking of the protoplasm away from the intact cell wall) and cell death. This fact explains why brines and layering meat and fish in salt are time-honored methods of preserving food. Microorganisms called halophiles (“salt loving”) actually require high salt concentrations for growth. These organisms are found in marine environments where salt concentrations hover at 3.5%. Extreme halophilic microorganisms, such as the red alga Dunaliella salina and the archaeal species Halobacterium in Figure 9.31, grow in hypersaline lakes such as the Great Salt Lake, which is 3.5–8 times saltier than the ocean, and the Dead Sea, which is 10 times saltier than the ocean. Dunaliella spp.
counter the tremendous osmotic pressure of the environment with a high cytoplasmic concentration of glycerol and by actively pumping out salt ions. Halobacterium spp. accumulate large concentrations of K+ and other ions in their cytoplasm. Their proteins are adapted to high salt concentrations and lose activity at salt concentrations below 1–2 M. Although most halotolerant organisms, for example Halomonas spp. in salt marshes, do not need high concentrations of salt for growth, they will survive and divide in the presence of high salt. Not surprisingly, the staphylococci, micrococci, and corynebacteria that colonize our skin tolerate salt in their environment. Halotolerant pathogens are an important cause of food-borne illnesses because they survive and multiply in salty food. For example, the halotolerant bacteria S. aureus, Bacillus cereus, and V. cholerae produce dangerous enterotoxins and are major causes of food poisoning. Microorganisms depend on available water to grow. Available moisture is measured as water activity (aw), which is the ratio of the vapor pressure of the medium of interest to the vapor pressure of pure distilled water; therefore, the aw of water is equal to 1.0. Bacteria require high aw (0.97–0.99), whereas fungi can tolerate drier environments; for example, the range of aw for growth of Aspergillus spp. is 0.75–0.8. Decreasing the water content of foods by drying (as in jerky) or freeze-drying, or increasing their osmotic pressure (as in brines and jams), are common methods of preventing spoilage. Microorganisms that require high barometric pressure for growth are called barophiles. The bacteria that live at the bottom of the ocean must be able to withstand great pressures. Because it is difficult to retrieve intact specimens and reproduce such growth conditions in the laboratory, the characteristics of these microorganisms are largely unknown. Light Photoautotrophs, such as cyanobacteria or green sulfur bacteria, and photoheterotrophs, such as purple nonsulfur bacteria, depend on sufficient light intensity at the wavelengths absorbed by their pigments to grow and multiply. Energy from light is captured by pigments and converted into chemical energy that drives carbon fixation and other metabolic processes. The portion of the electromagnetic spectrum that is absorbed by these organisms is defined as photosynthetically active radiation (PAR). It lies within the visible light spectrum ranging from 400 to 700 nanometers (nm) and extends into the near infrared for some photosynthetic bacteria. A number of accessory pigments, such as fucoxanthin in brown algae and phycobilins in cyanobacteria, widen the useful range of wavelengths for photosynthesis and compensate for the low light levels available at greater depths of water. Other microorganisms, such as the archaea of the class Halobacteria, use light energy to drive their proton and sodium pumps. The light is absorbed by a pigment protein complex called bacteriorhodopsin, which is similar to the eye pigment rhodopsin. Photosynthetic bacteria are present not only in aquatic environments but also in soil and in symbiosis with fungi in lichens. The peculiar watermelon snow is caused by the microalga Chlamydomonas nivalis, a green alga rich in the secondary red carotenoid pigment astaxanthin, which gives a pink hue to the snow where the alga grows. Check Your Understanding Which photosynthetic pigments were described in this section? What is the fundamental stress of a hypersaline environment for a cell?
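To make the water activity ratio defined earlier in this section concrete, here is a minimal Python sketch. The vapor pressure of the medium is a hypothetical figure, and the growth thresholds are the approximate ranges quoted above (most bacteria near 0.97–0.99, Aspergillus spp. down to about 0.75); real organisms vary widely.

    # a_w = vapor pressure of the medium / vapor pressure of pure water
    def water_activity(p_medium: float, p_pure_water: float) -> float:
        return p_medium / p_pure_water

    # Approximate minimum a_w values quoted in this section (illustrative only)
    MIN_AW = {"most bacteria": 0.97, "Aspergillus spp. (mold)": 0.75}

    # Hypothetical vapor pressures in kPa at 25 °C; pure water is about 3.17 kPa
    aw = water_activity(p_medium=2.50, p_pure_water=3.17)
    print(f"a_w = {aw:.2f}")
    for organism, minimum in MIN_AW.items():
        status = "possible" if aw >= minimum else "inhibited"
        print(f"{organism}: growth {status} (needs a_w >= {minimum})")

Under these assumed numbers (aw of about 0.79), the medium is dry enough to inhibit most bacteria while still permitting mold growth, which is why dried and brined foods resist bacterial spoilage longer than they resist molds.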
9.6 Media Used for Bacterial Growth Learning Objectives Identify and describe culture media for the growth of bacteria, including examples of all-purpose media, enriched, selective, differential, defined, and enrichment media The study of microorganisms is greatly facilitated if we are able to culture them, that is, to keep reproducing populations alive under laboratory conditions. Culturing many microorganisms is challenging because of highly specific nutritional and environmental requirements and the diversity of these requirements among different species. Nutritional Requirements The number of available media to grow bacteria is considerable. Some media are considered general all-purpose media and support growth of a large variety of organisms. A prime example of an all-purpose medium is tryptic soy broth (TSB). Specialized media are used in the identification of bacteria and are supplemented with dyes, pH indicators, or antibiotics. One type, enriched media, contains growth factors, vitamins, and other essential nutrients to promote the growth of fastidious organisms, organisms that cannot make certain nutrients and require them to be added to the medium. When the complete chemical composition of a medium is known, it is called a chemically defined medium. For example, in EZ medium, all individual chemical components are identified and the exact amount of each is known. In complex media, which contain extracts and digests of yeasts, meat, or plants, the precise chemical composition of the medium is not known. Amounts of individual components are undetermined and variable. Nutrient broth, tryptic soy broth, and brain heart infusion are all examples of complex media. Media that inhibit the growth of unwanted microorganisms and support the growth of the organism of interest by supplying nutrients and reducing competition are called selective media. An example of a selective medium is MacConkey agar. It contains bile salts and crystal violet, which interfere with the growth of many gram-positive bacteria and favor the growth of gram-negative bacteria, particularly the Enterobacteriaceae. These species, commonly called enterics, reside in the intestine and are adapted to the presence of bile salts. Enrichment cultures foster the preferential growth of a desired microorganism that represents a fraction of the organisms present in an inoculum. For example, if we want to isolate bacteria that break down crude oil, hydrocarbonoclastic bacteria, sequential subculturing in a medium that supplies carbon only in the form of crude oil will enrich the cultures with oil-eating bacteria. Differential media make it easy to distinguish colonies of different bacteria by a change in the color of the colonies or the color of the medium. Color changes are the result of end products created by interaction of bacterial enzymes with differential substrates in the medium or, in the case of hemolytic reactions, the lysis of red blood cells in the medium. In Figure 9.32, the differential fermentation of lactose can be observed on MacConkey agar. The lactose fermenters produce acid, which turns the medium and the colonies of strong fermenters hot pink. The medium is supplemented with the pH indicator neutral red, which turns hot pink at low pH. Selective and differential media can be combined and play an important role in the identification of bacteria by biochemical methods. Check Your Understanding Distinguish complex and chemically defined media.
Distinguish selective and enrichment media. Link to Learning Compare the compositions of EZ medium and sheep blood agar. Case in Point The End-of-Year Picnic The microbiology department is celebrating the end of the school year in May by holding its traditional picnic on the green. The speeches drag on for a couple of hours, but finally all the faculty and students can dig into the food: chicken salad, tomatoes, onions, salad, and custard pie. By evening, the whole department, except for two vegetarian students who did not eat the chicken salad, is stricken with nausea, vomiting, retching, and abdominal cramping. Several individuals complain of diarrhea. One patient shows signs of shock (low blood pressure). Blood and stool samples are collected from patients, and an analysis of all foods served at the meal is conducted. Bacteria can cause gastroenteritis (inflammation of the stomach and intestinal tract) either by colonizing and replicating in the host, which is considered an infection, or by secreting toxins, which is considered intoxication. Signs and symptoms of infections are typically delayed, whereas intoxication manifests within hours, as happened after the picnic. Blood samples from the patients showed no signs of bacterial infection, which further suggests that this was a case of intoxication. Since intoxication is due to secreted toxins, bacteria are not usually detected in blood or stool samples. MacConkey agar, sorbitol-MacConkey agar, and xylose-lysine-deoxycholate (XLD) plates were inoculated with stool samples and did not reveal any unusually colored colonies, and no black or white colonies were observed on XLD. All lactose fermenters on MacConkey agar also fermented sorbitol. These results ruled out common agents of food-borne illnesses: E. coli, Salmonella spp., and Shigella spp. Analysis of the chicken salad revealed an abnormal number of gram-positive cocci arranged in clusters (Figure 9.33). A culture of the gram-positive cocci released bubbles when mixed with hydrogen peroxide, and the culture turned mannitol salt agar yellow after a 24-hour incubation. All the tests point to Staphylococcus aureus as the organism that secreted the toxin: the salad samples contained gram-positive cocci in clusters, the colonies were positive for catalase, and the bacteria grew on mannitol salt agar fermenting mannitol, as shown by the change of the medium to yellow. The pH indicator in mannitol salt agar is phenol red, which turns yellow when the medium is acidified by the products of fermentation. The toxin secreted by S. aureus is known to cause severe gastroenteritis. The organism was probably introduced into the salad during preparation by the food handler and multiplied while the salad was kept at warm ambient temperature during the speeches. What are some other factors that might have contributed to rapid growth of S. aureus in the chicken salad? Why would S. aureus not be inhibited by the presence of salt in the chicken salad?
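The picnic investigation follows a compact chain of inferences: gram-positive cocci in clusters, a positive catalase test (bubbles with hydrogen peroxide), and mannitol fermentation on mannitol salt agar together point to S. aureus. The Python sketch below encodes that chain; it is a deliberately simplified teaching aid, not a clinical identification protocol.

    # Simplified presumptive identification from three test results.
    def presumptive_id(gram_positive_cocci_clusters: bool,
                       catalase_positive: bool,
                       ferments_mannitol: bool) -> str:
        if not gram_positive_cocci_clusters:
            return "not a staphylococcus; widen the workup"
        if not catalase_positive:
            return "catalase-negative: consider streptococci or enterococci"
        if ferments_mannitol:
            # Mannitol salt agar turns yellow as phenol red responds to acid.
            return "presumptive Staphylococcus aureus"
        return "catalase-positive staphylococcus, likely not S. aureus"

    # The picnic isolate: clusters of cocci, bubbles with H2O2, yellow agar.
    print(presumptive_id(True, True, True))

A real identification would continue with additional biochemical or molecular tests; the sketch only mirrors the reasoning used in this case.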
principles_of_accounting,_volume_2:_managerial_accounting
Summary 10.1 Identify Relevant Information for Decision-Making Decision-making involves choosing between alternatives. A critical step in the decision-making process is identification of all the relevant information for each alternative. Relevant information is any information that would have an impact on the decision. Relevant information can come in the form of costs or revenues, or be nonfinancial in form. For information regarding costs, this means determining which costs are avoidable and which are unavoidable. 10.2 Evaluate and Determine Whether to Accept or Reject a Special Order Deciding to accept or reject a special order is a choice between alternatives. Accepting or rejecting a special order involves comparing the purchase price associated with the special order to the cost to produce the items. This decision is highly influenced by whether the firm being offered the special order is operating below or at capacity. Qualitative factors would include consequences such as potential loss of current customers or displacement of jobs. 10.3 Evaluate and Determine Whether to Make or Buy a Component Deciding to outsource a component of the operations or manufacturing of a business is a choice between alternatives. Choosing whether to make or to buy a product, or choosing to have services performed by an outside company, are outsourcing decisions. Outsourcing decisions involve comparing the cost to keep the product or service in-house to the cost of buying the product or service from an outside party. An important consideration in these types of decisions is unavoidable costs. 10.4 Evaluate and Determine Whether to Keep or Discontinue a Segment or Product Deciding to keep or discontinue a product line or a segment of a business is a choice between alternatives. The choice to keep or eliminate involves comparing the business’s total operating income if the product or segment is kept to the total operating income if the product or segment is eliminated. An important consideration in these types of decisions is allocated costs. 10.5 Evaluate and Determine Whether to Sell or Process Further Deciding to do more work on a product to develop it into a new product is a choice between alternatives. Choosing whether to sell a product as is or to process it further involves comparing the selling price without further processing (at split-off) to the net price (selling price less additional processing costs) that would be obtained if the product were processed further. An important consideration in these types of decisions is the realization that the costs incurred up to the split-off point are irrelevant to the decision. 10.6 Evaluate and Determine How to Make Decisions When Resources Are Constrained Deciding how to use scarce resources is a choice between alternatives. Scarce resources can include anything that limits productive capacity, such as machine-hours or labor hours. Choosing how to use the scarce resource involves determining the contribution margin for each product or service that uses the constrained resource. The products or services with the highest contribution margin per unit of the constrained resource have the largest impact on income. Choosing how to manage the scarce resource will help reduce bottlenecks.
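As a rough illustration of the constrained-resource ranking described in the summary of 10.6, consider this Python sketch; the two products and all of their figures are hypothetical.

    # Rank products by contribution margin per unit of the scarce resource
    # (machine-hours here). All product data are invented for illustration.
    products = [
        # (name, selling price, variable cost, machine-hours per unit)
        ("Deluxe", 40.00, 22.00, 2.0),
        ("Standard", 25.00, 15.00, 0.5),
    ]

    for name, price, var_cost, hours in sorted(
            products,
            key=lambda p: (p[1] - p[2]) / p[3],  # CM per machine-hour
            reverse=True):
        cm = price - var_cost
        print(f"{name}: CM ${cm:.2f}/unit, ${cm / hours:.2f} per machine-hour")

Note that Standard ranks first despite the lower contribution margin per unit ($10.00 versus $18.00), because it earns $20.00 per machine-hour against $9.00 for Deluxe; ranking on the constrained resource rather than per unit is what maximizes total contribution margin.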
Chapter Outline 10.1 Identify Relevant Information for Decision-Making 10.2 Evaluate and Determine Whether to Accept or Reject a Special Order 10.3 Evaluate and Determine Whether to Make or Buy a Component 10.4 Evaluate and Determine Whether to Keep or Discontinue a Segment or Product 10.5 Evaluate and Determine Whether to Sell or Process Further 10.6 Evaluate and Determine How to Make Decisions When Resources Are Constrained Why It Matters One day, at your part-time job in a local coffee shop, you realize that the employees throw many pounds of used coffee grounds in the trash each day. From an environmental perspective, you are concerned because of the volume of trash being transferred to the landfill. From a business perspective, you wonder if discarding the used grounds is the only option. Could those coffee grounds be used in a profitable manner? After a bit of research, you discover that, if prepared in certain ways, used coffee grounds can serve as fertilizer, kill insects on some plants, or be used as a body scrub, among other options. A recent radio talk show discussed the possibility that coffee grounds could be used as an alternative fuel source, and you learned that coffee grounds are actually being used to help fuel buses in London. You consider the options for the used coffee grounds and come up with three possibilities for your coffee shop: (1) throw away the used grounds; (2) sell the used grounds to a company that will process them into fertilizer, bio-fuel, or some other product; or (3) process and package the used grounds for resale in the coffee shop as fertilizer and bug repellent. What information would you need for your analysis? Which decision would you choose and why? Are the revenue and cost components the only components of the decision that you should consider? These and similar issues are the types of questions that the accounting analysis process can help management address when evaluating short-term decisions.
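To preview the kind of analysis this chapter develops, here is a minimal Python sketch of the coffee-grounds decision; every revenue and cost figure below is invented purely for illustration.

    # Differential analysis of the three coffee-grounds alternatives (per month).
    # Only amounts that differ between alternatives appear; anything identical
    # across all three (e.g., the cost of the coffee already brewed) is omitted.
    alternatives = {
        # name: (incremental revenue, incremental cost) -- hypothetical
        "throw away": (0.00, 40.00),              # disposal fees
        "sell to a processor": (150.00, 25.00),   # pickup and handling
        "package and resell": (400.00, 310.00),   # processing, packaging, labor
    }

    for name, (revenue, cost) in alternatives.items():
        print(f"{name}: net ${revenue - cost:+,.2f}")

Under these assumed numbers, selling the grounds to a processor yields the largest net benefit, but with different numbers the ranking could easily change, which is why identifying the relevant revenues and costs is always the first step.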
[ { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> When choosing between two alternatives , usually only one of the two choices can be selected . <hl> <hl> When this is the case , you may be faced with opportunity costs , which are the costs associated with not choosing the other alternative . <hl> For example , if you are trying to choose between going to work immediately after completing your undergraduate degree or continuing to graduate school , you will have an opportunity cost . If you choose to go to work immediately , your opportunity cost is forgoing a graduate degree and any potential job limitations or advancements that result from that decision . If you choose instead to go directly into graduate school , your opportunity cost is the income that you could have been earning by going to work immediately upon graduation .", "hl_sentences": "When choosing between two alternatives , usually only one of the two choices can be selected . When this is the case , you may be faced with opportunity costs , which are the costs associated with not choosing the other alternative .", "question": { "cloze_format": "________ are the costs associated with not choosing the other alternative.", "normal_format": "What are the costs associated with not choosing the other alternative?", "question_choices": [ "Sunk costs", "Opportunity costs", "Differential costs", "Avoidable costs" ], "question_id": "fs-idm197507728", "question_text": "________ are the costs associated with not choosing the other alternative." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "sunk costs" }, "bloom": null, "hl_context": "<hl> A sunk cost is one that cannot be avoided because it has already occurred . <hl> <hl> A sunk cost will not change regardless of the alternative that management chooses ; therefore , sunk costs have no bearing on future events and are not relevant in decision-making . <hl> The basic premise sounds simple enough , but sunk costs are difficult to ignore due to human nature and are sometimes incorrectly included in the decision-making process . For example , suppose you have an old car , a hand-me-down from your grandmother , and last year you spent $ 1,600 on repairs and new tires and were just told by your mechanic that the car needs $ 1,200 in repairs to operate safely . Your goal is to have a safe and reliable car . Your alternatives are to get the repairs completed or trade in the car for a newer used car .", "hl_sentences": "A sunk cost is one that cannot be avoided because it has already occurred . A sunk cost will not change regardless of the alternative that management chooses ; therefore , sunk costs have no bearing on future events and are not relevant in decision-making .", "question": { "cloze_format": "The type of incurred costs that are not relevant in decision-making (i.e., they have no bearing on future events) and should be excluded in decision-making are ___.", "normal_format": "Which type of incurred costs are not relevant in decision-making (i.e., they have no bearing on future events) and should be excluded in decision-making?", "question_choices": [ "avoidable costs", "unavoidable costs", "sunk costs", "differential costs" ], "question_id": "fs-idm224442832", "question_text": "Which type of incurred costs are not relevant in decision-making (i.e., they have no bearing on future events) and should be excluded in decision-making?" 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> In carrying out step three of the managerial decision-making process , a differential analysis compares the relevant costs and revenues of potential solutions . <hl> What does this involve ? First , it is important to understand that there are many types of short-term decisions that a business may face , but these decisions always involve choosing between alternatives . Examples of these types of decisions include determining whether to accept a special order ; making a product or component versus buying the product or component ; performing additional processing on a product ; keeping versus eliminating a product or segment ; or determining whether to take on a new project . In each of these situations , the business should compare the relevant costs and the relevant revenues of one alternative to the relevant costs and relevant revenues of the other alternative ( s ) . Therefore , an important step in the differential analysis of potential solutions is to identify the relevant costs and relevant revenues of the decision . <hl> The first step of the decision-making process is to identify the goal . <hl> In the decisions discussed in this course , the quantitative goal will either be to maximize revenues or to minimize costs . <hl> The second step is to identify the alternative courses of action to achieve the goal . <hl> ( In the real world , steps one and two may require more thought and research that you will learn about in advanced cost accounting and management courses . ) . <hl> This chapter focuses on steps three and four , which involve short-term decision analysis : determining the appropriate information necessary for making a decision that will impact the company in the short term , usually 12 months or fewer , and using that information in a proper analysis in order to reach an informed decision among alternatives . <hl> <hl> Step five , which involves reviewing and evaluating the decision , is briefly addressed with each type of decision analyzed . <hl> <hl> Review , analyze , and evaluate the results of the decision . <hl> <hl> Decide , based upon the analysis , the best course of action . <hl> <hl> Perform a comprehensive analysis of potential solutions . <hl> This includes identifying revenues , costs , benefits , and other financial and qualitative variables .", "hl_sentences": "In carrying out step three of the managerial decision-making process , a differential analysis compares the relevant costs and revenues of potential solutions . The first step of the decision-making process is to identify the goal . The second step is to identify the alternative courses of action to achieve the goal . This chapter focuses on steps three and four , which involve short-term decision analysis : determining the appropriate information necessary for making a decision that will impact the company in the short term , usually 12 months or fewer , and using that information in a proper analysis in order to reach an informed decision among alternatives . Step five , which involves reviewing and evaluating the decision , is briefly addressed with each type of decision analyzed . Review , analyze , and evaluate the results of the decision . Decide , based upon the analysis , the best course of action . 
Perform a comprehensive analysis of potential solutions .", "question": { "cloze_format": "The managerial decision-making process has ___ as its third step.", "normal_format": "The managerial decision-making process has which of the following as its third step?", "question_choices": [ "Review, analyze, and evaluate the results of the decision.", "Decide, based upon the analysis, the best course of action.", "Identify alternative courses of action to achieve a goal or solve a problem.", "Perform a comprehensive differential analysis of potential solutions." ], "question_id": "fs-idm215177952", "question_text": "The managerial decision-making process has which of the following as its third step?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "consult with CFO concerning variable costs" }, "bloom": null, "hl_context": "<hl> The first step of the decision-making process is to identify the goal . <hl> In the decisions discussed in this course , the quantitative goal will either be to maximize revenues or to minimize costs . <hl> The second step is to identify the alternative courses of action to achieve the goal . <hl> ( In the real world , steps one and two may require more thought and research that you will learn about in advanced cost accounting and management courses . ) . <hl> This chapter focuses on steps three and four , which involve short-term decision analysis : determining the appropriate information necessary for making a decision that will impact the company in the short term , usually 12 months or fewer , and using that information in a proper analysis in order to reach an informed decision among alternatives . <hl> <hl> Step five , which involves reviewing and evaluating the decision , is briefly addressed with each type of decision analyzed . <hl>", "hl_sentences": "The first step of the decision-making process is to identify the goal . The second step is to identify the alternative courses of action to achieve the goal . This chapter focuses on steps three and four , which involve short-term decision analysis : determining the appropriate information necessary for making a decision that will impact the company in the short term , usually 12 months or fewer , and using that information in a proper analysis in order to reach an informed decision among alternatives . Step five , which involves reviewing and evaluating the decision , is briefly addressed with each type of decision analyzed .", "question": { "cloze_format": "A step that is not one of the five steps in the decision-making process is ___ .", "normal_format": "Which of the following is not one of the five steps in the decision-making process?", "question_choices": [ "identify alternatives", "review, analyze, and evaluate decision", "decide best action", "consult with CFO concerning variable costs" ], "question_id": "fs-idm201269280", "question_text": "Which of the following is not one of the five steps in the decision-making process?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> The Robinson-Patman Act prevents large retailers from purchasing goods in bulk at a greater discount than smaller retailers are able to obtain them . <hl> <hl> It helps keep competition fair between large and small businesses and is sometimes called the “ Anti-Chain Store Act . ” Read the LegalDictionary.net full definition and example of the Robinson-Patman Act to learn more .
<hl>", "hl_sentences": "The Robinson-Patman Act prevents large retailers from purchasing goods in bulk at a greater discount than smaller retailers are able to obtain them . It helps keep competition fair between large and small businesses and is sometimes called the “ Anti-Chain Store Act . ” Read the LegalDictionary.net full definition and example of the Robinson-Patman Act to learn more .", "question": { "cloze_format": "The ___ is sometimes referred to as the “Anti Chain Store Act”.", "normal_format": "Which of the following is sometimes referred to as the “Anti Chain Store Act”?", "question_choices": [ "Sarbanes-Oxley Act", "Robinson-Patman Act", "Wright-Patman Act", "Securities Act of 1939" ], "question_id": "fs-idm208516080", "question_text": "Which of the following is sometimes referred to as the “Anti Chain Store Act”?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "This type of analysis is also relevant to the service industry ; for example , ADP provides payroll and data processing services to over 650,000 companies worldwide . Or a law firm may decide to hire certain research activities to be completed by outside experts rather than hire the necessary staff to keep that function in-house . These are all examples of outsourcing . <hl> Outsourcing is the act of using another company to provide goods or services that your company requires . <hl>", "hl_sentences": "Outsourcing is the act of using another company to provide goods or services that your company requires .", "question": { "cloze_format": "________ is the act of using another company to provide goods or services that your company requires.", "normal_format": "What is the act of using another company to provide goods or services that your company requires?", "question_choices": [ "Allocating", "Outsourcing", "Segmenting", "Leasing" ], "question_id": "fs-idm237708944", "question_text": "________ is the act of using another company to provide goods or services that your company requires." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> Companies outsource for the same reasons . <hl> <hl> Many companies have found that it is more cost effective to outsource certain activities , such as payroll , data storage , and web design and hosting . <hl> <hl> It is more efficient to pay an outside expert than to hire the appropriate staff to keep a particular task inside the company . <hl> <hl> One of the most common outsourcing scenarios is one in which a company must decide whether it is going to make a component that it needs in manufacturing a product or buy that component already made . <hl> For example , all of the components of the iPhone are made by companies other than Apple . Ford buys truck and automobile seats , as well as many other components and individual parts , from various suppliers and then assembles them at Ford factories . With each component , Ford must decide if it is more cost effective to make that component internally or to buy that component from an external supplier . In conducting these types of analyses between alternatives , the initial focus will be on each quantitative factor of the analysis — in other words , the component that can be measured numerically . Examples of quantitative factors in business include sales growth , number of defective parts produced , or number of labor hours worked . 
However , in decision-making , it is important also to consider each qualitative factor , which is one that cannot be measured numerically . <hl> For example , using the same summer job scenario , qualitative factors may include the environment in which you would be working ( road dust and tar odors versus pollen and mower exhaust fumes ) , the amount of time exposed to the sun , the people with whom you will be working ( working with friends versus making new friends ) , and weather-related issues ( both jobs are outdoors , but could one job send you home for the day due to weather ? ) . <hl> <hl> Examples of qualitative factors in business include employee morale , customer satisfaction , and company or brand image . <hl> In making short-term decisions , a business will want to analyze both qualitative and quantitative factors .", "hl_sentences": "Companies outsource for the same reasons . Many companies have found that it is more cost effective to outsource certain activities , such as payroll , data storage , and web design and hosting . It is more efficient to pay an outside expert than to hire the appropriate staff to keep a particular task inside the company . One of the most common outsourcing scenarios is one in which a company must decide whether it is going to make a component that it needs in manufacturing a product or buy that component already made . For example , using the same summer job scenario , qualitative factors may include the environment in which you would be working ( road dust and tar odors versus pollen and mower exhaust fumes ) , the amount of time exposed to the sun , the people with whom you will be working ( working with friends versus making new friends ) , and weather-related issues ( both jobs are outdoors , but could one job send you home for the day due to weather ? ) . Examples of qualitative factors in business include employee morale , customer satisfaction , and company or brand image .", "question": { "cloze_format": "___ is/are not a qualitative decision that should be considered in an outsourcing decision?", "normal_format": "Which of the following is not a qualitative decision that should be considered in an outsourcing decision?", "question_choices": [ "employee morale", "product quality", "company reputation", "relevant costs" ], "question_id": "fs-idm245677888", "question_text": "Which of the following is not a qualitative decision that should be considered in an outsourcing decision?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "comparing contribution margins and fixed costs" }, "bloom": null, "hl_context": "Two basic approaches can be used to analyze data in this type of decision . <hl> One approach is to compare contribution margins and fixed costs . <hl> In this method , the contribution margins with and without the segment ( or division or product line ) are determined . <hl> The two contribution margins are compared and the alternative with the greatest contribution margin would be the chosen alternative because it provides the biggest contribution toward meeting fixed costs . <hl>", "hl_sentences": "One approach is to compare contribution margins and fixed costs . 
The two contribution margins are compared and the alternative with the greatest contribution margin would be the chosen alternative because it provides the biggest contribution toward meeting fixed costs .", "question": { "cloze_format": "___ is one of the two approaches used to analyze data in the decision to keep or discontinue a segment.", "normal_format": "Which of the following is one of the two approaches used to analyze data in the decision to keep or discontinue a segment?", "question_choices": [ "comparing contribution margins and fixed costs", "comparing contribution margins and variable costs", "comparing gross margin and variable costs", "comparing total contribution margin under each alternative" ], "question_id": "fs-idm380520448", "question_text": "Which of the following is one of the two approaches used to analyze data in the decision to keep or discontinue a segment?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> The second approach involves calculating the total net income for retaining the segment and comparing it to the total net income for dropping the segment . <hl> <hl> The company would then proceed with the alternative that has the highest net income . <hl> <hl> In order to perform these net income calculations , the company would need more information than they would need in order to follow the contribution margin approach , which does not consider the costs and revenues that are the same between the alternatives . <hl> <hl> Two basic approaches can be used to analyze data in this type of decision . <hl> One approach is to compare contribution margins and fixed costs . In this method , the contribution margins with and without the segment ( or division or product line ) are determined . The two contribution margins are compared and the alternative with the greatest contribution margin would be the chosen alternative because it provides the biggest contribution toward meeting fixed costs . <hl> Fundamentals of the Decision to Keep or Discontinue a Segment or Product <hl>", "hl_sentences": "The second approach involves calculating the total net income for retaining the segment and comparing it to the total net income for dropping the segment . The company would then proceed with the alternative that has the highest net income . In order to perform these net income calculations , the company would need more information than they would need in order to follow the contribution margin approach , which does not consider the costs and revenues that are the same between the alternatives . Two basic approaches can be used to analyze data in this type of decision . Fundamentals of the Decision to Keep or Discontinue a Segment or Product", "question": { "cloze_format": "A segment should be dropped ___.", "normal_format": "When should a segment be dropped?", "question_choices": [ "only when the decrease in total contribution margin is less than the decrease in fixed cost", "only when the decrease in total contribution margin is equal to fixed cost", "only when the increase in total contribution margin is more than the decrease in fixed cost", "only when the decrease in total contribution margin is less than the decrease in variable cost" ], "question_id": "fs-idm385854864", "question_text": "When should a segment be dropped?"
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "As with other short-term decisions , a company must consider the relevant costs and revenues when making decisions when resources are constrained . <hl> Whether the organization facing a constraint is a merchandising , manufacturing , or service organization , the initial step in allocating scarce or constrained resources is to determine the unit contribution margin , which is the selling price per unit minus the variable cost per unit , for each product or service . <hl> <hl> The company should produce or provide the products or services that generate the highest contribution margin first , followed by those with the second highest , and so forth . <hl> The total contribution margin will be maximized by promoting those products or accepting those orders with the highest contribution margin in relation to the scarce resource . In other words , products or services should be ranked based on their unit contribution margin per production restraint , which is the unit contribution margin divided by the production restraint .", "hl_sentences": "Whether the organization facing a constraint is a merchandising , manufacturing , or service organization , the initial step in allocating scarce or constrained resources is to determine the unit contribution margin , which is the selling price per unit minus the variable cost per unit , for each product or service . The company should produce or provide the products or services that generate the highest contribution margin first , followed by those with the second highest , and so forth .", "question": { "cloze_format": "When operating in a constrained environment, the products that should be produced are ___ .", "normal_format": "When operating in a constrained environment, which products should be produced?", "question_choices": [ "products with the highest contribution margin per unit", "products with the highest contribution margin per unit of the constrained process", "products with the highest selling price", "products with the lowest allocated joint cost" ], "question_id": "fs-idm203355136", "question_text": "When operating in a constrained environment, which products should be produced?" }, "references_are_paraphrase": null } ]
10
10.1 Identify Relevant Information for Decision-Making Almost everything we do in life results from choosing between alternatives, and the choices we make result in different consequences. For example, when choosing whether or not to eat breakfast before going to class, you face two alternatives and two sets of consequences. Eating breakfast means you must get up a little earlier, have food available, and be willing to prepare the food. Not eating means sleeping in longer, not having to plan food, and being hungry during class. Just as our lives are fraught with decisions large and small, the same is true for businesses. Almost every aspect of being in business involves choosing between alternatives, and each alternative typically has one or more consequences. Understanding how businesses make decisions paves the way not only to better decision-making processes but potentially to better outcomes. Decisions made by businesses can have short-term effects or long-term impacts, or in some situations, both. Short-term decisions often address a temporary circumstance or an immediate need, while long-term decisions align more with permanent problem solving and meeting strategic goals. Because these two types of decisions require different types of analyses, we will consider short-term decision-making here and long-term decision-making in Capital Budgeting Decision. Accounting distinguishes between short-term and long-term decisions not only because of the difference in the general nature of these decisions but also because the types of analyses differ significantly between short-term and long-term decision categories. As the time horizon over which the decision will have an impact expands, more costs become relevant to the decision-making process. In addition, when a time element is considered, there will be additional factors such as interest (paid or received) that will have a greater influence on decisions. Table 10.1 provides examples of short-term and long-term business decisions.
Examples of Short-Term and Long-Term Business Decisions
Short-Term Business Decisions: accepting a special production order; determining the best product mix from current products; outsourcing a part or service; further processing or refining a current product.
Long-Term Business Decisions: buying new equipment versus remodeling old equipment; choosing which products to manufacture; expanding into a new area or country; diversifying by buying another business.
Table 10.1 Short-term and long-term business decisions should be analyzed using different frameworks.
Continuing Application Short-Term Decision-Making Considering the business challenges facing Gearhead Outfitters, what short-term decisions might the company encounter? Remember that the retailer sells men’s, women’s, and children’s outdoor clothing, footwear, and accessories. Gearhead must carry a certain level and variety of inventory to meet the demands of its customers. The company will have to maintain appropriate accounting records to make proper business decisions to promote sustainability and growth. How might Gearhead be able to compete with larger chains and remain profitable? Will every sale result in the anticipated profit to the company? Consider what specialized short-term decision-making processes the company may use to meet its goals. Should more of an item than normal be purchased for resale to receive a larger discount from the supplier? What information about cost, volume, and profit is needed to make a sound business decision in this case?
Some items may be sold at a loss (or lesser profit) to attract customers to the store. What type of information and accounting system is needed to help in this situation? The company requires relevant, consistent, and reliable data to determine the proper course of action. Short-term decision-making is vital in any business. Consider this concept in relation to Centralized vs. Decentralized Management and how a company’s approach may affect the decision-making process. Discuss possible short-term issues and decisions, management focuses, and whether or not the centralized versus decentralized style will aid in company flexibility and success. Also, think in terms of how the decision-making process will be evaluated. Relevant Information for Short-Term Decision-Making Business decision-making can be outlined as a process that is applied by management with each decision that is made. The process of decision-making in a managerial business environment can be summed up in these steps:
1. Identify the objective or goal. For a business, typically the goal is to maximize revenues or minimize costs.
2. Identify alternative courses of action that can achieve the goal or address an obstacle that is hindering goal achievement.
3. Perform a comprehensive analysis of potential solutions. This includes identifying revenues, costs, benefits, and other financial and qualitative variables.
4. Decide, based upon the analysis, the best course of action.
5. Review, analyze, and evaluate the results of the decision.
The first step of the decision-making process is to identify the goal. In the decisions discussed in this course, the quantitative goal will either be to maximize revenues or to minimize costs. The second step is to identify the alternative courses of action to achieve the goal. (In the real world, steps one and two may require more thought and research that you will learn about in advanced cost accounting and management courses.) This chapter focuses on steps three and four, which involve short-term decision analysis: determining the appropriate information necessary for making a decision that will impact the company in the short term, usually 12 months or fewer, and using that information in a proper analysis in order to reach an informed decision among alternatives. Step five, which involves reviewing and evaluating the decision, is briefly addressed with each type of decision analyzed. Though these same general steps could be used in long-term decision analyses, the nature of long-term decisions is different. Short-term decisions are typically operational in nature: making versus buying a component of a product, using scarce resources, selling a product as-is or processing it further into a different product. It is relatively easy to change a short-term decision with minimal impact on the company. Long-term decisions are strategic in nature and typically involve large sums of money. The effects of a long-term decision can have significant financial impact on a company for years. Examples of long-term decisions include replacing manufacturing equipment, building a new factory, or deciding to eliminate a product line. While you’ve learned how managerial accounting classifies, tracks, monitors, and controls costs, managerial accountants also closely analyze revenues, which are less controllable than costs, but are important in these decisions. As stated in the first step of the decision-making process, maximizing revenues is usually one of the goals of an organization.
Therefore, making some short-term decisions requires analysis of both costs and revenues. In carrying out step three of the managerial decision-making process, a differential analysis compares the relevant costs and revenues of potential solutions. What does this involve? First, it is important to understand that there are many types of short-term decisions that a business may face, but these decisions always involve choosing between alternatives. Examples of these types of decisions include determining whether to accept a special order; making a product or component versus buying the product or component; performing additional processing on a product; keeping versus eliminating a product or segment; or determining whether to take on a new project. In each of these situations, the business should compare the relevant costs and the relevant revenues of one alternative to the relevant costs and relevant revenues of the other alternative(s). Therefore, an important step in the differential analysis of potential solutions is to identify the relevant costs and relevant revenues of the decision. What does it mean for something to be relevant? In the context of decision-making, something is relevant if it will influence the decision being made. For example, suppose you have two options for a summer job—either flagging traffic for a road crew or working for a landscaping company doing lawn care. For either job, you will be required to have industrial-grade sound protectors (plugs or headphones) for your ears. This cost would not be relevant because it is the same under either alternative, so it will not influence your decision between the two jobs; it would be considered an irrelevant cost. You also believe your transportation costs will be the same for either job; thus, this would also be an irrelevant cost. However, if you are required to have steel-toed boots for the road work job but can wear any type of work boot for the landscaping job, you would need to consider the difference between the costs, or the differential cost, of these two types of boots. This difference in cost between the two pairs of boots would be designated as a relevant cost because it influences your decision. The two jobs also may have differences in revenues, called a differential revenue. Because the differential revenue influences the decision, it is also a relevant revenue. If both jobs pay the same hourly wage, the wage would be an irrelevant revenue, but if the road crew job offers overtime for any time worked over 40 hours, then this overtime wage has the potential to be a relevant revenue if overtime is a likely occurrence. Looking only at these differences—of both costs and revenues—between the alternatives is known as differential analysis. In conducting these types of analyses between alternatives, the initial focus will be on each quantitative factor of the analysis—in other words, the component that can be measured numerically. Examples of quantitative factors in business include sales growth, number of defective parts produced, or number of labor hours worked. However, in decision-making, it is important also to consider each qualitative factor, which is one that cannot be measured numerically.
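Before turning to those qualitative factors, it may help to lay out the quantitative half of the summer-job comparison as a short Python sketch; the dollar amounts below are hypothetical.

    # Differential analysis of the two summer jobs. Ear protection and
    # transportation cost the same under either alternative, so they are
    # irrelevant and never enter the comparison.
    road_crew = {"boots": 110.00, "expected_overtime_pay": 480.00}
    landscaping = {"boots": 60.00, "expected_overtime_pay": 0.00}

    differential_cost = road_crew["boots"] - landscaping["boots"]
    differential_revenue = (road_crew["expected_overtime_pay"]
                            - landscaping["expected_overtime_pay"])
    net_advantage = differential_revenue - differential_cost
    print(f"Net quantitative advantage of the road crew job: ${net_advantage:.2f}")

The sketch settles only the measurable side of the choice; the qualitative factors still have to be weighed.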
For example, using the same summer job scenario, qualitative factors may include the environment in which you would be working (road dust and tar odors versus pollen and mower exhaust fumes), the amount of time exposed to the sun, the people with whom you will be working (working with friends versus making new friends), and weather-related issues (both jobs are outdoors, but could one job send you home for the day due to weather?). Examples of qualitative factors in business include employee morale, customer satisfaction, and company or brand image. In making short-term decisions, a business will want to analyze both qualitative and quantitative factors. In short-term decision-making, revenues are often easier to evaluate than costs. In addition, each alternative typically has only one possible revenue outcome even though there are many costs to consider for each alternative. How do we know if a cost will have an impact on the decision? The starting point is to understand the various labels that are attached to costs in these decision-making environments. Avoidable versus Unavoidable Costs Management must determine if a cost is avoidable or unavoidable because in the short run, only avoidable costs are relevant for decision-making purposes. An avoidable cost is one that can be eliminated (in whole or in part) by choosing one alternative over another. For example, assume that a bike shop offers its customers custom paint jobs for bikes that the customers already own. If it eliminates the service, the cost of the bike paint could be eliminated. Also assume that it had been employing a part-time painter to do the work. The painter’s compensation would also be an avoidable cost. An unavoidable cost is one that does not change or go away in the short run by choosing one alternative over another. For example, a company might sign a long-term lease on equipment or a production facility. These types of leases typically do not allow for cancellation, and if a lease cannot be cancelled, the required payments are unavoidable costs for the duration of the lease. Variable costs are avoidable costs, since variable costs do not exist if the product is no longer made, or if the portion of the business (such as a segment or division) that generated the variable costs ceases to operate. Fixed costs, on the other hand, may be unavoidable, partially unavoidable, or avoidable only in certain circumstances. Remember that fixed costs tend to remain constant for a period of time and within a relevant range of production and are not easily eliminated in the short run. Therefore, most fixed costs are also unavoidable. If a fixed cost is specific only to one of the alternatives, then that fixed cost may also be avoidable. Avoidable costs are future costs that are relevant to decision-making. Past costs are never avoidable. Recall that we are using a short-term viewpoint to determine whether or not costs are avoidable. In the long run, virtually all costs are avoidable. For example, assume that a company has a long-term, ten-year lease on a production facility that cannot be cancelled. For the first ten years the lease payments would be noncancelable and thus unavoidable, but after ten years, when the lease expires, they would become avoidable. Your Turn AlexCo’s Wagons AlexCo produces collapsible wagons that are popular with beachgoers, shoppers, gardeners, parents, and tailgaters. Sales have been 100,000 wagons per year. The retail selling price of each wagon is $67.00.
To date, AlexCo has produced each of the components used in making the wagons but has been approached by DAL, Inc. with an offer to provide the axle and wheel assembly for $18.75 per assembly. AlexCo’s costs to produce the axle and wheel assembly are $9.00 in direct materials, $6.50 in direct labor, $3.57 in variable overhead, and $2.50 in fixed overhead. Twenty-five percent of the fixed overhead is avoidable if the assembly is produced by DAL. Should AlexCo continue to make the axle and wheel assembly or should it buy the assembly from DAL, Inc.? Solution Ignoring qualitative factors, it would be more cost effective for AlexCo to buy the axle and wheel assembly from DAL, Inc. However, AlexCo should be certain of any qualitative issues and not base its decision solely on the quantitative analysis. Sunk Costs A sunk cost is one that cannot be avoided because it has already occurred. A sunk cost will not change regardless of the alternative that management chooses; therefore, sunk costs have no bearing on future events and are not relevant in decision-making. The basic premise sounds simple enough, but sunk costs are difficult to ignore due to human nature and are sometimes incorrectly included in the decision-making process. For example, suppose you have an old car, a hand-me-down from your grandmother, and last year you spent $1,600 on repairs and new tires and were just told by your mechanic that the car needs $1,200 in repairs to operate safely. Your goal is to have a safe and reliable car. Your alternatives are to get the repairs completed or trade in the car for a newer used car. From a quantitative perspective, you have gathered the following information to help with your decision. The trade-in value of your old car will be the minimum given by the dealer, or $200. The newer used car will require you to make monthly payments of $150 for two years. In analyzing your two alternatives, what costs do you consider? Remember, the $1,600 you have already spent (note the past tense) is a sunk cost; it is a consequence of a past decision. In this example, the relevant costs for each alternative are the following: $1,200 in current repair costs to keep your current car, or $3,400 (the 24 payments of $150 minus the $200 trade-in) to buy a newer used car. Obviously, you also would consider qualitative factors, such as the sentimental value of your grandmother’s car or the excitement of having a newer car. Sunk costs are most problematic for business decisions when they pertain to existing equipment. The book value of an asset (historical cost − accumulated depreciation) is a sunk cost regardless of whether a business keeps the asset or disposes of it in some manner. The cost of the asset occurred in the past and therefore is sunk and irrelevant to the decision at hand. Managers may be reluctant to ignore sunk costs when making decisions, especially if the prior decision to purchase the asset was an unwise one. Often, when management takes a path of action that is not achieving the desired results, managers may continue the same path in the hope that the effect of prior decisions will improve the results. The use of the word prior is a key indicator that information is not relevant to a current decision. Holding on to old decisions or old commitments is common because letting them go forces management to admit it made a bad decision. Future Costs That Do Not Differ Any future cost that does not differ between the alternatives is not a relevant cost for the decision.
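The AlexCo Your Turn above illustrates this idea well: the unavoidable 75 percent of fixed overhead is a future cost that does not differ between making and buying, so it drops out of the analysis. A minimal Python sketch of that comparison, using the figures given in the problem:

    # Relevant cost per axle-and-wheel assembly if AlexCo keeps making it.
    direct_materials = 9.00
    direct_labor = 6.50
    variable_overhead = 3.57
    fixed_overhead = 2.50
    avoidable_fraction = 0.25   # only 25% of fixed overhead goes away if buying

    relevant_cost_to_make = (direct_materials + direct_labor
                             + variable_overhead
                             + avoidable_fraction * fixed_overhead)
    cost_to_buy = 18.75

    print(f"Relevant cost to make: ${relevant_cost_to_make:.3f}")  # $19.695
    print(f"Cost to buy:           ${cost_to_buy:.2f}")
    print("Buy" if cost_to_buy < relevant_cost_to_make else "Make")

Buying saves roughly $0.95 per assembly on relevant costs. Treating the full $2.50 of fixed overhead as avoidable would have overstated the savings, because the unavoidable portion continues under either alternative.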
For example, if a company is considering baking either bagels or doughnuts and both baked goods require $0.30 worth of flour, then the cost of flour would not be a relevant cost in determining which of the two had the highest production cost. Similarly, in the summer job scenario described earlier, the cost of sound protectors would not be relevant to your decision because that cost exists in both scenarios. Another irrelevant cost would be your transportation cost, since that cost is also the same regardless of the job you choose. In another example, if a company is planning to produce either red widgets or blue wingdings and will need to hire 10 additional employees to produce either of the goods, the cost of those 10 employees is irrelevant because it does not differ between the alternatives. Ethical Considerations Johnson & Johnson’s 1982 Recall and Replacement of All Tylenol in the World In 1982, Johnson & Johnson was faced with a large-scale business and ethical dilemma. During the course of several days beginning on September 29, 1982, seven deaths occurred in the Chicago area that were attributed to consuming capsules of Extra-Strength Tylenol. The painkiller was, at the time, Johnson & Johnson’s best-selling product. The company had to decide if the short-term cost of replacing the Tylenol was worth the future cost to its reputation and its customers’ health and safety. At tremendous expense, Johnson & Johnson “placed consumers first by recalling 31 million bottles of Tylenol capsules from store shelves and offering replacement product in the safer tablet form free of charge.” 1 1 Judith Rehak. “Tylenol Made a Hero of Johnson & Johnson: The Recall That Started Them All.” New York Times. Mar. 23, 2002. https://www.nytimes.com/2002/03/23/your-money/IHT-tylenol-made-a-hero-of-johnson-johnson-the-recall-that-started.html As it was later discovered, someone was lacing Tylenol capsules with cyanide and returning the pills in the original packages to store shelves. However, Johnson & Johnson’s decision to incur short-term costs by recalling all of their pills ultimately paid off, as in the long run, the company’s stock value increased and Tylenol sales recovered. One could look at the decision as a matter of opportunity cost: Johnson & Johnson had to choose between two alternatives. The company could have chosen a short-term solution with reduced short-term losses, but by making an ethical business decision, the long-term rewards were greater than the short-term savings. Opportunity Costs When choosing between two alternatives, usually only one of the two choices can be selected. When this is the case, you may be faced with opportunity costs, which are the costs associated with not choosing the other alternative. For example, if you are trying to choose between going to work immediately after completing your undergraduate degree or continuing to graduate school, you will have an opportunity cost. If you choose to go to work immediately, your opportunity cost is forgoing a graduate degree and any potential job limitations or advancements that result from that decision. If you choose instead to go directly to graduate school, your opportunity cost is the income that you could have been earning by going to work immediately upon graduation. Your Turn Costs and Revenue at Carolina Clusters Carolina Clusters, Inc., a candy manufacturer in a resort town, just bought a new taffy pulling machine for $27,000 and is planning to increase the production of salt-water taffy.
Due to the increased production, Carolina is deciding between hiring two part-time college students or one full-time employee. Each college student would work half days totaling 20 hours per week and would earn $12 per hour. The full-time employee would work full days, 40 hours per week, and would earn $12 per hour plus the equivalent of $2 per hour in benefits. Each employee is given two t-shirts to wear as a uniform. The t-shirts cost Carolina $8 each. In addition, Carolina provides disposable hair coverings and gloves for the employees. Each employee uses, on average, six sets of gloves per eight-hour shift or four sets per four-hour shift. One hair covering per shift per person is typical. The cost of a hair covering is $0.05 per covering, and the cost of gloves is $0.02 per pair. Identify any relevant costs, relevant revenues, sunk costs, and opportunity costs that Carolina Clusters needs to consider in making the decision whether to hire two part-time employees or one full-time employee.

Solution

Relevant costs:

$2 per hour for benefits: Only the full-time employee receives benefits, so this cost differs between the alternatives.

$16 for two t-shirts: Hiring one full-time person will result in a $16 expenditure for t-shirts, while hiring two college students would result in $32 in t-shirt expenditures; thus, the relevant t-shirt cost is the $16 difference.

$0.05 for a hair covering: Hiring one full-time person will result in $0.05 per day in hair covering costs, but hiring two college students would result in $0.10 per day in hair covering costs; thus, the relevant hair covering cost is the $0.05 difference.

$0.04 for gloves: Hiring one full-time person will result in $0.12 (6 × $0.02) per day in glove costs, but hiring two college students would result in $0.16 (8 × $0.02) per day in glove costs; thus, the relevant glove cost is the $0.04 difference.

Relevant revenues: None

Sunk costs: $27,000 for the taffy pulling machine

Opportunity costs: None
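The differences identified in the solution can be verified with a short calculation. This is a minimal sketch with illustrative names; total wages (40 paid hours per week at $12 under either alternative) are omitted because they do not differ.

```python
# Relevant-cost comparison: one full-time hire vs. two part-time hires.
# Wages are identical (40 paid hours per week either way), so they are
# not relevant and are left out.

WAGE_HOURS_PER_WEEK = 40          # same total under both alternatives

def weekly_benefits(full_time):
    # Only the full-time employee earns the $2/hour benefit equivalent.
    return WAGE_HOURS_PER_WEEK * 2.00 if full_time else 0.0

def one_time_tshirts(n_employees):
    return n_employees * 2 * 8.00   # two $8 shirts per employee

def daily_supplies(shifts):
    # One $0.05 hair covering per person per shift; gloves are $0.02/pair,
    # six pairs per 8-hour shift or four pairs per 4-hour shift.
    total = 0.0
    for hours in shifts:
        pairs = 6 if hours == 8 else 4
        total += 0.05 + pairs * 0.02
    return total

for label, full_time, n, shifts in [
    ("one full-time", True, 1, [8]),
    ("two part-time", False, 2, [4, 4]),
]:
    print(f"{label}: benefits/week ${weekly_benefits(full_time):.2f}, "
          f"t-shirts ${one_time_tshirts(n):.2f}, "
          f"supplies/day ${daily_supplies(shifts):.2f}")
```

The printed differences match the solution: $16 in t-shirts, $0.05 per day in hair coverings, and $0.04 per day in gloves.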
10.2 Evaluate and Determine Whether to Accept or Reject a Special Order

Both manufacturing and service companies often receive requests to fill special orders. These special orders are typically for goods or services at a reduced price and are usually a one-time order that, in the short run, does not affect normal sales. When deciding whether to accept a special order, management must consider several factors:

The capacity required to fulfill the special order
Whether the price offered by the buyer will cover the cost of producing the products
The role of fixed costs in the analysis
Qualitative factors
Whether the order will violate the Robinson-Patman Act and other fair pricing legislation

Fundamentals of the Decision to Accept or Reject a Special Order

The starting point for making this decision is to assess the company's normal production capacity. The normal capacity is the production level a company can achieve without adding additional production resources, such as additional equipment or labor. For example, if the company can produce 10,000 towels a month based on its current production capacity, and it is currently contracted to produce 9,000 a month, it could not take on a special one-time order for 3,000 towels without adding additional equipment or workers. Most companies do not work at maximum capacity; rather, they function at normal capacity, which is a concept related to a company's relevant range. The relevant range is the quantitative range of units that can be produced based on the company's current productive assets. These assets can include equipment capacity or labor capacity.

Labor capacity is typically easier to increase on a short-term basis than equipment capacity, so the following example assumes that labor capacity is available and considers only equipment capacity. Assume that, based on a company's present equipment, it can produce 20,000 units a month. Its relevant range of production would be zero to 20,000 units a month. As long as the units of production fall within this range, the company does not need additional equipment. However, if it wanted to increase production from 20,000 units to 24,000 units, it would need to buy or lease additional equipment. If production is fewer than 20,000 units, the company would have unused capacity that could be used to produce additional units for its current customers or for new clients.

If the company does not have the capacity to produce a special order, it will have to reduce production of another good or service in order to fulfill the special order or provide another means of producing the goods, such as hiring temporary workers, running an additional shift, or securing additional equipment. As you will learn, not having the capacity to fill the special order creates a different analysis than having sufficient capacity.

Next, management must determine if the price offered by the buyer will result in enough revenue to cover the differential costs of producing the items. For example, if the price does not cover the variable costs of production, then accepting the special order would be an unprofitable decision. Additionally, fixed costs may be relevant if the company is already operating at capacity, as there may be additional fixed costs, such as the need to run an extra shift, hire an additional supervisor, or buy or lease additional equipment. If the company is not operating at capacity—in other words, the company has unused capacity—then the fixed costs are irrelevant to the decision if the special order can be met with this unused capacity.

Special orders create several qualitative issues. A logical issue is the concern for how existing customers will feel if they discover a lower price was offered to the special-order customer. A special order that might be profitable could be rejected if the company determined that accepting the special order could damage relations with current customers. If the goods in the special order are modified so that they are cheaper to manufacture, current customers may prefer the modified, cheaper version of the product. Would this hurt the profitability of the company? Would it affect the company's reputation? In addition to these considerations, sometimes companies will take on a special order that will not cover costs based on qualitative assessments. For example, the business requesting the special order might be a potential client with whom the manufacturer has been trying to establish a business relationship, and the producer is willing to take a one-time loss. However, our coverage of special orders concentrates on decisions based on quantitative factors.

Companies considering special orders must also be aware of the anti–price discrimination rules established in the Robinson-Patman Act. The Robinson-Patman Act is a federal law that was passed in 1936. Its primary intent is to prevent some forms of price discrimination in sales transactions between smaller and larger businesses.

Link to Learning

The Robinson-Patman Act prevents large retailers from purchasing goods in bulk at a greater discount than smaller retailers are able to obtain them.
It helps keep competition fair between large and small businesses and is sometimes called the "Anti-Chain Store Act." Read the LegalDictionary.net full definition and example of the Robinson-Patman Act to learn more.

Sample Data

Franco, Inc., produces dental office examination chairs. Franco has the capacity to produce 5,000 chairs per year and currently is producing 4,000. Each chair retails for $2,800, and the costs to produce a single chair consist of direct materials of $750, direct labor of $600, and variable overhead of $300. Fixed overhead costs of $1,350,000 are met by selling the first 3,000 chairs. Franco has received a special order from Ghanem, Inc., to buy 800 chairs for $1,800 each. Should Franco accept the special order?

Calculations Using Sample Data

Franco is not operating at capacity, and the special order does not take him over capacity. Additionally, all the fixed costs have already been met. Therefore, when evaluating the special order, Franco must determine if the special offer price will meet and exceed the costs to produce the chairs. Figure 10.2 details the analysis. Since Franco has already met his fixed costs with current production, and since he has the capacity to produce the additional 800 units, Franco only needs to consider his variable costs for this order. Franco's variable cost to produce one chair is $1,650 ($750 + $600 + $300). Ghanem is offering to buy the chairs for $1,800 apiece. By accepting the special order, Franco would cover his variable costs and make $150 per chair. Considering only quantitative factors, Franco should accept the special offer.

How would Franco's decision change if the factory were already producing at capacity at the time of the special offer? In other words, assume the corporation is already producing the most it can produce without working more hours or adding more equipment. Accepting the order would likely mean that Franco would incur additional fixed costs. Assume that, to fill the order from Ghanem, Franco would have to run an extra shift, and this would require him to hire a temporary production manager at a cost of $90,000. Also assume Franco would incur additional costs related to maintenance and utilities for this extra shift, estimated at $70,000, and that no other fixed costs would be incurred. As shown in Figure 10.3, in this scenario, Franco would have to charge Ghanem at least $1,850 per chair in order to cover his costs.

Final Analysis of the Decision

The analysis of Franco's options did not consider any qualitative factors, such as the impact on morale if the company is already at capacity and opts to implement overtime or hire temporary workers to fill the special order. The analysis also does not consider the effect on regular customers if management elects to meet the special order by not fulfilling some of the regular orders. Another consideration is the impact on existing customers if the price offered for the special order is lower than the regular price. These effects may create a bad dynamic between the company and its customers, or they may cause customers to seek products from competitors. As in the example, Franco would need to consider the impact of displacing other customers and the risk of losing business from regular customers, such as dental supply companies, if he is unable to meet their orders. The next step is to do an overall cost/benefit analysis in which Franco would consider not only the quantitative but also the qualitative factors before making a final decision on whether or not to accept the special order.
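The quantitative side of both Franco scenarios can be reproduced with a short calculation. This is a minimal sketch of the logic described above; the function and variable names are illustrative.

```python
# Special-order analysis for Franco, Inc.

VARIABLE_COST = 750 + 600 + 300   # direct materials + direct labor + variable OH = $1,650
ORDER_UNITS = 800
OFFER_PRICE = 1_800

def minimum_price(extra_fixed_costs):
    """Lowest acceptable unit price: variable cost plus any incremental
    fixed costs spread over the units in the special order."""
    return VARIABLE_COST + extra_fixed_costs / ORDER_UNITS

# Scenario 1: unused capacity, so no incremental fixed costs.
floor1 = minimum_price(0)                     # $1,650
print(f"With spare capacity, any price above ${floor1:,.0f} adds profit; "
      f"at ${OFFER_PRICE:,}, the order adds ${(OFFER_PRICE - floor1) * ORDER_UNITS:,.0f}.")

# Scenario 2: at capacity, an extra shift adds $90,000 (temporary manager)
# plus $70,000 (maintenance and utilities).
floor2 = minimum_price(90_000 + 70_000)       # $1,850
print(f"At capacity, the minimum acceptable price rises to ${floor2:,.0f}, "
      f"so the ${OFFER_PRICE:,} offer would no longer cover costs.")
```

The $160,000 of incremental fixed costs spread over 800 chairs adds $200 per chair, which is where the $1,850 floor comes from.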
Think It Through

Athletic Jersey Special Orders

Jake's Jerseys has been asked to produce athletic jerseys for a local school district. The special order is for 1,000 jerseys of varying sizes, and the price offered by the school district is $10 less per jersey than the normal $50 market price. The school district interested in the jerseys is one of the largest in the area. What quantitative and qualitative factors should Jake consider in making the decision to accept or reject the special order?

10.3 Evaluate and Determine Whether to Make or Buy a Component

One of the most common outsourcing scenarios is one in which a company must decide whether it is going to make a component that it needs in manufacturing a product or buy that component already made. For example, all of the components of the iPhone are made by companies other than Apple. Ford buys truck and automobile seats, as well as many other components and individual parts, from various suppliers and then assembles them at Ford factories. With each component, Ford must decide if it is more cost effective to make that component internally or to buy that component from an external supplier. This type of analysis is also relevant to the service industry; for example, ADP provides payroll and data processing services to over 650,000 companies worldwide, and a law firm may decide to have certain research activities completed by outside experts rather than hire the necessary staff to keep that function in-house. These are all examples of outsourcing. Outsourcing is the act of using another company to provide goods or services that your company requires.

Many companies outsource some of their work, but why? Consider this scenario: today, while you are driving home from class, one of your car's engine warning lights comes on. You will most likely take your car to an auto repair specialist to have it analyzed and repaired, whereas your grandfather might have popped the hood, grabbed his toolbox, and attempted to diagnose and fix the problem himself. Why? It is often a matter of expertise and sometimes simply a matter of cost benefit. In your grandfather's time, car engines were more mechanical and less electronic, which made learning to repair cars a simpler process that required less expertise and only basic tools. Today, your car has many electronic components, and repairs often require sophisticated monitors to assess the problem and may involve the replacement of computer chips or electronic sensors. Thus, you opt to outsource the repair of your car to someone who has the knowledge and facilities to provide the repair more cost effectively than you could if you did it yourself. Your grandfather likely could have made the repair to his car several decades ago as cheaply as the mechanic, with only a sacrifice of his time; to your grandfather, the cost of his time was worth the benefit of completing the repair himself.

Companies outsource for the same reasons. Many companies have found that it is more cost effective to outsource certain activities, such as payroll, data storage, and web design and hosting, because it is more efficient to pay an outside expert than to hire the appropriate staff to keep a particular task inside the company.

Fundamentals of the Decision to Make or to Buy

As with other decisions, the make-versus-buy decision involves both quantitative and qualitative analysis. The quantitative component requires cost analysis to determine which alternative is more cost effective.
This cost analysis compares the cost to buy the component with the cost to produce it, which allows a decision based on avoidable and unavoidable costs. The costs to produce include direct materials, direct labor, variable overhead, and fixed overhead. If the business chooses to buy the component instead, the avoidable production costs will go away, but the unavoidable costs will remain and must be considered as part of the cost of the buy alternative.

Sample Data

Thermal Mugs, Inc., manufactures various types of leak-proof personal drink carriers. Thermal's T6 container, its most insulated carrier, maintains the temperature of the liquid inside for 6 hours. Thermal has designed a new lid for the T6 carrier that allows for easier drinking and pouring. The cost to produce the new lid is $2.19 per unit. Plato Plastics has approached Thermal and offered to produce the 120,000 lids Thermal will require for current production levels of the T6 carrier, at a unit price of $1.75 each. Is this a good deal? Should Thermal buy the lids from Plato rather than produce them itself?

Initially, the $1.75 price presented by Plato seems much better than the $2.19 it would cost Thermal to produce the lids. However, more information about the relevant costs is necessary to determine whether the offer by Plato is the better offer. Remember that all the variable costs of producing the lid will exist only if the lid is produced by Thermal; thus, the variable costs (direct materials, direct labor, and variable overhead) are all relevant costs that differ between the alternatives. What about the fixed costs? Assume all the fixed costs are not tied directly to the production of the lid and therefore will still exist even if the lid is purchased externally from Plato. This means the fixed costs of $0.51 per unit are unavoidable and therefore not relevant.

Calculations Using Sample Data

When the relevant costs are compared between the two alternatives, it is more cost effective for Thermal to produce the 120,000 units of the T6 lid internally than to purchase them from Plato. The relevant cost to make is the variable cost of $1.68 per unit ($2.19 − $0.51), so by producing the T6 lid internally, Thermal can save $8,400 ($210,000 − $201,600).

How would the analysis change if a portion of the fixed costs were avoidable? Suppose that, of the $0.51 in fixed costs per unit of the T6 lid, $0.12 is associated with interest costs and insurance expenses and thus would be avoidable if the T6 lid were purchased externally rather than produced internally. How does that change the analysis? In this scenario, the relevant cost to make rises to $1.80 per unit, and it is more cost effective for Thermal to buy the T6 lid from Plato, as Thermal would save $6,000 ($216,000 − $210,000).
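Here is a minimal sketch of the comparison, using the per-unit cost structure just described; only the avoidability split varies between the two scenarios, and the names are illustrative.

```python
# Make-or-buy analysis for the Thermal T6 lid.

UNITS = 120_000
BUY_PRICE = 1.75
VARIABLE_COST = 2.19 - 0.51     # $1.68 of the $2.19 full cost is variable

def compare(avoidable_fixed_per_unit):
    # Relevant cost to make = variable cost + any fixed cost that
    # disappears when production stops.
    make = (VARIABLE_COST + avoidable_fixed_per_unit) * UNITS
    buy = BUY_PRICE * UNITS
    choice = "make" if make < buy else "buy"
    print(f"make ${make:,.0f} vs. buy ${buy:,.0f} -> {choice} "
          f"(difference ${abs(make - buy):,.0f})")

compare(avoidable_fixed_per_unit=0.00)   # all fixed costs unavoidable: making saves $8,400
compare(avoidable_fixed_per_unit=0.12)   # $0.12 avoidable: buying saves $6,000
```

The two calls reproduce the $8,400 and $6,000 differences and show how a single change in cost classification flips the decision.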
Final Analysis of the Decision

The difference in these two presentations of the data emphasizes the importance of defining which costs are relevant, as improper cost identification can lead to bad decisions. These analyses considered only the quantitative factors in a make-versus-buy decision, but there are qualitative factors to consider as well, including:

Will the T6 lid made by Plato meet the quality requirements of Thermal?
Will Plato continue to produce the T6 lid at the $1.75 price, or is this a teaser rate to obtain the business, with the plan for the rate to go up in the future?
Can Plato continue to produce the quantity of the lids desired? If more or fewer are needed from Plato, is the adjusted production level obtainable, and does it affect the cost?
Does using Plato to produce the lids displace Thermal workers or hamper morale?
Does using Plato to produce the lids affect the reputation of Thermal?

In addition, if the decision is to buy the lid, Thermal is dependent on Plato for quality, timely delivery, and cost control. If Plato fails to deliver the lids on time, this can negatively affect Thermal's production and sales. If the lids are of poor quality, returns, replacements, and the damage to Thermal's reputation can be significant. Without long-term agreements on price increases, Plato can increase the price it charges Thermal, thus making the entire drink container more expensive and less profitable. However, buying the lid likely means that Thermal has excess production capacity that can now be applied to making other products. If Thermal chooses to make the lid, this consumes some of the productive capacity and may affect the relationship Thermal has with the outside supplier if that supplier is already working with Thermal on other products.

Make versus buy, one of many outsourcing decisions, should involve assessing all relevant costs in conjunction with the qualitative issues that affect the decision or arise because of the choice. Although it may appear that these types of outsourcing decisions are difficult to resolve, companies throughout the world make them daily as part of their strategic plans, and therefore each company must weigh the advantages and disadvantages of outsourcing production of goods and services. Some examples are shown in Table 10.2.

Table 10.2 Advantages and Disadvantages of Outsourcing

Advantages of Outsourcing:
Utilizes external expertise, removing the need for in-house expertise
Frees up capacity for other uses
Frees up capital for other uses
Allows management to focus on competitive strengths
Transfers some production and technological risks to the supplier

Disadvantages of Outsourcing:
Takes away control over quality and timing of production
May limit ability to upsize or downsize production
May have hidden costs and/or a lack of stability of price
May diminish innovation
Often makes it difficult to bring the production back in-house once it has been removed

In an outsourcing decision, the relevant costs and qualitative issues should be analyzed thoroughly. If there are no qualitative issues that affect the decision, and the leasing or purchasing price is less than the relevant (avoidable) costs of producing the good or service in-house, the company should outsource the product or service. The following example demonstrates this issue for a service entity.

Lake Law has ten lawyers on staff who handle workers' compensation and workplace discrimination lawsuits. Lake has an excellent success rate and frequently wins large settlements for its clients. Because of the size of the settlements, many clients are interested in establishing trusts to manage the investing and distribution of the funds. Lake Law does not have a trust or estate lawyer on staff and is debating between hiring one and using an attorney at a nearby law firm that specializes in wills, trusts, and estates to handle the trusts of Lake's clients. Hiring a new attorney would require $120,000 in salary for the attorney, an additional 20% in benefits, a legal assistant for the new attorney for 20 hours per week at a cost of $20 per hour, and conversion of a storage room into an office. Lake spent $100,000 on redecorating the offices last year and has sufficient furniture for a new office.
The attorney at the nearby firm would charge a retainer of $50,000 plus $200 per hour worked on each trust; the retainer is in addition to the $200 per hour charge. The average trust takes 10 hours to complete, and Lake estimates approximately 50 trusts per year. In addition, the external attorney would charge $500 per trust to cover office expenses and filing fees. Which option should Lake choose?

To determine the solution, first find the relevant costs of hiring internally and of using an external attorney. Based on the quantitative analysis, Lake should hire an estate attorney to have on staff. For the year, the firm would save $10,200 ($164,800 for the internal hire versus $175,000 with the external attorney) by going with the internal hire. Other potential advantages are that an in-house attorney could complete more than the estimated 50 trusts without incurring additional costs, and keeping the work in-house helps to build the relationship between the firm and its clients. A disadvantage is that if there is not sufficient work to keep the in-house attorney busy, the firm would still have to pay the $120,000 salary plus the additional $44,800 for benefits and the legal assistant's salary, even if the attorney is working at less than full capacity.
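The $164,800 and $175,000 totals can be traced with a short calculation. This sketch assumes a 52-week year for the legal assistant's hours, which is consistent with the totals quoted above; the names are illustrative.

```python
# In-house hire vs. external attorney for Lake Law's trust work.

TRUSTS_PER_YEAR = 50
HOURS_PER_TRUST = 10

# Internal hire: salary, 20% benefits, and a 20 hr/week legal assistant at $20/hr.
salary = 120_000
benefits = 0.20 * salary                      # $24,000
assistant = 20 * 20 * 52                      # $20,800 over a 52-week year (assumed)
internal = salary + benefits + assistant      # $164,800

# External attorney: $50,000 retainer, $200/hour, and $500 per trust in fees.
external = (50_000
            + TRUSTS_PER_YEAR * HOURS_PER_TRUST * 200
            + TRUSTS_PER_YEAR * 500)          # $175,000

print(f"Internal: ${internal:,.0f}  External: ${external:,.0f}")
print(f"Savings from hiring in-house: ${external - internal:,.0f}")   # $10,200
```

Note that the $100,000 spent on redecorating last year is a sunk cost and correctly appears nowhere in the comparison.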
Link to Learning

The iPhone is the ultimate example of outsourcing. Though created in the United States, it is produced all around the globe, with thousands of parts supplied by over 200 suppliers—none of which is Apple. Read this article from The New York Times on where parts for the iPhone are made to learn how an iPhone gets from the design phase in the United States to production of components around the world, to assembly in China, and then back to the United States for sale in a retail store.

10.4 Evaluate and Determine Whether to Keep or Discontinue a Segment or Product

Companies tend to divide their organization along product lines, geographic locations, or other management needs for decision-making and reporting. A segment is a portion of the business that management believes has sufficient similarities in product lines, geographic locations, or customers to warrant reporting that portion of the company as a distinct part of the entire company. For example, General Electric, Inc., has nine segments and the Walt Disney Company has four segments. Table 10.3 shows these segments. 3

Table 10.3 Examples of Company Segments

General Electric Segments: Additive; Aviation; Capital; Digital; Healthcare; Lighting; Power; Renewable Energy; Transportation
Disney Segments: Media Networks; Parks, Experiences, and Consumer Products; Studio Entertainment; Direct to Consumer and International

3 GE Businesses. n.d. https://www.ge.com/; Disney. "Our Businesses." n.d. https://www.thewaltdisneycompany.com/about/#our-businesses

As part of the normal operations of a business, managers make decisions such as whether to keep producing a product, whether to continue operating in certain areas, or whether to close entire segments of their operations. These are historically some of the most difficult decisions that managers make. Examples of these types of decisions include Macy's decision to close 100 stores in 2016 due to increased competition from online retailers such as Amazon.com 4 and Delta Air Lines' decision to eliminate 16 routes to save costs. 5 What information does management use in making these types of decisions?

4 Hayley Peterson. "Macy's May Shut Down Even More Stores." Business Insider. May 12, 2017. http://www.businessinsider.com/macys-might-shut-down-more-stores-2017-5
5 Jason Williams. "Delta Downsizing Flights to 14 More Cities." Cincinnati.com. Mar. 11, 2015. http://www.cincinnati.com/story/news/2015/03/10/delta-cincinnati-airline-cuts-kentucky/24701445/

As with other decisions, management must consider both the quantitative and qualitative aspects. In choosing between alternatives—that is, in choosing between keeping and eliminating the product, segment, or service—the relevant revenues and costs should be analyzed. Remember that relevant revenues and costs are those that differ between alternatives. Often, the keep-versus-eliminate decision arises because the product or segment appears to be generating less of a profit than in prior periods or is unprofitable. In these situations, the product or segment may produce a positive contribution margin but may appear to have a lower or negative profit because of the allocation of common fixed costs.

Fundamentals of the Decision to Keep or Discontinue a Segment or Product

Two basic approaches can be used to analyze data in this type of decision. One approach is to compare contribution margins and fixed costs. In this method, the contribution margins with and without the segment (or division or product line) are determined. The two contribution margins are compared, and the alternative with the greater contribution margin is chosen because it provides the bigger contribution toward meeting fixed costs. The second approach involves calculating the total net income from retaining the segment and comparing it to the total net income from dropping the segment; the company would then proceed with the alternative that has the higher net income. To perform these net income calculations, the company needs more information than it would need for the contribution margin approach, which does not consider the costs and revenues that are the same between the alternatives.

Think It Through

Allocating Common Fixed Costs

Acme, Co., has three retail divisions: Small, Medium, and Large. Sales, variable costs, and fixed costs are given for each of the divisions. Included in the fixed costs are $5,400,000 in allocated common costs, which are split evenly among the three divisions. Is an even split the best way to allocate those costs? Why or why not? What other ways might Acme consider using to allocate the common fixed costs?

Sample Data

Suppose SnowBucks, Inc., has three product lines: snow boots, snow sporting equipment, and a clothing line for winter sports. It has been brought to senior management's attention that the snow boot product line is unprofitable. Figure 10.4 shows the data presented to senior management. Upon initial review, it appears that the snow boot product line is unprofitable. Should this product line be eliminated?

To adequately analyze this situation, a proper analysis of the relevant revenues and costs must be made. The functional income statement in Figure 10.4 does not separate relevant from non-relevant costs. In conducting the analysis, the accounting team discovers that each product line is allocated certain costs over which the product line managers have no control. These allocated costs are typically associated with areas of the company that do not generate revenue but are necessary for the running of the organization, such as salaries for executives, human resources, and accounting at headquarters.
The cost of these parts of the organization must somehow be shared with the revenue-generating portions of the business. Companies often allocate these costs to other parts of the organization based on some formula, such as dividing the total costs by the number of divisions or segments, as a percentage of total revenue, or as a percentage of total square footage. SnowBucks currently allocates these costs equally to the three product lines, and all the fixed selling and administrative expenses are considered allocated costs. In addition, the fixed manufacturing expenses represent factory rent, depreciation, and insurance, and all these costs will continue to exist regardless of whether the snow boot division continues. However, included in the fixed manufacturing expenses is the $75,000 salary of a sales supervisor for each division. This is an avoidable fixed cost, as this cost would no longer exist if any division ceased operating.

Calculations Using Sample Data

Based on the new information, the accounting team prepared a new analysis using a product line margin.

Final Analysis of the Decision

This new analysis shows that when the relevant costs and revenues are considered, it is apparent the snow boot product line is contributing toward meeting the fixed costs of the organization and therefore toward overall corporate profitability. The reason the snow boot product line was showing an operating loss was the allocation of common costs. Consideration should be given to the way allocated costs are assigned to the various products, to determine whether the allocation is logical or whether another allocation method, such as one based on each product line's percentage of total corporate sales, would provide a better matching of costs and the services provided by corporate headquarters. Management should also consider qualitative factors, such as the impact of removing one product line on the overall sales of the other products. If customers commonly buy snow boots and skis together, then discontinuing the snow boot line could impact the sales of snow skis.
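Because the Figure 10.4 amounts are not reproduced here, the sketch below uses hypothetical numbers to show the mechanics: a line with a positive product line margin can still report an operating loss once allocated common costs are layered on, and dropping it would reduce company income by the amount of that margin.

```python
# Keep-or-drop analysis for a product line (hypothetical figures).

sales            = 500_000
variable_costs   = 300_000
avoidable_fixed  =  75_000    # e.g., the line's supervisor salary
allocated_common = 200_000    # headquarters costs spread over the lines

contribution_margin = sales - variable_costs                    # 200,000
product_line_margin = contribution_margin - avoidable_fixed     # 125,000
reported_operating_income = product_line_margin - allocated_common  # -75,000

print(f"Reported operating income: ${reported_operating_income:,}")
print(f"Product line margin:       ${product_line_margin:,}")

# Dropping the line removes its sales, variable costs, and avoidable fixed
# costs, but the allocated common costs remain and shift to the other lines.
if product_line_margin > 0:
    print(f"Keep the line: dropping it would reduce company income "
          f"by ${product_line_margin:,}.")
```

The decision rule is the product line margin, not the reported operating income, because only the avoidable amounts change if the line is dropped.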
Your Turn

Disney's Segments

View Walt Disney Company's 2018 full year earnings report on their website. Scroll to the section on Segment Results and answer these questions:

How many segments does Disney have?
Which segment had the highest revenue in 2018?
Which segment had the highest operating income in 2018?
Which segment has shown the most revenue growth between 2017 and 2018?
How many segments showed growth in operating income between 2017 and 2018, and how many showed a decline?
Which segment has shown the least operating income growth between 2017 and 2018?

Solution

Four: Media Networks, Parks & Resorts, Studio Entertainment, and Consumer Products & Interactive Media
Media Networks
Media Networks
Studio Entertainment
Two segments (Parks & Resorts and Studio Entertainment) showed operating income growth, while two segments (Media Networks and Consumer Products & Interactive Media) showed a decline in operating income between 2017 and 2018.
Consumer Products & Interactive Media

10.5 Evaluate and Determine Whether to Sell or Process Further

One major decision a company has to make is to determine the point at which to sell its product—in other words, when it is no longer cost effective to continue processing the product before sale. For example, in refining oil, the refined oil can be sold at various stages of the refining process. The point at which some products are removed from production and sold while others receive additional processing is known as the split-off point. As you have learned, the relevant revenues and costs must be evaluated in order to make the best decision for the company. In making the decision, a company must consider the joint costs, or those costs that have been shared by products up to the split-off point. In some manufacturing processes, several end products are produced from a single raw material input. For example, once milk has been processed, it can be sold as milk or it can be processed further into cheese, yogurt, cream, or ice cream. The costs of processing the milk to the stage at which it can be sold or processed further are the joint costs. These costs are allocated among all the products that are sold at the split-off point as well as those products that are processed further. Ice cream has the basic costs of the milk plus the costs of processing it further into ice cream.

As another example, suppose a company that makes leather jackets realizes it has a reasonable amount of unused leather from the cutting of the patterns for the jackets. Typically, this scrap leather is sold, but the company is beginning to consider using the scrap to make leather belts. How would the company allocate the costs incurred from processing and preparing the leather before cutting it if it decides to make both the jackets and the belts? Would it be financially beneficial to process the scrap leather further into belts?

Fundamentals of the Decision to Sell or Process Further

When facing the choice of selling or processing further, the company must determine the revenues that would be received if the product is sold at the split-off point versus the net revenues that would be received if the product is processed further. This requires knowing the additional costs of further processing. In general, if the differential revenue from further processing is greater than the differential costs, then it will be profitable to process a joint product after the split-off point. Any costs incurred prior to the split-off point are irrelevant to the decision to process further, as those are sunk costs; only future costs are relevant.

Even though joint product costs are common costs, they are routinely allocated to the joint products. A potential reason for this treatment is the GAAP (generally accepted accounting principles) requirement that all production costs must be inventoried. Be aware that some complexities can arise when allocating joint product costs. The first issue is that joint production costs can be allocated based on varying production and sales characteristics or assumptions: a physical measurement method, a relative sales value method at the point of split-off, or a net realizable value method based on additional processing after the split-off point can all be used. A second complexity is that eliminating the production of one or more joint products will not always enable the company to reduce joint production costs. Because of the mechanics of the common cost allocation process, such an action will only work if reductions are made in all of the joint products collectively. If only some of the joint products are eliminated, the remaining joint product or products will absorb all of the joint product costs.
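To illustrate how the choice of allocation method drives the numbers, here is a minimal sketch of two of the approaches named above, using invented quantities and sales values for two joint products of a dairy; all figures are purely illustrative.

```python
# Allocating $120,000 of joint costs between two joint products
# (hypothetical data) under two common methods.

JOINT_COSTS = 120_000

# (product, pounds produced, sales value at split-off)
products = [("cream", 20_000, 90_000), ("skim milk", 60_000, 30_000)]

total_lbs = sum(lbs for _, lbs, _ in products)
total_value = sum(value for _, _, value in products)

for name, lbs, value in products:
    physical = JOINT_COSTS * lbs / total_lbs        # physical measurement method
    relative = JOINT_COSTS * value / total_value    # relative sales value method
    print(f"{name}: physical-measure share ${physical:,.0f}, "
          f"sales-value share ${relative:,.0f}")
```

With these numbers, cream absorbs $30,000 of joint cost under the physical measure but $90,000 under relative sales value, which is why reported product profitability can shift dramatically with the method chosen even though total joint costs are unchanged.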
An example of this last issue might help clarify the point. Assume that you have a lumber production company that cuts trees and prepares board lumber for housing and furniture, and that also prepares sawdust and wood scraps that are used in the production of particle board. Assume that in a given year the company experienced $1,100,000 in joint costs. Using one of the three previously mentioned cost allocation methods, the company allocated $1,000,000 in joint costs to the production of board lumber and $100,000 to the production of wood scraps and sawdust. Assume that in the next year it also experienced $1,100,000 in joint costs. However, in that year, the company lost its buyer of wood scraps and sawdust, so it had to give both of them away without generating any revenue. In this case, the company would still incur $1,100,000 in joint costs, but the entire amount would now be allocated to the production of the board lumber. The only way to reduce the joint costs is to actually spend less than $1,100,000 on the joint production process; dropping one joint product merely reallocates the costs.

Your Turn

Luxury Leathers

Luxury Leathers, Inc., produces various leather accessories, such as belts and wallets. In the process of cutting out the leather pieces for each product, 400,000 pounds of scrap leather is produced. Luxury has been selling this leather scrap to Sammy's Scrap Procurement for $2.25 per pound. Luxury has an employee suggestion box, and one of the suggestions was to use most of the scrap to make leather watch bands. The management of Luxury is interested in this idea, as the machines necessary to produce the watch bands are the same as the ones used in making belts and would merely need reprogramming for the cutting and stitching processes on the watch bands. The process to attach the buckle would be the same for the watch bands as it was for the belts, so this would require no additional worker training. Luxury would have additional costs for new packaging and for the supply and insertion of the pins that connect the band to the watch. The total variable cost to produce a watch band would be $2.85. Fixed costs would increase by $85,000 per year for the lease of the packaging equipment, and Luxury estimates it could produce and sell 100,000 watch bands per year. Finished watch bands could be sold for $15.00 each. Should Luxury continue to sell the scrap leather, or should Luxury process the scrap into watch bands to sell?

Solution

Luxury should process the leather scrap further into watch bands. Not only does processing the scrap further increase operating income, it also offers Luxury another product line that may draw customers to its other products.

Sample Data

Ainsley's Apples grows organic apples and sells them to national grocery chains, local grocers, and markets. Ainsley purchased a machine for $450,000 that sorts the apples by size. The largest apples are sold as loose apples to the various stores, the medium-sized apples are bagged and sold to the grocers in their bagged state, and the smallest apples are sold to deep discounters or to a local manufacturing plant that processes the apples into applesauce. Ainsley is considering keeping the small apples and processing them into apple juice that would be sold under Ainsley's own label to local grocers. The small apples currently sell to the deep discounters and local manufacturers for $1.10 per dozen. The variable cost to prepare the small apples for sale, including transporting the apples, is $0.30 per dozen. Ainsley can sell each gallon of organic apple juice for $3.50 per gallon.
It takes two dozen small apples to make one gallon of apple juice. The cost to produce the organic apple juice will be $0.60 in variable cost per gallon plus $200,000 in fixed costs for the one-year lease of the equipment needed to make and bottle the juice. Ainsley normally harvests and sells 2,400,000 small apples (200,000 dozen) per year. Should Ainsley continue to sell the small apples to the local grocers and the applesauce manufacturer, or should Ainsley process the apples further into organic apple juice?

Calculations of Sample Data

In order to decide, Ainsley conducts an analysis of the relevant revenues and costs for the two alternatives: sell the small apples at split-off or process them further into organic apple juice. Ainsley should continue to sell the apples at split-off rather than process them further, as selling them generates a $160,000 increase in operating income compared to only $90,000 if she processes the apples further.
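The two operating income figures follow directly from the data given. Here is a minimal sketch of the comparison; the names are illustrative.

```python
# Sell at split-off vs. process further for Ainsley's small apples.

DOZENS = 2_400_000 / 12            # 200,000 dozen small apples per year

# Alternative 1: sell at split-off.
sell_income = DOZENS * (1.10 - 0.30)               # $160,000

# Alternative 2: process into juice (two dozen apples per gallon).
gallons = DOZENS / 2                               # 100,000 gallons
juice_income = gallons * (3.50 - 0.60) - 200_000   # $90,000

print(f"Sell at split-off:  ${sell_income:,.0f}")
print(f"Process into juice: ${juice_income:,.0f}")
```

Note that the $450,000 sorting machine is a cost of the joint process before split-off, so it is sunk with respect to this decision and is excluded.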
Final Analysis of the Decision

When making the decision to sell or process further, the company also must consider that processing a product further may create a new successful market, or it may undercut sales of already existing products. For example, a furniture manufacturer that sells unfinished furniture may lose sales of the unfinished pieces if it decides to stain some pieces and sell them as finished products.

Think It Through

Disposing of Coffee Grounds

Return to Why It Matters in this chapter. With the knowledge you have gained thus far, answer these questions:

From your perspective, what are the alternatives for the used coffee grounds?
For the alternatives listed in question 1, what information do you need to evaluate between the alternatives?
What type of analysis would you do to choose between alternatives?
What qualitative factors might influence your decision regarding which alternative to select?
Do you think the quantitative and qualitative components will both lead you to the same decision? Why or why not?

10.6 Evaluate and Determine How to Make Decisions When Resources Are Constrained

Companies use various resources to be productive. These resources, which include time, labor, space, and machines, are limited, thus constraining the ability of a company to have unlimited productive capacity. For example, a retail store is constrained by the amount of floor space available to display its goods, while a law office may be constrained by the number of hours the paralegal team can feasibly work. These constraints require companies to make decisions on the best ways to allocate their resources in a way that maximizes the benefit to the firm. This situation is especially true when a company is operating at capacity or makes multiple products or provides multiple services. The question of which products should be made, and how many of each, is a common constraint problem.

For example, consider a business that runs at capacity, making four products by running two eight-hour shifts per day, seven days a week for 50 weeks per year. This business is limited to 5,600 working hours per year (8-hour shifts × 2 shifts per day × 7 days per week × 50 weeks) unless a third shift is added. Adding a third shift may be prohibitive for any number of reasons, including local ordinances that prevent operating twenty-four hours a day, Environmental Protection Agency constraints, or the down-time of the machines that is required several hours a day for maintenance and calibration. What is the best way for this company to use these work hours? Which products should it produce first, and how many of each should it produce?

These types of situations constrain, or limit, management's ability to use its facilities and workforce. Having limited availability of a resource, such as time, labor, or machine-hours, means that item becomes a scarce resource. A constraint is a scarce resource that limits the output or productive capacity of the organization. Ordinarily, there are very few actual constraints in any process; sometimes there is only one. However, the existence of a constraint can have a major effect on the productivity of an organization. This fact applies to all types of entities, such as production facilities or service providers. One way to view this issue is to consider the old cliché that a chain is only as strong as its weakest link: when trying to measure or estimate an organization's maximum efficiency, its results will often be reduced by the overall negative effects of the constraints. When a constraint slows production, it is called a bottleneck. Managers are often faced with the problem of deciding how to best use a scarce resource to prevent bottlenecks. Under the constraint of limited resources, how do managers make decisions?

Fundamentals of How to Make Decisions When Resources Are Constrained

As with other short-term decisions, a company must consider the relevant costs and revenues when making decisions when resources are constrained. Whether the organization facing a constraint is a merchandising, manufacturing, or service organization, the initial step in allocating scarce or constrained resources is to determine the unit contribution margin, which is the selling price per unit minus the variable cost per unit, for each product or service. The company should produce or provide the products or services that generate the highest contribution margin first, followed by those with the second highest, and so forth. The total contribution margin will be maximized by promoting those products or accepting those orders with the highest contribution margin in relation to the scarce resource. In other words, products or services should be ranked based on their unit contribution margin per unit of the production constraint, which is the unit contribution margin divided by the amount of the constrained resource each unit requires.

If constraints are not managed, a bottleneck usually results, meaning that production slows and a back-up occurs at stages prior to the bottleneck. For example, in producing boxes of cereal, if the cereal is produced at a rate of 1,000 ounces per minute but the bagging machines can only bag 800 ounces per minute, this will create a bottleneck. Similarly, if on a Saturday morning before a home football game, the local grocery store has ten checkout lines but only opens four of them, long lines will result from the constraint of too few open checkout lanes. Management must decide how many scarce resources (employees, in this example) to pull from stocking the shelves to running cash registers. It may be difficult to see how bottlenecks affect profitability, as they appear to be more of a timing or throughput issue, but bottlenecks can affect profitability in a number of ways. Bottlenecks at the grocery store can cause customers to leave and shop elsewhere, or can negatively affect the reputation of the store, which can impact future sales.
In the cereal example, bottlenecks in the packaging area can slow the delivery of boxes of cereal to distributors and individual stores. Poor or inconsistent delivery may drive customers to purchase from other cereal manufacturers, which would have a definite impact on profitability.

A common problem relating to constraints occurs in multi-product production environments. Management will need to evaluate the constraints to determine the best mix of products that will minimize the effects of the constraints. In addition to making sure that the best product mix is chosen, managers should seek ways to increase the effective capacity of the constraint. Conceptually, there are two ways a company can do this: increase the rate of output at the bottleneck, or increase the time available at the bottleneck. Increasing the capacity of the constraint or bottleneck is also called relaxing the constraint or elevating the constraint. Some specific examples of ways to relax the constraint include:

Keeping the production facilities open longer hours. This may allow the work flowing through the bottleneck area to be spread over more time and thus prevent the bottleneck from occurring. However, this may require paying workers overtime pay.
Moving additional workers to the bottleneck area, if working extra hours is not a viable option, as long as the areas from which they were moved are adequately covered and additional problem areas do not result. Instead of using current workers, additional staff may be hired to smooth the work flow through the bottleneck area.
Outsourcing some or all of the work in the area of the bottleneck. It may be cheaper and more cost effective to buy parts or components than to slow production due to the bottleneck.
Redesigning the production process to prevent the bottleneck, by adding more resources to eliminate the bottleneck, reorganizing the process to distribute the bottleneck-causing activities to different parts of the production process, or managing processing times at other stages prior to the bottleneck to help prevent the bottleneck from occurring.
Ensuring a minimal number of defects and rework, since defects and rework typically slow the production process and thus add to the bottleneck.

Preventing and minimizing bottlenecks can have significant benefits for the bottom line of the company. The reduction of bottlenecks allows the company to move more products through the production phase and thus have them ready to sell.

Ethical Considerations

When to Include a Lifesaving Option: The Case of the Ford Pinto

The case of the fiery Ford Pinto demonstrates that more than cost and revenue should be considered when making an ethical business decision. In the early 1970s, the Ford Motor Company set out to build a Pinto for less than $2,000. Cars were much less expensive then, and Ford had to determine whether or not to include a component part that cost around $10. Given the tight cost target, Ford decided not to include the component, a rubber bladder for the gas tank. However, in rear-end collisions at over 21 miles per hour, the rubber bladder component functions to prevent the gas tank from flooding the interior of the car with gasoline and gas fumes. Because of the decision not to include the component, a number of Pintos involved in collisions exploded into flames, injuring and sometimes killing the occupants.
Although Ford was aware of the defect, the company's cost/benefit analysis indicated it was less expensive to build Pintos without the rubber bladder, even when including expected reimbursement costs for anyone injured or killed. However, the decision to allow a defective product to be built in order to reduce overall costs caused a significant hit to Ford's reputation. Ultimately, the litigation costs for knowingly constructing a defective car were higher than the original cost of including the rubber bladder component. While Ford's decision seemed profitable in the short term, the financial analysis could have been improved if it had also taken long-term impacts into account.

Sample Data

Wood World, Inc., produces wooden desks, chairs, and bookcases. These items are produced using the same machines, and there is a maximum of 80,000 machine-hours available during the year. The production time and cost information for the three items accompanies the example. Wood World is limited in producing its products by the number of available machine-hours. Orders have been received for 60,000 desks, 48,000 chairs, and 40,000 bookcases, which would require 94,000 machine-hours to produce. Since there are not enough machine-hours available to fill all of the orders, which orders should Wood World fill first?

Calculations Using Sample Data

To address this question, Wood World must find the contribution margin per machine-hour for each product, since machine-hours are the constraining factor for production.

Final Analysis of the Decision

Wood World should fulfill the orders for bookcases first, desks second, and chairs last, because the bookcases provide the highest contribution margin per machine-hour, followed by desks and then chairs. Maximizing the contribution margin per unit of the constraint, in this case per machine-hour, is the best way for Wood World to manage the constraint. How many of each item will be produced? Based on contribution margin and the constraint of machine-hours, Wood World should fill all 40,000 of the bookcase orders first, then fill the 60,000 desk orders, and finally fill 20,000 of the chair orders with the machine-hours that remain.

Are there any qualitative issues that Wood World should consider? One concern may be that customers who typically buy a desk and chair together may not be able to do so if chair production is affected by the bottleneck. Another qualitative issue, in keeping with the furniture example, is that a company might find producing dining room tables to be significantly more profitable than matching chairs or matching cupboards. However, it will still be required to produce the less profitable chairs and cupboards, because many consumers will want to buy all three items as a set.

The benefits of effectively managing constraints can be enormous. Managers need to understand the positive impact that effective management of constrained resources can have on the company's bottom line. The contribution margin per unit of the scarce resource can be used to assess the value of relaxing the constraint. When there is unsatisfied demand for a single product because of a constraint, the value of additional time on the constraint is simply the contribution margin per unit of the scarce resource for that product. When there are two or more products with unsatisfied demand, the value of additional time on the bottleneck is the largest contribution margin per unit of the scarce resource for any product whose demand is unsatisfied.
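Returning to the Wood World example: since the per-unit figures are not reproduced here, the sketch below uses assumed machine-hours per unit (1.0 for desks, 0.5 for chairs, 0.25 for bookcases, which is consistent with the 94,000-hour total quoted above) and assumed contribution margins. Only the ranking logic, not the specific dollar amounts, should be taken from it.

```python
# Ranking products by contribution margin per machine-hour (assumed data).

HOURS_AVAILABLE = 80_000

# (product, units ordered, machine-hours per unit, contribution margin per unit)
# Hours per unit are chosen to match the 94,000-hour total in the text;
# contribution margins are purely illustrative.
orders = [
    ("desk",     60_000, 1.00, 40.0),
    ("chair",    48_000, 0.50, 15.0),
    ("bookcase", 40_000, 0.25, 15.0),
]

# Fill orders in descending order of contribution margin per machine-hour.
orders.sort(key=lambda o: o[3] / o[2], reverse=True)

hours_left = HOURS_AVAILABLE
for name, demand, hrs_per_unit, cm in orders:
    units = min(demand, int(hours_left // hrs_per_unit))
    hours_left -= units * hrs_per_unit
    print(f"{name}: CM/hour ${cm / hrs_per_unit:.2f}, produce {units:,} of {demand:,}")
```

With these assumptions the ranking is bookcases ($60 per hour), desks ($40), then chairs ($30), reproducing the production plan in the text: all 40,000 bookcases, all 60,000 desks, and 20,000 of the 48,000 chairs.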
In many situations involving conflicting time constraints, an evaluation of multiple bottlenecks might identify a viable solution. While many bottleneck issues and their solutions can be somewhat complex, others might be addressed more simply; for example, in some cases the problem might be solved by adding a work shift.

Concepts In Practice

Distributing Caseloads at a Law Firm

As a new business school graduate, you landed your first job in the human resources department of a large national law firm in New York City. Your position is providing you with many opportunities to learn about the company and the various tasks for which the human resources department is responsible. Your most recent assignment is to determine the best way to distribute caseloads to the junior-level attorneys based on their areas of expertise and to assign paralegal hours to assist the junior-level attorneys. What are the constraints with which you are dealing? What information do you need to properly complete this assignment? What type of analysis would be required to effectively allocate caseload hours?
business_law_i_essentials
Chapter Outline 2.1 Negotiation 2.2 Mediation 2.3 Arbitration Introduction Learning Outcome Explain the theory, practice, and law of disputes and resolution.
[ { "answer": { "ans_choice": 0, "ans_text": "a" }, "bloom": null, "hl_context": "Negotiation , mediation , and arbitration are alternatives form of dispute resolution that attempt to help disagreeing parties avoid the time and expense of court litigation . <hl> While negotiation is involved in all three forms , mediation and arbitration involve a neutral third party to help the parties find a solution . <hl> Frameworks that consider self-interest , as opposed to interest in the other party , can help negotiators craft successful negotiation approaches . Mediators , arbitrators , and groups of arbitrators all follow certain steps and play an important role in trying to help parties reach common ground and avoid court proceedings . Mediators who establish rapport with disputing parties can facilitate dispute resolution , as mediation is very much solution-focused . Arbitrators must often decide upon awards when parties cannot reach an agreement . Even when an aggrieved party attains an arbitration award , it may still have to pursue the other party by using a variety of legal techniques to enforce the payment or practice stipulated by the award . Staying current with federal and state laws associated with negotiation proceedings is essential for businesses looking to maximize their relational and outcome goals . <hl> Joint Discussion : The mediator will try to get the two disagreeing parties to speak to one another and will guide the discussion toward a mutually amicable solution . <hl> This part of the mediation process usually identifies which issues need to be resolved and explores ways to address the issues . <hl> Mediation is a method of dispute resolution that relies on an impartial third-party decision-maker , known as a mediator , to settle a dispute . <hl> While requirements vary by state , a mediator is someone who has been trained in conflict resolution , though often , he or she does not have any expertise in the subject matter that is being disputed . Mediation is another form of alternative dispute resolution . It is often used in attempts to resolve a dispute because it can help disagreeing parties avoid the time-consuming and expensive procedures involved in court litigation . Courts will often recommend that a plaintiff , or the party initiating a lawsuit , and a defendant , or the party that is accused of wrongdoing , attempt mediation before proceeding to trial . This recommendation is especially true for issues that are filed in small claims courts , where judges attempt to streamline dispute resolution . Not all mediators are associated with public court systems . There are many agency-connected and private mediation services that disputing parties can hire to help them potentially resolve their dispute . The American Bar Association suggests that , in addition to training courses , one of the best ways to start a private mediation business is to volunteer as a mediator . Research has shown that experience is an important factor for mediators who are seeking to cultivate sensitivity and hone their conflict resolution skills .", "hl_sentences": "While negotiation is involved in all three forms , mediation and arbitration involve a neutral third party to help the parties find a solution . Joint Discussion : The mediator will try to get the two disagreeing parties to speak to one another and will guide the discussion toward a mutually amicable solution . 
Mediation is a method of dispute resolution that relies on an impartial third-party decision-maker , known as a mediator , to settle a dispute .", "question": { "cloze_format": "A process in which a third party selected by the disputants helps the parties to voluntarily resolve their disagreement is known as ___ .", "normal_format": "What is a process in which a third party selected by the disputants helps the parties to voluntarily resolve their disagreement known as?", "question_choices": [ "Mediation.", "Discovery.", "Arbitration.", "Settlement." ], "question_id": "fs-idm362137536", "question_text": "A process in which a third party selected by the disputants helps the parties to voluntarily resolve their disagreement is known as:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "Negotiation." }, "bloom": null, "hl_context": "We frequently engage in negotiations as we go about our daily activities , often without being consciously aware that we are doing so . Negotiation can be simple , e . g . , two friends deciding on a place to eat dinner , or complex , e . g . , governments of several nations trying to establish import and export quotas across multiple industries . <hl> When a formal proceeding is started in the court system , alternative dispute resolution ( ADR ) , or ways of solving an issue with the intent to avoid litigation , may be employed . <hl> <hl> Negotiation is often the first step used in ADR . <hl> While there are other forms of alternative dispute resolution , negotiation is considered to be the simplest because it does not require outside parties . An article in the journal Organizational Behavior and Human Decision Processes defined negotiation as the “ process by which parties with nonidentical preferences allocate resources through interpersonal activity and joint decision making . ” Analyzing the various components of this definition is helpful in understanding the theories and practices involved in negotiation as a form of dispute settlement .", "hl_sentences": "When a formal proceeding is started in the court system , alternative dispute resolution ( ADR ) , or ways of solving an issue with the intent to avoid litigation , may be employed . Negotiation is often the first step used in ADR .", "question": { "cloze_format": "___ is the first step in Alternative Dispute Resolution.", "normal_format": "What’s the first step in Alternative Dispute Resolution?", "question_choices": [ "Conciliation.", "Mediation.", "Negotiation.", "Arbitration." ], "question_id": "fs-idm348689424", "question_text": "What’s the first step in Alternative Dispute Resolution?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "A mediator." }, "bloom": null, "hl_context": "Successful mediators work to immediately establish personal rapport with the disputing parties . They often have a short period of time to interact with the parties and work to position themselves as trustworthy advisors . The Harvard Law School Program on Negotiation reports a study by mediator Peter Adler in which mediation participants remembered the mediators as “ opening the room , making coffee , and getting everyone introduced . ” This quote underscores the need for mediators to play a role beyond mere administrative functions . <hl> The mediator ’ s conflict resolution skills are critical in guiding the parties toward reaching a resolution . 
<hl> <hl> Mediation is a method of dispute resolution that relies on an impartial third-party decision-maker , known as a mediator , to settle a dispute . <hl> While requirements vary by state , a mediator is someone who has been trained in conflict resolution , though often , he or she does not have any expertise in the subject matter that is being disputed . Mediation is another form of alternative dispute resolution . It is often used in attempts to resolve a dispute because it can help disagreeing parties avoid the time-consuming and expensive procedures involved in court litigation . Courts will often recommend that a plaintiff , or the party initiating a lawsuit , and a defendant , or the party that is accused of wrongdoing , attempt mediation before proceeding to trial . This recommendation is especially true for issues that are filed in small claims courts , where judges attempt to streamline dispute resolution . Not all mediators are associated with public court systems . There are many agency-connected and private mediation services that disputing parties can hire to help them potentially resolve their dispute . The American Bar Association suggests that , in addition to training courses , one of the best ways to start a private mediation business is to volunteer as a mediator . Research has shown that experience is an important factor for mediators who are seeking to cultivate sensitivity and hone their conflict resolution skills .", "hl_sentences": "The mediator ’ s conflict resolution skills are critical in guiding the parties toward reaching a resolution . Mediation is a method of dispute resolution that relies on an impartial third-party decision-maker , known as a mediator , to settle a dispute .", "question": { "cloze_format": "A person trained in conflict resolution is considered ___ .", "normal_format": "A person trained in conflict resolution is considered to be whom?", "question_choices": [ "An arbitrator.", "A mediator.", "A negotiator.", "A judge." ], "question_id": "fs-221", "question_text": "A person trained in conflict resolution is considered:" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "a" }, "bloom": null, "hl_context": "<hl> Mediation is distinguished by its focus on solutions . <hl> Instead of focusing on discoveries , testimonies , and expert witnesses to assess what has happened in the past , it is future-oriented . Mediators focus on discovering ways to solve the dispute in a way that will appease both parties .", "hl_sentences": "Mediation is distinguished by its focus on solutions .", "question": { "cloze_format": "Mediation focuses on ___ .", "normal_format": "What is Mediation focused on?", "question_choices": [ "Solutions.", "Testimony.", "Expert witnesses.", "Discoveries." ], "question_id": "fs-227", "question_text": "Mediation focuses on:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "c" }, "bloom": null, "hl_context": "While it might seem that the party that is awarded a settlement by an arbitrator has reason to be relieved that the matter is resolved , sometimes this decision represents just one more step toward actually receiving the award . While a party may honor the award and voluntarily comply , this outcome is not always the case . <hl> In cases where the other party does not comply , the next step is to petition the court to enforce the arbitrator ’ s decision . <hl> This task can be accomplished by numerous mechanisms , depending on the governing laws . 
<hl> These include writs of execution , garnishment , and liens . <hl>", "hl_sentences": "In cases where the other party does not comply , the next step is to petition the court to enforce the arbitrator ’ s decision . These include writs of execution , garnishment , and liens .", "question": { "cloze_format": "A method that does not enforces an arbitrator’s decision is ___", "normal_format": "Which of the following methods does NOT enforce an arbitrator’s decision?", "question_choices": [ "Writs of Execution.", "Garnishment.", "Fines.", "Liens." ], "question_id": "fs-217", "question_text": "All of the following are methods to enforce an arbitrator’s decision except:" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "Torts." }, "bloom": null, "hl_context": "<hl> Property Disputes . <hl> Business can have various types of property disputes . These might include disagreements over physical property , e . g . , deciding where one property ends and another begins , or intellectual property , e . g . , trade secrets , inventions , and artistic works . <hl> Business Transactions . <hl> Whenever two parties conduct business transactions , there is potential for misunderstandings and mistakes . Both business-to-business transactions and business-to-consumer transactions can potentially be solved through arbitration . Any individual or business who is unhappy with a business transaction can attempt arbitration . Jessica Simpson recently won an arbitration case in which she disputed the release of a fitness video she had made because she felt the editor took too long to release it . <hl> Labor . <hl> Arbitration has often been used to resolve labor disputes through interest arbitration and grievance arbitration . Interest arbitration addresses disagreements about the terms to be included in a new contract , e . g . , workers of a union want their break time increased from 15 to 25 minutes . In contrast , grievance arbitration covers disputes about the implementation of existing agreements . In the example previously given , if the workers felt they were being forced to work through their 15 - minute break , they might engage in this type of arbitration to resolve the matter . There are many instances in which arbitration agreements may prove helpful as a form of alternative dispute resolution . <hl> While arbitration can be useful for resolving family law matters , such as divorce , custody , and child support issues , in the domain of business law , it has three major applications : <hl>", "hl_sentences": "Property Disputes . Business Transactions . Labor . While arbitration can be useful for resolving family law matters , such as divorce , custody , and child support issues , in the domain of business law , it has three major applications :", "question": { "cloze_format": "___ decide the business ethics for a company.", "normal_format": "Who decides the business ethics for a company?", "question_choices": [ "Labor.", "Business Transactions.", "Property Disputes.", "Torts." ], "question_id": "fs-2123", "question_text": "All of the following are the most common applications of arbitration in the business context except:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "c" }, "bloom": null, "hl_context": "<hl> An arbiter can issue either a “ bare bones ” or a reasoned award . 
<hl> <hl> A bare bones award refers to one in which the arbitrator simply states his or her decision , while a reasoned award lists the rationale behind the decision and award amount . <hl> The decision of the arbitrator is often converted to a judgement , or legal tool that allows the winning party to pursue collection action on the award . The process of converting an award to a judgement is known as confirmation .", "hl_sentences": "An arbiter can issue either a “ bare bones ” or a reasoned award . A bare bones award refers to one in which the arbitrator simply states his or her decision , while a reasoned award lists the rationale behind the decision and award amount .", "question": { "cloze_format": "The types of awards that may be issued by an arbitrator are ___.", "normal_format": "What are the types of awards that may be issued by an arbitrator?", "question_choices": [ "Bare Bones.", "Reasoned.", "Both a and b.", "Neither a nor b." ], "question_id": "fs-2129", "question_text": "The following are the types of awards that may be issued by an arbitrator:" }, "references_are_paraphrase": null } ]
2
2.1 Negotiation We frequently engage in negotiations as we go about our daily activities, often without being consciously aware that we are doing so. Negotiation can be simple, e.g., two friends deciding on a place to eat dinner, or complex, e.g., governments of several nations trying to establish import and export quotas across multiple industries. When a formal proceeding is started in the court system, alternative dispute resolution (ADR), or ways of solving an issue with the intent to avoid litigation, may be employed. Negotiation is often the first step used in ADR. While there are other forms of alternative dispute resolution, negotiation is considered to be the simplest because it does not require outside parties. An article in the journal Organizational Behavior and Human Decision Processes defined negotiation as the “process by which parties with nonidentical preferences allocate resources through interpersonal activity and joint decision making.” Analyzing the various components of this definition is helpful in understanding the theories and practices involved in negotiation as a form of dispute settlement. Negotiation Types and Objectives Per the above definition, negotiation becomes necessary when two parties hold “non-identical” preferences. This statement seems fairly obvious, since 100% agreement would indicate that there is no need for negotiation. From this basic starting point, there are several ways of thinking about negotiation, including how many parties are involved. For example, if two small business owners find themselves in a disagreement over property lines, they will frequently engage in dyadic negotiation. Put simply, dyadic negotiation involves two individuals interacting with one another in an attempt to resolve a dispute. If a third neighbor overhears the dispute and believes one or both of them are wrong with regard to the property line, then group negotiation could ensue. Group negotiation involves more than two individuals or parties, and by its very nature, it is often more complex, time-consuming, and challenging to resolve. While dyadic and group negotiations may involve different dynamics, one of the most important aspects of any negotiation, regardless of the number of negotiators, is the objective. Negotiation experts recognize two major goals of negotiation: relational and outcome. Relational goals are focused on building, maintaining, or repairing a partnership, connection, or rapport with another party. Outcome goals, on the other hand, concentrate on achieving certain end results. The goal of any negotiation is influenced by numerous factors, such as whether or not there will be contact with the other party in the future. For example, when a business negotiates with a supply company that it intends to do business with in the foreseeable future, it will try to focus on “win-win” solutions that provide the most value for each party. In contrast, if an interaction is of a one-time nature, that same company might approach a supplier with a “win-lose” mentality, viewing its objective as maximizing its own value at the expense of the other party’s value. This approach is referred to as zero-sum negotiation, and it is considered to be a “hard” negotiating style. Zero-sum negotiation is based on the notion that there is a “fixed pie,” and the larger the slice that one party receives, the smaller the slice the other party will receive.
Win-win approaches to negotiation are sometimes referred to as integrative, while win-lose approaches are called distributive. Negotiation Style Everyone has a different way of approaching negotiation, depending on the circumstance and the person’s personality. However, the Thomas-Kilmann Conflict Mode Instrument (TKI) is a questionnaire that provides a systematic framework for categorizing five broad negotiation styles. It is closely associated with work done by conflict resolution experts Dean Pruitt and Jeffrey Rubin. These styles are often considered in terms of a party’s level of concern for its own interests versus its level of concern for the other party. These five general negotiation styles include: Forcing. If a party has high concern for itself, and low concern for the other party, it may adopt a competitive approach that only takes into account the outcomes it desires. This negotiation style is most prone to zero-sum thinking. For example, a car dealership that tries to give each customer as little as possible for his or her trade-in vehicle would be applying a forcing negotiation approach. While the party using the forcing approach is only considering its own self-interests, this negotiating style often undermines the party’s long-term success. For example, in the car dealership example, if a customer feels she has not received a fair trade-in value after the sale, she may leave negative reviews, will not refer her friends and family to that dealership, and will not return to it when the time comes to buy another car. Collaborating. If a party has high concern and care for both itself and the other party, it will often employ a collaborative negotiation that seeks to maximize the gain for both. In this negotiating style, parties recognize that acting in their mutual interests may create greater value and synergies. Compromising. A compromising approach to negotiation will take place when parties share some concerns for both themselves and the other party. While it is not always possible to collaborate, parties can often find certain points that are more important to one versus the other, and in that way, find ways to isolate what is most important to each party. Avoiding. When a party has low concern for itself and for the other party, it will often try to avoid negotiation completely. Yielding. Finally, when a party has low concern for itself and high concern for the other party, it will yield to demands that may not be in its own best interest. As with avoidance techniques, it is important to ask why the party has low self-concern. It may be due to an unfair power differential between the two parties that has caused the weaker party to feel it is futile to represent its own interests. This example illustrates why negotiation is often fraught with ethical issues.
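The grid underlying these five styles, concern for self crossed with concern for the other party, can be made concrete in a short sketch. The Python snippet below is illustrative only: the high/low labels, the "moderate" middle ground used for compromising, and the function name are simplifications introduced here, not part of the TKI instrument itself.

def negotiation_style(self_concern, other_concern):
    # Map (concern for self, concern for other party) to the five
    # styles described above; "moderate" stands in for the shared,
    # partial concern that characterizes compromising.
    styles = {
        ("high", "low"): "forcing",
        ("high", "high"): "collaborating",
        ("moderate", "moderate"): "compromising",
        ("low", "low"): "avoiding",
        ("low", "high"): "yielding",
    }
    return styles.get((self_concern, other_concern), "no single style")

print(negotiation_style("high", "high"))  # collaborating
print(negotiation_style("low", "high"))   # yielding

Laying the styles out this way makes it easy to see that forcing and yielding are mirror images of one another, and that the ethically fraught cases discussed above cluster where concern for self is low.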
Negotiation Styles in Practice Apple’s response to its treatment of warranties in China, i.e., giving one-year warranties instead of two-year warranties as required by law, serves as an example of how negotiation may be used. While Apple products continued to be successful and popular in China, the issue rankled its customers, and Chinese celebrities joined the movement to address the concern. Chinese consumers felt that Apple was arrogant and didn’t value its customers or the customers’ feedback. In response, Tim Cook issued a public apology in which he expressed regret over the misunderstanding, saying, “We are aware that insufficient communications during this process has led to the perception that Apple is arrogant and disregards, or pays little attention to, consumer feedback. We express our sincere apologies for any concern or misunderstanding arising therefrom.” Apple then listed four ways it intended to resolve the matter. By exhibiting humility and concern for its customers, Apple was able to defuse a contentious situation that might have resulted in costly litigation. Negotiation Laws Negotiations are covered by a medley of federal and state laws, such as the Federal Arbitration Act and the Uniform Arbitration Act. The Federal Arbitration Act (FAA) is a national policy that favors arbitration and enforces situations in which parties have contractually agreed to participate in arbitration. Parties who have decided to be subject to binding arbitration relinquish their constitutional right to settle their dispute in court. It is the FAA that allows parties to confirm their awards, as will be discussed in the following chapters. When considering negotiation laws, it is important to keep in mind that each state has laws with its own definitions and nuances. While the purpose of the Uniform Arbitration Act in the United States was to provide a uniform approach to the way states handle arbitration, it has only been adopted in some form by about 35 states. 2.2 Mediation Court or Agency-Connected Mediation Mediation is a method of dispute resolution that relies on an impartial third-party decision-maker, known as a mediator, to settle a dispute. While requirements vary by state, a mediator is someone who has been trained in conflict resolution, though often, he or she does not have any expertise in the subject matter that is being disputed. Mediation is another form of alternative dispute resolution. It is often used in attempts to resolve a dispute because it can help disagreeing parties avoid the time-consuming and expensive procedures involved in court litigation. Courts will often recommend that a plaintiff, or the party initiating a lawsuit, and a defendant, or the party that is accused of wrongdoing, attempt mediation before proceeding to trial. This recommendation is especially true for issues that are filed in small claims courts, where judges attempt to streamline dispute resolution. Not all mediators are associated with public court systems. There are many agency-connected and private mediation services that disputing parties can hire to help them potentially resolve their dispute. The American Bar Association suggests that, in addition to training courses, one of the best ways to start a private mediation business is to volunteer as a mediator. Research has shown that experience is an important factor for mediators who are seeking to cultivate sensitivity and hone their conflict resolution skills. For businesses, the savings associated with mediation can be substantial. For example, the energy corporation Chevron implemented an internal mediation program. In one instance, it cost $25,000 to resolve a dispute using this internal mediation program, far less than the estimated $700,000 it would have incurred through the use of outside legal services. Even more impressive is the amount it saved by not going to court, which would have cost an estimated $2.5 million. Mediation is distinguished by its focus on solutions.
Instead of focusing on discoveries, testimonies, and expert witnesses to assess what has happened in the past, it is future-oriented. Mediators focus on discovering ways to solve the dispute in a way that will appease both parties. Other Benefits of Mediation Confidentiality. Since court proceedings become a matter of public record, it can be advantageous to use mediation to preserve anonymity. This aspect can be especially important when dealing with sensitive matters, where one or both parties feel it is best to keep the situation private. Creativity. Mediators are trained to find ways to resolve disputes and may apply outside-the-box thinking to suggest a resolution that the parties had not considered. Since disagreeing parties may feel emotionally contentious toward one another, they may not be able to consider other solutions. In addition, a skilled mediator may be able to recognize cultural differences between the parties that are influencing the parties’ ability to reach a compromise, and thus leverage this awareness to create a novel solution. Control. When a case goes to trial, both parties give up a certain degree of control over the outcome. A judge may come up with a solution that neither party favors. In contrast, mediation gives the disputing parties opportunities to find common ground on their own terms, before relinquishing control to outside forces. Role of the Mediator Successful mediators work to immediately establish personal rapport with the disputing parties. They often have a short period of time to interact with the parties and work to position themselves as trustworthy advisors. The Harvard Law School Program on Negotiation reports a study by mediator Peter Adler in which mediation participants remembered the mediators as “opening the room, making coffee, and getting everyone introduced.” This quote underscores the need for mediators to play a role beyond mere administrative functions. The mediator’s conflict resolution skills are critical in guiding the parties toward reaching a resolution. Steps of Mediation As explained by nolo.com, mediation, while not as formal as a court trial, involves the following six steps: Mediator’s Opening Statement: During the opening statement, the mediator introduces himself or herself and explains the goals of mediation. Opening Statements of Plaintiff and Defendant: Both parties are given the opportunity to speak, without interruption. During this opening statement, both parties are afforded the opportunity to describe the nature of the dispute and their desired solution. Joint Discussion: The mediator will try to get the two disagreeing parties to speak to one another and will guide the discussion toward a mutually amicable solution. This part of the mediation process usually identifies which issues need to be resolved and explores ways to address the issues. Private Caucus: During this stage, each party has the ability to meet and speak privately with the mediator. Typically, the mediator will use this time to learn more about what is most important to each party and to brainstorm ways to find a resolution. The mediator may ask the parties to try to put aside their emotional responses and resentments to work toward an agreement. Joint Negotiation: After the private caucuses, the parties are joined again in the same room, and the mediator presents any newly discovered insight to guide them toward an agreement.
Closure: During this final stage, an agreement is reached, or it is determined that the parties cannot agree. Either way, the mediator will review the positions of each party and ask them if they would like to meet again or explore escalating options, such as moving the dispute to court. Ethical Issues Both the disputants themselves, and those who attempt to facilitate dispute resolutions, i.e., mediators and attorneys, must navigate a myriad of ethical issues, such as deciding whether they should tell the entire truth, or only offer a partial disclosure. This conflict has deep roots in history and has often been considered in terms of consequentialist and deontological ethical theories. Consequentialist ethics, sometimes known as situational ethics, is a way of looking at difficult decisions by considering their implications. Someone who follows consequentialist ethics in mediation or arbitration would consider the impact of his or her decision on the parties in light of their unique circumstances. In contrast, deontological ethics bases its decision on whether the action itself is right or wrong, regardless of its consequences. Imagine a situation in which a professional accountant holds a consequentialist ethical viewpoint and believes that there are certain scenarios in which the disclosure of only part of the truth is a commendable course of action. For example, if an accountant is interviewed regarding how the company handled a certain transaction in its retirement account, he might choose to withhold certain information because he is afraid it will harm the retirees’ ability to retain the full benefits of their pensions. In this case, the accountant is utilizing “the ends justify the means” logic because he feels that the omission of truth will result in more benefit than its revelation. A mediator or arbitrator who also follows a consequentialist viewpoint would consider the accountant’s motivation and the circumstances, in addition to his or her actions. Ethical situations like these are not only part of dispute mediation in business law scenarios, but also happen in daily life. Consider the case of a parent who is on his way home from work when he receives a call from the babysitter, telling him that his child’s forehead feels hot and that she is complaining of not feeling well. Sitting in traffic, the parent remembers that he does not know the whereabouts of the digital thermometer, so he decides to stop and purchase one. The parking lot at the store is extremely busy, so the parent decides to park in a handicapped spot, even though he does not have any mobility challenges. These types of situations have been addressed by philosophers such as Immanuel Kant, who spoke of the categorical imperative, which he defined as, “Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.” In other words, one’s action should be considered in light of what would happen if everyone were to engage in the same action. While it might not seem like a harmful infraction, if everyone were to do it, then it would cause a true inconvenience and possible suffering for mobility-impaired individuals, for whom those spaces were designated. A deontological ethical viewpoint would determine that it is always wrong to park in the handicapped space, regardless of the situation. In real life, it is very difficult to adopt a 100% deontological viewpoint for dispute resolution.
Often, the dispute has arisen in the first place because of some ambiguity inherent in the situation. In these cases, mediators must apply their best judgment to help the disagreeing parties see one another’s viewpoints and to guide them toward a mutually amicable solution. Future Directions in Mediation As technology continues to change the ways we interact with one another, it is likely that we will see advances in mediation techniques. For example, there are companies that offer online mediation services, known as e-mediation. E-mediation can be useful in situations where the parties are geographically far apart, or the transaction in dispute took place online. eBay uses e-mediation to handle the sheer volume of misunderstandings between parties. Research has shown that one of the benefits of e-mediation is that it allows people the time needed to “cool down” when they have to explain their feelings in an email, as opposed to speaking to others in person. In addition to technological advancements, new findings in psychology are influencing how disputes are resolved, such as the rising interest in canine-assisted mediation (CAM), in which the presence of dogs is posited to have an impact on human emotional health. Since the presence of dogs has a positive impact on many of the neurophysiological stress markers in humans, researchers are beginning to explore the use of therapy animals to assist in dispute resolution. 2.3 Arbitration The American Bar Association (ABA) defines arbitration as the “private process where disputing parties agree that one or several individuals can make a decision about the dispute after receiving evidence and hearing arguments.” Arbitration is overseen by a neutral arbitrator, or an individual who is responsible for making a decision on how to resolve a dispute and who has the ability to decide on an award, or a course of action that the arbiter believes is fair, given the situation. An award can be a monetary payment that one party must pay to the other; however, awards need not always be financial in nature. An award may require that one business stop engaging in a certain practice that is deemed unfair to the other business. As distinguished from mediation, in which the mediator simply serves as a facilitator who is attempting to help the disagreeing parties reach an agreement, an arbitrator acts more like a judge in a court trial and often has legal expertise, although he or she may or may not have subject matter expertise. Many arbitrators are current or retired lawyers and judges. Types of Arbitration Agreements Parties can enter into either voluntary or involuntary arbitration. In voluntary arbitration, the disputing parties have decided, of their own accord, to seek arbitration as a way to potentially settle their dispute. Depending on the state’s laws and the nature of the dispute, disagreeing parties may have to attempt arbitration before resorting to litigation; this requirement is known as involuntary arbitration because it is forced upon them by an outside party. Arbitration can be either binding or non-binding. In binding arbitration, the decision of the arbitrator(s) is final, and except in rare circumstances, neither party can appeal the decision through the court system. In non-binding arbitration, the arbitrator’s award can be thought of as a recommendation; it is only finalized if both parties agree that it is an acceptable solution.
This fact is why non-binding arbitration can be useful for what the American Arbitration Association describes as “disputes where the parties may be too far apart in their viewpoints to mediate or are in need of an objective evaluation of their respective positions.” Having a neutral party assess the situation may help disputants to rethink and reassess their positions and reach a future compromise. Issues Covered by Arbitration Agreements There are many instances in which arbitration agreements may prove helpful as a form of alternative dispute resolution. While arbitration can be useful for resolving family law matters, such as divorce, custody, and child support issues, in the domain of business law, it has three major applications: Labor. Arbitration has often been used to resolve labor disputes through interest arbitration and grievance arbitration. Interest arbitration addresses disagreements about the terms to be included in a new contract, e.g., workers of a union want their break time increased from 15 to 25 minutes. In contrast, grievance arbitration covers disputes about the implementation of existing agreements. In the example previously given, if the workers felt they were being forced to work through their 15-minute break, they might engage in this type of arbitration to resolve the matter. Business Transactions. Whenever two parties conduct business transactions, there is potential for misunderstandings and mistakes. Both business-to-business transactions and business-to-consumer transactions can potentially be resolved through arbitration. Any individual or business that is unhappy with a business transaction can attempt arbitration. Jessica Simpson recently won an arbitration case in which she disputed the release of a fitness video she had made because she felt the editor took too long to release it. Property Disputes. Businesses can have various types of property disputes. These might include disagreements over physical property, e.g., deciding where one property ends and another begins, or intellectual property, e.g., trade secrets, inventions, and artistic works. Typically, civil disputes, as opposed to criminal matters, attempt to use arbitration as a means of dispute resolution. While definitions can vary between municipalities, states, and countries, a civil matter is generally one that is brought when one party has a grievance against another party and seeks monetary damages. In contrast, in a criminal matter, a government pursues an individual or group for violating laws meant to establish the best interests of the public. While the word crime often evokes the idea of violence, there are many crimes, such as embezzlement, in which the harm caused is not physical, but rather monetary. Ethics of Commercial Arbitration Clauses As previously discussed, going to court to solve a dispute is a costly endeavor, and for large companies, it is possible to incur millions of dollars in legal expenses. While arbitration is meant to be a form of dispute resolution that helps disagreeing parties find a low-cost, time-efficient solution, it has become increasingly important to question whose expenses are being lowered, and to what effect. Many consumer advocates are fighting against what are known as forced-arbitration clauses, in which consumers agree to settle all disputes through arbitration, effectively waiving their right to sue a company in court.
Some of these forced arbitration clauses cause the other party to forfeit its right to appeal an arbitration decision or participate in any kind of class action lawsuit, in which individuals who have a similar issue sue as one collective group. For example, in 2006, Enron investors initiated a class action lawsuit against executives who hid the company’s losses and were awarded $7.2 billion. While this example represents a case where the company being sued was clearly in the wrong, it is important for large companies to be ethical in their use of arbitration clauses. They should not be used as a way to keep wrongdoings “quiet” or to limit consumers’ abilities to obtain rightful retribution for products and services that do not perform as promised. Arbitration Procedures When parties enter into arbitration, certain procedures are followed. First, the number of arbitrators is decided, along with how they will be chosen. Parties that enter into voluntary arbitration may have more control over this decision, while those that do so unwillingly may have a limited pool of arbitrators from which to choose. In the case of voluntary arbitration, parties may decide to have three arbitrators, one chosen by each of the disputants and the third chosen by the two selected arbitrators. Next, a timeline is established, and evidence is presented by both parties. Since arbitration is less formal than court proceedings, the evidence phase typically goes faster than it would in a courtroom setting. Finally, the arbitrator will make a decision and usually makes one or more awards. Not all arbitration agreements have the same procedures. It depends on the types of agreements made in advance by the disputing parties. Consider the following scenario: the owner of a large commercial office building uses a lease agreement, which stipulates that arbitration will be used to settle the renewal terms of a lease. For example, the lease may state that, at the end of year one, the second year’s lease payment will be at current market value, and if the parties cannot agree on that value, they will then allow an arbitrator to decide. If the building owner feels that the renewal rate should be $40/square foot and the tenant feels it should be $20/square foot, an arbiter who may not be an expert in local real estate values might decide to resolve the dispute by using a rule of thumb, such as “splitting the difference.” In this case, the arbiter might decide that $30/square foot represents a fair lease renewal rate. To overcome this shortcoming, the building owner could write a lease agreement that stipulates that the parties use binding baseball arbitration and use subject matter experts as arbitrators. In this case, that might include real estate attorneys or commercial real estate investors. In baseball arbitration, each party would submit a lease renewal figure to an arbitrator. For example, imagine that the renewing tenant submits an offer of $10 per square foot, which is very much under market value, while the building owner submits an offer of $35/square foot. In this scenario, the arbitrator chooses one offer or the other, without modification. This type of arbitration incentivizes both parties to be fair in their dealings with one another because to do otherwise would be to their own detriment.
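The incentive difference between “splitting the difference” and baseball arbitration can be made concrete with a short sketch. This is a simplified model rather than a statement of arbitral practice: the assumption that the arbitrator picks the offer closest to an independent fair-value estimate, and the $28 estimate itself, are hypothetical.

def split_the_difference(offer_a, offer_b):
    # Rule-of-thumb award: the midpoint of the two offers.
    return (offer_a + offer_b) / 2

def baseball_award(offer_a, offer_b, fair_value):
    # Final-offer award: the arbitrator must select one offer, unmodified.
    # Here the offer closer to the arbitrator's fair-value estimate wins.
    if abs(offer_a - fair_value) <= abs(offer_b - fair_value):
        return offer_a
    return offer_b

owner, tenant = 35.0, 10.0   # $/square foot, from the lease example above
estimate = 28.0              # hypothetical expert estimate of market value

print(split_the_difference(owner, tenant))      # 22.5: the lowball offer drags the midpoint down
print(baseball_award(owner, tenant, estimate))  # 35.0: the lowball offer loses outright

Under the midpoint rule, the tenant profits from an extreme offer; under the baseball rule, the same offer guarantees a loss, which is why the format pushes both parties toward reasonable figures.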
Arbitration Awards An arbiter can issue either a “bare bones” or a reasoned award. A bare bones award refers to one in which the arbitrator simply states his or her decision, while a reasoned award lists the rationale behind the decision and award amount. The decision of the arbitrator is often converted to a judgement, or legal tool that allows the winning party to pursue collection action on the award. The process of converting an award to a judgement is known as confirmation. Judicial Enforcement of Arbitration Awards While it might seem that the party that is awarded a settlement by an arbitrator has reason to be relieved that the matter is resolved, sometimes this decision represents just one more step toward actually receiving the award. While a party may honor the award and voluntarily comply, this outcome is not always the case. In cases where the other party does not comply, the next step is to petition the court to enforce the arbitrator’s decision. This task can be accomplished by numerous mechanisms, depending on the governing laws. These include writs of execution, garnishment, and liens. Writ of Execution. Cornell Law School defines a writ of execution as “A court order that directs law enforcement personnel to take action in an attempt to satisfy a judgment won by the plaintiff.” Garnishment. A garnishment refers to a court order that seizes money, typically wages, to satisfy a debt. A myriad of laws apply to wage garnishment, e.g., certain types of income, such as Social Security Disability Insurance (SSDI) benefits, cannot be garnished. In addition, depending on state laws, sometimes only debtors who make over a certain amount, e.g., $1,600 gross/month, are subject to wage garnishment. Liens. A lien gives the entitled party in a judgement the right to seize the property of another to satisfy a debt. Commonly, liens can be placed on real estate and personal property, such as automobiles and boats. Property that has a lien cannot be sold because the title is encumbered and often cannot be legally transferred until the lien is satisfied, or paid. Depending on state laws, only certain property is subject to a lien. For example, the winning party in an arbitration case may only be able to place a lien on the other party’s vehicle if it has a market value of over $7,500. The enforcement of arbitration awards is governed by a number of laws, such as the Federal Arbitration Act and the Uniform Arbitration Act.
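Because these mechanisms turn on simple statutory thresholds, they can be sketched as eligibility checks. The snippet below is illustrative only: the $1,600 and $7,500 figures are the state-dependent examples used above, the function names are invented for this sketch, and real eligibility depends on the governing statutes.

def garnishment_available(gross_monthly_income, income_is_ssdi, floor=1600.0):
    # SSDI benefits cannot be garnished, and some states exempt debtors
    # whose gross monthly income falls below a statutory floor.
    return (not income_is_ssdi) and gross_monthly_income > floor

def lien_available(asset_market_value, floor=7500.0):
    # In some states, a lien may attach only to property whose market
    # value exceeds a statutory floor.
    return asset_market_value > floor

print(garnishment_available(1400.0, income_is_ssdi=False))  # False: income below the example floor
print(lien_available(9000.0))                               # True: the vehicle can be encumbered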
Summary Negotiation, mediation, and arbitration are alternative forms of dispute resolution that attempt to help disagreeing parties avoid the time and expense of court litigation. While negotiation is involved in all three forms, mediation and arbitration involve a neutral third party to help the parties find a solution. Frameworks that consider self-interest, as opposed to interest in the other party, can help negotiators craft successful negotiation approaches. Mediators, arbitrators, and groups of arbitrators all follow certain steps and play an important role in trying to help parties reach common ground and avoid court proceedings. Mediators who establish rapport with disputing parties can facilitate dispute resolution, as mediation is very much solution-focused. Arbitrators must often decide upon awards when parties cannot reach an agreement. Even when an aggrieved party attains an arbitration award, it may still have to pursue the other party by using a variety of legal techniques to enforce the payment or practice stipulated by the award. Staying current with federal and state laws associated with negotiation proceedings is essential for businesses looking to achieve their relational and outcome goals.
biology
Chapter Outline 37.1 Types of Hormones 37.2 How Hormones Work 37.3 Regulation of Body Processes 37.4 Regulation of Hormone Production 37.5 Endocrine Glands Introduction An animal’s endocrine system controls body processes through the production, secretion, and regulation of hormones, which serve as chemical “messengers” functioning in cellular and organ activity and, ultimately, maintaining the body’s homeostasis. The endocrine system plays a role in growth, metabolism, and sexual development. In humans, common endocrine system diseases include thyroid disease and diabetes mellitus. In organisms that undergo metamorphosis, the process is controlled by the endocrine system. The transformation from tadpole to frog, for example, is complex and nuanced to adapt to specific environments and ecological circumstances.
[ { "answer": { "ans_choice": 2, "ans_text": "peptide hormone" }, "bloom": null, "hl_context": "<hl> The structure of peptide hormones is that of a polypeptide chain ( chain of amino acids ) . <hl> The peptide hormones include molecules that are short polypeptide chains , such as antidiuretic hormone and oxytocin produced in the brain and released into the blood in the posterior pituitary gland . This class also includes small proteins , like growth hormones produced by the pituitary , and large glycoproteins such as follicle-stimulating hormone produced by the pituitary . Figure 37.4 illustrates these peptide hormones .", "hl_sentences": "The structure of peptide hormones is that of a polypeptide chain ( chain of amino acids ) .", "question": { "cloze_format": "A newly discovered hormone contains four amino acids linked together. The chemical class this hormone would be classified under is ___.", "normal_format": "A newly discovered hormone contains four amino acids linked together. Under which chemical class would this hormone be classified?", "question_choices": [ "lipid-derived hormone", "amino acid-derived hormone", "peptide hormone", "glycoprotein" ], "question_id": "fs-idp224539408", "question_text": "A newly discovered hormone contains four amino acids linked together. Under which chemical class would this hormone be classified?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "lipid-derived hormones" }, "bloom": null, "hl_context": "Other lipid-soluble hormones that are not steroid hormones , such as vitamin D and thyroxine , have receptors located in the nucleus . <hl> The hormones diffuse across both the plasma membrane and the nuclear envelope , then bind to receptors in the nucleus . <hl> The hormone-receptor complex stimulates transcription of specific genes . <hl> Lipid-derived ( soluble ) hormones such as steroid hormones diffuse across the membranes of the endocrine cell . <hl> Once outside the cell , they bind to transport proteins that keep them soluble in the bloodstream . At the target cell , the hormones are released from the carrier protein and diffuse across the lipid bilayer of the plasma membrane of cells . The steroid hormones pass through the plasma membrane of a target cell and adhere to intracellular receptors residing in the cytoplasm or in the nucleus . The cell signaling pathways induced by the steroid hormones regulate specific genes on the cell's DNA . The hormones and receptor complex act as transcription regulators by increasing or decreasing the synthesis of mRNA molecules of specific genes . This , in turn , determines the amount of corresponding protein that is synthesized by altering gene expression . This protein can be used either to change the structure of the cell or to produce enzymes that catalyze chemical reactions . In this way , the steroid hormone regulates specific cell processes as illustrated in Figure 37.5 . Visual Connection", "hl_sentences": "The hormones diffuse across both the plasma membrane and the nuclear envelope , then bind to receptors in the nucleus . 
Lipid-derived ( soluble ) hormones such as steroid hormones diffuse across the membranes of the endocrine cell .", "question": { "cloze_format": "The ___ are a class of hormones that can diffuse through plasma membranes.", "normal_format": "Which class of hormones can diffuse through plasma membranes?", "question_choices": [ "lipid-derived hormones", "amino acid-derived hormones", "peptide hormones", "glycoprotein hormones" ], "question_id": "fs-idp32331328", "question_text": "Which class of hormones can diffuse through plasma membranes?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "It will not affect testosterone-mediated signaling." }, "bloom": null, "hl_context": "Amino acid derived hormones and polypeptide hormones are not lipid-derived ( lipid-soluble ) and therefore cannot diffuse through the plasma membrane of cells . <hl> Lipid insoluble hormones bind to receptors on the outer surface of the plasma membrane , via plasma membrane hormone receptors . <hl> <hl> Unlike steroid hormones , lipid insoluble hormones do not directly affect the target cell because they cannot enter the cell and act directly on DNA . <hl> <hl> Binding of these hormones to a cell surface receptor results in activation of a signaling pathway ; this triggers intracellular activity and carries out the specific effects associated with the hormone . <hl> In this way , nothing passes through the cell membrane ; the hormone that binds at the surface remains at the surface of the cell while the intracellular product remains inside the cell . The hormone that initiates the signaling pathway is called a first messenger , which activates a second messenger in the cytoplasm , as illustrated in Figure 37.6 . <hl> Lipid-derived ( soluble ) hormones such as steroid hormones diffuse across the membranes of the endocrine cell . <hl> Once outside the cell , they bind to transport proteins that keep them soluble in the bloodstream . <hl> At the target cell , the hormones are released from the carrier protein and diffuse across the lipid bilayer of the plasma membrane of cells . <hl> <hl> The steroid hormones pass through the plasma membrane of a target cell and adhere to intracellular receptors residing in the cytoplasm or in the nucleus . <hl> The cell signaling pathways induced by the steroid hormones regulate specific genes on the cell's DNA . The hormones and receptor complex act as transcription regulators by increasing or decreasing the synthesis of mRNA molecules of specific genes . This , in turn , determines the amount of corresponding protein that is synthesized by altering gene expression . This protein can be used either to change the structure of the cell or to produce enzymes that catalyze chemical reactions . In this way , the steroid hormone regulates specific cell processes as illustrated in Figure 37.5 . Visual Connection Receptor binding alters cellular activity and results in an increase or decrease in normal body processes . <hl> Depending on the location of the protein receptor on the target cell and the chemical structure of the hormone , hormones can mediate changes directly by binding to intracellular hormone receptors and modulating gene transcription , or indirectly by binding to cell surface receptors and stimulating signaling pathways . <hl> Most lipid hormones are derived from cholesterol and thus are structurally similar to it , as illustrated in Figure 37.2 . <hl> The primary class of lipid hormones in humans is the steroid hormones . 
37 The Endocrine System
37.1 Types of Hormones

Learning Objectives

By the end of this section, you will be able to:
List the different types of hormones
Explain their role in maintaining homeostasis

Maintaining homeostasis within the body requires the coordination of many different systems and organs. Communication between neighboring cells, and between cells and tissues in distant parts of the body, occurs through the release of chemicals called hormones. Hormones are released into body fluids (usually blood) that carry these chemicals to their target cells. At the target cells, which are cells that have a receptor for a signal or ligand from a signal cell, the hormones elicit a response. The cells, tissues, and organs that secrete hormones make up the endocrine system. Examples of glands of the endocrine system include the adrenal glands, which produce hormones such as epinephrine and norepinephrine that regulate responses to stress, and the thyroid gland, which produces thyroid hormones that regulate metabolic rates.

Although there are many different hormones in the human body, they can be divided into three classes based on their chemical structure: lipid-derived, amino acid-derived, and peptide (peptide and protein) hormones. One of the key distinguishing features of lipid-derived hormones is that they can diffuse across plasma membranes, whereas the amino acid-derived and peptide hormones cannot.

Lipid-Derived Hormones (or Lipid-Soluble Hormones)

Most lipid hormones are derived from cholesterol and thus are structurally similar to it, as illustrated in Figure 37.2. The primary class of lipid hormones in humans is the steroid hormones. Chemically, these hormones are usually ketones or alcohols; their chemical names end in "-ol" for alcohols or "-one" for ketones. Examples of steroid hormones include estradiol, which is an estrogen, or female sex hormone, and testosterone, which is an androgen, or male sex hormone. These two hormones are released by the female and male reproductive organs, respectively. Other steroid hormones include aldosterone and cortisol, which are released by the adrenal glands along with some other types of androgens. Steroid hormones are insoluble in water, and they are transported by transport proteins in blood. As a result, they remain in circulation longer than peptide hormones. For example, cortisol has a half-life of 60 to 90 minutes, while epinephrine, an amino acid-derived hormone, has a half-life of approximately one minute.
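To see what these half-lives imply, the standard first-order decay relationship can be applied. The following worked example is an added illustration, not part of the source text; it assumes simple exponential clearance:

```latex
% Fraction of a hormone remaining after time t, given half-life t_{1/2}:
\[ \frac{N(t)}{N_0} = \left(\frac{1}{2}\right)^{t/t_{1/2}} \]
% After t = 10 minutes:
\[ \text{epinephrine } (t_{1/2} \approx 1~\text{min}): \quad \left(\frac{1}{2}\right)^{10} = \frac{1}{1024} \approx 0.1\%~\text{remaining} \]
\[ \text{cortisol } (t_{1/2} \approx 60~\text{min}): \quad \left(\frac{1}{2}\right)^{10/60} \approx 0.89 \approx 89\%~\text{remaining} \]
```

Ten minutes after release, essentially all of the epinephrine is gone while nearly ninety percent of the cortisol is still circulating, consistent with the longer-lasting action of protein-bound steroid hormones.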
Amino Acid-Derived Hormones

The amino acid-derived hormones are relatively small molecules that are derived from the amino acids tyrosine and tryptophan, shown in Figure 37.3. If a hormone is amino acid-derived, its chemical name will end in "-ine". Examples of amino acid-derived hormones include epinephrine and norepinephrine, which are synthesized in the medulla of the adrenal glands, and thyroxine, which is produced by the thyroid gland. The pineal gland in the brain makes and secretes melatonin, which regulates sleep cycles.

Peptide Hormones

The structure of peptide hormones is that of a polypeptide chain (chain of amino acids). The peptide hormones include molecules that are short polypeptide chains, such as antidiuretic hormone and oxytocin, produced in the brain and released into the blood in the posterior pituitary gland. This class also includes small proteins, like growth hormones produced by the pituitary, and large glycoproteins such as follicle-stimulating hormone produced by the pituitary. Figure 37.4 illustrates these peptide hormones.

Secreted peptides like insulin are stored within vesicles in the cells that synthesize them. They are then released in response to stimuli such as high blood glucose levels in the case of insulin. Amino acid-derived and polypeptide hormones are water-soluble and insoluble in lipids. These hormones cannot pass through plasma membranes of cells; therefore, their receptors are found on the surface of the target cells.

Career Connection

Endocrinologist

An endocrinologist is a medical doctor who specializes in treating disorders of the endocrine glands, hormone systems, and glucose and lipid metabolic pathways. An endocrine surgeon specializes in the surgical treatment of endocrine diseases and glands. Some of the diseases managed by endocrinologists include disorders of the pancreas (diabetes mellitus); disorders of the pituitary (gigantism, acromegaly, and pituitary dwarfism); disorders of the thyroid gland (goiter and Graves' disease); and disorders of the adrenal glands (Cushing's disease and Addison's disease).

Endocrinologists are required to assess patients and diagnose endocrine disorders through extensive use of laboratory tests. Many endocrine diseases are diagnosed using tests that stimulate or suppress endocrine organ functioning. Blood samples are then drawn to determine the effect of stimulating or suppressing an endocrine organ on the production of hormones. For example, to diagnose diabetes mellitus, patients are required to fast for 12 to 24 hours. They are then given a sugary drink, which stimulates the pancreas to produce insulin to decrease blood glucose levels. A blood sample is taken one to two hours after the sugary drink is consumed. If the pancreas is functioning properly, the blood glucose level will be within a normal range. Another example is the A1C test, which can be performed during blood screening. The A1C test measures average blood glucose levels over the past two to three months by examining how well blood glucose is being managed over a long time.

Once a disease has been diagnosed, endocrinologists can prescribe lifestyle changes and/or medications to treat the disease. Some cases of diabetes mellitus can be managed by exercise, weight loss, and a healthy diet; in other cases, medications may be required to enhance insulin release. If the disease cannot be controlled by these means, the endocrinologist may prescribe insulin injections. In addition to clinical practice, endocrinologists may also be involved in primary research and development activities. For example, ongoing islet transplant research is investigating how healthy pancreatic islet cells may be transplanted into diabetic patients. Successful islet transplants may allow patients to stop taking insulin injections.

37.2 How Hormones Work

Learning Objectives

By the end of this section, you will be able to:
Explain how hormones work
Discuss the role of different types of hormone receptors

Hormones mediate changes in target cells by binding to specific hormone receptors. In this way, even though hormones circulate throughout the body and come into contact with many different cell types, they affect only those cells that possess the necessary receptors. Receptors for a specific hormone may be found on many different cells or may be limited to a small number of specialized cells. For example, thyroid hormones act on many different tissue types, stimulating metabolic activity throughout the body.
Cells can have many receptors for the same hormone but often also possess receptors for different types of hormones. The number of receptors that respond to a hormone determines the cell's sensitivity to that hormone and the resulting cellular response. Additionally, the number of receptors that respond to a hormone can change over time, resulting in increased or decreased cell sensitivity. In up-regulation, the number of receptors increases in response to rising hormone levels, making the cell more sensitive to the hormone and allowing for more cellular activity. When the number of receptors decreases in response to rising hormone levels, called down-regulation, cellular activity is reduced.
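The link between receptor number and sensitivity can be sketched with the standard one-site receptor-binding (Langmuir) equation. The sketch below is an added illustration under that assumption; the function name, dissociation constant, and receptor counts are invented for the example:

```python
# Relative response of a target cell, modeled (illustratively) as proportional
# to the number of hormone-bound receptors. The bound fraction follows the
# one-site binding equation: occupied = [H] / (Kd + [H]).

def cellular_response(hormone_conc, receptor_count, kd=1.0):
    """Response in arbitrary units; hormone_conc and kd share the same units."""
    occupied_fraction = hormone_conc / (kd + hormone_conc)
    return receptor_count * occupied_fraction

# Same hormone concentration, different receptor counts:
print(cellular_response(2.0, receptor_count=1000))  # baseline        ~666.7
print(cellular_response(2.0, receptor_count=1500))  # up-regulated   ~1000.0
print(cellular_response(2.0, receptor_count=500))   # down-regulated  ~333.3
```

With the hormone level held fixed, adding receptors (up-regulation) raises the response and removing them (down-regulation) lowers it, which is the behavior described above.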
Receptor binding alters cellular activity and results in an increase or decrease in normal body processes. Depending on the location of the protein receptor on the target cell and the chemical structure of the hormone, hormones can mediate changes directly by binding to intracellular hormone receptors and modulating gene transcription, or indirectly by binding to cell surface receptors and stimulating signaling pathways.

Intracellular Hormone Receptors

Lipid-derived (soluble) hormones such as steroid hormones diffuse across the membranes of the endocrine cell. Once outside the cell, they bind to transport proteins that keep them soluble in the bloodstream. At the target cell, the hormones are released from the carrier protein and diffuse across the lipid bilayer of the plasma membrane of cells. The steroid hormones pass through the plasma membrane of a target cell and adhere to intracellular receptors residing in the cytoplasm or in the nucleus. The cell signaling pathways induced by the steroid hormones regulate specific genes on the cell's DNA. The hormone-receptor complex acts as a transcription regulator by increasing or decreasing the synthesis of mRNA molecules of specific genes. This, in turn, determines the amount of corresponding protein that is synthesized by altering gene expression. This protein can be used either to change the structure of the cell or to produce enzymes that catalyze chemical reactions. In this way, the steroid hormone regulates specific cell processes as illustrated in Figure 37.5.

Visual Connection

Heat shock proteins (HSP) are so named because they help refold misfolded proteins. In response to increased temperature (a "heat shock"), heat shock proteins are activated by release from the NR/HSP complex. At the same time, transcription of HSP genes is activated. Why do you think the cell responds to a heat shock by increasing the activity of proteins that help refold misfolded proteins?

Other lipid-soluble hormones that are not steroid hormones, such as vitamin D and thyroxine, have receptors located in the nucleus. The hormones diffuse across both the plasma membrane and the nuclear envelope, then bind to receptors in the nucleus. The hormone-receptor complex stimulates transcription of specific genes.

Plasma Membrane Hormone Receptors

Amino acid-derived hormones and polypeptide hormones are not lipid-derived (lipid-soluble) and therefore cannot diffuse through the plasma membrane of cells. Lipid-insoluble hormones bind to receptors on the outer surface of the plasma membrane, via plasma membrane hormone receptors. Unlike steroid hormones, lipid-insoluble hormones do not directly affect the target cell because they cannot enter the cell and act directly on DNA. Binding of these hormones to a cell surface receptor results in activation of a signaling pathway; this triggers intracellular activity and carries out the specific effects associated with the hormone. In this way, nothing passes through the cell membrane; the hormone that binds at the surface remains at the surface of the cell while the intracellular product remains inside the cell. The hormone that initiates the signaling pathway is called a first messenger, which activates a second messenger in the cytoplasm, as illustrated in Figure 37.6.

One very important second messenger is cyclic AMP (cAMP). When a hormone binds to its membrane receptor, a G-protein that is associated with the receptor is activated; G-proteins are proteins separate from receptors that are found in the cell membrane. When a hormone is not bound to the receptor, the G-protein is inactive and is bound to guanosine diphosphate, or GDP. When a hormone binds to the receptor, the G-protein is activated by binding guanosine triphosphate, or GTP, in place of GDP. After binding, GTP is hydrolyzed by the G-protein into GDP, and the G-protein becomes inactive. The activated G-protein in turn activates a membrane-bound enzyme called adenylyl cyclase. Adenylyl cyclase catalyzes the conversion of ATP to cAMP. cAMP, in turn, activates a group of proteins called protein kinases, which transfer a phosphate group from ATP to a substrate molecule in a process called phosphorylation. The phosphorylation of a substrate molecule changes its structural orientation, thereby activating it. These activated molecules can then mediate changes in cellular processes.

The effect of a hormone is amplified as the signaling pathway progresses. The binding of a hormone at a single receptor causes the activation of many G-proteins, which activate adenylyl cyclase. Each molecule of adenylyl cyclase then triggers the formation of many molecules of cAMP. Further amplification occurs as protein kinases, once activated by cAMP, can catalyze many reactions. In this way, a small amount of hormone can trigger the formation of a large amount of cellular product. To stop hormone activity, cAMP is deactivated by the cytoplasmic enzyme phosphodiesterase, or PDE. PDE is always present in the cell and breaks down cAMP to control hormone activity, preventing overproduction of cellular products.
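Because the gains at each stage multiply, a rough calculation shows how a single bound hormone can yield an enormous number of product molecules. The sketch below is an added illustration; the per-stage numbers are invented round figures, not measured values:

```python
# Back-of-the-envelope sketch of second-messenger amplification.
# The point is that per-stage gains multiply along the cascade.

hormones_bound = 1            # one hormone molecule bound at one receptor
g_proteins_per_receptor = 10  # each active receptor activates many G-proteins
camp_per_cyclase = 100        # each adenylyl cyclase makes many cAMP molecules
substrates_per_kinase = 100   # each active kinase phosphorylates many substrates

activated_products = (hormones_bound
                      * g_proteins_per_receptor
                      * camp_per_cyclase
                      * substrates_per_kinase)
print(activated_products)  # 100000 activated products from a single hormone
```

Even with these modest assumed gains, one hormone molecule ends up driving on the order of a hundred thousand downstream events, which is why only a small amount of circulating hormone is needed.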
The specific response of a cell to a lipid-insoluble hormone depends on the type of receptors that are present on the cell membrane and the substrate molecules present in the cell cytoplasm. Cellular responses to hormone binding of a receptor include altering membrane permeability and metabolic pathways, stimulating synthesis of proteins and enzymes, and activating hormone release.

37.3 Regulation of Body Processes

Learning Objectives

By the end of this section, you will be able to:
Explain how hormones regulate the excretory system
Discuss the role of hormones in the reproductive system
Describe how hormones regulate metabolism
Explain the role of hormones in different diseases

Hormones have a wide range of effects and modulate many different body processes. The key regulatory processes examined here are those affecting the excretory system, the reproductive system, metabolism, blood calcium concentrations, growth, and the stress response.

Hormonal Regulation of the Excretory System

Maintaining a proper water balance in the body is important to avoid dehydration or over-hydration (hyponatremia). The water concentration of the body is monitored by osmoreceptors in the hypothalamus, which detect the concentration of electrolytes in the extracellular fluid. The concentration of electrolytes in the blood rises when there is water loss caused by excessive perspiration, inadequate water intake, or low blood volume due to blood loss. An increase in blood electrolyte levels results in a neuronal signal being sent from the osmoreceptors in the hypothalamic nuclei.

The pituitary gland has two components: anterior and posterior. The anterior pituitary is composed of glandular cells that secrete protein hormones. The posterior pituitary is an extension of the hypothalamus; it is composed largely of neurons that are continuous with the hypothalamus.

The hypothalamus produces a polypeptide hormone known as antidiuretic hormone (ADH), which is transported to and released from the posterior pituitary gland. The principal action of ADH is to regulate the amount of water excreted by the kidneys. As ADH (which is also known as vasopressin) causes direct water reabsorption from the kidney tubules, salts and wastes are concentrated in what will eventually be excreted as urine. The hypothalamus controls the mechanisms of ADH secretion, either by regulating blood volume or the concentration of water in the blood. Dehydration or physiological stress can cause an increase of osmolarity above 300 mOsm/L, which in turn raises ADH secretion; water is then retained, causing an increase in blood pressure. ADH travels in the bloodstream to the kidneys. Once at the kidneys, ADH causes the kidneys to become more permeable to water by temporarily inserting water channels, aquaporins, into the kidney tubules. Water moves out of the kidney tubules through the aquaporins, reducing urine volume. The water is reabsorbed into the capillaries, lowering blood osmolarity back toward normal. As blood osmolarity decreases, a negative feedback mechanism reduces osmoreceptor activity in the hypothalamus, and ADH secretion is reduced. ADH release can be reduced by certain substances, including alcohol, which can cause increased urine production and dehydration.

Chronic underproduction of ADH or a mutation in the ADH receptor results in diabetes insipidus. If the posterior pituitary does not release enough ADH, water cannot be retained by the kidneys and is lost as urine. This causes increased thirst, but water taken in is lost again and must be continually consumed. If the condition is not severe, dehydration may not occur, but severe cases can lead to electrolyte imbalances due to dehydration.
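The ADH loop described above is a classic negative feedback controller on blood osmolarity. The following is a minimal added sketch, assuming a proportional response above the 300 mOsm/L threshold mentioned in the text; all gain constants are invented for illustration:

```python
# Toy negative feedback loop: osmolarity above the set point raises ADH
# secretion, ADH promotes water reabsorption via aquaporins, and falling
# osmolarity shuts ADH secretion back off.

SET_POINT = 300.0  # mOsm/L, the threshold cited in the text

def adh_level(osmolarity, gain=0.05):
    """ADH secretion rises in proportion to osmolarity above the set point."""
    return max(0.0, gain * (osmolarity - SET_POINT))

osmolarity = 310.0  # dehydrated starting state
for step in range(10):
    adh = adh_level(osmolarity)
    # more ADH -> more aquaporins -> more water reabsorbed -> osmolarity falls
    osmolarity -= adh * 2.0
    print(f"step {step}: osmolarity = {osmolarity:.1f} mOsm/L, ADH = {adh:.2f}")
```

Each pass through the loop shrinks the deviation from the set point, so the simulated osmolarity glides back toward 300 mOsm/L and ADH output fades away, mirroring the feedback described above.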
Another hormone responsible for maintaining electrolyte concentrations in extracellular fluids is aldosterone, a steroid hormone that is produced by the adrenal cortex. In contrast to ADH, which promotes the reabsorption of water to maintain proper water balance, aldosterone maintains proper water balance by enhancing Na+ reabsorption and K+ secretion from the extracellular fluid of the cells in kidney tubules. Because it is produced in the cortex of the adrenal gland and affects the concentrations of the minerals Na+ and K+, aldosterone is referred to as a mineralocorticoid, a corticosteroid that affects ion and water balance. Aldosterone release is stimulated by a decrease in blood sodium levels, blood volume, or blood pressure, or by an increase in blood potassium levels. It also prevents the loss of Na+ from sweat, saliva, and gastric juice. The reabsorption of Na+ also results in the osmotic reabsorption of water, which alters blood volume and blood pressure.

Aldosterone production can be stimulated by low blood pressure, which triggers a sequence of chemical release, as illustrated in Figure 37.7. When blood pressure drops, the renin-angiotensin-aldosterone system (RAAS) is activated. Cells in the juxtaglomerular apparatus, which regulates the functions of the nephrons of the kidney, detect this and release renin. Renin, an enzyme, circulates in the blood and reacts with a plasma protein produced by the liver called angiotensinogen. When angiotensinogen is cleaved by renin, it produces angiotensin I, which is then converted into angiotensin II in the lungs. Angiotensin II functions as a hormone and causes the release of the hormone aldosterone by the adrenal cortex, resulting in increased Na+ reabsorption, water retention, and an increase in blood pressure. Angiotensin II, in addition to being a potent vasoconstrictor, also causes an increase in ADH and increased thirst, both of which help to raise blood pressure.
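Because the RAAS unfolds as a fixed sequence of causes and effects, it can be written down as an ordered pipeline. The sketch below simply encodes the steps named above; it is an illustrative device, not a quantitative model:

```python
# The RAAS cascade as an ordered pipeline of cause-and-effect steps.
# Step descriptions follow the text; the list structure is for illustration.

def raas_cascade():
    """Ordered steps triggered when blood pressure drops."""
    return [
        "juxtaglomerular apparatus detects low blood pressure and releases renin",
        "renin cleaves angiotensinogen (a liver plasma protein) into angiotensin I",
        "angiotensin I is converted to angiotensin II in the lungs",
        "angiotensin II constricts vessels and increases ADH release and thirst",
        "angiotensin II triggers aldosterone release from the adrenal cortex",
        "aldosterone increases Na+ reabsorption, with osmotic water retention",
        "blood volume and blood pressure rise",
    ]

for step in raas_cascade():
    print(step)
```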
Hormonal Regulation of the Reproductive System

Regulation of the reproductive system is a process that requires the action of hormones from the pituitary gland, the adrenal cortex, and the gonads. During puberty in both males and females, the hypothalamus produces gonadotropin-releasing hormone (GnRH), which stimulates the production and release of follicle-stimulating hormone (FSH) and luteinizing hormone (LH) from the anterior pituitary gland. These hormones regulate the gonads (testes in males and ovaries in females) and therefore are called gonadotropins. In both males and females, FSH stimulates gamete production and LH stimulates production of hormones by the gonads. An increase in gonad hormone levels inhibits GnRH production through a negative feedback loop.

Regulation of the Male Reproductive System

In males, FSH stimulates the maturation of sperm cells. FSH production is inhibited by the hormone inhibin, which is released by the testes. LH stimulates production of the sex hormones (androgens) by the interstitial cells of the testes and therefore is also called interstitial cell-stimulating hormone. The most widely known androgen in males is testosterone. Testosterone promotes the production of sperm and masculine characteristics. The adrenal cortex also produces small amounts of testosterone precursor, although the role of this additional hormone production is not fully understood.

Everyday Connection

The Dangers of Synthetic Hormones

Some athletes attempt to boost their performance by using artificial hormones that enhance muscle performance. Anabolic steroids, a form of the male sex hormone testosterone, are one of the most widely known performance-enhancing drugs. Steroids are used to help build muscle mass. Other hormones that are used to enhance athletic performance include erythropoietin, which triggers the production of red blood cells, and human growth hormone, which can help in building muscle mass. Most performance-enhancing drugs are illegal for nonmedical purposes. They are also banned by national and international governing bodies including the International Olympic Committee, the U.S. Olympic Committee, the National Collegiate Athletic Association, Major League Baseball, and the National Football League. The side effects of synthetic hormones are often significant and irreversible, and in some cases, fatal. Androgens produce several complications such as liver dysfunction and liver tumors, prostate gland enlargement, difficulty urinating, premature closure of epiphyseal cartilages, testicular atrophy, infertility, and immune system depression. The physiological strain caused by these substances is often greater than what the body can handle, leading to unpredictable and dangerous effects and linking their use to heart attacks, strokes, and impaired cardiac function.

Regulation of the Female Reproductive System

In females, FSH stimulates development of egg cells, called ova, which develop in structures called follicles. Follicle cells produce the hormone inhibin, which inhibits FSH production. LH also plays a role in the development of ova, induction of ovulation, and stimulation of estradiol and progesterone production by the ovaries, as illustrated in Figure 37.9. Estradiol and progesterone are steroid hormones that prepare the body for pregnancy. Estradiol produces secondary sex characteristics in females, while both estradiol and progesterone regulate the menstrual cycle.

In addition to producing FSH and LH, the anterior portion of the pituitary gland also produces the hormone prolactin (PRL) in females. Prolactin stimulates the production of milk by the mammary glands following childbirth. Prolactin levels are regulated by the hypothalamic hormones prolactin-releasing hormone (PRH) and prolactin-inhibiting hormone (PIH), which is now known to be dopamine. PRH stimulates the release of prolactin and PIH inhibits it.

The posterior pituitary releases the hormone oxytocin, which stimulates uterine contractions during childbirth. The uterine smooth muscles are not very sensitive to oxytocin until late in pregnancy, when the number of oxytocin receptors in the uterus peaks. Stretching of tissues in the uterus and cervix stimulates oxytocin release during childbirth. Contractions increase in intensity as blood levels of oxytocin rise via a positive feedback mechanism until the birth is complete. Oxytocin also stimulates the contraction of myoepithelial cells around the milk-producing mammary glands. As these cells contract, milk is forced from the secretory alveoli into milk ducts and is ejected from the breasts in the milk ejection ("let-down") reflex. Oxytocin release is stimulated by the suckling of an infant, which triggers the synthesis of oxytocin in the hypothalamus and its release into circulation at the posterior pituitary.

Hormonal Regulation of Metabolism

Blood glucose levels vary widely over the course of a day as periods of food consumption alternate with periods of fasting. Insulin and glucagon are the two hormones primarily responsible for maintaining homeostasis of blood glucose levels. Additional regulation is mediated by the thyroid hormones.

Regulation of Blood Glucose Levels by Insulin and Glucagon

Cells of the body require nutrients in order to function, and these nutrients are obtained through feeding. In order to manage nutrient intake, storing excess intake and utilizing reserves when necessary, the body uses hormones to moderate energy stores. Insulin is produced by the beta cells of the pancreas, which are stimulated to release insulin as blood glucose levels rise (for example, after a meal is consumed). Insulin lowers blood glucose levels by enhancing the rate of glucose uptake and utilization by target cells, which use glucose for ATP production. It also stimulates the liver to convert glucose to glycogen, which is then stored by cells for later use.
Insulin also increases glucose transport into certain cells, such as muscle cells and the liver. This results from an insulin-mediated increase in the number of glucose transporter proteins in cell membranes, which remove glucose from circulation by facilitated diffusion. As insulin binds to its target cell via insulin receptors and signal transduction, it triggers the cell to incorporate glucose transport proteins into its membrane. This allows glucose to enter the cell, where it can be used as an energy source. However, this does not occur in all cells: some cells, including those in the kidneys and brain, can access glucose without the use of insulin. Insulin also stimulates the conversion of glucose to fat in adipocytes and the synthesis of proteins. These actions mediated by insulin cause blood glucose concentrations to fall, called a hypoglycemic "low sugar" effect, which inhibits further insulin release from beta cells through a negative feedback loop.

Link to Learning

This animation describes the role of insulin and the pancreas in diabetes.

Impaired insulin function can lead to a condition called diabetes mellitus, the main symptoms of which are illustrated in Figure 37.10. This can be caused by low levels of insulin production by the beta cells of the pancreas, or by reduced sensitivity of tissue cells to insulin. This prevents glucose from being absorbed by cells, causing high levels of blood glucose, or hyperglycemia (high sugar). High blood glucose levels make it difficult for the kidneys to recover all the glucose from nascent urine, resulting in glucose being lost in urine. High glucose levels also result in less water being reabsorbed by the kidneys, causing high amounts of urine to be produced; this may result in dehydration. Over time, high blood glucose levels can cause nerve damage to the eyes and peripheral body tissues, as well as damage to the kidneys and cardiovascular system. Oversecretion of insulin can cause hypoglycemia, low blood glucose levels. This causes insufficient glucose availability to cells, often leading to muscle weakness, and can sometimes cause unconsciousness or death if left untreated.

When blood glucose levels decline below normal levels, for example between meals or when glucose is utilized rapidly during exercise, the hormone glucagon is released from the alpha cells of the pancreas. Glucagon raises blood glucose levels, eliciting what is called a hyperglycemic effect, by stimulating the breakdown of glycogen to glucose in skeletal muscle cells and liver cells in a process called glycogenolysis. Glucose can then be utilized as energy by muscle cells and released into circulation by the liver cells. Glucagon also stimulates absorption of amino acids from the blood by the liver, which then converts them to glucose. This process of glucose synthesis is called gluconeogenesis. Glucagon also stimulates adipose cells to release fatty acids into the blood. These actions mediated by glucagon result in an increase in blood glucose levels to normal homeostatic levels. Rising blood glucose levels inhibit further glucagon release by the pancreas via a negative feedback mechanism. In this way, insulin and glucagon work together to maintain homeostatic glucose levels, as shown in Figure 37.11.
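The antagonistic insulin/glucagon pairing described above can be caricatured as a simple control loop. The following toy simulation is an added sketch, not part of the source text; the set point, units, and rate constants are assumptions chosen only to show the feedback converging:

```python
# Toy simulation of glucose homeostasis by the antagonistic hormone pair.
# Insulin pulls high glucose down; glucagon pushes low glucose up.

SET_POINT = 90.0  # mg/dL, a commonly cited normal fasting value (assumption)

def regulate(glucose):
    """One regulatory step toward the set point."""
    if glucose > SET_POINT:
        # insulin: uptake by target cells plus glycogen synthesis
        return glucose - 0.3 * (glucose - SET_POINT)
    else:
        # glucagon: glycogenolysis plus gluconeogenesis
        return glucose + 0.3 * (SET_POINT - glucose)

glucose = 140.0  # post-meal spike
for step in range(8):
    glucose = regulate(glucose)
    print(f"step {step}: glucose = {glucose:.1f} mg/dL")
```

From a post-meal spike of 140 mg/dL the simulated level decays smoothly back toward the set point; had it started below 90, the glucagon branch would drive it back up instead, which is the two-sided homeostasis the text describes.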
Visual Connection

Pancreatic tumors may cause excess secretion of glucagon. Type I diabetes results from the failure of the pancreas to produce insulin. Which of the following statements about these two conditions is true?
A pancreatic tumor and type I diabetes will have the opposite effects on blood sugar levels.
A pancreatic tumor and type I diabetes will both cause hyperglycemia.
A pancreatic tumor and type I diabetes will both cause hypoglycemia.
Both pancreatic tumors and type I diabetes result in the inability of cells to take up glucose.

Regulation of Blood Glucose Levels by Thyroid Hormones

The basal metabolic rate, which is the amount of calories required by the body at rest, is determined by two hormones produced by the thyroid gland: thyroxine, also known as tetraiodothyronine or T4, and triiodothyronine, also known as T3. These hormones affect nearly every cell in the body except for the adult brain, uterus, testes, blood cells, and spleen. They are transported across the plasma membrane of target cells and bind to receptors on the mitochondria, resulting in increased ATP production. In the nucleus, T3 and T4 activate genes involved in energy production and glucose oxidation. This results in increased rates of metabolism and body heat production, which is known as the hormone's calorigenic effect.

T3 and T4 release from the thyroid gland is stimulated by thyroid-stimulating hormone (TSH), which is produced by the anterior pituitary. TSH binding at the receptors of the follicle of the thyroid triggers the production of T3 and T4 from a glycoprotein called thyroglobulin. Thyroglobulin is present in the follicles of the thyroid and is converted into thyroid hormones with the addition of iodine. Iodine is formed from iodide ions that are actively transported into the thyroid follicle from the bloodstream. A peroxidase enzyme then attaches the iodine to the tyrosine amino acid found in thyroglobulin. T3 has three iodine ions attached, while T4 has four iodine ions attached. T3 and T4 are then released into the bloodstream, with T4 being released in much greater amounts than T3. As T3 is more active than T4 and is responsible for most of the effects of thyroid hormones, tissues of the body convert T4 to T3 by the removal of an iodine ion. Most of the released T3 and T4 becomes attached to transport proteins in the bloodstream and is unable to cross the plasma membrane of cells. These protein-bound molecules are only released when blood levels of the unattached hormone begin to decline. In this way, a week's worth of reserve hormone is maintained in the blood. Increased T3 and T4 levels in the blood inhibit the release of TSH, which results in lower T3 and T4 release from the thyroid.

The follicular cells of the thyroid require iodides (anions of iodine) in order to synthesize T3 and T4. Iodides obtained from the diet are actively transported into follicle cells, resulting in a concentration that is approximately 30 times higher than in blood. The typical diet in North America provides more iodine than required due to the addition of iodide to table salt. Inadequate iodine intake, which occurs in many developing countries, results in an inability to synthesize T3 and T4 hormones. The thyroid gland enlarges in a condition called goiter, which is caused by overproduction of TSH without the formation of thyroid hormone. Thyroglobulin is contained in a fluid called colloid, and TSH stimulation results in higher levels of colloid accumulation in the thyroid. In the absence of iodine, this is not converted to thyroid hormone, and colloid accumulates more and more in the thyroid gland, leading to goiter.
Disorders can arise from both the underproduction and overproduction of thyroid hormones. Hypothyroidism, underproduction of the thyroid hormones, can cause a low metabolic rate leading to weight gain, sensitivity to cold, and reduced mental activity, among other symptoms. In children, hypothyroidism can cause cretinism, which can lead to mental retardation and growth defects. Hyperthyroidism, the overproduction of thyroid hormones, can lead to an increased metabolic rate and its effects: weight loss, excess heat production, sweating, and an increased heart rate. Graves' disease is one example of a hyperthyroid condition.

Hormonal Control of Blood Calcium Levels

Regulation of blood calcium concentrations is important for generation of muscle contractions and nerve impulses, which are electrically stimulated. If calcium levels get too high, membrane permeability to sodium decreases and membranes become less responsive. If calcium levels get too low, membrane permeability to sodium increases and convulsions or muscle spasms can result.

Blood calcium levels are regulated by parathyroid hormone (PTH), which is produced by the parathyroid glands, as illustrated in Figure 37.12. PTH is released in response to low blood Ca2+ levels. PTH increases Ca2+ levels by targeting the skeleton, the kidneys, and the intestine. In the skeleton, PTH stimulates osteoclasts, which causes bone to be reabsorbed, releasing Ca2+ from bone into the blood. PTH also inhibits osteoblasts, reducing Ca2+ deposition in bone. In the intestines, PTH increases dietary Ca2+ absorption, and in the kidneys, PTH stimulates reabsorption of Ca2+. While PTH acts directly on the kidneys to increase Ca2+ reabsorption, its effects on the intestine are indirect. PTH triggers the formation of calcitriol, an active form of vitamin D, which acts on the intestines to increase absorption of dietary calcium. PTH release is inhibited by rising blood calcium levels.

Hyperparathyroidism results from an overproduction of parathyroid hormone. This results in excessive calcium being removed from bones and introduced into blood circulation, producing structural weakness of the bones, which can lead to deformation and fractures, plus nervous system impairment due to high blood calcium levels. Hypoparathyroidism, the underproduction of PTH, results in extremely low levels of blood calcium, which causes impaired muscle function and may result in tetany (severe sustained muscle contraction).

The hormone calcitonin, which is produced by the parafollicular or C cells of the thyroid, has the opposite effect on blood calcium levels as does PTH. Calcitonin decreases blood calcium levels by inhibiting osteoclasts, stimulating osteoblasts, and stimulating calcium excretion by the kidneys. This results in calcium being added to the bones to promote structural integrity. Calcitonin is most important in children (when it stimulates bone growth), during pregnancy (when it reduces maternal bone loss), and during prolonged starvation (because it reduces bone mass loss). In healthy nonpregnant, unstarved adults, the role of calcitonin is unclear.

Hormonal Regulation of Growth

Hormonal regulation is required for the growth and replication of most cells in the body. Growth hormone (GH), produced by the anterior portion of the pituitary gland, accelerates the rate of protein synthesis, particularly in skeletal muscle and bones. Growth hormone has direct and indirect mechanisms of action.
The first direct action of GH is stimulation of triglyceride breakdown (lipolysis) and release into the blood by adipocytes. This results in a switch by most tissues from utilizing glucose as an energy source to utilizing fatty acids. This process is called a glucose-sparing effect. In another direct mechanism, GH stimulates glycogen breakdown in the liver; the glycogen is then released into the blood as glucose. Blood glucose levels increase as most tissues are utilizing fatty acids instead of glucose for their energy needs. The GH-mediated increase in blood glucose levels is called a diabetogenic effect because it is similar to the high blood glucose levels seen in diabetes mellitus.

The indirect mechanism of GH action is mediated by insulin-like growth factors (IGFs), or somatomedins, a family of growth-promoting proteins produced by the liver that stimulate tissue growth. IGFs stimulate the uptake of amino acids from the blood, allowing the formation of new proteins, particularly in skeletal muscle cells, cartilage cells, and other target cells, as shown in Figure 37.13. This is especially important after a meal, when glucose and amino acid concentration levels are high in the blood. GH levels are regulated by two hormones produced by the hypothalamus. GH release is stimulated by growth hormone-releasing hormone (GHRH) and is inhibited by growth hormone-inhibiting hormone (GHIH), also called somatostatin.

A balanced production of growth hormone is critical for proper development. Underproduction of GH in adults does not appear to cause any abnormalities, but in children it can result in pituitary dwarfism, in which growth is reduced. Pituitary dwarfism is characterized by symmetric body formation. In some cases, individuals are under 30 inches in height. Oversecretion of growth hormone can lead to gigantism in children, causing excessive growth. In some documented cases, individuals can reach heights of over eight feet. In adults, excessive GH can lead to acromegaly, a condition in which there is enlargement of bones in the face, hands, and feet that are still capable of growth.

Hormonal Regulation of Stress

When a threat or danger is perceived, the body responds by releasing hormones that will ready it for the "fight-or-flight" response. The effects of this response are familiar to anyone who has been in a stressful situation: increased heart rate, dry mouth, and hair standing up.

Evolution Connection: Fight-or-Flight Response

Interactions of the endocrine hormones have evolved to ensure the body's internal environment remains stable. Stressors are stimuli that disrupt homeostasis. The sympathetic division of the vertebrate autonomic nervous system has evolved the fight-or-flight response to counter stress-induced disruptions of homeostasis. In the initial alarm phase, the sympathetic nervous system stimulates an increase in energy levels through increased blood glucose levels. This prepares the body for physical activity that may be required to respond to stress: to either fight for survival or to flee from danger. However, some stresses, such as illness or injury, can last for a long time. Glycogen reserves, which provide energy in the short-term response to stress, are exhausted after several hours and cannot meet long-term energy needs. If glycogen reserves were the only energy source available, neural functioning could not be maintained once the reserves became depleted due to the nervous system's high requirement for glucose.
In this situation, the body has evolved a response to counter long-term stress through the actions of the glucocorticoids, which ensure that long-term energy requirements can be met. The glucocorticoids mobilize lipid and protein reserves, stimulate gluconeogenesis, conserve glucose for use by neural tissue, and stimulate the conservation of salts and water. The mechanisms to maintain homeostasis that are described here are those observed in the human body. However, the fight-or-flight response exists in some form in all vertebrates.

The sympathetic nervous system regulates the stress response via the hypothalamus. Stressful stimuli cause the hypothalamus to signal the adrenal medulla (which mediates short-term stress responses) via nerve impulses, and the adrenal cortex, which mediates long-term stress responses, via the hormone adrenocorticotropic hormone (ACTH), which is produced by the anterior pituitary.

Short-term Stress Response

When presented with a stressful situation, the body responds by calling for the release of hormones that provide a burst of energy. The hormones epinephrine (also known as adrenaline) and norepinephrine (also known as noradrenaline) are released by the adrenal medulla. How do these hormones provide a burst of energy? Epinephrine and norepinephrine increase blood glucose levels by stimulating the liver and skeletal muscles to break down glycogen and by stimulating glucose release by liver cells. Additionally, these hormones increase oxygen availability to cells by increasing the heart rate and dilating the bronchioles. The hormones also prioritize body function by increasing blood supply to essential organs such as the heart, brain, and skeletal muscles, while restricting blood flow to organs not in immediate need, such as the skin, digestive system, and kidneys. Epinephrine and norepinephrine are collectively called catecholamines.

Link to Learning
Watch this Discovery Channel animation describing the fight-or-flight response.

Long-term Stress Response

The long-term stress response differs from the short-term stress response. The body cannot sustain the bursts of energy mediated by epinephrine and norepinephrine for long periods. Instead, other hormones come into play. In a long-term stress response, the hypothalamus triggers the release of ACTH from the anterior pituitary gland. The adrenal cortex is stimulated by ACTH to release steroid hormones called corticosteroids. Corticosteroids turn on transcription of certain genes in the nuclei of target cells. They change enzyme concentrations in the cytoplasm and affect cellular metabolism.

There are two main corticosteroids: glucocorticoids such as cortisol, and mineralocorticoids such as aldosterone. These hormones target the breakdown of fat into fatty acids in the adipose tissue. The fatty acids are released into the bloodstream for other tissues to use for ATP production. The glucocorticoids primarily affect glucose metabolism by stimulating glucose synthesis. Glucocorticoids also have anti-inflammatory properties through inhibition of the immune system. For example, cortisone is used as an anti-inflammatory medication; however, it cannot be used long term as it increases susceptibility to disease due to its immune-suppressing effects. Mineralocorticoids function to regulate ion and water balance of the body. The hormone aldosterone stimulates the reabsorption of water and sodium ions in the kidney, which results in increased blood pressure and volume.
Hypersecretion of glucocorticoids can cause a condition known as Cushing's disease, characterized by a shifting of fat storage areas of the body. This can cause the accumulation of adipose tissue in the face and neck, and excessive glucose in the blood. Hyposecretion of the corticosteroids can cause Addison's disease, which may result in bronzing of the skin, hypoglycemia, and low electrolyte levels in the blood.

37.4 Regulation of Hormone Production

Learning Objectives
By the end of this section, you will be able to:
Explain how hormone production is regulated
Discuss the different stimuli that control hormone levels in the body

Hormone production and release are primarily controlled by negative feedback. In negative feedback systems, a stimulus elicits the release of a substance; once the substance reaches a certain level, it sends a signal that stops further release of the substance. In this way, the concentration of hormones in blood is maintained within a narrow range. For example, the anterior pituitary signals the thyroid to release thyroid hormones. Increasing levels of these hormones in the blood then feed back to the hypothalamus and anterior pituitary to inhibit further signaling to the thyroid gland, as illustrated in Figure 37.14. There are three mechanisms by which endocrine glands are stimulated to synthesize and release hormones: humoral stimuli, hormonal stimuli, and neural stimuli.

Visual Connection
Hyperthyroidism is a condition in which the thyroid gland is overactive. Hypothyroidism is a condition in which the thyroid gland is underactive. Which of the conditions are the following two patients most likely to have?
Patient A has symptoms including weight gain, cold sensitivity, low heart rate, and fatigue.
Patient B has symptoms including weight loss, profuse sweating, increased heart rate, and difficulty sleeping.

Humoral Stimuli

The term "humoral" is derived from the term "humor," which refers to bodily fluids such as blood. A humoral stimulus refers to the control of hormone release in response to changes in extracellular fluids such as blood, or in the ion concentration in the blood. For example, a rise in blood glucose levels triggers the pancreatic release of insulin. Insulin causes blood glucose levels to drop, which signals the pancreas to stop producing insulin in a negative feedback loop.

Hormonal Stimuli

Hormonal stimuli refer to the release of a hormone in response to another hormone. A number of endocrine glands release hormones when stimulated by hormones released by other endocrine glands. For example, the hypothalamus produces hormones that stimulate the anterior portion of the pituitary gland. The anterior pituitary in turn releases hormones that regulate hormone production by other endocrine glands. The anterior pituitary releases thyroid-stimulating hormone, which then stimulates the thyroid gland to produce the hormones T3 and T4. As blood concentrations of T3 and T4 rise, they inhibit both the pituitary and the hypothalamus in a negative feedback loop.

Neural Stimuli

In some cases, the nervous system directly stimulates endocrine glands to release hormones, which is referred to as a neural stimulus. Recall that in a short-term stress response, the hormones epinephrine and norepinephrine are important for providing the bursts of energy required for the body to respond.
Here, neuronal signaling from the sympathetic nervous system directly stimulates the adrenal medulla to release the hormones epinephrine and norepinephrine in response to stress.

37.5 Endocrine Glands

Learning Objectives
By the end of this section, you will be able to:
Describe the role of different glands in the endocrine system
Explain how the different glands work together to maintain homeostasis

Both the endocrine and nervous systems use chemical signals to communicate and regulate the body's physiology. The endocrine system releases hormones that act on target cells to regulate development, growth, energy metabolism, reproduction, and many behaviors. The nervous system releases neurotransmitters or neurohormones that regulate neurons, muscle cells, and endocrine cells. Because neurons can regulate the release of hormones, the nervous and endocrine systems work in a coordinated manner to regulate the body's physiology.

Hypothalamic-Pituitary Axis

The hypothalamus in vertebrates integrates the endocrine and nervous systems. The hypothalamus is an endocrine organ located in the diencephalon of the brain. It receives input from the body and other brain areas and initiates endocrine responses to environmental changes. The hypothalamus acts as an endocrine organ, synthesizing hormones and transporting them along axons to the posterior pituitary gland. It synthesizes and secretes regulatory hormones that control the endocrine cells in the anterior pituitary gland. The hypothalamus contains autonomic centers that control endocrine cells in the adrenal medulla via neuronal control.

The pituitary gland, sometimes called the hypophysis or "master gland," is located at the base of the brain in the sella turcica, a groove of the sphenoid bone of the skull, illustrated in Figure 37.15. It is attached to the hypothalamus via a stalk called the pituitary stalk (or infundibulum). The anterior portion of the pituitary gland is regulated by releasing or release-inhibiting hormones produced by the hypothalamus, and the posterior pituitary receives signals via neurosecretory cells to release hormones produced by the hypothalamus. The pituitary has two distinct regions—the anterior pituitary and the posterior pituitary—which between them secrete nine different peptide or protein hormones. The posterior lobe of the pituitary gland contains axons of the hypothalamic neurons.

Anterior Pituitary

The anterior pituitary gland, or adenohypophysis, is surrounded by a capillary network that extends from the hypothalamus, down along the infundibulum, and to the anterior pituitary. This capillary network is a part of the hypophyseal portal system that carries substances from the hypothalamus to the anterior pituitary and hormones from the anterior pituitary into the circulatory system. A portal system carries blood from one capillary network to another; therefore, the hypophyseal portal system allows hormones produced by the hypothalamus to be carried directly to the anterior pituitary without first entering the circulatory system. The anterior pituitary produces seven hormones: growth hormone (GH), prolactin (PRL), thyroid-stimulating hormone (TSH), melanocyte-stimulating hormone (MSH), adrenocorticotropic hormone (ACTH), follicle-stimulating hormone (FSH), and luteinizing hormone (LH). Anterior pituitary hormones are sometimes referred to as tropic hormones, because they control the functioning of other organs.
While these hormones are produced by the anterior pituitary, their production is controlled by regulatory hormones produced by the hypothalamus. These regulatory hormones can be releasing hormones or inhibiting hormones, causing more or less of the anterior pituitary hormones to be secreted. They travel from the hypothalamus through the hypophyseal portal system to the anterior pituitary, where they exert their effect. Negative feedback then regulates how much of these regulatory hormones are released and how much anterior pituitary hormone is secreted.

Posterior Pituitary

The posterior pituitary is significantly different in structure from the anterior pituitary. It is a part of the brain, extending down from the hypothalamus, and contains mostly nerve fibers and neuroglial cells, which support axons that extend from the hypothalamus to the posterior pituitary. The posterior pituitary and the infundibulum together are referred to as the neurohypophysis. The hormones antidiuretic hormone (ADH), also known as vasopressin, and oxytocin are produced by neurons in the hypothalamus and transported within these axons along the infundibulum to the posterior pituitary. They are released into the circulatory system via neural signaling from the hypothalamus. These hormones are considered to be posterior pituitary hormones, even though they are produced by the hypothalamus, because that is where they are released into the circulatory system. The posterior pituitary itself does not produce hormones, but instead stores hormones produced by the hypothalamus and releases them into the bloodstream.

Thyroid Gland

The thyroid gland is located in the neck, just below the larynx and in front of the trachea, as shown in Figure 37.16. It is a butterfly-shaped gland with two lobes that are connected by the isthmus. It has a dark red color due to its extensive vascular system. When the thyroid swells due to dysfunction, it can be felt under the skin of the neck. The thyroid gland is made up of many spherical thyroid follicles, which are lined with a simple cuboidal epithelium. These follicles contain a viscous fluid, called colloid, which stores the glycoprotein thyroglobulin, the precursor to the thyroid hormones. The follicles produce hormones that can be stored in the colloid or released into the surrounding capillary network for transport to the rest of the body via the circulatory system.

Thyroid follicle cells synthesize the hormone thyroxine, which is also known as T4 because it contains four atoms of iodine, and triiodothyronine, also known as T3 because it contains three atoms of iodine. Follicle cells are stimulated to release stored T3 and T4 by thyroid-stimulating hormone (TSH), which is produced by the anterior pituitary. These thyroid hormones increase the rates of mitochondrial ATP production. A third hormone, calcitonin, is produced by parafollicular cells of the thyroid. Calcitonin release is not controlled by TSH, but instead occurs when calcium ion concentrations in the blood rise. Calcitonin functions to help regulate calcium concentrations in body fluids. It acts in the bones to inhibit osteoclast activity and in the kidneys to stimulate excretion of calcium. The combination of these two events lowers body fluid levels of calcium.

Parathyroid Glands

Most people have four parathyroid glands; however, the number can vary from two to six.
These glands are located on the posterior surface of the thyroid gland, as shown in Figure 37.17. Normally, there is a superior gland and an inferior gland associated with each of the thyroid's two lobes. Each parathyroid gland is covered by connective tissue and contains many secretory cells that are associated with a capillary network. The parathyroid glands produce parathyroid hormone (PTH). PTH increases blood calcium concentrations when calcium ion levels fall below normal. PTH (1) enhances reabsorption of Ca2+ by the kidneys, (2) stimulates osteoclast activity and inhibits osteoblast activity, and (3) stimulates synthesis and secretion of calcitriol by the kidneys, which enhances Ca2+ absorption by the digestive system. PTH is produced by chief cells of the parathyroid. PTH and calcitonin work in opposition to one another to maintain homeostatic Ca2+ levels in body fluids. Another type of cell, the oxyphil cell, also exists in the parathyroid, but its function is not known.

Adrenal Glands

The adrenal glands are associated with the kidneys; one gland is located on top of each kidney, as illustrated in Figure 37.18. The adrenal glands consist of an outer adrenal cortex and an inner adrenal medulla. These regions secrete different hormones.

Adrenal Cortex

The adrenal cortex is made up of layers of epithelial cells and associated capillary networks. These layers form three distinct regions: an outer zona glomerulosa that produces mineralocorticoids, a middle zona fasciculata that produces glucocorticoids, and an inner zona reticularis that produces androgens. The main mineralocorticoid is aldosterone, which regulates the concentration of Na+ ions in urine, sweat, pancreatic secretions, and saliva. Aldosterone release from the adrenal cortex is stimulated by a decrease in blood concentrations of sodium ions, blood volume, or blood pressure, or by an increase in blood potassium levels.

The three main glucocorticoids are cortisol, corticosterone, and cortisone. The glucocorticoids stimulate the synthesis of glucose and gluconeogenesis (converting a non-carbohydrate to glucose) by liver cells, and they promote the release of fatty acids from adipose tissue. These hormones increase blood glucose levels to maintain levels within a normal range between meals. These hormones are secreted in response to ACTH, and levels are regulated by negative feedback.

Androgens are sex hormones that promote masculinity. They are produced in small amounts by the adrenal cortex in both males and females. They do not affect sexual characteristics and may supplement sex hormones released from the gonads. These hormones encourage bone growth, muscle mass, and blood cell formation in children and women.

Adrenal Medulla

The adrenal medulla contains large, irregularly shaped cells that are closely associated with blood vessels. These cells are innervated by preganglionic autonomic nerve fibers from the central nervous system. The adrenal medulla contains two types of secretory cells: one that produces epinephrine (adrenaline) and another that produces norepinephrine (noradrenaline). Epinephrine is the primary adrenal medulla hormone, accounting for 75 to 80 percent of its secretions. Epinephrine and norepinephrine increase heart rate, breathing rate, cardiac muscle contractions, blood pressure, and blood glucose levels. They also accelerate the breakdown of glucose in skeletal muscles and stored fats in adipose tissue.
The release of epinephrine and norepinephrine is stimulated by neural impulses from the sympathetic nervous system. Secretion of these hormones is stimulated by acetylcholine release from preganglionic sympathetic fibers innervating the adrenal medulla. These neural impulses originate from the hypothalamus in response to stress, to prepare the body for the fight-or-flight response.

Pancreas

The pancreas, illustrated in Figure 37.19, is an elongated organ located between the stomach and the proximal portion of the small intestine. It contains both exocrine cells that excrete digestive enzymes and endocrine cells that release hormones. It is sometimes referred to as a heterocrine gland because it has both endocrine and exocrine functions. The endocrine cells of the pancreas form clusters called pancreatic islets or the islets of Langerhans, as visible in the micrograph shown in Figure 37.20. The pancreatic islets contain two primary cell types: alpha cells, which produce the hormone glucagon, and beta cells, which produce the hormone insulin. These hormones regulate blood glucose levels. As blood glucose levels decline, alpha cells release glucagon to raise blood glucose levels by increasing rates of glycogen breakdown and glucose release by the liver. When blood glucose levels rise, such as after a meal, beta cells release insulin to lower blood glucose levels by increasing the rate of glucose uptake in most body cells, and by increasing glycogen synthesis in skeletal muscles and the liver. Together, glucagon and insulin regulate blood glucose levels.

Pineal Gland

The pineal gland produces melatonin. The rate of melatonin production is affected by the photoperiod. Collaterals from the visual pathways innervate the pineal gland. During the day photoperiod, little melatonin is produced; however, melatonin production increases during the dark photoperiod (night). In some mammals, melatonin has an inhibitory effect on reproductive functions by decreasing the production and maturation of sperm, oocytes, and reproductive organs. Melatonin is an effective antioxidant, protecting the CNS from free radicals such as nitric oxide and hydrogen peroxide. Lastly, melatonin is involved in biological rhythms, particularly circadian rhythms such as the sleep-wake cycle and eating habits.

Gonads

The gonads—the male testes and female ovaries—produce steroid hormones. The testes produce androgens, testosterone being the most prominent, which allow for the development of secondary sex characteristics and the production of sperm cells. The ovaries produce estradiol and progesterone, which cause secondary sex characteristics and prepare the body for childbirth.
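The insulin and glucagon pair just described forms a textbook negative feedback loop, and its logic is easy to see in a small simulation. The sketch below is a toy model only: the set point, rate constant, and function names are invented for illustration and are not physiological data.

```python
# Toy negative-feedback model of blood glucose regulation.
# All numbers are illustrative teaching values, not physiology.

SET_POINT = 90.0  # hypothetical glucose set point, mg/dL

def pancreas_response(glucose):
    """Humoral stimulus: beta cells secrete insulin when glucose is high;
    alpha cells secrete glucagon when glucose is low."""
    if glucose > SET_POINT:
        return {"insulin": glucose - SET_POINT, "glucagon": 0.0}
    return {"insulin": 0.0, "glucagon": SET_POINT - glucose}

def next_glucose(glucose, hormones, k=0.1):
    """Insulin drives uptake and glycogen synthesis (glucose falls);
    glucagon drives glycogenolysis and gluconeogenesis (glucose rises)."""
    return glucose - k * hormones["insulin"] + k * hormones["glucagon"]

glucose = 140.0  # e.g., shortly after a meal
for _ in range(30):
    glucose = next_glucose(glucose, pancreas_response(glucose))

print(round(glucose, 1))  # drifts back toward the 90 set point
```

Run step by step, the correction shrinks as glucose approaches the set point, which is exactly the behavior negative feedback produces: a deviation generates a response that cancels the deviation, and the response switches off as the deviation disappears.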
Endocrine Glands and their Associated Hormones

Hypothalamus
- releasing and inhibiting hormones: regulate hormone release from the pituitary gland
- oxytocin: produces uterine contractions and milk secretion in females
- antidiuretic hormone (ADH): water reabsorption from kidneys; vasoconstriction to increase blood pressure

Pituitary (Anterior)
- growth hormone (GH): promotes growth of body tissues and protein synthesis; metabolic functions
- prolactin (PRL): promotes milk production
- thyroid-stimulating hormone (TSH): stimulates thyroid hormone release
- adrenocorticotropic hormone (ACTH): stimulates hormone release by the adrenal cortex (glucocorticoids)
- follicle-stimulating hormone (FSH): stimulates gamete production (both ova and sperm); secretion of estradiol
- luteinizing hormone (LH): stimulates androgen production by gonads; ovulation; secretion of progesterone
- melanocyte-stimulating hormone (MSH): stimulates melanocytes of the skin, increasing melanin pigment production

Pituitary (Posterior)
- antidiuretic hormone (ADH): stimulates water reabsorption by kidneys
- oxytocin: stimulates uterine contractions during childbirth; milk ejection; stimulates ductus deferens and prostate gland contraction during emission

Thyroid
- thyroxine, triiodothyronine: stimulate and maintain metabolism; growth and development
- calcitonin: reduces blood Ca2+ levels

Parathyroid
- parathyroid hormone (PTH): increases blood Ca2+ levels

Adrenal (Cortex)
- aldosterone: increases blood Na+ levels; increases K+ secretion
- cortisol, corticosterone, cortisone: increase blood glucose levels; anti-inflammatory effects

Adrenal (Medulla)
- epinephrine, norepinephrine: stimulate fight-or-flight response; increase blood glucose levels; increase metabolic activities

Pancreas
- insulin: reduces blood glucose levels
- glucagon: increases blood glucose levels

Pineal gland
- melatonin: regulates some biological rhythms and protects the CNS from free radicals

Testes
- androgens: regulate, promote, increase, or maintain sperm production; male secondary sexual characteristics

Ovaries
- estrogen: promotes uterine lining growth; female secondary sexual characteristics
- progestins: promote and maintain uterine lining growth

Table 37.1

Organs with Secondary Endocrine Functions

There are several organs whose primary functions are non-endocrine but that also possess endocrine functions. These include the heart, kidneys, intestines, thymus, gonads, and adipose tissue. The heart possesses endocrine cells in the walls of the atria that are specialized cardiac muscle cells. These cells release the hormone atrial natriuretic peptide (ANP) in response to increased blood volume. High blood volume causes the cells to be stretched, resulting in hormone release. ANP acts on the kidneys to reduce the reabsorption of Na+, causing Na+ and water to be excreted in the urine. ANP also reduces the amounts of renin released by the kidneys and aldosterone released by the adrenal cortex, further preventing the retention of water. In this way, ANP causes a reduction in blood volume and blood pressure, and reduces the concentration of Na+ in the blood.

The gastrointestinal tract produces several hormones that aid in digestion. The endocrine cells are located in the mucosa of the GI tract throughout the stomach and small intestine. Some of the hormones produced include gastrin, secretin, and cholecystokinin, which are secreted in the presence of food, and some of which act on other organs such as the pancreas, gallbladder, and liver.
They trigger the release of gastric juices, which help to break down and digest food in the GI tract. While the adrenal glands associated with the kidneys are major endocrine glands, the kidneys themselves also possess endocrine function. Renin is released in response to decreased blood volume or pressure and is part of the renin-angiotensin-aldosterone system that leads to the release of aldosterone. Aldosterone then causes the retention of Na+ and water, raising blood volume. The kidneys also release calcitriol, which aids in the absorption of Ca2+ and phosphate ions. Erythropoietin (EPO) is a protein hormone that triggers the formation of red blood cells in the bone marrow. EPO is released in response to low oxygen levels. Because red blood cells are oxygen carriers, increased production results in greater oxygen delivery throughout the body. EPO has been used by athletes to improve performance, as greater oxygen delivery to muscle cells allows for greater endurance. Because red blood cells increase the viscosity of blood, artificially high levels of EPO can cause severe health risks.

The thymus is found behind the sternum; it is most prominent in infants, becoming smaller in size through adulthood. The thymus produces hormones referred to as thymosins, which contribute to the development of the immune response.

Adipose tissue is a connective tissue found throughout the body. It produces the hormone leptin in response to food intake. Leptin increases the activity of anorexigenic neurons and decreases that of orexigenic neurons, producing a feeling of satiety after eating, thus affecting appetite and reducing the urge for further eating. Leptin is also associated with reproduction: it must be present for GnRH and gonadotropin synthesis to occur. Extremely thin females may enter puberty late; however, if adipose levels increase, more leptin will be produced, improving fertility.
Summary 5.1 Describe and Prepare Closing Entries for a Business Closing entries: Closing entries prepare a company for the next period and zero out the balances in temporary accounts. Purpose of closing entries: Closing entries are necessary because they help a company review income accumulation during a period, and verify data figures found on the adjusted trial balance. Permanent accounts: Permanent accounts do not close and are accounts that transfer balances to the next period. They include balance sheet accounts, such as assets, liabilities, and stockholders' equity. Temporary accounts: Temporary accounts are closed at the end of each accounting period and include income statement, dividends, and income summary accounts. Income Summary: The Income Summary account is an intermediary between revenues and expenses, and the Retained Earnings account. It stores all the closing information for revenues and expenses, resulting in a "summary" of income or loss for the period. Recording closing entries: There are four closing entries: closing revenues to income summary, closing expenses to income summary, closing income summary to retained earnings, and closing dividends to retained earnings. Posting closing entries: Once all closing entries are complete, the information is transferred to the general ledger T-accounts. Balances in temporary accounts will show a zero balance. 5.2 Prepare a Post-Closing Trial Balance Post-closing trial balance: The post-closing trial balance is prepared after closing entries have been posted to the ledger. This trial balance only includes permanent accounts. 5.3 Apply the Results from the Adjusted Trial Balance to Compute Current Ratio and Working Capital Balance, and Explain How These Measures Represent Liquidity Cash-basis versus accrual-basis system: The cash-basis system delays revenue and expense recognition until cash is collected, which can mislead investors about the daily operations of a business. The accrual-basis system recognizes revenues and expenses in the period in which they were earned or incurred, allowing for an even distribution of income and a more accurate picture of daily operations. Classified balance sheet: The classified balance sheet breaks down assets and liabilities into subcategories focusing on current and long-term classifications. This allows investors to see the company's position in both the short term and the long term. Liquidity: Liquidity means a business has enough cash available to pay bills as they come due. Being too liquid can mean that a company is not using its assets efficiently. Working capital: Working capital shows how efficiently a company operates. The formula is current assets minus current liabilities. Current ratio: The current ratio shows how many times over a company can cover its current liabilities. It is found by dividing current assets by current liabilities. 5.4 Appendix: Complete a Comprehensive Accounting Cycle for a Business The comprehensive accounting cycle is the process in which transactions are recorded in the accounting records and are ultimately reflected in the ending period balances on the financial statements. Comprehensive accounting cycle for a business: A service business is taken through the comprehensive accounting cycle, starting with the formation of the entity, recording all necessary journal entries for its transactions, making all required adjusting and closing journal entries, and culminating in the preparation of all requisite financial statements.
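The two liquidity measures summarized under 5.3 reduce to one-line formulas, so a quick sketch can make the arithmetic concrete. The account names and dollar amounts below are invented for illustration; only the two formulas (working capital = current assets minus current liabilities; current ratio = current assets divided by current liabilities) come from the chapter.

```python
# Hypothetical classified-balance-sheet figures, for illustration only.
current_assets = {"Cash": 12_000, "Accounts Receivable": 5_500, "Supplies": 1_500}
current_liabilities = {"Accounts Payable": 6_000, "Salaries Payable": 2_000}

total_ca = sum(current_assets.values())       # 19,000
total_cl = sum(current_liabilities.values())  # 8,000

working_capital = total_ca - total_cl         # 19,000 - 8,000 = 11,000
current_ratio = total_ca / total_cl           # 19,000 / 8,000 = 2.375

print(f"Working capital: ${working_capital:,}")  # Working capital: $11,000
print(f"Current ratio: {current_ratio:.2f}")     # Current ratio: 2.38
```

A current ratio of about 2.4 here means current assets could cover current liabilities a bit more than twice over, which is how the chapter interprets the measure.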
Chapter Outline
5.1 Describe and Prepare Closing Entries for a Business
5.2 Prepare a Post-Closing Trial Balance
5.3 Apply the Results from the Adjusted Trial Balance to Compute Current Ratio and Working Capital Balance, and Explain How These Measures Represent Liquidity
5.4 Appendix: Complete a Comprehensive Accounting Cycle for a Business

Why It Matters

As we learned in Analyzing and Recording Transactions and The Adjustment Process, Mark Summers has started his own dry-cleaning business called Supreme Cleaners. Mark had a busy first month of operations, including purchasing equipment and supplies, paying his employees, and providing dry-cleaning services to customers. Because Mark had established a sound accounting system to keep track of his daily transactions, he was able to prepare complete and accurate financial statements showing his company's progress and financial position. In order to move forward, Mark needs to review how financial data from his first month of operations transitions into his second month of operations. It is important for Mark to make a smooth transition so he can compare the financials from month to month, and continue on the right path toward growth. It will also assure his investors and lenders that the company is operating as expected. So what does he need to do to prepare for next month?
[ { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> Temporary ( nominal ) accounts are accounts that are closed at the end of each accounting period , and include income statement , dividends , and income summary accounts . <hl> The new account , Income Summary , will be discussed shortly . <hl> These accounts are temporary because they keep their balances during the current accounting period and are set back to zero when the period ends . <hl> <hl> Revenue and expense accounts are closed to Income Summary , and Income Summary and Dividends are closed to the permanent account , Retained Earnings . <hl>", "hl_sentences": "Temporary ( nominal ) accounts are accounts that are closed at the end of each accounting period , and include income statement , dividends , and income summary accounts . These accounts are temporary because they keep their balances during the current accounting period and are set back to zero when the period ends . Revenue and expense accounts are closed to Income Summary , and Income Summary and Dividends are closed to the permanent account , Retained Earnings .", "question": { "cloze_format": "___ is an account that is considered a temporary or nominal account.", "normal_format": "Which of the following accounts is considered a temporary or nominal account?", "question_choices": [ "Fees Earned Revenue", "Prepaid Advertising", "Unearned Service Revenue", "Prepaid Insurance" ], "question_id": "fs-idm196838000", "question_text": "Which of the following accounts is considered a temporary or nominal account?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "Prepaid Insurance" }, "bloom": null, "hl_context": "<hl> Permanent ( real ) accounts are accounts that transfer balances to the next period and include balance sheet accounts , such as assets , liabilities , and stockholders ’ equity . <hl> <hl> These accounts will not be set back to zero at the beginning of the next period ; they will keep their balances . <hl> <hl> Permanent accounts are not part of the closing process . <hl>", "hl_sentences": "Permanent ( real ) accounts are accounts that transfer balances to the next period and include balance sheet accounts , such as assets , liabilities , and stockholders ’ equity . These accounts will not be set back to zero at the beginning of the next period ; they will keep their balances . Permanent accounts are not part of the closing process .", "question": { "cloze_format": "The ___ account is considered a permanent or real account.", "normal_format": "Which of the following accounts is considered a permanent or real account?", "question_choices": [ "Interest Revenue", "Prepaid Insurance", "Insurance Expense", "Supplies Expense" ], "question_id": "fs-idm763804912", "question_text": "Which of the following accounts is considered a permanent or real account?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Once all journal entries have been created , the next step in the accounting cycle is to post journal information to the ledger . The ledger is visually represented by T-accounts . <hl> Cliff will go through each transaction and transfer the account information into the debit or credit side of that ledger account . <hl> Any account that has more than one transaction needs to have a final balance calculated . 
This happens by taking the difference between the debits and credits in an account .", "hl_sentences": "Cliff will go through each transaction and transfer the account information into the debit or credit side of that ledger account .", "question": { "cloze_format": "If a journal entry includes a debit or credit to the Cash account, it is most likely ___.", "normal_format": "If a journal entry includes a debit or credit to the Cash account, it is most likely which of the following?", "question_choices": [ "a closing entry", "an adjusting entry", "an ordinary transaction entry", "outside of the accounting cycle" ], "question_id": "fs-idm234512640", "question_text": "If a journal entry includes a debit or credit to the Cash account, it is most likely which of the following?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "a closing entry" }, "bloom": null, "hl_context": "<hl> The statement of retained earnings shows the period-ending retained earnings after the closing entries have been posted . <hl> When you compare the retained earnings ledger ( T-account ) to the statement of retained earnings , the figures must match . It is important to understand retained earnings is not closed out , it is only updated . <hl> Retained Earnings is the only account that appears in the closing entries that does not close . <hl> You should recall from your previous material that retained earnings are the earnings retained by the company over time — not cash flow but earnings . Now that we have closed the temporary accounts , let ’ s review what the post-closing ledger ( T-accounts ) looks like for Printing Plus . <hl> If the balance in Income Summary before closing is a credit balance , you will debit Income Summary and credit Retained Earnings in the closing entry . <hl> This situation occurs when a company has a net income .", "hl_sentences": "The statement of retained earnings shows the period-ending retained earnings after the closing entries have been posted . Retained Earnings is the only account that appears in the closing entries that does not close . If the balance in Income Summary before closing is a credit balance , you will debit Income Summary and credit Retained Earnings in the closing entry .", "question": { "cloze_format": "If a journal entry includes a debit or credit to the Retained Earnings account, it is most likely ___ .", "normal_format": "If a journal entry includes a debit or credit to the Retained Earnings account, it is most likely which of the following?", "question_choices": [ "a closing entry", "an adjusting entry", "an ordinary transaction entry", "outside of the accounting cycle" ], "question_id": "fs-idm208664176", "question_text": "If a journal entry includes a debit or credit to the Retained Earnings account, it is most likely which of the following?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> The fourth closing entry closes dividends to retained earnings . <hl> To close dividends , Cliff will credit Dividends , and debit Retained Earnings . <hl> Why was income summary not used in the dividends closing entry ? <hl> <hl> Dividends are not an income statement account . <hl> Only income statement accounts help us summarize income , so only income statement accounts should go into income summary .", "hl_sentences": "The fourth closing entry closes dividends to retained earnings . Why was income summary not used in the dividends closing entry ? 
Dividends are not an income statement account .", "question": { "cloze_format": "The ___ account would be present in the closing entries.", "normal_format": "Which of these accounts would be present in the closing entries?", "question_choices": [ "Dividends", "Accounts Receivable", "Unearned Service Revenue", "Sales Tax Payable" ], "question_id": "fs-idm208499376", "question_text": "Which of these accounts would be present in the closing entries?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "Dividends Payable" }, "bloom": null, "hl_context": "<hl> Why was income summary not used in the dividends closing entry ? <hl> <hl> Dividends are not an income statement account . <hl> Only income statement accounts help us summarize income , so only income statement accounts should go into income summary .", "hl_sentences": "Why was income summary not used in the dividends closing entry ? Dividends are not an income statement account .", "question": { "cloze_format": "The account that would not be present in the closing entries is the ___.", "normal_format": "Which of these accounts would not be present in the closing entries?", "question_choices": [ "Utilities Expense", "Fees Earned Revenue", "Insurance Expense", "Dividends Payable" ], "question_id": "fs-idm456953728", "question_text": "Which of these accounts would not be present in the closing entries?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "The statement of retained earnings shows the period-ending retained earnings after the closing entries have been posted . When you compare the retained earnings ledger ( T-account ) to the statement of retained earnings , the figures must match . <hl> It is important to understand retained earnings is not closed out , it is only updated . <hl> Retained Earnings is the only account that appears in the closing entries that does not close . You should recall from your previous material that retained earnings are the earnings retained by the company over time — not cash flow but earnings . Now that we have closed the temporary accounts , let ’ s review what the post-closing ledger ( T-accounts ) looks like for Printing Plus .", "hl_sentences": "It is important to understand retained earnings is not closed out , it is only updated .", "question": { "cloze_format": "The ___ account is never closed.", "normal_format": "Which of these accounts is never closed?", "question_choices": [ "Dividends", "Retained Earnings", "Service Fee Revenue", "Income Summary" ], "question_id": "fs-idm461383760", "question_text": "Which of these accounts is never closed?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "Prepaid Rent" }, "bloom": null, "hl_context": "Temporary ( nominal ) accounts are accounts that are closed at the end of each accounting period , and include income statement , dividends , and income summary accounts . The new account , Income Summary , will be discussed shortly . These accounts are temporary because they keep their balances during the current accounting period and are set back to zero when the period ends . <hl> Revenue and expense accounts are closed to Income Summary , and Income Summary and Dividends are closed to the permanent account , Retained Earnings . <hl> Closing entries prepare a company for the next accounting period by clearing any outstanding balances in certain accounts that should not transfer over to the next period . 
Closing , or clearing the balances , means returning the account to a zero balance . Having a zero balance in these accounts is important so a company can compare performance across periods , particularly with income . It also helps the company keep thorough records of account balances affecting retained earnings . <hl> Revenue , expense , and dividend accounts affect retained earnings and are closed so they can accumulate new balances in the next period , which is an application of the time period assumption . <hl>", "hl_sentences": "Revenue and expense accounts are closed to Income Summary , and Income Summary and Dividends are closed to the permanent account , Retained Earnings . Revenue , expense , and dividend accounts affect retained earnings and are closed so they can accumulate new balances in the next period , which is an application of the time period assumption .", "question": { "cloze_format": "The ___ account is never closed.", "normal_format": "Which of these accounts is never closed?", "question_choices": [ "Prepaid Rent", "Income Summary", "Rent Revenue", "Rent Expense" ], "question_id": "fs-idm223682464", "question_text": "Which of these accounts is never closed?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "Rent Expense" }, "bloom": null, "hl_context": "The next day , January 1 , 2019 , you get ready for work , but before you go to the office , you decide to review your financials for 2019 . What are your year-to-date earnings ? So far , you have not worked at all in the current year . What are your total expenses for rent , electricity , cable and internet , gas , and food for the current year ? <hl> You have also not incurred any expenses yet for rent , electricity , cable , internet , gas or food . <hl> <hl> This means that the current balance of these accounts is zero , because they were closed on December 31 , 2018 , to complete the annual accounting period . <hl>", "hl_sentences": "You have also not incurred any expenses yet for rent , electricity , cable , internet , gas or food . This means that the current balance of these accounts is zero , because they were closed on December 31 , 2018 , to complete the annual accounting period .", "question": { "cloze_format": "The ___ account would be credited when closing the account for rent expense for the year.", "normal_format": "Which account would be credited when closing the account for rent expense for the year?", "question_choices": [ "Prepaid Rent", "Rent Expense", "Rent Revenue", "Unearned Rent Revenue" ], "question_id": "fs-idm222764832", "question_text": "Which account would be credited when closing the account for rent expense for the year?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "The process of preparing the post-closing trial balance is the same as you have done when preparing the unadjusted trial balance and adjusted trial balance . Only permanent account balances should appear on the post-closing trial balance . <hl> These balances in post-closing T-accounts are transferred over to either the debit or credit column on the post-closing trial balance . <hl> When all accounts have been recorded , total each column and verify the columns equal each other . Notice that revenues , expenses , dividends , and income summary all have zero balances . <hl> Retained earnings maintains a $ 4,565 credit balance . 
<hl> <hl> The post-closing T-accounts will be transferred to the post-closing trial balance , which is step 9 in the accounting cycle . <hl>", "hl_sentences": "These balances in post-closing T-accounts are transferred over to either the debit or credit column on the post-closing trial balance . Retained earnings maintains a $ 4,565 credit balance . The post-closing T-accounts will be transferred to the post-closing trial balance , which is step 9 in the accounting cycle .", "question": { "cloze_format": "The account that is included in the post-closing trial balance is the ___.", "normal_format": "Which of these accounts is included in the post-closing trial balance?", "question_choices": [ "Sales Revenue", "Salaries Expense", "Retained Earnings", "Dividends" ], "question_id": "fs-idm486355680", "question_text": "Which of these accounts is included in the post-closing trial balance?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "Dividends" }, "bloom": null, "hl_context": "Notice that only permanent accounts are included . <hl> All temporary accounts with zero balances were left out of this statement . <hl> Unlike previous trial balances , the retained earnings figure is included , which was obtained through the closing process . The process of preparing the post-closing trial balance is the same as you have done when preparing the unadjusted trial balance and adjusted trial balance . <hl> Only permanent account balances should appear on the post-closing trial balance . <hl> These balances in post-closing T-accounts are transferred over to either the debit or credit column on the post-closing trial balance . When all accounts have been recorded , total each column and verify the columns equal each other . <hl> Notice that revenues , expenses , dividends , and income summary all have zero balances . <hl> Retained earnings maintains a $ 4,565 credit balance . The post-closing T-accounts will be transferred to the post-closing trial balance , which is step 9 in the accounting cycle .", "hl_sentences": "All temporary accounts with zero balances were left out of this statement . Only permanent account balances should appear on the post-closing trial balance . Notice that revenues , expenses , dividends , and income summary all have zero balances .", "question": { "cloze_format": "The account that is not included in the post-closing trial balance is ___.", "normal_format": "Which of these accounts is not included in the post-closing trial balance?", "question_choices": [ "Land", "Notes Payable", "Retained Earnings", "Dividends" ], "question_id": "fs-idm277668512", "question_text": "Which of these accounts is not included in the post-closing trial balance?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "The process of preparing the post-closing trial balance is the same as you have done when preparing the unadjusted trial balance and adjusted trial balance . <hl> Only permanent account balances should appear on the post-closing trial balance . <hl> <hl> These balances in post-closing T-accounts are transferred over to either the debit or credit column on the post-closing trial balance . <hl> When all accounts have been recorded , total each column and verify the columns equal each other . Notice that revenues , expenses , dividends , and income summary all have zero balances . <hl> Retained earnings maintains a $ 4,565 credit balance . 
<hl> <hl> The post-closing T-accounts will be transferred to the post-closing trial balance , which is step 9 in the accounting cycle . <hl>", "hl_sentences": "Only permanent account balances should appear on the post-closing trial balance . These balances in post-closing T-accounts are transferred over to either the debit or credit column on the post-closing trial balance . Retained earnings maintains a $ 4,565 credit balance . The post-closing T-accounts will be transferred to the post-closing trial balance , which is step 9 in the accounting cycle .", "question": { "cloze_format": "The year-end Retained Earnings balance would be stated correctly on (the) ___.", "normal_format": "On which of the following would the year-end Retained Earnings balance be stated correctly?", "question_choices": [ "Unadjusted Trial Balance", "Adjusted Trial Balance", "Post-Closing Trial Balance", "The Worksheet" ], "question_id": "fs-idm273529056", "question_text": "On which of the following would the year-end Retained Earnings balance be stated correctly?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "Accounts Payable" }, "bloom": null, "hl_context": "The last step for the month of August is step 9 , preparing the post-closing trial balance . <hl> The post-closing trial balance should only contain permanent account information . <hl> No temporary accounts should appear on this trial balance . Clip ’ em Cliff ’ s post-closing trial balance is presented in Figure 5.27 . The process of preparing the post-closing trial balance is the same as you have done when preparing the unadjusted trial balance and adjusted trial balance . <hl> Only permanent account balances should appear on the post-closing trial balance . <hl> These balances in post-closing T-accounts are transferred over to either the debit or credit column on the post-closing trial balance . When all accounts have been recorded , total each column and verify the columns equal each other .", "hl_sentences": "The post-closing trial balance should only contain permanent account information . Only permanent account balances should appear on the post-closing trial balance .", "question": { "cloze_format": "The account that is included in the post-closing trial balance is the ___.", "normal_format": "Which of these accounts is included in the post-closing trial balance?", "question_choices": [ "Supplies Expense", "Accounts Payable", "Sales Revenue", "Insurance Expense" ], "question_id": "fs-idm488927344", "question_text": "Which of these accounts is included in the post-closing trial balance?" }, "references_are_paraphrase": 0 } ]
5.1 Describe and Prepare Closing Entries for a Business

In this chapter, we complete the final steps (steps 8 and 9) of the accounting cycle, the closing process. You will notice that we do not cover step 10, reversing entries. This is an optional step in the accounting cycle that you will learn about in future courses. Steps 1 through 4 were covered in Analyzing and Recording Transactions, and steps 5 through 7 were covered in The Adjustment Process. Our discussion here begins with journalizing and posting the closing entries (Figure 5.2). These posted entries will then translate into a post-closing trial balance, which is a trial balance that is prepared after all of the closing entries have been recorded.

Think It Through: Should You Compromise to Please Your Supervisor? You are an accountant for a small event-planning business. The business has been operating for several years but does not have the resources for accounting software. This means you are preparing all steps in the accounting cycle by hand. It is the end of the month, and you have completed the post-closing trial balance. You notice that there is still a service revenue account balance listed on this trial balance. Why is it considered an error to have a revenue account on the post-closing trial balance? How do you fix this error?

Introduction to the Closing Entries

Companies are required to close their books at the end of each fiscal year so that they can prepare their annual financial statements and tax returns. However, most companies also prepare monthly financial statements while closing their books annually, so that they have a clear picture of company performance during the year and can give users timely information for decision-making. Closing entries prepare a company for the next accounting period by clearing any outstanding balances in certain accounts that should not transfer over to the next period. Closing, or clearing the balances, means returning the account to a zero balance. Having a zero balance in these accounts is important so a company can compare performance across periods, particularly with income. It also helps the company keep thorough records of account balances affecting retained earnings. Revenue, expense, and dividend accounts affect retained earnings and are closed so they can accumulate new balances in the next period, which is an application of the time period assumption. To further clarify this concept, balances are closed to ensure that all revenues and expenses are recorded in the proper period and then start over the following period. The revenue and expense accounts should start at zero each period, because we are measuring how much revenue is earned and how many expenses are incurred during the period. However, the cash balances, as well as the other balance sheet accounts, are carried over from the end of a current period to the beginning of the next period. For example, a store has an inventory account balance of $100,000. If the store closed at 11:59 p.m. on January 31, 2019, then the inventory balance when it reopened at 12:01 a.m. on February 1, 2019, would still be $100,000. The balance sheet accounts, such as inventory, would carry over into the next period, in this case February 2019. The accounts that need to start with a clean or $0 balance going into the next accounting period are the revenue, expense, and dividend accounts from January 2019. To determine the income (profit or loss) from the month of January, the store needs to close the income statement information from January 2019.
Zeroing January 2019 would then enable the store to calculate the income (profit or loss) for the next month (February 2019), instead of merging it into January’s income and thus providing invalid information solely for the month of February. However, if the company also wanted to keep year-to-date information from month to month, a separate set of records could be kept as the company progresses through the remaining months in the year. For our purposes, assume that we are closing the books at the end of each month unless otherwise noted. Let’s look at another example to illustrate the point. Assume you own a small landscaping business. It is the end of the year, December 31, 2018, and you are reviewing your financials for the entire year. You see that you earned $120,000 this year in revenue and had expenses for rent, electricity, cable, internet, gas, and food that totaled $70,000. You also review the following information: The next day, January 1, 2019, you get ready for work, but before you go to the office, you decide to review your financials for 2019. What are your year-to-date earnings? So far, you have not worked at all in the current year. What are your total expenses for rent, electricity, cable and internet, gas, and food for the current year? You have also not incurred any expenses yet for rent, electricity, cable, internet, gas or food. This means that the current balance of these accounts is zero, because they were closed on December 31, 2018, to complete the annual accounting period. Next, you review your assets and liabilities. What is your current bank account balance? What is the current book value of your electronics, car, and furniture? What about your credit card balances and bank loans? Are the value of your assets and liabilities now zero because of the start of a new year? Your car, electronics, and furniture did not suddenly lose all their value, and unfortunately, you still have outstanding debt. Therefore, these accounts still have a balance in the new year, because they are not closed, and the balances are carried forward from December 31 to January 1 to start the new annual accounting period. This is no different from what will happen to a company at the end of an accounting period. A company will see its revenue and expense accounts set back to zero, but its assets and liabilities will maintain a balance. Stockholders’ equity accounts will also maintain their balances. In summary, the accountant resets the temporary accounts to zero by transferring the balances to permanent accounts. Link to Learning Understanding the accounting cycle and preparing trial balances is a practice valued internationally. The Philippines Center for Entrepreneurship and the government of the Philippines hold regular seminars going over this cycle with small business owners. They are also transparent with their internal trial balances in several key government offices. Check out this article talking about the seminars on the accounting cycle and this public pre-closing trial balance presented by the Philippines Department of Health. Temporary and Permanent Accounts All accounts can be classified as either permanent (real) or temporary (nominal) ( Figure 5.3 ). Permanent (real) accounts are accounts that transfer balances to the next period and include balance sheet accounts, such as assets, liabilities, and stockholders’ equity. These accounts will not be set back to zero at the beginning of the next period; they will keep their balances. 
Permanent accounts are not part of the closing process. Temporary (nominal) accounts are accounts that are closed at the end of each accounting period, and include income statement, dividends, and income summary accounts. The new account, Income Summary, will be discussed shortly. These accounts are temporary because they keep their balances during the current accounting period and are set back to zero when the period ends. Revenue and expense accounts are closed to Income Summary, and Income Summary and Dividends are closed to the permanent account, Retained Earnings.

The income summary account is an intermediary between revenues and expenses, and the Retained Earnings account. It stores all of the closing information for revenues and expenses, resulting in a "summary" of income or loss for the period. The balance in the Income Summary account equals the net income or loss for the period. This balance is then transferred to the Retained Earnings account. Income summary is a nondefined account category. This means that it is not an asset, liability, stockholders' equity, revenue, or expense account. The account has a zero balance throughout the entire accounting period until the closing entries are prepared. Therefore, it will not appear on any trial balances, including the adjusted trial balance, and will not appear on any of the financial statements.

You might be asking yourself, "is the Income Summary account even necessary?" Could we just close out revenues and expenses directly into retained earnings and not have this extra temporary account? We could do this, but by having the Income Summary account, you get a balance for net income a second time. This gives you the balance to compare to the income statement, and allows you to double-check that all income statement accounts are closed and have correct amounts. If you put the revenues and expenses directly into retained earnings, you will not see that check figure. No matter which way you choose to close, the same final balance is in retained earnings.

Your Turn: Permanent versus Temporary Accounts. Following is a list of accounts. State whether each account is a permanent or temporary account.
A. rent expense
B. unearned revenue
C. accumulated depreciation, vehicle
D. common stock
E. fees revenue
F. dividends
G. prepaid insurance
H. accounts payable
Solution: A, E, and F are temporary; B, C, D, G, and H are permanent.

Let's now look at how to prepare closing entries.

Journalizing and Posting Closing Entries

The eighth step in the accounting cycle is preparing closing entries, which includes journalizing and posting the entries to the ledger. Four entries occur during the closing process. The first entry closes revenue accounts to the Income Summary account. The second entry closes expense accounts to the Income Summary account. The third entry closes the Income Summary account to Retained Earnings. The fourth entry closes the Dividends account to Retained Earnings. The information needed to prepare closing entries comes from the adjusted trial balance. Let's explore each entry in more detail using Printing Plus's information from Analyzing and Recording Transactions and The Adjustment Process as our example. The Printing Plus adjusted trial balance for January 31, 2019, is presented in Figure 5.4. The first entry requires revenue accounts to close to the Income Summary account. To get a zero balance in a revenue account, the entry will show a debit to revenues and a credit to Income Summary.
Printing Plus has $140 of interest revenue and $10,100 of service revenue, each with a credit balance on the adjusted trial balance. The closing entry will debit both interest revenue and service revenue, and credit Income Summary. The T-accounts after this closing entry would look like the following. Notice that the balances in interest revenue and service revenue are now zero and are ready to accumulate revenues in the next period. The Income Summary account has a credit balance of $10,240 (the revenue sum). The second entry requires expense accounts close to the Income Summary account. To get a zero balance in an expense account, the entry will show a credit to expenses and a debit to Income Summary. Printing Plus has $100 of supplies expense, $75 of depreciation expense–equipment, $5,100 of salaries expense, and $300 of utility expense, each with a debit balance on the adjusted trial balance. The closing entry will credit Supplies Expense, Depreciation Expense–Equipment, Salaries Expense, and Utility Expense, and debit Income Summary. The T-accounts after this closing entry would look like the following. Notice that the balances in the expense accounts are now zero and are ready to accumulate expenses in the next period. The Income Summary account has a new credit balance of $4,665, which is the difference between revenues and expenses ( Figure 5.5 ). The balance in Income Summary is the same figure as what is reported on Printing Plus’s Income Statement. Why are these two figures the same? The income statement summarizes your income, as does income summary. If both summarize your income in the same period, then they must be equal. If they do not match, then you have an error. The third entry requires Income Summary to close to the Retained Earnings account. To get a zero balance in the Income Summary account, there are guidelines to consider. If the balance in Income Summary before closing is a credit balance, you will debit Income Summary and credit Retained Earnings in the closing entry. This situation occurs when a company has a net income. If the balance in Income Summary before closing is a debit balance, you will credit Income Summary and debit Retained Earnings in the closing entry. This situation occurs when a company has a net loss. Remember that net income will increase retained earnings, and a net loss will decrease retained earnings. The Retained Earnings account increases on the credit side and decreases on the debit side. Printing Plus has a $4,665 credit balance in its Income Summary account before closing, so it will debit Income Summary and credit Retained Earnings. The T-accounts after this closing entry would look like the following. Notice that the Income Summary account is now zero and is ready for use in the next period. The Retained Earnings account balance is currently a credit of $4,665. The fourth entry requires Dividends to close to the Retained Earnings account. Remember from your past studies that dividends are not expenses, such as salaries paid to your employees or staff. Instead, declaring and paying dividends is a method utilized by corporations to return part of the profits generated by the company to the owners of the company—in this case, its shareholders. If dividends were not declared, closing entries would cease at this point. If dividends are declared, to get a zero balance in the Dividends account, the entry will show a credit to Dividends and a debit to Retained Earnings. 
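To see the arithmetic of all four entries in one place, here is a minimal sketch in Python; the dict-based ledger and the sign conventions are our own illustrative assumptions, not the text's format, and the balances (including the $100 Dividends balance discussed next) come from the Printing Plus adjusted trial balance.

```python
# Minimal sketch of the four closing entries for Printing Plus.
revenues = {"Interest Revenue": 140, "Service Revenue": 10_100}    # credit balances
expenses = {"Supplies Expense": 100, "Depreciation Expense-Equipment": 75,
            "Salaries Expense": 5_100, "Utility Expense": 300}     # debit balances
dividends = 100                                                    # debit balance

income_summary = sum(revenues.values())    # Entry 1: debit revenues, credit Income Summary (10,240)
income_summary -= sum(expenses.values())   # Entry 2: debit Income Summary, credit expenses (4,665 left)
retained_earnings = income_summary         # Entry 3: debit Income Summary, credit Retained Earnings
income_summary = 0                         # Income Summary now carries a zero balance
retained_earnings -= dividends             # Entry 4: debit Retained Earnings, credit Dividends
dividends = 0

print(retained_earnings)                   # 4565, the post-closing Retained Earnings balance
```

Either way of closing (with or without Income Summary) ends with the same $4,565 in Retained Earnings; the intermediate $4,665 figure is the check against net income on the income statement.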
As you will learn in Corporation Accounting, there are three components to the declaration and payment of dividends. The first part is the date of declaration, which creates the obligation or liability to pay the dividend. The second part is the date of record, which determines who receives the dividends, and the third part is the date of payment, which is the date that payments are made. Printing Plus has $100 of dividends with a debit balance on the adjusted trial balance. The closing entry will credit Dividends and debit Retained Earnings. The T-accounts after this closing entry would look like the following.

Why was income summary not used in the dividends closing entry? Dividends are not an income statement account. Only income statement accounts help us summarize income, so only income statement accounts should go into income summary. Remember, dividends are a contra stockholders' equity account; the account is contra to retained earnings. If we pay out dividends, it means retained earnings decreases. Retained earnings decreases on the debit side. The remaining balance in Retained Earnings is $4,565 (Figure 5.6). This is the same figure found on the statement of retained earnings. The statement of retained earnings shows the period-ending retained earnings after the closing entries have been posted. When you compare the retained earnings ledger (T-account) to the statement of retained earnings, the figures must match. It is important to understand that retained earnings is not closed out; it is only updated. Retained Earnings is the only account that appears in the closing entries that does not close. You should recall from your previous material that retained earnings are the earnings retained by the company over time, not cash flow but earnings. Now that we have closed the temporary accounts, let's review what the post-closing ledger (T-accounts) looks like for Printing Plus.

T-Account Summary

The T-account summary for Printing Plus after closing entries are journalized is presented in Figure 5.7. Notice that revenues, expenses, dividends, and income summary all have zero balances. Retained earnings maintains a $4,565 credit balance. The post-closing T-accounts will be transferred to the post-closing trial balance, which is step 9 in the accounting cycle.

Think It Through: Closing Entries. A company has revenue of $48,000 and total expenses of $52,000. What would the third closing entry be? Why?

Your Turn: Frasker Corp. Closing Entries. Prepare the closing entries for Frasker Corp. using the adjusted trial balance provided. Solution

5.2 Prepare a Post-Closing Trial Balance

The ninth, and typically final, step of the process is to prepare a post-closing trial balance. The word "post" in this instance means "after." You are preparing a trial balance after the closing entries are complete. Like all trial balances, the post-closing trial balance has the job of verifying that the debit and credit totals are equal. The post-closing trial balance has one additional job that the other trial balances do not have. The post-closing trial balance is also used to double-check that the only accounts with balances after the closing entries are permanent accounts. If there are any temporary accounts on this trial balance, you would know that there was an error in the closing process. This error must be fixed before starting the new period. The process of preparing the post-closing trial balance is the same as you have done when preparing the unadjusted trial balance and adjusted trial balance.
Only permanent account balances should appear on the post-closing trial balance. These balances in post-closing T-accounts are transferred over to either the debit or credit column on the post-closing trial balance. When all accounts have been recorded, total each column and verify the columns equal each other. The post-closing trial balance for Printing Plus is shown in Figure 5.8. Notice that only permanent accounts are included. All temporary accounts with zero balances were left out of this statement. Unlike previous trial balances, the retained earnings figure is included, which was obtained through the closing process. At this point, the accounting cycle is complete, and the company can begin a new cycle in the next period. In essence, the company's business is always in operation, while the accounting cycle utilizes the cutoff of month-end to provide financial information to assist and review the operations. It is worth mentioning that there is one step in the process that a company may or may not include, step 10, reversing entries. Reversing entries reverse an adjusting entry made in a prior period at the start of a new period. We do not cover reversing entries in this chapter, but you might approach the subject in future accounting courses. Now that we have completed the accounting cycle, let's take a look at another way the adjusted trial balance assists users of information with financial decision-making.

Link to Learning: If you like quizzes, crossword puzzles, fill-in-the-blank, matching exercises, and word scrambles to help you learn the material in this course, go to My Accounting Course for more. This website covers a variety of accounting topics including financial accounting basics, accounting principles, the accounting cycle, and financial statements, all topics introduced in the early part of this course.

Concepts In Practice: The Importance of Understanding How to Complete the Accounting Cycle. Many students who enroll in an introductory accounting course do not plan to become accountants. They will work in a variety of jobs in the business field, including management, sales, and finance. In a real company, most of the mundane work is done by computers. Accounting software can perform such tasks as posting the journal entries recorded, preparing trial balances, and preparing financial statements. Students often ask why they need to do all of these steps by hand in their introductory class, particularly if they are never going to be an accountant. It is very important to understand that no matter what your position is, if you work in business you need to be able to read financial statements, interpret them, and know how to use that information to better your business. If you have never followed the full process from beginning to end, you will never understand how one of your decisions can impact the final numbers that appear on your financial statements. You will not understand how your decisions can affect the outcome of your company. As mentioned previously, once you understand the effect your decisions will have on the bottom line on your income statement and the balances in your balance sheet, you can use accounting software to do all of the mundane, repetitive steps and use your time to evaluate the company based on what the financial statements show. Your stockholders, creditors, and other outside professionals will use your financial statements to evaluate your performance.
If you evaluate your numbers as often as monthly, you will be able to identify your strengths and weaknesses before any outsiders see them and make any necessary changes to your plan in the following month.

5.3 Apply the Results from the Adjusted Trial Balance to Compute Current Ratio and Working Capital Balance, and Explain How These Measures Represent Liquidity

In The Adjustment Process, we were introduced to the idea of accrual-basis accounting, where revenues and expenses must be recorded in the accounting period in which they were earned or incurred, no matter when cash receipts or outlays occur. We also discussed cash-basis accounting, where income and expenses are recognized when receipts and disbursements occur. In this chapter, we go into more depth about why a company may choose accrual-basis accounting as opposed to cash-basis accounting.

Link to Learning: Go to the Internal Revenue Service's website, and look at the most recently updated Pub 334 Tax Guide for Small Business to learn more about the rules for income tax preparation for a small business.

Cash Basis versus Accrual Basis Accounting

There are several reasons accrual-basis accounting is preferred to cash-basis accounting. Accrual-basis accounting is required by US generally accepted accounting principles (GAAP), as it typically provides a better sense of the financial well-being of a company. Accrual-based accounting information allows management to analyze a company's progress, and management can use that information to improve their business. Accrual accounting is also used to assist companies in securing financing, because banks will typically require a company to provide accrual-basis income statements. The Internal Revenue Service might also require businesses to report using accrual-basis information when preparing tax returns. In addition, companies with inventory must use accrual-based accounting for income tax purposes, though there are exceptions to the general rule.

So why might a company use cash-basis accounting? Companies that do not sell stock publicly can use cash-basis instead of accrual-basis accounting for both internal-management purposes and external reporting, as long as the Internal Revenue Service does not prevent them from doing so and they have no other obligations, such as the terms of a bank loan agreement, that require accrual-basis statements. Cash-basis accounting is a simpler accounting system to use than an accrual-basis accounting system when tracking real-time revenues and expenses.

Let's take a look at one example illustrating why accrual-basis accounting might be preferred to cash-basis accounting. In the current year, a company had the following transactions:

January to March Transactions
Jan. 1: Annual insurance policy purchased for $6,000 cash
Jan. 8: Sent payment for December's electricity bill, $135
Jan. 15: Performed services worth $2,500; customer asked to be billed
Jan. 31: Electricity used during January is estimated at $110
Feb. 16: Realized you forgot to pay January's rent, so sent two months' rent, $2,000
Feb. 20: Performed services worth $2,400; customer asked to be billed
Feb. 28: Electricity used during February is estimated at $150
Mar. 2: Paid March rent, $1,000
Mar. 10: Received all money owed from services performed in January and February
Mar. 14: Performed services worth $2,450. Received $1,800 cash
Mar. 30: Electricity used during March is estimated at $145
Table 5.1
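Before walking through the analysis, here is a minimal sketch that mechanically applies both bases to the Table 5.1 transactions; the tuple encoding and month numbering are our own assumptions, and the discussion below reaches the same monthly figures summarized in Figure 5.9.

```python
# Each item is (month earned or incurred, month cash moves, amount);
# month 0 is the prior December, None means no cash moves during January-March.
revenues = [(1, 3, 2_500), (2, 3, 2_400), (3, 3, 1_800), (3, None, 650)]
expenses = ([(m, 1, 500) for m in range(1, 13)]                 # $6,000 policy: cash in January, 1/12 accrued monthly
            + [(0, 1, 135)]                                     # December electricity, paid in January
            + [(1, None, 110), (2, None, 150), (3, None, 145)]  # estimated electricity, still unpaid
            + [(1, 2, 1_000), (2, 2, 1_000), (3, 3, 1_000)])    # rent; January's was paid late, in February

for m in (1, 2, 3):
    cash = (sum(a for _, c, a in revenues if c == m)
            - sum(a for _, c, a in expenses if c == m))
    accrual = (sum(a for e, _, a in revenues if e == m)
               - sum(a for e, _, a in expenses if e == m))
    print(f"Month {m}: cash basis {cash:>6}, accrual basis {accrual:>5}")
# Month 1: cash basis  -6135, accrual basis   890
# Month 2: cash basis  -2000, accrual basis   750
# Month 3: cash basis   5700, accrual basis   805
```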
IFRS Connection: Issues in Comparing Closing Procedures. Regardless of whether a company uses US GAAP or International Financial Reporting Standards (IFRS), the closing and post-closing processes are the same. However, the results generated by these processes are not the same. These differences can be seen most easily in the ratios formulated from the financial statement information and used to assess various financial qualities of a company. You have learned about the current ratio, which is used to assess a company's ability to pay debts as they come due. How could the use of IFRS versus US GAAP affect this ratio? US GAAP and IFRS most frequently differ on how certain transactions are measured, or on the timing of measuring and reporting that transaction. You will later learn about this in more detail, but for now we use a difference in inventory measurement to illustrate the effect of the two different sets of standards on the current ratio. US GAAP allows for three different ways to measure ending inventory balances: first-in, first-out (FIFO); last-in, first-out (LIFO); and weighted average. IFRS only allows for FIFO and weighted average. If the prices of inventory being purchased are rising, the FIFO method will result in a higher value of ending inventory on the balance sheet than would the LIFO method. Think about this in the context of the current ratio. Inventory is one component of current assets: the numerator of the ratio. The higher the current assets (numerator), the higher the current ratio. Therefore, if you calculated the current ratio for a company that applied US GAAP, and then recalculated the ratio assuming the company used IFRS, you would get not only different numbers for inventory (and other accounts) in the financial statements, but also different numbers for the ratios. This idea illustrates the impact the application of an accounting standard can have on the results of a company's financial statements and related ratios. Different standards produce different results. Throughout the remainder of this course, you will learn more details about the similarities and differences between US GAAP and IFRS, and how these differences impact financial reporting.

Remember, in a cash-basis system you will record the revenue when the money is received, no matter when the service is performed. There was no money received from customers in January or February, so the company, under a cash-basis system, would not show any revenue in those months. In March, the company received the $2,500 customers owed from January sales, $2,400 from customers for February sales, and $1,800 from cash sales in March. This is a total of $6,700 cash received from customers in March. Since the cash was received in March, the cash-basis system would record revenue in March.

In accrual accounting, we record the revenue as it is earned. There was $2,500 worth of service performed in January, so that will show as revenue in January. The $2,400 earned in February is recorded in February, and the $2,450 earned in March is recorded as revenue in March. Remember, it does not matter whether or not the cash came in.

For expenses, the cash-basis system is going to record an expense the day the payment leaves company hands. In January, the company purchased an insurance policy. The insurance policy is for the entire year, but since the cash went to the insurance company in January, the company will record the entire amount as an expense in January.
The company paid the December electric bill in January. Even though the electricity was used to earn revenue in December, the company will record it as an expense in January. Electricity used in January, February, and March to help earn revenue in those months will show no expense, because the bills have not been paid. The company forgot to pay January's rent in January, so no rent expense is recorded in January. However, in February there is $2,000 worth of rent expense, because the company paid for the two months in February.

Under accrual accounting, expenses are recorded when they are incurred and not when paid. Electricity used in a month to help earn revenue is recorded as an expense in that month whether the bill is paid or not. The same is true for rent expense. Insurance expense is spread out over 12 months, and each month 1/12 of the total insurance cost is expensed. The comparison of cash-basis and accrual-basis income statements is presented in Figure 5.9.

Concepts In Practice: Fundamentals of Financial Ratios. One method used by everyone who evaluates financial statements is to calculate financial ratios. Financial ratios take numbers from your income statements and/or your balance sheet to evaluate important financial outcomes that will impact user decisions. There are ratios to evaluate your liquidity, solvency, profitability, and efficiency. Liquidity ratios look at your ability to pay the debts that you owe in the near future. Solvency ratios will show if you can pay your bills not only in the short term but also in the long term. Profitability ratios are calculated to see how much profit is being generated from a company's sales. Efficiency ratios are calculated to see how efficiently a company uses its assets in running its business. You will be introduced to these ratios and how to interpret them throughout this course.

Compare the two sets of income statements. The cash-basis system looks as though no revenue was earned in the first two months, and expenses were excessive. Then in March it looks like the company earned a lot of revenue. How realistic is this picture? Now look at the accrual-basis figures. Here you see a better picture of what really happened over the three months. Revenues and expenses stayed relatively even across periods. This comparison can show the dangers of reporting in a cash-basis system. In a cash-basis system, the timing of cash flows can make the business look very profitable one month and not profitable the next. If your company was having a bad year and you did not want to report a loss, you could simply not pay the bills for the last month of the year and suddenly show a profit in a cash-basis system. In an accrual-basis system, it does not matter whether you pay the bills; you still need to record the expenses and present an income statement that accurately portrays what is happening in your company. The accrual-basis system lends itself to more transparency and detail in reporting. This detail is carried over into what is known as a classified balance sheet.

The Classified Balance Sheet

A classified balance sheet presents information on your balance sheet in a more informative structure, where asset and liability categories are divided into smaller, more detailed sections. Classified balance sheets show more about the makeup of our assets and liabilities, allowing us to better analyze the current health of our company and make future strategic plans.
Assets can be categorized as current; property, plant, and equipment; long-term investments; intangibles; and, if necessary, other assets. As you learned in Introduction to Financial Statements, a current asset (also known as a short-term asset) is any asset that will be converted to cash, sold, or used up within one year, or one operating cycle, whichever is longer. An operating cycle is the amount of time it takes a company to use its cash to provide a product or service and collect payment from the customer (Figure 5.10). For a merchandising firm that sells inventory, an operating cycle is the time it takes for the firm to use its cash to purchase inventory, sell the inventory, and get its cash back from its customers.

Link to Learning: Newport News Shipbuilding is an American shipbuilder located in Newport News, Virginia. According to information provided by the company, it has designed and built 30 aircraft carriers in the past 75 years. Newport News constructed the USS Gerald R. Ford. It took the company eight years to build the carrier, christening it in 2013. The ship then underwent rigorous testing until it was finally delivered to its home port, Naval Station Norfolk, in 2017, 12 years after work commenced on the project. With large shipbuilding projects that take many years to complete, the operating cycle for this type of company could extend beyond the one-year mark, and Newport News would use this longer operating cycle when dividing current and long-term assets and liabilities. Learn more about Newport News and its parent company Huntington Ingalls Industries and see a time-lapse video of the construction of the carrier. You can easily tell the passage of time if you watch the snow come and go in the video.

If an asset does not meet the requirements of a current asset, then it is classified as a long-term asset. It can be further defined as property, plant, and equipment; a long-term investment; or an intangible asset (Figure 5.11). Property, plant, and equipment are tangible assets (those that have a physical presence) held for more than one operating cycle or one year, whichever is longer. A long-term investment consists of stocks, bonds, or other types of investments that management intends to hold for more than one operating cycle or one year, whichever is longer. Intangible assets do not have a physical presence but give the company a long-term future benefit. Some examples include patents, copyrights, and trademarks.

Liabilities are classified as either current liabilities or long-term liabilities. Liabilities also use the one year, or one operating cycle, cutoff for the distinction between current and noncurrent. As we first discussed in Introduction to Financial Statements, if the debt is due within one year or one operating cycle, whichever is longer, the liability is a current liability. If the debt is settled outside one year or one operating cycle, whichever is longer, the liability is a long-term liability.

Your Turn: How to Classify Assets. Classify each of the following assets as current asset; property, plant, and equipment; long-term investment; or intangible asset.
A. machine
B. patent
C. supplies
D. building
E. investment in bonds with intent to hold until maturity in 10 years
F. copyright
G. land being held for a future office
H. prepaid insurance
I. accounts receivable
J. investment in stock that will be held for six months
Solution: A. property, plant, and equipment. B. intangible asset. C. current asset. D. property, plant, and equipment. E. long-term investment.
F. intangible asset. G. long-term investment. H. current asset. I. current asset. J. current asset. The land is considered a long-term investment, because it is not land being used currently by the company to earn revenue. Buying real estate is an investment. If the company decided in the future that it was not going to build the new office, it could sell the land and would probably be able to sell it for more than it was purchased for, because the value of real estate tends to go up over time. But like any investment, there is the risk that the land might actually go down in value. The investment in stock that we only plan to hold for six months will be called a marketable security in the current asset section of the balance sheet.

As an example, the balance sheet in Figure 5.12 is classified.

Continuing Application: Interim Reporting in the Grocery Industry. Interim reporting helps determine how well a company is performing at a given time during the year. Some companies revise their earnings estimates depending on how profitable the company has been up until a certain point in time. The grocery industry, which includes both private and publicly traded companies, performs the same exercise. However, grocery companies use such information to inform other important business decisions. Consider the last time you walked through the grocery store and purchased your favorite brand but found another item out of stock. What if the next time you shop, the product you loved is no longer carried, but the out-of-stock item is available? Grocery store profitability is based on small margins of revenue on a multitude of products. The bar codes scanned at checkout not only provide the price of a product but also track how much inventory has been sold. The grocery store analyzes such information to determine how quickly the product turns over, which drives profit on small margins. If a product sells well, the store might stock it all of the time, but if a product does not sell quickly enough, it could be discontinued.

Using Classified Balance Sheets to Evaluate Liquidity

Categorizing assets and liabilities on a balance sheet helps a company evaluate its business. One way a company can evaluate its business is with financial statement ratios. We consider two measures of liquidity: working capital and the current ratio. Let's first explore this idea of liquidity. We first described liquidity in Introduction to Financial Statements as the ability to convert assets into cash. Liquidity is a company's ability to convert assets into cash in order to meet short-term cash needs, so it is very important for a company to remain liquid. A critical piece of information to remember at this point is that most companies use the accrual accounting method to determine and maintain their accounting records. This fact means that even with a positive income position, as reflected by its income statement, a company can go bankrupt due to poor cash flow. It is also important to note that even if a company has a lot of cash, it may still be in bankruptcy trouble if all or much of that cash is borrowed. According to an article published in Money magazine, one in four small businesses fail because of cash flow issues. 1 They are making a profit and seem financially healthy but do not have cash when needed. 1 Elaine Pofeldt. "5 Ways to Tackle the Problem That Kills One of Every Four Small Businesses." Money. May 19, 2015.
http://time.com/money/3888448/cash-flow-small-business-startups/

Companies should analyze liquidity constantly to avoid cash shortages that may result in a need for a short-term loan. Intermittently taking out a short-term loan is often expected, but a company cannot keep coming up short on cash every year if it is going to remain liquid. A seasonal business, such as a specialized holiday retailer, may require a short-term loan to continue its operations during slower revenue-generating periods. Companies will use numbers from their classified balance sheet to test for liquidity. They want to make sure they have enough current assets to pay their current liabilities. Only cash is used to directly pay liabilities, but other current assets, such as accounts receivable or short-term investments, might be sold for cash, converted to cash, or used to bring in cash to pay liabilities.

Ethical Considerations: Liquidity Is as Important as Net Worth. How does a company like Lehman Brothers Holdings, with over $639 billion in assets and $613 billion in liabilities, go bankrupt? That question still confuses many, but it comes down to the fact that having assets recorded on the books at their purchase price is not the same as the immediate value of the assets. Lehman Brothers had a liquidity crisis that led to a solvency crisis, because Lehman Brothers could not sell the assets on its books at book value to cover its short-term cash demands. Matt Johnson, in an article for the online publication Coinmonks, puts it simply: "Liquidity is all about being able to access cash when it's needed. If you can settle your current obligations with ease, you've got liquidity. If you've got debts coming due and you don't have the cash to settle them, then you've got a liquidity crisis." 2 Continuing this Coinmonks discussion, the inability to pay debts on time leads to a business entity becoming insolvent, because bills cannot be paid on time and assets need to be written down. When Lehman Brothers could not pay its bills on time in 2008, it went bankrupt, sending a shock throughout the entire banking system. Accountants need to understand the differences between net worth, equity, liquidity, and solvency, and be able to inform stakeholders of their organization's actual financial position, not just the recorded numbers on the balance sheet. 2 Matt Johnson. "Revisiting the Lehman Brothers Collapse, the Business of Banking and Its Inherent Crises." Coinmonks. February 1, 2018. https://medium.com/coinmonks/revisiting-the-lehman-brothers-collapse-fb18769d6cf8

Two calculations a company might use to test for liquidity are working capital and the current ratio. Working capital, which was first described in Introduction to Financial Statements, is found by taking the difference between current assets and current liabilities. A positive outcome means the company has enough current assets available to pay its current liabilities or current debts. A negative outcome means the company does not have enough current assets to cover its current liabilities and may have to arrange short-term financing. Though a positive working capital is preferred, a company needs to make sure that there is not too much of a difference between current assets and current liabilities. A company that has a high working capital might have too much money in current assets that could be used for other company investments. Things such as the industry and size of a company will dictate what type of margin is best.
Let's consider Printing Plus and its working capital (Figure 5.13). Printing Plus's current assets include cash, accounts receivable, interest receivable, and supplies. Its current liabilities include accounts payable, salaries payable, and unearned revenue. The following is the computation of working capital:

Working capital = $26,540 – $5,400 = $21,140

This means that you have more than enough working capital to pay the current liabilities your company has recorded. This figure may seem high, but remember that this is the company's first month of operations and this much cash may need to be available for larger, long-term asset purchases. However, there is also the possibility that the company might choose to identify long-term financing options for the acquisition of expensive, long-term assets, assuming that it can qualify for the increased debt.

Notice that part of the current liability calculation is unearned revenue. If a company has a surplus of unearned revenue, it can sometimes get away with less working capital, as it will need less cash to pay its bills. However, the company must be careful, since the cash was recorded before providing the services or products associated with the unearned revenue. This relationship is why the unearned revenue was initially created, and there often will be necessary cash outflows associated with meeting the terms of the unearned revenue creation. Companies with inventory will usually need a higher working capital than a service company, as inventory can tie up a large amount of a company's cash with less cash available to pay its bills. Also, small companies will normally need a higher working capital than larger companies, because it is harder for smaller companies to get loans, and they usually pay a higher interest rate.

Link to Learning: PricewaterhouseCoopers (PwC) released its 2015 Annual Global Working Capital Survey, which is a detailed study on working capital. Though the report does not show the working capital calculation you just learned, there is very interesting information about working capital in different industries, business sizes, and locations. Take a few minutes and peruse this document.

The current ratio (also known as the working capital ratio), which was first described in Introduction to Financial Statements, tells a company how many times over company current assets can cover current liabilities. It is found by dividing current assets by current liabilities and is calculated as follows:

Current ratio = Current assets ÷ Current liabilities

For example, if a company has current assets of $20,000 and current liabilities of $10,000, its current ratio is $20,000/$10,000 = two times. This means the company has enough current assets to cover its current liabilities twice. Ideally, many companies would like to maintain a current ratio of between 1.5 and 2 times current assets over current liabilities. However, depending on the company's function or purpose, an optimal ratio could be lower or higher than the previous recommendation. For example, many utilities do not have large fluctuations in anticipated seasonal current ratios, so they might decide to maintain a current ratio of between 1.25 and 1.5 times current assets over current liabilities, while a high-tech startup might want to maintain a ratio of between 2.5 and 3 times current assets over current liabilities. The current ratio for Printing Plus is $26,540/$5,400 = 4.91 times.
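As a minimal sketch (the function names are our own), the two liquidity measures for Printing Plus can be computed directly from the totals above:

```python
# Working capital and current ratio from the classified balance sheet totals.
def working_capital(current_assets, current_liabilities):
    return current_assets - current_liabilities

def current_ratio(current_assets, current_liabilities):
    return current_assets / current_liabilities

print(working_capital(26_540, 5_400))           # 21140
print(round(current_ratio(26_540, 5_400), 2))   # 4.91 times
```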
That is a very high current ratio, but since the business was just started, having more cash might allow the company to make larger purchases while still paying its liabilities. However, this ratio might be a result of short-term conditions, so the company is advised to still plan on maintaining a ratio that is considered both rational and not too risky. Using ratios for a single year does not provide a broad picture. A company will get much better information if it compares the working capital and current ratio numbers for several years so it can see increases, decreases, and where numbers remain fairly consistent. Companies can also benefit from comparing this financial data to that of other companies in the industry.

Ethical Considerations: Computers Still Use Debits and Credits: Check behind the Dashboard for Fraud. Newly hired accountants are often seated at a computer to work from a dashboard, which is a computer screen where entries are made into the accounting system. New accountants working with modern accounting software may not be aware that their software uses the debit and credit system you learned about, and that the system may automatically close the books without the accountant's review of closing entries. Manually closing the books gives accountants a chance to review the balances of different accounts; if accountants do not review the entries, they will not know what is occurring in the accounting system or in their organization's financial statements. Many accounting systems automatically close the books if the command is made in the system. While debits and credits are being entered and may not have been reviewed, the system can be instructed to close out the revenue and expense accounts and create an income statement. A knowledgeable accountant can review entries within the software's audit function. The accountant will be able to look at every entry, its description, both sides of the entry (debit and credit), and any changes made in the entry. This review is important in determining whether any incorrect entry was a mistake or fraud. The accountant can see who made the entry and how the entry occurred in the accounting system. To ensure the integrity of the system, each person working in the system must have a unique user identification, and no users may know others' passwords. If there is an entry or an updated entry, the accountant will be able to see the entry in the audit function of the software. If an employee has changed expense items to pay his or her personal bills, the accountant can see the change. Similarly, changes in transaction dates can be reviewed to determine whether they are fraudulent. Professional accountants know what goes on in their organization's accounting system.

5.4 Appendix: Complete a Comprehensive Accounting Cycle for a Business

We have gone through the entire accounting cycle for Printing Plus, with the steps spread over three chapters. Let's go through the complete accounting cycle for another company here. The full accounting cycle diagram is presented in Figure 5.14. We next take a look at a comprehensive example that works through the entire accounting cycle for Clip'em Cliff. Clifford Girard retired from the US Marine Corps after 20 years of active duty. Cliff decides it would be fun to become a barber and open his own shop called "Clip'em Cliff." He will run the barber shop out of his home for the first couple of months while he identifies a new location for his shop.
Since his Marine Corps career included several years of logistics, he is also going to operate a consulting practice where he will help budding barbers create a barbering practice. He will charge a flat fee or an hourly rate. His consulting practice will be recognized as service revenue and will provide additional revenue while he develops his barbering practice. He obtains a barber's license after the required training and is ready to open his shop on August 1. Table 5.2 shows his transactions from the first month of business.

Transactions for August
Aug. 1: Cliff issues common stock for $70,000 in cash.
Aug. 3: Cliff purchases barbering equipment for $45,000; $37,500 was paid immediately with cash, and the remaining $7,500 was billed to Cliff with payment due in 30 days. He decided to buy used equipment, because he was not sure if he truly wanted to run a barber shop. He assumed that he would replace the used equipment with new equipment within a couple of years.
Aug. 6: Cliff purchases supplies for $300 cash.
Aug. 10: Cliff provides $4,000 in services to a customer who asks to be billed for the services.
Aug. 13: Cliff pays a $75 utility bill with cash.
Aug. 14: Cliff receives $3,200 cash in advance from a customer for services not yet rendered.
Aug. 16: Cliff distributes $150 cash in dividends to stockholders.
Aug. 17: Cliff receives $5,200 cash from a customer for services rendered.
Aug. 19: Cliff pays $2,000 toward the outstanding liability from the August 3 transaction.
Aug. 22: Cliff pays $4,600 cash in salaries expense to employees.
Aug. 28: The customer from the August 10 transaction pays $1,500 cash toward Cliff's account.
Table 5.2

Transaction 1: On August 1, 2019, Cliff issues common stock for $70,000 in cash. Analysis: Clip'em Cliff now has more cash. Cash is an asset, which is increasing on the debit side. When the company issues stock, this yields a higher common stock figure than before issuance. The common stock account is increasing on the credit side.

Transaction 2: On August 3, 2019, Cliff purchases barbering equipment for $45,000; $37,500 was paid immediately with cash, and the remaining $7,500 was billed to Cliff with payment due in 30 days. Analysis: Clip'em Cliff now has more equipment than before. Equipment is an asset, which is increasing on the debit side for $45,000. Cash is used to pay for $37,500. Cash is an asset, decreasing on the credit side. Cliff asked to be billed, which means he did not pay cash immediately for $7,500 of the equipment. Accounts Payable is used to signal this short-term liability. Accounts Payable is increasing on the credit side.

Transaction 3: On August 6, 2019, Cliff purchases supplies for $300 cash. Analysis: Clip'em Cliff now has less cash. Cash is an asset, which is decreasing on the credit side. Supplies, an asset account, is increasing on the debit side.

Transaction 4: On August 10, 2019, Cliff provides $4,000 in services to a customer who asks to be billed for the services. Analysis: Clip'em Cliff provided service, thus earning revenue. Revenue impacts equity, and increases on the credit side. The customer did not pay immediately for the service and owes Cliff payment. This is an Accounts Receivable for Cliff. Accounts Receivable is an asset that is increasing on the debit side.

Transaction 5: On August 13, 2019, Cliff pays a $75 utility bill with cash. Analysis: Clip'em Cliff now has less cash than before. Cash is an asset that is decreasing on the credit side. Utility payments are expenses billed to the business.
Utility Expense negatively impacts equity, and increases on the debit side. Transaction 6: On August 14, 2019, Cliff receives $3,200 cash in advance from a customer for services to be rendered. Analysis: Clip’em Cliff now has more cash. Cash is an asset, which is increasing on the debit side. The customer has not yet received services but already paid the company. This means the company owes the customer the service. This creates a liability to the customer, and revenue cannot yet be recognized. Unearned Revenue is the liability account, which is increasing on the credit side. Transaction 7: On August 16, 2019, Cliff distributed $150 cash in dividends to stockholders. Analysis: Clip’em Cliff now has less cash. Cash is an asset, which is decreasing on the credit side. When the company pays out dividends, this decreases equity and increases the dividends account. Dividends increases on the debit side. Transaction 8: On August 17, 2019, Cliff receives $5,200 cash from a customer for services rendered. Analysis: Clip’em Cliff now has more cash than before. Cash is an asset, which is increasing on the debit side. Service was provided, which means revenue can be recognized. Service Revenue increases equity. Service Revenue is increasing on the credit side. Transaction 9: On August 19, 2019, Cliff paid $2,000 toward the outstanding liability from the August 3 transaction. Analysis: Clip’em Cliff now has less cash. Cash is an asset, which is decreasing on the credit side. Accounts Payable is a liability account, decreasing on the debit side. Transaction 10: On August 22, 2019, Cliff paid $4,600 cash in salaries expense to employees. Analysis: Clip’em Cliff now has less cash. Cash is an asset, which is decreasing on the credit side. When the company pays salaries, this is an expense to the business. Salaries Expense reduces equity by increasing on the debit side. Transaction 11: On August 28, 2019, the customer from the August 10 transaction pays $1,500 cash toward Cliff’s account. Analysis: The customer made a partial payment on their outstanding account. This reduces Accounts Receivable. Accounts Receivable is an asset account decreasing on the credit side. Cash is an asset, increasing on the debit side. The complete journal for August is presented in Figure 5.15 . Once all journal entries have been created, the next step in the accounting cycle is to post journal information to the ledger. The ledger is visually represented by T-accounts. Cliff will go through each transaction and transfer the account information into the debit or credit side of that ledger account. Any account that has more than one transaction needs to have a final balance calculated. This happens by taking the difference between the debits and credits in an account. Clip’em Cliff’s ledger represented by T-accounts is presented in Figure 5.16 . You will notice that the sum of the asset account balances in Cliff’s ledger equals the sum of the liability and equity account balances at $83,075. The final debit or credit balance in each account is transferred to the unadjusted trial balance in the corresponding debit or credit column as illustrated in Figure 5.17 . Once all of the account balances are transferred to the correct columns, each column is totaled. The total in the debit column must match the total in the credit column to remain balanced. The unadjusted trial balance for Clip’em Cliff appears in Figure 5.18 . The unadjusted trial balance shows a debit and credit balance of $87,900. 
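To make the posting step concrete, here is a minimal sketch that posts Cliff's August journal to a simple ledger and totals the trial balance columns; the (debit account, credit account, amount) encoding is our own illustrative simplification, with the August 3 compound entry split into two simple entries.

```python
from collections import defaultdict

# Clip'em Cliff's August journal as (debit account, credit account, amount).
journal = [("Cash", "Common Stock", 70_000),
           ("Equipment", "Cash", 37_500), ("Equipment", "Accounts Payable", 7_500),
           ("Supplies", "Cash", 300),
           ("Accounts Receivable", "Service Revenue", 4_000),
           ("Utility Expense", "Cash", 75),
           ("Cash", "Unearned Revenue", 3_200),
           ("Dividends", "Cash", 150),
           ("Cash", "Service Revenue", 5_200),
           ("Accounts Payable", "Cash", 2_000),
           ("Salaries Expense", "Cash", 4_600),
           ("Cash", "Accounts Receivable", 1_500)]

ledger = defaultdict(int)            # debit balances positive, credit balances negative
for debit_account, credit_account, amount in journal:
    ledger[debit_account] += amount
    ledger[credit_account] -= amount

debit_total = sum(b for b in ledger.values() if b > 0)
credit_total = -sum(b for b in ledger.values() if b < 0)
print(debit_total, credit_total)     # 87900 87900: the unadjusted trial balance columns agree
```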
Remember, the unadjusted trial balance is prepared before any period-end adjustments are made. On August 31, Cliff has the transactions shown in Table 5.3 requiring adjustment.

August 31 Transactions
Aug. 31: Cliff took an inventory of supplies and discovered that $250 of supplies remain unused at the end of the month.
Aug. 31: The equipment purchased on August 3 depreciated $2,500 during the month of August.
Aug. 31: Clip'em Cliff performed $1,100 of services during August for the customer from the August 14 transaction.
Aug. 31: Reviewing the company bank statement, Clip'em Cliff discovers $350 of interest earned during the month of August that was previously uncollected and unrecorded. Because Cliff is a new customer, the bank paid an above-market-average interest rate.
Aug. 31: Unpaid and previously unrecorded income taxes for the month are $3,400. The tax payment was to cover his federal quarterly estimated income taxes. He lives in a state that does not have an individual income tax.
Table 5.3

Adjusting Transaction 1: Cliff took an inventory of supplies and discovered that $250 of supplies remain unused at the end of the month. Analysis: $250 of supplies remain at the end of August. The company began the month with $300 worth of supplies. Therefore, $50 of supplies were used during the month and must be recorded (300 – 250). Supplies is an asset that is decreasing (credit). Supplies is a type of prepaid expense that, when used, becomes an expense. Supplies Expense would increase (debit) for the $50 of supplies used during August.

Adjusting Transaction 2: The equipment purchased on August 3 depreciated $2,500 during the month of August. Analysis: Equipment cost of $2,500 was allocated during August. This depreciation will affect the Accumulated Depreciation–Equipment account and the Depreciation Expense–Equipment account. While we are not doing depreciation calculations here, you will come across more complex calculations, such as depreciation, in Long-Term Assets. Accumulated Depreciation–Equipment is a contra asset account (contrary to Equipment) and increases (credit) for $2,500. Depreciation Expense–Equipment is an expense account that is increasing (debit) for $2,500.

Adjusting Transaction 3: Clip'em Cliff performed $1,100 of services during August for the customer from the August 14 transaction. Analysis: The customer from the August 14 transaction gave the company $3,200 in advanced payment for services. By the end of August the company had earned $1,100 of the advanced payment. This means that the company still has yet to provide $2,100 in services to that customer. Since some of the unearned revenue is now earned, Unearned Revenue would decrease. Unearned Revenue is a liability account and decreases on the debit side. The company can now recognize the $1,100 as earned revenue. Service Revenue increases (credit) for $1,100.

Adjusting Transaction 4: Reviewing the company bank statement, Clip'em Cliff identifies $350 of interest earned during the month of August that was previously unrecorded. Analysis: Interest is revenue for the company on money kept in a money market account at the bank. The company only sees the bank statement at the end of the month and needs to record the interest revenue reflected on the bank statement. Interest Revenue is a revenue account that increases (credit) for $350. Since Clip'em Cliff has yet to collect this interest revenue, it is considered a receivable.
Adjusting Transaction 4: Reviewing the company bank statement, Clip’em Cliff identifies $350 of interest earned during the month of August that was previously unrecorded.

Analysis: Interest is revenue for the company on money kept in a money market account at the bank. The company only sees the bank statement at the end of the month and needs to record the interest earned but not yet received, as reflected on the bank statement. Interest Revenue is a revenue account that increases (credit) for $350. Since Clip’em Cliff has yet to collect this interest revenue, it is considered a receivable. Interest Receivable increases (debit) for $350.

Adjusting Transaction 5: Unpaid and previously unrecorded income taxes for the month are $3,400.

Analysis: Income taxes are an expense to the business that accumulates during the period but is only paid at predetermined times throughout the year. This period did not require payment but did accumulate income tax. Income Tax Expense is an expense account that negatively affects equity; it increases on the debit side. The company owes the tax money but has not yet paid, signaling a liability. Income Tax Payable is a liability that is increasing on the credit side.

The summary of adjusting journal entries for Clip’em Cliff is presented in Figure 5.19.

Now that all of the adjusting entries are journalized, they must be posted to the ledger. Posting adjusting entries is the same process as posting the general journal entries. Each journalized account figure will transfer to the corresponding ledger account on either the debit or credit side, as illustrated in Figure 5.20. We would normally use a general ledger, but for illustrative purposes, we are using T-accounts to represent the ledgers. The T-accounts after the adjusting entries are posted are presented in Figure 5.21. You will notice that the sum of the asset account balances equals the sum of the liability and equity account balances at $80,875.

The next step in the cycle is to prepare the adjusted trial balance. The final debit or credit balance in each account is transferred to the adjusted trial balance, the same way the general ledger balances were transferred to the unadjusted trial balance. Clip’em Cliff’s adjusted trial balance is shown in Figure 5.22. The adjusted trial balance shows a debit and credit balance of $94,150.

Once the adjusted trial balance is prepared, Cliff can prepare his financial statements (step 7 in the cycle). We prepare only the income statement, the statement of retained earnings, and the balance sheet here; the statement of cash flows is discussed in detail in Statement of Cash Flows. To prepare your financial statements, you work from your adjusted trial balance. Remember, revenues and expenses go on an income statement. Dividends, net income (loss), and retained earnings balances go on the statement of retained earnings. On a balance sheet you find assets, contra assets, liabilities, and stockholders’ equity accounts.

The income statement for Clip’em Cliff is shown in Figure 5.23. Note that revenues exceeded expenses by only $25. For the first month of operations, Cliff welcomes any income; he will want to increase income in the next period to show growth for investors and lenders.

Next, Cliff prepares the statement of retained earnings (Figure 5.24). The beginning retained earnings balance is zero because Cliff just began operations and does not have a balance to carry over to a future period. The ending retained earnings balance is –$125. You probably never want a negative value on your retained earnings statement, but this situation is not totally unusual for an organization in its initial operations. Cliff will want to improve this outcome going forward; it might make sense for him not to pay dividends until he increases his net income.

Cliff then prepares the balance sheet for Clip’em Cliff as shown in Figure 5.25. The balance sheet shows total assets of $80,875, which equals total liabilities and equity.
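The statement of retained earnings figures can be verified with simple arithmetic. The Python sketch below uses the revenue and expense totals that appear in the closing-entry discussion that follows ($10,650 and $10,625); everything else comes from the narrative above.

# Figures from the narrative; the revenue and expense totals appear in
# the closing-entry discussion that follows.
revenues = 10650
expenses = 10625
net_income = revenues - expenses              # 25
beginning_retained_earnings = 0               # first month of operations
dividends = 150                               # Aug. 16 distribution
ending_retained_earnings = (beginning_retained_earnings
                            + net_income - dividends)
print(ending_retained_earnings)               # -125, a deficit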
Now that the financial statements are complete, Cliff will go to the next step in the accounting cycle, preparing and posting closing entries. To do this, Cliff needs his adjusted trial balance information. Cliff will close only temporary accounts, which include revenues, expenses, income summary, and dividends.

The first entry closes revenue accounts to income summary. To close revenues, Cliff will debit revenue accounts and credit income summary. The second entry closes expense accounts to income summary. To close expenses, Cliff will credit expense accounts and debit income summary. The third entry closes income summary to retained earnings. To find the balance, take the difference between the income summary amounts in the first and second entries (10,650 – 10,625). To close income summary, Cliff would debit Income Summary and credit Retained Earnings. The fourth closing entry closes dividends to retained earnings. To close dividends, Cliff will credit Dividends and debit Retained Earnings.

Once all of the closing entries are journalized, Cliff will post this information to the ledger. The closed accounts with their final balances, as well as Retained Earnings, are presented in Figure 5.26. Now that the temporary accounts are closed, they are ready to accumulate in the next period.

The last step for the month of August is step 9, preparing the post-closing trial balance. The post-closing trial balance should contain only permanent account information; no temporary accounts should appear on it. Clip’em Cliff’s post-closing trial balance is presented in Figure 5.27.

At this point, Cliff has completed the accounting cycle for August. He is now ready to begin the process again for September and future periods.

Concepts In Practice

Reversing Entries

One step in the accounting cycle that we did not cover is reversing entries. Reversing entries can be made at the beginning of a new period for certain accruals: the company reverses adjusting entries made in the prior period to revenue and expense accruals. It can be difficult to keep track of accruals from prior periods, as supporting documentation may not be readily available in current or future periods, and this requires an accountant to remember where these accruals came from. Reversing these accruals reduces the risk of counting revenues and expenses twice, because supporting documentation received in the current or future period is easier to match to prior revenues and expenses after the reversal.

Link to Learning

As we have learned, the current ratio shows how well a company can cover short-term debt with short-term assets. Look through the balance sheet in the 2017 Annual Report for Target and calculate the current ratio. What does the outcome mean for Target?

Think It Through

Using Liquidity Ratios to Evaluate Financial Performance

You own a landscaping business that has just begun operations. You made several expensive equipment purchases in your first month to get your business started. These purchases significantly reduced your cash on hand, and in turn your liquidity suffered in the following months, with low working capital and a low current ratio. Your business is now in its eighth month of operation, and while you are starting to see growth in sales, you are not seeing a significant change in your working capital or current ratio from the low numbers in your early months. To what could you attribute this stagnancy in liquidity?
Is there anything you can do as a business owner to improve these liquidity measurements? What will happen if you cannot change your liquidity, or if it gets worse?
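For the Think It Through exercise, the two measures in question are computed as follows. The balances in this Python sketch are hypothetical, purely to show the formulas (working capital = current assets – current liabilities; current ratio = current assets ÷ current liabilities); they are not drawn from the landscaping scenario or from Target’s annual report.

# Hypothetical balances, purely to illustrate the formulas; they are not
# drawn from the landscaping scenario or from Target's annual report.
current_assets = 50000
current_liabilities = 40000

working_capital = current_assets - current_liabilities  # 10000
current_ratio = current_assets / current_liabilities    # 1.25
print(working_capital, round(current_ratio, 2))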
Chapter Outline
38.1 Types of Skeletal Systems
38.2 Bone
38.3 Joints and Skeletal Movement
38.4 Muscle Contraction and Locomotion

Introduction

The muscular and skeletal systems provide support to the body and allow for a wide range of movement. The bones of the skeletal system protect the body’s internal organs and support the weight of the body. The muscles of the muscular system contract and pull on the bones, allowing for movements as diverse as standing, walking, running, and grasping items.

Injury or disease affecting the musculoskeletal system can be very debilitating. In humans, the most common musculoskeletal diseases worldwide are caused by malnutrition. Ailments that affect the joints are also widespread, such as arthritis, which can make movement difficult and, in advanced cases, completely impair mobility. In severe cases in which the joint has suffered extensive damage, joint replacement surgery may be needed.

Progress in the science of prosthesis design has resulted in the development of artificial joints, with joint replacement surgery in the hips and knees being the most common. Replacement joints for shoulders, elbows, and fingers are also available. Even with this progress, there is still room for improvement in the design of prostheses. The state-of-the-art prostheses have limited durability and therefore wear out quickly, particularly in young or active individuals. Current research is focused on the use of new materials, such as carbon fiber, that may make prostheses more durable.
[ { "answer": { "ans_choice": 0, "ans_text": "radius and ulna" }, "bloom": null, "hl_context": "Some movements that cannot be classified as gliding , angular , or rotational are called special movements . Inversion involves the soles of the feet moving inward , toward the midline of the body . Eversion is the opposite of inversion , movement of the sole of the foot outward , away from the midline of the body . Protraction is the anterior movement of a bone in the horizontal plane . Retraction occurs as a joint moves back into position after protraction . Protraction and retraction can be seen in the movement of the mandible as the jaw is thrust outwards and then back inwards . Elevation is the movement of a bone upward , such as when the shoulders are shrugged , lifting the scapulae . Depression is the opposite of elevation — movement downward of a bone , such as after the shoulders are shrugged and the scapulae return to their normal position from an elevated position . Dorsiflexion is a bending at the ankle such that the toes are lifted toward the knee . Plantar flexion is a bending at the ankle when the heel is lifted , such as when standing on the toes . <hl> Supination is the movement of the radius and ulna bones of the forearm so that the palm faces forward . <hl> Pronation is the opposite movement , in which the palm faces backward . Opposition is the movement of the thumb toward the fingers of the same hand , making it possible to grasp and hold objects . An articulation is any place at which two bones are joined . The humerus is the largest and longest bone of the upper limb and the only bone of the arm . It articulates with the scapula at the shoulder and with the forearm at the elbow . <hl> The forearm extends from the elbow to the wrist and consists of two bones : the ulna and the radius . <hl> The radius is located along the lateral ( thumb ) side of the forearm and articulates with the humerus at the elbow . The ulna is located on the medial aspect ( pinky-finger side ) of the forearm . It is longer than the radius . The ulna articulates with the humerus at the elbow . The radius and ulna also articulate with the carpal bones and with each other , which in vertebrates enables a variable degree of rotation of the carpus with respect to the long axis of the limb . The hand includes the eight bones of the carpus ( wrist ) , the five bones of the metacarpus ( palm ) , and the 14 bones of the phalanges ( digits ) . Each digit consists of three phalanges , except for the thumb , when present , which has only two . <hl> The upper limb contains 30 bones in three regions : the arm ( shoulder to elbow ) , the forearm ( ulna and radius ) , and the wrist and hand ( Figure 38.12 ) . <hl>", "hl_sentences": "Supination is the movement of the radius and ulna bones of the forearm so that the palm faces forward . The forearm extends from the elbow to the wrist and consists of two bones : the ulna and the radius . 
The upper limb contains 30 bones in three regions : the arm ( shoulder to elbow ) , the forearm ( ulna and radius ) , and the wrist and hand ( Figure 38.12 ) .", "question": { "cloze_format": "The forearm consists of the ___.", "normal_format": "What does the forearm consist of?", "question_choices": [ "radius and ulna", "radius and humerus", "ulna and humerus", "humerus and carpus" ], "question_id": "fs-idm190702464", "question_text": "The forearm consists of the:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "clavicle and scapula" }, "bloom": null, "hl_context": "The pectoral girdle bones provide the points of attachment of the upper limbs to the axial skeleton . <hl> The human pectoral girdle consists of the clavicle ( or collarbone ) in the anterior , and the scapula ( or shoulder blades ) in the posterior ( Figure 38.11 ) . <hl>", "hl_sentences": "The human pectoral girdle consists of the clavicle ( or collarbone ) in the anterior , and the scapula ( or shoulder blades ) in the posterior ( Figure 38.11 ) .", "question": { "cloze_format": "The pectoral girdle consists of the ___.", "normal_format": "What does the pectoral girdle consists of?", "question_choices": [ "clavicle and sternum", "sternum and scapula", "clavicle and scapula", "clavicle and coccyx" ], "question_id": "fs-idm201806112", "question_text": "The pectoral girdle consists of the:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "pelvic" }, "bloom": null, "hl_context": "Each vertebral body has a large hole in the center through which the nerves of the spinal cord pass . There is also a notch on each side through which the spinal nerves , which serve the body at that level , can exit from the spinal cord . The vertebral column is approximately 71 cm ( 28 inches ) in adult male humans and is curved , which can be seen from a side view . The names of the spinal curves correspond to the region of the spine in which they occur . <hl> The thoracic and sacral curves are concave ( curve inwards relative to the front of the body ) and the cervical and lumbar curves are convex ( curve outwards relative to the front of the body ) . <hl> The arched curvature of the vertebral column increases its strength and flexibility , allowing it to absorb shocks like a spring ( Figure 38.8 ) . The vertebral column , or spinal column , surrounds and protects the spinal cord , supports the head , and acts as an attachment point for the ribs and muscles of the back and neck . The adult vertebral column comprises 26 bones : the 24 vertebrae , the sacrum , and the coccyx bones . In the adult , the sacrum is typically composed of five vertebrae that fuse into one . The coccyx is typically 3 – 4 vertebrae that fuse into one . Around the age of 70 , the sacrum and the coccyx may fuse together . We begin life with approximately 33 vertebrae , but as we grow , several vertebrae fuse together . <hl> The adult vertebrae are further divided into the 7 cervical vertebrae , 12 thoracic vertebrae , and 5 lumbar vertebrae ( Figure 38.8 ) . <hl>", "hl_sentences": "The thoracic and sacral curves are concave ( curve inwards relative to the front of the body ) and the cervical and lumbar curves are convex ( curve outwards relative to the front of the body ) . 
The adult vertebrae are further divided into the 7 cervical vertebrae , 12 thoracic vertebrae , and 5 lumbar vertebrae ( Figure 38.8 ) .", "question": { "cloze_format": "All of the following are groups of vertebrae except ________, which is a curvature.", "normal_format": "Which of the following is is a curvature and it is NOT in groups of vertebrae?", "question_choices": [ "thoracic", "cervical", "lumbar", "pelvic" ], "question_id": "fs-idp96311840", "question_text": "All of the following are groups of vertebrae except ________, which is a curvature." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "lacrimal" }, "bloom": null, "hl_context": "The bones of the skull support the structures of the face and protect the brain . The skull consists of 22 bones , which are divided into two categories : cranial bones and facial bones . The cranial bones are eight bones that form the cranial cavity , which encloses the brain and serves as an attachment site for the muscles of the head and neck . The eight cranial bones are the frontal bone , two parietal bones , two temporal bones , occipital bone , sphenoid bone , and the ethmoid bone . Although the bones developed separately in the embryo and fetus , in the adult , they are tightly fused with connective tissue and adjoining bones do not move ( Figure 38.6 ) . The auditory ossicles of the middle ear transmit sounds from the air as vibrations to the fluid-filled cochlea . The auditory ossicles consist of three bones each : the malleus , incus , and stapes . These are the smallest bones in the body and are unique to mammals . Fourteen facial bones form the face , provide cavities for the sense organs ( eyes , mouth , and nose ) , protect the entrances to the digestive and respiratory tracts , and serve as attachment points for facial muscles . <hl> The 14 facial bones are the nasal bones , the maxillary bones , zygomatic bones , palatine , vomer , lacrimal bones , the inferior nasal conchae , and the mandible . <hl> All of these bones occur in pairs except for the mandible and the vomer ( Figure 38.7 ) .", "hl_sentences": "The 14 facial bones are the nasal bones , the maxillary bones , zygomatic bones , palatine , vomer , lacrimal bones , the inferior nasal conchae , and the mandible .", "question": { "cloze_format": "The ___ is a facial bone.", "normal_format": "Which of these is a facial bone?", "question_choices": [ "frontal", "occipital", "lacrimal", "temporal" ], "question_id": "fs-idp33973808", "question_text": "Which of these is a facial bone?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "contains the bone’s blood vessels and nerve fibers" }, "bloom": null, "hl_context": "<hl> Haversian canals contain blood vessels and nerve fibers . <hl> Compact bone ( or cortical bone ) forms the hard external layer of all bones and surrounds the medullary cavity , or bone marrow . It provides protection and strength to bones . Compact bone tissue consists of units called osteons or Haversian systems . Osteons are cylindrical structures that contain a mineral matrix and living osteocytes connected by canaliculi , which transport blood . They are aligned parallel to the long axis of the bone . Each osteon consists of lamellae , which are layers of compact matrix that surround a central canal called the Haversian canal . <hl> The Haversian canal ( osteonic canal ) contains the bone ’ s blood vessels and nerve fibers ( Figure 38.19 ) . 
<hl> Osteons in compact bone tissue are aligned in the same direction along lines of stress and help the bone resist bending or fracturing . Therefore , compact bone tissue is prominent in areas of bone at which stresses are applied in only a few directions .", "hl_sentences": "Haversian canals contain blood vessels and nerve fibers . The Haversian canal ( osteonic canal ) contains the bone ’ s blood vessels and nerve fibers ( Figure 38.19 ) .", "question": { "cloze_format": "The Haversian canal ___.", "normal_format": "What are the Haversian canal?", "question_choices": [ "is arranged as rods or plates", "contains the bone’s blood vessels and nerve fibers", "is responsible for the lengthwise growth of long bones", "synthesizes and secretes matrix" ], "question_id": "fs-idm138246016", "question_text": "The Haversian canal:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "is responsible for the lengthwise growth of long bones" }, "bloom": null, "hl_context": "<hl> Long bones stop growing at around the age of 18 in females and the age of 21 in males in a process called epiphyseal plate closure . <hl> During this process , cartilage cells stop dividing and all of the cartilage is replaced by bone . The epiphyseal plate fades , leaving a structure called the epiphyseal line or epiphyseal remnant , and the epiphysis and diaphysis fuse . <hl> Long bones continue to lengthen , potentially until adolescence , through the addition of bone tissue at the epiphyseal plate . <hl> They also increase in width through appositional growth . In the last stage of prenatal bone development , the centers of the epiphyses begin to calcify . Secondary ossification centers form in the epiphyses as blood vessels and osteoblasts enter these areas and convert hyaline cartilage into spongy bone . <hl> Until adolescence , hyaline cartilage persists at the epiphyseal plate ( growth plate ) , which is the region between the diaphysis and epiphysis that is responsible for the lengthwise growth of long bones ( Figure 38.21 ) . <hl>", "hl_sentences": "Long bones stop growing at around the age of 18 in females and the age of 21 in males in a process called epiphyseal plate closure . Long bones continue to lengthen , potentially until adolescence , through the addition of bone tissue at the epiphyseal plate . Until adolescence , hyaline cartilage persists at the epiphyseal plate ( growth plate ) , which is the region between the diaphysis and epiphysis that is responsible for the lengthwise growth of long bones ( Figure 38.21 ) .", "question": { "cloze_format": "The epiphyseal plate ___.", "normal_format": "What are the epiphyseal plate?", "question_choices": [ "is arranged as rods or plates", "contains the bone’s blood vessels and nerve fibers", "is responsible for the lengthwise growth of long bones", "synthesizes and secretes bone matrix" ], "question_id": "fs-idp39504416", "question_text": "The epiphyseal plate:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "osteoclasts" }, "bloom": null, "hl_context": "Bone renewal continues after birth into adulthood . Bone remodeling is the replacement of old bone tissue by new bone tissue . <hl> It involves the processes of bone deposition by osteoblasts and bone resorption by osteoclasts . <hl> Normal bone growth requires vitamins D , C , and A , plus minerals such as calcium , phosphorous , and magnesium . 
Hormones such as parathyroid hormone , growth hormone , and calcitonin are also required for proper bone growth and maintenance . In long bones , chondrocytes form a template of the hyaline cartilage diaphysis . Responding to complex developmental signals , the matrix begins to calcify . This calcification prevents diffusion of nutrients into the matrix , resulting in chondrocytes dying and the opening up of cavities in the diaphysis cartilage . Blood vessels invade the cavities , and osteoblasts and osteoclasts modify the calcified cartilage matrix into spongy bone . <hl> Osteoclasts then break down some of the spongy bone to create a marrow , or medullary , cavity in the center of the diaphysis . <hl> Dense , irregular connective tissue forms a sheath ( periosteum ) around the bones . The periosteum assists in attaching the bone to surrounding tissues , tendons , and ligaments . The bone continues to grow and elongate as the cartilage cells at the epiphyses divide . Bone consists of four types of cells : osteoblasts , osteoclasts , osteocytes , and osteoprogenitor cells . Osteoblasts are bone cells that are responsible for bone formation . Osteoblasts synthesize and secrete the organic part and inorganic part of the extracellular matrix of bone tissue , and collagen fibers . Osteoblasts become trapped in these secretions and differentiate into less active osteocytes . <hl> Osteoclasts are large bone cells with up to 50 nuclei . <hl> <hl> They remove bone structure by releasing lysosomal enzymes and acids that dissolve the bony matrix . <hl> These minerals , released from bones into the blood , help regulate calcium concentrations in body fluids . Bone may also be resorbed for remodeling , if the applied stresses have changed . Osteocytes are mature bone cells and are the main cells in bony connective tissue ; these cells cannot divide . Osteocytes maintain normal bone structure by recycling the mineral salts in the bony matrix . Osteoprogenitor cells are squamous stem cells that divide to produce daughter cells that differentiate into osteoblasts . Osteoprogenitor cells are important in the repair of fractures .", "hl_sentences": "It involves the processes of bone deposition by osteoblasts and bone resorption by osteoclasts . Osteoclasts then break down some of the spongy bone to create a marrow , or medullary , cavity in the center of the diaphysis . Osteoclasts are large bone cells with up to 50 nuclei . They remove bone structure by releasing lysosomal enzymes and acids that dissolve the bony matrix .", "question": { "cloze_format": "The cells responsible for bone resorption are ________.", "normal_format": "What are cells responsible for bone resorption?", "question_choices": [ "osteoclasts", "osteoblasts", "fibroblasts", "osteocytes" ], "question_id": "fs-idm163280576", "question_text": "The cells responsible for bone resorption are ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "osteons" }, "bloom": null, "hl_context": "<hl> Compact bone tissue is made of cylindrical osteons that are aligned such that they travel the length of the bone . <hl> Compact bone ( or cortical bone ) forms the hard external layer of all bones and surrounds the medullary cavity , or bone marrow . It provides protection and strength to bones . <hl> Compact bone tissue consists of units called osteons or Haversian systems . <hl> Osteons are cylindrical structures that contain a mineral matrix and living osteocytes connected by canaliculi , which transport blood . 
They are aligned parallel to the long axis of the bone . Each osteon consists of lamellae , which are layers of compact matrix that surround a central canal called the Haversian canal . The Haversian canal ( osteonic canal ) contains the bone ’ s blood vessels and nerve fibers ( Figure 38.19 ) . Osteons in compact bone tissue are aligned in the same direction along lines of stress and help the bone resist bending or fracturing . Therefore , compact bone tissue is prominent in areas of bone at which stresses are applied in only a few directions .", "hl_sentences": "Compact bone tissue is made of cylindrical osteons that are aligned such that they travel the length of the bone . Compact bone tissue consists of units called osteons or Haversian systems .", "question": { "cloze_format": "Compact bone is composed of ________.", "normal_format": "What is compact bone composed of?", "question_choices": [ "trabeculae", "compacted collagen", "osteons", "calcium phosphate only" ], "question_id": "fs-idm120974144", "question_text": "Compact bone is composed of ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "cartilaginous joints" }, "bloom": null, "hl_context": "Cartilaginous joints are joints in which the bones are connected by cartilage . <hl> There are two types of cartilaginous joints : synchondroses and symphyses . <hl> In a synchondrosis , the bones are joined by hyaline cartilage . Synchondroses are found in the epiphyseal plates of growing bones in children . In symphyses , hyaline cartilage covers the end of the bone but the connection between bones occurs through fibrocartilage . Symphyses are found at the joints between vertebrae . Either type of cartilaginous joint allows for very little movement .", "hl_sentences": "There are two types of cartilaginous joints : synchondroses and symphyses .", "question": { "cloze_format": "Synchondroses and symphyses are ___ .", "normal_format": "What are synchondroses and symphyses?", "question_choices": [ "synovial joints", "cartilaginous joints", "fibrous joints", "condyloid joints" ], "question_id": "fs-idp150175248", "question_text": "Synchondroses and symphyses are:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "abduction" }, "bloom": null, "hl_context": "<hl> Abduction occurs when a bone moves away from the midline of the body . <hl> Examples of abduction are moving the arms or legs laterally to lift them straight out to the side . Adduction is the movement of a bone toward the midline of the body . Movement of the limbs inward after abduction is an example of adduction . Circumduction is the movement of a limb in a circular motion , as in moving the arm in a circular motion .", "hl_sentences": "Abduction occurs when a bone moves away from the midline of the body .", "question": { "cloze_format": "The movement of bone away from the midline of the body is called ________.", "normal_format": "What is the movement of bone away from the midline of the body?", "question_choices": [ "circumduction", "extension", "adduction", "abduction" ], "question_id": "fs-idp14892512", "question_text": "The movement of bone away from the midline of the body is called ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "regulation of water balance in the joint" }, "bloom": null, "hl_context": "Synovial joints are the only joints that have a space between the adjoining bones ( Figure 38.25 ) . 
This space is referred to as the synovial ( or joint ) cavity and is filled with synovial fluid . <hl> Synovial fluid lubricates the joint , reducing friction between the bones and allowing for greater movement . <hl> The ends of the bones are covered with articular cartilage , a hyaline cartilage , and the entire joint is surrounded by an articular capsule composed of connective tissue that allows movement of the joint while resisting dislocation . Articular capsules may also possess ligaments that hold the bones together . Synovial joints are capable of the greatest movement of the three structural joint types ; however , the more mobile a joint , the weaker the joint . Knees , elbows , and shoulders are examples of synovial joints .", "hl_sentences": "Synovial fluid lubricates the joint , reducing friction between the bones and allowing for greater movement .", "question": { "cloze_format": "___ is not a characteristic of the synovial fluid.", "normal_format": "Which of the following is not a characteristic of the synovial fluid?", "question_choices": [ "lubrication", "shock absorption", "regulation of water balance in the joint", "protection of articular cartilage" ], "question_id": "fs-idp115912912", "question_text": "Which of the following is not a characteristic of the synovial fluid?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "hinge" }, "bloom": null, "hl_context": "In hinge joints , the slightly rounded end of one bone fits into the slightly hollow end of the other bone . In this way , one bone moves while the other remains stationary , like the hinge of a door . <hl> The elbow is an example of a hinge joint . <hl> The knee is sometimes classified as a modified hinge joint ( Figure 38.28 ) .", "hl_sentences": "The elbow is an example of a hinge joint .", "question": { "cloze_format": "The elbow is an example of the ___type of joint.", "normal_format": "The elbow is an example of which type of joint?", "question_choices": [ "hinge", "pivot", "saddle", "gliding" ], "question_id": "fs-idp208632384", "question_text": "The elbow is an example of which type of joint?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "tropomyosin" }, "bloom": "1", "hl_context": "<hl> To enable a muscle contraction , tropomyosin must change conformation , uncovering the myosin-binding site on an actin molecule and allowing cross-bridge formation . <hl> This can only happen in the presence of calcium , which is kept at extremely low concentrations in the sarcoplasm . If present , calcium ions bind to troponin , causing conformational changes in troponin that allow tropomyosin to move away from the myosin binding sites on actin . Once the tropomyosin is removed , a cross-bridge can form between actin and myosin , triggering contraction . Cross-bridge cycling continues until Ca 2 + ions and ATP are no longer available and tropomyosin again covers the binding sites on actin . When a muscle is in a resting state , actin and myosin are separated . To keep actin from binding to the active site on myosin , regulatory proteins block the molecular binding sites . <hl> Tropomyosin blocks myosin binding sites on actin molecules , preventing cross-bridge formation and preventing contraction in a muscle without nervous input . <hl> Troponin binds to tropomyosin and helps to position it on the actin molecule ; it also binds calcium ions . Thick and thin filaments are themselves composed of proteins . 
Thick filaments are composed of the protein myosin . The tail of a myosin molecule connects with other myosin molecules to form the central region of a thick filament near the M line , whereas the heads align on either side of the thick filament where the thin filaments overlap . The primary component of thin filaments is the actin protein . Two other components of the thin filament are tropomyosin and troponin . Actin has binding sites for myosin attachment . <hl> Strands of tropomyosin block the binding sites and prevent actin – myosin interactions when the muscles are at rest . <hl> Troponin consists of three globular subunits . One subunit binds to tropomyosin , one subunit binds to actin , and one subunit binds Ca 2 + ions .", "hl_sentences": "To enable a muscle contraction , tropomyosin must change conformation , uncovering the myosin-binding site on an actin molecule and allowing cross-bridge formation . Tropomyosin blocks myosin binding sites on actin molecules , preventing cross-bridge formation and preventing contraction in a muscle without nervous input . Strands of tropomyosin block the binding sites and prevent actin – myosin interactions when the muscles are at rest .", "question": { "cloze_format": "In relaxed muscle, the myosin-binding site on actin is blocked by ________.", "normal_format": "In relaxed muscle, the myosin-binding site on actin is blocked by what?", "question_choices": [ "titin", "troponin", "myoglobin", "tropomyosin" ], "question_id": "fs-idp38549936", "question_text": "In relaxed muscle, the myosin-binding site on actin is blocked by ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "sarcolemma" }, "bloom": "1", "hl_context": "Skeletal Muscle Fiber Structure Each skeletal muscle fiber is a skeletal muscle cell . These cells are incredibly large , with diameters of up to 100 µm and lengths of up to 30 cm . <hl> The plasma membrane of a skeletal muscle fiber is called the sarcolemma . <hl> The sarcolemma is the site of action potential conduction , which triggers muscle contraction . Within each muscle fiber are myofibrils — long cylindrical structures that lie parallel to the muscle fiber . Myofibrils run the entire length of the muscle fiber , and because they are only approximately 1.2 µm in diameter , hundreds to thousands can be found inside one muscle fiber . They attach to the sarcolemma at their ends , so that as myofibrils shorten , the entire muscle cell contracts ( Figure 38.34 ) . The striated appearance of skeletal muscle tissue is a result of repeating bands of the proteins actin and myosin that are present along the length of myofibrils . Dark A bands and light I bands repeat along myofibrils , and the alignment of myofibrils in the cell causes the entire cell to appear striated or banded .", "hl_sentences": "The plasma membrane of a skeletal muscle fiber is called the sarcolemma .", "question": { "cloze_format": "The cell membrane of a muscle fiber is called a ________.", "normal_format": "What is the cell membrane of muscle fiber called?", "question_choices": [ "myofibril", "sarcolemma", "sarcoplasm", "myofilament" ], "question_id": "fs-idm99696576", "question_text": "The cell membrane of a muscle fiber is called a ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "acetylcholinesterase" }, "bloom": null, "hl_context": "<hl> ACh is broken down by the enzyme acetylcholinesterase ( AChE ) into acetyl and choline . 
<hl> <hl> AChE resides in the synaptic cleft , breaking down ACh so that it does not remain bound to ACh receptors , which would cause unwanted extended muscle contraction ( Figure 38.38 ) . <hl> <hl> Acetylcholine ( ACh ) is a neurotransmitter released by motor neurons that binds to receptors in the motor end plate . <hl> Neurotransmitter release occurs when an action potential travels down the motor neuron ’ s axon , resulting in altered permeability of the synaptic terminal membrane and an influx of calcium . The Ca 2 + ions allow synaptic vesicles to move to and bind with the presynaptic membrane ( on the neuron ) , and release neurotransmitter from the vesicles into the synaptic cleft . Once released by the synaptic terminal , ACh diffuses across the synaptic cleft to the motor end plate , where it binds with ACh receptors . As a neurotransmitter binds , these ion channels open , and Na + ions cross the membrane into the muscle cell . This reduces the voltage difference between the inside and outside of the cell , which is called depolarization . As ACh binds at the motor end plate , this depolarization is called an end-plate potential . The depolarization then spreads along the sarcolemma , creating an action potential as sodium channels adjacent to the initial depolarization site sense the change in voltage and open . The action potential moves across the entire cell , creating a wave of depolarization . The sodium – potassium ATPase uses cellular energy to move K + ions inside the cell and Na + ions outside . This alone accumulates a small electrical charge , but a big concentration gradient . There is lots of K + in the cell and lots of Na + outside the cell . Potassium is able to leave the cell through K + channels that are open 90 % of the time , and it does . However , Na + channels are rarely open , so Na + remains outside the cell . When K + leaves the cell , obeying its concentration gradient , that effectively leaves a negative charge behind . So at rest , there is a large concentration gradient for Na + to enter the cell , and there is an accumulation of negative charges left behind in the cell . This is the resting membrane potential . Potential in this context means a separation of electrical charge that is capable of doing work . It is measured in volts , just like a battery . However , the transmembrane potential is considerably smaller ( 0.07 V ); therefore , the small value is expressed as millivolts ( mV ) or 70 mV . Because the inside of a cell is negative compared with the outside , a minus sign signifies the excess of negative charges inside the cell , − 70 mV . If an event changes the permeability of the membrane to Na + ions , they will enter the cell . That will change the voltage . This is an electrical event , called an action potential , that can be used as a cellular signal . <hl> Communication occurs between nerves and muscles through neurotransmitters . <hl> <hl> Neuron action potentials cause the release of neurotransmitters from the synaptic terminal into the synaptic cleft , where they can then diffuse across the synaptic cleft and bind to a receptor molecule on the motor end plate . <hl> The motor end plate possesses junctional folds — folds in the sarcolemma that create a large surface area for the neurotransmitter to bind to receptors . 
The receptors are actually sodium channels that open to allow the passage of Na + into the cell when they receive neurotransmitter signal .", "hl_sentences": "ACh is broken down by the enzyme acetylcholinesterase ( AChE ) into acetyl and choline . AChE resides in the synaptic cleft , breaking down ACh so that it does not remain bound to ACh receptors , which would cause unwanted extended muscle contraction ( Figure 38.38 ) . Acetylcholine ( ACh ) is a neurotransmitter released by motor neurons that binds to receptors in the motor end plate . Communication occurs between nerves and muscles through neurotransmitters . Neuron action potentials cause the release of neurotransmitters from the synaptic terminal into the synaptic cleft , where they can then diffuse across the synaptic cleft and bind to a receptor molecule on the motor end plate .", "question": { "cloze_format": "The muscle relaxes if no new nerve signal arrives. However the neurotransmitter from the previous stimulation is still present in the synapse. The activity of ________ helps to remove this neurotransmitter.", "normal_format": "The muscle relaxes if no new nerve signal arrives. However the neurotransmitter from the previous stimulation is still present in the synapse. Which activity helps to remove this neurotransmitter?", "question_choices": [ "myosin", "action potential", "tropomyosin", "acetylcholinesterase" ], "question_id": "fs-idp49818544", "question_text": "The muscle relaxes if no new nerve signal arrives. However the neurotransmitter from the previous stimulation is still present in the synapse. The activity of ________ helps to remove this neurotransmitter." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "none of the above" }, "bloom": null, "hl_context": "<hl> Neural control initiates the formation of actin – myosin cross-bridges , leading to the sarcomere shortening involved in muscle contraction . <hl> These contractions extend from the muscle fiber through connective tissue to pull on bones , causing skeletal movement . The pull exerted by a muscle is called tension , and the amount of force created by this tension can vary . This enables the same muscles to move very light objects and very heavy objects . In individual muscle fibers , the amount of tension produced depends on the cross-sectional area of the muscle fiber and the frequency of neural stimulation .", "hl_sentences": "Neural control initiates the formation of actin – myosin cross-bridges , leading to the sarcomere shortening involved in muscle contraction .", "question": { "cloze_format": "The ability of a muscle to generate tension immediately after stimulation is dependent on ___.", "normal_format": "The ability of a muscle to generate tension immediately after stimulation is dependent on what?", "question_choices": [ "myosin interaction with the M line", "overlap of myosin and actin", "actin attachments to the Z line", "none of the above" ], "question_id": "fs-idm30541680", "question_text": "The ability of a muscle to generate tension immediately after stimulation is dependent on:" }, "references_are_paraphrase": null } ]
38.1 Types of Skeletal Systems

Learning Objectives

By the end of this section, you will be able to:
Discuss the different types of skeletal systems
Explain the role of the human skeletal system
Compare and contrast different skeletal systems

A skeletal system is necessary to support the body, protect internal organs, and allow for the movement of an organism. There are three different skeleton designs that fulfill these functions: hydrostatic skeleton, exoskeleton, and endoskeleton.

Hydrostatic Skeleton

A hydrostatic skeleton is a skeleton formed by a fluid-filled compartment within the body, called the coelom. The organs of the coelom are supported by the aqueous fluid, which also resists external compression. This compartment is under hydrostatic pressure because of the fluid and supports the other organs of the organism. This type of skeletal system is found in soft-bodied animals such as sea anemones, earthworms, Cnidaria, and other invertebrates (Figure 38.2).

Movement in a hydrostatic skeleton is provided by muscles that surround the coelom. The muscles in a hydrostatic skeleton contract to change the shape of the coelom; the pressure of the fluid in the coelom produces movement. For example, earthworms move by waves of muscular contractions of the skeletal muscle of the body wall hydrostatic skeleton, called peristalsis, which alternately shorten and lengthen the body. Lengthening the body extends the anterior end of the organism. Most organisms have a mechanism to fix themselves in the substrate. Shortening the muscles then draws the posterior portion of the body forward. Although a hydrostatic skeleton is well-suited to invertebrate organisms such as earthworms and some aquatic organisms, it is not an efficient skeleton for terrestrial animals.

Exoskeleton

An exoskeleton is an external skeleton that consists of a hard encasement on the surface of an organism. For example, the shells of crabs and insects are exoskeletons (Figure 38.3). This skeleton type provides defense against predators, supports the body, and allows for movement through the contraction of attached muscles. As with vertebrates, muscles must cross a joint inside the exoskeleton. Shortening of the muscle changes the relationship of the two segments of the exoskeleton. Arthropods such as crabs and lobsters have exoskeletons that consist of 30–50 percent chitin, a polysaccharide derivative of glucose that is a strong but flexible material. Chitin is secreted by the epidermal cells. The exoskeleton is further strengthened by the addition of calcium carbonate in organisms such as the lobster. Because the exoskeleton is acellular and does not grow as the organism grows, arthropods must periodically shed their exoskeletons.

Endoskeleton

An endoskeleton is a skeleton that consists of hard, mineralized structures located within the soft tissue of organisms. An example of a primitive endoskeletal structure is the spicules of sponges. The bones of vertebrates are composed of tissues, whereas sponges have no true tissues (Figure 38.4). Endoskeletons provide support for the body, protect internal organs, and allow for movement through contraction of muscles attached to the skeleton.

The human skeleton is an endoskeleton that consists of 206 bones in the adult. It has five main functions: providing support to the body, storing minerals and lipids, producing blood cells, protecting internal organs, and allowing for movement.
The skeletal system in vertebrates is divided into the axial skeleton (which consists of the skull, vertebral column, and rib cage) and the appendicular skeleton (which consists of the shoulders, limb bones, the pectoral girdle, and the pelvic girdle).

Link to Learning

Visit the interactive body site to build a virtual skeleton: select "skeleton" and click through the activity to place each bone.

Human Axial Skeleton

The axial skeleton forms the central axis of the body and includes the bones of the skull, ossicles of the middle ear, hyoid bone of the throat, vertebral column, and the thoracic cage (ribcage) (Figure 38.5). The function of the axial skeleton is to provide support and protection for the brain, the spinal cord, and the organs in the ventral body cavity. It provides a surface for the attachment of muscles that move the head, neck, and trunk, performs respiratory movements, and stabilizes parts of the appendicular skeleton.

The Skull

The bones of the skull support the structures of the face and protect the brain. The skull consists of 22 bones, which are divided into two categories: cranial bones and facial bones. The cranial bones are eight bones that form the cranial cavity, which encloses the brain and serves as an attachment site for the muscles of the head and neck. The eight cranial bones are the frontal bone, two parietal bones, two temporal bones, occipital bone, sphenoid bone, and the ethmoid bone. Although the bones developed separately in the embryo and fetus, in the adult, they are tightly fused with connective tissue and adjoining bones do not move (Figure 38.6).

The auditory ossicles of the middle ear transmit sounds from the air as vibrations to the fluid-filled cochlea. The auditory ossicles consist of three bones each: the malleus, incus, and stapes. These are the smallest bones in the body and are unique to mammals.

Fourteen facial bones form the face, provide cavities for the sense organs (eyes, mouth, and nose), protect the entrances to the digestive and respiratory tracts, and serve as attachment points for facial muscles. The 14 facial bones are the nasal bones, the maxillary bones, zygomatic bones, palatine, vomer, lacrimal bones, the inferior nasal conchae, and the mandible. All of these bones occur in pairs except for the mandible and the vomer (Figure 38.7).

Although it is not found in the skull, the hyoid bone is considered a component of the axial skeleton. The hyoid bone lies below the mandible in the front of the neck. It acts as a movable base for the tongue and is connected to muscles of the jaw, larynx, and tongue. The mandible articulates with the base of the skull. The mandible controls the opening to the airway and gut. In animals with teeth, the mandible brings the surfaces of the teeth in contact with the maxillary teeth.

The Vertebral Column

The vertebral column, or spinal column, surrounds and protects the spinal cord, supports the head, and acts as an attachment point for the ribs and muscles of the back and neck. The adult vertebral column comprises 26 bones: the 24 vertebrae, the sacrum, and the coccyx. In the adult, the sacrum is typically composed of five vertebrae that fuse into one. The coccyx is typically 3–4 vertebrae that fuse into one. Around the age of 70, the sacrum and the coccyx may fuse together. We begin life with approximately 33 vertebrae, but as we grow, several vertebrae fuse together.
The adult vertebrae are further divided into the 7 cervical vertebrae, 12 thoracic vertebrae, and 5 lumbar vertebrae (Figure 38.8). Each vertebral body has a large hole in the center through which the nerves of the spinal cord pass. There is also a notch on each side through which the spinal nerves, which serve the body at that level, can exit from the spinal cord. The vertebral column is approximately 71 cm (28 inches) in adult male humans and is curved, which can be seen from a side view. The names of the spinal curves correspond to the region of the spine in which they occur. The thoracic and sacral curves are concave (curve inwards relative to the front of the body) and the cervical and lumbar curves are convex (curve outwards relative to the front of the body). The arched curvature of the vertebral column increases its strength and flexibility, allowing it to absorb shocks like a spring (Figure 38.8).

Intervertebral discs composed of fibrous cartilage lie between adjacent vertebral bodies from the second cervical vertebra to the sacrum. Each disc is part of a joint that allows for some movement of the spine and acts as a cushion to absorb shocks from movements such as walking and running. Intervertebral discs also act as ligaments to bind vertebrae together. The inner part of discs, the nucleus pulposus, hardens as people age and becomes less elastic. This loss of elasticity diminishes its ability to absorb shocks.

The Thoracic Cage

The thoracic cage, also known as the ribcage, is the skeleton of the chest, and consists of the ribs, sternum, thoracic vertebrae, and costal cartilages (Figure 38.9). The thoracic cage encloses and protects the organs of the thoracic cavity, including the heart and lungs. It also provides support for the shoulder girdles and upper limbs, and serves as the attachment point for the diaphragm, muscles of the back, chest, neck, and shoulders. Changes in the volume of the thorax enable breathing.

The sternum, or breastbone, is a long, flat bone located at the anterior of the chest. It is formed from three bones that fuse in the adult. The ribs are 12 pairs of long, curved bones that attach to the thoracic vertebrae and curve toward the front of the body, forming the ribcage. Costal cartilages connect the anterior ends of the ribs to the sternum, with the exception of rib pairs 11 and 12, which are free-floating ribs.

Human Appendicular Skeleton

The appendicular skeleton is composed of the bones of the upper limbs (which function to grasp and manipulate objects) and the lower limbs (which permit locomotion). It also includes the pectoral girdle, or shoulder girdle, that attaches the upper limbs to the body, and the pelvic girdle that attaches the lower limbs to the body (Figure 38.10).

The Pectoral Girdle

The pectoral girdle bones provide the points of attachment of the upper limbs to the axial skeleton. The human pectoral girdle consists of the clavicle (or collarbone) in the anterior, and the scapula (or shoulder blades) in the posterior (Figure 38.11). The clavicles are S-shaped bones that position the arms on the body. The clavicles lie horizontally across the front of the thorax (chest) just above the first rib. These bones are fairly fragile and are susceptible to fractures. For example, a fall with the arms outstretched causes the force to be transmitted to the clavicles, which can break if the force is excessive. The clavicle articulates with the sternum and the scapula.
The scapulae are flat, triangular bones that are located at the back of the pectoral girdle. They support the muscles crossing the shoulder joint. A ridge, called the spine, runs across the back of the scapula and can easily be felt through the skin (Figure 38.11). The spine of the scapula is a good example of a bony protrusion that facilitates a broad area of attachment for muscles to bone.

The Upper Limb

The upper limb contains 30 bones in three regions: the arm (shoulder to elbow), the forearm (ulna and radius), and the wrist and hand (Figure 38.12). An articulation is any place at which two bones are joined. The humerus is the largest and longest bone of the upper limb and the only bone of the arm. It articulates with the scapula at the shoulder and with the forearm at the elbow. The forearm extends from the elbow to the wrist and consists of two bones: the ulna and the radius. The radius is located along the lateral (thumb) side of the forearm and articulates with the humerus at the elbow. The ulna is located on the medial aspect (pinky-finger side) of the forearm. It is longer than the radius. The ulna articulates with the humerus at the elbow. The radius and ulna also articulate with the carpal bones and with each other, which in vertebrates enables a variable degree of rotation of the carpus with respect to the long axis of the limb. The hand includes the eight bones of the carpus (wrist), the five bones of the metacarpus (palm), and the 14 bones of the phalanges (digits). Each digit consists of three phalanges, except for the thumb, when present, which has only two.

The Pelvic Girdle

The pelvic girdle attaches the lower limbs to the axial skeleton. Because it is responsible for bearing the weight of the body and for locomotion, the pelvic girdle is securely attached to the axial skeleton by strong ligaments. It also has deep sockets with robust ligaments that securely attach the femur to the body. The pelvic girdle is further strengthened by two large hip bones. In adults, the hip bones, or coxal bones, are formed by the fusion of three pairs of bones: the ilium, ischium, and pubis. The pelvis joins together in the anterior of the body at a joint called the pubic symphysis and with the bones of the sacrum at the posterior of the body.

The female pelvis is slightly different from the male pelvis. Over generations of evolution, females with a wider pubic angle and larger diameter pelvic canal reproduced more successfully. Therefore, their offspring also had pelvic anatomy that enabled successful childbirth (Figure 38.13).

The Lower Limb

The lower limb consists of the thigh, the leg, and the foot. The bones of the lower limb are the femur (thigh bone), patella (kneecap), tibia and fibula (bones of the leg), tarsals (bones of the ankle), and metatarsals and phalanges (bones of the foot) (Figure 38.14). The bones of the lower limbs are thicker and stronger than the bones of the upper limbs because of the need to support the entire weight of the body and the resulting forces from locomotion. In addition to evolutionary fitness, the bones of an individual will respond to forces exerted upon them.

The femur, or thighbone, is the longest, heaviest, and strongest bone in the body. The femur and pelvis form the hip joint at the proximal end. At the distal end, the femur, tibia, and patella form the knee joint. The patella, or kneecap, is a triangular bone that lies anterior to the knee joint. The patella is embedded in the tendon of the femoral extensors (quadriceps).
It improves knee extension by reducing friction. The tibia, or shinbone, is a large bone of the leg that is located directly below the knee. The tibia articulates with the femur at its proximal end, and with the fibula and the tarsal bones at its distal end. It is the second largest bone in the human body and is responsible for transmitting the weight of the body from the femur to the foot. The fibula, or calf bone, parallels and articulates with the tibia. It does not articulate with the femur and does not bear weight. The fibula acts as a site for muscle attachment and forms the lateral part of the ankle joint.

The tarsals are the seven bones of the ankle. The ankle transmits the weight of the body from the tibia and the fibula to the foot. The metatarsals are the five bones of the foot. The phalanges are the 14 bones of the toes. Each toe consists of three phalanges, except for the big toe, which has only two (Figure 38.15). Variations exist in other species; for example, the horse’s metacarpals and metatarsals are oriented vertically and do not make contact with the substrate.

Evolution Connection

Evolution of Body Design for Locomotion on Land

The transition of vertebrates onto land required a number of changes in body design, as movement on land presents a number of challenges for animals that are adapted to movement in water. The buoyancy of water provides a certain amount of lift, and a common form of movement by fish is lateral undulations of the entire body. This back and forth movement pushes the body against the water, creating forward movement. In most fish, the muscles of paired fins attach to girdles within the body, allowing for some control of locomotion. As certain fish began moving onto land, they retained their lateral undulation form of locomotion (anguilliform). However, instead of pushing against water, their fins or flippers became points of contact with the ground, around which they rotated their bodies.

The effect of gravity and the lack of buoyancy on land meant that body weight was suspended on the limbs, leading to increased strengthening and ossification of the limbs. The effect of gravity also required changes to the axial skeleton. Lateral undulations of land animal vertebral columns cause torsional strain. A firmer, more ossified vertebral column became common in terrestrial tetrapods because it reduces strain while providing the strength needed to support the body’s weight. In later tetrapods, the vertebrae began allowing for vertical motion rather than lateral flexion. Another change in the axial skeleton was the loss of a direct attachment between the pectoral girdle and the head. This reduced the jarring to the head caused by the impact of the limbs on the ground. The vertebrae of the neck also evolved to allow movement of the head independently of the body.

The appendicular skeleton of land animals is also different from that of aquatic animals. The shoulders attach to the pectoral girdle through muscles and connective tissue, thus reducing the jarring of the skull. Because of a laterally undulating vertebral column, in early tetrapods the limbs were splayed out to the side and movement occurred by performing "push-ups." The vertebrae of these animals had to move side-to-side in a similar manner to fish and reptiles. This type of motion requires large muscles to move the limbs toward the midline; it was almost like walking while doing push-ups, and it is not an efficient use of energy.
Later tetrapods have their limbs placed under their bodies, so that each stride requires less force to move forward. This resulted in decreased adductor muscle size and an increased range of motion of the scapulae. It also restricted movement primarily to one plane, creating forward motion rather than moving the limbs upward as well as forward. The femur and humerus were also rotated, so that the ends of the limbs and digits were pointed forward, in the direction of motion, rather than out to the side. Because they are placed underneath the body, limbs can swing forward like a pendulum to produce a stride that is more efficient for moving over land. 38.2 Bone Learning Objectives By the end of this section, you will be able to: Classify the different types of bones in the skeleton Explain the role of the different cell types in bone Explain how bone forms during development Bone , or osseous tissue , is a connective tissue that constitutes the endoskeleton. It contains specialized cells and a matrix of mineral salts and collagen fibers. The mineral salts primarily include hydroxyapatite, a mineral formed from calcium phosphate. Calcification is the process of deposition of mineral salts on the collagen fiber matrix that crystallizes and hardens the tissue. The process of calcification only occurs in the presence of collagen fibers. The bones of the human skeleton are classified by their shape: long bones, short bones, flat bones, sutural bones, sesamoid bones, and irregular bones ( Figure 38.16 ). Long bones are longer than they are wide and have a shaft and two ends. The diaphysis , or central shaft, contains bone marrow in a marrow cavity. The rounded ends, the epiphyses , are covered with articular cartilage and are filled with red bone marrow, which produces blood cells ( Figure 38.17 ). Most of the limb bones are long bones—for example, the femur, tibia, ulna, and radius. Exceptions to this include the patella and the bones of the wrist and ankle. Short bones , or cuboidal bones, are bones that are the same width and length, giving them a cube-like shape. For example, the bones of the wrist (carpals) and ankle (tarsals) are short bones ( Figure 38.16 ). Flat bones are thin and relatively broad bones that are found where extensive protection of organs is required or where broad surfaces of muscle attachment are required. Examples of flat bones are the sternum (breast bone), ribs, scapulae (shoulder blades), and the roof of the skull ( Figure 38.16 ). Irregular bones are bones with complex shapes. These bones may have short, flat, notched, or ridged surfaces. Examples of irregular bones are the vertebrae, hip bones, and several skull bones. Sesamoid bones are small, flat bones and are shaped similarly to a sesame seed. The patellae are sesamoid bones ( Figure 38.18 ). Sesamoid bones develop inside tendons and may be found near joints at the knees, hands, and feet. Sutural bones are small, flat, irregularly shaped bones. They may be found between the flat bones of the skull. They vary in number, shape, size, and position. Bone Tissue Bones are considered organs because they contain various types of tissue, such as blood, connective tissue, nerves, and bone tissue. Osteocytes, the living cells of bone tissue, are embedded in and maintain the mineral matrix of bones. There are two types of bone tissue: compact and spongy. Compact Bone Tissue Compact bone (or cortical bone) forms the hard external layer of all bones and surrounds the medullary cavity, which contains the bone marrow. It provides protection and strength to bones.
Compact bone tissue consists of units called osteons or Haversian systems. Osteons are cylindrical structures that contain a mineral matrix and living osteocytes connected by canaliculi, small channels through which nutrients and wastes pass to and from the osteocytes. They are aligned parallel to the long axis of the bone. Each osteon consists of lamellae , which are layers of compact matrix that surround a central canal called the Haversian canal. The Haversian canal (osteonic canal) contains the bone’s blood vessels and nerve fibers ( Figure 38.19 ). Osteons in compact bone tissue are aligned in the same direction along lines of stress and help the bone resist bending or fracturing. Therefore, compact bone tissue is prominent in areas of bone at which stresses are applied in only a few directions. Visual Connection Which of the following statements about bone tissue is false? Compact bone tissue is made of cylindrical osteons that are aligned such that they travel the length of the bone. Haversian canals contain blood vessels only. Haversian canals contain blood vessels and nerve fibers. Spongy tissue is found on the interior of the bone, and compact bone tissue is found on the exterior. Spongy Bone Tissue Whereas compact bone tissue forms the outer layer of all bones, spongy bone or cancellous bone forms the inner layer of all bones. Spongy bone tissue does not contain the osteons that constitute compact bone tissue. Instead, it consists of trabeculae , which are lamellae that are arranged as rods or plates. Red bone marrow is found between the trabeculae. Blood vessels within this tissue deliver nutrients to osteocytes and remove waste. The red bone marrow of the femur and the interior of other large bones, such as the ilium, forms blood cells. Spongy bone reduces the density of bone and allows the ends of long bones to compress as the result of stresses applied to the bone. Spongy bone is prominent in areas of bones that are not heavily stressed or where stresses arrive from many directions. The epiphyses of bones, such as the neck of the femur, are subject to stress from many directions. Imagine laying a heavy framed picture flat on the floor. You could hold up one side of the picture with a toothpick if the toothpick were perpendicular to both the floor and the picture. Now drill a hole and stick the toothpick into the wall to hang up the picture. In this case, the function of the toothpick is to transmit the downward pressure of the picture to the wall. The force on the picture is straight down to the floor, but the force on the toothpick is both the picture wire pulling down and the bottom of the hole in the wall pushing up. The toothpick will break off right at the wall. The neck of the femur is horizontal like the toothpick in the wall. The weight of the body pushes it down near the joint, but the vertical diaphysis of the femur pushes it up at the other end. The neck of the femur must be strong enough to transfer the downward force of the body weight horizontally to the vertical shaft of the femur ( Figure 38.20 ). Link to Learning View micrographs of musculoskeletal tissues as you review the anatomy. Cell Types in Bones Bone consists of four types of cells: osteoblasts, osteoclasts, osteocytes, and osteoprogenitor cells. Osteoblasts are bone cells that are responsible for bone formation. Osteoblasts synthesize and secrete both the organic and inorganic parts of the extracellular matrix of bone tissue, including its collagen fibers. Osteoblasts become trapped in these secretions and differentiate into less active osteocytes.
Osteoclasts are large bone cells with up to 50 nuclei. They remove bone structure by releasing lysosomal enzymes and acids that dissolve the bony matrix. The minerals released from the dissolved matrix pass into the blood and help regulate calcium concentrations in body fluids. Bone may also be resorbed for remodeling, if the applied stresses have changed. Osteocytes are mature bone cells and are the main cells in bony connective tissue; these cells cannot divide. Osteocytes maintain normal bone structure by recycling the mineral salts in the bony matrix. Osteoprogenitor cells are squamous stem cells that divide to produce daughter cells that differentiate into osteoblasts. Osteoprogenitor cells are important in the repair of fractures. Development of Bone Ossification , or osteogenesis, is the process of bone formation by osteoblasts. Ossification is distinct from the process of calcification; whereas calcification takes place during the ossification of bones, it can also occur in other tissues. Ossification begins approximately six weeks after fertilization in an embryo. Before this time, the embryonic skeleton consists entirely of fibrous membranes and hyaline cartilage. The development of bone from fibrous membranes is called intramembranous ossification; development from hyaline cartilage is called endochondral ossification. Bone growth continues until approximately age 25. Bones can grow in thickness throughout life, but after age 25, ossification functions primarily in bone remodeling and repair. Intramembranous Ossification Intramembranous ossification is the process of bone development from fibrous membranes. It is involved in the formation of the flat bones of the skull, the mandible, and the clavicles. Ossification begins as mesenchymal cells form a template of the future bone. They then differentiate into osteoblasts at the ossification center. Osteoblasts secrete the extracellular matrix and deposit calcium, which hardens the matrix. The non-mineralized portion of the bone, or osteoid, continues to form around blood vessels, forming spongy bone. Connective tissue in the matrix differentiates into red bone marrow in the fetus. The spongy bone is remodeled into a thin layer of compact bone on the surface of the spongy bone. Endochondral Ossification Endochondral ossification is the process of bone development from hyaline cartilage. All of the bones of the body, except for the flat bones of the skull, mandible, and clavicles, are formed through endochondral ossification. In long bones, chondrocytes form a template of the hyaline cartilage diaphysis. Responding to complex developmental signals, the matrix begins to calcify. This calcification prevents diffusion of nutrients into the matrix, causing the chondrocytes to die and cavities to open up in the diaphysis cartilage. Blood vessels invade the cavities, and osteoblasts and osteoclasts modify the calcified cartilage matrix into spongy bone. Osteoclasts then break down some of the spongy bone to create a marrow, or medullary, cavity in the center of the diaphysis. Dense, irregular connective tissue forms a sheath (periosteum) around the bones. The periosteum assists in attaching the bone to surrounding tissues, tendons, and ligaments. The bone continues to grow and elongate as the cartilage cells at the epiphyses divide. In the last stage of prenatal bone development, the centers of the epiphyses begin to calcify.
Secondary ossification centers form in the epiphyses as blood vessels and osteoblasts enter these areas and convert hyaline cartilage into spongy bone. Until adolescence, hyaline cartilage persists at the epiphyseal plate (growth plate), which is the region between the diaphysis and epiphysis that is responsible for the lengthwise growth of long bones ( Figure 38.21 ). Growth of Bone Long bones continue to lengthen, potentially until adolescence, through the addition of bone tissue at the epiphyseal plate. They also increase in width through appositional growth. Lengthening of Long Bones Chondrocytes on the epiphyseal side of the epiphyseal plate divide; one cell remains undifferentiated near the epiphysis, and one cell moves toward the diaphysis. The cells, which are pushed from the epiphysis, mature and are destroyed by calcification. This process replaces cartilage with bone on the diaphyseal side of the plate, resulting in a lengthening of the bone. Long bones stop growing at around the age of 18 in females and the age of 21 in males in a process called epiphyseal plate closure. During this process, cartilage cells stop dividing and all of the cartilage is replaced by bone. The epiphyseal plate fades, leaving a structure called the epiphyseal line or epiphyseal remnant, and the epiphysis and diaphysis fuse. Thickening of Long Bones Appositional growth is the increase in the diameter of bones by the addition of bony tissue at the surface of bones. Osteoblasts at the bone surface secrete bone matrix, and osteoclasts on the inner surface break down bone. The osteoblasts differentiate into osteocytes. A balance between these two processes allows the bone to thicken without becoming too heavy. Bone Remodeling and Repair Bone renewal continues after birth into adulthood. Bone remodeling is the replacement of old bone tissue by new bone tissue. It involves the processes of bone deposition by osteoblasts and bone resorption by osteoclasts. Normal bone growth requires vitamins D, C, and A, plus minerals such as calcium, phosphorus, and magnesium. Hormones such as parathyroid hormone, growth hormone, and calcitonin are also required for proper bone growth and maintenance. Bone turnover rates are quite high, with five to seven percent of bone mass being recycled every week. Differences in turnover rate exist in different areas of the skeleton and in different areas of a bone. For example, the bone in the head of the femur may be fully replaced every six months, whereas the bone along the shaft is altered much more slowly. Bone remodeling allows bones to adapt, becoming thicker and stronger when subjected to stress. Bones that are not subject to normal stress, for example when a limb is in a cast, will begin to lose mass. A fractured or broken bone undergoes repair through four stages: (1) Blood vessels in the broken bone tear and hemorrhage, resulting in the formation of clotted blood, or a hematoma, at the site of the break. The severed blood vessels at the broken ends of the bone are sealed by the clotting process, and bone cells that are deprived of nutrients begin to die. (2) Within days of the fracture, capillaries grow into the hematoma, and phagocytic cells begin to clear away the dead cells. Though fragments of the blood clot may remain, fibroblasts and osteoblasts enter the area and begin to reform bone. Fibroblasts produce collagen fibers that connect the broken bone ends, and osteoblasts start to form spongy bone.
The repair tissue between the broken bone ends is called the fibrocartilaginous callus, as it is composed of both hyaline cartilage and fibrocartilage ( Figure 38.22 ). Some bone spicules may also appear at this point. (3) The fibrocartilaginous callus is converted into a bony callus of spongy bone. It takes about two months for the broken bone ends to be firmly joined together after the fracture. This is similar to the endochondral formation of bone, as cartilage becomes ossified; osteoblasts, osteoclasts, and bone matrix are present. (4) The bony callus is then remodeled by osteoclasts and osteoblasts, with excess material on the exterior of the bone and within the medullary cavity being removed. Compact bone is added to create bone tissue that is similar to the original, unbroken bone. This remodeling can take many months, and the bone may remain uneven for years. Scientific Method Connection Decalcification of Bones Question: What effect does the removal of calcium and collagen have on bone structure? Background: Conduct a literature search on the role of calcium and collagen in maintaining bone structure. Conduct a literature search on diseases in which bone structure is compromised. Hypothesis: Develop a hypothesis that states predictions of the flexibility, strength, and mass of bones that have had the calcium and collagen components removed. Develop a hypothesis regarding the attempt to add calcium back to decalcified bones. Test the hypothesis: Test the prediction by removing calcium from chicken bones by placing them in a jar of vinegar for seven days. Test the hypothesis regarding adding calcium back to decalcified bone by placing the decalcified chicken bones into a jar of water with calcium supplements added. Test the prediction by denaturing the collagen from the bones by baking them at 250°C for three hours. Analyze the data: Create a table showing the changes in bone flexibility, strength, and mass in the three different environments. Report the results: Under which conditions was the bone most flexible? Under which conditions was the bone the strongest? Draw a conclusion: Did the results support or refute the hypothesis? How do the results observed in this experiment correspond to diseases that destroy bone tissue? 38.3 Joints and Skeletal Movement Learning Objectives By the end of this section, you will be able to: Classify the different types of joints on the basis of structure Explain the role of joints in skeletal movement The point at which two or more bones meet is called a joint , or articulation . Joints are responsible for movement, such as the movement of limbs, and stability, such as the stability found in the bones of the skull. Classification of Joints on the Basis of Structure There are two ways to classify joints: on the basis of their structure or on the basis of their function. The structural classification divides joints into bony, fibrous, cartilaginous, and synovial joints depending on the material composing the joint and the presence or absence of a cavity in the joint. Fibrous Joints The bones of fibrous joints are held together by fibrous connective tissue. There is no cavity, or space, present between the bones, and so most fibrous joints do not move at all or are only capable of minor movements. There are three types of fibrous joints: sutures, syndesmoses, and gomphoses. Sutures are found only in the skull and possess short fibers of connective tissue that hold the skull bones tightly in place ( Figure 38.23 ).
Syndesmoses are joints in which the bones are connected by a band of connective tissue, allowing for more movement than in a suture. An example of a syndesmosis is the joint of the tibia and fibula in the ankle. The amount of movement in these types of joints is determined by the length of the connective tissue fibers. Gomphoses occur between teeth and their sockets; the term refers to the way the tooth fits into the socket like a peg ( Figure 38.24 ). The tooth is connected to the socket by a connective tissue referred to as the periodontal ligament. Cartilaginous Joints Cartilaginous joints are joints in which the bones are connected by cartilage. There are two types of cartilaginous joints: synchondroses and symphyses. In a synchondrosis , the bones are joined by hyaline cartilage. Synchondroses are found in the epiphyseal plates of growing bones in children. In symphyses , hyaline cartilage covers the end of the bone but the connection between bones occurs through fibrocartilage. Symphyses are found at the joints between vertebrae. Either type of cartilaginous joint allows for very little movement. Synovial Joints Synovial joints are the only joints that have a space between the adjoining bones ( Figure 38.25 ). This space is referred to as the synovial (or joint) cavity and is filled with synovial fluid. Synovial fluid lubricates the joint, reducing friction between the bones and allowing for greater movement. The ends of the bones are covered with articular cartilage, a hyaline cartilage, and the entire joint is surrounded by an articular capsule composed of connective tissue that allows movement of the joint while resisting dislocation. Articular capsules may also possess ligaments that hold the bones together. Synovial joints are capable of the greatest movement of the structural joint types; however, the more mobile a joint, the weaker the joint. Knees, elbows, and shoulders are examples of synovial joints. Classification of Joints on the Basis of Function The functional classification divides joints into three categories: synarthroses, amphiarthroses, and diarthroses. A synarthrosis is a joint that is immovable. This includes sutures, gomphoses, and synchondroses. Amphiarthroses are joints that allow slight movement, including syndesmoses and symphyses. Diarthroses are joints that allow for free movement of the joint, as in synovial joints. Movement at Synovial Joints The wide range of movement allowed by synovial joints produces different types of movements. The movement of synovial joints can be classified as one of four different types: gliding, angular, rotational, or special movement. Gliding Movement Gliding movements occur as relatively flat bone surfaces move past each other. Gliding movements produce very little rotation or angular movement of the bones. The joints of the carpal and tarsal bones are examples of joints that produce gliding movements. Angular Movement Angular movements are produced when the angle between the bones of a joint changes. There are several different types of angular movements, including flexion, extension, hyperextension, abduction, adduction, and circumduction. Flexion , or bending, occurs when the angle between the bones decreases. Moving the forearm upward at the elbow or moving the wrist to move the hand toward the forearm are examples of flexion. Extension is the opposite of flexion in that the angle between the bones of a joint increases. Straightening a limb after flexion is an example of extension.
Extension past the regular anatomical position is referred to as hyperextension . This includes moving the neck back to look upward, or bending the wrist so that the hand moves away from the forearm. Abduction occurs when a bone moves away from the midline of the body. Examples of abduction are moving the arms or legs laterally to lift them straight out to the side. Adduction is the movement of a bone toward the midline of the body. Movement of the limbs inward after abduction is an example of adduction. Circumduction is the movement of a limb in a circular motion, as when the arm is swung in a circle. Rotational Movement Rotational movement is the movement of a bone as it rotates around its longitudinal axis. Rotation can be toward the midline of the body, which is referred to as medial rotation , or away from the midline of the body, which is referred to as lateral rotation . Movement of the head from side to side is an example of rotation. Special Movements Some movements that cannot be classified as gliding, angular, or rotational are called special movements. Inversion involves the soles of the feet moving inward, toward the midline of the body. Eversion is the opposite of inversion, movement of the sole of the foot outward, away from the midline of the body. Protraction is the anterior movement of a bone in the horizontal plane. Retraction occurs as a joint moves back into position after protraction. Protraction and retraction can be seen in the movement of the mandible as the jaw is thrust outwards and then back inwards. Elevation is the movement of a bone upward, such as when the shoulders are shrugged, lifting the scapulae. Depression is the opposite of elevation—movement downward of a bone, such as after the shoulders are shrugged and the scapulae return to their normal position from an elevated position. Dorsiflexion is a bending at the ankle such that the toes are lifted toward the knee. Plantar flexion is a bending at the ankle when the heel is lifted, such as when standing on the toes. Supination is the movement of the radius and ulna bones of the forearm so that the palm faces forward. Pronation is the opposite movement, in which the palm faces backward. Opposition is the movement of the thumb toward the fingers of the same hand, making it possible to grasp and hold objects. Types of Synovial Joints Synovial joints are further classified into six different categories on the basis of the shape and structure of the joint. The shape of the joint affects the type of movement permitted by the joint ( Figure 38.26 ). These joints can be described as planar, hinge, pivot, condyloid, saddle, or ball-and-socket joints. Planar Joints Planar joints have bones with articulating surfaces that are flat or slightly curved. These joints allow for gliding movements, and so the joints are sometimes referred to as gliding joints. The range of motion is limited in these joints and does not involve rotation. Planar joints are found in the carpal bones in the hand and the tarsal bones of the foot, as well as between vertebrae ( Figure 38.27 ). Hinge Joints In hinge joints , the slightly rounded end of one bone fits into the slightly hollow end of the other bone. In this way, one bone moves while the other remains stationary, like the hinge of a door. The elbow is an example of a hinge joint. The knee is sometimes classified as a modified hinge joint ( Figure 38.28 ). Pivot Joints Pivot joints consist of the rounded end of one bone fitting into a ring formed by the other bone.
This structure allows rotational movement, as the rounded bone moves around its own axis. An example of a pivot joint is the joint of the first and second vertebrae of the neck that allows the head to rotate from side to side ( Figure 38.29 ). The joint of the wrist that allows the palm of the hand to be turned up and down is also a pivot joint. Condyloid Joints Condyloid joints consist of an oval-shaped end of one bone fitting into a similarly oval-shaped hollow of another bone ( Figure 38.30 ). This is also sometimes called an ellipsoidal joint. This type of joint allows angular movement along two axes, as seen in the joints of the wrist and fingers, which can move both side to side and up and down. Saddle Joints Saddle joints are so named because the ends of each bone resemble a saddle, with concave and convex portions that fit together. Saddle joints allow angular movements similar to condyloid joints but with a greater range of motion. An example of a saddle joint is the thumb joint, which can move back and forth and up and down, but more freely than the wrist or fingers ( Figure 38.31 ). Ball-and-Socket Joints Ball-and-socket joints possess a rounded, ball-like end of one bone fitting into a cuplike socket of another bone. This organization allows the greatest range of motion, as all movement types are possible in all directions. Examples of ball-and-socket joints are the shoulder and hip joints ( Figure 38.32 ). Link to Learning Watch this animation showing the six types of synovial joints. Click to view content Career Connection Rheumatologist Rheumatologists are medical doctors who specialize in the diagnosis and treatment of disorders of the joints, muscles, and bones. They diagnose and treat diseases such as arthritis, musculoskeletal disorders, osteoporosis, and autoimmune diseases such as ankylosing spondylitis and rheumatoid arthritis. Rheumatoid arthritis (RA) is an inflammatory disorder that primarily affects the synovial joints of the hands, feet, and cervical spine. Affected joints become swollen, stiff, and painful. Although it is known that RA is an autoimmune disease in which the body’s immune system mistakenly attacks healthy tissue, the cause of RA remains unknown. Immune cells from the blood enter joints and the synovium, causing cartilage breakdown, swelling, and inflammation of the joint lining. Breakdown of cartilage causes bones to rub against each other, causing pain. RA is more common in women than in men, and the age of onset is usually 40–50 years. Rheumatologists can diagnose RA on the basis of symptoms such as joint inflammation and pain, X-ray and MRI imaging, and blood tests. Arthrography is a type of medical imaging of joints that uses a contrast agent, such as a dye, that is opaque to X-rays. This allows the soft tissue structures of joints—such as cartilage, tendons, and ligaments—to be visualized. An arthrogram differs from a regular X-ray by showing the surface of soft tissues lining the joint in addition to joint bones. An arthrogram allows early degenerative changes in joint cartilage to be detected before bones become affected. There is currently no cure for RA; however, rheumatologists have a number of treatment options available. Early stages can be treated with rest of the affected joints by using a cane or by using joint splints that minimize inflammation. When inflammation has decreased, exercise can be used to strengthen the muscles that surround the joint and to maintain joint flexibility.
If joint damage is more extensive, medications can be used to relieve pain and decrease inflammation. Anti-inflammatory drugs such as aspirin, topical pain relievers, and corticosteroid injections may be used. Surgery may be required in cases in which joint damage is severe. 38.4 Muscle Contraction and Locomotion Learning Objectives By the end of this section, you will be able to: Classify the different types of muscle tissue Explain the role of muscles in locomotion Muscle cells are specialized for contraction. Muscles allow for motions such as walking, and they also facilitate bodily processes such as respiration and digestion. The body contains three types of muscle tissue: skeletal muscle, cardiac muscle, and smooth muscle ( Figure 38.33 ). Skeletal muscle tissue forms skeletal muscles, which attach to bones or skin and control locomotion and any movement that can be consciously controlled. Because it can be controlled by thought, skeletal muscle is also called voluntary muscle. Skeletal muscles are long and cylindrical in appearance; when viewed under a microscope, skeletal muscle tissue has a striped or striated appearance. The striations are caused by the regular arrangement of contractile proteins (actin and myosin). Actin is a globular contractile protein that interacts with myosin for muscle contraction. Skeletal muscle also has multiple nuclei present in a single cell. Smooth muscle tissue occurs in the walls of hollow organs such as the intestines, stomach, and urinary bladder, and around passages such as the respiratory tract and blood vessels. Smooth muscle has no striations, is not under voluntary control, has only one nucleus per cell, is tapered at both ends, and is called involuntary muscle. Cardiac muscle tissue is only found in the heart, and cardiac contractions pump blood throughout the body and maintain blood pressure. Like skeletal muscle, cardiac muscle is striated, but unlike skeletal muscle, cardiac muscle cannot be consciously controlled and is called involuntary muscle. It has one nucleus per cell, is branched, and is distinguished by the presence of intercalated disks. Skeletal Muscle Fiber Structure Each skeletal muscle fiber is a skeletal muscle cell. These cells are incredibly large, with diameters of up to 100 µm and lengths of up to 30 cm. The plasma membrane of a skeletal muscle fiber is called the sarcolemma . The sarcolemma is the site of action potential conduction, which triggers muscle contraction. Within each muscle fiber are myofibrils —long cylindrical structures that lie parallel to the muscle fiber. Myofibrils run the entire length of the muscle fiber, and because they are only approximately 1.2 µm in diameter, hundreds to thousands can be found inside one muscle fiber. They attach to the sarcolemma at their ends, so that as myofibrils shorten, the entire muscle cell contracts ( Figure 38.34 ). The striated appearance of skeletal muscle tissue is a result of repeating bands of the proteins actin and myosin that are present along the length of myofibrils. Dark A bands and light I bands repeat along myofibrils, and the alignment of myofibrils in the cell causes the entire cell to appear striated or banded. Each I band has a dense line running vertically through the middle called a Z disc or Z line. The Z discs mark the border of units called sarcomeres , which are the functional units of skeletal muscle. 
One sarcomere is the space between two consecutive Z discs and contains one entire A band and two halves of an I band, one on either side of the A band. A myofibril is composed of many sarcomeres running along its length, and as the sarcomeres individually contract, the myofibrils and muscle cells shorten ( Figure 38.35 ). Myofibrils are composed of smaller structures called myofilaments . There are two main types of filaments: thick filaments and thin filaments; each has a different composition and location. Thick filaments occur only in the A band of a myofibril. Thin filaments attach to a protein in the Z disc called alpha-actinin and occur across the entire length of the I band and partway into the A band. The region at which thick and thin filaments overlap has a dense appearance, as there is little space between the filaments. Thin filaments do not extend all the way into the A bands, leaving a central region of the A band that only contains thick filaments. This central region of the A band looks slightly lighter than the rest of the A band and is called the H zone. The middle of the H zone has a vertical line called the M line, at which accessory proteins hold together thick filaments. Both the Z disc and the M line hold myofilaments in place to maintain the structural arrangement and layering of the myofibril. Myofibrils are connected to each other by intermediate, or desmin, filaments that attach to the Z disc. Thick and thin filaments are themselves composed of proteins. Thick filaments are composed of the protein myosin. The tail of a myosin molecule connects with other myosin molecules to form the central region of a thick filament near the M line, whereas the heads align on either side of the thick filament where the thin filaments overlap. The primary component of thin filaments is the actin protein. Two other components of the thin filament are tropomyosin and troponin. Actin has binding sites for myosin attachment. Strands of tropomyosin block the binding sites and prevent actin–myosin interactions when the muscles are at rest. Troponin consists of three globular subunits. One subunit binds to tropomyosin, one subunit binds to actin, and one subunit binds Ca 2+ ions. Link to Learning View this animation showing the organization of muscle fibers. Click to view content Sliding Filament Model of Contraction For a muscle cell to contract, the sarcomere must shorten. However, thick and thin filaments—the components of sarcomeres—do not shorten. Instead, they slide by one another, causing the sarcomere to shorten while the filaments remain the same length. The sliding filament theory of muscle contraction was developed to fit the differences observed in the named bands on the sarcomere at different degrees of muscle contraction and relaxation. The mechanism of contraction is the binding of myosin to actin, forming cross-bridges that generate filament movement ( Figure 38.36 ). When a sarcomere shortens, some regions shorten whereas others stay the same length. A sarcomere is defined as the distance between two consecutive Z discs or Z lines; when a muscle contracts, the distance between the Z discs is reduced. The H zone—the central region of the A band—contains only thick filaments and is shortened during contraction. The I band contains only thin filaments and also shortens. The A band does not shorten—it remains the same length—but A bands of different sarcomeres move closer together during contraction, eventually causing the I bands between them to all but disappear.
Thin filaments are pulled by the thick filaments toward the center of the sarcomere until the Z discs approach the thick filaments. The zone of overlap, in which thin filaments and thick filaments occupy the same area, increases as the thin filaments move inward. ATP and Muscle Contraction The motion of muscle shortening occurs as myosin heads bind to actin and pull the actin inwards. This action requires energy, which is provided by ATP. Myosin binds to actin at a binding site on the globular actin protein. Myosin has another binding site for ATP at which enzymatic activity hydrolyzes ATP to ADP, releasing an inorganic phosphate molecule and energy. ATP binding causes myosin to release actin, allowing actin and myosin to detach from each other. After this happens, the newly bound ATP is converted to ADP and inorganic phosphate, P i . The enzyme at the binding site on myosin is called ATPase. The energy released during ATP hydrolysis changes the angle of the myosin head into a “cocked” position. The myosin head is then in a position for further movement, possessing potential energy, but ADP and P i are still attached. If actin binding sites are covered and unavailable, the myosin will remain in the high energy configuration with ATP hydrolyzed, but still attached. If the actin binding sites are uncovered, a cross-bridge will form; that is, the myosin head spans the distance between the actin and myosin molecules. P i is then released, allowing myosin to expend the stored energy as a conformational change. The myosin head moves toward the M line, pulling the actin along with it. As the actin is pulled, the filaments move approximately 10 nm toward the M line. This movement is called the power stroke, as it is the step at which force is produced. As the actin is pulled toward the M line, the sarcomere shortens and the muscle contracts. When the myosin head is “cocked,” it contains energy and is in a high-energy configuration. This energy is expended as the myosin head moves through the power stroke; at the end of the power stroke, the myosin head is in a low-energy position. After the power stroke, ADP is released; however, the cross-bridge formed is still in place, and actin and myosin are bound together. ATP can then attach to myosin, which allows the cross-bridge cycle to start again and further muscle contraction can occur ( Figure 38.37 ). Link to Learning Watch this video explaining how a muscle contraction is signaled. Click to view content Visual Connection Which of the following statements about muscle contraction is true? The power stroke occurs when ATP is hydrolyzed to ADP and phosphate. The power stroke occurs when ADP and phosphate dissociate from the myosin head. The power stroke occurs when ADP and phosphate dissociate from the actin active site. The power stroke occurs when Ca 2+ binds the myosin head. Link to Learning View this animation of the cross-bridge muscle contraction. Regulatory Proteins When a muscle is in a resting state, actin and myosin are separated. To keep actin from binding to the active site on myosin, regulatory proteins block the molecular binding sites. Tropomyosin blocks myosin binding sites on actin molecules, preventing cross-bridge formation and preventing contraction in a muscle without nervous input. Troponin binds to tropomyosin and helps to position it on the actin molecule; it also binds calcium ions.
To enable a muscle contraction, tropomyosin must change conformation, uncovering the myosin-binding site on an actin molecule and allowing cross-bridge formation. This can only happen in the presence of calcium, which is kept at extremely low concentrations in the sarcoplasm. If present, calcium ions bind to troponin, causing conformational changes in troponin that allow tropomyosin to move away from the myosin binding sites on actin. Once the tropomyosin is moved aside, a cross-bridge can form between actin and myosin, triggering contraction. Cross-bridge cycling continues until Ca 2+ ions and ATP are no longer available and tropomyosin again covers the binding sites on actin. Excitation–Contraction Coupling Excitation–contraction coupling is the link (transduction) between the action potential generated in the sarcolemma and the start of a muscle contraction. The trigger for calcium release from the sarcoplasmic reticulum into the sarcoplasm is a neural signal. Each skeletal muscle fiber is controlled by a motor neuron, which conducts signals from the brain or spinal cord to the muscle. The area of the sarcolemma on the muscle fiber that interacts with the neuron is called the motor end plate . The end of the neuron’s axon is called the synaptic terminal, and it does not actually contact the motor end plate. A small space called the synaptic cleft separates the synaptic terminal from the motor end plate. Electrical signals travel along the neuron’s axon, which branches through the muscle and connects to individual muscle fibers at a neuromuscular junction. The ability of cells to communicate electrically requires that the cells expend energy to create an electrical gradient across their cell membranes. This charge gradient is carried by ions, which are differentially distributed across the membrane. Each ion exerts an electrical influence and a concentration influence. Just as milk will eventually mix with coffee without the need to stir, ions also distribute themselves evenly, if they are permitted to do so. In this case, they are not permitted to return to an evenly mixed state. The sodium–potassium ATPase uses cellular energy to move K + ions inside the cell and Na + ions outside. This alone accumulates a small electrical charge, but a big concentration gradient. There is lots of K + in the cell and lots of Na + outside the cell. Potassium is able to leave the cell through K + channels that are open 90% of the time, and it does. However, Na + channels are rarely open, so Na + remains outside the cell. When K + leaves the cell, obeying its concentration gradient, that effectively leaves a negative charge behind. So at rest, there is a large concentration gradient for Na + to enter the cell, and there is an accumulation of negative charges left behind in the cell. This is the resting membrane potential. Potential in this context means a separation of electrical charge that is capable of doing work. It is measured in volts, just like a battery. However, the transmembrane potential is considerably smaller (0.07 V); therefore, this small value is expressed in millivolts (mV), in this case 70 mV. Because the inside of a cell is negative compared with the outside, a minus sign signifies the excess of negative charges inside the cell: −70 mV. If an event changes the permeability of the membrane to Na + ions, they will enter the cell. That will change the voltage. This is an electrical event, called an action potential, that can be used as a cellular signal.
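The −70 mV figure above can be made concrete with a short worked example. The calculation below uses the Nernst equation, a standard electrophysiology formula that this chapter does not introduce, together with typical mammalian ion concentrations; both the formula and the concentration values are outside assumptions added here purely for illustration.

```latex
% Equilibrium potential for a single ion species (Nernst equation):
%   E_ion = (RT / zF) * ln([ion]_out / [ion]_in)
% For K+ (z = +1) at body temperature (T ~ 310 K), RT/F is about 26.7 mV.
% Assuming typical concentrations [K+]_out = 5 mM and [K+]_in = 140 mM:
E_{K} \approx 26.7\,\text{mV} \times \ln\!\left(\frac{5}{140}\right)
      \approx 26.7\,\text{mV} \times (-3.33) \approx -89\,\text{mV}
```

Because a small fraction of Na + channels are also open at rest, the measured resting potential of about −70 mV (0.07 V) sits between this potassium equilibrium value and the far more positive sodium equilibrium value, which is consistent with the description above.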
Communication occurs between nerves and muscles through neurotransmitters. Neuron action potentials cause the release of neurotransmitters from the synaptic terminal into the synaptic cleft, where they can then diffuse across the synaptic cleft and bind to a receptor molecule on the motor end plate. The motor end plate possesses junctional folds—folds in the sarcolemma that create a large surface area for the neurotransmitter to bind to receptors. The receptors are actually sodium channels that open to allow the passage of Na + into the cell when they receive a neurotransmitter signal. Acetylcholine (ACh) is a neurotransmitter released by motor neurons that binds to receptors in the motor end plate. Neurotransmitter release occurs when an action potential travels down the motor neuron’s axon, resulting in altered permeability of the synaptic terminal membrane and an influx of calcium. The Ca 2+ ions allow synaptic vesicles to move to and bind with the presynaptic membrane (on the neuron), and release neurotransmitter from the vesicles into the synaptic cleft. Once released by the synaptic terminal, ACh diffuses across the synaptic cleft to the motor end plate, where it binds with ACh receptors. As a neurotransmitter binds, these ion channels open, and Na + ions cross the membrane into the muscle cell. This reduces the voltage difference between the inside and outside of the cell, which is called depolarization. As ACh binds at the motor end plate, this depolarization is called an end-plate potential. The depolarization then spreads along the sarcolemma, creating an action potential as sodium channels adjacent to the initial depolarization site sense the change in voltage and open. The action potential moves across the entire cell, creating a wave of depolarization. ACh is broken down by the enzyme acetylcholinesterase (AChE) into acetate and choline. AChE resides in the synaptic cleft, breaking down ACh so that it does not remain bound to ACh receptors, which would cause unwanted extended muscle contraction ( Figure 38.38 ). Visual Connection The deadly nerve gas Sarin irreversibly inhibits acetylcholinesterase. What effect would Sarin have on muscle contraction? After depolarization, the membrane returns to its resting state. This is called repolarization, during which voltage-gated sodium channels close. Potassium channels continue at 90% conductance. Because the plasma membrane sodium–potassium ATPase always transports ions, the resting state (negatively charged inside relative to the outside) is restored. The period immediately following the transmission of an impulse in a nerve or muscle, in which a neuron or muscle cell regains its ability to transmit another impulse, is called the refractory period. During the refractory period, the membrane cannot generate another action potential. The refractory period allows the voltage-sensitive ion channels to return to their resting configurations. The sodium–potassium ATPase continually moves Na + back out of the cell and K + back into the cell, and the K + leaks out, leaving negative charge behind. Very quickly, the membrane repolarizes, so that it can again be depolarized. Control of Muscle Tension Neural control initiates the formation of actin–myosin cross-bridges, leading to the sarcomere shortening involved in muscle contraction. These contractions extend from the muscle fiber through connective tissue to pull on bones, causing skeletal movement.
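Because the chain from neural signal to contraction spans several paragraphs, a compact summary may help. The following Python sketch is purely illustrative (nothing in it comes from the textbook, and all names are invented); it simply encodes, in order, the excitation–contraction coupling steps described above.

```python
# A conceptual ordering of excitation-contraction coupling, as described
# in the text. This is a reading aid, not a physiological simulation.

EXCITATION_CONTRACTION_STEPS = [
    "Action potential travels down the motor neuron's axon",
    "Ca2+ influx at the synaptic terminal lets vesicles fuse with the membrane",
    "ACh is released into the synaptic cleft",
    "ACh binds receptors on the motor end plate, opening Na+ channels",
    "Na+ influx depolarizes the sarcolemma (end-plate potential)",
    "An action potential propagates along the sarcolemma",
    "The sarcoplasmic reticulum releases Ca2+ into the sarcoplasm",
    "Ca2+ binds troponin; tropomyosin moves off the myosin-binding sites",
    "Cross-bridge cycling shortens the sarcomere (contraction)",
    "AChE breaks down ACh; the membrane repolarizes and the fiber relaxes",
]

def trace_contraction(stimulated: bool) -> None:
    """Print the coupling sequence if the fiber receives a neural signal."""
    if not stimulated:
        print("No action potential: tropomyosin stays in place; no contraction.")
        return
    for number, step in enumerate(EXCITATION_CONTRACTION_STEPS, start=1):
        print(f"{number:2d}. {step}")

if __name__ == "__main__":
    trace_contraction(stimulated=True)
```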
The pull exerted by a muscle is called tension, and the amount of force created by this tension can vary. This enables the same muscles to move very light objects and very heavy objects. In individual muscle fibers, the amount of tension produced depends on the cross-sectional area of the muscle fiber and the frequency of neural stimulation. The number of cross-bridges formed between actin and myosin determines the amount of tension that a muscle fiber can produce. Cross-bridges can only form where thick and thin filaments overlap, allowing myosin to bind to actin. If more cross-bridges are formed, more myosin will pull on actin, and more tension will be produced. The ideal length of a sarcomere during production of maximal tension occurs when thick and thin filaments overlap to the greatest degree. If a sarcomere at rest is stretched past an ideal resting length, thick and thin filaments do not overlap to the greatest degree, and fewer cross-bridges can form. This results in fewer myosin heads pulling on actin, and less tension is produced. As a sarcomere is shortened, the zone of overlap is reduced as the thin filaments reach the H zone, which is composed of myosin tails. Because it is myosin heads that form cross-bridges, actin will not bind to myosin in this zone, reducing the tension produced by this myofiber. If the sarcomere is shortened even more, thin filaments begin to overlap with each other—reducing cross-bridge formation even further, and producing even less tension. Conversely, if the sarcomere is stretched to the point at which thick and thin filaments do not overlap at all, no cross-bridges are formed and no tension is produced. This amount of stretching does not usually occur because accessory proteins, internal sensory nerves, and connective tissue oppose extreme stretching. The primary variable determining force production is the number of myofibers within the muscle that receive an action potential from the neuron that controls that fiber. When using the biceps to pick up a pencil, the motor cortex of the brain only signals a few neurons of the biceps, and only a few myofibers respond. In vertebrates, each myofiber responds fully if stimulated. When picking up a piano, the motor cortex signals all of the neurons in the biceps and every myofiber participates. This is close to the maximum force the muscle can produce. As mentioned above, increasing the frequency of action potentials (the number of signals per second) can increase the force a bit more, because the troponin is flooded with calcium.
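The length–tension relationship just described can also be summarized numerically. The sketch below is a toy piecewise-linear model; the breakpoint sarcomere lengths are typical values from the muscle physiology literature, not numbers given in this chapter, and are assumed here only to make the qualitative pattern concrete. For scale, since the text gives roughly 10 nm of filament sliding per power stroke, shortening each half-sarcomere by 100 nm would take on the order of ten cross-bridge cycles per myosin head.

```python
# A toy piecewise-linear model of the sarcomere length-tension relationship
# described above. The breakpoint lengths (in micrometers) are assumed
# typical values, NOT figures from this chapter.

def relative_tension(sarcomere_length_um: float) -> float:
    """Approximate fraction of maximal tension at a given sarcomere length."""
    length = sarcomere_length_um
    if length <= 1.3 or length >= 3.6:
        return 0.0  # over-shortened, or stretched past all filament overlap
    if length < 2.0:
        # thin filaments begin to collide and overlap each other
        return (length - 1.3) / (2.0 - 1.3)
    if length <= 2.2:
        return 1.0  # plateau: greatest overlap, maximal cross-bridge number
    # overlap (and therefore tension) declines as the sarcomere is stretched
    return (3.6 - length) / (3.6 - 2.2)

for length_um in (1.2, 1.6, 2.1, 2.8, 3.6):
    print(f"{length_um:.1f} um -> {relative_tension(length_um):.2f} of maximal tension")
```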
american_government
Summary 9.1 What Are Parties and How Did They Form? Political parties are vital to the operation of any democracy. Early U.S. political parties were formed by national elites who disagreed over how to divide power between the national and state governments. The system we have today, divided between Republicans and Democrats, had consolidated by 1860. A number of minor parties have attempted to challenge the status quo, but they have largely failed to gain traction despite having an occasional impact on the national political scene. 9.2 The Two-Party System Electoral rules, such as the use of plurality voting, have helped turn the United States into a two-party system dominated by the Republicans and the Democrats. Several minor parties have attempted to challenge the status quo, but usually they have only been spoilers that served to divide party coalitions. But this doesn’t mean the party system has always been stable; party coalitions have shifted several times in the past two hundred years. 9.3 The Shape of Modern Political Parties Political parties exist primarily as a means to help candidates get elected. The United States thus has a relatively loose system of party identification and a bottom-up approach to party organization structure built around elections. Lower levels, such as the precinct or county, take on the primary responsibility for voter registration and mobilization, whereas the higher state and national levels are responsible for electing major candidates and shaping party ideology. The party in government is responsible for implementing the policies on which its candidates run, but elected officials also worry about winning reelection. 9.4 Divided Government and Partisan Polarization A divided government makes it difficult for elected officials to achieve their policy goals. This problem has gotten worse as U.S. political parties have become increasingly polarized over the past several decades. They are both more likely to fight with each other and more internally divided than just a few decades ago. Some possible causes include sorting and improved gerrymandering, although neither alone offers a completely satisfactory explanation. But whatever the cause, polarization is having negative short-term consequences on American politics.
Chapter Outline 9.1 What Are Parties and How Did They Form? 9.2 The Two-Party System 9.3 The Shape of Modern Political Parties 9.4 Divided Government and Partisan Polarization Introduction In 2012, Barack Obama accepted his second nomination to lead the Democratic Party into the presidential election ( Figure 9.1 ). During his first term, he had been attacked by pundits for his failure to convince congressional Republicans to work with him. Despite that, he was wildly popular in his own party, and voters reelected him by a comfortable margin. His second term seemed to go no better, however, with disagreements between the parties resulting in government shutdowns and the threat of credit defaults. Yet just a few decades ago, then-president Dwight D. Eisenhower was criticized for failing to create a clear vision for his Republican Party, and Congress was lampooned for what was deemed a lack of real conflict over important issues. Political parties, it seems, can never get it right—they are either too polarizing or too noncommittal. While people love to criticize political parties, the reality is that the modern political system could not exist without them. This chapter will explore why the party system may be the most important component of any true democracy. What are political parties? Why do they form, and why has the United States typically had only two? Why have political parties become so highly structured? Finally, why does it seem that parties today are more polarized than they have been in the past?
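The summary’s point that plurality (winner-take-all) rules tend to shut minor parties out, while proportional rules award them seats, can be made concrete with a small sketch. The vote totals below are invented for illustration, and the D’Hondt divisor method shown is only one common way of implementing the proportional representation discussed in this chapter’s materials; the chapter itself does not specify an allocation formula.

```python
# Invented vote totals for three hypothetical parties, used to contrast
# winner-take-all with proportional (D'Hondt) seat allocation.

votes = {"Party A": 45_000, "Party B": 40_000, "Party C": 15_000}
SEATS = 10

# Plurality / winner-take-all: the largest party wins the district's single
# seat, so a party with 15% of the vote can end up with no representation.
plurality_winner = max(votes, key=votes.get)
print(f"Plurality: {plurality_winner} takes the seat; the others get nothing.")

# D'Hondt method: repeatedly award the next seat to the party with the
# highest quotient votes / (seats_already_won + 1).
seats = {party: 0 for party in votes}
for _ in range(SEATS):
    next_winner = max(votes, key=lambda p: votes[p] / (seats[p] + 1))
    seats[next_winner] += 1
print(f"Proportional (D'Hondt) over {SEATS} seats: {seats}")
# With these shares (45/40/15), the allocation comes out 5/4/1, closely
# tracking each party's share of the vote.
```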
[ { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "Soon after the United States emerged from the Revolutionary War , however , a rift began to emerge between two groups that had very different views about the future direction of U . S . politics . Thus , from the very beginning of its history , the United States has had a system of government dominated by two different philosophies . Federalists , who were largely responsible for drafting and ratifying the U . S . Constitution , generally favored the idea of a stronger , more centralized republic that had greater control over regulating the economy . 1 Anti-Federalists preferred a more confederate system built on state equality and autonomy . 2 The Federalist faction , led by Alexander Hamilton , largely dominated the government in the years immediately after the Constitution was ratified . <hl> Included in the Federalists was President George Washington , who was initially against the existence of parties in the United States . <hl> <hl> When Washington decided to exit politics and leave office , he warned of the potential negative effects of parties in his farewell address to the nation , including their potentially divisive nature and the fact that they might not always focus on the common good but rather on partisan ends . <hl> However , members of each faction quickly realized that they had a vested interest not only in nominating and electing a president who shared their views , but also in winning other elections . Two loosely affiliated party coalitions , known as the Federalists and the Democratic-Republicans , soon emerged . The Federalists succeeded in electing their first leader , John Adams , to the presidency in 1796 , only to see the Democratic-Republicans gain victory under Thomas Jefferson four years later in 1800 .", "hl_sentences": "Included in the Federalists was President George Washington , who was initially against the existence of parties in the United States . When Washington decided to exit politics and leave office , he warned of the potential negative effects of parties in his farewell address to the nation , including their potentially divisive nature and the fact that they might not always focus on the common good but rather on partisan ends .", "question": { "cloze_format": "The supporter of federalism who warned people about the dangers of political parties is ___ .", "normal_format": "Which supporter of federalism warned people about the dangers of political parties?", "question_choices": [ "John Adams", "Alexander Hamilton", "James Madison", "George Washington" ], "question_id": "fs-id1171471020948", "question_text": "Which supporter of federalism warned people about the dangers of political parties?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "Whig Party" }, "bloom": null, "hl_context": "Why do we have two parties ? The two-party system came into being because the structure of U . S . elections , with one seat tied to a geographic district , tends to lead to dominance by two major political parties . Even when there are other options on the ballot , most voters understand that minor parties have no real chance of winning even a single office . Hence , they vote for candidates of the two major parties in order to support a potential winner . Of the 535 members of the House and Senate , only a handful identify as something other than Republican or Democrat . Third parties have fared no better in presidential elections . 
No third-party candidate has ever won the presidency . <hl> Some historians or political scientists might consider Abraham Lincoln to have been such a candidate , but in 1860 , the Republicans were a major party that had subsumed members of earlier parties , such as the Whig Party , and they were the only major party other than the Democratic Party . <hl> In 1948 , two new third parties appeared on the political scene . <hl> Henry A . Wallace , a vice president under Franklin Roosevelt , formed a new Progressive Party , which had little in common with the earlier Progressive Party . <hl> Wallace favored racial desegregation and believed that the United States should have closer ties to the Soviet Union . Wallace ’ s campaign was a failure , largely because most people believed his policies , including national healthcare , were too much like those of communism , and this party also vanished . <hl> The other third party , the States ’ Rights Democrats , also known as the Dixiecrats , were white , southern Democrats who split from the Democratic Party when Harry Truman , who favored civil rights for African Americans , became the party ’ s nominee for president . <hl> The Dixiecrats opposed all attempts by the federal government to end segregation , extend voting rights , prohibit discrimination in employment , or otherwise promote social equality among races . 15 They remained a significant party that threatened Democratic unity throughout the 1950s and 1960s . <hl> Other examples of third parties in the United States include the American Independent Party , the Libertarian Party , United We Stand America , the Reform Party , and the Green Party . <hl>", "hl_sentences": "Some historians or political scientists might consider Abraham Lincoln to have been such a candidate , but in 1860 , the Republicans were a major party that had subsumed members of earlier parties , such as the Whig Party , and they were the only major party other than the Democratic Party . Henry A . Wallace , a vice president under Franklin Roosevelt , formed a new Progressive Party , which had little in common with the earlier Progressive Party . The other third party , the States ’ Rights Democrats , also known as the Dixiecrats , were white , southern Democrats who split from the Democratic Party when Harry Truman , who favored civil rights for African Americans , became the party ’ s nominee for president . Other examples of third parties in the United States include the American Independent Party , the Libertarian Party , United We Stand America , the Reform Party , and the Green Party .", "question": { "cloze_format": "The ___ was not a third-party challenger.", "normal_format": "Which of the following was not a third-party challenger?", "question_choices": [ "Whig Party", "Progressive Party", "Dixiecrats", "Green Party" ], "question_id": "fs-id1171473177075", "question_text": "Which of the following was not a third-party challenger?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "A second way to increase the number of parties in the U . S . system is to abandon the winner-take-all approach . <hl> Rather than allowing voters to pick their representatives directly , many democracies have chosen to have voters pick their preferred party and allow the party to select the individuals who serve in government . <hl> <hl> The argument for this method is that it is ultimately the party and not the individual who will influence policy . 
<hl> <hl> Under this model of proportional representation , legislative seats are allocated to competing parties based on the total share of votes they receive in the election . <hl> As a result , any given election can have multiple winners , and voters who might prefer a smaller party over a major one have a chance to be represented in government ( Figure 9.6 ) .", "hl_sentences": "Rather than allowing voters to pick their representatives directly , many democracies have chosen to have voters pick their preferred party and allow the party to select the individuals who serve in government . The argument for this method is that it is ultimately the party and not the individual who will influence policy . Under this model of proportional representation , legislative seats are allocated to competing parties based on the total share of votes they receive in the election .", "question": { "cloze_format": "The type of electoral system in which voters select the party of their choice rather than an individual candidate is ___.", "normal_format": "In which type of electoral system do voters select the party of their choice rather than an individual candidate?", "question_choices": [ "proportional representation", "first-past-the-post", "plurality voting", "majoritarian voting" ], "question_id": "fs-id1171474432483", "question_text": "In which type of electoral system do voters select the party of their choice rather than an individual candidate?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "third parties" }, "bloom": null, "hl_context": "<hl> Given the obstacles to the formation of third parties , it is unlikely that serious challenges to the U . S . two-party system will emerge . <hl> But this does not mean that we should view it as entirely stable either . The U . S . party system is technically a loose organization of fifty different state parties and has undergone several considerable changes since its initial consolidation after the Civil War . <hl> Third-party movements may have played a role in some of these changes , but all resulted in a shifting of party loyalties among the U . S . electorate . <hl>", "hl_sentences": "Given the obstacles to the formation of third parties , it is unlikely that serious challenges to the U . S . two-party system will emerge . Third-party movements may have played a role in some of these changes , but all resulted in a shifting of party loyalties among the U . S . electorate .", "question": { "cloze_format": "___ do not represent a major contributing factor in party realignment.", "normal_format": "Which of the following does not represent a major contributing factor in party realignment?", "question_choices": [ "demographic shifts", "changes in key issues", "changes in party strategies", "third parties" ], "question_id": "fs-id1171474419948", "question_text": "Which of the following does not represent a major contributing factor in party realignment?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "But most people are aware of the presence and activity of the national party organizations for several reasons . First , many Americans , especially young people , are more interested in the topics discussed at the national level than at the state or local level . <hl> According to John Green of the Ray C . 
Bliss Institute of Applied Politics , “ Local elections tend to be about things like sewers , and roads and police protection — which are not as dramatic an issue as same-sex marriage or global warming or international affairs . ” 41 Presidential elections and the behavior of the U . S . Congress are also far more likely to make the news broadcasts than the activities of county commissioners , and the national-level party organization is mostly responsible for coordinating the activities of participants at this level . <hl> The national party is a fundraising army for presidential candidates and also serves a key role in trying to coordinate and direct the efforts of the House and Senate . For this reason , its leadership is far more likely to become visible to media consumers , whether they intend to vote or not .", "hl_sentences": "According to John Green of the Ray C . Bliss Institute of Applied Politics , “ Local elections tend to be about things like sewers , and roads and police protection — which are not as dramatic an issue as same-sex marriage or global warming or international affairs . ” 41 Presidential elections and the behavior of the U . S . Congress are also far more likely to make the news broadcasts than the activities of county commissioners , and the national-level party organization is mostly responsible for coordinating the activities of participants at this level .", "question": { "cloze_format": "The level of party organization that is most responsible for helping the party’s nominee win the presidency is ___.", "normal_format": "Which level of party organization is most responsible for helping the party’s nominee win the presidency?", "question_choices": [ "precinct", "county", "state", "national" ], "question_id": "fs-id1171474142262", "question_text": "Which level of party organization is most responsible for helping the party’s nominee win the presidency?" }, "references_are_paraphrase": null } ]
9
9.1 What Are Parties and How Did They Form?

Learning Objectives

By the end of this section, you will be able to:
- Describe political parties and what they do
- Differentiate political parties from interest groups
- Explain how U.S. political parties formed

At some point, most of us have found ourselves part of a group trying to solve a problem, like picking a restaurant or movie to attend, or completing a big project at school or work. Members of the group probably had various opinions about what should be done. Some may have even refused to help make the decision or to follow it once it had been made. Still others may have been willing to follow along but were less interested in contributing to a workable solution. Because of this disagreement, at some point, someone in the group had to find a way to make a decision, negotiate a compromise, and ultimately do the work needed for the group to accomplish its goals. This kind of collective action problem is very common in societies, as groups try to solve problems or distribute scarce resources. In modern U.S. politics, such problems are usually solved by two important types of organizations: interest groups and political parties. There are many interest groups, all with opinions about what should be done and a desire to influence policy. Because they are usually not officially affiliated with any political party, they generally have no trouble working with either of the major parties. But at some point, a society must find a way of taking all these opinions and turning them into solutions to real problems. That is where political parties come in. Essentially, political parties are groups of people with similar interests who work together to create and implement policies. They do this by winning elections in order to gain control of the government. Party platforms guide members of Congress in drafting legislation. Parties guide proposed laws through Congress and inform party members how they should vote on important issues. Political parties also nominate candidates to run for state government, Congress, and the presidency. Finally, they coordinate political campaigns and mobilize voters.

POLITICAL PARTIES AS UNIQUE ORGANIZATIONS

In Federalist No. 10, written in the late eighteenth century, James Madison noted that the formation of self-interested groups, which he called factions, was inevitable in any society, as individuals started to work together to protect themselves from the government. Interest groups and political parties are two of the most easily identified forms of factions in the United States. These groups are similar in that they are both mediating institutions responsible for communicating public preferences to the government. They are not themselves government institutions in a formal sense. Neither is directly mentioned in the U.S. Constitution, nor do they have any real, legal authority to influence policy. But whereas interest groups often work indirectly to influence our leaders, political parties are organizations that try to directly influence public policy through their members, who seek to win and hold public office. Parties accomplish this by identifying and aligning sets of issues that are important to voters in the hopes of gaining support during elections; their positions on these critical issues are often presented in documents known as a party platform (Figure 9.2), which is adopted at each party's presidential nominating convention every four years.
If successful, a party can create a large enough electoral coalition to gain control of the government. Once in power, the party can then deliver the policy preferences its voters and elites have chosen by placing its partisans in government. In this respect, parties offer the electorate clear choices by setting themselves in sharp contrast to their opposition.

Link to Learning

You can read the full platform of the Republican Party and the Democratic Party at their respective websites.

Winning elections and implementing policy would be hard enough in simple political systems, but in a country as complex as the United States, political parties must take on great responsibilities to win elections and coordinate behavior across the many local, state, and national governing bodies. Indeed, political differences between states and local areas can add considerable complexity. If a party stakes out issue positions on which few people agree and therefore builds too narrow a coalition of voter support, that party may find itself marginalized. But if the party takes too broad a position on issues, it might find itself in a situation where the members of the party disagree with one another, making it difficult to pass legislation, even if the party can secure victory. It should come as no surprise that the story of U.S. political parties largely mirrors the story of the United States itself. The United States has seen sweeping changes to its size, its relative power, and its social and demographic composition. These changes have been mirrored by the political parties as they have sought to shift their coalitions to establish and maintain power across the nation and as party leadership has changed. As you will learn later, this also means that the structure and behavior of modern parties largely parallel the social, demographic, and geographic divisions within the United States today. To understand how this has happened, we look at the origins of the U.S. party system.

HOW POLITICAL PARTIES FORMED

National political parties as we understand them today did not really exist in the United States during the early years of the republic. Most politics during the time of the nation's founding were local in nature and based on elite politics, limited suffrage (or the ability to vote in elections), and property ownership. Residents of the various colonies, and later of the various states, were far more interested in events in their state legislatures than in those occurring at the national level or later in the nation's capital. To the extent that national issues did exist, they were largely limited to collective security efforts to deal with external rivals, such as the British or the French, and with perceived internal threats, such as conflicts with Native Americans. Soon after the United States emerged from the Revolutionary War, however, a rift began to emerge between two groups that had very different views about the future direction of U.S. politics. Thus, from the very beginning of its history, the United States has had a system of government dominated by two different philosophies. Federalists, who were largely responsible for drafting and ratifying the U.S. Constitution, generally favored the idea of a stronger, more centralized republic that had greater control over regulating the economy. 1 Anti-Federalists preferred a more confederate system built on state equality and autonomy.
2 The Federalist faction, led by Alexander Hamilton, largely dominated the government in the years immediately after the Constitution was ratified. Included in the Federalists was President George Washington, who was initially against the existence of parties in the United States. When Washington decided to exit politics and leave office, he warned of the potential negative effects of parties in his farewell address to the nation, including their potentially divisive nature and the fact that they might not always focus on the common good but rather on partisan ends. However, members of each faction quickly realized that they had a vested interest not only in nominating and electing a president who shared their views, but also in winning other elections. Two loosely affiliated party coalitions, known as the Federalists and the Democratic-Republicans, soon emerged. The Federalists succeeded in electing their first leader, John Adams, to the presidency in 1796, only to see the Democratic-Republicans gain victory under Thomas Jefferson four years later in 1800.

Milestone

The "Revolution of 1800": Uniting the Executive Branch under One Party

When the U.S. Constitution was drafted, its authors were certainly aware that political parties existed in other countries (like Great Britain), but they hoped to avoid them in the United States. They felt the importance of states in the U.S. federal structure would make it difficult for national parties to form. They also hoped that having a college of electors vote for the executive branch, with the top two vote-getters becoming president and vice president, would discourage the formation of parties. Their system worked for the first two presidential elections, when essentially all the electors voted for George Washington to serve as president. But by 1796, the Federalist and Anti-Federalist camps had organized into electoral coalitions. The Anti-Federalists joined with many others active in the process to become known as the Democratic-Republicans. The Federalist John Adams won the Electoral College vote, but his authority was undermined when the vice presidency went to Democratic-Republican Thomas Jefferson, who finished second. Four years later, the Democratic-Republicans managed to avoid this outcome by coordinating the electors to vote for their top two candidates. But when the vote ended in a tie, it was ultimately left to Congress to decide who would be the third president of the United States (Figure 9.3). In an effort to prevent a similar outcome in the future, Congress and the states voted to ratify the Twelfth Amendment, which went into effect in 1804. This amendment changed the rules so that the president and vice president would be selected through separate elections within the Electoral College, and it altered the method that Congress used to fill the offices in the event that no candidate won a majority. The amendment essentially endorsed the new party system and helped prevent future controversies. It also served as an early effort by the two parties to collude to make it harder for an outsider to win the presidency. Does the process of selecting the executive branch need to be reformed so that the people elect the president and vice president directly, rather than through the Electoral College? Should the people vote separately on each office rather than voting for both at the same time? Explain your reasoning.
Growing regional tensions eroded the Federalist Party's ability to coordinate elites, and it eventually collapsed following its opposition to the War of 1812. 3 The Democratic-Republican Party, on the other hand, eventually divided over whether national resources should be focused on economic and mercantile development, such as tariffs on imported goods and government funding of internal improvements like roads and canals, or on promoting populist issues that would help the "common man," such as reducing or eliminating state property requirements that had prevented many men from voting. 4 In the election of 1824, numerous candidates contended for the presidency, all members of the Democratic-Republican Party. Andrew Jackson won more popular votes and more votes in the Electoral College than any other candidate. However, because he did not win the majority (more than half) of the available electoral votes, the election was decided by the House of Representatives, as required by the Twelfth Amendment. The Twelfth Amendment limited the House's choice to the three candidates with the greatest number of electoral votes. Thus, Andrew Jackson, with 99 electoral votes, found himself in competition with only John Quincy Adams, the second-place finisher with 84 electoral votes, and William H. Crawford, who had come in third with 41. The fourth-place finisher, Henry Clay, who was no longer in contention, had won 37 electoral votes. Clay strongly disliked Jackson, and his ideas on government support for tariffs and internal improvements were similar to those of Adams. Clay thus gave his support to Adams, who was chosen on the first ballot. Jackson considered the actions of Clay and Adams, the son of the Federalist president John Adams, to be an unjust triumph of supporters of the elite and referred to it as "the corrupt bargain." 5 This marked the beginning of what historians call the Second Party System (the first parties had been the Federalists and the Jeffersonian Republicans), with the splitting of the Democratic-Republicans and the formation of two new political parties. One half, called simply the Democratic Party, was the party of Jackson; it continued to advocate for the common people by championing westward expansion and opposing a national bank. The branch of the Democratic-Republicans that believed that the national government should encourage economic (primarily industrial) development was briefly known as the National Republicans and later became the Whig Party. 6 In the election of 1828, Democrat Andrew Jackson was triumphant. Three times as many people voted in 1828 as had in 1824, and most cast their ballots for him. 7 The formation of the Democratic Party marked an important shift in U.S. politics. Rather than being built largely to coordinate elite behavior, the Democratic Party worked to organize the electorate by taking advantage of state-level laws that had extended suffrage from male property owners to nearly all white men. 8 This change marked the birth of what is often considered the first modern political party in any democracy in the world. 9 It also dramatically changed the way party politics was, and still is, conducted. For one thing, this new party organization was built to include structures that focused on organizing and mobilizing voters for elections at all levels of government. The party also perfected an existing spoils system, in which support for the party during elections was rewarded with jobs in the government bureaucracy after victory.
10 Many of these positions were given to party bosses and their friends. These men were the leaders of political machines, organizations that secured votes for the party's candidates or supported the party in other ways. Perhaps more importantly, this election-focused organization also sought to maintain power by creating a broader coalition and thereby expanding the range of issues upon which the party was constructed. 11

Link to Learning

Each of the two main U.S. political parties today—the Democrats and the Republicans—maintains an extensive website with links to its affiliated statewide organizations, which in turn often maintain links to the party's county organizations. By comparison, here are websites for the Green Party and the Libertarian Party, two other parties active in the United States today.

The Democratic Party emphasized personal politics, which focused on building direct relationships with voters rather than on promoting specific issues. This party dominated national politics from Andrew Jackson's presidential victory in 1828 until the mid-1850s, when regional tensions began to threaten the nation's very existence. The growing power of industrialists, who preferred greater national authority, combined with increasing tensions between the northern and southern states over slavery, led to the rise of the Republican Party and its leader Abraham Lincoln in the election of 1860, while the Democratic Party dominated in the South. Like the Democrats, the Republicans also began to utilize a mass approach to party design and organization. Their opposition to the expansion of slavery, and their role in helping to stabilize the Union during Reconstruction, made them the dominant player in national politics for the next several decades. 12 The Democratic and Republican parties have remained the two dominant players in the U.S. party system since the Civil War (1861–1865). That does not mean, however, that the system has been stagnant. Every political actor and every citizen has the ability to determine for him- or herself whether one of the two parties meets his or her needs and provides an appealing set of policy options, or whether another option is preferable. At various points in the past 170 years, elites and voters have sought to create alternatives to the existing party system. Political parties that are formed as alternatives to the Republican and Democratic parties are known as third parties, or minor parties (Figure 9.4). In 1892, a third party known as the Populist Party formed in reaction to what its constituents perceived as the domination of U.S. society by big business and a decline in the power of farmers and rural communities. The Populist Party called for the regulation of railroads, an income tax, and the popular election of U.S. senators, who at this time were chosen by state legislatures and not by ordinary voters. 13 The party's candidate in the 1892 elections, James B. Weaver, did not perform as well as the two main party candidates, and, in the presidential election of 1896, the Populists supported the Democratic candidate William Jennings Bryan. Bryan lost, and the Populists once again nominated their own presidential candidates in 1900, 1904, and 1908. The party disappeared from the national scene after 1908, but its ideas were similar to those of the Progressive Party, a new political party created in 1912.
In 1912, former Republican president Theodore Roosevelt attempted to form a third party, known as the Progressive Party, as an alternative to the more business-minded Republicans. The Progressives sought to correct the many problems that had arisen as the United States transformed itself from a rural, agricultural nation into an increasingly urbanized, industrialized country dominated by big business interests. Among the reforms that the Progressive Party called for in its 1912 platform were women's suffrage, an eight-hour workday, and workers' compensation. The party also favored some of the same reforms as the Populist Party, such as the direct election of U.S. senators and an income tax, although Populists tended to be farmers while the Progressives were from the middle class. In general, Progressives sought to make government more responsive to the will of the people and to end political corruption in government. They wished to break the power of party bosses and political machines, and called upon states to pass laws allowing voters to vote directly on proposed legislation, propose new laws, and recall from office incompetent or corrupt elected officials. The Progressive Party largely disappeared after 1916, and most members returned to the Republican Party. 14 The party enjoyed a brief resurgence in 1924, when Robert "Fighting Bob" La Follette ran unsuccessfully for president under the Progressive banner. In 1948, two new third parties appeared on the political scene. Henry A. Wallace, a vice president under Franklin Roosevelt, formed a new Progressive Party, which had little in common with the earlier Progressive Party. Wallace favored racial desegregation and believed that the United States should have closer ties to the Soviet Union. Wallace's campaign was a failure, largely because most people believed his policies, including national healthcare, were too much like those of communism, and this party also vanished. The other third party, the States' Rights Democrats, also known as the Dixiecrats, were white, southern Democrats who split from the Democratic Party when Harry Truman, who favored civil rights for African Americans, became the party's nominee for president. The Dixiecrats opposed all attempts by the federal government to end segregation, extend voting rights, prohibit discrimination in employment, or otherwise promote social equality among races. 15 They remained a significant party that threatened Democratic unity throughout the 1950s and 1960s. Other examples of third parties in the United States include the American Independent Party, the Libertarian Party, United We Stand America, the Reform Party, and the Green Party. None of these alternatives to the two major political parties had much success at the national level, and most are no longer viable parties. All faced the same fate. Formed by charismatic leaders, each championed a relatively narrow set of causes and failed to gain broad support among the electorate. Once their leaders had been defeated or discredited, the party structures that were built to contest elections collapsed. And within a few years, most of their supporters were eventually pulled back into one of the existing parties. To be sure, some of these parties had an electoral impact. For example, the Progressive Party pulled enough votes away from the Republicans to hand the 1912 election to the Democrats.
Thus, the third-party rival's principal accomplishment was helping its least-preferred major party win, usually at the short-term expense of the very issue it championed. In the long run, however, many third parties have brought important issues to the attention of the major parties, which then incorporated these issues into their platforms. Understanding why this is the case is an important next step in learning about the issues and strategies of the modern Republican and Democratic parties. In the next section, we look at why the United States has historically been dominated by only two political parties.

9.2 The Two-Party System

Learning Objectives

By the end of this section, you will be able to:
- Describe the effects of winner-take-all elections
- Compare plurality and proportional representation
- Describe the institutional, legal, and social forces that limit the number of parties
- Discuss the concepts of party alignment and realignment

One of the cornerstones of a vibrant democracy is citizens' ability to influence government through voting. In order for that influence to be meaningful, citizens must send clear signals to their leaders about what they wish the government to do. It only makes sense, then, that a democracy will benefit if voters have several clearly differentiated options available to them at the polls on Election Day. Having these options means voters can select a candidate who more closely represents their own preferences on the important issues of the day. It also gives individuals who are considering voting a reason to participate. After all, you are more likely to vote if you care about who wins and who loses. The existence of two major parties, especially in our present era of strong parties, leads to sharp distinctions between the candidates and between the party organizations. Why do we have two parties? The two-party system came into being because the structure of U.S. elections, with one seat tied to a geographic district, tends to lead to dominance by two major political parties. Even when there are other options on the ballot, most voters understand that minor parties have no real chance of winning even a single office. Hence, they vote for candidates of the two major parties in order to support a potential winner. Of the 535 members of the House and Senate, only a handful identify as something other than Republican or Democrat. Third parties have fared no better in presidential elections. No third-party candidate has ever won the presidency. Some historians or political scientists might consider Abraham Lincoln to have been such a candidate, but in 1860, the Republicans were a major party that had subsumed members of earlier parties, such as the Whig Party, and they were the only major party other than the Democratic Party.

ELECTION RULES AND THE TWO-PARTY SYSTEM

A number of reasons have been suggested to explain why the structure of U.S. elections has resulted in a two-party system. Most of the blame has been placed on the process used to select representatives. First, most elections at the state and national levels are winner-take-all: The candidate who receives the greatest overall number of votes wins. Winner-take-all elections with one representative elected for one geographic district allow voters to develop a personal relationship with "their" representative to the government. They know exactly whom to blame, or thank, for the actions of that government. But these elections also tend to limit the number of people who run for office.
Otherwise-qualified candidates might not stand for election if they feel the incumbent or another candidate has an early advantage in the race. And since voters do not like to waste votes, third parties must convince voters they have a real chance of winning races before voters will take them seriously. This is a tall order given the vast resources and mobilization tools available to the existing parties, especially if an incumbent is one of the competitors. In turn, the likelihood that third-party challengers will lose an election bid makes it more difficult to raise funds to support later attempts. 16 Winner-take-all systems of electing candidates to office, which exist in several countries other than the United States, require that the winner receive either the majority of votes or a plurality of the votes. U.S. elections are based on plurality voting. Plurality voting, commonly referred to as first-past-the-post, is based on the principle that the individual candidate with the most votes wins, whether or not he or she gains a majority (51 percent or greater) of the total votes cast. For instance, Abraham Lincoln won the presidency in 1860 even though he clearly lacked majority support given the number of candidates in the race. In 1860, four candidates competed for the presidency: Lincoln, a Republican; two Democrats, one from the northern wing of the party and one from the southern wing; and a member of the newly formed Constitutional Union Party, a southern party that wished to prevent the nation from dividing over the issue of slavery. Votes were split among all four parties, and Lincoln became president with only 40 percent of the vote, not a majority of votes cast but more than any of the other three candidates had received, and enough to give him a majority in the Electoral College, the body that ultimately decides presidential elections. Plurality voting has been justified as the simplest and most cost-effective method for identifying a victor in a democracy. A single election can be held on a single day, and the victor of the competition is easily selected. On the other hand, systems in which people vote for a single candidate in an individual district often cost more money because drawing district lines and registering voters according to district is expensive and cumbersome. 17 In a system in which individual candidates compete for individual seats representing unique geographic districts, a candidate must receive a fairly large number of votes in order to win. A political party that appeals to only a small percentage of voters will always lose to a party that is more popular. 18 Because second-place (or lower) finishers will receive no reward for their efforts, those parties that do not attract enough supporters to finish first at least some of the time will eventually disappear because their supporters realize they have no hope of achieving success at the polls. 19 The failure of third parties to win, and the possibility that they will draw votes away from the party the voter had favored before—resulting in a win for the party the voter liked least—makes people hesitant to vote for the third party's candidates a second time. This has been the fate of all U.S. third parties—the Populist Party, the Progressives, the Dixiecrats, the Reform Party, and others. In a proportional electoral system, however, parties advertise who is on their candidate list and voters pick a party.
Then, legislative seats are doled out to the parties based on the proportion of support each party receives. While the Green Party in the United States might not win a single congressional seat in some years thanks to plurality voting, in a proportional system, it stands a chance to get a few seats in the legislature regardless. For example, assume the Green Party gets 7 percent of the vote. In the United States, 7 percent will never be enough to win a single seat, shutting the Green candidates out of Congress entirely, whereas in a proportional system, the Green Party will get 7 percent of the total number of legislative seats available. Hence, it could get a foothold for its issues and perhaps increase its support over time. But with plurality voting, it doesn't stand a chance. Third parties, often born of frustration with the current system, attract supporters from one or both of the existing parties during an election but fail to attract enough votes to win. After the election is over, supporters experience remorse when their least-favorite candidate wins instead. For example, in the 2000 election, Ralph Nader ran for president as the candidate of the Green Party. Nader, a longtime consumer activist concerned with environmental issues and social justice, attracted many votes from people who usually voted for Democratic candidates. This has caused some to claim that Democratic nominee Al Gore lost the 2000 election to Republican George W. Bush because Nader won Democratic votes in Florida that might otherwise have gone to Gore (Figure 9.5). 20 Abandoning plurality voting, even if the winner-take-all election were kept, would almost certainly increase the number of parties from which voters could choose. The easiest switch would be to a majoritarian voting scheme, in which a candidate wins only if he or she enjoys the support of a majority of voters. If no candidate wins a majority in the first round of voting, a run-off election is held among the top contenders. Some states conduct their primary elections within the two major political parties in this way. A second way to increase the number of parties in the U.S. system is to abandon the winner-take-all approach. Rather than allowing voters to pick their representatives directly, many democracies have chosen to have voters pick their preferred party and allow the party to select the individuals who serve in government. The argument for this method is that it is ultimately the party and not the individual who will influence policy. Under this model of proportional representation, legislative seats are allocated to competing parties based on the total share of votes they receive in the election. As a result, any given election can have multiple winners, and voters who might prefer a smaller party over a major one have a chance to be represented in government (Figure 9.6). One possible way to implement proportional representation in the United States is to allocate legislative seats based on the national level of support for each party's presidential candidate, rather than on the results of individual races. If this method had been used in the 1996 elections, 8 percent of the seats in Congress would have gone to Ross Perot's Reform Party because he won 8 percent of the votes cast. Even though Perot himself lost, his supporters would have been rewarded for their efforts with representatives who had a real voice in government. And Perot's party's chances of survival would have greatly increased.
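To make the two counting rules concrete, here is a minimal sketch in Python with made-up party names and vote totals (none of these figures come from the text). The largest-remainder allocation shown is one common method that proportional systems use to turn vote shares into seats; the chapter itself does not specify a particular allocation formula, so treat this as an illustration rather than a description of any real electoral law.

```python
# Illustrative sketch: the same votes counted under two electoral rules.
# All party names and vote totals below are hypothetical.

def plurality_winner(district_votes):
    """Winner-take-all: the candidate or party with the most votes takes
    the seat, whether or not it has a majority."""
    return max(district_votes, key=district_votes.get)

def proportional_seats(party_votes, total_seats):
    """Largest-remainder allocation: each party first receives the whole-number
    part of its exact seat share; any leftover seats go to the parties with the
    largest fractional remainders."""
    total_votes = sum(party_votes.values())
    exact = {p: v / total_votes * total_seats for p, v in party_votes.items()}
    seats = {p: int(share) for p, share in exact.items()}
    leftover = total_seats - sum(seats.values())
    for p in sorted(exact, key=lambda q: exact[q] - seats[q], reverse=True)[:leftover]:
        seats[p] += 1
    return seats

votes = {"Party A": 46_000, "Party B": 47_000, "Party C": 7_000}

# Under plurality, the 7 percent party wins nothing in this district.
print(plurality_winner(votes))         # Party B

# Under proportional representation, 7 percent of the vote earns
# roughly 7 percent of a 100-seat legislature.
print(proportional_seats(votes, 100))  # {'Party A': 46, 'Party B': 47, 'Party C': 7}
```

With these toy numbers, plurality leaves the 7 percent party with nothing, while proportional representation gives it roughly 7 percent of a 100-seat chamber, which is the Green Party and Reform Party arithmetic from the passage above in miniature.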
Electoral rules are probably not the only reason the United States has a two-party system. We need only look at the number of parties in the British or Canadian systems, both of which are winner-take-all plurality systems like that in the United States, to see that it is possible to have more than two parties while still directly electing representatives. The two-party system is also rooted in U.S. history. The first parties, the Federalists and the Jeffersonian Republicans, disagreed about how much power should be given to the federal government, and differences over other important issues further strengthened this divide. Over time, these parties evolved into others by inheriting, for the most part, the general ideological positions and constituents of their predecessors, but no more than two major parties ever formed. Instead of parties arising based on region or ethnicity, various regions and ethnic groups sought a place in one of the two major parties. Scholars of voting behavior have also suggested at least three other characteristics of the U.S. system that are likely to influence party outcomes: the Electoral College, demobilized ethnicity, and campaign and election laws. First, the United States has a presidential system in which the winner is selected not directly by the popular vote but indirectly by a group of electors known collectively as the Electoral College. The winner-take-all system also applies in the Electoral College. In all but two states (Maine and Nebraska), the total of the state's electoral votes goes to the candidate who wins the plurality of the popular vote in that state. Even if a new, third party is able to win the support of a lot of voters, it must be able to do so in several states in order to win enough electoral votes to have a chance of winning the presidency. 21 Besides the existence of the Electoral College, political scientist Gary W. Cox has also suggested that the relative prosperity of the United States and the relative unity of its citizens have prevented the formation of "large dissenting groups" that might give support to third parties. 22 This is similar to the argument that the United States does not have viable third parties because none of its regions is dominated by mobilized ethnic minorities that have created political parties in order to defend and to address concerns solely of interest to that ethnic group. Such parties are common in other countries. Finally, party success is strongly influenced by local election laws. Someone has to write the rules that govern elections, and those rules help to determine outcomes. In the United States, such rules have been written to make it easy for existing parties to secure a spot for their candidates in future elections. But some states create significant burdens for candidates who wish to run as independents or who choose to represent new parties. For example, one common practice is to require a candidate who does not have the support of a major party to ask registered voters to sign a petition. Sometimes, thousands of signatures are required before a candidate's name can be placed on the ballot (Figure 9.7), but a small third party that does have large numbers of supporters in some states may not be able to secure enough signatures for this to happen. 23

Link to Learning

Visit Fair Vote for a discussion of ballot access laws across the country.

Given the obstacles to the formation of third parties, it is unlikely that serious challenges to the U.S. two-party system will emerge.
But this does not mean that we should view it as entirely stable either. The U.S. party system is technically a loose organization of fifty different state parties and has undergone several considerable changes since its initial consolidation after the Civil War. Third-party movements may have played a role in some of these changes, but all resulted in a shifting of party loyalties among the U.S. electorate.

CRITICAL ELECTIONS AND REALIGNMENT

Political parties exist for the purpose of winning elections in order to influence public policy. This requires them to build coalitions across a wide range of voters who share similar preferences. Since most U.S. voters identify as moderates, 24 the historical tendency has been for the two parties to compete for "the middle" while also trying to mobilize their more loyal bases. If voters' preferences remained stable for long periods of time, and if both parties did a good job of competing for their votes, we could expect Republicans and Democrats to be reasonably competitive in any given election. Election outcomes would probably be based on the way voters compared the parties on the most important events of the day rather than on electoral strategy. There are many reasons we would be wrong in these expectations, however. First, the electorate isn't entirely stable. Each generation of voters has been a bit different from the last. Over time, the United States has become more socially liberal, especially on topics related to race and gender, and Millennials—those aged 21–37—are more liberal than members of older generations. 25 The electorate's economic preferences have changed, and different social groups are likely to become more engaged in politics now than they did in the past. Surveys conducted in 2016, for example, revealed that candidates' religion is less important to voters than it once was. Also, as young Latinos reach voting age, they seem more inclined to vote than do their parents, which may raise the traditionally low voting rates among this ethnic group. 26 Internal population shifts and displacements have also occurred, as various regions have taken their turn experiencing economic growth or stagnation, and as new waves of immigrants have come to U.S. shores. Additionally, the major parties have not always been unified in their approach to contesting elections. While we think of both Congress and the presidency as national offices, the reality is that congressional elections are sometimes more like local elections. Voters may reflect on their preferences for national policy when deciding whom to send to the Senate or the House of Representatives, but they are very likely to view national policy in the context of its effects on their area, their family, or themselves, not based on what is happening to the country as a whole. For example, while many voters want to reduce the federal budget, those over sixty-five are particularly concerned that no cuts to the Medicare program be made. 27 One-third of those polled reported that "seniors' issues" were most important to them when voting for national officeholders. 28 If they hope to keep their jobs, elected officials must thus be sensitive to preferences in their home constituencies as well as the preferences of their national party. Finally, it sometimes happens that, over a series of elections, parties may be unable or unwilling to adapt their positions to broader socio-demographic or economic forces. Parties need to be aware when society changes.
If leaders refuse to recognize that public opinion has changed, the party is unlikely to win in the next election. For example, people who describe themselves as evangelical Christians are an important Republican constituency; they are also strongly opposed to abortion. 29 Thus, even though the majority of U.S. adults believe abortion should be legal in at least some instances, such as when a pregnancy is the result of rape or incest, or threatens the life of the mother, the position of many Republican presidential candidates in 2016 was to oppose abortion in all cases. 30 As a result, many women view the Republican Party as unsympathetic to their interests and are more likely to support Democratic candidates. 31 Similarly (or simultaneously), groups that have felt that the party has served their causes in the past may decide to look elsewhere if they feel their needs are no longer being met. Either way, the party system will be upended as a result of a party realignment, or a shifting of party allegiances within the electorate (Table 9.1). 32

Periods of Party Dominance and Realignment

Era | Party Systems and Realignments
1796–1824 | First Party System: Federalists (urban elites, southern planters, New England) oppose Democratic-Republicans (rural, small farmers and artisans, the South and the West).
1828–1856 | Second Party System: Democrats (the South, cities, farmers and artisans, immigrants) oppose Whigs (former Federalists, the North, middle class, native-born Americans).
1860–1892 | Third Party System: Republicans (former Whigs plus African Americans) control the presidency. Only one Democrat, Grover Cleveland, is elected president (1884, 1892).
1896–1932 | Fourth Party System: Republicans control the presidency. Only one Democrat, Woodrow Wilson, is elected president (1912, 1916). Challenges to major parties are raised by Populists and Progressives.
1932–1964 | Fifth Party System: Democrats control the presidency. Only one Republican, Dwight Eisenhower, is elected president (1952, 1956). Major party realignment as African Americans become part of the Democratic coalition.
1964–present | Sixth Party System: No one party controls the presidency. Ongoing realignment as southern white people and many northern members of the working class begin to vote for Republicans. Latino and Asian people immigrate, most of whom vote for Democrats.

Table 9.1 There have been six distinctive periods in U.S. history when new political parties have emerged, control of the presidency has shifted from one party to another, or significant changes in a party's makeup have occurred.

One of the best-known party realignments occurred when Democrats moved to include African Americans and other minorities into their national coalition during the Great Depression. After the Civil War, Republicans, the party of Lincoln, were viewed as the party that had freed the slaves. Their efforts to provide black people with greater legal rights earned them the support of African Americans in both the South, where they were newly enfranchised, and the Northeast. When the Democrats, the party of the Confederacy, lost control of the South after the Civil War, Republicans ruled the region. However, the Democrats regained control of the South after the removal of the Union army in 1877. Democrats had largely supported slavery before the Civil War, and they opposed postwar efforts to integrate African Americans into society after they were liberated.
In addition, Democrats in the North and Midwest drew their greatest support from labor union members and immigrants who viewed African Americans as competitors for jobs and government resources, and who thus tended to oppose the extension of rights to African Americans as much as their southern counterparts did. 33 While the Democrats' opposition to civil rights may have provided regional advantages in southern or urban elections, it was largely disastrous for national politics. From 1868 to 1931, Democratic candidates won just four of sixteen presidential elections. Two of these victories can be explained as a result of the spoiler effect of the Progressive Party in 1912 and then Woodrow Wilson's reelection during World War I in 1916. This rather dismal success rate suggested that a change in the governing coalition would be needed if the party were to have a chance at once again becoming a player on the national level. That change began with the 1932 presidential campaign of Franklin Delano Roosevelt. FDR determined that his best path toward victory was to create a new coalition based not on region or ethnicity, but on the suffering of those hurt the most during the Great Depression. This alignment sought to bring African American voters in as a means of shoring up support in major urban areas and the Midwest, where many southern black people had migrated in the decades after the Civil War in search of jobs and better education for their children, as well as to avoid many of the legal restrictions placed on them in the South. Roosevelt accomplished this realignment by promising assistance to those hurt most by the Depression, including African Americans. The strategy worked. Roosevelt won the election with almost 58 percent of the popular vote and 472 Electoral College votes, compared to incumbent Herbert Hoover's 59. The 1932 election is considered an example of a critical election, one that represents a sudden, clear, and long-term shift in voter allegiances. After this election, the political parties were largely identified as being divided by differences in their members' socio-economic status. Those who favor stability of the current political and economic system tend to vote Republican, whereas those who would most benefit from changing the system usually favor Democratic candidates. Based on this alignment, the Democratic Party won the next five consecutive presidential elections and was able to build a political machine that dominated Congress into the 1990s, including holding an uninterrupted majority in the House of Representatives from 1954 until 1994. The realignment of the parties did have consequences for Democrats. African Americans became an increasingly important part of the Democratic coalition in the 1940s through the 1960s, as the party took steps to support civil rights. 34 Most changes were limited to the state level at first, but as civil rights reform moved to the national stage, rifts between northern and southern Democrats began to emerge. 35 Southern Democrats became increasingly convinced that national efforts to provide social welfare and encourage racial integration were violating state sovereignty and social norms. By the 1970s, many had begun to shift their allegiance to the Republican Party, whose pro-business wing shared their opposition to the growing encroachment of the national government into what they viewed as state and local matters.
36 Almost fifty years after it had begun, the realignment of the two political parties resulted in the flipping of post-Civil War allegiances, with urban areas and the Northeast now solidly Democratic, and the South and rural areas overwhelmingly voting Republican. The result today is a political system that provides Republicans with considerable advantages in rural areas and most parts of the Deep South. 37 Democrats dominate urban politics and those parts of the South, known as the Black Belt, where the majority of residents are African American.

9.3 The Shape of Modern Political Parties

Learning Objectives

By the end of this section, you will be able to:
- Differentiate between the party in the electorate and the party organization
- Discuss the importance of voting in a political party organization
- Describe party organization at the county, state, and national levels
- Compare the perspectives of the party in government and the party in the electorate

We have discussed the two major political parties in the United States, how they formed, and some of the smaller parties that have challenged their dominance over time. However, what exactly do political parties do? If the purpose of political parties is to work together to create and implement policies by winning elections, how do they accomplish this task, and who actually participates in the process? The answer was fairly straightforward in the early days of the republic, when parties were little more than electoral coalitions of like-minded, elite politicians. But improvements in strategy and changes in the electorate forced the parties to become far more complex organizations that operate on several levels in the U.S. political arena. Modern political parties consist of three components identified by political scientist V. O. Key: the party in the electorate (the voters); the party organization (which helps to coordinate everything the party does in its quest for office); and the party in office (the office holders). To understand how these various elements work together, we begin by thinking about a key first step in influencing policy in any democracy: winning elections.

THE PARTY-IN-THE-ELECTORATE

A key fact about the U.S. political party system is that it's all about the votes. If voters do not show up to vote for a party's candidates on Election Day, the party has no chance of gaining office and implementing its preferred policies. As we have seen, for much of their history, the two parties have been adapting to changes in the size, composition, and preferences of the U.S. electorate. It only makes sense, then, that parties have found it in their interest to build a permanent and stable presence among the voters. By fostering a sense of loyalty, a party can insulate itself from changes in the system and improve its odds of winning elections. The party-in-the-electorate consists of those members of the voting public who consider themselves to be part of a political party and/or who consistently prefer the candidates of one party over the other. What it means to be part of a party depends on where a voter lives and how much he or she chooses to participate in politics. At its most basic level, being a member of the party-in-the-electorate simply means a voter is more likely to voice support for a party. These voters are often called party identifiers, since they usually represent themselves in public as being members of a party, and they may attend some party events or functions.
Party identifiers are also more likely to provide financial support for the candidates of their party during election season. This does not mean self-identified Democrats will support all the party's positions or candidates, but it does mean that, on the whole, they feel their wants or needs are more likely to be met if the Democratic Party is successful. Party identifiers make up the majority of the voting public. Gallup, the polling agency, has been collecting data on voter preferences for the past several decades. Its research suggests that historically, over half of American adults have called themselves "Republican" or "Democrat" when asked how they identify themselves politically (Figure 9.8). Even among self-proclaimed independents, the overwhelming majority claim to lean in the direction of one party or the other, suggesting they behave as if they identified with a party during elections even if they preferred not to publicly pick a side. Partisan support is so strong that, in a poll conducted from August 5 to August 9, 2015, about 88 percent of respondents said they either identified with or, if they were independents, at least leaned toward one of the major political parties. 38 Thus, even though about 42 percent of respondents in a January 2016 poll called themselves independents, most of them are still, in fact, more likely to favor one party over the other. 39 Strictly speaking, party identification is not quite the same thing as party membership. People may call themselves Republicans or Democrats without being registered as a member of the party, and the Republican and Democratic parties do not require individuals to join their formal organization in the same way that parties in some other countries do. Many states require voters to declare a party affiliation before participating in primaries, but primary participation is irregular and infrequent, and a voter may change his or her identity long before changing party registration. For most voters, party identification is informal at best and often matters only in the weeks before an election. It does matter, however, because party identification guides some voters, who may know little about a particular issue or candidate, in casting their ballots. If, for example, someone thinks of him- or herself as a Republican and always votes Republican, he or she will not be confused when faced with a candidate, perhaps in a local or county election, whose name is unfamiliar. If the candidate is a Republican, the voter will likely cast a ballot for him or her. Party ties can manifest in other ways as well. The actual act of registering to vote and selecting a party reinforces party loyalty. Moreover, while pundits and scholars often deride voters who blindly vote their party, the selection of a party in the first place can be based on issue positions and ideology. In that regard, voting your party on Election Day is not a blind act—it is a shortcut based on issue positions.

THE PARTY ORGANIZATION

A significant subset of American voters views their party identification as something far beyond simply a shortcut to voting. These individuals get more energized by the political process and have chosen to become more active in the life of political parties. They are part of what is known as the party organization. The party organization is the formal structure of the political party, and its active members are responsible for coordinating party behavior and supporting party candidates.
It is a vital component of any successful party because it bears most of the responsibility for building and maintaining the party "brand." It also plays a key role in helping select, and elect, candidates for public office.

Local Organizations

Since winning elections is the first goal of the political party, it makes sense that the formal party organization mirrors the local-state-federal structure of the U.S. political system. While the lowest level of party organization is technically the precinct, many of the operational responsibilities for local elections fall upon the county-level organization. The county-level organization is in many ways the workhorse of the party system, especially around election time. This level of organization frequently takes on many of the most basic responsibilities of a democratic system, including identifying and mobilizing potential voters and donors, identifying and training potential candidates for public office, and recruiting new members for the party. County organizations are also often responsible for finding rank-and-file members to serve as volunteers on Election Day, either as officials responsible for operating the polls or as monitors responsible for ensuring that elections are conducted honestly and fairly. They may also hold regular meetings to provide members the opportunity to meet potential candidates and coordinate strategy (Figure 9.9). Of course, all this is voluntary and relies on dedicated party members being willing to pitch in to run the party.

State Organizations

Most of the county organizations' formal efforts are devoted to supporting party candidates running for county and city offices. But a fair amount of political power is held by individuals in statewide office or in state-level legislative or judicial bodies. While the county-level offices may be active in these local competitions, most of the coordination for them will take place in the state-level organizations. Like their more local counterparts, state-level organizations are responsible for key party functions, such as statewide candidate recruitment and campaign mobilization. Most of their efforts focus on electing high-ranking officials such as the governor or occupants of other statewide offices (e.g., the state's treasurer or attorney general) as well as candidates to represent the state and its residents in the U.S. Senate and the U.S. House of Representatives. The greater value of state- and national-level offices requires state organizations to take on several key responsibilities in the life of the party.

Link to Learning

Visit the following Republican and Democratic sites to see what party organizations look like on the local level. Although these sites are for different parties in different parts of the country, they both inform visitors of local party events, help people volunteer to work for the party, and provide a convenient means of contributing to the party.

First, state-level organizations usually accept greater fundraising responsibilities than do their local counterparts. Statewide races and races for national office have become increasingly expensive in recent years. The average cost of a successful House campaign was $1.2 million in 2014; for Senate races, it was $8.6 million.
40 While individual candidates are responsible for funding and running their own races, it is typically up to the state-level organization to coordinate giving across multiple races and to develop the staffing expertise that these candidates will draw upon at election time.

State organizations are also responsible for creating a sense of unity among members of the state party. Building unity can be very important as the party transitions from sometimes-contentious nomination battles to the all-important general election. The state organization uses several key tools to get its members working together towards a common goal. First, it helps the party's candidates prepare for state primary elections or caucuses that allow voters to choose a nominee to run for public office at either the state or national level. Caucuses are a form of town hall meeting at which voters in a precinct get together to voice their preferences, rather than voting individually throughout the day (Figure 9.10).

Second, the state organization is also responsible for drafting a state platform that serves as a policy guide for partisans who are eventually selected to public office. These platforms are usually the result of a negotiation between the various coalitions within the party and are designed to ensure that everyone in the party will receive some benefits if their candidates win the election. Finally, state organizations hold a statewide convention at which delegates from the various county organizations come together to discuss the needs of their areas. The state conventions are also responsible for selecting delegates to the national convention.

National Party Organization

The local and state-level party organizations are the workhorses of the political process. They take on most of the responsibility for party activities and are easily the most active participants in the party formation and electoral processes. They are also largely invisible to most voters. The average citizen knows very little of the local party's behavior unless there is a phone call or a knock on the door in the days or weeks before an election. The same is largely true of the activities of the state-level party. Typically, the only people who notice are those who are already actively engaged in politics or are being targeted for donations. But most people are aware of the presence and activity of the national party organizations for several reasons. First, many Americans, especially young people, are more interested in the topics discussed at the national level than at the state or local level. According to John Green of the Ray C. Bliss Institute of Applied Politics, "Local elections tend to be about things like sewers, and roads and police protection—which are not as dramatic an issue as same-sex marriage or global warming or international affairs." 41 Presidential elections and the behavior of the U.S. Congress are also far more likely to make the news broadcasts than the activities of county commissioners, and the national-level party organization is mostly responsible for coordinating the activities of participants at this level. The national party is a fundraising army for presidential candidates and also serves a key role in trying to coordinate and direct the efforts of the House and Senate. For this reason, its leadership is far more likely to become visible to media consumers, whether they intend to vote or not.
A second reason for the prominence of the national organization is that it usually coordinates the grandest spectacles in the life of a political party. Most voters are never aware of the numerous county-level meetings or coordinating activities. Primary elections, one of the most important events to take place at the state level, have a much lower turnout than the nationwide general election. In 2012, for example, only one-third of the eligible voters in New Hampshire voted in the state's primary, one of the earliest and thus most important in the nation; however, 70 percent of eligible voters in the state voted in the general election in November 2012. 42 People may see or read an occasional story about the meetings of the state committees or convention but pay little attention. But the national conventions, organized and sponsored by the national-level party, can dominate the national discussion for several weeks in late summer, a time when the major media outlets are often searching for news. These conventions are the definition of a media circus at which high-ranking politicians, party elites, and sometimes celebrities, such as actor/director Clint Eastwood (Figure 9.11), along with individuals many consider to be the future leaders of the party, are brought before the public so the party can make its best case for being the one to direct the future of the country. 43 National party conventions culminate in the formal nomination of the party nominees for the offices of president and vice president, and they mark the official beginning of the presidential competition between the two parties.

In the past, national conventions were often the sites of high drama and political intrigue. As late as 1968, the identities of the presidential and/or vice-presidential nominees were still unknown to the general public when the convention opened. It was also common for groups protesting key events and issues of the day to try to raise their profile by using the conventions to gain the media spotlight. National media outlets would provide "gavel to gavel" coverage of the conventions, and the relatively limited number of national broadcast channels meant most viewers were essentially forced to choose between following the conventions or checking out of the media altogether. Much has changed since the 1960s, however: between 1960 and 2004, viewership of both the Democratic National Convention and the Republican National Convention declined by half. 44

National conventions are not the spectacles they once were, and this fact is almost certainly having an impact on the profile of the national party organization. Both parties have come to recognize the value of the convention as a medium through which they can communicate to the average viewer. To ensure that they are viewed in the best possible light, the parties have worked hard to turn the public face of the convention into a highly sanitized, highly orchestrated media event. Speakers are often required to have their speeches prescreened to ensure that they do not deviate from the party line or run the risk of embarrassing the eventual nominee, whose name has often been known by all for several months. And while protests still happen, party organizations have become increasingly adept at keeping protesters away from the convention sites, arguing that safety and security are more important than First Amendment rights to speech and peaceable assembly.
For example, protestors were kept behind concrete barriers and fences at the Democratic National Convention in 2004. 45

With the advent of cable TV news and the growth of internet blogging, the major news outlets have found it unnecessary to provide the same level of coverage they once did. Between 1976 and 1996, ABC and CBS cut their coverage of the nominating conventions from more than fifty hours to only five. NBC cut its coverage to fewer than five hours. 46 One reason may be that the outcomes of nominating conventions are also typically known in advance, meaning there is no drama. Today, the nominee's acceptance speech is expected to be no longer than an hour, so it will not take up more than one block of prime-time TV programming.

This is not to say the national conventions are no longer important, or that the national party organizations are becoming less relevant. The conventions, and the organizations that run them, still contribute heavily to a wide range of key decisions in the life of both parties. The national party platform is formally adopted at the convention, as are the key elements of the strategy for contesting the national campaign. And even though the media is paying less attention, key insiders and major donors often use the convention as a way of gauging the strength of the party and its ability to effectively organize and coordinate its members. They are also paying close attention to the rising stars who are given time at the convention's podium, to see which are able to connect with the party faithful. Most observers credit Barack Obama's speech at the 2004 Democratic National Convention with bringing him to national prominence. 47

Insider Perspective

Conventions and Trial Balloons

While both political parties use conventions to help win the current elections, they also use them as a way of elevating local politicians to the national spotlight. This has been particularly true for the Democratic Party. In 1988, the Democrats tapped Arkansas governor Bill Clinton to introduce their nominee, Michael Dukakis, at the convention. Clinton's speech was lampooned for its length and lack of focus, but it served to get his name in front of Democratic voters. Four years later, Clinton was able to leverage this national exposure to help his own presidential campaign. The pattern was repeated when Illinois state senator Barack Obama gave the keynote address at the 2004 convention (Figure 9.12). Although he was only a candidate for the U.S. Senate at the time, his address caught the attention of the Democratic establishment and ultimately led to his emergence as a viable presidential candidate just four years later.

Should the media devote more attention to national conventions? Would this help voters choose the candidate they want to vote for?

Link to Learning

Bill Clinton's lengthy nomination speech in 1988 was much derided, but it served the purpose of providing national exposure to a state governor. Barack Obama's inspirational speech at the 2004 national convention resulted in immediate speculation about his wider political aspirations.

THE PARTY-IN-GOVERNMENT

One of the first challenges facing the party-in-government, or the party identifiers who have been elected or appointed to hold public office, is to achieve their policy goals. The means to do this is chosen in meetings of the two major parties; Republican meetings are called party conferences, and Democratic meetings are called party caucuses.
Members of each party meet in these closed sessions and discuss what items to place on the legislative agenda and make decisions about which party members should serve on the committees that draft proposed laws. Party members also elect the leaders of their respective parties in the House and the Senate, and their party whips. Leaders serve as party managers and are the highest-ranking members of the party in each chamber of Congress. The party whip ensures that members are present when a piece of legislation is to be voted on and directs them how to vote. The whip is the second-highest-ranking member of the party in each chamber. Thus, both the Republicans and the Democrats have a leader and a whip in the House, and a leader and a whip in the Senate. The leader and whip of the party that holds the majority of seats in each house are known as the majority leader and the majority whip. The leader and whip of the party with fewer seats are called the minority leader and the minority whip. The party that controls the majority of seats in the House of Representatives also elects someone to serve as Speaker of the House. People elected to Congress as independents (that is, not members of either the Republican or Democratic parties) must choose a party to conference or caucus with. For example, Senator Bernie Sanders of Vermont, who originally ran for Senate as an independent candidate, caucuses with the Democrats and ran for the presidency as a Democrat. He returned to the Senate in 2017 as an independent. 48

Link to Learning

The political parties in government must represent their parties and the entire country at the same time. One way they do this is by creating separate governing and party structures in the legislature, even though these are run by the same people. Check out some of the more important leadership organizations and their partisan counterparts in the House of Representatives and the Senate leadership.

Get Connected!

Party Organization from the Inside

Interested in a cool summer job? Want to actually make a difference in your community? Consider an internship at the Democratic National Committee (DNC) or Republican National Committee (RNC). Both organizations offer internship programs for college students who want hands-on experience working in community outreach and grassroots organizing. While many internship opportunities are based at the national headquarters in Washington, DC, openings may exist within state party organizations. Internship positions can be very competitive; most applicants are juniors or seniors with high grade-point averages and strong recommendations from their faculty. Successful applicants get an inside view of government, build a great professional network, and have the opportunity to make a real difference in the lives of their friends and families. Visit the DNC or RNC website and find out what it takes to be an intern. While there, also check out the state party organization. Is there a local leader you feel you could work for? Are any upcoming events scheduled in your state?

One problem facing the party-in-government relates to the design of the country's political system. The U.S. government is based on a complex principle of separation of powers, with power divided among the executive, legislative, and judicial branches. The system is further complicated by federalism, which reserves some powers to the states, which in turn have their own separation of powers. This complexity creates a number of problems for maintaining party unity.
The biggest is that each level and unit of government has different constituencies that the office holder must satisfy. The person elected to the White House is more beholden to the national party organization than are members of the House or Senate, because members of Congress must be reelected by voters in very different states, each with its own state-level and county-level parties.

Some of this complexity is eased for the party that holds the executive branch of government. Executive offices are typically more visible to the voters than the legislature, in no small part because a single person holds the office. Voters are more likely to show up at the polls and vote if they feel strongly about the candidate running for president or governor, but they are also more likely to hold that person accountable for the government's failures. 49 Members of the legislature from the executive's party are under a great deal of pressure to make the executive look good, because a popular president or governor may be able to help other party members win office. Even so, partisans in the legislature cannot be expected to simply obey the executive's orders. First, legislators may serve a constituency that disagrees with the executive on key matters of policy. If the issue is important enough to voters, as in the case of gun control or abortion rights, an office holder may feel his or her job will be in jeopardy if he or she too closely follows the party line, even if that means disagreeing with the executive. A good example occurred when the Civil Rights Act of 1964, which desegregated public accommodations and prohibited discrimination in employment on the basis of race, was introduced in Congress. The bill was supported by Presidents John F. Kennedy and Lyndon Johnson, both of whom were Democrats. Nevertheless, many Republicans, such as William McCulloch, a conservative representative from Ohio, voted in its favor, while many southern Democrats opposed it. 50

A second challenge is that each house of the legislature has its own leadership and committee structure, and those leaders may not be in total harmony with the president. Key benefits like committee appointments, leadership positions, and money for important projects in their home district may hinge on legislators following the lead of the party. These pressures are particularly acute for the majority party, so named because it controls more than half the seats in one of the two chambers. The Speaker of the House and the Senate majority leader, the majority party's congressional leaders, have significant tools at their disposal to punish party members who defect on a particular vote. Finally, a member of the minority party must occasionally work with the opposition on some issues in order to accomplish any of his or her constituency's goals. This is especially the case in the Senate, which is a supermajority institution. Sixty votes (of the 100 possible) are required to get anything accomplished, because Senate rules allow individual members to block legislation via holds and filibusters. The only way to stop such blocking is to invoke cloture, a procedure calling for a vote on an issue, which itself takes 60 votes.
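To make the arithmetic of these thresholds concrete, the following is a minimal sketch, not from the text, of the two vote counts just described: a filibustered bill must first clear the 60-vote cloture hurdle before a simple majority can pass it. The function name and vote totals are illustrative only; real Senate procedure has many more wrinkles (tie-breaking by the vice president, unanimous consent, and so on).

```python
# Illustrative model of the Senate thresholds described above.
SENATE_SEATS = 100
CLOTURE_THRESHOLD = 60  # votes needed to end debate on most legislation

def bill_can_pass(yes_votes: int, filibustered: bool) -> bool:
    """Return True if a bill with this many supporters can reach passage."""
    if filibustered and yes_votes < CLOTURE_THRESHOLD:
        # Cloture fails, so the bill never comes up for a final vote.
        return False
    # Final passage itself needs only a simple majority.
    return yes_votes > SENATE_SEATS // 2

print(bill_can_pass(55, filibustered=False))  # True: 55 is a simple majority
print(bill_can_pass(55, filibustered=True))   # False: short of the 60-vote cloture bar
```

The second call shows why a 55-seat majority, comfortable by ordinary standards, can still be blocked by a determined minority.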
9.4 Divided Government and Partisan Polarization

Learning Objectives

By the end of this section, you will be able to:
Discuss the problems and benefits of divided government
Define party polarization
List the main explanations for partisan polarization
Explain the implications of partisan polarization

In 1950, the American Political Science Association's Committee on Political Parties (APSA) published an article offering a criticism of the party system of the day. The parties, it argued, were too similar. Distinct, cohesive political parties were critical for any well-functioning democracy. First, distinct parties offer voters clear policy choices at election time. Second, cohesive parties could deliver on their agenda, even under conditions of lower bipartisanship. The party that lost the election was also important to democracy because it served as the "loyal opposition" that could keep a check on the excesses of the party in power. Finally, the paper suggested that voters could signal whether they preferred the vision of the current leadership or of the opposition. This signaling would keep both parties accountable to the people and lead to a more effective government, better capable of meeting the country's needs.

But, the APSA article continued, U.S. political parties of the day were lacking in this regard. Rarely did they offer clear and distinct visions of the country's future, and, on the rare occasions they did, they were typically unable to enact major reforms once elected. Indeed, there was so much overlap between the parties when in office that it was difficult for voters to know whom they should hold accountable for bad results. The article concluded by advocating a set of reforms that, if implemented, would lead to more distinct parties and better government. While this description of the major parties as too similar may have been accurate in the 1950s, that is no longer the case. 51

THE PROBLEM OF DIVIDED GOVERNMENT

The problem of majority versus minority politics is particularly acute under conditions of divided government. Divided government occurs when one or more houses of the legislature are controlled by the party in opposition to the executive. Unified government occurs when the same party controls the executive and the legislature entirely. Divided government can pose considerable difficulties for both the operations of the party and the government as a whole. It makes fulfilling campaign promises extremely difficult, for instance, since the cooperation (or at least the agreement) of both Congress and the president is typically needed to pass legislation. Furthermore, one party can hardly claim credit for success when the other side has been a credible partner, or when nothing can be accomplished. Party loyalty may be challenged too, because individual politicians might be forced to oppose their own party agenda if it will help their personal reelection bids.

Divided government can also be a threat to government operations, although its full impact remains unclear. 52 For example, when the divide between the parties is too great, government may shut down. A 1976 dispute between Republican president Gerald Ford and a Democratic-controlled Congress over the issue of funding for certain cabinet departments led to a ten-day shutdown of the government (although the federal government did not cease to function entirely).
But beginning in the 1980s, the interpretation that Republican president Ronald Reagan's attorney general gave to a nineteenth-century law required a complete shutdown of federal government operations until a funding issue was resolved (Figure 9.13). 53 Clearly, the parties' willingness to work together and compromise can be a very good thing. However, the past several decades have brought an increased prevalence of divided government. Since 1969, the U.S. electorate has sent the president a Congress of his own party in only seven of twenty-three congressional elections, and during George W. Bush's first administration, the Republican majority was so narrow that a combination of resignations and defections gave the Democrats control before the next election could be held.

Over the short term, divided government can make for very contentious politics. A well-functioning government usually requires a certain level of responsiveness on the part of both the executive and the legislative branches. This responsiveness is hard enough to achieve if government is unified under one party. During the presidency of Democrat Jimmy Carter (1977–1981), despite the fact that both houses of Congress were controlled by Democratic majorities, the government was shut down on five occasions because of conflict between the executive and legislative branches. 54 Shutdowns are even more likely when the president and at least one house of Congress are of opposite parties. During the presidency of Ronald Reagan, for example, the federal government shut down eight times; on seven of those occasions, the shutdown was caused by disagreements between Reagan and the Republican-controlled Senate on the one hand and the Democrats in the House on the other, over such issues as spending cuts, abortion rights, and civil rights. 55 More such disputes and government shutdowns took place during the administrations of George H. W. Bush, Bill Clinton, and Barack Obama, when different parties controlled Congress and the presidency.

For the first few decades of the current pattern of divided government, the threat it posed to the government appears to have been muted by a high degree of bipartisanship, or cooperation through compromise. Many pieces of legislation were passed in the 1960s and 1970s with reasonably high levels of support from both parties. Most members of Congress had relatively moderate voting records, with regional differences within parties that made bipartisanship on many issues more likely. For example, until the 1980s, northern and midwestern Republicans were often fairly progressive, supporting racial equality, workers' rights, and farm subsidies. Southern Democrats were frequently quite socially and racially conservative and were strong supporters of states' rights. Cross-party cooperation on these issues was fairly frequent. But in the past few decades, the number of moderates in both houses of Congress has declined. This has made it more difficult for party leadership to work together on a range of important issues, and for members of the minority party in Congress to find policy agreement with an opposing-party president.

THE IMPLICATIONS OF POLARIZATION

The past thirty years have brought a dramatic change in the relationship between the two parties as fewer conservative Democrats and liberal Republicans have been elected to office.
As political moderates, or individuals with ideologies in the middle of the ideological spectrum, leave the political parties at all levels, the parties have grown farther apart ideologically, a result called party polarization. In other words, at least organizationally and in government, Republicans and Democrats have become increasingly dissimilar from one another (Figure 9.14). In the party-in-government, this means fewer members of Congress have mixed voting records; instead they vote far more consistently on issues and are far more likely to side with their party leadership. 56 It also means a growing number of moderate voters aren't participating in party politics. Either they are becoming independents, or they are participating only in the general election and are therefore not helping select party candidates in primaries.

What is most interesting about this shift to increasingly polarized parties is that it does not appear to have happened as a result of the structural reforms recommended by APSA. Rather, it has happened because moderate politicians have simply found it harder and harder to win elections. There are many conflicting theories about the causes of polarization, some of which we discuss below. But whatever its origin, party polarization in the United States does not appear to have had the net positive effects that the APSA committee was hoping for. With the exception of providing voters with more distinct choices, positives of polarization are hard to find. The negative impacts are many. For one thing, rather than reducing interparty conflict, polarization appears to have only amplified it. For example, the Republican Party (or the GOP, standing for Grand Old Party) has historically been a coalition of two key and overlapping factions: pro-business rightists and social conservatives. The GOP has held the coalition of these two groups together by opposing programs designed to redistribute wealth (and advocating small government) while at the same time arguing for laws preferred by conservative Christians. But it was also willing to compromise with pro-business Democrats, often at the expense of social issues, if it meant protecting long-term business interests.

Recently, however, a new voice has emerged that has allied itself with the Republican Party. Born in part from an older third-party movement known as the Libertarian Party, the Tea Party is more hostile to government and views government intervention in all forms, and especially taxation and the regulation of business, as a threat to capitalism and democracy. It is less willing to tolerate interventions in the marketplace, even when they are designed to protect the markets themselves. Although an anti-tax faction within the Republican Party has existed for some time, some factions of the Tea Party movement are also active at the intersection of religious liberty and social issues, especially in opposing such initiatives as same-sex marriage and abortion rights. 57 The Tea Party has argued that government, both directly and by neglect, is threatening the ability of evangelicals to observe their moral obligations, including practices some perceive as endorsing social exclusion.

Although the Tea Party is a movement and not a political party, 86 percent of Tea Party members who voted in 2012 cast their votes for Republicans.
58 Some members of the Republican Party are closely affiliated with the movement, and before the 2012 elections, Tea Party activist Grover Norquist exacted promises from many Republicans in Congress that they would oppose any bill that sought to raise taxes. 59 The inflexibility of Tea Party members has led to tense floor debates and was ultimately responsible for the 2014 primary defeat of Republican majority leader Eric Cantor and the 2015 resignation of the sitting Speaker of the House, John Boehner. In 2015, Chris Christie, John Kasich, Ben Carson, Marco Rubio, and Ted Cruz, all of whom were Republican presidential candidates, signed Norquist's pledge as well (Figure 9.15).

Movements on the left have also arisen. The Occupy Wall Street movement was born of the government's response to the Great Recession of 2008 and its assistance to endangered financial institutions, provided through the Troubled Asset Relief Program, TARP (Figure 9.16). The Occupy Movement believed the recession was caused by a failure of the government to properly regulate the banking industry. The Occupiers further maintained that the government moved swiftly to protect the banking industry from the worst of the recession but largely failed to protect the average person, thereby worsening the growing economic inequality in the United States.

While the Occupy Movement itself has largely fizzled, the anti-business sentiment to which it gave voice continues within the Democratic Party, and many Democrats have proclaimed their support for the movement and its ideals, if not for its tactics. 60 Champions of the left wing of the Democratic Party, such as former presidential candidate Senator Bernie Sanders and Massachusetts senator Elizabeth Warren, have ensured that the Occupy Movement's calls for more social spending and higher taxes on the wealthy remain a prominent part of the national debate. Their popularity, and the growing visibility of race issues in the United States, have helped sustain the left wing of the Democratic Party. Bernie Sanders' presidential run made these topics and causes even more salient, especially among younger voters. This reality led Hillary Clinton to move left during the primaries in an attempt to win these voters over. However, the left never warmed up to Clinton after Sanders exited the race. After Clinton lost to Trump, many on the left blamed Clinton for not moving far enough left, and they further claimed that Sanders would have had a better chance of beating Trump. 61

Unfortunately, party factions haven't been the only result of party polarization. By most measures, the U.S. government in general and Congress in particular have become less effective in recent years. Congress has passed fewer pieces of legislation, confirmed fewer appointees, and been less effective at handling the national purse than at any time in recent memory. If we define effectiveness as legislative productivity, the 106th Congress (1999–2000) passed 463 pieces of substantive legislation (not including commemorative legislation, such as bills proclaiming an official doughnut of the United States). The 107th Congress (2001–2002) passed 294 such pieces of legislation. By 2013–2014, the total had fallen to 212. 62

Perhaps the clearest sign of Congress' ineffectiveness is that the threat of government shutdown has become a constant. Shutdowns occur when Congress and the president are unable to authorize and appropriate funds before the current budget runs out. This is now an annual problem.
Relations between the two parties became so bad that financial markets were sent into turmoil in 2014 when Congress failed to increase the government's line of credit before a key deadline, thus threatening a U.S. government default on its loans. While any particular trend can be the result of multiple factors, the decline of key measures of institutional confidence and trust suggests the negative impact of polarization. Public approval ratings for Congress have been near single digits for several years, and a poll taken in February 2016 revealed that only 11 percent of respondents thought Congress was doing a "good or excellent job." 63 In the wake of the Great Recession, President Obama's average approval rating remained low for several years, despite an overall trend of economic growth since the end of 2008, before he enjoyed an uptick in support during his final year in office. 64 Typically, economic conditions are a significant driver of presidential approval, so the disconnect suggests the negative effect of partisanship on presidential approval.

THE CAUSES OF POLARIZATION

Scholars agree that some degree of polarization is occurring in the United States, even if some contend it is only at the elite level. But they are less certain about exactly why, or how, polarization has become such a mainstay of American politics. Several conflicting theories have been offered. The first and perhaps best argument is that polarization is a party-in-government phenomenon driven by a decades-long sorting of the voting public, or a change in party allegiance in response to shifts in party position. 65 According to the sorting thesis, before the 1950s, voters were mostly concerned with state-level party positions rather than national party concerns. Since parties are bottom-up institutions, this meant local issues dominated elections; it also meant national-level politicians typically paid more attention to local problems than to national party politics. But over the past several decades, voters have started identifying more with national-level party politics, and they have begun to demand that their elected representatives become more attentive to national party positions. As a result, they have become more likely to pick parties that consistently represent national ideals, are more consistent in their candidate selection, and are more willing to elect office-holders likely to follow their party's national agenda.

One example of the way social change led to party sorting revolves around race. The Democratic Party returned to national power in the 1930s largely as the result of a coalition among low socio-economic status voters in northern and midwestern cities. These new Democratic voters were religiously and ethnically more diverse than the mostly white, mostly Protestant voters who supported Republicans. But the southern United States (often called the "Solid South") had been largely dominated by Democratic politicians since the Civil War. These politicians agreed with other Democrats on most issues, but they were more evangelical in their religious beliefs and less tolerant on racial matters. The federal nature of the United States meant that Democrats in other parts of the country were free to seek alliances with minorities in their states. But in the South, African Americans were still largely disenfranchised well after Franklin Roosevelt had brought other groups into the Democratic tent.
The Democratic alliance worked relatively well through the 1930s and 1940s, when post-Depression politics revolved around supporting farmers and helping the unemployed. But in the late 1950s and early 1960s, social issues became increasingly prominent in national politics. Southern Democrats, who had supported giving the federal government authority for economic redistribution, began to resist calls for those powers to be used to restructure society. Many of these Democrats broke away from the party only to find a home among Republicans, who were willing to help promote smaller national government and greater states' rights. 66 This shift was largely completed with the rise of the evangelical movement in politics, which shepherded its supporters away from Jimmy Carter, an evangelical Christian, and toward Ronald Reagan in the 1980 presidential election.

At the same time social issues were turning the Solid South toward the Republican Party, they were having the opposite effect in the North and West. Moderate Republicans, who had been champions of racial equality since the time of Lincoln, worked with Democrats to achieve social reform. These Republicans found it increasingly difficult to remain in their party as it began to adjust to the growing power of the movement for smaller government and states' rights. A good example was Senator Arlen Specter, a moderate Republican who represented Pennsylvania and ultimately switched parties to become a Democrat before the end of his political career.

A second possible culprit in increased polarization is the impact of technology on the public square. Before the 1950s, most people got their news from regional newspapers and local radio stations. While some national programming did exist, most editorial control was in the hands of local publishers and editorial boards. These groups served as a filter of sorts as they tried to meet the demands of local markets. As described in detail in the media chapter, the advent of television changed that. Television was a powerful tool, with national news and editorial content that provided the same message across the country. All viewers saw the same images of the women's rights movement and the war in Vietnam. The expansion of news coverage to cable, and the consolidation of local news providers into big corporate conglomerates, amplified this nationalization. Average citizens were just as likely to learn what it meant to be a Republican from a politician in another state as from one in their own, and national news coverage made it much more difficult for politicians to run away from their votes. The information explosion that followed the heyday of network TV by way of cable, the Internet, and blogs has furthered this nationalization trend.

A final possible cause of polarization is the increasing sophistication of gerrymandering, or the manipulation of legislative districts in an attempt to favor a particular candidate (Figure 9.17). According to the gerrymandering thesis, the more moderate or heterogeneous a voting district, the more moderate the politician's behavior once in office. Taking extreme or one-sided positions on a large number of issues would be hazardous for a member who needs to build a diverse electoral coalition. But if the district has been drawn to favor a particular group, it becomes necessary for the elected official to serve only the portion of the constituency that dominates. Gerrymandering is a centuries-old practice.
There has always been an incentive for legislative bodies to draw districts in such a way that sitting legislators have the best chance of keeping their jobs. But changes in law and technology have transformed gerrymandering from a crude art into a science. The first advance came with the introduction of the "one-person-one-vote" principle by the U.S. Supreme Court in 1962. Before then, it was common for many states to practice redistricting, or redrawing of their electoral maps, only if they gained or lost seats in the U.S. House of Representatives. This can happen once every ten years as a result of a constitutionally mandated reapportionment process, in which the number of House seats given to each state is adjusted to account for population changes. But if there was no change in the number of seats, there was little incentive to shift district boundaries. After all, if a legislator had won election based on the current map, any change to the map could make losing the seat more likely. Even when reapportionment led to new maps, most legislators were more concerned with protecting their own seats than with increasing the number of seats held by their party. As a result, some districts had gone decades without significant adjustment, even as the U.S. population changed from largely rural to largely urban. By the early 1960s, some electoral districts had populations several times greater than those of their more rural neighbors. However, in its one-person-one-vote decision in Reynolds v. Sims (1964), the Supreme Court held that everyone's vote should count roughly the same regardless of where the voter lived. 67 Districts had to be adjusted so they would have roughly equal populations. Several states therefore had to make dramatic changes to their electoral maps during the next two redistricting cycles (1970–1972 and 1980–1982). Map designers, no longer certain how to protect individual party members, changed tactics to try to create safe seats so members of their party could be assured of winning by a comfortable margin. The basic rule of thumb was that designers sought to draw districts in which their preferred party had a 55 percent or better chance of winning a given district, regardless of which candidate the party nominated.

Of course, many early efforts at post-Reynolds gerrymandering were crude, since map designers had no good way of knowing exactly where partisans lived. At best, designers might have a rough idea of voting patterns between precincts, but they lacked the ability to know voting patterns in individual blocks or neighborhoods. They also had to contend with the inherent mobility of the U.S. population, which meant the most carefully drawn maps could be obsolete just a few years later. Designers were often forced to use crude proxies for party, such as race or the socio-economic status of a neighborhood (Figure 9.18). Some maps were so crude they were ruled unconstitutionally discriminatory by the courts. Proponents of the gerrymandering thesis point out that the decline in the number of moderate voters began during this period of increased redistricting. But it wasn't until later, they argue, that the real effects could be seen. A second advance in redistricting, via computer-aided map making, truly transformed gerrymandering into a science.
Refined computing technology, the ability to collect data about potential voters, and the use of advanced algorithms have given map makers a good deal of certainty about where to place district boundaries to best predetermine the outcomes. These factors also provided better predictions about future population shifts, making the effects of gerrymandering more stable over time. Proponents argue that this increased efficiency in map drawing has led to the disappearance of moderates in Congress.

According to political scientist Nolan McCarty, there is little evidence to support the redistricting hypothesis alone. First, he argues, the Senate has become polarized just as the House of Representatives has, but people vote for senators on a statewide basis; there are no gerrymandered voting districts in elections for senators. Research showing that more partisan candidates first win election to the House before running successfully for the Senate, however, helps us understand how the Senate can also become partisan. 68 Furthermore, states like Wyoming and Vermont, which have only one representative and thus elect House members on a statewide basis as well, have consistently elected people at the far ends of the ideological spectrum. 69 Redistricting did contribute to polarization in the House of Representatives, but it took place largely in districts that had undergone significant change. 70 Furthermore, polarization has been occurring throughout the country, but the use of increasingly polarized district design has not. While some states have seen an increase in these practices, many states were already largely dominated by a single party (as in the Solid South) but still elected moderate representatives. Some parts of the country have remained closely divided between the two parties, making overt attempts at gerrymandering difficult. But when coupled with the sorting phenomenon discussed above, redistricting probably is contributing to polarization, if only at the margins.

Finding a Middle Ground

The Politics of Redistricting

Voters in a number of states have become so worried about the problem of gerrymandering that they have tried to deny their legislatures the ability to draw district boundaries. The hope is that by taking this power away from whichever party controls the state legislature, voters can ensure more competitive districts and fairer electoral outcomes. In 2000, voters in Arizona approved a referendum that created an independent state commission responsible for drafting legislative districts. But the Arizona legislature fought back against the creation of the commission, filing a lawsuit that claimed only the legislature had the constitutional right to draw districts. Legislators asked the courts to overturn the popular referendum and end the operation of the redistricting commission. However, the U.S. Supreme Court upheld the authority of the independent commission in a 5-4 decision, Arizona State Legislature v. Arizona Independent Redistricting Commission (2015). 71 Currently, only five states use fully independent commissions (ones that do not include legislators or other elected officials) to draw the lines for both state legislative and congressional districts. These states are Arizona, California, Idaho, Montana, and Washington.
In Florida, the League of Women Voters and Common Cause challenged a new voting-district map supported by state Republicans, arguing that it did not fulfill the requirements of amendments made to the state constitution in 2010, which require that voting districts not favor any political party or incumbent. 72

Do you think redistricting is a partisan issue? Should commissions draw districts instead of legislators? If commissions are given this task, who should serve on them?

Link to Learning

Think you have what it takes to gerrymander a district? Play the redistricting game and see whether you can find new ways to help out old politicians.
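The debates above turn on a quantitative question: how much does a finished map actually favor one party? One widely discussed measure is the "efficiency gap," which compares the two parties' wasted votes: every vote cast for a losing candidate, plus every winning vote beyond the bare majority needed to carry a district. The sketch below is not part of the original chapter; the four-district election results are invented to illustrate the "packing and cracking" tactics the text describes.

```python
# A sketch of the efficiency-gap metric; all district totals are invented.

def wasted_votes(winner_votes: int, loser_votes: int) -> tuple[int, int]:
    """Losing votes are all wasted; the winner also wastes every vote
    beyond the bare majority needed to carry the district."""
    needed = (winner_votes + loser_votes) // 2 + 1
    return winner_votes - needed, loser_votes

def efficiency_gap(districts: list[tuple[int, int]]) -> float:
    """districts holds (party_a_votes, party_b_votes) for each seat.
    A positive gap means the map wastes more of party A's votes."""
    wasted_a = wasted_b = total = 0
    for a_votes, b_votes in districts:
        total += a_votes + b_votes
        if a_votes > b_votes:
            wa, wb = wasted_votes(a_votes, b_votes)
        else:
            wb, wa = wasted_votes(b_votes, a_votes)
        wasted_a += wa
        wasted_b += wb
    return (wasted_a - wasted_b) / total

# Party A is "packed" into one lopsided district and "cracked" across
# three narrow losses -- the classic gerrymander described above.
districts = [(90, 10), (45, 55), (45, 55), (45, 55)]
print(f"efficiency gap: {efficiency_gap(districts):+.2%}")  # about +38%
```

In this invented map, party A wins about 56 percent of the overall vote yet carries only one of four seats, and the large positive gap flags exactly the kind of imbalance reformers and courts have scrutinized.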
business_law_i_essentials
Chapter Outline

4.1 Commerce Clause
4.2 Constitutional Protections

Introduction

Learning Outcome

Explain the impact of the U.S. Constitution on business.
[ { "answer": { "ans_choice": 3, "ans_text": "d" }, "bloom": null, "hl_context": "<hl> The main source of authority for the federal regulation of interstate and international commerce is the commerce clause . <hl> This clause is established in Article I , Section 8 , of the Constitution . The Article grants Congress the power to “ regulate Commerce with foreign Nations , and among the several States , and with the Indian Tribes . ” Thus , the commerce clause serves to simultaneously empower the federal government , while limiting state power .", "hl_sentences": "The main source of authority for the federal regulation of interstate and international commerce is the commerce clause .", "question": { "cloze_format": "It is ___ that businesses can be charged with crimes", "normal_format": "Businesses can be charged with crimes.", "question_choices": [ "Supremacy Clause.", "10th Amendment.", "Bill of Rights.", "Commerce Clause." ], "question_id": "fs-2123231", "question_text": "The _____ gives the federal government the authority to regulate interstate and international commerce." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "Federalism." }, "bloom": null, "hl_context": "Federal and state constitutions are a major source of business law . The United States Constitution is the supreme law of the United States . In addition to the individual constitutions established in each state , the U . S . Constitution sets out the fundamental rules and principles by which the country and individual states are governed . Constitutional law is the term used to describe the powers and limits of the federal and state governments as established in the Constitution . <hl> The political system that divides authority to govern between the state and federal governments is known as federalism , and this too is established in the Constitution . <hl> The Tenth Amendment states that any area over which the federal government is not granted authority through the Constitution is reserved for the state . This statement means that any federal legislation impacting business and commerce must be established by an expressed constitutional grant of authority .", "hl_sentences": "The political system that divides authority to govern between the state and federal governments is known as federalism , and this too is established in the Constitution .", "question": { "cloze_format": "The doctrine aimed at dividing the governing powers between the federal governments and the states is ___.", "normal_format": "What is the doctrine aimed at dividing the governing powers between the federal governments and the states?", "question_choices": [ "Judicial review.", "Federalism.", "Separation of powers.", "Preemption." ], "question_id": "fs-21232311", "question_text": "The doctrine aimed at dividing the governing powers between the federal governments and the states is:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "c" }, "bloom": null, "hl_context": "The Founding Fathers created a federal system that would , at times , “ preempt ” state law through the supremacy clause , outlined in Article VI of the Constitution . <hl> In other words , since the U . S . Constitution is the “ supreme law of the land , ” if a state law conflicts with the U . S . Constitution , the state law is declared invalid . <hl> When the federal constitutional law prevails over the state law , it is said that the state law has been preempted . 
Before that determination is made , the courts try to determine if Congress intended to preempt state law in enacting the particular provision in question . If the answer is “ no , ” then those who are asserting protections of state law may make claims under state law . If the answer is “ yes , ” however , federal law prevails .", "hl_sentences": "In other words , since the U . S . Constitution is the “ supreme law of the land , ” if a state law conflicts with the U . S . Constitution , the state law is declared invalid .", "question": { "cloze_format": "The ___ is the clause of the U.S. Constitution provides that, within its own sphere, federal law is supreme and that state law must, in case of conflict, yield.", "normal_format": "Which clause of the U.S. Constitution provides that, within its own sphere, federal law is supreme and that state law must, in case of conflict, yield?", "question_choices": [ "Commerce Clause.", "Superior Clause.", "Supremacy Clause.", "Necessary and Proper Clause." ], "question_id": "fs-212323112", "question_text": "Which clause of the U.S. Constitution provides that, within its own sphere, federal law is supreme and that state law must, in case of conflict, yield?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "c" }, "bloom": null, "hl_context": "<hl> For commercial enterprises and businesspeople , it is the due process clause of the Fifth Amendment that offers the most extensive protection . <hl> The clause states that the government cannot take an individual ’ s life , liberty , or property without due process of law . Specifically , there are two types of due process :", "hl_sentences": "For commercial enterprises and businesspeople , it is the due process clause of the Fifth Amendment that offers the most extensive protection .", "question": { "cloze_format": "The _____ of the constitution offers the most extensive protection for businesses.", "normal_format": "Which of the following of the constitution offers the most extensive protection for businesses?", "question_choices": [ "Supremacy Clause.", "Equal Protection Clause.", "Due Process Clause.", "Freedom of Speech Clause." ], "question_id": "fs-2123230", "question_text": "The _____ of the constitution offers the most extensive protection for businesses." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "False." }, "bloom": null, "hl_context": "<hl> The Bill of Rights is the common term given to the first 10 amendments to the U . S . Constitution . <hl> These are not the only set of amendments to the Constitution , but they are considered together as impacting rights because they limit the ability of the federal government to infringe upon individual freedoms . In addition , a later amendment , the Fourteenth Amendment , extends the provisions set out in the Bill of Rights to the states , in addition to federal government . The Bill of Rights has a substantial impact upon government regulation of commercial activity , and therefore , it is important to fully understand it .", "hl_sentences": "The Bill of Rights is the common term given to the first 10 amendments to the U . S . Constitution .", "question": { "cloze_format": "It is ___ that the 14th Amendment is a part of the Bill of Rights.", "normal_format": "The 14th Amendment is a part of the Bill of Rights.", "question_choices": [ "True.", "False." ], "question_id": "fs-21232310", "question_text": "The 14th Amendment is a part of the Bill of Rights." 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "a" }, "bloom": null, "hl_context": "The Tenth Amendment to the Constitution gives the states powers over areas of law not held exclusively by the federal government through the U . S . Constitution , e . g . , states can make laws about how to get married , who may get married , or how to dissolve a marriage , as well as which activities are crimes and how the crimes will be punished . <hl> If the U . S . Constitution does give the federal government some power , however , then the federal government may exercise it , free from state interference . <hl> <hl> For instance , the U . S . Congress ( the legislative branch of the federal government ) has the power , among other things , to coin money , to create a military , to establish post offices , and to declare war . <hl> <hl> Since there is specific mention of these powers , states may not create their own currency , military , or postal service , and they may not declare war . <hl>", "hl_sentences": "If the U . S . Constitution does give the federal government some power , however , then the federal government may exercise it , free from state interference . For instance , the U . S . Congress ( the legislative branch of the federal government ) has the power , among other things , to coin money , to create a military , to establish post offices , and to declare war . Since there is specific mention of these powers , states may not create their own currency , military , or postal service , and they may not declare war .", "question": { "cloze_format": "It is correct with regards to the powers of state government in the United States that ___.", "normal_format": "Which of the following is correct with regards to the powers of state government in the United States?", "question_choices": [ "All powers not specifically enumerated to the federal government are reserved to the states.", "The power over crimes is reserved to the federal government.", "The power over the militia is reserved to the states.", "The powers over the federal government are superior to every state power." ], "question_id": "fs-21232314", "question_text": "Which of the following is correct with regards to the powers of state government in the United States?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "False." }, "bloom": null, "hl_context": "<hl> The protections afforded the citizenry in the Bill of Rights are also extended to corporations and commercial activities . <hl> In the next sections , some applications of the various amendments in the area of business are discussed .", "hl_sentences": "The protections afforded the citizenry in the Bill of Rights are also extended to corporations and commercial activities .", "question": { "cloze_format": "It is ___ that all of the sections of the Bill of Rights apply to corporations and commercial activities.", "normal_format": "All of the sections of the Bill of Rights apply to corporations and commercial activities.", "question_choices": [ "True.", "False." ], "question_id": "fs-21232313", "question_text": "All of the sections of the Bill of Rights apply to corporations and commercial activities." }, "references_are_paraphrase": 0 } ]
4
4.1 Commerce Clause

The Constitution and the Law

Federal and state constitutions are a major source of business law. The United States Constitution is the supreme law of the United States. In addition to the individual constitutions established in each state, the U.S. Constitution sets out the fundamental rules and principles by which the country and individual states are governed. Constitutional law is the term used to describe the powers and limits of the federal and state governments as established in the Constitution. The political system that divides authority to govern between the state and federal governments is known as federalism, and this too is established in the Constitution. The Tenth Amendment states that any area over which the federal government is not granted authority through the Constitution is reserved for the states. This means that any federal legislation impacting business and commerce must rest on an express constitutional grant of authority.

Federal Preemption

The Founding Fathers created a federal system that would, at times, "preempt" state law through the supremacy clause, outlined in Article VI of the Constitution. In other words, since the U.S. Constitution is the "supreme law of the land," a state law that conflicts with the U.S. Constitution is declared invalid. When federal constitutional law prevails over state law, the state law is said to have been preempted. Before that determination is made, the courts try to determine whether Congress intended to preempt state law in enacting the particular provision in question. If the answer is "no," then those asserting the protections of state law may make claims under state law. If the answer is "yes," however, federal law prevails.

The Tenth Amendment to the Constitution gives the states powers over areas of law not held exclusively by the federal government through the U.S. Constitution; for example, states can make laws about how to get married, who may get married, or how to dissolve a marriage, as well as which activities are crimes and how those crimes will be punished. If the U.S. Constitution does give the federal government some power, however, then the federal government may exercise it, free from state interference. For instance, the U.S. Congress (the legislative branch of the federal government) has the power, among other things, to coin money, to create a military, to establish post offices, and to declare war. Since there is specific mention of these powers, states may not create their own currency, military, or postal service, and they may not declare war.

The Commerce Clause and the Affordable Care Act

After much debate, negotiation, and political wrangling, Congress passed the Patient Protection and Affordable Care Act (PPACA) in 2010, which was designed to increase the number of Americans who had access to health insurance (a policy initiative known as Obamacare). The Act included a provision mandating that individuals who were not insured through employment, and who were not otherwise exempt, obtain minimum essential health insurance or face a penalty issued through the Internal Revenue Service (IRS). The National Federation of Independent Business (NFIB), supported by 26 of the 50 states, challenged the constitutionality of this particular provision, known as the individual mandate. Their argument was upheld by the 11th Circuit Court of Appeals, which ruled that Congress did not have the authority to enact this provision.
Later, however, the appellate court determined that the individual mandate was severable from the remainder of the PPACA, so ultimately the Act was upheld.

The main source of authority for the federal regulation of interstate and international commerce is the commerce clause, established in Article I, Section 8, of the Constitution, which grants Congress the power to "regulate Commerce with foreign Nations, and among the several States, and with the Indian Tribes." Thus, the commerce clause serves simultaneously to empower the federal government and to limit state power. So long as a federal regulation impacts interstate commerce, that regulation can be described as constitutional, according to the commerce clause.

However, since the Constitution was first written, there have often been occasions when the judiciary has needed to step in to interpret the meaning and implications of the commerce clause. In particular, there have been disputes over the intended meaning of the phrase "among the several States." Up until the 1930s, this phrase was interpreted in a literal way, so that activities subject to federal regulation were required to involve trade between the states. This strict interpretation actually served to limit the federal regulation of commerce.

The turning point in the interpretation of the commerce clause came with the 1937 case NLRB v. Jones & Laughlin Steel Corp. The previous year, in the Carter v. Carter Coal Co. case, the court had invalidated a program, initiated under the New Deal, that tried to regulate the labor practices of coal firms, on the basis that these practices were local and therefore had only an indirect impact on interstate commerce. In NLRB v. Jones & Laughlin Steel Corp., the court deviated from that decision by ruling that Congress could regulate employment practices at a steel plant because any stoppages at that plant would have a serious, detrimental impact on interstate commerce. The court concluded that since the steel industry is a networked industry that incorporates mines, plants, and factories from Minnesota to Pennsylvania, the manufacturing of steel properly falls under the jurisdiction of the commerce clause. In summing up, the court concluded: "Although activities may be intrastate in character when separately considered, if they have such a close and substantial relationship to interstate commerce that their control is essential or appropriate to protect that commerce from burdens or obstructions, Congress cannot be denied the power to exercise that control" (NLRB v. Jones & Laughlin Steel Corp., 301 U.S. 1 (1937)).

Challenges to and Reinterpretations of the Commerce Clause

Ever since the NLRB v. Jones & Laughlin Steel Corp. case, Congress has invoked the commerce clause to legislate on a diverse range of business and commercial activities, as well as to support social reforms that indirectly impact state commerce. Examination of the United States Code reveals that there are more than 700 legislative provisions that explicitly refer to foreign or interstate commerce. What is perhaps most remarkable is the sheer diversity of statutory areas covered by the commerce clause: sporting activities, endangered species, energy, gambling, firearms, and even terrorism.
Examples of Federal Legislation Passed by Invoking the Commerce Clause

The Controlled Substances Act
The Federal Mine Safety and Health Act
The Civil Rights Act
The Americans with Disabilities Act
The Indian Child Welfare Act

While businesses have often challenged these statutes as existing outside of the realm of congressional authority, in most cases the courts have upheld the statutes as valid exercises of congressional power in line with the commerce clause. An exception is the 1995 case United States v. Lopez. The case centered on the legality of the Gun-Free School Zones Act, a federal law that outlawed the possession of guns within 1,000 feet of a school. In a landmark ruling, the Court held that the Act was outside the scope of the commerce clause and that Congress did not have the authority to regulate in an area that had "nothing to do with commerce, or any sort of enterprise."

A recent controversy pertaining to the commerce clause relates to the passing of the Affordable Care Act, as described earlier. Supporters of the Act claimed that the individual mandate should be treated as a regulation that affects interstate commerce: after the Act was implemented, there would be an increase in the sale and purchase of health care insurance, such that the market for health care should be seen as being significantly impacted by the Act. However, Chief Justice Roberts ruled that the commerce clause empowers Congress to regulate existing commercial activity, not to compel individuals to create new commercial activity by purchasing a product, so the mandate could not be sustained under the commerce clause.

Police Power and the Dormant Commerce Clause

The authority of the federal government to regulate interstate commerce has, at times, come into conflict with state authority over the same area of regulation. The courts have tried to resolve these conflicts with reference to the police power of the states. Police power refers to the residual powers granted to each state to safeguard the welfare of its inhabitants. Examples of areas in which states tend to exercise their police power are zoning regulations, building codes, and sanitation standards for eating places. However, there are times when the states' use of police power impacts interstate commerce. If the exercise of the power interferes with, or discriminates against, interstate commerce, then the action is generally deemed to be unconstitutional. The limitation on the authority of states to regulate in areas that impact interstate commerce is known as the dormant commerce clause.

In using the dormant commerce clause to resolve conflicts between state and federal authority, the courts consider the extent to which the state law has a legitimate purpose. If it is determined that the state law has a legitimate purpose, then the court tries to determine whether the impact on interstate commerce is in the interest of the citizens of the state, and will rule accordingly. For instance, an ordinance issued in the city of Chicago that banned spray paint was challenged by paint manufacturers under the dormant commerce clause but was ultimately upheld by the U.S. Court of Appeals, because the ban was intended to reduce graffiti and related crimes.
Today, Congress uses its authority to regulate commercial activity in four general areas relating to the commerce clause:

Regulation of the channels of interstate commerce
Regulation of the instrumentalities of interstate commerce
Regulation of intangibles and tangibles that cross state lines
Regulation of activities that are deemed to be both economic and to have a substantial impact on interstate commerce

Area of regulation: Regulation of the channels of interstate commerce
Explanation: Channels of interstate commerce are the passages of transportation between the states. The commerce clause therefore authorizes Congress to regulate activities pertaining to the nation's airways, waterways, and roadways, even where the activity itself takes place entirely in a single state.
Examples: Congress can pass regulations that restrict what can be carried on airlines or on ships.

Area of regulation: Regulation of the instrumentalities of interstate commerce
Explanation: Instrumentalities of commerce are understood to be any resources employed in carrying out commerce, such as machines, equipment, vehicles, and personnel. Congress has the power to regulate these areas.
Examples: Congress could pass regulations mandating certain safety standards for equipment used in manufacturing plants.

Area of regulation: Regulation of intangibles and tangibles that cross state lines
Explanation: Any object, tangible or intangible, that crosses state lines can be regulated under the commerce clause. Tangible objects include goods purchased by consumers, as well as raw materials and equipment used in the production of goods for sale; intangible objects include services, as well as electronic databases.
Examples: The Driver's Privacy Protection Act (DPPA) regulates the sale of information contained in the Department of Motor Vehicles' (DMV's) records.

Area of regulation: Regulation of activities that are deemed to have a substantial impact on interstate commerce
Explanation: Federal regulation of economic commercial activity expected to have a significant (as opposed to minor) effect on interstate commerce is constitutional under the commerce clause. Noneconomic commercial activity is not covered.
Examples: The court in the United States v. Lopez case described earlier deemed the Act unconstitutional because its terms had "nothing to do with 'commerce' or any sort of economic enterprise."

Table 4.1

4.2 Constitutional Protections

The Bill of Rights is the common term given to the first 10 amendments to the U.S. Constitution. These are not the only amendments to the Constitution, but they are considered together as impacting rights because they limit the ability of the federal government to infringe upon individual freedoms. In addition, a later amendment, the Fourteenth Amendment, extends the provisions set out in the Bill of Rights to the states, in addition to the federal government. The Bill of Rights has a substantial impact upon government regulation of commercial activity, and it is therefore important to understand it fully. A summary of the provisions of the Bill of Rights is supplied below:

First Amendment: Ensures that U.S. citizens have the right to freedom of speech, press, religion, and peaceable assembly. Provides citizens with the right to appeal to government to redress grievances.
Second Amendment: Establishes that the government cannot infringe upon citizens' right to bear arms. Establishes the importance of a militia for national security.
Third Amendment: Establishes that the government cannot quarter soldiers in private houses during peacetime or wartime.
Fourth Amendment: States that government can only issue warrants with probable cause and protects U.S. citizens from unwarranted search and seizure.
Fifth Amendment: Establishes rights of due process. Ensures that indictment by a grand jury is necessary to put a citizen on trial and grants citizens the right not to testify against themselves.
Sixth Amendment: Provides citizens with the right to an expeditious public trial, the right to an attorney, and the right to an impartial jury.
Seventh Amendment: States that citizens have the right to a trial by jury in common-law suits where the value in controversy exceeds $20.
Eighth Amendment: Prohibits cruel and unusual punishment, prevents the imposition of excessive fines, and states that the government cannot set bail at excessive amounts.
Ninth Amendment: States that the rights set out in the Bill of Rights do not remove any other rights granted to citizens.
Tenth Amendment: States that any area over which the federal government is not granted authority through the Constitution is reserved for the states.

Table 4.2

Application of the Bill of Rights to Commercial Activity

The protections afforded the citizenry in the Bill of Rights are also extended to corporations and commercial activities. In the next sections, some applications of the various amendments in the area of business are discussed.

The First Amendment

The freedom of speech provisions in the First Amendment have application to corporations. The courts distinguish between different types of speech, and each has implications for the power of the federal government and the states to regulate in these areas:

Corporate Political Speech. Political speech is any speech used to support political agendas or candidates. Until the 1970s, several states prevented firms from financially supporting political advertising because they feared the power of corporate assets. However, since the 1978 case First National Bank of Boston v. Bellotti, it has been established that corporate political speech is protected in the same way as citizens' free speech.

Unprotected Speech. The 1942 case Chaplinsky v. New Hampshire determined that certain types of speech, those that could "inflict injury or incite an immediate breach of the peace," are not protected under the First Amendment. Therefore, obscenities, defamation, and slanderous speech are not protected.

Commercial Speech. This type of speech conveys information pertaining to the sale of goods and services. Ever since the 1980 case Central Hudson Gas & Electric Corp. v. Public Service Commission of New York, a four-part test has been used to determine whether a regulation of commercial speech is permissible under the First Amendment. This test is known as the Central Hudson Test for Commercial Speech.

The free exercise clause of the First Amendment states that government is prohibited from making laws that prohibit the free exercise of religion. Issues pertaining to this clause often arise in organizational settings. For example, there have historically been a number of cases in which government employees have challenged employers' attempts to inhibit their exercise of religious practice (e.g., the wearing of religious symbols) in the workplace.

The Fourth Amendment

The Fourth Amendment guarantees that citizens are free from unreasonable searches and seizures, and requires government officials to obtain search warrants to conduct searches.
However, government officials can only request a search warrant if they have probable cause to believe that criminal activity is occurring at the location of the search, or that they will locate evidence of criminal activity during the search (except where the official believes items will be removed prior to obtaining a warrant). The Fourth Amendment protects individual organizations and places of business, as well as residences. However, under the terms of the pervasive-regulation exception, administrative agencies can conduct warrantless searches of businesses in industries that have a long history of pervasive regulation. For example, public health agencies are allowed to conduct warrantless searches of stone quarries, as authorized by the Federal Mine Safety and Health Act of 1977.

The Fifth Amendment

For commercial enterprises and businesspeople, it is the due process clause of the Fifth Amendment that offers the most extensive protection. The clause states that the government cannot take an individual's life, liberty, or property without due process of law. Specifically, there are two types of due process:

Substantive due process means that laws that would deprive an individual of his or her life, liberty, or property must be fair and not arbitrary. Laws passed should not affect fundamental rights, and regulations are required to meet the rational-basis test; in other words, the government must demonstrate that the law bears a rational relationship to a legitimate state interest. Many regulations affecting commercial activity, such as banking regulations, minimum wage laws, and regulations inhibiting unfair trade, have been tested against the rational-basis standard.

Procedural due process means that governments must use fair procedures when depriving an individual of his or her life, liberty, or property. These requirements do not apply only to federal criminal proceedings. For example, if a government employer discharges an employee from her job, or if the government suspends the driver's license of a worker, the government must follow procedural due process.

Another clause contained in the Fifth Amendment that is relevant to commercial enterprises is the takings clause. According to this clause, when the government seizes private property for public use, the government is required to pay the owner just compensation for the property. Just compensation is understood to be equivalent to the market value of the property. This clause has been broadly interpreted. For example, if environmental or safety regulations significantly impact the way in which a property owner can use his or her land for economic gain, the regulation can essentially be deemed to deprive the owner of his or her land, and the owner is entitled to compensation.

It is important to note that the privilege against self-incrimination, established under the Fifth Amendment (usually interpreted as the right to remain silent), only applies to sole proprietorships that are not legally distinct from the individual who owns them. Custodians and agents of corporations do not enjoy this privilege.
microbiology
Summary 16.1 The Language of Epidemiologists Epidemiology is the science underlying public health. Morbidity means being in a state of illness, whereas mortality refers to death; both morbidity rates and mortality rates are of interest to epidemiologists. Incidence is the number of new cases (morbidity or mortality), usually expressed as a proportion, during a specified time period; prevalence is the total number affected in the population, again usually expressed as a proportion. Sporadic diseases only occur rarely and largely without a geographic focus. Endemic diseases occur at a constant (and often low) level within a population. Epidemic diseases and pandemic diseases occur when an outbreak occurs on a significantly larger than expected level, either locally or globally, respectively. Koch’s postulates specify the procedure for confirming a particular pathogen as the etiologic agent of a particular disease. Koch’s postulates have limitations in application if the microbe cannot be isolated and cultured or if there is no animal host for the microbe. In this case, molecular Koch’s postulates would be utilized. In the United States, the Centers for Disease Control and Prevention monitors notifiable diseases and publishes weekly updates in the Morbidity and Mortality Weekly Report. 16.2 Tracking Infectious Diseases Early pioneers of epidemiology such as John Snow, Florence Nightingale, and Joseph Lister, studied disease at the population level and used data to disrupt disease transmission. Descriptive epidemiology studies rely on case analysis and patient histories to gain information about outbreaks, frequently while they are still occurring. Retrospective epidemiology studies use historical data to identify associations with the disease state of present cases. Prospective epidemiology studies gather data and follow cases to find associations with future disease states. Analytical epidemiology studies are observational studies that are carefully designed to compare groups and uncover associations between environmental or genetic factors and disease. Experimental epidemiology studies generate strong evidence of causation in disease or treatment by manipulating subjects and comparing them with control subjects. 16.3 Modes of Disease Transmission Reservoirs of human disease can include the human and animal populations, soil, water, and inanimate objects or materials. Contact transmission can be direct or indirect through physical contact with either an infected host (direct) or contact with a fomite that an infected host has made contact with previously (indirect). Vector transmission occurs when a living organism carries an infectious agent on its body ( mechanical ) or as an infection host itself ( biological ), to a new host. Vehicle transmission occurs when a substance, such as soil, water, or air, carries an infectious agent to a new host. Healthcare-associated infections (HAI) , or nosocomial infections , are acquired in a clinical setting. Transmission is facilitated by medical interventions and the high concentration of susceptible, immunocompromised individuals in clinical settings. 16.4 Global Public Health The World Health Organization (WHO) is an agency of the United Nations that collects and analyzes data on disease occurrence from member nations. WHO also coordinates public health programs and responses to international health emergencies. Emerging diseases are those that are new to human populations or that have been increasing in the past two decades. 
Reemerging diseases are those that are making a resurgence in susceptible populations after previously having been controlled in some geographic areas.
Chapter Outline

16.1 The Language of Epidemiologists
16.2 Tracking Infectious Diseases
16.3 Modes of Disease Transmission
16.4 Global Public Health

Introduction

In the United States and other developed nations, public health is a key function of government. A healthy citizenry is more productive, content, and prosperous; high rates of death and disease, on the other hand, can severely hamper economic productivity and foster social and political instability. The burden of disease makes it difficult for citizens to work consistently, maintain employment, and accumulate wealth to better their lives and support a growing economy. In this chapter, we will explore the intersections between microbiology and epidemiology, the science that underlies public health. Epidemiology studies how disease originates and spreads throughout a population, with the goal of preventing outbreaks and containing them when they do occur. Over the past two centuries, discoveries in epidemiology have led to public health policies that have transformed life in developed nations, leading to the eradication (or near eradication) of many diseases that were once causes of great human suffering and premature death. However, the work of epidemiologists is far from finished. Numerous diseases continue to plague humanity, and new diseases are always emerging. Moreover, in the developing world, lack of infrastructure continues to pose many challenges to efforts to contain disease.
[ { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> Biological transmission occurs when the pathogen reproduces within a biological vector that transmits the pathogen from one host to another ( Figure 16.12 ) . <hl> <hl> Arthropods are the main vectors responsible for biological transmission ( Figure 16.13 ) . <hl> Most arthropod vectors transmit the pathogen by biting the host , creating a wound that serves as a portal of entry . The pathogen may go through part of its reproductive cycle in the gut or salivary glands of the arthropod to facilitate its transmission through the bite . For example , hemipterans ( called “ kissing bugs ” or “ assassin bugs ” ) transmit Chagas disease to humans by defecating when they bite , after which the human scratches or rubs the infected feces into a mucous membrane or break in the skin . <hl> Diseases can also be transmitted by a mechanical or biological vector , an animal ( typically an arthropod ) that carries the disease from one host to another . <hl> Mechanical transmission is facilitated by a mechanical vector , an animal that carries a pathogen from one host to another without being infected itself . For example , a fly may land on fecal matter and later transmit bacteria from the feces to food that it lands on ; a human eating the food may then become infected by the bacteria , resulting in a case of diarrhea or dysentery ( Figure 16.12 ) .", "hl_sentences": "Biological transmission occurs when the pathogen reproduces within a biological vector that transmits the pathogen from one host to another ( Figure 16.12 ) . Arthropods are the main vectors responsible for biological transmission ( Figure 16.13 ) . Diseases can also be transmitted by a mechanical or biological vector , an animal ( typically an arthropod ) that carries the disease from one host to another .", "question": { "cloze_format": "___ are the most common type of biological vector of human disease.", "normal_format": "Which is the most common type of biological vector of human disease?", "question_choices": [ "viruses", "bacteria", "mammals", "arthropods" ], "question_id": "fs-id1167661568286", "question_text": "Which is the most common type of biological vector of human disease?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> Biological transmission occurs when the pathogen reproduces within a biological vector that transmits the pathogen from one host to another ( Figure 16.12 ) . <hl> <hl> Arthropods are the main vectors responsible for biological transmission ( Figure 16.13 ) . <hl> <hl> Most arthropod vectors transmit the pathogen by biting the host , creating a wound that serves as a portal of entry . <hl> The pathogen may go through part of its reproductive cycle in the gut or salivary glands of the arthropod to facilitate its transmission through the bite . For example , hemipterans ( called “ kissing bugs ” or “ assassin bugs ” ) transmit Chagas disease to humans by defecating when they bite , after which the human scratches or rubs the infected feces into a mucous membrane or break in the skin .", "hl_sentences": "Biological transmission occurs when the pathogen reproduces within a biological vector that transmits the pathogen from one host to another ( Figure 16.12 ) . Arthropods are the main vectors responsible for biological transmission ( Figure 16.13 ) . 
Most arthropod vectors transmit the pathogen by biting the host , creating a wound that serves as a portal of entry .", "question": { "cloze_format": "A mosquito bites a person who subsequently develops a fever and abdominal rash. This indicates the type of transmission called ___.\n", "normal_format": "A mosquito bites a person who subsequently develops a fever and abdominal rash. What type of transmission would this be?", "question_choices": [ "mechanical vector transmission", "biological vector transmission", "direct contact transmission", "vehicle transmission" ], "question_id": "fs-id1167661437761", "question_text": "A mosquito bites a person who subsequently develops a fever and abdominal rash. What type of transmission would this be?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> Indirect contact transmission involves inanimate objects called fomites that become contaminated by pathogens from an infected individual or reservoir ( Figure 16.10 ) . <hl> <hl> For example , an individual with the common cold may sneeze , causing droplets to land on a fomite such as a tablecloth or carpet , or the individual may wipe her nose and then transfer mucus to a fomite such as a doorknob or towel . <hl> Transmission occurs indirectly when a new susceptible host later touches the fomite and transfers the contaminated material to a susceptible portal of entry . Fomites can also include objects used in clinical settings that are not properly sterilized , such as syringes , needles , catheters , and surgical equipment . Pathogens transmitted indirectly via such fomites are a major cause of healthcare-associated infections ( see Controlling Microbial Growth ) .", "hl_sentences": "Indirect contact transmission involves inanimate objects called fomites that become contaminated by pathogens from an infected individual or reservoir ( Figure 16.10 ) . For example , an individual with the common cold may sneeze , causing droplets to land on a fomite such as a tablecloth or carpet , or the individual may wipe her nose and then transfer mucus to a fomite such as a doorknob or towel .", "question": { "cloze_format": "The blanket is called a ___ .", "normal_format": "A blanket from a child with chickenpox is likely to be contaminated with the virus that causes chickenpox (Varicella-zoster virus). What is the blanket called?", "question_choices": [ "fomite", "host", "pathogen", "vector" ], "question_id": "fs-id1167661456393", "question_text": "A blanket from a child with chickenpox is likely to be contaminated with the virus that causes chickenpox (Varicella-zoster virus). What is the blanket called?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> A reemerging infectious disease is a disease that is increasing in frequency after a previous period of decline . <hl> <hl> Its reemergence may be a result of changing conditions or old prevention regimes that are no longer working . <hl> <hl> Examples of such diseases are drug-resistant forms of tuberculosis , bacterial pneumonia , and malaria . <hl> <hl> Drug-resistant strains of the bacteria causing gonorrhea and syphilis are also becoming more widespread , raising concerns of untreatable infections . <hl>", "hl_sentences": "A reemerging infectious disease is a disease that is increasing in frequency after a previous period of decline . 
Its reemergence may be a result of changing conditions or old prevention regimes that are no longer working . Examples of such diseases are drug-resistant forms of tuberculosis , bacterial pneumonia , and malaria . Drug-resistant strains of the bacteria causing gonorrhea and syphilis are also becoming more widespread , raising concerns of untreatable infections .", "question": { "cloze_format": "The factor that can lead to reemergence of a disease is ___.", "normal_format": "Which of the following factors can lead to reemergence of a disease?", "question_choices": [ "A mutation that allows it to infect humans", "A period of decline in vaccination rates", "A change in disease reporting procedures", "Better education on the signs and symptoms of the disease" ], "question_id": "fs-id1167662471757", "question_text": "Which of the following factors can lead to reemergence of a disease?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> Emerging diseases may change their frequency gradually over time , or they may experience sudden epidemic growth . <hl> The importance of vigilance was made clear during the Ebola hemorrhagic fever epidemic in western Africa through 2014 – 2015 . Although health experts had been aware of the Ebola virus since the 1970s , an outbreak on such a large scale had never happened before ( Figure 16.16 ) . Previous human epidemics had been small , isolated , and contained . Indeed , the gorilla and chimpanzee populations of western Africa had suffered far worse from Ebola than the human population . The pattern of small isolated human epidemics changed in 2014 . <hl> Its high transmission rate , coupled with cultural practices for treatment of the dead and perhaps its emergence in an urban setting , caused the disease to spread rapidly , and thousands of people died . <hl> <hl> The international public health community responded with a large emergency effort to treat patients and contain the epidemic . <hl> Both WHO and some national public health agencies such as the CDC monitor and prepare for emerging infectious diseases . <hl> An emerging infectious disease is either new to the human population or has shown an increase in prevalence in the previous twenty years . <hl> <hl> Whether the disease is new or conditions have changed to cause an increase in frequency , its status as emerging implies the need to apply resources to understand and control its growing impact . <hl>", "hl_sentences": "Emerging diseases may change their frequency gradually over time , or they may experience sudden epidemic growth . Its high transmission rate , coupled with cultural practices for treatment of the dead and perhaps its emergence in an urban setting , caused the disease to spread rapidly , and thousands of people died . The international public health community responded with a large emergency effort to treat patients and contain the epidemic . An emerging infectious disease is either new to the human population or has shown an increase in prevalence in the previous twenty years . 
Whether the disease is new or conditions have changed to cause an increase in frequency , its status as emerging implies the need to apply resources to understand and control its growing impact .", "question": { "cloze_format": "The reason emerging diseases with very few cases are the focus of intense scrutiny is that ___ .", "normal_format": "Why are emerging diseases with very few cases the focus of intense scrutiny?", "question_choices": [ "They tend to be more deadly", "They are increasing and therefore not controlled", "They naturally have higher transmission rates", "They occur more in developed countries" ], "question_id": "fs-id1167662464904", "question_text": "Why are emerging diseases with very few cases the focus of intense scrutiny?" }, "references_are_paraphrase": 0 } ]
16
16.1 The Language of Epidemiologists

Learning Objectives
Explain the difference between prevalence and incidence of disease
Distinguish the characteristics of sporadic, endemic, epidemic, and pandemic diseases
Explain the use of Koch's postulates and their modifications to determine the etiology of disease
Explain the relationship between epidemiology and public health

Clinical Focus: Part 1

In late November and early December, a hospital in western Florida started to see a spike in the number of cases of acute gastroenteritis-like symptoms. Patients began arriving at the emergency department complaining of excessive bouts of emesis (vomiting) and diarrhea (with no blood in the stool). They also complained of abdominal pain and cramping, and most were severely dehydrated. Alarmed by the number of cases, hospital staff made some calls and learned that other regional hospitals were also seeing 10 to 20 similar cases per day.

What are some possible causes of this outbreak?
In what ways could these cases be linked, and how could any suspected links be confirmed?

Jump to the next Clinical Focus box.

The field of epidemiology concerns the geographical distribution and timing of infectious disease occurrences and how they are transmitted and maintained in nature, with the goal of recognizing and controlling outbreaks. The science of epidemiology includes etiology (the study of the causes of disease) and investigation of disease transmission (mechanisms by which a disease is spread).

Analyzing Disease in a Population

Epidemiological analyses are always carried out with reference to a population, which is the group of individuals that are at risk for the disease or condition. The population can be defined geographically, but if only a portion of the individuals in that area are susceptible, additional criteria may be required. Susceptible individuals may be defined by particular behaviors, such as intravenous drug use, owning particular pets, or membership in an institution, such as a college. Being able to define the population is important because most measures of interest in epidemiology are made with reference to the size of the population.

The state of being diseased is called morbidity. Morbidity in a population can be expressed in a few different ways. Morbidity or total morbidity is expressed in numbers of individuals, without reference to the size of the population. The morbidity rate can be expressed as the number of diseased individuals out of a standard number of individuals in the population, such as 100,000, or as a percent of the population.

There are two aspects of morbidity that are relevant to an epidemiologist: a disease's prevalence and its incidence. Prevalence is the number, or proportion, of individuals with a particular illness in a given population at a point in time. For example, the Centers for Disease Control and Prevention (CDC) estimated that in 2012, there were about 1.2 million people 13 years and older with an active human immunodeficiency virus (HIV) infection. Expressed as a proportion, or rate, this is a prevalence of 467 infected persons per 100,000 in the population. 1 On the other hand, incidence is the number or proportion of new cases in a period of time. For the same year, the CDC estimates that there were 43,165 newly diagnosed cases of HIV infection, which is an incidence of 13.7 new cases per 100,000 in the population. 2 The relationship between incidence and prevalence can be seen in Figure 16.2.
For a chronic disease like HIV infection, prevalence will generally be higher than incidence because prevalence represents the cumulative number of new cases over many years, minus the number of cases that are no longer active (e.g., because the patient died or was cured).

1 H. Irene Hall, Qian An, Tian Tang, Ruiguang Song, Mi Chen, Timothy Green, and Jian Kang. "Prevalence of Diagnosed and Undiagnosed HIV Infection—United States, 2008–2012." Morbidity and Mortality Weekly Report 64, no. 24 (2015): 657–662.
2 Centers for Disease Control and Prevention. "Diagnoses of HIV Infection in the United States and Dependent Areas, 2014." HIV Surveillance Report 26 (2015).

In addition to morbidity rates, the incidence and prevalence of mortality (death) may also be reported. A mortality rate can be expressed as the percentage of the population that has died from a disease or as the number of deaths per 100,000 persons (or another suitable standard number).

Check Your Understanding
Explain the difference between incidence and prevalence.
Describe how morbidity and mortality rates are expressed.

Patterns of Incidence

Diseases that are seen only occasionally, and usually without geographic concentration, are called sporadic diseases. Examples of sporadic diseases include tetanus, rabies, and plague. In the United States, Clostridium tetani, the bacterium that causes tetanus, is ubiquitous in the soil environment, but cases of infection occur only rarely and in scattered locations because most individuals are vaccinated, clean wounds appropriately, or are only rarely in a situation that would cause infection. 3 Likewise, in the United States, there are a few scattered cases of plague each year, usually contracted from rodents in rural areas in the western states. 4

3 Centers for Disease Control and Prevention. "Tetanus Surveillance—United States, 2001–2008." Morbidity and Mortality Weekly Report 60, no. 12 (2011): 365.
4 Centers for Disease Control and Prevention. "Plague in the United States." 2015. http://www.cdc.gov/plague/maps. Accessed June 1, 2016.

Diseases that are constantly present (often at a low level) in a population within a particular geographic region are called endemic diseases. For example, malaria is endemic to some regions of Brazil, but it is not endemic to the United States.

Diseases for which a larger than expected number of cases occurs in a short time within a geographic region are called epidemic diseases. Influenza is a good example of a commonly epidemic disease. Incidence patterns of influenza tend to rise each winter in the northern hemisphere. These seasonal increases are expected, so it would not be accurate to say that influenza is epidemic every winter; however, some winters have an unusually large number of seasonal influenza cases in particular regions, and such situations would qualify as epidemics (Figure 16.3 and Figure 16.4). An epidemic disease signals the breakdown of an equilibrium in disease frequency, often resulting from some change in environmental conditions or in the population. In the case of influenza, the disruption can be due to antigenic shift or drift (see Virulence Factors of Bacterial and Viral Pathogens), which allows influenza virus strains to circumvent the acquired immunity of their human hosts.

An epidemic that occurs on a worldwide scale is called a pandemic disease. For example, HIV/AIDS is a pandemic disease and novel influenza virus strains often become pandemic.
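Because rate arithmetic like this recurs throughout the chapter, a short calculation may help make it concrete. The sketch below (a minimal illustration, not a CDC tool) recomputes the HIV rates quoted above; the population denominators are assumptions back-calculated from the published rates rather than figures given in the text, and as the comments note, the two quoted rates evidently use different reference populations.

```python
# Minimal sketch of the rate arithmetic used in this section: a raw case
# count becomes comparable across populations once expressed per a standard
# number of persons (here, per 100,000). The population denominators below
# are assumptions back-calculated from the rates quoted in the text, not
# CDC-published figures.

def rate_per_100k(cases: int, population: int) -> float:
    """Morbidity (or mortality) rate per 100,000 persons."""
    return cases / population * 100_000

# Prevalence: ~1.2 million active HIV infections among persons 13 and older.
print(rate_per_100k(1_200_000, 257_000_000))  # ~467 per 100,000

# Incidence: 43,165 newly diagnosed cases in the same year. The quoted rate
# of 13.7 implies a larger reference population than the prevalence figure.
print(rate_per_100k(43_165, 315_000_000))     # ~13.7 per 100,000
```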
Check Your Understanding
Explain the difference between sporadic and endemic disease.
Explain the difference between endemic and epidemic disease.

Clinical Focus: Part 2

Hospital physicians suspected that some type of food poisoning was to blame for the sudden post-Thanksgiving outbreak of gastroenteritis in western Florida. Over a two-week period, 254 cases were observed, but by the end of the first week of December, the epidemic ceased just as quickly as it had started. Suspecting a link between the cases based on the localized nature of the outbreak, hospitals handed over their medical records to the regional public health office for study.

Laboratory testing of stool samples had indicated that the infections were caused by Salmonella bacteria. Patients ranged from children as young as three to seniors in their late eighties. Cases were nearly evenly split between males and females. Across the region, there had been three confirmed deaths in the outbreak, all due to severe dehydration. In each of the fatal cases, the patients had not sought medical care until their symptoms were severe; also, all of the deceased had preexisting medical conditions such as congestive heart failure, diabetes, or high blood pressure. After reviewing the medical records, epidemiologists with the public health office decided to conduct interviews with a randomly selected sample of patients.

What conclusions, if any, can be drawn from the medical records?
What would epidemiologists hope to learn by interviewing patients? What kinds of questions might they ask?

Jump to the next Clinical Focus box. Go back to the previous Clinical Focus box.

Etiology

When studying an epidemic, an epidemiologist's first task is to determine the cause of the disease, called the etiologic agent or causative agent. Connecting a disease to a specific pathogen can be challenging because of the extra effort typically required to demonstrate direct causation as opposed to a simple association. It is not enough to observe an association between a disease and a suspected pathogen; controlled experiments are needed to eliminate other possible causes. In addition, pathogens are typically difficult to detect when there is no immediate clue as to what is causing the outbreak. Signs and symptoms of disease are also commonly nonspecific, meaning that many different agents can give rise to the same set of signs and symptoms. This complicates diagnosis even when a causative agent is familiar to scientists.

Robert Koch was the first scientist to specifically demonstrate the causative agent of a disease (anthrax) in the late 1800s. Koch developed four criteria, now known as Koch's postulates, which had to be met in order to positively link a disease with a pathogenic microbe. Without Koch's postulates, the Golden Age of Microbiology would not have occurred. Between 1876 and 1905, many common diseases were linked with their etiologic agents, including cholera, diphtheria, gonorrhea, meningitis, plague, syphilis, tetanus, and tuberculosis. Today, we use the molecular Koch's postulates, a variation of Koch's original postulates that can be used to establish a link between the disease state and virulence traits unique to a pathogenic strain of a microbe. Koch's original postulates and molecular Koch's postulates were described in more detail in How Pathogens Cause Disease.

Check Your Understanding
List some challenges to determining the causative agent of a disease outbreak.
The Role of Public Health Organizations

The main national public health agency in the United States is the Centers for Disease Control and Prevention (CDC), an agency of the Department of Health and Human Services. The CDC is charged with protecting the public from disease and injury. One way that the CDC carries out this mission is by overseeing the National Notifiable Disease Surveillance System (NNDSS) in cooperation with regional, state, and territorial public health departments. The NNDSS monitors diseases considered to be of public health importance on a national scale. Such diseases are called notifiable diseases or reportable diseases because all cases must be reported to the CDC. A physician treating a patient with a notifiable disease is legally required to submit a report on the case. Notifiable diseases include HIV infection, measles, West Nile virus infections, and many others. Some states have their own lists of notifiable diseases that include diseases beyond those on the CDC's list.

Notifiable diseases are tracked by epidemiological studies, and the data are used to inform health-care providers and the public about possible risks. The CDC publishes the Morbidity and Mortality Weekly Report (MMWR), which provides physicians and health-care workers with updates on public health issues and the latest data pertaining to notifiable diseases. Table 16.1 is an example of the kind of data contained in the MMWR.

Incidence of Four Notifiable Diseases in the United States, Week Ending January 2, 2016

Disease | Current Week (Jan 2, 2016) | Median of Previous 52 Weeks | Maximum of Previous 52 Weeks | Cumulative Cases 2015
Campylobacteriosis | 406 | 869 | 1,385 | 46,618
Chlamydia trachomatis infection | 11,024 | 28,562 | 31,089 | 1,425,303
Giardiasis | 115 | 230 | 335 | 11,870
Gonorrhea | 3,207 | 7,155 | 8,283 | 369,926

Table 16.1

Link to Learning
The current Morbidity and Mortality Weekly Report is available online.

Check Your Understanding
Describe how health agencies obtain data about the incidence of diseases of public health importance.
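One routine use of surveillance rows like those in Table 16.1 is to compare the current week's count against the 52-week baseline. The following sketch (an illustration only, not an actual CDC workflow) shows one way such a comparison might be scripted, using the four rows reproduced in the table above.

```python
# Illustrative sketch only (not a CDC tool): scanning MMWR-style surveillance
# rows like Table 16.1 and flagging any disease whose current weekly count
# exceeds the median of the previous 52 weeks, a simple signal of
# above-baseline incidence.

rows = [
    # (disease, current_week, median_52wk, max_52wk, cumulative_2015)
    ("Campylobacteriosis", 406, 869, 1_385, 46_618),
    ("Chlamydia trachomatis infection", 11_024, 28_562, 31_089, 1_425_303),
    ("Giardiasis", 115, 230, 335, 11_870),
    ("Gonorrhea", 3_207, 7_155, 8_283, 369_926),
]

for disease, current, median, maximum, cumulative in rows:
    flag = "ABOVE 52-week median" if current > median else "at or below median"
    print(f"{disease}: {current} this week; 52-wk median {median}, "
          f"max {maximum}; 2015 total {cumulative} ({flag})")
```

For the week shown, all four diseases are at or below their 52-week medians, which is consistent with the table.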
16.2 Tracking Infectious Diseases

Learning Objectives
Explain the research approaches used by the pioneers of epidemiology
Explain how descriptive, analytical, and experimental epidemiological studies go about determining the cause of morbidity and mortality

Epidemiology has its roots in the work of physicians who looked for patterns in disease occurrence as a way to understand how to prevent it. The idea that disease could be transmitted was an important precursor to making sense of some of the patterns. In 1546, Girolamo Fracastoro first proposed the germ theory of disease in his essay De Contagione et Contagiosis Morbis, but this theory remained in competition with other theories, such as the miasma hypothesis, for many years (see What Our Ancestors Knew). Uncertainty about the cause of disease was not an absolute barrier to obtaining useful knowledge from patterns of disease. Some important researchers, such as Florence Nightingale, subscribed to the miasma hypothesis. The transition to acceptance of the germ theory during the 19th century provided a solid mechanistic grounding to the study of disease patterns. The studies of 19th-century physicians and researchers such as John Snow, Florence Nightingale, Ignaz Semmelweis, Joseph Lister, Robert Koch, Louis Pasteur, and others sowed the seeds of modern epidemiology.

Pioneers of Epidemiology

John Snow (Figure 16.5) was a British physician known as the father of epidemiology for determining the source of the 1854 Broad Street cholera epidemic in London. Based on observations he had made during an earlier cholera outbreak (1848–1849), Snow proposed that cholera was spread through a fecal-oral route of transmission and that a microbe was the infectious agent. He investigated the 1854 cholera epidemic in two ways. First, suspecting that contaminated water was the source of the epidemic, Snow identified the source of water for those infected. He found a high frequency of cholera cases among individuals who obtained their water from the River Thames downstream from London. This water contained the refuse and sewage from London and settlements upstream. He also noted that brewery workers did not contract cholera; on investigation, he found that the owners provided the workers with beer to drink, so that they likely did not drink water. 5 Second, he painstakingly mapped the incidence of cholera and found a high frequency among those individuals using a particular water pump located on Broad Street. In response to Snow's advice, local officials removed the pump's handle, 6 resulting in the containment of the Broad Street cholera epidemic.

5 John Snow. On the Mode of Communication of Cholera. Second Edition, Much Enlarged. John Churchill, 1855.
6 John Snow. "The Cholera near Golden-Square, and at Deptford." Medical Times and Gazette 9 (1854): 321–322. http://www.ph.ucla.edu/epi/snow/choleragoldensquare.html.

Snow's work represents an early epidemiological study, and it resulted in the first known public health response to an epidemic. Snow's meticulous case-tracking methods are now common practice in studying disease outbreaks and in associating new diseases with their causes. His work further shed light on unsanitary sewage practices and the effects of waste dumping in the Thames. Additionally, his work supported the germ theory of disease, which argued disease could be transmitted through contaminated items, including water contaminated with fecal matter.

Snow's work illustrated what is referred to today as a common source spread of infectious disease, in which there is a single source for all of the individuals infected. In this case, the single source was the contaminated well below the Broad Street pump. Types of common source spread include point source spread, continuous common source spread, and intermittent common source spread. In point source spread of infectious disease, the common source operates for a short time period (less than the incubation period of the pathogen). An example of point source spread is a single contaminated potato salad at a group picnic. In continuous common source spread, the infection occurs for an extended period of time, longer than the incubation period. An example of continuous common source spread would be the source of London water taken downstream of the city, which was continuously contaminated with sewage from upstream. Finally, with intermittent common source spread, infections occur for a period, stop, and then begin again. This might be seen in infections from a well that was contaminated only after large rainfalls and that cleared itself of contamination after a short period.

In contrast to common source spread, propagated spread occurs through direct or indirect person-to-person contact.
With propagated spread, there is no single source for infection; each infected individual becomes a source for one or more subsequent infections. With propagated spread, unless the spread is stopped immediately, infections occur for longer than the incubation period. Although point sources often lead to large-scale but localized outbreaks of short duration, propagated spread typically results in longer-duration outbreaks that can vary from small to large, depending on the population and the disease (Figure 16.6). In addition, because of person-to-person transmission, propagated spread cannot be easily stopped at a single source the way point source spread can. The toy simulation after the Check Your Understanding box below contrasts the two epidemic-curve shapes.

Florence Nightingale's work is another example of an early epidemiological study. In 1854, Nightingale was part of a contingent of nurses dispatched by the British military to care for wounded soldiers during the Crimean War. Nightingale kept meticulous records regarding the causes of illness and death during the war. Her recordkeeping was a fundamental task of what would later become the science of epidemiology. Her analysis of the data she collected was published in 1858. In this book, she presented monthly frequency data on causes of death in a wedge chart histogram (Figure 16.7). This graphical presentation of data, unusual at the time, powerfully illustrated that the vast majority of casualties during the war occurred not due to wounds sustained in action but to what Nightingale deemed preventable infectious diseases. Often these diseases occurred because of poor sanitation and lack of access to hospital facilities. Nightingale's findings led to many reforms in the British military's system of medical care.

Joseph Lister provided early epidemiological evidence leading to good public health practices in clinics and hospitals. These settings were notorious in the mid-1800s for fatal infections of surgical wounds at a time when the germ theory of disease was not yet widely accepted (see Foundations of Modern Cell Theory). Most physicians did not wash their hands between patient visits or clean and sterilize their surgical tools. Lister, however, discovered the disinfecting properties of carbolic acid, also known as phenol (see Using Chemicals to Control Microorganisms). He introduced several disinfection protocols that dramatically lowered post-surgical infection rates. 7 He demanded that surgeons who worked for him use a 5% carbolic acid solution to clean their surgical tools between patients, and even went so far as to spray the solution onto bandages and over the surgical site during operations (Figure 16.8). He also took precautions not to introduce sources of infection from his skin or clothing by removing his coat, rolling up his sleeves, and washing his hands in a dilute solution of carbolic acid before and during the surgery.

7 O.M. Lidwell. "Joseph Lister and Infection from the Air." Epidemiology and Infection 99 (1987): 569–578. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2249236/pdf/epidinfect00006-0004.pdf.

Link to Learning
John Snow's own account of his work has additional links and information. This CDC resource further breaks down the pattern expected from a point-source outbreak. Learn more about Nightingale's wedge chart here.

Check Your Understanding
Explain the difference between common source spread and propagated spread of disease.
Describe how the observations of John Snow, Florence Nightingale, and Joseph Lister led to improvements in public health.
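As promised above, here is a toy simulation contrasting the epidemic curve of a point-source outbreak with that of propagated spread. It is an illustration under simplifying assumptions, not an epidemiological model: the incubation period, outbreak length, and number of secondary cases per infection are all invented for the example.

```python
# Toy simulation (invented numbers, not an epidemiological model). In a
# point-source outbreak everyone is exposed once, so cases cluster within a
# single incubation period; with propagated spread each case seeds new cases,
# producing successive waves one incubation period apart until intervention
# or depletion of susceptible hosts.

INCUBATION_DAYS = 3
DAYS = 15

def point_source_curve(exposed: int) -> list[int]:
    """All infections trace to one brief exposure: a single burst of cases."""
    curve = [0] * DAYS
    curve[INCUBATION_DAYS] = exposed
    return curve

def propagated_curve(initial: int, secondary_per_case: int = 2) -> list[int]:
    """Each generation of cases infects the next, one incubation period later."""
    curve = [0] * DAYS
    cases = initial
    for day in range(INCUBATION_DAYS, DAYS, INCUBATION_DAYS):
        curve[day] = cases
        cases *= secondary_per_case  # next generation of infections
    return curve

print("point source:", point_source_curve(10))  # one spike on day 3
print("propagated:  ", propagated_curve(10))    # waves on days 3, 6, 9, 12
```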
Types of Epidemiological Studies Today, epidemiologists make use of study designs, the manner in which data are gathered to test a hypothesis, similar to those of researchers studying other phenomena that occur in populations. These approaches can be divided into observational studies (in which subjects are not manipulated) and experimental studies (in which subjects are manipulated). Collectively, these studies give modern-day epidemiologists multiple tools for exploring the connections between infectious diseases and the populations of susceptible individuals they might infect. Observational Studies In an observational study , data are gathered from study participants through measurements (such as physiological variables like white blood cell count), or answers to questions in interviews (such as recent travel or exercise frequency). The subjects in an observational study are typically chosen at random from a population of affected or unaffected individuals. However, the subjects in an observational study are in no way manipulated by the researcher. Observational studies are typically easier to carry out than experimental studies, and in certain situations they may be the only studies possible for ethical reasons. Observational studies are only able to measure associations between disease occurrence and possible causative agents; they do not necessarily prove a causal relationship. For example, suppose a study finds an association between heavy coffee drinking and lower incidence of skin cancer. This might suggest that coffee prevents skin cancer, but there may be another unmeasured factor involved, such as the amount of sun exposure the participants receive. If it turns out that coffee drinkers work more in offices and spend less time outside in the sun than those who drink less coffee, then it may be possible that the lower rate of skin cancer is due to less sun exposure, not to coffee consumption. The observational study cannot distinguish between these two potential causes. There are several useful approaches in observational studies. These include methods classified as descriptive epidemiology and analytical epidemiology. Descriptive epidemiology gathers information about a disease outbreak, the affected individuals, and how the disease has spread over time in an exploratory stage of study. This type of study will involve interviews with patients, their contacts, and their family members; examination of samples and medical records; and even histories of food and beverages consumed. Such a study might be conducted while the outbreak is still occurring. Descriptive studies might form the basis for developing a hypothesis of causation that could be tested by more rigorous observational and experimental studies. Analytical epidemiology employs carefully selected groups of individuals in an attempt to more convincingly evaluate hypotheses about potential causes for a disease outbreak. The selection of cases is generally made at random, so the results are not biased because of some common characteristic of the study participants . Analytical studies may gather their data by going back in time (retrospective studies), or as events unfold forward in time (prospective studies). Retrospective studies gather data from the past on present-day cases. Data can include things like the medical history, age, gender, or occupational history of the affected individuals. This type of study examines associations between factors chosen or available to the researcher and disease occurrence. 
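The associations that a retrospective study looks for are often summarized in a two-by-two table of exposure versus disease, with the strength of the association expressed as an odds ratio. The short Python sketch below is a minimal illustration with hypothetical counts; the numbers are invented for demonstration, not drawn from the text.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 exposure-by-disease table:

                    disease   no disease
        exposed        a          b
        unexposed      c          d

    OR = (a * d) / (b * c). Values near 1 suggest no association;
    values well above 1 suggest the exposure is associated with
    disease (an association only, not proof of causation)."""
    return (a * d) / (b * c)

# Hypothetical counts: 40 exposed cases, 20 exposed non-cases,
# 10 unexposed cases, 30 unexposed non-cases.
print(odds_ratio(40, 20, 10, 30))  # (40*30)/(20*10) = 6.0
```

As the caveat about observational studies above emphasizes, an odds ratio of 6 flags an association worth investigating further; it does not by itself demonstrate a cause.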
Prospective studies follow individuals and monitor their disease state during the course of the study. Data on the characteristics of the study subjects and their environments are gathered at the beginning and during the study so that subjects who become ill may be compared with those who do not. Again, the researchers can look for associations between the disease state and variables that were measured during the study to shed light on possible causes. Analytical studies incorporate groups into their designs to assist in teasing out associations with disease. Approaches to group-based analytical studies include cohort studies, case-control studies, and cross-sectional studies. The cohort method examines groups of individuals (called cohorts) who share a particular characteristic. For example, a cohort might consist of individuals born in the same year and the same place; or it might consist of people who practice or avoid a particular behavior, e.g., smokers or nonsmokers. In a cohort study, cohorts can be followed prospectively or studied retrospectively. If only a single cohort is followed, then the affected individuals are compared with the unaffected individuals in the same group. Disease outcomes are recorded and analyzed to try to identify correlations between characteristics of individuals in the cohort and disease incidence. Cohort studies are a useful way to determine the causes of a condition without violating the ethical prohibition of exposing subjects to a risk factor. Cohorts are typically identified and defined based on suspected risk factors to which individuals have already been exposed through their own choices or circumstances. Case-control studies are typically retrospective and compare a group of individuals with a disease to a similar group of individuals without the disease. Case-control studies are far more efficient than cohort studies because researchers can deliberately select subjects who are already affected with the disease as opposed to waiting to see which subjects from a random sample will develop a disease. A cross-sectional study analyzes randomly selected individuals in a population and compares individuals affected by a disease or condition to those unaffected at a single point in time. Subjects are compared to look for associations between certain measurable variables and the disease or condition. Cross-sectional studies are also used to determine the prevalence of a condition. Experimental Studies Experimental epidemiology uses laboratory or clinical studies in which the investigator manipulates the study subjects to study the connections between diseases and potential causative agents or to assess treatments. Examples of treatments might be the administration of a drug, the inclusion or exclusion of different dietary items, physical exercise, or a particular surgical procedure. Animals or humans are used as test subjects. Because experimental studies involve manipulation of subjects, they are typically more difficult and sometimes impossible for ethical reasons. Koch’s postulates require experimental interventions to determine the causative agent for a disease. Unlike observational studies, experimental studies can provide strong evidence supporting cause because other factors are typically held constant when the researcher manipulates the subject. The outcomes for one group receiving the treatment are compared to outcomes for a group that does not receive the treatment but is treated the same in every other way. 
For example, one group might receive a regimen of a drug administered as a pill, while the untreated group receives a placebo (a pill that looks the same but has no active ingredient). Both groups are treated as similarly as possible except for the administration of the drug. Because other variables are held constant in both the treated and the untreated groups, the researcher is more certain that any change in the treated group is a result of the specific manipulation. Experimental studies provide the strongest evidence for the etiology of disease, but they must also be designed carefully to eliminate subtle effects of bias . Typically, experimental studies with humans are conducted as double-blind studies , meaning neither the subjects nor the researchers know who is a treatment case and who is not. This design removes a well-known cause of bias in research called the placebo effect , in which knowledge of the treatment by either the subject or the researcher can influence the outcomes. Check Your Understanding Describe the advantages and disadvantages of observational studies and experimental studies. Explain the ways that groups of subjects can be selected for analytical studies. Clinical Focus Part 3 Since laboratory tests had confirmed Salmonella , a common foodborne pathogen, as the etiologic agent, epidemiologists suspected that the outbreak was caused by contamination at a food processing facility serving the region. Interviews with patients focused on food consumption during and after the Thanksgiving holiday, corresponding with the timing of the outbreak. During the interviews, patients were asked to list items consumed at holiday gatherings and describe how widely each item was consumed among family members and relatives. They were also asked about the sources of food items (e.g., brand, location of purchase, date of purchase). By asking such questions, health officials hoped to identify patterns that would lead back to the source of the outbreak. Analysis of the interview responses eventually linked almost all of the cases to consumption of a holiday dish known as the turducken —a chicken stuffed inside a duck stuffed inside a turkey. Turducken is a dish not generally consumed year-round, which would explain the spike in cases just after the Thanksgiving holiday. Additional analysis revealed that the turduckens consumed by the affected patients were purchased already stuffed and ready to be cooked. Moreover, the pre-stuffed turduckens were all sold at the same regional grocery chain under two different brand names. Upon further investigation, officials traced both brands to a single processing plant that supplied stores throughout the Florida panhandle. Is this an example of common source spread or propagated spread? What next steps would the public health office likely take after identifying the source of the outbreak? Jump to the next Clinical Focus box. Go back to the previous Clinical Focus box. 16.3 Modes of Disease Transmission Learning Objectives Describe the different types of disease reservoirs Compare contact, vector, and vehicle modes of transmission Identify important disease vectors Explain the prevalence of nosocomial infections Understanding how infectious pathogens spread is critical to preventing infectious disease. Many pathogens require a living host to survive, while others may be able to persist in a dormant state outside of a living host. 
But having infected one host, all pathogens must also have a mechanism of transfer from one host to another or they will die when their host dies. Pathogens often have elaborate adaptations to exploit host biology, behavior, and ecology to live in and move between hosts. Hosts have evolved defenses against pathogens, but because their rates of evolution are typically slower than those of their pathogens (because their generation times are longer), hosts are usually at an evolutionary disadvantage. This section will explore where pathogens survive, both inside and outside hosts, and some of the many ways they move from one host to another.

Reservoirs and Carriers

For pathogens to persist over long periods of time, they require reservoirs where they normally reside. Reservoirs can be living organisms or nonliving sites. Nonliving reservoirs can include soil and water in the environment. These may naturally harbor the organism because it may grow in that environment. These environments may also become contaminated with pathogens in human feces, pathogens shed by intermediate hosts, or pathogens contained in the remains of intermediate hosts. Pathogens may have mechanisms of dormancy or resilience that allow them to survive (but typically not to reproduce) for varying periods of time in nonliving environments. For example, Clostridium tetani survives in the soil, even in the presence of oxygen, as a resistant endospore.

Although many viruses are soon destroyed once in contact with air, water, or other non-physiological conditions, certain types are capable of persisting outside of a living cell for varying amounts of time. For example, a study that looked at the ability of influenza viruses to infect a cell culture after varying amounts of time on a banknote showed survival times from 48 hours to 17 days, depending on how they were deposited on the banknote. 8 On the other hand, cold-causing rhinoviruses are somewhat fragile, typically surviving less than a day outside of physiological fluids.

8 Yves Thomas, Guido Vogel, Werner Wunderli, Patricia Suter, Mark Witschi, Daniel Koch, Caroline Tapparel, and Laurent Kaiser. "Survival of Influenza Virus on Banknotes." Applied and Environmental Microbiology 74, no. 10 (2008): 3002–3007.

A human acting as a reservoir of a pathogen may or may not be capable of transmitting the pathogen, depending on the stage of infection and the pathogen. To help prevent the spread of disease among school children, the CDC has developed guidelines based on the risk of transmission during the course of the disease. For example, children with chickenpox are considered contagious for five days from the start of the rash, whereas children with most gastrointestinal illnesses should be kept home for 24 hours after the symptoms disappear.

An individual capable of transmitting a pathogen without displaying symptoms is referred to as a carrier. A passive carrier is contaminated with the pathogen and can mechanically transmit it to another host; however, a passive carrier is not infected. For example, a health-care professional who fails to wash his hands after seeing a patient harboring an infectious agent could become a passive carrier, transmitting the pathogen to another patient who becomes infected. By contrast, an active carrier is an infected individual who can transmit the disease to others. An active carrier may or may not exhibit signs or symptoms of infection.
For example, active carriers may transmit the disease during the incubation period (before they show signs and symptoms) or the period of convalescence (after symptoms have subsided). Active carriers who do not present signs or symptoms of disease despite infection are called asymptomatic carriers. Pathogens such as hepatitis B virus, herpes simplex virus, and HIV are frequently transmitted by asymptomatic carriers.

Mary Mallon, better known as Typhoid Mary, is a famous historical example of an asymptomatic carrier. An Irish immigrant, Mallon worked as a cook for households in and around New York City between 1900 and 1915. In each household, the residents developed typhoid fever (caused by Salmonella typhi) a few weeks after Mallon started working. Later investigations determined that Mallon was responsible for at least 122 cases of typhoid fever, five of which were fatal. 9 See Eye on Ethics: Typhoid Mary for more about the Mallon case.

9 Filio Marineli, Gregory Tsoucalas, Marianna Karamanou, and George Androutsos. "Mary Mallon (1869–1938) and the History of Typhoid Fever." Annals of Gastroenterology 26 (2013): 132–134. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3959940/pdf/AnnGastroenterol-26-132.pdf.

A pathogen may have more than one living reservoir. In zoonotic diseases, animals act as reservoirs of human disease and transmit the infectious agent to humans through direct or indirect contact. In some cases, the disease also affects the animal, but in other cases the animal is asymptomatic.

In parasitic infections, the parasite's preferred host is called the definitive host. In parasites with complex life cycles, the definitive host is the host in which the parasite reaches sexual maturity. Some parasites may also infect one or more intermediate hosts in which the parasite goes through several immature life cycle stages or reproduces asexually.

Link to Learning

George Soper, the sanitary engineer who traced the typhoid outbreak to Mary Mallon, gives an account of his investigation, an example of descriptive epidemiology, in "The Curious Career of Typhoid Mary."

Check Your Understanding

List some nonliving reservoirs for pathogens.

Explain the difference between a passive carrier and an active carrier.

Transmission

Regardless of the reservoir, transmission must occur for an infection to spread. First, transmission from the reservoir to the individual must occur. Then, the individual must transmit the infectious agent to other susceptible individuals, either directly or indirectly. Pathogenic microorganisms employ diverse transmission mechanisms.

Contact Transmission

Contact transmission includes direct contact or indirect contact. Person-to-person transmission is a form of direct contact transmission. Here the agent is transmitted by physical contact between two individuals (Figure 16.9) through actions such as touching, kissing, sexual intercourse, or droplet sprays. Direct contact can be categorized as vertical, horizontal, or droplet transmission. Vertical direct contact transmission occurs when pathogens are transmitted from mother to child during pregnancy, birth, or breastfeeding. Other kinds of direct contact transmission are called horizontal direct contact transmission. Often, contact between mucous membranes is required for entry of the pathogen into the new host, although skin-to-skin contact can lead to mucous membrane contact if the new host subsequently touches a mucous membrane.
Contact transmission may also be site-specific; for example, some diseases can be transmitted by sexual contact but not by other forms of contact. When an individual coughs or sneezes, small droplets of mucus that may contain pathogens are ejected. This leads to direct droplet transmission , which refers to droplet transmission of a pathogen to a new host over distances of one meter or less. A wide variety of diseases are transmitted by droplets, including influenza and many forms of pneumonia . Transmission over distances greater than one meter is called airborne transmission . Indirect contact transmission involves inanimate objects called fomites that become contaminated by pathogens from an infected individual or reservoir ( Figure 16.10 ). For example, an individual with the common cold may sneeze, causing droplets to land on a fomite such as a tablecloth or carpet, or the individual may wipe her nose and then transfer mucus to a fomite such as a doorknob or towel. Transmission occurs indirectly when a new susceptible host later touches the fomite and transfers the contaminated material to a susceptible portal of entry. Fomites can also include objects used in clinical settings that are not properly sterilized, such as syringes, needles, catheters, and surgical equipment. Pathogens transmitted indirectly via such fomites are a major cause of healthcare-associated infections (see Controlling Microbial Growth ). Vehicle Transmission The term vehicle transmission refers to the transmission of pathogens through vehicles such as water, food, and air. Water contamination through poor sanitation methods leads to waterborne transmission of disease. Waterborne disease remains a serious problem in many regions throughout the world. The World Health Organization (WHO) estimates that contaminated drinking water is responsible for more than 500,000 deaths each year. 10 Similarly, food contaminated through poor handling or storage can lead to foodborne transmission of disease ( Figure 16.11 ). 10 World Health Organization. Fact sheet No. 391 —Drinking Water. June 2005. http://www.who.int/mediacentre/factsheets/fs391/en. Dust and fine particles known as aerosols , which can float in the air, can carry pathogens and facilitate the airborne transmission of disease. For example, dust particles are the dominant mode of transmission of hantavirus to humans. Hantavirus is found in mouse feces, urine, and saliva, but when these substances dry, they can disintegrate into fine particles that can become airborne when disturbed; inhalation of these particles can lead to a serious and sometimes fatal respiratory infection. Although droplet transmission over short distances is considered contact transmission as discussed above, longer distance transmission of droplets through the air is considered vehicle transmission. Unlike larger particles that drop quickly out of the air column, fine mucus droplets produced by coughs or sneezes can remain suspended for long periods of time, traveling considerable distances. In certain conditions, droplets desiccate quickly to produce a droplet nucleus that is capable of transmitting pathogens; air temperature and humidity can have an impact on effectiveness of airborne transmission. Tuberculosis is often transmitted via airborne transmission when the causative agent, Mycobacterium tuberculosis , is released in small particles with coughs. 
Because tuberculosis requires as few as 10 microbes to initiate a new infection, patients with tuberculosis must be treated in rooms equipped with special ventilation, and anyone entering the room should wear a mask. Clinical Focus Resolution After identifying the source of the contaminated turduckens, the Florida public health office notified the CDC, which requested an expedited inspection of the facility by state inspectors. Inspectors found that a machine used to process the chicken was contaminated with Salmonella as a result of substandard cleaning protocols. Inspectors also found that the process of stuffing and packaging the turduckens prior to refrigeration allowed the meat to remain at temperatures conducive to bacterial growth for too long. The contamination and the delayed refrigeration led to vehicle (food) transmission of the bacteria in turduckens. Based on these findings, the plant was shut down for a full and thorough decontamination. All turduckens produced in the plant were recalled and pulled from store shelves ahead of the December holiday season, preventing further outbreaks. Go back to the previous Clinical Focus Box. Vector Transmission Diseases can also be transmitted by a mechanical or biological vector , an animal (typically an arthropod ) that carries the disease from one host to another. Mechanical transmission is facilitated by a mechanical vector , an animal that carries a pathogen from one host to another without being infected itself. For example, a fly may land on fecal matter and later transmit bacteria from the feces to food that it lands on; a human eating the food may then become infected by the bacteria, resulting in a case of diarrhea or dysentery ( Figure 16.12 ). Biological transmission occurs when the pathogen reproduces within a biological vector that transmits the pathogen from one host to another ( Figure 16.12 ). Arthropods are the main vectors responsible for biological transmission ( Figure 16.13 ). Most arthropod vectors transmit the pathogen by biting the host, creating a wound that serves as a portal of entry. The pathogen may go through part of its reproductive cycle in the gut or salivary glands of the arthropod to facilitate its transmission through the bite. For example, hemipterans (called “kissing bugs” or “assassin bugs”) transmit Chagas disease to humans by defecating when they bite, after which the human scratches or rubs the infected feces into a mucous membrane or break in the skin. Biological insect vectors include mosquitoes , which transmit malaria and other diseases, and lice , which transmit typhus . Other arthropod vectors can include arachnids, primarily ticks , which transmit Lyme disease and other diseases, and mites , which transmit scrub typhus and rickettsial pox . Biological transmission, because it involves survival and reproduction within a parasitized vector, complicates the biology of the pathogen and its transmission. There are also important non-arthropod vectors of disease, including mammals and birds. Various species of mammals can transmit rabies to humans, usually by means of a bite that transmits the rabies virus. Chickens and other domestic poultry can transmit avian influenza to humans through direct or indirect contact with avian influenza virus A shed in the birds’ saliva, mucous, and feces. Check Your Understanding Describe how diseases can be transmitted through the air. Explain the difference between a mechanical vector and a biological vector. 
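In a foodborne investigation like the turducken outbreak in the Clinical Focus above, interview data are commonly boiled down to food-specific attack rates, comparing the proportion who became ill among those who ate each item with the proportion who became ill among those who did not. The Python sketch below uses invented tallies (not the actual case data from the Clinical Focus) to show how one item can stand out.

```python
def attack_rate(ill, total):
    """Proportion of exposed individuals who became ill."""
    return ill / total

def risk_ratio(ill_ate, total_ate, ill_skipped, total_skipped):
    """Attack rate among those who ate the item divided by the attack
    rate among those who did not; values well above 1 flag the item
    as a likely vehicle."""
    return attack_rate(ill_ate, total_ate) / attack_rate(ill_skipped, total_skipped)

# Hypothetical interview tallies:
# (ill & ate, total who ate, ill & skipped, total who skipped)
foods = {
    "turducken":   (30, 35, 2, 40),
    "green beans": (18, 50, 14, 25),
    "pumpkin pie": (20, 45, 12, 30),
}

for food, counts in foods.items():
    print(f"{food}: risk ratio = {risk_ratio(*counts):.1f}")
# turducken: risk ratio = 17.1  <- stands out sharply from the others
```

With these made-up counts, the turducken's risk ratio of about 17 dwarfs the near-1 ratios of the other foods, which is the kind of signal that directs investigators toward a single vehicle and, from there, to the processing plant.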
Eye on Ethics Using GMOs to Stop the Spread of Zika In 2016, an epidemic of the Zika virus was linked to a high incidence of birth defects in South America and Central America. As winter turned to spring in the northern hemisphere, health officials correctly predicted the virus would spread to North America, coinciding with the breeding season of its major vector, the Aedes aegypti mosquito. The range of the A. aegypti mosquito extends well into the southern United States ( Figure 16.14 ). Because these same mosquitoes serve as vectors for other problematic diseases ( dengue fever , yellow fever , and others), various methods of mosquito control have been proposed as solutions. Chemical pesticides have been used effectively in the past, and are likely to be used again; but because chemical pesticides can have negative impacts on the environment, some scientists have proposed an alternative that involves genetically engineering A. aegypti so that it cannot reproduce. This method, however, has been the subject of some controversy. One method that has worked in the past to control pests, with little apparent downside, has been sterile male introductions. This method controlled the screw-worm fly pest in the southwest United States and fruit fly pests of fruit crops. In this method, males of the target species are reared in the lab, sterilized with radiation, and released into the environment where they mate with wild females, who subsequently bear no live offspring. Repeated releases shrink the pest population. A similar method, taking advantage of recombinant DNA technology, 11 introduces a dominant lethal allele into male mosquitoes that is suppressed in the presence of tetracycline (an antibiotic) during laboratory rearing. The males are released into the environment and mate with female mosquitoes. Unlike the sterile male method, these matings produce offspring, but they die as larvae from the lethal gene in the absence of tetracycline in the environment. As of 2016, this method has yet to be implemented in the United States, but a UK company tested the method in Piracicaba, Brazil, and found an 82% reduction in wild A. aegypti larvae and a 91% reduction in dengue cases in the treated area. 12 In August 2016, amid news of Zika infections in several Florida communities, the FDA gave the UK company permission to test this same mosquito control method in Key West, Florida, pending compliance with local and state regulations and a referendum in the affected communities. 11 Blandine Massonnet-Bruneel, Nicole Corre-Catelin, Renaud Lacroix, Rosemary S. Lees, Kim Phuc Hoang, Derric Nimmo, Luke Alphey, and Paul Reiter. “Fitness of Transgenic Mosquito Aedes aegypti Males Carrying a Dominant Lethal Genetic System.” PLOS ONE 8, no. 5 (2013): e62711. 12 Richard Levine. “Cases of Dengue Drop 91 Percent Due to Genetically Modified Mosquitoes.” Entomology Today. https://entomologytoday.org/2016/07/14/cases-of-dengue-drop-91-due-to-genetically-modified-mosquitoes. The use of genetically modified organisms (GMOs) to control a disease vector has its advocates as well as its opponents. In theory, the system could be used to drive the A. aegypti mosquito extinct—a noble goal according to some, given the damage they do to human populations. 13 But opponents of the idea are concerned that the gene could escape the species boundary of A. aegypti and cause problems in other species, leading to unforeseen ecological consequences. 
Opponents are also wary of the program because it is being administered by a for-profit corporation, creating the potential for conflicts of interest that would have to be tightly regulated; and it is not clear how any unintended consequences of the program could be reversed.

13 Olivia Judson. "A Bug's Death." The New York Times, September 25, 2003. http://www.nytimes.com/2003/09/25/opinion/a-bug-s-death.html.

There are other epidemiological considerations as well. Aedes aegypti is apparently not the only vector for the Zika virus. Aedes albopictus, the Asian tiger mosquito, is also a vector for the Zika virus. 14 A. albopictus is now widespread around the planet, including much of the United States (Figure 16.14). Many other mosquitoes have been found to harbor Zika virus, though their capacity to act as vectors is unknown. 15 Genetically modified strains of A. aegypti will not control the other species of vectors. Finally, the Zika virus can apparently be transmitted sexually between human hosts, from mother to child, and possibly through blood transfusion. All of these factors must be considered in any approach to controlling the spread of the virus.

14 Gilda Grard, Mélanie Caron, Illich Manfred Mombo, Dieudonné Nkoghe, Statiana Mboui Ondo, Davy Jiolle, Didier Fontenille, Christophe Paupy, and Eric Maurice Leroy. "Zika Virus in Gabon (Central Africa)–2007: A New Threat from Aedes albopictus?" PLOS Neglected Tropical Diseases 8, no. 2 (2014): e2681.

15 Constância F.J. Ayres. "Identification of Zika Virus Vectors and Implications for Control." The Lancet Infectious Diseases 16, no. 3 (2016): 278–279.

Clearly there are risks and unknowns involved in conducting an open-environment experiment of an as-yet poorly understood technology. But allowing the Zika virus to spread unchecked is also risky. Does the threat of a Zika epidemic justify the ecological risk of genetically engineering mosquitoes? Are current methods of mosquito control sufficiently ineffective or harmful that we need to try untested alternatives? These are the questions being put to public health officials now.

Quarantining

Individuals suspected or known to have been exposed to certain contagious pathogens may be quarantined, or isolated to prevent transmission of the disease to others. Hospitals and other health-care facilities generally set up special wards to isolate patients with particularly hazardous diseases such as tuberculosis or Ebola (Figure 16.15). Depending on the setting, these wards may be equipped with special air-handling methods, and personnel may implement special protocols to limit the risk of transmission, such as personal protective equipment or the use of chemical disinfectant sprays upon entry and exit of medical personnel.

The duration of the quarantine depends on factors such as the incubation period of the disease and the evidence suggestive of an infection. The patient may be released if signs and symptoms fail to materialize when expected or if preventive treatment can be administered in order to limit the risk of transmission. If the infection is confirmed, the patient may be compelled to remain in isolation until the disease is no longer considered contagious. In the United States, public health authorities may only quarantine patients for certain diseases, such as cholera, diphtheria, infectious tuberculosis, and strains of influenza capable of causing a pandemic.
Individuals entering the United States or moving between states may be quarantined by the CDC if they are suspected of having been exposed to one of these diseases. Although the CDC routinely monitors entry points to the United States for crew or passengers displaying illness, quarantine is rarely implemented. Healthcare-Associated (Nosocomial) Infections Hospitals, retirement homes, and prisons attract the attention of epidemiologists because these settings are associated with increased incidence of certain diseases. Higher rates of transmission may be caused by characteristics of the environment itself, characteristics of the population, or both. Consequently, special efforts must be taken to limit the risks of infection in these settings. Infections acquired in health-care facilities, including hospitals, are called nosocomial infections or healthcare-associated infections (HAI) . HAIs are often connected with surgery or other invasive procedures that provide the pathogen with access to the portal of infection. For an infection to be classified as an HAI, the patient must have been admitted to the health-care facility for a reason other than the infection. In these settings, patients suffering from primary disease are often afflicted with compromised immunity and are more susceptible to secondary infection and opportunistic pathogens. In 2011, more than 720,000 HAIs occurred in hospitals in the United States, according to the CDC. About 22% of these HAIs occurred at a surgical site, and cases of pneumonia accounted for another 22%; urinary tract infections accounted for an additional 13%, and primary bloodstream infections 10%. 16 Such HAIs often occur when pathogens are introduced to patients’ bodies through contaminated surgical or medical equipment, such as catheters and respiratory ventilators. Health-care facilities seek to limit nosocomial infections through training and hygiene protocols such as those described in Control of Microbial Growth . 16 Centers for Disease Control and Prevention. “HAI Data and Statistics.” 2016. http://www.cdc.gov/hai/surveillance. Accessed Jan 2, 2016. Check Your Understanding Give some reasons why HAIs occur. 16.4 Global Public Health Learning Objectives Describe the entities involved in international public health and their activities Identify and differentiate between emerging and reemerging infectious diseases A large number of international programs and agencies are involved in efforts to promote global public health. Among their goals are developing infrastructure in health care, public sanitation, and public health capacity; monitoring infectious disease occurrences around the world; coordinating communications between national public health agencies in various countries; and coordinating international responses to major health crises. In large part, these international efforts are necessary because disease-causing microorganisms know no national boundaries. The World Health Organization (WHO) International public health issues are coordinated by the World Health Organization (WHO) , an agency of the United Nations. Of its roughly $4 billion budget for 2015–16 17 , about $1 billion was funded by member states and the remaining $3 billion by voluntary contributions. In addition to monitoring and reporting on infectious disease, WHO also develops and implements strategies for their control and prevention. WHO has had a number of successful international public health campaigns. 
For example, its vaccination program against smallpox , begun in the mid-1960s, resulted in the global eradication of the disease by 1980. WHO continues to be involved in infectious disease control, primarily in the developing world, with programs targeting malaria , HIV/AIDS , and tuberculosis , among others. It also runs programs to reduce illness and mortality that occur as a result of violence, accidents, lifestyle-associated illnesses such as diabetes, and poor health-care infrastructure. 17 World Health Organization. “Programme Budget 2014–2015.” http://www.who.int/about/finances-accountability/budget/en. WHO maintains a global alert and response system that coordinates information from member nations. In the event of a public health emergency or epidemic, it provides logistical support and coordinates international response to the emergency. The United States contributes to this effort through the CDC. The CDC carries out international monitoring and public health efforts, mainly in the service of protecting US public health in an increasingly connected world. Similarly, the European Union maintains a Health Security Committee that monitors disease outbreaks within its member countries and internationally, coordinating with WHO. Check Your Understanding Name the organizations that participate in international public health monitoring. Emerging and Reemerging Infectious Diseases Both WHO and some national public health agencies such as the CDC monitor and prepare for emerging infectious diseases . An emerging infectious disease is either new to the human population or has shown an increase in prevalence in the previous twenty years. Whether the disease is new or conditions have changed to cause an increase in frequency, its status as emerging implies the need to apply resources to understand and control its growing impact. Emerging diseases may change their frequency gradually over time, or they may experience sudden epidemic growth. The importance of vigilance was made clear during the Ebola hemorrhagic fever epidemic in western Africa through 2014–2015. Although health experts had been aware of the Ebola virus since the 1970s, an outbreak on such a large scale had never happened before ( Figure 16.16 ). Previous human epidemics had been small, isolated, and contained. Indeed, the gorilla and chimpanzee populations of western Africa had suffered far worse from Ebola than the human population. The pattern of small isolated human epidemics changed in 2014. Its high transmission rate, coupled with cultural practices for treatment of the dead and perhaps its emergence in an urban setting, caused the disease to spread rapidly, and thousands of people died. The international public health community responded with a large emergency effort to treat patients and contain the epidemic. Emerging diseases are found in all countries, both developed and developing ( Table 16.2 ). Some nations are better equipped to deal with them. National and international public health agencies watch for epidemics like the Ebola outbreak in developing countries because those countries rarely have the health-care infrastructure and expertise to deal with large outbreaks effectively. Even with the support of international agencies, the systems in western Africa struggled to identify and care for the sick and control spread. 
In addition to the altruistic goal of saving lives and assisting nations lacking in resources, the global nature of transportation means that an outbreak anywhere can spread quickly to every corner of the planet. Managing an epidemic in one location—its source—is far easier than fighting it on many fronts.

Ebola is not the only disease that needs to be monitored in the global environment. In 2015, WHO set priorities on several emerging diseases that had a high probability of causing epidemics and that were poorly understood (and thus urgently required research and development efforts).

A reemerging infectious disease is a disease that is increasing in frequency after a previous period of decline. Its reemergence may be a result of changing conditions or old prevention regimes that are no longer working. Examples of such diseases are drug-resistant forms of tuberculosis, bacterial pneumonia, and malaria. Drug-resistant strains of the bacteria causing gonorrhea and syphilis are also becoming more widespread, raising concerns of untreatable infections.

Table 16.2 Some Emerging and Reemerging Infectious Diseases

| Disease | Pathogen | Year Discovered | Affected Regions | Transmission |
|---|---|---|---|---|
| AIDS | HIV | 1981 | Worldwide | Contact with infected body fluids |
| Chikungunya fever | Chikungunya virus | 1952 | Africa, Asia, India; spreading to Europe and the Americas | Mosquito-borne |
| Ebola virus disease | Ebola virus | 1976 | Central and Western Africa | Contact with infected body fluids |
| H1N1 influenza (swine flu) | H1N1 virus | 2009 | Worldwide | Droplet transmission |
| Lyme disease | Borrelia burgdorferi (bacterium) | 1981 | Northern hemisphere | From mammal reservoirs to humans by tick vectors |
| West Nile virus disease | West Nile virus | 1937 | Africa, Australia, Canada to Venezuela, Europe, Middle East, Western Asia | Mosquito-borne |

Check Your Understanding

Explain why it is important to monitor emerging infectious diseases.

Explain how a bacterial disease could reemerge, even if it had previously been successfully treated and controlled.

Micro Connections

SARS Outbreak and Identification

On November 16, 2002, the first case of a SARS outbreak was reported in Guangdong Province, China. The patient exhibited influenza-like symptoms such as fever, cough, myalgia, sore throat, and shortness of breath. As the number of cases grew, the Chinese government was reluctant to openly communicate information about the epidemic with the World Health Organization (WHO) and the international community. The slow reaction of Chinese public health officials to this new disease contributed to the spread of the epidemic within and later outside China. In April 2003, the Chinese government finally responded with a huge public health effort involving quarantines, medical checkpoints, and massive cleaning projects. Over 18,000 people were quarantined in Beijing alone. Large funding initiatives were created to improve health-care facilities, and dedicated outbreak teams were created to coordinate the response. By August 16, 2003, the last SARS patients were released from a hospital in Beijing nine months after the first case was reported in China.

In the meantime, SARS spread to other countries on its way to becoming a global pandemic. Though the infectious agent had yet to be identified, it was thought to be an influenza virus. The disease was named SARS, an acronym for severe acute respiratory syndrome, until the etiologic agent could be identified. Travel restrictions to Southeast Asia were enforced by many countries.
By the end of the outbreak, there were 8,098 cases and 774 deaths worldwide. China and Hong Kong were hit hardest by the epidemic, but Taiwan, Singapore, and Toronto, Canada, also saw significant numbers of cases . Fortunately, timely public health responses in many countries effectively suppressed the outbreak and led to its eventual containment. For example, the disease was introduced to Canada in February 2003 by an infected traveler from Hong Kong, who died shortly after being hospitalized. By the end of March, hospital isolation and home quarantine procedures were in place in the Toronto area, stringent anti-infection protocols were introduced in hospitals, and the media were actively reporting on the disease. Public health officials tracked down contacts of infected individuals and quarantined them. A total of 25,000 individuals were quarantined in the city. Thanks to the vigorous response of the Canadian public health community, SARS was brought under control in Toronto by June, a mere four months after it was introduced. In 2003, WHO established a collaborative effort to identify the causative agent of SARS, which has now been identified as a coronavirus that was associated with horseshoe bats. The genome of the SARS virus was sequenced and published by researchers at the CDC and in Canada in May 2003, and in the same month researchers in the Netherlands confirmed the etiology of the disease by fulfilling Koch’s postulates for the SARS coronavirus. The last known case of SARS worldwide was reported in 2004. Link to Learning This database of reports chronicles outbreaks of infectious disease around the world. It was on this system that the first information about the SARS outbreak in China emerged. The CDC publishes Emerging Infectious Diseases , a monthly journal available online.
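A summary measure implicit in the SARS totals above is the case fatality rate, the proportion of identified cases that proved fatal. A quick check in Python with the reported worldwide figures (the variable names are incidental; the totals are those cited above):

```python
cases, deaths = 8098, 774  # worldwide SARS totals cited above
print(f"case fatality rate: {deaths / cases:.1%}")  # about 9.6%
```

A case fatality rate near 10% helps explain the aggressive quarantine and contact-tracing response described in the Micro Connections box.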
Summary

21.1 Anatomy and Normal Microbiota of the Skin and Eyes

Human skin consists of two main layers, the epidermis and dermis, which are situated on top of the hypodermis, a layer of connective tissue. The skin is an effective physical barrier against microbial invasion. The skin's relatively dry environment and normal microbiota discourage colonization by transient microbes. The skin's normal microbiota varies from one region of the body to another. The conjunctiva of the eye is a frequent site for microbial infection, but deeper eye infections are less common; multiple types of conjunctivitis exist.

21.2 Bacterial Infections of the Skin and Eyes

Staphylococcus and Streptococcus cause many different types of skin infections, many of which occur when bacteria breach the skin barrier through a cut or wound. S. aureus is frequently associated with purulent skin infections that manifest as folliculitis, furuncles, or carbuncles. S. aureus is also a leading cause of staphylococcal scalded skin syndrome (SSSS). Many strains of S. aureus are drug resistant, and current MRSA strains are resistant to a wide range of antibiotics. Community-acquired and hospital-acquired staphylococcal infections are an ongoing problem because many people are asymptomatic carriers. Group A streptococcus (GAS), S. pyogenes, is often responsible for cases of cellulitis, erysipelas, and erythema nodosum. GAS is also one of many possible causes of necrotizing fasciitis. P. aeruginosa is often responsible for infections of the skin and eyes, including wound and burn infections, hot tub rash, otitis externa, and bacterial keratitis. Acne is a common skin condition that can become more inflammatory when Propionibacterium acnes infects hair follicles and pores clogged with dead skin cells and sebum. Cutaneous anthrax occurs when Bacillus anthracis breaches the skin barrier. The infection results in a localized black eschar on the skin. Anthrax can be fatal if B. anthracis spreads to the bloodstream. Common bacterial conjunctivitis is often caused by Haemophilus influenzae and usually resolves on its own in a few days. More serious forms of conjunctivitis include gonococcal ophthalmia neonatorum, inclusion conjunctivitis (chlamydial), and trachoma, all of which can lead to blindness if untreated. Keratitis is frequently caused by Staphylococcus epidermidis and/or Pseudomonas aeruginosa, especially among contact lens users, and can lead to blindness. Biofilms complicate the treatment of wound and eye infections because pathogens living in biofilms can be difficult to treat and eliminate.

21.3 Viral Infections of the Skin and Eyes

Papillomas (warts) are caused by human papillomaviruses. Herpes simplex virus (especially HSV-1) mainly causes oral herpes, but lesions can appear on other areas of the skin and mucous membranes. Roseola and fifth disease are common viral illnesses that cause skin rashes; roseola is caused by HHV-6 and HHV-7, while fifth disease is caused by parvovirus B19. Viral conjunctivitis is often caused by adenoviruses and may be associated with the common cold. Herpes keratitis is caused by herpesviruses that spread to the eye.

21.4 Mycoses of the Skin

Mycoses can be cutaneous, subcutaneous, or systemic. Common cutaneous mycoses include tineas caused by dermatophytes of the genera Trichophyton, Epidermophyton, and Microsporum. Tinea corporis is called ringworm. Tineas on other parts of the body have names associated with the affected body part.
Aspergillosis is a fungal disease caused by molds of the genus Aspergillus. Primary cutaneous aspergillosis enters through a break in the skin, such as the site of an injury or a surgical wound; it is a common hospital-acquired infection. In secondary cutaneous aspergillosis, the fungus enters via the respiratory system and disseminates systemically, manifesting in lesions on the skin. The most common subcutaneous mycosis is sporotrichosis (rose gardener's disease), caused by Sporothrix schenckii. Yeasts of the genus Candida can cause opportunistic infections of the skin called candidiasis, producing intertrigo, localized rashes, or yellowing of the nails.

21.5 Protozoan and Helminthic Infections of the Skin and Eyes

The protozoan Acanthamoeba and the helminth Loa loa are two parasites that can breach the skin barrier, causing infections of the skin and eyes. Acanthamoeba keratitis is a parasitic infection of the eye that often results from improper disinfection of contact lenses or swimming while wearing contact lenses. Loiasis, or eye worm, is a disease endemic to Africa that is caused by parasitic worms that infect the subcutaneous tissue of the skin and eyes. It is transmitted by deerfly vectors.
Chapter Outline 21.1 Anatomy and Normal Microbiota of the Skin and Eyes 21.2 Bacterial Infections of the Skin and Eyes 21.3 Viral Infections of the Skin and Eyes 21.4 Mycoses of the Skin 21.5 Protozoan and Helminthic Infections of the Skin and Eyes Introduction The human body is covered in skin , and like most coverings, skin is designed to protect what is underneath. One of its primary purposes is to prevent microbes in the surrounding environment from invading underlying tissues and organs. But in spite of its role as a protective covering, skin is not itself immune from infection. Certain pathogens and toxins can cause severe infections or reactions when they come in contact with the skin. Other pathogens are opportunistic, breaching the skin’s natural defenses through cuts, wounds, or a disruption of normal microbiota resulting in an infection in the surrounding skin and tissue. Still other pathogens enter the body via different routes—through the respiratory or digestive systems, for example—but cause reactions that manifest as skin rashes or lesions. Nearly all humans experience skin infections to some degree. Many of these conditions are, as the name suggests, “skin deep,” with symptoms that are local and non-life-threatening. At some point, almost everyone must endure conditions like acne, athlete’s foot, and minor infections of cuts and abrasions, all of which result from infections of the skin. But not all skin infections are quite so innocuous. Some can become invasive, leading to systemic infection or spreading over large areas of skin, potentially becoming life-threatening.
Review Questions

Multiple Choice

1. _____________ glands produce a lipid-rich substance that contains proteins and minerals and protects the skin.
A. sweat
B. mammary
C. sebaceous
D. endocrine
Answer: C

2. Which layer of skin contains living cells, is vascularized, and lies directly above the hypodermis?
A. the stratum corneum
B. the dermis
C. the epidermis
D. the conjunctiva
Answer: B

3. Staphylococcus aureus is most often associated with being
A. coagulase-positive.
B. coagulase-negative.
C. catalase-negative.
D. gram-negative.
Answer: A

4. M protein is produced by
A. Pseudomonas aeruginosa
B. Staphylococcus aureus
C. Propionibacterium acnes
D. Streptococcus pyogenes
Answer: D

5. ___________ is a major cause of preventable blindness that can be reduced through improved sanitation.
A. Ophthalmia neonatorum
B. Keratitis
C. Trachoma
D. Cutaneous anthrax
Answer: C

6. Which species is frequently associated with nosocomial infections transmitted via medical devices inserted into the body?
A. Staphylococcus epidermidis
B. Streptococcus pyogenes
C. Propionibacterium acnes
D. Bacillus anthracis
Answer: A

7. Warts are caused by
A. human papillomavirus.
B. herpes simplex virus.
C. adenoviruses.
D. parvovirus B19.
Answer: A

8. Which of these viruses can spread to the eye to cause a form of keratitis?
A. human papillomavirus
B. herpes simplex virus 1
C. parvovirus B19
D. circoviruses
Answer: B

9. Cold sores are associated with:
A. human papillomavirus
B. roseola
C. herpes simplex viruses
D. human herpesvirus 6
Answer: C
However , it is possible to perform serological tests to confirm the diagnosis . <hl> While treatment may be recommended to control the fever , the disease usually resolves without treatment within a week after the fever develops . <hl> <hl> For individuals at particular risk , such as those who are immunocompromised , the antiviral medication ganciclovir may be used . <hl>", "hl_sentences": "The viral diseases roseola and fifth disease are somewhat similar in terms of their presentation , but they are caused by different viruses . Roseola , sometimes called roseola infantum or exanthem subitum ( “ sudden rash ” ) , is a mild viral infection usually caused by human herpesvirus - 6 ( HHV - 6 ) and occasionally by HHV - 7 . While treatment may be recommended to control the fever , the disease usually resolves without treatment within a week after the fever develops . For individuals at particular risk , such as those who are immunocompromised , the antiviral medication ganciclovir may be used .", "question": { "cloze_format": "The disease that is usually self-limiting but is most commonly treated with ganciclovir if medical treatment is needed is ___.", "normal_format": "Which disease is usually self-limiting but is most commonly treated with ganciclovir if medical treatment is needed?", "question_choices": [ "roseola", "oral herpes", "papillomas", "viral conjunctivitis" ], "question_id": "fs-id1167663560800", "question_text": "Which disease is usually self-limiting but is most commonly treated with ganciclovir if medical treatment is needed?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> Viral conjunctivitis is commonly associated with colds caused by adenoviruses ; however , other viruses can also cause conjunctivitis . <hl> If the causative agent is uncertain , eye discharge can be tested to aid in diagnosis . Antibiotic treatment of viral conjunctivitis is ineffective , and symptoms usually resolve without treatment within a week or two .", "hl_sentences": "Viral conjunctivitis is commonly associated with colds caused by adenoviruses ; however , other viruses can also cause conjunctivitis .", "question": { "cloze_format": "Adenoviruses can cause ___ .", "normal_format": "What can adenoviruses cause? ", "question_choices": [ "viral conjunctivitis", "herpetic conjunctivitis", "papillomas", "oral herpes" ], "question_id": "fs-id1167663611302", "question_text": "Adenoviruses can cause:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "Disease Profile Mycoses of the Skin Cutaneous mycoses are typically opportunistic , only able to cause infection when the skin barrier is breached through a wound . <hl> Tineas are the exception , as the dermatophytes responsible for tineas are able to grow on skin , hair , and nails , especially in moist conditions . <hl> Most mycoses of the skin can be avoided through good hygiene and proper wound care . Treatment requires antifungal medications . <hl> Figure 21.33 summarizes the characteristics of some common fungal infections of the skin . <hl> 21.5 Protozoan and Helminthic Infections of the Skin and Eyes", "hl_sentences": "Tineas are the exception , as the dermatophytes responsible for tineas are able to grow on skin , hair , and nails , especially in moist conditions . 
Figure 21.33 summarizes the characteristics of some common fungal infections of the skin .", "question": { "cloze_format": "___________ is a superficial fungal infection found on the head.", "normal_format": "What is a superficial fungal infection found on the head?", "question_choices": [ "Tinea cruris", "Tinea capitis", "Tinea pedis", "Tinea corporis" ], "question_id": "fs-id1167663660765", "question_text": "___________ is a superficial fungal infection found on the head." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Several approaches may be used to diagnose tineas . <hl> A Wood ’ s lamp ( also called a black lamp ) with a wavelength of 365 nm is often used . <hl> <hl> When directed on a tinea , the ultraviolet light emitted from the Wood ’ s lamp causes the fungal elements ( spores and hyphae ) to fluoresce . <hl> Direct microscopic evaluation of specimens from skin scrapings , hair , or nails can also be used to detect fungi . <hl> Generally , these specimens are prepared in a wet mount using a potassium hydroxide solution ( 10 % – 20 % aqueous KOH ) , which dissolves the keratin in hair , nails , and skin cells to allow for visualization of the hyphae and fungal spores . <hl> The specimens may be grown on Sabouraud dextrose CC ( chloramphenicol / cyclohexamide ) , a selective agar that supports dermatophyte growth while inhibiting the growth of bacteria and saprophytic fungi ( Figure 21.30 ) . Macroscopic colony morphology is often used to initially identify the genus of the dermatophyte ; identification can be further confirmed by visualizing the microscopic morphology using either a slide culture or a sticky tape prep stained with lactophenol cotton blue .", "hl_sentences": "A Wood ’ s lamp ( also called a black lamp ) with a wavelength of 365 nm is often used . When directed on a tinea , the ultraviolet light emitted from the Wood ’ s lamp causes the fungal elements ( spores and hyphae ) to fluoresce . Generally , these specimens are prepared in a wet mount using a potassium hydroxide solution ( 10 % – 20 % aqueous KOH ) , which dissolves the keratin in hair , nails , and skin cells to allow for visualization of the hyphae and fungal spores .", "question": { "cloze_format": "A health-care professional would use a Wood’s lamp for a suspected case of ringworm ___ .", "normal_format": "For what purpose would a health-care professional use a Wood’s lamp for a suspected case of ringworm?", "question_choices": [ "to prevent the rash from spreading", "to kill the fungus", "to visualize the fungus", "to examine the fungus microscopically" ], "question_id": "fs-id1167661298441", "question_text": "For what purpose would a health-care professional use a Wood’s lamp for a suspected case of ringworm?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "Several approaches may be used to diagnose tineas . A Wood ’ s lamp ( also called a black lamp ) with a wavelength of 365 nm is often used . When directed on a tinea , the ultraviolet light emitted from the Wood ’ s lamp causes the fungal elements ( spores and hyphae ) to fluoresce . Direct microscopic evaluation of specimens from skin scrapings , hair , or nails can also be used to detect fungi . 
Generally , these specimens are prepared in a wet mount using a potassium hydroxide solution ( 10 % – 20 % aqueous KOH ) , which dissolves the keratin in hair , nails , and skin cells to allow for visualization of the hyphae and fungal spores . <hl> The specimens may be grown on Sabouraud dextrose CC ( chloramphenicol / cyclohexamide ) , a selective agar that supports dermatophyte growth while inhibiting the growth of bacteria and saprophytic fungi ( Figure 21.30 ) . <hl> Macroscopic colony morphology is often used to initially identify the genus of the dermatophyte ; identification can be further confirmed by visualizing the microscopic morphology using either a slide culture or a sticky tape prep stained with lactophenol cotton blue .", "hl_sentences": "The specimens may be grown on Sabouraud dextrose CC ( chloramphenicol / cyclohexamide ) , a selective agar that supports dermatophyte growth while inhibiting the growth of bacteria and saprophytic fungi ( Figure 21.30 ) .", "question": { "cloze_format": "Sabouraud dextrose agar CC is selective for___.", "normal_format": "What is sabouraud dextrose agar CC selective for?", "question_choices": [ "all fungi", "non-saprophytic fungi", "bacteria", "viruses" ], "question_id": "fs-id1167663727696", "question_text": "Sabouraud dextrose agar CC is selective for:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> Sporothrix infection can be diagnosed based upon histologic examination of the affected tissue . <hl> Its macroscopic morphology can be observed by culturing the mold on potato dextrose agar , and its microscopic morphology can be observed by staining a slide culture with lactophenol cotton blue . <hl> Treatment with itraconazole is generally recommended . <hl>", "hl_sentences": "Sporothrix infection can be diagnosed based upon histologic examination of the affected tissue . Treatment with itraconazole is generally recommended .", "question": { "cloze_format": "The first-line recommended treatment for sporotrichosis is ___.", "normal_format": "What is the first-line recommended treatment for sporotrichosis?", "question_choices": [ "itraconazole", "clindamycin", "amphotericin", "nystatin" ], "question_id": "fs-id1167661486071", "question_text": "The first-line recommended treatment for sporotrichosis is:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> While Acanthamoeba keratitis is initially mild , it can lead to severe corneal damage , vision impairment , or even blindness if left untreated . <hl> <hl> Similar to eye infections involving P . aeruginosa , Acanthamoeba poses a much greater risk to wearers of contact lenses because the amoeba can thrive in the space between contact lenses and the cornea . <hl> Prevention through proper contact lens care is important . <hl> Lenses should always be properly disinfected prior to use , and should never be worn while swimming or using a hot tub . <hl>", "hl_sentences": "While Acanthamoeba keratitis is initially mild , it can lead to severe corneal damage , vision impairment , or even blindness if left untreated . Similar to eye infections involving P . aeruginosa , Acanthamoeba poses a much greater risk to wearers of contact lenses because the amoeba can thrive in the space between contact lenses and the cornea . 
Lenses should always be properly disinfected prior to use , and should never be worn while swimming or using a hot tub .", "question": { "cloze_format": "___ is most likely to cause an Acanthamoeba infection.", "normal_format": "Which of the following is most likely to cause an Acanthamoeba infection?", "question_choices": [ "swimming in a lake while wearing contact lenses", "being bitten by deerflies in Central Africa", "living environments in a college dormitory with communal showers", "participating in a contact sport such as wrestling" ], "question_id": "fs-id1167662405198", "question_text": "Which of the following is most likely to cause an Acanthamoeba infection?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> The name “ eye worm ” alludes to the visible migration of worms across the conjunctiva of the eye . <hl> Adult worms live in the subcutaneous tissues and can travel at about 1 cm per hour . They can often be observed when migrating through the eye , and sometimes under the skin ; in fact , this is generally how the disease is diagnosed . It is also possible to test for antibodies , but the presence of antibodies does not necessarily indicate a current infection ; it only means that the individual was exposed at some time . Some patients are asymptomatic , but in others the migrating worms can cause fever and areas of allergic inflammation known as Calabar swellings . <hl> Worms migrating through the conjunctiva can cause temporary eye pain and itching , but generally there is no lasting damage to the eye . <hl> Some patients experience a range of other symptoms , such as widespread itching , hives , and joint and muscle pain . <hl> The helminth Loa loa , also known as the African eye worm , is a nematode that can cause loiasis , a disease endemic to West and Central Africa ( Figure 21.36 ) . <hl> The disease does not occur outside that region except when carried by travelers . There is evidence that individual genetic differences affect susceptibility to developing loiasis after infection by the Loa loa worm . Even in areas in which Loa loa worms are common , the disease is generally found in less than 30 % of the population . 17 It has been suggested that travelers who spend time in the region may be somewhat more susceptible to developing symptoms than the native population , and the presentation of infection may differ . 18 17 Garcia , A .. et al . “ Genetic Epidemiology of Host Predisposition Microfilaraemia in Human Loiasis . ” Tropical Medicine and International Health 4 ( 1999 ) 8: 565 – 74 . http://www.ncbi.nlm.nih.gov/pubmed/10499080 . Accessed Sept 14 , 2016 . 18 Spinello , A . , et al . “ Imported Loa loa Filariasis : Three Cases and a Review of Cases Reported in Non-Endemic Countries in the Past 25 Years . ” International Journal of Infectious Disease 16 ( 2012 ) 9 : e649 – e662 . DOI : http://dx.doi.org/10.1016/j.ijid.2012.05.1023 .", "hl_sentences": "The name “ eye worm ” alludes to the visible migration of worms across the conjunctiva of the eye . Worms migrating through the conjunctiva can cause temporary eye pain and itching , but generally there is no lasting damage to the eye . 
The helminth Loa loa , also known as the African eye worm , is a nematode that can cause loiasis , a disease endemic to West and Central Africa ( Figure 21.36 ) .", "question": { "cloze_format": "The parasitic Loa loa worm can cause great pain when it ___.", "normal_format": "When can the parasitic Loa loa worm cause great pain?", "question_choices": [ "moves through the bloodstream", "exits through the skin of the foot", "travels through the conjunctiva", "enters the digestive tract" ], "question_id": "fs-id1167660270679", "question_text": "The parasitic Loa loa worm can cause great pain when it:" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> The name “ eye worm ” alludes to the visible migration of worms across the conjunctiva of the eye . <hl> Adult worms live in the subcutaneous tissues and can travel at about 1 cm per hour . They can often be observed when migrating through the eye , and sometimes under the skin ; in fact , this is generally how the disease is diagnosed . <hl> It is also possible to test for antibodies , but the presence of antibodies does not necessarily indicate a current infection ; it only means that the individual was exposed at some time . <hl> Some patients are asymptomatic , but in others the migrating worms can cause fever and areas of allergic inflammation known as Calabar swellings . Worms migrating through the conjunctiva can cause temporary eye pain and itching , but generally there is no lasting damage to the eye . Some patients experience a range of other symptoms , such as widespread itching , hives , and joint and muscle pain . <hl> The helminth Loa loa , also known as the African eye worm , is a nematode that can cause loiasis , a disease endemic to West and Central Africa ( Figure 21.36 ) . <hl> The disease does not occur outside that region except when carried by travelers . There is evidence that individual genetic differences affect susceptibility to developing loiasis after infection by the Loa loa worm . Even in areas in which Loa loa worms are common , the disease is generally found in less than 30 % of the population . 17 It has been suggested that travelers who spend time in the region may be somewhat more susceptible to developing symptoms than the native population , and the presentation of infection may differ . 18 17 Garcia , A .. et al . “ Genetic Epidemiology of Host Predisposition Microfilaraemia in Human Loiasis . ” Tropical Medicine and International Health 4 ( 1999 ) 8: 565 – 74 . http://www.ncbi.nlm.nih.gov/pubmed/10499080 . Accessed Sept 14 , 2016 . 18 Spinello , A . , et al . “ Imported Loa loa Filariasis : Three Cases and a Review of Cases Reported in Non-Endemic Countries in the Past 25 Years . ” International Journal of Infectious Disease 16 ( 2012 ) 9 : e649 – e662 . DOI : http://dx.doi.org/10.1016/j.ijid.2012.05.1023 .", "hl_sentences": "The name “ eye worm ” alludes to the visible migration of worms across the conjunctiva of the eye . It is also possible to test for antibodies , but the presence of antibodies does not necessarily indicate a current infection ; it only means that the individual was exposed at some time . The helminth Loa loa , also known as the African eye worm , is a nematode that can cause loiasis , a disease endemic to West and Central Africa ( Figure 21.36 ) .", "question": { "cloze_format": "A patient tests positive for Loa loa antibodies. 
This test indicates that ___ .", "normal_format": "A patient tests positive for Loa loa antibodies. What does this test indicate?", "question_choices": [ "The individual was exposed to Loa loa at some point.", "The individual is currently suffering from loiasis.", "The individual has never been exposed to Loa loa.", "The individual is immunosuppressed." ], "question_id": "fs-id1167662784684", "question_text": "A patient tests positive for Loa loa antibodies. What does this test indicate?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> Acanthamoeba keratitis is difficult to treat , and prompt treatment is necessary to prevent the condition from progressing . <hl> <hl> The condition generally requires three to four weeks of intensive treatment to resolve . <hl> <hl> Common treatments include topical antiseptics ( e . g . , polyhexamethylene biguanide , chlorhexidine , or both ) , sometimes with painkillers or corticosteroids ( although the latter are controversial because they suppress the immune system , which can worsen the infection ) . <hl> Azoles are sometimes prescribed as well . Advanced cases of keratitis may require a corneal transplant to prevent blindness .", "hl_sentences": "Acanthamoeba keratitis is difficult to treat , and prompt treatment is necessary to prevent the condition from progressing . The condition generally requires three to four weeks of intensive treatment to resolve . Common treatments include topical antiseptics ( e . g . , polyhexamethylene biguanide , chlorhexidine , or both ) , sometimes with painkillers or corticosteroids ( although the latter are controversial because they suppress the immune system , which can worsen the infection ) .", "question": { "cloze_format": "________ is commonly treated with a combination of chlorhexidine and polyhexamethylene biguanide.", "normal_format": "What is commonly treated with a combination of chlorhexidine and polyhexamethylene biguanide?", "question_choices": [ "Acanthamoeba keratitis", "Sporotrichosis", "Candidiasis", "Loiasis" ], "question_id": "fs-id1167662515860", "question_text": "________ is commonly treated with a combination of chlorhexidine and polyhexamethylene biguanide." }, "references_are_paraphrase": 0 } ]
21
21.1 Anatomy and Normal Microbiota of the Skin and Eyes

Learning Objectives
Describe the major anatomical features of the skin and eyes
Compare and contrast the microbiomes of various body sites, such as the hands, back, feet, and eyes
Explain how microorganisms overcome defenses of skin and eyes in order to cause infection
Describe general signs and symptoms of disease associated with infections of the skin and eyes

Clinical Focus Part 1
Sam, a college freshman with a bad habit of oversleeping, nicked himself shaving in a rush to get to class on time. At the time, he didn't think twice about it. But two days later, he noticed the cut was surrounded by a reddish area of skin that was warm to the touch. When the wound started oozing pus, he decided he had better stop by the university's clinic. The doctor took a sample from the lesion and then cleaned the area.
What type of microbe could be responsible for Sam's infection?

Human skin is an important part of the innate immune system. In addition to serving a wide range of other functions, the skin serves as an important barrier to microbial invasion. Not only is it a physical barrier to penetration of deeper tissues by potential pathogens, but it also provides an inhospitable environment for the growth of many pathogens. In this section, we will provide a brief overview of the anatomy and normal microbiota of the skin and eyes, along with general symptoms associated with skin and eye infections.

Layers of the Skin

Human skin is made up of several layers and sublayers. The two main layers are the epidermis and the dermis. These layers cover a third layer of tissue called the hypodermis, which consists of fibrous and adipose connective tissue (Figure 21.2).

The epidermis is the outermost layer of the skin, and it is relatively thin. The exterior surface of the epidermis, called the stratum corneum, primarily consists of dead skin cells. This layer of dead cells limits direct contact between the outside world and live cells. The stratum corneum is rich in keratin, a tough, fibrous protein that is also found in hair and nails. Keratin helps make the outer surface of the skin relatively tough and waterproof. It also helps to keep the surface of the skin dry, which reduces microbial growth. However, some microbes are still able to live on the surface of the skin, and some of these can be shed with dead skin cells in the process of desquamation, which is the shedding and peeling of skin that occurs as a normal process but that may be accelerated when infection is present.

Beneath the epidermis lies a thicker skin layer called the dermis. The dermis contains connective tissue and embedded structures such as blood vessels, nerves, and muscles. Structures called hair follicles (from which hair grows) are located within the dermis, even though much of their structure consists of epidermal tissue. The dermis also contains the two major types of glands found in human skin: sweat glands (tubular glands that produce sweat) and sebaceous glands (which are associated with hair follicles and produce sebum, a lipid-rich substance containing proteins and minerals).

Perspiration (sweat) provides some moisture to the epidermis, which can increase the potential for microbial growth. For this reason, more microbes are found on the regions of the skin that produce the most sweat, such as the skin of the underarms and groin.
However, in addition to water, sweat also contains substances that inhibit microbial growth, such as salts, lysozyme, and antimicrobial peptides. Sebum also serves to protect the skin and reduce water loss. Although some of the lipids and fatty acids in sebum inhibit microbial growth, sebum contains compounds that provide nutrition for certain microbes.

Check Your Understanding
How does desquamation help prevent infections?

Normal Microbiota of the Skin

The skin is home to a wide variety of normal microbiota, consisting of commensal organisms that derive nutrition from skin cells and secretions such as sweat and sebum. The normal microbiota of skin tends to inhibit transient-microbe colonization by producing antimicrobial substances and outcompeting other microbes that land on the surface of the skin. This helps to protect the skin from pathogenic infection.

The skin's properties differ from one region of the body to another, as does the composition of the skin's microbiota. The availability of nutrients and moisture partly dictates which microorganisms will thrive in a particular region of the skin. Relatively moist skin, such as that of the nares (nostrils) and underarms, has a much different microbiota than the drier skin on the arms, legs, hands, and top of the feet. Some areas of the skin have higher densities of sebaceous glands. These sebum-rich areas, which include the back, the folds at the side of the nose, and the back of the neck, harbor distinct microbial communities that are less diverse than those found on other parts of the body.

Different types of bacteria dominate the dry, moist, and sebum-rich regions of the skin. The most abundant microbes typically found in the dry and sebaceous regions are Betaproteobacteria and Propionibacteria, respectively. In the moist regions, Corynebacterium and Staphylococcus are most commonly found (Figure 21.3). Viruses and fungi are also found on the skin, with Malassezia being the most common type of fungus found as part of the normal microbiota. The role and populations of viruses in the microbiota, known as viromes, are still not well understood, and there are limitations to the techniques used to identify them. However, Circoviridae, Papillomaviridae, and Polyomaviridae appear to be the most common residents in the healthy skin virome. 1 2 3

1 Belkaid, Y., and J.A. Segre. "Dialogue Between Skin Microbiota and Immunity." Science 346 (2014) 6212:954–959.
2 Foulongne, Vincent, et al. "Human Skin Microbiota: High Diversity of DNA Viruses Identified on the Human Skin by High Throughput Sequencing." PLoS ONE (2012) 7(6): e38499. doi: 10.1371/journal.pone.0038499.
3 Robinson, C.M., and J.K. Pfeiffer. "Viruses and the Microbiota." Annual Review of Virology (2014) 1:55–59. doi: 10.1146/annurev-virology-031413-085550.

Check Your Understanding
What are the four most common bacteria that are part of the normal skin microbiota?

Infections of the Skin

While the microbiota of the skin can play a protective role, it can also cause harm in certain cases. Often, an opportunistic pathogen residing in the skin microbiota of one individual may be transmitted to another individual more susceptible to an infection.
For example, methicillin-resistant Staphylococcus aureus (MRSA) can often take up residence in the nares of health care workers and hospital patients; though harmless on intact, healthy skin, MRSA can cause infections if introduced into other parts of the body, as might occur during surgery or via a post-surgical incision or wound. This is one reason why clean surgical sites are so important.

Injury or damage to the skin can allow microbes to enter deeper tissues, where nutrients are more abundant and the environment is more conducive to bacterial growth. Wound infections are common after a puncture or laceration that damages the physical barrier of the skin. Microbes may infect structures in the dermis, such as hair follicles and glands, causing a localized infection, or they may reach the bloodstream, which can lead to a systemic infection.

In some cases, infectious microbes can cause a variety of rashes or lesions that differ in their physical characteristics. These rashes can be the result of inflammation reactions or direct responses to toxins produced by the microbes. Table 21.1 lists some of the medical terminology used to describe skin lesions and rashes based on their characteristics; Figure 21.4 and Figure 21.5 illustrate some of the various types of skin lesions. It is important to note that many different diseases can lead to skin conditions of very similar appearance; thus the terms used in the table are generally not exclusive to a particular type of infection or disease.

Table 21.1 Some Medical Terms Associated with Skin Lesions and Rashes
abscess: localized collection of pus
bulla (pl., bullae): fluid-filled blister at least 5 mm in diameter
carbuncle: deep, pus-filled abscess generally formed from multiple furuncles
crust: dried fluids from a lesion on the surface of the skin
cyst: encapsulated sac filled with fluid, semi-solid matter, or gas, typically located just below the upper layers of skin
folliculitis: a localized rash due to inflammation of hair follicles
furuncle (boil): pus-filled abscess due to infection of a hair follicle
macules: smooth spots of discoloration on the skin
papules: small raised bumps on the skin
pseudocyst: lesion that resembles a cyst but with a less defined boundary
purulent: pus-producing; suppurative
pustules: fluid- or pus-filled bumps on the skin
pyoderma: any suppurative (pus-producing) infection of the skin
suppurative: producing pus; purulent
ulcer: break in the skin; open sore
vesicle: small, fluid-filled lesion
wheal: swollen, inflamed skin that itches or burns, such as from an insect bite

Check Your Understanding
How can asymptomatic health care workers transmit bacteria such as MRSA to patients?

Anatomy and Microbiota of the Eye

Although the eye and skin have distinct anatomy, they are both in direct contact with the external environment. An important component of the eye is the nasolacrimal drainage system, which serves as a conduit for the fluid of the eye, called tears. Tears flow from the external eye to the nasal cavity by the lacrimal apparatus, which is composed of the structures involved in tear production (Figure 21.6). The lacrimal gland, above the eye, secretes tears to keep the eye moist. There are two small openings, one on the inside edge of the upper eyelid and one on the inside edge of the lower eyelid, near the nose. Each of these openings is called a lacrimal punctum.
Together, these lacrimal puncta collect tears from the eye that are then conveyed through lacrimal ducts to a reservoir for tears called the lacrimal sac, also known as the dacrocyst or tear sac. From the sac, tear fluid flows via a nasolacrimal duct to the inner nose. Each nasolacrimal duct is located underneath the skin and passes through the bones of the face into the nose. Chemicals in tears, such as defensins, lactoferrin, and lysozyme, help to prevent colonization by pathogens. In addition, mucins facilitate removal of microbes from the surface of the eye.

The surfaces of the eyeball and inner eyelid are mucous membranes called conjunctiva. The normal conjunctival microbiota has not been well characterized, but does exist. One small study (part of the Ocular Microbiome project) found twelve genera that were consistently present in the conjunctiva. 4 These microbes are thought to help defend the membranes against pathogens. However, it is still unclear which microbes may be transient and which may form a stable microbiota. 5

4 Abelson, M.B., Lane, K., and Slocum, C. "The Secrets of Ocular Microbiomes." Review of Ophthalmology June 8, 2015. http://www.reviewofophthalmology.com/content/t/ocular_disease/c/55178. Accessed Sept 14, 2016.
5 Shaikh-Lesko, R. "Visualizing the Ocular Microbiome." The Scientist May 12, 2014. http://www.the-scientist.com/?articles.view/articleNo/39945/title/Visualizing-the-Ocular-Microbiome. Accessed Sept 14, 2016.

Use of contact lenses can cause changes in the normal microbiota of the conjunctiva by introducing another surface into the natural anatomy of the eye. Research is currently underway to better understand how contact lenses may impact the normal microbiota and contribute to eye disease.

The gel-like material inside of the eyeball is called the vitreous humor. Unlike the conjunctiva, it is protected from contact with the environment and is almost always sterile, with no normal microbiota (Figure 21.7).

Infections of the Eye

The conjunctiva is a frequent site of infection of the eye; like other mucous membranes, it is also a common portal of entry for pathogens. Inflammation of the conjunctiva is called conjunctivitis, although it is commonly known as pinkeye because of the pink appearance of the eye. Infections of deeper structures, beneath the cornea, are less common (Figure 21.8).

Conjunctivitis occurs in multiple forms. It may be acute or chronic. Acute purulent conjunctivitis is associated with pus formation, while acute hemorrhagic conjunctivitis is associated with bleeding in the conjunctiva. The term blepharitis refers to an inflammation of the eyelids, while keratitis refers to an inflammation of the cornea (Figure 21.8); keratoconjunctivitis is an inflammation of both the cornea and the conjunctiva, and dacryocystitis is an inflammation of the lacrimal sac that can often occur when a nasolacrimal duct is blocked.

Infections leading to conjunctivitis, blepharitis, keratoconjunctivitis, or dacryocystitis may be caused by bacteria or viruses, but allergens, pollutants, or chemicals can also irritate the eye and cause inflammation of various structures. Viral infection is a more likely cause of conjunctivitis in cases that present with fever, watery discharge, and itchy eyes alongside an upper respiratory infection. Table 21.2 summarizes some common forms of conjunctivitis and blepharitis.
Table 21.2 Types of Conjunctivitis and Blepharitis
Acute purulent conjunctivitis: conjunctivitis with purulent discharge. Causative agents: bacterial (Haemophilus, Staphylococcus).
Acute hemorrhagic conjunctivitis: involves subconjunctival hemorrhages. Causative agents: viral (Picornaviridae).
Acute ulcerative blepharitis: infection involving the eyelids; pustules and ulcers may develop. Causative agents: bacterial (staphylococcal) or viral (herpes simplex, varicella-zoster, etc.).
Follicular conjunctivitis: inflammation of the conjunctiva with nodules (dome-shaped structures that are red at the base and pale on top). Causative agents: viral (adenovirus and others); environmental irritants.
Dacryocystitis: inflammation of the lacrimal sac, often associated with a plugged nasolacrimal duct. Causative agents: bacterial (Haemophilus, Staphylococcus, Streptococcus).
Keratitis: inflammation of the cornea. Causative agents: bacterial, viral, or protozoal; environmental irritants.
Keratoconjunctivitis: inflammation of the cornea and conjunctiva. Causative agents: bacterial, viral (adenoviruses), or other causes (including dryness of the eye).
Nonulcerative blepharitis: inflammation, irritation, and redness of the eyelids without ulceration. Causative agents: environmental irritants; allergens.
Papillary conjunctivitis: inflammation of the conjunctiva; nodules and papillae with red tops develop. Causative agents: environmental irritants; allergens.

Check Your Understanding
How does the lacrimal apparatus help to prevent eye infections?

21.2 Bacterial Infections of the Skin and Eyes

Learning Objectives
Identify the most common bacterial pathogens that cause infections of the skin and eyes
Compare the major characteristics of specific bacterial diseases affecting the skin and eyes

Despite the skin's protective functions, infections are common. Gram-positive Staphylococcus spp. and Streptococcus spp. are responsible for many of the most common skin infections. However, many skin conditions are not strictly associated with a single pathogen. Opportunistic pathogens of many types may infect skin wounds, and individual cases with identical symptoms may result from different pathogens or combinations of pathogens. In this section, we will examine some of the most important bacterial infections of the skin and eyes and discuss how biofilms can contribute to and exacerbate such infections. Key features of bacterial skin and eye infections are also summarized in the Disease Profile boxes throughout this section.

Staphylococcal Infections of the Skin

Staphylococcus species are commonly found on the skin, with S. epidermidis and S. hominis being prevalent in the normal microbiota. S. aureus is also commonly found in the nasal passages and on healthy skin, but pathogenic strains are often the cause of a broad range of infections of the skin and other body systems.

S. aureus is quite contagious. It is spread easily through skin-to-skin contact, and because many people are chronic nasal carriers (asymptomatic individuals who carry S. aureus in their nares), the bacteria can easily be transferred from the nose to the hands and then to fomites or other individuals. Because it is so contagious, S. aureus is prevalent in most community settings. This prevalence is particularly problematic in hospitals, where antibiotic-resistant strains of the bacteria may be present, and where immunocompromised patients may be more susceptible to infection. Resistant strains include methicillin-resistant S. aureus (MRSA), which can be acquired through health-care settings (hospital-acquired MRSA, or HA-MRSA) or in the community (community-acquired MRSA, or CA-MRSA).
Hospital patients often arrive at health-care facilities already colonized with antibiotic-resistant strains of S. aureus that can be transferred to health-care providers and other patients. Some hospitals have attempted to detect these individuals in order to institute prophylactic measures, but they have had mixed success (see Eye on Ethics: Screening Patients for MRSA).

When a staphylococcal infection develops, choice of medication is important. As discussed above, many staphylococci (such as MRSA) are resistant to some or many antibiotics. Thus, antibiotic sensitivity is measured to identify the most suitable antibiotic. However, even before receiving the results of sensitivity analysis, suspected S. aureus infections are often initially treated with drugs known to be effective against MRSA, such as trimethoprim-sulfamethoxazole (TMP/SMZ), clindamycin, a tetracycline (doxycycline or minocycline), or linezolid.

The pathogenicity of staphylococcal infections is often enhanced by characteristic chemicals secreted by some strains. Staphylococcal virulence factors include hemolysins called staphylolysins, which are cytotoxic for many types of cells, including skin cells and white blood cells. Virulent strains of S. aureus are also coagulase-positive, meaning they produce coagulase, a plasma-clotting protein that is involved in abscess formation. They may also produce leukocidins, which kill white blood cells and can contribute to the production of pus, and Protein A, which inhibits phagocytosis by binding to the constant region of antibodies. Some virulent strains of S. aureus also produce other toxins, such as toxic shock syndrome toxin-1 (see Virulence Factors of Bacterial and Viral Pathogens).

To confirm the causative agent of a suspected staphylococcal skin infection, samples from the wound are cultured. Under the microscope, gram-positive Staphylococcus species have cellular arrangements that form grapelike clusters; when grown on blood agar, colonies have a unique pigmentation ranging from opaque white to cream. A catalase test is used to distinguish Staphylococcus from Streptococcus, which is also a genus of gram-positive cocci and a common cause of skin infections. Staphylococcus species are catalase-positive while Streptococcus species are catalase-negative.

Other tests are performed on samples from the wound in order to distinguish coagulase-positive species of Staphylococcus (CoPS) such as S. aureus from common coagulase-negative species (CoNS) such as S. epidermidis. Although CoNS are less likely than CoPS to cause human disease, they can cause infections when they enter the body, as can sometimes occur via catheters, indwelling medical devices, and wounds. Passive agglutination testing can be used to distinguish CoPS from CoNS. If the sample is coagulase-positive, the sample is generally presumed to contain S. aureus. Additional genetic testing would be necessary to identify the particular strain of S. aureus.

Another way to distinguish CoPS from CoNS is by culturing the sample on mannitol salt agar (MSA). Staphylococcus species readily grow on this medium because they are tolerant of the high concentration of sodium chloride (7.5% NaCl). However, CoPS such as S. aureus ferment mannitol (which will be evident on an MSA plate), whereas CoNS such as S. epidermidis do not ferment mannitol but can be distinguished by the fermentation of other sugars such as lactose, malonate, and raffinose (Figure 21.9).
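Taken together, these laboratory tests form a simple decision tree: gram stain and microscopy, then a catalase test to separate Staphylococcus from Streptococcus, then a coagulase test (or mannitol fermentation on MSA) to separate CoPS from CoNS. The following is a minimal sketch of that logic in Python, for illustration only; the data model, function name, and result strings are assumptions, not a clinical protocol.

```python
# Illustrative sketch of the identification workflow described above.
# The IsolateResults fields and category strings are invented for this
# example; real identification relies on the full battery of lab methods.

from dataclasses import dataclass

@dataclass
class IsolateResults:
    gram_positive_cocci: bool   # Gram stain shows gram-positive cocci
    catalase_positive: bool     # catalase test result
    coagulase_positive: bool    # coagulase / passive agglutination result
    ferments_mannitol: bool     # yellow color change on an MSA plate

def presumptive_id(r: IsolateResults) -> str:
    """Return a presumptive identification for a skin-wound isolate."""
    if not r.gram_positive_cocci:
        return "not Staphylococcus/Streptococcus; pursue other workup"
    if not r.catalase_positive:
        # Catalase-negative gram-positive cocci suggest Streptococcus.
        return "presumptive Streptococcus spp. (e.g., S. pyogenes)"
    # Catalase-positive gram-positive cocci: Staphylococcus. Coagulase
    # (or, alternatively, mannitol fermentation on MSA) separates CoPS
    # such as S. aureus from CoNS such as S. epidermidis.
    if r.coagulase_positive or r.ferments_mannitol:
        return "presumptive S. aureus (CoPS); strain ID needs genetic tests"
    return "coagulase-negative Staphylococcus (CoNS), e.g., S. epidermidis"

# Example: a catalase-positive, coagulase-positive, mannitol-fermenting isolate
print(presumptive_id(IsolateResults(True, True, True, True)))
```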
Eye on Ethics
Screening Patients for MRSA

According to the CDC, 86% of invasive MRSA infections are associated in some way with healthcare, as opposed to being community-acquired. In hospitals and clinics, asymptomatic patients who harbor MRSA may spread the bacteria to individuals who are more susceptible to serious illness. In an attempt to control the spread of MRSA, hospitals have tried screening patients for MRSA. If patients test positive following a nasal swab test, they can undergo decolonization using chlorhexidine washes or intranasal mupirocin. Some studies have reported substantial reductions in MRSA disease following implementation of these protocols, while others have not. This is partly because there is no standard protocol for these procedures. Several different MRSA identification tests may be used, some involving slower culturing techniques and others rapid testing. Other factors, such as the effectiveness of general hand-washing protocols, may also play a role in helping to prevent MRSA transmission.

There are still other questions that need to be addressed: How frequently should patients be screened? Which individuals should be tested? From where on the body should samples be collected? Will increased resistance develop from the decolonization procedures? Even if identification and decolonization procedures are perfected, ethical questions will remain. Should patients have the right to decline testing? Should a patient who tests positive for MRSA have the right to decline the decolonization procedure, and if so, should hospitals have the right to refuse treatment to the patient? How do we balance the individual's right to receive care with the rights of other patients who could be exposed to disease as a result?

Superficial Staphylococcal Infections

S. aureus is often associated with pyoderma, skin infections that are purulent. Pus formation occurs because many strains of S. aureus produce leukocidins, which kill white blood cells. These purulent skin infections may initially manifest as folliculitis, but can lead to furuncles or deeper abscesses called carbuncles.

Folliculitis generally presents as bumps and pimples that may be itchy, red, and/or pus-filled. In some cases, folliculitis is self-limiting, but if it continues for more than a few days, worsens, or returns repeatedly, it may require medical treatment. Sweat, skin injuries, ingrown hairs, tight clothing, irritation from shaving, and skin conditions can all contribute to folliculitis. Avoidance of tight clothing and skin irritation can help to prevent infection, but topical antibiotics (and sometimes other treatments) may also help. Folliculitis can be identified by skin inspection; treatment is generally started without first culturing and identifying the causative agent.

In contrast, furuncles (boils) are deeper infections (Figure 21.10). They are most common in those individuals (especially young adults and teenagers) who play contact sports, share athletic equipment, have poor nutrition, live in close quarters, or have weakened immune systems. Good hygiene and skin care can often help to prevent furuncles from becoming more infective, and they generally resolve on their own. However, if furuncles spread, increase in number or size, or lead to systemic symptoms such as fever and chills, then medical care is needed. They may sometimes need to be drained (at which time the pathogens can be cultured) and treated with antibiotics.
When multiple boils develop into a deeper lesion, it is called a carbuncle (Figure 21.10). Because carbuncles are deeper, they are more commonly associated with systemic symptoms and a general feeling of illness. Larger, recurrent, or worsening carbuncles require medical treatment, as do those associated with signs of illness such as fever. Carbuncles generally need to be drained and treated with antibiotics. While carbuncles are relatively easy to identify visually, culturing and laboratory analysis of the wound may be recommended for some infections because antibiotic resistance is relatively common. Proper hygiene is important to prevent these types of skin infections or to prevent the progression of existing infections.

Staphylococcal scalded skin syndrome (SSSS) is another superficial infection caused by S. aureus that is most commonly seen in young children, especially infants. Bacterial exotoxins first produce erythema (redness of the skin) and then severe peeling of the skin, as might occur after scalding (Figure 21.11). SSSS is diagnosed by examining characteristics of the skin (which may rub off easily), using blood tests to check for elevated white blood cell counts, culturing, and other methods. Intravenous antibiotics and fluid therapy are used as treatment.

Impetigo

The skin infection impetigo causes the formation of vesicles, pustules, and possibly bullae, often around the nose and mouth. Bullae are large, fluid-filled blisters that measure at least 5 mm in diameter. Impetigo can be diagnosed as either nonbullous or bullous. In nonbullous impetigo, vesicles and pustules rupture and become encrusted sores. Typically the crust is yellowish, often with exudate draining from the base of the lesion. In bullous impetigo, the bullae fill and rupture, resulting in larger, draining, encrusted lesions (Figure 21.12).

Especially common in children, impetigo is particularly concerning because it is highly contagious. Impetigo can be caused by S. aureus alone, by Streptococcus pyogenes alone, or by coinfection of S. aureus and S. pyogenes. Impetigo is often diagnosed through observation of its characteristic appearance, although culture and susceptibility testing may also be used. Topical or oral antibiotic treatment is typically effective in treating most cases of impetigo. However, cases caused by S. pyogenes can lead to serious sequelae (pathological conditions resulting from infection, disease, injury, therapy, or other trauma), such as acute glomerulonephritis (AGN), which is severe inflammation in the kidneys.

Nosocomial S. epidermidis Infections

Though not as virulent as S. aureus, the staphylococcus S. epidermidis can cause serious opportunistic infections. Such infections usually occur only in hospital settings. S. epidermidis is usually a harmless resident of the normal skin microbiota. However, health-care workers can inadvertently transfer S. epidermidis to medical devices that are inserted into the body, such as catheters, prostheses, and indwelling medical devices. Once it has bypassed the skin barrier, S. epidermidis can cause infections inside the body that can be difficult to treat. Like S. aureus, S. epidermidis is resistant to many antibiotics, and localized infections can become systemic if not treated quickly. To reduce the risk of nosocomial (hospital-acquired) S. epidermidis infections, health-care workers must follow strict procedures for handling and sterilizing medical devices before and during surgical procedures.
Check Your Understanding
Why are Staphylococcus aureus infections often purulent?

Streptococcal Infections of the Skin

Streptococcus species are gram-positive cocci with a microscopic morphology that resembles chains of bacteria. Colonies are typically small (1–2 mm in diameter) and translucent, with an entire edge and slightly raised elevation, and they can be nonhemolytic, alpha-hemolytic, or beta-hemolytic when grown on blood agar (Figure 21.13). Additionally, streptococci are facultative anaerobes that are catalase-negative.

The genus Streptococcus includes important pathogens that are categorized in serological Lancefield groups based on the distinguishing characteristics of their surface carbohydrates. The most clinically important streptococcal species in humans is S. pyogenes, also known as group A streptococcus (GAS). S. pyogenes produces a variety of extracellular enzymes, including streptolysins O and S, hyaluronidase, and streptokinase. These enzymes can aid in transmission and contribute to the inflammatory response. 6 S. pyogenes also produces a capsule and M protein, a streptococcal cell wall protein. These virulence factors help the bacteria to avoid phagocytosis while provoking a substantial immune response that contributes to symptoms associated with streptococcal infections.

6 Starr, C.R., and Engelberg, N.C. "Role of Hyaluronidase in Subcutaneous Spread and Growth of Group A Streptococcus." Infection and Immunity 2006 (7:1): 40–48. doi: 10.1128/IAI.74.1.40-48.2006.

S. pyogenes causes a wide variety of diseases not only in the skin, but in other organ systems as well. Examples of diseases elsewhere in the body include pharyngitis and scarlet fever, which will be covered in later chapters.

Cellulitis, Erysipelas, and Erythema Nodosum

Common streptococcal conditions of the skin include cellulitis, erysipelas, and erythema nodosum. An infection that develops in the dermis or hypodermis can cause cellulitis, which presents as a reddened area of the skin that is warm to the touch and painful. The causative agent is often S. pyogenes, which may breach the epidermis through a cut or abrasion, although cellulitis may also be caused by staphylococci. S. pyogenes can also cause erysipelas, a condition that presents as a large, intensely inflamed patch of skin involving the dermis (often on the legs or face). These infections can be suppurative, which results in a bullous form of erysipelas.

Streptococci and other pathogens may also cause a condition called erythema nodosum, characterized by inflammation in the subcutaneous fat cells of the hypodermis. It sometimes results from a streptococcal infection, though other pathogens can also cause the condition. It is not suppurative, but leads to red nodules on the skin, most frequently on the shins (Figure 21.14).

In general, streptococcal infections are best treated through identification of the specific pathogen, followed by treatment based upon that particular pathogen's susceptibility to different antibiotics. Many immunological tests, including agglutination reactions and ELISAs, can be used to detect streptococci. Penicillin is commonly prescribed for treatment of cellulitis and erysipelas because resistance is not widespread in streptococci at this time. In most patients, erythema nodosum is self-limiting and is not treated with antimicrobial drugs. Recommended treatments may include nonsteroidal anti-inflammatory drugs (NSAIDs), cool wet compresses, elevation, and bed rest.
Necrotizing Fasciitis

Streptococcal infections that start in the skin can sometimes spread elsewhere, resulting in a rare but potentially life-threatening condition called necrotizing fasciitis, sometimes referred to as flesh-eating bacterial syndrome. S. pyogenes is one of several species that can cause this rare but potentially fatal condition; others include Klebsiella, Clostridium, Escherichia coli, S. aureus, and Aeromonas hydrophila.

Necrotizing fasciitis occurs when the fascia, a thin layer of connective tissue between the skin and muscle, becomes infected. Severe invasive necrotizing fasciitis due to Streptococcus pyogenes occurs when virulence factors that are responsible for adhesion and invasion overcome host defenses. S. pyogenes invasins allow bacterial cells to adhere to tissues and establish infection. Bacterial proteases unique to S. pyogenes aggressively infiltrate and destroy host tissues, inactivate complement, and prevent neutrophil migration to the site of infection. The infection and resulting tissue death can spread very rapidly, as large areas of skin become detached and die. Treatment generally requires debridement (surgical removal of dead or infected tissue) or amputation of infected limbs to stop the spread of the infection; surgical treatment is supplemented with intravenous antibiotics and other therapies (Figure 21.15). Necrotizing fasciitis does not always originate from a skin infection; in some cases there is no known portal of entry. Some studies have suggested that experiencing a blunt force trauma can increase the risk of developing streptococcal necrotizing fasciitis.7

7 Nuwayhid, Z.B., Aronoff, D.M., and Mulla, Z.D. "Blunt Trauma as a Risk Factor for Group A Streptococcal Necrotizing Fasciitis." Annals of Epidemiology (2007) 17:878–881.

Check Your Understanding
How do staphylococcal infections differ in general presentation from streptococcal infections?

Clinical Focus Part 2

Observing that Sam's wound is purulent, the doctor tells him that he probably has a bacterial infection. She takes a sample from the lesion to send for laboratory analysis, but because it is Friday, she does not expect to receive the results until the following Monday. In the meantime, she prescribes an over-the-counter topical antibiotic ointment. She tells Sam to keep the wound clean and apply a new bandage with the ointment at least twice per day.

How would the lab technician determine if the infection is staphylococcal or streptococcal? Suggest several specific methods.
What tests might the lab perform to determine the best course of antibiotic treatment?

Pseudomonas Infections of the Skin

Another important skin pathogen is Pseudomonas aeruginosa, a gram-negative, oxidase-positive, aerobic bacillus that is commonly found in water and soil as well as on human skin. P. aeruginosa is a common cause of opportunistic infections of wounds and burns. It can also cause hot tub rash, a condition characterized by folliculitis that frequently afflicts users of pools and hot tubs (recall the Clinical Focus case in Microbial Biochemistry). P. aeruginosa is also the cause of otitis externa (swimmer's ear), an infection of the ear canal that causes itching, redness, and discomfort, and can progress to fever, pain, and swelling (Figure 21.16). Wounds infected with P. aeruginosa have a distinctive odor resembling grape soda or fresh corn tortillas.
This odor is caused by 2-aminoacetophenone, which P. aeruginosa uses in quorum sensing and which contributes to its pathogenicity. Wounds infected with certain strains of P. aeruginosa also produce a blue-green pus due to the pigments pyocyanin and pyoverdin, which also contribute to virulence. Pyoverdin is a siderophore that helps P. aeruginosa survive in low-iron environments by enhancing iron uptake. P. aeruginosa also produces several other virulence factors, including phospholipase C (a hemolysin capable of breaking down red blood cells), exoenzyme S (involved in adherence to epithelial cells), and exotoxin A (capable of causing tissue necrosis). Other virulence factors include a slime layer that allows the bacterium to avoid being phagocytized, fimbriae for adherence, and proteases that cause tissue damage. P. aeruginosa can be detected through the use of cetrimide agar, which is selective for Pseudomonas species (Figure 21.17).

Pseudomonas spp. tend to be resistant to most antibiotics. They often produce β-lactamases, may have mutations affecting porins (small channels in the outer membrane) that affect antibiotic uptake, and may pump some antibiotics out of the cell, all of which contribute to this resistance. Polymyxin B and gentamicin are effective, as are some fluoroquinolones. Otitis externa is typically treated with ear drops containing acetic acid, antibacterials, and/or steroids to reduce inflammation; ear drops may also include antifungals because fungi can sometimes cause or contribute to otitis externa. Wound infections caused by Pseudomonas spp. may be treated with topical antibiofilm agents that disrupt the formation of biofilms.

Check Your Understanding
Name at least two types of skin infections commonly caused by Pseudomonas spp.

Acne

One of the most ubiquitous skin conditions is acne. Acne afflicts nearly 80% of teenagers and young adults, but it can be found in individuals of all ages. Higher incidence among adolescents is due to hormonal changes that can result in overproduction of sebum.

Acne occurs when hair follicles become clogged by shed skin cells and sebum, causing non-inflammatory lesions called comedones. Comedones (singular "comedo") can take the form of whitehead and blackhead pimples. Whiteheads are covered by skin, whereas blackhead pimples are not; the black color occurs when lipids in the clogged follicle become exposed to the air and oxidize (Figure 21.18). Often comedones lead to infection by Propionibacterium acnes, a gram-positive, non-spore-forming, aerotolerant anaerobic bacillus found on skin that consumes components of sebum. P. acnes secretes enzymes that damage the hair follicle, causing inflammatory lesions that may include papules, pustules, nodules, or pseudocysts, depending on their size and severity.

Treatment of acne depends on the severity of the case. There are multiple ways to grade acne severity, but three levels are usually considered based on the number of comedones, the number of inflammatory lesions, and the types of lesions. Mild acne is treated with topical agents that may include salicylic acid (which helps to remove old skin cells) or retinoids (which have multiple mechanisms, including the reduction of inflammation). Moderate acne may be treated with antibiotics (erythromycin, clindamycin), acne creams (e.g., benzoyl peroxide), and hormones.
Severe acne may require treatment using strong medications such as isotretinoin (a retinoid that reduces oil buildup, among other effects, but that also has serious side effects such as photosensitivity). Other treatments, such as phototherapy and laser therapy to kill bacteria and possibly reduce oil production, are also sometimes used.

Check Your Understanding
What is the role of Propionibacterium acnes in causing acne?

Clinical Focus Resolution

Sam uses the topical antibiotic over the weekend to treat his wound, but he does not see any improvement. On Monday, the doctor calls to inform him that the results from his laboratory tests are in. The tests show evidence of both Staphylococcus and Streptococcus in his wound. The bacterial species were confirmed using several tests. A passive agglutination test confirmed the presence of S. aureus; in this type of test, latex beads coated with antibodies cause agglutination when S. aureus is present. Streptococcus pyogenes was confirmed in the wound based on bacitracin (0.04 units) susceptibility as well as latex agglutination tests specific for S. pyogenes.

Because many strains of S. aureus are resistant to antibiotics, the doctor had also requested an antimicrobial susceptibility test (AST) at the same time the specimen was submitted for identification. The results of the AST indicated no drug resistance for the Streptococcus spp.; the Staphylococcus spp. showed resistance to several common antibiotics, but were susceptible to cefoxitin and oxacillin. Once Sam began to use these new antibiotics, the infection resolved within a week and the lesion healed.

Anthrax

The zoonotic disease anthrax is caused by Bacillus anthracis, a gram-positive, endospore-forming, facultative anaerobe. Anthrax mainly affects animals such as sheep, goats, cattle, and deer, but can be found in humans as well. Sometimes called wool sorter's disease, it is often transmitted to humans through contact with infected animals or animal products, such as wool or hides. However, exposure to B. anthracis can occur by other means, as the endospores are widespread in soils and can survive for long periods of time, sometimes for hundreds of years.

The vast majority of anthrax cases (95–99%) occur when anthrax endospores enter the body through abrasions of the skin.8 This form of the disease is called cutaneous anthrax. It is characterized by the formation of a nodule on the skin; the cells within the nodule die, forming a black eschar, a mass of dead skin tissue (Figure 21.19). The localized infection can eventually lead to bacteremia and septicemia. If untreated, cutaneous anthrax can cause death in 20% of patients.9 Once in the skin tissues, B. anthracis endospores germinate and produce a capsule, which prevents the bacteria from being phagocytized, and two binary exotoxins that cause edema and tissue damage. The first of the two exotoxins consists of a combination of protective antigen (PA) and an enzymatic lethal factor (LF), forming lethal toxin (LeTX). The second consists of protective antigen (PA) and an edema factor (EF), forming edema toxin (EdTX).

8 Shadomy, S.V., Traxler, R.M., and Marston, C.K. "Infectious Diseases Related to Travel: Anthrax." 2015. Centers for Disease Control and Prevention. http://wwwnc.cdc.gov/travel/yellowbook/2016/infectious-diseases-related-to-travel/anthrax. Accessed Sept 14, 2016.
9 US FDA. "Anthrax." 2015. http://www.fda.gov/BiologicsBloodVaccines/Vaccines/ucm061751.htm. Accessed Sept 14, 2016.
Less commonly, anthrax infections can be initiated through other portals of entry, such as the digestive tract (gastrointestinal anthrax) or respiratory tract (pulmonary anthrax or inhalation anthrax). Typically, cases of noncutaneous anthrax are more difficult to treat than the cutaneous form. The mortality rate for gastrointestinal anthrax can be up to 40%, even with treatment. Inhalation anthrax, which occurs when anthrax spores are inhaled, initially causes influenza-like symptoms, but mortality rates are approximately 45% in treated individuals and 85% in those not treated. A relatively new form of the disease, injection anthrax, has been reported in Europe among intravenous drug users; it occurs when drugs are contaminated with B. anthracis. Patients with injection anthrax show signs and symptoms of severe soft tissue infection that differ clinically from those of cutaneous anthrax. This often delays diagnosis and treatment, and leads to a high mortality rate.10

10 Berger, T., Kassirer, M., and Aran, A.A. "Injectional Anthrax—New Presentation of an Old Disease." Euro Surveillance 19 (2014) 32. http://www.ncbi.nlm.nih.gov/pubmed/25139073. Accessed Sept 14, 2016.

B. anthracis colonies on blood agar have a rough texture and serrated edges that eventually form an undulating band (Figure 21.19). Broad-spectrum antibiotics such as penicillin, erythromycin, and tetracycline are often effective treatments.

Unfortunately, B. anthracis has been used as a biological weapon and remains on the United Nations' list of potential agents of bioterrorism.11 Over a period of several months in 2001, a number of letters were mailed to members of the news media and the United States Congress. As a result, 11 individuals developed cutaneous anthrax and another 11 developed inhalation anthrax. Those infected included recipients of the letters, postal workers, and two other individuals. Five of those infected with inhalation anthrax died. The anthrax spores had been carefully prepared to aerosolize, showing that the perpetrator had a high level of expertise in microbiology.12

11 United Nations Office at Geneva. "What Are Biological and Toxin Weapons?" http://www.unog.ch/80256EE600585943/%28httpPages%29/29B727532FECBE96C12571860035A6DB?. Accessed Sept 14, 2016.
12 Federal Bureau of Investigation. "Famous Cases and Criminals: Amerithrax or Anthrax Investigation." https://www.fbi.gov/history/famous-cases/amerithrax-or-anthrax-investigation. Accessed Sept 14, 2016.

A vaccine is available to protect individuals from anthrax. However, unlike most routine vaccines, the current anthrax vaccine is unique in both its formulation and the protocols dictating who receives it.13 The vaccine is administered through five intramuscular injections over a period of 18 months, followed by annual boosters. The US Food and Drug Administration (FDA) has only approved administration of the vaccine prior to exposure for at-risk adults, such as individuals who work with anthrax in a laboratory, some individuals who handle animals or animal products (e.g., some veterinarians), and some members of the United States military. The vaccine protects against cutaneous and inhalation anthrax using cell-free filtrates of microaerophilic cultures of an avirulent, nonencapsulated strain of B. anthracis.14 The FDA has not approved the vaccine for routine use after exposure to anthrax, but if there were ever an anthrax emergency in the United States, patients could be given anthrax vaccine after exposure to help prevent disease.
13 Centers for Disease Control and Prevention. "Anthrax: Medical Care: Prevention: Antibiotics." http://www.cdc.gov/anthrax/medical-care/prevention.html. Accessed Sept 14, 2016.
14 Emergent Biosolutions. AVA (BioThrax) vaccine package insert (Draft). Nov 2015. http://www.fda.gov/downloads/biologicsbloodvaccines/bloodbloodproducts/approvedproducts/licensedproductsblas/ucm074923.pdf.

Check Your Understanding
What is the characteristic feature of a cutaneous anthrax infection?

Disease Profile
Bacterial Infections of the Skin

Bacterial infections of the skin can cause a wide range of symptoms and syndromes, ranging from the superficial and relatively harmless to the severe and even fatal. Most bacterial skin infections can be diagnosed by culturing the bacteria and treated with antibiotics. Antimicrobial susceptibility testing is also often necessary because many strains of bacteria have developed antibiotic resistance. Figure 21.20 summarizes the characteristics of some common bacterial skin infections.

Bacterial Conjunctivitis

Like the skin, the surface of the eye comes in contact with the outside world and is somewhat prone to infection by bacteria in the environment. Bacterial conjunctivitis (pinkeye) is a condition characterized by inflammation of the conjunctiva, often accompanied by a discharge of sticky fluid (described as acute purulent conjunctivitis) (Figure 21.21). Conjunctivitis can affect one eye or both, and it usually does not affect vision permanently. Bacterial conjunctivitis is most commonly caused by Haemophilus influenzae, but can also be caused by other species such as Moraxella catarrhalis, S. pneumoniae, and S. aureus. The causative agent may be identified using bacterial cultures, Gram stain, and diagnostic biochemical, antigenic, or nucleic acid profile tests of the isolated pathogen. Bacterial conjunctivitis is very contagious, being transmitted via secretions from infected individuals, but it is also self-limiting: it usually resolves in a few days, although topical antibiotics are sometimes prescribed. Because this condition is so contagious, medical attention is recommended whenever it is suspected. Individuals who use contact lenses should discontinue their use when conjunctivitis is suspected. Certain symptoms, such as blurred vision, eye pain, and light sensitivity, can be associated with serious conditions and require medical attention.

Neonatal Conjunctivitis

Newborns whose mothers have certain sexually transmitted infections are at risk of contracting ophthalmia neonatorum or inclusion conjunctivitis, two forms of neonatal conjunctivitis contracted through exposure to pathogens during passage through the birth canal. Gonococcal ophthalmia neonatorum is caused by Neisseria gonorrhoeae, the bacterium that causes the STD gonorrhea (Figure 21.22). Inclusion (chlamydial) conjunctivitis is caused by Chlamydia trachomatis, the anaerobic, obligate intracellular parasite that causes the STD chlamydia.

To prevent gonococcal ophthalmia neonatorum, silver nitrate ointments were once routinely applied to all infants' eyes shortly after birth; however, it is now more common to apply antibacterial creams or drops, such as erythromycin. Most hospitals are required by law to provide this preventative treatment to all infants, because conjunctivitis caused by N. gonorrhoeae, C. trachomatis, or other bacteria acquired during a vaginal delivery can have serious complications.
If untreated, the infection can spread to the cornea, resulting in ulceration or perforation that can cause vision loss or even permanent blindness. As such, neonatal conjunctivitis is treated aggressively with oral or intravenous antibiotics to stop the spread of the infection. Causative agents of inclusion conjunctivitis may be identified using bacterial cultures, Gram stain, and diagnostic biochemical, antigenic, or nucleic acid profile tests.

Check Your Understanding
Compare and contrast bacterial conjunctivitis with neonatal conjunctivitis.

Trachoma

Trachoma, or granular conjunctivitis, is a common cause of preventable blindness that is rare in the United States but widespread in developing countries, especially in Africa and Asia. The condition is caused by the same species that causes neonatal inclusion conjunctivitis in infants, Chlamydia trachomatis. C. trachomatis can be transmitted easily through fomites such as contaminated towels, bed linens, and clothing, and also by direct contact with infected individuals. C. trachomatis can also be spread by flies that transfer infected mucus containing C. trachomatis from one human to another.

Infection by C. trachomatis causes chronic conjunctivitis, which leads to the formation of necrotic follicles and scarring in the upper eyelid. The scars turn the eyelashes inward (a condition known as trichiasis), and mechanical abrasion of the cornea leads to blindness (Figure 21.23). Antibiotics such as azithromycin are effective in treating trachoma, and outcomes are good when the disease is treated promptly. In areas where this disease is common, large public health efforts are focused on reducing transmission by teaching people how to avoid the risks of the infection.

Check Your Understanding
Why is trachoma rare in the United States?

Micro Connections
SAFE Eradication of Trachoma

Though uncommon in the United States and other developed nations, trachoma is the leading cause of preventable blindness worldwide, with more than 4 million people at immediate risk of blindness from trichiasis. The vast majority of those affected by trachoma live in Africa and the Middle East in isolated rural or desert communities with limited access to clean water and sanitation. These conditions provide an environment conducive to the growth and spread of Chlamydia trachomatis, the bacterium that causes trachoma, via wastewater and eye-seeking flies.

In response to this crisis, recent years have seen major public health efforts aimed at treating and preventing trachoma. The Alliance for Global Elimination of Trachoma by 2020 (GET 2020), coordinated by the World Health Organization (WHO), promotes an initiative dubbed "SAFE," which stands for "Surgery, Antibiotics, Facial cleanliness, and Environmental improvement." The Carter Center, a charitable, nongovernmental organization led by former US President Jimmy Carter, has partnered with the WHO to promote the SAFE initiative in six of the most critically impacted nations in Africa. Through its Trachoma Control Program, the Carter Center trains and equips local surgeons to correct trichiasis and distributes antibiotics to treat trachoma. The program also promotes better personal hygiene through health education and improves sanitation by funding the construction of household latrines. This reduces the prevalence of open sewage, which provides breeding grounds for the flies that spread trachoma.
Bacterial Keratitis

Keratitis can have many causes, but bacterial keratitis is most frequently caused by Staphylococcus epidermidis and/or Pseudomonas aeruginosa. Contact lens users are particularly at risk for such an infection because S. epidermidis and P. aeruginosa both adhere well to the surface of the lenses. The risk of infection can be greatly reduced by properly caring for contact lenses and by not wearing lenses overnight. Because the infection can quickly lead to blindness, prompt and aggressive treatment with antibiotics is important. The causative agent may be identified using bacterial cultures, Gram stain, and diagnostic biochemical, antigenic, or nucleic acid profile tests of the isolated pathogen.

Check Your Understanding
Why are contact lens wearers at greater risk for developing keratitis?

Biofilms and Infections of the Skin and Eyes

When treating bacterial infections of the skin and eyes, it is important to consider that few such infections can be attributed to a single pathogen. While biofilms may develop in other parts of the body, they are especially relevant to skin infections (such as those caused by S. aureus or P. aeruginosa) because of their prevalence in chronic skin wounds. Biofilms develop when bacteria (and sometimes fungi) attach to a surface and produce extracellular polymeric substances (EPS) in which cells of multiple organisms may be embedded. When a biofilm develops on a wound, it may interfere with the natural healing process as well as with diagnosis and treatment.

Because biofilms vary in composition and are difficult to replicate in the lab, they are still not thoroughly understood. The extracellular matrix of a biofilm consists of polymers such as polysaccharides, extracellular DNA, proteins, and lipids, but the exact makeup varies. The organisms living within the extracellular matrix may include familiar pathogens as well as other bacteria that do not grow well in cultures (such as numerous obligate anaerobes). This presents challenges when culturing samples from infections that involve a biofilm: because only some species grow in vitro, the culture may contain only a subset of the bacterial species involved in the infection.

Biofilms confer many advantages to the resident bacteria. For example, biofilms can facilitate attachment to surfaces on or in the host organism (such as wounds), inhibit phagocytosis, prevent the invasion of neutrophils, and sequester host antibodies. Additionally, biofilms can provide a level of antibiotic resistance not found in the isolated cells and colonies that are typical of laboratory cultures. The extracellular matrix provides a physical barrier to antibiotics, shielding the target cells from exposure. Moreover, cells within a biofilm may differentiate to create subpopulations of dormant cells called persister cells. Nutrient limitations deep within a biofilm add another level of resistance, as stress responses can slow metabolism and increase drug resistance.

Disease Profile
Bacterial Infections of the Eyes

A number of bacteria are able to cause infection when introduced to the mucosa of the eye. In general, bacterial eye infections can lead to inflammation, irritation, and discharge, but they vary in severity. Some are typically short-lived, and others can become chronic and lead to permanent eye damage. Prevention requires limiting exposure to contagious pathogens. When infections do occur, prompt treatment with antibiotics can often limit or prevent permanent damage.
Figure 21.24 summarizes the characteristics of some common bacterial infections of the eyes.

21.3 Viral Infections of the Skin and Eyes

Learning Objectives
Identify the most common viruses associated with infections of the skin and eyes
Compare the major characteristics of specific viral diseases affecting the skin and eyes

Until recently, it was thought that the normal microbiota of the body consisted primarily of bacteria and some fungi. However, in addition to bacteria, the skin is colonized by viruses, and recent studies suggest that Papillomaviridae, Polyomaviridae, and Circoviridae also contribute to the normal skin microbiota. Some viruses associated with the skin are pathogenic, however, and these viruses can cause diseases with a wide variety of presentations.

Numerous types of viral infections cause rashes or lesions on the skin; however, in many cases these skin conditions result from infections that originate in other body systems. In this chapter, we will limit the discussion to viral skin infections that use the skin as a portal of entry. Later chapters will discuss viral infections such as chickenpox, measles, and rubella—diseases that cause skin rashes but invade the body through portals of entry other than the skin.

Papillomas

Papillomas (warts) are the expression of common skin infections by human papillomavirus (HPV) and are transmitted by direct contact. There are many types of HPV, and they lead to a variety of different presentations, such as common warts, plantar warts, flat warts, and filiform warts. HPV can also cause sexually transmitted genital warts, which will be discussed in Urogenital System Infections. Vaccination is available for some strains of HPV.

Common warts tend to develop on fingers, the backs of hands, and around nails in areas with broken skin. In contrast, plantar warts (also called foot warts) develop on the sole of the foot and can grow inward, causing pain and pressure during walking. Flat warts can develop anywhere on the body, are often numerous, and are relatively smooth and small compared with other wart types. Filiform warts are long, threadlike warts that grow quickly.

In some cases, the immune system may be strong enough to prevent warts from forming or to eradicate established warts. However, treatment of established warts is typically required. There are many available treatments for warts, and their effectiveness varies. Common warts can be frozen off with liquid nitrogen. Topical applications of salicylic acid may also be effective. Other options are electrosurgery (burning), curettage (cutting), excision, painting with cantharidin (which causes the wart to die so it can more easily be removed), laser treatments, treatment with bleomycin, chemical peels, and immunotherapy (Figure 21.25).

Oral Herpes

Another common skin virus is herpes simplex virus (HSV). HSV has historically been divided into two types, HSV-1 and HSV-2. HSV-1 is typically transmitted by direct oral contact between individuals and is usually associated with oral herpes. HSV-2 is usually transmitted sexually and is typically associated with genital herpes. However, both HSV-1 and HSV-2 are capable of infecting any mucous membrane, and the incidence of genital HSV-1 and oral HSV-2 infections has been increasing in recent years. In this chapter, we will limit our discussion to infections caused by HSV-1; HSV-2 and genital herpes will be discussed in Urogenital System Infections.
Infection by HSV-1 commonly manifests as cold sores or fever blisters, usually on or around the lips (Figure 21.26). HSV-1 is highly contagious, with some studies suggesting that up to 65% of the US population is infected; however, many infected individuals are asymptomatic.15 Moreover, the virus can be latent for long periods, residing in the trigeminal nerve ganglia between recurring bouts of symptoms. Recurrence can be triggered by stress or environmental conditions (systemic or affecting the skin). When lesions are present, they may blister, break open, and crust. The virus can be spread through direct contact, even when a patient is asymptomatic.

15 Wald, A., and Corey, L. "Persistence in the Population: Epidemiology, Transmission." In: A. Arvin, G. Campadelli-Fiume, E. Mocarski et al. Human Herpesviruses: Biology, Therapy, and Immunoprophylaxis. Cambridge: Cambridge University Press, 2007. http://www.ncbi.nlm.nih.gov/books/NBK47447/. Accessed Sept 14, 2016.

While the lips, mouth, and face are the most common sites for HSV-1 infections, lesions can spread to other areas of the body. Wrestlers and other athletes involved in contact sports may develop lesions on the neck, shoulders, and trunk; this condition is often called herpes gladiatorum. Herpes lesions that develop on the fingers are often called herpetic whitlow.

HSV-1 infections are commonly diagnosed from their appearance, although laboratory testing can confirm the diagnosis. There is no cure, but antiviral medications such as acyclovir, penciclovir, famciclovir, and valacyclovir are used to reduce symptoms and the risk of transmission. Topical medications, such as creams with n-docosanol and penciclovir, can also be used to reduce symptoms such as itching, burning, and tingling.

Check Your Understanding
What are the most common sites for the appearance of herpetic lesions?

Roseola and Fifth Disease

The viral diseases roseola and fifth disease are somewhat similar in terms of their presentation, but they are caused by different viruses. Roseola, sometimes called roseola infantum or exanthem subitum ("sudden rash"), is a mild viral infection usually caused by human herpesvirus 6 (HHV-6) and occasionally by HHV-7. It is spread via direct contact with the saliva or respiratory secretions of an infected individual, often through droplet aerosols. Roseola is very common in children, with symptoms including a runny nose, a sore throat, and a cough, along with (or followed by) a high fever (39.4 °C). About three to five days after the fever subsides, a rash may begin to appear on the chest and abdomen. The rash, which does not cause discomfort, initially forms characteristic macules that are flat or papules that are firm and slightly raised; some macules or papules may be surrounded by a white ring. The rash may eventually spread to the neck and arms, and sometimes continues to spread to the face and legs. Diagnosis is generally made based upon observation of the symptoms, although serological tests can be performed to confirm the diagnosis. While treatment may be recommended to control the fever, the disease usually resolves without treatment within a week after the fever develops. For individuals at particular risk, such as those who are immunocompromised, the antiviral medication ganciclovir may be used.

Fifth disease (also known as erythema infectiosum) is another common, highly contagious illness that causes a distinct rash that is critical to diagnosis.
Fifth disease is caused by parvovirus B19 and is transmitted by contact with respiratory secretions from an infected individual. Infection is more common in children than adults. While approximately 20% of individuals will be asymptomatic during infection,16 others will exhibit cold-like symptoms (headache, fever, and upset stomach) during the early stages, when the illness is most infectious. Several days later, a distinct red facial rash appears, often called the "slapped cheek" rash (Figure 21.27). Within a few days, a second rash may appear on the arms, legs, chest, back, or buttocks. The rash may come and go for several weeks, but usually disappears within seven to twenty-one days, gradually becoming lacy in appearance as it recedes.

16 Centers for Disease Control and Prevention. "Fifth Disease." http://www.cdc.gov/parvovirusb19/fifth-disease.html. Accessed Sept 14, 2016.

In children, the disease usually resolves on its own without medical treatment beyond symptom relief as needed. Adults may experience different and possibly more serious symptoms. Many adults with fifth disease do not develop any rash, but may experience joint pain and swelling that lasts several weeks or months. Immunocompromised individuals can develop severe anemia and may need blood transfusions or immune globulin injections. While the rash is the most important component of diagnosis (especially in children), the symptoms of fifth disease are not always consistent. Serological testing can be conducted for confirmation.

Check Your Understanding
Identify at least one similarity and one difference between roseola and fifth disease.

Viral Conjunctivitis

Like bacterial conjunctivitis, viral infections of the eye can cause inflammation of the conjunctiva and discharge from the eye. However, viral conjunctivitis tends to produce a discharge that is more watery than the thick discharge associated with bacterial conjunctivitis. The infection is contagious and can easily spread from one eye to the other or to other individuals through contact with eye discharge. Viral conjunctivitis is commonly associated with colds caused by adenoviruses; however, other viruses can also cause conjunctivitis. If the causative agent is uncertain, eye discharge can be tested to aid in diagnosis. Antibiotic treatment of viral conjunctivitis is ineffective, and symptoms usually resolve without treatment within a week or two.

Herpes Keratitis

Herpes infections caused by HSV-1 can sometimes spread to the eye from other areas of the body, which may result in keratoconjunctivitis. This condition, generally called herpes keratitis or herpetic keratitis, affects the conjunctiva and cornea, causing irritation, excess tears, and sensitivity to light. Deep lesions in the cornea may eventually form, leading to blindness. Because keratitis can have numerous causes, laboratory testing is necessary to confirm the diagnosis when HSV-1 is suspected; once confirmed, antiviral medications may be prescribed.

Disease Profile
Viral Infections of the Skin and Eyes

A number of viruses can cause infections via direct contact with skin and eyes, causing signs and symptoms ranging from rashes and lesions to warts and conjunctivitis. All of these viral diseases are contagious, and while some are more common in children (fifth disease and roseola), others are prevalent in people of all ages (oral herpes, viral conjunctivitis, papillomas). In general, the best means of prevention is avoiding contact with infected individuals.
Treatment may require antiviral medications; however, several of these conditions are mild and typically resolve without treatment. Figure 21.28 summarizes the characteristics of some common viral infections of the skin and eyes.

21.4 Mycoses of the Skin

Learning Objectives
Identify the most common fungal pathogens associated with cutaneous and subcutaneous mycoses
Compare the major characteristics of specific fungal diseases affecting the skin

Many fungal infections of the skin involve fungi that are found in the normal skin microbiota. Some of these fungi can cause infection when they gain entry through a wound; others mainly cause opportunistic infections in immunocompromised patients. Other fungal pathogens primarily cause infection in unusually moist environments that promote fungal growth; for example, sweaty shoes, communal showers, and locker rooms provide excellent breeding grounds that promote the growth and transmission of fungal pathogens.

Fungal infections, also called mycoses, can be divided into classes based on their invasiveness. Mycoses that cause superficial infections of the epidermis, hair, and nails are called cutaneous mycoses. Mycoses that penetrate the epidermis and the dermis to infect deeper tissues are called subcutaneous mycoses. Mycoses that spread throughout the body are called systemic mycoses.

Tineas

A group of cutaneous mycoses called tineas are caused by dermatophytes, fungal molds that require keratin, a protein found in skin, hair, and nails, for growth. There are three genera of dermatophytes, all of which can cause cutaneous mycoses: Trichophyton, Epidermophyton, and Microsporum. Tineas on most areas of the body are generally called ringworm, but tineas in specific locations may have distinctive names and symptoms (see Table 21.3 and Figure 21.29). Keep in mind that these names—even though they are Latinized—refer to locations on the body, not causative organisms. Tineas can be caused by different dermatophytes in most areas of the body.

Table 21.3 Some Common Tineas and Locations on the Body
Tinea corporis (ringworm): body
Tinea capitis (ringworm): scalp
Tinea pedis (athlete's foot): feet
Tinea barbae (barber's itch): beard
Tinea cruris (jock itch): groin
Tinea unguium (onychomycosis): toenails, fingernails

Dermatophytes are commonly found in the environment and in soils and are frequently transferred to the skin via contact with other humans and animals. Fungal spores can also spread on hair. Many dermatophytes grow well in moist, dark environments. For example, tinea pedis (athlete's foot) commonly spreads in public showers, and the causative fungi grow well in the dark, moist confines of sweaty shoes and socks. Likewise, tinea cruris (jock itch) often spreads in communal living environments and thrives in warm, moist undergarments.

Tineas on the body (tinea corporis) often produce lesions that grow radially and heal towards the center. This causes the formation of a red ring, leading to the misleading name ringworm (recall the Clinical Focus case in The Eukaryotes of Microbiology).

Several approaches may be used to diagnose tineas. A Wood's lamp (also called a black lamp) emitting ultraviolet light with a wavelength of 365 nm is often used: when the lamp is directed on a tinea, the fungal elements (spores and hyphae) fluoresce. Direct microscopic evaluation of specimens from skin scrapings, hair, or nails can also be used to detect fungi.
Generally, these specimens are prepared in a wet mount using a potassium hydroxide solution (10%–20% aqueous KOH), which dissolves the keratin in hair, nails, and skin cells to allow for visualization of the hyphae and fungal spores. (A short sketch of the dilution arithmetic behind such a working solution appears at the end of this section.) The specimens may be grown on Sabouraud dextrose CC (chloramphenicol/cycloheximide) agar, a selective medium that supports dermatophyte growth while inhibiting the growth of bacteria and saprophytic fungi (Figure 21.30). Macroscopic colony morphology is often used to initially identify the genus of the dermatophyte; identification can be further confirmed by visualizing the microscopic morphology using either a slide culture or a sticky tape prep stained with lactophenol cotton blue.

Various antifungal treatments can be effective against tineas. Allylamine ointments that include terbinafine are commonly used; miconazole and clotrimazole are also available for topical treatment, and griseofulvin is used orally.

Check Your Understanding
Why are tineas, caused by fungal molds, often called ringworm?

Cutaneous Aspergillosis

Another cause of cutaneous mycoses is Aspergillus, a genus consisting of molds of many different species, some of which cause a condition called aspergillosis. Primary cutaneous aspergillosis, in which the infection begins in the skin, is rare but does occur. More common is secondary cutaneous aspergillosis, in which the infection begins in the respiratory system and disseminates systemically. Both primary and secondary cutaneous aspergillosis result in distinctive eschars that form at the site or sites of infection (Figure 21.31). Pulmonary aspergillosis will be discussed more thoroughly in Respiratory Mycoses.

Primary cutaneous aspergillosis usually occurs at the site of an injury and is most often caused by Aspergillus fumigatus or Aspergillus flavus. It is usually reported in patients who have had an injury while working in an agricultural or outdoor environment. However, opportunistic infections can also occur in health-care settings, often at the site of intravenous catheters or venipuncture wounds, or in association with burns, surgical wounds, or occlusive dressings. After candidiasis, aspergillosis is the second most common hospital-acquired fungal infection, and it often occurs in immunocompromised patients, who are more vulnerable to opportunistic infections.

Cutaneous aspergillosis is diagnosed using patient history, culturing, and histopathology of a skin biopsy. Treatment involves the use of antifungal medications such as voriconazole (preferred for invasive aspergillosis), itraconazole, and amphotericin B if itraconazole is not effective. For immunosuppressed individuals or burn patients, medication may be used and surgical or immunotherapy treatments may be needed.

Check Your Understanding
Identify the sources of infection for primary and secondary cutaneous aspergillosis.

Candidiasis of the Skin and Nails

Candida albicans and other yeasts in the genus Candida can cause skin infections referred to as cutaneous candidiasis. Candida spp. are sometimes responsible for intertrigo, a general term for a rash that occurs in a skin fold, or other localized rashes on the skin. Candida can also infect the nails, causing them to become yellow and harden (Figure 21.32). Candidiasis of the skin and nails is diagnosed through clinical observation and through culture, Gram stain, and KOH wet mounts. Susceptibility testing for antifungal agents can also be done.
Cutaneous candidiasis can be treated with topical or systemic azole antifungal medications. Because candidiasis can become invasive, patients suffering from HIV/AIDS, cancer, or other conditions that compromise the immune system may benefit from preventive treatment. Azoles, such as clotrimazole, econazole, fluconazole, ketoconazole, and miconazole; nystatin; terbinafine; and naftifine may be used for treatment. Long-term treatment with medications such as itraconazole or ketoconazole may be used for chronic infections. Repeat infections often occur, but this risk can be reduced by carefully following treatment recommendations, avoiding excessive moisture, maintaining good health, practicing good hygiene, and wearing appropriate clothing (including footwear).

Candida also causes infections in other parts of the body besides the skin. These include vaginal yeast infections (see Fungal Infections of the Reproductive System) and oral thrush (see Microbial Diseases of the Mouth and Oral Cavity).

Check Your Understanding
What are the signs and symptoms of candidiasis of the skin and nails?

Sporotrichosis

Whereas cutaneous mycoses are superficial, subcutaneous mycoses can spread from the skin to deeper tissues. In temperate regions, the most common subcutaneous mycosis is a condition called sporotrichosis, caused by the fungus Sporothrix schenckii and commonly known as rose gardener's disease or rose thorn disease (recall Case in Point: Every Rose Has Its Thorn). Sporotrichosis is often contracted after working with soil, plants, or timber, as the fungus can gain entry through a small wound such as a thorn prick or splinter. Sporotrichosis can generally be avoided by wearing gloves and protective clothing while gardening and by promptly cleaning and disinfecting any wounds sustained during outdoor activities.

Sporothrix infections initially present as small ulcers in the skin, but the fungus can spread to the lymphatic system and sometimes beyond. When the infection spreads, nodules appear, become necrotic, and may ulcerate. As more lymph nodes become affected, abscesses and ulceration may develop over a larger area (often on one arm or hand). In severe cases, the infection may spread more widely throughout the body, although this is relatively uncommon.

Sporothrix infection can be diagnosed based upon histologic examination of the affected tissue. Its macroscopic morphology can be observed by culturing the mold on potato dextrose agar, and its microscopic morphology can be observed by staining a slide culture with lactophenol cotton blue. Treatment with itraconazole is generally recommended.

Check Your Understanding
Describe the progression of a Sporothrix schenckii infection.

Disease Profile
Mycoses of the Skin

Cutaneous mycoses are typically opportunistic, able to cause infection only when the skin barrier is breached through a wound. Tineas are the exception, as the dermatophytes responsible for tineas are able to grow on skin, hair, and nails, especially in moist conditions. Most mycoses of the skin can be avoided through good hygiene and proper wound care. Treatment requires antifungal medications. Figure 21.33 summarizes the characteristics of some common fungal infections of the skin.
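As a small quantitative aside, the 10%–20% KOH wet-mount solution mentioned earlier in this section is a simple dilution problem governed by the relationship C1V1 = C2V2. The Python sketch below applies it. The 40% stock concentration and 100 mL final volume are assumed values for illustration only, since the text specifies just the 10%–20% working range.

```python
# Dilution check for a KOH wet-mount working solution using C1*V1 = C2*V2.
# The 40% stock and 100 mL final volume are illustrative assumptions;
# the text specifies only the 10%-20% working concentration range.

def stock_volume_needed(c_stock: float, c_target: float, v_final: float) -> float:
    """Volume of stock solution needed to prepare v_final at c_target."""
    return c_target * v_final / c_stock

v_stock = stock_volume_needed(c_stock=40.0, c_target=10.0, v_final=100.0)
print(f"Measure {v_stock:.1f} mL of 40% KOH stock and dilute "
      f"to a final volume of 100 mL to obtain a 10% solution.")
# Prints: Measure 25.0 mL of 40% KOH stock and dilute ...
```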
21.5 Protozoan and Helminthic Infections of the Skin and Eyes

Learning Objectives
Identify two parasites that commonly cause infections of the skin and eyes
Identify the major characteristics of specific parasitic diseases affecting the skin and eyes

Many parasitic protozoans and helminths use the skin or eyes as a portal of entry. Some may physically burrow into the skin or the mucosa of the eye; others breach the skin barrier by means of an insect bite. Still others take advantage of a wound to bypass the skin barrier and enter the body, much like other opportunistic pathogens. Although many parasites enter the body through the skin, in this chapter we will limit our discussion to those for which the skin or eyes are the primary site of infection. Parasites that enter through the skin but travel to a different site of infection will be covered in other chapters. In addition, we will limit our discussion to microscopic parasitic infections of the skin and eyes. Macroscopic parasites such as lice, scabies, mites, and ticks are beyond the scope of this text.

Acanthamoeba Infections

Acanthamoeba is a genus of free-living protozoan amoebae that are common in soils and unchlorinated bodies of fresh water. (This is one reason why some swimming pools are treated with chlorine.) The genus contains a few parasitic species, some of which can cause infections of the eyes, skin, and nervous system. Such infections can sometimes spread and affect other body systems. Skin infections may manifest as abscesses, ulcers, and nodules. When acanthamoebae infect the eye, causing inflammation of the cornea, the condition is called Acanthamoeba keratitis. Figure 21.34 illustrates the Acanthamoeba life cycle and various modes of infection.

While Acanthamoeba keratitis is initially mild, it can lead to severe corneal damage, vision impairment, or even blindness if left untreated. Similar to eye infections involving P. aeruginosa, Acanthamoeba poses a much greater risk to wearers of contact lenses because the amoeba can thrive in the space between contact lenses and the cornea. Prevention through proper contact lens care is important: lenses should always be properly disinfected prior to use, and should never be worn while swimming or using a hot tub.

Acanthamoeba can also enter the body through other pathways, including skin wounds and the respiratory tract. It usually does not cause disease except in immunocompromised individuals; however, in rare cases, the infection can spread to the nervous system, resulting in a usually fatal condition called granulomatous amoebic encephalitis (GAE) (see Fungal and Parasitic Diseases of the Nervous System). Disseminated infections, lesions, and Acanthamoeba keratitis can be diagnosed by observing symptoms and by examining patient samples under the microscope to view the parasite. Skin biopsies may also be used.

Acanthamoeba keratitis is difficult to treat, and prompt treatment is necessary to prevent the condition from progressing. The condition generally requires three to four weeks of intensive treatment to resolve. Common treatments include topical antiseptics (e.g., polyhexamethylene biguanide, chlorhexidine, or both), sometimes with painkillers or corticosteroids (although the latter are controversial because they suppress the immune system, which can worsen the infection). Azoles are sometimes prescribed as well. Advanced cases of keratitis may require a corneal transplant to prevent blindness.

Check Your Understanding
How are Acanthamoeba infections acquired?
Loiasis

The helminth Loa loa, also known as the African eye worm, is a nematode that can cause loiasis, a disease endemic to West and Central Africa (Figure 21.35). The disease does not occur outside that region except when carried by travelers. There is evidence that individual genetic differences affect susceptibility to developing loiasis after infection by the Loa loa worm. Even in areas in which Loa loa worms are common, the disease is generally found in less than 30% of the population.17 It has been suggested that travelers who spend time in the region may be somewhat more susceptible to developing symptoms than the native population, and the presentation of infection may differ.18

17 Garcia, A., et al. "Genetic Epidemiology of Host Predisposition Microfilaraemia in Human Loiasis." Tropical Medicine and International Health 4 (1999) 8:565–74. http://www.ncbi.nlm.nih.gov/pubmed/10499080. Accessed Sept 14, 2016.
18 Spinello, A., et al. "Imported Loa loa Filariasis: Three Cases and a Review of Cases Reported in Non-Endemic Countries in the Past 25 Years." International Journal of Infectious Disease 16 (2012) 9:e649–e662. DOI: http://dx.doi.org/10.1016/j.ijid.2012.05.1023.

The parasite is spread by deerflies (genus Chrysops), which can ingest the larvae from an infected human via a blood meal (Figure 21.36). When the deerfly bites other humans, it deposits the larvae into their bloodstreams. After about five months in the human body, some larvae develop into adult worms, which can grow to several centimeters in length and live for years in the subcutaneous tissue of the host.

The name "eye worm" alludes to the visible migration of worms across the conjunctiva of the eye. Adult worms live in the subcutaneous tissues and can travel at about 1 cm per hour. They can often be observed when migrating through the eye, and sometimes under the skin; in fact, this is generally how the disease is diagnosed. It is also possible to test for antibodies, but the presence of antibodies does not necessarily indicate a current infection; it only means that the individual was exposed at some time.

Some patients are asymptomatic, but in others the migrating worms can cause fever and areas of allergic inflammation known as Calabar swellings. Worms migrating through the conjunctiva can cause temporary eye pain and itching, but generally there is no lasting damage to the eye. Some patients experience a range of other symptoms, such as widespread itching, hives, and joint and muscle pain.

Worms can be surgically removed from the eye or the skin, but this treatment only relieves discomfort; it does not cure the infection, which involves many worms. The preferred treatment is diethylcarbamazine, but this medication produces severe side effects in some individuals, such as brain inflammation and possible death in patients with heavy infections. Albendazole is also sometimes used if diethylcarbamazine is not appropriate or not successful. If left untreated for many years, loiasis can damage the kidneys, heart, and lungs, though these symptoms are rare.

Check Your Understanding
Describe the most common way to diagnose loiasis.

Disease Profile
Parasitic Skin and Eye Infections

The protozoan Acanthamoeba and the helminth Loa loa are two parasites capable of causing infections of the skin and eyes. Figure 21.37 summarizes the characteristics of some common parasitic infections of the skin and eyes.
Summary

10.1 Motivation
Motivation to engage in a given behavior can come from internal and/or external factors. Multiple theories have been put forward regarding motivation. More biologically oriented theories deal with the ways that instincts and the need to maintain bodily homeostasis motivate behavior. Bandura postulated that our sense of self-efficacy motivates behaviors, and there are a number of theories that focus on a variety of social motives. Abraham Maslow's hierarchy of needs is a model that shows the relationship among multiple motives that range from lower-level physiological needs to the very high level of self-actualization.

10.2 Hunger and Eating
Hunger and satiety are highly regulated processes that result in a person maintaining a fairly stable weight that is resistant to change. When more calories are consumed than expended, a person will store excess energy as fat. Being significantly overweight adds substantially to a person's health risks and problems, including cardiovascular disease, type 2 diabetes, certain cancers, and other medical issues. Sociocultural factors that emphasize thinness as a beauty ideal and a genetic predisposition contribute to the development of eating disorders in many young females, though eating disorders span ages and genders.

10.3 Sexual Behavior
The hypothalamus and structures of the limbic system are important in sexual behavior and motivation. There is evidence to suggest that our motivation to engage in sexual behavior and our ability to do so are related, but separate, processes. Alfred Kinsey conducted large-scale survey research that demonstrated the incredible diversity of human sexuality. William Masters and Virginia Johnson observed individuals engaging in sexual behavior in developing their concept of the sexual response cycle. While often confused, sexual orientation and gender identity are related, but distinct, concepts.

10.4 Emotion
Emotions are subjective experiences that consist of physiological arousal and cognitive appraisal. Various theories have been put forward to explain our emotional experiences. The James-Lange theory asserts that emotions arise as a function of physiological arousal. The Cannon-Bard theory maintains that emotional experience occurs simultaneous to and independent of physiological arousal. The Schachter-Singer two-factor theory suggests that physiological arousal receives cognitive labels as a function of the relevant context and that these two factors together result in an emotional experience.

The limbic system is the brain's emotional circuit, which includes the amygdala and the hippocampus. Both of these structures are implicated in playing a role in normal emotional processing as well as in psychological mood and anxiety disorders. Increased amygdala activity is associated with learning to fear, and it is seen in individuals who are at risk for or suffering from mood disorders. The volume of the hippocampus has been shown to be reduced in individuals suffering from posttraumatic stress disorder. The ability to produce and recognize facial expressions of emotions seems to be universal regardless of cultural background. However, there are cultural display rules which influence how often and under what circumstances various emotions can be expressed. Tone of voice and body language also serve as a means by which we communicate information about our emotional states.
Chapter Outline
10.1 Motivation
10.2 Hunger and Eating
10.3 Sexual Behavior
10.4 Emotion

Introduction

What makes us behave as we do? What drives us to eat? What drives us toward sex? Is there a biological basis to explain the feelings we experience? How universal are emotions?

In this chapter, we will explore issues relating to both motivation and emotion. We will begin with a discussion of several theories that have been proposed to explain motivation and why we engage in a given behavior. You will learn about the physiological needs that drive some human behaviors, as well as the importance of our social experiences in influencing our actions.

Next, we will consider both eating and having sex as examples of motivated behaviors. What are the physiological mechanisms of hunger and satiety? What understanding do scientists have of why obesity occurs, and what treatments exist for obesity and eating disorders? How has research into human sex and sexuality evolved over the past century? How do psychologists understand and study the human experience of sexual orientation and gender identity? These questions—and more—will be explored.

This chapter will close with a discussion of emotion. You will learn about several theories that have been proposed to explain how emotion occurs, the biological underpinnings of emotion, and the universality of emotions.
[ { "answer": { "ans_choice": 1, "ans_text": "affiliation" }, "bloom": null, "hl_context": "A number of theorists have focused their research on understanding social motives ( McAdams & Constantian , 1983 ; McClelland & Liberman , 1949 ; Murray et al . , 1938 ) . Among the motives they describe are needs for achievement , affiliation , and intimacy . It is the need for achievement that drives accomplishment and performance . <hl> The need for affiliation encourages positive interactions with others , and the need for intimacy causes us to seek deep , meaningful relationships . <hl> Henry Murray et al . ( 1938 ) categorized these needs into domains . For example , the need for achievement and recognition falls under the domain of ambition . Dominance and aggression were recognized as needs under the domain of human power , and play was a recognized need in the domain of interpersonal affection .", "hl_sentences": "The need for affiliation encourages positive interactions with others , and the need for intimacy causes us to seek deep , meaningful relationships .", "question": { "cloze_format": "Need for ________ refers to maintaining positive relationships with others.", "normal_format": "Need for which of the following refers to maintaining positive relationships with others?", "question_choices": [ "achievement", "affiliation", "intimacy", "power" ], "question_id": "fs-idp116307104", "question_text": "Need for ________ refers to maintaining positive relationships with others." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "Abraham Maslow" }, "bloom": null, "hl_context": "<hl> Maslow ’ s Hierarchy of Needs While the theories of motivation described earlier relate to basic biological drives , individual characteristics , or social contexts , Abraham Maslow ( 1943 ) proposed a hierarchy of needs that spans the spectrum of motives ranging from the biological to the individual to the social . <hl> These needs are often depicted as a pyramid ( Figure 10.8 ) .", "hl_sentences": "Maslow ’ s Hierarchy of Needs While the theories of motivation described earlier relate to basic biological drives , individual characteristics , or social contexts , Abraham Maslow ( 1943 ) proposed a hierarchy of needs that spans the spectrum of motives ranging from the biological to the individual to the social .", "question": { "cloze_format": "________ proposed the hierarchy of needs.", "normal_format": "Who proposed the hierarchy of needs?", "question_choices": [ "William James", "David McClelland", "Abraham Maslow", "Albert Bandura" ], "question_id": "fs-idp62211088", "question_text": "________ proposed the hierarchy of needs." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "self-efficacy" }, "bloom": null, "hl_context": "<hl> Self-efficacy is an individual ’ s belief in her own capability to complete a task , which may include a previous successful completion of the exact task or a similar task . <hl> Albert Bandura ( 1994 ) theorized that an individual ’ s sense of self-efficacy plays a pivotal role in motivating behavior . Bandura argues that motivation derives from expectations that we have about the consequences of our behaviors , and ultimately , it is the appreciation of our capacity to engage in a given behavior that will determine what we do and the future goals that we set for ourselves . 
For example , if you have a sincere belief in your ability to achieve at the highest level , you are more likely to take on challenging tasks and to not let setbacks dissuade you from seeing the task through to the end .", "hl_sentences": "Self-efficacy is an individual ’ s belief in her own capability to complete a task , which may include a previous successful completion of the exact task or a similar task .", "question": { "cloze_format": "________ is an individual’s belief in her capability to complete some task.", "normal_format": "What is an individual’s belief in her capability to complete some task?", "question_choices": [ "physiological needs", "self-esteem", "self-actualization", "self-efficacy" ], "question_id": "fs-idm21754272", "question_text": "________ is an individual’s belief in her capability to complete some task." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "extrinsic" }, "bloom": null, "hl_context": "Why do we do the things we do ? What motivations underlie our behaviors ? Motivation describes the wants or needs that direct behavior toward a goal . In addition to biological motives , motivations can be intrinsic ( arising from internal factors ) or extrinsic ( arising from external factors ) ( Figure 10.2 ) . <hl> Intrinsically motivated behaviors are performed because of the sense of personal satisfaction that they bring , while extrinsically motivated behaviors are performed in order to receive something from others . <hl>", "hl_sentences": "Intrinsically motivated behaviors are performed because of the sense of personal satisfaction that they bring , while extrinsically motivated behaviors are performed in order to receive something from others .", "question": { "cloze_format": "The type of motivation is ___ .", "normal_format": "Carl mows the yard of his elderly neighbor each week for $20. What type of motivation is this?", "question_choices": [ "extrinsic", "intrinsic", "drive", "biological" ], "question_id": "fs-idm70773152", "question_text": "Carl mows the yard of his elderly neighbor each week for $20. What type of motivation is this?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "one third" }, "bloom": null, "hl_context": "Obesity When someone weighs more than what is generally accepted as healthy for a given height , they are considered overweight or obese . According to the Centers for Disease Control and Prevention ( CDC ) , an adult with a body mass index ( BMI ) between 25 and 29.9 is considered overweight ( Figure 10.10 ) . An adult with a BMI of 30 or higher is considered obese ( Centers for Disease Control and Prevention [ CDC ] , 2012 ) . People who are so overweight that they are at risk for death are classified as morbidly obese . Morbid obesity is defined as having a BMI over 40 . Note that although BMI has been used as a healthy weight indicator by the World Health Organization ( WHO ) , the CDC , and other groups , its value as an assessment tool has been questioned . The BMI is most useful for studying populations , which is the work of these organizations . It is less useful in assessing an individual since height and weight measurements fail to account for important factors like fitness level . An athlete , for example , may have a high BMI because the tool doesn ’ t distinguish between the body ’ s percentage of fat and muscle in a person ’ s weight . Being extremely overweight or obese is a risk factor for several negative health consequences . 
These include , but are not limited to , an increased risk for cardiovascular disease , stroke , Type 2 diabetes , liver disease , sleep apnea , colon cancer , breast cancer , infertility , and arthritis . <hl> Given that it is estimated that in the United States around one-third of the adult population is obese and that nearly two-thirds of adults and one in six children qualify as overweight ( CDC , 2012 ) , there is substantial interest in trying to understand how to combat this important public health concern . <hl>", "hl_sentences": "Given that it is estimated that in the United States around one-third of the adult population is obese and that nearly two-thirds of adults and one in six children qualify as overweight ( CDC , 2012 ) , there is substantial interest in trying to understand how to combat this important public health concern .", "question": { "cloze_format": "According to your reading, nearly ________ of the adult population in the United States can be classified as obese.", "normal_format": "According to your reading, nearly how many of the adult population in the United States can be classified as obese?", "question_choices": [ "one half", "one third", "one fourth", "one fifth" ], "question_id": "fs-idp74921088", "question_text": "According to your reading, nearly ________ of the adult population in the United States can be classified as obese." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "leptin" }, "bloom": null, "hl_context": "For most people , once they have eaten , they feel satiation , or fullness and satisfaction , and their eating behavior stops . Like the initiation of eating , satiation is also regulated by several physiological mechanisms . As blood glucose levels increase , the pancreas and liver send signals to shut off hunger and eating ( Drazen & Woods , 2003 ; Druce , Small , & Bloom , 2004 ; Greary , 1990 ) . <hl> The food ’ s passage through the gastrointestinal tract also provides important satiety signals to the brain ( Woods , 2004 ) , and fat cells release leptin , a satiety hormone . <hl>", "hl_sentences": "The food ’ s passage through the gastrointestinal tract also provides important satiety signals to the brain ( Woods , 2004 ) , and fat cells release leptin , a satiety hormone .", "question": { "cloze_format": "________ is a chemical messenger secreted by fat cells that acts as an appetite suppressant.", "normal_format": "Which is a chemical messenger secreted by fat cells that acts as an appetite suppressant?", "question_choices": [ "orexin", "angiotensin", "leptin", "ghrelin" ], "question_id": "fs-idm75208336", "question_text": "________ is a chemical messenger secreted by fat cells that acts as an appetite suppressant." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "bulimia nervosa" }, "bloom": null, "hl_context": "<hl> People suffering from bulimia nervosa engage in binge eating behavior that is followed by an attempt to compensate for the large amount of food consumed . <hl> Purging the food by inducing vomiting or through the use of laxatives are two common compensatory behaviors . Some affected individuals engage in excessive amounts of exercise to compensate for their binges . Bulimia is associated with many adverse health consequences that can include kidney failure , heart failure , and tooth decay . In addition , these individuals often suffer from anxiety and depression , and they are at an increased risk for substance abuse ( Mayo Clinic , 2012b ) . 
The lifetime prevalence rate for bulimia nervosa is estimated at around 1 % for women and less than 0.5 % for men ( Smink , van Hoeken , & Hoek , 2012 ) .", "hl_sentences": "People suffering from bulimia nervosa engage in binge eating behavior that is followed by an attempt to compensate for the large amount of food consumed .", "question": { "cloze_format": "________ is characterized by episodes of binge eating followed by attempts to compensate for the excessive amount of food that was consumed.", "normal_format": "What is characterized by episodes of binge eating followed by attempts to compensate for the excessive amount of food that was consumed?", "question_choices": [ "Prader-Willi syndrome", "morbid obesity", "anorexia nervosa", "bulimia nervosa" ], "question_id": "fs-idm99406256", "question_text": "________ is characterized by episodes of binge eating followed by attempts to compensate for the excessive amount of food that was consumed." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "40 or more" }, "bloom": null, "hl_context": "Obesity When someone weighs more than what is generally accepted as healthy for a given height , they are considered overweight or obese . According to the Centers for Disease Control and Prevention ( CDC ) , an adult with a body mass index ( BMI ) between 25 and 29.9 is considered overweight ( Figure 10.10 ) . An adult with a BMI of 30 or higher is considered obese ( Centers for Disease Control and Prevention [ CDC ] , 2012 ) . People who are so overweight that they are at risk for death are classified as morbidly obese . <hl> Morbid obesity is defined as having a BMI over 40 . <hl> Note that although BMI has been used as a healthy weight indicator by the World Health Organization ( WHO ) , the CDC , and other groups , its value as an assessment tool has been questioned . The BMI is most useful for studying populations , which is the work of these organizations . It is less useful in assessing an individual since height and weight measurements fail to account for important factors like fitness level . An athlete , for example , may have a high BMI because the tool doesn ’ t distinguish between the body ’ s percentage of fat and muscle in a person ’ s weight . Being extremely overweight or obese is a risk factor for several negative health consequences . These include , but are not limited to , an increased risk for cardiovascular disease , stroke , Type 2 diabetes , liver disease , sleep apnea , colon cancer , breast cancer , infertility , and arthritis . Given that it is estimated that in the United States around one-third of the adult population is obese and that nearly two-thirds of adults and one in six children qualify as overweight ( CDC , 2012 ) , there is substantial interest in trying to understand how to combat this important public health concern .", "hl_sentences": "Morbid obesity is defined as having a BMI over 40 .", "question": { "cloze_format": "In order to be classified as morbidly obese, an adult must have a BMI of ________.", "normal_format": "In order to be classified as morbidly obese, an adult must have a BMI of which of the following? ", "question_choices": [ "less than 25", "25–29.9", "30–39.9", "40 or more" ], "question_id": "fs-idm66834848", "question_text": "In order to be classified as morbidly obese, an adult must have a BMI of ________." 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "medial preoptic area of the hypothalamus" }, "bloom": null, "hl_context": "Physiological Mechanisms of Sexual Behavior and Motivation Much of what we know about the physiological mechanisms that underlie sexual behavior and motivation comes from animal research . <hl> As you ’ ve learned , the hypothalamus plays an important role in motivated behaviors , and sex is no exception . <hl> <hl> In fact , lesions to an area of the hypothalamus called the medial preoptic area completely disrupt a male rat ’ s ability to engage in sexual behavior . <hl> Surprisingly , medial preoptic lesions do not change how hard a male rat is willing to work to gain access to a sexually receptive female ( Figure 10.14 ) . This suggests that the ability to engage in sexual behavior and the motivation to do so may be mediated by neural systems distinct from one another .", "hl_sentences": "As you ’ ve learned , the hypothalamus plays an important role in motivated behaviors , and sex is no exception . In fact , lesions to an area of the hypothalamus called the medial preoptic area completely disrupt a male rat ’ s ability to engage in sexual behavior .", "question": { "cloze_format": "Animal research suggests that in male rats the ________ is critical for the ability to engage in sexual behavior, but not for the motivation to do so.", "normal_format": "What do animal research suggest is critical in male rats for the ability to engage in sexual behavior, but not for the motivation to do so?", "question_choices": [ "nucleus accumbens", "amygdala", "medial preoptic area of the hypothalamus", "hippocampus" ], "question_id": "fs-idm194458096", "question_text": "Animal research suggests that in male rats the ________ is critical for the ability to engage in sexual behavior, but not for the motivation to do so." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "orgasm" }, "bloom": null, "hl_context": "<hl> Based on these observations , Masters and Johnson divided the sexual response cycle into four phases that are fairly similar in men and women : excitement , plateau , orgasm , and resolution ( Figure 10.17 ) . <hl> The excitement phase is the arousal phase of the sexual response cycle , and it is marked by erection of the penis or clitoris and lubrication and expansion of the vaginal canal . During plateau , women experience further swelling of the vagina and increased blood flow to the labia minora , and men experience full erection and often exhibit pre-ejaculatory fluid . Both men and women experience increases in muscle tone during this time . <hl> Orgasm is marked in women by rhythmic contractions of the pelvis and uterus along with increased muscle tension . <hl> <hl> In men , pelvic contractions are accompanied by a buildup of seminal fluid near the urethra that is ultimately forced out by contractions of genital muscles , ( i . e . , ejaculation ) . <hl> Resolution is the relatively rapid return to an unaroused state accompanied by a decrease in blood pressure and muscular relaxation . While many women can quickly repeat the sexual response cycle , men must pass through a longer refractory period as part of resolution . The refractory period is a period of time that follows an orgasm during which an individual is incapable of experiencing another orgasm . 
In men , the duration of the refractory period can vary dramatically from individual to individual with some refractory periods as short as several minutes and others as long as a day . As men age , their refractory periods tend to span longer periods of time .", "hl_sentences": "Based on these observations , Masters and Johnson divided the sexual response cycle into four phases that are fairly similar in men and women : excitement , plateau , orgasm , and resolution ( Figure 10.17 ) . Orgasm is marked in women by rhythmic contractions of the pelvis and uterus along with increased muscle tension . In men , pelvic contractions are accompanied by a buildup of seminal fluid near the urethra that is ultimately forced out by contractions of genital muscles , ( i . e . , ejaculation ) .", "question": { "cloze_format": "During the ________ phase of the sexual response cycle, individuals experience rhythmic contractions of the pelvis that are accompanied by uterine contractions in women and ejaculation in men.", "normal_format": "During which phase of the sexual response cycle, individuals experience rhythmic contractions of the pelvis that are accompanied by uterine contractions in women and ejaculation in men?", "question_choices": [ "excitement", "plateau", "orgasm", "resolution" ], "question_id": "fs-idm165626800", "question_text": "During the ________ phase of the sexual response cycle, individuals experience rhythmic contractions of the pelvis that are accompanied by uterine contractions in women and ejaculation in men." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "Sexual desire and sexual ability can be separate functions." }, "bloom": null, "hl_context": "Physiological Mechanisms of Sexual Behavior and Motivation Much of what we know about the physiological mechanisms that underlie sexual behavior and motivation comes from animal research . As you ’ ve learned , the hypothalamus plays an important role in motivated behaviors , and sex is no exception . In fact , lesions to an area of the hypothalamus called the medial preoptic area completely disrupt a male rat ’ s ability to engage in sexual behavior . Surprisingly , medial preoptic lesions do not change how hard a male rat is willing to work to gain access to a sexually receptive female ( Figure 10.14 ) . <hl> This suggests that the ability to engage in sexual behavior and the motivation to do so may be mediated by neural systems distinct from one another . <hl>", "hl_sentences": "This suggests that the ability to engage in sexual behavior and the motivation to do so may be mediated by neural systems distinct from one another .", "question": { "cloze_format": "___ is not a result of the Kinsey study.", "normal_format": "Which of the following findings was not a result of the Kinsey study?", "question_choices": [ "Sexual desire and sexual ability can be separate functions.", "Females enjoy sex as much as males.", "Homosexual behavior is fairly common.", "Masturbation has no adverse consequences." ], "question_id": "fs-idm147126736", "question_text": "Which of the following findings was not a result of the Kinsey study?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "gender dysphoria" }, "bloom": null, "hl_context": "Gender Identity Many people conflate sexual orientation with gender identity because of stereotypical attitudes that exist about homosexuality . In reality , these are two related , but different , issues . 
Gender identity refers to one ’ s sense of being male or female . Generally , our gender identities correspond to our chromosomal and phenotypic sex , but this is not always the case . <hl> When individuals do not feel comfortable identifying with the gender associated with their biological sex , then they experience gender dysphoria . <hl> Gender dysphoria is a diagnostic category in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders ( DSM - 5 ) that describes individuals who do not identify as the gender that most people would assume they are . This dysphoria must persist for at least six months and result in significant distress or dysfunction to meet DSM - 5 diagnostic criteria . In order for children to be assigned this diagnostic category , they must verbalize their desire to become the other gender .", "hl_sentences": "When individuals do not feel comfortable identifying with the gender associated with their biological sex , then they experience gender dysphoria .", "question": { "cloze_format": "If someone is uncomfortable identifying with the gender normally associated with their biological sex, then he could be classified as experiencing ________.", "normal_format": "If someone is uncomfortable identifying with the gender normally associated with their biological sex, then he could be classified as experiencing what?", "question_choices": [ "homosexuality", "bisexuality", "heterosexuality", "gender dysphoria" ], "question_id": "fs-idm163965536", "question_text": "If someone is uncomfortable identifying with the gender normally associated with their biological sex, then he could be classified as experiencing ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "hippocampus" }, "bloom": null, "hl_context": "As mentioned earlier , the hippocampus is also involved in emotional processing . Like the amygdala , research has demonstrated that hippocampal structure and function are linked to a variety of mood and anxiety disorders . <hl> Individuals suffering from posttraumatic stress disorder ( PTSD ) show marked reductions in the volume of several parts of the hippocampus , which may result from decreased levels of neurogenesis and dendritic branching ( the generation of new neurons and the generation of new dendrites in existing neurons , respectively ) ( Wang et al . , 2010 ) . <hl> While it is impossible to make a causal claim with correlational research like this , studies have demonstrated behavioral improvements and hippocampal volume increases following either pharmacological or cognitive-behavioral therapy in individuals suffering from PTSD ( Bremner & Vermetten , 2004 ; Levy-Gigi , Szabó , Kelemen , & Kéri , 2013 ) .", "hl_sentences": "Individuals suffering from posttraumatic stress disorder ( PTSD ) show marked reductions in the volume of several parts of the hippocampus , which may result from decreased levels of neurogenesis and dendritic branching ( the generation of new neurons and the generation of new dendrites in existing neurons , respectively ) ( Wang et al . 
, 2010 ) .", "question": { "cloze_format": "Individuals suffering from posttraumatic stress disorder have been shown to have reduced volumes of the ________.", "normal_format": "What have individuals suffering from posttraumatic stress disorder been shown to have reduced volumes of?", "question_choices": [ "amygdala", "hippocampus", "hypothalamus", "thalamus" ], "question_id": "fs-idm27014176", "question_text": "Individuals suffering from posttraumatic stress disorder have been shown to have reduced volumes of the ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "James-Lange" }, "bloom": null, "hl_context": "The James-Lange theory of emotion asserts that emotions arise from physiological arousal . Recall what you have learned about the sympathetic nervous system and our fight or flight response when threatened . If you were to encounter some threat in your environment , like a venomous snake in your backyard , your sympathetic nervous system would initiate significant physiological arousal , which would make your heart race and increase your respiration rate . <hl> According to the James-Lange theory of emotion , you would only experience a feeling of fear after this physiological arousal had taken place . <hl> Furthermore , different arousal patterns would be associated with different feelings .", "hl_sentences": "According to the James-Lange theory of emotion , you would only experience a feeling of fear after this physiological arousal had taken place .", "question": { "cloze_format": "According to the ________ theory of emotion, emotional experiences arise from physiological arousal.", "normal_format": "According to which theory of emotion does emotional experiences arise from physiological arousal?", "question_choices": [ "James-Lange", "Cannon-Bard", "Schachter-Singer two-factor", "Darwinian" ], "question_id": "fs-idm12711264", "question_text": "According to the ________ theory of emotion, emotional experiences arise from physiological arousal." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "melancholy" }, "bloom": null, "hl_context": "Despite different emotional display rules , our ability to recognize and produce facial expressions of emotion appears to be universal . In fact , even congenitally blind individuals produce the same facial expression of emotions , despite their never having the opportunity to observe these facial displays of emotion in other people . This would seem to suggest that the pattern of activity in facial muscles involved in generating emotional expressions is universal , and indeed , this idea was suggested in the late 19th century in Charles Darwin ’ s book The Expression of Emotions in Man and Animals ( 1872 ) . <hl> In fact , there is substantial evidence for seven universal emotions that are each associated with distinct facial expressions . <hl> <hl> These include : happiness , surprise , sadness , fright , disgust , contempt , and anger ( Figure 10.24 ) ( Ekman & Keltner , 1997 ) . <hl>", "hl_sentences": "In fact , there is substantial evidence for seven universal emotions that are each associated with distinct facial expressions . 
These include : happiness , surprise , sadness , fright , disgust , contempt , and anger ( Figure 10.24 ) ( Ekman & Keltner , 1997 ) .", "question": { "cloze_format": "___ is not one of the seven universal emotions described in this chapter.", "normal_format": "Which of the following is not one of the seven universal emotions described in this chapter?", "question_choices": [ "contempt", "disgust", "melancholy", "anger" ], "question_id": "fs-idp87990016", "question_text": "Which of the following is not one of the seven universal emotions described in this chapter?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "James-Lange theory" }, "bloom": null, "hl_context": "Strong emotional responses are associated with strong physiological arousal . This has led some to suggest that the signs of physiological arousal , which include increased heart rate , respiration rate , and sweating , might serve as a tool to determine whether someone is telling the truth or not . The assumption is that most of us would show signs of physiological arousal if we were being dishonest with someone . <hl> A polygraph , or lie detector test , measures the physiological arousal of an individual responding to a series of questions . <hl> Someone trained in reading these tests would look for answers to questions that are associated with increased levels of arousal as potential signs that the respondent may have been dishonest on those answers . While polygraphs are still commonly used , their validity and accuracy are highly questionable because there is no evidence that lying is associated with any particular pattern of physiological arousal ( Saxe & Ben-Shakhar , 1999 ) . <hl> The James-Lange theory of emotion asserts that emotions arise from physiological arousal . <hl> Recall what you have learned about the sympathetic nervous system and our fight or flight response when threatened . If you were to encounter some threat in your environment , like a venomous snake in your backyard , your sympathetic nervous system would initiate significant physiological arousal , which would make your heart race and increase your respiration rate . <hl> According to the James-Lange theory of emotion , you would only experience a feeling of fear after this physiological arousal had taken place . <hl> <hl> Furthermore , different arousal patterns would be associated with different feelings . <hl>", "hl_sentences": "A polygraph , or lie detector test , measures the physiological arousal of an individual responding to a series of questions . The James-Lange theory of emotion asserts that emotions arise from physiological arousal . According to the James-Lange theory of emotion , you would only experience a feeling of fear after this physiological arousal had taken place . Furthermore , different arousal patterns would be associated with different feelings .", "question": { "cloze_format": "____ suggests that polygraphs should be quite accurate at differentiating one emotion from another.", "normal_format": "Which of the following theories of emotion would suggest that polygraphs should be quite accurate at differentiating one emotion from another?", "question_choices": [ "Cannon-Bard theory", "James-Lange theory", "Schachter-Singer two-factor theory", "Darwinian theory" ], "question_id": "fs-idp30509360", "question_text": "Which of the following theories of emotion would suggest that polygraphs should be quite accurate at differentiating one emotion from another?" }, "references_are_paraphrase": null } ]
10.1 Motivation

Learning Objectives
By the end of this section, you will be able to:
Define intrinsic and extrinsic motivation
Understand that instincts, drive reduction, self-efficacy, and social motives have all been proposed as theories of motivation
Explain the basic concepts associated with Maslow’s hierarchy of needs

Why do we do the things we do? What motivations underlie our behaviors? Motivation describes the wants or needs that direct behavior toward a goal. In addition to biological motives, motivations can be intrinsic (arising from internal factors) or extrinsic (arising from external factors) (Figure 10.2). Intrinsically motivated behaviors are performed because of the sense of personal satisfaction that they bring, while extrinsically motivated behaviors are performed in order to receive something from others.

Think about why you are currently in college. Are you here because you enjoy learning and want to pursue an education to make yourself a more well-rounded individual? If so, then you are intrinsically motivated. However, if you are here because you want to get a college degree to make yourself more marketable for a high-paying career or to satisfy the demands of your parents, then your motivation is more extrinsic in nature.

In reality, our motivations are often a mix of both intrinsic and extrinsic factors, but the nature of the mix of these factors might change over time (often in ways that seem counterintuitive). There is an old adage: “Choose a job that you love, and you will never have to work a day in your life,” meaning that if you enjoy your occupation, work doesn’t seem like . . . well, work. Some research suggests that this isn’t necessarily the case (Daniel & Esser, 1980; Deci, 1972; Deci, Koestner, & Ryan, 1999). According to this research, receiving some sort of extrinsic reinforcement (i.e., getting paid) for engaging in behaviors that we enjoy leads to those behaviors being thought of as work that no longer provides the same enjoyment. As a result, we might spend less time engaging in these reclassified behaviors in the absence of any extrinsic reinforcement.

For example, Odessa loves baking, so in her free time, she bakes for fun. After stocking shelves at her grocery store job, she often whips up pastries in the evenings because she enjoys baking. When a coworker in the store’s bakery department leaves his job, Odessa applies for his position and gets transferred to the bakery department. Although she enjoys what she does in her new job, after a few months, she no longer has much desire to concoct tasty treats in her free time. Baking has become work in a way that changes her motivation to do it (Figure 10.3). What Odessa has experienced is called the overjustification effect: intrinsic motivation is diminished when extrinsic motivation is given. This can lead to extinguishing the intrinsic motivation and creating a dependence on extrinsic rewards for continued performance (Deci et al., 1999).

Other studies suggest that intrinsic motivation may not be so vulnerable to the effects of extrinsic reinforcements, and in fact, reinforcements such as verbal praise might actually increase intrinsic motivation (Arnold, 1976; Cameron & Pierce, 1994). In that case, Odessa’s motivation to bake in her free time might remain high if, for example, customers regularly compliment her baking or cake decorating skills. These apparent discrepancies in the researchers’ findings may be understood by considering several factors.
For one, physical reinforcement (such as money) and verbal reinforcement (such as praise) may affect an individual in very different ways. In fact, tangible rewards (i.e., money) tend to have more negative effects on intrinsic motivation than do intangible rewards (i.e., praise). Furthermore, the expectation of the extrinsic motivator by an individual is crucial: if the person expects to receive an extrinsic reward, then intrinsic motivation for the task tends to be reduced. If, however, there is no such expectation, and the extrinsic motivation is presented as a surprise, then intrinsic motivation for the task tends to persist (Deci et al., 1999).

In educational settings, students are more likely to experience intrinsic motivation to learn when they feel a sense of belonging and respect in the classroom. This internalization can be enhanced if the evaluative aspects of the classroom are de-emphasized and if students feel that they exercise some control over the learning environment. Furthermore, providing students with activities that are challenging, yet doable, along with a rationale for engaging in various learning activities can enhance intrinsic motivation for those tasks (Niemiec & Ryan, 2009).

Consider Hakim, a first-year law student with two courses this semester: Family Law and Criminal Law. The Family Law professor has a rather intimidating classroom: he likes to put students on the spot with tough questions, which often leaves students feeling belittled or embarrassed. Grades are based exclusively on quizzes and exams, and the instructor posts the results of each test on the classroom door. In contrast, the Criminal Law professor facilitates classroom discussions and respectful debates in small groups. The majority of the course grade is not exam-based, but centers on a student-designed research project on a crime issue of the student’s choice. Research suggests that Hakim will be less intrinsically motivated in his Family Law course, where students are intimidated in the classroom setting and there is an emphasis on teacher-driven evaluations. Hakim is likely to experience a higher level of intrinsic motivation in his Criminal Law course, where the class setting encourages inclusive collaboration and a respect for ideas, and where students have more influence over their learning activities.

Theories About Motivation

William James (1842–1910) was an important contributor to early research into motivation, and he is often referred to as the father of psychology in the United States. James theorized that behavior was driven by a number of instincts, which aid survival (Figure 10.4). From a biological perspective, an instinct is a species-specific pattern of behavior that is not learned. There was, however, considerable controversy among James and his contemporaries over the exact definition of instinct. James proposed several dozen special human instincts, but many of his contemporaries had their own lists that differed. A mother’s protection of her baby, the urge to lick sugar, and hunting prey were among the human behaviors proposed as true instincts during James’s era. This view—that human behavior is driven by instincts—received a fair amount of criticism because of the undeniable role of learning in shaping all sorts of human behavior. In fact, as early as the 1900s, some instinctive behaviors were experimentally demonstrated to result from associative learning (recall when you learned about Watson’s conditioning of the fear response in “Little Albert”) (Faris, 1921).
Another early theory of motivation proposed that the maintenance of homeostasis is particularly important in directing behavior. You may recall from your earlier reading that homeostasis is the tendency to maintain a balance, or optimal level, within a biological system. In a body system, a control center (which is often part of the brain) receives input from receptors (which are often complexes of neurons). The control center directs effectors (which may be other neurons) to correct any imbalance detected by the control center.

According to the drive theory of motivation, deviations from homeostasis create physiological needs. These needs result in psychological drive states that direct behavior to meet the need and, ultimately, bring the system back to homeostasis. For example, if it’s been a while since you ate, your blood sugar levels will drop below normal. This low blood sugar will induce a physiological need and a corresponding drive state (i.e., hunger) that will direct you to seek out and consume food (Figure 10.5). Eating will eliminate the hunger, and, ultimately, your blood sugar levels will return to normal. Interestingly, drive theory also emphasizes the role that habits play in the type of behavioral response in which we engage. A habit is a pattern of behavior in which we regularly engage. Once we have engaged in a behavior that successfully reduces a drive, we are more likely to engage in that behavior whenever faced with that drive in the future (Graham & Weiner, 1996).

Extensions of drive theory take into account levels of arousal as potential motivators. As you recall from your study of learning, these theories assert that there is an optimal level of arousal that we all try to maintain (Figure 10.6). If we are underaroused, we become bored and will seek out some sort of stimulation. On the other hand, if we are overaroused, we will engage in behaviors to reduce our arousal (Berlyne, 1960). Most students have experienced this need to maintain optimal levels of arousal over the course of their academic career. Think about how much stress students experience toward the end of spring semester. They feel overwhelmed with seemingly endless exams, papers, and major assignments that must be completed on time. They probably yearn for the rest and relaxation that await them over the extended summer break. However, once they finish the semester, it doesn’t take too long before they begin to feel bored. Generally, by the time the next semester is beginning in the fall, many students are quite happy to return to school. This is an example of how arousal theory works.

So what is the optimal level of arousal? What level leads to the best performance? Research shows that moderate arousal is generally best; when arousal is very high or very low, performance tends to suffer (Yerkes & Dodson, 1908). Think of your arousal level regarding taking an exam for this class. If your level is very low, such as boredom and apathy, your performance will likely suffer. Similarly, a very high level, such as extreme anxiety, can be paralyzing and hinder performance. Consider the example of a softball team facing a tournament. The team is favored to win its first game by a large margin, so the players go into the game with a lower level of arousal and are beaten by a less skilled team. But the optimal arousal level is more complex than a simple answer that the middle level is always best.
Researchers Robert Yerkes (pronounced “Yerk-EES”) and John Dodson discovered that the optimal arousal level depends on the complexity and difficulty of the task to be performed (Figure 10.7). This relationship is known as the Yerkes-Dodson law, which holds that a simple task is performed best when arousal levels are relatively high, while complex tasks are best performed when arousal levels are lower.

Self-efficacy and Social Motives

Self-efficacy is an individual’s belief in her own capability to complete a task, which may include a previous successful completion of the exact task or a similar task. Albert Bandura (1994) theorized that an individual’s sense of self-efficacy plays a pivotal role in motivating behavior. Bandura argues that motivation derives from expectations that we have about the consequences of our behaviors, and ultimately, it is the appreciation of our capacity to engage in a given behavior that will determine what we do and the future goals that we set for ourselves. For example, if you have a sincere belief in your ability to achieve at the highest level, you are more likely to take on challenging tasks and to not let setbacks dissuade you from seeing the task through to the end.

A number of theorists have focused their research on understanding social motives (McAdams & Constantian, 1983; McClelland & Liberman, 1949; Murray et al., 1938). Among the motives they describe are needs for achievement, affiliation, and intimacy. It is the need for achievement that drives accomplishment and performance. The need for affiliation encourages positive interactions with others, and the need for intimacy causes us to seek deep, meaningful relationships. Henry Murray et al. (1938) categorized these needs into domains. For example, the need for achievement and recognition falls under the domain of ambition. Dominance and aggression were recognized as needs under the domain of human power, and play was a recognized need in the domain of interpersonal affection.

Maslow’s Hierarchy of Needs

While the theories of motivation described earlier relate to basic biological drives, individual characteristics, or social contexts, Abraham Maslow (1943) proposed a hierarchy of needs that spans the spectrum of motives, ranging from the biological to the individual to the social. These needs are often depicted as a pyramid (Figure 10.8). At the base of the pyramid are all of the physiological needs that are necessary for survival. These are followed by basic needs for security and safety, the need to be loved and to have a sense of belonging, and the need to have self-worth and confidence. The top tier of the pyramid is self-actualization, which is a need that essentially equates to achieving one’s full potential, and it can only be realized when needs lower on the pyramid have been met. To Maslow and humanistic theorists, self-actualization reflects the humanistic emphasis on positive aspects of human nature. Maslow suggested that this is an ongoing, lifelong process and that only a small percentage of people actually achieve a self-actualized state (Francis & Kritsonis, 2006; Maslow, 1943).

According to Maslow (1943), one must satisfy lower-level needs before addressing those needs that occur higher in the pyramid. So, for example, if someone is struggling to find enough food to meet his nutritional requirements, it is quite unlikely that he would spend an inordinate amount of time thinking about whether others viewed him as a good person or not. Instead, all of his energies would be geared toward finding something to eat. It should be pointed out, however, that Maslow’s theory has been criticized for its subjective nature and its inability to account for phenomena that occur in the real world (Leonard, 1982). More recent scholarship has noted that, late in life, Maslow proposed a self-transcendence level above self-actualization, representing striving for meaning and purpose beyond the concerns of oneself (Koltko-Rivera, 2006). For example, people sometimes make self-sacrifices in order to make a political statement or in an attempt to improve the conditions of others. Mohandas K. Gandhi, a world-renowned advocate for independence through nonviolent protest, on several occasions went on hunger strikes to protest a particular situation. People may starve themselves or otherwise put themselves in danger to display higher-level motives beyond their own needs.

Link to Learning
Check out this interactive exercise that illustrates some of the important concepts in Maslow’s hierarchy of needs.

10.2 Hunger and Eating

Learning Objectives
By the end of this section, you will be able to:
Describe how hunger and eating are regulated
Differentiate between levels of overweight and obesity and the associated health consequences
Explain the health consequences resulting from anorexia and bulimia nervosa

Eating is essential for survival, and it is no surprise that a drive like hunger exists to ensure that we seek out sustenance. While this chapter will focus primarily on the physiological mechanisms that regulate hunger and eating, powerful social, cultural, and economic influences also play important roles. This section will explain the regulation of hunger, eating, and body weight, and we will discuss the adverse consequences of disordered eating.

Physiological Mechanisms

There are a number of physiological mechanisms that serve as the basis for hunger. When our stomachs are empty, they contract, and a person typically then experiences hunger pangs. Chemical messages travel to the brain and serve as a signal to initiate feeding behavior. When our blood glucose levels drop, the pancreas and liver generate a number of chemical signals that induce hunger (Konturek et al., 2003; Novin, Robinson, Culbreth, & Tordoff, 1985) and thus initiate feeding behavior.

For most people, once they have eaten, they feel satiation, or fullness and satisfaction, and their eating behavior stops. Like the initiation of eating, satiation is also regulated by several physiological mechanisms. As blood glucose levels increase, the pancreas and liver send signals to shut off hunger and eating (Drazen & Woods, 2003; Druce, Small, & Bloom, 2004; Greary, 1990). The food’s passage through the gastrointestinal tract also provides important satiety signals to the brain (Woods, 2004), and fat cells release leptin, a satiety hormone.

The various hunger and satiety signals that are involved in the regulation of eating are integrated in the brain. Research suggests that several areas of the hypothalamus and hindbrain are especially important sites where this integration occurs (Ahima & Antwi, 2008; Woods & D’Alessio, 2008). Ultimately, activity in the brain determines whether or not we engage in feeding behavior (Figure 10.9).

Metabolism and Body Weight

Our body weight is affected by a number of factors, including gene-environment interactions and the number of calories we consume versus the number of calories we burn in daily activity.
If our caloric intake exceeds our caloric use, our bodies store the excess energy in the form of fat. If we consume fewer calories than we burn off, then stored fat will be converted to energy. Our energy expenditure is obviously affected by our levels of activity, but our body’s metabolic rate also comes into play. A person’s metabolic rate is the amount of energy that is expended in a given period of time, and there is tremendous individual variability in our metabolic rates. People with high rates of metabolism are able to burn off calories more easily than those with lower rates of metabolism.

We all experience fluctuations in our weight from time to time, but generally, most people’s weights fluctuate within a narrow margin, in the absence of extreme changes in diet and/or physical activity. This observation led some to propose a set-point theory of body weight regulation. The set-point theory asserts that each individual has an ideal body weight, or set point, which is resistant to change. This set point is genetically predetermined, and efforts to move our weight significantly from the set point are resisted by compensatory changes in energy intake and/or expenditure (Speakman et al., 2011). Some of the predictions generated from this particular theory have not received empirical support. For example, no differences in metabolic rate were found between individuals who had recently lost significant amounts of weight and a control group (Weinsier et al., 2000). In addition, the set-point theory fails to account for the influence of social and environmental factors in the regulation of body weight (Martin-Gronert & Ozanne, 2013; Speakman et al., 2011). Despite these limitations, set-point theory is still often used as a simple, intuitive explanation of how body weight is regulated.

Obesity

When someone weighs more than what is generally accepted as healthy for a given height, they are considered overweight or obese. According to the Centers for Disease Control and Prevention (CDC), an adult with a body mass index (BMI) between 25 and 29.9 is considered overweight (Figure 10.10). An adult with a BMI of 30 or higher is considered obese (Centers for Disease Control and Prevention [CDC], 2012). People who are so overweight that they are at risk for death are classified as morbidly obese. Morbid obesity is defined as having a BMI over 40.

Note that although BMI has been used as a healthy weight indicator by the World Health Organization (WHO), the CDC, and other groups, its value as an assessment tool has been questioned. The BMI is most useful for studying populations, which is the work of these organizations. It is less useful in assessing an individual, since height and weight measurements fail to account for important factors like fitness level. An athlete, for example, may have a high BMI because the tool doesn’t distinguish between the body’s percentage of fat and muscle in a person’s weight.

Being extremely overweight or obese is a risk factor for several negative health consequences. These include, but are not limited to, an increased risk for cardiovascular disease, stroke, Type 2 diabetes, liver disease, sleep apnea, colon cancer, breast cancer, infertility, and arthritis. Given that it is estimated that in the United States around one-third of the adult population is obese and that nearly two-thirds of adults and one in six children qualify as overweight (CDC, 2012), there is substantial interest in trying to understand how to combat this important public health concern.
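The BMI figure behind these cutoffs is a simple calculation: body weight in kilograms divided by the square of height in meters. The short Python sketch below is for illustration only; the function names and the example figures are our own, not part of the text. It computes a BMI and maps it onto the CDC categories described above.

def bmi(weight_kg, height_m):
    # Body mass index: weight in kilograms divided by height in meters squared.
    return weight_kg / height_m ** 2

def cdc_category(value):
    # Map a BMI value onto the CDC cutoffs described in this section.
    # The section does not discuss categories below 25, so they are grouped here.
    if value < 25:
        return "below the overweight range"
    if value < 30:
        return "overweight (BMI 25-29.9)"
    if value <= 40:
        return "obese (BMI 30 or higher)"
    return "morbidly obese (BMI over 40)"

# Hypothetical example: a person weighing 95 kg who is 1.75 m tall.
value = bmi(95, 1.75)  # roughly 31.0
print(round(value, 1), cdc_category(value))

As the passage notes, a single number like this says nothing about body composition, which is why the same calculation can label a muscular athlete as obese.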
What causes someone to be overweight or obese? You have already read that both genes and environment are important factors for determining body weight, and if more calories are consumed than expended, excess energy is stored as fat. However, socioeconomic status and the physical environment must also be considered as contributing factors (CDC, 2012). For example, an individual who lives in an impoverished neighborhood that is overrun with crime may never feel comfortable walking or biking to work or to the local market. This might limit the amount of physical activity in which he engages and result in an increased body weight. Similarly, some people may not be able to afford healthy food options from their market, or these options may be unavailable (especially in urban areas or poorer neighborhoods); therefore, some people rely primarily on available, inexpensive, high-fat, and high-calorie fast food as their primary source of nutrition.

Generally, overweight and obese individuals are encouraged to try to reduce their weights through a combination of both diet and exercise. While some people are very successful with these approaches, many struggle to lose excess weight. In cases in which a person has had no success with repeated attempts to reduce weight or is at risk for death because of obesity, bariatric surgery may be recommended. Bariatric surgery is a type of surgery specifically aimed at weight reduction, and it involves modifying the gastrointestinal system to reduce the amount of food that can be eaten and/or limiting how much of the digested food can be absorbed (Figure 10.11) (Mayo Clinic, 2013). A recent meta-analysis suggests that bariatric surgery is more effective than nonsurgical treatment for obesity in the two years immediately following the procedure, but to date, no long-term studies exist (Gloy et al., 2013).

Link to Learning
Watch this video that describes two different types of bariatric surgeries.

Dig Deeper: Prader-Willi Syndrome

Prader-Willi Syndrome (PWS) is a genetic disorder that results in persistent feelings of intense hunger and reduced rates of metabolism. Typically, affected children have to be supervised around the clock to ensure that they do not engage in excessive eating. Currently, PWS is the leading genetic cause of morbid obesity in children, and it is associated with a number of cognitive deficits and emotional problems (Figure 10.12).

While genetic testing can be used to make a diagnosis, there are a number of behavioral diagnostic criteria associated with PWS. From birth to 2 years of age, lack of muscle tone and poor sucking behavior may serve as early signs of PWS. Developmental delays are seen between the ages of 6 and 12, and the excessive eating and cognitive deficits associated with PWS usually begin a little later. While the exact mechanisms of PWS are not fully understood, there is evidence that affected individuals have hypothalamic abnormalities. This is not surprising, given the hypothalamus’s role in regulating hunger and eating. However, as you will learn in the next section of this chapter, the hypothalamus is also involved in the regulation of sexual behavior. Consequently, many individuals suffering from PWS fail to reach sexual maturity during adolescence.

There is no current treatment or cure for PWS. However, if weight can be controlled in these individuals, then their life expectancies are significantly increased (historically, sufferers of PWS often died in adolescence or early adulthood). Advances in the use of various psychoactive medications and growth hormones continue to enhance the quality of life for individuals with PWS (Cassidy & Driscoll, 2009; Prader-Willi Syndrome Association, 2012).

Eating Disorders

While nearly two out of three US adults struggle with issues related to being overweight, a smaller, but significant, portion of the population has eating disorders that typically result in being normal weight or underweight. Often, these individuals are fearful of gaining weight. Individuals who suffer from bulimia nervosa and anorexia nervosa face many adverse health consequences (Mayo Clinic, 2012a, 2012b).

People suffering from bulimia nervosa engage in binge eating behavior that is followed by an attempt to compensate for the large amount of food consumed. Purging the food by inducing vomiting or through the use of laxatives are two common compensatory behaviors. Some affected individuals engage in excessive amounts of exercise to compensate for their binges. Bulimia is associated with many adverse health consequences that can include kidney failure, heart failure, and tooth decay. In addition, these individuals often suffer from anxiety and depression, and they are at an increased risk for substance abuse (Mayo Clinic, 2012b). The lifetime prevalence rate for bulimia nervosa is estimated at around 1% for women and less than 0.5% for men (Smink, van Hoeken, & Hoek, 2012).

As of the 2013 release of the fifth edition of the Diagnostic and Statistical Manual, binge eating disorder is recognized as a disorder by the American Psychiatric Association (APA). Unlike with bulimia, eating binges are not followed by inappropriate behavior, such as purging, but they are followed by distress, including feelings of guilt and embarrassment. The resulting psychological distress distinguishes binge eating disorder from overeating (American Psychiatric Association [APA], 2013).

Anorexia nervosa is an eating disorder characterized by the maintenance of a body weight well below average through starvation and/or excessive exercise. Individuals suffering from anorexia nervosa often have a distorted body image, referenced in literature as a type of body dysmorphia, meaning that they view themselves as overweight even though they are not. Like bulimia nervosa, anorexia nervosa is associated with a number of significant negative health outcomes: bone loss, heart failure, kidney failure, amenorrhea (cessation of the menstrual period), reduced function of the gonads, and, in extreme cases, death. Furthermore, there is an increased risk for a number of psychological problems, which include anxiety disorders, mood disorders, and substance abuse (Mayo Clinic, 2012a). Estimates of the prevalence of anorexia nervosa vary from study to study but generally range from just under one percent to just over four percent in women. Generally, prevalence rates are considerably lower for men (Smink et al., 2012).

Link to Learning
Watch this news story about an Italian advertising campaign to raise public awareness of anorexia nervosa.

While both anorexia and bulimia nervosa occur in men and women of many different cultures, Caucasian females from Western societies tend to be the most at-risk population.
Recent research indicates that females between the ages of 15 and 19 are most at risk, and it has long been suspected that these eating disorders are culture-bound phenomena related to messages of a thin ideal often portrayed in popular media and the fashion world ( Figure 10.13 ) (Smink et al., 2012). While social factors play an important role in the development of eating disorders, there is also evidence that genetic factors may predispose people to these disorders (Collier & Treasure, 2004). 10.3 Sexual Behavior Learning Objectives By the end of this section, you will be able to: Understand basic biological mechanisms regulating sexual behavior and motivation Appreciate the importance of Alfred Kinsey’s research on human sexuality Recognize the contributions that William Masters and Virginia Johnson’s research made to our understanding of the sexual response cycle Define sexual orientation and gender identity Like food, sex is an important part of our lives. From an evolutionary perspective, the reason is obvious—perpetuation of the species. Sexual behavior in humans, however, involves much more than reproduction. This section provides an overview of research that has been conducted on human sexual behavior and motivation. This section will close with a discussion of issues related to gender and sexual orientation. Physiological Mechanisms of Sexual Behavior and Motivation Much of what we know about the physiological mechanisms that underlie sexual behavior and motivation comes from animal research. As you’ve learned, the hypothalamus plays an important role in motivated behaviors, and sex is no exception. In fact, lesions to an area of the hypothalamus called the medial preoptic area completely disrupt a male rat’s ability to engage in sexual behavior. Surprisingly, medial preoptic lesions do not change how hard a male rat is willing to work to gain access to a sexually receptive female ( Figure 10.14 ). This suggests that the ability to engage in sexual behavior and the motivation to do so may be mediated by neural systems distinct from one another. Animal research suggests that limbic system structures such as the amygdala and nucleus accumbens are especially important for sexual motivation. Damage to these areas results in a decreased motivation to engage in sexual behavior, while leaving the ability to do so intact ( Figure 10.15 ) (Everett, 1990). Similar dissociations of sexual motivation and sexual ability have also been observed in the female rat (Becker, Rudick, & Jenkins, 2001; Jenkins & Becker, 2001). Although human sexual behavior is much more complex than that seen in rats, some parallels between animals and humans can be drawn from this research. The worldwide popularity of drugs used to treat erectile dysfunction (Conrad, 2005) speaks to the fact that sexual motivation and the ability to engage in sexual behavior can also be dissociated in humans. Moreover, disorders that involve abnormal hypothalamic function are often associated with hypogonadism (reduced function of the gonads) and reduced sexual function (e.g., Prader-Willi syndrome). Given the hypothalamus’s role in endocrine function, it is not surprising that hormones secreted by the endocrine system also play important roles in sexual motivation and behavior. For example, many animals show no sign of sexual motivation in the absence of the appropriate combination of sex hormones from their gonads.
While this is not the case for humans, there is considerable evidence that sexual motivation for both men and women varies as a function of circulating testosterone levels (Bhasin, Enzlin, Coviello, & Basson, 2007; Carter, 1992; Sherwin, 1988). Kinsey’s Research Before the late 1940s, access to reliable, empirically based information on sex was limited. Physicians were considered authorities on all issues related to sex, despite the fact that they had little to no training in these issues, and it is likely that most of what people knew about sex had been learned either through their own experiences or by talking with their peers. Convinced that people would benefit from a more open dialogue on issues related to human sexuality, Dr. Alfred Kinsey of Indiana University initiated large-scale survey research on the topic ( Figure 10.16 ). The results of some of these efforts were published in two books— Sexual Behavior in the Human Male and Sexual Behavior in the Human Female —which appeared in 1948 and 1953, respectively (Bullough, 1998). At the time, the Kinsey reports were quite sensational. Never before had the American public seen its private sexual behavior become the focus of scientific scrutiny on such a large scale. The books, which were filled with statistics and scientific lingo, sold remarkably well to the general public, and people began to engage in open conversations about human sexuality. As you might imagine, not everyone was happy that this information was being published. In fact, these books were banned in some countries. Ultimately, the controversy resulted in Kinsey losing funding that he had secured from the Rockefeller Foundation to continue his research efforts (Bancroft, 2004). Although Kinsey’s research has been widely criticized as being riddled with sampling and statistical errors (Jenkins, 2010), there is little doubt that this research was very influential in shaping future research on human sexual behavior and motivation. Kinsey described a remarkably diverse range of sexual behaviors and experiences reported by the volunteers participating in his research. Behaviors that had once been considered exceedingly rare or problematic were demonstrated to be much more common and innocuous than previously imagined (Bancroft, 2004; Bullough, 1998). Link to Learning Watch this trailer from the 2004 film Kinsey that depicts Alfred Kinsey’s life and research. Among the results of Kinsey’s research were the findings that women are as interested and experienced in sex as their male counterparts, that both males and females masturbate without adverse health consequences, and that homosexual acts are fairly common (Bancroft, 2004). Kinsey also developed a continuum known as the Kinsey scale that is still commonly used today to categorize an individual’s sexual orientation (Jenkins, 2010). Sexual orientation is an individual’s emotional and erotic attractions to same-sexed individuals ( homosexual ), opposite-sexed individuals ( heterosexual ), or both ( bisexual ). Masters and Johnson’s Research In 1966, William Masters and Virginia Johnson published a book detailing the results of their observations of nearly 700 people who agreed to participate in their study of physiological responses during sexual behavior. Unlike Kinsey, who used personal interviews and surveys to collect data, Masters and Johnson observed people having intercourse in a variety of positions, and they observed people masturbating, manually or with the aid of a device.
While this was occurring, researchers recorded measurements of physiological variables, such as blood pressure and respiration rate, as well as measurements of sexual arousal, such as vaginal lubrication and penile tumescence (swelling associated with an erection). In total, Masters and Johnson observed nearly 10,000 sexual acts as a part of their research (Hock, 2008). Based on these observations, Masters and Johnson divided the sexual response cycle into four phases that are fairly similar in men and women: excitement, plateau, orgasm, and resolution ( Figure 10.17 ). The excitement phase is the arousal phase of the sexual response cycle, and it is marked by erection of the penis or clitoris and lubrication and expansion of the vaginal canal. During plateau , women experience further swelling of the vagina and increased blood flow to the labia minora, and men experience full erection and often exhibit pre-ejaculatory fluid. Both men and women experience increases in muscle tone during this time. Orgasm is marked in women by rhythmic contractions of the pelvis and uterus along with increased muscle tension. In men, pelvic contractions are accompanied by a buildup of seminal fluid near the urethra that is ultimately forced out by contractions of genital muscles (i.e., ejaculation). Resolution is the relatively rapid return to an unaroused state accompanied by a decrease in blood pressure and muscular relaxation. While many women can quickly repeat the sexual response cycle, men must pass through a longer refractory period as part of resolution. The refractory period is a period of time that follows an orgasm during which an individual is incapable of experiencing another orgasm. In men, the duration of the refractory period can vary dramatically from individual to individual, with some refractory periods as short as several minutes and others as long as a day. As men age, their refractory periods tend to span longer periods of time. In addition to the insights that their research provided with regard to the sexual response cycle and the multi-orgasmic potential of women, Masters and Johnson also collected important information about reproductive anatomy. Their research established the oft-cited statistics for the average size of a flaccid and an erect penis (3 and 6 inches, respectively) and dispelled long-held beliefs about relationships between the size of a man’s erect penis and his ability to provide sexual pleasure to his female partner. Furthermore, they determined that the vagina is a very elastic structure that can conform to penises of various sizes (Hock, 2008). Sexual Orientation As mentioned earlier, a person’s sexual orientation is their emotional and erotic attraction toward another individual ( Figure 10.18 ). While the majority of people identify as heterosexual, there is a sizable population of people within the United States who identify as either homosexual or bisexual. Research suggests that somewhere between 3% and 10% of the population identifies as homosexual (Kinsey, Pomeroy, & Martin, 1948; LeVay, 1996; Pillard & Bailey, 1995). Issues of sexual orientation have long fascinated scientists interested in determining what causes one individual to be heterosexual while another is homosexual. For many years, people believed that these differences arose because of different socialization and familial experiences.
However, research has consistently demonstrated that family backgrounds and experiences are very similar among heterosexuals and homosexuals (Bell, Weinberg, & Hammersmith, 1981; Ross & Arrindell, 1988). Genetic and biological mechanisms have also been proposed, and the balance of research evidence suggests that sexual orientation has an underlying biological component. For instance, over the past 25 years, research has demonstrated gene-level contributions to sexual orientation (Bailey & Pillard, 1991; Hamer, Hu, Magnuson, Hu, & Pattatucci, 1993; Rodriguez-Larralde & Paradisi, 2009), with some researchers estimating that genes account for at least half of the variability seen in human sexual orientation (Pillard & Bailey, 1998). Other studies report differences in brain structure and function between heterosexuals and homosexuals (Allen & Gorski, 1992; Byne et al., 2001; Hu et al., 2008; LeVay, 1991; Ponseti et al., 2006; Rahman & Wilson, 2003a; Swaab & Hofman, 1990), and even differences in basic body structure and function have been observed (Hall & Kimura, 1994; Lippa, 2003; Loehlin & McFadden, 2003; McFadden & Champlin, 2000; McFadden & Pasanen, 1998; Rahman & Wilson, 2003b). In aggregate, the data suggest that to a significant extent, sexual orientations are something with which we are born. Misunderstandings About Sexual Orientation Regardless of how sexual orientation is determined, research has made clear that sexual orientation is not a choice, but rather it is a relatively stable characteristic of a person that cannot be changed. Claims of successful gay conversion therapy have received wide criticism from the research community due to significant concerns with research design, recruitment of experimental participants, and interpretation of data. As such, there is no credible scientific evidence to suggest that individuals can change their sexual orientation (Jenkins, 2010). Dr. Robert Spitzer, the author of one of the most widely cited examples of successful conversion therapy, apologized to both the scientific community and the gay community for his mistakes, and he publicly recanted his own paper in a letter addressed to the editor of Archives of Sexual Behavior in the spring of 2012 (Carey, 2012). In this letter, Spitzer wrote, I was considering writing something that would acknowledge that I now judge the major critiques of the study as largely correct. . . . I believe I owe the gay community an apology for my study making unproven claims of the efficacy of reparative therapy. I also apologize to any gay person who wasted time or energy undergoing some form of reparative therapy because they believed that I had proven that reparative therapy works with some “highly motivated” individuals. (Becker, 2012, pars. 2, 5) Citing research that suggests not only that gay conversion therapy is ineffective, but also potentially harmful, legislative efforts to make such therapy illegal have either been enacted (e.g., it is now illegal in California) or are underway across the United States, and many professional organizations have issued statements against this practice (Human Rights Campaign, n.d.). Link to Learning Read this draft of Dr. Spitzer’s letter. Gender Identity Many people conflate sexual orientation with gender identity because of stereotypical attitudes that exist about homosexuality. In reality, these are two related, but different, issues. Gender identity refers to one’s sense of being male or female.
Generally, our gender identities correspond to our chromosomal and phenotypic sex, but this is not always the case. When individuals do not feel comfortable identifying with the gender associated with their biological sex, they experience gender dysphoria. Gender dysphoria is a diagnostic category in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) that describes individuals who do not identify as the gender that most people would assume they are. This dysphoria must persist for at least six months and result in significant distress or dysfunction to meet DSM-5 diagnostic criteria. In order for children to be assigned this diagnostic category, they must verbalize their desire to become the other gender. Many people who are classified as gender dysphoric seek to live their lives in ways that are consistent with their own gender identity. This involves dressing in opposite-sex clothing and assuming an opposite-sex identity. These individuals may also undertake transgender hormone therapy in an attempt to make their bodies look more like the opposite sex, and in some cases, they elect to have surgeries to alter the appearance of their external genitalia to resemble that of their gender identity ( Figure 10.19 ). While these may sound like drastic changes, gender dysphoric individuals take these steps because their bodies seem to them to be a mistake of nature, and they seek to correct this mistake. Link to Learning Hear firsthand about the transgender experience and the disconnect that occurs when one’s self-identity is betrayed by one’s body. In this brief video , Chaz Bono discusses the difficulties of growing up identifying as male, while living in a female body. Cultural Factors in Sexual Orientation and Gender Identity Issues related to sexual orientation and gender identity are very much influenced by sociocultural factors. Even the ways in which we define sexual orientation and gender vary from one culture to the next. While in the United States exclusive heterosexuality is viewed as the norm, there are societies that have different attitudes regarding homosexual behavior. In fact, in some instances, periods of exclusively homosexual behavior are socially prescribed as a part of normal development and maturation. For example, in parts of New Guinea, young boys are expected to engage in sexual behavior with other boys for a given period of time because it is believed that doing so is necessary for these boys to become men (Baldwin & Baldwin, 1989). The United States has a two-gendered culture: we tend to classify an individual as either male or female. However, in some cultures there are additional gender variants resulting in more than two gender categories. For example, in Thailand, you can be male, female, or kathoey. A kathoey is an individual who would be described as intersexed or transgendered in the United States (Tangmunkongvorakul, Banwell, Carmichael, Utomo, & Sleigh, 2010). Dig Deeper The Case of David Reimer In August of 1965, Janet and Ronald Reimer of Winnipeg, Canada, welcomed the birth of their twin sons, Bruce and Brian. Within a few months, the twins were experiencing urinary problems; doctors recommended that the problems be alleviated by having the boys circumcised. A malfunction of the medical equipment used to perform the circumcision resulted in Bruce’s penis being irreparably damaged. Distraught, Janet and Ronald sought expert advice on what to do with their baby boy.
By happenstance, the couple became aware of Dr. John Money at Johns Hopkins University and his theory of psychosexual neutrality (Colapinto, 2000). Dr. Money had spent a considerable amount of time researching transgendered individuals and individuals born with ambiguous genitalia. As a result of this work, he developed a theory of psychosexual neutrality. His theory asserted that we are essentially neutral at birth with regard to our gender identity and that we don’t assume a concrete gender identity until we begin to master language. Furthermore, Dr. Money believed that the way in which we are socialized in early life is ultimately much more important than our biology in determining our gender identity (Money, 1962). Dr. Money encouraged Janet and Ronald to bring the twins to Johns Hopkins University, and he convinced them that they should raise Bruce as a girl. Left with few other options at the time, Janet and Ronald agreed to have Bruce’s testicles removed and to raise him as a girl. When they returned home to Canada, they brought with them Brian and his “sister,” Brenda, along with specific instructions to never reveal to Brenda that she had been born a boy (Colapinto, 2000). Early on, Dr. Money shared with the scientific community the great success of this natural experiment that seemed to fully support his theory of psychosexual neutrality (Money, 1975). Indeed, in early interviews with the children it appeared that Brenda was a typical little girl who liked to play with “girly” toys and do “girly” things. However, Dr. Money was less than forthcoming with information that seemed to argue against the success of the case. In reality, Brenda’s parents were constantly concerned that their little girl wasn’t really behaving as most girls did, and by the time Brenda was nearing adolescence, it was painfully obvious to the family that she was really having a hard time identifying as a female. In addition, Brenda was becoming increasingly reluctant to continue her visits with Dr. Money to the point that she threatened suicide if her parents made her go back to see him again. At that point, Janet and Ronald disclosed the true nature of Brenda’s early childhood to their daughter. While initially shocked, Brenda reported that things made sense to her now, and ultimately, by the time she was an adolescent, Brenda had decided to identify as a male. Thus, she became David Reimer. David was quite comfortable in his masculine role. He made new friends and began to think about his future. Although his castration had left him infertile, he still wanted to be a father. In 1990, David married a single mother and loved his new role as a husband and father. In 1997, David was made aware that Dr. Money was continuing to publicize his case as a success supporting his theory of psychosexual neutrality. This prompted David and his brother to go public with their experiences in an attempt to discredit the doctor’s publications. While this revelation created a firestorm in the scientific community for Dr. Money, it also triggered a series of unfortunate events that ultimately led to David committing suicide in 2004 (O’Connell, 2004). This sad story speaks to the complexities involved in gender identity. While the Reimer case had earlier been paraded as a hallmark of how socialization trumped biology in terms of gender identity, the truth of the story made the scientific and medical communities more cautious in dealing with cases that involve intersex children and their unique circumstances.
In fact, stories like this one have prompted measures to prevent unnecessary harm and suffering to children who might have issues with gender identity. For example, in 2013, a law took effect in Germany allowing parents of intersex children to classify their children as indeterminate so that children can self-assign the appropriate gender once they have fully developed their own gender identities (Paramaguru, 2013). Link to Learning Watch this news story about the experiences of David Reimer and his family. 10.4 Emotion Learning Objectives By the end of this section, you will be able to: Explain the major theories of emotion Describe the role that limbic structures play in emotional processing Understand the ubiquitous nature of producing and recognizing emotional expression As we move through our daily lives, we experience a variety of emotions. An emotion is a subjective state of being that we often describe as our feelings. The words emotion and mood are sometimes used interchangeably, but psychologists use these words to refer to two different things. Typically, the word emotion indicates a subjective, affective state that is relatively intense and that occurs in response to something we experience ( Figure 10.20 ). Emotions are often thought to be consciously experienced and intentional. Mood , on the other hand, refers to a prolonged, less intense, affective state that does not occur in response to something we experience. Mood states may not be consciously recognized and do not carry the intentionality that is associated with emotion (Beedie, Terry, Lane, & Devonport, 2011). Here we will focus on emotion, and you will learn more about mood in the chapter that covers psychological disorders. We can be at the heights of joy or in the depths of despair. We might feel angry when we are betrayed, fear when we are threatened, and surprised when something unexpected happens. This section will outline some of the most well-known theories explaining our emotional experience and provide insight into the biological bases of emotion. This section closes with a discussion of the ubiquitous nature of facial expressions of emotion and our abilities to recognize those expressions in others. Theories of Emotion Our emotional states are combinations of physiological arousal, psychological appraisal, and subjective experiences. Together, these are known as the components of emotion . These appraisals are informed by our experiences, backgrounds, and cultures. Therefore, different people may have different emotional experiences even when faced with similar circumstances. Over time, several different theories of emotion, shown in Figure 10.21 , have been proposed to explain how the various components of emotion interact with one another. The James-Lange theory of emotion asserts that emotions arise from physiological arousal. Recall what you have learned about the sympathetic nervous system and our fight or flight response when threatened. If you were to encounter some threat in your environment, like a venomous snake in your backyard, your sympathetic nervous system would initiate significant physiological arousal, which would make your heart race and increase your respiration rate. According to the James-Lange theory of emotion, you would only experience a feeling of fear after this physiological arousal had taken place. Furthermore, different arousal patterns would be associated with different feelings. 
Other theorists, however, doubted that the physiological arousal that occurs with different types of emotions is distinct enough to result in the wide variety of emotions that we experience. Thus, the Cannon-Bard theory of emotion was developed. According to this view, physiological arousal and emotional experience occur simultaneously, yet independently (Lang, 1994). So, when you see the venomous snake, you feel fear at exactly the same time that your body mounts its fight or flight response. This emotional reaction would be separate and independent of the physiological arousal, even though they co-occur. The James-Lange and Cannon-Bard theories have each garnered some empirical support in various research paradigms. For instance, Chwalisz, Diener, and Gallagher (1988) conducted a study of the emotional experiences of people who had spinal cord injuries. They reported that individuals who were incapable of receiving autonomic feedback because of their injuries still experienced emotion; however, there was a tendency for people with less awareness of autonomic arousal to experience less intense emotions. More recently, research investigating the facial feedback hypothesis suggested that suppression of facial expression of emotion lowered the intensity of some emotions experienced by participants (Davis, Senghas, & Ochsner, 2009). In both of these examples, neither theory is fully supported because physiological arousal does not seem to be necessary for the emotional experience, but this arousal does appear to be involved in enhancing the intensity of the emotional experience. The Schachter-Singer two-factor theory of emotion is another variation on theories of emotions that takes into account both physiological arousal and the emotional experience. According to this theory, emotions are composed of two factors: physiological and cognitive. In other words, physiological arousal is interpreted in context to produce the emotional experience. In revisiting our example involving the venomous snake in your backyard, the two-factor theory maintains that the snake elicits sympathetic nervous system activation that is labeled as fear given the context, and our experience is that of fear. It is important to point out that Schachter and Singer believed that physiological arousal is very similar across the different types of emotions that we experience, and therefore, the cognitive appraisal of the situation is critical to the actual emotion experienced. In fact, it might be possible to misattribute arousal to an emotional experience if the circumstances were right (Schachter & Singer, 1962). To test their idea, Schachter and Singer performed a clever experiment. Male participants were randomly assigned to one of several groups. Some of the participants received injections of epinephrine that caused bodily changes that mimicked the fight-or-flight response of the sympathetic nervous system; however, only some of these men were told to expect these reactions as side effects of the injection. The other men that received injections of epinephrine were told either that the injection would have no side effects or that it would result in a side effect unrelated to a sympathetic response, such as itching feet or headache. After receiving these injections, participants waited in a room with someone else they thought was another subject in the research project. In reality, the other person was a confederate of the researcher. 
The confederate engaged in scripted displays of euphoric or angry behavior (Schachter & Singer, 1962). When those subjects who were told that they should expect to feel symptoms of physiological arousal were asked about any emotional changes that they had experienced related to either euphoria or anger (depending on how their confederate behaved), they reported none. However, the men who weren’t expecting physiological arousal as a function of the injection were more likely to report that they experienced euphoria or anger as a function of their assigned confederate’s behavior. While everyone that received an injection of epinephrine experienced the same physiological arousal, only those who were not expecting the arousal used context to interpret the arousal as a change in emotional state (Schachter & Singer, 1962). Strong emotional responses are associated with strong physiological arousal. This has led some to suggest that the signs of physiological arousal, which include increased heart rate, respiration rate, and sweating, might serve as a tool to determine whether someone is telling the truth or not. The assumption is that most of us would show signs of physiological arousal if we were being dishonest with someone. A polygraph , or lie detector test, measures the physiological arousal of an individual responding to a series of questions. Someone trained in reading these tests would look for answers to questions that are associated with increased levels of arousal as potential signs that the respondent may have been dishonest on those answers. While polygraphs are still commonly used, their validity and accuracy are highly questionable because there is no evidence that lying is associated with any particular pattern of physiological arousal (Saxe & Ben-Shakhar, 1999). The relationship between our experiencing of emotions and our cognitive processing of them, and the order in which these occur, remains a topic of research and debate. Lazarus (1991) developed the cognitive-mediational theory that asserts our emotions are determined by our appraisal of the stimulus. This appraisal mediates between the stimulus and the emotional response, and it is immediate and often unconscious. In contrast to the Schachter-Singer model, the appraisal precedes a cognitive label. You will learn more about Lazarus’s appraisal concept when you study stress, health, and lifestyle. Two other prominent views arise from the work of Robert Zajonc and Joseph LeDoux. Zajonc asserted that some emotions occur separately from or prior to our cognitive interpretation of them, such as feeling fear in response to an unexpected loud sound (Zajonc, 1998). He also believed in what we might casually refer to as a gut feeling—that we can experience an instantaneous and unexplainable like or dislike for someone or something (Zajonc, 1980). LeDoux also views some emotions as requiring no cognition: some emotions completely bypass contextual interpretation. His research into the neuroscience of emotion has demonstrated the amygdala’s primary role in fear (Cunha, Monfils, & LeDoux, 2010; LeDoux 1996, 2002). A fear stimulus is processed by the brain through one of two paths: from the thalamus (where it is perceived) directly to the amygdala or from the thalamus through the cortex and then to the amygdala. The first path is quick, while the second enables more processing about details of the stimulus. In the following section, we will look more closely at the neuroscience of emotional response. 
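The polygraph discussion above lends itself to a small illustration. The sketch below is a toy model, not a description of any real polygraph algorithm: the channel weights, threshold, and physiological readings are all invented for illustration. It encodes the text's key point that a detector keyed to general physiological arousal cannot separate deception from other sources of arousal.

```python
# Toy "polygraph": flags deception whenever arousal rises above baseline.
# All weights, thresholds, and readings below are hypothetical illustrations.

def arousal_index(heart_rate: float, respiration: float, sweating: float) -> float:
    """Collapse several physiological channels into one arousal score."""
    return 0.5 * heart_rate + 0.3 * respiration + 0.2 * sweating

def flags_deception(baseline: float, response: float, ratio: float = 1.15) -> bool:
    """Flag an answer if arousal during it exceeds `ratio` times baseline."""
    return response > ratio * baseline

baseline = arousal_index(heart_rate=70, respiration=14, sweating=5)

# An honest but anxious respondent versus a calm liar:
anxious_truth_teller = arousal_index(heart_rate=95, respiration=20, sweating=9)
calm_liar = arousal_index(heart_rate=72, respiration=15, sweating=5)

print(flags_deception(baseline, anxious_truth_teller))  # True  (false positive)
print(flags_deception(baseline, calm_liar))             # False (miss)
```

Because the detector keys on arousal in general, anxiety about being disbelieved and deception itself are indistinguishable to it, which is exactly the validity problem Saxe and Ben-Shakhar (1999) raise.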
The Biology of Emotions Earlier, you learned about the limbic system , which is the area of the brain involved in emotion and memory ( Figure 10.22 ). The limbic system includes the hypothalamus, thalamus, amygdala, and the hippocampus. The hypothalamus plays a role in the activation of the sympathetic nervous system that is a part of any given emotional reaction. The thalamus serves as a sensory relay center whose neurons project to both the amygdala and the higher cortical regions for further processing. The amygdala plays a role in processing emotional information and sending that information on to cortical structures (Fossati, 2012). The hippocampus integrates emotional experience with cognition (Femenía, Gómez-Galán, Lindskog, & Magara, 2012). Link to Learning Work through this Open Colleges interactive 3D brain simulator for a refresher on the brain's parts and their functions. To begin, click the “Start Exploring” button. To access the limbic system, click the plus sign in the right-hand menu (set of three tabs). Amygdala The amygdala has received a great deal of attention from researchers interested in understanding the biological basis for emotions, especially fear and anxiety (Blackford & Pine, 2012; Goosens & Maren, 2002; Maren, Phan, & Liberzon, 2013). The amygdala is composed of various subnuclei, including the basolateral complex and the central nucleus ( Figure 10.23 ). The basolateral complex has dense connections with a variety of sensory areas of the brain. It is critical for classical conditioning and for attaching emotional value to learning processes and memory. The central nucleus plays a role in attention, and it has connections with the hypothalamus and various brainstem areas to regulate the autonomic nervous and endocrine systems’ activity (Pessoa, 2010). Animal research has demonstrated that there is increased activation of the amygdala in rat pups that have odor cues paired with electrical shock when their mother is absent. This leads to an aversion to the odor cue, suggesting that the rats learned to fear it. Interestingly, when the mother was present, the rats actually showed a preference for the odor cue despite its association with an electrical shock. This preference was associated with no increases in amygdala activation. This suggests that a differential effect of context (the presence or absence of the mother) on the amygdala determined whether the pups learned to fear the odor or to be attracted to it (Moriceau & Sullivan, 2006). Raineki, Cortés, Belnoue, and Sullivan (2012) demonstrated that, in rats, negative early life experiences could alter the function of the amygdala and result in adolescent patterns of behavior that mimic human mood disorders. In this study, rat pups received either abusive or normal treatment during postnatal days 8–12. There were two forms of abusive treatment. The first was an insufficient bedding condition: the mother rat had too little bedding material in her cage to build a proper nest, which resulted in her spending more time away from her pups trying to construct a nest and less time nursing them. The second was an associative learning task that involved pairing odors and an electrical stimulus in the absence of the mother, as described above. Control pups were kept in a cage with sufficient bedding and left undisturbed with their mothers during the same time period.
The rat pups that experienced abuse were much more likely to exhibit depressive-like symptoms during adolescence when compared to controls. These depressive-like behaviors were associated with increased activation of the amygdala. Human research also suggests a relationship between the amygdala and psychological disorders of mood or anxiety. Changes in amygdala structure and function have been demonstrated in adolescents who are either at risk for or have been diagnosed with various mood and/or anxiety disorders (Miguel-Hidalgo, 2013; Qin et al., 2013). It has also been suggested that functional differences in the amygdala could serve as a biomarker to differentiate individuals suffering from bipolar disorder from those suffering from major depressive disorder (Fournier, Keener, Almeida, Kronhaus, & Phillips, 2013). Link to Learning Watch this video about research regarding stressed out teenagers and the impact on the brain to learn more. Hippocampus As mentioned earlier, the hippocampus is also involved in emotional processing. As with the amygdala, research has demonstrated that hippocampal structure and function are linked to a variety of mood and anxiety disorders. Individuals suffering from posttraumatic stress disorder (PTSD) show marked reductions in the volume of several parts of the hippocampus, which may result from decreased levels of neurogenesis and dendritic branching (the generation of new neurons and the generation of new dendrites in existing neurons, respectively) (Wang et al., 2010). While it is impossible to make a causal claim with correlational research like this, studies have demonstrated behavioral improvements and hippocampal volume increases following either pharmacological or cognitive-behavioral therapy in individuals suffering from PTSD (Bremner & Vermetten, 2004; Levy-Gigi, Szabó, Kelemen, & Kéri, 2013). Facial Expression and Recognition of Emotions Culture can impact the way in which people display emotion. A cultural display rule is one of a collection of culturally specific standards that govern the types and frequencies of displays of emotions that are acceptable (Malatesta & Haviland, 1982). Therefore, people from varying cultural backgrounds can have very different cultural display rules of emotion. For example, research has shown that individuals from the United States express negative emotions like fear, anger, and disgust both alone and in the presence of others, while Japanese individuals only do so while alone (Matsumoto, 1990). Furthermore, individuals from cultures that tend to emphasize social cohesion are more likely to engage in suppression of emotional reaction so they can evaluate which response is most appropriate in a given context (Matsumoto, Yoo, & Nakagawa, 2008). Other distinct cultural characteristics might be involved in emotionality. For instance, there may be gender differences involved in emotional processing. While research into gender differences in emotional display is equivocal, there is some evidence that men and women may differ in regulation of emotions (McRae, Ochsner, Mauss, Gabrieli, & Gross, 2008). Despite different emotional display rules, our ability to recognize and produce facial expressions of emotion appears to be universal. In fact, even congenitally blind individuals produce the same facial expression of emotions, despite their never having the opportunity to observe these facial displays of emotion in other people.
This would seem to suggest that the pattern of activity in facial muscles involved in generating emotional expressions is universal, and indeed, this idea was suggested in the late 19th century in Charles Darwin’s book The Expression of the Emotions in Man and Animals (1872). In fact, there is substantial evidence for seven universal emotions that are each associated with distinct facial expressions. These include happiness, surprise, sadness, fright, disgust, contempt, and anger ( Figure 10.24 ) (Ekman & Keltner, 1997). Does smiling make you happy? Or does being happy make you smile? The facial feedback hypothesis asserts that facial expressions are capable of influencing our emotions, meaning that smiling can make you feel happier (Buck, 1980; Soussignan, 2001; Strack, Martin, & Stepper, 1988). Recent research explored how Botox, which paralyzes facial muscles and limits facial expression, might affect emotion. Havas, Glenberg, Gutowski, Lucarelli, and Davidson (2010) discovered that depressed individuals reported less depression after paralysis of their frowning muscles with Botox injections. Of course, emotion is not only displayed through facial expression. We also use the tone of our voices, various behaviors, and body language to communicate information about our emotional states. Body language is the expression of emotion in terms of body position or movement. Research suggests that we are quite sensitive to the emotional information communicated through body language, even if we’re not consciously aware of it (de Gelder, 2006; Tamietto et al., 2009). Link to Learning Watch this short CNN video about body language to see how it plays out in the tense situation of a political debate. To apply these same concepts to the more everyday situations most of us face, check out these tips from an interview on the show Today with body language expert Janine Driver. Connect the Concepts Autism Spectrum Disorder and Expression of Emotions Autism spectrum disorder (ASD) is a set of neurodevelopmental disorders characterized by repetitive behaviors and communication and social problems. Children who have autism spectrum disorders have difficulty recognizing the emotional states of others, and research has shown that this may stem from an inability to distinguish various nonverbal expressions of emotion (i.e., facial expressions) from one another (Hobson, 1986). In addition, there is evidence to suggest that autistic individuals also have difficulty expressing emotion through tone of voice and by producing facial expressions (Macdonald et al., 1989). Difficulties with emotional recognition and expression may contribute to the impaired social interaction and communication that characterize autism; therefore, various therapeutic approaches have been explored to address these difficulties. Various educational curricula, cognitive-behavioral therapies, and pharmacological therapies have shown some promise in helping autistic individuals process emotionally relevant information (Bauminger, 2002; Golan & Baron-Cohen, 2006; Guastella et al., 2010).
business_ethics
Summary 1.1 Being a Professional of Integrity Ethics sets the standards that govern our personal and professional behavior. To conduct business ethically, we must choose to be a professional of integrity. The first steps are to ask ourselves how we define success and to understand that integrity calls on us to act in a way that is consistent with our words. There is a distinct difference between legal compliance and ethical responsibility, and the law does not fully address all ethical dilemmas that businesses face. Sound ethical practice meets the company’s culture, mission, or policies above and beyond legal responsibilities. The three normative theories of ethical behavior allow us to apply reason to business decisions as we examine the result (utilitarianism), the means of achieving it (deontology), and whether our choice will help us develop a virtuous character (virtue ethics). 1.2 Ethics and Profitability A long-term view of business success is critical for accurately measuring profitability. All the company’s stakeholders benefit from managers’ ethical conduct, which also increases a business’s goodwill and, in turn, supports profitability. Customers and clients tend to trust a business that gives evidence of its commitment to a positive long-term impact. By exercising corporate social responsibility, or CSR, a business views itself within a broader context, as a member of society with certain implicit social obligations and responsibility for its own effects on environmental and social well-being. 1.3 Multiple versus Single Ethical Standards The adoption of a single ethical code is the mark of a professional of integrity and is supported by the reasoned approach of each of the normative theories of business ethics. When we consistently maintain the same values regardless of the context, we are more likely to engender trust among those with whom we interact.
Chapter Outline 1.1 Being a Professional of Integrity 1.2 Ethics and Profitability 1.3 Multiple versus Single Ethical Standards Introduction Ethics consists of the standards of behavior to which we hold ourselves in our personal and professional lives. It establishes the levels of honesty, empathy, trustworthiness, and other virtues by which we hope to identify our personal behavior and our public reputation. In our personal lives, our ethics sets norms for the ways in which we interact with family and friends. In our professional lives, ethics guides our interactions with customers, clients, colleagues, employees, and shareholders affected by our business practices ( Figure 1.1 ). Should we care about ethics in our lives? In our practices in business and the professions? That is the central question we will examine in this chapter and throughout the book. Our goal is to understand why the answer is yes . Whatever hopes you have for your future, you almost certainly want to be successful in whatever career you choose. But what does success mean to you, and how will you know you have achieved it? Will you measure it in terms of wealth, status, power, or recognition? Before blindly embarking on a quest to achieve these goals, which society considers important, stop and think about what a successful career means to you personally. Does it include a blameless reputation, colleagues whose good opinion you value, and the ability to think well of yourself? How might ethics guide your decision-making and contribute to your achievement of these goals?
[ { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> The first normative approach is to examine the ends , or consequences , a decision produces in order to evaluate whether those ends are ethical . <hl> <hl> Variations on this approach include utilitarianism , teleology , and consequentialism . <hl> For example , utilitarianism suggests that an ethical action is one whose consequence achieves the greatest good for the greatest number of people . So if we want to make an ethical decision , we should ask ourselves who is helped and who is harmed by it . Focusing on consequences in this way generally does not require us to take into account the means of achieving that particular end , however . That fact leads us to the second normative theory about what constitutes ethical conduct .", "hl_sentences": "The first normative approach is to examine the ends , or consequences , a decision produces in order to evaluate whether those ends are ethical . Variations on this approach include utilitarianism , teleology , and consequentialism .", "question": { "cloze_format": "___ is/are a concept that relates to utilitarianism.", "normal_format": "Which of these concepts relates to utilitarianism?", "question_choices": [ "consequences", "actions", "character", "duty" ], "question_id": "fs-idm401534320", "question_text": "Which of these concepts relates to utilitarianism?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> Clients , customers , suppliers , investors , retailers , employees , the media , the government , members of the surrounding community , competitors , and even the environment are stakeholders in a business ; that is , they are individuals and entities affected by the business ’ s decisions ( Figure 1.2 ) . <hl> Stakeholders typically value a leadership team that chooses the ethical way to accomplish the company ’ s legitimate for-profit goals . For example , Patagonia expresses its commitment to environmentalism via its “ 1 % for the Planet ” program , which donates 1 percent of all sales to help save the planet . In part because of this program , Patagonia has become a market leader in outdoor gear .", "hl_sentences": "Clients , customers , suppliers , investors , retailers , employees , the media , the government , members of the surrounding community , competitors , and even the environment are stakeholders in a business ; that is , they are individuals and entities affected by the business ’ s decisions ( Figure 1.2 ) .", "question": { "cloze_format": "___ is not a stakeholder.", "normal_format": "Which of the following is not a stakeholder?", "question_choices": [ "the media", "corporate culture", "the environment", "customers" ], "question_id": "fs-idm413577920", "question_text": "Which of the following is not a stakeholder?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> Indeed , proponents of all the normative ethical theories would insist that the only rational choice is to have a single ethical standard . <hl> A deontologist would argue that you should adhere to particular duties in performing your actions , regardless of the parties with whom you interact . A utilitarian would say that any act you take should result in the greatest good for the greatest number . 
A virtue ethicist would state that you cannot be virtuous if you lack integrity in your behavior toward all .", "hl_sentences": "Indeed , proponents of all the normative ethical theories would insist that the only rational choice is to have a single ethical standard .", "question": { "cloze_format": "The normative ethical theory that supports the idea of holding multiple ethical standards is ___.", "normal_format": "Which normative ethical theory supports the idea of holding multiple ethical standards?", "question_choices": [ "deontology", "utilitarianism", "virtue ethics", "none of the above" ], "question_id": "fs-idm377630688", "question_text": "Which normative ethical theory supports the idea of holding multiple ethical standards?" }, "references_are_paraphrase": 0 } ]
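Since the review questions above come from structured records with fields such as question_text, question_choices, and ans_choice (visible in the raw data), a minimal sketch follows of how one such record might be modeled and checked. The class and method names are our own illustration, not part of any published schema.

```python
from dataclasses import dataclass

@dataclass
class ReviewQuestion:
    """One multiple-choice record, mirroring fields seen in the raw data."""
    question_text: str
    question_choices: list[str]
    ans_choice: int  # zero-based index of the correct choice

    def answer_letter(self) -> str:
        """Map the stored index to the letter used in the answer key."""
        return chr(ord("A") + self.ans_choice)

    def is_correct(self, selected: int) -> bool:
        return selected == self.ans_choice

q1 = ReviewQuestion(
    question_text="Which of these concepts relates to utilitarianism?",
    question_choices=["consequences", "actions", "character", "duty"],
    ans_choice=0,
)

print(q1.answer_letter())  # A
print(q1.is_correct(0))    # True
```

Storing the answer as an index rather than a letter keeps the record independent of how the choices are displayed; the letter is derived only when it is needed.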
1
1.1 Being a Professional of Integrity Learning Objectives By the end of this section, you will be able to: Describe the role of ethics in a business environment Explain what it means to be a professional of integrity Distinguish between ethical and legal responsibilities Describe three approaches for examining the ethical nature of a decision Whenever you think about the behavior you expect of yourself in your personal life and as a professional, you are engaging in a philosophical dialogue with yourself to establish the standards of behavior you choose to uphold, that is, your ethics. You may decide you should always tell the truth to family, friends, customers, clients, and shareholders, and if that is not possible, you should have very good reasons why you cannot. You may also choose never to defraud or mislead your business partners. You may decide, as well, that while you are pursuing profit in your business, you will not require that all the money on the table come your way. Instead, there might be some to go around to those who are important because they are affected one way or another by your business. These are your stakeholders. Acting with Integrity Clients, customers, suppliers, investors, retailers, employees, the media, the government, members of the surrounding community, competitors, and even the environment are stakeholders in a business; that is, they are individuals and entities affected by the business’s decisions ( Figure 1.2 ). Stakeholders typically value a leadership team that chooses the ethical way to accomplish the company’s legitimate for-profit goals. For example, Patagonia expresses its commitment to environmentalism via its “1% for the Planet” program, which donates 1 percent of all sales to help save the planet. In part because of this program, Patagonia has become a market leader in outdoor gear. Being successful at work may therefore consist of much more than simply earning money and promotions. It may also mean treating our employees, customers, and clients with honesty and respect. It may come from the sense of pride we feel about engaging in honest transactions, not just because the law demands it but because we demand it of ourselves. It may lie in knowing the profit we make does not come from shortchanging others. Thus, business ethics guides the conduct by which companies and their agents abide by the law and respect the rights of their stakeholders, particularly their customers, clients, employees, and the surrounding community and environment. Ethical business conduct permits us to sleep well at night. Link to Learning Is business ethics an oxymoron? Read “Why Ethics Matter” to understand just a few of the reasons to have values-driven management. Nearly all systems of religious belief stress the building blocks of engaging others with respect, empathy, and honesty. These foundational beliefs, in turn, prepare us for the codes of ethical behavior that serve as ideal guides for business and the professions. Still, we need not subscribe to any religious faith to hold that ethical behavior in business is necessary. Just by virtue of being human, we all share obligations to one another, and principal among these is the requirement that we treat others with fairness and dignity, including in our commercial transactions. For this reason, we use the words ethics and morals interchangeably in this book, though some philosophers distinguish between them.
We hold that “an ethical person” conveys the same sense as “a moral person,” and we do not regard religious belief as a requirement for acting ethically in business and the professions. Because we are all human and live in the same world, we should extend the same behavior to all. It is the right way to behave, but it also burnishes our own professional reputation as business leaders of integrity. Integrity—that is, unity between what we say and what we do—is a highly valued trait. But it is more than just consistency of character. Acting with integrity means we adhere strongly to a code of ethics, so it implies trustworthiness and incorruptibility. Being a professional of integrity means consistently striving to be the best person you can be in all your interactions with others. It means you practice what you preach, walk the talk, and do what you believe is right based upon reason. Integrity in business brings many advantages, not the least of which is that it is a critical factor in allowing business and society to function properly. Successful corporate leaders and the companies they represent will take pride in their enterprise if they engage in business with honesty and fair play. To treat customers, clients, employees, and all those affected by a firm with dignity and respect is ethical. In addition, laudable business practices serve the long-term interests of corporations. Why? Because customers, clients, employees, and society at large will much more willingly patronize a business and work hard on its behalf if that business is perceived as caring about the community it serves. And what type of firm has long-term customers and employees? One whose track record gives evidence of honest business practice. Link to Learning In this interview, Mark Faris, a white-collar criminal convicted of fraud, claims that greed, arrogance, and ambition were motivating factors in his actions. He also discusses the human ability to rationalize our behavior to justify it to ourselves. Note his proposed solutions: practicing ethical leadership and developing awareness at an individual level via corporate training. Many people confuse legal and ethical compliance. They are, however, totally different and call for different standards of behavior. The concepts are not interchangeable in any sense of the word. The law is needed to establish and maintain a functioning society. Without it, our society would be in chaos. Compliance with these legal standards is strictly mandatory: If we violate these standards, we are subject to punishment as established by the law. Therefore, compliance in terms of business ethics generally refers to the extent to which a company conducts its business operations in accordance with applicable regulations, statutes, and laws. Yet this represents only a baseline minimum. Ethical observance builds on this baseline and reveals the principles of an individual business leader or a specific organization. Ethical acts are generally considered voluntary and personal—often based on our perception of or stand on right and wrong. Some professions, such as medicine and the law, have traditional codes of ethics. The Hippocratic Oath, for example, is embraced by most professionals in health care today as an appropriate standard always owed to patients by physicians, nurses, and others in the field. This obligation traces its lineage to ancient Greece and the physician Hippocrates. Business is different in not having a mutually shared standard of ethics.
This is changing, however, as evidenced by the array of codes of conduct and mission statements many companies have adopted over the past century. These have many points in common, and their shared content may eventually produce a code universally claimed by business practitioners. What central point might constitute such a code? Essentially, a commitment to treat customers, clients, employees, and others affiliated with a business with honesty and integrity. The law is typically indebted to tradition and precedent, and compelling reasons are needed to support any change. Ethical reasoning often is more topical and reflects the changes in consciousness that individuals and society undergo. Often, ethical thought precedes and sets the stage for changes in the law. Behaving ethically requires that we meet the mandatory standards of the law, but that is not enough. For example, an action may be legal that we personally consider unacceptable. Companies today need to be focused not only on complying with the letter of the law but also on going above and beyond that basic mandatory requirement to consider their stakeholders and do what is right. Link to Learning To see an example of a corporate ethical code or mission statement, visit Johnson & Johnson and read “Our Credo” written by former chair Robert Wood Johnson. Forbes provides an annual list of companies recently deemed the most ethical according to their standards and research. Ends, Means, and Character in Business How, then, should we behave? Philosophy and science help us answer this question. From philosophy, three different perspectives help us assess whether our decisions are ethical on the basis of reason. These perspectives are called normative ethical theories and focus on how people ought to behave; we discuss them in this chapter and in later chapters. In contrast, descriptive ethical theories are based on scientific evidence, primarily in the field of psychology, and describe how people tend to behave within a particular context; however, they are not the subject of this book. The first normative approach is to examine the ends, or consequences , a decision produces in order to evaluate whether those ends are ethical. Variations on this approach include utilitarianism, teleology, and consequentialism. For example, utilitarianism suggests that an ethical action is one whose consequence achieves the greatest good for the greatest number of people. So if we want to make an ethical decision, we should ask ourselves who is helped and who is harmed by it. Focusing on consequences in this way generally does not require us to take into account the means of achieving that particular end, however. That fact leads us to the second normative theory about what constitutes ethical conduct. The second approach does examine the means, or actions, we use to carry out a business decision. An example of this approach is deontology, which essentially suggests that it is the means that lend nobility to the ends. Deontology contends that each of us owes certain duties to others ( deon is a Greek word for duty or obligation) and that certain universal rules apply to every situation and bind us to these duties. In this view, whether our actions are ethical depends only on whether we adhere to these rules. Thus, the means we use is the primary determinant of ethical conduct. The thinker most closely associated with deontology is the eighteenth-century German philosopher Immanuel Kant ( Figure 1.3 ).
The third normative approach, typically called virtue theory, focuses on the character of the decision-maker—a character that reflects the training we receive growing up. In this view, our ethical analysis of a decision is intimately connected with the person we choose to be. It is through the development of habits, the routine actions in which we choose to engage, that we are able to create a character of integrity and make ethical decisions. Put differently, if a two-year-old is taught to take care of and return borrowed toys even though this runs contrary to every instinct they have, they may continue to perfect their ethical behavior so that at age forty, they can be counted on to safeguard the tens of millions of dollars investors have entrusted to their care in brokerages. Virtue theory has its roots in the Greek philosophical tradition, whose followers sought to learn how to live a flourishing life through study, teaching, and practice. The cardinal virtues to be practiced were courage, self-control, justice, and wisdom. Socrates was often cited as a sage and a role model, whose conduct in life was held in high regard. Ethics Across Time and Cultures Aristotle and the Concept of Phronesis, or Practical Wisdom Phrónēsis (fro-NEE-sis) is a type of practical wisdom that enables us to act virtuously. In “The Big Idea: The Wise Leader,” a Harvard Business Review article on leadership and ethical decision-making, Ikujiro Nonaka, a Japanese organizational theorist, and Hirotaka Takeuchi, a professor of Management Practice at Harvard Business School, discuss the gap between the theory and practice of ethics and which characteristics make a wise leader. 1 The authors conclude that “the use of explicit and tacit knowledge isn’t enough; chief executive officers (CEOs) must also draw on a third, often forgotten kind of knowledge, called practical wisdom. Practical wisdom is tacit knowledge acquired from experience that enables people to make prudent judgments and take actions based on the actual situation, guided by values and morals.” The concept of practical wisdom dates back to Aristotle, who considered phronesis, which can also be defined as prudence, to be a key intellectual virtue. Phronesis enables people to make ethically sound judgments. According to the authors, phronetic leaders: practice moral discernment in every situation, making judgments for the common good that are guided by their individual values and ethics; quickly assess situations and envision the consequences of possible actions or responses; create a shared sense of purpose among executives and employees and inspire people to work together in pursuit of a common goal; engage as many people as possible in conversation and communicate using metaphors, stories, and other figurative language in a way that everyone can understand; and encourage practical wisdom in others and support the training of employees at all levels in its use. In essence, the first question any company should ask itself is: “Do we have a moral purpose?” Having a moral purpose requires focusing on the common good, which precedes the accumulation of profit and results in economic and social benefits. If companies seek the common good, profits generally will follow. Critical Thinking In the article cited, the authors stress the importance of being well versed in the liberal arts, such as philosophy, history, and literature, as well as in the fine arts, to cultivate judgment.
How do you think a strong background in the liberal arts would impart practical wisdom or help you make ethical decisions? 1.2 Ethics and Profitability Learning Objectives By the end of this section, you will be able to: Differentiate between short-term and long-term perspectives Differentiate between stockholder and stakeholder Discuss the relationship among ethical behavior, goodwill, and profit Explain the concept of corporate social responsibility Few directives in business can override the core mission of maximizing shareholder wealth, and today that particularly means increasing quarterly profits. Such an intense focus on one variable over a short time (i.e., a short-term perspective) leads to a short-sighted view of what constitutes business success. Measuring true profitability, however, requires taking a long-term perspective. We cannot accurately measure success within a quarter of a year; a longer time is often required for a product or service to find its market and gain traction against competitors, or for the effects of a new business policy to be felt. Satisfying consumers’ demands, going green, being socially responsible, and acting above and beyond the basic requirements all take time and money. However, the extra cost and effort will result in profits in the long run. If we measure success from this longer perspective, we are more likely to understand the positive effect ethical behavior has on all who are associated with a business. Profitability and Success: Thinking Long Term Decades ago, some management theorists argued that a conscientious manager in a for-profit setting acts ethically by emphasizing solely the maximization of earnings. Today, most commentators contend that ethical business leadership is grounded in doing right by all stakeholders directly affected by a firm’s operations, including, but not limited to, stockholders, or those who own shares of the company’s stock. That is, business leaders do right when they give thought to what is best for all who have a stake in their companies. Not only that, firms actually reap greater material success when they take such an approach, especially over the long run. Nobel Prize–winning economist Milton Friedman stated in a now-famous New York Times Magazine article in 1970 that the only “social responsibility of a business is to increase its profits.” 2 This concept took hold in business and even in business school education. However, although it is certainly permissible and even desirable for a company to pursue profitability as a goal, managers must also have an understanding of the context within which their business operates and of how the wealth they create can add positive value to the world. The context within which they act is society, which permits and facilitates a firm’s existence. Thus, a company enters into a social contract with society as a whole, an implicit agreement among all members to cooperate for social benefits. Even as a company pursues the maximizing of stockholder profit, it must also acknowledge that all of society will be affected to some extent by its operations. In return for society’s permission to incorporate and engage in business, a company owes a reciprocal obligation to do what is best for as many of society’s members as possible, regardless of whether they are stockholders. Therefore, when applied specifically to a business, the social contract implies that a company gives back to the society that permits it to exist, benefiting the community at the same time it enriches itself.
Link to Learning What happens when a bank decides to break the social contract? This press conference held by the National Whistleblowers Center describes the events surrounding the $104 million whistleblower reward given to former UBS employee Bradley Birkenfeld by the U.S. Internal Revenue Service. While employed at UBS, Switzerland’s largest bank, Birkenfeld assisted in the company’s illegal offshore tax business, and he later served forty months in prison for conspiracy. But he was also the original source of incriminating information that led to a Federal Bureau of Investigation examination of the bank and to the U.S. government’s decision to impose a $780 million fine on UBS in 2009. In addition, Birkenfeld turned over to investigators the account information of more than 4,500 U.S. private clients of UBS. 3 In addition to taking this more nuanced view of profits, managers must also use a different time frame for obtaining them. Wall Street’s focus on periodic (i.e., quarterly and annual) earnings has led many managers to adopt a short-term perspective, which fails to take into account effects that require a longer time to develop. For example, charitable donations in the form of corporate assets or employees’ volunteered time may not show a return on investment until a sustained effort has been maintained for years. A long-term perspective is a more balanced view of profit maximization that recognizes that the impacts of a business decision may not manifest for some time. As an example, consider the business practices of Toyota when it first introduced its vehicles for sale in the United States in 1957. For many years, Toyota was content to sell its cars at a slight loss because it was accomplishing two business purposes: It was establishing a long-term relationship of trust with those who eventually would become its loyal U.S. customers, and it was attempting to disabuse U.S. consumers of their belief that items made in Japan were cheap and unreliable. The company accomplished both goals by patiently playing its long game, a key aspect of its operational philosophy, “The Toyota Way,” which includes a specific emphasis on long-term business goals, even at the expense of short-term profit. 4 What contributes to a corporation’s positive image over the long term? Many factors contribute, including a reputation for treating customers and employees fairly and for engaging in business honestly. Companies that act in this way may emerge from any industry or country. Examples include Fluor, the large U.S. engineering and design firm; illycaffè, the Italian food and beverage purveyor; Marriott, the giant U.S. hotelier; and Nokia, the Finnish telecommunications retailer. The upshot is that when consumers are looking for an industry leader to patronize and would-be employees are seeking a firm to join, companies committed to ethical business practices are often the first to come to mind. Why should stakeholders care about a company acting above and beyond the ethical and legal standards set by society? Simply put, being ethical is good business. A business is profitable for many reasons, including expert management teams, focused and happy employees, and worthwhile products and services that meet consumer demand. Another very important reason is that it maintains a company philosophy and a mission to do good for others. Year after year, the nation’s most admired companies are also among those with the highest profit margins.
Going green, funding charities, and taking a personal interest in employee happiness add to the bottom line! Consumers prefer to patronize companies that care for others and the environment. During the years 2008 and 2009, many unethical companies went bankrupt. However, those companies that avoided the “quick buck,” risky and unethical investments, and other unethical business practices often flourished. If nothing else, consumer feedback on social media sites such as Yelp and Facebook can damage an unethical company’s prospects. Cases from the Real World Competition and the Markers of Business Success Perhaps you are still thinking about how you would define success in your career. For our purposes here, let us say that success consists simply of achieving our goals. We each have the ability to choose the goals we hope to accomplish in business, of course, and, if we have chosen them with integrity, our goals and the actions we take to achieve them will be in keeping with our character. Warren Buffett (Figure 1.4), whom many consider the most successful investor of all time, is an exemplar of business excellence as well as a good potential role model for professionals of integrity and the art of thinking long term. He had the following to say: “Ultimately, there’s one investment that supersedes all others: Invest in yourself. Nobody can take away what you’ve got in yourself, and everybody has potential they haven’t used yet. . . . You’ll have a much more rewarding life not only in terms of how much money you make, but how much fun you have out of life; you’ll make more friends the more interesting person you are, so go to it, invest in yourself.” 5 The primary principle under which Buffett instructs managers to operate is: “Do nothing you would not be happy to have an unfriendly but intelligent reporter write about on the front page of a newspaper.” 6 This is a very simple and practical guide to encouraging ethical business behavior on a personal level. Buffett offers another, equally wise, principle: “Lose money for the firm, even a lot of money, and I will be understanding; lose reputation for the firm, even a shred of reputation, and I will be ruthless.” 7 As we saw in the example of Toyota, the importance of establishing and maintaining trust in the long term cannot be overstated. Link to Learning For more on Warren Buffett’s thoughts about being both an economic and ethical leader, watch this interview that appeared on the PBS NewsHour on June 6, 2017. Stockholders, Stakeholders, and Goodwill Earlier in this chapter, we explained that stakeholders are all the individuals and groups affected by a business’s decisions. Among these stakeholders are stockholders (or shareholders), individuals and institutions that own stock (or shares) in a corporation. Understanding the impact of a business decision on the stockholder and various other stakeholders is critical to the ethical conduct of business. Indeed, prioritizing the claims of various stakeholders in the company is one of the most challenging tasks business professionals face. Considering only stockholders can often result in unethical decisions; the impact on all stakeholders must be considered and rationally assessed.
Managers do sometimes focus predominantly on stockholders, especially those holding the largest number of shares, because these powerful individuals and groups can influence whether managers keep their jobs or are dismissed (e.g., when they are held accountable for the company’s failure to meet projected profit goals). And many believe the sole purpose of a business is, in fact, to maximize stockholders’ short-term profits. However, considering only stockholders and short-term impacts on them is one of the most common errors business managers make. It is often in the long-term interests of a business not to accommodate stockholders alone but rather to take into account a broad array of stakeholders and the long-term and short-term consequences of a course of action. Here is a simple strategy for considering all your stakeholders in practice. Divide your screen or page into three columns; in the first column, list all stakeholders in order of perceived priority (Figure 1.5). Some individuals and groups play more than one role. For instance, some employees may be stockholders, some members of the community may be suppliers, and the government may be a customer of the firm. In the second column, list what you think each stakeholder group’s interests and goals are. For those that play more than one role, choose the interests most directly affected by your actions. In the third column, put the likely impact of your business decision on each stakeholder. This basic spreadsheet should help you identify all your stakeholders and evaluate your decision’s impact on their interests. If you would like to add a human dimension to your analysis, try assigning some of your colleagues to the roles of stakeholders and reexamining your analysis. The positive feeling stakeholders have for any particular company is called goodwill, which is an important component of almost any business entity, even though it is not directly attributable to the company’s assets and liabilities. Among other intangible assets, goodwill might include the worth of a business’s reputation, the value of its brand name, the intellectual capital and attitude of its workforce, and the loyalty of its established customer base. Even being socially responsible generates goodwill. The ethical behavior of managers will have a positive influence on the value of each of those components. Goodwill cannot be earned or created in a short time, but it can be the key to success and profitability. A company’s name, its corporate logo, and its trademark will necessarily increase in value as stakeholders view that company in a more favorable light. A good reputation is essential for success in the modern business world, and with information about the company and its actions readily available via mass media and the Internet (e.g., on public rating sites such as Yelp), management’s values are always subject to scrutiny and open debate. These values affect the environment outside and inside the company. The corporate culture, for instance, consists of shared beliefs, values, and behaviors that create the internal or organizational context within which managers and employees interact. Practicing ethical behavior at all levels—from CEO to upper and middle management to general employees—helps cultivate an ethical corporate culture and ethical employee relations. What Would You Do? Which Corporate Culture Do You Value? Imagine that upon graduation you have the good fortune to be offered two job opportunities.
The first is with a corporation known to cultivate a hard-nosed, no-nonsense business culture in which keeping long hours and working intensely are highly valued. At the end of each year, the company donates to numerous social and environmental causes. The second job opportunity is with a nonprofit recognized for a very different culture based on its compassionate approach to employee work-life balance. It also offers the chance to pursue your own professional interests or volunteerism during a portion of every workday. The first job offer pays 20 percent more per year. Critical Thinking Which of these opportunities would you pursue and why? How important an attribute is salary, and at what point would a higher salary override the nonmonetary benefits of the lower-paid position for you? Positive goodwill generated by ethical business practices, in turn, generates long-term business success. As recent studies have shown, the most ethical and enlightened companies in the United States consistently outperform their competitors. 8 Thus, viewed from the proper long-term perspective, conducting business ethically is a wise business decision that generates goodwill for the company among stakeholders, contributes to a positive corporate culture, and ultimately supports profitability. You can test the validity of this claim yourself. When you choose a company with which to do business, what factors influence your choice? Let us say you are looking for a financial advisor for your investments and retirement planning, and you have found several candidates whose credentials, experience, and fees are approximately the same. Yet one of these firms stands above the others because it has a reputation, which you discover is well earned, for telling clients the truth and recommending investments that seem centered on the clients’ benefit and not on potential profit for the firm. Wouldn’t this be the one you would trust with your investments? Or suppose one group of financial advisors has a long track record of giving back to the community of which it is part. It donates to charitable organizations in local neighborhoods, and its members volunteer service hours toward worthy projects in town. Would this group not strike you as the one worthy of your investments? That it appears to be committed to building up the local community might be enough to persuade you to give it your business. This is exactly how a long-term investment in community goodwill can produce a long pipeline of potential clients and customers. Cases from the Real World The Equifax Data Breach In 2017, from mid-May to July, hackers gained unauthorized access to servers used by Equifax, a major credit reporting agency, and accessed the personal information of nearly one-half the U.S. population. 9 Equifax executives sold off nearly $2 million of company stock they owned after finding out about the hack in late July, weeks before it was publicly announced on September 7, 2017, in potential violation of insider trading rules. The company’s shares fell nearly 14 percent after the announcement, but few expect Equifax managers to be held liable for their mistakes, face any regulatory discipline, or pay any penalties for profiting from their actions. To make amends to customers and clients in the aftermath of the hack, the company offered free credit monitoring and identity-theft protection. On September 15, 2017, the company’s chief information officer and chief of security retired.
On September 26, 2017, the CEO resigned, days before he was to testify before Congress about the breach. To date, numerous government investigations have been opened and hundreds of private lawsuits filed as a result of the hack. Critical Thinking Which elements of this case might involve issues of legal compliance? Which elements illustrate acting legally but not ethically? What would acting ethically and with personal integrity in this situation look like? How do you think this breach will affect Equifax’s position relative to those of its competitors? How might it affect the future success of the company? Was it sufficient for Equifax to offer online privacy protection to those whose personal information was hacked? What else might it have done? A Brief Introduction to Corporate Social Responsibility If you truly appreciate the positions of your various stakeholders, you will be well on your way to understanding the concept of corporate social responsibility (CSR). CSR is the practice by which a business views itself within a broader context, as a member of society with certain implicit social obligations and environmental responsibilities. As previously stated, there is a distinct difference between legal compliance and ethical responsibility, and the law does not fully address all ethical dilemmas that businesses face. CSR ensures that a company is engaging in sound ethical practices and policies in accordance with the company’s culture and mission, above and beyond any mandatory legal standards. A business that practices CSR cannot have maximizing shareholder wealth as its sole purpose, because this goal would necessarily infringe on the rights of other stakeholders in the broader society. For instance, a mining company that disregards its corporate social responsibility may infringe on the right of its local community to clean air and water if it pursues only profit. In contrast, CSR places all stakeholders within a proper contextual framework. An additional perspective to take concerning CSR is that ethical business leaders opt to do good at the same time that they do well. This is a simplistic summation, but it speaks to how CSR plays out within any corporate setting. The idea is that a corporation is entitled to make money, but it should not only make money. It should also be a good civic neighbor and commit itself to the general prospering of society as a whole. It ought to make the communities of which it is part better at the same time it pursues legitimate profit goals. These ends are not mutually exclusive, and it is possible—indeed, praiseworthy—to strive for both. When a company approaches business in this fashion, it is engaging in a commitment to corporate social responsibility. Link to Learning U.S. entrepreneur Blake Mycoskie has created a unique business model combining both for-profit and nonprofit philosophies in an innovative demonstration of corporate social responsibility. The company he founded, TOMS Shoes, donates one pair of shoes to a child in need for every pair sold. As of May 2018, the company had provided more than 75 million pairs of shoes to children in seventy countries. 10
1.3 Multiple versus Single Ethical Standards Learning Objectives By the end of this section, you will be able to: Analyze ethical norms and values as they relate to business standards Explain the doctrine of ethical relativism and why it is problematic Evaluate the claim that having a single ethical standard makes behaving consistently easier Business people sometimes apply different ethical standards in different contexts, especially if they are working in a culture different from the one in which they were raised or with coworkers from other traditions. If we look outside ourselves for ethical guidance, relying on the context in which we find ourselves, we can grow confused about what is ethical business behavior. Stakeholders then observe that the messages we send via our conduct lack a consistent ethical core, which can harm our reputation and that of the business. To avoid falling back on ethical relativism, a philosophy according to which there is no right or wrong and what is ethical depends solely on the context, we must choose a coherent standard we can apply to all our interactions with others. Someone who adopts multiple ethical standards may choose to exhibit the highest standards with family, because these are the people he or she most reveres. In a business setting, however, this same person may choose to be an unethical actor whose sole goal is the ruthless accumulation of wealth by any means. Because work and family are not the only two settings in which we live our lives, such a person may behave according to yet another standard toward competitors in a sporting event, toward strangers on the street, or toward those in his or her religious community. Although the ethical standard we adopt is always a choice, certain life experiences can have more profound effects on our choice than others. Among the most formative experiences are family upbringing and cultural traditions, broadly defined here to include religious and ethnic norms, the standard patterns of behavior within the context in which we live. Culture and family also influence each other because the family exists in and responds to its cultural context, as well as providing us with the bedrock for our deepest values. Regardless of this initial coding, however, we can choose the ethical standards we apply in the business context. Why should we choose a single ethical code for all the contexts in which we live? The Greek philosophers and later proponents of the normative ethical theories we discussed earlier would say that if you apply your reason to determine how to behave, it makes rational sense to abide by a single ethical code for all interactions with all persons in all contexts. By doing so, you maximize your ethical behavior no matter who the other party is. Furthermore, your behavior is internally consistent toward all family, friends, customers, clients, and anyone else with whom you interact. Thus, we need not choose different values in different contexts, and when people see us in different situations, they are more likely to trust us because they see we uphold the same values regardless of the context. Indeed, proponents of all the normative ethical theories would insist that the only rational choice is to have a single ethical standard. A deontologist would argue that you should adhere to particular duties in performing your actions, regardless of the parties with whom you interact. A utilitarian would say that any act you take should result in the greatest good for the greatest number.
A virtue ethicist would state that you cannot be virtuous if you lack integrity in your behavior toward all. Adopting a consistent ethical standard is both selfless and in the manager’s self-interest. That is, would-be customers and clients are more likely to seek out a business that treats all with whom it interacts with honesty and fairness, believing that they themselves will be treated likewise by that firm. Similarly, business leaders who treat everyone in a trustworthy manner need never worry that they might not have impressed a potential customer, because they always engage in honorable commercial practices. A single standard of business behavior that emphasizes respect and good service appeals to all. Normative ethics is about discovering right and delineating it from wrong; it is a way to develop the rules and norms we use to guide meaningful decision-making. The ethics in our single code are not relative to the time, person, or place. In this world, we all wear different hats as we go about our daily lives as employees, parents, leaders, and students. Being a truly ethical person requires that no matter what hat we wear, we exhibit a single ethical code and that it includes, among others, such universal principles of behavior as honesty, integrity, loyalty, fairness, respect for law, and respect for others. Yet another reason to adopt a universal ethical standard is the transparent character it nurtures in us. If a company’s leadership insists that it stands for honest business transactions at every turn, it cannot prosecute those who defraud the company and look the other way when its own officers do the same. Stakeholders recognize such hypocrisy and rightly hold it against the business’s leaders. Business leaders are not limited to only one of the normative ethical theories we have described, however. Virtue theory, utilitarianism, and deontology all have advantages to recommend them. Still, what should not change is a corporate commitment not to make exceptions in its practices when those exceptions would favor the company at the expense of customers, clients, or other stakeholders. Moving from theory to daily life, we can also look at the way our reputation is established by the implicit and explicit messages we send to others. If we adopt ethical relativism, friends, family, and coworkers will notice that we use different standards for different contexts. This lack of consistency and integrity can alter their perception of us and likely damage our reputation. What Would You Do? Taking Advantage of an Employee Discount Suppose you work in retail sales for an international clothing company. A perk of the job is an employee discount of 25 percent on all merchandise you purchase for personal use. Your cousin, who is always looking for a bargain, approaches you in the store one day and implores you to give him your employee discount on a $100 purchase of clothes for himself. Critical Thinking How would you handle this situation and why? Would it matter if the relative were someone closer to you, perhaps a brother or sister? If so, why?
u.s._history
Summary 30.1 Identity Politics in a Fractured Society In the late 1960s and 1970s, Native Americans, gays and lesbians, and women organized to change discriminatory laws and pursue government support for their interests, a strategy known as identity politics. Others, disenchanted with the status quo, distanced themselves from White, middle-class America by forming their own countercultures centered on a desire for peace, the rejection of material goods and traditional morality, concern for the environment, and drug use in pursuit of spiritual revelations. These groups, whose aims and tactics posed a challenge to the existing state of affairs, often met with hostility from individuals, local officials, and the U.S. government alike. Still, they persisted, determined to further their goals and secure for themselves the rights and privileges to which they were entitled as American citizens. 30.2 Coming Apart, Coming Together When a new Republican constituency of moderate southerners and northern, blue-collar workers voted Richard Nixon into the White House in 1968, many were hopeful. In the wake of antiwar and civil rights protests, and the chaos of the 1968 Democratic National Convention, many Americans welcomed Nixon’s promise to uphold law and order. During his first term, Nixon strode a moderate, middle path in domestic affairs, attempting with little success to solve the problems of inflation and unemployment through a combination of austerity and deficit spending. He made substantial progress in foreign policy, however, establishing diplomatic relations with China for the first time since the Communist Revolution and entering into a policy of détente with the Soviet Union. 30.3 Vietnam: The Downward Spiral As the war in Vietnam raged on, Americans were horrified to hear of atrocities committed by U.S. soldiers, such as the 1968 massacre of villagers at My Lai. To try to end the conflict, Nixon escalated it by bombing Hanoi and invading Cambodia; his actions provoked massive antiwar demonstrations in the United States that often ended in violence, such as the tragic shooting of unarmed student protestors at Kent State University in 1970. The 1971 release of the Pentagon Papers revealed the true nature of the war to an increasingly disapproving and disenchanted public. Secretary of State Henry Kissinger eventually drafted a peace treaty with North Vietnam, and, after handing over responsibility for the war to South Vietnam, the United States withdrew its troops in 1973. South Vietnam surrendered to the North two years later. 30.4 Watergate: Nixon’s Domestic Nightmare In 1972, President Nixon faced an easy reelection against a Democratic Party in disarray. But even before his landslide victory, evidence had surfaced that the White House was involved in the break-in at the DNC’s headquarters at the Watergate office complex. As the investigation unfolded, the depths to which Nixon and his advisers had sunk became clear. Some twenty-five of Nixon’s aides were indicted for criminal activity, and he faced impeachment before becoming the first president to resign from office. His successor, Gerald Ford, was unable to solve the pressing problems the United States faced or erase the stain of Watergate. 30.5 Jimmy Carter in the Aftermath of the Storm Jimmy Carter’s administration began with great promise, but his efforts to improve the economy through deregulation largely failed. 
Carter’s attempt at a foreign policy built on the principle of human rights also prompted much criticism, as did his decision to boycott the Summer Olympics in Moscow. On the other hand, he successfully brokered the beginnings of a historic peace treaty between Egypt and Israel. Remaining public faith in Carter was dealt a serious blow, however, when he proved unable to free the American hostages in Tehran.
Chapter Outline 30.1 Identity Politics in a Fractured Society 30.2 Coming Apart, Coming Together 30.3 Vietnam: The Downward Spiral 30.4 Watergate: Nixon’s Domestic Nightmare 30.5 Jimmy Carter in the Aftermath of the Storm Introduction From May 4 to November 4, 1974, a universal exposition was held in the city of Spokane, Washington. This world’s fair, Expo ’74, and the postage stamp issued to commemorate it, reflected many of the issues and interests of the 1970s (Figure 30.1). The stamp features psychedelic colors, and the character of the Cosmic Runner in the center wears bellbottoms, a popular fashion at the time. The theme of the fair was the environment, a subject beginning to be of great concern to people in the United States, especially the younger generation and those in the hippie counterculture. In the 1970s, the environment, social justice, distrust of the government, and a desire to end the war in Vietnam—the concerns and attitudes of younger people, women, gays and lesbians, and people of color—began to draw the attention of the mainstream as well.
[ { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "As the young , primarily White men and women who became hippies strove to create new identities for themselves , they borrowed liberally from other cultures , including that of Native Americans . At the same time , many Native Americans were themselves seeking to maintain their culture or retrieve elements that had been lost . <hl> In 1968 , a group of American Indian activists , including Dennis Banks , George Mitchell , and Clyde Bellecourt , convened a gathering of two hundred people in Minneapolis , Minnesota , and formed the American Indian Movement ( AIM ) ( Figure 30.4 ) . <hl> The organizers were urban dwellers frustrated by decades of poverty and discrimination . In 1970 , the average life expectancy of Native Americans was forty-six years compared to the national average of sixty-nine . The suicide rate was twice that of the general population , and the infant mortality rate was the highest in the country . Half of all Native Americans lived on reservations , where unemployment reached 50 percent . Among those in cities , 20 percent lived below the poverty line .", "hl_sentences": "In 1968 , a group of American Indian activists , including Dennis Banks , George Mitchell , and Clyde Bellecourt , convened a gathering of two hundred people in Minneapolis , Minnesota , and formed the American Indian Movement ( AIM ) ( Figure 30.4 ) .", "question": { "cloze_format": "One of the original founders of AIM was ________.", "normal_format": "Who was one of the original founders of AIM?", "question_choices": [ "Patsy Mink", "Dennis Banks", "Jerry Rubin", "Glenn Weiser" ], "question_id": "fs-idm1932864", "question_text": "One of the original founders of AIM was ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "abortions obtained during the first three months of pregnancy were legal" }, "bloom": null, "hl_context": "The majority of feminists , however , sought meaningful accomplishments . In the 1970s , they opened battered women ’ s shelters and successfully fought for protection from employment discrimination for pregnant women , reform of rape laws ( such as the abolition of laws requiring a witness to corroborate a woman ’ s report of rape ) , criminalization of domestic violence , and funding for schools that sought to counter sexist stereotypes of women . <hl> In 1973 , the U . S . Supreme Court in Roe v . Wade invalidated a number of state laws under which abortions obtained during the first three months of pregnancy were illegal . <hl> <hl> This made a nontherapeutic abortion a legal medical procedure nationwide . <hl>", "hl_sentences": "In 1973 , the U . S . Supreme Court in Roe v . Wade invalidated a number of state laws under which abortions obtained during the first three months of pregnancy were illegal . This made a nontherapeutic abortion a legal medical procedure nationwide .", "question": { "cloze_format": "The Supreme Court’s 1973 decision in Roe v. Wade established that ________.", "normal_format": "What did the Supreme Court’s 1973 decision in Roe v. Wade establish?", "question_choices": [ "abortions obtained during the first three months of pregnancy were legal", "witnesses were not required to corroborate a charge of rape", "marriage could not be abolished", "homosexuality was a mental illness" ], "question_id": "fs-idp17827536", "question_text": "The Supreme Court’s 1973 decision in Roe v. Wade established that ________." 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "met with Chinese leaders in Beijing" }, "bloom": null, "hl_context": "Despite the many domestic issues on Nixon ’ s agenda , he prioritized foreign policy and clearly preferred bold and dramatic actions in that arena . Realizing that five major economic powers — the United States , Western Europe , the Soviet Union , China , and Japan — dominated world affairs , he sought opportunities for the United States to pit the others against each other . In 1969 , he announced a new Cold War principle known as the Nixon Doctrine , a policy whereby the United States would continue to assist its allies but would not assume the responsibility of defending the entire non-Communist world . Other nations , like Japan , needed to assume more of the burden of first defending themselves . <hl> Playing what was later referred to as “ the China card , ” Nixon abruptly reversed two decades of U . S . diplomatic sanctions and hostility to the Communist regime in the People ’ s Republic of China , when he announced , in August 1971 , that he would personally travel to Beijing and meet with China ’ s leader , Chairman Mao Zedong , in February 1972 ( Figure 30.11 ) . <hl> Nixon hoped that opening up to the Chinese government would prompt its bitter rival , the Soviet Union , to compete for global influence and seek a more productive relationship with the United States . He also hoped that establishing a friendly relationship with China would isolate North Vietnam and ease a peace settlement , allowing the United States to extract its troops from the war honorably . Concurring that the Soviet Union should be restrained from making advances in Asia , Nixon and Chinese premier Zhou Enlai agreed to disagree on several issues and ended up signing a friendship treaty . They promised to work towards establishing trade between the two nations and to eventually establishing full diplomatic relations with each other .", "hl_sentences": "Playing what was later referred to as “ the China card , ” Nixon abruptly reversed two decades of U . S . diplomatic sanctions and hostility to the Communist regime in the People ’ s Republic of China , when he announced , in August 1971 , that he would personally travel to Beijing and meet with China ’ s leader , Chairman Mao Zedong , in February 1972 ( Figure 30.11 ) .", "question": { "cloze_format": "President Nixon took a bold diplomatic step in early 1972 when he ________.", "normal_format": "How did President Nixon take a bold diplomatic step in early 1972?", "question_choices": [ "went to Vienna", "declared the Vietnam War over", "met with Chinese leaders in Beijing", "signed the Glasgow Accords" ], "question_id": "fs-idp221292624", "question_text": "President Nixon took a bold diplomatic step in early 1972 when he ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> Nixon also courted northern , blue-collar workers , whom he later called the silent majority , to acknowledge their belief that their voices were seldom heard . <hl> These voters feared the social changes taking place in the country : Antiwar protests challenged their own sense of patriotism and civic duty , whereas the recreational use of new drugs threatened their cherished principles of self-discipline , and urban riots invoked the specter of a racial reckoning . 
<hl> Government action on behalf of the marginalized raised the question of whether its traditional constituency — the White middle class — would lose its privileged place in American politics . <hl> Some felt left behind as the government turned to the problems of African Americans . Nixon ’ s promises of stability and his emphasis on law and order appealed to them . He portrayed himself as a fervent patriot who would take a strong stand against racial unrest and antiwar protests . Nixon harshly critiqued Lyndon Johnson ’ s Great Society , and he promised a secret plan to end the war in Vietnam honorably and bring home the troops . He also promised to reform the Supreme Court , which he contended had gone too far in “ coddling criminals . ” Under Chief Justice Earl Warren , the court had used the due process and equal protection clauses of the Fourteenth Amendment to grant those accused under state law the ability to defend themselves and secure protections against unlawful search and seizure , cruel and unusual punishment , and self-incrimination .", "hl_sentences": "Nixon also courted northern , blue-collar workers , whom he later called the silent majority , to acknowledge their belief that their voices were seldom heard . Government action on behalf of the marginalized raised the question of whether its traditional constituency — the White middle class — would lose its privileged place in American politics .", "question": { "cloze_format": "The blue-collar workers who Nixon called “the silent majority” ________.", "normal_format": "Which of the following is correct about the blue-collar workers who Nixon called “the silent majority”?", "question_choices": [ "fled to the suburbs to avoid integration", "wanted to replace existing social institutions with cooperatives", "opposed the war in Vietnam", "believed their opinions were overlooked in the political process" ], "question_id": "fs-idp234229920", "question_text": "The blue-collar workers who Nixon called “the silent majority” ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> On May 15 , a similar tragedy took place at Jackson State College , an African American college in Jackson , Mississippi . <hl> <hl> Once again , students gathered on campus to protest the invasion of Cambodia , setting fires and throwing rocks . <hl> The police arrived to disperse the protesters , who had gathered outside a women ’ s dormitory . Shortly after midnight , the police opened fire with shotguns . The dormitory windows shattered , showering people with broken glass . Twelve were wounded , and two young men , one a student at the college and the other a local high school student , were killed . The invasion could not be kept secret , and when Nixon announced it on television on April 30 , 1970 , protests sprang up across the country . <hl> The most tragic and politically damaging occurred on May 1 , 1970 , at Kent State University in Ohio . <hl> Violence erupted in the town of Kent after an initial student demonstration on campus , and the next day , the mayor asked Ohio ’ s governor to send in the National Guard . Troops were sent to the university ’ s campus , where students had set fire to the ROTC building and were fighting off firemen and policemen trying to extinguish it . 
The National Guard used teargas to break up the demonstration , and several students were arrested ( Figure 30.14 ) .", "hl_sentences": "On May 15 , a similar tragedy took place at Jackson State College , an African American college in Jackson , Mississippi . Once again , students gathered on campus to protest the invasion of Cambodia , setting fires and throwing rocks . The most tragic and politically damaging occurred on May 1 , 1970 , at Kent State University in Ohio .", "question": { "cloze_format": "The demonstrations at Kent State University in May 1970 were held to protest ___.", "normal_format": "The demonstrations at Kent State University in May 1970 were held to protest what event?", "question_choices": [ "the My Lai massacre", "the North Vietnamese invasion of Saigon", "the invasion of Cambodia by U.S. forces", "the signing of a peace agreement with North Vietnam" ], "question_id": "fs-idm14726656", "question_text": "The demonstrations at Kent State University in May 1970 were held to protest what event?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "repealed the Gulf of Tonkin Resolution" }, "bloom": null, "hl_context": "Ongoing protests , campus violence , and the expansion of the war into Cambodia deeply disillusioned Americans about their role in Vietnam . <hl> Understanding the nation ’ s mood , Nixon dropped his opposition to a repeal of the Gulf of Tonkin Resolution of 1964 . <hl> <hl> In January 1971 , he signed Congress ’ s revocation of the notorious blanket military authorization . <hl> Gallup polls taken in May of that year revealed that only 28 percent of the respondents supported the war ; many felt it was not only a mistake but also immoral .", "hl_sentences": "Understanding the nation ’ s mood , Nixon dropped his opposition to a repeal of the Gulf of Tonkin Resolution of 1964 . In January 1971 , he signed Congress ’ s revocation of the notorious blanket military authorization .", "question": { "cloze_format": "Recognizing that ongoing protests and campus violence reflected a sea change in public opinion about the war, in 1971 Nixon ________.", "normal_format": "Recognizing that ongoing protests and campus violence reflected a sea change in public opinion about the war, what did Nixon do in 1971?", "question_choices": [ "repealed the Gulf of Tonkin Resolution", "postponed the invasion of Cambodia", "released the Pentagon Papers", "covered up the My Lai massacre" ], "question_id": "fs-idm27210704", "question_text": "Recognizing that ongoing protests and campus violence reflected a sea change in public opinion about the war, in 1971 Nixon ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "the Helsinki Accords" }, "bloom": null, "hl_context": "Ford ’ s economic policies ultimately proved unsuccessful . Because of opposition from a Democratic Congress , his foreign policy accomplishments were also limited . When he requested money to assist the South Vietnamese government in its effort to repel North Vietnamese forces , Congress refused . <hl> Ford was more successful in other parts of the world . <hl> <hl> He continued Nixon ’ s policy of détente with the Soviet Union , and he and Secretary of State Kissinger achieved further progress in the second round of SALT talks . <hl> <hl> In August 1975 , Ford went to Finland and signed the Helsinki Accords with Soviet premier Leonid Brezhnev . 
<hl> <hl> This agreement essentially accepted the territorial boundaries that had been established at the end of World War II in 1945 . <hl> It also exacted a pledge from the signatory nations that they would protect human rights within their countries . Many immigrants to the United States protested Ford ’ s actions , because it seemed as though he had accepted the status quo and left their homelands under Soviet domination . Others considered it a belated American acceptance of the world as it really was . 30.5 Jimmy Carter in the Aftermath of the Storm Learning Objectives By the end of this section , you will be able to :", "hl_sentences": "Ford was more successful in other parts of the world . He continued Nixon ’ s policy of détente with the Soviet Union , and he and Secretary of State Kissinger achieved further progress in the second round of SALT talks . In August 1975 , Ford went to Finland and signed the Helsinki Accords with Soviet premier Leonid Brezhnev . This agreement essentially accepted the territorial boundaries that had been established at the end of World War II in 1945 .", "question": { "cloze_format": "The agreement Gerald Ford signed with the leader of the Soviet Union that ended the territorial issues remaining from World War II was ________.", "normal_format": "Which agreement did Gerald Ford sign with the leader of the Soviet Union that ended the territorial issues remaining from World War II?", "question_choices": [ "the Moscow Communiqué", "the Beijing Treaty", "the Iceland Protocol", "the Helsinki Accords" ], "question_id": "fs-idp226319616", "question_text": "The agreement Gerald Ford signed with the leader of the Soviet Union that ended the territorial issues remaining from World War II was ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "On March 23 , 1973 , Judge Sirica publicly read a letter from one of the Watergate burglars , alleging that perjury had been committed during the trial . <hl> Less than two weeks later , Jeb Magruder , a deputy director of CREEP , admitted lying under oath and indicated that Dean and John Mitchell , who had resigned as attorney general to become the director of CREEP , were also involved in the break-in and its cover-up . <hl> <hl> Dean confessed , and on April 30 , Nixon fired him and requested the resignation of his aides John Ehrlichman and H . R . Haldeman , also implicated . <hl> To defuse criticism and avoid suspicion that he was participating in a cover-up , Nixon also announced the resignation of the current attorney general , Richard Kleindienst , a close friend , and appointed Elliott Richardson to the position . In May 1973 , Richardson named Archibald Cox special prosecutor to investigate the Watergate affair . <hl> In the weeks following the Watergate break-in , Bob Woodward and Carl Bernstein , reporters for The Washington Post , received information from several anonymous sources , including one known to them only as “ Deep Throat , ” that led them to realize the White House was deeply implicated in the break-in . <hl> As the press focused on other events , Woodward and Bernstein continued to dig and publish their findings , keeping the public ’ s attention on the unfolding scandal . 
Years later , Deep Throat was revealed to be Mark Felt , then the FBI ’ s associate director .", "hl_sentences": "Less than two weeks later , Jeb Magruder , a deputy director of CREEP , admitted lying under oath and indicated that Dean and John Mitchell , who had resigned as attorney general to become the director of CREEP , were also involved in the break-in and its cover-up . Dean confessed , and on April 30 , Nixon fired him and requested the resignation of his aides John Ehrlichman and H . R . Haldeman , also implicated . In the weeks following the Watergate break-in , Bob Woodward and Carl Bernstein , reporters for The Washington Post , received information from several anonymous sources , including one known to them only as “ Deep Throat , ” that led them to realize the White House was deeply implicated in the break-in .", "question": { "cloze_format": "___ was not indicted following the Watergate break-in and cover-up.", "normal_format": "Of these figures, who was not indicted following the Watergate break-in and cover-up?", "question_choices": [ "John Mitchell", "Bob Woodward", "John Ehrlichman", "H.R. Haldeman" ], "question_id": "fs-idp141042992", "question_text": "Of these figures, who was not indicted following the Watergate break-in and cover-up?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "President Ford won the Republican nomination for the presidency in 1976 , narrowly defeating former California governor Ronald Reagan , but he lost the election to his Democratic opponent Jimmy Carter . Carter ran on an “ anti-Washington ” ticket , making a virtue of his lack of experience in what was increasingly seen as the corrupt politics of the nation ’ s capital . Accepting his party ’ s nomination , the former governor of Georgia pledged to combat racism and sexism as well as overhaul the tax structure . He openly proclaimed his faith as a born-again Christian and promised to change the welfare system and provide comprehensive healthcare coverage for neglected citizens who deserved compassion . <hl> Most importantly , Jimmy Carter promised that he would “ never lie . ” <hl>", "hl_sentences": "Most importantly , Jimmy Carter promised that he would “ never lie . ”", "question": { "cloze_format": "During the 1976 election campaign, Jimmy Carter famously promised ________.", "normal_format": "During the 1976 election campaign, What did Jimmy Carter famously promise?", "question_choices": [ "that he would never start a war", "that he would never be unfaithful to his wife", "that he had never smoked marijuana", "that he would never lie" ], "question_id": "fs-idp93151904", "question_text": "During the 1976 election campaign, Jimmy Carter famously promised ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "companies would become more competitive" }, "bloom": null, "hl_context": "<hl> In trying to manage the relatively high unemployment rate of 7.5 percent and inflation that had risen into the double digits by 1978 , Carter was only marginally effective . <hl> <hl> His tax reform measure of 1977 was weak and failed to close the grossest of loopholes . <hl> <hl> His deregulation of major industries , such as aviation and trucking , was intended to force large companies to become more competitive . <hl> Consumers benefited in some ways : For example , airlines offered cheaper fares to beat their competitors . 
However , some companies , like Pan American World Airways , instead went out of business . Carter also expanded various social programs , improved housing for the elderly , and took steps to improve workplace safety .", "hl_sentences": "In trying to manage the relatively high unemployment rate of 7.5 percent and inflation that had risen into the double digits by 1978 , Carter was only marginally effective . His tax reform measure of 1977 was weak and failed to close the grossest of loopholes . His deregulation of major industries , such as aviation and trucking , was intended to force large companies to become more competitive .", "question": { "cloze_format": "Carter deregulated several major American industries in an effort to ensure that ________.", "normal_format": "Carter deregulated several major American industries in an effort to ensure what?", "question_choices": [ "companies would become more competitive", "airlines would merge", "oil prices would rise", "consumers would start conserving energy" ], "question_id": "fs-idp281552848", "question_text": "Carter deregulated several major American industries in an effort to ensure that ________." }, "references_are_paraphrase": 0 } ]
30
30.1 Identity Politics in a Fractured Society

Learning Objectives
By the end of this section, you will be able to:
Describe the counterculture of the 1960s
Explain the origins of the American Indian Movement and its major activities
Assess the significance of the gay rights and women’s liberation movements

The political divisions that plagued the United States in the 1960s were reflected in the rise of identity politics in the 1970s. As people lost hope of reuniting as a society with common interests and goals, many focused on issues of significance to the subgroups to which they belonged, based on culture, ethnicity, sexual orientation, gender, and religion.

HIPPIES AND THE COUNTERCULTURE
In the late 1960s and early 1970s, many young people came to embrace a new wave of cultural dissent. The counterculture offered an alternative to the bland homogeneity of American middle-class life, patriarchal family structures, self-discipline, unquestioning patriotism, and the acquisition of property. In fact, there were many alternative cultures.

“Hippies” rejected the conventions of traditional society. Men sported beards and grew their hair long; both men and women wore clothing from non-Western cultures, defied their parents, rejected social etiquette and manners, and turned to music as an expression of their sense of self. Casual sex between unmarried men and women was acceptable. Drug use, especially of marijuana and psychedelic drugs like LSD and peyote, was common. Most hippies were also deeply attracted to the ideas of peace and freedom. They protested the war in Vietnam and preached a doctrine of personal freedom to be and act as one wished.

Some hippies dropped out of mainstream society altogether and expressed their disillusionment with the cultural and spiritual limitations of American freedom. They joined communes, usually in rural areas, where members shared a desire to live closer to nature, respect for the earth, a dislike of modern life, and a disdain for wealth and material goods. Many communes grew their own organic food. Others abolished the concept of private property, and all members shared willingly with one another. Some sought to abolish traditional ideas regarding love and marriage, and free love was practiced openly. One of the most famous communes was The Farm, established in Tennessee in 1971. Residents adopted a blend of Christian and Asian beliefs. They shared housing, owned no private property except tools and clothing, advocated nonviolence, and tried to live as one with nature, becoming vegetarians and avoiding the use of animal products. They smoked marijuana in an effort to reach a higher state of consciousness and to achieve a feeling of oneness and harmony.

Music, especially rock and folk music, occupied an important place in the counterculture. Concerts provided the opportunity to form seemingly impromptu communities to celebrate youth, rebellion, and individuality. In mid-August 1969, nearly 400,000 people attended a music festival in rural Bethel, New York, many for free (Figure 30.3). They jammed roads throughout the state, and thousands had to be turned around and sent home. Thirty-two acts performed for a crowd that partook freely of marijuana, LSD, and alcohol during the rainy three-day event that became known as Woodstock (after the nearby town) and became the cultural touchstone of a generation. No other event better symbolized the cultural independence and freedom of Americans coming of age in the 1960s.
My Story
Glenn Weiser on Attending Woodstock
On the way to Woodstock, Glenn Weiser remembers that the crowds were so large they essentially turned it into a free concert:
As we got closer to the site [on Thursday, August 14, 1969] we heard that so many people had already arrived that the crowd had torn down the fences enclosing the festival grounds (in fact they were never put up to begin with). Everyone was being allowed in for free. . . . Early on Friday afternoon about a dozen of us got together and spread out some blankets on the grass at a spot about a third of the way up the hill on stage right and then dropped LSD. I took Orange Sunshine, a strong, clean dose in an orange tab that was perhaps the best street acid ever. Underground chemists in southern California had made millions of doses, and the nation was flooded with it that summer. We smoked some tasty black hashish to amuse ourselves while waiting for the acid to hit, and sat back to groove along with Richie Havens. In two hours we were all soaring, and everything was just fine. In fact, it couldn’t have been better—there I was with my beautiful hometown friends, higher than a church steeple and listening to wonderful music in the cool summer weather of the Catskills. After all, the dirty little secret of the late ‘60s was that psychedelic drugs taken in a pleasant setting could be completely exhilarating.
—Glenn Weiser, “Woodstock 1969 Remembered”
In this account, Glenn Weiser describes both the music and his drug use. What social trends did Woodstock reflect? How might the festival have influenced American culture and society, both aesthetically and behaviorally?

AMERICAN INDIAN PROTEST
As the young, primarily White men and women who became hippies strove to create new identities for themselves, they borrowed liberally from other cultures, including that of Native Americans. At the same time, many Native Americans were themselves seeking to maintain their culture or retrieve elements that had been lost. In 1968, a group of American Indian activists, including Dennis Banks, George Mitchell, and Clyde Bellecourt, convened a gathering of two hundred people in Minneapolis, Minnesota, and formed the American Indian Movement (AIM) (Figure 30.4). The organizers were urban dwellers frustrated by decades of poverty and discrimination. In 1970, the average life expectancy of Native Americans was forty-six years compared to the national average of sixty-nine. The suicide rate was twice that of the general population, and the infant mortality rate was the highest in the country. Half of all Native Americans lived on reservations, where unemployment reached 50 percent. Among those in cities, 20 percent lived below the poverty line.

On November 20, 1969, a small group of Native American activists landed on Alcatraz Island (the former site of a notorious federal prison) in San Francisco Bay. They announced plans to build an American Indian cultural center, including a history museum, an ecology center, and a spiritual sanctuary. People on the mainland provided supplies by boat, and celebrities visited Alcatraz to publicize the cause. More people joined the occupiers until, at one point, they numbered about four hundred. From the beginning, the federal government negotiated with them to persuade them to leave. They were reluctant to accede, but over time, the occupiers began to drift away of their own accord. Government forces removed the final holdouts on June 11, 1971, nineteen months after the occupation began.
Defining American
Proclamation to the Great White Father and All His People
In occupying Alcatraz Island, American Indian activists sought to call attention to their grievances and expectations about what America should mean. At the beginning of the nineteen-month occupation, Mohawk Richard Oakes delivered the following proclamation:
We, the native Americans, re-claim the land known as Alcatraz Island in the name of all American Indians by right of discovery. We wish to be fair and honorable in our dealings with the Caucasian inhabitants of this land, and hereby offer the following treaty: We will purchase said Alcatraz Island for twenty-four dollars ($24) in glass beads and red cloth, a precedent set by the White man’s purchase of a similar island about 300 years ago. . . . We feel that this so-called Alcatraz Island is more than suitable for an Indian Reservation, as determined by the White man’s own standards. By this we mean that this place resembles most Indian reservations in that:
1. It is isolated from modern facilities, and without adequate means of transportation.
2. It has no fresh running water.
3. It has inadequate sanitation facilities.
4. There are no oil or mineral rights.
5. There is no industry and so unemployment is very great.
6. There are no health care facilities.
7. The soil is rocky and non-productive; and the land does not support game.
8. There are no educational facilities.
9. The population has always exceeded the land base.
10. The population has always been held as prisoners and kept dependent upon others.
Further, it would be fitting and symbolic that ships from all over the world, entering the Golden Gate, would first see Indian land, and thus be reminded of the true history of this nation. This tiny island would be a symbol of the great lands once ruled by free and noble Indians.
What does the Alcatraz Proclamation reveal about the Native American view of U.S. history?

The next major demonstration came in 1972 when AIM members and others marched on Washington, DC—a journey they called the “Trail of Broken Treaties”—and occupied the offices of the Bureau of Indian Affairs (BIA). The group presented a list of demands, which included improved housing, education, and economic opportunities in Native American communities; the drafting of new treaties; the return of Native lands; and protections for Native religions and culture.

The most dramatic event staged by AIM was the occupation of the community of Wounded Knee, South Dakota, in February 1973. Wounded Knee, on the Pine Ridge Indian Reservation, had historical significance: It was the site of an 1890 massacre of members of the Lakota tribe by the U.S. Army. AIM went to the reservation following the failure of a group of Oglala to impeach the tribal president Dick Wilson, whom they accused of corruption and the use of strong-arm tactics to silence critics. AIM used the occasion to criticize the U.S. government for failing to live up to its treaties with native peoples. The federal government surrounded the area with U.S. marshals, FBI agents, and other law enforcement forces. A siege ensued that lasted seventy-one days, with frequent gunfire from both sides, wounding a U.S. marshal as well as an FBI agent, and killing two Native Americans. The government did very little to meet the protesters’ demands. Two AIM leaders, Dennis Banks and Russell Means, were arrested, but charges were later dismissed.
The Nixon administration had already halted the federal policy of termination and restored millions of acres to tribes. Increased funding for Native American education, healthcare, legal services, housing, and economic development followed, along with the hiring of more Native American employees in the BIA.

GAY RIGHTS
Combined with the sexual revolution and the feminist movement of the 1960s, the counterculture helped establish a climate that fostered the struggle for gay and lesbian rights. Many gay rights groups were founded in Los Angeles and San Francisco, cities that were administrative centers in the network of U.S. military installations and the places where many gay men suffered dishonorable discharges. The first postwar organization for homosexual civil rights, the Mattachine Society, was launched in Los Angeles in 1950. The first national organization for lesbians, the Daughters of Bilitis, was founded in San Francisco five years later. In 1966, the city became home to the world’s first organization for transsexual people, the National Transsexual Counseling Unit, and in 1967, the Sexual Freedom League of San Francisco was born. Through these organizations and others, gay and lesbian activists fought against the criminalization of and discrimination against their sexual identities on a number of occasions throughout the 1960s, employing strategies of both protest and litigation.

However, the most famous event in the gay rights movement took place not in San Francisco but in New York City. Early in the morning of June 28, 1969, police raided a Greenwich Village gay bar called the Stonewall Inn. Although such raids were common, the response of the Stonewall patrons was anything but. As the police prepared to arrest many of the customers, especially transsexuals and cross-dressers, who were particular targets for police harassment, a crowd began to gather. Angered by the brutal treatment of the prisoners, the crowd attacked. Beer bottles and bricks were thrown. The police barricaded themselves inside the bar and waited for reinforcements. The riot continued for several hours and resumed the following night. Shortly thereafter, the Gay Liberation Front and Gay Activists’ Alliance were formed, and began to protest discrimination, homophobia, and violence against gay people, promoting gay liberation and gay pride.

With a call for gay men and women to “come out,” a consciousness-raising campaign that shared many principles with the counterculture, gay and lesbian communities moved from the urban underground into the political sphere. Gay rights activists protested strongly against the official position of the American Psychiatric Association (APA), which categorized homosexuality as a mental illness, a label that often resulted in job loss, loss of custody, and other serious personal consequences. By 1974, the APA had ceased to classify homosexuality as a form of mental illness but continued to consider it a “sexual orientation disturbance.” Nevertheless, in 1974, Kathy Kozachenko became the first openly lesbian woman voted into office in Ann Arbor, Michigan. In 1977, Harvey Milk became California’s first openly gay man elected to public office, although his service on San Francisco’s board of supervisors, along with that of San Francisco mayor George Moscone, was cut short by the bullet of disgruntled former city supervisor Dan White.

MAYBE NOT NOW
The feminist push for greater rights continued through the 1970s (Figure 30.5).
The media often ridiculed feminists as “women’s libbers” and focused on more radical organizations like W.I.T.C.H. (Women’s International Terrorist Conspiracy from Hell), a loose association of activist groups. Many reporters stressed the most unusual goals of the most radical women—calls for the abolition of marriage and demands that manholes be renamed “personholes.”

The majority of feminists, however, sought meaningful accomplishments. In the 1970s, they opened battered women’s shelters and successfully fought for protection from employment discrimination for pregnant women, reform of rape laws (such as the abolition of laws requiring a witness to corroborate a woman’s report of rape), criminalization of domestic violence, and funding for schools that sought to counter sexist stereotypes of women. In 1973, the U.S. Supreme Court in Roe v. Wade invalidated a number of state laws under which abortions obtained during the first three months of pregnancy were illegal. This made a nontherapeutic abortion a legal medical procedure nationwide.

Many advances in women’s rights were the result of women’s greater engagement in politics. For example, Patsy Mink, the first Asian American woman elected to Congress, was the co-author of the Education Amendments Act of 1972, Title IX of which prohibits sex discrimination in education. Mink had been interested in fighting discrimination in education since her youth, when she opposed racial segregation in campus housing while a student at the University of Nebraska. She went to law school after being denied admission to medical school because of her gender. Like Mink, many other women sought and won political office, many with the help of the National Women’s Political Caucus (NWPC). In 1971, the NWPC was formed by Bella Abzug, Gloria Steinem, Shirley Chisholm, and other leading feminists to encourage women’s participation in political parties, elect women to office, and raise money for their campaigns (Figure 30.6).

The ultimate political goal of the National Organization for Women (NOW) was the passage of an Equal Rights Amendment (ERA). The amendment passed Congress in March 1972 and was sent to the states for ratification with a deadline of seven years for passage; if the amendment was not ratified by thirty-eight states by 1979, it would die. Twenty-two states ratified the ERA in 1972, and eight more in 1973. In the next two years, only four states voted for the amendment. In 1979, still four votes short, the amendment received a brief reprieve when Congress agreed to a three-year extension, but it never passed, as the result of the well-organized opposition of Christian and other socially conservative, grassroots organizations.

30.2 Coming Apart, Coming Together

Learning Objectives
By the end of this section, you will be able to:
Explain the factors responsible for Richard Nixon’s election in 1968
Describe the splintering of the Democratic Party in 1968
Discuss Richard Nixon’s economic policies
Discuss the major successes of Richard Nixon’s foreign policy

The presidential election of 1968 revealed a rupture of the New Deal coalition that had come together under Franklin Roosevelt in the 1930s. The Democrats were divided by internal dissension over the Vietnam War, the civil rights movement, and the challenges of the New Left.
Meanwhile, the Republican candidate, Richard Nixon, won voters in the South, Southwest, and northern suburbs by appealing to their anxieties about civil rights, women’s rights, antiwar protests, and the counterculture taking place around them. Nixon spent his first term in office pushing measures that slowed the progress of civil rights and sought to restore economic stability. His greatest triumphs were in foreign policy. But his largest priority throughout his first term was his reelection in 1972.

THE “NEW NIXON”
The Republicans held their 1968 national convention from August 5–8 in Miami, Florida. Richard Nixon quickly emerged as the frontrunner for the nomination, ahead of Nelson Rockefeller and Ronald Reagan. This success was not accidental: From 1962, when he lost his bid for the governorship of California, to 1968, Nixon had been collecting political credits by branding himself as a candidate who could appeal to mainstream voters and by tirelessly working for other Republican candidates. In 1964, for example, he vigorously supported Barry Goldwater’s presidential bid and thus built good relationships with the new conservative movement in the Republican Party.

Although Goldwater lost the 1964 election, his vigorous rejection of New Deal state and social legislation, along with his support of states’ rights, proved popular in the Deep South, which had resisted federal efforts at racial integration. Taking a lesson from Goldwater’s experience, Nixon also employed a southern strategy in 1968. Denouncing segregation and the denial of the vote to African Americans, he nevertheless maintained that southern states be allowed to pursue racial equality at their own pace and criticized forced integration. Nixon thus garnered the support of South Carolina’s senior senator and avid segregationist Strom Thurmond, which helped him win the Republican nomination on the first ballot.

Nixon also courted northern, blue-collar workers, whom he later called the silent majority, to acknowledge their belief that their voices were seldom heard. These voters feared the social changes taking place in the country: Antiwar protests challenged their own sense of patriotism and civic duty, whereas the recreational use of new drugs threatened their cherished principles of self-discipline, and urban riots invoked the specter of a racial reckoning. Government action on behalf of the marginalized raised the question of whether its traditional constituency—the White middle class—would lose its privileged place in American politics. Some felt left behind as the government turned to the problems of African Americans. Nixon’s promises of stability and his emphasis on law and order appealed to them. He portrayed himself as a fervent patriot who would take a strong stand against racial unrest and antiwar protests. Nixon harshly critiqued Lyndon Johnson’s Great Society, and he promised a secret plan to end the war in Vietnam honorably and bring home the troops. He also promised to reform the Supreme Court, which he contended had gone too far in “coddling criminals.” Under Chief Justice Earl Warren, the court had used the due process and equal protection clauses of the Fourteenth Amendment to grant those accused under state law the ability to defend themselves and secure protections against unlawful search and seizure, cruel and unusual punishment, and self-incrimination.

Nixon had found the political capital that would ensure his victory in the suburbs, which produced more votes than either urban or rural areas.
He championed “middle America,” which was fed up with social convulsions, and called upon the country to come together. His running mate, Spiro T. Agnew, the governor of Maryland, blasted the Democratic ticket as fiscally irresponsible and “soft on communism.” Nixon and Agnew’s message thus appealed to northern middle-class and blue-collar Whites as well as southern Whites who had fled to the suburbs in the wake of the Supreme Court’s pro-integration decision in Brown v. Board of Education (Figure 30.7).

DEMOCRATS IN DISARRAY
By contrast, in early 1968, the political constituency that Lyndon Johnson had cobbled together to win the presidency in 1964 seemed to be falling apart. When Eugene McCarthy, the Democratic senator from Minnesota, announced that he would challenge Johnson in the primaries in an explicitly antiwar campaign, Johnson was overwhelmingly favored by Democratic voters. But then the Tet Offensive in Vietnam exploded on American television screens on January 31, playing out on the nightly news for weeks. On February 27, Walter Cronkite, a highly respected television journalist, offered his opinion that the war in Vietnam was unwinnable. When the votes were counted in New Hampshire on March 12, McCarthy had won twenty of the state’s twenty-four delegates.

McCarthy’s popularity encouraged Robert (Bobby) Kennedy to also enter the race. Realizing that his war policies could unleash a divisive fight within his own party for the nomination, Johnson announced his withdrawal on March 31, fracturing the Democratic Party. One faction consisted of the traditional party leaders who appealed to unionized, blue-collar constituents and White ethnics (Americans with recent European immigrant backgrounds). This group fell in behind Johnson’s vice president, Hubert H. Humphrey, who took up the mainstream party’s torch almost immediately after Johnson’s announcement. The second group consisted of idealistic young activists who had slogged through the snows of New Hampshire to give McCarthy a boost and saw themselves as the future of the Democratic Party. The third group, composed of Catholics, African Americans and other minorities, and some of the young, antiwar element, galvanized around Robert Kennedy (Figure 30.8). Finally, there were the southern Democrats, the Dixiecrats, who opposed the advances made by the civil rights movement. Some found themselves attracted to the Republican candidate Richard Nixon. Many others, however, supported the third-party candidacy of segregationist George C. Wallace, the former governor of Alabama. Wallace won close to ten million votes, which was 13.5 percent of all votes cast. He was particularly popular in the South, where he carried five states and received forty-six Electoral College votes.

Kennedy and McCarthy fiercely contested the remaining primaries of the 1968 season; there were only fifteen at that time. McCarthy beat Kennedy handily in Wisconsin, Pennsylvania, and Massachusetts. Kennedy took Indiana and Nebraska before losing Oregon to McCarthy. Kennedy’s only hope was that a strong enough showing in the California primary on June 4 might swing uncommitted delegates his way. He did manage to beat McCarthy, winning 46 percent of the vote to McCarthy’s 42 percent, but it was a fruitless victory. As he attempted to exit the Ambassador Hotel in Los Angeles after his victory speech, Kennedy was shot; he died twenty-six hours later. His killer, Sirhan B. Sirhan, a Jordanian immigrant, had allegedly targeted him for advocating military support for Israel in its conflict with neighboring Arab states.

Going into the nominating convention in Chicago in 1968, Humphrey, who promised to pursue the “Politics of Joy,” seemed clearly in command of the regular party apparatus. But the national debates over civil rights, student protests, and the Vietnam War had made 1968 a particularly anguished year, and many people felt anything but joyful. Some party factions hoped to make their voices heard; others wished to disrupt the convention altogether. Among them were antiwar protestors, hippies, and Yippies—members of the leftist, anarchistic Youth International Party organized by Jerry Rubin and Abbie Hoffman—who called for the establishment of a new nation consisting of cooperative institutions to replace those currently in existence. To demonstrate their contempt for “the establishment” and the proceedings inside the hall, the Yippies nominated a pig named Pigasus for president.

A chaotic scene developed inside the convention hall and outside at Grant Park, where the protesters camped. Chicago’s mayor, Richard J. Daley, was anxious to demonstrate that he could maintain law and order, especially because several days of destructive rioting had followed the murder of Martin Luther King, Jr. earlier that year. He thus let loose a force of twelve thousand police officers, six thousand members of the Illinois National Guard, and six thousand U.S. Army soldiers. Television cameras caught what later became known as a “police riot”: Armed officers made their way into crowds of law-abiding protesters, clubbing anyone they encountered and setting off tear gas canisters. The protesters fought back. Inside the convention hall, a Democratic senator from Connecticut called for adjournment, whereas other delegates insisted on proceeding. Ironically, Hubert Humphrey received the nomination and gave an acceptance speech in which he spoke in support of “law and order.” When the convention ended, Rubin, Hoffman, and five other protesters (called the “Chicago Seven”) were placed on trial for inciting a riot (Figure 30.9).

THE DOMESTIC NIXON
The images of violence and the impression of things spinning out of control seriously damaged Humphrey’s chances for victory. Many liberals and young antiwar activists, disappointed by his selection over McCarthy and still shocked by the death of Robert Kennedy, did not vote for Humphrey. Others turned against him because of his failure to chastise the Chicago police for their violence. Some resented the fact that Humphrey had received 1,759 delegates on the first ballot at the convention, nearly three times the number won by McCarthy, even though in the primaries, he had received only 2 percent of the popular vote. Many loyal Democratic voters at home, shocked by the violence they saw on television, turned away from their party, which seemed to have attracted dangerous “radicals,” and began to consider Nixon’s promises of law and order.

As the Democratic Party collapsed, Nixon successfully campaigned for the votes of both working- and middle-class White Americans, winning the 1968 election. Although Humphrey received nearly the same percentage of the popular vote, Nixon easily won the Electoral College, gaining 301 votes to Humphrey’s 191 and Wallace’s 46. Once elected, Nixon began to pursue a policy of deliberate neglect of the civil rights movement and the needs of ethnic minorities.
For example, in 1969, for the first time in fifteen years, federal lawyers sided with the state of Mississippi when it sought to slow the pace of school desegregation. Similarly, Nixon consistently showed his opposition to busing to achieve racial desegregation. He saw that restricting African American activity was a way of undercutting a source of votes for the Democratic Party and sought to overhaul the provisions of the Voting Rights Act of 1965. In March 1970, he commented that he did not believe an “open” America had to be homogeneous or fully integrated, maintaining that it was “natural” for members of ethnic groups to live together in their own enclaves. In other policy areas, especially economic ones, Nixon was either moderate or supportive of the progress of African Americans; for example, he expanded affirmative action, a program begun during the Johnson administration to improve employment and educational opportunities for racial minorities.

Although Nixon always kept his eye on the political environment, the economy required attention. The nation had enjoyed seven years of expansion since 1961, but inflation (a general rise in prices) was threatening to constrict the purchasing power of the American consumer and therefore curtail economic expansion. Nixon tried to appeal to fiscal conservatives in the Republican Party, reach out to disaffected Democrats, and, at the same time, work with a Democratic Party-controlled Congress. As a result, Nixon’s approach to the economy seemed erratic. Despite the heavy criticisms he had leveled against the Great Society, he embraced and expanded many of its features. In 1969, he signed a tax bill that eliminated the investment tax credit and moved some two million of the poorest people off the tax rolls altogether. He federalized the food stamp program and established national eligibility requirements, and he signed into law automatic cost-of-living adjustments for Social Security payments. On the other hand, he won the praise of conservatives with his “New Federalism”—drastically expanding the use of federal “block grants” to states to spend as they wished without strings attached.

By mid-1970, a recession was beginning and unemployment was 6.2 percent, twice the level under Johnson. After earlier efforts at controlling inflation with controlled federal spending—economists assumed that reduced federal spending and borrowing would curb the amount of money in circulation and stabilize prices—Nixon proposed a budget with an $11 billion deficit in 1971. The hope was that more federal funds in the economy would stimulate investment and job creation. When the unemployment rate refused to budge the following year, he proposed a budget with a $25 billion deficit. At the same time, he tried to fight continuing inflation by freezing wages and prices for ninety days, which proved to be only a temporary fix. The combination of unemployment and rising prices posed an unfamiliar challenge to economists, whose fiscal policies of either expanding or contracting federal spending could address only one side of the problem at the cost of the other. This phenomenon of “stagflation”—a term that combined the economic conditions of stagnation and inflation—outlived the Nixon administration, enduring into the early 1980s.

The origins of the nation’s new economic troubles were not just a matter of policy. Postwar industrial development in Asia and Western Europe—especially in Germany and Japan—had created serious competition for American businesses.
By 1971, American appetites for imports left foreign central banks with billions of U.S. dollars, which had been fixed to gold in the international monetary and trade agreement of Bretton Woods back in 1944. When foreign dollar holdings exceeded U.S. gold reserves in 1971, President Nixon allowed the dollar to flow freely against the price of gold. This caused an immediate 8 percent devaluation of the dollar, made American goods cheaper abroad, and stimulated exports. Nixon’s move also effectively ended the Bretton Woods system of fixed exchange rates anchored to gold.

The situation was made worse in October 1973, when Syria and Egypt jointly attacked Israel to recover territory that had been lost in 1967, starting the Yom Kippur War. The Soviet Union significantly aided its allies, Egypt and Syria, and the United States supported Israel, earning the enmity of Arab nations. In retaliation, the Organization of Arab Petroleum Exporting Countries (OAPEC) imposed an embargo on oil shipments to the United States from October 1973 to March 1974. The ensuing shortage of oil pushed its price from three dollars a barrel to twelve dollars a barrel. The average price of gasoline in the United States shot from thirty-eight cents a gallon before the embargo to fifty-five cents a gallon in June 1974, and the prices of other goods whose manufacture and transportation relied on oil or gas also rose and did not come down. The oil embargo had a lasting impact on the economy and underscored the nation’s interdependency with international political and economic developments.

Faced with high fuel prices, American consumers panicked. Gas stations limited the amount customers could purchase and closed on Sundays as supplies ran low (Figure 30.10). To conserve oil, Congress reduced the speed limit on interstate highways to fifty-five miles per hour. People were asked to turn down their thermostats, and automobile manufacturers in Detroit explored the possibility of building more fuel-efficient cars. Even after the embargo ended, prices continued to rise, and by the end of the Nixon years in 1974, inflation had soared to 12.2 percent.

Although Nixon’s economic and civil rights policies differed from those of his predecessors, in other areas, he followed their lead. President Kennedy had committed the nation to putting a man on the moon before the end of the decade. Nixon, like Johnson before him, supported significant budget allocations to the National Aeronautics and Space Administration (NASA) to achieve this goal. On July 20, 1969, hundreds of millions of people around the world watched as astronauts Neil Armstrong and Edwin “Buzz” Aldrin walked on the surface of the moon and planted the U.S. flag. Watching from the White House, President Nixon spoke to the astronauts via satellite phone. The entire project cost the American taxpayer some $25 billion, approximately 4 percent of the nation’s gross national product, and was such a source of pride for the nation that the Soviet Union and China refused to televise it. Coming amid all the struggles and crises that the country was enduring, the moon landing gave citizens a sense of accomplishment that stood in stark contrast to the foreign policy failures, growing economic challenges, and escalating divisions at home.

NIXON THE DIPLOMAT
Despite the many domestic issues on Nixon’s agenda, he prioritized foreign policy and clearly preferred bold and dramatic actions in that arena.
Realizing that five major economic powers—the United States, Western Europe, the Soviet Union, China, and Japan—dominated world affairs, he sought opportunities for the United States to pit the others against each other. In 1969, he announced a new Cold War principle known as the Nixon Doctrine, a policy whereby the United States would continue to assist its allies but would not assume the responsibility of defending the entire non-Communist world. Other nations, like Japan, needed to assume more of the burden of first defending themselves.

Playing what was later referred to as “the China card,” Nixon abruptly reversed two decades of U.S. diplomatic sanctions and hostility to the Communist regime in the People’s Republic of China, when he announced, in August 1971, that he would personally travel to Beijing and meet with China’s leader, Chairman Mao Zedong, in February 1972 (Figure 30.11). Nixon hoped that opening up to the Chinese government would prompt its bitter rival, the Soviet Union, to compete for global influence and seek a more productive relationship with the United States. He also hoped that establishing a friendly relationship with China would isolate North Vietnam and ease a peace settlement, allowing the United States to extract its troops from the war honorably. Concurring that the Soviet Union should be restrained from making advances in Asia, Nixon and Chinese premier Zhou Enlai agreed to disagree on several issues and concluded the visit by issuing the Shanghai Communiqué, a joint statement rather than a formal treaty. In it, the two nations promised to work towards establishing trade and, eventually, full diplomatic relations with each other.

Continuing his strategy of pitting one Communist nation against another, in May 1972, Nixon made another newsworthy trip, traveling to Moscow to meet with the Soviet leader Leonid Brezhnev. The two discussed a policy of détente, a relaxation of tensions between their nations, and signed the Strategic Arms Limitation Treaty (SALT), which limited each side to deploying only two antiballistic missile systems. It also limited the number of nuclear missiles maintained by each country. In 1974, a protocol was signed that reduced antiballistic missile sites to one per country, since neither country had yet begun to build its second system. Moreover, the two sides signed agreements to allow scientific and technological exchanges, and promised to work towards a joint space mission.

30.3 Vietnam: The Downward Spiral

Learning Objectives
By the end of this section, you will be able to:
Describe the events that fueled antiwar sentiment in the Vietnam era
Explain Nixon’s steps to withdraw the United States from the conflict in South Vietnam

As early as 1967, critics of the war in Vietnam had begun to call for the repeal of the Gulf of Tonkin Resolution, which gave President Johnson the authority to conduct military operations in Vietnam in defense of an ally, South Vietnam. Nixon initially opposed the repeal efforts, claiming that doing so might have consequences that reached far beyond Vietnam. Nevertheless, by 1969, he was beginning troop withdrawals from Vietnam while simultaneously looking for a “knockout blow” against the North Vietnamese. In sum, the Nixon administration was in need of an exit strategy.

The escalation of the war, however, made an easy withdrawal increasingly difficult. Officially, the United States was the ally and partner of the South Vietnamese, whose “hearts and minds” it was trying to win through a combination of military assistance and economic development.
In reality, however, U.S. soldiers, who found themselves fighting in an inhospitable environment thousands of miles from home to protect people who often resented their presence and aided their enemies, came to regard the Vietnamese as backward, cowardly people and the government of South Vietnam as hopelessly inefficient and corrupt. Instead of winning “hearts and minds,” U.S. warfare in Vietnam cost the lives and limbs of U.S. troops and millions of Vietnamese combatants and civilians (Figure 30.12). For their part, the North Vietnamese forces and the National Liberation Front in South Vietnam also used brutal tactics to terrorize and kill their opponents or effectively control their territory. Political assassinations and forced indoctrination were common. Captured U.S. soldiers frequently endured torture and imprisonment.

MY LAI
Racism on the part of some U.S. soldiers and a desire to retaliate against those they perceived to be responsible for harming U.S. troops affected the conduct of the war. A war correspondent who served in Vietnam noted, “In motivating the GI to fight by appealing to his racist feelings, the United States military discovered that it had liberated an emotion over which it was to lose control.” It was not unusual for U.S. soldiers to evacuate and burn villages suspected of shielding Viet Cong fighters, both to deprive the enemy of potential support and to enact revenge for enemy brutality. Troops shot at farmers’ water buffalo for target practice. American and South Vietnamese use of napalm, a jellied gasoline that sticks to the objects it burns, was common. Originally developed to burn down structures during World War II, in Vietnam, it was directed against human beings as well, as had occurred during the Korean War.

Defining American
Vietnam Veterans Against the War Statement
Many U.S. soldiers disapproved of the actions of their fellow troops. Indeed, a group of Vietnam veterans formed the organization Vietnam Veterans Against the War (VVAW). Small at first, it grew to perhaps as many as twenty thousand members. In April 1971, John Kerry, a former lieutenant in the U.S. Navy and a member of VVAW, testified before the U.S. Senate Committee on Foreign Relations about conditions in Vietnam based on his personal observations:
I would like to talk on behalf of all those veterans and say that several months ago in Detroit we had an investigation at which over 150 honorably discharged, and many very highly decorated, veterans testified to war crimes committed in Southeast Asia. These were not isolated incidents but crimes committed on a day-to-day basis with the full awareness of officers at all levels of command. . . . They relived the absolute horror of what this country, in a sense, made them do. They told stories that at times they had personally raped, cut off ears, cut off heads . . . randomly shot at civilians, razed villages . . . and generally ravaged the countryside of South Vietnam in addition to the normal ravage of war and the normal and very particular ravaging which is done by the applied bombing power of this country. . . . We could come back to this country, we could be quiet, we could hold our silence, we could not tell what went on in Vietnam, but we feel because of what threatens this country, not the reds [Communists], but the crimes which we are committing that threaten it, that we have to speak out.
—John Kerry, April 23, 1971
In what way did the actions of U.S. soldiers in Vietnam threaten the United States?
On March 16, 1968, men from the U.S. Army’s Twenty-Third Infantry Division committed one of the most notorious atrocities of the war. About one hundred soldiers commanded by Captain Ernest Medina were sent to destroy the village of My Lai, which was suspected of hiding Viet Cong fighters. Although there was later disagreement regarding the captain’s exact words, the platoon leaders believed the order to destroy the enemy included killing women and children. Having suffered twenty-eight casualties in the past three months, the men of Charlie Company were under severe stress and extremely apprehensive as they approached the village. Two platoons entered it, shooting randomly. A group of seventy to eighty unarmed people, including children and infants, were forced into an irrigation ditch by members of the First Platoon under the command of Lt. William L. Calley, Jr. Despite their proclamations of innocence, the villagers were shot (Figure 30.13). Houses were set on fire, and as the inhabitants tried to flee, they were killed with rifles, machine guns, and grenades. The U.S. troops were never fired upon, and one soldier later testified that he did not see any man who looked like a Viet Cong fighter. The precise number of civilians killed that day is unclear: The numbers range from 347 to 504. None were armed.

Although not all the soldiers in My Lai took part in the killings, no one attempted to stop the massacre before the arrival by helicopter of Warrant Officer Hugh Thompson, who, along with his crew, attempted to evacuate women and children. Upon returning to his base, Thompson immediately reported the events taking place at My Lai. Shortly thereafter, Medina ordered Charlie Company to cease fire. Although Thompson’s crewmembers confirmed his account, none of the men from Charlie Company gave a report, and a cover-up began almost immediately. The army first claimed that 150 people, the majority of them Viet Cong, had been killed during a firefight with Charlie Company. Hearing details from friends in Charlie Company, a helicopter gunner by the name of Ron Ridenhour began to conduct his own investigation and, in April 1969, wrote to thirty members of Congress, demanding an investigation. By September 1969, the army charged Lt. Calley with premeditated murder.

Many Americans were horrified at the graphic images of the massacre; the incident confirmed their belief that the war was unjust and not being fought on behalf of the Vietnamese people. However, nearly half of the respondents to a Minnesota poll did not believe that the incident at My Lai had actually happened. U.S. soldiers could not possibly do such horrible things, they felt; they were certain that American goals in Vietnam were honorable and speculated that the antiwar movement had concocted the story to generate sympathy for the enemy. Calley was found guilty in March 1971 and sentenced to life in prison. Nationwide, hundreds of thousands of Americans joined a “Free Calley” campaign. Two days later, President Nixon released him from custody and placed him under house arrest at Fort Benning, Georgia. In August of that same year, Calley’s sentence was reduced to twenty years, and in September 1974, he was paroled. The only soldier convicted in the massacre, he spent a total of three-and-a-half years under house arrest for his crimes.

BATTLES AT HOME
As the conflict wore on and reports of brutalities increased, the antiwar movement grew in strength.
To take the political pressure off himself and his administration, and find a way to exit Vietnam “with honor,” Nixon began the process of Vietnamization, turning more responsibility for the war over to South Vietnamese forces by training them and providing American weaponry, while withdrawing U.S. troops from the field. At the same time, however, Nixon authorized the bombing of neighboring Cambodia, which had declared its neutrality, in an effort to destroy North Vietnamese and Viet Cong bases within that country and cut off supply routes between North and South Vietnam. The bombing was kept secret from both Congress and the American public. In April 1970, Nixon decided to follow up with an invasion of Cambodia.

The invasion could not be kept secret, and when Nixon announced it on television on April 30, 1970, protests sprang up across the country. The most tragic and politically damaging occurred on May 1, 1970, at Kent State University in Ohio. Violence erupted in the town of Kent after an initial student demonstration on campus, and the next day, the mayor asked Ohio’s governor to send in the National Guard. Troops were sent to the university’s campus, where students had set fire to the ROTC building and were fighting off firemen and policemen trying to extinguish it. The National Guard used teargas to break up the demonstration, and several students were arrested (Figure 30.14).

Tensions came to a head on May 4. Although campus officials had called off a planned demonstration, some fifteen hundred to two thousand students assembled, throwing rocks at a security officer who ordered them to leave. Seventy-seven members of the National Guard, with bayonets attached to their rifles, approached the students. After forcing most of them to retreat, the troops seemed to depart. Then, for reasons that are still unknown, they halted and turned; many began to fire at the students. Nine students were wounded; four were killed. Two of the dead had simply been crossing campus on their way to class. Peace was finally restored when a faculty member pleaded with the remaining students to leave.

News of the Kent State shootings shocked students around the country. Millions refused to attend class, as strikes were held at hundreds of colleges and high schools across the United States. On May 8, an antiwar protest took place in New York City, and the next day, 100,000 protesters assembled in Washington, DC. Not everyone sympathized with the slain students, however. Nixon had earlier referred to student demonstrators as “bums,” and construction workers attacked the New York City protestors. A Gallup poll revealed that most Americans blamed the students for the tragic events at Kent State.

On May 15, a similar tragedy took place at Jackson State College, an African American college in Jackson, Mississippi. Once again, students gathered on campus to protest the invasion of Cambodia, setting fires and throwing rocks. The police arrived to disperse the protesters, who had gathered outside a women’s dormitory. Shortly after midnight, the police opened fire with shotguns. The dormitory windows shattered, showering people with broken glass. Twelve were wounded, and two young men, one a student at the college and the other a local high school student, were killed.

PULLING OUT OF THE QUAGMIRE
Ongoing protests, campus violence, and the expansion of the war into Cambodia deeply disillusioned Americans about their role in Vietnam.
Understanding the nation’s mood, Nixon dropped his opposition to a repeal of the Gulf of Tonkin Resolution of 1964. In January 1971, he signed Congress’s revocation of the notorious blanket military authorization. Gallup polls taken in May of that year revealed that only 28 percent of the respondents supported the war; many felt it was not only a mistake but also immoral.

Just as influential as antiwar protests and campus violence in turning people against the war was the publication of documents the media dubbed the Pentagon Papers in June 1971. These were excerpts from a study prepared during the Johnson administration that revealed the true nature of the conflict in Vietnam. The public learned for the first time that the United States had been planning to oust Ngo Dinh Diem from the South Vietnamese government, that Johnson meant to expand the U.S. role in Vietnam and bomb North Vietnam even as he stated publicly that he had no intentions of doing so, and that his administration had sought to deliberately provoke North Vietnamese attacks in order to justify escalating American involvement. Copies of the study had been given to the New York Times and other newspapers by Daniel Ellsberg, one of the military analysts who had contributed to it. To avoid setting a precedent by allowing the press to publish confidential documents, Nixon’s attorney general, John Mitchell, sought an injunction against the New York Times to prevent its publication of future articles based on the Pentagon Papers. The newspaper appealed. On June 30, 1971, the U.S. Supreme Court held that the government could not prevent the publication of the articles.

Realizing that he must end the war but reluctant to make it look as though the United States was admitting its failure to subdue a small Asian nation, Nixon began maneuvering to secure favorable peace terms from the North Vietnamese. Thanks to his diplomatic efforts in China and the Soviet Union, those two nations cautioned North Vietnam to use restraint. The loss of strong support by their patrons, together with intensive bombing of Hanoi and the mining of crucial North Vietnamese harbors by U.S. forces, made the North Vietnamese more willing to negotiate. Nixon’s actions had also won him popular support at home. By the 1972 election, voters again favored his Vietnam policy by a ratio of two to one.

On January 27, 1973, Secretary of State Henry Kissinger signed an accord with Le Duc Tho, the chief negotiator for the North Vietnamese, ending American participation in the war. The United States was given sixty days to withdraw its troops, and North Vietnam was allowed to keep its forces in places it currently occupied. This meant that over 100,000 northern soldiers would remain in the South—ideally situated to continue the war with South Vietnam. The United States left behind a small number of military advisors as well as equipment, and Congress continued to approve funds for South Vietnam, but considerably less than in earlier years. So the war continued, but it was clear the South could not hope to defeat the North.

As the end was nearing, the United States conducted several operations to evacuate children from the South. On the morning of April 29, 1975, as North Vietnamese and Viet Cong forces moved through the outskirts of Saigon, orders were given to evacuate Americans and South Vietnamese who had supported the United States.
Unable to use the airport, helicopters ferried Americans and Vietnamese refugees who had fled to the American embassy to ships off the coast. North Vietnamese forces entered Saigon the next day, and the South surrendered. The war had cost the lives of more than 1.5 million Vietnamese combatants and civilians, as well as over 58,000 U.S. troops. But the war had caused another, more intangible casualty: the loss of consensus, confidence, and a sense of moral high ground in the American political culture.

30.4 Watergate: Nixon’s Domestic Nightmare

Learning Objectives
By the end of this section, you will be able to:
Describe the actions that Nixon and his confederates took to ensure his reelection in 1972
Explain the significance of the Watergate crisis
Describe Gerald Ford’s domestic policies and achievements in foreign affairs

Feeling the pressure of domestic antiwar sentiment and desiring a decisive victory, Nixon went into the 1972 reelection season having attempted to fashion a “new majority” of moderate southerners and northern, working-class Whites. The Democrats, responding to the chaos and failings of the Chicago convention, had instituted new rules on how delegates were chosen, which they hoped would broaden participation and the appeal of the party. Nixon proved unbeatable, however. Even evidence that his administration had broken the law failed to keep him from winning the White House.

THE ELECTION OF 1972
Following the 1968 nominating convention in Chicago, the process of selecting delegates for the Democratic National Convention was redesigned. The new rules, set by a commission led by George McGovern, awarded delegates based on candidates’ performance in state primaries (Figure 30.15). As a result, a candidate who won no primaries could not receive the party’s nomination, as Hubert Humphrey had done in Chicago. This system gave a greater voice to people who voted in the primaries and reduced the influence of party leaders and power brokers. It also led to a more inclusive political environment in which Shirley Chisholm received 156 votes for the Democratic nomination on the first ballot (Figure 30.15).

Eventually, the nomination went to George McGovern, a strong opponent of the Vietnam War. Many Democrats refused to support his campaign, however. Working- and middle-class voters turned against him too after allegations that he supported women’s right to an abortion and the decriminalization of drug use. McGovern’s initial support of vice presidential candidate Thomas Eagleton in the face of revelations that Eagleton had undergone electroshock treatment for depression, followed by his withdrawal of that support and acceptance of Eagleton’s resignation, also made McGovern look indecisive and disorganized.

Nixon and the Republicans led from the start. To increase their advantage, they attempted to paint McGovern as a radical leftist who favored amnesty for draft dodgers. In the Electoral College, McGovern carried only Massachusetts and Washington, DC. Nixon won a decisive victory of 520 electoral votes to McGovern’s 17. One Democrat described his role in McGovern’s campaign as “recreation director on the Titanic.”

HIGH CRIMES AND MISDEMEANORS
Nixon’s victory over a Democratic Party in disarray was the most remarkable landslide since Franklin D. Roosevelt’s reelection in 1936. His victory was short-lived, however, for it was soon discovered that he and members of his administration had routinely engaged in unethical and illegal behavior during his first term.
Following the publication of the Pentagon Papers, for instance, the “plumbers,” a group of men used by the White House to spy on the president’s opponents and stop leaks to the press, broke into the office of Daniel Ellsberg’s psychiatrist to steal Ellsberg’s file and learn information that might damage his reputation.

During the presidential campaign, the Committee to Re-Elect the President (CREEP) decided to play “dirty tricks” on Nixon’s opponents. Before the New Hampshire Democratic primary, a forged letter supposedly written by Democratic hopeful Edmund Muskie in which he insulted French Canadians, one of the state’s largest ethnic groups, was leaked to the press. Men were assigned to spy on both McGovern and Senator Edward Kennedy. One of them managed to masquerade as a reporter on board McGovern’s press plane. Men pretending to work for the campaigns of Nixon’s Democratic opponents contacted vendors in various states to rent or purchase materials for rallies; the rallies were never held, of course, and Democratic politicians were accused of failing to pay the bills they owed.

CREEP’s most notorious operation, however, was its break-in at the offices of the Democratic National Committee (DNC) in the Watergate office complex in Washington, DC, as well as its subsequent cover-up. On the evening of June 17, 1972, the police arrested five men inside DNC headquarters (Figure 30.16). According to a plan originally proposed by CREEP’s general counsel and White House plumber G. Gordon Liddy, the men were to wiretap DNC telephones. The FBI quickly discovered that two of the men had E. Howard Hunt’s name in their address books. Hunt was a former CIA officer and also one of the plumbers. In the following weeks, yet more connections were found between the burglars and CREEP, and in October 1972, the FBI revealed evidence of illegal intelligence gathering by CREEP for the purpose of sabotaging the Democratic Party. Nixon won his reelection handily in November. Had the president and his reelection team not pursued a strategy of dirty tricks, Richard Nixon would have entered his second term with one of the largest political leads of the twentieth century.

In the weeks following the Watergate break-in, Bob Woodward and Carl Bernstein, reporters for The Washington Post, received information from several anonymous sources, including one known to them only as “Deep Throat,” that led them to realize the White House was deeply implicated in the break-in. As the press focused on other events, Woodward and Bernstein continued to dig and publish their findings, keeping the public’s attention on the unfolding scandal. Years later, Deep Throat was revealed to be Mark Felt, then the FBI’s associate director.

THE WATERGATE CRISIS
Initially, Nixon was able to hide his connection to the break-in and the other wrongdoings alleged against members of CREEP. However, by early 1973, the situation quickly began to unravel. In January, the Watergate burglars were convicted, along with Hunt and Liddy. Trial judge John Sirica was not convinced that all the guilty had been discovered. In February, confronted with evidence that people close to the president were connected to the burglary, the Senate appointed the Watergate Committee to investigate. Ten days later, in his testimony before the Senate Judiciary Committee, L. Patrick Gray, acting director of the FBI, admitted destroying evidence taken from Hunt’s safe by John Dean, the White House counsel, after the burglars were caught.
On March 23, 1973, Judge Sirica publicly read a letter from one of the Watergate burglars, alleging that perjury had been committed during the trial. Less than two weeks later, Jeb Magruder, a deputy director of CREEP, admitted lying under oath and indicated that Dean and John Mitchell, who had resigned as attorney general to become the director of CREEP, were also involved in the break-in and its cover-up. Dean confessed, and on April 30, Nixon fired him and requested the resignation of his aides John Ehrlichman and H. R. Haldeman, also implicated. To defuse criticism and avoid suspicion that he was participating in a cover-up, Nixon also announced the resignation of the current attorney general, Richard Kleindienst, a close friend, and appointed Elliott Richardson to the position. In May 1973, Richardson named Archibald Cox special prosecutor to investigate the Watergate affair. Throughout the spring and the long, hot summer of 1973, Americans sat glued to their television screens, as the major networks took turns broadcasting the Senate hearings. One by one, disgraced former members of the administration confessed, or denied, their role in the Watergate scandal. Dean testified that Nixon was involved in the conspiracy, allegations the president denied. In March 1974, Haldeman, Ehrlichman, and Mitchell were indicted and charged with conspiracy. Without evidence clearly implicating the president, the investigation might have ended if not for the testimony of Alexander Butterfield, a low-ranking member of the administration, that a voice-activated recording system had been installed in the Oval Office. The President’s most intimate conversations had been caught on tape. Cox and the Senate subpoenaed them. Nixon, however, refused to hand the tapes over and cited executive privilege , the right of the president to refuse certain subpoenas. When he offered to supply summaries of the conversations, Cox refused. On October 20, 1973, in an event that became known as the Saturday Night Massacre, Nixon ordered Attorney General Richardson to fire Cox. Richardson refused and resigned, as did Deputy Attorney General William Ruckelshaus when confronted with the same order. Control of the Justice Department then fell to Solicitor General Robert Bork, who complied with Nixon’s order. In December, the House Judiciary Committee began its own investigation to determine whether there was enough evidence of wrongdoing to impeach the president. The public was enraged by Nixon’s actions. A growing number of citizens felt as though the president had placed himself above the law. Telegrams flooded the White House. The House of Representatives began to discuss impeachment. In April 1974, when Nixon agreed to release transcripts of the tapes, it was too little, too late ( Figure 30.17 ). Yet, while revealing nothing about Nixon’s knowledge of Watergate, the transcripts captured Nixon in a most unflattering light and helped to dismantle the image of himself he had so carefully curated over his years of public service. At the end of its hearings, in July 1974, the House Judiciary Committee voted to pass three of the five articles of impeachment out of committee. However, before the full House could vote, the U.S. Supreme Court ordered Nixon to release the actual tapes of his conversations, not just transcripts or summaries. One of the tapes revealed that he had in fact been told about White House involvement in the Watergate break-in shortly after it occurred. 
In a speech on August 5, 1974, Nixon, pleading a poor memory, accepted blame for the Watergate scandal. Warned by other Republicans that he would be found guilty by the Senate and removed from office, he resigned the presidency on August 8. Nixon’s resignation, which took effect the next day, did not make the Watergate scandal vanish. Instead, it fed a growing suspicion of government felt by many. The events of Vietnam had already showed that the government could not be trusted to protect the interests of the people or tell them the truth. For many, Watergate confirmed these beliefs, and the suffix “-gate” attached to a word has since come to mean a political scandal. FORD NOT A LINCOLN When Gerald R. Ford took the oath of office on August 9, 1974, he understood that his most pressing task was to help the country move beyond the Watergate scandal. His declaration that “Our long national nightmare is over. . . . [O]ur great Republic is a government of laws and not of men” was met with almost universal applause. It was indeed an unprecedented time. Ford was the first vice president chosen under the terms of the Twenty-Fifth Amendment, which provides for the appointment of a vice president in the event the incumbent dies or resigns; Nixon had appointed Ford, a longtime House representative from Michigan known for his honesty, following the resignation of embattled vice president Spiro T. Agnew over a charge of failing to report income—a lenient charge since this income stemmed from bribes he had received as the governor of Maryland. Ford was also the first vice president to take office after a sitting president’s resignation, and the only chief executive never elected either president or vice president. One of his first actions as president was to grant Richard Nixon a full pardon ( Figure 30.18 ). Ford thus prevented Nixon’s indictment for any crimes he may have committed in office and ended criminal investigations into his actions. The public reacted with suspicion and outrage. Many were convinced that the extent of Nixon’s wrongdoings would now never been known and he would never be called to account for them. When Ford chose to run for the presidency in 1976, the pardon returned to haunt him. As president, Ford confronted monumental issues, such as inflation, a depressed economy, and chronic energy shortages. He established his policies during his first year in office, despite opposition from a heavily Democratic Congress. In October 1974, he labeled inflation the country’s most dangerous public enemy and sought a grassroots campaign to curtail it by encouraging people to be disciplined in their consuming habits and increase their savings. The campaign was titled “Whip Inflation Now” and was advertised on brightly colored “Win” buttons volunteers were to wear. When recession became the nation’s most serious domestic problem, Ford shifted to measures aimed at stimulating the economy. Still fearing inflation, however, he vetoed a number of nonmilitary appropriations bills that would have increased the already-large budget deficit. Ford’s economic policies ultimately proved unsuccessful. Because of opposition from a Democratic Congress, his foreign policy accomplishments were also limited. When he requested money to assist the South Vietnamese government in its effort to repel North Vietnamese forces, Congress refused. Ford was more successful in other parts of the world. 
He continued Nixon’s policy of détente with the Soviet Union, and he and Secretary of State Kissinger achieved further progress in the second round of SALT talks. In August 1975, Ford went to Finland and signed the Helsinki Accords with Soviet premier Leonid Brezhnev. This agreement essentially accepted the territorial boundaries that had been established at the end of World War II in 1945. It also exacted a pledge from the signatory nations that they would protect human rights within their countries. Many immigrants to the United States protested Ford’s actions, because it seemed as though he had accepted the status quo and left their homelands under Soviet domination. Others considered it a belated American acceptance of the world as it really was. 30.5 Jimmy Carter in the Aftermath of the Storm Learning Objectives By the end of this section, you will be able to: Explain why Gerald Ford lost the election of 1976 Describe Jimmy Carter’s domestic and foreign policy achievements Discuss how the Iranian hostage crisis affected the Carter presidency At his inauguration in January 1977, President Jimmy Carter began his speech by thanking outgoing president Gerald Ford for all he had done to “heal” the scars left by Watergate. American gratitude had not been great enough to return Ford to the Oval Office, but enthusiasm for the new president was not much greater in the new atmosphere of disillusionment with political leaders. Indeed, Carter won his party’s nomination and the presidency largely because the Democratic leadership had been decimated by assassination and the taint of Vietnam, and he had carefully positioned himself as an outsider who could not be blamed for current policies. Ultimately, Carter’s presidency proved a lackluster one that was marked by economic stagnation at home and humiliation overseas. THE ELECTION OF 1976 President Ford won the Republican nomination for the presidency in 1976, narrowly defeating former California governor Ronald Reagan, but he lost the election to his Democratic opponent Jimmy Carter. Carter ran on an “anti-Washington” ticket, making a virtue of his lack of experience in what was increasingly seen as the corrupt politics of the nation’s capital. Accepting his party’s nomination, the former governor of Georgia pledged to combat racism and sexism as well as overhaul the tax structure. He openly proclaimed his faith as a born-again Christian and promised to change the welfare system and provide comprehensive healthcare coverage for neglected citizens who deserved compassion. Most importantly, Jimmy Carter promised that he would “never lie.” Ford’s pardon of Richard Nixon had alienated many Republicans. That, combined with the stagnant economy, cost him votes, and Jimmy Carter, an engineer and former naval officer who portrayed himself as a humble peanut farmer, prevailed, carrying all the southern states, except Virginia and Oklahoma ( Figure 30.19 ). Ford did well in the West, but Carter received 50 percent of the popular vote to Ford’s 48 percent, and 297 electoral votes to Ford’s 240. ON THE INSIDE Making a virtue of his lack of political experience, especially in Washington, Jimmy Carter took office with less practical experience in executive leadership and the workings of the national government than any president since Calvin Coolidge. His first executive act was to fulfill a campaign pledge to grant unconditional amnesty to young men who had evaded the draft during the Vietnam War. 
Despite the early promise of his rhetoric, within a couple of years of his taking office, liberal Democrats claimed Carter was the most conservative Democratic president since Grover Cleveland. In trying to manage the relatively high unemployment rate of 7.5 percent and inflation that had risen into the double digits by 1978, Carter was only marginally effective. His tax reform measure of 1977 was weak and failed to close the grossest of loopholes. His deregulation of major industries, such as aviation and trucking, was intended to force large companies to become more competitive. Consumers benefited in some ways: For example, airlines offered cheaper fares to beat their competitors. However, some companies, like Pan American World Airways, instead went out of business. Carter also expanded various social programs, improved housing for the elderly, and took steps to improve workplace safety. Because the high cost of fuel continued to hinder economic expansion, the creation of an energy program became a central focus of his administration. Carter stressed energy conservation, encouraging people to insulate their houses and rewarding them with tax credits if they did so, and pushing for the use of coal, nuclear power, and alternative energy sources such as solar power to replace oil and natural gas. To this end, Carter created the Department of Energy. CARTER AND A NEW DIRECTION IN FOREIGN AFFAIRS Carter believed that U.S. foreign policy should be founded upon deeply held moral principles and national values. The mission in Vietnam had failed, he argued, because American actions there were contrary to moral values. His dedication to peace and human rights significantly changed the way that the United States conducted its foreign affairs. He improved relations with China, ended military support to Nicaraguan dictator Anastasio Somoza, and helped arrange for the Panama Canal to be returned to Panamanian control in 1999. He agreed to a new round of talks with the Soviet Union (SALT II) and brought Israeli prime minister Menachem Begin and Egyptian president Anwar Sadat to the United States to discuss peace between their countries. Their meetings at Camp David, the presidential retreat in Maryland, led to the signing of the Camp David Accords in September 1978 ( Figure 30.20 ). This in turn resulted in the drafting of a historic peace treaty between Egypt and Israel in 1979. Despite achieving many successes in the area of foreign policy, Carter made a more controversial decision in response to the Soviet Union’s 1979 invasion of Afghanistan. In January 1980, he declared that if the USSR did not withdraw its forces, the United States would boycott the 1980 Summer Olympic Games in Moscow. The Soviets did not retreat, and the United States did not send a team to Moscow. Only about half of the American public supported this decision, and despite Carter’s call for other countries to join the boycott, very few did so. HOSTAGES TO HISTORY Carter’s biggest foreign policy problem was the Iranian hostage crisis, whose roots lay in the 1950s. In 1953, the United States had assisted Great Britain in the overthrow of Prime Minister Mohammad Mossadegh, a rival of Mohammad Reza Pahlavi, the shah of Iran. Mossadegh had sought greater Iranian control over the nation’s oil wealth, which was claimed by British companies. Following the coup, the shah assumed complete control of Iran’s government. 
He then disposed of political enemies and eliminated dissent through the use of SAVAK, a secret police force trained by the United States. The United States also supplied the shah’s government with billions of dollars in aid. As Iran’s oil revenue grew, especially after the 1973 oil embargo against the United States, the pace of its economic development and the size of its educated middle class also increased, and the country became less dependent on U.S. aid. Its population increasingly blamed the United States for the death of Iranian democracy and faulted it for its consistent support of Israel. Despite the shah’s unpopularity among his own people, the result of both his brutal policies and his desire to Westernize Iran, the United States supported his regime. In February 1979, the shah was overthrown when revolution broke out, and a few months later, he departed for the United States for medical treatment. The long history of U.S. support for him and its offer of refuge greatly angered Iranian revolutionaries. On November 4, 1979, a group of Iranian students and activists, including Islamic fundamentalists who wished to end the Westernization and secularization of Iran, invaded the American embassy in Tehran and seized sixty-six embassy employees. The women and African Americans were soon released, leaving fifty-three men as hostages. Negotiations failed to free them, and in April 1980, a rescue attempt fell through when the aircraft sent to transport them crashed. Another hostage was released when he developed serious medical problems. President Carter’s inability to free the other captives hurt his performance in the 1980 elections. The fifty-two men still held in Iran were finally freed on January 20, 1981, the day Ronald Reagan took office as president ( Figure 30.21 ). Carter’s handling of the crisis appeared even less effective in the way the media portrayed it publicly. This contributed to a growing sense of malaise, a feeling that the United States’ best days were behind it and the country had entered a period of decline. This belief was compounded by continuing economic problems, and the oil shortage and subsequent rise in prices that followed the Iranian Revolution. The president’s decision to import less oil to the United States and remove price controls on oil and gasoline did not help matters. In 1979, Carter sought to reassure the nation and the rest of the world, especially the Soviet Union, that the United States was still able to defend its interests. To dissuade the Soviets from making additional inroads in southwest Asia, he proposed the Carter Doctrine , which stated that the United States would regard any attempt to interfere with its interests in the Middle East as an act of aggression to be met with force if necessary. Carter had failed to solve the nation’s problems. Some blamed these problems on dishonest politicians; others blamed the problems on the Cold War obsession with fighting Communism, even in small nations like Vietnam that had little influence on American national interests. Still others faulted American materialism. In 1980, a small but growing group called the Moral Majority faulted Carter for betraying his southern roots and began to seek a return to traditional values.
Summary 6.1 Compare and Contrast Merchandising versus Service Activities and Transactions Service companies sell intangible services and do not have inventory. Their operating cycle begins with cash-on-hand, providing service to customers, and collecting customer payments. Merchandising companies resell goods to consumers. Their operating cycle begins with cash-on-hand, purchasing inventory, selling merchandise, and collecting customer payments. A purchase discount is an incentive for a retailer to pay their account early. Credit terms establish the percentage discount, and Merchandise Inventory decreases if the discount is taken. A retailer receives a full or partial refund for returning or keeping defective merchandise. This can reduce the value of the Merchandise Inventory account. A customer receives an incentive for paying on their account early. Sales Discounts is a contra revenue account that will reduce Sales at the end of a period. A customer receives a refund for returning or keeping defective merchandise. Sales Returns and Allowances is a contra revenue account that will reduce Sales at the end of a period. 6.2 Compare and Contrast Perpetual versus Periodic Inventory Systems A perpetual inventory system updates purchase and sales records constantly, particularly impacting Merchandise Inventory and Cost of Goods Sold. A periodic inventory system only records updates to inventory and costs of sales at scheduled times throughout the year, not constantly. Merchandise Inventory and Cost of Goods Sold are updated at the end of a period. Cost of goods sold (COGS) includes all elements of cost related to the sale of merchandise. Under the periodic inventory system, the formula to determine COGS is Beginning Inventory + Net Purchases – Ending Inventory (a short worked sketch follows this summary). The perpetual inventory system keeps real-time data, and the information is more robust. However, it is costly and time-consuming, and physical counts of inventory are scarce. With the periodic inventory system, there are more frequent physical inventory counts, which reduces the chance of shrinkage and damaged merchandise going undetected. However, the periodic system makes it difficult for businesses to keep track of inventory costs and to make timely decisions about their business. 6.3 Analyze and Record Transactions for Merchandise Purchases Using the Perpetual Inventory System A retailer can pay with cash or on credit. If paying with cash, Cash decreases. If paying on credit instead of cash, Accounts Payable increases. If a company pays for merchandise within the discount window, they debit Accounts Payable, credit Merchandise Inventory, and credit Cash. If they pay outside the discount window, the company debits Accounts Payable and credits Cash. If a company returns merchandise before remitting payment, they would debit Accounts Payable and credit Merchandise Inventory. If the company returns merchandise after remitting payment, they would debit Cash and credit Merchandise Inventory. If a company obtains an allowance for damaged merchandise before remitting payment, they would debit Accounts Payable and credit Merchandise Inventory. If the company obtains an allowance for damaged merchandise after remitting payment, they would debit Cash and credit Merchandise Inventory. 6.4 Analyze and Record Transactions for the Sale of Merchandise Using the Perpetual Inventory System A customer can pay with cash or on credit. If paying on credit instead of cash, Accounts Receivable increases rather than Cash; Sales increases in both instances.
A company must also record the cost of sale entry, where Merchandise Inventory decreases and COGS increases. If a customer pays for merchandise within the discount window, the company would debit Cash and Sales Discounts while crediting Accounts Receivable. If the customer pays outside the discount window, the company debits Cash and credits Accounts Receivable only. If a customer returns merchandise before remitting payment, the company would debit Sales Returns and Allowances and credit Accounts Receivable or Cash. The company may return the merchandise to its inventory by debiting Merchandise Inventory and crediting COGS. If a customer obtains an allowance for damaged merchandise before remitting payment, the company would debit Sales Returns and Allowances and credit Accounts Receivable or Cash. The company does not have to consider the merchandise condition because the customer keeps the merchandise in this instance. 6.5 Discuss and Record Transactions Applying the Two Commonly Used Freight-In Methods Establishing ownership of inventory is important because it helps determine who is responsible for shipping charges, goods in transit, and transfer points. Ownership also determines reporting requirements for the buyer and seller. Under FOB Shipping Point, the buyer is responsible for the merchandise, and the costs of shipping, insurance, purchase price, taxes, and fees are held in inventory in its Merchandise Inventory account. The buyer would record an increase (debit) to Merchandise Inventory and either a decrease to Cash or an increase to Accounts Payable (credit), depending on the payment method. FOB Shipping Point means the buyer should record the merchandise as inventory when it leaves the seller’s location. FOB Destination means the seller should continue to carry the merchandise in inventory until it reaches the buyer’s location. This distinction becomes especially important at year-end, when each party is trying to determine its actual balance sheet inventory accounts. FOB Destination means the seller is responsible for the merchandise, and the cost of shipping is expensed immediately in the period as a delivery expense. The seller would record an increase (debit) to Delivery Expense and a decrease to Cash (credit). In FOB Destination, the seller is responsible for the shipping charges and like expenses. The point of transfer is when the merchandise reaches the buyer’s place of business, and the seller owns the inventory in transit. In FOB Shipping Point, the buyer is responsible for the shipping charges and like expenses. The point of transfer is when the merchandise leaves the seller’s place of business, and the buyer owns the inventory in transit. 6.6 Describe and Prepare Multi-Step and Simple Income Statements for Merchandising Companies Multi-step income statements provide greater detail than simple income statements. The format differentiates sales costs from operating expenses and separates other revenue and expenses from operational activities. This statement is best used internally by managers to make pricing and cost reduction decisions. Simple income statements are not as detailed as multi-step income statements and combine all revenues and all expenses into general categories. There is no differentiation between operational and non-operational activities. Therefore, this statement is sometimes used as a summary for external users to view general company information.
The gross profit margin ratio shows whether a company has a large enough margin, after sales revenue and cost of goods sold are computed, to cover operating costs and meet profit goals. If a company is not meeting its target for this ratio, it may consider increasing prices or decreasing costs. 6.7 Appendix: Analyze and Record Transactions for Merchandise Purchases and Sales Using the Periodic Inventory System A retailer can pay with cash or on credit. Unlike in the perpetual inventory system, purchases of inventory in the periodic inventory system will debit Purchases rather than Merchandise Inventory. If a company pays for merchandise within the discount window, it debits Accounts Payable, credits Purchase Discounts, and credits Cash. If it pays outside the discount window, the company debits Accounts Payable and credits Cash. If a company returns merchandise before remitting payment, it would debit Accounts Payable and credit Purchase Returns and Allowances. If the company returns merchandise after remitting payment, it would debit Cash and credit Purchase Returns and Allowances. If a company obtains an allowance for damaged merchandise before remitting payment, it would debit Accounts Payable and credit Purchase Returns and Allowances. If the company obtains an allowance for damaged merchandise after remitting payment, it would debit Cash and credit Purchase Returns and Allowances. A customer can pay with cash or on credit. Unlike under a perpetual inventory system, when a sale is recorded under a periodic system there is no cost of sale entry. If a customer pays for merchandise within the discount window, the company would debit Cash and Sales Discounts and credit Accounts Receivable. If the customer pays outside the discount window, the company debits Cash and credits Accounts Receivable only. If a customer returns merchandise before remitting payment, the company would debit Sales Returns and Allowances and credit Accounts Receivable or Cash. If a customer obtains an allowance for damaged merchandise before remitting payment, the company would debit Sales Returns and Allowances and credit Accounts Receivable or Cash. Note: All of the following assessments assume a periodic inventory system unless otherwise noted.
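To make the periodic COGS computation and the gross profit margin ratio above concrete, here is a minimal worked sketch in Python. It is not part of the original chapter; the dollar amounts and function names are assumptions chosen only for illustration.

def net_purchases(gross_purchases, discounts, returns, allowances):
    # Net purchases = gross purchases - purchase discounts
    #                 - purchase returns - purchase allowances
    return gross_purchases - discounts - returns - allowances

def periodic_cogs(beginning_inventory, purchases_net, ending_inventory):
    # Periodic system: COGS = Beginning Inventory + Net Purchases - Ending Inventory
    return beginning_inventory + purchases_net - ending_inventory

purchases = net_purchases(50_000, 1_000, 2_000, 500)  # $46,500
cogs = periodic_cogs(10_000, purchases, 12_000)       # $44,500

net_sales = 80_000
gross_margin = net_sales - cogs                       # $35,500
gross_profit_margin = gross_margin / net_sales        # 0.44375, about 44.4%
print(cogs, gross_margin, round(gross_profit_margin, 4))

With these assumed figures, the company keeps roughly 44 cents of every sales dollar after covering merchandise costs; if that falls short of its target ratio, the advice in the summary applies: consider increasing prices or decreasing costs.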
Chapter Outline 6.1 Compare and Contrast Merchandising versus Service Activities and Transactions 6.2 Compare and Contrast Perpetual versus Periodic Inventory Systems 6.3 Analyze and Record Transactions for Merchandise Purchases Using the Perpetual Inventory System 6.4 Analyze and Record Transactions for the Sale of Merchandise Using the Perpetual Inventory System 6.5 Discuss and Record Transactions Applying the Two Commonly Used Freight-In Methods 6.6 Describe and Prepare Multi-Step and Simple Income Statements for Merchandising Companies 6.7 Appendix: Analyze and Record Transactions for Merchandise Purchases and Sales Using the Periodic Inventory System Why It Matters Jason and his brother James own a small business called J&J Games, specializing in the sale of video games and accessories. They purchase their merchandise from a manufacturer, Marcus Electronics, and sell directly to consumers. When J&J Games (J&J) purchases merchandise from Marcus, they establish a contract detailing purchase costs, payment terms, and shipping charges. It is important to establish this contract so that J&J and Marcus understand the inventory responsibilities of each party. J&J Games typically does not pay with cash immediately and is given an option for delayed payment with the possibility of a discount for early payment. The delayed payment helps continue the strong relationship between the two parties, but the option for early payment gives J&J a monetary incentive to pay early and allow Marcus to use the funds for other business purposes. Until J&J pays on their account, this outstanding balance remains a liability for J&J. J&J Games successfully sells merchandise on a regular basis to customers. As the business grows, the company later considers selling gaming accessories in bulk orders to other businesses. While these bulk sales will provide a new growth opportunity for J&J, the company understands that these clients may need time to pay for their orders. This can create a dilemma; J&J Games needs to offer competitive incentives for these clients while also maintaining the ability to pay their own obligations. They will carefully consider sales discounts, returns, and allowance policies that do not overextend their company’s financial position while giving them an opportunity to create lasting relationships with a new customer base.
[ { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> The sales discounts account is a contra revenue account that is deducted from gross sales at the end of a period in the calculation of net sales . <hl> Sales Discounts has a normal debit balance , which offsets Sales that has a normal credit balance .", "hl_sentences": "The sales discounts account is a contra revenue account that is deducted from gross sales at the end of a period in the calculation of net sales .", "question": { "cloze_format": "___ is an example of a contra revenue account.", "normal_format": "Which of the following is an example of a contra revenue account?", "question_choices": [ "sales", "merchandise inventory", "sales discounts", "accounts payable" ], "question_id": "fs-idm354124480", "question_text": "Which of the following is an example of a contra revenue account?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "accounts payable, merchandise inventory" }, "bloom": null, "hl_context": "<hl> A retailer typically conducts business with a manufacturer or with a supplier who buys from a manufacturer . <hl> The retailer will purchase their finished goods for resale . When the purchase occurs , the retailer may pay for the merchandise with cash or on credit . <hl> If the retailer pays for the merchandise with cash , they would be trading one current asset , Cash , for another current asset , Merchandise Inventory or just Inventory , depending upon the company ’ s account titles . <hl> <hl> In this example , they would record a debit entry to Merchandise Inventory and a credit entry to Cash . <hl> <hl> If they decide to pay on credit , a liability would be created , and Accounts Payable would be credited rather than Cash . <hl> For example , a clothing store may pay a jeans manufacturer cash for 50 pairs of jeans , costing $ 25 each . The following entry would occur .", "hl_sentences": "A retailer typically conducts business with a manufacturer or with a supplier who buys from a manufacturer . If the retailer pays for the merchandise with cash , they would be trading one current asset , Cash , for another current asset , Merchandise Inventory or just Inventory , depending upon the company ’ s account titles . In this example , they would record a debit entry to Merchandise Inventory and a credit entry to Cash . If they decide to pay on credit , a liability would be created , and Accounts Payable would be credited rather than Cash .", "question": { "cloze_format": "The accounts that are used to recognize a retailer’s purchase from a manufacturer on credit are ___.", "normal_format": "What accounts are used to recognize a retailer’s purchase from a manufacturer on credit?", "question_choices": [ "accounts receivable, merchandise inventory", "accounts payable, merchandise inventory", "accounts payable, cash", "sales, accounts receivable" ], "question_id": "fs-idm335148704", "question_text": "What accounts are used to recognize a retailer’s purchase from a manufacturer on credit?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "accounts receivable, sales returns and allowances" }, "bloom": null, "hl_context": "<hl> When a customer returns the merchandise , a retailer issues a credit memo to acknowledge the change in contract and reduction to Accounts Receivable , if applicable . 
<hl> The retailer records an entry acknowledging the return by reducing either Cash or Accounts Receivable and increasing Sales Returns and Allowances . Cash would decrease if the customer had already paid for the merchandise and cash was thus refunded to the customer . Accounts Receivable would decrease if the customer had not yet paid on their account . Like Sales Discounts , the sales returns and allowances account is a contra revenue account with a normal debit balance that reduces the gross sales figure at the end of the period . <hl> If a customer purchases merchandise and is dissatisfied with their purchase , they may receive a refund or a partial refund , depending on the situation . <hl> <hl> When the customer returns merchandise and receives a full refund , it is considered a sales return . <hl> <hl> When the customer keeps the defective merchandise and is given a partial refund , it is considered a sales allowance . <hl> The biggest difference is that a customer returns merchandise in a sales return and keeps the merchandise in a sales allowance .", "hl_sentences": "When a customer returns the merchandise , a retailer issues a credit memo to acknowledge the change in contract and reduction to Accounts Receivable , if applicable . If a customer purchases merchandise and is dissatisfied with their purchase , they may receive a refund or a partial refund , depending on the situation . When the customer returns merchandise and receives a full refund , it is considered a sales return . When the customer keeps the defective merchandise and is given a partial refund , it is considered a sales allowance .", "question": { "cloze_format": "If a customer purchases merchandise on credit and returns the defective merchandise before payment, the accounts that would recognize this transaction are ___.", "normal_format": "If a customer purchases merchandise on credit and returns the defective merchandise before payment, what accounts would recognize this transaction?", "question_choices": [ "sales discount, cash", "sales returns and allowances, cash", "accounts receivable, sales discount", "accounts receivable, sales returns and allowances" ], "question_id": "fs-idm340621808", "question_text": "If a customer purchases merchandise on credit and returns the defective merchandise before payment, what accounts would recognize this transaction?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> While each inventory system has its own advantages and disadvantages , the more popular system is the perpetual inventory system . <hl> <hl> The ability to have real-time data to make decisions , the constant update to inventory , and the integration to point-of-sale systems , outweigh the cost and time investments needed to maintain the system . <hl> ( While our main coverage focuses on recognition under the perpetual inventory system , Appendix : Analyze and Record Transactions for Merchandise Purchases and Sales Using the Periodic Inventory System discusses recognition under the periodic inventory system . )", "hl_sentences": "While each inventory system has its own advantages and disadvantages , the more popular system is the perpetual inventory system . 
The ability to have real-time data to make decisions , the constant update to inventory , and the integration to point-of-sale systems , outweigh the cost and time investments needed to maintain the system .", "question": { "cloze_format": "A disadvantage of the perpetual inventory system is that ___ .", "normal_format": "Which of the following is a disadvantage of the perpetual inventory system?", "question_choices": [ "Inventory information is in real-time.", "Inventory is automatically updated.", "It allows managers to make current decisions about purchases, stock, and sales.", "It is cost-prohibitive." ], "question_id": "fs-idm209981376", "question_text": "Which of the following is a disadvantage of the perpetual inventory system?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "frequent physical inventory counts" }, "bloom": null, "hl_context": "<hl> While both the periodic and perpetual inventory systems require a physical count of inventory , periodic inventorying requires more physical counts to be conducted . <hl> <hl> This updates the inventory account more frequently to record exact costs . <hl> Knowing the exact costs earlier in an accounting cycle can help a company stay on budget and control costs .", "hl_sentences": "While both the periodic and perpetual inventory systems require a physical count of inventory , periodic inventorying requires more physical counts to be conducted . This updates the inventory account more frequently to record exact costs .", "question": { "cloze_format": "___ is/are an advantage of the periodic inventory system.", "normal_format": "Which of the following is an advantage of the periodic inventory system?", "question_choices": [ "frequent physical inventory counts", "cost prohibitive", "time consuming", "real-time information for managers" ], "question_id": "fs-idm205961216", "question_text": "Which of the following is an advantage of the periodic inventory system?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "At the end of the period , a perpetual inventory system will have the Merchandise Inventory account up-to-date ; the only thing left to do is to compare a physical count of inventory to what is on the books . <hl> A physical inventory count requires companies to do a manual “ stock-check ” of inventory to make sure what they have recorded on the books matches what they physically have in stock . <hl> <hl> Differences could occur due to mismanagement , shrinkage , damage , or outdated merchandise . <hl> Shrinkage is a term used when inventory or other assets disappear without an identifiable reason , such as theft . For a perpetual inventory system , the adjusting entry to show this difference follows . This example assumes that the merchandise inventory is overstated in the accounting records and needs to be adjusted downward to reflect the actual value on hand .", "hl_sentences": "A physical inventory count requires companies to do a manual “ stock-check ” of inventory to make sure what they have recorded on the books matches what they physically have in stock . 
Differences could occur due to mismanagement , shrinkage , damage , or outdated merchandise .", "question": { "cloze_format": "___ is not a reason for the physical inventory count to differ from what is recognized on the company’s books.", "normal_format": "Which of the following is not a reason for the physical inventory count to differ from what is recognized on the company’s books?", "question_choices": [ "mismanagement", "shrinkage", "damage", "sale of services to customers" ], "question_id": "fs-idm216332608", "question_text": "Which of the following is not a reason for the physical inventory count to differ from what is recognized on the company’s books?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "beginning inventory" }, "bloom": null, "hl_context": "Merchandise Inventory is a current asset account that houses all purchase costs associated with the transaction . This includes the cost of the merchandise , shipping charges , insurance fees , taxes , and any other costs that gets the products ready for sale . Gross purchases are defined as the original amount of the purchase without considering reductions for purchase discounts , returns , or allowances . Once the purchase reductions are adjusted at the end of a period , net purchases are calculated . <hl> Net purchases ( see Figure 6.6 ) equals gross purchases less purchase discounts , purchase returns , and purchase allowances . <hl>", "hl_sentences": "Net purchases ( see Figure 6.6 ) equals gross purchases less purchase discounts , purchase returns , and purchase allowances .", "question": { "cloze_format": "The ___ is/are not included when computing Net Purchases.", "normal_format": "Which of the following is not included when computing Net Purchases?", "question_choices": [ "purchase discounts", "beginning inventory", "purchase returns", "purchase allowances" ], "question_id": "fs-idm216349200", "question_text": "Which of the following is not included when computing Net Purchases?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "When a sale occurs under perpetual inventory systems , two entries are required : one to recognize the sale , and the other to recognize the cost of sale . For the cost of sale , Merchandise Inventory and Cost of Goods Sold are updated . Under periodic inventory systems , this cost of sale entry does not exist . <hl> The recognition of merchandise cost only occurs at the end of the period when adjustments are made and temporary accounts are closed . <hl> <hl> When a purchase discount is applied under a perpetual inventory system , Merchandise Inventory decreases for the discount amount . <hl> Under a periodic inventory system , Purchase Discounts ( a temporary , contra account ) , increases for the discount amount and Merchandise Inventory remains unchanged . <hl> A purchase return or allowance under perpetual inventory systems updates Merchandise Inventory for any decreased cost . <hl> Under periodic inventory systems , a temporary account , Purchase Returns and Allowances , is updated . Purchase Returns and Allowances is a contra account and is used to reduce Purchases .", "hl_sentences": "The recognition of merchandise cost only occurs at the end of the period when adjustments are made and temporary accounts are closed . When a purchase discount is applied under a perpetual inventory system , Merchandise Inventory decreases for the discount amount . 
A purchase return or allowance under perpetual inventory systems updates Merchandise Inventory for any decreased cost .", "question": { "cloze_format": "The accounts used when recording a purchase are ___.", "normal_format": "Which of the following accounts are used when recording a purchase?", "question_choices": [ "cash, merchandise inventory", "accounts payable, merchandise inventory", "A or B", "cash, accounts payable" ], "question_id": "fs-idm263257952", "question_text": "Which of the following accounts are used when recording a purchase?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> To recognize a return or allowance , the retailer will reduce Accounts Payable ( or increase Cash ) and reduce Merchandise Inventory . <hl> Accounts Payable decreases if the retailer has yet to pay on their account , and Cash increases if they had already paid and received a subsequent refund . Merchandise Inventory decreases to show the reduction of inventory cost from the retailer ’ s inventory stock . Note that if a retailer receives a refund before they make a payment , any discount taken must be from the new cost of the merchandise less the refund .", "hl_sentences": "To recognize a return or allowance , the retailer will reduce Accounts Payable ( or increase Cash ) and reduce Merchandise Inventory .", "question": { "cloze_format": "The accounts that recognize this return before the retailer remits payment to the manufacturer are the ___ .", "normal_format": "A retailer returns $400 worth of inventory to a manufacturer and receives a full refund. What accounts recognize this return before the retailer remits payment to the manufacturer?", "question_choices": [ "accounts payable, merchandise inventory", "accounts payable, cash", "cash, merchandise inventory", "merchandise inventory, cost of goods sold" ], "question_id": "fs-idm267704672", "question_text": "A retailer returns $400 worth of inventory to a manufacturer and receives a full refund. What accounts recognize this return before the retailer remits payment to the manufacturer?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Whether or not a customer pays with cash or credit , a business must record two accounting entries . <hl> <hl> One entry recognizes the sale and the other recognizes the cost of the sale . <hl> <hl> The sales entry consists of a debit to either Cash or Accounts Receivable ( if paying on credit ) , and a credit to the revenue account , Sales . <hl>", "hl_sentences": "Whether or not a customer pays with cash or credit , a business must record two accounting entries . One entry recognizes the sale and the other recognizes the cost of the sale . The sales entry consists of a debit to either Cash or Accounts Receivable ( if paying on credit ) , and a credit to the revenue account , Sales .", "question": { "cloze_format": "The accounts that are used when recording the sales entry of a sale on credit are ___.", "normal_format": "Which of the following accounts are used when recording the sales entry of a sale on credit?", "question_choices": [ "merchandise inventory, cash", "accounts receivable, merchandise inventory", "accounts receivable, sales", "sales, cost of goods sold" ], "question_id": "fs-idm243268432", "question_text": "Which of the following accounts are used when recording the sales entry of a sale on credit?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> If FOB destination point is listed on the purchase contract , this means the seller pays the shipping charges ( freight-out ) . <hl> <hl> This also means goods in transit belong to , and are the responsibility of , the seller . <hl> <hl> The point of transfer is when the goods reach the buyer ’ s place of business . <hl>", "hl_sentences": "If FOB destination point is listed on the purchase contract , this means the seller pays the shipping charges ( freight-out ) . This also means goods in transit belong to , and are the responsibility of , the seller . The point of transfer is when the goods reach the buyer ’ s place of business .", "question": { "cloze_format": "___ is not a characteristic of FOB Destination.", "normal_format": "Which of the following is not a characteristic of FOB Destination?", "question_choices": [ "The seller pays for shipping.", "The seller owns goods in transit.", "The point of transfer is when the goods leave the seller’s place of business.", "The point of transfer is when the goods arrive at the buyer’s place of business." ], "question_id": "fs-idm257828416", "question_text": "Which of the following is not a characteristic of FOB Destination?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> If FOB shipping point is listed on the purchase contract , this means the buyer pays the shipping charges ( freight-in ) . <hl> <hl> This also means goods in transit belong to , and are the responsibility of , the buyer . <hl> <hl> The point of transfer is when the goods leave the seller ’ s place of business . <hl>", "hl_sentences": "If FOB shipping point is listed on the purchase contract , this means the buyer pays the shipping charges ( freight-in ) . This also means goods in transit belong to , and are the responsibility of , the buyer . The point of transfer is when the goods leave the seller ’ s place of business .", "question": { "cloze_format": "___ is not a characteristic of FOB Shipping Point.", "normal_format": "Which of the following is not a characteristic of FOB Shipping Point?", "question_choices": [ "The buyer pays for shipping.", "The buyer owns goods in transit.", "The point of transfer is when the goods leave the seller’s place of business.", "The point of transfer is when the goods arrive at the buyer’s place of business." ], "question_id": "fs-idm252635808", "question_text": "Which of the following is not a characteristic of FOB Shipping Point?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "separates cost of goods sold from operating expenses" }, "bloom": null, "hl_context": "A multi-step income statement is more detailed than a simple income statement . Because of the additional detail , it is the option selected by many companies whose operations are more complex . Each revenue and expense account is listed individually under the appropriate category on the statement . <hl> The multi-step statement separates cost of goods sold from operating expenses and deducts cost of goods sold from net sales to obtain a gross margin . 
<hl>", "hl_sentences": "The multi-step statement separates cost of goods sold from operating expenses and deducts cost of goods sold from net sales to obtain a gross margin .", "question": { "cloze_format": "A multi-step income statement ________.", "normal_format": "What is a multi-step income statement?", "question_choices": [ "separates cost of goods sold from operating expenses", "considers interest revenue an operating activity", "is another name for a simple income statement", "combines cost of goods sold and operating expenses" ], "question_id": "fs-idm202944864", "question_text": "A multi-step income statement ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> Operating expenses are daily operational costs not associated with the direct selling of products or services . <hl> <hl> Operating expenses are broken down into selling expenses ( such as advertising and marketing expenses ) and general and administrative expenses ( such as office supplies expense , and depreciation of office equipment ) . <hl> Deducting the operating expenses from gross margin produces income from operations . A multi-step income statement is more detailed than a simple income statement . Because of the additional detail , it is the option selected by many companies whose operations are more complex . Each revenue and expense account is listed individually under the appropriate category on the statement . <hl> The multi-step statement separates cost of goods sold from operating expenses and deducts cost of goods sold from net sales to obtain a gross margin . <hl>", "hl_sentences": "Operating expenses are daily operational costs not associated with the direct selling of products or services . Operating expenses are broken down into selling expenses ( such as advertising and marketing expenses ) and general and administrative expenses ( such as office supplies expense , and depreciation of office equipment ) . The multi-step statement separates cost of goods sold from operating expenses and deducts cost of goods sold from net sales to obtain a gross margin .", "question": { "cloze_format": "The account that would be reported under operating expenses on a multi-step income statement is ___.", "normal_format": "Which of the following accounts would be reported under operating expenses on a multi-step income statement?", "question_choices": [ "sales", "advertising expense", "sales returns and allowances", "interest expense" ], "question_id": "fs-idm209770256", "question_text": "Which of the following accounts would be reported under operating expenses on a multi-step income statement?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "combines all revenues into one category" }, "bloom": null, "hl_context": "A simple income statement is less detailed than the multi-step format . <hl> A simple income statement combines all revenues into one category , followed by all expenses , to produce net income . 
<hl> There are very few individual accounts and the statement does not consider cost of sales separate from operating expenses .", "hl_sentences": "A simple income statement combines all revenues into one category , followed by all expenses , to produce net income .", "question": { "cloze_format": "A simple income statement ________.", "normal_format": "Which of the following is correct about a simple income statement?", "question_choices": [ "combines all revenues into one category", "does not combine all expenses into one category", "separates cost of goods sold from operating expenses", "separates revenues into several categories" ], "question_id": "fs-idm207096704", "question_text": "A simple income statement ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> A simple income statement is less detailed than the multi-step format . <hl> A simple income statement combines all revenues into one category , followed by all expenses , to produce net income . <hl> There are very few individual accounts and the statement does not consider cost of sales separate from operating expenses . <hl>", "hl_sentences": "A simple income statement is less detailed than the multi-step format . There are very few individual accounts and the statement does not consider cost of sales separate from operating expenses .", "question": { "cloze_format": "The account that would not be reported under revenue on a simple income statement is the ___ .", "normal_format": "Which of the following accounts would not be reported under revenue on a simple income statement?", "question_choices": [ "interest revenue", "net sales", "rent revenue", "operating expenses" ], "question_id": "fs-idm210285664", "question_text": "Which of the following accounts would not be reported under revenue on a simple income statement?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "cash, purchases" }, "bloom": null, "hl_context": "A purchase return or allowance under perpetual inventory systems updates Merchandise Inventory for any decreased cost . <hl> Under periodic inventory systems , a temporary account , Purchase Returns and Allowances , is updated . <hl> Purchase Returns and Allowances is a contra account and is used to reduce Purchases . There are some key differences between perpetual and periodic inventory systems . When a company uses the perpetual inventory system and makes a purchase , they will automatically update the Merchandise Inventory account . <hl> Under a periodic inventory system , Purchases will be updated , while Merchandise Inventory will remain unchanged until the company counts and verifies its inventory balance . <hl> This count and verification typically occur at the end of the annual accounting period , which is often on December 31 of the year . The Merchandise Inventory account balance is reported on the balance sheet while the Purchases account is reported on the Income Statement when using the periodic inventory method . The Cost of Goods Sold is reported on the Income Statement under the perpetual inventory method . <hl> A perpetual inventory system automatically updates and records the inventory account every time a sale , or purchase of inventory , occurs . <hl> <hl> You can consider this “ recording as you go . ” The recognition of each sale or purchase happens immediately upon sale or purchase . 
<hl> <hl> A periodic inventory system updates and records the inventory account at certain , scheduled times at the end of an operating cycle . <hl> The update and recognition could occur at the end of the month , quarter , and year . There is a gap between the sale or purchase of inventory and when the inventory activity is recognized .", "hl_sentences": "Under periodic inventory systems , a temporary account , Purchase Returns and Allowances , is updated . Under a periodic inventory system , Purchases will be updated , while Merchandise Inventory will remain unchanged until the company counts and verifies its inventory balance . A perpetual inventory system automatically updates and records the inventory account every time a sale , or purchase of inventory , occurs . You can consider this “ recording as you go . ” The recognition of each sale or purchase happens immediately upon sale or purchase . A periodic inventory system updates and records the inventory account at certain , scheduled times at the end of an operating cycle .", "question": { "cloze_format": "The accounts that are used when recording a purchase using a periodic inventory system are ___.", "normal_format": "Which of the following accounts are used when recording a purchase using a periodic inventory system?", "question_choices": [ "cash, purchases", "accounts payable, sales", "accounts payable, accounts receivable", "cash, merchandise inventory" ], "question_id": "fs-idm312856048", "question_text": "Which of the following accounts are used when recording a purchase using a periodic inventory system?" }, "references_are_paraphrase": 0 } ]
6
6.1 Compare and Contrast Merchandising versus Service Activities and Transactions Every week, you run errands for your household. These errands may include buying products and services from local retailers, such as gas, groceries, and clothing. As a consumer, you are focused solely on purchasing your items and getting home to your family. You are probably not thinking about how your purchases impact the businesses you frequent. Whether the business is a service or a merchandising company, it tracks sales from customers, purchases from manufacturers or other suppliers, and costs that affect its everyday operations. There are some key differences between these business types in the manner and detail required for transaction recognition. Comparison of Merchandising Transactions versus Service Transactions Some of the biggest differences between a service company and a merchandising company are what they sell, their typical financial transactions, their operating cycles, and how these translate to financial statements. A service company provides intangible services to customers and does not have inventory. Some examples of service companies include lawyers, doctors, consultants, and accountants. Service companies often have simple financial transactions that involve taking customer deposits, billing clients after services have been provided, providing the service, and processing payments. These activities may occur frequently within a company’s accounting cycle and make up a portion of the service company’s operating cycle. An operating cycle is the amount of time it takes a company to use its cash to provide a product or service and collect payment from the customer. Completing this cycle faster puts the company in a more stable financial position. A typical operating cycle for a service company begins with having cash available, providing service to a customer, and then receiving cash from the customer for the service (Figure 6.2). The income statement format is fairly simple as well (see Figure 6.3). Revenues (sales) are reported first, followed by any period operating expenses. The outcome of sales less expenses, which is net income (loss), is calculated from these accounts. A merchandising company resells finished goods (inventory) produced by a manufacturer (supplier) to customers. Some examples of merchandising companies include Walmart, Macy’s, and Home Depot. Merchandising companies have financial transactions that include purchasing merchandise, paying for merchandise, storing inventory, selling merchandise, and collecting customer payments. A typical operating cycle for a merchandising company starts with having cash available, purchasing inventory, selling the merchandise to customers, and finally collecting payment from customers (Figure 6.4). Their income statement format is a bit more complicated than for a service company and is discussed in greater detail in Describe and Prepare Multi-Step and Simple Income Statements for Merchandising Companies. Note that unlike a service company, the merchandiser, sometimes also labeled a retailer, must first resolve any sales reductions and merchandise costs, known as Cost of Goods Sold, before determining other expenses and net income (loss). A simple retailer income statement is shown in Figure 6.5 for comparison, and a brief computational sketch contrasting the two formats follows this overview. Characteristics of Merchandising Transactions Merchandising transactions are separated into two categories: purchases and sales.
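To make the two income statement formats concrete before turning to purchases and sales, here is a minimal Python sketch. The dollar figures are hypothetical and are not drawn from the chapter; only the structure of the computation follows the formats described above.

```python
# Net income under the two income statement formats described above.
# All dollar amounts are hypothetical, illustrative figures.

def service_net_income(revenues, operating_expenses):
    # Service format: revenues first, then period operating expenses.
    return revenues - operating_expenses

def merchandiser_net_income(net_sales, cost_of_goods_sold, operating_expenses):
    # Merchandising format: sales reductions and merchandise costs (COGS)
    # are resolved before other expenses are deducted.
    gross_profit = net_sales - cost_of_goods_sold
    return gross_profit - operating_expenses

print(service_net_income(50_000, 30_000))               # 20000
print(merchandiser_net_income(50_000, 22_000, 18_000))  # 10000
```

The merchandiser’s extra step, netting Cost of Goods Sold against sales before other expenses, is exactly what the purchase and sales transactions below are designed to capture.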
In general, a purchase transaction occurs between a manufacturer and the merchandiser, also called a retailer. A sales transaction occurs between a customer and the merchandiser or retailer. We will now discuss the characteristics that create purchase and sales transactions for a retailer. A merchandiser will need to purchase merchandise for its business to continue operations and can use several purchase situations to accomplish this. Purchases with Cash or on Credit A retailer typically conducts business with a manufacturer or with a supplier who buys from a manufacturer. The retailer will purchase their finished goods for resale. When the purchase occurs, the retailer may pay for the merchandise with cash or on credit. If the retailer pays for the merchandise with cash, they would be trading one current asset, Cash, for another current asset, Merchandise Inventory or just Inventory, depending upon the company’s account titles. In this example, they would record a debit entry to Merchandise Inventory and a credit entry to Cash. If they decide to pay on credit, a liability would be created, and Accounts Payable would be credited rather than Cash. For example, a clothing store may pay a jeans manufacturer cash for 50 pairs of jeans, costing $25 each. The following entry would occur: a debit of $1,250 (50 × $25) to Merchandise Inventory and a credit of $1,250 to Cash. If this same company decides to purchase merchandise on credit, Accounts Payable is credited instead of Cash. Merchandise Inventory is a current asset account that houses all purchase costs associated with the transaction. This includes the cost of the merchandise, shipping charges, insurance fees, taxes, and any other costs that get the products ready for sale. Gross purchases are defined as the original amount of the purchase without considering reductions for purchase discounts, returns, or allowances. Once the purchase reductions are adjusted at the end of a period, net purchases are calculated. Net purchases (see Figure 6.6) equals gross purchases less purchase discounts, purchase returns, and purchase allowances. Purchase Discounts If a retailer pays on credit, they will work out payment terms with the manufacturer. These payment terms establish the purchase cost, an invoice date, any discounts, shipping charges, and the final payment due date. Purchase discounts provide an incentive for the retailer to pay early on their accounts by offering a reduced rate on the final purchase cost. Receiving payment in a timely manner allows the manufacturer to free up cash for other business opportunities and decreases the risk of nonpayment. To describe the discount terms, the manufacturer can write descriptions such as 2/10, n/30 on the invoice. The “2” represents a discount rate of 2%, the “10” represents the discount period in days, and the “n/30” means “net 30 days,” the entire payment period with no discount applied. So, “2/10, n/30” reads as: “The company will receive a 2% discount on their purchase if they pay in 10 days. Otherwise, they have 30 days from the date of the sale to pay in full, with no discount received.” In some cases, if the retailer exceeds the full payment period (30 days in this example), the manufacturer may charge interest as a penalty for late payment. The number of days allowed for both the discount period and the full payment period begins counting from the invoice date. If a merchandiser pays an invoice within the discount period, they receive a discount, which affects the cost of the inventory. Let’s say a retailer pays within the discount window.
They would need to show a credit to the Merchandise Inventory account, recognizing the decreased final cost of the merchandise. This aligns with the cost principle, which requires a company to record an asset’s value at the cost of acquisition. In addition, since cash is used to pay the manufacturer, Cash is credited. The debit to Accounts Payable does not reflect the discount taken: it reflects fulfillment of the liability in full, while the credits to Merchandise Inventory and Cash together reflect the discount taken, as demonstrated in the following example. If the retailer does not pay within the discount window, they do not receive a discount but are still required to pay the full invoice price at the end of the term. In this case, Accounts Payable is debited and Cash is credited, but no reductions are made to Merchandise Inventory. For example, suppose a kitchen appliances retailer purchases merchandise for their store from a manufacturer on September 1 in the amount of $1,600. Credit terms are 2/10, n/30 from the invoice date of September 1. The retailer makes payment on September 5 and receives the discount. The following entry occurs: Accounts Payable is debited for $1,600, Merchandise Inventory is credited for the $32 discount ($1,600 × 2%), and Cash is credited for the remaining $1,568. Let’s consider the same situation except the retailer did not make the discount window and paid in full on September 30. The entry would instead recognize a $1,600 debit to Accounts Payable and a $1,600 credit to Cash. There are two kinds of purchase discounts: cash discounts and trade discounts. A cash discount provides a discount on the final price after purchase if a retailer pays within a discount window. On the other hand, a trade discount is a reduction to the advertised manufacturer’s price that occurs during negotiations of a final purchase price before the inventory is purchased. The trade discount may become larger if the retailer purchases more in one transaction. While the cash discount is recognized in journal entries, a trade discount is not, since it is negotiated before purchase. For example, assume that a retailer is considering an order for $4,000 in inventory on September 1. The manufacturer offers the retailer a 15% discount on the price if they place the order by September 5. Assume that the retailer places the $4,000 order on September 3. The purchase price would be $4,000 less the 15% discount of $600, or $3,400. Since the trade discount is based on when the order was placed and not on any potential payment discounts, the initial journal entry to record the purchase would reflect the discounted amount of $3,400. Even if the retailer receives a trade discount, they may still be eligible for an additional purchase discount if they pay within the discount window of the invoice.
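A minimal Python sketch summarizes this discount arithmetic, using the figures from the two examples above. The function and variable names are illustrative only, not part of any prescribed accounting treatment.

```python
# Settling a $1,600 invoice under 2/10, n/30 terms, as in the kitchen
# appliances example above.

invoice = 1_600
discount_rate, discount_days = 0.02, 10   # the "2/10" part of "2/10, n/30"

def cash_paid(days_after_invoice):
    # Within the discount window, the buyer pays the invoice less the
    # discount; Merchandise Inventory is credited for the difference.
    if days_after_invoice <= discount_days:
        return invoice * (1 - discount_rate)
    return invoice  # full invoice price; no reduction to Merchandise Inventory

print(cash_paid(4))   # paid September 5 -> 1568.0
print(cash_paid(29))  # paid September 30 -> 1600

# A trade discount, by contrast, is negotiated before purchase and sets the
# amount initially recorded: the $4,000 order above at 15% off.
print(4_000 * (1 - 0.15))  # 3400.0
```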
Purchase Returns and Allowances If a retailer is unhappy with their purchase—for example, if the order is incorrect or if the products are damaged—they may receive a partial or full refund from the manufacturer in a purchase returns and allowances transaction. A purchase return occurs when merchandise is returned and a full refund is issued. A purchase allowance occurs when merchandise is kept and a partial refund is issued. In either case, a manufacturer will issue a debit memo to acknowledge the change in contract terms and the reduction in the amount owed. To recognize a return or allowance, the retailer will reduce Accounts Payable (or increase Cash) and reduce Merchandise Inventory. Accounts Payable decreases if the retailer has yet to pay on their account, and Cash increases if they had already paid and received a subsequent refund. Merchandise Inventory decreases to show the reduction of inventory cost from the retailer’s inventory stock. Note that if a retailer receives a refund before making payment, any discount taken must be computed on the new cost of the merchandise, that is, the original cost less the refund. To illustrate, assume that Carter Candle Company received a shipment of 150 candles from a manufacturer at a total cost of $150 ($1 per candle). Assume that they have not yet paid for these candles, and that 100 of the candles are badly damaged and must be returned. The other 50 candles are marketable but are not the right style. The candle company returned the 100 defective candles for a full refund and requested and received an allowance of $20 for the 50 wrong-style candles they kept. The first entry shows the return and the second entry shows the allowance. It is possible to show these entries as one, since they affect the same accounts and were requested at the same time. From a manager’s standpoint, though, it may be better to record these as separate transactions to better understand the specific reasons for the reduction to inventory (either return or allowance) and restocking needs. Ethical Considerations Internal Controls over Merchandise Returns 1 Returning merchandise requires more than an accountant making journal entries or a clerk restocking items in a warehouse or store. An ethical accountant understands that there must be internal controls governing the return of items. As used in accounting, the term “internal control” describes the methodology of implementing accounting and operational checkpoints in a system to ensure compliance with sound business and operational practices while permitting the proper recording of accounting information. All transactions require both operational and accounting actions to ensure that the amounts have been recorded in the accounting records and that operational requirements have been met. Merchandise return controls require a separation of duties between the employee approving the return and the person recording the return of merchandise in the accounting records. Basically, the person performing the return should not be the person recording the event in the accounting records. This is called separation of duties and is just one example of an internal control that should be used when merchandise is returned. Every company faces different challenges with returns, but one of the most common involves fake or fictitious returns. The use of internal controls is a protective action the company undertakes, with the assistance of professional accountants, to ensure that fictitious returns do not occur. The internal controls may include prescribed actions of employees, special tags on merchandise, specific store layouts that ensure customers pass checkout points before leaving the store, cameras to record activity in the facility, and other activities and internal controls that go beyond accounting and journal entries to ensure that the assets of a company are protected. 1 Committee of Sponsoring Organizations of the Treadway Commission (COSO). Internal Control—Integrated Framework. May 2013. https://na.theiia.org/standards-guidance/topics/Documents/Executive_Summary.pdf
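Returning to the Carter Candle Company example above, here is a minimal Python sketch of the two reductions. Holding the amounts in plain variables is purely illustrative; the journal treatment is as described in that example.

```python
# Carter Candle Company: purchase return and allowance, per the example above.
# 150 candles cost $150 in total, so $1 per candle.

unit_cost = 1.00
return_refund = 100 * unit_cost   # full refund on the 100 damaged candles returned
allowance = 20.00                 # partial refund on the 50 wrong-style candles kept

# Both events credit Merchandise Inventory; because the account is unpaid,
# Accounts Payable is debited rather than Cash being received.
total_reduction = return_refund + allowance
remaining_payable = 150.00 - total_reduction
print(total_reduction, remaining_payable)  # 120.0 30.0
```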
Characteristics of Sales Transactions Business owners may encounter several sales situations that can help meet customer needs and control inventory operations. For example, some customers will expect the opportunity to buy using short-term credit and often will assume that they will receive a discount for paying within a brief period. The mechanics of sales discounts are demonstrated later in this section. Sales with Cash or on Credit As previously mentioned, a sale is usually considered a transaction between a merchandiser or retailer and a customer. When a sale occurs, a customer has the option to pay with cash or credit. For our purposes, let’s consider “credit” as credit extended from the business directly to the customer. Whether a customer pays with cash or credit, a business must record two accounting entries. One entry recognizes the sale and the other recognizes the cost of the sale. The sales entry consists of a debit to either Cash or Accounts Receivable (if paying on credit) and a credit to the revenue account, Sales. The amount recorded in the Sales account is the gross amount. Gross sales is the original amount of the sale without factoring in any possible reductions for discounts, returns, or allowances. Once those reductions are recorded at the end of a period, net sales are calculated. Net sales (see Figure 6.7) equals gross sales less sales discounts, sales returns, and sales allowances. Recording the sale as it occurs allows the company to align with the revenue recognition principle. The revenue recognition principle requires companies to record revenue when it is earned, and revenue is earned when a product or service has been provided. The second accounting entry that is made during a sale describes the cost of sales. The cost of sales entry includes decreasing Merchandise Inventory and increasing Cost of Goods Sold (COGS). The decrease to Merchandise Inventory reflects the reduction in the inventory account value due to the sold merchandise. The increase to COGS represents the expense associated with the sale. The cost of goods sold (COGS) is an expense account that houses all costs associated with getting the product ready for sale. This could include purchase costs, shipping, taxes, insurance, stocking fees, and overhead related to preparing the product for sale. By recording the cost of sale when the sale occurs, the company aligns with the matching principle. The matching principle requires companies to match revenues generated with related expenses in the period in which they are incurred. For example, when a shoe store sells 150 pairs of athletic cleats to a local baseball league for $1,500 (cost of $900), the league may pay with cash or credit. If the baseball league elects to pay with cash, the shoe store would debit Cash as part of the sales entry. If the baseball league decides to use a line of credit extended by the shoe store, the shoe store would debit Accounts Receivable as part of the sales entry instead of Cash. With the sales entry, the shoe store must also recognize the $900 cost of the shoes sold and the $900 reduction in Merchandise Inventory. You may have noticed that sales tax has not been discussed as part of the sales entry. Sales taxes are liabilities that require a portion of every sales dollar to be remitted to a government entity. This would reduce the amount of cash the company keeps after the sale. Sales tax is relevant to consumer sales and is discussed in detail in Current Liabilities. There are a few transactional situations that may occur after a sale is made that have an effect on reported sales at the end of a period.
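Before turning to those post-sale situations, a minimal Python sketch restates the dual entry just described, using the shoe store figures. Modeling an entry as (account, debit, credit) tuples is an illustrative convention of this sketch, not a prescribed format.

```python
# The dual sales entry from the shoe store example above: a $1,500 sale of
# cleats that cost $900.

sale_price, cost = 1_500, 900

sales_entry = [
    ("Cash", sale_price, 0),   # or Accounts Receivable if sold on credit
    ("Sales", 0, sale_price),  # credit revenue for the gross amount
]

cost_entry = [
    ("Cost of Goods Sold", cost, 0),     # expense recognized with the sale
    ("Merchandise Inventory", 0, cost),  # inventory reduced for goods sold
]

# Each entry balances: total debits equal total credits.
for entry in (sales_entry, cost_entry):
    assert sum(debit for _, debit, _ in entry) == sum(credit for _, _, credit in entry)
```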
Sales Discounts Sales discounts are incentives given to customers to entice them to pay off their accounts early. Why would a retailer offer this? Wouldn’t they rather receive the entire amount owed? The discount serves several purposes that are similar to the rationale manufacturers consider when offering discounts to retailers. It can help solidify a long-term relationship with the customer, encourage the customer to purchase more, and decrease the time it takes for the company to see a liquid asset (cash). Cash can be used for other purposes immediately, such as reinvesting in the business, paying down loans more quickly, and distributing dividends to shareholders. This can help grow the business at a more rapid rate. Similar to credit terms between a retailer and a manufacturer, a customer could see credit terms offered by the retailer in the form of 2/10, n/30. This particular example shows that if a customer pays their account within 10 days, they will receive a 2% discount. Otherwise, they have 30 days to pay in full but do not receive a discount. If the customer does not pay within the discount window, but pays within 30 days, the retailing company records a credit to Accounts Receivable and a debit to Cash for the full amount stated on the invoice. If the customer is able to pay the account within the discount window, the company records a credit to Accounts Receivable, a debit to Cash, and a debit to Sales Discounts. The sales discounts account is a contra revenue account that is deducted from gross sales at the end of a period in the calculation of net sales. Sales Discounts has a normal debit balance, which offsets Sales, which has a normal credit balance. Let’s assume that a customer purchased 10 emergency kits from a retailer at $100 per kit on credit. The retailer offered the customer 2/10, n/30 terms, and the customer paid within the discount window. The retailer recorded the following entry for the initial sale. Since the retailer doesn’t know at the point of sale whether or not the customer will qualify for the sales discount, the entire account receivable of $1,000 is recorded in the retailer’s journal. Also assume that the retailer’s cost of goods sold in this example was $560 and that we are using the perpetual inventory method. The journal entry to record the sale of the inventory follows the entry for the sale to the customer. Since the customer paid the account in full within the discount qualification period of ten days, the following journal entry on the retailer’s books reflects the payment. Now, assume that the customer paid the retailer within the 30-day period but did not qualify for the discount. The following entry reflects the payment without the discount. Please note that the entire $1,000 account receivable created is eliminated under both payment options. When the discount is missed, the retailer receives the entire $1,000. When the customer takes the discount, however, the retailer receives $980, and the remaining $20 is recorded in the Sales Discounts account. Ethical Considerations Ethical Discounts Should employees or companies provide discounts to employees of other organizations? An accountant’s employing organization usually has a code of ethics or conduct that addresses policies for employee discounts. While many companies offer their employees discounts as a benefit, some companies also offer discounts or free products to non-employees who work for governmental organizations.
Accountants may need to work in situations where other entities’ codes of ethics/conduct do not permit employees to accept discounts or free merchandise. What should the accountant’s company do when an outside organization’s code of ethics and conduct does not permit its employees to accept discounts or free merchandise? The long-term benefits of offering discounts must be weighed against organizational codes of ethics and conduct that prohibit others from accepting discounts from your organization. The International Association of Chiefs of Police’s Law Enforcement Code of Ethics limits the ability of police officers to accept discounts. 2 These discounts may be as simple as a free cup of coffee, other gifts, rewards points, and hospitality points or discounts for employees or family members of the governmental organization’s employees. Providing discounts may create ethical dilemmas. The ethical dilemma may not arise from the accountant’s employer, but from the employer of the person outside the organization receiving the discount. 2 International Association of Chiefs of Police (IACP). Law Enforcement Code of Ethics. October 1957. https://www.theiacp.org/resources/law-enforcement-code-of-ethics The World Customs Organization’s Model Code of Ethics and Conduct states that “customs employees are called upon to use their best judgment to avoid situations of real or perceived conflict. In doing so, they should consider the following criteria on gifts, hospitality and other benefits, bearing in mind the full context of this Code. Public servants shall not accept or solicit any gifts, hospitality or other benefits that may have a real or apparent influence on their objectivity in carrying out their official duties or that may place them under obligation to the donor.” 3 3 World Customs Organization. Model Code of Ethics and Conduct. n.d. http://www.wcoomd.org/~/media/wco/public/global/pdf/topics/integrity/instruments-and-tools/model-code-of-ethics-and-conduct.pdf?la=en At issue is that the employee of the outside organization is placed in a conflict between their personal interests and the interest of their employer. The accountant’s employer’s discount has created this conflict. In these situations, it is best for the accountant’s employer to respect the other organization’s code of conduct. In addition, it might be illegal for the accountant’s employer to provide discounts to a governmental organization’s employees. The professional accountant should always be aware of the discount policy of any outside company prior to providing discounts to the employees of other companies or organizations. Sales Returns and Allowances If a customer purchases merchandise and is dissatisfied with their purchase, they may receive a refund or a partial refund, depending on the situation. When the customer returns merchandise and receives a full refund, it is considered a sales return. When the customer keeps the defective merchandise and is given a partial refund, it is considered a sales allowance. The biggest difference is that a customer returns merchandise in a sales return and keeps the merchandise in a sales allowance. When a customer returns the merchandise, a retailer issues a credit memo to acknowledge the change in contract and reduction to Accounts Receivable, if applicable. The retailer records an entry acknowledging the return by reducing either Cash or Accounts Receivable and increasing Sales Returns and Allowances.
Cash would decrease if the customer had already paid for the merchandise and cash was thus refunded to the customer. Accounts Receivable would decrease if the customer had not yet paid on their account. Like Sales Discounts, the sales returns and allowances account is a contra revenue account with a normal debit balance that reduces the gross sales figure at the end of the period. Beyond recording the return, the retailer must also determine if the returned merchandise is in “sellable condition.” An item is in sellable condition if the merchandise is good enough to warrant a sale to another customer in the future. If so, the company would record a decrease to Cost of Goods Sold (COGS) and an increase to Merchandise Inventory to return the merchandise to inventory for resale. This is recorded at the merchandise’s cost of goods sold value. If the merchandise is in sellable condition but will not realize the original cost of the good, the company must estimate the loss at this time. On the other hand, when the merchandise is returned and is not in sellable condition, the retailer must estimate the value of the merchandise in its current condition and record a loss. This would increase Merchandise Inventory for the assessed value of the merchandise in its current state, decrease COGS for the original expense amount associated with the sale, and increase Loss on Defective Merchandise for the lost value of the unsellable merchandise. Let’s say a customer purchases 300 plants on credit from a nursery for $3,000 (with a cost of $1,200). The first entry reflects the initial sale by the nursery. The second entry reflects the cost of goods sold. Upon receipt, the customer discovers the plants have been infested with bugs, and they send all the plants back. Assuming that the customer had not yet paid the nursery any of the $3,000 accounts receivable and assuming that the nursery determines the condition of the returned plants to be sellable, the retailer would record the following entries. For another example, let’s say the plant customer was only dissatisfied with 100 of the plants. After speaking with the nursery, the customer returns the 100 unsatisfactory plants and keeps the other 200, receiving a partial refund of $1,000. The nursery would record the following entry for the sales allowance associated with the 100 plants, along with a corresponding entry for the inventory and the cost of goods sold of the 100 returned plants. For both the return and the allowance, if the customer had already paid their account in full, Cash would be affected rather than Accounts Receivable. There are differing opinions as to whether sales returns and allowances should be in separate accounts. Separating the accounts would help a retailer distinguish between items that are returned and those that the customer kept. This can help identify quality control issues, track whether a customer was satisfied with their purchase, and report how many resources are spent on processing returns. Most companies choose to combine returns and allowances into one account, but from a manager’s perspective, it may be easier to have the accounts separated to make current determinations about inventory. You may have noticed our discussion of credit sales did not include third-party credit card transactions. This is when a customer pays with a credit or debit card from a third party, such as Visa, MasterCard, Discover, or American Express. These entries and discussion are covered in more advanced accounting courses.
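To tie the post-sale mechanics of this section together, here is a minimal Python sketch of the emergency kit settlement from the sales discounts discussion above; under either outcome the full $1,000 receivable is eliminated. Names are illustrative only.

```python
# Settling the $1,000 receivable from the emergency kit example under
# 2/10, n/30 terms.

receivable = 1_000
discount_rate, discount_days = 0.02, 10

def settlement(days_after_invoice):
    # Returns (cash received, amount debited to Sales Discounts).
    if days_after_invoice <= discount_days:
        discount = receivable * discount_rate
        return receivable - discount, discount
    return receivable, 0.0

print(settlement(8))   # within the window -> (980.0, 20.0)
print(settlement(25))  # missed the window -> (1000, 0.0)
```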
A more comprehensive example of merchandising purchase and sale transactions occurs in Calculate Activity-Based Product Costs and Compare and Contrast Traditional and Activity-Based Costing Systems, applying the perpetual inventory method. Link to Learning Major retailers must find new ways to manage inventory and reduce operating cycles to stay competitive. Companies such as Amazon.com Inc. have been able to reduce their operating cycles and increase their receivable collection rates to a level better than many of their nearest competitors. Check out Stock Analysis on Net to find out how they do this and to see a comparison of operating cycles for top retail brands. 6.2 Compare and Contrast Perpetual versus Periodic Inventory Systems There are two ways in which a company may account for its inventory: it can use a perpetual or a periodic inventory system. Let’s look at the characteristics of these two systems. Characteristics of the Perpetual and Periodic Inventory Systems A perpetual inventory system automatically updates and records the inventory account every time a sale or purchase of inventory occurs. You can consider this “recording as you go.” The recognition of each sale or purchase happens immediately upon sale or purchase. A periodic inventory system updates and records the inventory account at certain scheduled times, at the end of an operating cycle. The update and recognition could occur at the end of the month, quarter, or year. There is a gap between the sale or purchase of inventory and when the inventory activity is recognized. Generally Accepted Accounting Principles (GAAP) do not state a required inventory system, but the periodic inventory system uses a Purchases account to meet the requirements for recognition under GAAP. IFRS requirements are very similar. The main difference is that, under IFRS, assets are valued at net realizable value and can be increased or decreased as values change. Under GAAP, once values are reduced they cannot be increased again. Continuing Application Merchandising Transactions Gearhead Outfitters is a retailer of outdoor-related gear such as clothing, footwear, backpacks, and camping equipment. Therefore, one of the biggest assets on Gearhead’s balance sheet is inventory. The proper presentation of inventory in a company’s books leads to a number of accounting challenges, such as: What method of accounting for inventory is appropriate? How often should inventory be counted? How will inventory in the books be valued? Is any of the inventory obsolete and, if so, how will it be accounted for? Is all inventory included in the books? Are items included as inventory in the books that should not be? Proper application of accounting principles is vital to keep accurate books and records. In accounting for inventory, the matching principle, valuation, cutoff, completeness, and cost flow assumptions are all important. Did Gearhead match the cost of sale with the sale itself? Was only inventory that belonged to the company as of the period end date included? Did Gearhead count all the inventory? Perhaps some goods were in transit (on a delivery truck for a sale just made, or en route to Gearhead). What is the correct cost flow assumption for Gearhead to accurately account for inventory? Should it use a first-in, first-out method or a last-in, first-out method? These are all accounting challenges Gearhead faces with respect to inventory.
As inventory will represent one of the largest items on the balance sheet, it is vital that Gearhead management take due care with decisions related to inventory accounting. Keeping in mind considerations such as gross profit, inventory turnover, meeting demand, point-of-sale systems, and timeliness of accounting information, what other accounting challenges might arise regarding the company’s inventory accounting processes? Inventory Systems Comparison There are some key differences between perpetual and periodic inventory systems. When a company uses the perpetual inventory system and makes a purchase, they will automatically update the Merchandise Inventory account. Under a periodic inventory system, Purchases will be updated, while Merchandise Inventory will remain unchanged until the company counts and verifies its inventory balance. This count and verification typically occur at the end of the annual accounting period, often December 31. The Merchandise Inventory account balance is reported on the balance sheet, while the Purchases account is reported on the income statement when using the periodic inventory method. Cost of Goods Sold is reported on the income statement under the perpetual inventory method. A purchase return or allowance under perpetual inventory systems updates Merchandise Inventory for any decreased cost. Under periodic inventory systems, a temporary account, Purchase Returns and Allowances, is updated. Purchase Returns and Allowances is a contra account and is used to reduce Purchases. When a purchase discount is applied under a perpetual inventory system, Merchandise Inventory decreases for the discount amount. Under a periodic inventory system, Purchase Discounts (a temporary contra account) increases for the discount amount and Merchandise Inventory remains unchanged. When a sale occurs under perpetual inventory systems, two entries are required: one to recognize the sale and the other to recognize the cost of sale. For the cost of sale, Merchandise Inventory and Cost of Goods Sold are updated. Under periodic inventory systems, this cost of sale entry does not exist. The recognition of merchandise cost only occurs at the end of the period, when adjustments are made and temporary accounts are closed. When a sales return occurs, perpetual inventory systems require recognition of the inventory’s condition. This means a decrease to COGS and an increase to Merchandise Inventory. Under periodic inventory systems, only the sales return is recognized, but not the inventory condition entry. A sales allowance and sales discount follow the same recording formats under either perpetual or periodic inventory systems. Adjusting and Closing Entries for a Perpetual Inventory System You have already explored adjusting entries and the closing process in prior discussions, but merchandising activities require additional adjusting and closing entries to inventory, sales discounts, returns, and allowances. Here, we’ll briefly discuss these additional closing entries and adjustments as they relate to the perpetual inventory system. At the end of the period, a perpetual inventory system will have the Merchandise Inventory account up to date; the only thing left to do is to compare a physical count of inventory to what is on the books. A physical inventory count requires companies to do a manual “stock-check” of inventory to make sure what they have recorded on the books matches what they physically have in stock.
Differences could occur due to mismanagement, shrinkage, damage, or outdated merchandise. Shrinkage is a term used when inventory or other assets disappear without an identifiable reason, such as theft. For a perpetual inventory system, the adjusting entry to show this difference debits COGS and credits Merchandise Inventory. This treatment assumes that the merchandise inventory is overstated in the accounting records and needs to be adjusted downward to reflect the actual value on hand. If a physical count determines that merchandise inventory is understated in the accounting records, the adjustment is reversed: Merchandise Inventory is increased with a debit entry and COGS is reduced with a credit entry. To sum up the potential adjustment process, after the merchandise inventory has been verified with a physical count, its book value is adjusted upward or downward to reflect the actual inventory on hand, with an accompanying adjustment to COGS. Not only must an adjustment to Merchandise Inventory occur at the end of a period, but closure of temporary merchandising accounts to prepare them for the next period is required. Temporary accounts requiring closure are Sales, Sales Discounts, Sales Returns and Allowances, and Cost of Goods Sold. Sales will close with the temporary credit balance accounts to Income Summary. Sales Discounts, Sales Returns and Allowances, and Cost of Goods Sold will close with the temporary debit balance accounts to Income Summary. Note that for a periodic inventory system, the end-of-period adjustments require an update to COGS. To determine the value of Cost of Goods Sold, the business will have to look at the beginning inventory balance, purchases, purchase returns and allowances, discounts, and the ending inventory balance. The formula to compute COGS is:

Cost of Goods Sold = Beginning Inventory + Net Purchases – Ending Inventory

where:

Net Purchases = Purchases – Purchase Returns and Allowances – Purchase Discounts

Once the COGS balance has been established, an adjustment is made to Merchandise Inventory and COGS, and COGS is closed to prepare for the next period. Table 6.1 summarizes the differences between the perpetual and periodic inventory systems.

Table 6.1 Perpetual and Periodic Transaction Comparison
Purchase of Inventory: perpetual system records the cost to the Inventory account; periodic system records the cost to the Purchases account.
Purchase Return or Allowance: perpetual system records an update to Inventory; periodic system records to Purchase Returns and Allowances.
Purchase Discount: perpetual system records an update to Inventory; periodic system records to Purchase Discounts.
Sale of Merchandise: perpetual system records two entries, one for the sale and one for the cost of sale; periodic system records one entry, for the sale only.
Sales Return: perpetual system records two entries, one for the sales return and one for the cost of the inventory returned; periodic system records one entry for the sales return, with the cost not recognized.
Sales Allowance: same under both systems.
Sales Discount: same under both systems.
There are several differences in account recognition between the perpetual and periodic inventory systems.

There are advantages and disadvantages to both the perpetual and periodic inventory systems. Concepts In Practice Point-of-Sale Systems Advancements in point-of-sale (POS) systems have simplified the once tedious task of inventory management. POS systems connect with inventory management programs to make real-time data available to help streamline business operations. The cost of inventory management decreases with this connection tool, allowing all businesses to stay current with technology without “breaking the bank.” One such POS system is Square.
Square accepts many payment types and updates accounting records every time a sale occurs through a cloud-based application. Square, Inc. has expanded its product offerings to include Square for Retail POS. This enhanced product allows businesses to connect sales and inventory costs immediately. A business can easily create purchase orders, develop reports for cost of goods sold, manage inventory stock, and update discounts, returns, and allowances. With this application, customers have payment flexibility, and businesses can make timely decisions that positively affect growth. Advantages and Disadvantages of the Perpetual Inventory System The perpetual inventory system gives real-time updates and keeps a constant flow of inventory information available for decision-makers. With advancements in point-of-sale technologies, inventory is updated automatically and transferred into the company’s accounting system. This allows managers to make decisions about inventory purchases, stocking, and sales. The information can be more robust, with exact purchase costs, sales prices, and dates known. Although a periodic physical count of inventory is still required, a perpetual inventory system may reduce the number of times physical counts are needed. The biggest disadvantages of using a perpetual inventory system arise from the resource constraints for cost and time. It is costly to keep an automatic inventory system up to date. This may prohibit smaller or less established companies from investing in the required technologies. The time commitment to train and retrain staff to update inventory is considerable. In addition, since there are fewer physical counts of inventory, the figures recorded in the system may be drastically different from inventory levels in the actual warehouse. A company may not have correct inventory stock and could make financial decisions based on incorrect data. Advantages and Disadvantages of the Periodic Inventory System The periodic inventory system is often less expensive and less time consuming than perpetual inventory systems. This is because there is no constant maintenance of inventory records or training and retraining of employees to upkeep the system. The complexity of a perpetual system, by contrast, can make it difficult to identify the cost justification associated with the inventory function. While both the periodic and perpetual inventory systems require a physical count of inventory, periodic inventorying requires more physical counts to be conducted. This updates the inventory account more frequently to record exact costs. Knowing the exact costs earlier in an accounting cycle can help a company stay on budget and control costs. However, the need for frequent physical counts of inventory can suspend business operations each time this is done. There are more chances for shrinkage and for damaged or obsolete merchandise to go unnoticed, because inventory is not constantly monitored. Since there is no constant monitoring, it may be more difficult to make in-the-moment business decisions about inventory needs. While each inventory system has its own advantages and disadvantages, the more popular system is the perpetual inventory system. The ability to have real-time data to make decisions, the constant update to inventory, and the integration with point-of-sale systems outweigh the cost and time investments needed to maintain the system.
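Since the periodic system defers cost recognition, the COGS formula from the closing-entry discussion above is worth seeing with numbers. A minimal Python sketch, with hypothetical figures:

```python
# Periodic-system COGS, using the formula given in the closing-entry
# discussion above. All figures are hypothetical.

beginning_inventory = 25_000
purchases = 60_000
purchase_returns_and_allowances = 2_000
purchase_discounts = 1_000
ending_inventory = 20_000

net_purchases = purchases - purchase_returns_and_allowances - purchase_discounts
cost_of_goods_sold = beginning_inventory + net_purchases - ending_inventory
print(net_purchases, cost_of_goods_sold)  # 57000 62000
```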
(While our main coverage focuses on recognition under the perpetual inventory system, Appendix: Analyze and Record Transactions for Merchandise Purchases and Sales Using the Periodic Inventory System discusses recognition under the periodic inventory system.) Think It Through Comparing Inventory Systems Your company uses a perpetual inventory system to control its operations. It checks physical inventory only once every six months. At the six-month physical count, an employee notices several inventory items missing and many damaged units. The company records show an inventory balance of $300,000, but the actual physical count values inventory at $200,000. This is a significant difference in valuation and has jeopardized the future of the company. As a manager, how might you avoid this large discrepancy in the future? Would a change in inventory systems benefit the company? Are you constrained by any resources? 6.3 Analyze and Record Transactions for Merchandise Purchases Using the Perpetual Inventory System The following example transactions and subsequent journal entries for merchandise purchases are recognized using a perpetual inventory system. The periodic inventory system recognition of these example transactions and corresponding journal entries are shown in Appendix: Analyze and Record Transactions for Merchandise Purchases and Sales Using the Periodic Inventory System. Basic Analysis of Purchase Transaction Journal Entries To better illustrate merchandising activities, let’s follow California Business Solutions (CBS), a retailer providing electronic hardware packages to meet small business needs. Each electronics hardware package (see Figure 6.9) contains a desktop computer, tablet computer, landline telephone, and a 4-in-1 desktop printer with a printer, copier, scanner, and fax machine. CBS purchases each electronic product from a manufacturer. The following are the per-item purchase prices from the manufacturer. Cash and Credit Purchase Transaction Journal Entries On April 1, CBS purchases 10 electronic hardware packages at a cost of $620 each. CBS has enough cash on hand to pay immediately with cash. The following entry occurs. Merchandise Inventory-Packages increases (debit) for $6,200 ($620 × 10), and Cash decreases (credit) because the company paid with cash. It is important to distinguish each inventory item type to better track inventory needs. On April 7, CBS purchases 30 desktop computers on credit at a cost of $400 each. The credit terms are n/15 with an invoice date of April 7. The following entry occurs. Merchandise Inventory-Desktop Computers increases (debit) for the value of the computers, $12,000 ($400 × 30). Since the computers were purchased on credit by CBS, Accounts Payable increases (credit). On April 17, CBS makes full payment on the amount due from the April 7 purchase. The following entry occurs. Accounts Payable decreases (debit) and Cash decreases (credit) for the full amount owed. The credit terms were n/15, which is net due in 15 days. No discount was offered with this transaction, thus the full payment of $12,000 occurs. Purchase Discount Transaction Journal Entries On May 1, CBS purchases 67 tablet computers at a cost of $60 each on credit. The payment terms are 5/10, n/30, and the invoice is dated May 1. The following entry occurs. Merchandise Inventory-Tablet Computers increases (debit) in the amount of $4,020 (67 × $60).
Accounts Payable also increases (credit), but the credit terms are a little different from those in the previous example. These credit terms include a discount opportunity (5/10), meaning CBS has 10 days from the invoice date to pay on their account to receive a 5% discount on their purchase. On May 10, CBS pays their account in full. The following entry occurs. Accounts Payable decreases (debit) for the original amount owed of $4,020 before any discounts are taken. Since CBS paid on May 10, they made the 10-day window and thus received a discount of 5%. Cash decreases (credit) for the amount owed, less the discount. Merchandise Inventory-Tablet Computers decreases (credit) for the amount of the discount ($4,020 × 5%). Merchandise Inventory decreases to align with the cost principle, reporting the value of the merchandise at the reduced cost. Let’s take the same example purchase with the same credit terms, but now CBS paid their account on May 25. The following entry would occur instead. Accounts Payable decreases (debit) and Cash decreases (credit) for $4,020. The company paid on their account outside of the discount window but within the total allotted timeframe for payment. CBS does not receive a discount in this case but does pay in full and on time. Purchase Returns and Allowances Transaction Journal Entries On June 1, CBS purchased 300 landline telephones with cash at a cost of $60 each. On June 3, CBS discovers that 25 of the phones are the wrong color and returns the phones to the manufacturer for a full refund. The following entries occur with the purchase and subsequent return. Both Merchandise Inventory-Phones increases (debit) and Cash decreases (credit) by $18,000 ($60 × 300). Since CBS already paid in full for their purchase, a full cash refund is issued. This increases Cash (debit) and decreases (credit) Merchandise Inventory-Phones because the merchandise has been returned to the manufacturer or supplier. On June 8, CBS discovers that 60 more phones from the June 1 purchase are slightly damaged. CBS decides to keep the phones but receives a purchase allowance from the manufacturer of $8 per phone. The following entry occurs for the allowance. Since CBS already paid in full for their purchase, a cash refund of the allowance is issued in the amount of $480 (60 × $8). This increases Cash (debit) and decreases (credit) Merchandise Inventory-Phones because the merchandise is less valuable than before the damage discovery. CBS purchases 80 units of the 4-in-1 desktop printers at a cost of $100 each on July 1 on credit. Terms of the purchase are 5/15, n/40, with an invoice date of July 1. On July 6, CBS discovers 15 of the printers are damaged and returns them to the manufacturer for a full refund. The following entries show the purchase and subsequent return. Both Merchandise Inventory-Printers increases (debit) and Accounts Payable increases (credit) by $8,000 ($100 × 80). Both Accounts Payable decreases (debit) and Merchandise Inventory-Printers decreases (credit) by $1,500 (15 × $100). The purchase was on credit and the return occurred before payment, thus decreasing Accounts Payable. Merchandise Inventory decreases due to the return of the merchandise back to the manufacturer. On July 10, CBS discovers that 4 more printers from the July 1 purchase are slightly damaged but decides to keep them, with the manufacturer issuing an allowance of $30 per printer. The following entry recognizes the allowance.
Both Accounts Payable decreases (debit) and Merchandise Inventory-Printers decreases (credit) by $120 (4 × $30). The purchase was on credit and the allowance occurred before payment, thus decreasing Accounts Payable. Merchandise Inventory decreases due to the loss in value of the merchandise. On July 15, CBS pays their account in full, less purchase returns and allowances. The following payment entry occurs. Accounts Payable decreases (debit) for the amount owed, less the return of $1,500 and the allowance of $120 ($8,000 – $1,500 – $120). Since CBS paid on July 15, they made the 15-day window, thus receiving a discount of 5%. Cash decreases (credit) for the amount owed, less the discount. Merchandise Inventory-Printers decreases (credit) for the amount of the discount ($6,380 × 5%). Merchandise Inventory decreases to align with the cost principle, reporting the value of the merchandise at the reduced cost. Summary of Purchase Transaction Journal Entries The chart in Figure 6.10 represents the journal entry requirements based on various merchandising purchase transactions using the perpetual inventory system. Note that Figure 6.10 considers an environment in which inventory physical counts and matching book records align. This is not always the case given concerns with shrinkage (theft), damages, or obsolete merchandise. In this circumstance, an adjustment is recorded to inventory to account for the differences between the physical count and the amount represented on the books. Your Turn Recording a Retailer’s Purchase Transactions Record the journal entries for the following purchase transactions of a retailer. Dec. 3 Purchased $500 worth of inventory on credit with terms 2/10, n/30, and invoice dated December 3. Dec. 6 Returned $150 worth of damaged inventory to the manufacturer and received a full refund. Dec. 9 Paid the account in full. Solution Link to Learning Bean Counter is a website that offers free, fun, and interactive games, simulations, and quizzes about accounting. You can “Fling the Teacher,” “Walk the Plank,” and play “Basketball” while learning the fundamentals of accounting topics. Check out Bean Counter to see what you can learn. 6.4 Analyze and Record Transactions for the Sale of Merchandise Using the Perpetual Inventory System The following example transactions and subsequent journal entries for merchandise sales are recognized using a perpetual inventory system. The periodic inventory system recognition of these example transactions and corresponding journal entries are shown in Appendix: Analyze and Record Transactions for Merchandise Purchases and Sales Using the Periodic Inventory System. Basic Analysis of Sales Transaction Journal Entries Let’s continue to follow California Business Solutions (CBS) and their sales of electronic hardware packages to business customers. As previously stated, each package contains a desktop computer, tablet computer, landline telephone, and a 4-in-1 printer. CBS sells each hardware package for $1,200. They offer their customers the option of purchasing extra individual hardware items for every electronic hardware package purchase. Figure 6.11 lists the products CBS sells to customers; the prices are per package and per unit. Cash and Credit Sales Transaction Journal Entries On July 1, CBS sells 10 electronic hardware packages to a customer at a sales price of $1,200 each. The customer pays immediately with cash. The following entries occur.
In the first entry, Cash increases (debit) and Sales increases (credit) for the selling price of the packages, $12,000 ($1,200 × 10). In the second entry, the cost of the sale is recognized. COGS increases (debit) and Merchandise Inventory-Packages decreases (credit) for the cost of the packages, $6,200 ($620 × 10). On July 7, CBS sells 20 desktop computers to a customer on credit. The credit terms are n/15 with an invoice date of July 7. The following entries occur. Since the computers were purchased on credit by the customer, Accounts Receivable increases (debit) and Sales increases (credit) for the selling price of the computers, $15,000 ($750 × 20). In the second entry, Merchandise Inventory-Desktop Computers decreases (credit), and COGS increases (debit) for the cost of the computers, $8,000 ($400 × 20). On July 17, the customer makes full payment on the amount due from the July 7 sale. The following entry occurs. Accounts Receivable decreases (credit) and Cash increases (debit) for the full amount owed. The credit terms were n/15, which is net due in 15 days. No discount was offered with this transaction; thus the full payment of $15,000 occurs. Sales Discount Transaction Journal Entries On August 1, a customer purchases 56 tablet computers on credit. The payment terms are 2/10, n/30, and the invoice is dated August 1. The following entries occur. In the first entry, both Accounts Receivable (debit) and Sales (credit) increase by $16,800 ($300 × 56). These credit terms are a little different than the earlier example. These credit terms include a discount opportunity (2/10), meaning the customer has 10 days from the invoice date to pay on their account to receive a 2% discount on their purchase. In the second entry, COGS increases (debit) and Merchandise Inventory–Tablet Computers decreases (credit) in the amount of $3,360 (56 × $60). On August 10, the customer pays their account in full. The following entry occurs. Since the customer paid on August 10, they made the 10-day window and received a discount of 2%. Cash increases (debit) for the amount paid to CBS, less the discount. Sales Discounts increases (debit) for the amount of the discount ($16,800 × 2%), and Accounts Receivable decreases (credit) for the original amount owed, before discount. Sales Discounts will reduce Sales at the end of the period to produce net sales. Let’s take the same example sale with the same credit terms, but now assume the customer paid their account on August 25. The following entry occurs. Cash increases (debit) and Accounts Receivable decreases (credit) by $16,800. The customer paid on their account outside of the discount window but within the total allotted timeframe for payment. The customer does not receive a discount in this case but does pay in full and on time. Your Turn Recording a Retailer’s Sales Transactions Record the journal entries for the following sales transactions by a retailer. Jan. 5 Sold $2,450 of merchandise on credit (cost of $1,000), with terms 2/10, n/30, and invoice dated January 5. Jan. 9 The customer returned $500 worth of slightly damaged merchandise to the retailer and received a full refund. The retailer returned the merchandise to its inventory at a cost of $130. Jan. 14 Account paid in full. Solution Sales Returns and Allowances Transaction Journal Entries On September 1, CBS sold 250 landline telephones to a customer who paid with cash. 
On September 3, the customer discovers that 40 of the phones are the wrong color and returns the phones to CBS in exchange for a full refund. CBS determines that the returned merchandise can be resold and returns the merchandise to inventory at its original cost. The following entries occur for the sale and subsequent return. In the first entry on September 1, Cash increases (debit) and Sales increases (credit) by $37,500 (250 × $150), the sales price of the phones. In the second entry, COGS increases (debit), and Merchandise Inventory-Phones decreases (credit) by $15,000 (250 × $60), the cost of the sale. Since the customer already paid in full for their purchase, a full cash refund is issued on September 3. This increases Sales Returns and Allowances (debit) and decreases Cash (credit) by $6,000 (40 × $150). The second entry on September 3 returns the phones back to inventory for CBS because they have determined the merchandise is in sellable condition at its original cost. Merchandise Inventory–Phones increases (debit) and COGS decreases (credit) by $2,400 (40 × $60). On September 8, the customer discovers that 20 more phones from the September 1 purchase are slightly damaged. The customer decides to keep the phones but receives a sales allowance from CBS of $10 per phone. The following entry occurs for the allowance. Since the customer already paid in full for their purchase, a cash refund of the allowance is issued in the amount of $200 (20 × $10). This increases (debit) Sales Returns and Allowances and decreases (credit) Cash. CBS does not have to consider the condition of the merchandise or return it to their inventory because the customer keeps the merchandise. A customer purchases 55 units of the 4-in-1 desktop printers on October 1 on credit. Terms of the sale are 10/15, n/40, with an invoice date of October 1. On October 6, the customer returned 10 of the printers to CBS for a full refund. CBS returns the printers to their inventory at the original cost. The following entries show the sale and subsequent return. In the first entry on October 1, Accounts Receivable increases (debit) and Sales increases (credit) by $19,250 (55 × $350), the sales price of the printers. Accounts Receivable is used instead of Cash because the customer purchased on credit. In the second entry, COGS increases (debit) and Merchandise Inventory–Printers decreases (credit) by $5,500 (55 × $100), the cost of the sale. The customer has not yet paid for their purchase as of October 6. Therefore, the return increases Sales Returns and Allowances (debit) and decreases Accounts Receivable (credit) by $3,500 (10 × $350). The second entry on October 6 returns the printers back to inventory for CBS because they have determined the merchandise is in sellable condition at its original cost. Merchandise Inventory–Printers increases (debit) and COGS decreases (credit) by $1,000 (10 × $100). On October 10, the customer discovers that 5 printers from the October 1 purchase are slightly damaged, but decides to keep them, and CBS issues an allowance of $60 per printer. The following entry recognizes the allowance. Sales Returns and Allowances increases (debit) and Accounts Receivable decreases (credit) by $300 (5 × $60). A reduction to Accounts Receivable occurs because the customer has yet to pay their account on October 10. CBS does not have to consider the condition of the merchandise or return it to their inventory because the customer keeps the merchandise. 
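Before the settlement entry that follows, a small Python sketch tallies the customer’s outstanding balance from the running October example and previews the discount computed next; variable names are illustrative only.

```python
# Running balance of the October printer sale above, up to the moment of
# payment. All figures come from the example.

gross_sale      = 55 * 350  # October 1 sale on credit -> 19,250
sales_return    = 10 * 350  # October 6 return -> 3,500
sales_allowance = 5 * 60    # October 10 allowance -> 300

receivable = gross_sale - sales_return - sales_allowance
discount = receivable * 0.10          # paid within the 15-day window ("10/15")
cash_received = receivable - discount

print(receivable, discount, cash_received)  # 15450 1545.0 13905.0
```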
On October 15, the customer pays their account in full, less sales returns and allowances. The following payment entry occurs. Accounts Receivable decreases (credit) for the original amount owed, less the return of $3,500 and the allowance of $300 ($19,250 – $3,500 – $300). Since the customer paid on October 15, they made the 15-day window, thus receiving a discount of 10%. Sales Discounts increases (debit) for the discount amount ($15,450 × 10%). Cash increases (debit) for the amount owed to CBS, less the discount.

Summary of Sales Transaction Journal Entries

The chart in Figure 6.12 represents the journal entry requirements based on various merchandising sales transactions.

Your Turn

Recording a Retailer's Sales Transactions

Record the journal entries for the following sales transactions of a retailer.

May 10 Sold $8,600 of merchandise on credit (cost of $2,650), with terms 5/10, n/30, and invoice dated May 10.
May 13 The customer returned $1,250 worth of slightly damaged merchandise to the retailer and received a full refund. The retailer returned the merchandise to its inventory at a cost of $380.
May 15 The customer discovered some merchandise was the wrong color and received an allowance from the retailer of $230.
May 20 The customer paid the account in full, less the return and allowance.

Solution

6.5 Discuss and Record Transactions Applying the Two Commonly Used Freight-In Methods

When you buy merchandise online, shipping charges are usually one of the negotiated terms of the sale. As a consumer, you welcome it anytime the business pays for shipping. For businesses, shipping charges bring both benefits and challenges, and the terms negotiated can have a significant impact on inventory operations.

IFRS Connection

Shipping Term Effects

Companies applying US GAAP as well as those applying IFRS can choose either a perpetual or periodic inventory system to track purchases and sales of inventory. While the tracking systems do not differ between the two methods, they have differences in when sales transactions are reported. If goods are shipped FOB shipping point, under IFRS the total selling price of the item would be allocated between the item sold (as sales revenue) and the shipping (as shipping revenue). Under US GAAP, the seller can elect whether the shipping costs will be an additional component of revenue (a separate performance obligation) or whether they will be considered fulfillment costs (expensed at the time of shipping as shipping expense). In an FOB destination scenario, the shipping costs would be considered a fulfillment activity and expensed as incurred rather than treated as a part of revenue under both IFRS and US GAAP.

Example

Wally's Wagons sells and ships 20 deluxe model wagons to Sam's Emporium for $5,000. Assume $400 of the total costs represents the costs of shipping the wagons and consider these two scenarios: (1) the wagons are shipped FOB shipping point or (2) the wagons are shipped FOB destination. If Wally's is applying IFRS, the $400 shipping is considered a separate performance obligation, or shipping revenue, and the other $4,600 is considered sales revenue. Both revenues are recorded at the time of shipping, and the $400 shipping revenue is offset by a shipping expense. If Wally's used US GAAP instead, they would choose between using the same treatment as described under IFRS or considering the costs of shipping to be costs of fulfilling the order and expensing those costs at the time they are incurred.
In this latter case, Wally's would record Sales Revenue of $5,000 at the time the wagons are shipped and $400 as shipping expense at the time of shipping. Notice that in both cases the total net revenues are the same $4,600, but the distribution of those revenues is different, which impacts analyses of sales revenue versus total revenues. What happens if the wagons are shipped FOB destination instead? Under both IFRS and US GAAP, the $400 shipping would be treated as an order fulfillment cost and recorded as an expense at the time the goods are shipped. Revenue of $5,000 would be recorded at the time the goods are received by Sam's Emporium.

Financial Statement Presentation of Cost of Goods Sold

IFRS allows greater flexibility in the presentation of financial statements, including the income statement. Under IFRS, expenses can be reported in the income statement either by nature (for example, rent, salaries, depreciation) or by function (such as COGS or Selling and Administrative). US GAAP has no specific requirements regarding the presentation of expenses, but the SEC requires that expenses be reported by function. Therefore, it may be more challenging to compare merchandising costs (cost of goods sold) across companies if one company's income statement shows expenses by function and another company shows them by nature.

The Basics of Freight-in Versus Freight-out Costs

Shipping is determined by contract terms between a buyer and seller. There are several key factors to consider when determining who pays for shipping and how it is recognized in merchandising transactions. The establishment of a transfer point and ownership indicates who pays the shipping charges, who is responsible for the merchandise, on whose balance sheet the assets would be recorded, and how to record the transaction for the buyer and seller.

Ownership of inventory refers to which party owns the inventory at a particular point in time: the buyer or the seller. One particularly important point in time is the point of transfer, when the responsibility for the inventory transfers from the seller to the buyer. Establishing ownership of inventory is important to determine who pays the shipping charges when the goods are in transit as well as the responsibility of each party when the goods are in their possession. Goods in transit refers to the time in which the merchandise is transported from the seller to the buyer (by way of delivery truck, for example). One party is responsible for the goods in transit and the costs associated with transportation. Determining whether this responsibility lies with the buyer or seller is critical to determining the reporting requirements of the retailer or merchandiser.

Freight-in refers to the shipping costs for which the buyer is responsible when receiving shipment from a seller, such as delivery and insurance expenses. When the buyer is responsible for shipping costs, they recognize this as part of the purchase cost. This means that the shipping costs stay with the inventory until it is sold. The cost principle requires this expense to stay with the merchandise, as it is part of getting the item ready for sale from the buyer's perspective. The shipping costs are held in inventory until the goods are sold, which means these costs are reported on the balance sheet in Merchandise Inventory. When the merchandise is sold, the shipping charges are transferred with all other inventory costs to Cost of Goods Sold on the income statement.
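Because freight-in is capitalized rather than expensed, the buyer's share of shipping raises the per-unit inventory cost. The following is a minimal Python sketch of that calculation (the function name is our own; the figures anticipate the CBS purchase described next):

```python
def capitalized_inventory_cost(units, unit_price, freight_in):
    """Freight-in is added to Merchandise Inventory (cost principle), so it
    reaches the income statement only when the goods are sold, via COGS."""
    total_cost = units * unit_price + freight_in
    return total_cost, total_cost / units

total, per_unit = capitalized_inventory_cost(30, 80, 1_000)
print(total, round(per_unit, 2))  # 3400, 113.33
```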
For example, California Business Solutions (CBS) may purchase 30 computers from a manufacturer for $80 each, and part of the agreement is that CBS (the buyer) pays the shipping costs of $1,000. CBS would record the following entry to recognize the purchase of the goods and the freight-in. Merchandise Inventory increases (debit), and Cash decreases (credit), for the entire cost of the purchase, including shipping, insurance, and taxes. On the balance sheet, the shipping charges would remain a part of inventory.

Freight-out refers to the costs for which the seller is responsible when shipping to a buyer, such as delivery and insurance expenses. When the seller is responsible for shipping costs, they recognize this as a delivery expense. The delivery expense is specifically associated with selling and not daily operations; thus, delivery expenses are typically recorded as a selling and administrative expense on the income statement in the current period. For example, CBS may sell electronics packages to a customer and agree to cover the $100 cost associated with shipping and insurance. CBS would record the following entry to recognize freight-out. Delivery Expense increases (debit) and Cash decreases (credit) for the shipping cost amount of $100. On the income statement, this $100 delivery expense will be grouped with Selling and Administrative expenses.

Link to Learning

Shipping term agreements provide clarity for buyers and sellers with regard to inventory responsibilities. Use the animation on FOB Shipping Point and FOB Destination to learn more.

Discussion and Application of FOB Destination

As you've learned, the seller and buyer will establish terms of purchase that include the purchase price, taxes, insurance, and shipping charges. So, who pays for shipping? On the purchase contract, shipping terms establish who owns inventory in transit, the point of transfer, and who pays for shipping. The shipping terms are known as "free on board," or simply FOB. Some refer to FOB as the point of transfer, but really it incorporates more than simply the point at which responsibility transfers. There are two FOB considerations: FOB Destination and FOB Shipping Point.

If FOB destination point is listed on the purchase contract, this means the seller pays the shipping charges (freight-out). This also means goods in transit belong to, and are the responsibility of, the seller. The point of transfer is when the goods reach the buyer's place of business. To illustrate, suppose CBS sells 30 landline telephones at $150 each on credit at a cost of $60 per phone. On the sales contract, FOB Destination is listed as the shipping terms, and shipping charges amount to $120, paid as cash directly to the delivery service. The following entries occur. Accounts Receivable (debit) and Sales (credit) increase for the amount of the sale (30 × $150). Cost of Goods Sold increases (debit) and Merchandise Inventory decreases (credit) for the cost of sale (30 × $60). Delivery Expense increases (debit) and Cash decreases (credit) for the delivery charge of $120.

Discussion and Application of FOB Shipping Point

If FOB shipping point is listed on the purchase contract, this means the buyer pays the shipping charges (freight-in). This also means goods in transit belong to, and are the responsibility of, the buyer. The point of transfer is when the goods leave the seller's place of business. Suppose CBS buys 40 tablet computers at $60 each on credit. The purchase contract shipping terms list FOB Shipping Point.
The shipping charges amount to an extra $5 per tablet computer. All other taxes, fees, and insurance are included in the purchase price of $60. The following entry occurs to recognize the purchase. Merchandise Inventory increases (debit) and Accounts Payable increases (credit) by the amount of the purchase, including all shipping, insurance, taxes, and fees [(40 × $60) + (40 × $5)]. Figure 6.14 shows a comparison of shipping terms.

Think It Through

Choosing Suitable Shipping Terms

You are a seller and conduct business with several customers who purchase your goods on credit. Your standard contract requires an FOB Shipping Point term, leaving the buyer with the responsibility for goods in transit and shipping charges. One of your long-term customers asks if you can change the terms to FOB Destination to help them save money. Do you change the terms? Why or why not? What positive and negative implications could this have for your business and your customer? What restrictions, if any, might you consider if you did change the terms?

6.6 Describe and Prepare Multi-Step and Simple Income Statements for Merchandising Companies

Merchandising companies prepare financial statements at the end of a period that include the income statement, balance sheet, statement of cash flows, and statement of retained earnings. The presentation format for many of these statements is left up to the business. For the income statement, this means a company could prepare the statement using a multi-step format or a simple format (also known as a single-step format). Companies must decide the format that best fits their needs.

Similarities and Differences between the Multi-Step and Simple Income Statement Format

A multi-step income statement is more detailed than a simple income statement. Because of the additional detail, it is the option selected by many companies whose operations are more complex. Each revenue and expense account is listed individually under the appropriate category on the statement. The multi-step statement separates cost of goods sold from operating expenses and deducts cost of goods sold from net sales to obtain a gross margin. Operating expenses are daily operational costs not associated with the direct selling of products or services. Operating expenses are broken down into selling expenses (such as advertising and marketing expenses) and general and administrative expenses (such as office supplies expense and depreciation of office equipment). Deducting the operating expenses from gross margin produces income from operations. Following income from operations are other revenue and expenses not obtained from selling goods or services or other daily operations. Examples of other revenue and expenses include interest revenue, gains or losses on sales of assets (buildings, equipment, and machinery), and interest expense. Other revenue and expenses added to (or deducted from) income from operations produces net income (loss).

A simple income statement is less detailed than the multi-step format. A simple income statement combines all revenues into one category, followed by all expenses, to produce net income. There are very few individual accounts, and the statement does not consider cost of sales separately from operating expenses.

Demonstration of the Multi-Step Income Statement Format

To demonstrate the use of the multi-step income statement format, let's continue to discuss California Business Solutions (CBS).
The following is select account data from the adjusted trial balance for the year ended December 31, 2018. We will use this information to create a multi-step income statement. Note that the statements prepared are using a perpetual inventory system. The following is the multi-step income statement for CBS.

Demonstration of the Simple Income Statement Format

We will use the same adjusted trial balance information for CBS but will now create a simple income statement. The following is the simple income statement for CBS.

Final Analysis of the Two Income Statement Options

While companies may choose the format that best suits their needs, some might choose a combination of both the multi-step and simple income statement formats. The multi-step income statement may be more beneficial for internal use and management decision-making because of the detail in account information. The simple income statement might be more appropriate for external use, as a summary for investors and lenders.

From the information obtained on the income statement, a company can make decisions related to growth strategies. One ratio that can help them in this process is the Gross Profit Margin Ratio. The gross profit margin ratio shows the margin of revenue above the cost of goods sold that can be used to cover operating expenses and profit. The larger the margin, the more availability the company has to reinvest in their business, pay down debt, and return dividends to shareholders. Taking our example from CBS, net sales equaled $293,500 and cost of goods sold equaled $180,000. Therefore, the Gross Profit Margin Ratio is computed as 0.39 (rounded to the nearest hundredth). This means that CBS has a margin of 39% to cover operating expenses and profit.

Gross profit margin ratio = ($293,500 – $180,000) ÷ $293,500 = 0.39, or 39%

Think It Through

Which Income Statement Format Do I Choose?

You are an accountant for a small retail store and are tasked with determining the best presentation for your income statement. You may choose to present it in a multi-step format or a simple income statement format. The information on the statement will be used by investors, lenders, and management to make financial decisions related to your company. It is important to the store owners that you give enough information to assist management with decision-making, but not so much information that it might deter investors or lenders. Which statement format do you choose? Why did you choose this format? What are the benefits and challenges of your statement choice for each stakeholder group?

Link to Learning

Target Brands, Inc. is an international retailer providing a variety of resale products to consumers. Target uses a multi-step income statement format, found in the Target Brands, Inc. annual report, to present information to external stakeholders.

6.7 Appendix: Analyze and Record Transactions for Merchandise Purchases and Sales Using the Periodic Inventory System

Some organizations choose to report merchandising transactions using a periodic inventory system rather than a perpetual inventory system. This requires different account usage, transaction recognition, adjustments, and closing procedures. We will not explore the entries for adjustment or closing procedures but will look at some of the common situations that occur with merchandising companies and how these transactions are reported using the periodic inventory system.
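To make the account difference concrete before the examples that follow, here is a minimal sketch contrasting the same cash purchase under the two systems. The tuple representation is our own illustration, and the $6,200 figure comes from the April 1 CBS purchase below:

```python
# (debit_account, credit_account, amount): an illustrative representation, ours
purchase = 6_200  # 10 packages × $620, the April 1 CBS example below

perpetual_entry = ("Merchandise Inventory-Packages", "Cash", purchase)
periodic_entry = ("Purchases-Packages", "Cash", purchase)

# Under the periodic system, cost of goods sold is recognized only at period end:
# COGS = beginning inventory + net purchases - ending inventory
```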
Merchandise Purchases

The following example transactions and subsequent journal entries for merchandise purchases are recognized using a periodic inventory system.

Basic Analysis of Purchase Transaction Journal Entries

To better illustrate merchandising activities under the periodic system, let's return to the example of California Business Solutions (CBS). CBS is a retailer providing electronic hardware packages to meet small business needs. Each electronics hardware package contains a desktop computer, tablet computer, landline telephone, and a 4-in-1 desktop printer with a printer, copier, scanner, and fax machine. CBS purchases each electronic product from a manufacturer. The per-item purchase prices from the manufacturer are shown.

Cash and Credit Purchase Transaction Journal Entries

On April 1, CBS purchases 10 electronic hardware packages at a cost of $620 each. CBS has enough cash on hand to pay immediately with cash. The following entry occurs. Purchases-Packages increases (debit) by $6,200 ($620 × 10), and Cash decreases (credit) by the same amount because the company paid with cash. Under a periodic system, Purchases is used instead of Merchandise Inventory.

On April 7, CBS purchases 30 desktop computers on credit at a cost of $400 each. The credit terms are n/15 with an invoice date of April 7. The following entry occurs. Purchases-Desktop Computers increases (debit) for the value of the computers, $12,000 ($400 × 30). Since the computers were purchased on credit by CBS, Accounts Payable increases (credit) instead of Cash.

On April 17, CBS makes full payment on the amount due from the April 7 purchase. The following entry occurs. Accounts Payable decreases (debit) and Cash decreases (credit) for the full amount owed. The credit terms were n/15, which is net due in 15 days. No discount was offered with this transaction; thus the full payment of $12,000 occurs.

Purchase Discount Transaction Journal Entries

On May 1, CBS purchases 67 tablet computers at a cost of $60 each on credit. Terms are 5/10, n/30, and the invoice is dated May 1. The following entry occurs. Purchases–Tablet Computers increases (debit) in the amount of $4,020 (67 × $60). Accounts Payable also increases (credit), but the credit terms are a little different from the earlier example. These credit terms include a discount opportunity (5/10), meaning that CBS has 10 days from the invoice date to pay on their account to receive a 5% discount on their purchase.

On May 10, CBS pays their account in full. The following entry occurs. Accounts Payable decreases (debit) for the original amount owed of $4,020 before any discounts are taken. Since CBS paid on May 10, they made the 10-day window and received a discount of 5%. Cash decreases (credit) for the amount owed, less the discount. Purchase Discounts increases (credit) for the amount of the discount ($4,020 × 5%). Purchase Discounts is considered a contra account and will reduce Purchases at the end of the period.

Let's take the same example purchase with the same credit terms, but now assume that CBS paid their account on May 25. The following entry occurs. Accounts Payable decreases (debit) and Cash decreases (credit) for $4,020. The company paid on their account outside of the discount window but within the total allotted timeframe for payment. CBS does not receive a discount in this case but does pay in full and on time.

Purchase Returns and Allowances Transaction Journal Entries

On June 1, CBS purchased 300 landline telephones with cash at a cost of $60 each.
On June 3, CBS discovers that 25 of the phones are the wrong color and returns the phones to the manufacturer for a full refund. The following entries occur with the purchase and subsequent return. Purchases-Phones increases (debit) and Cash decreases (credit) by $18,000 ($60 × 300). Since CBS already paid in full for their purchase, a full cash refund is issued. This increases Cash (debit) and increases Purchase Returns and Allowances (credit) by $1,500 (25 × $60). Purchase Returns and Allowances is a contra account and decreases Purchases at the end of a period.

On June 8, CBS discovers that 60 more phones from the June 1 purchase are slightly damaged. CBS decides to keep the phones but receives a purchase allowance from the manufacturer of $8 per phone. The following entry occurs for the allowance. Since CBS already paid in full for their purchase, a cash refund of the allowance is issued in the amount of $480 (60 × $8). This increases Cash (debit) and increases Purchase Returns and Allowances (credit).

CBS purchases 80 units of the 4-in-1 desktop printers at a cost of $100 each on July 1 on credit. Terms of the purchase are 5/15, n/40, with an invoice date of July 1. On July 6, CBS discovers 15 of the printers are damaged and returns them to the manufacturer for a full refund. The following entries show the purchase and subsequent return. Purchases-Printers increases (debit) and Accounts Payable increases (credit) by $8,000 ($100 × 80). Accounts Payable decreases (debit) and Purchase Returns and Allowances increases (credit) by $1,500 (15 × $100). The purchase was on credit and the return occurred before payment; thus Accounts Payable is debited.

On July 10, CBS discovers that 4 more printers from the July 1 purchase are slightly damaged but decides to keep them because the manufacturer issues an allowance of $30 per printer. The following entry recognizes the allowance. Accounts Payable decreases (debit) and Purchase Returns and Allowances increases (credit) by $120 (4 × $30). The purchase was on credit and the allowance occurred before payment; thus Accounts Payable is debited.

On July 15, CBS pays their account in full, less purchase returns and allowances. The following payment entry occurs. Accounts Payable decreases (debit) for the amount owed, less the return of $1,500 and the allowance of $120 ($8,000 – $1,500 – $120). Since CBS paid on July 15, they made the 15-day window and received a discount of 5%. Cash decreases (credit) for the amount owed, less the discount. Purchase Discounts increases (credit) for the amount of the discount ($6,380 × 5%).

Summary of Purchase Transaction Journal Entries

The chart in Figure 6.16 represents the journal entry requirements based on various merchandising purchase transactions using the periodic inventory system.

Your Turn

Recording a Retailer's Purchase Transactions Using a Periodic Inventory System

Record the journal entries for the following purchase transactions of a retailer, using the periodic inventory system.

Dec. 3 Purchased $500 worth of inventory on credit with terms 2/10, n/30, and invoice dated December 3.
Dec. 6 Returned $150 worth of damaged inventory to the manufacturer and received a full refund.
Dec. 9 Paid the account in full, less the return.

Solution
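The arithmetic behind this solution can be checked quickly. The following is a minimal Python sketch (our own illustration, not the textbook's solution format):

```python
# Dec. 3 credit purchase, Dec. 6 return, paid Dec. 9 within the 2/10 window
invoice, purchase_return = 500, 150
net_payable = invoice - purchase_return   # 350 still owed
discount = net_payable * 0.02             # 7.00 credited to Purchase Discounts
cash_paid = net_payable - discount        # 343.00 credited to Cash
print(net_payable, discount, cash_paid)   # 350 7.0 343.0
```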
Merchandise Sales

The following example transactions and subsequent journal entries for merchandise sales are recognized using a periodic inventory system.

Basic Analysis of Sales Transaction Journal Entries

Let's continue to follow California Business Solutions (CBS) and the sale of electronic hardware packages to business customers. As previously stated, each package contains a desktop computer, tablet computer, landline telephone, and 4-in-1 printer. CBS sells each hardware package for $1,200. They offer their customers the option of purchasing extra individual hardware items for every electronic hardware package purchase. The following is the list of products CBS sells to customers; the prices are per package and per unit.

Cash and Credit Sales Transaction Journal Entries

On July 1, CBS sells 10 electronic packages to a customer at a sales price of $1,200 each. The customer pays immediately with cash. The following entry occurs. Cash increases (debit) and Sales increases (credit) by the selling price of the packages, $12,000 ($1,200 × 10). Unlike in the perpetual inventory system, there is no entry for the cost of the sale. This recognition occurs at the end of the period with an adjustment to Cost of Goods Sold.

On July 7, CBS sells 20 desktop computers to a customer on credit. The credit terms are n/15 with an invoice date of July 7. The following entry occurs. Since the computers were purchased on credit by the customer, Accounts Receivable increases (debit) and Sales increases (credit) by the selling price of the computers, $15,000 ($750 × 20).

On July 17, the customer makes full payment on the amount due from the July 7 sale. The following entry occurs. Accounts Receivable decreases (credit) and Cash increases (debit) by the full amount owed. The credit terms were n/15, which is net due in 15 days. No discount was offered with this transaction; thus the full payment of $15,000 occurs.

Sales Discount Transaction Journal Entries

On August 1, a customer purchases 56 tablet computers on credit. Terms are 2/10, n/30, and the invoice is dated August 1. The following entry occurs. Accounts Receivable increases (debit) and Sales increases (credit) by $16,800 ($300 × 56). These credit terms are a little different from the earlier example: they include a discount opportunity (2/10), meaning the customer has 10 days from the invoice date to pay on their account to receive a 2% discount on their purchase.

On August 10, the customer pays their account in full. The following entry occurs. Since the customer paid on August 10, they made the 10-day window and received a discount of 2%. Cash increases (debit) for the amount paid to CBS, less the discount. Sales Discounts increases (debit) by the amount of the discount ($16,800 × 2%), and Accounts Receivable decreases (credit) by the original amount owed, before discount. Sales Discounts will reduce Sales at the end of the period to produce net sales.

Let's take the same example sale with the same credit terms, but now assume that the customer paid their account on August 25. The following entry occurs. Cash increases (debit) and Accounts Receivable decreases (credit) by $16,800. The customer paid on their account outside of the discount window but within the total allotted timeframe for payment. The customer does not receive a discount in this case but does pay in full and on time.

Sales Returns and Allowances Transaction Journal Entries

On September 1, CBS sold 250 landline telephones to a customer who paid with cash. On September 3, the customer discovers that 40 of the phones are the wrong color and returns the phones to CBS in exchange for a full refund.
The following entries occur for the sale and subsequent return. Cash increases (debit) and Sales increases (credit) by $37,500 (250 × $150), the sales price of the phones. Since the customer already paid in full for their purchase, a full cash refund is issued on September 3. This increases Sales Returns and Allowances (debit) and decreases Cash (credit) by $6,000 (40 × $150). Unlike in the perpetual inventory system, CBS does not recognize the return of merchandise to inventory. Instead, CBS will make an adjustment to Merchandise Inventory at the end of the period.

On September 8, the customer discovers that 20 more phones from the September 1 purchase are slightly damaged. The customer decides to keep the phones but receives a sales allowance from CBS of $10 per phone. The following entry occurs for the allowance. Since the customer already paid in full for their purchase, a cash refund of the allowance is issued in the amount of $200 (20 × $10). This increases (debit) Sales Returns and Allowances and decreases (credit) Cash.

A customer purchases 55 units of the 4-in-1 desktop printers on October 1 on credit. Terms of the sale are 10/15, n/40, with an invoice date of October 1. On October 6, the customer discovers 10 of the printers are damaged and returns them to CBS for a full refund. The following entries show the sale and subsequent return. Accounts Receivable increases (debit) and Sales increases (credit) by $19,250 (55 × $350), the sales price of the printers. Accounts Receivable is used instead of Cash because the customer purchased on credit. The customer has not yet paid for their purchase as of October 6. This increases Sales Returns and Allowances (debit) and decreases Accounts Receivable (credit) by $3,500 (10 × $350).

On October 10, the customer discovers that 5 more printers from the October 1 purchase are slightly damaged but decides to keep them because CBS issues an allowance of $60 per printer. The following entry recognizes the allowance. Sales Returns and Allowances increases (debit) and Accounts Receivable decreases (credit) by $300 (5 × $60). A reduction to Accounts Receivable occurs because the customer has yet to pay their account on October 10.

On October 15, the customer pays their account in full, less sales returns and allowances. The following payment entry occurs. Accounts Receivable decreases (credit) for the original amount owed, less the return of $3,500 and the allowance of $300 ($19,250 – $3,500 – $300). Since the customer paid on October 15, they made the 15-day window and received a discount of 10%. Sales Discounts increases (debit) for the discount amount ($15,450 × 10%). Cash increases (debit) for the amount owed to CBS, less the discount.

Summary of Sales Transaction Journal Entries

The chart in Figure 6.17 represents the journal entry requirements based on various merchandising sales transactions using a periodic inventory system.

Your Turn

Recording a Retailer's Sales Transactions Using a Periodic Inventory System

Record the journal entries for the following sales transactions of a retailer using the periodic inventory system.

Jan. 5 Sold $2,450 of merchandise on credit (cost of $1,000), with terms 2/10, n/30, and invoice dated January 5.
Jan. 9 The customer returned $500 worth of slightly damaged merchandise to the retailer and received a full refund.
Jan. 14 Customer paid the account in full, less the return.

Solution
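For readers checking the settlement arithmetic in this solution, the same netting pattern applies; a minimal Python sketch (ours, not the textbook's solution format, and under the periodic system no cost entry accompanies the sale):

```python
# Jan. 5 credit sale, Jan. 9 return, paid Jan. 14 within the 2/10 window
invoice, sales_return = 2_450, 500
net_receivable = invoice - sales_return        # 1,950 still owed
discount = net_receivable * 0.02               # 39.00 to Sales Discounts (debit)
cash_received = net_receivable - discount      # 1,911.00 to Cash (debit)
print(net_receivable, discount, cash_received) # 1950 39.0 1911.0
```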
The Sabin vaccine is an oral polio vaccine that contains an attenuated virus ; it was licensed for use in 1962 .", "question": { "cloze_format": "It is true of the Sabin but NOT the Salk polio vaccine that it (is) ___.", "normal_format": "Which of these is true of the Sabin but NOT the Salk polio vaccine?", "question_choices": [ "requires four injections", "currently administered in the United States", "mimics the normal route of infection", "is an inactivated vaccine" ], "question_id": "fs-id1167662402105", "question_text": "Which of these is true of the Sabin but NOT the Salk polio vaccine?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> The most common reservoirs in the United States are wild animals such as raccoons ( 30.2 % of all animal cases during 2014 ) , bats ( 29.1 % ) , skunks ( 26.3 % ) , and foxes ( 4.1 % ); collectively , these animals were responsible for a total of 92.6 % of animal rabies cases in the United States in 2014 . <hl> <hl> The remaining 7.4 % of cases that year were in domesticated animals such as dogs , cats , horses , mules , sheep , goats , and llamas . <hl> 27 While there are typically only one or two human cases per year in the United States , rabies still causes tens of thousands of human deaths per year worldwide , primarily in Asia and Africa . 27 US Centers for Disease Control and Prevention , “ Rabies , Wild Animals , ” 2016 . Accessed September 13 , 2016 . http://www.cdc.gov/rabies/location/usa/surveillance/wild_animals.html . <hl> Rabies is a deadly zoonotic disease that has been known since antiquity . <hl> The disease is caused by rabies virus ( RV ) , a member of the family Rhabdoviridae , and is primarily transmitted through the bite of an infected mammal . <hl> Rhabdoviridae are enveloped RNA viruses that have a distinctive bullet shape ( Figure 26.15 ); they were first studied by Louis Pasteur , who obtained rabies virus from rabid dogs and cultivated the virus in rabbits . <hl> He successfully prepared a rabies vaccine using dried nerve tissues from infected animals . This vaccine was used to first treat an infected human in 1885 .", "hl_sentences": "The most common reservoirs in the United States are wild animals such as raccoons ( 30.2 % of all animal cases during 2014 ) , bats ( 29.1 % ) , skunks ( 26.3 % ) , and foxes ( 4.1 % ); collectively , these animals were responsible for a total of 92.6 % of animal rabies cases in the United States in 2014 . The remaining 7.4 % of cases that year were in domesticated animals such as dogs , cats , horses , mules , sheep , goats , and llamas . Rabies is a deadly zoonotic disease that has been known since antiquity . Rhabdoviridae are enveloped RNA viruses that have a distinctive bullet shape ( Figure 26.15 ); they were first studied by Louis Pasteur , who obtained rabies virus from rabid dogs and cultivated the virus in rabbits .", "question": { "cloze_format": "The animal that is NOT a typical reservoir for the spread of rabies is a ___.", "normal_format": "Which of the following animals is NOT a typical reservoir for the spread of rabies?", "question_choices": [ "dog", "bat", "skunk", "chicken" ], "question_id": "fs-id1167660255767", "question_text": "Which of the following animals is NOT a typical reservoir for the spread of rabies?" 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> Cryptococcus neoformans is a fungal pathogen that can cause meningitis . <hl> <hl> This yeast is commonly found in soils and is particularly associated with pigeon droppings . <hl> It has a thick capsule that serves as an important virulence factor , inhibiting clearance by phagocytosis . Most C . neoformans cases result in subclinical respiratory infections that , in healthy individuals , generally resolve spontaneously with no long-term consequences ( see Respiratory Mycoses ) . In immunocompromised patients or those with other underlying illnesses , the infection can progress to cause meningitis and granuloma formation in brain tissues . Cryptococcus antigens can also serve to inhibit cell-mediated immunity and delayed-type hypersensitivity .", "hl_sentences": "Cryptococcus neoformans is a fungal pathogen that can cause meningitis . This yeast is commonly found in soils and is particularly associated with pigeon droppings .", "question": { "cloze_format": "The ___ disease results in meningitis caused by an encapsulated yeast.", "normal_format": "Which of these diseases results in meningitis caused by an encapsulated yeast?", "question_choices": [ "cryptococcosis", "histoplasmosis", "candidiasis", "coccidiomycosis" ], "question_id": "fs-id1167660190718", "question_text": "Which of these diseases results in meningitis caused by an encapsulated yeast?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Human African trypanosomiasis ( also known as African sleeping sickness ) is a serious disease endemic to two distinct regions in sub-Saharan Africa . It is caused by the insect-borne hemoflagellate Trypanosoma brucei . <hl> The subspecies Trypanosoma brucei rhodesiense causes East African trypanosomiasis ( EAT ) , and another subspecies , Trypanosoma brucei gambiense causes West African trypanosomiasis ( WAT ) . <hl> A few hundred cases of EAT are currently reported each year . 34 WAT is more commonly reported and tends to be a more chronic disease . Around 7000 to 10,000 new cases of WAT are identified each year . 35 34 US Centers for Disease Control and Prevention , “ Parasites – African Trypanosomiasis ( also known as Sleeping Sickness ) , East African Trypanosomiasis FAQs , ” 2012 . Accessed June 30 , 2016 . http://www.cdc.gov/parasites/sleepingsickness/gen_info/faqs-east.html . 35 US Centers for Disease Control and Prevention , “ Parasites – African Trypanosomiasis ( also known as Sleeping Sickness ) , Epidemiology & Risk Factors , ” 2012 . Accessed June 30 , 2016 . http://www.cdc.gov/parasites/sleepingsickness/epi.html .", "hl_sentences": "The subspecies Trypanosoma brucei rhodesiense causes East African trypanosomiasis ( EAT ) , and another subspecies , Trypanosoma brucei gambiense causes West African trypanosomiasis ( WAT ) .", "question": { "cloze_format": "___ is the causative agent of East African trypanosomiasis.", "normal_format": "Which of the following is the causative agent of East African trypanosomiasis?", "question_choices": [ "Trypanosoma cruzi", "Trypanosoma vivax", "Trypanosoma brucei rhodanese", "Trypanosoma brucei gambiense" ], "question_id": "fs-id1167660333416", "question_text": "Which of the following is the causative agent of East African trypanosomiasis?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> Primary amoebic meningoencephalitis ( PAM ) is caused by Naegleria fowleri . <hl> This amoeboflagellate is commonly found free-living in soils and water . It can exist in one of three forms — the infective amoebic trophozoite form , a motile flagellate form , and a resting cyst form . PAM is a rare disease that has been associated with young and otherwise healthy individuals . Individuals are typically infected by the amoeba while swimming in warm bodies of freshwater such as rivers , lakes , and hot springs . The pathogenic trophozoite infects the brain by initially entering through nasal passages to the sinuses ; it then moves down olfactory nerve fibers to penetrate the submucosal nervous plexus , invades the cribriform plate , and reaches the subarachnoid space . The subarachnoid space is highly vascularized and is a route of dissemination of trophozoites to other areas of the CNS , including the brain ( Figure 26.22 ) . Inflammation and destruction of gray matter leads to severe headaches and fever . Within days , confusion and convulsions occur and quickly progress to seizures , coma , and death . The progression can be very rapid , and the disease is often not diagnosed until autopsy .", "hl_sentences": "Primary amoebic meningoencephalitis ( PAM ) is caused by Naegleria fowleri .", "question": { "cloze_format": "___ is/are the causative agent of primary amoebic meningoencephalitis.", "normal_format": "Which of the following is the causative agent of primary amoebic meningoencephalitis?", "question_choices": [ "Naegleria fowleri", "Entameba histolyticum", "Amoeba proteus", "Acanthamoeba polyphaga" ], "question_id": "fs-id1167662646869", "question_text": "Which of the following is the causative agent of primary amoebic meningoencephalitis?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> T . brucei is primarily transmitted to humans by the bite of the tsetse fly ( Glossina spp . ) . <hl> Soon after the bite of a tsetse fly , a chancre forms at the site of infection . The flagellates then spread , moving into the circulatory system ( Figure 26.24 ) . These systemic infections result in an undulating fever , during which symptoms persist for two or three days with remissions of about a week between bouts . As the disease enters its final phase , the pathogens move from the lymphatics into the CNS . Neurological symptoms include daytime sleepiness , insomnia , and mental deterioration . In EAT , the disease runs its course over a span of weeks to months . In contrast , WAT often occurs over a span of months to years . <hl> Human African trypanosomiasis ( also known as African sleeping sickness ) is a serious disease endemic to two distinct regions in sub-Saharan Africa . <hl> <hl> It is caused by the insect-borne hemoflagellate Trypanosoma brucei . <hl> <hl> The subspecies Trypanosoma brucei rhodesiense causes East African trypanosomiasis ( EAT ) , and another subspecies , Trypanosoma brucei gambiense causes West African trypanosomiasis ( WAT ) . <hl> A few hundred cases of EAT are currently reported each year . 34 WAT is more commonly reported and tends to be a more chronic disease . Around 7000 to 10,000 new cases of WAT are identified each year . 
35 34 US Centers for Disease Control and Prevention , “ Parasites – African Trypanosomiasis ( also known as Sleeping Sickness ) , East African Trypanosomiasis FAQs , ” 2012 . Accessed June 30 , 2016 . http://www.cdc.gov/parasites/sleepingsickness/gen_info/faqs-east.html . 35 US Centers for Disease Control and Prevention , “ Parasites – African Trypanosomiasis ( also known as Sleeping Sickness ) , Epidemiology & Risk Factors , ” 2012 . Accessed June 30 , 2016 . http://www.cdc.gov/parasites/sleepingsickness/epi.html .", "hl_sentences": "T . brucei is primarily transmitted to humans by the bite of the tsetse fly ( Glossina spp . ) . Human African trypanosomiasis ( also known as African sleeping sickness ) is a serious disease endemic to two distinct regions in sub-Saharan Africa . It is caused by the insect-borne hemoflagellate Trypanosoma brucei . The subspecies Trypanosoma brucei rhodesiense causes East African trypanosomiasis ( EAT ) , and another subspecies , Trypanosoma brucei gambiense causes West African trypanosomiasis ( WAT ) .", "question": { "cloze_format": "The biological vector for African sleeping sickness is the ___.", "normal_format": "What is the biological vector for African sleeping sickness?", "question_choices": [ "mosquito", "tsetse fly", "deer tick", "sand fly" ], "question_id": "fs-id1167662382922", "question_text": "What is the biological vector for African sleeping sickness?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> Cysticercosis is a parasitic infection caused by the larval form of the pork tapeworm , Taenia solium . <hl> <hl> When the larvae invade the brain and spinal cord , the condition is referred to as neurocysticercosis . <hl> This condition affects millions of people worldwide and is the leading cause of adult onset epilepsy in the developing world . 39 39 DeGiorgio , Christopher M . , Marco T . Medina , Reyna Durón , Chi Zee , and Susan Pietsch Escueta , “ Neurocysticercosis , ” Epilepsy Currents 4 , no . 3 ( 2004 ): 107-11 .", "hl_sentences": "Cysticercosis is a parasitic infection caused by the larval form of the pork tapeworm , Taenia solium . When the larvae invade the brain and spinal cord , the condition is referred to as neurocysticercosis .", "question": { "cloze_format": "Humans usually contract neurocysticercosis through ___.", "normal_format": "How do humans usually contract neurocysticercosis?", "question_choices": [ "the bite of an infected arthropod", "exposure to contaminated cat feces", "swimming in contaminated water", "ingestion of undercooked pork" ], "question_id": "fs-id1167662886971", "question_text": "How do humans usually contract neurocysticercosis?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> Cysticercosis is a parasitic infection caused by the larval form of the pork tapeworm , Taenia solium . <hl> <hl> When the larvae invade the brain and spinal cord , the condition is referred to as neurocysticercosis . <hl> <hl> This condition affects millions of people worldwide and is the leading cause of adult onset epilepsy in the developing world . <hl> 39 39 DeGiorgio , Christopher M . , Marco T . Medina , Reyna Durón , Chi Zee , and Susan Pietsch Escueta , “ Neurocysticercosis , ” Epilepsy Currents 4 , no . 3 ( 2004 ): 107-11 .", "hl_sentences": "Cysticercosis is a parasitic infection caused by the larval form of the pork tapeworm , Taenia solium . 
When the larvae invade the brain and spinal cord , the condition is referred to as neurocysticercosis . This condition affects millions of people worldwide and is the leading cause of adult onset epilepsy in the developing world .", "question": { "cloze_format": "___ is the most important cause of adult onset epilepsy.", "normal_format": "Which of these is the most important cause of adult onset epilepsy?", "question_choices": [ "neurocysticercosis", "neurotoxoplasmosis", "primary amoebic meningoencephalitis", "African trypanosomiasis" ], "question_id": "fs-id1167662736518", "question_text": "Which of these is the most important cause of adult onset epilepsy?" }, "references_are_paraphrase": null } ]
26
26.1 Anatomy of the Nervous System Learning Objectives Describe the major anatomical features of the nervous system Explain why there is no normal microbiota of the nervous system Explain how microorganisms overcome defenses of the nervous system to cause infection Identify and describe general symptoms associated with various infections of the nervous system Clinical Focus Part 1 David is a 35-year-old carpenter from New Jersey. A year ago, he was diagnosed with Crohn’s disease, a chronic inflammatory bowel disease that has no known cause. He has been taking a prescription corticosteroid to manage the condition, and the drug has been highly effective in keeping his symptoms at bay. However, David recently fell ill and decided to visit his primary care physician. His symptoms included a fever, a persistent cough, and shortness of breath. His physician ordered a chest X-ray, which revealed consolidation of the right lung. The doctor prescribed a course of levofloxacin and told David to come back in a week if he did not feel better. What type of drug is levofloxacin? What type of microbes would this drug be effective against? What type of infection is consistent with David’s symptoms? Jump to the next Clinical Focus box. The human nervous system can be divided into two interacting subsystems: the peripheral nervous system (PNS) and the central nervous system (CNS). The CNS consists of the brain and spinal cord. The peripheral nervous system is an extensive network of nerves connecting the CNS to the muscles and sensory structures. The relationship of these systems is illustrated in Figure 26.2. The Central Nervous System The brain is the most complex and sensitive organ in the body. It is responsible for all functions of the body, including serving as the coordinating center for all sensations, mobility, emotions, and intellect. Protection for the brain is provided by the bones of the skull, which in turn are covered by the scalp, as shown in Figure 26.3. The scalp is composed of an outer layer of skin, which is loosely attached to the aponeurosis, a flat, broad tendon layer that anchors the superficial layers of the skin. The periosteum, below the aponeurosis, firmly encases the bones of the skull and provides protection, nutrition to the bone, and the capacity for bone repair. Below the bony layer of the skull are three layers of membranes called meninges that surround the brain. The relative positions of these meninges are shown in Figure 26.3. The meningeal layer closest to the bones of the skull is called the dura mater (literally meaning tough mother). Below the dura mater lies the arachnoid mater (literally spider-like mother). The innermost meningeal layer is a delicate membrane called the pia mater (literally tender mother). Unlike the other meningeal layers, the pia mater firmly adheres to the convoluted surface of the brain. Between the arachnoid mater and pia mater is the subarachnoid space. This space is filled with cerebrospinal fluid (CSF). This watery fluid is produced by cells of the choroid plexus—areas in each ventricle of the brain that consist of cuboidal epithelial cells surrounding dense capillary beds. The CSF serves to deliver nutrients and remove waste from neural tissues. The Blood-Brain Barrier The tissues of the CNS have extra protection in that they are not exposed to blood or the immune system in the same way as other tissues.
The blood vessels that supply the brain with nutrients and other chemical substances lie on top of the pia mater. The capillaries associated with these blood vessels in the brain are less permeable than those in other locations in the body. The capillary endothelial cells form tight junctions that control the transfer of blood components to the brain. In addition, cranial capillaries have far fewer fenestrae (pore-like structures that are sealed by a membrane) and pinocytotic vesicles than other capillaries. As a result, materials in the circulatory system have a very limited ability to interact with the CNS directly. This phenomenon is referred to as the blood-brain barrier. The blood-brain barrier protects the cerebrospinal fluid from contamination and can be quite effective at excluding potential microbial pathogens. As a consequence of these defenses, there is no normal microbiota in the cerebrospinal fluid. The blood-brain barrier also inhibits the movement of many drugs into the brain, particularly compounds that are not lipid soluble. This has profound ramifications for treating infections of the CNS, because it is difficult for drugs to cross the blood-brain barrier to interact with pathogens that cause infections. The spinal cord also has protective structures similar to those surrounding the brain. Within the bones of the vertebrae are meninges of dura mater (sometimes called the dural sheath), arachnoid mater, pia mater, and a blood-spinal cord barrier that controls the transfer of blood components from blood vessels associated with the spinal cord. To cause an infection in the CNS, pathogens must successfully breach the blood-brain barrier or blood-spinal cord barrier. Various pathogens employ different virulence factors and mechanisms to achieve this, but they can generally be grouped into four categories: intercellular (also called paracellular), transcellular, leukocyte facilitated, and nonhematogenous. Intercellular entry involves the use of microbial virulence factors, toxins, or inflammation-mediated processes to pass between the cells of the blood-brain barrier. In transcellular entry, the pathogen passes through the cells of the blood-brain barrier using virulence factors that allow it to adhere to and trigger uptake by vacuole- or receptor-mediated mechanisms. Leukocyte-facilitated entry is a Trojan-horse mechanism that occurs when a pathogen infects peripheral blood leukocytes to directly enter the CNS. Nonhematogenous entry allows pathogens to enter the brain without encountering the blood-brain barrier; it occurs when pathogens travel along either the olfactory or trigeminal cranial nerves that lead directly into the CNS. Link to Learning View this video about the blood-brain barrier. Check Your Understanding What is the primary function of the blood-brain barrier? The Peripheral Nervous System The PNS is formed of the nerves that connect organs, limbs, and other anatomic structures of the body to the brain and spinal cord. Unlike the brain and spinal cord, the PNS is not protected by bone, meninges, or a blood barrier, and, as a consequence, the nerves of the PNS are much more susceptible to injury and infection. Microbial damage to peripheral nerves can lead to tingling or numbness known as neuropathy. These symptoms can also be produced by trauma and noninfectious causes such as drugs or chronic diseases like diabetes.
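The four entry routes described above recur throughout this chapter, so it can help to see them side by side. The short Python sketch below is an editorial study aid rather than part of the original text or any clinical workflow; it simply encodes each route and its defining feature, paraphrasing the wording of this section.

    # The four general routes by which pathogens reach the CNS,
    # paraphrased from the text above.
    CNS_ENTRY_ROUTES = {
        "intercellular (paracellular)": "passes between the cells of the "
            "blood-brain barrier using virulence factors, toxins, or "
            "inflammation-mediated processes",
        "transcellular": "passes through barrier cells after adhering and "
            "triggering uptake by vacuole- or receptor-mediated mechanisms",
        "leukocyte facilitated": "rides inside infected peripheral blood "
            "leukocytes (a Trojan-horse mechanism)",
        "nonhematogenous": "bypasses the blood-brain barrier by traveling "
            "along the olfactory or trigeminal cranial nerves",
    }

    for route, feature in CNS_ENTRY_ROUTES.items():
        print(f"{route}: {feature}")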
The Cells of the Nervous System Tissues of the PNS and CNS are formed of cells called glial cells (neuroglial cells) and neurons (nerve cells). Glial cells assist in the organization of neurons, provide a scaffold for some aspects of neuronal function, and aid in recovery from neural injury. Neurons are specialized cells that transmit signals through the nervous system using electrochemical processes. The basic structure of a neuron is shown in Figure 26.4. The cell body (or soma) is the metabolic center of the neuron and contains the nucleus and most of the cell’s organelles. The many finely branched extensions from the soma are called dendrites. The soma also produces an elongated extension, called the axon, which is responsible for the transmission of electrochemical signals through elaborate ion transport processes. Axons of some types of neurons can extend up to one meter in length in the human body. To facilitate electrochemical signal transmission, some neurons have a myelin sheath surrounding the axon. Myelin, formed from the cell membranes of glial cells like the Schwann cells in the PNS and oligodendrocytes in the CNS, surrounds and insulates the axon, significantly increasing the speed of electrochemical signal transmission along the axon. The end of an axon forms numerous branches that end in bulbs called synaptic terminals. Neurons form junctions with other cells, such as another neuron, with which they exchange signals. The junctions, which are actually gaps between neurons, are referred to as synapses. At each synapse, there is a presynaptic neuron and a postsynaptic neuron (or other cell). The synaptic terminals of the axon of the presynaptic neuron form the synapse with the dendrites, soma, or sometimes the axon of the postsynaptic neuron, or a part of another type of cell such as a muscle cell. The synaptic terminals contain vesicles filled with chemicals called neurotransmitters. When the electrochemical signal moving down the axon reaches the synapse, the vesicles fuse with the membrane, and neurotransmitters are released, which diffuse across the synapse and bind to receptors on the membrane of the postsynaptic cell, potentially initiating a response in that cell. That response in the postsynaptic cell might include further propagation of an electrochemical signal to transmit information or contraction of a muscle fiber. Check Your Understanding What cells are associated with neurons, and what is their function? What is the structure and function of a synapse? Meningitis and Encephalitis Although the skull provides the brain with an excellent defense, it can also become problematic during infections. Any swelling of the brain or meninges that results from inflammation can increase intracranial pressure, leading to severe damage of the brain tissues, which have limited space to expand within the inflexible bones of the skull. The term meningitis is used to describe an inflammation of the meninges. Typical symptoms can include severe headache, fever, photophobia (increased sensitivity to light), stiff neck, convulsions, and confusion. An inflammation of brain tissue is called encephalitis, and patients exhibit signs and symptoms similar to those of meningitis in addition to lethargy, seizures, and personality changes. When inflammation affects both the meninges and the brain tissue, the condition is called meningoencephalitis. All three forms of inflammation are serious and can lead to blindness, deafness, coma, and death.
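Because meningitis, encephalitis, and meningoencephalitis differ only in which tissue is inflamed, the distinction reduces to a two-variable lookup. The following sketch is purely a vocabulary aid based on the definitions above; the function name and parameters are invented for illustration, and this is not a diagnostic tool.

    def classify_cns_inflammation(meninges_inflamed, brain_tissue_inflamed):
        """Map the inflamed CNS tissue(s) to the terms defined above."""
        if meninges_inflamed and brain_tissue_inflamed:
            return "meningoencephalitis"
        if meninges_inflamed:
            return "meningitis"
        if brain_tissue_inflamed:
            return "encephalitis"
        return "no CNS inflammation"

    print(classify_cns_inflammation(True, False))  # meningitis
    print(classify_cns_inflammation(True, True))   # meningoencephalitis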
Meningitis and encephalitis can be caused by many different types of microbial pathogens. However, these conditions can also arise from noninfectious causes such as head trauma, some cancers, and certain drugs that trigger inflammation. To determine whether the inflammation is caused by a pathogen, a lumbar puncture is performed to obtain a sample of CSF. If the CSF contains increased levels of white blood cells and abnormal glucose and protein levels, this indicates that the inflammation is a response to an infection. Check Your Understanding What are the two types of inflammation that can impact the CNS? Why do both forms of inflammation have such serious consequences? Micro Connections Guillain-Barré Syndrome Guillain-Barré syndrome (GBS) is a rare condition that can be preceded by a viral or bacterial infection that results in an autoimmune reaction against myelinated nerve cells. The destruction of the myelin sheath around these neurons results in a loss of sensation and function. The first symptoms of this condition are tingling and weakness in the affected tissues. The symptoms intensify over a period of several weeks and can culminate in complete paralysis. Severe cases can be life-threatening. Infections by several different microbial pathogens, including Campylobacter jejuni (the most common risk factor), cytomegalovirus, Epstein-Barr virus, varicella-zoster virus, Mycoplasma pneumoniae, 1 and Zika virus 2 have been identified as triggers for GBS. Anti-myelin antibodies from patients with GBS have been demonstrated to also recognize C. jejuni. It is possible that cross-reactive antibodies, antibodies that react with similar antigenic sites on different proteins, might be formed during an infection and may lead to this autoimmune response. 1 Yuki, Nobuhiro and Hans-Peter Hartung, “Guillain–Barré Syndrome,” New England Journal of Medicine 366, no. 24 (2012): 2294-304. 2 Cao-Lormeau, Van-Mai, Alexandre Blake, Sandrine Mons, Stéphane Lastère, Claudine Roche, Jessica Vanhomwegen, Timothée Dub et al., “Guillain-Barré Syndrome Outbreak Associated with Zika Virus Infection in French Polynesia: A Case-Control Study,” The Lancet 387, no. 10027 (2016): 1531-9. GBS is identified solely by the appearance of clinical symptoms; there are no other diagnostic tests available. Fortunately, most cases spontaneously resolve within a few months with few permanent effects. There is no available vaccine, but GBS can be treated by plasmapheresis. In this procedure, the patient’s plasma is filtered from their blood, removing autoantibodies. 26.2 Bacterial Diseases of the Nervous System Learning Objectives Identify the most common bacteria that can cause infections of the nervous system Compare the major characteristics of specific bacterial diseases affecting the nervous system Bacterial infections that affect the nervous system are serious and can be life-threatening. Fortunately, there are only a few bacterial species commonly associated with neurological infections. Bacterial Meningitis Bacterial meningitis is one of the most serious forms of meningitis. Bacteria that cause meningitis often gain access to the CNS through the bloodstream after trauma or as a result of the action of bacterial toxins. Bacteria may also spread from structures in the upper respiratory tract, such as the oropharynx, nasopharynx, sinuses, and middle ear. Patients with head wounds or cochlear implants (an electronic device placed in the inner ear) are also at risk for developing meningitis.
Many of the bacteria that can cause meningitis are commonly found in healthy people. The most common causes of non-neonatal bacterial meningitis are Neisseria meningitidis, Streptococcus pneumoniae, and Haemophilus influenzae. All three of these bacterial pathogens are spread from person to person by respiratory secretions. Each can colonize and cross through the mucous membranes of the oropharynx and nasopharynx and enter the blood. Once in the blood, these pathogens can disseminate throughout the body and are capable of both establishing an infection and triggering inflammation in any body site, including the meninges (Figure 26.5). Without appropriate systemic antibacterial therapy, the case-fatality rate can be as high as 70%, and 20% of survivors may be left with irreversible nerve damage or tissue destruction, resulting in hearing loss, neurologic disability, or loss of a limb. Mortality rates are much lower (as low as 15%) in populations where appropriate therapeutic drugs and preventive vaccines are available. 3 3 Thigpen, Michael C., Cynthia G. Whitney, Nancy E. Messonnier, Elizabeth R. Zell, Ruth Lynfield, James L. Hadler, Lee H. Harrison et al., “Bacterial Meningitis in the United States, 1998–2007,” New England Journal of Medicine 364, no. 21 (2011): 2016-25. A variety of other bacteria, including Listeria monocytogenes and Escherichia coli, are also capable of causing meningitis. These bacteria cause infections of the arachnoid mater and CSF after spreading through the circulation in blood or by spreading from an infection of the sinuses or nasopharynx. Streptococcus agalactiae, commonly found in the microbiota of the vagina and gastrointestinal tract, can also cause bacterial meningitis in newborns after transmission from the mother either before or during birth. The profound inflammation caused by these microbes can result in early symptoms that include severe headache, fever, confusion, nausea, vomiting, photophobia, and stiff neck. Systemic inflammatory responses associated with some types of bacterial meningitis can lead to hemorrhaging and purpuric lesions on skin, followed by even more severe conditions that include shock, convulsions, coma, and death—in some cases, in the span of just a few hours. Diagnosis of bacterial meningitis is best confirmed by analysis of CSF obtained by a lumbar puncture. Abnormal levels of polymorphonuclear neutrophils (PMNs) (>10 PMNs/mm³), glucose (<45 mg/dL), and protein (>45 mg/dL) in the CSF are suggestive of bacterial meningitis. 4 Characteristics of specific forms of bacterial meningitis are detailed in the subsections that follow. 4 Popovic, T., et al. World Health Organization, “Laboratory Manual for the Diagnosis of Meningitis Caused by Neisseria meningitidis, Streptococcus pneumoniae, and Haemophilus influenzae,” 1999. Meningococcal Meningitis Meningococcal meningitis is a serious infection caused by the gram-negative coccus N. meningitidis. In some cases, death can occur within a few hours of the onset of symptoms. Nonfatal cases can result in irreversible nerve damage, resulting in hearing loss and brain damage, or amputation of extremities because of tissue necrosis. Meningococcal meningitis can infect people of any age, but its prevalence is highest among infants, adolescents, and young adults. 5 Meningococcal meningitis was once the most common cause of meningitis epidemics in human populations.
This is still the case in a swath of sub-Saharan Africa known as the meningitis belt, but meningococcal meningitis epidemics have become rare in most other regions, thanks to meningococcal vaccines. However, outbreaks can still occur in communities, schools, colleges, prisons, and other populations where people are in close direct contact. 5 US Centers for Disease Control and Prevention, “Meningococcal Disease,” August 5, 2015. Accessed June 28, 2015. http://www.cdc.gov/meningococcal/surveillance/index.html. N. meningitidis has a high affinity for mucosal membranes in the oropharynx and nasopharynx. Contact with respiratory secretions containing N. meningitidis is an effective mode of transmission. The pathogenicity of N. meningitidis is enhanced by virulence factors that contribute to the rapid progression of the disease. These include lipooligosaccharide (LOS) endotoxin, type IV pili for attachment to host tissues, and polysaccharide capsules that help the cells avoid phagocytosis and complement-mediated killing. Additional virulence factors include IgA protease (which breaks down IgA antibodies), the invasion factors Opa, Opc, and porin (which facilitate transcellular entry through the blood-brain barrier), iron-uptake factors (which strip heme units from hemoglobin in host cells and use them for growth), and stress proteins that protect bacteria from reactive oxygen molecules. A unique sign of meningococcal meningitis is the formation of a petechial rash on the skin or mucous membranes, characterized by tiny, red, flat, hemorrhagic lesions. This rash, which appears soon after disease onset, is a response to LOS endotoxin and adherence virulence factors that disrupt the endothelial cells of capillaries and small veins in the skin. The blood vessel disruption triggers the formation of tiny blood clots, causing blood to leak into the surrounding tissue. As the infection progresses, the levels of virulence factors increase, and the hemorrhagic lesions can increase in size as blood continues to leak into tissues. Lesions larger than 1.0 cm usually occur in patients developing shock, as virulence factors cause increased hemorrhage and clot formation. Sepsis, as a result of systemic damage from meningococcal virulence factors, can lead to rapid multiple organ failure, shock, disseminated intravascular coagulation, and death. Because meningococcal meningitis progresses so rapidly, a greater variety of clinical specimens is required for the timely detection of N. meningitidis. Required specimens can include blood, CSF, naso- and oropharyngeal swabs, urethral and endocervical swabs, petechial aspirates, and biopsies. Safety protocols for handling and transport of specimens suspected of containing N. meningitidis should always be followed, since cases of fatal meningococcal disease have occurred in healthcare workers exposed to droplets or aerosols from patient specimens. Prompt presumptive diagnosis of meningococcal meningitis can occur when CSF is directly evaluated by Gram stain, revealing extra- and intracellular gram-negative diplococci with a distinctive coffee-bean microscopic morphology associated with PMNs (Figure 26.6). Identification can also be made directly from CSF using latex agglutination and immunochromatographic rapid diagnostic tests specific for N. meningitidis. Species identification can also be performed using DNA sequence-based typing schemes for hypervariable outer membrane proteins of N. meningitidis, which has replaced sero(sub)typing.
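The CSF criteria quoted earlier for bacterial meningitis in general (>10 PMNs/mm³, glucose <45 mg/dL, protein >45 mg/dL) amount to a short screening rule. The sketch below is a minimal illustration that assumes the three findings are combined conjunctively (the text lists them together without specifying the logic); the function name and example values are invented, and this is not clinical software.

    def csf_suggests_bacterial_meningitis(pmns_per_mm3, glucose_mg_dl, protein_mg_dl):
        # Apply the CSF thresholds quoted earlier in this section.
        # Assumption: all three abnormal findings must be present.
        return pmns_per_mm3 > 10 and glucose_mg_dl < 45 and protein_mg_dl > 45

    # A hypothetical sample: marked pleocytosis, low glucose, high protein.
    print(csf_suggests_bacterial_meningitis(250, 30, 120))  # True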
Meningococcal infections can be treated with antibiotic therapy, and third-generation cephalosporins are most often employed. However, because outcomes can be negative even with treatment, preventive vaccination is the best form of treatment. In 2010, countries in Africa’s meningitis belt began using a new serogroup A meningococcal conjugate vaccine. This program has dramatically reduced the number of cases of meningococcal meningitis by conferring individual and herd immunity. Twelve different capsular serotypes of N. meningitidis are known to exist. Serotypes A, B, C, W, X, and Y are the most prevalent worldwide. The CDC recommends that children 11–12 years of age be vaccinated with a single dose of a quadrivalent vaccine that protects against serotypes A, C, W, and Y, with a booster at age 16. 6 An additional booster or injections of serogroup B meningococcal vaccine may be given to individuals in high-risk settings (such as epidemic outbreaks on college campuses). 6 US Centers for Disease Control and Prevention, “Recommended Immunization Schedule for Persons Aged 0 Through 18 Years, United States, 2016,” February 1, 2016. Accessed on June 28, 2016. http://www.cdc.gov/vaccines/schedules/hcp/imz/child-adolescent.html. Micro Connections Meningitis on Campus College students living in dorms or communal housing are at increased risk for contracting epidemic meningitis. From 2011 to 2015, there were at least nine meningococcal outbreaks on college campuses in the United States. These incidents involved a total of 43 students (of whom four died). 7 In spite of rapid diagnosis and aggressive antimicrobial treatment, several of the survivors suffered from amputations or serious neurological problems. 7 National Meningitis Association, “Serogroup B Meningococcal Disease Outbreaks on U.S. College Campuses,” 2016. Accessed June 28, 2016. http://www.nmaus.org/disease-prevention-information/serogroup-b-meningococcal-disease/outbreaks/. Prophylactic vaccination of first-year college students living in dorms is recommended by the CDC, and insurance companies now cover meningococcal vaccination for students in college dorms. Some colleges have mandated vaccination with meningococcal conjugate vaccine for certain students entering college (Figure 26.7). Pneumococcal Meningitis Pneumococcal meningitis is caused by the encapsulated gram-positive bacterium S. pneumoniae (pneumococcus, also called strep pneumo). This organism is commonly found in the microbiota of the pharynx of 30–70% of young children, depending on the sampling method, while S. pneumoniae can be found in fewer than 5% of healthy adults. Although it is often present without disease symptoms, this microbe can cross the blood-brain barrier in susceptible individuals. In some cases, it may also result in septicemia. Since the introduction of the Hib vaccine, S. pneumoniae has become the leading cause of meningitis in humans aged 2 months through adulthood. S. pneumoniae can be identified in CSF samples using gram-stained specimens, latex agglutination, and immunochromatographic RDT specific for S. pneumoniae. In gram-stained samples, S. pneumoniae appears as gram-positive, lancet-shaped diplococci (Figure 26.8). Identification of S. pneumoniae can also be achieved using cultures of CSF and blood, and at least 93 distinct serotypes can be identified based on the quellung reaction to unique capsular polysaccharides. PCR and RT-PCR assays are also available to confirm identification. Major virulence factors produced by S.
pneumoniae include PI-1 pilin, for adherence to host cells; pneumococcal adherence and virulence factor B (PavB), for attachment to cells of the respiratory tract; choline-binding proteins (cbpA), which bind to epithelial cells and interfere with immune factors IgA and C3; and the cytoplasmic bacterial toxin pneumolysin, which triggers an inflammatory response. With the emergence of drug-resistant strains of S. pneumoniae, pneumococcal meningitis is typically treated with broad-spectrum antibiotics, such as levofloxacin, cefotaxime, penicillin, or other β-lactam antibiotics. The two available pneumococcal vaccines are described in Bacterial Infections of the Respiratory Tract. Haemophilus influenzae Type b Meningitis due to H. influenzae serotype b (Hib), an encapsulated pleomorphic gram-negative coccobacillus, is now uncommon in most countries because of the use of the effective Hib vaccine. Without the use of the Hib vaccine, H. influenzae can be the primary cause of meningitis in children 2 months through 5 years of age. H. influenzae can be found in the throats of healthy individuals, including infants and young children. By five years of age, most children have developed immunity to this microbe. Infants older than 2 months of age, however, do not produce a sufficient protective antibody response and are susceptible to serious disease. The intracranial pressure caused by this infection leads to a 5% mortality rate and 20% incidence of deafness or brain damage in survivors. 8 8 United States Department of Health and Human Services, “Hib (Haemophilus Influenzae Type B),” Accessed June 28, 2016. http://www.vaccines.gov/diseases/hib/#. H. influenzae produces at least 16 different virulence factors, including LOS, which triggers inflammation, and Haemophilus adhesion and penetration factor (Hap), which aids in attachment and invasion into respiratory epithelial cells. The bacterium also has a polysaccharide capsule that helps it avoid phagocytosis, as well as factors such as IgA1 protease and P2 protein that allow it to evade antibodies secreted from mucous membranes. In addition, factors such as hemoglobin-binding protein (Hgp) and transferrin-binding protein (Tbp) acquire iron from hemoglobin and transferrin, respectively, for bacterial growth. Preliminary diagnosis of H. influenzae infections can be made by direct PCR and a smear of CSF. Stained smears will reveal intracellular and extracellular PMNs with small, pleomorphic, gram-negative coccobacilli or filamentous forms that are characteristic of H. influenzae. Initial confirmation of this genus can be based on its fastidious growth on chocolate agar. Identification is confirmed with requirements for exogenous biochemical growth cofactors NAD and heme (by MALDI-TOF), latex agglutination, and RT-PCR. Meningitis caused by H. influenzae is usually treated with doxycycline, fluoroquinolones, second- and third-generation cephalosporins, and carbapenems. The best means of preventing H. influenzae infection is with the use of the Hib polysaccharide conjugate vaccine. It is recommended that all children receive this vaccine at 2, 4, and 6 months of age, with a final booster dose at 12 to 15 months of age. 9 9 US Centers for Disease Control and Prevention, “Meningococcal Disease, Disease Trends,” 2015. Accessed September 13, 2016. http://www.cdc.gov/meningococcal/surveillance/index.html. Neonatal Meningitis S.
agalactiae, Group B streptococcus (GBS), is an encapsulated gram-positive bacterium that is the most common cause of neonatal meningitis, a term that refers to meningitis occurring in babies up to 3 months of age. 10 S. agalactiae can also cause meningitis in people of all ages and can be found in the urogenital and gastrointestinal microbiota of about 10–30% of humans. 10 Thigpen, Michael C., Cynthia G. Whitney, Nancy E. Messonnier, Elizabeth R. Zell, Ruth Lynfield, James L. Hadler, Lee H. Harrison et al., “Bacterial Meningitis in the United States, 1998–2007,” New England Journal of Medicine 364, no. 21 (2011): 2016-25. Neonatal infection occurs as either early-onset or late-onset disease. Early-onset disease is defined as occurring in infants up to 7 days old. The infant initially becomes infected by S. agalactiae during childbirth, when the bacteria may be transferred from the mother’s vagina. Incidence of early-onset neonatal meningitis can be greatly reduced by giving intravenous antibiotics to the mother during labor. Late-onset neonatal meningitis occurs in infants between 1 week and 3 months of age. Infants born to mothers with S. agalactiae in the urogenital tract have a higher risk of late-onset meningitis, but late-onset infections can be transmitted from sources other than the mother; often, the source of infection is unknown. Infants who are born prematurely (before 37 weeks of pregnancy) or to mothers who develop a fever also have a greater risk of contracting late-onset neonatal meningitis. Signs and symptoms of early-onset disease include temperature instability, apnea (cessation of breathing), bradycardia (slow heart rate), hypotension, difficulty feeding, irritability, and limpness. When asleep, the baby may be difficult to wake up. Symptoms of late-onset disease are more likely to include seizures, bulging fontanel (soft spot), stiff neck, hemiparesis (weakness on one side of the body), and opisthotonos (rigid body with arched back and head thrown backward). S. agalactiae produces at least 12 virulence factors, including FbsA, which attaches to host cell surface proteins; PI-1 pili, which promote the invasion of human endothelial cells; a polysaccharide capsule that prevents the activation of the alternative complement pathway and inhibits phagocytosis; and the toxin CAMP factor, which forms pores in host cell membranes and binds to IgG and IgM antibodies. Diagnosis of neonatal meningitis is often, but not uniformly, confirmed by positive results from cultures of CSF or blood. Tests include routine culture, antigen detection by enzyme immunoassay, serotyping of different capsule types, PCR, and RT-PCR. It is typically treated with β-lactam antibiotics such as intravenous penicillin or ampicillin plus gentamicin. Even with treatment, roughly 10% mortality is seen in infected neonates. 11 11 Thigpen, Michael C., Cynthia G. Whitney, Nancy E. Messonnier, Elizabeth R. Zell, Ruth Lynfield, James L. Hadler, Lee H. Harrison et al., “Bacterial Meningitis in the United States, 1998–2007,” New England Journal of Medicine 364, no. 21 (2011): 2016-25; Heath, Paul T., Gail Balfour, Abbie M. Weisner, Androulla Efstratiou, Theresa L. Lamagni, Helen Tighe, Liam AF O’Connell et al., “Group B Streptococcal Disease in UK and Irish Infants Younger than 90 Days,” The Lancet 363, no. 9405 (2004): 292-4. Check Your Understanding Which groups are most vulnerable to each of the bacterial meningitis diseases?
For which of the bacterial meningitis diseases are there vaccines presently available? Which organism can cause epidemic meningitis? Clostridium-Associated Diseases Species in the genus Clostridium are gram-positive, endospore-forming rods that are obligate anaerobes. Endospores of Clostridium spp. are widespread in nature, commonly found in soil, water, feces, sewage, and marine sediments. Clostridium spp. produce more types of protein exotoxins than any other bacterial genus, including two exotoxins with protease activity that are the most potent known biological toxins: botulinum neurotoxin (BoNT) and tetanus neurotoxin (TeNT). These two toxins have lethal doses of 0.2–10 ng per kg body weight. BoNT can be produced by unique strains of C. butyricum and C. baratii; however, it is primarily associated with C. botulinum and the condition of botulism. TeNT, which causes tetanus, is only produced by C. tetani. These powerful neural exotoxins are the primary virulence factors for these pathogens. The mode of action for these toxins was described in Virulence Factors of Bacterial and Viral Pathogens and illustrated in Figure 15.16. Diagnosis of tetanus or botulism typically involves bioassays that detect the presence of BoNT and TeNT in fecal specimens, blood (serum), or suspect foods. In addition, both C. botulinum and C. tetani can be isolated and cultured using commercially available media for anaerobes. ELISA and RT-PCR tests are also available. Tetanus Tetanus is a noncommunicable disease characterized by uncontrollable muscle spasms (contractions) caused by the action of TeNT. It generally occurs when C. tetani infects a wound and produces TeNT, which rapidly binds to neural tissue, resulting in an intoxication (poisoning) of neurons. Depending on the site and extent of infection, cases of tetanus can be described as localized, cephalic, or generalized. Generalized tetanus that occurs in a newborn is called neonatal tetanus. Localized tetanus occurs when TeNT only affects the muscle groups close to the injury site. There is no CNS involvement, and the symptoms are usually mild, with localized muscle spasms caused by a dysfunction in the surrounding neurons. Individuals with partial immunity—especially previously vaccinated individuals who neglect to get the recommended booster shots—are most likely to develop localized tetanus as a result of C. tetani infecting a puncture wound. Cephalic tetanus is a rare, localized form of tetanus generally associated with wounds on the head or face. In rare cases, it has occurred with otitis media (middle ear infection). Cephalic tetanus often results in patients seeing double images because of spasms affecting the muscles that control eye movement. Both localized and cephalic tetanus may progress to generalized tetanus—a much more serious condition—if TeNT is able to spread further into body tissues. In generalized tetanus, TeNT enters neurons of the PNS. From there, TeNT travels from the site of the wound, usually on an extremity of the body, retrograde (back up) to inhibitory neurons in the CNS. There, it prevents the release of gamma aminobutyric acid (GABA), the neurotransmitter responsible for muscle relaxation. The resulting muscle spasms often first occur in the jaw muscles, leading to the characteristic symptom of lockjaw (inability to open the mouth).
As the toxin continues to block neurotransmitter release, other muscles become involved, resulting in uncontrollable, sudden muscle spasms that are powerful enough to cause tendons to rupture and bones to fracture. Spasms in the muscles in the neck, back, and legs may cause the body to form a rigid, stiff arch, a posture called opisthotonos (Figure 26.9). Spasms in the larynx, diaphragm, and muscles of the chest restrict the patient’s ability to swallow and breathe, eventually leading to death by asphyxiation (insufficient supply of oxygen). Neonatal tetanus typically occurs when the stump of the umbilical cord is contaminated with spores of C. tetani after delivery. Although this condition is rare in the United States, neonatal tetanus is a major cause of infant mortality in countries that lack maternal immunization for tetanus and where birth often occurs in unsanitary conditions. At the end of the first week of life, infected infants become irritable, feed poorly, and develop rigidity with spasms. Neonatal tetanus has a very poor prognosis with a mortality rate of 70%–100%. 12 12 UNFPA, UNICEF WHO, “Maternal and Neonatal Tetanus Elimination by 2005,” 2000. http://www.unicef.org/immunization/files/MNTE_strategy_paper.pdf. Treatment for patients with tetanus includes assisted breathing through the use of a ventilator, wound debridement, fluid balance, and antibiotic therapy with metronidazole or penicillin to halt the growth of C. tetani. In addition, patients are treated with TeNT antitoxin, preferably in the form of human immunoglobulin, to neutralize nonfixed toxin, and with benzodiazepines to enhance the effect of GABA for muscle relaxation and anxiety. A tetanus toxoid (TT) vaccine is available for protection and prevention of tetanus. It is the T component of vaccines such as DTaP, Tdap, and Td. The CDC recommends children receive doses of the DTaP vaccine at 2, 4, 6, and 15–18 months of age and another at 4–6 years of age. One dose of Td is recommended for adolescents and adults as a TT booster every 10 years. 13 13 US Centers for Disease Control and Prevention, “Tetanus Vaccination,” 2013. Accessed June 29, 2016. http://www.cdc.gov/tetanus/vaccination.html. Botulism Botulism is a rare but frequently fatal illness caused by intoxication by BoNT. It can occur either as the result of an infection by C. botulinum, in which case the bacteria produce BoNT in vivo, or as the result of a direct introduction of BoNT into tissues. Infection and production of BoNT in vivo can result in wound botulism, infant botulism, and adult intestinal toxemia. Wound botulism typically occurs when C. botulinum is introduced directly into a wound by a traumatic injury, a deep puncture wound, or an injection. Infant botulism, which occurs in infants younger than 1 year of age, and adult intestinal toxemia, which occurs in immunocompromised adults, result from ingesting C. botulinum endospores in food. The endospores germinate in the body, resulting in the production of BoNT in the intestinal tract. Intoxications occur when BoNT is produced outside the body and then introduced directly into the body through food (foodborne botulism), air (inhalation botulism), or a clinical procedure (iatrogenic botulism). Foodborne botulism, the most common of these forms, occurs when BoNT is produced in contaminated food and then ingested along with the food (recall Case in Point: A Streak of Bad Potluck).
Inhalation botulism is rare because BoNT is unstable as an aerosol and does not occur in nature; however, it can be produced in the laboratory and was used (unsuccessfully) as a bioweapon by terrorists in Japan in the 1990s. A few cases of accidental inhalation botulism have also occurred. Iatrogenic botulism is also rare; it is associated with injections of BoNT used for cosmetic purposes (see Micro Connections: Medicinal Uses of Botulinum Toxin). When BoNT enters the bloodstream from the gastrointestinal tract, a wound, or the lungs, it is transferred to the neuromuscular junctions of motor neurons, where it binds irreversibly to presynaptic membranes and prevents the release of acetylcholine from the presynaptic terminal of motor neurons into the neuromuscular junction. The consequence of preventing acetylcholine release is the loss of muscle activity, leading to muscle relaxation and eventually paralysis. If BoNT is absorbed through the gastrointestinal tract, early symptoms of botulism include blurred vision, drooping eyelids, difficulty swallowing, abdominal cramps, nausea, vomiting, constipation, or possibly diarrhea. This is followed by progressive flaccid paralysis, a gradual weakening and loss of control over the muscles. A patient’s experience can be particularly terrifying, because hearing remains normal, consciousness is not lost, and he or she is fully aware of the progression of his or her condition. In infants, notable signs of botulism include weak cry, decreased ability to suckle, and hypotonia (limpness of head or body). Eventually, botulism ends in death from respiratory failure caused by the progressive paralysis of the muscles of the upper airway, diaphragm, and chest. Botulism is treated with an antitoxin specific for BoNT. If administered in time, the antitoxin stops the progression of paralysis but does not reverse it. Once the antitoxin has been administered, the patient will slowly regain neurological function, but this may take several weeks or months, depending on the severity of the case. During recovery, patients generally must remain hospitalized and receive breathing assistance through a ventilator. Check Your Understanding How frequently should the tetanus vaccination be updated in adults? What are the most common causes of botulism? Why is botulism not treated with an antibiotic? Micro Connections Medicinal Uses of Botulinum Toxin Although it is the most toxic biological material known, botulinum toxin is often intentionally injected into people to treat other conditions. Type A botulinum toxin is used cosmetically to reduce wrinkles. The injection of minute quantities of this toxin into the face causes the relaxation of facial muscles, thereby giving the skin a smoother appearance. Eyelid twitching and crossed eyes can also be treated with botulinum toxin injections. Other uses of this toxin include the treatment of hyperhidrosis (excessive sweating). In fact, botulinum toxin can be used to moderate the effects of several other apparently nonmicrobial diseases involving inappropriate nerve function. Such diseases include cerebral palsy, multiple sclerosis, and Parkinson’s disease. Each of these diseases is characterized by a loss of control over muscle contractions; treatment with botulinum toxin serves to relax contracted muscles. Listeriosis Listeria monocytogenes is a nonencapsulated, nonsporulating, gram-positive rod and a foodborne pathogen that causes listeriosis.
At-risk groups include pregnant women, neonates, the elderly, and the immunocompromised (recall the Clinical Focus case studies in Microbial Growth and Microbial Mechanisms of Pathogenicity). Listeriosis leads to meningitis in about 20% of cases, particularly in neonates and in patients over the age of 60. The CDC identifies listeriosis as the third leading cause of death due to foodborne illness, with overall mortality rates reaching 16%. 14 Listeriosis can also cause spontaneous abortion in pregnant women because of the pathogen’s unique ability to cross the placenta. 14 Scallan, Elaine, Robert M. Hoekstra, Frederick J. Angulo, Robert V. Tauxe, Marc-Alain Widdowson, Sharon L. Roy, Jeffery L. Jones, and Patricia M. Griffin, “Foodborne Illness Acquired in the United States—Major Pathogens,” Emerging Infectious Diseases 17, no. 1 (2011): 7-15. L. monocytogenes is generally introduced into food items by contamination with soil or animal manure used as fertilizer. Foods commonly associated with listeriosis include fresh fruits and vegetables, frozen vegetables, processed meats, soft cheeses, and raw milk. 15 Unlike most other foodborne pathogens, Listeria is able to grow at temperatures between 0 °C and 50 °C and can therefore continue to grow even in refrigerated foods. 15 US Centers for Disease Control and Prevention, “Listeria Outbreaks,” 2016. Accessed June 29, 2016. https://www.cdc.gov/listeria/outbreaks/index.html. Ingestion of contaminated food leads initially to infection of the gastrointestinal tract. However, L. monocytogenes produces several unique virulence factors that allow it to cross the intestinal barrier and spread to other body systems. Surface proteins called internalins (InlA and InlB) help L. monocytogenes invade nonphagocytic cells and tissues, penetrating the intestinal wall and becoming disseminated through the circulatory and lymphatic systems. Internalins also enable L. monocytogenes to breach other important barriers, including the blood-brain barrier and the placenta. Within tissues, L. monocytogenes uses other proteins, called listeriolysin O and ActA, to facilitate intercellular movement, allowing the infection to spread from cell to cell (Figure 26.10). L. monocytogenes is usually identified by cultivation of samples from a normally sterile site (e.g., blood or CSF). Recovery of viable organisms can be enhanced using cold enrichment by incubating samples in a broth at 4 °C for a week or more. Distinguishing types and subtypes of L. monocytogenes—an important step for diagnosis and epidemiology—is typically done using pulsed-field gel electrophoresis. Identification can also be achieved using chemiluminescence DNA probe assays and MALDI-TOF. Treatment for listeriosis involves antibiotic therapy, most commonly with ampicillin and gentamicin. There is no vaccine available. Check Your Understanding How does Listeria enter the nervous system? Hansen’s Disease (Leprosy) Hansen’s disease (also known as leprosy) is caused by the long, thin, filamentous, rod-shaped bacterium Mycobacterium leprae, an obligate intracellular pathogen. M. leprae is classified as a gram-positive bacterium, but because it is best visualized microscopically with an acid-fast stain, it is generally referred to as an acid-fast bacterium. Hansen’s disease affects the PNS, leading to permanent damage and loss of appendages or other body parts.
Hansen’s disease is communicable but not highly contagious; approximately 95% of the human population cannot be easily infected because they have a natural immunity to M. leprae. Person-to-person transmission occurs by inhalation into nasal mucosa or by prolonged and repeated contact with infected skin. Armadillos, one of only five mammals susceptible to Hansen’s disease, have also been implicated in the transmission of some cases. 16 16 Sharma, Rahul, Pushpendra Singh, W. J. Loughry, J. Mitchell Lockhart, W. Barry Inman, Malcolm S. Duthie, Maria T. Pena et al., “Zoonotic Leprosy in the Southeastern United States,” Emerging Infectious Diseases 21, no. 12 (2015): 2127-34. In the human body, M. leprae grows best at the cooler temperatures found in peripheral tissues like the nose, toes, fingers, and ears. Some of the virulence factors that contribute to M. leprae’s pathogenicity are located on the capsule and cell wall of the bacterium. These virulence factors enable it to bind to and invade Schwann cells, resulting in progressive demyelination that gradually destroys neurons of the PNS. The loss of neuronal function leads to hypoesthesia (numbness) in infected lesions. M. leprae is readily phagocytized by macrophages but is able to survive within macrophages in part by neutralizing reactive oxygen species produced in the oxidative burst of the phagolysosome. Like L. monocytogenes, M. leprae can also move directly between macrophages to avoid clearance by immune factors. The extent of the disease is related to the immune response of the patient. Initial symptoms may not appear for as long as 2 to 5 years after infection. These often begin with small, blanched, numb areas of the skin. In most individuals, these will resolve spontaneously, but some cases may progress to a more serious form of the disease. Tuberculoid (paucibacillary) Hansen’s disease is marked by the presence of relatively few (three or fewer) flat, blanched skin lesions with small nodules at the edges and few bacteria present in the lesion. Although these lesions can persist for years or decades, the bacteria are held in check by an effective immune response, including cell-mediated cytotoxicity. Individuals who are unable to contain the infection may later develop lepromatous (multibacillary) Hansen’s disease. This is a progressive form of the disease characterized by nodules filled with acid-fast bacilli and macrophages. Impaired function of infected Schwann cells leads to peripheral nerve damage, resulting in sensory loss that leads to ulcers, deformities, and fractures. Damage to the ulnar nerve (in the wrist) by M. leprae is one of the most common causes of crippling of the hand. In some cases, chronic tissue damage can ultimately lead to loss of fingers or toes. When mucosal tissues are also involved, disfiguring lesions of the nose and face can occur (Figure 26.11). Hansen’s disease is diagnosed on the basis of clinical signs and symptoms and confirmed by the presence of acid-fast bacilli on skin smears or in skin biopsy specimens (Figure 26.11). M. leprae does not grow in vitro on any known laboratory media, but it can be identified by culturing in vivo in the footpads of laboratory mice or armadillos. Where needed, PCR and genotyping of M. leprae DNA in infected human tissue may be performed for diagnosis and epidemiology. Hansen’s disease responds well to treatment and, if diagnosed and treated early, does not cause disability.
In the United States, most patients with Hansen’s disease are treated in ambulatory care clinics in major cities by the National Hansen’s Disease Program, the only institution in the United States exclusively devoted to Hansen’s disease. Since 1995, WHO has made multidrug therapy for Hansen’s disease available free of charge to all patients worldwide. As a result, the global prevalence of Hansen’s disease has declined from about 5.2 million cases in 1985 to roughly 176,000 in 2014. 17 Multidrug therapy consists of dapsone and rifampicin for all patients and a third drug, clofazimine, for patients with multibacillary disease. 17 World Health Organization, “Leprosy Fact Sheet,” 2016. Accessed September 13, 2016. http://www.who.int/mediacentre/factsheets/fs101/en/. Currently, there is no universally accepted vaccine for Hansen’s disease. India and Brazil use a tuberculosis vaccine against Hansen’s disease because both diseases are caused by species of Mycobacterium. The effectiveness of this method is questionable, however, since it appears that the vaccine works in some populations but not in others. Check Your Understanding What prevents the progression from tuberculoid to lepromatous leprosy? Why does Hansen’s disease typically affect the nerves of the extremities? Eye on Ethics Leper Colonies Disfiguring, deadly diseases like leprosy have historically been stigmatized in many cultures. Before leprosy was understood, victims were often isolated in leper colonies, a practice mentioned frequently in ancient texts, including the Bible. But leper colonies are not just an artifact of the ancient world. In Hawaii, a leper colony established in the late nineteenth century persisted until the mid-twentieth century, its residents forced to live in deplorable conditions. 18 Although leprosy is a communicable disease, it is not considered contagious (easily communicable), and it certainly does not pose enough of a threat to justify the permanent isolation of its victims. Today, we reserve the practices of isolation and quarantine for patients with more dangerous diseases, such as Ebola or infections with multiple-drug-resistant bacteria like Mycobacterium tuberculosis and Staphylococcus aureus. The ethical argument for this practice is that isolating infected patients is necessary to prevent the transmission and spread of highly contagious diseases—even when it goes against the wishes of the patient. 18 National Park Service, “A Brief History of Kalaupapa,” Accessed February 2, 2016. http://www.nps.gov/kala/learn/historyculture/a-brief-history-of-kalaupapa.htm. Of course, it is much easier to justify the practice of temporary, clinical quarantining than permanent social segregation, as occurred in leper colonies. In the 1980s, there were calls by some groups to establish camps for people infected with AIDS. Although this idea was never actually implemented, it raises the question—where do we draw the line? Are permanent isolation camps or colonies ever medically or socially justifiable? Suppose there were an outbreak of a fatal, contagious disease for which there is no treatment. Would it be justifiable to impose social isolation on those afflicted with the disease? How would we balance the rights of the infected with the risk they pose to others? To what extent should society expect individuals to put their own health at risk for the sake of treating others humanely?
Disease Profile Bacterial Infections of the Nervous System Despite the formidable defenses protecting the nervous system, a number of bacterial pathogens are known to cause infections of the CNS or PNS. Unfortunately, these infections are often serious and life threatening. Figure 26.12 summarizes some important infections of the nervous system. 26.3 Acellular Diseases of the Nervous System Learning Objectives Identify the most common acellular pathogens that can cause infections of the nervous system Compare the major characteristics of specific viral diseases affecting the nervous system A number of different viruses and subviral particles can cause diseases that affect the nervous system. Today, viral diseases of the nervous system are more common than bacterial infections. Fortunately, viral infections are generally milder than their bacterial counterparts and often resolve spontaneously. Some of the more important acellular pathogens of the nervous system are described in this section. Viral Meningitis Although it is much more common than bacterial meningitis, viral meningitis is typically less severe. Many different viruses can lead to meningitis as a sequela of the primary infection, including those that cause herpes, influenza, measles, and mumps. Most cases of viral meningitis resolve spontaneously, but severe cases do occur. Arboviral Encephalitis Several types of insect-borne viruses can cause encephalitis. Collectively, these viruses are referred to as arboviruses (because they are arthropod-borne), and the diseases they cause are described as arboviral encephalitis. Most arboviruses are endemic to specific geographical regions. Arboviral encephalitis diseases found in the United States include eastern equine encephalitis (EEE), western equine encephalitis (WEE), St. Louis encephalitis, and West Nile encephalitis (WNE). Expansion of arboviruses beyond their endemic regions sometimes occurs, generally as a result of environmental changes that are favorable to the virus or its vector. Increased travel of infected humans, animals, or vectors has also allowed arboviruses to spread into new regions. In most cases, arboviral infections are asymptomatic or lead to a mild disease. However, when symptoms do occur, they include high fever, chills, headaches, vomiting, diarrhea, and restlessness. In elderly patients, severe arboviral encephalitis can rapidly lead to convulsions, coma, and death. Mosquitoes are the most common biological vectors for arboviruses, which tend to be enveloped ssRNA viruses. Thus, prevention of arboviral infections is best achieved by avoiding mosquitoes—using insect repellent, wearing long pants and sleeves, sleeping in well-screened rooms, using bed nets, etc. Diagnosis of arboviral encephalitis is based on clinical symptoms and serologic testing of serum or CSF. There are no antiviral drugs to treat any of these arboviral diseases, so treatment consists of supportive care and management of symptoms. Eastern equine encephalitis (EEE) is caused by eastern equine encephalitis virus (EEEV), which can cause severe disease in horses and humans. Birds are reservoirs for EEEV, with accidental transmission to horses and humans by Aedes, Coquillettidia, and Culex species of mosquitoes. Neither horses nor humans serve as reservoirs. EEE is most common in the US Gulf Coast and Atlantic states.
EEE is one of the more severe mosquito-transmitted diseases in the United States, but fortunately, it is also very rare (Figure 26.13). 19 20 19 US Centers for Disease Control and Prevention, “Eastern Equine Encephalitis Virus Disease Cases and Deaths Reported to CDC by Year and Clinical Presentation, 2004–2013,” 2014. http://www.cdc.gov/EasternEquineEncephalitis/resources/EEEV-Cases-by-Year_2004-2013.pdf. 20 US Centers for Disease Control and Prevention, “Eastern Equine Encephalitis, Symptoms & Treatment,” 2016. Accessed June 29, 2016. https://www.cdc.gov/easternequineencephalitis/tech/symptoms.html. Western equine encephalitis (WEE) is caused by western equine encephalitis virus (WEEV). WEEV is usually transmitted to horses and humans by Culex tarsalis mosquitoes and, in the past decade, has caused very few cases of encephalitis in humans in the United States. In humans, WEE symptoms are less severe than those of EEE and include fever, chills, and vomiting, with a mortality rate of 3–4%. As with EEEV, birds are the natural reservoir for WEEV. Periodically, for reasons that remain unclear, epidemics of human cases have occurred in North America; the largest on record, in 1941, involved more than 3,400 cases. 21 21 US Centers for Disease Control and Prevention, “Western Equine Encephalitis—United States and Canada, 1987,” Morbidity and Mortality Weekly Report 36, no. 39 (1987): 655. St. Louis encephalitis (SLE), caused by St. Louis encephalitis virus (SLEV), is a rare form of encephalitis, with symptoms occurring in fewer than 1% of infected patients. The natural reservoirs for SLEV are birds. SLEV is most often found in the Ohio-Mississippi River basin of the central United States and was named after a severe outbreak in Missouri in 1934. The worst outbreak of St. Louis encephalitis occurred in 1975, with over 2,000 cases reported. 22 Humans become infected when bitten by C. tarsalis, C. quinquefasciatus, or C. pipiens mosquitoes carrying SLEV. Most patients are asymptomatic, but in a small number of individuals, symptoms range from mild flu-like syndromes to fatal encephalitis. The overall mortality rate for symptomatic patients is 5–15%. 23 22 US Centers for Disease Control and Prevention, “Saint Louis Encephalitis, Epidemiology & Geographic Distribution,” Accessed June 30, 2016. http://www.cdc.gov/sle/technical/epi.html. 23 US Centers for Disease Control and Prevention, “Saint Louis Encephalitis, Symptoms and Treatment,” Accessed June 30, 2016. http://www.cdc.gov/sle/technical/symptoms.html. Japanese encephalitis, caused by Japanese encephalitis virus (JEV), is the leading cause of vaccine-preventable encephalitis in humans and is endemic to some of the most populous countries in the world, including China, India, Japan, and all of Southeast Asia. JEV is transmitted to humans by Culex mosquitoes, usually the species C. tritaeniorhynchus. The biological reservoirs for JEV include pigs and wading birds. Most patients with JEV infections are asymptomatic, with symptoms occurring in fewer than 1% of infected individuals. However, about 25% of those who do develop encephalitis die, and among those who recover, 30–50% have psychiatric, neurologic, or cognitive impairment. 24 Fortunately, there is an effective vaccine that can prevent infection with JEV. The CDC recommends this vaccine for travelers who expect to spend more than one month in endemic areas.
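These JEV figures can be combined into a rough overall risk estimate. The following back-of-the-envelope calculation is offered only as an illustration; it treats the fewer-than-1% of infections that become symptomatic as encephalitis cases and uses the approximate rates quoted above, so the true values will vary by population and outbreak:

\[
P(\text{death} \mid \text{infection}) \approx P(\text{encephalitis} \mid \text{infection}) \times P(\text{death} \mid \text{encephalitis}) \approx 0.01 \times 0.25 = 0.0025
\]

In other words, on the order of 25 deaths would be expected per 10,000 infections. The risk to any one infected person is therefore low, but the outcome for the small subset who do develop encephalitis is severe, which is why vaccination is recommended for travelers with prolonged exposure.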
24 US Centers for Disease Control and Prevention, “Japanese Encephalitis, Symptoms and Treatment,” Accessed June 30, 2016. http://www.cdc.gov/japaneseencephalitis/symptoms/index.html. As the name suggests, West Nile virus (WNV) and its associated disease, West Nile encephalitis (WNE), did not originate in North America. Until 1999, it was endemic in the Middle East, Africa, and Asia; however, the first US cases were identified in New York in 1999, and by 2004, the virus had spread across the entire continental United States. Over 35,000 cases, including 1,400 deaths, were confirmed in the five-year period between 1999 and 2004. WNV infection remains reportable to the CDC. WNV is transmitted to humans by Culex mosquitoes from its natural reservoir, infected birds, with 70–80% of infected patients experiencing no symptoms. Most symptomatic cases involve only mild, flu-like symptoms, but fewer than 1% of infected people develop severe and sometimes fatal encephalitis or meningitis. The mortality rate in WNV patients who develop neurological disease is about 10%. More information about West Nile virus can be found in Modes of Disease Transmission. Link to Learning This interactive map identifies cases of several arboviral diseases in humans and reservoir species by state and year for the United States. Check Your Understanding Why is it unlikely that arboviral encephalitis viruses will be eradicated in the future? Which is the most common form of viral encephalitis in the United States? Clinical Focus Part 2 Levofloxacin is a quinolone antibiotic that is often prescribed to treat bacterial infections of the respiratory tract, including pneumonia and bronchitis. But after taking the medication for a week, David returned to his physician sicker than before. He claimed that the antibiotic had no effect on his earlier symptoms. In addition, he was now experiencing headaches, a stiff neck, and difficulty focusing at work. He also showed the doctor a rash that had developed on his arms over the past week. His doctor, more concerned now, began to ask about David’s activities over the past two weeks. David explained that he had recently been working on a project to disassemble an old barn. His doctor collected sputum samples and scrapings from David’s rash for cultures. A spinal tap was also performed to examine David’s CSF. Microscopic examination of his CSF revealed encapsulated yeast cells. Based on this result, the doctor prescribed a new antimicrobial therapy using amphotericin B and flucytosine. Why was the original treatment ineffective? Why is the presence of a capsule clinically important? Zika Virus Infection Zika virus infection is an emerging arboviral disease associated with human illness in Africa, Southeast Asia, and South and Central America; however, its range is expanding as a result of the broad geographic range of its mosquito vector. The first cases originating in the United States were reported in 2016. The Zika virus was initially described in 1947 from monkeys in the Zika Forest of Uganda through a network that monitored yellow fever. It was not considered a serious human pathogen until the first large-scale outbreaks occurred in Micronesia in 2007; 25 however, the virus has gained notoriety over the past decade as it has emerged as a cause of symptoms similar to other arboviral infections, including fever, skin rashes, conjunctivitis, muscle and joint pain, malaise, and headache.
Mosquitoes of the Aedes genus are the primary vectors, although the virus can also be transmitted sexually, from mother to baby during pregnancy, or through a blood transfusion. 25 Sikka, Veronica, Vijay Kumar Chattu, Raaj K. Popli, Sagar C. Galwankar, Dhanashree Kelkar, Stanley G. Sawicki, Stanislaw P. Stawicki, and Thomas J. Papadimos, “The Emergence of Zika Virus as a Global Health Security Threat: A Review and a Consensus Statement of the INDUSEM Joint Working Group (JWG),” Journal of Global Infectious Diseases 8, no. 1 (2016): 3. Most Zika virus infections result in mild symptoms such as fever, a slight rash, or conjunctivitis. However, infections in pregnant women can adversely affect the developing fetus. Reports in 2015 indicated that fetal infections can result in brain damage, including a serious birth defect called microcephaly, in which the infant is born with an abnormally small head (Figure 26.14). 26 26 Mlakar, Jernej, Misa Korva, Nataša Tul, Mara Popović, Mateja Poljšak-Prijatelj, Jerica Mraz, Marko Kolenc et al., “Zika Virus Associated with Microcephaly,” New England Journal of Medicine 374, no. 10 (2016): 951-8. Diagnosis of Zika is primarily based on clinical symptoms. However, the FDA recently authorized the use of a Zika virus RNA assay, Trioplex RT-PCR, and Zika MAC-ELISA to test patient blood and urine to confirm Zika virus disease. There are currently no antiviral treatments or vaccines for Zika virus, and treatment is limited to supportive care. Check Your Understanding What are the signs and symptoms of Zika virus infection in adults? Why is Zika virus infection considered a serious public health threat? Rabies Rabies is a deadly zoonotic disease that has been known since antiquity. The disease is caused by rabies virus (RV), a member of the family Rhabdoviridae, and is primarily transmitted through the bite of an infected mammal. Rhabdoviridae are enveloped RNA viruses that have a distinctive bullet shape (Figure 26.15); they were first studied by Louis Pasteur, who obtained rabies virus from rabid dogs and cultivated the virus in rabbits. He successfully prepared a rabies vaccine using dried nerve tissues from infected animals. This vaccine was first used to treat an infected human in 1885. The most common reservoirs in the United States are wild animals such as raccoons (30.2% of all animal cases during 2014), bats (29.1%), skunks (26.3%), and foxes (4.1%); collectively, wild animals were responsible for 92.6% of animal rabies cases in the United States in 2014. The remaining 7.4% of cases that year were in domesticated animals such as dogs, cats, horses, mules, sheep, goats, and llamas. 27 While there are typically only one or two human cases per year in the United States, rabies still causes tens of thousands of human deaths per year worldwide, primarily in Asia and Africa. 27 US Centers for Disease Control and Prevention, “Rabies, Wild Animals,” 2016. Accessed September 13, 2016. http://www.cdc.gov/rabies/location/usa/surveillance/wild_animals.html. The low incidence of rabies in the United States is primarily a result of the widespread vaccination of dogs and cats. An oral vaccine is also used to protect wild animals, such as raccoons and foxes, from infection. Oral vaccine programs tend to focus on geographic areas where rabies is endemic. 28 The oral vaccine is usually delivered in a package of bait that is dropped by airplane, although baiting in urban areas is done by hand to maximize safety. 29
Many countries require a quarantine or proof of rabies vaccination for domestic pets being brought into the country. These procedures are especially strict in island nations where rabies is not yet present, such as Australia. 28 Slate, Dennis, Charles E. Rupprecht, Jane A. Rooney, Dennis Donovan, Donald H. Lein, and Richard B. Chipman, “Status of Oral Rabies Vaccination in Wild Carnivores in the United States,” Virus Research 111, no. 1 (2005): 68-76. 29 Finnegan, Christopher J., Sharon M. Brookes, Nicholas Johnson, Jemma Smith, Karen L. Mansfield, Victoria L. Keene, Lorraine M. McElhinney, and Anthony R. Fooks, “Rabies in North America and Europe,” Journal of the Royal Society of Medicine 95, no. 1 (2002): 9-13. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1279140/. The incubation period for rabies can be lengthy, ranging from several weeks or months to over a year. As the virus replicates, it moves from the site of the bite into motor and sensory axons of peripheral nerves and spreads from nerve to nerve using a process called retrograde transport, eventually making its way to the CNS through the spinal ganglia. Once rabies virus reaches the brain, the infection leads to encephalitis caused by the disruption of normal neurotransmitter function, resulting in the symptoms associated with rabies. In the synaptic spaces, the virions compete with a variety of neurotransmitters by binding to acetylcholine, GABA, and glycine receptors. Thus, the action of rabies virus is neurotoxic rather than cytotoxic. After the rabies virus infects the brain, it can continue to spread through other neuronal pathways, traveling out of the CNS to tissues such as the salivary glands, where the virus can be released. As a result, as the disease progresses, the virus can be found in many other tissues, including the salivary glands, taste buds, nasal cavity, and tears. The early symptoms of rabies include discomfort at the site of the bite, fever, and headache. Once the virus reaches the brain and later symptoms appear, the disease is always fatal. Terminal rabies cases can end in one of two ways: furious or paralytic rabies. Individuals with furious rabies become very agitated and hyperactive. Hydrophobia (a fear of water), caused by muscular spasms in the throat that occur when swallowing or even thinking about water, is common in patients with furious rabies. Excess salivation and a desire to bite can lead to foaming of the mouth. These behaviors enhance the likelihood of viral transmission, although contact with infected secretions like saliva or tears alone is sufficient for infection. The disease culminates after just a few days with terror and confusion, followed by cardiovascular and respiratory arrest. In contrast, individuals with paralytic rabies generally follow a longer course of disease. The muscles at the site of infection become paralyzed. Over time, the paralysis slowly spreads throughout the body. This paralytic form of the disease culminates in coma and death. Before present-day diagnostic methods were available, rabies diagnosis was made using a clinical case history and histopathological examination of biopsy or autopsy tissues, looking for the presence of Negri bodies. We now know these histologic changes cannot be used to confirm a rabies diagnosis. There are no tests that can detect rabies virus in humans at the time of the bite or shortly thereafter.
Once the virus has begun to replicate (but before clinical symptoms occur), the virus can be detected using an immunofluorescence test on cutaneous nerves found at the base of hair follicles. Saliva can also be tested for viral genetic material by reverse transcription followed by polymerase chain reaction (RT-PCR). Even when these tests are performed, most suspected infections are treated as positive in the absence of contravening evidence. It is better that patients undergo unnecessary therapy because of a false-positive result than die as the result of a false-negative result. Human rabies infections are treated by immunization with multiple doses of an attenuated vaccine to develop active immunity in the patient (see the Clinical Focus feature in the chapter on Acellular Pathogens). Vaccination of an already-infected individual has the potential to work because of the slow progress of the disease, which allows time for the patient’s immune system to develop antibodies against the virus. Patients may also be treated with human rabies immune globulin (antibodies to the rabies virus) to confer passive immunity. These antibodies will neutralize any free viral particles. Although the rabies infection progresses slowly in peripheral tissues, patients are not normally able to mount a protective immune response on their own. Check Your Understanding How does the bite from an infected animal transmit rabies? What is the goal of wildlife vaccination programs for rabies? How is rabies treated in a human? Poliomyelitis Poliomyelitis (polio), caused by poliovirus, is a primarily intestinal disease that, in a small percentage of cases, proceeds to the nervous system, causing paralysis and, potentially, death. Poliovirus is highly contagious, with transmission occurring by the fecal-oral route or by aerosol or droplet transmission. Approximately 72% of all poliovirus infections are asymptomatic; another 25% result only in mild intestinal disease, producing nausea, fever, and headache. 30 However, even in the absence of symptoms, patients infected with the virus can shed it in feces and oral secretions, potentially transmitting the virus to others. In about one case in every 200, the poliovirus affects cells in the CNS. 31 30 US Centers for Disease Control and Prevention, “Global Health – Polio,” 2014. Accessed June 30, 2016. http://www.cdc.gov/polio/about/index.htm. 31 US Centers for Disease Control and Prevention, “Global Health – Polio,” 2014. Accessed June 30, 2016. http://www.cdc.gov/polio/about/index.htm. After it enters through the mouth, initial replication of poliovirus occurs at the site of implantation in the pharynx and gastrointestinal tract. As the infection progresses, poliovirus is usually present in the throat and in the stool before the onset of symptoms. One week after the onset of symptoms, there is less poliovirus in the throat, but for several weeks, poliovirus continues to be excreted in the stool. Poliovirus invades local lymphoid tissue, enters the bloodstream, and then may infect cells of the CNS. Replication of poliovirus in motor neurons of the anterior horn cells in the spinal cord, brain stem, or motor cortex results in cell destruction and leads to flaccid paralysis. In severe cases, this can involve the respiratory system, leading to death. Patients with impaired respiratory function are treated using positive-pressure ventilation systems. In the past, patients were sometimes confined to Emerson respirators, also known as iron lungs (Figure 26.16).
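Before turning to diagnosis, it is worth translating the infection statistics quoted above into absolute numbers. This is an illustrative calculation only, based on the approximate rates given earlier in this section (72% asymptomatic, 25% mild intestinal disease, and about 1 in 200 infections with CNS involvement):

\[
P(\text{CNS involvement}) \approx \frac{1}{200} = 0.005, \qquad 10{,}000 \times 0.005 = 50
\]

So for every 10,000 poliovirus infections, roughly 7,200 people would show no symptoms, about 2,500 would have only mild intestinal illness, and on the order of 50 would develop CNS involvement with the risk of paralysis. Because even the asymptomatic majority can shed the virus in feces and oral secretions, a large, silent pool of transmitters sustains the spread of the disease.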
Direct detection of the poliovirus from the throat or feces can be achieved using reverse transcriptase PCR (RT-PCR) or genomic sequencing to identify the genotype of the poliovirus infecting the patient. Serological tests can be used to determine whether the patient has been previously vaccinated. There are no therapeutic measures for polio; treatment is limited to various supportive measures. These include pain relievers, rest, heat therapy to ease muscle spasms, physical therapy and corrective braces if necessary to help with walking, and mechanical ventilation to assist with breathing if necessary. Two different vaccines were introduced in the 1950s that have led to the dramatic decrease in polio worldwide (Figure 26.17). The Salk vaccine is an inactivated poliovirus vaccine that was first introduced in 1955. This vaccine is delivered by intramuscular injection. The Sabin vaccine is an oral polio vaccine that contains an attenuated virus; it was licensed for use in 1962. There are three serotypes of poliovirus that cause disease in humans; both the Salk and the Sabin vaccines are effective against all three. Attenuated viruses from the Sabin vaccine are shed in the feces of immunized individuals and thus have the potential to infect nonimmunized individuals. By the late 1990s, the few polio cases originating in the United States could be traced back to the Sabin vaccine. In these cases, mutations of the attenuated virus following vaccination likely allowed the microbe to revert to a virulent form. For this reason, the United States switched exclusively to the Salk vaccine in 2000. Because the Salk vaccine contains an inactivated virus, there is no risk of transmission to others (see Vaccines). Currently, four doses of the vaccine are recommended for children: at 2, 4, and 6–18 months of age, and at 4–6 years of age. In 1988, WHO launched the Global Polio Eradication Initiative with the goal of eradicating polio worldwide through immunization. That goal is now close to being realized. Polio is now endemic in only a few countries, including Afghanistan, Pakistan, and Nigeria, where vaccination efforts have been disrupted by military conflict or political instability. Micro Connections The Terror of Polio In the years after World War II, the United States and the Soviet Union entered a period known as the Cold War. Although there was no armed conflict, the two superpowers were diplomatically and economically isolated from each other, as represented by the so-called Iron Curtain between the Soviet Union and the rest of the world. After 1950, migration or travel outside of the Soviet Union was exceedingly difficult, and it was equally difficult for foreigners to enter the Soviet Union. The United States also placed strict limits on Soviets entering the country. During the Eisenhower administration, only 20 graduate students from the Soviet Union were allowed to come to study in the United States per year. Yet even the Iron Curtain was no match for polio. The Salk vaccine became widely available in the West in 1955, and by the time the Sabin vaccine was ready for clinical trials, most of the susceptible population in the United States and Canada had already been vaccinated against polio. Sabin needed to look elsewhere for study participants. At the height of the Cold War, Mikhail Chumakov was allowed to come to the United States to study Sabin’s work. Likewise, Sabin, an American microbiologist, was allowed to travel to the Soviet Union to begin clinical trials.
Chumakov organized Soviet-based production and managed the experimental trials to test the new vaccine in the Soviet Union. By 1959, over ten million Soviet children had been safely treated with Sabin’s vaccine. As a result of a global vaccination campaign with the Sabin vaccine, the overall incidence of polio has dropped dramatically. Today, polio has been nearly eliminated around the world and is only rarely seen in the United States. Perhaps one day soon, polio will become the third microbial disease to be eradicated from the general population [smallpox and rinderpest (the cause of cattle plague) being the first two]. Check Your Understanding How is poliovirus transmitted? Compare the pros and cons of each of the two polio vaccines. Transmissible Spongiform Encephalopathies Acellular infectious agents called prions are responsible for a group of related diseases known as transmissible spongiform encephalopathies (TSEs) that occur in humans and other animals (see Viroids, Virusoids, and Prions). All TSEs are degenerative, fatal neurological diseases that occur when brain tissue becomes infected by prions. These diseases have a slow onset; symptoms may not become apparent until after an incubation period of years and perhaps decades, but death usually occurs within months to a few years after the first symptoms appear. TSEs in animals include scrapie, a disease in sheep that has been known since the 1700s, and chronic wasting disease, a disease of deer and elk in the United States and Canada. Mad cow disease is seen in cattle and can be transmitted to humans through the consumption of infected nerve tissues. Human prion diseases include Creutzfeldt-Jakob disease and kuru, a rare disease endemic to Papua New Guinea. Prions are infectious proteinaceous particles that are not viruses and do not contain nucleic acid. They are typically transmitted by exposure to and ingestion of infected nervous system tissues, tissue transplants, blood transfusions, or contaminated fomites. Prion proteins are normally found in healthy brain tissue in a form called PrP C. However, if this protein is misfolded into a denatured form (PrP Sc), it can cause disease. Although the exact function of PrP C is not currently understood, the protein folds into mostly alpha helices and binds copper. The rogue protein, on the other hand, folds predominantly into beta-pleated sheets and is resistant to proteolysis. In addition, PrP Sc can induce PrP C to become misfolded and produce more rogue protein (Figure 26.18). As PrP Sc accumulates, it aggregates and forms fibrils within nerve cells. These protein complexes ultimately cause the cells to die. As a consequence, brain tissues of infected individuals form masses of neurofibrillary tangles and amyloid plaques that give the brain a spongy appearance, which is why these diseases are called spongiform encephalopathies (Figure 6.26). Damage to brain tissue results in a variety of neurological symptoms. Most commonly, affected individuals suffer from memory loss, personality changes, blurred vision, uncoordinated movements, and insomnia. These symptoms gradually worsen over time and culminate in coma and death. The gold standard for diagnosing TSE is the histological examination of brain biopsies for the presence of characteristic amyloid plaques, vacuoles, and prion proteins. Great care must be taken by clinicians when handling suspected prion-infected materials to avoid becoming infected themselves.
Other tissue assays search for the presence of the 14-3-3 protein, a marker for prion diseases like Creutzfeldt-Jakob disease. New assays, like RT-QuIC (real-time quaking-induced conversion), offer new hope for effectively detecting the abnormal prion proteins in tissues earlier in the course of infection. Prion diseases cannot be cured. However, some medications may help slow their progress. Medical support is focused on keeping patients as comfortable as possible despite progressive and debilitating symptoms. Link to Learning Because prion-contaminated materials are potential sources of infection for clinical scientists and physicians, both the World Health Organization and the CDC provide information to inform, educate, and minimize the risk of infections due to prions. Check Your Understanding Do prions reproduce in the conventional sense? What is the connection between prions and the removal of animal byproducts from the food of farm animals? Disease Profile Acellular Infections of the Nervous System Serious consequences are the common thread among these neurological diseases. Several cause debilitating paralysis, and some, such as Creutzfeldt-Jakob disease and rabies, are always or nearly always fatal. Since few drugs are available to combat these infections, vector control and vaccination are critical for prevention and containment. Figure 26.19 summarizes some important viral and prion infections of the nervous system. 26.4 Fungal and Parasitic Diseases of the Nervous System Learning Objectives Identify the most common fungi that can cause infections of the nervous system Compare the major characteristics of specific fungal diseases affecting the nervous system Fungal infections of the nervous system, called neuromycoses, are rare in healthy individuals. However, neuromycoses can be devastating in immunocompromised or elderly patients. Several eukaryotic parasites are also capable of infecting the nervous system of human hosts. Although relatively uncommon, these infections can also be life-threatening in immunocompromised individuals. In this section, we will first discuss neuromycoses, followed by parasitic infections of the nervous system. Cryptococcal Meningitis Cryptococcus neoformans is a fungal pathogen that can cause meningitis. This yeast is commonly found in soils and is particularly associated with pigeon droppings. It has a thick capsule that serves as an important virulence factor, inhibiting clearance by phagocytosis. Most C. neoformans cases result in subclinical respiratory infections that, in healthy individuals, generally resolve spontaneously with no long-term consequences (see Respiratory Mycoses). In immunocompromised patients or those with other underlying illnesses, the infection can progress to cause meningitis and granuloma formation in brain tissues. Cryptococcus antigens can also serve to inhibit cell-mediated immunity and delayed-type hypersensitivity. Cryptococcus can be easily cultured in the laboratory and identified based on its extensive capsule (Figure 26.20). C. neoformans is frequently cultured from urine samples of patients with disseminated infections. Prolonged treatment with antifungal drugs is required to treat cryptococcal infections. Combined therapy with amphotericin B plus flucytosine is required for at least 10 weeks. Many antifungal drugs have difficulty crossing the blood-brain barrier and have strong side effects that necessitate low doses; these factors contribute to the lengthy time of treatment.
Patients with AIDS are particularly susceptible to Cryptococcus infections because of their compromised immune state. AIDS patients with cryptococcosis can also be treated with antifungal drugs, but they often have relapses; lifelong doses of fluconazole may be necessary to prevent reinfection. Check Your Understanding Why are neuromycoses rare in the general population? How is a cryptococcal infection acquired? Disease Profile Neuromycoses Neuromycoses typically occur only in immunocompromised individuals and usually only invade the nervous system after first infecting a different body system. As such, many diseases that sometimes affect the nervous system have already been discussed in previous chapters. Figure 26.21 presents some of the most common fungal infections associated with neurological disease. This table includes only the neurological aspects associated with these diseases; it does not include characteristics associated with other body systems. Clinical Focus Resolution David’s new prescription for two antifungal drugs, amphotericin B and flucytosine, proved effective, and his condition began to improve. Culture results from David’s sputum, skin, and CSF samples confirmed a fungal infection. All were positive for C. neoformans. Serological tests of his tissues were also positive for the C. neoformans capsular polysaccharide antigen. Since C. neoformans is known to occur in bird droppings, it is likely that David had been exposed to the fungus while working on the barn. Despite this exposure, David’s doctor explained to him that immunocompetent people rarely contract cryptococcal meningitis and that his immune system had likely been compromised by the anti-inflammatory medication he was taking to treat his Crohn’s disease. However, to rule out other possible causes of immunodeficiency, David’s doctor recommended that he be tested for HIV. After David tested negative for HIV, his doctor took him off the corticosteroid he was using to manage his Crohn’s disease, replacing it with a different class of drug. After several weeks of antifungal treatments, David made a full recovery. Amoebic Meningitis Primary amoebic meningoencephalitis (PAM) is caused by Naegleria fowleri. This amoeboflagellate is commonly found free-living in soils and water. It can exist in one of three forms—the infective amoebic trophozoite form, a motile flagellate form, and a resting cyst form. PAM is a rare disease that has been associated with young and otherwise healthy individuals. Individuals are typically infected by the amoeba while swimming in warm bodies of freshwater such as rivers, lakes, and hot springs. The pathogenic trophozoite infects the brain by initially entering through the nasal passages to the sinuses; it then moves down olfactory nerve fibers to penetrate the submucosal nervous plexus, invades the cribriform plate, and reaches the subarachnoid space. The subarachnoid space is highly vascularized and is a route of dissemination of trophozoites to other areas of the CNS, including the brain (Figure 26.22). Inflammation and destruction of gray matter leads to severe headaches and fever. Within days, confusion and convulsions occur and quickly progress to seizures, coma, and death. The progression can be very rapid, and the disease is often not diagnosed until autopsy. N. fowleri infections can be confirmed by direct observation of CSF; the amoebae can often be seen moving while viewing a fresh CSF wet mount through a microscope.
Flagellated forms can occasionally also be found in CSF. The amoebae can be stained with several stains for identification, including Giemsa-Wright or a modified trichrome stain. Detection of antigens with indirect immunofluorescence, or genetic analysis with PCR, can be used to confirm an initial diagnosis. N. fowleri infections are nearly always fatal; only 3 of 138 patients with PAM in the United States have survived. 32 A new experimental drug called miltefosine shows some promise for treating these infections. This drug is a phosphatidylcholine derivative that is thought to inhibit membrane function in N. fowleri, triggering apoptosis and disturbance of lipid-dependent cell signaling pathways. 33 When administered early in infection and coupled with therapeutic hypothermia (lowering the body’s core temperature to reduce the cerebral edema associated with infection), this drug has been successfully used to treat primary amoebic encephalitis. 32 US Centers for Disease Control and Prevention, “Naegleria fowleri—Primary Amoebic Meningoencephalitis (PAM)—Amebic Encephalitis,” 2016. Accessed June 30, 2016. http://www.cdc.gov/parasites/naegleria/treatment.html. 33 Dorlo, Thomas PC, Manica Balasegaram, Jos H. Beijnen, and Peter J. de Vries, “Miltefosine: A Review of Its Pharmacology and Therapeutic Efficacy in the Treatment of Leishmaniasis,” Journal of Antimicrobial Chemotherapy 67, no. 11 (2012): 2576-97. Granulomatous Amoebic Encephalitis Acanthamoeba and Balamuthia species are free-living amoebae found in many bodies of fresh water. Human infections by these amoebae are rare. However, they can cause amoebic keratitis in contact lens wearers (see Protozoan and Helminthic Infections of the Eyes), disseminated infections in immunocompromised patients, and granulomatous amoebic encephalitis (GAE) in severe cases. Compared with PAM, GAE tends to be a subacute infection. The microbe is thought to enter through either the nasal sinuses or breaks in the skin. It is disseminated hematogenously and can invade the CNS. There, the infection leads to inflammation, formation of lesions, and development of typical neurological symptoms of encephalitis (Figure 26.23). GAE is nearly always fatal. GAE is often not diagnosed until late in the infection. Lesions caused by the infection can be detected using CT or MRI. Live amoebae can be directly detected in CSF or tissue biopsies. Serological tests are available but generally are not necessary to make a correct diagnosis, since the presence of the organism in CSF is definitive. Some antifungal drugs, like fluconazole, have been used to treat acanthamoebal infections. In addition, a combination of miltefosine and voriconazole (an inhibitor of ergosterol biosynthesis) has recently been used to successfully treat GAE. Even with treatment, however, the mortality rate for patients with these infections is high. Check Your Understanding How is granulomatous amoebic encephalitis diagnosed? Human African Trypanosomiasis Human African trypanosomiasis (also known as African sleeping sickness) is a serious disease endemic to two distinct regions in sub-Saharan Africa. It is caused by the insect-borne hemoflagellate Trypanosoma brucei. The subspecies Trypanosoma brucei rhodesiense causes East African trypanosomiasis (EAT), and another subspecies, Trypanosoma brucei gambiense, causes West African trypanosomiasis (WAT). A few hundred cases of EAT are currently reported each year. 34 WAT is more commonly reported and tends to be a more chronic disease.
Around 7,000 to 10,000 new cases of WAT are identified each year. 35 34 US Centers for Disease Control and Prevention, “Parasites – African Trypanosomiasis (also known as Sleeping Sickness), East African Trypanosomiasis FAQs,” 2012. Accessed June 30, 2016. http://www.cdc.gov/parasites/sleepingsickness/gen_info/faqs-east.html. 35 US Centers for Disease Control and Prevention, “Parasites – African Trypanosomiasis (also known as Sleeping Sickness), Epidemiology & Risk Factors,” 2012. Accessed June 30, 2016. http://www.cdc.gov/parasites/sleepingsickness/epi.html. T. brucei is primarily transmitted to humans by the bite of the tsetse fly (Glossina spp.). Soon after the bite of a tsetse fly, a chancre forms at the site of infection. The flagellates then spread, moving into the circulatory system (Figure 26.24). These systemic infections result in an undulating fever, during which symptoms persist for two or three days with remissions of about a week between bouts. As the disease enters its final phase, the pathogens move from the lymphatics into the CNS. Neurological symptoms include daytime sleepiness, insomnia, and mental deterioration. In EAT, the disease runs its course over a span of weeks to months. In contrast, WAT often occurs over a span of months to years. Although a strong immune response is mounted against the trypanosome, it is not sufficient to eliminate the pathogen. Through antigenic variation, Trypanosoma can change its surface proteins into over 100 serological types. This variation leads to the undulating form of the initial disease. The initial septicemia caused by the infection leads to high fevers. As the immune system responds to the infection, the number of organisms decreases, and the clinical symptoms abate. However, a subpopulation of the pathogen then alters its surface coat antigens by antigenic variation and evades the immune response. These flagellates rapidly proliferate and cause another bout of disease. If untreated, these infections are usually fatal. Clinical symptoms can be used to recognize the early signs of African trypanosomiasis. These include the formation of a chancre at the site of infection and Winterbottom’s sign. Winterbottom’s sign refers to the enlargement of lymph nodes on the back of the neck—often indicative of cerebral infection. Trypanosoma can be directly observed in stained samples including blood, lymph, CSF, and skin biopsies of chancres from patients. Antibodies against the parasite are found in most patients with acute or chronic disease. Serologic testing is generally not used for diagnosis, however, since microscopic detection of the parasite is sufficient. Early diagnosis is important for treatment. Before the nervous system is involved, drugs like pentamidine (an inhibitor of nuclear metabolism) and suramin (mechanism unclear) can be used. These drugs have fewer side effects than the drugs needed to treat the second stage of the disease. Once the sleeping sickness phase has begun, harsher drugs including melarsoprol (an arsenic derivative) and eflornithine can be effective. Following successful treatment, patients still need to have follow-up examinations of their CSF for two years to detect possible relapses of the disease. The most effective means of preventing these diseases is to control the insect vector populations. Check Your Understanding What are the symptoms of a systemic Trypanosoma infection? What are the symptoms of a neurological Trypanosoma infection?
Why are trypanosome infections so difficult to eradicate? Neurotoxoplasmosis Toxoplasma gondii is a ubiquitous intracellular parasite that can cause neonatal infections. Cats are the definitive host, and humans can become infected after eating infected meat or, more commonly, by ingesting oocysts shed in the feces of cats (see Parasitic Infections of the Circulatory and Lymphatic Systems). T. gondii enters the circulatory system by passing between the endothelial cells of blood vessels. 36 Most cases of toxoplasmosis are asymptomatic. However, in immunocompromised patients, neurotoxoplasmosis caused by T. gondii infection is one of the most common causes of brain abscesses. 37 The organism is able to cross the blood-brain barrier by infecting the endothelial cells of capillaries in the brain. The parasite reproduces within these cells, a step that appears to be necessary for entry to the brain, and then causes the endothelial cell to lyse, releasing the progeny into brain tissues. This mechanism is quite different from the method it uses to enter the bloodstream in the first place. 38 36 Carruthers, Vern B., and Yasuhiro Suzuki, “Effects of Toxoplasma gondii Infection on the Brain,” Schizophrenia Bulletin 33, no. 3 (2007): 745-51. 37 Uppal, Gulshan, “CNS Toxoplasmosis in HIV,” 2015. Accessed June 30, 2016. http://emedicine.medscape.com/article/1167298-overview#a3. 38 Konradt, Christoph, Norikiyo Ueno, David A. Christian, Jonathan H. Delong, Gretchen Harms Pritchard, Jasmin Herz, David J. Bzik et al., “Endothelial Cells Are a Replicative Niche for Entry of Toxoplasma gondii to the Central Nervous System,” Nature Microbiology 1 (2016): 16001. The brain lesions associated with neurotoxoplasmosis can be detected radiographically using MRI or CAT scans (Figure 26.25). Diagnosis can be confirmed by direct observation of the organism in CSF. RT-PCR assays can also be used to detect T. gondii through genetic markers. Treatment of neurotoxoplasmosis requires six weeks of multi-drug therapy with pyrimethamine, sulfadiazine, and folinic acid. Long-term maintenance doses are often required to prevent recurrence. Check Your Understanding Under what conditions is Toxoplasma infection serious? How does Toxoplasma circumvent the blood-brain barrier? Neurocysticercosis Cysticercosis is a parasitic infection caused by the larval form of the pork tapeworm, Taenia solium. When the larvae invade the brain and spinal cord, the condition is referred to as neurocysticercosis. This condition affects millions of people worldwide and is the leading cause of adult-onset epilepsy in the developing world. 39 39 DeGiorgio, Christopher M., Marco T. Medina, Reyna Durón, Chi Zee, and Susan Pietsch Escueta, “Neurocysticercosis,” Epilepsy Currents 4, no. 3 (2004): 107-11. The life cycle of T. solium is discussed in Helminthic Infections of the Gastrointestinal Tract. Following ingestion, the eggs hatch in the intestine to form larvae called cysticerci. Adult tapeworms form in the small intestine and produce eggs that are shed in the feces. These eggs can infect other individuals through fecal contamination of food or other surfaces. Eggs can also hatch within the intestine of the original patient and lead to an ongoing autoinfection. The cysticerci can migrate to the blood and invade many tissues in the body, including the CNS. Neurocysticercosis is usually diagnosed through noninvasive techniques.
Epidemiological information can be used as an initial screen; cysticercosis is endemic in Central and South America, Africa, and Asia. Radiological imaging (MRI and CT scans) is the primary method used to diagnose neurocysticercosis; imaging can detect the one- to two-centimeter cysts that form around the parasites (Figure 26.26). Elevated levels of eosinophils in the blood can also indicate a parasitic infection. EIA and ELISA are also used to detect antigens associated with the pathogen. The treatment for neurocysticercosis depends on the location, number, size, and stage of the cysticerci present. Antihelminthic chemotherapy includes albendazole and praziquantel. Because these drugs kill viable cysts, they may acutely worsen symptoms by provoking an inflammatory response to the Taenia antigens released as the cysts are destroyed. Corticosteroids that cross the blood-brain barrier (e.g., dexamethasone) can be used to mitigate this response. Surgical intervention may be required to remove intraventricular cysts. Disease Profile Parasitic Diseases of the Nervous System
Parasites that successfully invade the nervous system can cause a wide range of neurological signs and symptoms. Often, they inflict lesions that can be visualized through radiologic imaging. A number of these infections are fatal, but some can be treated (with varying levels of success) with antimicrobial drugs (Figure 26.27). Check Your Understanding
What neurological condition is associated with neurocysticercosis?
How is neurocysticercosis diagnosed?
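The boom-and-bust parasitemia produced by antigenic variation (described above for Trypanosoma) lends itself to a simple numerical illustration. The sketch below is a hypothetical toy model, not anything from the text: the growth factor, switching fraction, and clearance threshold are invented values chosen only to show the shape of the dynamic, in which each serotype expands until the antibody response clears it and the small antigen-switched subpopulation seeds the next wave.

```python
# Toy model of antigenic variation in Trypanosoma (illustrative only;
# GROWTH, SWITCH, and THRESHOLD are invented values, not measured data).
GROWTH = 1.8       # per-cycle multiplication factor for one serotype
SWITCH = 1e-3      # fraction switching to a new surface-coat serotype
THRESHOLD = 1e6    # parasite load at which antibodies clear the serotype

def undulating_waves(n_waves=4, seed_load=100.0):
    """Return the growth cycles needed for each successive parasitemia wave."""
    cycles_per_wave = []
    load = seed_load
    for _ in range(n_waves):
        cycles = 0
        while load < THRESHOLD:        # current serotype expands unchecked
            load *= GROWTH
            cycles += 1
        cycles_per_wave.append(cycles)
        load *= SWITCH                 # only antigen-switched escapees remain
    return cycles_per_wave

print(undulating_waves())  # [16, 12, 12, 11] with these toy parameters
```

After the first wave, each relapse recurs after roughly the same number of growth cycles, mirroring the regular remission-and-relapse pattern of the undulating fever.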
anatomy_and_physiology
Chapter Objectives
After studying this chapter, you will be able to:
Identify the contributions of the endocrine system to homeostasis
Discuss the chemical composition of hormones and the mechanisms of hormone action
Summarize the site of production, regulation, and effects of the hormones of the pituitary, thyroid, parathyroid, adrenal, and pineal glands
Discuss the hormonal regulation of the reproductive system
Explain the role of the pancreatic endocrine cells in the regulation of blood glucose
Identify the hormones released by the heart, kidneys, and other organs with secondary endocrine functions
Discuss several common diseases associated with endocrine system dysfunction
Discuss the embryonic development of, and the effects of aging on, the endocrine system
Introduction
You may never have thought of it this way, but when you send a text message to two friends to meet you at the dining hall at six, you’re sending digital signals that (you hope) will affect their behavior—even though they are some distance away. Similarly, certain cells send chemical signals to other cells in the body that influence their behavior. This long-distance intercellular communication, coordination, and control is critical for homeostasis, and it is the fundamental function of the endocrine system.
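The assessment items that follow are stored as structured records: an answer (a choice index plus its text), a Bloom level, the highlighted source passages (hl_context and hl_sentences), and a question object with cloze and normal formats, answer choices, and an ID. A minimal loading-and-validation sketch is below; it assumes Python, a hypothetical file name questions.json holding the list, and invariants (required keys, choice index in range, answer text matching the chosen option) that are observed in these records rather than specified anywhere.

```python
import json

# Keys observed in every record below; "bloom" is present but may be null.
REQUIRED = {"answer", "bloom", "hl_context", "hl_sentences", "question"}

def load_questions(path="questions.json"):  # hypothetical file name
    with open(path, encoding="utf-8") as f:
        records = json.load(f)              # the section below is a JSON list
    for i, rec in enumerate(records):
        missing = REQUIRED - rec.keys()
        if missing:
            raise ValueError(f"record {i} is missing {sorted(missing)}")
        question, answer = rec["question"], rec["answer"]
        choices = question["question_choices"]
        # ans_choice is a 0-based index into question_choices ...
        if not 0 <= answer["ans_choice"] < len(choices):
            raise ValueError(f"record {i}: ans_choice out of range")
        # ... and ans_text duplicates the chosen option's text.
        if choices[answer["ans_choice"]] != answer["ans_text"]:
            raise ValueError(f"record {i}: answer text/choice mismatch")
    return records
```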
[ { "answer": { "ans_choice": 2, "ans_text": "secrete chemical messengers that travel in the bloodstream" }, "bloom": "1", "hl_context": "<hl> An endocrine gland may also secrete a hormone in response to the presence of another hormone produced by a different endocrine gland . <hl> <hl> Such hormonal stimuli often involve the hypothalamus , which produces releasing and inhibiting hormones that control the secretion of a variety of pituitary hormones . <hl> <hl> In endocrine signaling , hormones secreted into the extracellular fluid diffuse into the blood or lymph , and can then travel great distances throughout the body . <hl> In contrast , autocrine signaling takes place within the same cell . An autocrine ( auto - = “ self ” ) is a chemical that elicits a response in the same cell that secreted it . Interleukin - 1 , or IL - 1 , is a signaling molecule that plays an important role in inflammatory response . The cells that secrete IL - 1 have receptors on their cell surface that bind these molecules , resulting in autocrine signaling . <hl> Structures of the Endocrine System The endocrine system consists of cells , tissues , and organs that secrete hormones as a primary or secondary function . <hl> <hl> The endocrine gland is the major player in this system . <hl> <hl> The primary function of these ductless glands is to secrete their hormones directly into the surrounding fluid . <hl> <hl> The interstitial fluid and the blood vessels then transport the hormones throughout the body . <hl> The endocrine system includes the pituitary , thyroid , parathyroid , adrenal , and pineal glands ( Figure 17.2 ) . Some of these glands have both endocrine and non-endocrine functions . For example , the pancreas contains cells that function in digestion as well as cells that secrete the hormones insulin and glucagon , which regulate blood glucose levels . The hypothalamus , thymus , heart , kidneys , stomach , small intestine , liver , skin , female ovaries , and male testes are other organs that contain cells with endocrine function . Moreover , adipose tissue has long been known to produce hormones , and recent research has revealed that even bone tissue has endocrine functions . The ductless endocrine glands are not to be confused with the body ’ s exocrine system , whose glands release their secretions through ducts . Examples of exocrine glands include the sebaceous and sweat glands of the skin . As just noted , the pancreas also has an exocrine function : most of its cells secrete pancreatic juice through the pancreatic and accessory ducts to the lumen of the small intestine .", "hl_sentences": "An endocrine gland may also secrete a hormone in response to the presence of another hormone produced by a different endocrine gland . Such hormonal stimuli often involve the hypothalamus , which produces releasing and inhibiting hormones that control the secretion of a variety of pituitary hormones . In endocrine signaling , hormones secreted into the extracellular fluid diffuse into the blood or lymph , and can then travel great distances throughout the body . Structures of the Endocrine System The endocrine system consists of cells , tissues , and organs that secrete hormones as a primary or secondary function . The endocrine gland is the major player in this system . The primary function of these ductless glands is to secrete their hormones directly into the surrounding fluid . 
The interstitial fluid and the blood vessels then transport the hormones throughout the body .", "question": { "cloze_format": "Endocrine glands ________.", "normal_format": "Which of the following is correct about endocrine glands?", "question_choices": [ "secrete hormones that travel through a duct to the target organs", "release neurotransmitters into the synaptic cleft", "secrete chemical messengers that travel in the bloodstream", "include sebaceous glands and sweat glands" ], "question_id": "fs-id1513874", "question_text": "Endocrine glands ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "paracrine" }, "bloom": "1", "hl_context": "<hl> Local intercellular communication is the province of the paracrine , also called a paracrine factor , which is a chemical that induces a response in neighboring cells . <hl> Although paracrines may enter the bloodstream , their concentration is generally too low to elicit a response from distant tissues . A familiar example to those with asthma is histamine , a paracrine that is released by immune cells in the bronchial tree . Histamine causes the smooth muscle cells of the bronchi to constrict , narrowing the airways . Another example is the neurotransmitters of the nervous system , which act only locally within the synaptic cleft .", "hl_sentences": "Local intercellular communication is the province of the paracrine , also called a paracrine factor , which is a chemical that induces a response in neighboring cells .", "question": { "cloze_format": "Chemical signaling that affects neighboring cells is called ________.", "normal_format": "What is chemical signaling that affects neighboring cells called?", "question_choices": [ "autocrine", "paracrine", "endocrine", "neuron" ], "question_id": "fs-id1812561", "question_text": "Chemical signaling that affects neighboring cells is called ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "thyroid hormone" }, "bloom": "1", "hl_context": "Adequate levels of thyroid hormones are also required for protein synthesis and for fetal and childhood tissue development and growth . They are especially critical for normal development of the nervous system both in utero and in early childhood , and they continue to support neurological function in adults . As noted earlier , these thyroid hormones have a complex interrelationship with reproductive hormones , and deficiencies can influence libido , fertility , and other aspects of reproductive function . Finally , thyroid hormones increase the body ’ s sensitivity to catecholamines ( epinephrine and norepinephrine ) from the adrenal medulla by upregulation of receptors in the blood vessels . When levels of T 3 and T 4 hormones are excessive , this effect accelerates the heart rate , strengthens the heartbeat , and increases blood pressure . <hl> Because thyroid hormones regulate metabolism , heat production , protein synthesis , and many other body functions , thyroid disorders can have severe and widespread consequences . <hl> <hl> Intracellular hormone receptors are located inside the cell . <hl> Hormones that bind to this type of receptor must be able to cross the cell membrane . Steroid hormones are derived from cholesterol and therefore can readily diffuse through the lipid bilayer of the cell membrane to reach the intracellular receptor ( Figure 17.4 ) . <hl> Thyroid hormones , which contain benzene rings studded with iodine , are also lipid-soluble and can enter the cell . 
<hl>", "hl_sentences": "Because thyroid hormones regulate metabolism , heat production , protein synthesis , and many other body functions , thyroid disorders can have severe and widespread consequences . Intracellular hormone receptors are located inside the cell . Thyroid hormones , which contain benzene rings studded with iodine , are also lipid-soluble and can enter the cell .", "question": { "cloze_format": "A newly developed pesticide has been observed to bind to an intracellular hormone receptor. If ingested, residue from this pesticide could disrupt levels of ________.", "normal_format": "A newly developed pesticide has been observed to bind to an intracellular hormone receptor. If ingested, residue from this pesticide could disrupt levels of what?", "question_choices": [ "melatonin", "thyroid hormone", "growth hormone", "insulin" ], "question_id": "fs-id805158", "question_text": "A newly developed pesticide has been observed to bind to an intracellular hormone receptor. If ingested, residue from this pesticide could disrupt levels of ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "neural" }, "bloom": "1", "hl_context": "<hl> The secretion of medullary epinephrine and norepinephrine is controlled by a neural pathway that originates from the hypothalamus in response to danger or stress ( the SAM pathway ) . <hl> Both epinephrine and norepinephrine signal the liver and skeletal muscle cells to convert glycogen into glucose , resulting in increased blood glucose levels . <hl> These hormones increase the heart rate , pulse , and blood pressure to prepare the body to fight the perceived threat or flee from it . <hl> In addition , the pathway dilates the airways , raising blood oxygen levels . It also prompts vasodilation , further increasing the oxygenation of important organs such as the lungs , brain , heart , and skeletal muscle . At the same time , it triggers vasoconstriction to blood vessels serving less essential organs such as the gastrointestinal tract , kidneys , and skin , and downregulates some components of the immune system . Other effects include a dry mouth , loss of appetite , pupil dilation , and a loss of peripheral vision . The major hormones of the adrenal glands are summarized in Table 17.5 . In addition to these chemical signals , hormones can also be released in response to neural stimuli . <hl> A common example of neural stimuli is the activation of the fight-or-flight response by the sympathetic nervous system . <hl> <hl> When an individual perceives danger , sympathetic neurons signal the adrenal glands to secrete norepinephrine and epinephrine . <hl> <hl> The two hormones dilate blood vessels , increase the heart and respiratory rate , and suppress the digestive and immune systems . <hl> These responses boost the body ’ s transport of oxygen to the brain and muscles , thereby improving the body ’ s ability to fight or flee .", "hl_sentences": "The secretion of medullary epinephrine and norepinephrine is controlled by a neural pathway that originates from the hypothalamus in response to danger or stress ( the SAM pathway ) . These hormones increase the heart rate , pulse , and blood pressure to prepare the body to fight the perceived threat or flee from it . A common example of neural stimuli is the activation of the fight-or-flight response by the sympathetic nervous system . When an individual perceives danger , sympathetic neurons signal the adrenal glands to secrete norepinephrine and epinephrine . 
The two hormones dilate blood vessels , increase the heart and respiratory rate , and suppress the digestive and immune systems .", "question": { "cloze_format": "A student is in a car accident, and although not hurt, immediately experiences pupil dilation, increased heart rate, and rapid breathing. The student received a ___ endocrine system stimulus.", "normal_format": "A student is in a car accident, and although not hurt, immediately experiences pupil dilation, increased heart rate, and rapid breathing. What type of endocrine system stimulus did the student receive?", "question_choices": [ "humoral", "hormonal", "neural", "positive feedback" ], "question_id": "fs-id1173979", "question_text": "A student is in a car accident, and although not hurt, immediately experiences pupil dilation, increased heart rate, and rapid breathing. What type of endocrine system stimulus did the student receive?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "nerve axons" }, "bloom": null, "hl_context": "<hl> Posterior Pituitary The posterior pituitary is actually an extension of the neurons of the paraventricular and supraoptic nuclei of the hypothalamus . <hl> <hl> The cell bodies of these regions rest in the hypothalamus , but their axons descend as the hypothalamic – hypophyseal tract within the infundibulum , and end in axon terminals that comprise the posterior pituitary ( Figure 17.8 ) . <hl> The posterior pituitary gland does not produce hormones , but rather stores and secretes hormones produced by the hypothalamus . The paraventricular nuclei produce the hormone oxytocin , whereas the supraoptic nuclei produce ADH . These hormones travel along the axons into storage sites in the axon terminals of the posterior pituitary . In response to signals from the same hypothalamic neurons , the hormones are released from the axon terminals into the bloodstream .", "hl_sentences": "Posterior Pituitary The posterior pituitary is actually an extension of the neurons of the paraventricular and supraoptic nuclei of the hypothalamus . The cell bodies of these regions rest in the hypothalamus , but their axons descend as the hypothalamic – hypophyseal tract within the infundibulum , and end in axon terminals that comprise the posterior pituitary ( Figure 17.8 ) .", "question": { "cloze_format": "The hypothalamus is functionally and anatomically connected to the posterior pituitary lobe by a bridge of ________.", "normal_format": "The hypothalamus is functionally and anatomically connected to the posterior pituitary lobe by which bridge?", "question_choices": [ "blood vessels", "nerve axons", "cartilage", "bone" ], "question_id": "fs-id2593908", "question_text": "The hypothalamus is functionally and anatomically connected to the posterior pituitary lobe by a bridge of ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "TSH" }, "bloom": "1", "hl_context": "The activity of the thyroid gland is regulated by thyroid-stimulating hormone ( TSH ) , also called thyrotropin . <hl> TSH is released from the anterior pituitary in response to thyrotropin-releasing hormone ( TRH ) from the hypothalamus . <hl> As discussed shortly , it triggers the secretion of thyroid hormones by the thyroid gland . In a classic negative feedback loop , elevated levels of thyroid hormones in the bloodstream then trigger a drop in production of TRH and subsequently TSH . <hl> The anterior pituitary produces seven hormones . 
<hl> <hl> These are the growth hormone ( GH ) , thyroid-stimulating hormone ( TSH ) , adrenocorticotropic hormone ( ACTH ) , follicle-stimulating hormone ( FSH ) , luteinizing hormone ( LH ) , beta endorphin , and prolactin . <hl> Of the hormones of the anterior pituitary , TSH , ACTH , FSH , and LH are collectively referred to as tropic hormones ( trope - = “ turning ” ) because they turn on or off the function of other endocrine glands .", "hl_sentences": "TSH is released from the anterior pituitary in response to thyrotropin-releasing hormone ( TRH ) from the hypothalamus . The anterior pituitary produces seven hormones . These are the growth hormone ( GH ) , thyroid-stimulating hormone ( TSH ) , adrenocorticotropic hormone ( ACTH ) , follicle-stimulating hormone ( FSH ) , luteinizing hormone ( LH ) , beta endorphin , and prolactin .", "question": { "cloze_format": "___ is an anterior pituitary hormone.", "normal_format": "Which of the following is an anterior pituitary hormone?", "question_choices": [ "ADH", "oxytocin", "TSH", "cortisol" ], "question_id": "fs-id1085968", "question_text": "Which of the following is an anterior pituitary hormone?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "0" }, "bloom": "1", "hl_context": "<hl> Recall that the posterior pituitary does not synthesize hormones , but merely stores them . <hl> <hl> In contrast , the anterior pituitary does manufacture hormones . <hl> However , the secretion of hormones from the anterior pituitary is regulated by two classes of hormones . These hormones — secreted by the hypothalamus — are the releasing hormones that stimulate the secretion of hormones from the anterior pituitary and the inhibiting hormones that inhibit secretion . Posterior Pituitary The posterior pituitary is actually an extension of the neurons of the paraventricular and supraoptic nuclei of the hypothalamus . The cell bodies of these regions rest in the hypothalamus , but their axons descend as the hypothalamic – hypophyseal tract within the infundibulum , and end in axon terminals that comprise the posterior pituitary ( Figure 17.8 ) . <hl> The posterior pituitary gland does not produce hormones , but rather stores and secretes hormones produced by the hypothalamus . <hl> The paraventricular nuclei produce the hormone oxytocin , whereas the supraoptic nuclei produce ADH . These hormones travel along the axons into storage sites in the axon terminals of the posterior pituitary . In response to signals from the same hypothalamic neurons , the hormones are released from the axon terminals into the bloodstream .", "hl_sentences": "Recall that the posterior pituitary does not synthesize hormones , but merely stores them . In contrast , the anterior pituitary does manufacture hormones . The posterior pituitary gland does not produce hormones , but rather stores and secretes hormones produced by the hypothalamus .", "question": { "cloze_format": "___ hormones are produced by the posterior pituitary.", "normal_format": "How many hormones are produced by the posterior pituitary?", "question_choices": [ "0", "1", "2", "6" ], "question_id": "fs-id1932618", "question_text": "How many hormones are produced by the posterior pituitary?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "antidiuretic hormone" }, "bloom": "1", "hl_context": "<hl> The most superficial region of the adrenal cortex is the zona glomerulosa , which produces a group of hormones collectively referred to as mineralocorticoids because of their effect on body minerals , especially sodium and potassium . <hl> <hl> These hormones are essential for fluid and electrolyte balance . <hl> <hl> Examples of peptide hormones include antidiuretic hormone ( ADH ) , a pituitary hormone important in fluid balance , and atrial-natriuretic peptide , which is produced by the heart and helps to decrease blood pressure . <hl> Some examples of protein hormones include growth hormone , which is produced by the pituitary gland , and follicle-stimulating hormone ( FSH ) , which has an attached carbohydrate group and is thus classified as a glycoprotein . FSH helps stimulate the maturation of eggs in the ovaries and sperm in the testes .", "hl_sentences": "The most superficial region of the adrenal cortex is the zona glomerulosa , which produces a group of hormones collectively referred to as mineralocorticoids because of their effect on body minerals , especially sodium and potassium . These hormones are essential for fluid and electrolyte balance . Examples of peptide hormones include antidiuretic hormone ( ADH ) , a pituitary hormone important in fluid balance , and atrial-natriuretic peptide , which is produced by the heart and helps to decrease blood pressure .", "question": { "cloze_format": "The hormone that contributes to the regulation of the body’s fluid and electrolyte balance is the ___ .", "normal_format": "Which of the following hormones contributes to the regulation of the body’s fluid and electrolyte balance?", "question_choices": [ "adrenocorticotropic hormone", "antidiuretic hormone", "luteinizing hormone", "all of the above" ], "question_id": "fs-id1120911", "question_text": "Which of the following hormones contributes to the regulation of the body’s fluid and electrolyte balance?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "It is located anterior to the trachea and inferior to the larynx." }, "bloom": "1", "hl_context": "<hl> A butterfly-shaped organ , the thyroid gland is located anterior to the trachea , just inferior to the larynx ( Figure 17.12 ) . <hl> The medial region , called the isthmus , is flanked by wing-shaped left and right lobes . Each of the thyroid lobes are embedded with parathyroid glands , primarily on their posterior surfaces . The tissue of the thyroid gland is composed mostly of thyroid follicles . The follicles are made up of a central cavity filled with a sticky fluid called colloid . 
Surrounded by a wall of epithelial follicle cells , the colloid is the center of thyroid hormone production , and that production is dependent on the hormones ’ essential and unique component : iodine .", "hl_sentences": "A butterfly-shaped organ , the thyroid gland is located anterior to the trachea , just inferior to the larynx ( Figure 17.12 ) .", "question": { "cloze_format": "The statement about the thyroid gland that is true is that ___.", "normal_format": "Which of the following statements about the thyroid gland is true?", "question_choices": [ "It is located anterior to the trachea and inferior to the larynx.", "The parathyroid glands are embedded within it.", "It manufactures three hormones.", "all of the above" ], "question_id": "fs-id994926", "question_text": "Which of the following statements about the thyroid gland is true?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "TSH from the anterior pituitary" }, "bloom": "1", "hl_context": "<hl> The release of T 3 and T 4 from the thyroid gland is regulated by thyroid-stimulating hormone ( TSH ) . <hl> As shown in Figure 17.13 , low blood levels of T 3 and T 4 stimulate the release of thyrotropin-releasing hormone ( TRH ) from the hypothalamus , which triggers secretion of TSH from the anterior pituitary . In turn , TSH stimulates the thyroid gland to secrete T 3 and T 4 . The levels of TRH , TSH , T 3 , and T 4 are regulated by a negative feedback system in which increasing levels of T 3 and T 4 decrease the production and secretion of TSH . <hl> The activity of the thyroid gland is regulated by thyroid-stimulating hormone ( TSH ) , also called thyrotropin . <hl> <hl> TSH is released from the anterior pituitary in response to thyrotropin-releasing hormone ( TRH ) from the hypothalamus . <hl> As discussed shortly , it triggers the secretion of thyroid hormones by the thyroid gland . In a classic negative feedback loop , elevated levels of thyroid hormones in the bloodstream then trigger a drop in production of TRH and subsequently TSH .", "hl_sentences": "The release of T 3 and T 4 from the thyroid gland is regulated by thyroid-stimulating hormone ( TSH ) . The activity of the thyroid gland is regulated by thyroid-stimulating hormone ( TSH ) , also called thyrotropin . TSH is released from the anterior pituitary in response to thyrotropin-releasing hormone ( TRH ) from the hypothalamus .", "question": { "cloze_format": "The secretion of thyroid hormones is controlled by ________.", "normal_format": "What is the secretion of thyroid hormones controlled by?", "question_choices": [ "TSH from the hypothalamus", "TSH from the anterior pituitary", "thyroxine from the anterior pituitary", "thyroglobulin from the thyroid’s parafollicular cells" ], "question_id": "fs-id1422524", "question_text": "The secretion of thyroid hormones is controlled by ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "there is an excessive accumulation of colloid in the thyroid follicles" }, "bloom": "1", "hl_context": "Dietary iodine deficiency can result in the impaired ability to synthesize T 3 and T 4 , leading to a variety of severe disorders . When T 3 and T 4 cannot be produced , TSH is secreted in increasing amounts . <hl> As a result of this hyperstimulation , thyroglobulin accumulates in the thyroid gland follicles , increasing their deposits of colloid . 
<hl> <hl> The accumulation of colloid increases the overall size of the thyroid gland , a condition called a goiter ( Figure 17.14 ) . <hl> A goiter is only a visible indication of the deficiency . Other iodine deficiency disorders include impaired growth and development , decreased fertility , and prenatal and infant death . Moreover , iodine deficiency is the primary cause of preventable mental retardation worldwide . Neonatal hypothyroidism ( cretinism ) is characterized by cognitive deficits , short stature , and sometimes deafness and muteness in children and adults born to mothers who were iodine-deficient during pregnancy .", "hl_sentences": "As a result of this hyperstimulation , thyroglobulin accumulates in the thyroid gland follicles , increasing their deposits of colloid . The accumulation of colloid increases the overall size of the thyroid gland , a condition called a goiter ( Figure 17.14 ) .", "question": { "cloze_format": "The development of a goiter indicates that ________.", "normal_format": "What does the development of a goiter indicate?", "question_choices": [ "the anterior pituitary is abnormally enlarged", "there is hypertrophy of the thyroid’s follicle cells", "there is an excessive accumulation of colloid in the thyroid follicles", "the anterior pituitary is secreting excessive growth hormone" ], "question_id": "fs-id1236458", "question_text": "The development of a goiter indicates that ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "active transport" }, "bloom": "1", "hl_context": "<hl> Binding of TSH to its receptors in the follicle cells of the thyroid gland causes the cells to actively transport iodide ions ( I – ) across their cell membrane , from the bloodstream into the cytosol . <hl> As a result , the concentration of iodide ions “ trapped ” in the follicular cells is many times higher than the concentration in the bloodstream .", "hl_sentences": "Binding of TSH to its receptors in the follicle cells of the thyroid gland causes the cells to actively transport iodide ions ( I – ) across their cell membrane , from the bloodstream into the cytosol .", "question": { "cloze_format": "Iodide ions cross from the bloodstream into follicle cells via ________.", "normal_format": "Iodide ions cross from the bloodstream into follicle cells via what?", "question_choices": [ "simple diffusion", "facilitated diffusion", "active transport", "osmosis" ], "question_id": "fs-id1321262", "question_text": "Iodide ions cross from the bloodstream into follicle cells via ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "fractures" }, "bloom": "1", "hl_context": "<hl> In contrast , abnormally low blood calcium levels may be caused by parathyroid hormone deficiency , called hypoparathyroidism , which may develop following injury or surgery involving the thyroid gland . <hl> Low blood calcium increases membrane permeability to sodium , resulting in muscle twitching , cramping , spasms , or convulsions . Severe deficits can paralyze muscles , including those involved in breathing , and can be fatal . Abnormally high activity of the parathyroid gland can cause hyperparathyroidism , a disorder caused by an overproduction of PTH that results in excessive calcium reabsorption from bone . <hl> Hyperparathyroidism can significantly decrease bone density , leading to spontaneous fractures or deformities . 
<hl> As blood calcium levels rise , cell membrane permeability to sodium is decreased , and the responsiveness of the nervous system is reduced . At the same time , calcium deposits may collect in the body ’ s tissues and organs , impairing their functioning .", "hl_sentences": "In contrast , abnormally low blood calcium levels may be caused by parathyroid hormone deficiency , called hypoparathyroidism , which may develop following injury or surgery involving the thyroid gland . Hyperparathyroidism can significantly decrease bone density , leading to spontaneous fractures or deformities .", "question": { "cloze_format": "___ can result from hyperparathyroidism.", "normal_format": "Which of the following can result from hyperparathyroidism?", "question_choices": [ "increased bone deposition", "fractures", "convulsions", "all of the above" ], "question_id": "fs-id1398516", "question_text": "Which of the following can result from hyperparathyroidism?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "kidneys" }, "bloom": null, "hl_context": "<hl> The adrenal glands are wedges of glandular and neuroendocrine tissue adhering to the top of the kidneys by a fibrous capsule ( Figure 17.17 ) . <hl> The adrenal glands have a rich blood supply and experience one of the highest rates of blood flow in the body . They are served by several arteries branching off the aorta , including the suprarenal and renal arteries . Blood flows to each adrenal gland at the adrenal cortex and then drains into the adrenal medulla . Adrenal hormones are released into the circulation via the left and right suprarenal veins .", "hl_sentences": "The adrenal glands are wedges of glandular and neuroendocrine tissue adhering to the top of the kidneys by a fibrous capsule ( Figure 17.17 ) .", "question": { "cloze_format": "The organ that the adrenal glands are attached to superiorly is the ___.", "normal_format": "The adrenal glands are attached superiorly to which organ?", "question_choices": [ "thyroid", "liver", "kidneys", "hypothalamus" ], "question_id": "fs-id1247068", "question_text": "The adrenal glands are attached superiorly to which organ?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "chromaffin cells" }, "bloom": null, "hl_context": "<hl> The medullary tissue is composed of unique postganglionic SNS neurons called chromaffin cells , which are large and irregularly shaped , and produce the neurotransmitters epinephrine ( also called adrenaline ) and norepinephrine ( or noradrenaline ) . <hl> Epinephrine is produced in greater quantities — approximately a 4 to 1 ratio with norepinephrine — and is the more powerful hormone . Because the chromaffin cells release epinephrine and norepinephrine into the systemic circulation , where they travel widely and exert effects on distant cells , they are considered hormones . 
Derived from the amino acid tyrosine , they are chemically classified as catecholamines .", "hl_sentences": "The medullary tissue is composed of unique postganglionic SNS neurons called chromaffin cells , which are large and irregularly shaped , and produce the neurotransmitters epinephrine ( also called adrenaline ) and norepinephrine ( or noradrenaline ) .", "question": { "cloze_format": "The secretory cell type that is found in the adrenal medulla is ___.", "normal_format": "What secretory cell type is found in the adrenal medulla?", "question_choices": [ "chromaffin cells", "neuroglial cells", "follicle cells", "oxyphil cells" ], "question_id": "fs-id1376839", "question_text": "What secretory cell type is found in the adrenal medulla?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "abnormally high levels of cortisol" }, "bloom": "2", "hl_context": "Several disorders are caused by the dysregulation of the hormones produced by the adrenal glands . <hl> For example , Cushing ’ s disease is a disorder characterized by high blood glucose levels and the accumulation of lipid deposits on the face and neck . <hl> It is caused by hypersecretion of cortisol . The most common source of Cushing ’ s disease is a pituitary tumor that secretes cortisol or ACTH in abnormally high amounts . Other common signs of Cushing ’ s disease include the development of a moon-shaped face , a buffalo hump on the back of the neck , rapid weight gain , and hair loss . Chronically elevated glucose levels are also associated with an elevated risk of developing type 2 diabetes . In addition to hyperglycemia , chronically elevated glucocorticoids compromise immunity , resistance to infection , and memory , and can result in rapid weight gain and hair loss .", "hl_sentences": "For example , Cushing ’ s disease is a disorder characterized by high blood glucose levels and the accumulation of lipid deposits on the face and neck .", "question": { "cloze_format": "Cushing’s disease is a disorder caused by ________.", "normal_format": "Cushing’s disease is a disorder caused by which of the following?", "question_choices": [ "abnormally low levels of cortisol", "abnormally high levels of cortisol", "abnormally low levels of aldosterone", "abnormally high levels of aldosterone" ], "question_id": "fs-id13985160", "question_text": "Cushing’s disease is a disorder caused by ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "reduced mental activity" }, "bloom": null, "hl_context": "In addition to these chemical signals , hormones can also be released in response to neural stimuli . A common example of neural stimuli is the activation of the fight-or-flight response by the sympathetic nervous system . When an individual perceives danger , sympathetic neurons signal the adrenal glands to secrete norepinephrine and epinephrine . <hl> The two hormones dilate blood vessels , increase the heart and respiratory rate , and suppress the digestive and immune systems . 
<hl> These responses boost the body ’ s transport of oxygen to the brain and muscles , thereby improving the body ’ s ability to fight or flee .", "hl_sentences": "The two hormones dilate blood vessels , increase the heart and respiratory rate , and suppress the digestive and immune systems .", "question": { "cloze_format": "The response that is not part of the fight-or-flight response is ___.", "normal_format": "Which of the following responses is not part of the fight-or-flight response?", "question_choices": [ "pupil dilation", "increased oxygen supply to the lungs", "suppressed digestion", "reduced mental activity" ], "question_id": "fs-id1378590", "question_text": "Which of the following responses is not part of the fight-or-flight response?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "pinealocytes" }, "bloom": "1", "hl_context": "Recall that the hypothalamus , part of the diencephalon of the brain , sits inferior and somewhat anterior to the thalamus . Inferior but somewhat posterior to the thalamus is the pineal gland , a tiny endocrine gland whose functions are not entirely clear . <hl> The pinealocyte cells that make up the pineal gland are known to produce and secrete the amine hormone melatonin , which is derived from serotonin . <hl>", "hl_sentences": "The pinealocyte cells that make up the pineal gland are known to produce and secrete the amine hormone melatonin , which is derived from serotonin .", "question": { "cloze_format": "Cells that secrete melatonin are ___.", "normal_format": "What cells secrete melatonin?", "question_choices": [ "melanocytes", "pinealocytes", "suprachiasmatic nucleus cells", "retinal cells" ], "question_id": "fs-id1351515", "question_text": "What cells secrete melatonin?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "exposure to bright light" }, "bloom": "1", "hl_context": "<hl> The secretion of melatonin varies according to the level of light received from the environment . <hl> When photons of light stimulate the retinas of the eyes , a nerve impulse is sent to a region of the hypothalamus called the suprachiasmatic nucleus ( SCN ) , which is important in regulating biological rhythms . From the SCN , the nerve signal is carried to the spinal cord and eventually to the pineal gland , where the production of melatonin is inhibited . As a result , blood levels of melatonin fall , promoting wakefulness . <hl> In contrast , as light levels decline — such as during the evening — melatonin production increases , boosting blood levels and causing drowsiness . <hl>", "hl_sentences": "The secretion of melatonin varies according to the level of light received from the environment . In contrast , as light levels decline — such as during the evening — melatonin production increases , boosting blood levels and causing drowsiness .", "question": { "cloze_format": "The production of melatonin is inhibited by ________.", "normal_format": "What inhibits the production of melatonin?", "question_choices": [ "declining levels of light", "exposure to bright light", "the secretion of serotonin", "the activity of pinealocytes" ], "question_id": "fs-id1414216", "question_text": "The production of melatonin is inhibited by ________." 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "steroid hormones" }, "bloom": "2", "hl_context": "<hl> The deepest region of the adrenal cortex is the zona reticularis , which produces small amounts of a class of steroid sex hormones called androgens . <hl> During puberty and most of adulthood , androgens are produced in the gonads . The androgens produced in the zona reticularis supplement the gonadal androgens . They are produced in response to ACTH from the anterior pituitary and are converted in the tissues to testosterone or estrogens . In adult women , they may contribute to the sex drive , but their function in adult men is not well understood . In post-menopausal women , as the functions of the ovaries decline , the main source of estrogens becomes the androgens produced by the zona reticularis . The primary hormones derived from lipids are steroids . Steroid hormones are derived from the lipid cholesterol . <hl> For example , the reproductive hormones testosterone and the estrogens — which are produced by the gonads ( testes and ovaries ) — are steroid hormones . <hl> The adrenal glands produce the steroid hormone aldosterone , which is involved in osmoregulation , and cortisol , which plays a role in metabolism .", "hl_sentences": "The deepest region of the adrenal cortex is the zona reticularis , which produces small amounts of a class of steroid sex hormones called androgens . For example , the reproductive hormones testosterone and the estrogens — which are produced by the gonads ( testes and ovaries ) — are steroid hormones .", "question": { "cloze_format": "The class of hormones that the gonads produce are ___.", "normal_format": "The gonads produce what class of hormones?", "question_choices": [ "amine hormones", "peptide hormones", "steroid hormones", "catecholamines" ], "question_id": "fs-id1399907", "question_text": "The gonads produce what class of hormones?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "inhibin" }, "bloom": "1", "hl_context": "The primary hormone produced by the male testes is testosterone , a steroid hormone important in the development of the male reproductive system , the maturation of sperm cells , and the development of male secondary sex characteristics such as a deepened voice , body hair , and increased muscle mass . Interestingly , testosterone is also produced in the female ovaries , but at a much reduced level . <hl> In addition , the testes produce the peptide hormone inhibin , which inhibits the secretion of FSH from the anterior pituitary gland . <hl> FSH stimulates spermatogenesis .", "hl_sentences": "In addition , the testes produce the peptide hormone inhibin , which inhibits the secretion of FSH from the anterior pituitary gland .", "question": { "cloze_format": "The production of FSH by the anterior pituitary is reduced by the ___ hormone.", "normal_format": "The production of FSH by the anterior pituitary is reduced by which hormone?", "question_choices": [ "estrogens", "progesterone", "relaxin", "inhibin" ], "question_id": "fs-id1394735", "question_text": "The production of FSH by the anterior pituitary is reduced by which hormone?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "prepare the breasts for lactation" }, "bloom": "1", "hl_context": "The primary hormones produced by the ovaries are estrogens , which include estradiol , estriol , and estrone . 
Estrogens play an important role in a large number of physiological processes , including the development of the female reproductive system , regulation of the menstrual cycle , the development of female secondary sex characteristics such as increased adipose tissue and the development of breast tissue , and the maintenance of pregnancy . Another significant ovarian hormone is progesterone , which contributes to regulation of the menstrual cycle and is important in preparing the body for pregnancy as well as maintaining pregnancy . In addition , the granulosa cells of the ovarian follicles produce inhibin , which — as in males — inhibits the secretion of FSH . During the initial stages of pregnancy , an organ called the placenta develops within the uterus . The placenta supplies oxygen and nutrients to the fetus , excretes waste products , and produces and secretes estrogens and progesterone . The placenta produces human chorionic gonadotropin ( hCG ) as well . The hCG hormone promotes progesterone synthesis and reduces the mother ’ s immune function to protect the fetus from immune rejection . <hl> It also secretes human placental lactogen ( hPL ) , which plays a role in preparing the breasts for lactation , and relaxin , which is thought to help soften and widen the pubic symphysis in preparation for childbirth . <hl> The hormones controlling reproduction are summarized in Table 17.6 . Reproductive Hormones", "hl_sentences": "It also secretes human placental lactogen ( hPL ) , which plays a role in preparing the breasts for lactation , and relaxin , which is thought to help soften and widen the pubic symphysis in preparation for childbirth .", "question": { "cloze_format": "The function of the placental hormone human placental lactogen (hPL) is to ________.", "normal_format": "What is the function of the placental hormone human placental lactogen (hPL)?", "question_choices": [ "prepare the breasts for lactation", "nourish the placenta", "regulate the menstrual cycle", "all of the above" ], "question_id": "fs-id1382723", "question_text": "The function of the placental hormone human placental lactogen (hPL) is to ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "Insulin facilitates the movement of intracellular glucose transporters to the cell membrane." }, "bloom": "1", "hl_context": "The presence of food in the intestine triggers the release of gastrointestinal tract hormones such as glucose-dependent insulinotropic peptide ( previously known as gastric inhibitory peptide ) . This is in turn the initial trigger for insulin production and secretion by the beta cells of the pancreas . Once nutrient absorption occurs , the resulting surge in blood glucose levels further stimulates insulin secretion . Precisely how insulin facilitates glucose uptake is not entirely clear . <hl> However , insulin appears to activate a tyrosine kinase receptor , triggering the phosphorylation of many substrates within the cell . <hl> <hl> These multiple biochemical reactions converge to support the movement of intracellular vesicles containing facilitative glucose transporters to the cell membrane . <hl> In the absence of insulin , these transport proteins are normally recycled slowly between the cell membrane and cell interior . Insulin triggers the rapid movement of a pool of glucose transporter vesicles to the cell membrane , where they fuse and expose the glucose transporters to the extracellular fluid . 
The transporters then move glucose by facilitated diffusion into the cell interior . <hl> The primary function of insulin is to facilitate the uptake of glucose into body cells . <hl> <hl> Red blood cells , as well as cells of the brain , liver , kidneys , and the lining of the small intestine , do not have insulin receptors on their cell membranes and do not require insulin for glucose uptake . <hl> <hl> Although all other body cells do require insulin if they are to take glucose from the bloodstream , skeletal muscle cells and adipose cells are the primary targets of insulin . <hl>", "hl_sentences": "However , insulin appears to activate a tyrosine kinase receptor , triggering the phosphorylation of many substrates within the cell . These multiple biochemical reactions converge to support the movement of intracellular vesicles containing facilitative glucose transporters to the cell membrane . The primary function of insulin is to facilitate the uptake of glucose into body cells . Red blood cells , as well as cells of the brain , liver , kidneys , and the lining of the small intestine , do not have insulin receptors on their cell membranes and do not require insulin for glucose uptake . Although all other body cells do require insulin if they are to take glucose from the bloodstream , skeletal muscle cells and adipose cells are the primary targets of insulin .", "question": { "cloze_format": "The statement about insulin that is true is that ___.", "normal_format": "Which of the following statements about insulin is true?", "question_choices": [ "Insulin acts as a transport protein, carrying glucose across the cell membrane.", "Insulin facilitates the movement of intracellular glucose transporters to the cell membrane.", "Insulin stimulates the breakdown of stored glycogen into glucose.", "Insulin stimulates the kidneys to reabsorb glucose into the bloodstream." ], "question_id": "fs-id1219310", "question_text": "Which of the following statements about insulin is true?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "atrial natriuretic peptide" }, "bloom": "1", "hl_context": "When the body experiences an increase in blood volume or pressure , the cells of the heart ’ s atrial wall stretch . <hl> In response , specialized cells in the wall of the atria produce and secrete the peptide hormone atrial natriuretic peptide ( ANP ) . <hl> ANP signals the kidneys to reduce sodium reabsorption , thereby decreasing the amount of water reabsorbed from the urine filtrate and reducing blood volume . Other actions of ANP include the inhibition of renin secretion , thus inhibition of the renin-angiotensin-aldosterone system ( RAAS ) and vasodilation . Therefore , ANP aids in decreasing blood pressure , blood volume , and blood sodium levels .", "hl_sentences": "In response , specialized cells in the wall of the atria produce and secrete the peptide hormone atrial natriuretic peptide ( ANP ) .", "question": { "cloze_format": "The hormone that the walls of the atria produce is ___.", "normal_format": "The walls of the atria produce which hormone?", "question_choices": [ "cholecystokinin", "atrial natriuretic peptide", "renin", "calcitriol" ], "question_id": "fs-id1897050", "question_text": "The walls of the atria produce which hormone?" 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "increase blood pressure" }, "bloom": null, "hl_context": "When the body experiences an increase in blood volume or pressure , the cells of the heart ’ s atrial wall stretch . In response , specialized cells in the wall of the atria produce and secrete the peptide hormone atrial natriuretic peptide ( ANP ) . ANP signals the kidneys to reduce sodium reabsorption , thereby decreasing the amount of water reabsorbed from the urine filtrate and reducing blood volume . <hl> Other actions of ANP include the inhibition of renin secretion , thus inhibition of the renin-angiotensin-aldosterone system ( RAAS ) and vasodilation . <hl> <hl> Therefore , ANP aids in decreasing blood pressure , blood volume , and blood sodium levels . <hl>", "hl_sentences": "Other actions of ANP include the inhibition of renin secretion , thus inhibition of the renin-angiotensin-aldosterone system ( RAAS ) and vasodilation . Therefore , ANP aids in decreasing blood pressure , blood volume , and blood sodium levels .", "question": { "cloze_format": "The end result of the RAAS is to ________.", "normal_format": "What is the end result of the RAAS?", "question_choices": [ "reduce blood volume", "increase blood glucose", "reduce blood pressure", "increase blood pressure" ], "question_id": "fs-id1765862", "question_text": "The end result of the RAAS is to ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "blood oxygen levels" }, "bloom": "1", "hl_context": "The endocrine system can be exploited for illegal or unethical purposes . A prominent example of this is the use of steroid drugs by professional athletes . Commonly used for performance enhancement , anabolic steroids are synthetic versions of the male sex hormone , testosterone . <hl> By boosting natural levels of this hormone , athletes experience increased muscle mass . <hl> Synthetic versions of human growth hormone are also used to build muscle mass . The use of performance-enhancing drugs is banned by all major collegiate and professional sports organizations in the United States because they impart an unfair advantage to athletes who take them . In addition , the drugs can cause significant and dangerous side effects . For example , anabolic steroid use can increase cholesterol levels , raise blood pressure , and damage the liver . Altered testosterone levels ( both too low or too high ) have been implicated in causing structural damage to the heart , and increasing the risk for cardiac arrhythmias , heart attacks , congestive heart failure , and sudden death . Paradoxically , steroids can have a feminizing effect in males , including shriveled testicles and enlarged breast tissue . In females , their use can cause masculinizing effects such as an enlarged clitoris and growth of facial hair . In both sexes , their use can promote increased aggression ( commonly known as “ roid-rage ” ) , depression , sleep disturbances , severe acne , and infertility . 17.9 The Endocrine Pancreas Learning Objectives By the end of this section , you will be able to : The kidneys participate in several complex endocrine pathways and produce certain hormones . A decline in blood flow to the kidneys stimulates them to release the enzyme renin , triggering the renin-angiotensin-aldosterone ( RAAS ) system , and stimulating the reabsorption of sodium and water . The reabsorption increases blood flow and blood pressure . 
The kidneys also play a role in regulating blood calcium levels through the production of calcitriol from vitamin D 3 , which is released in response to the secretion of parathyroid hormone ( PTH ) . In addition , the kidneys produce the hormone erythropoietin ( EPO ) in response to low oxygen levels . <hl> EPO stimulates the production of red blood cells ( erythrocytes ) in the bone marrow , thereby increasing oxygen delivery to tissues . <hl> You may have heard of EPO as a performance-enhancing drug ( in a synthetic form ) .", "hl_sentences": "By boosting natural levels of this hormone , athletes experience increased muscle mass . EPO stimulates the production of red blood cells ( erythrocytes ) in the bone marrow , thereby increasing oxygen delivery to tissues .", "question": { "cloze_format": "Athletes may take synthetic EPO to boost their ________.", "normal_format": "Why might athletes take synthetic EPO?", "question_choices": [ "blood calcium levels", "secretion of growth hormone", "blood oxygen levels", "muscle mass" ], "question_id": "fs-id1641299", "question_text": "Athletes may take synthetic EPO to boost their ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "development of T cells" }, "bloom": "1", "hl_context": "The thymus is an organ of the immune system that is larger and more active during infancy and early childhood , and begins to atrophy as we age . <hl> Its endocrine function is the production of a group of hormones called thymosins that contribute to the development and differentiation of T lymphocytes , which are immune cells . <hl> Although the role of thymosins is not yet well understood , it is clear that they contribute to the immune response . Thymosins have been found in tissues other than the thymus and have a wide variety of functions , so the thymosins cannot be strictly categorized as thymic hormones .", "hl_sentences": "Its endocrine function is the production of a group of hormones called thymosins that contribute to the development and differentiation of T lymphocytes , which are immune cells .", "question": { "cloze_format": "Hormones produced by the thymus play a role in the ________.", "normal_format": "In which of the following do hormones produced by the thymus play a role?", "question_choices": [ "development of T cells", "preparation of the body for childbirth", "regulation of appetite", "release of hydrochloric acid in the stomach" ], "question_id": "fs-id1933534", "question_text": "Hormones produced by the thymus play a role in the ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "oral ectoderm" }, "bloom": "1", "hl_context": "The hypothalamus – pituitary complex can be thought of as the “ command center ” of the endocrine system . This complex secretes several hormones that directly produce responses in target tissues , as well as hormones that regulate the synthesis and secretion of hormones of other glands . In addition , the hypothalamus – pituitary complex coordinates the messages of the endocrine and nervous systems . In many cases , a stimulus received by the nervous system must pass through the hypothalamus – pituitary complex to be translated into hormones that can initiate a response . The hypothalamus is a structure of the diencephalon of the brain located anterior and inferior to the thalamus ( Figure 17.7 ) . It has both neural and endocrine functions , producing and secreting many hormones . 
In addition , the hypothalamus is anatomically and functionally related to the pituitary gland ( or hypophysis ) , a bean-sized organ suspended from it by a stem called the infundibulum ( or pituitary stalk ) . The pituitary gland is cradled within the sella turcica of the sphenoid bone of the skull . <hl> It consists of two lobes that arise from distinct parts of embryonic tissue : the posterior pituitary ( neurohypophysis ) is neural tissue , whereas the anterior pituitary ( also known as the adenohypophysis ) is glandular tissue that develops from the primitive digestive tract . <hl> The hormones secreted by the posterior and anterior pituitary , and the intermediate zone between the lobes are summarized in Table 17.3 . Pituitary Hormones The endocrine system arises from all three embryonic germ layers . The endocrine glands that produce the steroid hormones , such as the gonads and adrenal cortex , arise from the mesoderm . In contrast , endocrine glands that arise from the endoderm and ectoderm produce the amine , peptide , and protein hormones . <hl> The pituitary gland arises from two distinct areas of the ectoderm : the anterior pituitary gland arises from the oral ectoderm , whereas the posterior pituitary gland arises from the neural ectoderm at the base of the hypothalamus . <hl> The pineal gland also arises from the ectoderm . The two structures of the adrenal glands arise from two different germ layers : the adrenal cortex from the mesoderm and the adrenal medulla from ectoderm neural cells . The endoderm gives rise to the thyroid and parathyroid glands , as well as the pancreas and the thymus . As the body ages , changes occur that affect the endocrine system , sometimes altering the production , secretion , and catabolism of hormones . For example , the structure of the anterior pituitary gland changes as vascularization decreases and the connective tissue content increases with increasing age . This restructuring affects the gland ’ s hormone production . For example , the amount of human growth hormone that is produced declines with age , resulting in the reduced muscle mass commonly observed in the elderly . The adrenal glands also undergo changes as the body ages ; as fibrous tissue increases , the production of cortisol and aldosterone decreases . Interestingly , the production and secretion of epinephrine and norepinephrine remain normal throughout the aging process .", "hl_sentences": "It consists of two lobes that arise from distinct parts of embryonic tissue : the posterior pituitary ( neurohypophysis ) is neural tissue , whereas the anterior pituitary ( also known as the adenohypophysis ) is glandular tissue that develops from the primitive digestive tract . The pituitary gland arises from two distinct areas of the ectoderm : the anterior pituitary gland arises from the oral ectoderm , whereas the posterior pituitary gland arises from the neural ectoderm at the base of the hypothalamus .", "question": { "cloze_format": "The anterior pituitary gland develops from the ___ embryonic germ layer.", "normal_format": "The anterior pituitary gland develops from which embryonic germ layer?", "question_choices": [ "oral ectoderm", "neural ectoderm", "mesoderm", "endoderm" ], "question_id": "fs-id1954035", "question_text": "The anterior pituitary gland develops from which embryonic germ layer?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "decreased basal metabolic rate" }, "bloom": "1", "hl_context": "In areas of the world with access to iodized salt , dietary deficiency is rare . Instead , inflammation of the thyroid gland is the more common cause of low blood levels of thyroid hormones . Called hypothyroidism , the condition is characterized by a low metabolic rate , weight gain , cold extremities , constipation , reduced libido , menstrual irregularities , and reduced mental activity . In contrast , hyperthyroidism — an abnormally elevated blood level of thyroid hormones — is often caused by a pituitary or thyroid tumor . In Graves ’ disease , the hyperthyroid state results from an autoimmune reaction in which antibodies overstimulate the follicle cells of the thyroid gland . <hl> Hyperthyroidism can lead to an increased metabolic rate , excessive body heat and sweating , diarrhea , weight loss , tremors , and increased heart rate . <hl> The person ’ s eyes may bulge ( called exophthalmos ) as antibodies produce inflammation in the soft tissues of the orbits . The person may also develop a goiter . <hl> The thyroid hormones , T 3 and T 4 , are often referred to as metabolic hormones because their levels influence the body ’ s basal metabolic rate , the amount of energy used by the body at rest . <hl> When T 3 and T 4 bind to intracellular receptors located on the mitochondria , they cause an increase in nutrient breakdown and the use of oxygen to produce ATP . In addition , T 3 and T 4 initiate the transcription of genes involved in glucose oxidation . Although these mechanisms prompt cells to produce more ATP , the process is inefficient , and an abnormally increased level of heat is released as a byproduct of these reactions . This so-called calorigenic effect ( calor - = “ heat ” ) raises body temperature . <hl> As the body ages , the thyroid gland produces less of the thyroid hormones , causing a gradual decrease in the basal metabolic rate . <hl> The lower metabolic rate reduces the production of body heat and increases levels of body fat . Parathyroid hormones , on the other hand , increase with age . This may be because of reduced dietary calcium levels , causing a compensatory increase in parathyroid hormone . However , increased parathyroid hormone levels combined with decreased levels of calcitonin ( and estrogens in women ) can lead to osteoporosis as PTH stimulates demineralization of bones to increase blood calcium levels . Notice that osteoporosis is common in both elderly males and females .", "hl_sentences": "Hyperthyroidism can lead to an increased metabolic rate , excessive body heat and sweating , diarrhea , weight loss , tremors , and increased heart rate . The thyroid hormones , T 3 and T 4 , are often referred to as metabolic hormones because their levels influence the body ’ s basal metabolic rate , the amount of energy used by the body at rest . As the body ages , the thyroid gland produces less of the thyroid hormones , causing a gradual decrease in the basal metabolic rate .", "question": { "cloze_format": "In the elderly, decreased thyroid function causes ________.", "normal_format": "In the elderly, what causes decreased thyroid function?", "question_choices": [ "increased tolerance for cold", "decreased basal metabolic rate", "decreased body fat", "osteoporosis" ], "question_id": "fs-id1382612", "question_text": "In the elderly, decreased thyroid function causes ________." 
}, "references_are_paraphrase": null } ]
17.1 An Overview of the Endocrine System

Learning Objectives

By the end of this section, you will be able to:
Distinguish the types of intercellular communication, their importance, mechanisms, and effects
Identify the major organs and tissues of the endocrine system and their location in the body

Communication is a process in which a sender transmits signals to one or more receivers to control and coordinate actions. In the human body, two major organ systems participate in relatively “long distance” communication: the nervous system and the endocrine system. Together, these two systems are primarily responsible for maintaining homeostasis in the body.

Neural and Endocrine Signaling

The nervous system uses two types of intercellular communication—electrical and chemical signaling—either by the direct action of an electrical potential or, in the latter case, through the action of chemical neurotransmitters such as serotonin or norepinephrine. Neurotransmitters act locally and rapidly. When an electrical signal in the form of an action potential arrives at the synaptic terminal, neurotransmitters are released and diffuse across the synaptic cleft (the gap between a sending neuron and a receiving neuron or muscle cell). Once the neurotransmitters interact (bind) with receptors on the receiving (post-synaptic) cell, the receptor stimulation is transduced into a response such as continued electrical signaling or modification of the cellular response. The target cell responds within milliseconds of receiving the chemical “message”; this response then ceases very quickly once the neural signaling ends. In this way, neural communication enables body functions that involve quick, brief actions, such as movement, sensation, and cognition.

In contrast, the endocrine system uses just one method of communication: chemical signaling. These signals are sent by the endocrine organs, which secrete chemicals—hormones—into the extracellular fluid. Hormones are transported primarily via the bloodstream throughout the body, where they bind to receptors on target cells, inducing a characteristic response. As a result, endocrine signaling requires more time than neural signaling to prompt a response in target cells, though the precise amount of time varies with different hormones. For example, the fight-or-flight response that occurs when you are confronted with a dangerous or frightening situation is mediated by the release of adrenal hormones—epinephrine and norepinephrine—within seconds. In contrast, it may take up to 48 hours for target cells to respond to certain reproductive hormones.

Interactive Link

Visit this link to watch an animation of the events that occur when a hormone binds to a cell membrane receptor. What is the secondary messenger made by adenylyl cyclase during the activation of liver cells by epinephrine?

In addition, endocrine signaling is typically less specific than neural signaling. The same hormone may play a role in a variety of different physiological processes depending on the target cells involved. For example, the hormone oxytocin promotes uterine contractions in women in labor. It is also important in breastfeeding, and may be involved in the sexual response and in feelings of emotional attachment in both males and females. In general, the nervous system involves quick responses to rapid changes in the external environment, and the endocrine system is usually slower acting—taking care of the internal environment of the body, maintaining homeostasis, and controlling reproduction (Table 17.1).
So how does the fight-or-flight response that was mentioned earlier happen so quickly if hormones are usually slower acting? It is because the two systems are connected. It is the fast action of the nervous system in response to the danger in the environment that stimulates the adrenal glands to secrete their hormones. As a result, the nervous system can cause rapid endocrine responses to keep up with sudden changes in both the external and internal environments when necessary.

Endocrine and Nervous Systems
Characteristic | Endocrine system | Nervous system
Signaling mechanism(s) | Chemical | Chemical/electrical
Primary chemical signal | Hormones | Neurotransmitters
Distance traveled | Long or short | Always short
Response time | Fast or slow | Always fast
Environment targeted | Internal | Internal and external
Table 17.1

Structures of the Endocrine System

The endocrine system consists of cells, tissues, and organs that secrete hormones as a primary or secondary function. The endocrine gland is the major player in this system. The primary function of these ductless glands is to secrete their hormones directly into the surrounding fluid. The interstitial fluid and the blood vessels then transport the hormones throughout the body. The endocrine system includes the pituitary, thyroid, parathyroid, adrenal, and pineal glands (Figure 17.2). Some of these glands have both endocrine and non-endocrine functions. For example, the pancreas contains cells that function in digestion as well as cells that secrete the hormones insulin and glucagon, which regulate blood glucose levels. The hypothalamus, thymus, heart, kidneys, stomach, small intestine, liver, skin, female ovaries, and male testes are other organs that contain cells with endocrine function. Moreover, adipose tissue has long been known to produce hormones, and recent research has revealed that even bone tissue has endocrine functions.

The ductless endocrine glands are not to be confused with the body’s exocrine system, whose glands release their secretions through ducts. Examples of exocrine glands include the sebaceous and sweat glands of the skin. As just noted, the pancreas also has an exocrine function: most of its cells secrete pancreatic juice through the pancreatic and accessory ducts to the lumen of the small intestine.

Other Types of Chemical Signaling

In endocrine signaling, hormones secreted into the extracellular fluid diffuse into the blood or lymph, and can then travel great distances throughout the body. In contrast, autocrine signaling takes place within the same cell. An autocrine (auto- = “self”) is a chemical that elicits a response in the same cell that secreted it. Interleukin-1, or IL-1, is a signaling molecule that plays an important role in inflammatory response. The cells that secrete IL-1 have receptors on their cell surface that bind these molecules, resulting in autocrine signaling.

Local intercellular communication is the province of the paracrine, also called a paracrine factor, which is a chemical that induces a response in neighboring cells. Although paracrines may enter the bloodstream, their concentration is generally too low to elicit a response from distant tissues. A familiar example to those with asthma is histamine, a paracrine that is released by immune cells in the bronchial tree. Histamine causes the smooth muscle cells of the bronchi to constrict, narrowing the airways. Another example is the neurotransmitters of the nervous system, which act only locally within the synaptic cleft.
Career Connection

Endocrinologist

Endocrinology is a specialty in the field of medicine that focuses on the treatment of endocrine system disorders. Endocrinologists—medical doctors who specialize in this field—are experts in treating diseases associated with hormonal systems, ranging from thyroid disease to diabetes mellitus. Endocrine surgeons treat endocrine disease through the removal, or resection, of the affected endocrine gland.

Patients who are referred to endocrinologists may have signs and symptoms or blood test results that suggest excessive or impaired functioning of an endocrine gland or endocrine cells. The endocrinologist may order additional blood tests to determine whether the patient’s hormonal levels are abnormal, or they may stimulate or suppress the function of the suspect endocrine gland and then have blood taken for analysis. Treatment varies according to the diagnosis. Some endocrine disorders, such as type 2 diabetes, may respond to lifestyle changes such as modest weight loss, adoption of a healthy diet, and regular physical activity. Other disorders may require medication, such as hormone replacement, and routine monitoring by the endocrinologist. These include disorders of the pituitary gland that can affect growth and disorders of the thyroid gland that can result in a variety of metabolic problems. Some patients experience health problems as a result of the normal decline in hormones that can accompany aging. These patients can consult with an endocrinologist to weigh the risks and benefits of hormone replacement therapy intended to boost their natural levels of reproductive hormones. In addition to treating patients, endocrinologists may be involved in research to improve the understanding of endocrine system disorders and develop new treatments for these diseases.

17.10 Organs with Secondary Endocrine Functions

Learning Objectives

By the end of this section, you will be able to:
Identify the organs with a secondary endocrine function, the hormone they produce, and its effects

In your study of anatomy and physiology, you have already encountered a few of the many organs of the body that have secondary endocrine functions. Here, you will learn about the hormone-producing activities of the heart, gastrointestinal tract, kidneys, skeleton, adipose tissue, skin, and thymus.

Heart

When the body experiences an increase in blood volume or pressure, the cells of the heart’s atrial wall stretch. In response, specialized cells in the wall of the atria produce and secrete the peptide hormone atrial natriuretic peptide (ANP). ANP signals the kidneys to reduce sodium reabsorption, thereby decreasing the amount of water reabsorbed from the urine filtrate and reducing blood volume. ANP also inhibits the secretion of renin, thereby inhibiting the renin-angiotensin-aldosterone system (RAAS), and promotes vasodilation. Therefore, ANP aids in decreasing blood pressure, blood volume, and blood sodium levels.

Gastrointestinal Tract

The endocrine cells of the GI tract are located in the mucosa of the stomach and small intestine. Some of the hormones they produce are secreted in response to eating a meal and aid in digestion. An example of a hormone secreted by the stomach cells is gastrin, a peptide hormone secreted in response to stomach distention that stimulates the release of hydrochloric acid. Secretin is a peptide hormone secreted by the small intestine as acidic chyme (partially digested food and fluid) moves from the stomach.
It stimulates the release of bicarbonate from the pancreas, which buffers the acidic chyme, and inhibits the further secretion of hydrochloric acid by the stomach. Cholecystokinin (CCK) is another peptide hormone released from the small intestine. It promotes the secretion of pancreatic enzymes and the release of bile from the gallbladder, both of which facilitate digestion. Other hormones produced by the intestinal cells aid in glucose metabolism, such as by stimulating the pancreatic beta cells to secrete insulin, reducing glucagon secretion from the alpha cells, or enhancing cellular sensitivity to insulin.

Kidneys

The kidneys participate in several complex endocrine pathways and produce certain hormones. A decline in blood flow to the kidneys stimulates them to release the enzyme renin, triggering the renin-angiotensin-aldosterone system (RAAS) and stimulating the reabsorption of sodium and water. The reabsorption increases blood flow and blood pressure. The kidneys also play a role in regulating blood calcium levels through the production of calcitriol from vitamin D3, which is released in response to the secretion of parathyroid hormone (PTH). In addition, the kidneys produce the hormone erythropoietin (EPO) in response to low oxygen levels. EPO stimulates the production of red blood cells (erythrocytes) in the bone marrow, thereby increasing oxygen delivery to tissues. You may have heard of EPO as a performance-enhancing drug (in a synthetic form).

Skeleton

Although bone has long been recognized as a target for hormones, only recently have researchers recognized that the skeleton itself produces at least two hormones. Fibroblast growth factor 23 (FGF23) is produced by bone cells in response to increased blood levels of vitamin D3 or phosphate. It triggers the kidneys to inhibit the formation of calcitriol from vitamin D3 and to increase phosphorus excretion. Osteocalcin, produced by osteoblasts, stimulates the pancreatic beta cells to increase insulin production. It also acts on peripheral tissues to increase their sensitivity to insulin and their utilization of glucose.

Adipose Tissue

Adipose tissue produces and secretes several hormones involved in lipid metabolism and storage. One important example is leptin, a protein manufactured by adipose cells that circulates in amounts directly proportional to levels of body fat. Leptin is released in response to food consumption and acts by binding to brain neurons involved in energy intake and expenditure. Binding of leptin produces a feeling of satiety after a meal, thereby reducing appetite. It also appears that the binding of leptin to brain receptors triggers the sympathetic nervous system to regulate bone metabolism, increasing deposition of cortical bone. Adiponectin—another hormone synthesized by adipose cells—appears to reduce cellular insulin resistance and to protect blood vessels from inflammation and atherosclerosis. Its levels are lower in people who are obese, and rise following weight loss.

Skin

The skin functions as an endocrine organ in the production of the inactive form of vitamin D3, cholecalciferol. When cholesterol present in the epidermis is exposed to ultraviolet radiation, it is converted to cholecalciferol, which then enters the blood. In the liver, cholecalciferol is converted to an intermediate that travels to the kidneys and is further converted to calcitriol, the active form of vitamin D3.
Vitamin D is important in a variety of physiological processes, including intestinal calcium absorption and immune system function. In some studies, low levels of vitamin D have been associated with increased risks of cancer, severe asthma, and multiple sclerosis. Vitamin D deficiency in children causes rickets, and in adults, osteomalacia—both of which are characterized by bone deterioration.

Thymus

The thymus is an organ of the immune system that is larger and more active during infancy and early childhood, and begins to atrophy as we age. Its endocrine function is the production of a group of hormones called thymosins that contribute to the development and differentiation of T lymphocytes, which are immune cells. Although the role of thymosins is not yet well understood, it is clear that they contribute to the immune response. Thymosins have been found in tissues other than the thymus and have a wide variety of functions, so the thymosins cannot be strictly categorized as thymic hormones.

Liver

The liver is responsible for secreting at least four important hormones or hormone precursors: insulin-like growth factor (somatomedin), angiotensinogen, thrombopoietin, and hepcidin. Insulin-like growth factor-1 is the immediate stimulus for growth in the body, especially of the bones. Angiotensinogen is the precursor to angiotensin, mentioned earlier, which increases blood pressure. Thrombopoietin stimulates the production of the blood’s platelets. Hepcidin blocks the release of iron from cells in the body, helping to regulate iron homeostasis in our body fluids. The major hormones of these other organs are summarized in Table 17.8.

Organs with Secondary Endocrine Functions and Their Major Hormones
Organ | Major hormones | Effects
Heart | Atrial natriuretic peptide (ANP) | Reduces blood volume, blood pressure, and Na+ concentration
Gastrointestinal tract | Gastrin, secretin, and cholecystokinin | Aid digestion of food and buffering of stomach acids
Gastrointestinal tract | Glucose-dependent insulinotropic peptide (GIP) and glucagon-like peptide 1 (GLP-1) | Stimulate beta cells of the pancreas to release insulin
Kidneys | Renin | Stimulates release of aldosterone
Kidneys | Calcitriol | Aids in the absorption of Ca2+
Kidneys | Erythropoietin | Triggers the formation of red blood cells in the bone marrow
Skeleton | FGF23 | Inhibits production of calcitriol and increases phosphate excretion
Skeleton | Osteocalcin | Increases insulin production
Adipose tissue | Leptin | Promotes satiety signals in the brain
Adipose tissue | Adiponectin | Reduces insulin resistance
Skin | Cholecalciferol | Modified to form vitamin D
Thymus (and other organs) | Thymosins | Among other things, aids in the development of T lymphocytes of the immune system
Liver | Insulin-like growth factor-1 | Stimulates bodily growth
Liver | Angiotensinogen | Raises blood pressure
Liver | Thrombopoietin | Causes increase in platelets
Liver | Hepcidin | Blocks release of iron into body fluids
Table 17.8

17.11 Development and Aging of the Endocrine System

Learning Objectives

By the end of this section, you will be able to:
Describe the embryonic origins of the endocrine system
Discuss the effects of aging on the endocrine system

The endocrine system arises from all three embryonic germ layers. The endocrine glands that produce the steroid hormones, such as the gonads and adrenal cortex, arise from the mesoderm. In contrast, endocrine glands that arise from the endoderm and ectoderm produce the amine, peptide, and protein hormones.
The pituitary gland arises from two distinct areas of the ectoderm: the anterior pituitary gland arises from the oral ectoderm, whereas the posterior pituitary gland arises from the neural ectoderm at the base of the hypothalamus. The pineal gland also arises from the ectoderm. The two structures of the adrenal glands arise from two different germ layers: the adrenal cortex from the mesoderm and the adrenal medulla from ectoderm neural cells. The endoderm gives rise to the thyroid and parathyroid glands, as well as the pancreas and the thymus.

As the body ages, changes occur that affect the endocrine system, sometimes altering the production, secretion, and catabolism of hormones. For example, the structure of the anterior pituitary gland changes as vascularization decreases and the connective tissue content increases with increasing age. This restructuring affects the gland’s hormone production. For example, the amount of human growth hormone that is produced declines with age, resulting in the reduced muscle mass commonly observed in the elderly. The adrenal glands also undergo changes as the body ages; as fibrous tissue increases, the production of cortisol and aldosterone decreases. Interestingly, the production and secretion of epinephrine and norepinephrine remain normal throughout the aging process.

A well-known example of the aging process affecting an endocrine gland is menopause and the decline of ovarian function. With increasing age, the ovaries decrease in both size and weight and become progressively less sensitive to gonadotropins. This gradually causes a decrease in estrogen and progesterone levels, leading to menopause and the inability to reproduce. Low levels of estrogens and progesterone are also associated with some disease states, such as osteoporosis, atherosclerosis, and hyperlipidemia, or abnormal blood lipid levels.

Testosterone levels also decline with age, a condition called andropause (or viropause); however, this decline is much less dramatic than the decline of estrogens in women, and much more gradual, rarely affecting sperm production until very old age. Although this means that males maintain their ability to father children for decades longer than females, the quantity, quality, and motility of their sperm are often reduced.

As the body ages, the thyroid gland produces less of the thyroid hormones, causing a gradual decrease in the basal metabolic rate. The lower metabolic rate reduces the production of body heat and increases levels of body fat. Parathyroid hormone levels, on the other hand, increase with age. This may be because of reduced dietary calcium levels, causing a compensatory increase in parathyroid hormone. However, increased parathyroid hormone levels combined with decreased levels of calcitonin (and estrogens in women) can lead to osteoporosis, as PTH stimulates demineralization of bones to increase blood calcium levels. Notice that osteoporosis is common in both elderly males and females.

Increasing age also affects glucose metabolism, as blood glucose levels spike more rapidly and take longer to return to normal in the elderly. In addition, increasing glucose intolerance may occur because of a gradual decline in cellular insulin sensitivity. Almost 27 percent of Americans aged 65 and older have diabetes.
17.2 Hormones

Learning Objectives

By the end of this section, you will be able to:
Identify the three major classes of hormones on the basis of chemical structure
Compare and contrast intracellular and cell membrane hormone receptors
Describe signaling pathways that involve cAMP and IP3
Identify several factors that influence a target cell’s response
Discuss the role of feedback loops and humoral, hormonal, and neural stimuli in hormone control

Although a given hormone may travel throughout the body in the bloodstream, it will affect the activity only of its target cells; that is, cells with receptors for that particular hormone. Once the hormone binds to the receptor, a chain of events is initiated that leads to the target cell’s response. Hormones play a critical role in the regulation of physiological processes because of the target cell responses they regulate. These responses contribute to human reproduction, growth and development of body tissues, metabolism, fluid and electrolyte balance, sleep, and many other body functions. The major hormones of the human body and their effects are identified in Table 17.2.

Endocrine Glands and Their Major Hormones
Endocrine gland | Associated hormones | Chemical class | Effect
Pituitary (anterior) | Growth hormone (GH) | Protein | Promotes growth of body tissues
Pituitary (anterior) | Prolactin (PRL) | Peptide | Promotes milk production
Pituitary (anterior) | Thyroid-stimulating hormone (TSH) | Glycoprotein | Stimulates thyroid hormone release
Pituitary (anterior) | Adrenocorticotropic hormone (ACTH) | Peptide | Stimulates hormone release by adrenal cortex
Pituitary (anterior) | Follicle-stimulating hormone (FSH) | Glycoprotein | Stimulates gamete production
Pituitary (anterior) | Luteinizing hormone (LH) | Glycoprotein | Stimulates androgen production by gonads
Pituitary (posterior) | Antidiuretic hormone (ADH) | Peptide | Stimulates water reabsorption by kidneys
Pituitary (posterior) | Oxytocin | Peptide | Stimulates uterine contractions during childbirth
Thyroid | Thyroxine (T4), triiodothyronine (T3) | Amine | Stimulate basal metabolic rate
Thyroid | Calcitonin | Peptide | Reduces blood Ca2+ levels
Parathyroid | Parathyroid hormone (PTH) | Peptide | Increases blood Ca2+ levels
Adrenal (cortex) | Aldosterone | Steroid | Increases blood Na+ levels
Adrenal (cortex) | Cortisol, corticosterone, cortisone | Steroid | Increase blood glucose levels
Adrenal (medulla) | Epinephrine, norepinephrine | Amine | Stimulate fight-or-flight response
Pineal | Melatonin | Amine | Regulates sleep cycles
Pancreas | Insulin | Protein | Reduces blood glucose levels
Pancreas | Glucagon | Protein | Increases blood glucose levels
Testes | Testosterone | Steroid | Stimulates development of male secondary sex characteristics and sperm production
Ovaries | Estrogens and progesterone | Steroid | Stimulate development of female secondary sex characteristics and prepare the body for childbirth
Table 17.2

Types of Hormones

The hormones of the human body can be divided into two major groups on the basis of their chemical structure. Hormones derived from amino acids include amines, peptides, and proteins. Those derived from lipids include steroids (Figure 17.3). These chemical groups affect a hormone’s distribution, the type of receptors it binds to, and other aspects of its function.

Amine Hormones

Hormones derived from the modification of amino acids are referred to as amine hormones. Typically, the original structure of the amino acid is modified such that the –COOH, or carboxyl, group is removed, whereas the –NH3+, or amine, group remains.
Amine hormones are synthesized from the amino acids tryptophan or tyrosine. An example of a hormone derived from tryptophan is melatonin, which is secreted by the pineal gland and helps regulate circadian rhythm. Tyrosine derivatives include the metabolism-regulating thyroid hormones, as well as the catecholamines, such as epinephrine, norepinephrine, and dopamine. Epinephrine and norepinephrine are secreted by the adrenal medulla and play a role in the fight-or-flight response, whereas dopamine is secreted by the hypothalamus and inhibits the release of certain anterior pituitary hormones.

Peptide and Protein Hormones

Whereas the amine hormones are derived from a single amino acid, peptide and protein hormones consist of multiple amino acids that link to form an amino acid chain. Peptide hormones consist of short chains of amino acids, whereas protein hormones are longer polypeptides. Both types are synthesized like other body proteins: DNA is transcribed into mRNA, which is translated into an amino acid chain. Examples of peptide hormones include antidiuretic hormone (ADH), a pituitary hormone important in fluid balance, and atrial natriuretic peptide, which is produced by the heart and helps to decrease blood pressure. Some examples of protein hormones include growth hormone, which is produced by the pituitary gland, and follicle-stimulating hormone (FSH), which has an attached carbohydrate group and is thus classified as a glycoprotein. FSH helps stimulate the maturation of eggs in the ovaries and sperm in the testes.

Steroid Hormones

The primary hormones derived from lipids are steroids. Steroid hormones are derived from the lipid cholesterol. For example, the reproductive hormones testosterone and the estrogens—which are produced by the gonads (testes and ovaries)—are steroid hormones. The adrenal glands produce the steroid hormone aldosterone, which is involved in osmoregulation, and cortisol, which plays a role in metabolism.

Like cholesterol, steroid hormones are not soluble in water (they are hydrophobic). Because blood is water-based, lipid-derived hormones must travel to their target cell bound to a transport protein. This more complex structure gives steroid hormones a much longer half-life than hormones derived from amino acids. A hormone’s half-life is the time required for half the concentration of the hormone to be degraded. For example, the lipid-derived hormone cortisol has a half-life of approximately 60 to 90 minutes. In contrast, the amino acid–derived hormone epinephrine has a half-life of approximately one minute.
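Half-life implies simple exponential decay: after t minutes, the fraction of a hormone remaining is 0.5^(t / t-half). The short Python sketch below works through this arithmetic using the half-lives quoted above; it illustrates the decay formula only, not a full physiological model, since clearance in the body also depends on binding proteins, metabolism, and excretion.

def remaining_fraction(t_min, t_half_min):
    """Return the fraction of the initial hormone concentration left after t_min minutes."""
    return 0.5 ** (t_min / t_half_min)

# Half-lives from the text: cortisol ~90 minutes, epinephrine ~1 minute.
for name, t_half in [("cortisol", 90.0), ("epinephrine", 1.0)]:
    print(name, round(remaining_fraction(10.0, t_half), 3))
# After 10 minutes, about 0.926 of the cortisol remains, but only ~0.001 of the epinephrine.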
Pathways of Hormone Action

The message a hormone sends is received by a hormone receptor, a protein located either inside the cell or within the cell membrane. The receptor will process the message by initiating other signaling events or cellular mechanisms that result in the target cell’s response. Hormone receptors recognize molecules with specific shapes and side groups, and respond only to those hormones that are recognized. The same type of receptor may be located on cells in different body tissues, and trigger somewhat different responses. Thus, the response triggered by a hormone depends not only on the hormone, but also on the target cell.

Once the target cell receives the hormone signal, it can respond in a variety of ways. The response may include the stimulation of protein synthesis, activation or deactivation of enzymes, alteration in the permeability of the cell membrane, altered rates of mitosis and cell growth, and stimulation of the secretion of products. Moreover, a single hormone may be capable of inducing different responses in a given cell.

Pathways Involving Intracellular Hormone Receptors

Intracellular hormone receptors are located inside the cell. Hormones that bind to this type of receptor must be able to cross the cell membrane. Steroid hormones are derived from cholesterol and therefore can readily diffuse through the lipid bilayer of the cell membrane to reach the intracellular receptor (Figure 17.4). Thyroid hormones, which contain benzene rings studded with iodine, are also lipid-soluble and can enter the cell.

The location of steroid and thyroid hormone binding differs slightly: a steroid hormone may bind to its receptor within the cytosol or within the nucleus. In either case, this binding generates a hormone-receptor complex that moves toward the chromatin in the cell nucleus and binds to a particular segment of the cell’s DNA. In contrast, thyroid hormones bind to receptors already bound to DNA. For both steroid and thyroid hormones, binding of the hormone-receptor complex with DNA triggers transcription of a target gene to mRNA, which moves to the cytosol and directs protein synthesis by ribosomes.

Pathways Involving Cell Membrane Hormone Receptors

Hydrophilic, or water-soluble, hormones are unable to diffuse through the lipid bilayer of the cell membrane and must therefore pass on their message to a receptor located at the surface of the cell. Except for thyroid hormones, which are lipid-soluble, all amino acid–derived hormones bind to cell membrane receptors that are located, at least in part, on the extracellular surface of the cell membrane. Therefore, they do not directly affect the transcription of target genes, but instead initiate a signaling cascade that is carried out by a molecule called a second messenger. In this case, the hormone is called a first messenger.

The second messenger used by most hormones is cyclic adenosine monophosphate (cAMP). In the cAMP second messenger system, a water-soluble hormone binds to its receptor in the cell membrane (Step 1 in Figure 17.5). This receptor is associated with an intracellular component called a G protein, and binding of the hormone activates the G-protein component (Step 2). The activated G protein in turn activates an enzyme called adenylyl cyclase, also known as adenylate cyclase (Step 3), which converts adenosine triphosphate (ATP) to cAMP (Step 4). As the second messenger, cAMP activates a type of enzyme called a protein kinase that is present in the cytosol (Step 5). Activated protein kinases initiate a phosphorylation cascade, in which multiple protein kinases phosphorylate (add a phosphate group to) numerous and various cellular proteins, including other enzymes (Step 6). The phosphorylation of cellular proteins can trigger a wide variety of effects, from nutrient metabolism to the synthesis of different hormones and other products. The effects vary according to the type of target cell, the G proteins and kinases involved, and the phosphorylation of proteins.
Examples of hormones that use cAMP as a second messenger include calcitonin, which is important for bone construction and regulating blood calcium levels; glucagon, which plays a role in blood glucose levels; and thyroid-stimulating hormone, which causes the release of T3 and T4 from the thyroid gland.

Overall, the phosphorylation cascade significantly increases the efficiency, speed, and specificity of the hormonal response, as thousands of signaling events can be initiated simultaneously in response to a very low concentration of hormone in the bloodstream. However, the duration of the hormone signal is short, as cAMP is quickly deactivated by the enzyme phosphodiesterase (PDE), which is located in the cytosol. The action of PDE helps to ensure that a target cell’s response ceases quickly unless new hormones arrive at the cell membrane. Importantly, there are also G proteins that decrease the levels of cAMP in the cell in response to hormone binding. For example, when growth hormone–inhibiting hormone (GHIH), also known as somatostatin, binds to its receptors in the pituitary gland, the level of cAMP decreases, thereby inhibiting the secretion of human growth hormone.

Not all water-soluble hormones initiate the cAMP second messenger system. One common alternative system uses calcium ions as a second messenger. In this system, G proteins activate the enzyme phospholipase C (PLC), which functions similarly to adenylyl cyclase. Once activated, PLC cleaves a membrane-bound phospholipid into two molecules: diacylglycerol (DAG) and inositol triphosphate (IP3). Like cAMP, DAG activates protein kinases that initiate a phosphorylation cascade. At the same time, IP3 causes calcium ions to be released from storage sites within the cytosol, such as from within the smooth endoplasmic reticulum. The calcium ions then act as second messengers in two ways: they can influence enzymatic and other cellular activities directly, or they can bind to calcium-binding proteins, the most common of which is calmodulin. Upon binding calcium, calmodulin is able to modulate protein kinase within the cell. Examples of hormones that use calcium ions as a second messenger system include angiotensin II, which helps regulate blood pressure through vasoconstriction, and growth hormone–releasing hormone (GHRH), which causes the pituitary gland to release growth hormones. The amplifying effect of these cascades is illustrated in the sketch below.
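The amplification arithmetic can be made concrete with a short Python sketch. The per-stage gains below are hypothetical round numbers chosen only for illustration; the text gives no specific values, and real gains vary by cell type and pathway.

# Hypothetical amplification at each stage of a second-messenger cascade:
# one hormone-receptor complex activates many G proteins, each adenylyl
# cyclase produces many cAMP molecules, and each kinase phosphorylates
# many substrate proteins. The gains are invented for illustration.
stage_gains = [
    ("G proteins activated per receptor", 10),
    ("cAMP molecules produced per adenylyl cyclase", 1_000),
    ("proteins phosphorylated per kinase", 100),
]

events = 1  # a single hormone binding event at the cell membrane
for stage, gain in stage_gains:
    events *= gain
    print(f"{stage}: {events:,} cumulative downstream events")
# With these made-up gains, one binding event fans out to 1,000,000 phosphorylations.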
Factors Affecting Target Cell Response

You will recall that target cells must have receptors specific to a given hormone if that hormone is to trigger a response. But several other factors influence the target cell response. For example, the presence of a significant level of a hormone circulating in the bloodstream can cause its target cells to decrease their number of receptors for that hormone. This process is called downregulation, and it allows cells to become less reactive to the excessive hormone levels. When the level of a hormone is chronically reduced, target cells engage in upregulation to increase their number of receptors. This process allows cells to be more sensitive to the hormone that is present. Cells can also alter the sensitivity of the receptors themselves to various hormones.

Two or more hormones can interact to affect the response of cells in a variety of ways. The three most common types of interaction are as follows:

The permissive effect, in which the presence of one hormone enables another hormone to act. For example, thyroid hormones have complex permissive relationships with certain reproductive hormones. A dietary deficiency of iodine, a component of thyroid hormones, can therefore affect reproductive system development and functioning.

The synergistic effect, in which two hormones with similar effects produce an amplified response. In some cases, two hormones are required for an adequate response. For example, two different reproductive hormones—FSH from the pituitary gland and estrogens from the ovaries—are required for the maturation of female ova (egg cells).

The antagonistic effect, in which two hormones have opposing effects. A familiar example is the effect of two pancreatic hormones, insulin and glucagon. Insulin increases the liver’s storage of glucose as glycogen, decreasing blood glucose, whereas glucagon stimulates the breakdown of glycogen stores, increasing blood glucose.

Regulation of Hormone Secretion

To prevent abnormal hormone levels and a potential disease state, hormone levels must be tightly controlled. The body maintains this control by balancing hormone production and degradation. Feedback loops govern the initiation and maintenance of most hormone secretion in response to various stimuli.

Role of Feedback Loops

The contribution of feedback loops to homeostasis will only be briefly reviewed here. Positive feedback loops are characterized by the release of additional hormone in response to an original hormone release. The release of oxytocin during childbirth is a positive feedback loop. The initial release of oxytocin begins to signal the uterine muscles to contract, which pushes the fetus toward the cervix, causing it to stretch. This, in turn, signals the pituitary gland to release more oxytocin, causing labor contractions to intensify. The release of oxytocin decreases after the birth of the child.

The more common method of hormone regulation is the negative feedback loop. Negative feedback is characterized by the inhibition of further secretion of a hormone in response to adequate levels of that hormone. This allows blood levels of the hormone to be regulated within a narrow range. An example of a negative feedback loop is the release of glucocorticoid hormones from the adrenal glands, as directed by the hypothalamus and pituitary gland. As glucocorticoid concentrations in the blood rise, the hypothalamus and pituitary gland reduce their signaling to the adrenal glands to prevent additional glucocorticoid secretion (Figure 17.6). A toy simulation of this stabilizing behavior follows.
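The sketch below is a deliberately simplified Python model with invented constants, not a physiological parameterization; it shows only the qualitative behavior of negative feedback, in which secretion inhibited by the hormone's own level settles toward a steady value.

# Toy negative feedback: secretion falls as the hormone level rises,
# while a fixed fraction of circulating hormone is degraded each step.
# All constants are invented for illustration.
set_point = 10.0       # level at which secretion is fully suppressed
level = 0.0            # circulating hormone level, arbitrary units
max_secretion = 2.0    # secretion per step when there is no inhibition
degradation = 0.1      # fraction of hormone degraded per step

for step in range(60):
    secretion = max_secretion * max(0.0, 1.0 - level / set_point)
    level += secretion - degradation * level

print(round(level, 2))  # settles near 6.67, where secretion balances degradation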
Role of Endocrine Gland Stimuli

Reflexes triggered by both chemical and neural stimuli control endocrine activity. These reflexes may be simple, involving only one hormone response, or they may be more complex and involve many hormones, as is the case with the hypothalamic control of various anterior pituitary–controlled hormones.

Humoral stimuli are changes in blood levels of non-hormone chemicals, such as nutrients or ions, which cause the release or inhibition of a hormone to, in turn, maintain homeostasis. For example, osmoreceptors in the hypothalamus detect changes in blood osmolarity (the concentration of solutes in the blood plasma). If blood osmolarity is too high, meaning that the blood is not dilute enough, osmoreceptors signal the hypothalamus to release ADH. The hormone causes the kidneys to reabsorb more water and reduce the volume of urine produced. This reabsorption causes a reduction of the osmolarity of the blood, diluting the blood to the appropriate level. The regulation of blood glucose is another example: high blood glucose levels cause the release of insulin from the pancreas, which increases glucose uptake by cells and liver storage of glucose as glycogen.

An endocrine gland may also secrete a hormone in response to the presence of another hormone produced by a different endocrine gland. Such hormonal stimuli often involve the hypothalamus, which produces releasing and inhibiting hormones that control the secretion of a variety of pituitary hormones.

In addition to these chemical signals, hormones can also be released in response to neural stimuli. A common example of neural stimuli is the activation of the fight-or-flight response by the sympathetic nervous system. When an individual perceives danger, sympathetic neurons signal the adrenal glands to secrete norepinephrine and epinephrine. The two hormones dilate blood vessels, increase the heart and respiratory rate, and suppress the digestive and immune systems. These responses boost the body’s transport of oxygen to the brain and muscles, thereby improving the body’s ability to fight or flee.

Everyday Connection

Bisphenol A and Endocrine Disruption

You may have heard news reports about the effects of a chemical called bisphenol A (BPA) in various types of food packaging. BPA is used in the manufacturing of hard plastics and epoxy resins. Common food-related items that may contain BPA include the lining of aluminum cans, plastic food-storage containers, drinking cups, as well as baby bottles and “sippy” cups. Other uses of BPA include medical equipment, dental fillings, and the lining of water pipes.

Research suggests that BPA is an endocrine disruptor, meaning that it negatively interferes with the endocrine system, particularly during the prenatal and postnatal development period. In particular, BPA mimics the hormonal effects of estrogens and opposes the effects of androgens. The U.S. Food and Drug Administration (FDA) notes in their statement about BPA safety that although traditional toxicology studies have supported the safety of low levels of exposure to BPA, recent studies using novel approaches to test for subtle effects have led to some concern about the potential effects of BPA on the brain, behavior, and prostate gland in fetuses, infants, and young children. The FDA is currently facilitating decreased use of BPA in food-related materials. Many US companies have voluntarily removed BPA from baby bottles, “sippy” cups, and the linings of infant formula cans, and most plastic reusable water bottles sold today boast that they are “BPA free.” In contrast, both Canada and the European Union have completely banned the use of BPA in baby products.

The potential harmful effects of BPA have been studied in both animal models and humans and include a large variety of health effects, such as developmental delay and disease. For example, prenatal exposure to BPA during the first trimester of human pregnancy may be associated with wheezing and aggressive behavior during childhood. Adults exposed to high levels of BPA may experience altered thyroid signaling and male sexual dysfunction. BPA exposure during the prenatal or postnatal period of development in animal models has been observed to cause neurological delays, changes in brain structure and function, sexual dysfunction, asthma, and increased risk for multiple cancers. In vitro studies have also shown that BPA exposure causes molecular changes that initiate the development of cancers of the breast, prostate, and brain.
Although these studies have implicated BPA in numerous ill health effects, some experts caution that some of these studies may be flawed and that more research needs to be done. In the meantime, the FDA recommends that consumers take precautions to limit their exposure to BPA. In addition to purchasing foods in packaging free of BPA, consumers should avoid carrying or storing foods or liquids in bottles with the recycling code 3 or 7. Foods and liquids should not be microwave-heated in any form of plastic: use paper, glass, or ceramics instead.

17.3 The Pituitary Gland and Hypothalamus

Learning Objectives

By the end of this section, you will be able to:
Explain the interrelationships of the anatomy and functions of the hypothalamus and the posterior and anterior lobes of the pituitary gland
Identify the two hormones released from the posterior pituitary, their target cells, and their principal actions
Identify the six hormones produced by the anterior lobe of the pituitary gland, their target cells, their principal actions, and their regulation by the hypothalamus

The hypothalamus–pituitary complex can be thought of as the “command center” of the endocrine system. This complex secretes several hormones that directly produce responses in target tissues, as well as hormones that regulate the synthesis and secretion of hormones of other glands. In addition, the hypothalamus–pituitary complex coordinates the messages of the endocrine and nervous systems. In many cases, a stimulus received by the nervous system must pass through the hypothalamus–pituitary complex to be translated into hormones that can initiate a response.

The hypothalamus is a structure of the diencephalon of the brain located anterior and inferior to the thalamus (Figure 17.7). It has both neural and endocrine functions, producing and secreting many hormones. In addition, the hypothalamus is anatomically and functionally related to the pituitary gland (or hypophysis), a bean-sized organ suspended from it by a stem called the infundibulum (or pituitary stalk). The pituitary gland is cradled within the sella turcica of the sphenoid bone of the skull. It consists of two lobes that arise from distinct parts of embryonic tissue: the posterior pituitary (neurohypophysis) is neural tissue, whereas the anterior pituitary (also known as the adenohypophysis) is glandular tissue that develops from the primitive digestive tract. The hormones secreted by the posterior and anterior pituitary, and the intermediate zone between the lobes, are summarized in Table 17.3.
Pituitary Hormones
Pituitary lobe | Associated hormones | Chemical class | Effect
Anterior | Growth hormone (GH) | Protein | Promotes growth of body tissues
Anterior | Prolactin (PRL) | Peptide | Promotes milk production from mammary glands
Anterior | Thyroid-stimulating hormone (TSH) | Glycoprotein | Stimulates thyroid hormone release from thyroid
Anterior | Adrenocorticotropic hormone (ACTH) | Peptide | Stimulates hormone release by adrenal cortex
Anterior | Follicle-stimulating hormone (FSH) | Glycoprotein | Stimulates gamete production in gonads
Anterior | Luteinizing hormone (LH) | Glycoprotein | Stimulates androgen production by gonads
Posterior | Antidiuretic hormone (ADH) | Peptide | Stimulates water reabsorption by kidneys
Posterior | Oxytocin | Peptide | Stimulates uterine contractions during childbirth
Intermediate zone | Melanocyte-stimulating hormone | Peptide | Stimulates melanin formation in melanocytes
Table 17.3

Posterior Pituitary

The posterior pituitary is actually an extension of the neurons of the paraventricular and supraoptic nuclei of the hypothalamus. The cell bodies of these regions rest in the hypothalamus, but their axons descend as the hypothalamic–hypophyseal tract within the infundibulum, and end in axon terminals that comprise the posterior pituitary (Figure 17.8).

The posterior pituitary gland does not produce hormones, but rather stores and secretes hormones produced by the hypothalamus. The paraventricular nuclei produce the hormone oxytocin, whereas the supraoptic nuclei produce ADH. These hormones travel along the axons into storage sites in the axon terminals of the posterior pituitary. In response to signals from the same hypothalamic neurons, the hormones are released from the axon terminals into the bloodstream.

Oxytocin

When fetal development is complete, the peptide-derived hormone oxytocin (tocia- = “childbirth”) stimulates uterine contractions and dilation of the cervix. Throughout most of pregnancy, oxytocin hormone receptors are not expressed at high levels in the uterus. Toward the end of pregnancy, the synthesis of oxytocin receptors in the uterus increases, and the smooth muscle cells of the uterus become more sensitive to its effects. Oxytocin is continually released throughout childbirth through a positive feedback mechanism. As noted earlier, oxytocin prompts uterine contractions that push the fetal head toward the cervix. In response, cervical stretching stimulates additional oxytocin to be synthesized by the hypothalamus and released from the pituitary. This increases the intensity and effectiveness of uterine contractions and prompts additional dilation of the cervix. The feedback loop continues until birth; a toy sketch of this runaway behavior appears below.

Although the mother’s high blood levels of oxytocin begin to decrease immediately following birth, oxytocin continues to play a role in maternal and newborn health. First, oxytocin is necessary for the milk ejection reflex (commonly referred to as “let-down”) in breastfeeding women. As the newborn begins suckling, sensory receptors in the nipples transmit signals to the hypothalamus. In response, oxytocin is secreted and released into the bloodstream. Within seconds, cells in the mother’s milk ducts contract, ejecting milk into the infant’s mouth. Secondly, in both males and females, oxytocin is thought to contribute to parent–newborn bonding, known as attachment. Oxytocin is also thought to be involved in feelings of love and closeness, as well as in the sexual response.
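In contrast to the negative feedback sketch shown earlier, a positive feedback loop reinforces itself until an outside event removes the stimulus. The Python sketch below uses invented units and an arbitrary per-contraction gain; it is meant only to show that a self-reinforcing loop escalates until a threshold (here, birth) ends it.

# Toy positive feedback: each contraction releases more oxytocin, which
# stretches the cervix further and prompts still more oxytocin release.
# The loop ends only when the "birth" threshold is crossed. Units and the
# 30% per-contraction gain are invented for illustration.
oxytocin = 1.0
birth_threshold = 50.0

for contraction in range(1, 101):
    oxytocin *= 1.3
    if oxytocin >= birth_threshold:
        print(f"Threshold crossed after {contraction} contractions; the stimulus ends.")
        break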
Antidiuretic Hormone (ADH)

The solute concentration of the blood, or blood osmolarity, may change in response to the consumption of certain foods and fluids, as well as in response to disease, injury, medications, or other factors. Blood osmolarity is constantly monitored by osmoreceptors—specialized cells within the hypothalamus that are particularly sensitive to the concentration of sodium ions and other solutes. In response to high blood osmolarity, which can occur during dehydration or following a very salty meal, the osmoreceptors signal the posterior pituitary to release antidiuretic hormone (ADH).

The target cells of ADH are located in the tubular cells of the kidneys. Its effect is to increase epithelial permeability to water, allowing increased water reabsorption. The more water reabsorbed from the filtrate, the greater the amount of water that is returned to the blood and the less that is excreted in the urine. A greater concentration of water results in a reduced concentration of solutes. ADH is also known as vasopressin because, in very high concentrations, it causes constriction of blood vessels, which increases blood pressure by increasing peripheral resistance. The release of ADH is controlled by a negative feedback loop. As blood osmolarity decreases, the hypothalamic osmoreceptors sense the change and prompt a corresponding decrease in the secretion of ADH. As a result, less water is reabsorbed from the urine filtrate.

Interestingly, drugs can affect the secretion of ADH. For example, alcohol consumption inhibits the release of ADH, resulting in increased urine production that can eventually lead to dehydration and a hangover. A disease called diabetes insipidus is characterized by chronic underproduction of ADH that causes chronic dehydration. Because little ADH is produced and secreted, not enough water is reabsorbed by the kidneys. Although patients feel thirsty and increase their fluid consumption, this does not effectively decrease the solute concentration in their blood because ADH levels are not high enough to trigger water reabsorption in the kidneys. Electrolyte imbalances can occur in severe cases of diabetes insipidus.

Anterior Pituitary

The anterior pituitary originates from the digestive tract in the embryo and migrates toward the brain during fetal development. There are three regions: the pars distalis is the most anterior, the pars intermedia is adjacent to the posterior pituitary, and the pars tuberalis is a slender “tube” that wraps the infundibulum.

Recall that the posterior pituitary does not synthesize hormones, but merely stores them. In contrast, the anterior pituitary does manufacture hormones. However, the secretion of hormones from the anterior pituitary is regulated by two classes of hormones. These hormones—secreted by the hypothalamus—are the releasing hormones that stimulate the secretion of hormones from the anterior pituitary and the inhibiting hormones that inhibit secretion.

Hypothalamic hormones are secreted by neurons, but enter the anterior pituitary through blood vessels (Figure 17.9). Within the infundibulum is a bridge of capillaries that connects the hypothalamus to the anterior pituitary. This network, called the hypophyseal portal system, allows hypothalamic hormones to be transported to the anterior pituitary without first entering the systemic circulation. The system originates from the superior hypophyseal artery, which branches off the carotid arteries and transports blood to the hypothalamus.
The branches of the superior hypophyseal artery form the hypophyseal portal system (see Figure 17.9). Hypothalamic releasing and inhibiting hormones travel through a primary capillary plexus to the portal veins, which carry them into the anterior pituitary. Hormones produced by the anterior pituitary (in response to releasing hormones) enter a secondary capillary plexus, and from there drain into the circulation.

The anterior pituitary produces seven hormones: growth hormone (GH), thyroid-stimulating hormone (TSH), adrenocorticotropic hormone (ACTH), follicle-stimulating hormone (FSH), luteinizing hormone (LH), beta endorphin, and prolactin. Of the hormones of the anterior pituitary, TSH, ACTH, FSH, and LH are collectively referred to as tropic hormones (trope- = “turning”) because they turn on or off the function of other endocrine glands.

Growth Hormone

The endocrine system regulates the growth of the human body, protein synthesis, and cellular replication. A major hormone involved in this process is growth hormone (GH), also called somatotropin—a protein hormone produced and secreted by the anterior pituitary gland. Its primary function is anabolic; it promotes protein synthesis and tissue building through direct and indirect mechanisms (Figure 17.10). GH levels are controlled by the release of GHRH and GHIH (also known as somatostatin) from the hypothalamus.

A glucose-sparing effect occurs when GH stimulates lipolysis, or the breakdown of adipose tissue, releasing fatty acids into the blood. As a result, many tissues switch from glucose to fatty acids as their main energy source, which means that less glucose is taken up from the bloodstream. GH also initiates the diabetogenic effect, in which it stimulates the liver to break down glycogen to glucose, which is then released into the blood. The name “diabetogenic” is derived from the similarity in elevated blood glucose levels observed between individuals with untreated diabetes mellitus and individuals experiencing GH excess. Blood glucose levels rise as the result of a combination of glucose-sparing and diabetogenic effects.

GH indirectly mediates growth and protein synthesis by triggering the liver and other tissues to produce a group of proteins called insulin-like growth factors (IGFs). These proteins enhance cellular proliferation and inhibit apoptosis, or programmed cell death. IGFs stimulate cells to increase their uptake of amino acids from the blood for protein synthesis. Skeletal muscle and cartilage cells are particularly sensitive to stimulation from IGFs.

Dysfunction of the endocrine system’s control of growth can result in several disorders. For example, gigantism is a disorder in children that is caused by the secretion of abnormally large amounts of GH, resulting in excessive growth. A similar condition in adults is acromegaly, a disorder that results in the growth of bones in the face, hands, and feet in response to excessive levels of GH in individuals who have stopped growing. Abnormally low levels of GH in children can cause growth impairment—a disorder called pituitary dwarfism (also known as growth hormone deficiency).

Thyroid-Stimulating Hormone

The activity of the thyroid gland is regulated by thyroid-stimulating hormone (TSH), also called thyrotropin. TSH is released from the anterior pituitary in response to thyrotropin-releasing hormone (TRH) from the hypothalamus. As discussed shortly, it triggers the secretion of thyroid hormones by the thyroid gland.
In a classic negative feedback loop, elevated levels of thyroid hormones in the bloodstream then trigger a drop in production of TRH and subsequently TSH.

Adrenocorticotropic Hormone

The adrenocorticotropic hormone (ACTH), also called corticotropin, stimulates the adrenal cortex (the more superficial “bark” of the adrenal glands) to secrete corticosteroid hormones such as cortisol. ACTH comes from a precursor molecule known as pro-opiomelanocortin (POMC), which when cleaved produces several biologically active molecules, including ACTH, melanocyte-stimulating hormone, and the brain opioid peptides known as endorphins. The release of ACTH is regulated by corticotropin-releasing hormone (CRH) from the hypothalamus in response to normal physiologic rhythms. A variety of stressors can also influence its release, and the role of ACTH in the stress response is discussed later in this chapter.

Follicle-Stimulating Hormone and Luteinizing Hormone

The endocrine glands secrete a variety of hormones that control the development and regulation of the reproductive system (these glands include the anterior pituitary, the adrenal cortex, and the gonads—the testes in males and the ovaries in females). Much of the development of the reproductive system occurs during puberty and is marked by the development of sex-specific characteristics in both male and female adolescents. Puberty is initiated by gonadotropin-releasing hormone (GnRH), a hormone produced and secreted by the hypothalamus. GnRH stimulates the anterior pituitary to secrete gonadotropins—hormones that regulate the function of the gonads. The levels of GnRH are regulated through a negative feedback loop; high levels of reproductive hormones inhibit the release of GnRH. Throughout life, gonadotropins regulate reproductive function and, in the case of women, the onset and cessation of reproductive capacity.

The gonadotropins include two glycoprotein hormones, follicle-stimulating hormone (FSH) and luteinizing hormone (LH). FSH stimulates the production and maturation of sex cells, or gametes, including ova in women and sperm in men. FSH also promotes follicular growth; these follicles then release estrogens in the female ovaries. LH triggers ovulation in women, as well as the production of estrogens and progesterone by the ovaries. LH also stimulates the production of testosterone by the male testes.

Prolactin

As its name implies, prolactin (PRL) promotes lactation (milk production) in women. During pregnancy, it contributes to the development of the mammary glands, and after birth, it stimulates the mammary glands to produce breast milk. However, the effects of prolactin depend heavily upon the permissive effects of estrogens, progesterone, and other hormones. And as noted earlier, the let-down of milk occurs in response to stimulation from oxytocin. In a non-pregnant woman, prolactin secretion is inhibited by prolactin-inhibiting hormone (PIH), which is actually the neurotransmitter dopamine released from neurons in the hypothalamus. Only during pregnancy do prolactin levels rise in response to prolactin-releasing hormone (PRH) from the hypothalamus.

Intermediate Pituitary: Melanocyte-Stimulating Hormone

The cells in the zone between the pituitary lobes secrete a hormone known as melanocyte-stimulating hormone (MSH) that is formed by cleavage of the pro-opiomelanocortin (POMC) precursor protein. Local production of MSH in the skin is responsible for melanin production in response to UV light exposure.
The role of MSH made by the pituitary is more complicated. For instance, people with lighter skin generally have the same amount of MSH as people with darker skin. Nevertheless, this hormone is capable of darkening the skin by inducing melanin production in the skin’s melanocytes. Women also show increased MSH production during pregnancy; in combination with estrogens, it can lead to darker skin pigmentation, especially the skin of the areolas and labia minora. Figure 17.11 is a summary of the pituitary hormones and their principal effects.

Interactive Link

Visit this link to watch an animation showing the role of the hypothalamus and the pituitary gland. Which hormone is released by the pituitary to stimulate the thyroid gland?

17.4 The Thyroid Gland

Learning Objectives

By the end of this section, you will be able to:
- Describe the location and anatomy of the thyroid gland
- Discuss the synthesis of triiodothyronine and thyroxine
- Explain the role of thyroid hormones in the regulation of basal metabolism
- Identify the hormone produced by the parafollicular cells of the thyroid

A butterfly-shaped organ, the thyroid gland is located anterior to the trachea, just inferior to the larynx (Figure 17.12). The medial region, called the isthmus, is flanked by wing-shaped left and right lobes. Each of the thyroid lobes is embedded with parathyroid glands, primarily on their posterior surfaces. The tissue of the thyroid gland is composed mostly of thyroid follicles. The follicles are made up of a central cavity filled with a sticky fluid called colloid. Surrounded by a wall of epithelial follicle cells, the colloid is the center of thyroid hormone production, and that production is dependent on the hormones’ essential and unique component: iodine.

Synthesis and Release of Thyroid Hormones

Hormones are produced in the colloid when atoms of the mineral iodine attach to a glycoprotein called thyroglobulin that is secreted into the colloid by the follicle cells. The following steps outline the hormones’ assembly:

1. Binding of TSH to its receptors in the follicle cells of the thyroid gland causes the cells to actively transport iodide ions (I–) across their cell membrane, from the bloodstream into the cytosol. As a result, the concentration of iodide ions “trapped” in the follicular cells is many times higher than the concentration in the bloodstream.
2. Iodide ions then move to the lumen of the follicle cells that border the colloid. There, the ions undergo oxidation (their negatively charged electrons are removed). The oxidation of two iodide ions (2 I–) results in iodine (I2), which passes through the follicle cell membrane into the colloid.
3. In the colloid, peroxidase enzymes link the iodine to the tyrosine amino acids in thyroglobulin to produce two intermediaries: a tyrosine attached to one iodine and a tyrosine attached to two iodines. When one of each of these intermediaries is linked by covalent bonds, the resulting compound is triiodothyronine (T3), a thyroid hormone with three iodines. Much more commonly, two copies of the second intermediary bond together, forming tetraiodothyronine, also known as thyroxine (T4), a thyroid hormone with four iodines.
4. These hormones remain in the colloid center of the thyroid follicles until TSH stimulates endocytosis of colloid back into the follicle cells. There, lysosomal enzymes break apart the thyroglobulin colloid, releasing free T3 and T4, which diffuse across the follicle cell membrane and enter the bloodstream.
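The coupling arithmetic in step 3 is easy to lose in prose. A short Python sketch makes it explicit (a toy bookkeeping model, not a biochemical simulation; MIT and DIT are the conventional shorthand, monoiodotyrosine and diiodotyrosine, for the one-iodine and two-iodine intermediaries named above):

```python
# Toy model of the coupling step: products are named by total iodine count.

MIT = 1  # monoiodotyrosine: a tyrosine carrying one iodine
DIT = 2  # diiodotyrosine: a tyrosine carrying two iodines

def couple(residue_a: int, residue_b: int) -> str:
    """Join two iodinated tyrosines and classify the product by iodine count."""
    total_iodines = residue_a + residue_b
    if total_iodines == 3:
        return "T3 (triiodothyronine)"
    if total_iodines == 4:
        return "T4 (thyroxine)"
    return "not a classical thyroid hormone"

print(couple(MIT, DIT))  # T3: the less common but more potent product
print(couple(DIT, DIT))  # T4: the much more common product
```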
In the bloodstream, less than one percent of the circulating T3 and T4 remains unbound. This free T3 and T4 can cross the lipid bilayer of cell membranes and be taken up by cells. The remaining 99 percent of circulating T3 and T4 is bound to specialized transport proteins called thyroxine-binding globulins (TBGs), to albumin, or to other plasma proteins. This “packaging” prevents their free diffusion into body cells. When blood levels of T3 and T4 begin to decline, bound T3 and T4 are released from these plasma proteins and readily cross the membrane of target cells. T3 is more potent than T4, and many cells convert T4 to T3 through the removal of an iodine atom.

Regulation of TH Synthesis

The release of T3 and T4 from the thyroid gland is regulated by thyroid-stimulating hormone (TSH). As shown in Figure 17.13, low blood levels of T3 and T4 stimulate the release of thyrotropin-releasing hormone (TRH) from the hypothalamus, which triggers secretion of TSH from the anterior pituitary. In turn, TSH stimulates the thyroid gland to secrete T3 and T4. The levels of TRH, TSH, T3, and T4 are regulated by a negative feedback system in which increasing levels of T3 and T4 decrease the production and secretion of TSH.

Functions of Thyroid Hormones

The thyroid hormones, T3 and T4, are often referred to as metabolic hormones because their levels influence the body’s basal metabolic rate, the amount of energy used by the body at rest. When T3 and T4 bind to intracellular receptors located on the mitochondria, they cause an increase in nutrient breakdown and the use of oxygen to produce ATP. In addition, T3 and T4 initiate the transcription of genes involved in glucose oxidation. Although these mechanisms prompt cells to produce more ATP, the process is inefficient, and an abnormally increased level of heat is released as a byproduct of these reactions. This so-called calorigenic effect (calor- = “heat”) raises body temperature.

Adequate levels of thyroid hormones are also required for protein synthesis and for fetal and childhood tissue development and growth. They are especially critical for normal development of the nervous system both in utero and in early childhood, and they continue to support neurological function in adults. As noted earlier, these thyroid hormones have a complex interrelationship with reproductive hormones, and deficiencies can influence libido, fertility, and other aspects of reproductive function. Finally, thyroid hormones increase the body’s sensitivity to catecholamines (epinephrine and norepinephrine) from the adrenal medulla by upregulation of receptors in the blood vessels. When levels of T3 and T4 hormones are excessive, this effect accelerates the heart rate, strengthens the heartbeat, and increases blood pressure. Because thyroid hormones regulate metabolism, heat production, protein synthesis, and many other body functions, thyroid disorders can have severe and widespread consequences.

Disorders of the... Endocrine System: Iodine Deficiency, Hypothyroidism, and Hyperthyroidism

As discussed above, dietary iodine is required for the synthesis of T3 and T4. But for much of the world’s population, foods do not provide adequate levels of this mineral, because the amount varies according to the level in the soil in which the food was grown, as well as the irrigation and fertilizers used.
Marine fish and shrimp tend to have high levels because they concentrate iodine from seawater, but many people in landlocked regions lack access to seafood. Thus, the primary source of dietary iodine in many countries is iodized salt. Fortification of salt with iodine began in the United States in 1924, and international efforts to iodize salt in the world’s poorest nations continue today.

Dietary iodine deficiency can result in an impaired ability to synthesize T3 and T4, leading to a variety of severe disorders. When T3 and T4 cannot be produced, TSH is secreted in increasing amounts. As a result of this hyperstimulation, thyroglobulin accumulates in the thyroid gland follicles, increasing their deposits of colloid. The accumulation of colloid increases the overall size of the thyroid gland, a condition called a goiter (Figure 17.14). A goiter is only a visible indication of the deficiency. Other iodine deficiency disorders include impaired growth and development, decreased fertility, and prenatal and infant death. Moreover, iodine deficiency is the primary cause of preventable intellectual disability worldwide. Neonatal hypothyroidism (cretinism) is characterized by cognitive deficits, short stature, and sometimes deafness and muteness in children and adults born to mothers who were iodine-deficient during pregnancy.

In areas of the world with access to iodized salt, dietary deficiency is rare. Instead, inflammation of the thyroid gland is the more common cause of low blood levels of thyroid hormones. Called hypothyroidism, the condition is characterized by a low metabolic rate, weight gain, cold extremities, constipation, reduced libido, menstrual irregularities, and reduced mental activity.

In contrast, hyperthyroidism—an abnormally elevated blood level of thyroid hormones—is often caused by a pituitary or thyroid tumor. In Graves’ disease, the hyperthyroid state results from an autoimmune reaction in which antibodies overstimulate the follicle cells of the thyroid gland. Hyperthyroidism can lead to an increased metabolic rate, excessive body heat and sweating, diarrhea, weight loss, tremors, and increased heart rate. The person’s eyes may bulge (called exophthalmos) as antibodies produce inflammation in the soft tissues of the orbits. The person may also develop a goiter.

Calcitonin

The thyroid gland also secretes a hormone called calcitonin that is produced by the parafollicular cells (also called C cells) that stud the tissue between distinct follicles. Calcitonin is released in response to a rise in blood calcium levels. It appears to have a function in decreasing blood calcium concentrations by:
- Inhibiting the activity of osteoclasts, bone cells that release calcium into the circulation by degrading bone matrix
- Increasing osteoblastic activity
- Decreasing calcium absorption in the intestines
- Increasing calcium loss in the urine

However, these functions are usually not significant in maintaining calcium homeostasis, so the importance of calcitonin is not entirely understood. Pharmaceutical preparations of calcitonin are sometimes prescribed to reduce osteoclast activity in people with osteoporosis and to reduce the degradation of cartilage in people with osteoarthritis. The hormones secreted by the thyroid are summarized in Table 17.4.
Thyroid Hormones (Table 17.4)

Associated hormones                      Chemical class   Effect
Thyroxine (T4), triiodothyronine (T3)    Amine            Stimulate basal metabolic rate
Calcitonin                               Peptide          Reduces blood Ca2+ levels

Of course, calcium is critical for many other biological processes. It is a second messenger in many signaling pathways, and is essential for muscle contraction, nerve impulse transmission, and blood clotting. Given these roles, it is not surprising that blood calcium levels are tightly regulated by the endocrine system. The organs involved in this regulation are the parathyroid glands.

17.5 The Parathyroid Glands

Learning Objectives

By the end of this section, you will be able to:
- Describe the location and structure of the parathyroid glands
- Describe the hormonal control of blood calcium levels
- Discuss the physiological response of parathyroid dysfunction

The parathyroid glands are tiny, round structures usually found embedded in the posterior surface of the thyroid gland (Figure 17.15). A thick connective tissue capsule separates the glands from the thyroid tissue. Most people have four parathyroid glands, but occasionally there are more in tissues of the neck or chest. The function of one type of parathyroid cell, the oxyphil cell, is not clear. The primary functional cells of the parathyroid glands are the chief cells. These epithelial cells produce and secrete the parathyroid hormone (PTH), the major hormone involved in the regulation of blood calcium levels.

Interactive Link

View the University of Michigan WebScope to explore the tissue sample in greater detail.

The parathyroid glands produce and secrete PTH, a peptide hormone, in response to low blood calcium levels (Figure 17.16). PTH secretion causes the release of calcium from the bones by stimulating osteoclasts, which secrete enzymes that degrade bone and release calcium into the interstitial fluid. PTH also inhibits osteoblasts, the cells involved in bone deposition, thereby sparing blood calcium. PTH causes increased reabsorption of calcium (and magnesium) in the kidney tubules from the urine filtrate. In addition, PTH initiates the production of the steroid hormone calcitriol (also known as 1,25-dihydroxyvitamin D), the active form of vitamin D3, in the kidneys. Calcitriol then stimulates increased absorption of dietary calcium by the intestines. A negative feedback loop regulates the levels of PTH, with rising blood calcium levels inhibiting further release of PTH.

Abnormally high activity of the parathyroid gland can cause hyperparathyroidism, a disorder caused by an overproduction of PTH that results in excessive calcium reabsorption from bone. Hyperparathyroidism can significantly decrease bone density, leading to spontaneous fractures or deformities. As blood calcium levels rise, cell membrane permeability to sodium is decreased, and the responsiveness of the nervous system is reduced. At the same time, calcium deposits may collect in the body’s tissues and organs, impairing their functioning.

In contrast, abnormally low blood calcium levels may be caused by parathyroid hormone deficiency, called hypoparathyroidism, which may develop following injury or surgery involving the thyroid gland. Low blood calcium increases membrane permeability to sodium, resulting in muscle twitching, cramping, spasms, or convulsions. Severe deficits can paralyze muscles, including those involved in breathing, and can be fatal.
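PTH and the thyroid’s calcitonin act on the same variable from opposite directions, a pairing recapped just below. A minimal Python sketch captures that two-sided negative feedback (the set point, tolerance, and correction size here are invented for illustration, not clinical values):

```python
# Two-sided negative feedback on blood calcium (mg/dL). All constants
# are illustrative only.

SET_POINT = 9.5   # roughly mid-normal blood calcium (illustrative)
TOLERANCE = 1.0   # drift allowed before a corrective hormone dominates

def regulate(calcium: float) -> tuple[str, float]:
    """Return the dominant hormonal response and the corrected level."""
    if calcium < SET_POINT - TOLERANCE:
        # PTH: osteoclast activity, renal reabsorption, calcitriol production
        return "PTH released", calcium + 0.5
    if calcium > SET_POINT + TOLERANCE:
        # Calcitonin: osteoclasts inhibited, more calcium lost in urine
        return "calcitonin released", calcium - 0.5
    return "no strong corrective signal", calcium

for level in (7.8, 9.4, 11.2):
    hormone, corrected = regulate(level)
    print(f"Ca {level} mg/dL -> {hormone} (new level {corrected} mg/dL)")
```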
When blood calcium levels are high, calcitonin is produced and secreted by the parafollicular cells of the thyroid gland. As discussed earlier, calcitonin inhibits the activity of osteoclasts, reduces the absorption of dietary calcium in the intestine, and signals the kidneys to reabsorb less calcium, resulting in larger amounts of calcium excreted in the urine.

17.6 The Adrenal Glands

Learning Objectives

By the end of this section, you will be able to:
- Describe the location and structure of the adrenal glands
- Identify the hormones produced by the adrenal cortex and adrenal medulla, and summarize their target cells and effects

The adrenal glands are wedges of glandular and neuroendocrine tissue adhering to the top of the kidneys by a fibrous capsule (Figure 17.17). The adrenal glands have a rich blood supply and experience one of the highest rates of blood flow in the body. They are served by several arteries branching off the aorta, including the suprarenal and renal arteries. Blood flows to each adrenal gland at the adrenal cortex and then drains into the adrenal medulla. Adrenal hormones are released into the circulation via the left and right suprarenal veins.

Interactive Link

View the University of Michigan WebScope to explore the tissue sample in greater detail.

The adrenal gland consists of an outer cortex of glandular tissue and an inner medulla of nervous tissue. The cortex itself is divided into three zones: the zona glomerulosa, the zona fasciculata, and the zona reticularis. Each region secretes its own set of hormones.

The adrenal cortex, as a component of the hypothalamic-pituitary-adrenal (HPA) axis, secretes steroid hormones important for the regulation of the long-term stress response, blood pressure and blood volume, nutrient uptake and storage, fluid and electrolyte balance, and inflammation. The HPA axis involves the stimulation of the release of adrenocorticotropic hormone (ACTH) from the pituitary by the hypothalamus. ACTH then stimulates the adrenal cortex to produce the hormone cortisol. This pathway will be discussed in more detail below.

The adrenal medulla is neuroendocrine tissue composed of postganglionic sympathetic nervous system (SNS) neurons. It is really an extension of the autonomic nervous system, which regulates homeostasis in the body. The sympathomedullary (SAM) pathway involves the stimulation of the medulla by impulses from the hypothalamus via neurons from the thoracic spinal cord. The medulla is stimulated to secrete the amine hormones epinephrine and norepinephrine.

One of the major functions of the adrenal gland is to respond to stress. Stress can be either physical or psychological or both. Physical stresses include exposing the body to injury, walking outside in cold and wet conditions without a coat on, or malnutrition. Psychological stresses include the perception of a physical threat, a fight with a loved one, or just a bad day at school. The body responds in different ways to short-term stress and long-term stress following a pattern known as the general adaptation syndrome (GAS).

Stage one of GAS is called the alarm reaction. This is short-term stress, the fight-or-flight response, mediated by the hormones epinephrine and norepinephrine from the adrenal medulla via the SAM pathway. Their function is to prepare the body for extreme physical exertion. Once this stress is relieved, the body quickly returns to normal. The section on the adrenal medulla covers this response in more detail.
If the stress is not soon relieved, the body adapts to the stress in the second stage, called the stage of resistance. If a person is starving, for example, the body may send signals to the gastrointestinal tract to maximize the absorption of nutrients from food.

If the stress continues for a longer term, however, the body responds with symptoms quite different from the fight-or-flight response. During the stage of exhaustion, individuals may begin to suffer depression, the suppression of their immune response, severe fatigue, or even a fatal heart attack. These symptoms are mediated by the hormones of the adrenal cortex, especially cortisol, released as a result of signals from the HPA axis.

Adrenal hormones also have several non–stress-related functions, including the increase of blood sodium and glucose levels, which will be described in detail below.

Adrenal Cortex

The adrenal cortex consists of multiple layers of lipid-storing cells that occur in three structurally distinct regions. Each of these regions produces different hormones.

Interactive Link

Visit this link to view an animation describing the location and function of the adrenal glands. Which hormone produced by the adrenal glands is responsible for the mobilization of energy stores?

Hormones of the Zona Glomerulosa

The most superficial region of the adrenal cortex is the zona glomerulosa, which produces a group of hormones collectively referred to as mineralocorticoids because of their effect on body minerals, especially sodium and potassium. These hormones are essential for fluid and electrolyte balance.

Aldosterone is the major mineralocorticoid. It is important in the regulation of the concentration of sodium and potassium ions in urine, sweat, and saliva. For example, it is released in response to elevated blood K+, low blood Na+, low blood pressure, or low blood volume. In response, aldosterone increases the excretion of K+ and the retention of Na+, which in turn increases blood volume and blood pressure. Its secretion is prompted when CRH from the hypothalamus triggers ACTH release from the anterior pituitary.

Aldosterone is also a key component of the renin-angiotensin-aldosterone system (RAAS), in which specialized cells of the kidneys secrete the enzyme renin in response to low blood volume or low blood pressure. Renin then catalyzes the conversion of the blood protein angiotensinogen, produced by the liver, to the hormone angiotensin I. Angiotensin I is converted in the lungs to angiotensin II by angiotensin-converting enzyme (ACE). Angiotensin II has three major functions:
- Initiating vasoconstriction of the arterioles, decreasing blood flow
- Stimulating kidney tubules to reabsorb NaCl and water, increasing blood volume
- Signaling the adrenal cortex to secrete aldosterone, the effects of which further contribute to fluid retention, restoring blood pressure and blood volume

For individuals with hypertension, or high blood pressure, drugs are available that block the production of angiotensin II. These drugs, known as ACE inhibitors, block the ACE enzyme from converting angiotensin I to angiotensin II, thus mitigating the latter’s ability to increase blood pressure.

Hormones of the Zona Fasciculata

The intermediate region of the adrenal cortex is the zona fasciculata, named as such because the cells form small fascicles (bundles) separated by tiny blood vessels. The cells of the zona fasciculata produce hormones called glucocorticoids because of their role in glucose metabolism.
The most important of these is cortisol, some of which the liver converts to cortisone. A glucocorticoid produced in much smaller amounts is corticosterone. In response to long-term stressors, the hypothalamus secretes CRH, which in turn triggers the release of ACTH by the anterior pituitary. ACTH triggers the release of the glucocorticoids. Their overall effect is to inhibit tissue building while stimulating the breakdown of stored nutrients to maintain adequate fuel supplies. In conditions of long-term stress, for example, cortisol promotes the catabolism of glycogen to glucose, the catabolism of stored triglycerides into fatty acids and glycerol, and the catabolism of muscle proteins into amino acids. These raw materials can then be used to synthesize additional glucose and ketones for use as body fuels. The hippocampus, which is part of the temporal lobe of the cerebral cortex and important in memory formation, is highly sensitive to stress levels because of its many glucocorticoid receptors.

You are probably familiar with prescription and over-the-counter medications containing glucocorticoids, such as cortisone injections into inflamed joints, prednisone tablets and steroid-based inhalers used to manage severe asthma, and hydrocortisone creams applied to relieve itchy skin rashes. These drugs reflect another role of cortisol—the downregulation of the immune system, which inhibits the inflammatory response.

Hormones of the Zona Reticularis

The deepest region of the adrenal cortex is the zona reticularis, which produces small amounts of a class of steroid sex hormones called androgens. During puberty and most of adulthood, androgens are produced primarily in the gonads. The androgens produced in the zona reticularis supplement the gonadal androgens. They are produced in response to ACTH from the anterior pituitary and are converted in the tissues to testosterone or estrogens. In adult women, they may contribute to the sex drive, but their function in adult men is not well understood. In postmenopausal women, as the functions of the ovaries decline, the main source of estrogens becomes the androgens produced by the zona reticularis.

Adrenal Medulla

As noted earlier, the adrenal cortex releases glucocorticoids in response to long-term stress such as severe illness. In contrast, the adrenal medulla releases its hormones in response to acute, short-term stress mediated by the sympathetic nervous system (SNS). The medullary tissue is composed of unique postganglionic SNS neurons called chromaffin cells, which are large and irregularly shaped, and produce the neurotransmitters epinephrine (also called adrenaline) and norepinephrine (or noradrenaline). Epinephrine is produced in greater quantities—approximately a 4 to 1 ratio with norepinephrine—and is the more powerful hormone. Because the chromaffin cells release epinephrine and norepinephrine into the systemic circulation, where they travel widely and exert effects on distant cells, they are considered hormones. Derived from the amino acid tyrosine, they are chemically classified as catecholamines.

The secretion of medullary epinephrine and norepinephrine is controlled by a neural pathway that originates from the hypothalamus in response to danger or stress (the SAM pathway). Both epinephrine and norepinephrine signal the liver and skeletal muscle cells to convert glycogen into glucose, resulting in increased blood glucose levels.
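That glycogenolytic burst can be sketched as a single state update. A hedged toy model in Python (the approximate 4:1 secretion split comes from the paragraph above; the pool size and release amount are invented for illustration):

```python
# Toy model of the SAM pathway's metabolic arm: catecholamines trigger
# glycogenolysis. Pool sizes and the release amount are illustrative only.

def sam_response(blood_glucose: float, liver_glycogen: float):
    """Simulate one burst of medullary output during acute stress."""
    epinephrine, norepinephrine = 0.8, 0.2  # approximate 4:1 secretion ratio
    signal = epinephrine + norepinephrine   # both hormones drive glycogenolysis
    released = min(liver_glycogen, 20.0 * signal)  # invented mg/dL equivalent
    return blood_glucose + released, liver_glycogen - released

glucose, glycogen = sam_response(blood_glucose=90.0, liver_glycogen=100.0)
print(glucose, glycogen)  # 110.0 80.0
```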
These hormones increase the heart rate, pulse, and blood pressure to prepare the body to fight the perceived threat or flee from it. In addition, the pathway dilates the airways, raising blood oxygen levels. It also prompts vasodilation, further increasing the oxygenation of important organs such as the lungs, brain, heart, and skeletal muscle. At the same time, it triggers vasoconstriction in blood vessels serving less essential organs such as the gastrointestinal tract, kidneys, and skin, and downregulates some components of the immune system. Other effects include a dry mouth, loss of appetite, pupil dilation, and a loss of peripheral vision. The major hormones of the adrenal glands are summarized in Table 17.5.

Hormones of the Adrenal Glands (Table 17.5)

Adrenal gland    Associated hormones                  Chemical class   Effect
Adrenal cortex   Aldosterone                          Steroid          Increases blood Na+ levels
Adrenal cortex   Cortisol, corticosterone, cortisone  Steroid          Increase blood glucose levels
Adrenal medulla  Epinephrine, norepinephrine          Amine            Stimulate fight-or-flight response

Disorders Involving the Adrenal Glands

Several disorders are caused by the dysregulation of the hormones produced by the adrenal glands. For example, Cushing’s disease is a disorder characterized by high blood glucose levels and the accumulation of lipid deposits on the face and neck. It is caused by hypersecretion of cortisol. The most common source of Cushing’s disease is a pituitary tumor that secretes ACTH in abnormally high amounts, driving overproduction of cortisol. Other common signs of Cushing’s disease include the development of a moon-shaped face, a buffalo hump on the back of the neck, rapid weight gain, and hair loss. Chronically elevated glucose levels are also associated with an elevated risk of developing type 2 diabetes. In addition to hyperglycemia, chronically elevated glucocorticoids compromise immunity, resistance to infection, and memory.

In contrast, the hyposecretion of corticosteroids can result in Addison’s disease, a rare disorder that causes low blood glucose levels and low blood sodium levels. The signs and symptoms of Addison’s disease are vague and typical of other disorders as well, making diagnosis difficult. They may include general weakness, abdominal pain, weight loss, nausea, vomiting, sweating, and cravings for salty food.

17.7 The Pineal Gland

Learning Objectives

By the end of this section, you will be able to:
- Describe the location and structure of the pineal gland
- Discuss the function of melatonin

Recall that the hypothalamus, part of the diencephalon of the brain, sits inferior and somewhat anterior to the thalamus. Inferior but somewhat posterior to the thalamus is the pineal gland, a tiny endocrine gland whose functions are not entirely clear. The pinealocyte cells that make up the pineal gland are known to produce and secrete the amine hormone melatonin, which is derived from serotonin.

The secretion of melatonin varies according to the level of light received from the environment. When photons of light stimulate the retinas of the eyes, a nerve impulse is sent to a region of the hypothalamus called the suprachiasmatic nucleus (SCN), which is important in regulating biological rhythms. From the SCN, the nerve signal is carried to the spinal cord and eventually to the pineal gland, where the production of melatonin is inhibited. As a result, blood levels of melatonin fall, promoting wakefulness.
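This retina-to-SCN-to-pineal relay behaves like an inverter, and a toy Python sketch makes the sign of the relationship explicit (the lux threshold is an invented illustration; real melatonin secretion is graded and circadian-gated, not a simple on/off cutoff):

```python
# Toy inverter: light input (relayed by the SCN) suppresses pineal
# melatonin output. The threshold is illustrative, not physiological.

DAYLIGHT_THRESHOLD = 100.0  # lux; invented cutoff for this sketch

def melatonin_state(light_lux: float) -> str:
    """Map ambient light to the qualitative melatonin response."""
    if light_lux >= DAYLIGHT_THRESHOLD:
        return "melatonin suppressed -> wakefulness"
    return "melatonin secreted -> drowsiness"

print(melatonin_state(5000.0))  # bright afternoon light
print(melatonin_state(1.0))     # darkened bedroom at night
```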
In contrast, as light levels decline—such as during the evening—melatonin production increases, boosting blood levels and causing drowsiness.

Interactive Link

Visit this link to view an animation describing the function of the hormone melatonin. What should you avoid doing in the middle of your sleep cycle that would lower melatonin?

The secretion of melatonin may influence the body’s circadian rhythms, the dark-light fluctuations that affect not only sleepiness and wakefulness, but also appetite and body temperature. Interestingly, children have higher melatonin levels than adults, which may prevent the release of gonadotropins from the anterior pituitary, thereby inhibiting the onset of puberty. Finally, an antioxidant role of melatonin is the subject of current research.

Jet lag occurs when a person travels across several time zones and feels sleepy during the day or wakeful at night. Traveling across multiple time zones significantly disturbs the light-dark cycle regulated by melatonin. It can take up to several days for melatonin synthesis to adjust to the light-dark patterns in the new environment, resulting in jet lag. Some air travelers take melatonin supplements to induce sleep.

17.8 Gonadal and Placental Hormones

Learning Objectives

By the end of this section, you will be able to:
- Identify the most important hormones produced by the testes and ovaries
- Name the hormones produced by the placenta and state their functions

This section briefly discusses the hormonal role of the gonads—the male testes and female ovaries—which produce the sex cells (sperm and ova) and secrete the gonadal hormones. The roles of the gonadotropins released from the anterior pituitary (FSH and LH) were discussed earlier.

The primary hormone produced by the male testes is testosterone, a steroid hormone important in the development of the male reproductive system, the maturation of sperm cells, and the development of male secondary sex characteristics such as a deepened voice, body hair, and increased muscle mass. Interestingly, testosterone is also produced in the female ovaries, but at a much reduced level. In addition, the testes produce the peptide hormone inhibin, which inhibits the secretion of FSH from the anterior pituitary gland. FSH stimulates spermatogenesis.

The primary hormones produced by the ovaries are estrogens, which include estradiol, estriol, and estrone. Estrogens play an important role in a large number of physiological processes, including the development of the female reproductive system, regulation of the menstrual cycle, the development of female secondary sex characteristics such as increased adipose tissue and the development of breast tissue, and the maintenance of pregnancy. Another significant ovarian hormone is progesterone, which contributes to regulation of the menstrual cycle and is important in preparing the body for pregnancy as well as maintaining pregnancy. In addition, the granulosa cells of the ovarian follicles produce inhibin, which—as in males—inhibits the secretion of FSH.

During the initial stages of pregnancy, an organ called the placenta develops within the uterus. The placenta supplies oxygen and nutrients to the fetus, excretes waste products, and produces and secretes estrogens and progesterone. The placenta produces human chorionic gonadotropin (hCG) as well. The hCG hormone promotes progesterone synthesis and reduces the mother’s immune function to protect the fetus from immune rejection.
It also secretes human placental lactogen (hPL), which plays a role in preparing the breasts for lactation, and relaxin, which is thought to help soften and widen the pubic symphysis in preparation for childbirth. The hormones controlling reproduction are summarized in Table 17.6.

Reproductive Hormones (Table 17.6)

Gonad     Associated hormones           Chemical class   Effect
Testes    Testosterone                  Steroid          Stimulates development of male secondary sex characteristics and sperm production
Testes    Inhibin                       Protein          Inhibits FSH release from pituitary
Ovaries   Estrogens and progesterone    Steroid          Stimulate development of female secondary sex characteristics and prepare the body for childbirth
Placenta  Human chorionic gonadotropin  Protein          Promotes progesterone synthesis during pregnancy and inhibits immune response against fetus

Everyday Connection: Anabolic Steroids

The endocrine system can be exploited for illegal or unethical purposes. A prominent example of this is the use of steroid drugs by professional athletes. Commonly used for performance enhancement, anabolic steroids are synthetic versions of the male sex hormone, testosterone. By boosting natural levels of this hormone, athletes experience increased muscle mass. Synthetic versions of human growth hormone are also used to build muscle mass.

The use of performance-enhancing drugs is banned by all major collegiate and professional sports organizations in the United States because they impart an unfair advantage to athletes who take them. In addition, the drugs can cause significant and dangerous side effects. For example, anabolic steroid use can increase cholesterol levels, raise blood pressure, and damage the liver. Altered testosterone levels (whether too low or too high) have been implicated in structural damage to the heart and in an increased risk of cardiac arrhythmias, heart attacks, congestive heart failure, and sudden death. Paradoxically, steroids can have a feminizing effect in males, including shriveled testicles and enlarged breast tissue. In females, their use can cause masculinizing effects such as an enlarged clitoris and growth of facial hair. In both sexes, their use can promote increased aggression (commonly known as “roid rage”), depression, sleep disturbances, severe acne, and infertility.

17.9 The Endocrine Pancreas

Learning Objectives

By the end of this section, you will be able to:
- Describe the location and structure of the pancreas, and the morphology and function of the pancreatic islets
- Compare and contrast the functions of insulin and glucagon

The pancreas is a long, slender organ, most of which is located posterior to the bottom half of the stomach (Figure 17.18). Although it is primarily an exocrine gland, secreting a variety of digestive enzymes, the pancreas also has an endocrine function. Its pancreatic islets—clusters of cells formerly known as the islets of Langerhans—secrete the hormones glucagon, insulin, somatostatin, and pancreatic polypeptide (PP).

Interactive Link

View the University of Michigan WebScope to explore the tissue sample in greater detail.

Cells and Secretions of the Pancreatic Islets

The pancreatic islets each contain four varieties of cells. The alpha cell produces the hormone glucagon and makes up approximately 20 percent of each islet. Glucagon plays an important role in blood glucose regulation; low blood glucose levels stimulate its release. The beta cell produces the hormone insulin and makes up approximately 75 percent of each islet.
Elevated blood glucose levels stimulate the release of insulin. The delta cell accounts for four percent of the islet cells and secretes the peptide hormone somatostatin. Recall that somatostatin is also released by the hypothalamus (as GHIH), and the stomach and intestines also secrete it. An inhibiting hormone, pancreatic somatostatin inhibits the release of both glucagon and insulin. The PP cell accounts for about one percent of islet cells and secretes the pancreatic polypeptide hormone. It is thought to play a role in appetite, as well as in the regulation of pancreatic exocrine and endocrine secretions. Pancreatic polypeptide released following a meal may reduce further food consumption; however, it is also released in response to fasting.

Regulation of Blood Glucose Levels by Insulin and Glucagon

Glucose is required for cellular respiration and is the preferred fuel for all body cells. The body derives glucose from the breakdown of the carbohydrate-containing foods and drinks we consume. Glucose not immediately taken up by cells for fuel can be stored by the liver and muscles as glycogen, or converted to triglycerides and stored in the adipose tissue. Hormones regulate both the storage and the utilization of glucose as required. Receptors located in the pancreas sense blood glucose levels, and subsequently the pancreatic cells secrete glucagon or insulin to maintain normal levels.

Glucagon

Receptors in the pancreas can sense the decline in blood glucose levels, such as during periods of fasting or during prolonged labor or exercise (Figure 17.19). In response, the alpha cells of the pancreas secrete the hormone glucagon, which has several effects:
- It stimulates the liver to convert its stores of glycogen back into glucose. This response is known as glycogenolysis. The glucose is then released into the circulation for use by body cells.
- It stimulates the liver to take up amino acids from the blood and convert them into glucose. This response is known as gluconeogenesis.
- It stimulates lipolysis, the breakdown of stored triglycerides into free fatty acids and glycerol. Some of the free glycerol released into the bloodstream travels to the liver, which converts it into glucose. This is also a form of gluconeogenesis.

Taken together, these actions increase blood glucose levels. The activity of glucagon is regulated through a negative feedback mechanism; rising blood glucose levels inhibit further glucagon production and secretion.

Insulin

The primary function of insulin is to facilitate the uptake of glucose into body cells. Red blood cells, as well as cells of the brain, liver, kidneys, and the lining of the small intestine, do not have insulin receptors on their cell membranes and do not require insulin for glucose uptake. Although all other body cells do require insulin if they are to take glucose from the bloodstream, skeletal muscle cells and adipose cells are the primary targets of insulin.

The presence of food in the intestine triggers the release of gastrointestinal tract hormones such as glucose-dependent insulinotropic peptide (previously known as gastric inhibitory peptide). This is in turn the initial trigger for insulin production and secretion by the beta cells of the pancreas. Once nutrient absorption occurs, the resulting surge in blood glucose levels further stimulates insulin secretion. Precisely how insulin facilitates glucose uptake is not entirely clear.
However, insulin appears to activate a tyrosine kinase receptor, triggering the phosphorylation of many substrates within the cell. These multiple biochemical reactions converge to support the movement of intracellular vesicles containing facilitative glucose transporters to the cell membrane. In the absence of insulin, these transport proteins are normally recycled slowly between the cell membrane and cell interior. Insulin triggers the rapid movement of a pool of glucose transporter vesicles to the cell membrane, where they fuse and expose the glucose transporters to the extracellular fluid. The transporters then move glucose by facilitated diffusion into the cell interior.

Interactive Link

Visit this link to view an animation describing the location and function of the pancreas. What goes wrong in the function of insulin in type 2 diabetes?

Insulin also reduces blood glucose levels by stimulating glycolysis, the metabolism of glucose for generation of ATP. Moreover, it stimulates the liver to convert excess glucose into glycogen for storage, and it inhibits enzymes involved in glycogenolysis and gluconeogenesis. Finally, insulin promotes triglyceride and protein synthesis. The secretion of insulin is regulated through a negative feedback mechanism. As blood glucose levels decrease, further insulin release is inhibited. The pancreatic hormones are summarized in Table 17.7.

Hormones of the Pancreas (Table 17.7)

Associated hormones                Chemical class   Effect
Insulin (beta cells)               Protein          Reduces blood glucose levels
Glucagon (alpha cells)             Protein          Increases blood glucose levels
Somatostatin (delta cells)         Protein          Inhibits insulin and glucagon release
Pancreatic polypeptide (PP cells)  Protein          Plays a role in appetite

Disorders of the... Endocrine System: Diabetes Mellitus

Dysfunction of insulin production and secretion, as well as the target cells’ responsiveness to insulin, can lead to a condition called diabetes mellitus. An increasingly common disease, diabetes mellitus has been diagnosed in more than 18 million adults in the United States, and more than 200,000 children. It is estimated that up to 7 million more adults have the condition but have not been diagnosed. In addition, approximately 79 million people in the US are estimated to have pre-diabetes, a condition in which blood glucose levels are abnormally high, but not yet high enough to be classified as diabetes.

There are two main forms of diabetes mellitus. Type 1 diabetes is an autoimmune disease affecting the beta cells of the pancreas. Certain genes are recognized to increase susceptibility. The beta cells of people with type 1 diabetes do not produce insulin; thus, synthetic insulin must be administered by injection or infusion. This form of diabetes accounts for less than five percent of all diabetes cases.

Type 2 diabetes accounts for approximately 95 percent of all cases. It is acquired, and lifestyle factors such as poor diet, inactivity, and the presence of pre-diabetes greatly increase a person’s risk. About 80 to 90 percent of people with type 2 diabetes are overweight or obese. In type 2 diabetes, cells become resistant to the effects of insulin. In response, the pancreas increases its insulin secretion, but over time, the beta cells become exhausted. In many cases, type 2 diabetes can be reversed by moderate weight loss, regular physical activity, and consumption of a healthy diet; however, if blood glucose levels cannot be controlled, the person will eventually require insulin.
Two of the early manifestations of diabetes are excessive urination and excessive thirst. They demonstrate how the out-of-control levels of glucose in the blood affect kidney function. The kidneys are responsible for filtering glucose from the blood. Excessive blood glucose draws water into the urine, and as a result the person eliminates an abnormally large quantity of sweet urine. The use of body water to dilute the urine leaves the body dehydrated, and so the person is unusually and continually thirsty. The person may also experience persistent hunger because the body cells are unable to access the glucose in the bloodstream.

Over time, persistently high levels of glucose in the blood injure tissues throughout the body, especially those of the blood vessels and nerves. Inflammation and injury of the lining of arteries lead to atherosclerosis and an increased risk of heart attack and stroke. Damage to the microscopic blood vessels of the kidney impairs kidney function and can lead to kidney failure. Damage to blood vessels that serve the eyes can lead to blindness. Blood vessel damage also reduces circulation to the limbs, whereas nerve damage leads to a loss of sensation, called neuropathy, particularly in the hands and feet. Together, these changes increase the risk of injury, infection, and tissue death (necrosis), contributing to a high rate of toe, foot, and lower leg amputations in people with diabetes.

Uncontrolled diabetes can also lead to a dangerous form of metabolic acidosis called ketoacidosis. Deprived of glucose, cells increasingly rely on fat stores for fuel. However, in a glucose-deficient state, the liver is forced to use an alternative lipid metabolism pathway that results in the increased production of ketone bodies (or ketones), which are acidic. The build-up of ketones in the blood causes ketoacidosis, which—if left untreated—may lead to a life-threatening “diabetic coma.” Together, these complications make diabetes the seventh leading cause of death in the United States.

Diabetes is diagnosed when lab tests reveal that blood glucose levels are higher than normal, a condition called hyperglycemia. The treatment of diabetes depends on the type, the severity of the condition, and the ability of the patient to make lifestyle changes. As noted earlier, moderate weight loss, regular physical activity, and consumption of a healthful diet can reduce blood glucose levels. Some patients with type 2 diabetes may be unable to control their disease with these lifestyle changes, and will require medication. Historically, the first-line treatment of type 2 diabetes was insulin. Research advances have resulted in alternative options, including medications that enhance pancreatic function.

Interactive Link

Visit this link to view an animation describing the role of insulin and the pancreas in diabetes.
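The insulin and glucagon responses described in this section form a two-sided negative feedback loop around a glucose set point. A short Python simulation makes the convergence visible (the set point, gain, and units below are illustrative, not clinical):

```python
# Discrete-time caricature of the insulin-glucagon feedback loop.
# The set point, gain, and units are illustrative only.

SET_POINT = 90.0  # mg/dL; a typical fasting value, used here for illustration
GAIN = 0.5        # fraction of the error corrected per step (invented)

def pancreas_step(glucose: float) -> float:
    """One regulatory step: beta cells (insulin) pull high glucose down;
    alpha cells (glucagon) push low glucose back up."""
    error = glucose - SET_POINT
    if error > 0:
        glucose -= GAIN * error      # insulin: uptake, glycogenesis, lipogenesis
    else:
        glucose += GAIN * (-error)   # glucagon: glycogenolysis, gluconeogenesis
    return glucose

glucose = 160.0  # after a carbohydrate-rich meal
for step in range(5):
    glucose = pancreas_step(glucose)
    print(f"step {step + 1}: glucose = {glucose:.1f} mg/dL")
```

Each iteration shrinks the deviation from the set point, which is the defining behavior of negative feedback; in type 1 diabetes, the insulin branch of this loop is effectively missing.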
U.S. History
Summary

6.1 Britain’s Law-and-Order Strategy and Its Consequences

Until Parliament passed the Coercive Acts in 1774, most colonists still thought of themselves as proud subjects of the strong British Empire. However, the Coercive (or Intolerable) Acts, which Parliament enacted to punish Massachusetts for failing to pay for the destruction of the tea, convinced many colonists that Great Britain was indeed threatening to stifle their liberty. In Massachusetts and other New England colonies, militias like the minutemen prepared for war by stockpiling weapons and ammunition. After the first loss of life at the battles of Lexington and Concord in April 1775, skirmishes continued throughout the colonies. When Congress met in Philadelphia in July 1776, its members signed the Declaration of Independence, officially breaking ties with Great Britain and declaring their intention to be self-governing.

6.2 The Early Years of the Revolution

The British successfully implemented the first part of their strategy to isolate New England when they took New York City in the fall of 1776. For the next seven years, they used New York as a base of operations, expanding their control to Philadelphia in the fall of 1777. After suffering through a terrible winter in 1777–1778 at Valley Forge, Pennsylvania, American forces were revived with help from Baron von Steuben, a Prussian military officer who helped transform the Continental Army into a professional fighting force. The effort to cut off New England from the rest of the colonies failed with General Burgoyne’s surrender at Saratoga in October 1777. After Saratoga, the struggle for independence gained a powerful ally when France agreed to recognize the United States as a new nation and began to send much-needed military support. The entrance of France—Britain’s archrival in the contest of global empire—into the American fight helped to turn the tide of the war in favor of the revolutionaries.

6.3 War in the South

The British gained momentum in the war when they turned their military efforts against the southern colonies. They scored repeated victories in the coastal towns, where they found legions of supporters, including people escaping bondage. As in other colonies, however, control of major seaports did not mean the British could control the interior. Fighting in the southern colonies devolved into a merciless civil war as the Revolution opened the floodgates of pent-up anger and resentment between frontier residents and those along the coastal regions. The southern campaign came to an end at Yorktown when Cornwallis surrendered to American forces.

6.4 Identity during the American Revolution

The American Revolution divided the colonists as much as it united them, with Loyalists (or Tories) joining the British forces against the Patriots (or revolutionaries). Both sides included a broad cross-section of the population. However, Great Britain was able to convince many to join its forces by promising them freedom, something the southern revolutionaries would not agree to do. The war provided new opportunities, as well as new challenges, for enslaved and free Black people, women, and Native peoples. After the war, many Loyalists fled the American colonies, heading across the Atlantic to England, north to Canada, or south to the West Indies.
Chapter Outline

6.1 Britain’s Law-and-Order Strategy and Its Consequences
6.2 The Early Years of the Revolution
6.3 War in the South
6.4 Identity during the American Revolution

Introduction

By the 1770s, Great Britain ruled a vast empire, with its American colonies producing useful raw materials and profitably consuming British goods. From Britain’s perspective, it was inconceivable that the colonies would wage a successful war for independence; in 1776, they appeared weak and disorganized, no match for the Empire. Yet although the Revolutionary War did indeed drag on for eight years, the thirteen colonies, now the United States, ultimately prevailed against the British in 1783.

The Revolution succeeded because colonists from diverse economic and social backgrounds united in their opposition to Great Britain. Although thousands of colonists remained loyal to the crown and many others preferred to remain neutral, a sense of community against a common enemy prevailed among Patriots. The signing of the Declaration of Independence (Figure 6.1) exemplifies the spirit of that common cause. Representatives asserted: “That these United Colonies are, and of Right ought to be Free and Independent States; that they are Absolved from all Allegiance to the British Crown, . . . And for the support of this Declaration, . . . we mutually pledge to each other our Lives, our Fortunes and our sacred Honor.”
[ { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Both the British and the rebels in New England began to prepare for conflict by turning their attention to supplies of weapons and gunpowder . <hl> General Gage stationed thirty-five hundred troops in Boston , and from there he ordered periodic raids on towns where guns and gunpowder were stockpiled , hoping to impose law and order by seizing them . <hl> As Boston became the headquarters of British military operations , many residents fled the city . <hl> In an effort to restore law and order in Boston , the British dispatched General Thomas Gage to the New England seaport . <hl> <hl> He arrived in Boston in May 1774 as the new royal governor of the Province of Massachusetts , accompanied by several regiments of British troops . <hl> As in 1768 , the British again occupied the town . Massachusetts delegates met in a Provincial Congress and published the Suffolk Resolves , which officially rejected the Coercive Acts and called for the raising of colonial militias to take military action if needed . The Suffolk Resolves signaled the overthrow of the royal government in Massachusetts . Great Britain pursued a policy of law and order when dealing with the crises in the colonies in the late 1760s and 1770s . Relations between the British and many American Patriots worsened over the decade , culminating in an unruly mob destroying a fortune in tea by dumping it into Boston Harbor in December 1773 as a protest against British tax laws . The harsh British response to this act in 1774 , which included sending British troops to Boston and closing Boston Harbor , caused tensions and resentments to escalate further . <hl> The British tried to disarm the insurgents in Massachusetts by confiscating their weapons and ammunition and arresting the leaders of the patriotic movement . <hl> However , this effort faltered on April 19 , when Massachusetts militias and British troops fired on each other as British troops marched to Lexington and Concord , an event immortalized by poet Ralph Waldo Emerson as the “ shot heard round the world . ” The American Revolution had begun .", "hl_sentences": "General Gage stationed thirty-five hundred troops in Boston , and from there he ordered periodic raids on towns where guns and gunpowder were stockpiled , hoping to impose law and order by seizing them . In an effort to restore law and order in Boston , the British dispatched General Thomas Gage to the New England seaport . He arrived in Boston in May 1774 as the new royal governor of the Province of Massachusetts , accompanied by several regiments of British troops . The British tried to disarm the insurgents in Massachusetts by confiscating their weapons and ammunition and arresting the leaders of the patriotic movement .", "question": { "cloze_format": "British General Thomas Gage attempt to deal with the uprising in Massachussetss in 177 by ___.", "normal_format": "How did British General Thomas Gage attempt to deal with the uprising in Massachusetts in 1774?", "question_choices": [ "He offered the rebels land on the Maine frontier in return for loyalty to England.", "He allowed for town meetings in an attempt to appease the rebels.", "He attempted to seize arms and munitions from the colonial insurgents.", "He ordered his troops to burn Boston to the ground to show the determination of Britain." 
], "question_id": "fs-idm122624064", "question_text": "How did British General Thomas Gage attempt to deal with the uprising in Massachusetts in 1774?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "A majority of enslaved people in the colonies won their freedom." }, "bloom": null, "hl_context": "In Virginia , the royal governor , Lord Dunmore , raised Loyalist forces to combat the rebel colonists and also tried to use the large enslaved population to put down the rebellion . <hl> In November 1775 , he issued a decree , known as Dunmore ’ s Proclamation , promising freedom to enslaved people and indentured servants of rebels who remained loyal to the king and who pledged to fight with the Loyalists against the insurgents . <hl> <hl> Dunmore ’ s Proclamation exposed serious problems for both the Patriot cause and for the British . <hl> <hl> In order for the British to put down the rebellion , they needed the support of Virginia ’ s landowners , many of whom enslaved people . <hl> <hl> ( While Patriot slaveholders in Virginia and elsewhere proclaimed they acted in defense of liberty , they kept thousands in bondage , a fact the British decided to exploit . ) <hl> <hl> Although a number of enslaved people did join Dunmore ’ s side , the proclamation had the unintended effect of galvanizing Patriot resistance to Britain . <hl> <hl> From the rebels ’ point of view , the British looked to deprive them of their enslaved property and incite a race war . <hl> <hl> Slaveholders feared an uprising and increased their commitment to the cause against Great Britain , calling for independence . <hl> Dunmore fled Virginia in 1776 .", "hl_sentences": "In November 1775 , he issued a decree , known as Dunmore ’ s Proclamation , promising freedom to enslaved people and indentured servants of rebels who remained loyal to the king and who pledged to fight with the Loyalists against the insurgents . Dunmore ’ s Proclamation exposed serious problems for both the Patriot cause and for the British . In order for the British to put down the rebellion , they needed the support of Virginia ’ s landowners , many of whom enslaved people . ( While Patriot slaveholders in Virginia and elsewhere proclaimed they acted in defense of liberty , they kept thousands in bondage , a fact the British decided to exploit . ) Although a number of enslaved people did join Dunmore ’ s side , the proclamation had the unintended effect of galvanizing Patriot resistance to Britain . From the rebels ’ point of view , the British looked to deprive them of their enslaved property and incite a race war . Slaveholders feared an uprising and increased their commitment to the cause against Great Britain , calling for independence .", "question": { "cloze_format": "___ was not a result of Dunmore's Proclamation.", "normal_format": "Which of the following was not a result of Dunmore’s Proclamation?", "question_choices": [ "Enslaved people joined Dunmore to fight for the British.", "A majority of enslaved people in the colonies won their freedom.", "Patriot forces increased their commitment to independence.", "Both slaveholding and non-slaveholding White people feared a rebellion." ], "question_id": "fs-idm165023472", "question_text": "Which of the following was not a result of Dunmore’s Proclamation?" 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "Paine ’ s pamphlet rejected the monarchy , calling King George III a “ royal brute ” and questioning the right of an island ( England ) to rule over America . In this way , Paine helped to channel colonial discontent toward the king himself and not , as had been the case , toward the British Parliament — a bold move that signaled the desire to create a new political order disavowing monarchy entirely . <hl> He argued for the creation of an American republic , a state without a king , and extolled the blessings of republicanism , a political philosophy that held that elected representatives , not a hereditary monarch , should govern states . <hl> <hl> The vision of an American republic put forward by Paine included the idea of popular sovereignty : citizens in the republic would determine who would represent them , and decide other issues , on the basis of majority rule . <hl> Republicanism also served as a social philosophy guiding the conduct of the Patriots in their struggle against the British Empire . It demanded adherence to a code of virtue , placing the public good and community above narrow self-interest .", "hl_sentences": "He argued for the creation of an American republic , a state without a king , and extolled the blessings of republicanism , a political philosophy that held that elected representatives , not a hereditary monarch , should govern states . The vision of an American republic put forward by Paine included the idea of popular sovereignty : citizens in the republic would determine who would represent them , and decide other issues , on the basis of majority rule .", "question": { "cloze_format": "___ is not true of a republic.", "normal_format": "Which of the following is not true of a republic?", "question_choices": [ "A republic has no hereditary ruling class.", "A republic relies on the principle of popular sovereignty.", "Representatives chosen by the people lead the republic.", "A republic is governed by a monarch and the royal officials he or she appoints." ], "question_id": "fs-idm184651552", "question_text": "Which of the following is not true of a republic?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "Great Britain ’ s effort to isolate New England in 1777 failed . <hl> In June 1778 , the occupying British force in Philadelphia evacuated and returned to New York City in order to better defend that city , and the British then turned their attention to the southern colonies . <hl> 6.3 War in the South Learning Objectives By the end of this section , you will be able to : On September 16 , 1776 , George Washington ’ s forces held up against the British at the Battle of Harlem Heights . This important American military achievement , a key reversal after the disaster on Long Island , occurred as most of Washington ’ s forces retreated to New Jersey . A few weeks later , on October 28 , General Howe ’ s forces defeated Washington ’ s at the Battle of White Plains and New York City fell to the British . <hl> For the next seven years , the British made the city the headquarters for their military efforts to defeat the rebellion , which included raids on surrounding areas . <hl> In 1777 , the British burned Danbury , Connecticut , and in July 1779 , they set fire to homes in Fairfield and Norwalk . 
They held American prisoners aboard ships in the waters around New York City ; the death toll was shocking , with thousands perishing in the holds . Meanwhile , New York City served as a haven for Loyalists who disagreed with the effort to break away from the Empire and establish an American republic . <hl> The major campaigns over the next several years took place in the middle colonies of New York , New Jersey , and Pennsylvania , whose populations were sharply divided between Loyalists and Patriots . <hl> Revolutionaries faced many hardships as British superiority on the battlefield became evident and the difficulty of funding the war caused strains .", "hl_sentences": "In June 1778 , the occupying British force in Philadelphia evacuated and returned to New York City in order to better defend that city , and the British then turned their attention to the southern colonies . For the next seven years , the British made the city the headquarters for their military efforts to defeat the rebellion , which included raids on surrounding areas . The major campaigns over the next several years took place in the middle colonies of New York , New Jersey , and Pennsylvania , whose populations were sharply divided between Loyalists and Patriots .", "question": { "cloze_format": "___ served as the base for British operations for most of the war.", "normal_format": "Which city served as the base for British operations for most of the war?", "question_choices": [ "Boston", "New York", "Philadelphia", "Saratoga" ], "question_id": "fs-idm97982096", "question_text": "Which city served as the base for British operations for most of the war?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "the Battle of Saratoga" }, "bloom": null, "hl_context": "<hl> The American victory at the Battle of Saratoga was the major turning point in the war . <hl> This victory convinced the French to recognize American independence and form a military alliance with the new nation , which changed the course of the war by opening the door to badly needed military support from France . Still smarting from their defeat by Britain in the Seven Years ’ War , the French supplied the United States with gunpowder and money , as well as soldiers and naval forces that proved decisive in the defeat of Great Britain . The French also contributed military leaders , including the Marquis de Lafayette , who arrived in America in 1777 as a volunteer and served as Washington ’ s aide-de-camp .", "hl_sentences": "The American victory at the Battle of Saratoga was the major turning point in the war .", "question": { "cloze_format": "___ turned the tide of war in favor of the Americans.", "normal_format": "What battle turned the tide of war in favor of the Americans?", "question_choices": [ "the Battle of Saratoga", "the Battle of Brandywine Creek", "the Battle of White Plains", "the Battle of Valley Forge" ], "question_id": "fs-idm75993488", "question_text": "What battle turned the tide of war in favor of the Americans?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "That changed in late 1776 and early 1777 , when Washington broke with conventional eighteenth-century military tactics that called for fighting in the summer months only . Intent on raising revolutionary morale after the British captured New York City , he launched surprise strikes against British forces in their winter quarters . 
<hl> In Trenton , New Jersey , he led his soldiers across the Delaware River and surprised an encampment of Hessians , German mercenaries hired by Great Britain to put down the American rebellion . <hl> Beginning the night of December 25 , 1776 , and continuing into the early hours of December 26 , Washington moved on Trenton where the Hessians were encamped . Maintaining the element of surprise by attacking at Christmastime , he defeated them , taking over nine hundred captive . On January 3 , 1777 , Washington achieved another much-needed victory at the Battle of Princeton . He again broke with eighteenth-century military protocol by attacking unexpectedly after the fighting season had ended .", "hl_sentences": "In Trenton , New Jersey , he led his soldiers across the Delaware River and surprised an encampment of Hessians , German mercenaries hired by Great Britain to put down the American rebellion .", "question": { "cloze_format": "German soldiers hired by Great Britain to put down the American rebellion are called ___.", "normal_format": "Which term describes German soldiers hired by Great Britain to put down the American rebellion?", "question_choices": [ "Patriots", "Royalists", "Hessians", "Loyalists" ], "question_id": "fs-idm90469232", "question_text": "Which term describes German soldiers hired by Great Britain to put down the American rebellion?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "Nathanael Greene" }, "bloom": null, "hl_context": "As the British had hoped , large numbers of Loyalists helped ensure the success of the southern strategy , and thousands of enslaved individuals seeking freedom arrived to aid Cornwallis ’ s army . <hl> However , the war turned in the Americans ’ favor in 1781 . <hl> <hl> General Greene realized that to defeat Cornwallis , he did not have to win a single battle . <hl> <hl> So long as he remained in the field , he could continue to destroy isolated British forces . <hl> <hl> Greene therefore made a strategic decision to divide his own troops to wage war — and the strategy worked . <hl> <hl> American forces under General Daniel Morgan decisively beat the British at the Battle of Cowpens in South Carolina . <hl> <hl> General Cornwallis now abandoned his strategy of defeating the backcountry rebels in South Carolina . <hl> <hl> Determined to destroy Greene ’ s army , he gave chase as Greene strategically retreated north into North Carolina . <hl> At the Battle of Guilford Courthouse in March 1781 , the British prevailed on the battlefield but suffered extensive losses , an outcome that paralleled the Battle of Bunker Hill nearly six years earlier in June 1775 . The disaster at Charleston led the Continental Congress to change leadership by placing General Horatio Gates in charge of American forces in the South . However , General Gates fared no better than General Lincoln ; at the Battle of Camden , South Carolina , in August 1780 , Cornwallis forced General Gates to retreat into North Carolina . Camden was one of the worst disasters suffered by American armies during the entire Revolutionary War . <hl> Congress again changed military leadership , this time by placing General Nathanael Greene ( Figure 6.14 ) in command in December 1780 . <hl> By 1778 , the war had turned into a stalemate . Although some in Britain , including Prime Minister Lord North , wanted peace , King George III demanded that the colonies be brought to obedience . 
To break the deadlock , the British revised their strategy and turned their attention to the southern colonies , where they could expect more support from Loyalists . The southern colonies soon became the center of the fighting . <hl> The southern strategy brought the British success at first , but thanks to the leadership of George Washington and General Nathanael Greene and the crucial assistance of French forces , the Continental Army defeated the British at Yorktown , effectively ending further large-scale operations during the war . <hl>", "hl_sentences": "However , the war turned in the Americans ’ favor in 1781 . General Greene realized that to defeat Cornwallis , he did not have to win a single battle . So long as he remained in the field , he could continue to destroy isolated British forces . Greene therefore made a strategic decision to divide his own troops to wage war — and the strategy worked . American forces under General Daniel Morgan decisively beat the British at the Battle of Cowpens in South Carolina . General Cornwallis now abandoned his strategy of defeating the backcountry rebels in South Carolina . Determined to destroy Greene ’ s army , he gave chase as Greene strategically retreated north into North Carolina . Congress again changed military leadership , this time by placing General Nathanael Greene ( Figure 6.14 ) in command in December 1780 . The southern strategy brought the British success at first , but thanks to the leadership of George Washington and General Nathanael Greene and the crucial assistance of French forces , the Continental Army defeated the British at Yorktown , effectively ending further large-scale operations during the war .", "question": { "cloze_format": "___ is responsible for improving the American military position in the South.", "normal_format": "Which American general is responsible for improving the American military position in the South?", "question_choices": [ "John Burgoyne", "Nathanael Greene", "Wilhelm Frederick von Steuben", "Charles Cornwallis" ], "question_id": "fs-idp6723008", "question_text": "Which American general is responsible for improving the American military position in the South?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "American colonists were divided among those who wanted independence, those who wanted to remain part of the British Empire, and those who were neutral." }, "bloom": null, "hl_context": "<hl> The Revolution succeeded because colonists from diverse economic and social backgrounds united in their opposition to Great Britain . <hl> <hl> Although thousands of colonists remained loyal to the crown and many others preferred to remain neutral , a sense of community against a common enemy prevailed among Patriots . <hl> The signing of the Declaration of Independence ( Figure 6.1 ) exemplifies the spirit of that common cause . Representatives asserted : “ That these United Colonies are , and of Right ought to be Free and Independent States ; that they are Absolved from all Allegiance to the British Crown , . . . And for the support of this Declaration , . . . we mutually pledge to each other our Lives , our Fortunes and our sacred Honor . ”", "hl_sentences": "The Revolution succeeded because colonists from diverse economic and social backgrounds united in their opposition to Great Britain . 
Although thousands of colonists remained loyal to the crown and many others preferred to remain neutral , a sense of community against a common enemy prevailed among Patriots .", "question": { "cloze_format": "___ best represents the division between Patriots and Loyalists.", "normal_format": "Which of the following statements best represents the division between Patriots and Loyalists?", "question_choices": [ "Most American colonists were Patriots, with only a few traditionalists remaining loyal to the King and Empire.", "Most American colonists were Loyalists, with only a few firebrand revolutionaries leading the charge for independence.", "American colonists were divided among those who wanted independence, those who wanted to remain part of the British Empire, and those who were neutral.", "The vast majority of American colonists were neutral and didn’t take a side between Loyalists and Patriots." ], "question_id": "fs-idm54987872", "question_text": "Which of the following statements best represents the division between Patriots and Loyalists?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "Women who did not share Reed ’ s prominent status nevertheless played key economic roles by producing homespun cloth and food . During shortages , some women formed mobs and wrested supplies from those who hoarded them . Crowds of women beset merchants and demanded fair prices for goods ; if a merchant refused , a riot would ensue . <hl> Still other women accompanied the army as “ camp followers , ” serving as cooks , washerwomen , and nurses . <hl> A few also took part in combat and proved their equality with men through violence against the hated British . <hl> The Revolution opened some new doors for women , however , as they took on public roles usually reserved for men . <hl> <hl> The Daughters of Liberty , an informal organization formed in the mid - 1760s to oppose British revenue-raising measures , worked tirelessly to support the war effort . <hl> Esther DeBerdt Reed of Philadelphia , wife of Governor Joseph Reed , formed the Ladies Association of Philadelphia and led a fundraising drive to provide sorely needed supplies to the Continental Army . In “ The Sentiments of an American Woman ” ( 1780 ) , she wrote to other women , “ The time is arrived to display the same sentiments which animated us at the beginning of the Revolution , when we renounced the use of teas , however agreeable to our taste , rather than receive them from our persecutors ; when we made it appear to them that we placed former necessaries in the rank of superfluities , when our liberty was interested ; when our republican and laborious hands spun the flax , prepared the linen intended for the use of our soldiers ; when exiles and fugitives we supported with courage all the evils which are the concomitants of war . ” Reed and other women in Philadelphia raised almost $ 300,000 in Continental money for the war . In colonial America , women shouldered enormous domestic and child-rearing responsibilities . The war for independence only increased their workload and , in some ways , solidified their roles . <hl> Rebel leaders required women to produce articles for war — everything from clothing to foodstuffs — while also keeping their homesteads going . <hl> <hl> This was not an easy task when their husbands and sons were away fighting . <hl> <hl> Women were also expected to provide food and lodging for armies and to nurse wounded soldiers . 
<hl>", "hl_sentences": "Still other women accompanied the army as “ camp followers , ” serving as cooks , washerwomen , and nurses . The Revolution opened some new doors for women , however , as they took on public roles usually reserved for men . The Daughters of Liberty , an informal organization formed in the mid - 1760s to oppose British revenue-raising measures , worked tirelessly to support the war effort . Rebel leaders required women to produce articles for war — everything from clothing to foodstuffs — while also keeping their homesteads going . This was not an easy task when their husbands and sons were away fighting . Women were also expected to provide food and lodging for armies and to nurse wounded soldiers .", "question": { "cloze_format": "___ is not one of the tasks women performed during the Revolution.", "normal_format": "Which of the following is not one of the tasks women performed during the Revolution?", "question_choices": [ "holding government offices", "maintaining their homesteads", "feeding, quartering, and nursing soldiers", "raising funds for the war effort" ], "question_id": "fs-idp1696480", "question_text": "Which of the following is not one of the tasks women performed during the Revolution?" }, "references_are_paraphrase": 0 } ]
6
6.1 Britain’s Law-and-Order Strategy and Its Consequences Learning Objectives By the end of this section, you will be able to: Explain how Great Britain’s response to the destruction of a British shipment of tea in Boston Harbor in 1773 set the stage for the Revolution Describe the beginnings of the American Revolution Great Britain pursued a policy of law and order when dealing with the crises in the colonies in the late 1760s and 1770s. Relations between the British and many American Patriots worsened over the decade, culminating in an unruly mob destroying a fortune in tea by dumping it into Boston Harbor in December 1773 as a protest against British tax laws. The harsh British response to this act in 1774, which included sending British troops to Boston and closing Boston Harbor, caused tensions and resentments to escalate further. The British tried to disarm the insurgents in Massachusetts by confiscating their weapons and ammunition and arresting the leaders of the patriotic movement. However, this effort faltered on April 19, when Massachusetts militias and British troops fired on each other as British troops marched to Lexington and Concord, an event immortalized by poet Ralph Waldo Emerson as the “shot heard round the world.” The American Revolution had begun. ON THE EVE OF REVOLUTION The decade from 1763 to 1774 was a difficult one for the British Empire. Although Great Britain had defeated the French in the French and Indian War, the debt from that conflict remained a stubborn and seemingly unsolvable problem for both Great Britain and the colonies. Great Britain tried various methods of raising revenue on both sides of the Atlantic to manage the enormous debt, including instituting a tax on tea and other goods sold to the colonies by British companies, but many subjects resisted these taxes. In the colonies, Patriot groups like the Sons of Liberty led boycotts of British goods and took violent measures that stymied British officials. Boston proved to be the epicenter of protest. In December 1773, a group of Patriots protested the Tea Act passed that year—which, among other provisions, gave the East India Company a monopoly on tea—by boarding British tea ships docked in Boston Harbor and dumping tea worth over $1 million (in current prices) into the water. The destruction of the tea radically escalated the crisis between Great Britain and the American colonies. When the Massachusetts Assembly refused to pay for the tea, Parliament enacted a series of laws called the Coercive Acts, which some colonists called the Intolerable Acts. Parliament designed these laws, which closed the port of Boston, limited the meetings of the colonial assembly, and disbanded all town meetings, to punish Massachusetts and bring the colony into line. However, many British Americans in other colonies were troubled and angered by Parliament’s response to Massachusetts. In September and October 1774, all the colonies except Georgia participated in the First Continental Congress in Philadelphia. The Congress advocated a boycott of all British goods and established the Continental Association to enforce local adherence to the boycott. The Association supplanted royal control and shaped resistance to Great Britain. Americana Joining the Boycott Many British colonists in Virginia, as in the other colonies, disapproved of the destruction of the tea in Boston Harbor. 
However, after the passage of the Coercive Acts, the Virginia House of Burgesses declared its solidarity with Massachusetts by encouraging Virginians to observe a day of fasting and prayer on May 24 in sympathy with the people of Boston. Almost immediately thereafter, Virginia’s colonial governor dissolved the House of Burgesses, but many of its members met again in secret on May 30 and adopted a resolution stating that “the Colony of Virginia will concur with the other Colonies in such Measures as shall be judged most effectual for the preservation of the Common Rights and Liberty of British America.” After the First Continental Congress in Philadelphia, Virginia’s Committee of Safety ensured that all merchants signed the non-importation agreements that the Congress had proposed. This British cartoon (Figure 6.3) shows a Virginian signing the Continental Association boycott agreement. Note the tar and feathers hanging from the gallows in the background of this image and the demeanor of the people surrounding the signer. What is the message of this engraving? Where are the sympathies of the artist? What is the meaning of the title “The Alternative of Williams-Burg?” In an effort to restore law and order in Boston, the British dispatched General Thomas Gage to the New England seaport. He arrived in Boston in May 1774 as the new royal governor of the Province of Massachusetts, accompanied by several regiments of British troops. As in 1768, the British again occupied the town. Massachusetts delegates met in a Provincial Congress and published the Suffolk Resolves, which officially rejected the Coercive Acts and called for the raising of colonial militias to take military action if needed. The Suffolk Resolves signaled the overthrow of the royal government in Massachusetts. Both the British and the rebels in New England began to prepare for conflict by turning their attention to supplies of weapons and gunpowder. General Gage stationed thirty-five hundred troops in Boston, and from there he ordered periodic raids on towns where guns and gunpowder were stockpiled, hoping to impose law and order by seizing them. As Boston became the headquarters of British military operations, many residents fled the city. Gage’s actions led to the formation of local rebel militias that were able to mobilize in a minute’s time. These minutemen, many of whom were veterans of the French and Indian War, played an important role in the war for independence. In one instance, General Gage seized munitions in Cambridge and Charlestown, but when he arrived to do the same in Salem, his troops were met by a large crowd of minutemen and had to leave empty-handed. In New Hampshire, minutemen took over Fort William and Mary and confiscated weapons and cannons there. New England readied for war. THE OUTBREAK OF FIGHTING Throughout late 1774 and into 1775, tensions in New England continued to mount. General Gage knew that a powder magazine was stored in Concord, Massachusetts, and on April 19, 1775, he ordered troops to seize these munitions. Instructions from London called for the arrest of rebel leaders Samuel Adams and John Hancock. Hoping for secrecy, his troops left Boston under cover of darkness, but riders from Boston let the militias know of the British plans. (Paul Revere was one of these riders, but the British captured him and he never finished his ride. Henry Wadsworth Longfellow memorialized Revere in his 1860 poem, “Paul Revere’s Ride,” incorrectly implying that he made it all the way to Concord.)
Minutemen met the British troops and skirmished with them, first at Lexington and then at Concord (Figure 6.4). The British retreated to Boston, enduring ambushes from several other militias along the way. Over four thousand militiamen took part in these skirmishes with British soldiers. Seventy-three British soldiers and forty-nine Patriots died during the British retreat to Boston. The famous confrontation is the basis for Emerson’s “Concord Hymn” (1836), which begins with the description of the “shot heard round the world.” Although propagandists on both sides pointed fingers, it remains unclear who fired that shot. After the battles of Lexington and Concord, New England fully mobilized for war. Thousands of militias from towns throughout New England marched to Boston, and soon the city was besieged by a sea of rebel forces (Figure 6.5). In May 1775, Ethan Allen and Colonel Benedict Arnold led a group of rebels against Fort Ticonderoga in New York. They succeeded in capturing the fort, and cannons from Ticonderoga were brought to Massachusetts and used to bolster the Siege of Boston. In June, General Gage resolved to take Breed’s Hill and Bunker Hill, the high ground across the Charles River from Boston, a strategic site that gave the rebel militias an advantage since they could train their cannons on the British. In the Battle of Bunker Hill (Figure 6.6), on June 17, the British launched three assaults on the hills, gaining control only after the rebels ran out of ammunition. British losses were very high—over two hundred were killed and eight hundred wounded—and, despite his victory, General Gage was unable to break the colonial forces’ siege of the city. In August, King George III declared the colonies to be in a state of rebellion. Parliament and many in Great Britain agreed with their king. Meanwhile, the British forces in Boston found themselves in a terrible predicament, isolated in the city and with no control over the countryside. In the end, General George Washington, commander in chief of the Continental Army since June 15, 1775, used the Fort Ticonderoga cannons to force the evacuation of the British from Boston. Washington had positioned these cannons on the hills overlooking both the fortified positions of the British and Boston Harbor, where the British supply ships were anchored. The British could not return fire on the colonial positions because they could not elevate their cannons. They soon realized that they were in an untenable position and had to withdraw from Boston. On March 17, 1776, the British evacuated their troops to Halifax, Nova Scotia, ending the nearly year-long siege. By the time the British withdrew from Boston, fighting had broken out in other colonies as well. In May 1775, Mecklenburg County in North Carolina issued the Mecklenburg Resolves, stating that a rebellion against Great Britain had begun, that colonists did not owe any further allegiance to Great Britain, and that governing authority had now passed to the Continental Congress. The resolves also called for the formation of militias under the control of the Continental Congress. Loyalists and Patriots clashed in North Carolina in February 1776 at the Battle of Moore’s Creek Bridge. In Virginia, the royal governor, Lord Dunmore, raised Loyalist forces to combat the rebel colonists and also tried to use the large enslaved population to put down the rebellion.
In November 1775, he issued a decree, known as Dunmore’s Proclamation, promising freedom to enslaved people and indentured servants of rebels who remained loyal to the king and who pledged to fight with the Loyalists against the insurgents. Dunmore’s Proclamation exposed serious problems for both the Patriot cause and for the British. In order for the British to put down the rebellion, they needed the support of Virginia’s landowners, many of whom enslaved people. (While Patriot slaveholders in Virginia and elsewhere proclaimed they acted in defense of liberty, they kept thousands in bondage, a fact the British decided to exploit.) Although a number of enslaved people did join Dunmore’s side, the proclamation had the unintended effect of galvanizing Patriot resistance to Britain. From the rebels’ point of view, the British looked to deprive them of their enslaved property and incite a race war. Slaveholders feared an uprising and increased their commitment to the cause against Great Britain, calling for independence. Dunmore fled Virginia in 1776. COMMON SENSE With the events of 1775 fresh in their minds, many colonists reached the conclusion in 1776 that the time had come to secede from the Empire and declare independence. Over the past ten years, these colonists had argued that they deserved the same rights as Englishmen enjoyed in Great Britain, only to find themselves relegated to an intolerable subservient status in the Empire. The groundswell of support for their cause of independence in 1776 also owed much to the appearance of an anonymous pamphlet, first published in January 1776, entitled Common Sense. Thomas Paine, who had emigrated from England to Philadelphia in 1774, was the author. Arguably the most radical pamphlet of the revolutionary era, Common Sense made a powerful argument for independence. Paine’s pamphlet rejected the monarchy, calling King George III a “royal brute” and questioning the right of an island (England) to rule over America. In this way, Paine helped to channel colonial discontent toward the king himself and not, as had been the case, toward the British Parliament—a bold move that signaled the desire to create a new political order disavowing monarchy entirely. He argued for the creation of an American republic, a state without a king, and extolled the blessings of republicanism, a political philosophy that held that elected representatives, not a hereditary monarch, should govern states. The vision of an American republic put forward by Paine included the idea of popular sovereignty: citizens in the republic would determine who would represent them, and decide other issues, on the basis of majority rule. Republicanism also served as a social philosophy guiding the conduct of the Patriots in their struggle against the British Empire. It demanded adherence to a code of virtue, placing the public good and community above narrow self-interest. Paine wrote Common Sense (Figure 6.7) in simple, direct language aimed at ordinary people, not just the learned elite. The pamphlet proved immensely popular and was soon available in all thirteen colonies, where it helped convince many to reject monarchy and the British Empire in favor of independence and a republican form of government. THE DECLARATION OF INDEPENDENCE In the summer of 1776, the Continental Congress met in Philadelphia and agreed to sever ties with Great Britain.
Virginian Thomas Jefferson and John Adams of Massachusetts, with the support of the Congress, articulated the justification for liberty in the Declaration of Independence (Figure 6.8). The Declaration, written primarily by Jefferson, included a long list of grievances against King George III and laid out the foundation of American government as a republic in which the consent of the governed would be of paramount importance. The preamble to the Declaration began with a statement of Enlightenment principles about universal human rights and values: “We hold these Truths to be self-evident, that all Men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the pursuit of Happiness—That to secure these Rights, Governments are instituted among Men, deriving their just Powers from the Consent of the Governed, that whenever any Form of Government becomes destructive of these Ends, it is the Right of the People to alter or abolish it.” In addition to this statement of principles, the document served another purpose: Patriot leaders sent copies to France and Spain in hopes of winning their support and aid in the contest against Great Britain. They understood how important foreign recognition and aid would be to the creation of a new and independent nation. The Declaration of Independence has since had a global impact, serving as the basis for many subsequent movements to gain independence from other colonial powers. It is part of America’s civil religion, and thousands of people each year make pilgrimages to see the original document in Washington, DC. The Declaration also reveals a fundamental contradiction of the American Revolution: the conflict between the existence of slavery and the idea that “all men are created equal.” One-fifth of the population in 1776 was enslaved, and at the time he drafted the Declaration, Jefferson himself owned more than one hundred enslaved individuals. Further, the Declaration framed equality as existing only among White men; women and non-White people were entirely left out of a document that referred to Native peoples as “merciless Indian savages” who indiscriminately killed men, women, and children. Nonetheless, the promise of equality for all planted the seeds for future struggles waged by enslaved individuals, women, and many others to bring about its full realization. Much of American history is the story of the slow realization of the promise of equality expressed in the Declaration of Independence. 6.2 The Early Years of the Revolution Learning Objectives By the end of this section, you will be able to: Explain the British and American strategies of 1776 through 1778 Identify the key battles of the early years of the Revolution After the British quit Boston, they slowly adopted a strategy to isolate New England from the rest of the colonies and force the insurgents in that region into submission, believing that doing so would end the conflict. At first, British forces focused on taking the principal colonial centers. They began by easily capturing New York City in 1776. The following year, they took over the American capital of Philadelphia. The larger British effort to isolate New England was implemented in 1777. That effort ultimately failed when the British surrendered a force of over five thousand to the Americans in the fall of 1777 at the Battle of Saratoga.
The major campaigns over the next several years took place in the middle colonies of New York, New Jersey, and Pennsylvania, whose populations were sharply divided between Loyalists and Patriots. Revolutionaries faced many hardships as British superiority on the battlefield became evident and the difficulty of funding the war caused strains. THE BRITISH STRATEGY IN THE MIDDLE COLONIES After evacuating Boston in March 1776, British forces sailed to Nova Scotia to regroup. They devised a strategy, successfully implemented in 1776, to take New York City. The following year, they planned to end the rebellion by cutting New England off from the rest of the colonies and starving it into submission. Three British armies were to move simultaneously from New York City, Montreal, and Fort Oswego to converge along the Hudson River; British control of that natural boundary would isolate New England. General William Howe (Figure 6.9), commander in chief of the British forces in America, amassed thirty-two thousand troops on Staten Island in June and July 1776. His brother, Admiral Richard Howe, controlled New York Harbor. Command of New York City and the Hudson River was their goal. In August 1776, General Howe landed his forces on Long Island and easily routed the American Continental Army there in the Battle of Long Island (August 27). The Americans were outnumbered and lacked both military experience and discipline. Sensing victory, General and Admiral Howe arranged a peace conference in September 1776, where Benjamin Franklin, John Adams, and South Carolinian John Rutledge represented the Continental Congress. Despite the Howes’ hopes, however, the Americans demanded recognition of their independence, which the Howes were not authorized to grant, and the conference disbanded. On September 16, 1776, George Washington’s forces held up against the British at the Battle of Harlem Heights. This important American military achievement, a key reversal after the disaster on Long Island, occurred as most of Washington’s forces retreated to New Jersey. A few weeks later, on October 28, General Howe’s forces defeated Washington’s at the Battle of White Plains and New York City fell to the British. For the next seven years, the British made the city the headquarters for their military efforts to defeat the rebellion, which included raids on surrounding areas. In 1777, the British burned Danbury, Connecticut, and in July 1779, they set fire to homes in Fairfield and Norwalk. They held American prisoners aboard ships in the waters around New York City; the death toll was shocking, with thousands perishing in the holds. Meanwhile, New York City served as a haven for Loyalists who disagreed with the effort to break away from the Empire and establish an American republic. GEORGE WASHINGTON AND THE CONTINENTAL ARMY When the Second Continental Congress met in Philadelphia in May 1775, members approved the creation of a professional Continental Army with Washington as commander in chief (Figure 6.10). Although sixteen thousand volunteers enlisted, it took several years for the Continental Army to become a truly professional force. In 1775 and 1776, militias still composed the bulk of the Patriots’ armed forces, and these soldiers returned home after the summer fighting season, drastically reducing the army’s strength. That changed in late 1776 and early 1777, when Washington broke with conventional eighteenth-century military tactics that called for fighting in the summer months only.
Intent on raising revolutionary morale after the British captured New York City, he launched surprise strikes against British forces in their winter quarters. In Trenton, New Jersey, he led his soldiers across the Delaware River and surprised an encampment of Hessians, German mercenaries hired by Great Britain to put down the American rebellion. Beginning the night of December 25, 1776, and continuing into the early hours of December 26, Washington moved on Trenton where the Hessians were encamped. Maintaining the element of surprise by attacking at Christmastime, he defeated them, taking over nine hundred captive. On January 3, 1777, Washington achieved another much-needed victory at the Battle of Princeton. He again broke with eighteenth-century military protocol by attacking unexpectedly after the fighting season had ended. Defining American Thomas Paine on “The American Crisis” During the American Revolution, following the publication of Common Sense in January 1776, Thomas Paine began a series of sixteen pamphlets known collectively as The American Crisis (Figure 6.11). He wrote the first volume in 1776, describing the dire situation facing the revolutionaries at the end of that hard year. These are the times that try men’s souls. The summer soldier and the sunshine patriot will, in this crisis, shrink from the service of their country; but he that stands it now, deserves the love and thanks of man and woman. . . . Britain, with an army to enforce her tyranny, has declared that she has a right (not only to tax) but “to bind us in all cases whatsoever,” and if being bound in that manner, is not slavery, then is there not such a thing as slavery upon earth. Even the expression is impious; for so unlimited a power can belong only to God. . . . I shall conclude this paper with some miscellaneous remarks on the state of our affairs; and shall begin with asking the following question, Why is it that the enemy have left the New England provinces, and made these middle ones the seat of war? The answer is easy: New England is not infested with Tories, and we are. I have been tender in raising the cry against these men, and used numberless arguments to show them their danger, but it will not do to sacrifice a world either to their folly or their baseness. The period is now arrived, in which either they or we must change our sentiments, or one or both must fall. . . . By perseverance and fortitude we have the prospect of a glorious issue; by cowardice and submission, the sad choice of a variety of evils—a ravaged country—a depopulated city—habitations without safety, and slavery without hope—our homes turned into barracks and bawdy-houses for Hessians, and a future race to provide for, whose fathers we shall doubt of. Look on this picture and weep over it! and if there yet remains one thoughtless wretch who believes it not, let him suffer it unlamented. —Thomas Paine, “The American Crisis,” December 23, 1776 What topics does Paine address in this pamphlet? What was his purpose in writing? What does he write about Tories (Loyalists), and why does he consider them a problem? PHILADELPHIA AND SARATOGA: BRITISH AND AMERICAN VICTORIES In August 1777, General Howe brought fifteen thousand British troops to Chesapeake Bay as part of his plan to take Philadelphia, where the Continental Congress met. That fall, the British defeated Washington’s soldiers in the Battle of Brandywine Creek and took control of Philadelphia, forcing the Continental Congress to flee.
During the winter of 1777–1778, the British occupied the city, and Washington’s army camped at Valley Forge, Pennsylvania. Washington’s winter at Valley Forge was a low point for the American forces. A lack of supplies weakened the men, and disease took a heavy toll. Amid the cold, hunger, and sickness, soldiers deserted in droves. On February 16, Washington wrote to George Clinton, governor of New York: “For some days past, there has been little less than a famine in camp. A part of the army has been a week without any kind of flesh & the rest three or four days. Naked and starving as they are, we cannot enough admire the incomparable patience and fidelity of the soldiery, that they have not been ere [before] this excited by their sufferings to a general mutiny and dispersion.” Of eleven thousand soldiers encamped at Valley Forge, twenty-five hundred died of starvation, malnutrition, and disease. As Washington feared, nearly one hundred soldiers deserted every week. (Desertions continued, and by 1780, Washington was executing recaptured deserters every Saturday.) The low morale extended all the way to Congress, where some wanted to replace Washington with a more seasoned leader. Assistance came to Washington and his soldiers in February 1778 in the form of the Prussian soldier Friedrich Wilhelm von Steuben (Figure 6.12). Baron von Steuben was an experienced military man, and he implemented a thorough training course for Washington’s ragtag troops. By drilling a small corps of soldiers and then having them train others, he finally transformed the Continental Army into a force capable of standing up to the professional British and Hessian soldiers. His drill manual—Regulations for the Order and Discipline of the Troops of the United States—informed military practices in the United States for the next several decades. Meanwhile, the campaign to sever New England from the rest of the colonies had taken an unexpected turn during the fall of 1777. The British had attempted to implement the plan, drawn up by Lord George Germain and Prime Minister Lord North, to isolate New England with the combined forces of three armies. One army, led by General John Burgoyne, would march south from Montreal. A second force, led by Colonel Barry St. Leger and made up of British troops and Iroquois, would march east from Fort Oswego on the banks of Lake Ontario. A third force, led by General Sir Henry Clinton, would march north from New York City. The armies would converge at Albany and effectively cut the rebellion in two by isolating New England. This northern campaign fell victim to competing strategies, however, as General Howe had meanwhile decided to take Philadelphia. His decision to capture that city siphoned off troops that would have been vital to the overall success of the campaign in 1777. The British plan to isolate New England ended in disaster. St. Leger’s efforts to bring his force of British regulars, Loyalist fighters, and Iroquois allies east to link up with General Burgoyne failed, and he retreated to Quebec. Burgoyne’s forces encountered ever-stiffer resistance as he made his way south from Montreal, down Lake Champlain and the upper Hudson River corridor. Although they did capture Fort Ticonderoga when American forces retreated, Burgoyne’s army found themselves surrounded by a sea of colonial militias in Saratoga, New York.
In the meantime, the small British force under Clinton that left New York City to aid Burgoyne advanced slowly up the Hudson River, failing to provide the much-needed support for the troops at Saratoga. On October 17, 1777, Burgoyne surrendered his five thousand soldiers to the Continental Army (Figure 6.13). The American victory at the Battle of Saratoga was the major turning point in the war. This victory convinced the French to recognize American independence and form a military alliance with the new nation, which changed the course of the war by opening the door to badly needed military support from France. Still smarting from their defeat by Britain in the Seven Years’ War, the French supplied the United States with gunpowder and money, as well as soldiers and naval forces that proved decisive in the defeat of Great Britain. The French also contributed military leaders, including the Marquis de Lafayette, who arrived in America in 1777 as a volunteer and served as Washington’s aide-de-camp. The war quickly became more difficult for the British, who had to fight the rebels in North America as well as the French in the Caribbean. Following France’s lead, Spain joined the war against Great Britain in 1779, though it did not recognize American independence until 1783. The Dutch Republic also began to support the American revolutionaries and signed a treaty of commerce with the United States in 1782. Great Britain’s effort to isolate New England in 1777 failed. In June 1778, the occupying British force in Philadelphia evacuated and returned to New York City in order to better defend that city, and the British then turned their attention to the southern colonies. 6.3 War in the South Learning Objectives By the end of this section, you will be able to: Outline the British southern strategy and its results Describe key American victories and the end of the war Identify the main terms of the Treaty of Paris (1783) By 1778, the war had turned into a stalemate. Although some in Britain, including Prime Minister Lord North, wanted peace, King George III demanded that the colonies be brought to obedience. To break the deadlock, the British revised their strategy and turned their attention to the southern colonies, where they could expect more support from Loyalists. The southern colonies soon became the center of the fighting. The southern strategy brought the British success at first, but thanks to the leadership of George Washington and General Nathanael Greene and the crucial assistance of French forces, the Continental Army defeated the British at Yorktown, effectively ending further large-scale operations during the war. GEORGIA AND SOUTH CAROLINA The British architect of the war strategy, Lord George Germain, believed Britain would gain the upper hand with the support of Loyalists, enslaved people, and Native American allies in the South, and indeed, this southern strategy initially achieved great success. The British began their southern campaign by capturing Savannah, the capital of Georgia, in December 1778. In Georgia, they found support from thousands of enslaved individuals who ran to the British side to escape their bondage. As the British regained political control in Georgia, they forced the inhabitants to swear allegiance to the king and formed twenty Loyalist regiments.
The Continental Congress had suggested that enslaved people be given freedom if they joined the Patriot army against the British, but revolutionaries in Georgia and South Carolina refused to consider this proposal. Once again, the Revolution served to further divisions over race and slavery. After taking Georgia, the British turned their attention to South Carolina. Before the Revolution, South Carolina had been starkly divided between the backcountry, which harbored revolutionary partisans, and the coastal regions, where Loyalists remained a powerful force. Waves of violence rocked the backcountry from the late 1770s into the early 1780s. The Revolution provided an opportunity for residents to fight over their local resentments and antagonisms with murderous consequences. Revenge killings and the destruction of property became mainstays in the savage civil war that gripped the South. In April 1780, a British force of eight thousand soldiers besieged American forces in Charleston (Figure 6.14). After six weeks of the Siege of Charleston, the British triumphed. General Benjamin Lincoln, who led the effort for the revolutionaries, had to surrender his entire force, the largest American loss during the entire war. Many of the defeated Americans were placed in jails or in British prison ships anchored in Charleston Harbor. The British established a military government in Charleston under the command of General Sir Henry Clinton. From this base, Clinton ordered General Charles Cornwallis to subdue the rest of South Carolina. The disaster at Charleston led the Continental Congress to change leadership by placing General Horatio Gates in charge of American forces in the South. However, General Gates fared no better than General Lincoln; at the Battle of Camden, South Carolina, in August 1780, Cornwallis forced General Gates to retreat into North Carolina. Camden was one of the worst disasters suffered by American armies during the entire Revolutionary War. Congress again changed military leadership, this time by placing General Nathanael Greene (Figure 6.14) in command in December 1780. As the British had hoped, large numbers of Loyalists helped ensure the success of the southern strategy, and thousands of enslaved individuals seeking freedom arrived to aid Cornwallis’s army. However, the war turned in the Americans’ favor in 1781. General Greene realized that to defeat Cornwallis, he did not have to win a single battle. So long as he remained in the field, he could continue to destroy isolated British forces. Greene therefore made a strategic decision to divide his own troops to wage war—and the strategy worked. American forces under General Daniel Morgan decisively beat the British at the Battle of Cowpens in South Carolina. General Cornwallis now abandoned his strategy of defeating the backcountry rebels in South Carolina. Determined to destroy Greene’s army, he gave chase as Greene strategically retreated north into North Carolina. At the Battle of Guilford Courthouse in March 1781, the British prevailed on the battlefield but suffered extensive losses, an outcome that paralleled the Battle of Bunker Hill nearly six years earlier in June 1775. YORKTOWN In the summer of 1781, Cornwallis moved his army to Yorktown, Virginia. He expected the Royal Navy to transport his army to New York, where he thought he would join General Sir Henry Clinton. Yorktown was a tobacco port on a peninsula, and Cornwallis believed the British navy would be able to keep the coast clear of rebel ships.
Sensing an opportunity, a combined French and American force of sixteen thousand men swarmed the peninsula in September 1781. Washington raced south with his forces, now a disciplined army, as did the Marquis de Lafayette and the Comte de Rochambeau with their French troops. The French Admiral de Grasse sailed his naval force into Chesapeake Bay, preventing Lord Cornwallis from taking a seaward escape route. In October 1781, the American forces began the battle for Yorktown, and after a siege that lasted eight days, Lord Cornwallis capitulated on October 19 (Figure 6.15). Tradition says that during the surrender of his troops, the British band played “The World Turned Upside Down,” a song that befitted the Empire’s unexpected reversal of fortune. Defining American “The World Turned Upside Down” “The World Turned Upside Down,” reputedly played during the surrender of the British at Yorktown, was a traditional English ballad from the seventeenth century. It was also the theme of a popular British print that circulated in the 1790s (Figure 6.16). Why do you think these images were popular in Great Britain in the decade following the Revolutionary War? What would these images imply to Americans? THE TREATY OF PARIS The British defeat at Yorktown made the outcome of the war all but certain. In light of the American victory, the Parliament of Great Britain voted to end further military operations against the rebels and to begin peace negotiations. Support for the war effort had come to an end, and British military forces began to evacuate the former American colonies in 1782. When hostilities had ended, Washington resigned as commander in chief and returned to his Virginia home. In April 1782, Benjamin Franklin, John Adams, and John Jay had begun informal peace negotiations in Paris. Officials from Great Britain and the United States finalized the treaty in 1783, signing the Treaty of Paris (Figure 6.17) in September of that year. The treaty recognized the independence of the United States; placed the western, eastern, northern, and southern boundaries of the nation at the Mississippi River, the Atlantic Ocean, Canada, and Florida, respectively; and gave New Englanders fishing rights in the waters off Newfoundland. Under the terms of the treaty, individual states were encouraged to refrain from persecuting Loyalists and to return their confiscated property. 6.4 Identity during the American Revolution Learning Objectives By the end of this section, you will be able to: Explain Loyalist and Patriot sentiments Identify different groups that participated in the Revolutionary War The American Revolution in effect created multiple civil wars. Many of the resentments and antagonisms that fed these conflicts predated the Revolution, and the outbreak of war acted as the catalyst they needed to burst forth. In particular, the middle colonies of New York, New Jersey, and Pennsylvania had deeply divided populations. Loyalty to Great Britain came in many forms, from wealthy elites who enjoyed the prewar status quo to escaped enslaved people who desired the freedom that the British offered. LOYALISTS Historians disagree on what percentage of colonists were Loyalists; estimates range from 20 percent to over 30 percent. In general, however, of British America’s population of 2.5 million, roughly one-third remained loyal to Great Britain, while another third committed themselves to the cause of independence.
The remaining third remained apathetic, content to continue with their daily lives as best they could and preferring not to engage in the struggle. Many Loyalists were royal officials and merchants with extensive business ties to Great Britain, who viewed themselves as the rightful and just defenders of the British constitution. Others simply resented local business and political rivals who supported the Revolution, viewing the rebels as hypocrites and schemers who selfishly used the break with the Empire to increase their fortunes. In New York’s Hudson Valley, animosity among the tenants of estates owned by Revolutionary leaders turned them to the cause of King and Empire. During the war, all the states passed confiscation acts, which gave the new revolutionary governments in the former colonies the right to seize Loyalist land and property. To ferret out Loyalists, revolutionary governments also passed laws requiring the male population to take oaths of allegiance to the new states. Those who refused lost their property and were often imprisoned or made to work for the new local revolutionary order. William Franklin, Benjamin Franklin’s only surviving son, remained loyal to Crown and Empire and served as royal governor of New Jersey, a post he secured with his father’s help. During the war, revolutionaries imprisoned William in Connecticut; however, he remained steadfast in his allegiance to Great Britain and moved to England after the Revolution. He and his father never reconciled. As many as nineteen thousand colonists served the British in the effort to put down the rebellion, and after the Revolution, as many as 100,000 colonists left, moving to England or north to Canada rather than staying in the new United States (Figure 6.18). Eight thousand Whites and five thousand free Black people went to Britain. Over thirty thousand went to Canada, transforming that nation from predominantly French to predominantly British. Another sizable group of Loyalists went to the British West Indies, taking enslaved people with them. My Story Hannah Ingraham on Removing to Nova Scotia Hannah Ingraham was eleven years old in 1783, when her Loyalist family removed from New York to Ste. Anne’s Point in the colony of Nova Scotia. Later in life, she compiled her memories of that time. [Father] said we were to go to Nova Scotia, that a ship was ready to take us there, so we made all haste to get ready. . . . Then on Tuesday, suddenly the house was surrounded by rebels and father was taken prisoner and carried away. . . . When morning came, they said he was free to go. We had five wagon loads carried down the Hudson in a sloop and then we went on board the transport that was to bring us to Saint John. I was just eleven years old when we left our farm to come here. It was the last transport of the season and had on board all those who could not come sooner. The first transports had come in May so the people had all the summer before them to get settled. . . . We lived in a tent at St. Anne’s until father got a house ready. . . . There was no floor laid, no windows, no chimney, no door, but we had a roof at least. A good fire was blazing and mother had a big loaf of bread and she boiled a kettle of water and put a good piece of butter in a pewter bowl. We toasted the bread and all sat around the bowl and ate our breakfast that morning and mother said: “Thank God we are no longer in dread of having shots fired through our house.
This is the sweetest meal I ever tasted for many a day.” What do these excerpts tell you about life as a Loyalist in New York or as a transplant to Canada? ENSLAVED PEOPLE AND NATIVE PEOPLE While some enslaved people who fought for the Patriot cause received their freedom, revolutionary leaders—unlike the British—did not grant these allies their freedom as a matter of course. Washington, the enslaver of more than two hundred people during the Revolution, refused to let enslaved people serve in the army, although he did allow free Black people to serve. (In his will, Washington did free the people he enslaved.) In the new United States, the Revolution largely reinforced a racial identity based on skin color. Whiteness, now a national identity, denoted freedom and stood as the key to power. Blackness, more than ever before, denoted servile status. Indeed, despite their class and ethnic differences, White revolutionaries stood mostly united in their hostility to both Black and Native Americans. My Story Boyrereau Brinch and Boston King on the Revolutionary War In the Revolutionary War, some Black people, both free and enslaved, chose to fight for the Americans ( Figure 6.19 ). Others chose to fight for the British, who offered them freedom for joining their cause. Read the excerpts below for the perspective of a Black veteran from each side of the conflict. Boyrereau Brinch was captured in Africa at age sixteen and brought to America. He joined the Patriot forces and was honorably discharged and emancipated after the war. He told his story to Benjamin Prentiss, who published it as The Blind African Slave in 1810. Finally, I was in the battles at Cambridge, White Plains, Monmouth, Princeton, Newark, Frog’s Point, Horseneck where I had a ball pass through my knapsack. All which battels [sic] the reader can obtain a more perfect account of in history, than I can give. At last we returned to West Point and were discharged [1783], as the war was over. Thus was I, a slave for five years fighting for liberty. After we were disbanded, I returned to my old master at Woodbury [Connecticut], with whom I lived one year, my services in the American war, having emancipated me from further slavery, and from being bartered or sold. . . . Here I enjoyed the pleasures of a freeman; my food was sweet, my labor pleasure: and one bright gleam of life seemed to shine upon me. Boston King was a Charleston-born enslaved man who escaped his captor and joined the Loyalists. He made his way to Nova Scotia and later Sierra Leone, where he published his memoirs in 1792. The excerpt below describes his experience in New York after the war. When I arrived at New-York, my friends rejoiced to see me once more restored to liberty, and joined me in praising the Lord for his mercy and goodness. . . . [In 1783] the horrors and devastation of war happily terminated, and peace was restored between America and Great Britain, which diffused universal joy among all parties, except us, who had escaped from slavery and taken refuge in the English army; for a report prevailed at New-York, that all the slaves, in number 2000, were to be delivered up to their masters, altho’ some of them had been three or four years among the English. This dreadful rumour filled us all with inexpressible anguish and terror, especially when we saw our old masters coming from Virginia, North-Carolina, and other parts, and seizing upon their slaves in the streets of New-York, or even dragging them out of their beds. 
Many of the slaves had very cruel masters, so that the thoughts of returning home with them embittered life to us. For some days we lost our appetite for food, and sleep departed from our eyes. The English had compassion upon us in the day of distress, and issued out a Proclamation, importing, That all slaves should be free, who had taken refuge in the British lines, and claimed the sanction and privileges of the Proclamations respecting the security and protection of Negroes. In consequence of this, each of us received a certificate from the commanding officer at New-York, which dispelled all our fears, and filled us with joy and gratitude. What do these two narratives have in common, and how are they different? How do the two men describe freedom? For enslaved people willing to run away and join the British, the American Revolution offered a unique occasion to escape bondage. Of the half a million enslaved people in the American colonies during the Revolution, twenty thousand joined the British cause. At Yorktown, for instance, thousands of Black troops fought with Lord Cornwallis. People enslaved by George Washington, Thomas Jefferson, Patrick Henry, and other revolutionaries seized the opportunity for freedom and fled to the British side. Between ten and twenty thousand enslaved people gained their freedom because of the Revolution; arguably, the Revolution created the largest slave uprising and the greatest emancipation until the Civil War. After the Revolution, some of these African Loyalists emigrated to Sierra Leone on the west coast of Africa. Others removed to Canada and England. It is also true that people of color made heroic contributions to the cause of American independence. However, while the British offered freedom, most American revolutionaries clung to notions of Black inferiority. Powerful Native peoples who had allied themselves with the British, including the Mohawk and the Creek, also remained loyal to the Empire. A Mohawk named Joseph Brant, whose given name was Thayendanegea ( Figure 6.20 ), rose to prominence while fighting for the British during the Revolution. He joined forces with Colonel Barry St. Leger during the 1777 campaign, which ended with the surrender of General Burgoyne at Saratoga. After the war, Brant moved to the Six Nations reserve in Canada. From his home on the shores of Lake Ontario, he remained active in efforts to restrict White encroachment onto Native lands. After their defeat, the British did not keep promises they’d made to help their Native American allies keep their territory; in fact, the Treaty of Paris granted the United States huge amounts of supposedly British-owned regions that were actually Native lands. PATRIOTS The American revolutionaries (also called Patriots or Whigs) came from many different backgrounds and included merchants, shoemakers, farmers, and sailors. What is extraordinary is the way in which the struggle for independence brought a vast cross-section of society together, animated by a common cause. During the war, the revolutionaries faced great difficulties, including massive supply problems; clothing, ammunition, tents, and equipment were all hard to come by. After an initial burst of enthusiasm in 1775 and 1776, the shortage of supplies became acute in 1777 through 1779, as Washington’s difficult winter at Valley Forge demonstrates. Funding the war effort also proved very difficult. Whereas the British could pay in gold and silver, the American forces relied on paper money, backed by loans obtained in Europe. 
This first American money was called Continental currency ; unfortunately, it quickly fell in value. “Not worth a Continental” soon became a shorthand term for something of no value. The new revolutionary government printed a great amount of this paper money, resulting in runaway inflation. By 1781, inflation was such that 146 Continental dollars were worth only one dollar in gold. The problem grew worse as each former colony, now a revolutionary state, printed its own currency. WOMEN In colonial America, women shouldered enormous domestic and child-rearing responsibilities. The war for independence only increased their workload and, in some ways, solidified their roles. Rebel leaders required women to produce articles for war—everything from clothing to foodstuffs—while also keeping their homesteads going. This was not an easy task when their husbands and sons were away fighting. Women were also expected to provide food and lodging for armies and to nurse wounded soldiers. The Revolution opened some new doors for women, however, as they took on public roles usually reserved for men. The Daughters of Liberty, an informal organization formed in the mid-1760s to oppose British revenue-raising measures, worked tirelessly to support the war effort. Esther DeBerdt Reed of Philadelphia, wife of Governor Joseph Reed, formed the Ladies Association of Philadelphia and led a fundraising drive to provide sorely needed supplies to the Continental Army. In “The Sentiments of an American Woman” (1780), she wrote to other women, “The time is arrived to display the same sentiments which animated us at the beginning of the Revolution, when we renounced the use of teas, however agreeable to our taste, rather than receive them from our persecutors; when we made it appear to them that we placed former necessaries in the rank of superfluities, when our liberty was interested; when our republican and laborious hands spun the flax, prepared the linen intended for the use of our soldiers; when exiles and fugitives we supported with courage all the evils which are the concomitants of war.” Reed and other women in Philadelphia raised almost $300,000 in Continental money for the war. Women who did not share Reed’s prominent status nevertheless played key economic roles by producing homespun cloth and food. During shortages, some women formed mobs and wrested supplies from those who hoarded them. Crowds of women beset merchants and demanded fair prices for goods; if a merchant refused, a riot would ensue. Still other women accompanied the army as “camp followers,” serving as cooks, washerwomen, and nurses. A few also took part in combat and proved their equality with men through violence against the hated British.
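A rough worked conversion, combining the two figures quoted in this section (the 146:1 rate is the 1781 figure given above; the rate in 1780, when the Ladies Association actually collected its funds, was different and is not given here, so this illustrates the scale of the depreciation rather than a precise historical conversion):

\[
\frac{\$300{,}000 \text{ Continental}}{146 \text{ Continental per gold dollar}} \approx \$2{,}055 \text{ in gold}
\]

At that rate, a Continental dollar retained less than 1 percent of its face value (1/146 is roughly 0.7 percent).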
biology
Chapter Outline 25.1 Early Plant Life 25.2 Green Algae: Precursors of Land Plants 25.3 Bryophytes 25.4 Seedless Vascular Plants Introduction An incredible variety of seedless plants populates the terrestrial landscape. Mosses may grow on a tree trunk, and horsetails may display their jointed stems and spindly leaves across the forest floor. Today, seedless plants represent only a small fraction of the plants in our environment; yet, three hundred million years ago, seedless plants dominated the landscape and grew in the enormous swampy forests of the Carboniferous period. Their decomposition created large deposits of coal that we mine today. Current evolutionary thought holds that all plants—green algae as well as land dwellers—are monophyletic; that is, they are descendants of a single common ancestor. The evolutionary transition from water to land imposed severe constraints on plants. They had to develop strategies to avoid drying out, to disperse reproductive cells in air, to support themselves structurally, and to capture and filter sunlight. While seed plants developed adaptations that allowed them to populate even the most arid habitats on Earth, full independence from water did not happen in all plants. Most seedless plants still require a moist environment.
[ { "answer": { "ans_choice": 0, "ans_text": "green algae" }, "bloom": null, "hl_context": "Until recently , all photosynthetic eukaryotes were considered members of the kingdom Plantae . The brown , red , and gold algae , however , have been reassigned to the Protista kingdom . This is because apart from their ability to capture light energy and fix CO 2 , they lack many structural and biochemical traits that distinguish plants from protists . The position of green algae is more ambiguous . <hl> Green algae contain the same carotenoids and chlorophyll a and b as land plants , whereas other algae have different accessory pigments and types of chlorophyll molecules in addition to chlorophyll a . <hl> <hl> Both green algae and land plants also store carbohydrates as starch . <hl> Cells in green algae divide along cell plates called phragmoplasts , and their cell walls are layered in the same manner as the cell walls of embryophytes . Consequently , land plants and closely related green algae are now part of a new monophyletic group called Streptophyta .", "hl_sentences": "Green algae contain the same carotenoids and chlorophyll a and b as land plants , whereas other algae have different accessory pigments and types of chlorophyll molecules in addition to chlorophyll a . Both green algae and land plants also store carbohydrates as starch .", "question": { "cloze_format": "The land plants are probably descendants of ___.", "normal_format": "The land plants are probably descendants of which of these groups?", "question_choices": [ "green algae", "red algae", "brown algae", "angiosperms" ], "question_id": "fs-idp63257072", "question_text": "The land plants are probably descendants of which of these groups?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "both haploid and diploid multicellular organisms" }, "bloom": null, "hl_context": "<hl> Alternation of generations describes a life cycle in which an organism has both haploid and diploid multicellular stages ( Figure 25.2 ) . <hl> Haplontic refers to a lifecycle in which there is a dominant haploid stage , and diplontic refers to a lifecycle in which the diploid is the dominant life stage . Humans are diplontic . <hl> Most plants exhibit alternation of generations , which is described as haplodiplodontic : the haploid multicellular form , known as a gametophyte , is followed in the development sequence by a multicellular diploid organism : the sporophyte . <hl> The gametophyte gives rise to the gametes ( reproductive cells ) by mitosis . This can be the most obvious phase of the life cycle of the plant , as in the mosses , or it can occur in a microscopic structure , such as a pollen grain , in the higher plants ( a common collective term for the vascular plants ) . The sporophyte stage is barely noticeable in lower plants ( the collective term for the plant groups of mosses , liverworts , and lichens ) . Towering trees are the diplontic phase in the lifecycles of plants such as sequoias and pines .", "hl_sentences": "Alternation of generations describes a life cycle in which an organism has both haploid and diploid multicellular stages ( Figure 25.2 ) . 
Most plants exhibit alternation of generations , which is described as haplodiplodontic : the haploid multicellular form , known as a gametophyte , is followed in the development sequence by a multicellular diploid organism : the sporophyte .", "question": { "cloze_format": "Alternation of generations means that plants produce ___ .", "normal_format": "Due to Alternation of generations, what do plants produce?", "question_choices": [ "only haploid multicellular organisms", "only diploid multicellular organisms", "only diploid multicellular organisms with single-celled haploid gametes", "both haploid and diploid multicellular organisms" ], "question_id": "fs-idp51619296", "question_text": "Alternation of generations means that plants produce:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "tracheids" }, "bloom": "3", "hl_context": "The first fossils that show the presence of vascular tissue date to the Silurian period , about 430 million years ago . The simplest arrangement of conductive cells shows a pattern of xylem at the center surrounded by phloem . Xylem is the tissue responsible for the storage and long-distance transport of water and nutrients , as well as the transfer of water-soluble growth factors from the organs of synthesis to the target organs . <hl> The tissue consists of conducting cells , known as tracheids , and supportive filler tissue , called parenchyma . <hl> <hl> Xylem conductive cells incorporate the compound lignin into their walls , and are thus described as lignified . <hl> <hl> Lignin itself is a complex polymer that is impermeable to water and confers mechanical strength to vascular tissue . <hl> <hl> With their rigid cell walls , the xylem cells provide support to the plant and allow it to achieve impressive heights . <hl> Tall plants have a selective advantage by being able to reach unfiltered sunlight and disperse their spores or seeds further away , thus expanding their range . By growing higher than other plants , tall trees cast their shadow on shorter plants and limit competition for water and precious nutrients in the soil .", "hl_sentences": "The tissue consists of conducting cells , known as tracheids , and supportive filler tissue , called parenchyma . Xylem conductive cells incorporate the compound lignin into their walls , and are thus described as lignified . Lignin itself is a complex polymer that is impermeable to water and confers mechanical strength to vascular tissue . With their rigid cell walls , the xylem cells provide support to the plant and allow it to achieve impressive heights .", "question": { "cloze_format": "___ is a trait of land plants that allow them to grow in height.", "normal_format": "Which of the following traits of land plants allows them to grow in height?", "question_choices": [ "alternation of generations", "waxy cuticle", "tracheids", "sporopollenin" ], "question_id": "fs-idp26033424", "question_text": "Which of the following traits of land plants allows them to grow in height?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "sporopollenin" }, "bloom": "2", "hl_context": "<hl> Green algae in the order Charales , and the coleochaetes ( microscopic green algae that enclose their spores in sporopollenin ) , are considered the closest living relatives of embryophytes . <hl> <hl> The Charales can be traced back 420 million years . <hl> <hl> They live in a range of fresh water habitats and vary in size from a few millimeters to a meter in length . 
<hl> The representative species is Chara ( Figure 25.8 ) , often called muskgrass or skunkweed because of its unpleasant smell . Large cells form the thallus : the main stem of the alga . Branches arising from the nodes are made of smaller cells . Male and female reproductive structures are found on the nodes , and the sperm have flagella . Unlike land plants , Charales do not undergo alternation of generations in their lifecycle . <hl> Charales exhibit a number of traits that are significant in their adaptation to land life . <hl> <hl> They produce the compounds lignin and sporopollenin , and form plasmodesmata that connect the cytoplasm of adjacent cells . <hl> The egg , and later , the zygote , form in a protected chamber on the parent plant .", "hl_sentences": "Green algae in the order Charales , and the coleochaetes ( microscopic green algae that enclose their spores in sporopollenin ) , are considered the closest living relatives of embryophytes . The Charales can be traced back 420 million years . They live in a range of fresh water habitats and vary in size from a few millimeters to a meter in length . Charales exhibit a number of traits that are significant in their adaptation to land life . They produce the compounds lignin and sporopollenin , and form plasmodesmata that connect the cytoplasm of adjacent cells .", "question": { "cloze_format": "The characteristic of Charales that would enable them to survive a dry spell is ___.", "normal_format": "What characteristic of Charales would enable them to survive a dry spell?", "question_choices": [ "sperm with flagella", "phragmoplasts", "sporopollenin", "chlorophyll a" ], "question_id": "fs-idp70823648", "question_text": "What characteristic of Charales would enable them to survive a dry spell?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "alternation of generations" }, "bloom": null, "hl_context": "Green algae in the order Charales , and the coleochaetes ( microscopic green algae that enclose their spores in sporopollenin ) , are considered the closest living relatives of embryophytes . The Charales can be traced back 420 million years . They live in a range of fresh water habitats and vary in size from a few millimeters to a meter in length . The representative species is Chara ( Figure 25.8 ) , often called muskgrass or skunkweed because of its unpleasant smell . Large cells form the thallus : the main stem of the alga . Branches arising from the nodes are made of smaller cells . Male and female reproductive structures are found on the nodes , and the sperm have flagella . <hl> Unlike land plants , Charales do not undergo alternation of generations in their lifecycle . <hl> Charales exhibit a number of traits that are significant in their adaptation to land life . They produce the compounds lignin and sporopollenin , and form plasmodesmata that connect the cytoplasm of adjacent cells . 
The egg , and later , the zygote , form in a protected chamber on the parent plant .", "hl_sentences": "Unlike land plants , Charales do not undergo alternation of generations in their lifecycle .", "question": { "cloze_format": "The characteristic that is present in land plants and not in Charales is (the) ___.", "normal_format": "Which one of these characteristics is present in land plants and not in Charales?", "question_choices": [ "alternation of generations", "flagellated sperm", "phragmoplasts", "plasmodesmata" ], "question_id": "fs-idp64348640", "question_text": "Which one of these characteristics is present in land plants and not in Charales?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "hornworts" }, "bloom": null, "hl_context": "<hl> Stomata appear in the hornworts and are abundant on the sporophyte . <hl> Photosynthetic cells in the thallus contain a single chloroplast . Meristem cells at the base of the plant keep dividing and adding to its height . Many hornworts establish symbiotic relationships with cyanobacteria that fix nitrogen from the environment .", "hl_sentences": "Stomata appear in the hornworts and are abundant on the sporophyte .", "question": { "cloze_format": "The group of plants in which stomata appears is ___.", "normal_format": "Stomata appear in which group of plants?", "question_choices": [ "Charales", "liverworts", "hornworts", "mosses" ], "question_id": "fs-idp148910848", "question_text": "Stomata appear in which group of plants?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "1n" }, "bloom": null, "hl_context": "<hl> Mosses form diminutive gametophytes , which are the dominant phase of the lifecycle . <hl> Green , flat structures — resembling true leaves , but lacking vascular tissue — are attached in a spiral to a central stalk . The plants absorb water and nutrients directly through these leaf-like structures . Some mosses have small branches . Some primitive traits of green algae , such as flagellated sperm , are still present in mosses that are dependent on water for reproduction . Other features of mosses are clearly adaptations to dry land . For example , stomata are present on the stems of the sporophyte , and a primitive vascular system runs up the sporophyte ’ s stalk . Additionally , mosses are anchored to the substrate — whether it is soil , rock , or roof tiles — by multicellular rhizoids . These structures are precursors of roots . They originate from the base of the gametophyte , but are not the major route for the absorption of water and minerals . The lack of a true root system explains why it is so easy to rip moss mats from a tree trunk . The moss lifecycle follows the pattern of alternation of generations as shown in Figure 25.14 . <hl> The most familiar structure is the haploid gametophyte , which germinates from a haploid spore and forms first a protonema — usually , a tangle of single-celled filaments that hug the ground . <hl> Cells akin to an apical meristem actively divide and give rise to a gametophore , consisting of a photosynthetic stem and foliage-like structures . Rhizoids form at the base of the gametophore . Gametangia of both sexes develop on separate gametophores . The male organ ( the antheridium ) produces many sperm , whereas the archegonium ( the female organ ) forms a single egg . At fertilization , the sperm swims down the neck to the venter and unites with the egg inside the archegonium . 
The zygote , protected by the archegonium , divides and grows into a sporophyte , still attached by its foot to the gametophyte .", "hl_sentences": "Mosses form diminutive gametophytes , which are the dominant phase of the lifecycle . The most familiar structure is the haploid gametophyte , which germinates from a haploid spore and forms first a protonema — usually , a tangle of single-celled filaments that hug the ground .", "question": { "cloze_format": "The chromosome complement in a moss protonema (is) ___.", "normal_format": "What is the chromosome complement in a moss protonema?", "question_choices": [ "1n", "2n", "3n", "varies with the size of the protonema" ], "question_id": "fs-idp177356832", "question_text": "The chromosome complement in a moss protonema is:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "They do not have true roots and can grow on hard surfaces." }, "bloom": "3", "hl_context": "<hl> Mosses and liverworts are often the first macroscopic organisms to colonize an area , both in a primary succession — where bare land is settled for the first time by living organisms — or in a secondary succession , where soil remains intact after a catastrophic event wipes out many existing species . <hl> Their spores are carried by the wind , birds , or insects . Once mosses and liverworts are established , they provide food and shelter for other species . <hl> In a hostile environment , like the tundra where the soil is frozen , bryophytes grow well because they do not have roots and can dry and rehydrate rapidly once water is again available . <hl> Mosses are at the base of the food chain in the tundra biome . Many species — from small insects to musk oxen and reindeer — depend on mosses for food . In turn , predators feed on the herbivores , which are the primary consumers . Some reports indicate that bryophytes make the soil more amenable to colonization by other plants . Because they establish symbiotic relationships with nitrogen-fixing cyanobacteria , mosses replenish the soil with nitrogen .", "hl_sentences": "Mosses and liverworts are often the first macroscopic organisms to colonize an area , both in a primary succession — where bare land is settled for the first time by living organisms — or in a secondary succession , where soil remains intact after a catastrophic event wipes out many existing species . In a hostile environment , like the tundra where the soil is frozen , bryophytes grow well because they do not have roots and can dry and rehydrate rapidly once water is again available .", "question": { "cloze_format": "Mosses grow well in the Arctic tundra because ___.", "normal_format": "Why do mosses grow well in the Arctic tundra?", "question_choices": [ "They grow better at cold temperatures.", "They do not require moisture.", "They do not have true roots and can grow on hard surfaces.", "There are no herbivores in the tundra." ], "question_id": "fs-idp117088800", "question_text": "Why do mosses grow well in the Arctic tundra?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "club mosses" }, "bloom": "4", "hl_context": "<hl> The club mosses , or phylum Lycopodiophyta , are the earliest group of seedless vascular plants . <hl> They dominated the landscape of the Carboniferous , growing into tall trees and forming large swamp forests . <hl> Today ’ s club mosses are diminutive , evergreen plants consisting of a stem ( which may be branched ) and microphylls ( Figure 25.16 ) . 
<hl> The phylum Lycopodiophyta consists of close to 1,200 species , including the quillworts ( Isoetales ) , the club mosses ( Lycopodiales ) , and spike mosses ( Selaginellales ) , none of which are true mosses or bryophytes . The existence of two types of morphology suggests that leaves evolved independently in several groups of plants . The first type of leaf is the microphyll , or “ little leaf , ” which can be dated to 350 million years ago in the late Silurian . A microphyll is small and has a simple vascular system . A single unbranched vein — a bundle of vascular tissue made of xylem and phloem — runs through the center of the leaf . Microphylls may have originated from the flattening of lateral branches , or from sporangia that lost their reproductive capabilities . <hl> Microphylls are present in the club mosses and probably preceded the development of megaphylls , or “ big leaves ” , which are larger leaves with a pattern of branching veins . <hl> Megaphylls most likely appeared independently several times during the course of evolution . Their complex networks of veins suggest that several branches may have combined into a flattened organ , with the gaps between the branches being filled with photosynthetic tissue .", "hl_sentences": "The club mosses , or phylum Lycopodiophyta , are the earliest group of seedless vascular plants . Today ’ s club mosses are diminutive , evergreen plants consisting of a stem ( which may be branched ) and microphylls ( Figure 25.16 ) . Microphylls are present in the club mosses and probably preceded the development of megaphylls , or “ big leaves ” , which are larger leaves with a pattern of branching veins .", "question": { "cloze_format": "The type of plants microphylls are characteristic of are ___ .", "normal_format": "Microphylls are characteristic of which types of plants?", "question_choices": [ "mosses", "liverworts", "club mosses", "ferns" ], "question_id": "fs-idm106145808", "question_text": "Microphylls are characteristic of which types of plants?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "horsetail" }, "bloom": null, "hl_context": "<hl> The stem of a horsetail is characterized by the presence of joints or nodes , hence the name Arthrophyta ( arthro - = \" joint \" ; - phyta = \" plant \" ) . <hl> <hl> Leaves and branches come out as whorls from the evenly spaced joints . <hl> <hl> The needle-shaped leaves do not contribute greatly to photosynthesis , the majority of which takes place in the green stem ( Figure 25.18 ) . <hl> Silica collects in the epidermal cells , contributing to the stiffness of horsetail plants . Underground stems known as rhizomes anchor the plants to the ground . Modern-day horsetails are homosporous and produce bisexual gametophytes .", "hl_sentences": "The stem of a horsetail is characterized by the presence of joints or nodes , hence the name Arthrophyta ( arthro - = \" joint \" ; - phyta = \" plant \" ) . Leaves and branches come out as whorls from the evenly spaced joints . The needle-shaped leaves do not contribute greatly to photosynthesis , the majority of which takes place in the green stem ( Figure 25.18 ) .", "question": { "cloze_format": "A plant in the understory of a forest displays a segmented stem and slender leaves arranged in a whorl. It is probably a ________.", "normal_format": "A plant in the understory of a forest displays a segmented stem and slender leaves arranged in a whorl. 
What is it probably?", "question_choices": [ "club moss", "whisk fern", "fern", "horsetail" ], "question_id": "fs-idm149369136", "question_text": "A plant in the understory of a forest displays a segmented stem and slender leaves arranged in a whorl. It is probably a ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "sori" }, "bloom": "2", "hl_context": "Most ferns produce the same type of spores and are therefore homosporous . The diploid sporophyte is the most conspicuous stage of the lifecycle . <hl> On the underside of its mature fronds , sori ( singular , sorus ) form as small clusters where sporangia develop ( Figure 25.23 ) . <hl> Inside the sori , spores are produced by meiosis and released into the air . Those that land on a suitable substrate germinate and form a heart-shaped gametophyte , which is attached to the ground by thin filamentous rhizoids ( Figure 25.24 ) . The inconspicuous gametophyte harbors both sex gametangia . Flagellated sperm released from the antheridium swim on a wet surface to the archegonium , where the egg is fertilized . The newly formed zygote grows into a sporophyte that emerges from the gametophyte and grows by mitosis into the next generation sporophyte .", "hl_sentences": "On the underside of its mature fronds , sori ( singular , sorus ) form as small clusters where sporangia develop ( Figure 25.23 ) .", "question": { "cloze_format": "The structures that are found on the underside of fern leaves and contain sporangia are ___.", "normal_format": "Which structures are found on the underside of fern leaves and contain sporangia?", "question_choices": [ "sori", "rhizomes", "megaphylls", "microphylls" ], "question_id": "fs-idm52208096", "question_text": "The following structures are found on the underside of fern leaves and contain sporangia:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "sporophyte" }, "bloom": null, "hl_context": "<hl> The dominant stage of the lifecycle of a fern is the sporophyte , which consists of large compound leaves called fronds . <hl> Fronds fulfill a double role ; they are photosynthetic organs that also carry reproductive organs . The stem may be buried underground as a rhizome , from which adventitious roots grow to absorb water and nutrients from the soil ; or , they may grow above ground as a trunk in tree ferns ( Figure 25.20 ) . Adventitious organs are those that grow in unusual places , such as roots growing from the side of a stem .", "hl_sentences": "The dominant stage of the lifecycle of a fern is the sporophyte , which consists of large compound leaves called fronds .", "question": { "cloze_format": "The dominant organism in fern is the ________.", "normal_format": "What is the dominant organism in fern?", "question_choices": [ "sperm", "spore", "gamete", "sporophyte" ], "question_id": "fs-idm116603568", "question_text": "The dominant organism in fern is the ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "sphagnum moss" }, "bloom": "3", "hl_context": "<hl> Seedless plants have historically played a role in human life through uses as tools , fuel , and medicine . <hl> <hl> Dried peat moss , Sphagnum , is commonly used as fuel in some parts of Europe and is considered a renewable resource . <hl> <hl> Sphagnum bogs ( Figure 25.26 ) are cultivated with cranberry and blueberry bushes . <hl> <hl> The ability of Sphagnum to hold moisture makes the moss a common soil conditioner . 
<hl> <hl> Florists use blocks of Sphagnum to maintain moisture for floral arrangements . <hl> The attractive fronds of ferns make them a favorite ornamental plant . Because they thrive in low light , they are well suited as house plants . More importantly , fiddleheads are a traditional spring food of Native Americans in the Pacific Northwest , and are popular as a side dish in French cuisine . The licorice fern , Polypodium glycyrrhiza , is part of the diet of the Pacific Northwest coastal tribes , owing in part to the sweetness of its rhizomes . It has a faint licorice taste and serves as a sweetener . The rhizome also figures in the pharmacopeia of Native Americans for its medicinal properties and is used as a remedy for sore throat .", "hl_sentences": "Seedless plants have historically played a role in human life through uses as tools , fuel , and medicine . Dried peat moss , Sphagnum , is commonly used as fuel in some parts of Europe and is considered a renewable resource . Sphagnum bogs ( Figure 25.26 ) are cultivated with cranberry and blueberry bushes . The ability of Sphagnum to hold moisture makes the moss a common soil conditioner . Florists use blocks of Sphagnum to maintain moisture for floral arrangements .", "question": { "cloze_format": "___ is a seedless plant that is a renewable source of energy.", "normal_format": "What seedless plant is a renewable source of energy?", "question_choices": [ "club moss", "horsetail", "sphagnum moss", "fern" ], "question_id": "fs-idp76109760", "question_text": "What seedless plant is a renewable source of energy?" }, "references_are_paraphrase": null } ]
25
25.1 Early Plant Life Learning Objectives By the end of this section, you will be able to: Discuss the challenges to plant life on land Describe the adaptations that allowed plants to colonize the land Describe the timeline of plant evolution and the impact of land plants on other living things The kingdom Plantae constitutes large and varied groups of organisms. There are more than 300,000 species of catalogued plants. Of these, more than 260,000 are seed plants. Mosses, ferns, conifers, and flowering plants are all members of the plant kingdom. Most biologists also consider green algae to be plants, although others exclude all algae from the plant kingdom. The reason for this disagreement stems from the fact that only green algae, the Charophytes , share common characteristics with land plants (such as using chlorophyll a and b plus carotene in the same proportion as plants). These characteristics are absent in other types of algae. Evolution Connection Algae and Evolutionary Paths to Photosynthesis Some scientists consider all algae to be plants, while others assert that only the Charophytes belong in the kingdom Plantae. These divergent opinions are related to the different evolutionary paths to photosynthesis selected for in different types of algae. While all algae are photosynthetic—that is, they contain some form of a chloroplast—they didn’t all become photosynthetic via the same path. The ancestors to the green algae became photosynthetic by endosymbiosing a green, photosynthetic bacterium about 1.65 billion years ago. That algal line evolved into the Charophytes, and eventually into the modern mosses, ferns, gymnosperms, and angiosperms. Their evolutionary trajectory was relatively straight and monophyletic. In contrast, the other algae—red, brown, golden, stramenopiles, and so on—all became photosynthetic by secondary, or even tertiary, endosymbiotic events; that is, they endosymbiosed cells that had already endosymbiosed a cyanobacterium. These latecomers to photosynthesis are parallels to the Charophytes in terms of autotrophy, but they did not expand to the same extent as the Charophytes, nor did they colonize the land. The different views on whether all algae are Plantae arise from how these evolutionary paths are viewed. Scientists who solely track evolutionary straight lines (that is, monophyly), consider only the Charophytes as plants. To biologists who cast a broad net over living things that share a common characteristic (in this case, photosynthetic eukaryotes), all algae are plants. Link to Learning Go to this interactive website to get a more in-depth view of the Charophytes. Plant Adaptations to Life on Land As organisms adapted to life on land, they had to contend with several challenges in the terrestrial environment. Water has been described as “the stuff of life.” The cell’s interior is a watery soup: in this medium, most small molecules dissolve and diffuse, and the majority of the chemical reactions of metabolism take place. Desiccation, or drying out, is a constant danger for an organism exposed to air. Even when parts of a plant are close to a source of water, the aerial structures are likely to dry out. Water also provides buoyancy to organisms. On land, plants need to develop structural support in a medium that does not give the same lift as water. The organism is also subject to bombardment by mutagenic radiation, because air does not filter out ultraviolet rays of sunlight. 
Additionally, the male gametes must reach the female gametes using new strategies, because swimming is no longer possible. Therefore, both gametes and zygotes must be protected from desiccation. The successful land plants developed strategies to deal with all of these challenges. Not all adaptations appeared at once. Some species never moved very far from the aquatic environment, whereas others went on to conquer the driest environments on Earth. To balance these survival challenges, life on land offers several advantages. First, sunlight is abundant. Water acts as a filter, altering the spectral quality of light absorbed by the photosynthetic pigment chlorophyll. Second, carbon dioxide is more readily available in air than in water, since it diffuses faster in air. Third, land plants evolved before land animals; therefore, until dry land was colonized by animals, no predators threatened plant life. This situation changed as animals emerged from the water and fed on the abundant sources of nutrients in the established flora. In turn, plants developed strategies to deter predation: from spines and thorns to toxic chemicals. Early land plants, like the early land animals, did not live very far from an abundant source of water and developed survival strategies to combat dryness. One of these strategies is called tolerance. Many mosses, for example, can dry out to a brown and brittle mat, but as soon as rain or a flood makes water available, mosses will absorb it and are restored to their healthy green appearance. Another strategy is to colonize environments with high humidity, where droughts are uncommon. Ferns, which are considered an early lineage of plants, thrive in damp and cool places such as the understory of temperate forests. Later, plants moved away from moist or aquatic environments using resistance to desiccation, rather than tolerance. These plants, like cacti, minimize the loss of water to such an extent they can survive in extremely dry environments. The most successful adaptation solution was the development of new structures that gave plants the advantage when colonizing new and dry environments. Four major adaptations are found in all terrestrial plants: the alternation of generations, a sporangium in which the spores are formed, a gametangium that produces haploid cells, and apical meristem tissue in roots and shoots. The evolution of a waxy cuticle and a cell wall with lignin also contributed to the success of land plants. These adaptations are noticeably lacking in the closely related green algae—another reason for the debate over their placement in the plant kingdom. Alternation of Generations Alternation of generations describes a life cycle in which an organism has both haploid and diploid multicellular stages ( Figure 25.2 ). Haplontic refers to a lifecycle in which there is a dominant haploid stage, and diplontic refers to a lifecycle in which the diploid is the dominant life stage. Humans are diplontic. Most plants exhibit alternation of generations, which is described as haplodiplodontic: the haploid multicellular form, known as a gametophyte, is followed in the development sequence by a multicellular diploid organism: the sporophyte. The gametophyte gives rise to the gametes (reproductive cells) by mitosis. This can be the most obvious phase of the life cycle of the plant, as in the mosses, or it can occur in a microscopic structure, such as a pollen grain, in the higher plants (a common collective term for the vascular plants). 
The sporophyte stage is barely noticeable in lower plants (the collective term for the plant groups of mosses, liverworts, and lichens). Towering trees are the diplontic phase in the lifecycles of plants such as sequoias and pines. Protection of the embryo is a major requirement for land plants. The vulnerable embryo must be sheltered from desiccation and other environmental hazards. In both seedless and seed plants, the female gametophyte provides protection and nutrients to the embryo as it develops into the new generation of sporophyte. This distinguishing feature of land plants gave the group its alternate name of embryophytes . Sporangia in Seedless Plants The sporophyte of seedless plants is diploid and results from syngamy (fusion) of two gametes. The sporophyte bears the sporangia (singular, sporangium): organs that first appeared in the land plants. The term “sporangia” literally means “spore in a vessel,” as it is a reproductive sac that contains spores ( Figure 25.3 ). Inside the multicellular sporangia, the diploid sporocytes , or mother cells, produce haploid spores by meiosis, where the 2n chromosome number is reduced to 1n (note that many plant sporophytes are polyploid: for example, durum wheat is tetraploid, bread wheat is hexaploid, and some ferns are 1000-ploid). The spores are later released by the sporangia and disperse in the environment. Two different types of spores are produced in land plants, resulting in the separation of sexes at different points in the lifecycle. Seedless non-vascular plants produce only one kind of spore and are called homosporous . The gametophyte phase is dominant in these plants. After germinating from a spore, the resulting gametophyte produces both male and female gametangia, usually on the same individual. In contrast, heterosporous plants produce two morphologically different types of spores. The male spores are called microspores , because of their smaller size, and develop into the male gametophyte; the comparatively larger megaspores develop into the female gametophyte. Heterospory is observed in a few seedless vascular plants and in all seed plants. When the haploid spore germinates in a hospitable environment, it generates a multicellular gametophyte by mitosis. The gametophyte supports the zygote formed from the fusion of gametes and the resulting young sporophyte (vegetative form). The cycle then begins anew. The spores of seedless plants are surrounded by thick cell walls containing a tough polymer known as sporopollenin . This complex substance is characterized by long chains of organic molecules related to fatty acids and carotenoids: hence the yellow color of most pollen. Sporopollenin is unusually resistant to chemical and biological degradation. In seed plants, which use pollen to transfer the male sperm to the female egg, the toughness of sporopollenin explains the existence of well-preserved pollen fossils. Sporopollenin was once thought to be an innovation of land plants; however, the green algae Coleochaetes forms spores that contain sporopollenin. Gametangia in Seedless Plants Gametangia (singular, gametangium) are structures observed on multicellular haploid gametophytes. In the gametangia, precursor cells give rise to gametes by mitosis. The male gametangium ( antheridium ) releases sperm. Many seedless plants produce sperm equipped with flagella that enable them to swim in a moist environment to the archegonia : the female gametangium. The embryo develops inside the archegonium as the sporophyte.
Gametangia are prominent in seedless plants, but are very rarely found in seed plants. Apical Meristems Shoots and roots of plants increase in length through rapid cell division in a tissue called the apical meristem, which is a small zone of cells found at the shoot tip or root tip ( Figure 25.4 ). The apical meristem is made of undifferentiated cells that continue to proliferate throughout the life of the plant. Meristematic cells give rise to all the specialized tissues of the organism. Elongation of the shoots and roots allows a plant to access additional space and resources: light in the case of the shoot, and water and minerals in the case of roots. A separate meristem, called the lateral meristem, produces cells that increase the diameter of tree trunks. Additional Land Plant Adaptations As plants adapted to dry land and became independent from the constant presence of water in damp habitats, new organs and structures made their appearance. Early land plants did not grow more than a few inches off the ground, competing for light on these low mats. By developing a shoot and growing taller, individual plants captured more light. Because air offers substantially less support than water, land plants incorporated more rigid molecules in their stems (and later, tree trunks). In small plants such as single-celled algae, simple diffusion suffices to distribute water and nutrients throughout the organism. However, for plants to evolve larger forms, the evolution of vascular tissue for the distribution of water and solutes was a prerequisite. The vascular system contains xylem and phloem tissues. Xylem conducts water and minerals absorbed from the soil up to the shoot, while phloem transports food derived from photosynthesis throughout the entire plant. A root system evolved to take up water and minerals from the soil, and to anchor the increasingly taller shoot in the soil. In land plants, a waxy, waterproof cover called a cuticle protects the leaves and stems from desiccation. However, the cuticle also prevents intake of carbon dioxide needed for the synthesis of carbohydrates through photosynthesis. To overcome this, stomata or pores that open and close to regulate traffic of gases and water vapor appeared in plants as they moved away from moist environments into drier habitats. Water filters ultraviolet-B (UVB) light, which is harmful to all organisms, especially those that must absorb light to survive. This filtering does not occur for land plants. This presented an additional challenge to land colonization, which was met by the evolution of biosynthetic pathways for the synthesis of protective flavonoids and other compounds: pigments that absorb UV wavelengths of light and protect the aerial parts of plants from photodynamic damage. Plants cannot avoid being eaten by animals. Instead, they synthesize a large range of poisonous secondary metabolites: complex organic molecules such as alkaloids, whose noxious smells and unpleasant taste deter animals. These toxic compounds can also cause severe diseases and even death, thus discouraging predation. Humans have used many of these compounds for centuries as drugs, medications, or spices. In contrast, as plants co-evolved with animals, the development of sweet and nutritious metabolites lured animals into providing valuable assistance in dispersing pollen grains, fruit, or seeds. Plants have been enlisting animals to be their helpers in this way for hundreds of millions of years. 
Evolution of Land Plants No discussion of the evolution of plants on land can be undertaken without a brief review of the timeline of the geological eras. The early era, known as the Paleozoic, is divided into six periods. It starts with the Cambrian period, followed by the Ordovician, Silurian, Devonian, Carboniferous, and Permian. The major event to mark the Ordovician, more than 500 million years ago, was the colonization of land by the ancestors of modern land plants. Fossilized cells, cuticles, and spores of early land plants have been dated as far back as the Ordovician period in the early Paleozoic era. The oldest-known vascular plants have been identified in deposits from the Devonian. One of the richest sources of information is the Rhynie chert, a sedimentary rock deposit found in Rhynie, Scotland ( Figure 25.5 ), where embedded fossils of some of the earliest vascular plants have been identified. Paleobotanists distinguish between extinct species, as fossils, and extant species, which are still living. The extinct vascular plants, classified as zosterophylls and trimerophytes, most probably lacked true leaves and roots and formed low vegetation mats similar in size to modern-day mosses, although some trimerophytes could reach one meter in height. The later genus Cooksonia , which flourished during the Silurian, has been extensively studied from well-preserved examples. Imprints of Cooksonia show slender branching stems ending in what appear to be sporangia. From the recovered specimens, it is not possible to establish for certain whether Cooksonia possessed vascular tissues. Fossils indicate that by the end of the Devonian period, ferns, horsetails, and seed plants populated the landscape, giving rise to trees and forests. This luxuriant vegetation helped enrich the atmosphere in oxygen, making it easier for air-breathing animals to colonize dry land. Plants also established early symbiotic relationships with fungi, creating mycorrhizae: a relationship in which the fungal network of filaments increases the efficiency of the plant root system, and the plants provide the fungi with byproducts of photosynthesis. Career Connection Paleobotanist How organisms acquired traits that allow them to colonize new environments—and how the contemporary ecosystem is shaped—are fundamental questions of evolution. Paleobotany (the study of extinct plants) addresses these questions through the analysis of fossilized specimens retrieved from field studies, reconstituting the morphology of organisms that disappeared long ago. Paleobotanists trace the evolution of plants by following the modifications in plant morphology: shedding light on the connection between existing plants by identifying common ancestors that display the same traits. This field seeks to find transitional species that bridge gaps in the path to the development of modern organisms. Fossils are formed when organisms are trapped in sediments or environments where their shapes are preserved. Paleobotanists collect fossil specimens in the field and place them in the context of the geological sediments and other fossilized organisms surrounding them. The activity requires great care to preserve the integrity of the delicate fossils and the layers of rock in which they are found. One of the most exciting recent developments in paleobotany is the use of analytical chemistry and molecular biology to study fossils.
Preservation of molecular structures requires an environment free of oxygen, since oxidation and degradation of material through the activity of microorganisms depend on its presence. One example of the use of analytical chemistry and molecular biology is the identification of oleanane, a compound that deters pests. Up to this point, oleanane appeared to be unique to flowering plants; however, it has now been recovered from sediments dating from the Permian, much earlier than the current dates given for the appearance of the first flowering plants. Paleobotanists can also study fossil DNA, which can yield a large amount of information, by analyzing and comparing the DNA sequences of extinct plants with those of living and related organisms. Through this analysis, evolutionary relationships can be built for plant lineages. Some paleobotanists are skeptical of the conclusions drawn from the analysis of molecular fossils. For example, the chemical materials of interest degrade rapidly when exposed to air during their initial isolation, as well as in further manipulations. There is always a high risk of contaminating the specimens with extraneous material, mostly from microorganisms. Nevertheless, as technology is refined, the analysis of DNA from fossilized plants will provide invaluable information on the evolution of plants and their adaptation to an ever-changing environment. The Major Divisions of Land Plants The green algae and land plants are grouped together into a subphylum called the Streptophytina, and thus are called Streptophytes. In a further division, land plants are classified into two major groups according to the absence or presence of vascular tissue, as detailed in Figure 25.6 . Plants that lack vascular tissue, which is formed of specialized cells for the transport of water and nutrients, are referred to as non-vascular plants . Liverworts, mosses, and hornworts are seedless, non-vascular plants that likely appeared early in land plant evolution. Vascular plants developed a network of cells that conduct water and solutes. The first vascular plants appeared in the late Ordovician and were probably similar to lycophytes, which include club mosses (not to be confused with the mosses) and the pterophytes (ferns, horsetails, and whisk ferns). Lycophytes and pterophytes are referred to as seedless vascular plants, because they do not produce seeds. The seed plants, or spermatophytes, form the largest group of all existing plants, and hence dominate the landscape. Seed plants include gymnosperms, most notably conifers (Gymnosperms), which produce “naked seeds,” and the most successful of all plants, the flowering plants (Angiosperms). Angiosperms protect their seeds inside chambers at the center of a flower; the walls of the chamber later develop into a fruit. Visual Connection Which of the following statements about plant divisions is false? Lycophytes and pterophytes are seedless vascular plants. All vascular plants produce seeds. All nonvascular embryophytes are bryophytes. Seed plants include angiosperms and gymnosperms. 
25.2 Green Algae: Precursors of Land Plants Learning Objectives By the end of this section, you will be able to: Describe the traits shared by green algae and land plants Explain the reasons why Charales are considered the closest relative to land plants Understand that current phylogenetic relationships are reshaped by comparative analysis of DNA sequences Streptophytes Until recently, all photosynthetic eukaryotes were considered members of the kingdom Plantae. The brown, red, and gold algae, however, have been reassigned to the Protista kingdom. This is because apart from their ability to capture light energy and fix CO2, they lack many structural and biochemical traits that distinguish plants from protists. The position of green algae is more ambiguous. Green algae contain the same carotenoids and chlorophyll a and b as land plants, whereas other algae have different accessory pigments and types of chlorophyll molecules in addition to chlorophyll a . Both green algae and land plants also store carbohydrates as starch. Cells in green algae divide along cell plates called phragmoplasts, and their cell walls are layered in the same manner as the cell walls of embryophytes. Consequently, land plants and closely related green algae are now part of a new monophyletic group called Streptophyta . The remaining green algae, which belong to a group called Chlorophyta, include more than 7000 different species that live in fresh or brackish water, in seawater, or in snow patches. A few green algae even survive on soil, provided it is covered by a thin film of moisture in which they can live. Periodic dry spells provide a selective advantage to algae that can survive water stress. Some green algae may already be familiar, in particular Spirogyra and desmids. Their cells contain chloroplasts that display a dizzying variety of shapes, and their cell walls contain cellulose, as do land plants. Some green algae are single cells, such as Chlorella and Chlamydomonas , which adds to the ambiguity of green algae classification, because plants are multicellular. Other algae, like Ulva (commonly called sea lettuce), form colonies ( Figure 25.7 ). Reproduction of Green Algae Green algae reproduce both asexually, by fragmentation or dispersal of spores, and sexually, by producing gametes that fuse during fertilization. In a single-celled organism such as Chlamydomonas , there is no mitosis after fertilization. In the multicellular Ulva , a sporophyte grows by mitosis after fertilization. Both Chlamydomonas and Ulva produce flagellated gametes. Charales Green algae in the order Charales, and the coleochaetes (microscopic green algae that enclose their spores in sporopollenin), are considered the closest living relatives of embryophytes. The Charales can be traced back 420 million years. They live in a range of fresh water habitats and vary in size from a few millimeters to a meter in length. The representative species is Chara ( Figure 25.8 ), often called muskgrass or skunkweed because of its unpleasant smell. Large cells form the thallus: the main stem of the alga. Branches arising from the nodes are made of smaller cells. Male and female reproductive structures are found on the nodes, and the sperm have flagella. Unlike land plants, Charales do not undergo alternation of generations in their lifecycle. Charales exhibit a number of traits that are significant in their adaptation to land life.
They produce the compounds lignin and sporopollenin, and form plasmodesmata that connect the cytoplasm of adjacent cells. The egg, and later, the zygote, form in a protected chamber on the parent plant. New information from recent, extensive DNA sequence analysis of green algae indicates that the Zygnematales are more closely related to the embryophytes than the Charales. The Zygnematales include the familiar genus Spirogyra. As techniques in DNA analysis improve and new information on comparative genomics arises, the phylogenetic connections between species will change. Clearly, plant biologists have not yet solved the mystery of the origin of land plants. 25.3 Bryophytes Learning Objectives By the end of this section, you will be able to: Identify the main characteristics of bryophytes Describe the distinguishing traits of liverworts, hornworts, and mosses Chart the development of land adaptations in the bryophytes Describe the events in the bryophyte lifecycle Bryophytes are the group of plants that are the closest extant relative of early terrestrial plants. The first bryophytes (liverworts) most likely appeared in the Ordovician period, about 450 million years ago. Because of the lack of lignin and other resistant structures, the likelihood of bryophytes forming fossils is rather small. Some spores protected by sporopollenin have survived and are attributed to early bryophytes. By the Silurian period, however, vascular plants had spread through the continents. This compelling fact is used as evidence that non-vascular plants must have preceded the Silurian period. More than 25,000 species of bryophytes thrive in mostly damp habitats, although some live in deserts. They constitute the major flora of inhospitable environments like the tundra, where their small size and tolerance to desiccation offer distinct advantages. They generally lack lignin and do not have actual tracheids (xylem cells specialized for water conduction). Rather, water and nutrients circulate inside specialized conducting cells. Although the term non-tracheophyte is more accurate, bryophytes are commonly called nonvascular plants. In a bryophyte, all the conspicuous vegetative organs—including the photosynthetic leaf-like structures, the thallus, stem, and the rhizoid that anchors the plant to its substrate—belong to the haploid organism or gametophyte. The sporophyte is barely noticeable. The gametes formed by bryophytes swim with a flagellum, as do gametes in a few of the tracheophytes. The sporangium—the multicellular sexual reproductive structure—is present in bryophytes and absent in the majority of algae. The bryophyte embryo also remains attached to the parent plant, which protects and nourishes it. This is a characteristic of land plants. The bryophytes are divided into three phyla: the liverworts or Hepaticophyta, the hornworts or Anthocerotophyta, and the mosses or true Bryophyta. Liverworts Liverworts (Hepaticophyta) are viewed as the plants most closely related to the ancestor that moved to land. Liverworts have colonized every terrestrial habitat on Earth and diversified to more than 7000 existing species ( Figure 25.9 ). Some gametophytes form lobate green structures, as seen in Figure 25.10 . The shape is similar to the lobes of the liver, and hence provides the origin of the name given to the phylum. Openings that allow the movement of gases may be observed in liverworts. However, these are not stomata, because they do not actively open and close. 
The plant takes up water over its entire surface and has no cuticle to prevent desiccation. Figure 25.11 represents the lifecycle of a liverwort. The cycle starts with the release of haploid spores from the sporangium that developed on the sporophyte. Spores disseminated by wind or water germinate into flattened thalli attached to the substrate by thin, single-celled filaments. Male and female gametangia develop on separate, individual plants. Once released, male gametes swim with the aid of their flagella to the female gametangium (the archegonium), and fertilization ensues. The zygote grows into a small sporophyte still attached to the parent gametophyte. It will give rise, by meiosis, to the next generation of spores. Liverwort plants can also reproduce asexually, by the breaking of branches or the spreading of leaf fragments called gemmae. In this latter type of reproduction, the gemmae —small, intact, complete pieces of plant that are produced in a cup on the surface of the thallus (shown in Figure 25.11 )—are splashed out of the cup by raindrops. The gemmae then land nearby and develop into gametophytes. Hornworts The hornworts ( Anthocerotophyta ) belong to the broad bryophyte group. They have colonized a variety of habitats on land, although they are never far from a source of moisture. The short, blue-green gametophyte is the dominant phase of the lifecycle of a hornwort. The narrow, pipe-like sporophyte is the defining characteristic of the group. The sporophytes emerge from the parent gametophyte and continue to grow throughout the life of the plant ( Figure 25.12 ). Stomata appear in the hornworts and are abundant on the sporophyte. Photosynthetic cells in the thallus contain a single chloroplast. Meristem cells at the base of the plant keep dividing and adding to its height. Many hornworts establish symbiotic relationships with cyanobacteria that fix nitrogen from the environment. The lifecycle of hornworts ( Figure 25.13 ) follows the general pattern of alternation of generations. The gametophytes grow as flat thalli on the soil with embedded gametangia. Flagellated sperm swim to the archegonia and fertilize eggs. The zygote develops into a long and slender sporophyte that eventually splits open, releasing spores. Thin cells called pseudoelaters surround the spores and help propel them further in the environment. Unlike the elaters observed in horsetails, the hornwort pseudoelaters are single-celled structures. The haploid spores germinate and give rise to the next generation of gametophyte. Mosses More than 10,000 species of mosses have been catalogued. Their habitats vary from the tundra, where they are the main vegetation, to the understory of tropical forests. In the tundra, the mosses’ shallow rhizoids allow them to fasten to a substrate without penetrating the frozen soil. Mosses slow down erosion, store moisture and soil nutrients, and provide shelter for small animals as well as food for larger herbivores, such as the musk ox. Mosses are very sensitive to air pollution and are used to monitor air quality. They are also sensitive to copper salts, so these salts are a common ingredient of compounds marketed to eliminate mosses from lawns. Mosses form diminutive gametophytes, which are the dominant phase of the lifecycle. Green, flat structures—resembling true leaves, but lacking vascular tissue—are attached in a spiral to a central stalk. The plants absorb water and nutrients directly through these leaf-like structures. Some mosses have small branches. 
Some primitive traits of green algae, such as flagellated sperm, are still present in mosses that are dependent on water for reproduction. Other features of mosses are clearly adaptations to dry land. For example, stomata are present on the stems of the sporophyte, and a primitive vascular system runs up the sporophyte’s stalk. Additionally, mosses are anchored to the substrate—whether it is soil, rock, or roof tiles—by multicellular rhizoids . These structures are precursors of roots. They originate from the base of the gametophyte, but are not the major route for the absorption of water and minerals. The lack of a true root system explains why it is so easy to rip moss mats from a tree trunk. The moss lifecycle follows the pattern of alternation of generations as shown in Figure 25.14 . The most familiar structure is the haploid gametophyte, which germinates from a haploid spore and forms first a protonema —usually, a tangle of single-celled filaments that hug the ground. Cells akin to an apical meristem actively divide and give rise to a gametophore, consisting of a photosynthetic stem and foliage-like structures. Rhizoids form at the base of the gametophore. Gametangia of both sexes develop on separate gametophores. The male organ (the antheridium) produces many sperm, whereas the archegonium (the female organ) forms a single egg. At fertilization, the sperm swims down the neck to the venter and unites with the egg inside the archegonium. The zygote, protected by the archegonium, divides and grows into a sporophyte, still attached by its foot to the gametophyte. Visual Connection Which of the following statements about the moss life cycle is false? The mature gametophyte is haploid. The sporophyte produces haploid spores. The calyptra buds to form a mature gametophyte. The zygote is housed in the venter. The slender seta (plural, setae), as seen in Figure 25.15 , contains tubular cells that transfer nutrients from the base of the sporophyte (the foot) to the sporangium or capsule . A structure called a peristome increases the spread of spores after the tip of the capsule falls off at dispersal. The concentric tissue around the mouth of the capsule is made of triangular, close-fitting units, a little like “teeth”; these open and close depending on moisture levels, and periodically release spores. 25.4 Seedless Vascular Plants Learning Objectives By the end of this section, you will be able to: Identify the new traits that first appear in tracheophytes Discuss the importance of adaptations to life on land Describe the classes of seedless tracheophytes Describe the lifecycle of a fern Explain the role of seedless vascular plants in the ecosystem The vascular plants, or tracheophytes , are the dominant and most conspicuous group of land plants. More than 260,000 species of tracheophytes represent more than 90 percent of Earth’s vegetation. Several evolutionary innovations explain their success and their ability to spread to all habitats. Bryophytes may have been successful at the transition from an aquatic habitat to land, but they are still dependent on water for reproduction, and absorb moisture and nutrients through the gametophyte surface. The lack of roots for absorbing water and minerals from the soil, as well as a lack of reinforced conducting cells, limits bryophytes to small sizes. Although they may survive in reasonably dry conditions, they cannot reproduce and expand their habitat range in the absence of water. 
Vascular plants, on the other hand, can achieve enormous heights, thus competing successfully for light. Photosynthetic organs become leaves, and pipe-like cells or vascular tissues transport water, minerals, and fixed carbon throughout the organism. In seedless vascular plants, the diploid sporophyte is the dominant phase of the lifecycle. The gametophyte is now an inconspicuous, but still independent, organism. Throughout plant evolution, there is an evident reversal of roles in the dominant phase of the lifecycle. Seedless vascular plants still depend on water during fertilization, as the sperm must swim on a layer of moisture to reach the egg. This step in reproduction explains why ferns and their relatives are more abundant in damp environments. Vascular Tissue: Xylem and Phloem The first fossils that show the presence of vascular tissue date to the Silurian period, about 430 million years ago. The simplest arrangement of conductive cells shows a pattern of xylem at the center surrounded by phloem. Xylem is the tissue responsible for the storage and long-distance transport of water and nutrients, as well as the transfer of water-soluble growth factors from the organs of synthesis to the target organs. The tissue consists of conducting cells, known as tracheids, and supportive filler tissue, called parenchyma. Xylem conductive cells incorporate the compound lignin into their walls, and are thus described as lignified. Lignin itself is a complex polymer that is impermeable to water and confers mechanical strength to vascular tissue. With their rigid cell walls, the xylem cells provide support to the plant and allow it to achieve impressive heights. Tall plants have a selective advantage by being able to reach unfiltered sunlight and disperse their spores or seeds further away, thus expanding their range. By growing higher than other plants, tall trees cast their shadow on shorter plants and limit competition for water and precious nutrients in the soil. Phloem is the second type of vascular tissue; it transports sugars, proteins, and other solutes throughout the plant. Phloem cells are divided into sieve elements (conducting cells) and cells that support the sieve elements. Together, xylem and phloem tissues form the vascular system of plants. Roots: Support for the Plant Roots are not well preserved in the fossil record. Nevertheless, it seems that roots appeared later in evolution than vascular tissue. The development of an extensive network of roots represented a significant new feature of vascular plants. Thin rhizoids attached bryophytes to the substrate, but these rather flimsy filaments did not provide a strong anchor for the plant; neither did they absorb substantial amounts of water and nutrients. In contrast, roots, with their prominent vascular tissue system, transfer water and minerals from the soil to the rest of the plant. The extensive network of roots that penetrates deep into the soil to reach sources of water also stabilizes trees by acting as a ballast or anchor. The majority of roots establish a symbiotic relationship with fungi, forming mycorrhizae, which benefit the plant by greatly increasing the surface area for absorption of water and soil minerals and nutrients. Leaves, Sporophylls, and Strobili A third innovation marks the seedless vascular plants. Accompanying the prominence of the sporophyte and the development of vascular tissue, the appearance of true leaves improved their photosynthetic efficiency. 
Leaves capture more sunlight with their increased surface area by employing more chloroplasts to trap light energy and convert it to chemical energy, which is then used to fix atmospheric carbon dioxide into carbohydrates. The carbohydrates are exported to the rest of the plant by the conductive cells of phloem tissue. The existence of two types of morphology suggests that leaves evolved independently in several groups of plants. The first type of leaf is the microphyll , or “little leaf,” which can be dated to 350 million years ago in the late Silurian. A microphyll is small and has a simple vascular system. A single unbranched vein —a bundle of vascular tissue made of xylem and phloem—runs through the center of the leaf. Microphylls may have originated from the flattening of lateral branches, or from sporangia that lost their reproductive capabilities. Microphylls are present in the club mosses and probably preceded the development of megaphylls , or “big leaves”, which are larger leaves with a pattern of branching veins. Megaphylls most likely appeared independently several times during the course of evolution. Their complex networks of veins suggest that several branches may have combined into a flattened organ, with the gaps between the branches being filled with photosynthetic tissue. In addition to photosynthesis, leaves play another role in the life of the plants. Pine cones, mature fronds of ferns, and flowers are all sporophylls —leaves that were modified structurally to bear sporangia. Strobili are cone-like structures that contain sporangia. They are prominent in conifers and are commonly known as pine cones. Ferns and Other Seedless Vascular Plants By the late Devonian period, plants had evolved vascular tissue, well-defined leaves, and root systems. With these advantages, plants increased in height and size. During the Carboniferous period, swamp forests of club mosses and horsetails—some specimens reaching heights of more than 30 m (100 ft)—covered most of the land. These forests gave rise to the extensive coal deposits that gave the Carboniferous its name. In seedless vascular plants, the sporophyte became the dominant phase of the lifecycle. Water is still required for fertilization of seedless vascular plants, and most favor a moist environment. Modern-day seedless tracheophytes include club mosses, horsetails, ferns, and whisk ferns. Phylum Lycopodiophyta: Club Mosses The club mosses , or phylum Lycopodiophyta , are the earliest group of seedless vascular plants. They dominated the landscape of the Carboniferous, growing into tall trees and forming large swamp forests. Today’s club mosses are diminutive, evergreen plants consisting of a stem (which may be branched) and microphylls ( Figure 25.16 ). The phylum Lycopodiophyta consists of close to 1,200 species, including the quillworts ( Isoetales ), the club mosses ( Lycopodiales ), and spike mosses ( Selaginellales ), none of which are true mosses or bryophytes. Lycophytes follow the pattern of alternation of generations seen in the bryophytes, except that the sporophyte is the major stage of the lifecycle. The gametophytes do not depend on the sporophyte for nutrients. Some gametophytes develop underground and form mycorrhizal associations with fungi. In club mosses, the sporophyte gives rise to sporophylls arranged in strobili, cone-like structures that give the class its name. Lycophytes can be homosporous or heterosporous. 
Phylum Monilophyta: Class Equisetopsida (Horsetails) Horsetails, whisk ferns, and ferns belong to the phylum Monilophyta, with horsetails placed in the Class Equisetopsida. The single genus Equisetum is the survivor of a large group of plants, known as Arthrophyta, which produced large trees and entire swamp forests in the Carboniferous. The plants are usually found in damp environments and marshes ( Figure 25.17 ). The stem of a horsetail is characterized by the presence of joints or nodes, hence the name Arthrophyta (arthro- = "joint"; -phyta = "plant"). Leaves and branches come out as whorls from the evenly spaced joints. The needle-shaped leaves do not contribute greatly to photosynthesis, the majority of which takes place in the green stem ( Figure 25.18 ). Silica collects in the epidermal cells, contributing to the stiffness of horsetail plants. Underground stems known as rhizomes anchor the plants to the ground. Modern-day horsetails are homosporous and produce bisexual gametophytes. Phylum Monilophyta: Class Psilotopsida (Whisk Ferns) While most ferns form large leaves and branching roots, the whisk ferns, Class Psilotopsida, lack both roots and leaves, probably lost by reduction. Photosynthesis takes place in their green stems, and small yellow knobs form at the tip of the branch stem and contain the sporangia. Whisk ferns were once considered early pterophytes. However, recent comparative DNA analysis suggests that this group may have lost both leaves and roots through evolution, and is more closely related to ferns. Phylum Monilophyta: Class Polypodiopsida (Ferns) With their large fronds, ferns are the most readily recognizable seedless vascular plants. They are considered the most advanced seedless vascular plants and display characteristics commonly observed in seed plants. More than 20,000 species of ferns live in environments ranging from the tropics to temperate forests. Although some species survive in dry environments, most ferns are restricted to moist, shaded places. Ferns made their appearance in the fossil record during the Devonian period and expanded during the Carboniferous. The dominant stage of the lifecycle of a fern is the sporophyte, which consists of large compound leaves called fronds. Fronds fulfill a double role; they are photosynthetic organs that also carry reproductive organs. The stem may be buried underground as a rhizome, from which adventitious roots grow to absorb water and nutrients from the soil, or it may grow above ground as a trunk in tree ferns ( Figure 25.20 ). Adventitious organs are those that grow in unusual places, such as roots growing from the side of a stem. The tip of a developing fern frond is rolled into a crozier, or fiddlehead ( Figure 25.21 a and Figure 25.21 b ). Fiddleheads unroll as the frond develops. The lifecycle of a fern is depicted in Figure 25.22 . Visual Connection Which of the following statements about the fern life cycle is false? Sporangia produce haploid spores. The sporophyte grows from a gametophyte. The sporophyte is diploid and the gametophyte is haploid. Sporangia form on the underside of the gametophyte. Link to Learning To see an animation of the lifecycle of a fern and to test your knowledge, go to the website. Most ferns produce the same type of spores and are therefore homosporous. The diploid sporophyte is the most conspicuous stage of the lifecycle. On the underside of its mature fronds, sori (singular, sorus) form as small clusters where sporangia develop ( Figure 25.23 ).
Inside the sori, spores are produced by meiosis and released into the air. Those that land on a suitable substrate germinate and form a heart-shaped gametophyte, which is attached to the ground by thin filamentous rhizoids ( Figure 25.24 ). The inconspicuous gametophyte harbors the gametangia of both sexes. Flagellated sperm released from the antheridium swim on a wet surface to the archegonium, where the egg is fertilized. The newly formed zygote grows by mitosis into a sporophyte, which emerges from the gametophyte and becomes the sporophyte of the next generation. Career Connection Landscape Designer Looking at the well-laid parterres of flowers and fountains in the grounds of royal castles and historic houses of Europe, it’s clear that the gardens’ creators knew about more than art and design. They were also familiar with the biology of the plants they chose. Landscape design also has strong roots in the tradition of the United States. A prime example of early American classical design is Monticello: Thomas Jefferson’s private estate. Among his many interests, Jefferson maintained a strong passion for botany. Landscape layout can encompass a small private space, like a backyard garden; public gathering places, like Central Park in New York City; or an entire city plan, like Pierre L’Enfant’s design for Washington, DC. A landscape designer will plan traditional public spaces—such as botanical gardens, parks, college campuses, gardens, and larger developments—as well as natural areas and private gardens. The restoration of natural places encroached on by human intervention, such as wetlands, also requires the expertise of a landscape designer. With such an array of necessary skills, a landscape designer’s education includes a solid background in botany, soil science, plant pathology, entomology, and horticulture. Coursework in architecture and design software is also required for the completion of the degree. The successful design of a landscape rests on an extensive knowledge of plant growth requirements, such as light and shade, moisture levels, compatibility of different species, and susceptibility to pathogens and pests. Mosses and ferns will thrive in a shaded area, where fountains provide moisture; cacti, on the other hand, would not fare well in that environment. The future growth of individual plants must be taken into account, to avoid crowding and competition for light and nutrients. The appearance of the space over time is also of concern. Shapes, colors, and biology must be balanced for a well-maintained and sustainable green space. Art, architecture, and biology blend in a beautifully designed and implemented landscape. The Importance of Seedless Vascular Plants Mosses and liverworts are often the first macroscopic organisms to colonize an area, whether in a primary succession—where bare land is settled for the first time by living organisms—or in a secondary succession, where soil remains intact after a catastrophic event wipes out many existing species. Their spores are carried by the wind, birds, or insects. Once mosses and liverworts are established, they provide food and shelter for other species. In a hostile environment, like the tundra, where the soil is frozen, bryophytes grow well because they do not have roots and can dry and rehydrate rapidly once water is again available. Mosses are at the base of the food chain in the tundra biome. Many species—from small insects to musk oxen and reindeer—depend on mosses for food.
In turn, predators feed on the herbivores, which are the primary consumers. Some reports indicate that bryophytes make the soil more amenable to colonization by other plants. Because they establish symbiotic relationships with nitrogen-fixing cyanobacteria, mosses replenish the soil with nitrogen. At the end of the nineteenth century, scientists observed that lichens and mosses were becoming increasingly rare in urban and suburban areas. Since bryophytes have neither a root system for absorption of water and nutrients, nor a cuticle layer that protects them from desiccation, they absorb moisture and nutrients through their entire exposed surfaces; pollutants dissolved in rainwater therefore penetrate their tissues readily and have a larger impact on mosses than on other plants. The disappearance of mosses can be considered a bioindicator for the level of pollution in the environment. Ferns contribute to the environment by promoting the weathering of rock, accelerating the formation of topsoil, and slowing down erosion by spreading rhizomes in the soil. The water ferns of the genus Azolla harbor nitrogen-fixing cyanobacteria and restore this important nutrient to aquatic habitats. Seedless plants have historically played a role in human life through uses as tools, fuel, and medicine. Dried peat moss, Sphagnum, is commonly used as fuel in some parts of Europe and is considered a renewable resource. Sphagnum bogs ( Figure 25.26 ) are cultivated with cranberry and blueberry bushes. The ability of Sphagnum to hold moisture makes the moss a common soil conditioner. Florists use blocks of Sphagnum to maintain moisture for floral arrangements. The attractive fronds of ferns make them a favorite ornamental plant. Because they thrive in low light, they are well suited as house plants. More importantly, fiddleheads are a traditional spring food of Native Americans in the Pacific Northwest, and are popular as a side dish in French cuisine. The licorice fern, Polypodium glycyrrhiza, is part of the diet of the Pacific Northwest coastal tribes, owing in part to the sweetness of its rhizomes. It has a faint licorice taste and serves as a sweetener. The rhizome also figures in the pharmacopeia of Native Americans for its medicinal properties and is used as a remedy for sore throat. Link to Learning Go to this website to learn how to identify fern species based upon their fiddleheads. By far the greatest impact of seedless vascular plants on human life, however, comes from their extinct progenitors. The tall club mosses, horsetails, and tree-like ferns that flourished in the swampy forests of the Carboniferous period gave rise to large deposits of coal throughout the world. Coal provided an abundant source of energy during the Industrial Revolution, which had tremendous consequences on human societies, including rapid technological progress and growth of large cities, as well as the degradation of the environment. Coal is still a prime source of energy and also a major contributor to global warming.
principles_of_accounting,_volume_1:_financial_accounting
Summary 10.1 Describe and Demonstrate the Basic Inventory Valuation Methods and Their Cost Flow Assumptions The total cost of goods available for sale is the combination of the cost of beginning inventory and the cost of new inventory purchases. These costs relating to goods available for sale are included in the ending inventory, reported on the balance sheet, or become part of the cost of goods sold reported on the income statement. Merchandise inventory is maintained using either the periodic or the perpetual updating system. Periodic updating is performed at the end of the period only, whereas perpetual updating is an ongoing activity that maintains inventory records that are approximately equal to the actual inventory on hand at any time. There are four basic inventory cost flow allocation methods, which are alternative ways to estimate the cost of the units that are sold and the value of the ending inventory. The costing methods are not indicative of the flow of the goods, which often moves in a different order than the flow of the costs. Utilizing different cost allocation options results in marked differences in reported cost of goods sold, net income, and inventory balances. 10.2 Calculate the Cost of Goods Sold and Ending Inventory Using the Periodic Method The periodic inventory system updates inventory at the end of a fixed accounting period. During the accounting period, inventory records are not changed, and at the end of the period, inventory records are adjusted for what was sold and added during the period. Companies using either the periodic or the perpetual method for inventory updating choose among the four basic cost flow assumption methods, which are first-in, first-out (FIFO); last-in, first-out (LIFO); specific identification (SI); and weighted average (AVG). Periodic inventory systems are still used in practice, but the prevalence of their use has greatly diminished as technology has advanced and prices for inventory management software have significantly decreased. 10.3 Calculate the Cost of Goods Sold and Ending Inventory Using the Perpetual Method Perpetual inventory systems maintain the inventory balance in the company records in a continuously updated state, in real time or with only a slight delay. No significant adjustments are needed at the end of the period, before issuing the financial statements. Companies using the perpetual method for inventory updating choose among the four basic cost flow assumption methods, which are first-in, first-out (FIFO); last-in, first-out (LIFO); specific identification (SI); and weighted average (AVG). Most modern inventory systems utilize the perpetual inventory system, due to the benefits it offers for efficiency, ease of operation, availability of real-time updating, and accuracy. 10.4 Explain and Demonstrate the Impact of Inventory Valuation Errors on the Income Statement and Balance Sheet The value of the cost of goods available for sale depends on accurate beginning and ending inventory numbers. Because of the interrelationship between inventory values and cost of goods sold, when the inventory values are incorrect, the associated income statement and balance sheet accounts are also incorrect. Inventory errors at the beginning of a reporting period affect only the income statement. Overstatements of beginning inventory result in overstated cost of goods sold and understated net income. Conversely, understatements of beginning inventory result in understated cost of goods sold and overstated net income.
Inventory errors at the end of a reporting period affect both the income statement and the balance sheet. Overstatements of ending inventory result in understated cost of goods sold, overstated net income, overstated assets, and overstated equity. Conversely, understatements of ending inventory result in overstated cost of goods sold, understated net income, understated assets, and understated equity. 10.5 Examine the Efficiency of Inventory Management Using Financial Ratios Inventory ratio analysis tools help management to identify inefficient management practices and pinpoint troublesome scenarios within their inventory operations processes. The inventory turnover ratio measures how fast the inventory sells, which can be useful for inter-period comparison as well as comparisons with competitor firms. The number of days’ sales in inventory ratio indicates how long it takes, on average, for inventory to be sold. This measure can help the firm identify too much inventory, which may signal excess stocking or product obsolescence, as well as the reverse scenario, insufficient inventory, which could result in customer dissatisfaction and lost sales.
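The cost flow methods and ratios summarized above reduce to simple arithmetic, so a short worked sketch may help make them concrete. The following Python sketch is not from the textbook; the lot quantities, unit costs, beginning inventory balance, and the 365-day year are all illustrative assumptions. It applies the FIFO, LIFO, and weighted-average assumptions under a periodic system and then computes the two efficiency ratios from Section 10.5. (Specific identification is omitted because it simply looks up the actual cost of each unit sold.)

```python
# Minimal sketch of periodic cost flow calculations and inventory ratios.
# All quantities, prices, and balances are hypothetical illustrations;
# they are not figures from the chapter.

def periodic_fifo(lots, units_sold):
    """FIFO: the oldest costs are assigned to cost of goods sold first."""
    remaining, cogs = units_sold, 0.0
    for units, unit_cost in lots:          # lots ordered oldest -> newest
        take = min(units, remaining)
        cogs += take * unit_cost
        remaining -= take
        if remaining == 0:
            break
    return cogs

def periodic_lifo(lots, units_sold):
    """LIFO: the newest costs are assigned to cost of goods sold first."""
    return periodic_fifo(list(reversed(lots)), units_sold)

def periodic_weighted_average(lots, units_sold):
    """Weighted average: one blended unit cost for the whole period."""
    total_units = sum(units for units, _ in lots)
    total_cost = sum(units * cost for units, cost in lots)
    return units_sold * total_cost / total_units

# Beginning inventory plus purchases, oldest lot first: (units, unit cost).
lots = [(100, 21.00), (100, 27.00), (100, 33.00)]
units_sold = 120
goods_available = sum(units * cost for units, cost in lots)  # $8,100

for name, method in (("FIFO", periodic_fifo),
                     ("LIFO", periodic_lifo),
                     ("AVG", periodic_weighted_average)):
    cogs = method(lots, units_sold)
    # Goods available for sale = COGS + ending inventory, so any error in
    # ending inventory misstates COGS dollar for dollar (Section 10.4).
    ending_inventory = goods_available - cogs
    print(f"{name}: COGS = ${cogs:,.2f}, ending inventory = ${ending_inventory:,.2f}")

# Section 10.5 ratios, using the weighted-average results and an assumed
# beginning inventory balance of $4,500 for the averaging step.
cogs = periodic_weighted_average(lots, units_sold)           # $3,240
average_inventory = (4_500 + (goods_available - cogs)) / 2   # (beginning + ending) / 2
inventory_turnover = cogs / average_inventory                # times inventory "turns" per period
days_sales_in_inventory = 365 / inventory_turnover           # average days to sell inventory
print(f"Turnover: {inventory_turnover:.2f} times; days' sales: {days_sales_in_inventory:.0f} days")
```

Running the sketch reproduces the pattern described in Section 10.1 and tested in the questions below: with rising purchase prices, FIFO reports the lowest cost of goods sold and the highest ending inventory, LIFO does the reverse, and the weighted average falls between the two.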
Chapter Outline 10.1 Describe and Demonstrate the Basic Inventory Valuation Methods and Their Cost Flow Assumptions 10.2 Calculate the Cost of Goods Sold and Ending Inventory Using the Periodic Method 10.3 Calculate the Cost of Goods Sold and Ending Inventory Using the Perpetual Method 10.4 Explain and Demonstrate the Impact of Inventory Valuation Errors on the Income Statement and Balance Sheet 10.5 Examine the Efficiency of Inventory Management Using Financial Ratios Why It Matters Have you ever decided to start a healthy eating plan and meticulously planned your shopping list, including foods for meals, drinks, and snacks? Maybe you stocked your cabinets and fridge with the best healthy foods you could find, including lots of luscious-looking fruit and vegetables, to make sure that you could make tasty and healthy smoothies when you got hungry. Then, at the end of the week, if everything didn’t go as you had planned, you may have discovered that a lot of your produce was still uneaten but not very fresh anymore. Stocking up on goods, so that you will have them when you need them, is only a good idea if the goods are used before they become worthless. Just as preparation for healthy eating can backfire in wasted produce, businesses have to walk a fine line between being prepared for any volume of inventory demand that customers request and being careful not to overstock, so that the company will not be left holding excess inventory it cannot sell. Not having the goods that a customer wants available is bad, of course, but extra inventory is wasteful. That is one reason why inventory accounting is important.
[ { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> The specific identification method refers to tracking the actual cost of the item being sold and is generally used only on expensive items that are highly customized ( such as tracking detailed costs for each individual car in automobiles sales ) or inherently distinctive ( such as tracking origin and cost for each unique stone in diamond sales ) . <hl> This method is too cumbersome for goods of large quantity , especially if there are not significant feature differences in the various inventory items of each product type . However , for purposes of this demonstration , assume that the company sold one specific identifiable unit , which was purchased in the second lot of products , at a cost of $ 27 .", "hl_sentences": "The specific identification method refers to tracking the actual cost of the item being sold and is generally used only on expensive items that are highly customized ( such as tracking detailed costs for each individual car in automobiles sales ) or inherently distinctive ( such as tracking origin and cost for each unique stone in diamond sales ) .", "question": { "cloze_format": "When inventory items are highly specialized, the best inventory costing method is ________.", "normal_format": "When inventory items are highly specialized, which is the best inventory costing method?", "question_choices": [ "specific identification", "first-in, first-out", "last-in, first-out", "weighted average" ], "question_id": "fs-idm350849088", "question_text": "When inventory items are highly specialized, the best inventory costing method is ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "The seller must pay the shipping." }, "bloom": null, "hl_context": "<hl> Similarly , FOB destination means the seller transfers title and responsibility to the buyer at the destination , so the seller would owe the shipping costs . <hl> Ownership of the product is the trigger that mandates that the asset be included on the company ’ s balance sheet . <hl> In summary , the goods belong to the seller until they transition to the location following the term FOB , making the seller responsible for everything about the goods to that point , including recording purchased goods on the balance sheet . <hl> If something happens to damage or destroy the goods before they reach the FOB location , the seller would be required to replace the product or reverse the sales transaction .", "hl_sentences": "Similarly , FOB destination means the seller transfers title and responsibility to the buyer at the destination , so the seller would owe the shipping costs . In summary , the goods belong to the seller until they transition to the location following the term FOB , making the seller responsible for everything about the goods to that point , including recording purchased goods on the balance sheet .", "question": { "cloze_format": "If goods are shipped FOB destination, it is true that ___.", "normal_format": "If goods are shipped FOB destination, which of the following is true?", "question_choices": [ "Title to the goods will transfer as soon as the goods are shipped.", "FOB indicates that a price reduction has been applied to the order.", "The seller must pay the shipping.", "The seller and the buyer will each pay 50% of the cost." ], "question_id": "fs-idm327394080", "question_text": "If goods are shipped FOB destination, which of the following is true?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "The company ’ s financial statements report the combined cost of all items sold as an offset to the proceeds from those sales , producing the net number referred to as gross margin ( or gross profit ) . This is presented in the first part of the results of operations for the period on the multi-step income statement . The unsold inventory at period end is an asset to the company and is therefore included in the company ’ s financial statements , on the balance sheet , as shown in Figure 10.2 . <hl> The total cost of all the inventory that remains at period end , reported as merchandise inventory on the balance sheet , plus the total cost of the inventory that was sold or otherwise removed ( through shrinkage , theft , or other loss ) , reported as cost of goods sold on the income statement ( see Figure 10.2 ) , represent the entirety of the inventory that the company had to work with during the period , or goods available for sale . <hl>", "hl_sentences": "The total cost of all the inventory that remains at period end , reported as merchandise inventory on the balance sheet , plus the total cost of the inventory that was sold or otherwise removed ( through shrinkage , theft , or other loss ) , reported as cost of goods sold on the income statement ( see Figure 10.2 ) , represent the entirety of the inventory that the company had to work with during the period , or goods available for sale .", "question": { "cloze_format": "The financial statement on which the merchandise inventory account would appear is (the) ___ .", "normal_format": "On which financial statement would the merchandise inventory account appear?", "question_choices": [ "balance sheet", "income statement", "both balance sheet and income statement", "neither balance sheet nor income statement" ], "question_id": "fs-idm494302528", "question_text": "On which financial statement would the merchandise inventory account appear?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "inflationary times" }, "bloom": null, "hl_context": "<hl> As prices rise ( inflationary times ) , FIFO ending inventory account balances grow larger even when inventory unit counts are constant , while the income statement reflects lower cost of goods sold than the current prices for those goods , which produces higher profits than if the goods were costed with current inventory prices . <hl> <hl> Conversely , when prices fall ( deflationary times ) , FIFO ending inventory account balances decrease and the income statement reflects higher cost of goods sold and lower profits than if goods were costed at current inventory prices . <hl> The effect of inflationary and deflationary cycles on LIFO inventory valuation are the exact opposite of their effects on FIFO inventory valuation .", "hl_sentences": "As prices rise ( inflationary times ) , FIFO ending inventory account balances grow larger even when inventory unit counts are constant , while the income statement reflects lower cost of goods sold than the current prices for those goods , which produces higher profits than if the goods were costed with current inventory prices . 
Conversely , when prices fall ( deflationary times ) , FIFO ending inventory account balances decrease and the income statement reflects higher cost of goods sold and lower profits than if goods were costed at current inventory prices .", "question": { "cloze_format": "As to when the FIFO inventory costing method would produce higher inventory account balances than the LIFO method, that holds (in) ___?", "normal_format": "When would using the FIFO inventory costing method produce higher inventory account balances than the LIFO method would?", "question_choices": [ "inflationary times", "deflationary times", "always", "never" ], "question_id": "fs-idm326594704", "question_text": "When would using the FIFO inventory costing method produce higher inventory account balances than the LIFO method would?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> Reporting inventory values on the balance sheet using the accounting concept of conservatism ( which discourages overstatement of net assets and net income ) requires inventory to be calculated and adjusted to a value that is the lower of the cost calculated using the company ’ s chosen valuation method or the market value based on the market or replacement value of the inventory items . <hl> Thus , if traditional cost calculations produce inventory values that are overstated , the lower-of-cost-or-market ( LCM ) concept requires that the balance in the inventory account should be decreased to the more conservative replacement value rather than be overstated on the balance sheet .", "hl_sentences": "Reporting inventory values on the balance sheet using the accounting concept of conservatism ( which discourages overstatement of net assets and net income ) requires inventory to be calculated and adjusted to a value that is the lower of the cost calculated using the company ’ s chosen valuation method or the market value based on the market or replacement value of the inventory items .", "question": { "cloze_format": "The accounting rule that serves as the primary basis for the lower-of-cost-or-market methodology for inventory valuation is ___ .", "normal_format": "Which accounting rule serves as the primary basis for the lower-of-cost-or-market methodology for inventory valuation?", "question_choices": [ "conservatism", "consistency", "optimism", "pessimism" ], "question_id": "fs-idm355327040", "question_text": "Which accounting rule serves as the primary basis for the lower-of-cost-or-market methodology for inventory valuation?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "perpetual" }, "bloom": null, "hl_context": "A perpetual inventory system updates the inventory account balance on an ongoing basis , at the time of each individual sale . This is normally accomplished by use of auto-ID technology , such as optical-scan barcode or radio frequency identification ( RFID ) labels . <hl> As transactions occur , the perpetual system requires that every sale is recorded with two entries , first recording the sales transaction as an increase to Accounts Receivable and an increase to Sales Revenue , and then recording the cost associated with the sale as an increase to Cost of Goods Sold and a decrease to Merchandise Inventory .
<hl> The journal entries made at the time of sale immediately shift the costs relating to the goods being sold from the merchandise inventory account on the balance sheet to the cost of goods sold account on the income statement . Little or no adjustment is needed to inventory at period end because changes in the inventory balances are recorded as both the sales and purchase transactions occur . Any necessary adjustments to the ending inventory account balances would typically be caused by one of the types of shrinkage you ’ ve learned about . These are example entries for an inventory sales transaction when using perpetual inventory updating : A periodic inventory system updates the inventory balances at the end of the reporting period , typically the end of a month , quarter , or year . At that point , a journal entry is made to adjust the merchandise inventory asset balance to agree with the physical count of inventory , with the corresponding adjustment to the expense account , cost of goods sold . This adjustment shifts the costs of all inventory items that are no longer held by the company to the income statement , where the costs offset the revenue from inventory sales , as reflected by the gross margin . <hl> As sales transactions occur throughout the period , the periodic system requires that only the sales entry be recorded because costs will only be updated during end-of-period adjustments when financial statements are prepared . <hl> However , any additional goods for sale acquired during the month are recorded as purchases . Following are examples of typical journal entries for periodic transactions . The first is an example entry for an inventory sales transaction when using periodic inventory , and the second records the purchase of additional inventory when using the periodic method . Note : Periodic requires no corresponding cost entry at the time of sale , since the inventory is adjusted only at period end .", "hl_sentences": "As transactions occur , the perpetual system requires that every sale is recorded with two entries , first recording the sales transaction as an increase to Accounts Receivable and a decrease to Sales Revenue , and then recording the cost associated with the sale as an increase to Cost of Goods Sold and a decrease to Merchandise Inventory . As sales transactions occur throughout the period , the periodic system requires that only the sales entry be recorded because costs will only be updated during end-of-period adjustments when financial statements are prepared .", "question": { "cloze_format": "The type or types of inventory timing system (periodic or perpetual) that requires the user to record two journal entries every time a sale is made is ___.", "normal_format": "Which type or types of inventory timing system (periodic or perpetual) requires the user to record two journal entries every time a sale is made?", "question_choices": [ "periodic", "perpetual", "both periodic and perpetual", "neither periodic nor perpetual" ], "question_id": "fs-idm362805584", "question_text": "Which type or types of inventory timing system (periodic or perpetual) requires the user to record two journal entries every time a sale is made." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "specific identification" }, "bloom": null, "hl_context": "<hl> The specific identification costing assumption tracks inventory items individually so that , when they are sold , the exact cost of the item is used to offset the revenue from the sale . 
<hl> The cost of goods sold , inventory , and gross margin shown in Figure 10.13 were determined from the previously-stated data , particular to specific identification costing . <hl> The specific identification method refers to tracking the actual cost of the item being sold and is generally used only on expensive items that are highly customized ( such as tracking detailed costs for each individual car in automobiles sales ) or inherently distinctive ( such as tracking origin and cost for each unique stone in diamond sales ) . <hl> This method is too cumbersome for goods of large quantity , especially if there are not significant feature differences in the various inventory items of each product type . However , for purposes of this demonstration , assume that the company sold one specific identifiable unit , which was purchased in the second lot of products , at a cost of $ 27 . <hl> A perpetual inventory system updates the inventory account balance on an ongoing basis , at the time of each individual sale . <hl> This is normally accomplished by use of auto-ID technology , such as optical-scan barcode or radio frequency identification ( RFIF ) labels . As transactions occur , the perpetual system requires that every sale is recorded with two entries , first recording the sales transaction as an increase to Accounts Receivable and a decrease to Sales Revenue , and then recording the cost associated with the sale as an increase to Cost of Goods Sold and a decrease to Merchandise Inventory . The journal entries made at the time of sale immediately shift the costs relating to the goods being sold from the merchandise inventory account on the balance sheet to the cost of goods sold account on the income statement . Little or no adjustment is needed to inventory at period end because changes in the inventory balances are recorded as both the sales and purchase transactions occur . Any necessary adjustments to the ending inventory account balances would typically be caused by one of the types of shrinkage you ’ ve learned about . These are example entries for an inventory sales transaction when using perpetual inventory updating : <hl> Inventory costing is accomplished by one of four specific costing methods : ( 1 ) specific identification , ( 2 ) first-in , first-out , ( 3 ) last-in , first-out , and ( 4 ) weighted-average cost methods . <hl> All four methods are techniques that allow management to distribute the costs of inventory in a logical and consistent manner , to facilitate matching of costs to offset the related revenue item that is recognized during the period , in accordance with GAAP expense recognition and matching concepts . Note that a company ’ s cost allocation process represents management ’ s chosen method for expensing product costs , based strictly on estimates of the flow of inventory costs , which is unrelated to the actual flow of the physical inventory . Use of a cost allocation strategy eliminates the need for often cost-prohibitive individual tracking of costs of each specific inventory item , for which purchase prices may vary greatly . 
In this chapter , you will be provided with some background concepts and explanations of terms associated with inventory as well as a basic demonstration of each of the four allocation methods , and then further delineation of the application and nuances of the costing methods .", "hl_sentences": "The specific identification costing assumption tracks inventory items individually so that , when they are sold , the exact cost of the item is used to offset the revenue from the sale . The specific identification method refers to tracking the actual cost of the item being sold and is generally used only on expensive items that are highly customized ( such as tracking detailed costs for each individual car in automobile sales ) or inherently distinctive ( such as tracking origin and cost for each unique stone in diamond sales ) . A perpetual inventory system updates the inventory account balance on an ongoing basis , at the time of each individual sale . Inventory costing is accomplished by one of four specific costing methods : ( 1 ) specific identification , ( 2 ) first-in , first-out , ( 3 ) last-in , first-out , and ( 4 ) weighted-average cost methods .", "question": { "cloze_format": "An inventory costing method that is almost always done on a perpetual basis is ___ .", "normal_format": "Which inventory costing method is almost always done on a perpetual basis?", "question_choices": [ "specific identification", "first-in, first-out", "last-in, first-out", "weighted average" ], "question_id": "fs-idm210640480", "question_text": "Which inventory costing method is almost always done on a perpetual basis?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> As you ’ ve learned , the perpetual inventory system is updated continuously to reflect the current status of inventory on an ongoing basis . <hl> <hl> Modern sales activity commonly uses electronic identifiers — such as bar codes and RFID technology — to account for inventory as it is purchased , monitored , and sold . <hl> Specific identification inventory methods also commonly use a manual form of the perpetual system . Here we ’ ll demonstrate the mechanics implemented when using perpetual inventory systems in inventory accounting , whether those calculations are orchestrated in a laborious manual system or electronically ( in the latter , the inventory accounting operates effortlessly behind the scenes but nonetheless utilizes the same perpetual methodology ) . <hl> A perpetual inventory system updates the inventory account balance on an ongoing basis , at the time of each individual sale . <hl> <hl> This is normally accomplished by use of auto-ID technology , such as optical-scan barcode or radio frequency identification ( RFID ) labels . <hl> As transactions occur , the perpetual system requires that every sale is recorded with two entries , first recording the sales transaction as an increase to Accounts Receivable and an increase to Sales Revenue , and then recording the cost associated with the sale as an increase to Cost of Goods Sold and a decrease to Merchandise Inventory . The journal entries made at the time of sale immediately shift the costs relating to the goods being sold from the merchandise inventory account on the balance sheet to the cost of goods sold account on the income statement .
Little or no adjustment is needed to inventory at period end because changes in the inventory balances are recorded as both the sales and purchase transactions occur . Any necessary adjustments to the ending inventory account balances would typically be caused by one of the types of shrinkage you ’ ve learned about . These are example entries for an inventory sales transaction when using perpetual inventory updating :", "hl_sentences": "As you ’ ve learned , the perpetual inventory system is updated continuously to reflect the current status of inventory on an ongoing basis . Modern sales activity commonly uses electronic identifier s — such as bar codes and RFID technology — to account for inventory as it is purchased , monitored , and sold . A perpetual inventory system updates the inventory account balance on an ongoing basis , at the time of each individual sale . This is normally accomplished by use of auto-ID technology , such as optical-scan barcode or radio frequency identification ( RFIF ) labels .", "question": { "cloze_format": "A description of a perpetual inventory system is that ___.", "normal_format": "Which of the following describes features of a perpetual inventory system?", "question_choices": [ "Technology is normally used to record inventory changes.", "Merchandise bought is recorded as purchases.", "An adjusting journal entry is required at year end, to match physical counts to the asset account.", "Inventory is updated at the end of the period." ], "question_id": "fs-idm170378704", "question_text": "Which of the following describes features of a perpetual inventory system?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "both statements" }, "bloom": null, "hl_context": "Understanding this interaction between inventory assets ( merchandise inventory balances ) and inventory expense ( cost of goods sold ) highlights the impact of errors . <hl> Errors in the valuation of ending merchandise inventory , which is on the balance sheet , produce an equivalent corresponding error in the company ’ s cost of goods sold for the period , which is on the income statement . <hl> When cost of goods sold is overstated , inventory and net income are understated . When cost of goods sold is understated , inventory and net income are overstated . Further , an error in ending inventory carries into the next period , since ending inventory of one period becomes the beginning inventory of the next period , causing both the balance sheet and the income statement values to be wrong in year two as well as in the year of the error . Over a two-year period , misstatements of ending inventory will balance themselves out . For example , an overstatement to ending inventory overstates net income , but next year , since ending inventory becomes beginning inventory , it understates net income . So over a two-year period , this corrects itself . However , financial statements are prepared for one period , so all this means is that two years of cost of goods sold are misstated ( the first year is overstated / understated , and the second year is understated / overstated . ) <hl> Because of the dynamic relationship between cost of goods sold and merchandise inventory , errors in inventory counts have a direct and significant impact on the financial statements of the company . 
<hl> Errors in inventory valuation cause mistaken values to be reported for merchandise inventory and cost of goods sold due to the toggle effect that changes in either one of the two accounts have on the other . As explained , the company has a finite amount of inventory that they can work with during a given period of business operations , such as a year . This limited quantity of goods is known as goods available for sale and is sourced from", "hl_sentences": "Errors in the valuation of ending merchandise inventory , which is on the balance sheet , produce an equivalent corresponding error in the company ’ s cost of goods sold for the period , which is on the income statement . Because of the dynamic relationship between cost of goods sold and merchandise inventory , errors in inventory counts have a direct and significant impact on the financial statements of the company .", "question": { "cloze_format": "The financial statement that would be impacted by a current-year ending inventory error, when using a periodic inventory updating system is the ___ .", "normal_format": "Which of the following financial statements would be impacted by a current-year ending inventory error, when using a periodic inventory updating system?", "question_choices": [ "balance sheet", "income statement", "neither statement", "both statements" ], "question_id": "fs-idm350849344", "question_text": "Which of the following financial statements would be impacted by a current-year ending inventory error, when using a periodic inventory updating system?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> In periodic inventory systems , inventory errors commonly arise from careless oversight of physical counts . <hl> <hl> Another common cause of periodic inventory errors results from management neglecting to take the physical count . <hl> <hl> Both perpetual and periodic updating inventory systems also face potential errors relating to ownership transfers during transportation ( relating to FOB shipping point and FOB destination terms ); losses in value due to shrinkage , theft , or obsolescence ; and consignment inventory , the goods for which should never be included in the retailer ’ s inventory but should be recorded as an asset of the consignor , who remains the legal owner of the goods until they are sold . <hl> <hl> Similarly , FOB destination means the seller transfers title and responsibility to the buyer at the destination , so the seller would owe the shipping costs . <hl> Ownership of the product is the trigger that mandates that the asset be included on the company ’ s balance sheet . In summary , the goods belong to the seller until they transition to the location following the term FOB , making the seller responsible for everything about the goods to that point , including recording purchased goods on the balance sheet . If something happens to damage or destroy the goods before they reach the FOB location , the seller would be required to replace the product or reverse the sales transaction .", "hl_sentences": "In periodic inventory systems , inventory errors commonly arise from careless oversight of physical counts . Another common cause of periodic inventory errors results from management neglecting to take the physical count . 
Both perpetual and periodic updating inventory systems also face potential errors relating to ownership transfers during transportation ( relating to FOB shipping point and FOB destination terms ); losses in value due to shrinkage , theft , or obsolescence ; and consignment inventory , the goods for which should never be included in the retailer ’ s inventory but should be recorded as an asset of the consignor , who remains the legal owner of the goods until they are sold . Similarly , FOB destination means the seller transfers title and responsibility to the buyer at the destination , so the seller would owe the shipping costs .", "question": { "cloze_format": "Overstating periodic ending inventory would be caused by the following: ___.", "normal_format": "Which of the following would cause periodic ending inventory to be overstated?", "question_choices": [ "Goods held on consignment are omitted from the physical count.", "Goods purchased and delivered, but not yet paid for, are included in the physical count.", "Purchased goods shipped FOB destination and not yet delivered are included in the physical count.", "None of the above" ], "question_id": "fs-idm326238448", "question_text": "Which of the following would cause periodic ending inventory to be overstated?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "increasing inventory turnover ratio" }, "bloom": null, "hl_context": "<hl> Year 2 ’ s number of days ’ sales in inventory ratio increased over year 1 ’ s ratio results , indicating an unfavorable change . <hl> <hl> This result would alert management that it is taking much too long to sell the inventory , so reduction in the inventory balance might be appropriate , or as an alternative , increased sales efforts could turn the ratio toward a more positive trend . <hl> This ratio is useful to identify cases of obsolescence , which is especially prevalent in an evolving market , such as the technology sector of the economy . As with any ratio , comparison should also be made to competitor and industry ratios , while consideration should also be given to other factors affecting the company ’ s financial health , as well as to the strength of the overall market economy . <hl> The fact that the year 2 inventory turnover ratio is lower than the year 1 ratio is not a positive trend . <hl> This result would alert management that the inventory balance might be too high to be practical for this volume of sales . Comparison should also be made to competitor and industry ratios , while consideration should also be given to other factors affecting the company ’ s financial health as well as the strength of the overall market economy . <hl> Inventory ratio analysis relates to how well the inventory is being managed . <hl> Two ratios can be used to assess how efficiently management is handling inventory . <hl> The first ratio , inventory turnover , measures the number of times an average quantity of inventory was bought and sold during the period . <hl> <hl> The second ratio , number of days ’ sales in inventory , measures how many days it takes to complete the cycle between buying and selling inventory . <hl>", "hl_sentences": "Year 2 ’ s number of days ’ sales in inventory ratio increased over year 1 ’ s ratio results , indicating an unfavorable change . 
This result would alert management that it is taking much too long to sell the inventory , so reduction in the inventory balance might be appropriate , or as an alternative , increased sales efforts could turn the ratio toward a more positive trend . The fact that the year 2 inventory turnover ratio is lower than the year 1 ratio is not a positive trend . Inventory ratio analysis relates to how well the inventory is being managed . The first ratio , inventory turnover , measures the number of times an average quantity of inventory was bought and sold during the period . The second ratio , number of days ’ sales in inventory , measures how many days it takes to complete the cycle between buying and selling inventory .", "question": { "cloze_format": "A positive trend for inventory management is indicated by the ___ .", "normal_format": "Which of the following indicates a positive trend for inventory management?", "question_choices": [ "increasing number of days’ sales in inventory ratio", "increasing inventory turnover ratio", "increasing cost of goods sold", "increasing sales revenue" ], "question_id": "fs-idm231699696", "question_text": "Which of the following indicates a positive trend for inventory management?" }, "references_are_paraphrase": null } ]
10
10.1 Describe and Demonstrate the Basic Inventory Valuation Methods and Their Cost Flow Assumptions Accounting for inventory is a critical function of management. Inventory accounting is significantly complicated by the fact that it is an ongoing process of constant change, in part because (1) most companies offer a large variety of products for sale, (2) product purchases occur at irregular times, (3) products are acquired for differing prices, and (4) inventory acquisitions are based on sales projections, which are always uncertain and often sporadic. Merchandising companies must meticulously account for every individual product that they sell, equipping them with essential information for decisions such as these: What is the quantity of each product that is available to customers? When should inventory of each product item be replenished and at what quantity? How much should the company charge customers for each product to cover all costs plus profit margin? How much of the inventory cost should be allocated toward the units sold (cost of goods sold) during the period? How much of the inventory cost should be allocated toward the remaining units (ending inventory) at the end of the period? Is each product moving robustly, or has activity decreased for some individual inventory items? Are some inventory items obsolete? The company’s financial statements report the combined cost of all items sold as an offset to the proceeds from those sales, producing the net number referred to as gross margin (or gross profit). This is presented in the first part of the results of operations for the period on the multi-step income statement. The unsold inventory at period end is an asset to the company and is therefore included in the company’s financial statements, on the balance sheet, as shown in Figure 10.2. The total cost of all the inventory that remains at period end, reported as merchandise inventory on the balance sheet, plus the total cost of the inventory that was sold or otherwise removed (through shrinkage, theft, or other loss), reported as cost of goods sold on the income statement (see Figure 10.2), represents the entirety of the inventory that the company had to work with during the period, or goods available for sale. Fundamentals of Inventory Although our discussion will consider inventory issues from the perspective of a retail company, using a resale or merchandising operation, inventory accounting also encompasses recording and reporting of manufacturing operations. In the manufacturing environment, there would be separate inventory calculations for the various process levels of inventory, such as raw materials, work in process, and finished goods. The manufacturer’s finished goods inventory is equivalent to the merchandiser’s inventory account in that it includes finished goods that are available for sale. In merchandising companies, inventory is a company asset that includes beginning inventory plus purchases, which include all additions to inventory during the period. Every time the company sells products to customers, they dispose of a portion of the company’s inventory asset. Goods available for sale refers to the total cost of all inventory that the company had on hand at any time during the period, including beginning inventory and all inventory purchases.
These goods are normally either sold to customers during the period (or occasionally lost to spoilage, theft, damage, or other types of shrinkage) and thus reported as cost of goods sold, an expense account on the income statement, or they remain in inventory at the end of the period and are reported as ending merchandise inventory, an asset account on the balance sheet. As an example, assume that Harry’s Auto Parts Store sells oil filters. Suppose that at the end of January 2018, they had 50 oil filters on hand at a cost of $7 per unit. This means that at the beginning of February, they had 50 units in inventory at a total cost of $350 (50 × $7). During the month, they purchased 20 filters at a cost of $7, for a total cost of $140 (20 × $7). At the end of the month, there were 18 units left in inventory. Therefore, during the month of February, they sold 52 units. Figure 10.3 illustrates how to calculate the goods available for sale and the cost of goods sold. Inventory costing is accomplished by one of four specific costing methods: (1) specific identification, (2) first-in, first-out, (3) last-in, first-out, and (4) weighted-average cost methods. All four methods are techniques that allow management to distribute the costs of inventory in a logical and consistent manner, to facilitate matching of costs to offset the related revenue item that is recognized during the period, in accordance with GAAP expense recognition and matching concepts. Note that a company’s cost allocation process represents management’s chosen method for expensing product costs, based strictly on estimates of the flow of inventory costs, which is unrelated to the actual flow of the physical inventory. Use of a cost allocation strategy eliminates the need for often cost-prohibitive individual tracking of costs of each specific inventory item, for which purchase prices may vary greatly. In this chapter, you will be provided with some background concepts and explanations of terms associated with inventory as well as a basic demonstration of each of the four allocation methods, and then further delineation of the application and nuances of the costing methods. A critical issue for inventory accounting is the frequency with which inventory values are updated. There are two primary methods used to account for inventory balance timing changes: the periodic inventory method and the perpetual inventory method. These two methods were addressed in depth in Merchandising Transactions. Periodic Inventory Method A periodic inventory system updates the inventory balances at the end of the reporting period, typically the end of a month, quarter, or year. At that point, a journal entry is made to adjust the merchandise inventory asset balance to agree with the physical count of inventory, with the corresponding adjustment to the expense account, cost of goods sold. This adjustment shifts the costs of all inventory items that are no longer held by the company to the income statement, where the costs offset the revenue from inventory sales, as reflected by the gross margin. As sales transactions occur throughout the period, the periodic system requires that only the sales entry be recorded because costs will only be updated during end-of-period adjustments when financial statements are prepared. However, any additional goods for sale acquired during the month are recorded as purchases.
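To make the period-end arithmetic concrete, the following minimal sketch (with illustrative names; it is not from the text) reproduces the Harry’s Auto Parts figures above:

```python
# Periodic updating: cost of goods sold is derived once, at period end,
# from beginning inventory, purchases, and the physical count.

def periodic_cogs(beginning_cost, purchases_cost, ending_cost):
    """Goods available for sale minus ending inventory equals cost of goods sold."""
    return (beginning_cost + purchases_cost) - ending_cost

beginning = 50 * 7   # 50 oil filters on hand at $7 each = $350
purchases = 20 * 7   # 20 filters purchased at $7 each = $140
ending = 18 * 7      # physical count: 18 filters left at $7 each = $126

print("Goods available for sale:", beginning + purchases)                             # 490
print("Cost of goods sold (52 units):", periodic_cogs(beginning, purchases, ending))  # 364
```

Following are examples of typical journal entries for periodic transactions.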
The first is an example entry for an inventory sales transaction when using periodic inventory, and the second records the purchase of additional inventory when using the periodic method. Note: Periodic requires no corresponding cost entry at the time of sale, since the inventory is adjusted only at period end. A purchase of inventory for sale by a company under the periodic inventory method would necessitate the following journal entry. (This is discussed in more depth in Merchandising Transactions.) Perpetual Inventory Method A perpetual inventory system updates the inventory account balance on an ongoing basis, at the time of each individual sale. This is normally accomplished by use of auto-ID technology, such as optical-scan barcode or radio frequency identification (RFID) labels. As transactions occur, the perpetual system requires that every sale is recorded with two entries, first recording the sales transaction as an increase to Accounts Receivable and an increase to Sales Revenue, and then recording the cost associated with the sale as an increase to Cost of Goods Sold and a decrease to Merchandise Inventory. The journal entries made at the time of sale immediately shift the costs relating to the goods being sold from the merchandise inventory account on the balance sheet to the cost of goods sold account on the income statement. Little or no adjustment is needed to inventory at period end because changes in the inventory balances are recorded as both the sales and purchase transactions occur. Any necessary adjustments to the ending inventory account balances would typically be caused by one of the types of shrinkage you’ve learned about. These are example entries for an inventory sales transaction when using perpetual inventory updating: A purchase of inventory for sale by a company under the perpetual inventory method would necessitate the following journal entry. (Greater detail is provided in Merchandising Transactions.) Continuing Application Inventory As previously discussed, Gearhead Outfitters is a retail chain selling outdoor gear and accessories. As such, the company is faced with many possible questions related to inventory. How much inventory should be carried? What products are the most profitable? Which products have the most sales? Which products are obsolete? What timeframe should the company allow for inventory to be replenished? Which products are the most in demand at each location? In addition to questions related to type, volume, obsolescence, and lead time, there are many issues related to accounting for inventory and the flow of goods. As one of the biggest assets of the company, the way inventory is tracked can have an effect on profit. Which method of accounting (first-in, first-out; last-in, first-out; specific identification; or weighted average) provides the most accurate reflection of inventory and cost of goods sold is important in determining gross profit and net income. The method selected affects profits and taxes, and can even change the opinion of potential lenders concerning the financial strength of the company. In choosing a method of accounting for inventory, management should consider many factors, including the accurate reflection of costs, taxes on profits, decision-making about purchases, and what effect a point-of-sale (POS) system may have on tracking inventory. Gearhead exists to provide a positive shopping experience for its customers.
Offering a clear picture of its goods, and maintaining an appealing, timely supply at competitive prices, is one way to keep the shopping experience positive. Thus, accounting for inventory plays an instrumental role in management’s ability to successfully run a company and deliver the company’s promise to customers. Data for Demonstration of the Four Basic Inventory Valuation Methods The following dataset will be used to demonstrate the application and analysis of the four methods of inventory accounting. Company: Spy Who Loves You Corporation Product: Global Positioning System (GPS) Tracking Device Description: This product is an economical real-time GPS tracking device, designed for individuals who wish to monitor others’ whereabouts. It is marketed to parents of middle school and high school students as a safety measure. Parents benefit by being apprised of the child’s location, and the student benefits by not having to constantly check in with parents. Demand for the product has spiked during the current fiscal period, while supply is limited, causing the selling price to escalate rapidly. Specific Identification Method The specific identification method refers to tracking the actual cost of the item being sold and is generally used only on expensive items that are highly customized (such as tracking detailed costs for each individual car in automobile sales) or inherently distinctive (such as tracking origin and cost for each unique stone in diamond sales). This method is too cumbersome for goods of large quantity, especially if there are not significant feature differences in the various inventory items of each product type. However, for purposes of this demonstration, assume that the company sold one specific identifiable unit, which was purchased in the second lot of products, at a cost of $27. Three separate lots of goods are purchased: First-in, First-out (FIFO) Method The first-in, first-out method (FIFO) records costs relating to a sale as if the earliest purchased item would be sold first. However, the physical flow of the units sold under both the periodic and perpetual methods would be the same. Due to the mechanics of determining cost of goods sold under the perpetual method, which depend on the timing of additional inventory purchases during the accounting period, the cost of goods sold may be slightly different for an accounting period. Since FIFO assumes that the first items purchased are sold first, the latest acquisitions would be the items that remain in inventory at the end of the period and would constitute ending inventory. Three separate lots of goods are purchased: Last-in, First-out (LIFO) Method The last-in, first-out method (LIFO) records costs relating to a sale as if the latest purchased item would be sold first. As a result, the earliest acquisitions would be the items that remain in inventory at the end of the period. Three separate lots of goods are purchased: IFRS Connection Inventory For many companies, inventory is a significant portion of the company’s assets. In 2018, the inventory of Walmart, the world’s largest international retailer, was 70% of current assets and 21% of total assets. Because inventory also affects income as it is sold through the cost of goods sold account, inventory plays a significant role in the analysis and evaluation of many companies. Ending inventory affects both the balance sheet and the income statement.
As you’ve learned, the ending inventory balance is reflected as a current asset on the balance sheet and is used in the calculation of cost of goods sold. Understanding how companies report inventory under US GAAP versus under IFRS is important when comparing companies that report under the two frameworks, particularly because of one significant difference between them. Similarities When inventory is purchased, it is accounted for at historical cost and then evaluated at each balance sheet date to adjust to the lower of cost or net realizable value. Both IFRS and US GAAP allow FIFO and weighted-average cost flow assumptions as well as specific identification where appropriate and applicable. Differences IFRS does not permit the use of LIFO. This is a major difference between US GAAP and IFRS. The AICPA estimates that roughly 35–40% of all US companies use LIFO, and in some industries, such as oil and gas, the use of LIFO is more prevalent. Because LIFO generates lower taxable income during times of rising prices, it is estimated that eliminating LIFO would generate $102 billion in tax revenues in the US for the period 2017–2026. In creating IFRS, the IASB chose to eliminate LIFO, arguing that FIFO more closely matches the flow of goods. In the US, FASB believes the choice between LIFO and FIFO is a business model decision that should be left up to each company. In addition, there was significant pressure by some companies and industries to retain LIFO because of the significant tax liability that would arise for many companies from the elimination of LIFO. Weighted-Average Cost Method The weighted-average cost method (sometimes referred to as the average cost method) requires a calculation of the average cost of all units of each particular inventory item. The average is obtained by multiplying the number of units by the cost paid per unit for each lot of goods, then adding the calculated total value of all lots together, and finally dividing the total cost by the total number of units for that product. As a caveat relating to the average cost method, note that a new average cost must be calculated after every change in inventory to reassess the per-unit weighted-average value of the goods. This laborious requirement might make use of the average method cost-prohibitive. Three separate lots of goods are purchased: Comparing the various costing methods for the sale of one unit in this simple example reveals a significant difference that the choice of cost allocation method can make. Note that the sales price is not affected by the cost assumptions; only the cost amount varies, depending on which method is chosen. Figure 10.4 depicts the different outcomes that the four methods produced. Once the methods of costing are determined for the company, that methodology would typically be applied repeatedly over the remainder of the company’s history to accomplish the generally accepted accounting principle of consistency from one period to another. It is possible to change methods if the company finds that a different method more accurately reflects results of operations, but the change requires disclosure in the company’s notes to the financial statements, which alerts financial statement users of the impact of the change in methodology.
Also, it is important to realize that although the Internal Revenue Service generally allows companies to use different accounting methods for tax purposes than for financial statement purposes, an exception exists that prohibits the use of LIFO inventory costing on the company tax return unless LIFO is also used for the financial statement costing calculations. Ethical Considerations Auditors Look for Inventory Fraud Inventory fraud can be used to book false revenue or to increase the amount of assets to obtain additional lending from a bank or other sources. In the typical chain of accounting events, inventory ultimately becomes an expense item known as cost of goods sold. 1 In a manipulated accounting system, a trail of fraudulent transactions can point to accounting misrepresentation in the sales cycle, which may include recording fictitious and nonexistent inventory, manipulation of inventory counts during a facility audit, recording of sales but no recording of purchases, and/or fraudulent inventory capitalization, to list a few. 2 All these elaborate schemes have the same goal: to improperly manipulate inventory values to support the creation of a fraudulent financial statement. Accountants have an ethical, moral, and legal duty to not commit accounting and financial statement fraud. Auditors have a duty to look for such inventory fraud. Auditors follow the Statement on Auditing Standards (SAS) No. 99 and AU Section 316 Consideration of Fraud in a Financial Statement Audit when auditing a company’s books. Auditors are outside accountants hired to “obtain reasonable assurance about whether the financial statements are free of material misstatement, whether caused by error or fraud.” 3 Ultimately, an auditor will prepare an audit report based on the testing of the balances in a company’s books, and a review of the company’s accounting system. The auditor is to perform “procedures at locations on a surprise or unannounced basis, for example, observing inventory on unexpected dates or at unexpected locations or counting cash on a surprise basis.” 4 Such testing of a company’s inventory system is used to catch accounting fraud. It is the responsibility of the accountant to present accurate accounting records to the auditor, and for the auditor to create auditing procedures that reasonably ensure that the inventory balances are free of material misstatements. 1 “Inventory Fraud: Knowledge Is Your First Line of Defense.” Weaver. Mar. 27, 2015. https://weaver.com/blog/inventory-fraud-knowledge-your-first-line-defense 2 Wells, Joseph T. “Ghost Goods: How to Spot Phantom Inventory.” Journal of Accountancy. June 1, 2001. https://www.journalofaccountancy.com/issues/2001/jun/ghostgoodshowtospotphantominventory.html 3 American Institute of Certified Public Accountants (AICPA). Consideration of Fraud in a Financial Statement Audit (AU Section 316). https://www.aicpa.org/Research/Standards/AuditAttest/DownloadableDocuments/AU-00316.pdf 4 American Institute of Certified Public Accountants (AICPA). Consideration of Fraud in a Financial Statement Audit (AU Section 316). https://www.aicpa.org/Research/Standards/AuditAttest/DownloadableDocuments/AU-00316.pdf Additional Inventory Issues Various other issues that affect inventory accounting include consignment sales, transportation and ownership issues, inventory estimation tools, and the effects of inflationary versus deflationary cycles on various methods.
Consignment Consigned goods refer to merchandise inventory that belongs to a third party but which is displayed for sale by the company. These goods are not owned by the company and thus must not be included on the company’s balance sheet nor be used in the company’s inventory calculations. The company’s profit relating to consigned goods is normally limited to a percentage of the sales proceeds at the time of sale. For example, assume that you sell your office and your current furniture doesn’t match your new building. One way to dispose of the furniture would be to have a consignment shop sell it. The shop would keep a percentage of the sales revenue and pay you the remaining balance. Assume in this example that the shop will keep one-third of the sales proceeds and pay you the remaining two-thirds balance. If the furniture sells for $15,000, you would receive $10,000 and the shop would keep the remaining $5,000 as its sales commission. A key point to remember is that until the inventory, in this case your office furniture, is sold, you still own it, and it is reported as an asset on your balance sheet and not an asset for the consignment shop. After the sale, the buyer is the owner, so the consignment shop is never the property’s owner. Free on Board (FOB) Shipping and Destination Transportation costs are commonly assigned to either the buyer or the seller based on the free on board (FOB) terms, as the terms relate to the seller. Transportation costs are part of the responsibilities of the owner of the product, so determining the owner at the shipping point identifies who should pay for the shipping costs. The seller’s responsibility and ownership of the goods ends at the point that is listed after the FOB designation. Thus, FOB shipping point means that the seller transfers title and responsibility to the buyer at the shipping point, so the buyer would owe the shipping costs. The purchased goods would be recorded on the buyer’s balance sheet at this point. Similarly, FOB destination means the seller transfers title and responsibility to the buyer at the destination, so the seller would owe the shipping costs. Ownership of the product is the trigger that mandates that the asset be included on the company’s balance sheet. In summary, the goods belong to the seller until they transition to the location following the term FOB, making the seller responsible for everything about the goods to that point, including recording purchased goods on the balance sheet. If something happens to damage or destroy the goods before they reach the FOB location, the seller would be required to replace the product or reverse the sales transaction. Lower-of-Cost-or-Market (LCM) Reporting inventory values on the balance sheet using the accounting concept of conservatism (which discourages overstatement of net assets and net income) requires inventory to be calculated and adjusted to a value that is the lower of the cost calculated using the company’s chosen valuation method or the market value based on the market or replacement value of the inventory items. Thus, if traditional cost calculations produce inventory values that are overstated, the lower-of-cost-or-market (LCM) concept requires that the balance in the inventory account should be decreased to the more conservative replacement value rather than be overstated on the balance sheet.
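As a minimal sketch of the LCM comparison, assuming hypothetical items and amounts (none of the figures below come from the text):

```python
# Lower-of-cost-or-market, applied item by item: report each inventory item
# at the lower of its recorded cost or its market (replacement) value.
# The items and amounts below are hypothetical.

inventory = [
    ("oil filter", 7.00, 6.50),   # (item, cost, market): market fell below cost
    ("air filter", 9.00, 9.25),   # market above cost: stays at cost
]

for name, cost, market in inventory:
    reported = min(cost, market)  # conservatism: never report above cost
    print(f"{name}: reported at ${reported:.2f} per unit")
```

Applying the rule item by item, as here, is the most conservative of the common application levels. Estimating Inventory Costs: Gross Profit Method and Retail Inventory Method Sometimes companies have a need to estimate inventory values.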
These estimates could be needed for interim reports, when physical counts are not taken. The need could result from a natural disaster that destroys part or all of the inventory or from an error that causes inventory counts to be compromised or omitted. Some specific industries (such as select retail businesses) also regularly use these estimation tools to determine cost of goods sold. Although these methods are predictable and simple, they are also less accurate since they are based on estimates rather than actual cost figures. The gross profit method is used to estimate inventory values by applying a standard gross profit percentage to the company’s sales totals when a physical count is not possible. The resulting gross profit can then be subtracted from sales, leaving an estimated cost of goods sold. Then the ending inventory can be calculated by subtracting cost of goods sold from the total goods available for sale. Likewise, the retail inventory method estimates the cost of goods sold, much like the gross profit method does, but uses the retail value of the portions of inventory rather than the cost figures used in the gross profit method.
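A brief sketch of the gross profit estimation just described, using entirely hypothetical figures (a 40% gross profit percentage, $100,000 of sales, and $80,000 of goods available for sale):

```python
# Gross profit method: estimate ending inventory without a physical count
# by assuming the historical gross profit percentage held during the period.

sales = 100_000            # hypothetical period sales
gross_profit_pct = 0.40    # hypothetical standard gross profit percentage
goods_available = 80_000   # beginning inventory + purchases, at cost

estimated_gross_profit = sales * gross_profit_pct      # $40,000
estimated_cogs = sales - estimated_gross_profit        # $60,000
estimated_ending = goods_available - estimated_cogs    # $20,000

print(f"Estimated cost of goods sold: ${estimated_cogs:,.0f}")
print(f"Estimated ending inventory: ${estimated_ending:,.0f}")
```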
Inflationary Versus Deflationary Cycles As prices rise (inflationary times), FIFO ending inventory account balances grow larger even when inventory unit counts are constant, while the income statement reflects lower cost of goods sold than the current prices for those goods, which produces higher profits than if the goods were costed with current inventory prices. Conversely, when prices fall (deflationary times), FIFO ending inventory account balances decrease and the income statement reflects higher cost of goods sold and lower profits than if goods were costed at current inventory prices. The effects of inflationary and deflationary cycles on LIFO inventory valuation are the exact opposite of their effects on FIFO inventory valuation. Link to Learning Accounting Coach does a great job in explaining inventory issues (and so many other accounting topics too): Learn more about inventory and cost of goods sold on their website. Think It Through First-in, First-out (FIFO) Suppose you are the assistant controller for a retail establishment that is an independent bookseller. The company uses manual, periodic inventory updating, using physical counts at year end, and the FIFO method for inventory costing. How would you approach the subject of whether the company should consider switching to computerized perpetual inventory updating? Can you present a persuasive argument for the benefits of perpetual? Explain. 10.2 Calculate the Cost of Goods Sold and Ending Inventory Using the Periodic Method As you’ve learned, the periodic inventory system is updated at the end of the period to adjust inventory numbers to match the physical count and provide accurate merchandise inventory values for the balance sheet. The adjustment ensures that only the inventory costs that remain on hand are recorded, and the remainder of the goods available for sale are expensed on the income statement as cost of goods sold. Here we will demonstrate the mechanics used to calculate the ending inventory values using the four cost allocation methods and the periodic inventory system. Information Relating to All Cost Allocation Methods, but Specific to Periodic Inventory Updating Let’s return to the example of The Spy Who Loves You Corporation to demonstrate the four cost allocation methods, assuming inventory is updated at the end of the period using the periodic system. Cost Data for Calculations Company: Spy Who Loves You Corporation Product: Global Positioning System (GPS) Tracking Device Description: This product is an economical real-time GPS tracking device, designed for individuals who wish to monitor others’ whereabouts. It is being marketed to parents of middle school and high school students as a safety measure. Parents benefit by being apprised of the child’s location, and the student benefits by not having to constantly check in with parents. Demand for the product has spiked during the current fiscal period, while supply is limited, causing the selling price to escalate rapidly. Note: For simplicity of demonstration, beginning inventory cost is assumed to be $21 per unit for all cost assumption methods. Specific Identification The specific units assumed to be sold in this period are designated as follows, with the specific inventory distinction being associated with the lot numbers: Sold 120 units, all from Lot 1 (beginning inventory), costing $21 per unit. Sold 180 units: 20 from Lot 1 (beginning inventory), costing $21 per unit, and 160 from Lot 2 (July 10 purchase), costing $27 per unit. The specific identification method of cost allocation directly tracks each of the units purchased and costs them out as they are actually sold. In this demonstration, assume that some sales were made by specifically tracked goods that are part of a lot, as previously stated for this method. So for The Spy Who Loves You, considering the entire period together, note that 140 of the 150 units that were purchased for $21 were sold, leaving 10 of the $21 units remaining; 160 of the 225 units that were purchased for $27 were sold, leaving 65 of the $27 units remaining; and none of the 210 units that were purchased for $33 were sold, leaving all 210 of the $33 units remaining. Ending inventory was made up of 10 units at $21 each, 65 units at $27 each, and 210 units at $33 each, for a total specific identification ending inventory value of $8,895. Subtracting this ending inventory from the $16,155 total of goods available for sale leaves $7,260 in cost of goods sold this period. Calculations of Costs of Goods Sold, Ending Inventory, and Gross Margin, Specific Identification The specific identification costing assumption tracks inventory items individually, so that when they are sold, the exact cost of the item is used to offset the revenue from the sale. The cost of goods sold, inventory, and gross margin shown in Figure 10.5 were determined from the previously-stated data, particular to specific identification costing. The gross margin, resulting from the specific identification periodic cost allocations of $7,260, is shown in Figure 10.6. Calculation for the Ending Inventory Adjustment under Periodic/Specific Identification Methods Merchandise inventory, before adjustment, had a balance of $3,150, which was the beginning inventory. Journal entries are not shown, but the following calculations provide the information that would be used in recording the necessary journal entries. The inventory at the end of the period should be $8,895, requiring an entry to increase merchandise inventory by $5,745. Cost of goods sold was calculated to be $7,260, which should be recorded as an expense. The credit entry to balance the adjustment is $13,005, which is the total amount that was recorded as purchases for the period. This entry distributes the balance in the purchases account between the inventory that was sold (cost of goods sold) and the amount of inventory that remains at period end (merchandise inventory).
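The specific identification arithmetic above can be reproduced in a few lines; the structure and names are illustrative only:

```python
# Specific identification, periodic: cost out exactly the units identified as sold.

lots = {"Lot 1": (150, 21), "Lot 2": (225, 27), "Lot 3": (210, 33)}  # (units, unit cost)
sold = {"Lot 1": 140, "Lot 2": 160, "Lot 3": 0}  # units identified as sold, per lot

goods_available = sum(units * cost for units, cost in lots.values())  # 16,155
cogs = sum(sold[lot] * lots[lot][1] for lot in lots)                  # 7,260
ending_inventory = goods_available - cogs                             # 8,895

print(f"Goods available for sale: ${goods_available:,}")
print(f"Cost of goods sold: ${cogs:,}")
print(f"Ending inventory: ${ending_inventory:,}")
```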
First-in, First-out (FIFO) The first-in, first-out method (FIFO) of cost allocation assumes that the earliest units purchased are also the first units sold. For The Spy Who Loves You, considering the entire period, 300 of the 585 units available for the period were sold, and if the earliest acquisitions are considered sold first, then the units that remain under FIFO are those that were purchased last. Following that logic, ending inventory included 210 units purchased at $33 and 75 units purchased at $27 each, for a total FIFO periodic ending inventory value of $8,955. Subtracting this ending inventory from the $16,155 total of goods available for sale leaves $7,200 in cost of goods sold this period. Calculations of Costs of Goods Sold, Ending Inventory, and Gross Margin, First-in, First-out (FIFO) The FIFO costing assumption tracks inventory in segments or lots of goods, in the order in which they were acquired, so that when they are sold, the earliest acquired items are used to offset the revenue from the sale. The cost of goods sold, inventory, and gross margin shown in Figure 10.7 were determined from the previously-stated data, particular to FIFO costing. The gross margin, resulting from the FIFO periodic cost allocations of $7,200, is shown in Figure 10.8. Calculations for Inventory Adjustment, Periodic/First-in, First-out (FIFO) Beginning merchandise inventory had a balance of $3,150 before adjustment. The inventory at period end should be $8,955, requiring an entry to increase merchandise inventory by $5,895. Journal entries are not shown, but the following calculations provide the information that would be used in recording the necessary journal entries. Cost of goods sold was calculated to be $7,200, which should be recorded as an expense. The credit entry to balance the adjustment is for $13,005, which is the total amount that was recorded as purchases for the period. This entry distributes the balance in the purchases account between the inventory that was sold (cost of goods sold) and the amount of inventory that remains at period end (merchandise inventory). Last-in, First-out (LIFO) The last-in, first-out method (LIFO) of cost allocation assumes that the last units purchased are the first units sold. For The Spy Who Loves You, considering the entire period together, 300 of the 585 units available for the period were sold, and if the latest acquisitions are considered sold first, then the units that remain under LIFO are those that were purchased first. Following that logic, ending inventory included 150 units purchased at $21 and 135 units purchased at $27 each, for a total LIFO periodic ending inventory value of $6,795. Subtracting this ending inventory from the $16,155 total of goods available for sale leaves $9,360 in cost of goods sold this period. It is important to note that these answers can differ when calculated using the perpetual method. When perpetual methodology is utilized, the cost of goods sold and ending inventory are calculated at the time of each sale rather than at the end of the month. For example, in this case, when the first sale of 120 units is made, inventory will be removed and cost computed as of that date from the beginning inventory.
The differences in timing as to when cost of goods sold is calculated can alter the order in which costs are sequenced. Calculations of Costs of Goods Sold, Ending Inventory, and Gross Margin, Last-in, First-out (LIFO) The LIFO costing assumption tracks inventory in lots of goods, in the order in which they were acquired, so that when they are sold, the latest acquired items are used to offset the revenue from the sale. The following cost of goods sold, inventory, and gross margin were determined from the previously-stated data, particular to LIFO costing. The gross margin, resulting from the LIFO periodic cost allocations of $9,360, is shown in Figure 10.10. Calculations for Inventory Adjustment, Periodic/Last-in, First-out (LIFO) Beginning merchandise inventory had a balance before adjustment of $3,150. The inventory at period end should be $6,795, requiring an entry to increase merchandise inventory by $3,645. Journal entries are not shown, but the following calculations provide the information that would be used in recording the necessary journal entries. Cost of goods sold was calculated to be $9,360, which should be recorded as an expense. The credit entry to balance the adjustment is for $13,005, which is the total amount that was recorded as purchases for the period. This entry distributes the balance in the purchases account between the inventory that was sold (cost of goods sold) and the amount of inventory that remains at period end (merchandise inventory). Weighted-Average Cost (AVG) Weighted-average cost allocation requires computation of the average cost of all units in goods available for sale; under periodic updating, a single average is computed for the entire period. For The Spy Who Loves You, considering the entire period, the weighted-average cost is computed by dividing total cost of goods available for sale ($16,155) by the total number of available units (585) to get the average cost of $27.62. Note that 285 of the 585 units available for sale during the period remained in inventory at period end. Following that logic, ending inventory included 285 units at an average cost of $27.62 for a total AVG periodic ending inventory value of $7,872. Subtracting this ending inventory from the $16,155 total of goods available for sale leaves $8,283 in cost of goods sold this period. It is important to note that final numbers can often differ by one or two cents due to rounding of the calculations. In this case, the cost comes to $27.6154 but rounds up to the stated cost of $27.62. Calculations of Costs of Goods Sold, Ending Inventory, and Gross Margin, Weighted Average (AVG) The AVG costing assumption tracks inventory in lots of goods but averages the cost of all units on hand every time an addition is made to inventory so that, when they are sold, the most recently averaged cost items are used to offset the revenue from the sale. The cost of goods sold, inventory, and gross margin shown in Figure 10.11 were determined from the previously-stated data, particular to AVG costing. Figure 10.12 shows the gross margin resulting from the weighted-average periodic cost allocations of $8,283. Journal Entries for Inventory Adjustment, Periodic/Weighted Average Beginning merchandise inventory had a balance before adjustment of $3,150. The inventory at period end should be $7,872, requiring an entry to increase merchandise inventory by $4,722. Journal entries are not shown, but the following calculations provide the information that would be used in recording the necessary journal entries. Cost of goods sold was calculated to be $8,283, which should be recorded as an expense. The credit entry to balance the adjustment is for $13,005, which is the total amount that was recorded as purchases for the period. This entry distributes the balance in the purchases account between the inventory that was sold (cost of goods sold) and the amount of inventory that remains at period end (merchandise inventory).
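To recap the periodic demonstrations before moving on, this sketch reproduces the FIFO, LIFO, and weighted-average figures above from the same lot data (function and variable names are illustrative):

```python
# Periodic costing: allocate the $16,155 of goods available (585 units)
# between 300 units sold and 285 units of ending inventory.

lots = [(150, 21), (225, 27), (210, 33)]  # (units, unit cost), in purchase order
UNITS_SOLD = 300

goods_available = sum(u * c for u, c in lots)  # 16,155

def periodic_cogs(lots, units_sold):
    """Cost the sold units from the front of the list; pass a reversed list for LIFO."""
    cogs, remaining = 0, units_sold
    for units, cost in lots:
        take = min(units, remaining)
        cogs += take * cost
        remaining -= take
    return cogs

fifo_cogs = periodic_cogs(lots, UNITS_SOLD)        # 7,200 (earliest costs expensed)
lifo_cogs = periodic_cogs(lots[::-1], UNITS_SOLD)  # 9,360 (latest costs expensed)

avg_cost = round(goods_available / 585, 2)  # 27.62 per unit
avg_ending = round(avg_cost * 285)          # 7,872 (rounded, as in the text)
avg_cogs = goods_available - avg_ending     # 8,283

print(fifo_cogs, goods_available - fifo_cogs)  # 7200 8955
print(lifo_cogs, goods_available - lifo_cogs)  # 9360 6795
print(avg_cogs, avg_ending)                    # 8283 7872
```

Note that reversing the lot order is all that distinguishes periodic LIFO from periodic FIFO here, since both simply cost out 300 units from one end of the purchase sequence.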
10.3 Calculate the Cost of Goods Sold and Ending Inventory Using the Perpetual Method As you’ve learned, the perpetual inventory system is updated continuously to reflect the current status of inventory on an ongoing basis. Modern sales activity commonly uses electronic identifiers, such as bar codes and RFID technology, to account for inventory as it is purchased, monitored, and sold. Specific identification inventory methods also commonly use a manual form of the perpetual system. Here we’ll demonstrate the mechanics implemented when using perpetual inventory systems in inventory accounting, whether those calculations are orchestrated in a laborious manual system or electronically (in the latter, the inventory accounting operates effortlessly behind the scenes but nonetheless utilizes the same perpetual methodology). Concepts In Practice Perpetual Inventory’s Advancements through Technology Perpetual inventory has been seen as the wave of the future for many years. It has grown since the 1970s alongside the development of affordable personal computers. Universal product codes, commonly known as UPC barcodes, have advanced inventory management for large and small retail organizations, allowing real-time inventory counts and reorder capability that increased the popularity of the perpetual inventory system. These UPC codes identify specific products but are not specific to the particular batch of goods that were produced. Electronic product codes (EPCs) such as radio frequency identifiers (RFIDs) are essentially an evolved version of UPCs in which a chip/identifier is embedded in the EPC code that matches the goods to the actual batch of product that was produced. This more specific information allows better control, greater accountability, increased efficiency, and overall quality monitoring of goods in inventory. The technology advancements that are available for perpetual inventory systems make it nearly impossible for businesses to choose periodic inventory and forgo the competitive advantages that the technology offers. Information Relating to All Cost Allocation Methods, but Specific to Perpetual Inventory Updating Let’s return to The Spy Who Loves You Corporation data to demonstrate the four cost allocation methods, assuming inventory is updated on an ongoing basis in a perpetual system. Cost Data for Calculations Company: Spy Who Loves You Corporation Product: Global Positioning System (GPS) Tracking Device Description: This product is an economical real-time GPS tracking device, designed for individuals who wish to monitor others’ whereabouts. It is being marketed to parents of middle school and high school students as a safety measure. Parents benefit by being apprised of the child’s location, and the student benefits by not having to constantly check in with parents. Demand for the product has spiked during the current fiscal period, while supply is limited, causing the selling price to escalate rapidly.
Note: For simplicity of demonstration, beginning inventory cost is assumed to be $21 per unit for all cost assumption methods. Calculations for Inventory Purchases and Sales during the Period, Perpetual Inventory Updating Regardless of which cost assumption is chosen, recording inventory sales using the perpetual method involves recording both the revenue and the cost from the transaction for each individual sale. As additional inventory is purchased during the period, the cost of those goods is added to the merchandise inventory account. Normally, no significant adjustments are needed at the end of the period (before financial statements are prepared) since the inventory balance is maintained to continually parallel actual counts. Ethical Considerations Ethical Short-Term Decision Making When management and executives participate in unethical or fraudulent short-term decision making, it can negatively impact a company on many levels. According to Antonia Chion, Associate Director of the SEC’s Division of Enforcement, those who participate in such activities will be held accountable. 5 For example, in 2015, the Securities and Exchange Commission (SEC) charged two former top executives of OCZ Technology Group Inc. with accounting failures. 6 The SEC alleged that OCZ’s former CEO Ryan Petersen engaged in a scheme to materially inflate OCZ’s revenues and gross margins from 2010 to 2012, and that OCZ’s former chief financial officer Arthur Knapp participated in certain accounting, disclosure, and internal accounting controls failures. 5 U.S. Securities and Exchange Commission (SEC). “SEC Charges Former Executives with Accounting Fraud and Other Accounting Failures.” October 6, 2015. https://www.sec.gov/news/pressrelease/2015-234.html 6 SEC v. Ryan Petersen, No. 15-cv-04599 (N.D. Cal. filed October 6, 2015). https://www.sec.gov/litigation/litreleases/2017/lr23874.htm Petersen and Knapp allegedly participated in channel stuffing, which is the process of recognizing and recording revenue in a current period that actually will be legally earned in one or more future fiscal periods. A common example is to arrange for customers to submit purchase orders in the current year, often with the understanding that if they don’t need the additional inventory then they may return the inventory received or cancel the order if delivery has not occurred. 7 When the intention behind channel stuffing is to mislead investors, it crosses the line into fraudulent practice. This and other unethical short-term accounting decisions made by Petersen and Knapp led to the bankruptcy of the company they were supposed to oversee and resulted in fraud charges from the SEC. Practicing ethical short-term decision making may have prevented both scenarios. 7 George B. Parizek and Madeleine V. Findley. Charting a Course: Revenue Recognition Practices for Today’s Business Environment. 2008. https://www.sidley.com/-/media/files/publications/2008/10/charting-a-course-revenue-recognition-practices-__/files/view-article/fileattachment/chartingacourse.pdf
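Before the method-by-method demonstrations, here is a minimal sketch of the two-entry pattern that perpetual updating applies to each sale. The $40 selling price is hypothetical, since selling prices are not stated in the text, while $21 is the beginning-inventory unit cost:

```python
# Perpetual updating: every sale triggers two simultaneous entries, sketched
# here with a hypothetical $40 selling price for one unit that cost $21.

sale_price, unit_cost = 40, 21

entries = [
    ("Accounts Receivable", sale_price, "Sales Revenue", sale_price),       # revenue entry
    ("Cost of Goods Sold", unit_cost, "Merchandise Inventory", unit_cost),  # cost entry
]

for debit_acct, debit_amt, credit_acct, credit_amt in entries:
    print(f"Dr {debit_acct} ${debit_amt} / Cr {credit_acct} ${credit_amt}")
```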
Specific Identification For demonstration purposes, the specific units assumed to be sold in this period are designated as follows, with the specific inventory distinction being associated with the lot numbers: Sold 120 units, all from Lot 1 (beginning inventory), costing $21 per unit. Sold 180 units: 20 from Lot 1 (beginning inventory), costing $21 per unit, and 160 from Lot 2 (July 10 purchase), costing $27 per unit. The specific identification method of cost allocation directly tracks each of the units purchased and costs them out as they are sold. In this demonstration, assume that some sales were made by specifically tracked goods that are part of a lot, as previously stated for this method. For The Spy Who Loves You, the first sale of 120 units is assumed to be the units from the beginning inventory, which had cost $21 per unit, bringing the total cost of these units to $2,520. Once those units were sold, there remained 30 more units of the beginning inventory. The company bought 225 more units for $27 per unit. The second sale of 180 units consisted of 20 units at $21 per unit and 160 units at $27 per unit, for a total second-sale cost of $4,740. Thus, after two sales, there remained 10 units of inventory that had cost the company $21, and 65 units that had cost the company $27 each. The last transaction was an additional purchase of 210 units for $33 per unit. Ending inventory was made up of 10 units at $21 each, 65 units at $27 each, and 210 units at $33 each, for a total specific identification perpetual ending inventory value of $8,895. Calculations of Costs of Goods Sold, Ending Inventory, and Gross Margin, Specific Identification The specific identification costing assumption tracks inventory items individually so that, when they are sold, the exact cost of the item is used to offset the revenue from the sale. The cost of goods sold, inventory, and gross margin shown in Figure 10.13 were determined from the previously-stated data, particular to specific identification costing. Figure 10.14 shows the gross margin, resulting from the specific identification perpetual cost allocations of $7,260. Description of Journal Entries for Inventory Sales, Perpetual, Specific Identification Journal entries are not shown, but the following discussion provides the information that would be used in recording the necessary journal entries. Each time a product is sold, a revenue entry would be made to record the sales revenue and the corresponding accounts receivable or cash from the sale. Because of the choice to apply perpetual inventory updating, a second entry made at the same time would record the cost of the item based on the actual cost of the items, which would be shifted from merchandise inventory (an asset) to cost of goods sold (an expense). First-in, First-out (FIFO) The first-in, first-out method (FIFO) of cost allocation assumes that the earliest units purchased are also the first units sold. For The Spy Who Loves You, using perpetual inventory updating, the first sale of 120 units is assumed to be the units from the beginning inventory, which had cost $21 per unit, bringing the total cost of these units to $2,520. Once those units were sold, there remained 30 more units of beginning inventory. The company bought 225 more units for $27 per unit.
At the time of the second sale of 180 units, the FIFO assumption directs the company to cost out the remaining 30 units of the beginning inventory, plus 150 of the units that had been purchased for $27, for a total second-sale cost of $4,680. Thus, after two sales, there remained 75 units of inventory that had cost the company $27 each. The last transaction was an additional purchase of 210 units for $33 per unit. Ending inventory was made up of 75 units at $27 each, and 210 units at $33 each, for a total FIFO perpetual ending inventory value of $8,955. Calculations of Costs of Goods Sold, Ending Inventory, and Gross Margin, First-in, First-out (FIFO) The FIFO costing assumption tracks inventory lots in the order in which they were acquired, so that when goods are sold, the earliest acquired items are used to offset the revenue from the sale. The cost of goods sold, inventory, and gross margin shown in Figure 10.15 were determined from the previously stated data, particular to perpetual FIFO costing. Figure 10.16 shows the gross margin resulting from the FIFO perpetual cost allocations, which totaled $7,200. Description of Journal Entries for Inventory Sales, Perpetual, First-in, First-out (FIFO) Journal entries are not shown, but the following discussion provides the information that would be used in recording the necessary journal entries. Each time a product is sold, a revenue entry would be made to record the sales revenue and the corresponding accounts receivable or cash from the sale. When applying perpetual inventory updating, a second entry made at the same time would record the cost of the item based on FIFO, which would be shifted from merchandise inventory (an asset) to cost of goods sold (an expense). Last-in, First-out (LIFO) The last-in, first-out method (LIFO) of cost allocation assumes that the last units purchased are the first units sold. For The Spy Who Loves You, using perpetual inventory updating, the first sale of 120 units is assumed to be the units from the beginning inventory (because this was the only lot of goods available, it also represented the most recently purchased lot), which had cost $21 per unit, bringing the total cost of these units in the first sale to $2,520. Once those units were sold, there remained 30 more units of beginning inventory. The company bought 225 more units for $27 per unit. At the time of the second sale of 180 units, the LIFO assumption directs the company to cost out the 180 units from the latest purchased units, which had cost $27 per unit, for a total second-sale cost of $4,860. Thus, after two sales, there remained 30 units of beginning inventory that had cost the company $21 each, plus 45 units of the goods purchased for $27 each. The last transaction was an additional purchase of 210 units for $33 per unit. Ending inventory was made up of 30 units at $21 each, 45 units at $27 each, and 210 units at $33 each, for a total LIFO perpetual ending inventory value of $8,775. Calculations of Costs of Goods Sold, Ending Inventory, and Gross Margin, Last-in, First-out (LIFO) The LIFO costing assumption tracks inventory lots in the order in which they were acquired, so that when goods are sold, the latest acquired items are used to offset the revenue from the sale. The following cost of goods sold, inventory, and gross margin were determined from the previously stated data, particular to perpetual LIFO costing. Figure 10.18 shows the gross margin resulting from the LIFO perpetual cost allocations, which totaled $7,380.
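The layer-by-layer logic in the FIFO and LIFO walkthroughs above can be captured in a few lines of code. The following Python sketch is not from the text: the function and variable names are our own, and the 150-unit beginning inventory is inferred from the walkthrough (120 units sold plus 30 remaining). It replays the running example against purchase-cost layers and reproduces the cost of goods sold and ending inventory totals for both methods.

```python
from collections import deque

def perpetual_cogs(transactions, method="FIFO"):
    """Cost out sales against purchase layers under perpetual FIFO or LIFO.

    transactions: sequence of ("buy", units, unit_cost) or ("sell", units).
    Returns (total cost of goods sold, ending inventory value).
    """
    layers = deque()  # each layer is [units_remaining, unit_cost], oldest first
    cogs = 0
    for tx in transactions:
        if tx[0] == "buy":
            layers.append([tx[1], tx[2]])
        else:  # "sell": consume the oldest layer (FIFO) or the newest (LIFO)
            to_sell = tx[1]
            while to_sell > 0:
                layer = layers[0] if method == "FIFO" else layers[-1]
                taken = min(to_sell, layer[0])
                cogs += taken * layer[1]
                layer[0] -= taken
                to_sell -= taken
                if layer[0] == 0:
                    if method == "FIFO":
                        layers.popleft()
                    else:
                        layers.pop()
    ending_inventory = sum(units * cost for units, cost in layers)
    return cogs, ending_inventory

# The Spy Who Loves You transactions from this section:
transactions = [
    ("buy", 150, 21),   # beginning inventory: 150 units at $21
    ("sell", 120),      # first sale
    ("buy", 225, 27),   # July 10 purchase
    ("sell", 180),      # second sale
    ("buy", 210, 33),   # final purchase
]

print(perpetual_cogs(transactions, "FIFO"))  # (7200, 8955)
print(perpetual_cogs(transactions, "LIFO"))  # (7380, 8775)
```

Specific identification would replace the oldest-layer or newest-layer rule with an explicit lot designation for each sale, and the weighted-average method, discussed below, would collapse the layers into a single pool that is re-averaged after each purchase.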
Description of Journal Entries for Inventory Sales, Perpetual, Last-in, First-out (LIFO) Journal entries are not shown, but the following discussion provides the information that would be used in recording the necessary journal entries. Each time a product is sold, a revenue entry would be made to record the sales revenue and the corresponding accounts receivable or cash from the sale. When applying perpetual inventory updating, a second entry made at the same time would record the cost of the item based on LIFO, which would be shifted from merchandise inventory (an asset) to cost of goods sold (an expense). Link to Learning Visit this Amazon inventory video to learn more about some of the inventory challenges experienced by retail giant Amazon. Weighted-Average Cost (AVG) Weighted-average cost allocation requires computation of the average cost of all units in goods available for sale at the time the sale is made for perpetual inventory calculations. For The Spy Who Loves You, the first sale of 120 units is assumed to be the units from the beginning inventory (because this was the only lot of goods available, the cost of these units also represents the average cost), which had cost $21 per unit, bringing the total cost of these units in the first sale to $2,520. Once those units were sold, there remained 30 more units of the inventory, which still had a $21 average cost. The company bought 225 more units for $27 per unit. Recalculating the average cost after this purchase is accomplished by dividing the total cost of goods available for sale (which totaled $6,705 at that point) by the number of units held, which was 255 units, for an average cost of $26.29 per unit. At the time of the second sale of 180 units, the AVG assumption directs the company to cost out the 180 units at $26.29 each, for a total cost on the second sale of $4,732. Thus, after two sales, there remained 75 units at an average cost of $26.29 each. The last transaction was an additional purchase of 210 units for $33 per unit. Recalculating the average cost again resulted in an average cost of $31.24 per unit. Ending inventory was made up of 285 units at $31.24 each for a total AVG perpetual ending inventory value of $8,902 (rounded). 8 8 Note that there is a $1 rounding difference due to the rounding of cents inherent in the chain of cost calculations. Calculations of Costs of Goods Sold, Ending Inventory, and Gross Margin, Weighted Average (AVG) The AVG costing assumption tracks inventory items based on lots of goods that are combined and re-averaged after each new acquisition to determine a new average cost per unit so that, when they are sold, the latest averaged cost is used to offset the revenue from the sale. The cost of goods sold, inventory, and gross margin shown in Figure 10.19 were determined from the previously stated data, particular to perpetual AVG costing. Figure 10.20 shows the gross margin resulting from the weighted-average perpetual cost allocations, which totaled $7,253. Description of Journal Entries for Inventory Sales, Perpetual, Weighted Average (AVG) Journal entries are not shown, but the following discussion provides the information that would be used in recording the necessary journal entries. Each time a product is sold, a revenue entry would be made to record the sales revenue and the corresponding accounts receivable or cash from the sale.
When applying perpetual inventory updating, a second entry would be made at the same time to record the cost of the item based on the AVG costing assumptions, which would be shifted from merchandise inventory (an asset) to cost of goods sold (an expense). Comparison of All Four Methods, Perpetual The outcomes for gross margin, under each of these different cost assumptions, are summarized in Figure 10.21. Think It Through Last-in, First-out (LIFO) Two-part consideration: 1) Why do you think a company would ever choose to use perpetual LIFO as its costing method? It is clearly more trouble to calculate than other methods and doesn’t really align with the natural flow of the merchandise, in most cases. 2) Should the order in which the items are actually sold determine which costs are used to offset sales revenues from those goods? Explain your understanding of these issues. 10.4 Explain and Demonstrate the Impact of Inventory Valuation Errors on the Income Statement and Balance Sheet Because of the dynamic relationship between cost of goods sold and merchandise inventory, errors in inventory counts have a direct and significant impact on the financial statements of the company. Errors in inventory valuation cause mistaken values to be reported for merchandise inventory and cost of goods sold due to the toggle effect that changes in either one of the two accounts have on the other. As explained, the company has a finite amount of inventory that it can work with during a given period of business operations, such as a year. This limited quantity of goods is known as goods available for sale and is sourced from beginning inventory (unsold goods left over from the previous period’s operations) and from purchases of additional inventory during the current period. These available inventory items (goods available for sale) will be handled in one of two ways: they will be sold to customers (normally) or lost due to shrinkage, spoilage, or theft (occasionally), and reported as cost of goods sold on the income statement; or they will remain unsold and be held in ending inventory, to be passed into the next period, and reported as merchandise inventory on the balance sheet. Fundamentals of the Impact of Inventory Valuation Errors on the Income Statement and Balance Sheet Understanding this interaction between inventory assets (merchandise inventory balances) and inventory expense (cost of goods sold) highlights the impact of errors. Errors in the valuation of ending merchandise inventory, which is on the balance sheet, produce a corresponding error in the company’s cost of goods sold for the period, which is on the income statement. When cost of goods sold is overstated, inventory and net income are understated. When cost of goods sold is understated, inventory and net income are overstated. Further, an error in ending inventory carries into the next period, since ending inventory of one period becomes the beginning inventory of the next period, causing both the balance sheet and the income statement values to be wrong in year two as well as in the year of the error. Over a two-year period, misstatements of ending inventory will balance themselves out. For example, an overstatement to ending inventory overstates net income, but the next year, since ending inventory becomes beginning inventory, it understates net income. So over a two-year period, this corrects itself.
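A quick numeric sketch makes this self-correction concrete. The Python fragment below applies the cost of goods sold identity (beginning inventory plus purchases minus ending inventory) to two consecutive years; the $1,500 understatement comes from this section, but every other dollar figure is hypothetical and is not the Figure 10.22/10.23 data.

```python
def cogs(beginning, purchases, ending):
    # Goods available for sale minus ending inventory leaves cost of goods sold.
    return beginning + purchases - ending

# Hypothetical figures; year 1 ending inventory is understated by $1,500.
true_end_y1 = 8_000
reported_end_y1 = true_end_y1 - 1_500            # 6,500 reported

y1_true  = cogs(3_000, 20_000, true_end_y1)      # 15,000
y1_wrong = cogs(3_000, 20_000, reported_end_y1)  # 16,500 -> COGS overstated, income understated

# The erroneous ending balance rolls forward as year 2's beginning inventory.
y2_true  = cogs(true_end_y1, 22_000, 9_000)      # 21,000
y2_wrong = cogs(reported_end_y1, 22_000, 9_000)  # 19,500 -> COGS understated, income overstated

print(y1_true + y2_true, y1_wrong + y2_wrong)    # 36000 36000: combined totals match
```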
However, financial statements are prepared for one period at a time, so all this means is that two years of cost of goods sold are misstated (the first year is overstated/understated, and the second year is understated/overstated). In periodic inventory systems, inventory errors commonly arise from careless oversight of physical counts. Another common cause of periodic inventory errors results from management neglecting to take the physical count. Both perpetual and periodic inventory systems also face potential errors relating to ownership transfers during transportation (relating to FOB shipping point and FOB destination terms); losses in value due to shrinkage, theft, or obsolescence; and consignment inventory, the goods for which should never be included in the retailer’s inventory but should be recorded as an asset of the consignor, who remains the legal owner of the goods until they are sold. Calculated Income Statement and Balance Sheet Effects for Two Years Let’s return to The Spy Who Loves You Company dataset to demonstrate the effects of an inventory error on the company’s balance sheet and income statement. Example 1 (shown in Figure 10.22) depicts the balance sheet and income statement toggle when no inventory error is present. Example 2 (see Figure 10.23) shows the balance sheet and income statement inventory toggle in a case when a $1,500 understatement error occurred at the end of year 1. Comparing the two examples with and without the inventory error highlights the significant effect the error had on the net results reported on the balance sheet and income statements for the two years. Users of financial statements make important business and personal decisions based on the data they receive from the statements, and errors of this sort provide those users with faulty information that could negatively affect the quality of their decisions. In these examples, the combined net income was identical for the two years and the error worked itself out at the end of the second year, yet year 1 and year 2 were incorrect and not representative of the true activity of the business for those periods of time. Extreme care should be taken to value inventories accurately. 10.5 Examine the Efficiency of Inventory Management Using Financial Ratios Inventory is a large investment for many companies, so it is important that this asset be managed wisely. Too little inventory means lost sales opportunities, whereas too much inventory means unproductive investment of resources as well as extra costs related to storage, care, and protection of the inventory. Ratio analysis is used to measure how well management is doing at maintaining just the right amount of inventory for the needs of the particular business. Once calculated, these ratios should be compared to previous years’ ratios for the company, direct competitors’ ratios, industry ratios, and other industries’ ratios. The insights gained from the ratio analysis should be used to augment analysis of the general strength and stability of the company, with the full data available in the annual report, including financial statements and notes to the financial statements. Fundamentals of Inventory Ratios Inventory ratio analysis relates to how well the inventory is being managed. Two ratios can be used to assess how efficiently management is handling inventory. The first ratio, inventory turnover, measures the number of times an average quantity of inventory was bought and sold during the period.
The second ratio, number of days’ sales in inventory, measures how many days it takes to complete the cycle between buying and selling inventory. Calculating and Interpreting the Inventory Turnover Ratio The inventory turnover ratio is computed by dividing cost of goods sold by average inventory. The ratio measures the number of times inventory rotated through the sales cycle for the period. Let’s review how this works for The Spy Who Loves You dataset. This example scenario relates to the FIFO periodic cost allocation, using those previously calculated values for year 1 cost of goods sold, beginning inventory, and ending inventory, and assuming a 10% increase in inventory activity for year 2, as shown in Figure 10.24. The result for The Spy Who Loves You Company indicates that the inventory cycled through the sales cycle 1.19 times in year 1, and 0.84 times in year 2. The fact that the year 2 inventory turnover ratio is lower than the year 1 ratio is not a positive trend. This result would alert management that the inventory balance might be too high to be practical for this volume of sales. Comparison should also be made to competitor and industry ratios, while consideration should also be given to other factors affecting the company’s financial health as well as the strength of the overall market economy. Calculating and Interpreting the Days’ Sales in Inventory Ratio The number of days’ sales in inventory ratio is computed by dividing average merchandise inventory by the average daily cost of goods sold. The ratio measures the number of days it would take to clear the remaining inventory. Let’s review this using The Spy Who Loves You dataset. The example scenario relates to the FIFO periodic cost allocation, using those previously calculated values for year 1 cost of goods sold, beginning inventory, and ending inventory, and assuming a 10% increase in inventory activity for year 2, as shown in Figure 10.25. The result for The Spy Who Loves You Company indicates that it would take about 307 days to clear the average inventory held in year 1 and about 433 days to clear the average inventory held in year 2. Year 2’s number of days’ sales in inventory ratio increased over year 1’s ratio results, indicating an unfavorable change. This result would alert management that it is taking much too long to sell the inventory, so a reduction in the inventory balance might be appropriate; alternatively, increased sales efforts could turn the ratio toward a more positive trend. This ratio is useful for identifying cases of obsolescence, which is especially prevalent in an evolving market, such as the technology sector of the economy. As with any ratio, comparison should also be made to competitor and industry ratios, while consideration should also be given to other factors affecting the company’s financial health, as well as to the strength of the overall market economy. Link to Learning Check out Investopedia’s discussion of the inventory turnover ratio to learn more about the calculation and analysis of ratios.
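Both ratios just described reduce to short formulas, sketched below in Python. The inputs are the year 1 FIFO figures implied by this chapter’s running example (cost of goods sold of $7,200, beginning inventory of $3,150, or 150 units at $21, and ending inventory of $8,955); the function names are our own, and year 2 would be computed the same way from its own inputs.

```python
def inventory_turnover(cogs, beginning_inventory, ending_inventory):
    # Number of times the average inventory cycled through sales in the period.
    average_inventory = (beginning_inventory + ending_inventory) / 2
    return cogs / average_inventory

def days_sales_in_inventory(cogs, beginning_inventory, ending_inventory, days=365):
    # Days needed to clear the average inventory at the period's daily COGS rate.
    average_inventory = (beginning_inventory + ending_inventory) / 2
    return average_inventory / (cogs / days)

# Year 1 FIFO figures: COGS $7,200; beginning inventory 150 units x $21 = $3,150;
# ending inventory $8,955.
print(round(inventory_turnover(7_200, 3_150, 8_955), 2))    # 1.19
print(round(days_sales_in_inventory(7_200, 3_150, 8_955)))  # 307
```

Note that days’ sales in inventory is simply 365 divided by the turnover ratio, which is why a falling turnover and a rising day count always move together.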
American Government
Summary 3.1 The Division of Powers Federalism is a system of government that creates two relatively autonomous levels of government, each possessing authority granted to it by the national constitution. Federal systems like the one in the United States are different from unitary systems, which concentrate authority in the national government, and from confederations, which concentrate authority in subnational governments. The U.S. Constitution allocates powers to the states and federal government, structures the relationship between these two levels of government, and guides state-to-state relationships. Federal, state, and local governments rely on different sources of revenue to enable them to fulfill their public responsibilities. 3.2 The Evolution of American Federalism Federalism in the United States has gone through several phases of evolution during which the relationship between the federal and state governments has varied. In the era of dual federalism, both levels of government stayed within their own jurisdictional spheres. During the era of cooperative federalism, the federal government became active in policy areas previously handled by the states. The 1970s ushered in an era of new federalism and attempts to decentralize policy management. 3.3 Intergovernmental Relationships To accomplish its policy priorities, the federal government often needs to elicit the cooperation of states and local governments, using various strategies. Block and categorical grants provide money to lower government levels to subsidize the cost of implementing policy programs fashioned in part by the federal government. This strategy gives state and local authorities some degree of flexibility and discretion as they coordinate with the federal government. On the other hand, mandates compel state and local governments to abide by federal laws and regulations or face penalties. 3.4 Competitive Federalism Today Some policy areas have been redefined as a result of changes in the roles that states and the federal government play in them. The constitutional disputes these changes often trigger have had to be sorted out by the Supreme Court. Contemporary federalism has also witnessed interest groups engaging in venue shopping. Aware of the multiple access points to our political system, such groups seek to access the level of government they deem will be most receptive to their policy views. 3.5 Advantages and Disadvantages of Federalism The benefits of federalism are that it can encourage political participation, give states an incentive to engage in policy innovation, and accommodate diverse viewpoints across the country. The disadvantages are that it can set off a race to the bottom among states, cause cross-state economic and social disparities, and obstruct federal efforts to address national problems.
Chapter Outline 3.1 The Division of Powers 3.2 The Evolution of American Federalism 3.3 Intergovernmental Relationships 3.4 Competitive Federalism Today 3.5 Advantages and Disadvantages of Federalism Introduction Federalism figures prominently in the U.S. political system. Specifically, the federal design spelled out in the Constitution divides powers between two levels of government—the states and the federal government—and creates a mechanism for them to check and balance one another. As an institutional design, federalism both safeguards state interests and creates a strong union led by a capable central government. American federalism also seeks to balance the forces of decentralization and centralization. We see decentralization when we cross state lines and encounter different taxation levels, welfare eligibility requirements, and voting regulations, to name just a few. Centralization is apparent in the fact that the federal government is the only entity permitted to print money, to challenge the legality of state laws, or to employ money grants and mandates to shape state actions. Colorful billboards with simple messages may greet us at state borders (Figure 3.1), but behind them lies a complex and evolving federal design that has structured relationships between states and the federal government since the late 1700s. What specific powers and responsibilities are granted to the federal and state governments? How does our process of government keep these separate governing entities in balance? To answer these questions and more, this chapter traces the origins, evolution, and functioning of the American system of federalism, as well as its advantages and disadvantages for citizens.
Review Questions

1. Which statement about federal and unitary systems is most accurate?
A. In a federal system, power is concentrated in the states; in a unitary system, it is concentrated in the national government.
B. In a federal system, the constitution allocates powers between states and federal government; in a unitary system, powers are lodged in the national government.
C. Today there are more countries with federal systems than with unitary systems.
D. The United States and Japan have federal systems, while Great Britain and Canada have unitary systems.
Answer: B

2. Which statement is most accurate about the sources of revenue for local and state governments?
A. Taxes generate well over one-half the total revenue of local and state governments.
B. Property taxes generate the most tax revenue for both local and state governments.
C. Between 30 and 40 percent of the revenue for local and state governments comes from grant money.
D. Local and state governments generate an equal amount of revenue from issuing licenses and certificates.
Answer: C

3. In McCulloch v. Maryland, the Supreme Court invoked which provisions of the constitution?
A. Tenth Amendment and spending clause
B. commerce clause and supremacy clause
C. necessary and proper clause and supremacy clause
D. taxing power and necessary and proper clause
Answer: C

4. Which statement about new federalism is not true?
A. New federalism was launched by President Nixon and continued by President Reagan.
B. New federalism is based on the idea that decentralization of responsibility enhances administrative efficiency.
C. United States v. Lopez is a Supreme Court ruling that advanced the logic of new federalism.
D. President Reagan was able to promote new federalism consistently throughout his administration.
Answer: D

5. Which is not a merit of cooperative federalism?
A. Federal cooperation helps mitigate the problem of collective action among states.
B. Federal assistance encourages state and local governments to generate positive externalities.
C. Cooperative federalism respects the traditional jurisdictional boundaries between states and federal government.
D. Federal assistance ensures some degree of uniformity of public services across states.
Answer: C

6. Which statement about federal grants in recent decades is most accurate?
A. The federal government allocates the most grant money to income security.
B. The amount of federal grant money going to states has steadily increased since the 1960s.
C. The majority of federal grants are block grants.
D. Block grants tend to gain more flexibility over time.
Answer: B

7. Which statement about unfunded mandates is false?
A. The Unfunded Mandates Reform Act has prevented Congress from using unfunded mandates.
B. The Clean Air Act is a type of federal partial preemptive regulation.
C. Title VI of the Civil Rights Act establishes crosscutting requirements.
D. New federalism does not promote the use of unfunded mandates.
Answer: D

8. Which statement about immigration federalism is false?
A. The Arizona v. United States decision struck down all Arizona’s most restrictive provisions on illegal immigration.
B. Since the 1990s, states have increasingly moved into the policy domain of immigration.
C. Federal immigration laws trump state laws.
D. States’ involvement in immigration is partly due to their interest in preventing illegal immigrants from accessing public services such as education and welfare benefits.
Answer: A

9. Which statement about the evolution of same-sex marriage is false?
A. The federal government became involved in this issue when it passed DOMA.
B. In the 1990s and 2000s, the number of state restrictions on same-sex marriage increased.
C. United States v. Windsor legalized same-sex marriage in the United States.
D. More than half the states had legalized same-sex marriage by the time the Supreme Court made same-sex marriage legal nationwide in 2015.
Answer: C

10. Which statement about venue shopping is true?
A. MADD steered the drinking age issue from the federal government down to the states.
B. Anti-abortion advocates have steered the abortion issue from the states up to the federal government.
C. Both MADD and anti-abortion proponents redirected their advocacy from the states to the federal government.
D. None of the statements are correct.
Answer: D

11. Which of the following is not a benefit of federalism?
A. Federalism promotes political participation.
B. Federalism encourages economic equality across the country.
C. Federalism provides for multiple levels of government action.
D. Federalism accommodates a diversity of opinion.
Answer: B
3.1 The Division of Powers

Learning Objectives

By the end of this section, you will be able to:
Explain the concept of federalism
Discuss the constitutional logic of federalism
Identify the powers and responsibilities of federal, state, and local governments

Modern democracies divide governmental power in two general ways; some, like the United States, use a combination of both structures. The first and more common mechanism shares power among three branches of government—the legislature, the executive, and the judiciary. The second, federalism, apportions power between two levels of government: national and subnational. In the United States, the term federal government refers to the government at the national level, while the term states means governments at the subnational level.

FEDERALISM DEFINED AND CONTRASTED

Federalism is an institutional arrangement that creates two relatively autonomous levels of government, each possessing the capacity to act directly on behalf of the people with the authority granted to it by the national constitution. 1 Although today’s federal systems vary in design, five structural characteristics are common to the United States and other federal systems around the world, including Germany and Mexico.

First, all federal systems establish two levels of government, with both levels being elected by the people and each level assigned different functions. The national government is responsible for handling matters that affect the country as a whole, for example, defending the nation against foreign threats and promoting national economic prosperity. Subnational, or state, governments are responsible for matters that lie within their regions, which include ensuring the well-being of their people by administering education, health care, public safety, and other public services. By definition, a system like this requires that different levels of government cooperate, because the institutions at each level form an interacting network. In the U.S. federal system, all national matters are handled by the federal government, which is led by the president and members of Congress, all of whom are elected by voters across the country. All matters at the subnational level are the responsibility of the fifty states, each headed by an elected governor and legislature. Thus, there is a separation of functions between the federal and state governments, and voters choose the leader at each level. 2

The second characteristic common to all federal systems is a written national constitution that cannot be changed without the substantial consent of subnational governments. In the American federal system, the twenty-seven amendments added to the Constitution since its adoption were the result of an arduous process that required approval by two-thirds of both houses of Congress and three-fourths of the states. The main advantage of this supermajority requirement is that no changes to the Constitution can occur unless there is broad support within Congress and among states. The potential drawback is that numerous national amendment initiatives—such as the Equal Rights Amendment (ERA), which aims to guarantee equal rights regardless of sex—have failed because they cannot garner sufficient consent among members of Congress or, in the case of the ERA, the states.
Third, the constitutions of countries with federal systems formally allocate legislative, judicial, and executive authority to the two levels of government in such a way as to ensure each level some degree of autonomy from the other. Under the U.S. Constitution, the president assumes executive power, Congress exercises legislative powers, and the federal courts (e.g., U.S. district courts, appellate courts, and the Supreme Court) assume judicial powers. In each of the fifty states, a governor assumes executive authority, a state legislature makes laws, and state-level courts (e.g., trial courts, intermediate appellate courts, and supreme courts) possess judicial authority. While each level of government is somewhat independent of the others, a great deal of interaction occurs among them. In fact, the ability of the federal and state governments to achieve their objectives often depends on the cooperation of the other level of government. For example, the federal government’s efforts to ensure homeland security are bolstered by the involvement of law enforcement agents working at local and state levels. On the other hand, the ability of states to provide their residents with public education and health care is enhanced by the federal government’s financial assistance.

Another common characteristic of federalism around the world is that national courts typically resolve disputes between levels and departments of government. In the United States, conflicts between states and the federal government are adjudicated by federal courts, with the U.S. Supreme Court being the final arbiter. The resolution of such disputes can preserve the autonomy of one level of government, as illustrated recently when the Supreme Court ruled that states cannot interfere with the federal government’s actions relating to immigration. 3 In other instances, a Supreme Court ruling can erode that autonomy, as demonstrated in the 1940s when, in United States v. Wrightwood Dairy Co., the Court enabled the federal government to regulate commercial activities that occurred within states, a function previously handled exclusively by the states. 4

Finally, subnational governments are always represented in the upper house of the national legislature, enabling regional interests to influence national lawmaking. 5 In the American federal system, the U.S. Senate functions as a territorial body by representing the fifty states: Each state elects two senators to ensure equal representation regardless of state population differences. Thus, federal laws are shaped in part by state interests, which senators convey to the federal policymaking process.

Link to Learning
The governmental design of the United States is unusual; most countries do not have a federal structure. Aside from the United States, how many other countries have a federal system?

Division of power can also occur via a unitary structure or confederation ( Figure 3.2 ). In contrast to federalism, a unitary system makes subnational governments dependent on the national government, where significant authority is concentrated. Before the late 1990s, the United Kingdom’s unitary system was centralized to the extent that the national government held the most important levers of power. Since then, power has been gradually decentralized through a process of devolution, leading to the creation of regional governments in Scotland, Wales, and Northern Ireland as well as the delegation of specific responsibilities to them.
Other democratic countries with unitary systems, such as France, Japan, and Sweden, have followed a similar path of decentralization.

In a confederation, authority is decentralized, and the central government’s ability to act depends on the consent of the subnational governments. Under the Articles of Confederation (the first constitution of the United States), states were sovereign and powerful while the national government was subordinate and weak. Because states were reluctant to give up any of their power, the national government lacked authority in the face of challenges such as servicing the war debt, ending commercial disputes among states, negotiating trade agreements with other countries, and addressing popular uprisings that were sweeping the country. As the brief American experience with confederation clearly shows, the main drawback with this system of government is that it maximizes regional self-rule at the expense of effective national governance.

FEDERALISM AND THE CONSTITUTION

The Constitution contains several provisions that direct the functioning of U.S. federalism. Some delineate the scope of national and state power, while others restrict it. The remaining provisions shape relationships among the states and between the states and the federal government.

The enumerated powers of the national legislature are found in Article I, Section 8. These powers define the jurisdictional boundaries within which the federal government has authority. In seeking not to replay the problems that plagued the young country under the Articles of Confederation, the Constitution’s framers granted Congress specific powers that ensured its authority over national and foreign affairs. To provide for the general welfare of the populace, it can tax, borrow money, regulate interstate and foreign commerce, and protect property rights, for example. To provide for the common defense of the people, the federal government can raise and support armies and declare war. Furthermore, national integration and unity are fostered with the government’s powers over the coining of money, naturalization, postal services, and other responsibilities.

The last clause of Article I, Section 8, commonly referred to as the elastic clause or the necessary and proper clause, enables Congress “to make all Laws which shall be necessary and proper for carrying” out its constitutional responsibilities. While the enumerated powers define the policy areas in which the national government has authority, the elastic clause allows it to create the legal means to fulfill those responsibilities. However, the open-ended construction of this clause has enabled the national government to expand its authority beyond what is specified in the Constitution, a development also motivated by the expansive interpretation of the commerce clause, which empowers the federal government to regulate interstate economic transactions.

The powers of the state governments were never listed in the original Constitution. The consensus among the framers was that states would retain any powers not prohibited by the Constitution or delegated to the national government. 6 However, when it came time to ratify the Constitution, a number of states requested that an amendment be added explicitly identifying the reserved powers of the states. What these Anti-Federalists sought was further assurance that the national government’s capacity to act directly on behalf of the people would be restricted, which the first ten amendments (Bill of Rights) provided.
The Tenth Amendment affirms the states’ reserved powers: “The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.” Indeed, state constitutions had bills of rights, which the first Congress used as the source for the first ten amendments to the Constitution.

Some of the states’ reserved powers are no longer exclusively within state domain, however. For example, since the 1940s, the federal government has also engaged in administering health, safety, income security, education, and welfare to state residents. The boundary between intrastate and interstate commerce has become indefinable as a result of broad interpretation of the commerce clause. Shared and overlapping powers have become an integral part of contemporary U.S. federalism. These concurrent powers range from taxing, borrowing, and making and enforcing laws to establishing court systems ( Figure 3.3 ). 7

Article I, Sections 9 and 10, along with several constitutional amendments, lay out the restrictions on federal and state authority. The most important restriction Section 9 places on the national government prevents measures that cause the deprivation of personal liberty. Specifically, the government cannot suspend the writ of habeas corpus, which enables someone in custody to petition a judge to determine whether that person’s detention is legal; pass a bill of attainder, a legislative action declaring someone guilty without a trial; or enact an ex post facto law, which criminalizes an act retroactively. The Bill of Rights affirms and expands these constitutional restrictions, ensuring that the government cannot encroach on personal freedoms.

The states are also constrained by the Constitution. Article I, Section 10, prohibits the states from entering into treaties with other countries, coining money, and levying taxes on imports and exports. Like the federal government, the states cannot violate personal freedoms by suspending the writ of habeas corpus, passing bills of attainder, or enacting ex post facto laws. Furthermore, the Fourteenth Amendment, ratified in 1868, prohibits the states from denying citizens the rights to which they are entitled by the Constitution, due process of law, or the equal protection of the laws. Lastly, three civil rights amendments—the Fifteenth, Nineteenth, and Twenty-Sixth—prevent both the states and the federal government from abridging citizens’ right to vote based on race, sex, and age. This topic remains controversial because states have not always ensured equal protection.

The supremacy clause in Article VI of the Constitution regulates relationships between the federal and state governments by declaring that the Constitution and federal law are the supreme law of the land. This means that if a state law clashes with a federal law found to be within the national government’s constitutional authority, the federal law prevails. The intent of the supremacy clause is not to subordinate the states to the federal government; rather, it affirms that one body of laws binds the country. In fact, all national and state government officials are bound by oath to uphold the Constitution regardless of the offices they hold. Yet enforcement is not always that simple.
In the case of marijuana use, which the federal government defines to be illegal, twenty-three states and the District of Columbia have nevertheless established medical marijuana laws, others have decriminalized its recreational use, and four states have completely legalized it. The federal government could act in this area if it wanted to. For example, in addition to the legalization issue, there is the question of how to treat the money from marijuana sales, which the national government designates as drug money and regulates under laws regarding its deposit in banks.

Various constitutional provisions govern state-to-state relations. Article IV, Section 1, referred to as the full faith and credit clause or the comity clause, requires the states to accept court decisions, public acts, and contracts of other states. Thus, an adoption certificate or driver’s license issued in one state is valid in any other state. The movement for marriage equality has put the full faith and credit clause to the test in recent decades. In light of Baehr v. Lewin, a 1993 ruling in which the Hawaii Supreme Court asserted that the state’s ban on same-sex marriage was unconstitutional, a number of states became worried that they would be required to recognize those marriage certificates. 8 To address this concern, Congress passed and President Clinton signed the Defense of Marriage Act (DOMA) in 1996. The law declared that “No state (or other political subdivision within the United States) need recognize a marriage between persons of the same sex, even if the marriage was concluded or recognized in another state.” The law also barred federal benefits for same-sex partners.

DOMA clearly made the topic a state matter. It denoted a choice for states, which led many states to take up the policy issue of marriage equality. Scores of states considered legislation and ballot initiatives on the question. The federal courts took up the issue with zeal after the U.S. Supreme Court in United States v. Windsor struck down the part of DOMA that outlawed federal benefits. 9 That move was followed by upwards of forty federal court decisions that upheld marriage equality in particular states. In 2014, the Supreme Court decided not to hear several key case appeals from a variety of states, all of which were brought by opponents of marriage equality who had lost in the federal courts. The outcome of not hearing these cases was that federal court decisions in four states were affirmed, which, when added to other states in the same federal circuit districts, brought the total number of states permitting same-sex marriage to thirty. 10 Then, in 2015, the Obergefell v. Hodges case had a sweeping effect when the Supreme Court clearly identified a constitutional right to marriage based on the Fourteenth Amendment. 11

The privileges and immunities clause of Article IV asserts that states are prohibited from discriminating against out-of-staters by denying them such guarantees as access to courts, legal protection, property rights, and travel rights. The clause has not been interpreted to mean there cannot be any difference in the way a state treats residents and non-residents. For example, individuals cannot vote in a state in which they do not reside, tuition at state universities is higher for out-of-state residents, and in some cases individuals who have recently become residents of a state must wait a certain amount of time to be eligible for social welfare benefits.
Another constitutional provision prohibits states from establishing trade restrictions on goods produced in other states. However, a state can tax out-of-state goods sold within its borders as long as state-made goods are taxed at the same level.

THE DISTRIBUTION OF FINANCES

Federal, state, and local governments depend on different sources of revenue to finance their annual expenditures. In 2014, total revenue (or receipts) reached $3.2 trillion for the federal government, $1.7 trillion for the states, and $1.2 trillion for local governments. 12 Two important developments have fundamentally changed the allocation of revenue since the early 1900s. First, the ratification of the Sixteenth Amendment in 1913 authorized Congress to impose income taxes without apportioning them among the states on the basis of population, a burdensome provision that Article I, Section 9, had imposed on the national government. 13 With this change, the federal government’s ability to raise revenue significantly increased and so did its ability to spend.

The second development regulates federal grants, that is, transfers of federal money to state and local governments. These transfers, which do not have to be repaid, are designed to support the activities of the recipient governments, but also to encourage them to pursue federal policy objectives they might not otherwise adopt. The expansion of the federal government’s spending power has enabled it to transfer more grant money to lower government levels, which has accounted for an increasing share of their total revenue. 14

The sources of revenue for federal, state, and local governments are detailed in Figure 3.4 . Although the data reflect 2013 results, the patterns we see in the figure give us a good idea of how governments have funded their activities in recent years. For the federal government, 47 percent of 2013 revenue came from individual income taxes and 34 percent from payroll taxes, which combine Social Security tax and Medicare tax. For state governments, 50 percent of revenue came from taxes, while 30 percent consisted of federal grants. Sales tax—which includes taxes on purchased food, clothing, alcohol, amusements, insurance, motor fuels, tobacco products, and public utilities, for example—accounted for about 47 percent of total tax revenue, and individual income taxes represented roughly 35 percent. Revenue from service charges (e.g., tuition revenue from public universities and fees for hospital-related services) accounted for 11 percent.

The tax structure of states varies. Alaska, Florida, Nevada, South Dakota, Texas, Washington, and Wyoming do not have individual income taxes. Figure 3.5 illustrates yet another difference: Fuel tax as a percentage of total tax revenue is much higher in South Dakota and West Virginia than in Alaska and Hawaii. However, most states have done little to prevent the erosion of the fuel tax’s share of their total tax revenue between 2007 and 2014 (notice that for many states the dark blue dots for 2014 are to the left of the light blue numbers for 2007). Fuel tax revenue is typically used to finance state highway transportation projects, although some states do use it to fund non-transportation projects.

The most important sources of revenue for local governments in 2013 were taxes, federal and state grants, and service charges. For local governments the property tax, a levy on residential and commercial real estate, was the most important source of tax revenue, accounting for about 74 percent of the total.
Federal and state grants accounted for 37 percent of local government revenue. State grants made up 87 percent of total local grants. Charges for hospital-related services, sewage and solid-waste management, public city university tuition, and airport services are important sources of general revenue for local governments.

Intergovernmental grants are important sources of revenue for both state and local governments. When economic times are good, such grants help states, cities, municipalities, and townships carry out their regular functions. However, during hard economic times, such as the Great Recession of 2007–2009, intergovernmental transfers provide much-needed fiscal relief as the revenue streams of state and local governments dry up. During the Great Recession, tax receipts dropped as business activities slowed, consumer spending dropped, and family incomes decreased due to layoffs or work-hour reductions. To offset the adverse effects of the recession on the states and local governments, federal grants increased by roughly 33 percent during this period. 15

In 2009, President Obama signed the American Recovery and Reinvestment Act (ARRA), which provided immediate economic-crisis management assistance such as helping local and state economies ride out the Great Recession and shoring up the country’s banking sector. A total of $274.7 billion in grants, contracts, and loans was allocated to state and local governments under the ARRA. 16 The bulk of the stimulus funds apportioned to state and local governments was used to create and protect existing jobs through public works projects and to fund various public welfare programs such as unemployment insurance. 17

How are the revenues generated by our tax dollars, fees we pay to use public services and obtain licenses, and monies from other sources put to use by the different levels of government? A good starting point to gain insight on this question as it relates to the federal government is Article I, Section 8, of the Constitution. Recall, for instance, that the Constitution assigns the federal government various powers that allow it to affect the nation as a whole. A look at the federal budget in 2014 ( Figure 3.6 ) shows that the three largest spending categories were Social Security (24 percent of the total budget); Medicare, Medicaid, the Children’s Health Insurance Program, and marketplace subsidies under the Affordable Care Act (24 percent); and defense and international security assistance (18 percent). The rest was divided among categories such as safety net programs (11 percent), including the Earned Income Tax Credit and Child Tax Credit, unemployment insurance, food stamps, and other low-income assistance programs; interest on federal debt (7 percent); benefits for federal retirees and veterans (8 percent); and transportation infrastructure (3 percent). 18

It is clear from the 2014 federal budget that providing for the general welfare and national defense consumes much of the government’s resources—not just its revenue, but also its administrative capacity and labor power. Figure 3.7 compares recent spending activities of local and state governments. Educational expenditures constitute a major category for both. However, whereas the states spend comparatively more than local governments on university education, local governments spend even more on elementary and secondary education.
That said, nationwide, state funding for public higher education has declined as a percentage of university revenues; this is primarily because states have taken in lower amounts of sales taxes as internet commerce has increased. Local governments allocate more funds to police protection, fire protection, housing and community development, and public utilities such as water, sewage, and electricity. And while state governments allocate comparatively more funds to public welfare programs, such as health care, income support, and highways, both local and state governments spend roughly similar amounts on judicial and legal services and correctional services.

3.2 The Evolution of American Federalism

Learning Objectives

By the end of this section, you will be able to:
Describe how federalism has evolved in the United States
Compare different conceptions of federalism

The Constitution sketches a federal framework that aims to balance the forces of decentralized and centralized governance in general terms; it does not flesh out standard operating procedures that say precisely how the states and federal governments are to handle all policy contingencies imaginable. Therefore, officials at the state and national levels have had some room to maneuver as they operate within the Constitution’s federal design. This has led to changes in the configuration of federalism over time, changes corresponding to different historical phases that capture distinct balances between state and federal authority.

THE STRUGGLE BETWEEN NATIONAL POWER AND STATE POWER

As George Washington’s secretary of the treasury from 1789 to 1795, Alexander Hamilton championed legislative efforts to create a publicly chartered bank. For Hamilton, the establishment of the Bank of the United States was fully within Congress’s authority, and he hoped the bank would foster economic development, print and circulate paper money, and provide loans to the government. Although Thomas Jefferson, Washington’s secretary of state, staunchly opposed Hamilton’s plan on the constitutional grounds that the national government had no authority to create such an instrument, Hamilton managed to convince the reluctant president to sign the legislation. 19

When the bank’s charter expired in 1811, Jeffersonian Democratic-Republicans prevailed in blocking its renewal. However, the fiscal hardships that plagued the government during the War of 1812, coupled with the fragility of the country’s financial system, convinced Congress and then-president James Madison to create the Second Bank of the United States in 1816.

Many states rejected the Second Bank, arguing that the national government was infringing upon the states’ constitutional jurisdiction. A political showdown between Maryland and the national government emerged when James McCulloch, an agent for the Baltimore branch of the Second Bank, refused to pay a tax that Maryland had imposed on all out-of-state chartered banks. The standoff raised two constitutional questions: Did Congress have the authority to charter a national bank? Were states allowed to tax federal property?

In McCulloch v. Maryland, Chief Justice John Marshall ( Figure 3.8 ) argued that Congress could create a national bank even though the Constitution did not expressly authorize it. 20 Under the necessary and proper clause of Article I, Section 8, the Supreme Court asserted that Congress could establish “all means which are appropriate” to fulfill “the legitimate ends” of the Constitution.
In other words, the bank was an appropriate instrument that enabled the national government to carry out several of its enumerated powers, such as regulating interstate commerce, collecting taxes, and borrowing money. This ruling established the doctrine of implied powers, granting Congress a vast source of discretionary power to achieve its constitutional responsibilities. The Supreme Court also sided with the federal government on the issue of whether states could tax federal property. Under the supremacy clause of Article VI, legitimate national laws trump conflicting state laws. As the court observed, “the government of the Union, though limited in its powers, is supreme within its sphere of action and its laws, when made in pursuance of the constitution, form the supreme law of the land.” Maryland’s action violated national supremacy because “the power to tax is the power to destroy.” This second ruling established the principle of national supremacy, which prohibits states from meddling in the lawful activities of the national government.

Defining the scope of national power was the subject of another landmark Supreme Court decision in 1824. In Gibbons v. Ogden, the court had to interpret the commerce clause of Article I, Section 8; specifically, it had to determine whether the federal government had the sole authority to regulate the licensing of steamboats operating between New York and New Jersey. 21 Aaron Ogden, who had obtained an exclusive license from New York State to operate steamboat ferries between New York City and New Jersey, sued Thomas Gibbons, who was operating ferries along the same route under a coasting license issued by the federal government. Gibbons lost in New York state courts and appealed. Chief Justice Marshall delivered a two-part ruling in favor of Gibbons that strengthened the power of the national government. First, interstate commerce was interpreted broadly to mean “commercial intercourse” among states, thus allowing Congress to regulate navigation. Second, because the federal Licensing Act of 1793, which regulated coastal commerce, was a constitutional exercise of Congress’s authority under the commerce clause, federal law trumped the New York State license-monopoly law that had granted Ogden an exclusive steamboat operating license. As Marshall pointed out, “the acts of New York must yield to the law of Congress.” 22

Various states railed against the nationalization of power that had been going on since the late 1700s. When President John Adams signed the Sedition Act in 1798, which made it a crime to speak openly against the government, the Kentucky and Virginia legislatures passed resolutions declaring the act null on the grounds that they retained the discretion to follow national laws. In effect, these resolutions articulated the legal reasoning underpinning the doctrine of nullification—that states had the right to reject national laws they deemed unconstitutional. 23

A nullification crisis emerged in the 1830s over President Andrew Jackson’s tariff acts of 1828 and 1832. Led by John Calhoun, President Jackson’s vice president, nullifiers argued that high tariffs on imported goods benefited northern manufacturing interests while disadvantaging economies in the South. South Carolina passed an Ordinance of Nullification declaring both tariff acts null and void and threatened to leave the Union.
The federal government responded by enacting the Force Bill in 1833, authorizing President Jackson to use military force against states that challenged federal tariff laws. The prospect of military action coupled with the passage of the Compromise Tariff Act of 1833 (which lowered tariffs over time) led South Carolina to back off, ending the nullification crisis.

The ultimate showdown between national and state authority came during the Civil War. Prior to the conflict, in Dred Scott v. Sandford, the Supreme Court ruled that the national government lacked the authority to ban slavery in the territories. 24 But the election of President Abraham Lincoln in 1860 led eleven southern states to secede from the United States because they believed the new president would challenge the institution of slavery. What was initially a conflict to preserve the Union became a conflict to end slavery when Lincoln issued the Emancipation Proclamation in 1863, freeing all slaves in the rebellious states. The defeat of the South had a huge impact on the balance of power between the states and the national government in two important ways. First, the Union victory put an end to the right of states to secede and to challenge legitimate national laws. Second, Congress imposed several conditions for readmitting former Confederate states into the Union; among them was ratification of the Fourteenth and Fifteenth Amendments.

In sum, after the Civil War the power balance shifted toward the national government, a movement that had begun several decades before with McCulloch v. Maryland (1819) and Gibbons v. Ogden (1824). The period between 1819 and the 1860s demonstrated that the national government sought to establish its role within the newly created federal design, which in turn often provoked the states to resist as they sought to protect their interests. With the exception of the Civil War, the Supreme Court settled the power struggles between the states and national government. From a historical perspective, the national supremacy principle introduced during this period did not so much narrow the states’ scope of constitutional authority as restrict their encroachment on national powers. 25

DUAL FEDERALISM

The late 1870s ushered in a new phase in the evolution of U.S. federalism. Under dual federalism, the states and national government exercise exclusive authority in distinctly delineated spheres of jurisdiction. Like the layers of a cake, the levels of government do not blend with one another but rather are clearly defined. Two factors contributed to the emergence of this conception of federalism. First, several Supreme Court rulings blocked attempts by both state and federal governments to step outside their jurisdictional boundaries. Second, the prevailing economic philosophy at the time loathed government interference in the process of industrial development.

Industrialization changed the socioeconomic landscape of the United States. One of its adverse effects was the concentration of market power. Because there was no national regulatory supervision to ensure fairness in market practices, collusive behavior among powerful firms emerged in several industries. 26 To curtail widespread anticompetitive practices in the railroad industry, Congress passed the Interstate Commerce Act in 1887, which created the Interstate Commerce Commission.
Three years later, national regulatory capacity was broadened by the Sherman Antitrust Act of 1890, which made it illegal to monopolize, attempt to monopolize, or conspire to restrain commerce ( Figure 3.9 ). In the early stages of industrial capitalism, federal regulations were focused for the most part on promoting market competition rather than on addressing the social dislocations resulting from market operations, something the government began to tackle in the 1930s. 27

The new federal regulatory regime was dealt a legal blow early in its existence. In 1895, in United States v. E. C. Knight, the Supreme Court ruled that the national government lacked the authority to regulate manufacturing. 28 The case came about when the government, using its regulatory power under the Sherman Act, attempted to override American Sugar’s purchase of four sugar refineries, which would give the company a commanding share of the industry. Distinguishing between commerce among states and the production of goods, the court argued that the national government’s regulatory authority applied only to commercial activities. If manufacturing activities fell within the purview of the commerce clause of the Constitution, then “comparatively little of business operations would be left for state control,” the court argued.

In the late 1800s, some states attempted to regulate working conditions. For example, New York State passed the Bakeshop Act in 1897, which prohibited bakery employees from working more than sixty hours in a week. In Lochner v. New York, the Supreme Court ruled this state regulation that capped work hours unconstitutional, on the grounds that it violated the due process clause of the Fourteenth Amendment. 29 In other words, the right to sell and buy labor is a “liberty of the individual” safeguarded by the Constitution, the court asserted. The federal government also took up the issue of working conditions, but that effort resulted in the same outcome as in the Lochner case. 30

COOPERATIVE FEDERALISM

The Great Depression of the 1930s brought economic hardships the nation had never witnessed before ( Figure 3.10 ). Between 1929 and 1933, the national unemployment rate reached 25 percent, industrial output dropped by half, stock market assets lost more than half their value, thousands of banks went out of business, and the gross domestic product shrank by one-quarter. 31 Given the magnitude of the economic depression, there was pressure on the national government to coordinate a robust national response along with the states.

Cooperative federalism was born of necessity and lasted well into the twentieth century as the national and state governments each found it beneficial. Under this model, both levels of government coordinated their actions to solve national problems, such as the Great Depression and the civil rights struggle of the following decades. In contrast to dual federalism, it erodes the jurisdictional boundaries between the states and national government, leading to a blending of layers as in a marble cake. The era of cooperative federalism contributed to the gradual incursion of national authority into the jurisdictional domain of the states, as well as the expansion of the national government’s power in concurrent policy areas. 32

The New Deal programs President Franklin D. Roosevelt proposed as a means to tackle the Great Depression ran afoul of the dual-federalism mindset of the justices on the Supreme Court in the 1930s.
The court struck down key pillars of the New Deal—the National Industrial Recovery Act and the Agricultural Adjustment Act, for example—on the grounds that the federal government was operating in matters that were within the purview of the states. The court’s obstructionist position infuriated Roosevelt, leading him in 1937 to propose a court-packing plan that would add one new justice for each one over the age of seventy, thus allowing the president to make a maximum of six new appointments. Before Congress took action on the proposal, the Supreme Court began leaning in support of the New Deal as Chief Justice Charles Evans Hughes and Justice Owen Roberts changed their view on federalism. 33 In National Labor Relations Board (NLRB) v. Jones and Laughlin Steel, 34 for instance, the Supreme Court ruled the National Labor Relations Act of 1935 constitutional, asserting that Congress can use its authority under the commerce clause to regulate both manufacturing activities and labor-management relations.

The New Deal changed the relationship Americans had with the national government. Before the Great Depression, the government offered little in terms of financial aid, social benefits, and economic rights. After the New Deal, it provided old-age pensions (Social Security), unemployment insurance, agricultural subsidies, protections for organizing in the workplace, and a variety of other public services created during Roosevelt’s administration.

In the 1960s, President Lyndon Johnson’s administration expanded the national government’s role in society even more. Medicaid (which provides medical assistance to the indigent), Medicare (which provides health insurance to the elderly and disabled), and school nutrition programs were created. The Elementary and Secondary Education Act (1965), the Higher Education Act (1965), and the Head Start preschool program (1965) were established to expand educational opportunities and equality ( Figure 3.11 ). The Clean Air Act (1965), the Highway Safety Act (1966), and the Fair Packaging and Labeling Act (1966) promoted environmental and consumer protection. Finally, laws were passed to promote urban renewal, public housing development, and affordable housing. In addition to these Great Society programs, the Civil Rights Act (1964) and the Voting Rights Act (1965) gave the federal government effective tools to promote civil rights equality across the country.

While the era of cooperative federalism witnessed a broadening of federal powers in concurrent and state policy domains, it is also the era of a deepening coordination between the states and the federal government in Washington. Nowhere is this clearer than with respect to the social welfare and social insurance programs created during the New Deal and Great Society eras, most of which are administered by both state and federal authorities and are jointly funded. The Social Security Act of 1935, which created federal subsidies for state-administered programs for the elderly, people with handicaps, dependent mothers, and children, gave state and local officials wide discretion over eligibility and benefit levels. The unemployment insurance program, also created by the Social Security Act, requires states to provide jobless benefits, but it allows them significant latitude to decide the level of tax to impose on businesses in order to fund the program as well as the duration and replacement rate of unemployment benefits.
A similar multilevel division of labor governs Medicaid and Children’s Health Insurance. 35

Thus, the era of cooperative federalism left two lasting attributes on federalism in the United States. First, a nationalization of politics emerged as a result of federal legislative activism aimed at addressing national problems such as marketplace inefficiencies, social and political inequality, and poverty. The nationalization process expanded the size of the federal administrative apparatus and increased the flow of federal grants to state and local authorities, which have helped offset the financial costs of maintaining a host of New Deal- and Great Society–era programs. The second lasting attribute is the flexibility that states and local authorities were given in the implementation of federal social welfare programs. One consequence of administrative flexibility, however, is that it has led to cross-state differences in the levels of benefits and coverage. 36

NEW FEDERALISM

During the administrations of Presidents Richard Nixon (1969–1974) and Ronald Reagan (1981–1989), attempts were made to reverse the process of nationalization—that is, to restore states’ prominence in policy areas into which the federal government had moved in the past. New federalism is premised on the idea that the decentralization of policies enhances administrative efficiency, reduces overall public spending, and improves policy outcomes. During Nixon’s administration, general revenue sharing programs were created that distributed funds to the state and local governments with minimal restrictions on how the money was spent. The election of Ronald Reagan heralded the advent of a “devolution revolution” in U.S. federalism, in which the president pledged to return authority to the states according to the Constitution. In the Omnibus Budget Reconciliation Act of 1981, congressional leaders together with President Reagan consolidated numerous federal grant programs related to social welfare and reformulated them in order to give state and local administrators greater discretion in using federal funds. 37

However, Reagan’s track record in promoting new federalism was inconsistent. This was partly due to the fact that the president’s devolution agenda met some opposition from Democrats in Congress, moderate Republicans, and interest groups, preventing him from making further advances on that front. For example, his efforts to completely devolve Aid to Families With Dependent Children (a New Deal-era program) and food stamps (a Great Society-era program) to the states were rejected by members of Congress, who feared states would underfund both programs, and by members of the National Governors’ Association, who believed the proposal would be too costly for states. Reagan terminated general revenue sharing in 1986. 38

Several Supreme Court rulings also promoted new federalism by hemming in the scope of the national government’s power, especially under the commerce clause. For example, in United States v. Lopez, the court struck down the Gun-Free School Zones Act of 1990, which banned gun possession in school zones. 39 It argued that the regulation in question did not “substantially affect interstate commerce.” The ruling ended a nearly sixty-year period in which the court had used a broad interpretation of the commerce clause that by the 1960s allowed it to regulate numerous local commercial activities.
40 However, many would say that the years since the 9/11 attacks have swung the pendulum back in the direction of central federal power. The creation of the Department of Homeland Security federalized disaster response power in Washington, and the Transportation Security Administration was created to federalize airport security. Broad new federal policies and mandates have also been carried out in the form of the Faith-Based Initiative and No Child Left Behind (during the George W. Bush administration) and the Affordable Care Act (during Barack Obama’s administration).

Finding a Middle Ground
Cooperative Federalism versus New Federalism

Morton Grodzins coined the cake analogy of federalism in the 1950s while conducting research on the evolution of American federalism. Until then most scholars had thought of federalism as a layer cake, but according to Grodzins the 1930s ushered in “marble-cake federalism” ( Figure 3.12 ): “The American form of government is often, but erroneously, symbolized by a three-layer cake. A far more accurate image is the rainbow or marble cake, characterized by an inseparable mingling of differently colored ingredients, the colors appearing in vertical and diagonal strands and unexpected whirls. As colors are mixed in the marble cake, so functions are mixed in the American federal system.” 41

Cooperative federalism has several merits:
Because state and local governments have varying fiscal capacities, the national government’s involvement in state activities such as education, health, and social welfare is necessary to ensure some degree of uniformity in the provision of public services to citizens in richer and poorer states.
The problem of collective action, which dissuades state and local authorities from raising regulatory standards for fear they will be disadvantaged as others lower theirs, is resolved by requiring state and local authorities to meet minimum federal standards (e.g., minimum wage and air quality).
Federal assistance is necessary to ensure state and local programs (e.g., water and air pollution controls) that generate positive externalities are maintained. For example, one state’s environmental regulations impose higher fuel prices on its residents, but the externality of the cleaner air they produce benefits neighboring states. Without the federal government’s support, this state and others like it would underfund such programs.

New federalism has advantages as well:
Because there are economic, demographic, social, and geographical differences among states, one-size-fits-all features of federal laws are suboptimal. Decentralization accommodates the diversity that exists across states.
By virtue of being closer to citizens, state and local authorities are better than federal agencies at discerning the public’s needs.
Decentralized federalism fosters a marketplace of innovative policy ideas as states compete against each other to minimize administrative costs and maximize policy output.

Which model of federalism do you think works best for the United States? Why?

Link to Learning
The leading international journal devoted to the practical and theoretical study of federalism is called Publius: The Journal of Federalism. Find out where its name comes from.
3.3 Intergovernmental Relationships

Learning Objectives

By the end of this section, you will be able to:
Explain how federal intergovernmental grants have evolved over time
Identify the types of federal intergovernmental grants
Describe the characteristics of federal unfunded mandates

The national government’s ability to achieve its objectives often requires the participation of state and local governments. Intergovernmental grants offer positive financial inducements to get states to work toward selected national goals. A grant is commonly likened to a “carrot” to the extent that it is designed to entice the recipient to do something. On the other hand, unfunded mandates impose federal requirements on state and local authorities. Mandates are typically backed by the threat of penalties for non-compliance and provide little to no compensation for the costs of implementation. Thus, given its coercive nature, a mandate is commonly likened to a “stick.”

GRANTS

The national government has used grants to influence state actions as far back as the Articles of Confederation when it provided states with land grants. In the first half of the 1800s, land grants were the primary means by which the federal government supported the states. Millions of acres of federal land were donated to support road, railroad, bridge, and canal construction projects, all of which were instrumental in piecing together a national transportation system to facilitate migration, interstate commerce, postal mail service, and movement of military people and equipment. Numerous universities and colleges across the country, such as Ohio State University and the University of Maine, are land-grant institutions because their campuses were built on land donated by the federal government. At the turn of the twentieth century, cash grants replaced land grants as the main form of federal intergovernmental transfers and have become a central part of modern federalism. 42

Federal cash grants do come with strings attached; the national government has an interest in seeing that public monies are used for policy activities that advance national objectives. Categorical grants are federal transfers formulated to limit recipients’ discretion in the use of funds and subject them to strict administrative criteria that guide project selection, performance, and financial oversight, among other things. These grants also often require some commitment of matching funds. Medicaid and the food stamp program are examples of categorical grants. Block grants come with less stringent federal administrative conditions and provide recipients more flexibility over how to spend grant funds. Examples of block grants include the Workforce Investment Act program, which provides state and local agencies money to help youths and adults obtain skill sets that will lead to better-paying jobs, and the Surface Transportation Program, which helps state and local governments maintain and improve highways, bridges, tunnels, sidewalks, and bicycle paths. Finally, recipients of general revenue sharing faced the least restrictions on the use of federal grants. From 1972 to 1986, when revenue sharing was abolished, upwards of $85 billion of federal money was distributed to states, cities, counties, towns, and villages. 43

During the 1960s and 1970s, funding for federal grants grew significantly, as the trend line shows in Figure 3.13 . Growth picked up again in the 1990s and 2000s.
The upward slope since the 1990s is primarily due to the increase in federal grant money going to Medicaid. Federally funded health-care programs jumped from $43.8 billion in 1990 to $320 billion in 2014. 44 Health-related grant programs such as Medicaid and the Children’s Health Insurance Program (CHIP) represented more than half of total federal grant expenses.

Link to Learning
The federal government uses grants and other tools to achieve its national policy priorities. Take a look at the National Priorities Project to find out more.

The national government has greatly preferred using categorical grants to transfer funds to state and local authorities because this type of grant gives them more control and discretion in how the money is spent. In 2014, the federal government distributed 1,099 grants, 1,078 of which were categorical, while only 21 were block grants. 45 In response to the terrorist attack on the United States on September 11, 2001, more than a dozen new federal grant programs relating to homeland security were created, but as of 2011, only three were block grants.

There are a couple of reasons that categorical grants are more popular than block grants despite calls to decentralize public policy. One reason is that elected officials who sponsor these grants can take credit for their positive outcomes (e.g., clean rivers, better-performing schools, healthier children, a secure homeland) since elected officials, not state officials, formulate the administrative standards that lead to the results. Another reason is that categorical grants afford federal officials greater command over grant program performance. A common criticism leveled against block grants is that they lack mechanisms to hold state and local administrators accountable for outcomes, a reproach the Obama administration has made about the Community Services Block Grant program. Finally, once categorical grants have been established, vested interests in Congress and the federal bureaucracy seek to preserve them. The legislators who enact them and the federal agencies that implement them invest heavily in defending them, ensuring their continuation. 46

Reagan’s “devolution revolution” contributed to raising the number of block grants from six in 1981 to fourteen in 1989. Block grants increased to twenty-four in 1999 during the Clinton administration and to twenty-six during Obama’s presidency, but by 2014 the total had dropped to twenty-one, accounting for 10 percent of total federal grant outlay. 47 In 1994, the Republican-controlled Congress passed legislation that called for block-granting Medicaid, which would have capped federal Medicaid spending. President Clinton vetoed the legislation. However, congressional efforts to convert Aid to Families with Dependent Children (AFDC) to a block grant succeeded. The Temporary Assistance for Needy Families (TANF) block grant replaced the AFDC in 1996, marking the first time the federal government transformed an entitlement program (which guarantees individual rights to benefits) into a block grant. Under the AFDC, the federal government had reimbursed states a portion of the costs they bore for running the program without placing a ceiling on the amount. In contrast, the TANF block grant caps annual federal funding at $16.489 billion and provides a yearly lump sum to each state, which it can use to manage its own program. Block grants have been championed for their cost-cutting effects.
By eliminating uncapped federal funding, as the TANF issue illustrates, the national government can reverse the escalating costs of federal grant programs. This point has not been lost on Speaker of the House Paul Ryan (R-WI), former chair of the House Budget Committee and the House Ways and Means Committee, who has tried multiple times but without success to convert Medicaid into a block grant, a reform he estimates could save the federal government upwards of $732 billion over ten years. 48

Another noteworthy characteristic of block grants is that their flexibility has been undermined over time as a result of creeping categorization, a process in which the national government places new administrative requirements on state and local governments or supplants block grants with new categorical grants. 49 Among the more common measures used to restrict block grants’ programmatic flexibility are set-asides (i.e., requiring a certain share of grant funds to be designated for a specific purpose) and cost ceilings (i.e., placing a cap on funding other purposes).

UNFUNDED MANDATES

Unfunded mandates are federal laws and regulations that impose obligations on state and local governments without fully compensating them for the administrative costs they incur. The federal government has used mandates increasingly since the 1960s to promote national objectives in policy areas such as the environment, civil rights, education, and homeland security. One type of mandate threatens civil and criminal penalties for state and local authorities that fail to comply with them across the board in all programs, while another provides for the suspension of federal grant money if the mandate is not followed. These types of mandates are commonly referred to as crosscutting mandates. Failure to fully comply with crosscutting mandates can result in punishments that normally include reduction of or suspension of federal grants, prosecution of officials, fines, or some combination of these penalties. If only one requirement is not met, state or local governments may not get any money at all. For example, Title VI of the Civil Rights Act of 1964 authorizes the federal government to withhold federal grants as well as file lawsuits against state and local officials for practicing racial discrimination. Finally, some mandates come in the form of partial preemption regulations, whereby the federal government sets national regulatory standards but delegates the enforcement to state and local governments. For example, the Clean Air Act sets air quality regulations but instructs states to design implementation plans to achieve such standards ( Figure 3.14 ). 50

The widespread use of federal mandates in the 1970s and 1980s provoked a backlash among state and local authorities, which culminated in the Unfunded Mandates Reform Act (UMRA) in 1995. The UMRA’s main objective has been to restrain the national government’s use of mandates by subjecting rules that impose unfunded requirements on state and local governments to greater procedural scrutiny. However, since the act’s implementation, states and local authorities have obtained limited relief. A new piece of legislation aims to take this approach further. The 2015 Unfunded Mandates and Information Transparency Act, HR 50, passed the House early in 2015 before being referred to the Senate, where it awaits committee consideration. 51 The number of mandates has continued to rise, and some have been especially costly to states and local authorities.
Consider the Real ID Act of 2005, a federal law designed to beef up homeland security. The law requires driver’s licenses and state-issued identification cards (DL/IDs) to contain standardized anti-fraud security features, specific data, and machine-readable technology. It also requires states to verify the identity of everyone being reissued DL/IDs. The Department of Homeland Security announced a phased enforcement of the law in 2013, which requires individuals to present compliant DL/IDs to board commercial airlines starting in 2016. The cost to states of reissuing DL/IDs, implementing new identity verification procedures, and redesigning DL/IDs is estimated to be $11 billion, and the federal government stands to reimburse only a small fraction. 52 Compliance with the federal law has been onerous for many states; only twenty-two were in full compliance with Real ID in 2015. 53

The continued use of unfunded mandates clearly contradicts new federalism’s call for giving states and local governments more flexibility in carrying out national goals. The temptation to use them appears to be difficult for the federal government to resist, however, as the UMRA’s poor track record illustrates. This is because mandates allow the federal government to fulfill its national priorities while passing most of the cost to the states, an especially attractive strategy for national lawmakers trying to cut federal spending. 54 Some leading federalism scholars have used the term coercive federalism to capture this aspect of contemporary U.S. federalism. 55 In other words, Washington has been as likely to use the stick of mandates as the carrot of grants to accomplish its national objectives. As a result, there have been more instances of confrontational interactions between the states and the federal government.

Milestone
The Clery Act

The Clery Act of 1990, formally the Jeanne Clery Disclosure of Campus Security Policy and Campus Crime Statistics Act, requires public and private colleges and universities that participate in federal student aid programs to disclose information about campus crime. The Act is named after Jeanne Clery, who in 1986 was raped and murdered by a fellow student in her Lehigh University dorm room. The U.S. Department of Education’s Clery Act Compliance Division is responsible for enforcing the 1990 Act. Specifically, to remain eligible for federal financial aid funds and avoid penalties, colleges and universities must comply with the following provisions:

Publish an annual security report and make it available to current and prospective students and employees;
Keep a public crime log that documents each crime on campus and is accessible to the public;
Disclose information about incidents of criminal homicide, sex offenses, robbery, aggravated assault, burglary, motor vehicle theft, arson, and hate crimes that occurred on or near campus;
Issue warnings about Clery Act crimes that pose a threat to students and employees;
Develop a campus community emergency response and notification strategy that is subject to annual testing;
Gather and report fire data to the federal government and publish an annual fire safety report;
Devise procedures to address reports of missing students living in on-campus housing.

For more about the Clery Act, see the Clery Center for Security on Campus, http://clerycenter.org.

Were you made aware of your campus’s annual security report before you enrolled?
Do you think reporting about campus security is appropriately regulated at the federal level under the Clery Act? Why or why not?

3.4 Competitive Federalism Today

Learning Objectives
By the end of this section, you will be able to:
Explain the dynamic of competitive federalism
Analyze some issues over which the states and federal government have contended

Certain functions clearly belong to the federal government, the state governments, and local governments. National security is a federal matter, the issuance of licenses is a state matter, and garbage collection is a local matter. One aspect of competitive federalism today is that some policy issues, such as immigration and the marital rights of gays and lesbians, have been redefined as the roles that states and the federal government play in them have changed. Another aspect of competitive federalism is that interest groups seeking to change the status quo can take a policy issue up to the federal government or down to the states if they feel it is to their advantage. Interest groups have used this strategy to promote their views on such issues as abortion, gun control, and the legal drinking age.

CONTENDING ISSUES

Immigration and marriage equality had not been the subject of much contention between states and the federal government until recent decades. Before then, it was understood that the federal government handled immigration and states determined the legality of same-sex marriage. This understanding of exclusive responsibilities has changed; today both levels of government play roles in these two policy areas.

Immigration federalism describes the gradual movement of states into the immigration policy domain. 56 Since the late 1990s, states have asserted a right to make immigration policy on the grounds that they are enforcing, not supplanting, the nation’s immigration laws, and that they are exercising their jurisdictional authority by restricting illegal immigrants’ access to education, health care, and welfare benefits, areas that fall under the states’ responsibilities. In 2005, twenty-five states had enacted a total of thirty-nine laws related to immigration; by 2014, forty-three states and Washington, DC, had passed a total of 288 immigration-related laws and resolutions. 57

Arizona has been one of the states at the forefront of immigration federalism. In 2010, it passed Senate Bill 1070, which sought to make it so difficult for illegal immigrants to live in the state that they would return to their native country, a strategy referred to as “attrition by enforcement.” 58 The federal government filed suit to block the Arizona law, contending that it conflicted with federal immigration laws. Arizona’s law has also divided society: some groups, like the Tea Party movement, have supported its tough stance against illegal immigrants, while other groups have opposed it for humanitarian and human-rights reasons (Figure 3.15). According to a poll of Latino voters in the state by Arizona State University researchers, 81 percent opposed the bill. 59 In 2012, in Arizona v. United States, the Supreme Court affirmed federal supremacy on immigration. 60
The court struck down three of the four central provisions of the Arizona law—namely, those allowing police officers to arrest an undocumented immigrant without a warrant if they had probable cause to think he or she had committed a crime that could lead to deportation, making it a crime to seek a job without proper immigration papers, and making it a crime to be in Arizona without valid immigration papers. The court upheld the “show me your papers” provision, which authorizes police officers to check the immigration status of anyone they stop or arrest who they suspect is an illegal immigrant. 61 However, in letting this provision stand, the court warned Arizona and other states with similar laws that they could face civil rights lawsuits if police officers applied it based on racial profiling. 62 All in all, Justice Anthony Kennedy’s opinion embraced an expansive view of the U.S. government’s authority to regulate immigration and aliens, describing it as broad and undoubted. That authority derived from the legislative power of Congress to “establish a uniform Rule of Naturalization,” enumerated in the Constitution.

Link to Learning
Arizona’s Senate Bill 1070 has been the subject of heated debate. Read the views of proponents and opponents of the law.

Marital rights for gays and lesbians have also changed significantly in recent years. By passing the Defense of Marriage Act (DOMA) in 1996, the federal government stepped into this policy issue. Not only did DOMA allow states to choose whether to recognize same-sex marriages, it also defined marriage as a union between a man and a woman, which meant that same-sex couples were denied various federal provisions and benefits—such as the right to file joint tax returns and receive Social Security survivor benefits. By 1997, more than half the states in the union had passed some form of legislation banning same-sex marriage. By 2006, two years after Massachusetts became the first state to recognize marriage equality, twenty-seven states had passed constitutional bans on same-sex marriage. In United States v. Windsor (2013), the Supreme Court changed the dynamic established by DOMA by ruling that the federal government had no authority to define marriage. The Court held that states possess the “historic and essential authority to define the marital relation,” and that the federal government’s involvement in this area “departs from this history and tradition of reliance on state law to define marriage.” 63

Insider Perspective
Edith Windsor: Icon of the Marriage Equality Movement

Edith Windsor, the plaintiff in the landmark Supreme Court case United States v. Windsor, became an icon of the marriage equality movement for her successful effort to overturn the DOMA provision that denied married same-sex couples a host of federal provisions and protections. In 2007, after having lived together since the late 1960s, Windsor and her partner Thea Spyer were married in Canada, where same-sex marriage was legal. After Spyer died in 2009, Windsor received a $363,053 federal tax bill on the estate Spyer had left her. Because her marriage was not valid under federal law, her request for the estate-tax exemption that applies to surviving spouses was denied. With the counsel of her lawyer, Roberta Kaplan, Windsor sued the federal government and won (Figure 3.16). Because of the Windsor decision, federal laws could no longer discriminate against same-sex married couples.
What is more, marriage equality became a reality in a growing number of states as federal court after federal court overturned state constitutional bans on same-sex marriage. The Windsor case gave federal judges the clarity they needed from the U.S. Supreme Court. James Esseks, director of the American Civil Liberties Union’s (ACLU) Lesbian Gay Bisexual Transgender & AIDS Project, summarizes the significance of the case as follows: “Part of what’s gotten us to this exciting moment in American culture is not just Edie’s lawsuit but the story of her life. The love at the core of that story, as well as the injustice at its end, is part of what has moved America on this issue so profoundly.” 64 In the final analysis, same-sex marriage is a protected constitutional right, as decided by the U.S. Supreme Court, which took up the issue again when it heard Obergefell v. Hodges in 2015.

What role do you feel the story of Edith Windsor played in reframing the debate over same-sex marriage? How do you think it changed the federal government’s view of its role in legislation regarding same-sex marriage relative to the role of the states?

Following the Windsor decision, the number of states that recognized same-sex marriages increased rapidly, as illustrated in Figure 3.17. In 2015, marriage equality was recognized in thirty-six states plus Washington, DC, up from seventeen in 2013. The diffusion of marriage equality across states was driven in large part by federal district and appeals courts, which have used the rationale underpinning the Windsor case (i.e., laws cannot discriminate between same-sex and opposite-sex couples, based on the equal protection clause of the Fourteenth Amendment) to invalidate state bans on same-sex marriage. The Supreme Court’s 2014 decision not to hear a collection of cases from four different states essentially affirmed same-sex marriage in thirty states. And in 2015, the Supreme Court made same-sex marriage a constitutional right nationwide in Obergefell v. Hodges.

In sum, as the immigration and marriage equality examples illustrate, constitutional disputes have arisen as states and the federal government have sought to reposition themselves on certain policy issues, disputes that the federal courts have had to sort out.

STRATEGIZING ABOUT NEW ISSUES

Mothers Against Drunk Driving (MADD) was established in 1980 by a woman whose thirteen-year-old daughter had been killed by a drunk driver. The organization lobbied state legislators to raise the drinking age and impose tougher penalties, but without success. States with lower drinking ages had an economic interest in maintaining them because they lured youths from neighboring states with more restrictive consumption laws. So MADD redirected its lobbying efforts at Congress, hoping to find sympathetic representatives willing to take action. In 1984, the federal government passed the National Minimum Drinking Age Act (NMDAA), a crosscutting mandate that gradually reduced federal highway grant money to any state that failed to increase the legal age for alcohol purchase and possession to twenty-one. After the states lost a legal battle against the NMDAA, all were in compliance by 1988. 65 By creating two institutional access points—the federal and state governments—the U.S. federal system enables interest groups such as MADD to strategize about how best to achieve their policy objectives.
The term venue shopping refers to a strategy in which interest groups select the level and branch of government (legislature, judiciary, or executive) they calculate will be most advantageous for them. 66 If one institutional venue proves unreceptive to an advocacy group’s policy goal, as state legislators were to MADD, the group will attempt to steer its issue to a more responsive venue.

The strategy anti-abortion advocates have used in recent years is another example of venue shopping. In their attempts to limit abortion rights in the wake of Roe v. Wade, the 1973 Supreme Court decision that made abortion legal nationwide, anti-abortion advocates initially targeted Congress in hopes of obtaining restrictive legislation. 67 Lack of progress at the national level prompted them to shift their focus to state legislators, where their advocacy efforts have been more successful. By 2015, for example, thirty-eight states required some form of parental involvement in a minor’s decision to have an abortion, forty-six states allowed individual health-care providers to refuse to participate in abortions, and thirty-two states prohibited the use of public funds to carry out an abortion except when the woman’s life is in danger or the pregnancy is the result of rape or incest. While 31 percent of U.S. women of childbearing age resided in one of the thirteen states that had passed restrictive abortion laws in 2000, by 2013, about 56 percent of such women resided in one of the twenty-seven states where abortion is restricted. 68

3.5 Advantages and Disadvantages of Federalism

Learning Objectives
By the end of this section, you will be able to:
Discuss the advantages of federalism
Explain the disadvantages of federalism

The federal design of our Constitution has had a profound effect on U.S. politics. Several positive and negative attributes of federalism have manifested themselves in the U.S. political system.

THE BENEFITS OF FEDERALISM

Among the merits of federalism are that it promotes policy innovation and political participation and accommodates diversity of opinion. On the subject of policy innovation, Supreme Court Justice Louis Brandeis observed in 1932 that “a single courageous state may, if its citizens choose, serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country.” 69 What Brandeis meant was that states could harness their constitutional authority to engage in policy innovations that might eventually be diffused to other states and at the national level. For example, a number of New Deal breakthroughs, such as child labor laws, were inspired by state policies. Prior to the passage of the Nineteenth Amendment, several states had already granted women the right to vote. California has led the way in establishing standards for fuel emissions and other environmental policies (Figure 3.18). Recently, the health insurance exchanges run by Connecticut, Kentucky, Rhode Island, and Washington have served as models for other states seeking to improve the performance of their exchanges. 70

Another advantage of federalism is that because our federal system creates two levels of government with the capacity to take action, failure to attain a desired policy goal at one level can be offset by successfully securing the support of elected representatives at another level. Thus, individuals, groups, and social movements are encouraged to actively participate in and help shape public policy.

Get Connected!
Federalism and Political Office

Thinking of running for elected office? Well, you have several options. As Table 3.1 shows, there are a total of 510,682 elected offices at the federal, state, and local levels. Elected representatives in municipal and township governments account for a little more than half the total number of elected officials in the United States. Political careers rarely start at the national level. In fact, a very small share of politicians at the subnational level transition to the national stage as representatives, senators, vice presidents, or presidents.

Elected Officials at the Federal, State, and Local Levels

                                   Number of Elective Bodies    Number of Elected Officials
Federal Government                 1
  Executive branch                                              2
  U.S. Senate                                                   100
  U.S. House of Representatives                                 435
State Government                   50
  State legislatures                                            7,382
  Statewide offices                                             1,036
  State boards                                                  1,331
Local Government
  County governments               3,034                        58,818
  Municipal governments            19,429                       135,531
  Town governments                 16,504                       126,958
  School districts                 13,506                       95,000
  Special districts                35,052                       84,089
Total                              87,576                       510,682

Table 3.1 This table lists the number of elected bodies and elected officials at the federal, state, and local levels. 71

If you are interested in serving the public as an elected official, there are more opportunities to do so at the local and state levels than at the national level. As an added incentive for setting your sights on the subnational stage, consider the following: whereas only 28 percent of U.S. adults trusted Congress in 2014, about 62 percent trusted their state governments and 72 percent had confidence in their local governments. 72

If you ran for public office, what problems would you most want to solve? What level of government would best enable you to solve them, and why?

The system of checks and balances in our political system often prevents the federal government from imposing uniform policies across the country. As a result, states and local communities have the latitude to address policy issues based on the specific needs and interests of their citizens. The diversity of public viewpoints across states is manifested by differences in the way states handle access to abortion, distribution of alcohol, gun control, and social welfare benefits, for example.

THE DRAWBACKS OF FEDERALISM

Federalism also comes with drawbacks. Chief among them are economic disparities across states, race-to-the-bottom dynamics (i.e., states competing to attract business by lowering taxes and regulations), and the difficulty of taking action on issues of national importance.

Stark economic differences across states have a profound effect on the well-being of citizens. For example, in 2014, Maryland had the highest median household income ($73,971), while Mississippi had the lowest ($39,680). 73 There are also huge disparities in school funding across states. In 2013, New York spent $19,818 per student for elementary and secondary education, while Utah spent $6,555. 74 Furthermore, health-care access, costs, and quality vary greatly across states. 75 Proponents of social justice contend that federalism has tended to obstruct national efforts to even out these disparities effectively.

Link to Learning
The National Education Association discusses the problem of inequality in the educational system of the United States. Read its proposed solution and decide whether you agree.

The economic strategy of using race-to-the-bottom tactics to compete with other states in attracting new business growth also carries a social cost.
For example, workers’ safety and pay can suffer as workplace regulations are lifted, and the reduction in payroll taxes for employers has left a number of states with underfunded unemployment insurance programs. 76 Nineteen states have also opted not to expand Medicaid coverage to more of their residents, as the Patient Protection and Affordable Care Act of 2010 encourages them to do, for fear it will raise state public spending and increase employers’ cost of employee benefits, despite provisions that the federal government will pick up nearly all of the cost of the expansion. 77 More than half of these states are in the South.

The federal design of our Constitution and the system of checks and balances have jeopardized or outright blocked federal responses to important national issues. President Roosevelt’s efforts to combat the scourge of the Great Depression were initially struck down by the Supreme Court. More recently, President Obama’s effort to make health insurance accessible to more Americans under the Affordable Care Act immediately ran into legal challenges from some states, 78 but it has been supported by the Supreme Court so far. However, the federal government’s ability to defend the voting rights of citizens suffered a major setback when the Supreme Court in 2013 struck down a key provision of the Voting Rights Act of 1965. 79 No longer are the nine states with histories of racial discrimination in their voting processes required to submit proposed changes to those processes to the federal government for approval.
introduction_to_sociology
Learning Objectives

21.1 Collective Behavior
Describe different forms of collective behavior
Differentiate between types of crowds
Discuss emergent norm, value-added, and assembling perspective analyses of collective behavior

21.2 Social Movements
Demonstrate awareness of social movements on a state, national, and global level
Distinguish between different types of social movements
Identify stages of social movements
Discuss theoretical perspectives on social movements, like resource mobilization, framing, and new social movement theory

21.3 Social Change
Explain how technology, social institutions, population, and the environment can bring about social change
Discuss the importance of modernization in relation to social change

Introduction to Social Movements and Social Change

In January 2011, Egypt erupted in protests against the stifling rule of longtime President Hosni Mubarak. The protests were sparked in part by the revolution in Tunisia, and, in turn, they inspired demonstrations throughout the Middle East in Libya, Syria, and beyond. This wave of protest movements traveled across national borders and seemed to spread like wildfire. There have been countless causes and factors in play in these protests and revolutions, but many have noted the internet-savvy youth of these countries. Some believe that the adoption of social technology—from Facebook pages to cell phone cameras—helped to organize and document the movement and contributed directly to the wave of protests called Arab Spring. The combination of deep unrest and disruptive technologies meant these social movements were ready to rise up and seek change.

What do Arab Spring, Occupy Wall Street, People for the Ethical Treatment of Animals (PETA), the anti-globalization movement, and the Tea Party have in common? Not much, you might think. But although they may be left-wing or right-wing, radical or conservative, highly organized or very diffused, they are all examples of social movements. Social movements are purposeful, organized groups striving to work toward a common goal. These groups might be attempting to create change (Occupy Wall Street, Arab Spring), to resist change (anti-globalization movement), or to provide a political voice to those otherwise disenfranchised (civil rights movements). Social movements, along with technology, social institutions, population, and environmental changes, create social change.

Consider the effect of the 2010 BP oil spill in the Gulf of Mexico. This disaster exemplifies how a change in the environment, coupled with the use of technology to fix that change, combined with anti-oil sentiment in social movements and social institutions, led to changes in offshore oil drilling policies. Subsequently, new changes occurred in an effort to support the Gulf Coast’s rebuilding. From grassroots marketing campaigns that promote consumption of local seafood to municipal governments needing to coordinate with federal cleanups, organizations develop and shift to meet the changing needs of society. Just as we saw with the Deepwater Horizon oil spill, social movements have, throughout history, influenced societal shifts. Sociology looks at these moments through the lenses of three major perspectives.

The functionalist perspective looks at the big picture, focusing on the way that all aspects of society are integral to the continued health and viability of the whole.
When studying social movements, a functionalist might focus on why social movements develop, why they continue to exist, and what social purposes they serve. For example, movements must change their goals as initial aims are met or they risk dissolution. Several organizations associated with the anti-polio industry folded after the creation of an effective vaccine that made the disease virtually disappear. Can you think of another social movement whose goals were met? What about one whose goals have changed over time?

The conflict perspective focuses on the creation and reproduction of inequality. Someone applying the conflict perspective would likely be interested in how social movements are generated through systematic inequality, and how social change is constant, speedy, and unavoidable. In fact, the conflict that this perspective sees as inherent in social relations drives social change. For example, the National Association for the Advancement of Colored People (NAACP) was founded in 1908. Partly created in response to the horrific lynchings occurring in the southern United States, the organization fought to secure the constitutional rights guaranteed in the 13th, 14th, and 15th amendments, which established an end to slavery, equal protection under the law, and universal male suffrage (NAACP 2011). While those goals have been achieved, the organization remains active today, continuing to fight against inequalities in civil rights and to remedy discriminatory practices.

The symbolic interaction perspective studies the day-to-day interaction of social movements, the meanings individuals attach to involvement in such movements, and the individual experience of social change. An interactionist studying social movements might address social movement norms and tactics as well as individual motivations. For example, social movements might be generated through a feeling of deprivation or discontent, but people might actually join social movements for a variety of reasons that have nothing to do with the cause. They might want to feel important, or they know someone in the movement they want to support, or they just want to be a part of something. Have you ever been motivated to show up for a rally or sign a petition because your friends invited you? Would you have been as likely to get involved otherwise?
[ { "answer": { "ans_choice": 0, "ans_text": "National Football League" }, "bloom": "3", "hl_context": "McCarthy and Zald ( 1977 ) conceptualize resource mobilization theory as a way to explain movement success in terms of its ability to acquire resources and mobilize individuals . <hl> For example , PETA , a social movement organization , is in competition with Greenpeace and the Animal Liberation Front ( ALF ) , two other social movement organizations . <hl> Taken together , along with all other social movement organizations working on animals rights issues , these similar organizations constitute a social movement industry . Multiple social movement industries in a society , though they may have widely different constituencies and goals , constitute a society's social movement sector . Every social movement organization ( a single social movement group ) within the social movement sector is competing for your attention , your time , and your resources . The chart below shows the relationship between these components . The conflict perspective focuses on the creation and reproduction of inequality . <hl> Someone applying the conflict perspective would likely be interested in how social movements are generated through systematic inequality , and how social change is constant , speedy , and unavoidable . <hl> In fact , the conflict that this perspective sees as inherent in social relations drives social change . <hl> For example , the National Association for the Advancement of Colored People ( NAACP ) was founded in 1908 . <hl> Partly created in response to the horrific lynchings occurring in the southern United States , the organization fought to secure the constitutional rights guaranteed in the 13th , 14th , and 15th amendments , which established an end to slavery , equal protection under the law , and universal male suffrage ( NAACP 2011 ) . While those goals have been achieved , the organization remains active today , continuing to fight against inequalities in civil rights and to remedy discriminatory practices . <hl> What do Arab Spring , Occupy Wall Street , People for the Ethical Treatment of Animals ( PETA ) , the anti-globalization movement , and the Tea Party have in common ? <hl> Not much , you might think . <hl> But although they may be left-wing or right-wing , radical or conservative , highly organized or very diffused , they are all examples of social movements . <hl>", "hl_sentences": "For example , PETA , a social movement organization , is in competition with Greenpeace and the Animal Liberation Front ( ALF ) , two other social movement organizations . Someone applying the conflict perspective would likely be interested in how social movements are generated through systematic inequality , and how social change is constant , speedy , and unavoidable . For example , the National Association for the Advancement of Colored People ( NAACP ) was founded in 1908 . What do Arab Spring , Occupy Wall Street , People for the Ethical Treatment of Animals ( PETA ) , the anti-globalization movement , and the Tea Party have in common ? 
But although they may be left-wing or right-wing , radical or conservative , highly organized or very diffused , they are all examples of social movements .", "question": { "cloze_format": "The organization that is not an example of a social movement is ___.", "normal_format": "Which of the following organizations is not an example of a social movement?", "question_choices": [ "National Football League", "Tea Party", "Greenpeace", "NAACP" ], "question_id": "fs-id2080514", "question_text": "Which of the following organizations is not an example of a social movement?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "What motivates inequitably treated people to join a movement" }, "bloom": "1", "hl_context": "The symbolic interaction perspective studies the day-to-day interaction of social movements , the meanings individuals attach to involvement in such movements , and the individual experience of social change . An interactionist studying social movements might address social movement norms and tactics as well as individual motivations . <hl> For example , social movements might be generated through a feeling of deprivation or discontent , but people might actually join social movements for a variety of reasons that have nothing to do with the cause . <hl> They might want to feel important , or they know someone in the movement they want to support , or they just want to be a part of something . Have you ever been motivated to show up for a rally or sign a petition because your friends invited you ? Would you have been as likely to get involved otherwise ? <hl> The conflict perspective focuses on the creation and reproduction of inequality . <hl> <hl> Someone applying the conflict perspective would likely be interested in how social movements are generated through systematic inequality , and how social change is constant , speedy , and unavoidable . <hl> <hl> In fact , the conflict that this perspective sees as inherent in social relations drives social change . <hl> For example , the National Association for the Advancement of Colored People ( NAACP ) was founded in 1908 . Partly created in response to the horrific lynchings occurring in the southern United States , the organization fought to secure the constitutional rights guaranteed in the 13th , 14th , and 15th amendments , which established an end to slavery , equal protection under the law , and universal male suffrage ( NAACP 2011 ) . While those goals have been achieved , the organization remains active today , continuing to fight against inequalities in civil rights and to remedy discriminatory practices .", "hl_sentences": "For example , social movements might be generated through a feeling of deprivation or discontent , but people might actually join social movements for a variety of reasons that have nothing to do with the cause . The conflict perspective focuses on the creation and reproduction of inequality . Someone applying the conflict perspective would likely be interested in how social movements are generated through systematic inequality , and how social change is constant , speedy , and unavoidable . 
In fact , the conflict that this perspective sees as inherent in social relations drives social change .", "question": { "cloze_format": "Sociologists using conflict perspective might study ___ .", "normal_format": "Sociologists using conflict perspective might study what?", "question_choices": [ "How social movements develop", "What social purposes a movement serves", "What motivates inequitably treated people to join a movement", "What individuals hope to gain from taking part in a social movement" ], "question_id": "fs-id1806415", "question_text": "Sociologists using conflict perspective might study what?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "A group of people interested in hearing an author speak" }, "bloom": null, "hl_context": "<hl> Flash mobs are examples of collective behavior , non-institutionalized activity in which several people voluntarily engage . <hl> Other examples of collective behavior can include anything from a group of commuters traveling home from work to the trend toward adopting the Justin Bieber hair flip . In short , it can be any group behavior that is not mandated or regulated by an institution . <hl> There are four primary forms of collective behavior : the crowd , the mass , the public , and social movements . <hl>", "hl_sentences": "Flash mobs are examples of collective behavior , non-institutionalized activity in which several people voluntarily engage . There are four primary forms of collective behavior : the crowd , the mass , the public , and social movements .", "question": { "cloze_format": "___ is an example of collective behavior.", "normal_format": "Which of the following is an example of collective behavior?", "question_choices": [ "A soldier questioning orders", "A group of people interested in hearing an author speak", "A class going on a field trip", "Going shopping with a friend" ], "question_id": "fs-id1958312", "question_text": "Which of the following is an example of collective behavior?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "an acting crowd" }, "bloom": "1", "hl_context": "It takes a fairly large number of people in close proximity to form a crowd ( Lofland 1993 ) . Examples include a group of people attending an Ani DiFranco concert , tailgating at a Patriots game , or attending a worship service . Turner and Killian ( 1993 ) identified four types of crowds . Casual crowds consist of people who are in the same place at the same time , but who aren ’ t really interacting , such as people standing in line at the post office . Conventional crowds are those who come together for a scheduled event occurring regularly , like a religious service . Expressive crowds are people who join together to express emotion , often at funerals , weddings , or the like . <hl> The final type , acting crowds , focus on a specific goal or action , such as a protest movement or riot . <hl> In addition to the different types of crowds , collective groups can also be identified in two other ways . A mass is a relatively large number of people with a common interest , though they may not be in close proximity ( Lofland 1993 ) , such as players of the popular Facebook game Farmville . A public , on the other hand , is an unorganized , relatively diffused group of people who share ideas , such as the Libertarian political party . While these two types of crowds are similar , they are not the same . 
To distinguish between them , remember that members of a mass share interests whereas members of a public share ideas . <hl> In January 2011 , Egypt erupted in protests against the stifling rule of longtime President Hosni Mubarak . <hl> The protests were sparked in part by the revolution in Tunisia , and , in turn , they inspired demonstrations throughout the Middle East in Libya , Syria , and beyond . This wave of protest movements traveled across national borders and seemed to spread like wildfire . There have been countless causes and factors in play in these protests and revolutions , but many have noted the internet-savvy youth of these countries . Some believe that the adoption of social technology — from Facebook pages to cell phone cameras — that helped to organize and document the movement contributed directly to the wave of protests called Arab Spring . The combination of deep unrest and disruptive technologies meant these social movements were ready to rise up and seek change .", "hl_sentences": "The final type , acting crowds , focus on a specific goal or action , such as a protest movement or riot . In January 2011 , Egypt erupted in protests against the stifling rule of longtime President Hosni Mubarak .", "question": { "cloze_format": "The protesters at the Egypt uprising rally were ___ .", "normal_format": "Who were the protesters at the Egypt uprising rally?", "question_choices": [ "a casual crowd", "a conventional crowd", "a mass", "an acting crowd" ], "question_id": "fs-id1739596", "question_text": "The protesters at the Egypt uprising rally were:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "able to develop their own definition of the situation" }, "bloom": null, "hl_context": "Once individuals find themselves in a situation ungoverned by previously established norms , they interact in small groups to develop new guidelines on how to behave . <hl> According to the emergent-norm perspective , crowds are not viewed as irrational , impulsive , uncontrolled groups . <hl> <hl> Instead , norms develop and are accepted as they fit the situation . <hl> While this theory offers insight into why norms develop , it leaves undefined the nature of norms , how they come to be accepted by the crowd , and how they spread through the crowd .", "hl_sentences": "According to the emergent-norm perspective , crowds are not viewed as irrational , impulsive , uncontrolled groups . Instead , norms develop and are accepted as they fit the situation .", "question": { "cloze_format": "According to emergent-norm theory, crowds are ___.", "normal_format": "What are crowds according to emergent-norm theory?", "question_choices": [ "irrational and impulsive", "often misinterpreted and misdirected", "able to develop their own definition of the situation", "prone to criminal behavior" ], "question_id": "fs-id1744995", "question_text": "According to emergent-norm theory, crowds are:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "precipitating factors" }, "bloom": "3", "hl_context": "Let ’ s consider a hypothetical example of these conditions . In structure conduciveness ( awareness and opportunity ) , a group of students gathers on the campus quad . Structural strain emerges when they feel stress concerning their high tuition costs . If the crowd decides that the latest tuition hike is the fault of the Chancellor , and that she ’ ll lower tuition if they protest , then growth and spread of a generalized belief has occurred . 
<hl> A precipitation factor arises when campus security appears to disperse the crowd , using pepper spray to do so . <hl> <hl> When the student body president sits down and passively resists attempts to stop the protest , this represents mobilization of action . <hl> Finally , when local police arrive and direct students back to their dorms , we ’ ve seen agents of social control in action .", "hl_sentences": "A precipitation factor arises when campus security appears to disperse the crowd , using pepper spray to do so . When the student body president sits down and passively resists attempts to stop the protest , this represents mobilization of action .", "question": { "cloze_format": "A boy throwing rocks during a demonstration might be an example of ___________.", "normal_format": "What might a boy throwing rocks during a demonstration be an example of?", "question_choices": [ "structural conduciveness", "structural strain", "precipitating factors", "mobilization for action" ], "question_id": "fs-id1536336", "question_text": "A boy throwing rocks during a demonstration might be an example of ___________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "resource mobilization " }, "bloom": null, "hl_context": "<hl> McCarthy and Zald ( 1977 ) conceptualize resource mobilization theory as a way to explain movement success in terms of its ability to acquire resources and mobilize individuals . <hl> For example , PETA , a social movement organization , is in competition with Greenpeace and the Animal Liberation Front ( ALF ) , two other social movement organizations . Taken together , along with all other social movement organizations working on animals rights issues , these similar organizations constitute a social movement industry . <hl> Multiple social movement industries in a society , though they may have widely different constituencies and goals , constitute a society's social movement sector . <hl> <hl> Every social movement organization ( a single social movement group ) within the social movement sector is competing for your attention , your time , and your resources . <hl> The chart below shows the relationship between these components . <hl> Resource Mobilization Social movements will always be a part of society , and people will always weigh their options and make rational choices about which movements to follow . <hl> <hl> As long as social movements wish to thrive , they must find resources ( such as money , people , and plans ) for how to meet their goals . <hl> Not only will social movements compete for our attention with many other concerns — from the basic ( our jobs or our need to feed ourselves ) to the broad ( video games , sports , or television ) , but they also compete with each other . For any individual , it may be a simple matter to decide you want to spend your time and money on animal shelters and Republican politics versus homeless shelters and Democrats . But which animal shelter , and which Republican candidate ? Social movements are competing for a piece of finite resources , and the field is growing more crowded all the time .", "hl_sentences": "McCarthy and Zald ( 1977 ) conceptualize resource mobilization theory as a way to explain movement success in terms of its ability to acquire resources and mobilize individuals . Multiple social movement industries in a society , though they may have widely different constituencies and goals , constitute a society's social movement sector . 
Every social movement organization ( a single social movement group ) within the social movement sector is competing for your attention , your time , and your resources . Resource Mobilization Social movements will always be a part of society , and people will always weigh their options and make rational choices about which movements to follow . As long as social movements wish to thrive , they must find resources ( such as money , people , and plans ) for how to meet their goals .", "question": { "cloze_format": "If we divide social movements according to their position among all social movements in a society, we are using the __________ theory to understand social movements.", "normal_format": "If we divide social movements according to their position among all social movements in a society, which theory are we using to understand social movements?", "question_choices": [ "framing", "new social movement ", "resource mobilization ", "value-added " ], "question_id": "fs-id1343539", "question_text": "If we divide social movements according to their position among all social movements in a society, we are using the __________ theory to understand social movements." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "social movement industry" }, "bloom": "1", "hl_context": "McCarthy and Zald ( 1977 ) conceptualize resource mobilization theory as a way to explain movement success in terms of its ability to acquire resources and mobilize individuals . <hl> For example , PETA , a social movement organization , is in competition with Greenpeace and the Animal Liberation Front ( ALF ) , two other social movement organizations . <hl> <hl> Taken together , along with all other social movement organizations working on animals rights issues , these similar organizations constitute a social movement industry . <hl> Multiple social movement industries in a society , though they may have widely different constituencies and goals , constitute a society's social movement sector . Every social movement organization ( a single social movement group ) within the social movement sector is competing for your attention , your time , and your resources . The chart below shows the relationship between these components . <hl> What do Arab Spring , Occupy Wall Street , People for the Ethical Treatment of Animals ( PETA ) , the anti-globalization movement , and the Tea Party have in common ? <hl> Not much , you might think . <hl> But although they may be left-wing or right-wing , radical or conservative , highly organized or very diffused , they are all examples of social movements . <hl>", "hl_sentences": "For example , PETA , a social movement organization , is in competition with Greenpeace and the Animal Liberation Front ( ALF ) , two other social movement organizations . Taken together , along with all other social movement organizations working on animals rights issues , these similar organizations constitute a social movement industry . What do Arab Spring , Occupy Wall Street , People for the Ethical Treatment of Animals ( PETA ) , the anti-globalization movement , and the Tea Party have in common ? 
But although they may be left-wing or right-wing , radical or conservative , highly organized or very diffused , they are all examples of social movements .", "question": { "cloze_format": "While PETA is a social movement organization, taken together, the animal rights social movement organizations PETA, ALF, and Greenpeace are a(n) __________.", "normal_format": "While PETA is a social movement organization, taken together, what are the animal rights social movement organizations PETA, ALF, and Greenpeace?", "question_choices": [ "social movement industry", "social movement sector", "social movement party", "social industry" ], "question_id": "fs-id1703662", "question_text": "While PETA is a social movement organization, taken together, the animal rights social movement organizations PETA, ALF, and Greenpeace are a(n) __________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "the collective action of individuals working together in an attempt to establish new norms beliefs, or values" }, "bloom": "1", "hl_context": "<hl> Most theories of social movements are called collective action theories , indicating the purposeful nature of this form of collective behavior . <hl> The following three theories are but a few of the many classic and modern theories developed by social scientists .", "hl_sentences": "Most theories of social movements are called collective action theories , indicating the purposeful nature of this form of collective behavior .", "question": { "cloze_format": "Social movements are ___ .", "normal_format": "What are social movements?", "question_choices": [ "disruptive and chaotic challenges to the government", "ineffective mass movements", "the collective action of individuals working together in an attempt to establish new norms beliefs, or values", "the singular activities of a collection of groups working to challenge the status quo" ], "question_id": "fs-id1446314", "question_text": "Social movements are:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "frame transformation" }, "bloom": "1", "hl_context": "<hl> Transformation involves a complete revision of goals . <hl> <hl> Once a movement has succeeded , it risks losing relevance . <hl> <hl> If it wants to remain active , the movement has to change with the transformation or risk becoming obsolete . <hl> For instance , when the women ’ s suffrage movement gained women the right to vote , they turned their attention to equal rights and campaigning to elect women . <hl> In short , it is an evolution to the existing diagnostic or prognostic frames generally involving a total conversion of movement . <hl>", "hl_sentences": "Transformation involves a complete revision of goals . Once a movement has succeeded , it risks losing relevance . If it wants to remain active , the movement has to change with the transformation or risk becoming obsolete . 
In short , it is an evolution to the existing diagnostic or prognostic frames generally involving a total conversion of movement .", "question": { "cloze_format": "If a movement claims that the best way to reverse climate change is to reduce carbon emissions by outlawing privately owned cars, “outlawing cars” is the ________.", "normal_format": "If a movement claims that the best way to reverse climate change is to reduce carbon emissions by outlawing privately owned cars, what is the “outlawing cars”?", "question_choices": [ "prognostic framing", "diagnostic framing", "motivational framing", "frame transformation" ], "question_id": "fs-id1313604", "question_text": "If a movement claims that the best way to reverse climate change is to reduce carbon emissions by outlawing privately owned cars, “outlawing cars” is the ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "the digital divide" }, "bloom": "3", "hl_context": "Of course there are drawbacks . <hl> The increasing gap between the technological haves and have-nots – – sometimes called the digital divide – – occurs both locally and globally . <hl> Further , there are added security risks : the loss of privacy , the risk of total system failure ( like the Y2K panic at the turn of the millennium ) , and the added vulnerability created by technological dependence . Think about the technology that goes into keeping nuclear power plants running safely and securely . What happens if an earthquake or other disaster , like in the case of Japan ’ s Fukushima plant , causes the technology to malfunction , not to mention the possibility of a systematic attack to our nation ’ s relatively vulnerable technological infrastructure ?", "hl_sentences": "The increasing gap between the technological haves and have-nots – – sometimes called the digital divide – – occurs both locally and globally .", "question": { "cloze_format": "Children in peripheral nations have little to no daily access to computers and the internet, while children in core nations are constantly exposed to this technology. This is an example of (the) ___ .", "normal_format": "Children in peripheral nations have little to no daily access to computers and the internet, while children in core nations are constantly exposed to this technology. What is this an example of?", "question_choices": [ "the digital divide", "human ecology", "modernization theory", "dependency theory" ], "question_id": "fs-id1786867", "question_text": "Children in peripheral nations have little to no daily access to computers and the internet, while children in core nations are constantly exposed to this technology. This is an example of:" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "Population growth" }, "bloom": null, "hl_context": "Changes to technology , social institutions , population , and the environment , alone or in some combination , create change . Below , we will discuss how these act as agents of social change and we ’ ll examine real-world examples . <hl> We will focus on four agents of change recognized by social scientists : technology , social institutions , population , and the environment . 
<hl>", "hl_sentences": "We will focus on four agents of change recognized by social scientists : technology , social institutions , population , and the environment .", "question": { "cloze_format": "When sociologists think about technology as an agent of social change, ___ is not an example.", "normal_format": "When sociologists think about technology as an agent of social change, which of the following is not an example?", "question_choices": [ "Population growth", "Medical advances", "The Internet", "Genetically engineered food" ], "question_id": "fs-id1909276", "question_text": "When sociologists think about technology as an agent of social change, which of the following is not an example?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "modernization" }, "bloom": "3", "hl_context": "<hl> Modernization describes the processes that increase the amount of specialization and differentiation of structure in societies resulting in the move from an undeveloped society to developed , technologically driven society ( Irwin 1975 ) . <hl> By this definition , the level of modernity within a society is judged by the sophistication of its technology , particularly as it relates to infrastructure , industry , and the like . However , it is important to note the inherent ethnocentric bias of such assessment . Why do we assume that those living in semi-peripheral and peripheral nations would find it so wonderful to become more like the core nations ? Is modernization always positive ?", "hl_sentences": "Modernization describes the processes that increase the amount of specialization and differentiation of structure in societies resulting in the move from an undeveloped society to developed , technologically driven society ( Irwin 1975 ) .", "question": { "cloze_format": "China is undergoing a shift in industry, increasing labor specialization and the amount of differentiation present in the social structure. This exemplifies ___ .", "normal_format": "China is undergoing a shift in industry, increasing labor specialization and the amount of differentiation present in the social structure. What does this exemplify?", "question_choices": [ "human ecology", "dependency theory", "modernization", "conflict perspective" ], "question_id": "fs-id1912151", "question_text": "China is undergoing a shift in industry, increasing labor specialization and the amount of differentiation present in the social structure. This exemplifies:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "all of the above" }, "bloom": "1", "hl_context": "Further , the internet bought us information , but at a cost . The morass of information means that there is as much poor information available as trustworthy sources . <hl> There is a delicate line to walk when core nations seek to bring the assumed benefits of modernization to more traditional cultures . <hl> <hl> For one , there are obvious pro-capitalist biases that go into such attempts , and it is short-sighted for western governments and social scientists to assume all other countries aspire to follow in their footsteps . <hl> <hl> Additionally , there can be a kind of neo-liberal defense of rural cultures , ignoring the often crushing poverty and diseases that exist in peripheral nations and focusing only on a nostalgic mythology of the happy peasant . <hl> <hl> It takes a very careful hand to understand both the need for cultural identity and preservation as well as the hopes for future growth . 
<hl> Modernization describes the processes that increase the amount of specialization and differentiation of structure in societies resulting in the move from an undeveloped society to developed , technologically driven society ( Irwin 1975 ) . By this definition , the level of modernity within a society is judged by the sophistication of its technology , particularly as it relates to infrastructure , industry , and the like . <hl> However , it is important to note the inherent ethnocentric bias of such assessment . <hl> <hl> Why do we assume that those living in semi-peripheral and peripheral nations would find it so wonderful to become more like the core nations ? <hl> <hl> Is modernization always positive ? <hl>", "hl_sentences": "There is a delicate line to walk when core nations seek to bring the assumed benefits of modernization to more traditional cultures . For one , there are obvious pro-capitalist biases that go into such attempts , and it is short-sighted for western governments and social scientists to assume all other countries aspire to follow in their footsteps . Additionally , there can be a kind of neo-liberal defense of rural cultures , ignoring the often crushing poverty and diseases that exist in peripheral nations and focusing only on a nostalgic mythology of the happy peasant . It takes a very careful hand to understand both the need for cultural identity and preservation as well as the hopes for future growth . However , it is important to note the inherent ethnocentric bias of such assessment . Why do we assume that those living in semi-peripheral and peripheral nations would find it so wonderful to become more like the core nations ? Is modernization always positive ?", "question": { "cloze_format": "Core nations that work to propel peripheral nations toward modernization need to be aware of ___ .", "normal_format": "What do core nations that work to propel peripheral nations toward modernization need to be aware of?", "question_choices": [ "preserving peripheral nation cultural identity", "preparing for pitfalls that come with modernization", "avoiding hegemonistic assumptions about modernization", "all of the above" ], "question_id": "fs-id1305474", "question_text": "Core nations that work to propel peripheral nations toward modernization need to be aware of:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "the environment" }, "bloom": null, "hl_context": "<hl> Changes to technology , social institutions , population , and the environment , alone or in some combination , create change . <hl> Below , we will discuss how these act as agents of social change and we ’ ll examine real-world examples . We will focus on four agents of change recognized by social scientists : technology , social institutions , population , and the environment . Social movements are purposeful , organized groups striving to work toward a common goal . These groups might be attempting to create change ( Occupy Wall Street , Arab Spring ) , to resist change ( anti-globalization movement ) , or to provide a political voice to those otherwise disenfranchised ( civil rights movements ) . <hl> Social movements , along with technology , social institutions , population , and environmental changes , create social change . <hl>", "hl_sentences": "Changes to technology , social institutions , population , and the environment , alone or in some combination , create change . 
Social movements , along with technology , social institutions , population , and environmental changes , create social change .", "question": { "cloze_format": "In addition to social movements, social change is also caused by technology, social institutions, population and ______ .", "normal_format": "What also causes a social change in addition to social movements, technology, social institutions, and population?", "question_choices": [ "the environment", "modernization", "social structure", "new social movements" ], "question_id": "fs-id1529653", "question_text": "In addition to social movements, social change is also caused by technology, social institutions, population and ______ ." }, "references_are_paraphrase": null } ]
21
21.1 Collective Behavior

Sociology in the Real World
Flash Mobs

People sitting in a café in a touristy corner of Rome might expect the usual sights and sounds of a busy city. They might be more surprised when, as they sip their espressos, hundreds of young people start streaming into the picturesque square clutching pillows, and when someone gives a signal, they start pummeling each other in a massive free-for-all pillow fight. Spectators might lean forward, coffee forgotten, as feathers fly and more and more people join in. All around the square, others hang out of their windows or stop on the street, transfixed, to watch. After several minutes, the spectacle is over. With cheers and the occasional high five, the crowd disperses, leaving only destroyed pillows and clouds of fluff in its wake.

This is a flash mob, a large group of people who gather together in a spontaneous activity that lasts a limited amount of time before returning to their regular routines. Technology plays a big role in the creation of a flash mob: select people are texted or emailed, and the message spreads virally until a crowd has grown. But while technology might explain the "how" of flash mobs, it does not explain the "why." Flash mobs are often captured on video and shared on the internet; frequently they go viral and become well-known. So what leads people to want to flock somewhere for a massive pillow fight? Or for a choreographed dance? Or to freeze in place? Why is this appealing? In large part, it is as simple as the reason humans have bonded together around fires for storytelling, or danced together, or joined a community holiday celebration. Humans seek connections and shared experiences. And a flash mob, pillows included, provides a way to make that happen.

Forms of Collective Behavior

Flash mobs are examples of collective behavior, non-institutionalized activity in which several people voluntarily engage. Other examples of collective behavior can include anything from a group of commuters traveling home from work to the trend toward adopting the Justin Bieber hair flip. In short, it can be any group behavior that is not mandated or regulated by an institution. There are four primary forms of collective behavior: the crowd, the mass, the public, and social movements.

It takes a fairly large number of people in close proximity to form a crowd (Lofland 1993). Examples include a group of people attending an Ani DiFranco concert, tailgating at a Patriots game, or attending a worship service. Turner and Killian (1993) identified four types of crowds. Casual crowds consist of people who are in the same place at the same time but who aren't really interacting, such as people standing in line at the post office. Conventional crowds are those who come together for a scheduled event that occurs regularly, like a religious service. Expressive crowds are people who join together to express emotion, often at funerals, weddings, or the like. The final type, acting crowds, focuses on a specific goal or action, such as a protest movement or riot.

In addition to the different types of crowds, collective groups can also be identified in two other ways. A mass is a relatively large number of people with a common interest, though they may not be in close proximity (Lofland 1993), such as players of the popular Facebook game Farmville. A public, on the other hand, is an unorganized, relatively diffused group of people who share ideas, such as the Libertarian political party.
While these two types of crowds are similar, they are not the same. To distinguish between them, remember that members of a mass share interests, whereas members of a public share ideas.

Theoretical Perspectives on Collective Behavior

Early collective behavior theories (LeBon 1895; Blumer 1969) focused on the irrationality of crowds. Eventually, those theorists who viewed crowds as uncontrolled groups of irrational people were supplanted by theorists who viewed the behavior some crowds engaged in as the rational behavior of logical beings.

Emergent-Norm Perspective

Sociologists Ralph Turner and Lewis Killian (1993) built on earlier sociological ideas and developed what is known as emergent norm theory. They believe that the norms experienced by people in a crowd may be disparate and fluctuating. They emphasize the importance of these norms in shaping crowd behavior, especially those norms that shift quickly in response to changing external factors. Emergent norm theory asserts that, in this circumstance, people perceive and respond to the crowd situation with their particular (individual) set of norms, which may change as the crowd experience evolves. This focus on the individual component of interaction reflects a symbolic interactionist perspective.

For Turner and Killian, the process begins when individuals suddenly find themselves in a new situation, or when an existing situation suddenly becomes strange or unfamiliar. For example, think about human behavior during Hurricane Katrina. New Orleans was decimated and people were trapped without supplies or a way to evacuate. In these extraordinary circumstances, what outsiders saw as "looting" was defined by those involved as seeking needed supplies for survival. Normally, individuals would not wade into a corner gas station and take canned goods without paying, but given that they were suddenly in a greatly changed situation, they established a norm that they felt was reasonable.

Once individuals find themselves in a situation ungoverned by previously established norms, they interact in small groups to develop new guidelines on how to behave. According to the emergent-norm perspective, crowds are not viewed as irrational, impulsive, uncontrolled groups. Instead, norms develop and are accepted as they fit the situation. While this theory offers insight into why norms develop, it leaves undefined the nature of norms, how they come to be accepted by the crowd, and how they spread through the crowd.

Value-Added Theory

Neil Smelser's (1962) meticulous categorization of crowd behavior, called value-added theory, is a perspective within the functionalist tradition based on the idea that several conditions must be in place for collective behavior to occur. Each condition adds to the likelihood that collective behavior will occur. The first condition is structural conduciveness, which describes when people are aware of the problem and have the opportunity to gather, ideally in an open area. Structural strain, the second condition, refers to people's expectations about the situation at hand being unmet, causing tension and strain. The next condition is the growth and spread of a generalized belief, wherein a problem is clearly identified and attributed to a person or group. Fourth, precipitating factors spur collective behavior; this is the emergence of a dramatic event. The fifth condition is mobilization for action, when leaders emerge to direct a crowd to action. The final condition relates to action by the agents.
Called social control, it is the only way to end the collective behavior episode (Smelser 1962).

Let's consider a hypothetical example of these conditions. In structural conduciveness (awareness and opportunity), a group of students gathers on the campus quad. Structural strain emerges when they feel stress concerning their high tuition costs. If the crowd decides that the latest tuition hike is the fault of the Chancellor, and that she'll lower tuition if they protest, then growth and spread of a generalized belief has occurred. A precipitating factor arises when campus security appears to disperse the crowd, using pepper spray to do so. When the student body president sits down and passively resists attempts to stop the protest, this represents mobilization for action. Finally, when local police arrive and direct students back to their dorms, we've seen agents of social control in action.

While value-added theory addresses the complexity of collective behavior, it also assumes that such behavior is inherently negative or disruptive. In contrast, collective behavior can be non-disruptive, such as when people flood to a place where a leader or public figure has died to express condolences or leave tokens of remembrance.

Assembling Perspective

Interactionist sociologist Clark McPhail (1991) developed the assembling perspective, another system for understanding collective behavior that credited individuals in crowds as rational beings. Unlike previous theories, this theory refocuses attention from collective behavior to collective action. Remember that collective behavior is a non-institutionalized gathering, whereas collective action is based on a shared interest. McPhail's theory focused primarily on the processes associated with crowd behavior, plus the lifecycle of gatherings. He identified several instances of convergent or collective behavior, as shown in the table below.

Type of crowd | Description | Example
Convergence clusters | Family and friends who travel together | Carpooling parents take several children to the movies
Convergent orientation | Group all facing the same direction | A semi-circle around a stage
Collective vocalization | Sounds or noises made collectively | Screams on a roller coaster
Collective verbalization | Collective and simultaneous participation in a speech or song | Pledge of Allegiance in the school classroom
Collective gesticulation | Body parts forming symbols | The YMCA dance
Collective manipulation | Objects collectively moved around | Holding signs at a protest rally
Collective locomotion | The direction and rate of movement to the event | Children running to an ice cream truck

Table 21.1 Clark McPhail identified various circumstances of convergent and collective behavior (McPhail 1991).

As useful as this is for understanding the components of how crowds come together, many sociologists criticize its lack of attention to the larger cultural context of the described behaviors, focusing instead on individual actions.

21.2 Social Movements

Social movements are purposeful, organized groups striving to work toward a common social goal. While most of us learned about social movements in history classes, we tend to take for granted the fundamental changes they caused, and we may be completely unfamiliar with the trend toward global social movement. But from the anti-tobacco movement that has worked to outlaw smoking in public buildings and raise the cost of cigarettes, to uprisings throughout the Arab world, movements are creating social change on a global scale.
Levels of Social Movements

Movements happen in our towns, in our nation, and around the world. Let's take a look at examples of social movements, from local to global. No doubt you can think of others on all of these levels, especially since modern technology has allowed us a near-constant stream of information about the quest for social change around the world.

Local

Chicago is a city of highs and lows, from corrupt politicians and failing schools to innovative education programs and a thriving arts scene. Not surprisingly, it has been home to a number of social movements over time. Currently, AREA Chicago is a social movement focused on "building a socially just city" (AREA Chicago 2011). The organization seeks to "create relationships and sustain community through art, research, education, and activism" (AREA Chicago 2011). The movement offers online tools like the Radicalendar (a calendar for getting radical and connected) and events such as an alternative to the traditional Independence Day picnic. Through its offerings, AREA Chicago gives local residents a chance to engage in a movement to help build a socially just city.

State

At the other end of the political spectrum from AREA Chicago, there is a social movement across the country in Texas. There, the statewide Texas Secede! organization promotes the idea that Texas can and should secede from the United States to become an independent republic. The organization, which has 3,400 "likes" on Facebook, references both Texas and national history in promoting secession. The movement encourages Texans to return to their rugged and individualistic roots, and to stand up to what proponents believe is the theft of their rights and property by the U.S. government (Texas Secede! 2009).

National

A polarizing national issue that has helped spawn many activist groups is gay marriage. While the legal battle is being played out state by state, the issue is a national one and crops up in presidential debates quite frequently. There are ardent supporters on both sides of the issue. The Human Rights Campaign, a nationwide organization that advocates for LGBT civil rights, has been around for over 30 years and claims more than a million members. One focus of the organization is its Americans for Marriage Equality campaign. Using public celebrities such as athletes, musicians, and political figures, the campaign seeks to engage the public in the issue of equal rights under the law. The campaign raises awareness of the over 1,100 different rights, benefits, and protections provided on the basis of marital status under federal law, and it seeks to educate the public on why it believes these protections are due to committed couples, regardless of gender (Human Rights Campaign 2011). A movement on the opposite end would be the National Organization for Marriage, an organization that funds campaigns to stop same-sex marriage (National Organization for Marriage 2011). Both of these organizations work on the national stage and seek to engage people through grassroots efforts to push their message.

Global

Despite their successes in bringing forth change on controversial topics, social movements are not always about volatile politicized issues. For example, let's look at the global movement called Slow Food. Slow Food, with the slogan "Good, Clean, Fair Food," is a global grassroots movement claiming supporters in 150 countries. The movement links community and environmental issues back to the question of what is on our plates and where it came from.
Founded in 1989 in response to the increasing existence of fast food in communities that used to treasure their culinary traditions, Slow Food works to raise awareness of food choices (Slow Food 2011). With more than 100,000 members in 1,300 local chapters, Slow Food is a movement that crosses political, age, and regional lines.

Types of Social Movements

We know that social movements can occur on the local, national, or even global stage. Are there other patterns or classifications that can help us understand them? Sociologist David Aberle (1966) addresses this question, developing categories that distinguish among social movements based on what they want to change and how much change they want. Reform movements seek to change something specific about the social structure. Examples include anti-nuclear groups, Mothers Against Drunk Driving (MADD), and the Human Rights Campaign's advocacy for Marriage Equality. Revolutionary movements seek to completely change every aspect of society. These would include the 1960s counterculture movement, as well as anarchist collectives. Texas Secede! is a revolutionary movement. Religious/Redemptive movements are "meaning seeking," and their goal is to provoke inner change or spiritual growth in individuals. Organizations pushing these movements might include Heaven's Gate or the Branch Davidians. Alternative movements are focused on self-improvement and limited, specific changes to individual beliefs and behavior. These include trends like transcendental meditation or a macrobiotic diet. Resistance movements seek to prevent or undo change to the social structure. The Ku Klux Klan and pro-life movements fall into this category.

Stages of Social Movements

Later sociologists studied the lifecycle of social movements—how they emerge, grow, and in some cases, die out. Blumer (1969) and Tilly (1978) outline a four-stage process. In the preliminary stage, people become aware of an issue and leaders emerge. This is followed by the coalescence stage, when people join together and organize in order to publicize the issue and raise awareness. In the institutionalization stage, the movement no longer requires grassroots volunteerism: it is an established organization, typically peopled with a paid staff. When people fall away, adopt a new movement, the movement successfully brings about the change it sought, or people no longer take the issue seriously, the movement falls into the decline stage. Each social movement discussed earlier belongs in one of these four stages. Where would you put them on the list?

Big Picture
Social Media and Social Change: A Match Made in Heaven

Chances are you have been asked to tweet, friend, like, or donate online for a cause. Maybe you were one of the many people who, in 2010, helped raise over $3 million in relief efforts for Haiti through cell phone text donations. Or maybe you follow presidential candidates on Twitter and retweet their messages to your followers. Perhaps you have "liked" a local nonprofit on Facebook, prompted by one of your neighbors or friends liking it too. Nowadays, social movements are woven throughout our social media activities. After all, social movements start by activating people.

Referring to the ideal-type stages discussed above, you can see that social media has the potential to dramatically transform how people get involved. Look at stage one, the preliminary stage: people become aware of an issue and leaders emerge. Imagine how social media speeds up this step.
Suddenly, a shrewd user of Twitter can alert his thousands of followers about an emerging cause or an issue on his mind. Issue awareness can spread at the speed of a click, with thousands of people across the globe becoming informed at the same time. In a similar vein, those who are savvy and engaged with social media emerge as leaders. Suddenly, you don't need to be a powerful public speaker. You don't even need to leave your house. You can build an audience through social media without ever meeting the people you are inspiring.

At the next stage, the coalescence stage, social media is also transformative. Coalescence is the point when people join together to publicize the issue and get organized. President Obama's 2008 campaign was a case study in organizing through social media. Using Twitter and other online tools, the campaign engaged volunteers who had typically not bothered with politics, and empowered those who were more active to generate still more activity. It is no coincidence that Obama's earlier work experience included grassroots community organizing. What is the difference between his campaign and the work he did in Chicago neighborhoods decades earlier? The ability to organize without regard to geographical boundaries by using social media. In 2009, when student protests erupted in Tehran, social media was considered so important to the organizing effort that the U.S. State Department actually asked Twitter to suspend scheduled maintenance so that a vital tool would not be disabled during the demonstrations.

So what is the real impact of this technology on the world? Did Twitter bring down Mubarak in Egypt? Author Malcolm Gladwell (2010) doesn't think so. In an article in The New Yorker, Gladwell tackles what he considers the myth that social media gets people more engaged. He points out that most of the tweets relating to the Iran protests were in English and sent from Western accounts (instead of from people on the ground). Rather than increasing engagement, he contends that social media only increases participation; after all, the cost of participation is so much lower than the cost of engagement. Instead of risking being arrested, shot with rubber bullets, or sprayed with fire hoses, social media activists can click "like" or retweet a message from the comfort and safety of their desks (Gladwell 2010).

Sociologists have identified high-risk activism, such as the civil rights movement, as a "strong-tie" phenomenon, meaning that people are far more likely to stay engaged and not run home to safety if they have close friends who are also engaged. The people who dropped out of the movement (who went home after the danger got too great) did not display any less ideological commitment. But they lacked the strong-tie connection to other people who were staying. Social media, by its very makeup, is "weak-tie" (McAdam and Paulsen 1993). People follow or friend people they have never met. But while these online acquaintances are a source of information and inspiration, the lack of engaged personal contact limits the level of risk we'll take on their behalf.

Theoretical Perspectives on Social Movements

Most theories of social movements are called collective action theories, indicating the purposeful nature of this form of collective behavior. The following three theories are but a few of the many classic and modern theories developed by social scientists.
Resource Mobilization

Social movements will always be a part of society, and people will always weigh their options and make rational choices about which movements to follow. As long as social movements wish to thrive, they must find resources (such as money, people, and plans) for how to meet their goals. Not only will social movements compete for our attention with many other concerns—from the basic (our jobs or our need to feed ourselves) to the broad (video games, sports, or television), but they also compete with each other. For any individual, it may be a simple matter to decide you want to spend your time and money on animal shelters and Republican politics versus homeless shelters and Democrats. But which animal shelter, and which Republican candidate? Social movements are competing for a piece of a finite pool of resources, and the field is growing more crowded all the time.

McCarthy and Zald (1977) conceptualize resource mobilization theory as a way to explain movement success in terms of a movement's ability to acquire resources and mobilize individuals. For example, PETA, a social movement organization, is in competition with Greenpeace and the Animal Liberation Front (ALF), two other social movement organizations. Taken together, along with all other social movement organizations working on animal rights issues, these similar organizations constitute a social movement industry. Multiple social movement industries in a society, though they may have widely different constituencies and goals, constitute a society's social movement sector. Every social movement organization (a single social movement group) within the social movement sector is competing for your attention, your time, and your resources. In other words, individual social movement organizations nest within industries, and those industries together make up a society's social movement sector.

Framing/Frame Analysis

Over the past several decades, sociologists have developed the concept of frames to explain how individuals identify and understand social events and which norms they should follow in any given situation (Goffman 1974; Snow et al. 1986; Benford and Snow 2000). Imagine entering a restaurant. Your "frame" immediately provides you with a behavior template. It probably does not occur to you to wear pajamas to a fine dining establishment, throw food at other patrons, or spit your drink onto the table. However, eating food at a sleepover pizza party provides you with an entirely different behavior template. It might be perfectly acceptable to eat in your pajamas, and maybe even throw popcorn at others or guzzle drinks from cans.

Successful social movements use three kinds of frames (Snow and Benford 1988) to further their goals. The first type, diagnostic framing, states the problem in a clear, easily understood way. When applying diagnostic frames, there are no shades of gray: instead, there is the belief that what "they" do is wrong and this is how "we" will fix it. The anti-gay marriage movement is an example of diagnostic framing with its uncompromising insistence that marriage is only between a man and a woman. Prognostic framing, the second type, offers a solution and states how it will be implemented. Some examples of this frame, when looking at the issue of marriage equality as framed by the anti-gay marriage movement, include the plan to restrict marriage to "one man/one woman" or to allow only "civil unions" instead of marriage. As you can see, there may be many competing prognostic frames even within social movements adhering to similar diagnostic frames.
Finally, motivational framing is the call to action: what should you do once you agree with the diagnostic frame and believe in the prognostic frame? These frames are action-oriented. In the gay marriage movement, a call to action might encourage you to vote "no" on Proposition 8 in California (a move to limit marriage to male-female couples), or conversely, to contact your local congressperson to express your viewpoint that marriage should be restricted to opposite-sex couples.

With so many similar diagnostic frames, some groups find it best to join together to maximize their impact. When social movements link their goals to the goals of other social movements and merge into a single group, a frame alignment process (Snow et al. 1986) occurs—an ongoing and intentional means of recruiting participants to the movement. This frame alignment process involves four aspects: bridging, amplification, extension, and transformation.

Bridging describes a "bridge" that connects uninvolved individuals and unorganized or ineffective groups with social movements that, though structurally unconnected, nonetheless share similar interests or goals. These organizations join together, creating a new, stronger social movement organization. Can you think of examples of different organizations with a similar goal that have banded together?

In the amplification model, organizations seek to expand their core ideas to gain a wider, more universal appeal. By expanding their ideas to include a broader range, they can mobilize more people for their cause. For example, the Slow Food movement extends its arguments in support of local food to encompass reduced energy consumption and reduced pollution, plus reduced obesity from eating more healthfully, and other benefits.

In extension, social movements agree to mutually promote each other, even when the two social movement organizations' goals don't necessarily relate to each other's immediate goals. This often occurs when organizations are sympathetic to each other's causes, even if they are not directly aligned, such as women's equal rights and the civil rights movement.

Transformation involves a complete revision of goals. Once a movement has succeeded, it risks losing relevance. If it wants to remain active, the movement has to undergo a transformation or risk becoming obsolete. For instance, when the women's suffrage movement gained women the right to vote, its members turned their attention to equal rights and to campaigning to elect women. In short, transformation is an evolution of the movement's existing diagnostic or prognostic frames, generally involving a total conversion of the movement.

New Social Movement Theory

New social movement theory, a development of European social scientists in the 1950s and 1960s, attempts to explain the proliferation of post-industrial and post-modern movements that are difficult to analyze using traditional social movement theories. Rather than being one specific theory, it is more of a perspective that revolves around understanding movements as they relate to politics, identity, culture, and social change. Some of these more complex interrelated movements include ecofeminism, which focuses on the patriarchal society as the source of environmental problems, and the transgender rights movement. Sociologist Steven Buechler (2000) suggests that we should be looking at the bigger picture in which these movements arise—shifting to a macro-level, global analysis of social movements.
21.3 Social Change

Collective behavior and social movements are just two of the forces driving social change, which is the change in society created through social movements as well as external factors like environmental shifts or technological innovations. Essentially, any disruptive shift in the status quo, be it intentional or random, human-caused or natural, can lead to social change. Below are some of the likely causes.

Causes of Social Change

Changes to technology, social institutions, population, and the environment, alone or in some combination, create change. Below, we will discuss how these act as agents of social change, and we'll examine real-world examples. We will focus on four agents of change recognized by social scientists: technology, social institutions, population, and the environment.

Technology

Some would say that improving technology has made our lives easier. Imagine what your day would be like without the internet, the automobile, or electricity. In The World Is Flat, Thomas Friedman (2005) argues that technology is a driving force behind globalization, while the other forces of social change (social institutions, population, environment) play comparatively minor roles. He suggests that we can view globalization as occurring in three distinct periods. First, globalization was driven by military expansion, powered by horsepower and wind power. The countries best able to take advantage of these power sources expanded the most, exerting control over the politics of the globe from the late 15th century to around the year 1800. The second, shorter period, from approximately 1800 C.E. to 2000 C.E., consisted of a globalizing economy. Steam and rail power were the guiding forces of social change and globalization in this period. Finally, Friedman brings us to the post-millennial era. In this period of globalization, change is driven by technology, particularly the internet (Friedman 2005).

But also consider that technology can create change in the other three forces social scientists link to social change. Advances in medical technology allow otherwise infertile women to bear children, indirectly leading to an increase in population. Advances in agricultural technology have allowed us to genetically alter and patent food products, changing our environment in innumerable ways. From the way we educate children in the classroom to the way we grow the food we eat, technology has impacted all aspects of modern life.

Of course, there are drawbacks. The increasing gap between the technological haves and have-nots (sometimes called the digital divide) occurs both locally and globally. Further, there are added security risks: the loss of privacy, the risk of total system failure (like the Y2K panic at the turn of the millennium), and the added vulnerability created by technological dependence. Think about the technology that goes into keeping nuclear power plants running safely and securely. What happens if an earthquake or other disaster, as in the case of Japan's Fukushima plant, causes the technology to malfunction, not to mention the possibility of a systematic attack on our nation's relatively vulnerable technological infrastructure?

Social Institutions

Each change in a single social institution leads to changes in all social institutions. For example, the industrialization of society meant that there was no longer a need for large families to produce enough manual labor to run a farm.
Further, new job opportunities were in close proximity to urban centers where living space was at a premium. The result is that the average family size shrank significantly. This same shift toward industrial corporate entities also changed the way we view government involvement in the private sector, created the global economy, provided new political platforms, and even spurred new religions and new forms of religious worship like Scientology. It has also informed the way we educate our children: originally, schools were set up to accommodate an agricultural calendar so children could be home to work the fields in the summer, and even today, teaching models are largely based on preparing students for industrial jobs, despite that being an outdated need. As this example illustrates, a shift in one area, such as industrialization, means an interconnected impact across social institutions.

Population

Population composition is changing at every level of society. Births increase in one nation and decrease in another. Some families delay childbirth while others start bringing children into their fold early. Population changes can be due to random external forces, like an epidemic, or shifts in other social institutions, as described above. But regardless of why and how it happens, population trends have a tremendous interrelated impact on all other aspects of society.

In the United States, we are experiencing an increase in our senior population as baby boomers begin to retire, which will in turn change the way many of our social institutions are organized. For example, there is an increased demand for housing in warmer climates, a massive shift in the need for elder care and assisted living facilities, and growing awareness of elder abuse. There is concern about labor shortages as boomers retire, not to mention the knowledge gap as the most senior and accomplished leaders in different sectors start to leave. Further, as this large generation leaves the workforce, the loss of tax income and pressure on pension and retirement plans means that the financial stability of the country is threatened.

Globally, the countries with the highest fertility rates are often least able to absorb and attend to the needs of a growing population. Family planning is a large step in ensuring that families are not burdened with more children than they can care for. On a macro level, the increased population, particularly in the poorest parts of the globe, also leads to increased stress on the planet's resources.

The Environment

Turning to human ecology, we know that individuals and the environment affect each other. As human populations move into more vulnerable areas, we see an increase in the number of people affected by natural disasters, and we see that human interaction with the environment increases the impact of those disasters. Part of this is simply the numbers: the more people there are on the planet, the more likely it is that people will be impacted by a natural disaster. But it goes beyond that. We face a combination of too many people and the increased demands these numbers make on the earth. As a population, we have brought water tables to dangerously low levels, built up fragile shorelines to increase development, and irrigated massive crop fields with water brought in from several states away. How can we be surprised when homes along coastlines are battered and droughts threaten whole towns?
The year 2011 holds the unwelcome distinction of being a record year for billion-dollar weather disasters, with about a dozen falling into that category. From twisters and floods to snowstorms and droughts, the planet is making our problems abundantly clear (CBS News 2011). These events have birthed social movements and are bringing about social change as the public becomes educated about these issues.

Sociology in the Real World
Our Dystopian Future: From A Brave New World to The Hunger Games

Humans have long been interested in science fiction and space travel, and many of us are eager to see the invention of jet packs and flying cars. But part of this futuristic fiction trend is much darker and less optimistic. In 1932, when Aldous Huxley's Brave New World was published, there was a cultural trend toward seeing the future as golden and full of opportunity. His novel, set in 2540, presents a far more frightening future. Since then, there has been an ongoing stream of dystopian novels, or books set in the future after some kind of apocalypse has occurred and a totalitarian, restrictive government has taken over. These books have been gaining in popularity recently, especially among young adult readers. And while the adult versions of these books often have a grim or dismal ending, the youth-geared versions usually end with some promise of hope.

So what is it about our modern times that makes looking forward so fearsome? Take the example of author Suzanne Collins's hugely popular Hunger Games trilogy for young adults. The futuristic setting isn't given a date, and the locale is Panem, a transformed version of North America with 12 districts ruled by a cruel and dictatorial capitol. The capitol punishes the districts for their long-ago attempt at rebellion by forcing an annual Hunger Game, in which two children from each district are thrown into a created world where they must fight to the death. Connotations of gladiator games and video games come together in this world, where the government can kill people for its amusement and the technological wonders never cease. From meals that appear at the touch of a button to mutated government-built creatures that track and kill, the future world of Hunger Games is a mix of modernization fantasy and nightmare.

When thinking about modernization theory and how it is viewed today by both functionalists and conflict theorists, it is interesting to look at this world of fiction that is so popular. When you think of the future, do you view it as a wonderful place, full of opportunity? Or as a horrifying dictatorship sublimating the individual to the good of the state? Do you view modernization as something to look forward to or something to avoid? And which media have influenced your view?

Modernization

Modernization describes the processes that increase the amount of specialization and differentiation of structure in societies, resulting in the move from an undeveloped society to a developed, technologically driven society (Irwin 1975). By this definition, the level of modernity within a society is judged by the sophistication of its technology, particularly as it relates to infrastructure, industry, and the like. However, it is important to note the inherent ethnocentric bias of such an assessment. Why do we assume that those living in semi-peripheral and peripheral nations would find it so wonderful to become more like the core nations? Is modernization always positive?
One contradiction of technology of all kinds is that it often promises time-saving benefits but somehow fails to deliver. How many times have you ground your teeth in frustration at an internet site that refused to load or at a dropped call on your cell phone? Despite time-saving devices such as dishwashers, washing machines, and, now, remote control vacuum cleaners, the average amount of time spent on housework is the same today as it was fifty years ago. And the dubious benefits of 24/7 email and immediate information have simply increased the amount of time employees are expected to be responsive and available. While business once moved at the speed of the United States postal system, sending something off and waiting until it was received before taking the next step, today the immediacy of information transfer means there are no such breaks. Further, the internet brought us information, but at a cost. The morass of information means that there is as much poor information available as there are trustworthy sources.

There is a delicate line to walk when core nations seek to bring the assumed benefits of modernization to more traditional cultures. For one, there are obvious pro-capitalist biases that go into such attempts, and it is short-sighted for western governments and social scientists to assume all other countries aspire to follow in their footsteps. Additionally, there can be a kind of neo-liberal defense of rural cultures, ignoring the often crushing poverty and diseases that exist in peripheral nations and focusing only on a nostalgic mythology of the happy peasant. It takes a very careful hand to understand both the need for cultural identity and preservation as well as the hopes for future growth.
biology
Chapter Outline
34.1 Digestive Systems
34.2 Nutrition and Energy Production
34.3 Digestive System Processes
34.4 Digestive System Regulation

Introduction

All living organisms need nutrients to survive. While plants can obtain the molecules required for cellular function through the process of photosynthesis, most animals obtain their nutrients by the consumption of other organisms. At the cellular level, the biological molecules necessary for animal function are amino acids, lipid molecules, nucleotides, and simple sugars. However, the food consumed consists of protein, fat, and complex carbohydrates. Animals must convert these macromolecules into the simple molecules required for maintaining cellular functions, such as assembling new molecules, cells, and tissues. The conversion of the food consumed to the nutrients required is a multi-step process involving digestion and absorption. During digestion, food particles are broken down to smaller components, and later, they are absorbed by the body.

One of the challenges in human nutrition is maintaining a balance between food intake, storage, and energy expenditure. Imbalances can have serious health consequences. For example, eating too much food while not expending much energy leads to obesity, which in turn will increase the risk of developing illnesses such as type-2 diabetes and cardiovascular disease. The recent rise in obesity and related diseases makes understanding the role of diet and nutrition in maintaining good health all the more important.
[ { "answer": { "ans_choice": 3, "ans_text": "horse" }, "bloom": null, "hl_context": "<hl> Some animals , such as camels and alpacas , are pseudo-ruminants . <hl> <hl> They eat a lot of plant material and roughage . <hl> Digesting plant material is not easy because plant cell walls contain the polymeric sugar molecule cellulose . The digestive enzymes of these animals cannot break down cellulose , but microorganisms present in the digestive system can . Therefore , the digestive system must be able to handle large amounts of roughage and break down the cellulose . Pseudo-ruminants have a three-chamber stomach in the digestive system . However , their cecum — a pouched organ at the beginning of the large intestine containing many microorganisms that are necessary for the digestion of plant materials — is large and is the site where the roughage is fermented and digested . These animals do not have a rumen but have an omasum , abomasum , and reticulum .", "hl_sentences": "Some animals , such as camels and alpacas , are pseudo-ruminants . They eat a lot of plant material and roughage .", "question": { "cloze_format": "A ___ is a pseudo-ruminant.", "normal_format": "Which of the following is a pseudo-ruminant?", "question_choices": [ "cow", "pig", "crow", "horse" ], "question_id": "fs-idp230293872", "question_text": "Which of the following is a pseudo-ruminant?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "Birds eat large quantities at one time so that they can fly long distances." }, "bloom": null, "hl_context": "Birds have a highly efficient , simplified digestive system . Recent fossil evidence has shown that the evolutionary divergence of birds from other land animals was characterized by streamlining and simplifying the digestive system . Unlike many other animals , birds do not have teeth to chew their food . In place of lips , they have sharp pointy beaks . The horny beak , lack of jaws , and the smaller tongue of the birds can be traced back to their dinosaur ancestors . The emergence of these changes seems to coincide with the inclusion of seeds in the bird diet . Seed-eating birds have beaks that are shaped for grabbing seeds and the two-compartment stomach allows for delegation of tasks . <hl> Since birds need to remain light in order to fly , their metabolic rates are very high , which means they digest their food very quickly and need to eat often . <hl> Contrast this with the ruminants , where the digestion of plant matter takes a very long time .", "hl_sentences": "Since birds need to remain light in order to fly , their metabolic rates are very high , which means they digest their food very quickly and need to eat often .", "question": { "cloze_format": "The statement that is untrue is that ___.", "normal_format": "Which of the following statements is untrue?", "question_choices": [ "Roughage takes a long time to digest.", "Birds eat large quantities at one time so that they can fly long distances.", "Cows do not have upper teeth.", "In pseudo-ruminants, roughage is digested in the cecum." ], "question_id": "fs-idp82804064", "question_text": "Which of the following statements is untrue?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "bicarbonates" }, "bloom": null, "hl_context": "In the duodenum , digestive secretions from the liver , pancreas , and gallbladder play an important role in digesting chyme during the intestinal phase . 
<hl> In order to neutralize the acidic chyme , a hormone called secretin stimulates the pancreas to produce alkaline bicarbonate solution and deliver it to the duodenum . <hl> Secretin acts in tandem with another hormone called cholecystokinin ( CCK ) . Not only does CCK stimulate the pancreas to produce the requisite pancreatic juices , it also stimulates the gallbladder to release bile into the duodenum . The pancreas is another important gland that secretes digestive juices . <hl> The chyme produced from the stomach is highly acidic in nature ; the pancreatic juices contain high levels of bicarbonate , an alkali that neutralizes the acidic chyme . <hl> Additionally , the pancreatic juices contain a large variety of enzymes that are required for the digestion of protein and carbohydrates . The human small intestine is over 6m long and is divided into three parts : the duodenum , the jejunum , and the ileum . The “ C-shaped , ” fixed part of the small intestine is called the duodenum and is shown in Figure 34.11 . The duodenum is separated from the stomach by the pyloric sphincter which opens to allow chyme to move from the stomach to the duodenum . <hl> In the duodenum , chyme is mixed with pancreatic juices in an alkaline solution rich in bicarbonate that neutralizes the acidity of chyme and acts as a buffer . <hl> Pancreatic juices also contain several digestive enzymes . Digestive juices from the pancreas , liver , and gallbladder , as well as from gland cells of the intestinal wall itself , enter the duodenum . Bile is produced in the liver and stored and concentrated in the gallbladder . Bile contains bile salts which emulsify lipids while the pancreas produces enzymes that catabolize starches , disaccharides , proteins , and fats . These digestive juices break down the food particles in the chyme into glucose , triglycerides , and amino acids . Some chemical digestion of food takes place in the duodenum . Absorption of fatty acids also takes place in the duodenum .", "hl_sentences": "In order to neutralize the acidic chyme , a hormone called secretin stimulates the pancreas to produce alkaline bicarbonate solution and deliver it to the duodenum . The chyme produced from the stomach is highly acidic in nature ; the pancreatic juices contain high levels of bicarbonate , an alkali that neutralizes the acidic chyme . In the duodenum , chyme is mixed with pancreatic juices in an alkaline solution rich in bicarbonate that neutralizes the acidity of chyme and acts as a buffer .", "question": { "cloze_format": "The acidic nature of chyme is neutralized by ________.", "normal_format": "What is the acidic nature of chyme neutralized by?", "question_choices": [ "potassium hydroxide", "sodium hydroxide", "bicarbonates", "vinegar" ], "question_id": "fs-idp29187296", "question_text": "The acidic nature of chyme is neutralized by ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "Essential nutrients can be synthesized by the body." }, "bloom": "1", "hl_context": "The omega - 3 alpha-linolenic acid and the omega - 6 linoleic acid are essential fatty acids needed to make some membrane phospholipids . <hl> Vitamins are another class of essential organic molecules that are required in small quantities for many enzymes to function and , for this reason , are considered to be co-enzymes . <hl> Absence or low levels of vitamins can have a dramatic effect on health , as outlined in Table 34.1 and Table 34.2 . 
Review Questions

1. Which of the following statements is not true?
a. Essential nutrients can be synthesized by the body.
b. Vitamins are required in small quantities for bodily function.
c. Some amino acids can be synthesized by the body, while others need to be obtained from diet.
d. Vitamins come in two categories: fat-soluble and water-soluble.

2. Which of the following is a water-soluble vitamin?
a. vitamin A
b. vitamin E
c. vitamin K
d. vitamin C

3. What is the primary fuel for the body?
a. carbohydrates
b. lipids
c. protein
d. glycogen

4. Excess glucose is stored as ________.
a. fat
b. glucagon
c. glycogen
d. it is not stored in the body

5. Where does the majority of protein digestion take place?
a. stomach
b. duodenum
c. mouth
d. jejunum

6. Lipases are enzymes that break down ________.
a. disaccharides
b. lipids
c. proteins
d. cellulose

7. Which hormone controls the release of bile from the gallbladder?
a. pepsin
b. amylase
c. CCK
d. gastrin

8. Which hormone stops acid secretion in the stomach?
a. gastrin
b. somatostatin
c. gastric inhibitory peptide
d. CCK

Answers: 1. a; 2. d; 3. a; 4. c; 5. a; 6. b; 7. c; 8. b
34
34.1 Digestive Systems

Learning Objectives
By the end of this section, you will be able to:
- Explain the processes of digestion and absorption
- Compare and contrast different types of digestive systems
- Explain the specialized functions of the organs involved in processing food in the body
- Describe the ways in which organs work together to digest food and absorb nutrients

Animals obtain their nutrition from the consumption of other organisms. Depending on their diet, animals can be classified into the following categories: plant eaters (herbivores), meat eaters (carnivores), and those that eat both plants and animals (omnivores). The nutrients and macromolecules present in food are not immediately accessible to the cells. A number of processes modify food within the animal body to make the nutrients and organic molecules accessible for cellular function. As animals evolved in complexity of form and function, their digestive systems have also evolved to accommodate their various dietary needs.

Herbivores, Omnivores, and Carnivores

Herbivores are animals whose primary food source is plant-based. Examples of herbivores, as shown in Figure 34.2, include vertebrates like deer, koalas, and some bird species, as well as invertebrates such as crickets and caterpillars. These animals have evolved digestive systems capable of handling large amounts of plant material. Herbivores can be further classified into frugivores (fruit eaters), granivores (seed eaters), nectivores (nectar feeders), and folivores (leaf eaters).

Carnivores are animals that eat other animals. The word carnivore is derived from Latin and literally means "meat eater." Wild cats such as lions, shown in Figure 34.3a, and tigers are examples of vertebrate carnivores, as are snakes and sharks, while invertebrate carnivores include sea stars, spiders, and ladybugs, shown in Figure 34.3b. Obligate carnivores are those that rely entirely on animal flesh to obtain their nutrients; examples of obligate carnivores are members of the cat family, such as lions and cheetahs. Facultative carnivores are those that also eat non-animal food in addition to animal food. Note that there is no clear line that differentiates facultative carnivores from omnivores; dogs would be considered facultative carnivores.

Omnivores are animals that eat both plant- and animal-derived food. In Latin, omnivore means "to eat everything." Humans, bears (shown in Figure 34.4a), and chickens are examples of vertebrate omnivores; invertebrate omnivores include cockroaches and crayfish (shown in Figure 34.4b).

Invertebrate Digestive Systems

Animals have evolved different types of digestive systems to aid in the digestion of the different foods they consume. The simplest example is the gastrovascular cavity, found in organisms with only one opening for digestion. Platyhelminthes (flatworms), Ctenophora (comb jellies), and Cnidaria (coral, jellyfish, and sea anemones) use this type of digestion. A gastrovascular cavity, as shown in Figure 34.5a, is typically a blind tube or cavity with only one opening, the "mouth," which also serves as an "anus." Ingested material enters the mouth and passes through a hollow, tubular cavity. Cells within the cavity secrete digestive enzymes that break down the food. The food particles are engulfed by the cells lining the gastrovascular cavity.

The alimentary canal, shown in Figure 34.5b, is a more advanced system: it consists of one tube with a mouth at one end and an anus at the other.
Earthworms are an example of an animal with an alimentary canal. Once the food is ingested through the mouth, it passes through the esophagus and is stored in an organ called the crop; then it passes into the gizzard, where it is churned and digested. From the gizzard, the food passes through the intestine, the nutrients are absorbed, and the waste is eliminated as feces, called castings, through the anus.

Vertebrate Digestive Systems

Vertebrates have evolved more complex digestive systems to adapt to their dietary needs. Some animals have a single stomach, while others have multi-chambered stomachs. Birds have developed a digestive system adapted to eating unmasticated food.

Monogastric: Single-chambered Stomach

As the word monogastric suggests, this type of digestive system consists of one ("mono") stomach chamber ("gastric"). Humans and many animals have a monogastric digestive system, as illustrated in Figure 34.6. The process of digestion begins with the mouth and the intake of food. The teeth play an important role in masticating (chewing) or physically breaking down food into smaller particles. The enzymes present in saliva also begin to chemically break down food. The esophagus is a long tube that connects the mouth to the stomach. Using peristalsis, or wave-like smooth muscle contractions, the muscles of the esophagus push the food towards the stomach. The stomach is an extremely acidic environment, with a pH between 1.5 and 2.5, which speeds up the actions of its enzymes. The gastric juices, which include enzymes in the stomach, act on the food particles and continue the process of digestion. Further breakdown of food takes place in the small intestine, where enzymes produced by the liver, the small intestine, and the pancreas continue the process of digestion. The nutrients are absorbed into the bloodstream across the epithelial cells lining the walls of the small intestine. The waste material travels on to the large intestine, where water is absorbed and the drier waste material is compacted into feces; it is stored until it is excreted through the rectum.

Avian

Birds face special challenges when it comes to obtaining nutrition from food. They do not have teeth, and so their digestive system, shown in Figure 34.7, must be able to process un-masticated food. Birds have evolved a variety of beak types that reflect the vast variety in their diet, ranging from seeds and insects to fruits and nuts. Because most birds fly, their metabolic rates are high in order to efficiently process food and keep their body weight low. The stomach of birds has two chambers: the proventriculus, where gastric juices are produced to digest the food before it enters the gizzard, and the gizzard, where the food is stored, soaked, and mechanically ground. The undigested material forms food pellets that are sometimes regurgitated. Most of the chemical digestion and absorption happens in the intestine, and the waste is excreted through the cloaca.

Evolution Connection
Avian Adaptations

Birds have a highly efficient, simplified digestive system. Recent fossil evidence has shown that the evolutionary divergence of birds from other land animals was characterized by streamlining and simplifying the digestive system. Unlike many other animals, birds do not have teeth to chew their food. In place of lips, they have sharp pointy beaks. The horny beak, lack of jaws, and the smaller tongue of the birds can be traced back to their dinosaur ancestors.
The emergence of these changes seems to coincide with the inclusion of seeds in the bird diet. Seed-eating birds have beaks that are shaped for grabbing seeds, and the two-compartment stomach allows for delegation of tasks. Since birds need to remain light in order to fly, their metabolic rates are very high, which means they digest their food very quickly and need to eat often. Contrast this with the ruminants, where the digestion of plant matter takes a very long time.

Ruminants

Ruminants are mainly herbivores like cows, sheep, and goats, whose entire diet consists of eating large amounts of roughage or fiber. They have evolved digestive systems that help them digest vast amounts of cellulose. An interesting feature of the ruminants' mouth is that they do not have upper incisor teeth. They use their lower teeth, tongue, and lips to tear and chew their food. From the mouth, the food travels to the esophagus and on to the stomach.

To help digest the large amount of plant material, the stomach of the ruminants is a multi-chambered organ, as illustrated in Figure 34.8. The four compartments of the stomach are called the rumen, reticulum, omasum, and abomasum. These chambers contain many microbes that break down cellulose and ferment ingested food. The abomasum is the "true" stomach and is the equivalent of the monogastric stomach chamber where gastric juices are secreted. The four-compartment gastric chamber provides larger space and the microbial support necessary to digest plant material in ruminants. The fermentation process produces large amounts of gas in the stomach chamber, which must be eliminated. As in other animals, the small intestine plays an important role in nutrient absorption, and the large intestine helps in the elimination of waste.

Pseudo-ruminants

Some animals, such as camels and alpacas, are pseudo-ruminants. They eat a lot of plant material and roughage. Digesting plant material is not easy because plant cell walls contain the polymeric sugar molecule cellulose. The digestive enzymes of these animals cannot break down cellulose, but microorganisms present in the digestive system can. Therefore, the digestive system must be able to handle large amounts of roughage and break down the cellulose. Pseudo-ruminants have a three-chambered stomach. However, their cecum—a pouched organ at the beginning of the large intestine containing many microorganisms that are necessary for the digestion of plant materials—is large, and it is the site where the roughage is fermented and digested. These animals do not have a rumen but have an omasum, abomasum, and reticulum.

Parts of the Digestive System

The vertebrate digestive system is designed to facilitate the transformation of food matter into the nutrient components that sustain organisms.

Oral Cavity

The oral cavity, or mouth, is the point of entry of food into the digestive system, illustrated in Figure 34.9. The food consumed is broken into smaller particles by mastication, the chewing action of the teeth. All mammals have teeth and can chew their food. The extensive chemical process of digestion begins in the mouth. As food is being chewed, saliva, produced by the salivary glands, mixes with the food. Saliva is a watery substance produced in the mouths of many animals. There are three major glands that secrete saliva—the parotid, the submandibular, and the sublingual. Saliva contains mucus that moistens food and buffers the pH of the food.
Saliva also contains immunoglobulins and lysozymes, which have antibacterial action to reduce tooth decay by inhibiting growth of some bacteria. Saliva also contains an enzyme called salivary amylase that begins the process of converting starches in the food into a disaccharide called maltose. Another enzyme called lipase is produced by the cells in the tongue. Lipases are a class of enzymes that can break down triglycerides. The lingual lipase begins the breakdown of fat components in the food. The chewing and wetting action provided by the teeth and saliva prepare the food into a mass called the bolus for swallowing. The tongue helps in swallowing—moving the bolus from the mouth into the pharynx. The pharynx opens to two passageways: the trachea, which leads to the lungs, and the esophagus, which leads to the stomach. The trachea has an opening called the glottis, which is covered by a cartilaginous flap called the epiglottis. When swallowing, the epiglottis closes the glottis, and food passes into the esophagus and not the trachea. This arrangement allows food to be kept out of the trachea.

Esophagus

The esophagus is a tubular organ that connects the mouth to the stomach. The chewed and softened food passes through the esophagus after being swallowed. The smooth muscles of the esophagus undergo a series of wave-like movements called peristalsis that push the food toward the stomach, as illustrated in Figure 34.10. The peristalsis wave is unidirectional—it moves food from the mouth to the stomach, and reverse movement is not possible. The peristaltic movement of the esophagus is an involuntary reflex; it takes place in response to the act of swallowing.

A ring-like muscle called a sphincter forms valves in the digestive system. The gastro-esophageal sphincter is located at the stomach end of the esophagus. In response to swallowing and the pressure exerted by the bolus of food, this sphincter opens, and the bolus enters the stomach. When there is no swallowing action, this sphincter is shut and prevents the contents of the stomach from traveling up the esophagus. Many animals have a true sphincter; however, in humans, there is no true sphincter, but the esophagus remains closed when there is no swallowing action. Acid reflux, or "heartburn," occurs when the acidic digestive juices escape into the esophagus.

Stomach

A large part of digestion occurs in the stomach, shown in Figure 34.11. The stomach is a saclike organ that secretes gastric digestive juices. The pH in the stomach is between 1.5 and 2.5. This highly acidic environment is required for the chemical breakdown of food and the extraction of nutrients. When empty, the stomach is a rather small organ; however, it can expand to up to 20 times its resting size when filled with food. This characteristic is particularly useful for animals that need to eat when food is available.

Visual Connection
Which of the following statements about the digestive system is false?
a. Chyme is a mixture of food and digestive juices that is produced in the stomach.
b. Food enters the large intestine before the small intestine.
c. In the small intestine, chyme mixes with bile, which emulsifies fats.
d. The stomach is separated from the small intestine by the pyloric sphincter.

The stomach is also the major site for protein digestion in animals other than ruminants. Protein digestion is mediated by an enzyme called pepsin in the stomach chamber. Pepsin is secreted by the chief cells in the stomach in an inactive form called pepsinogen.
Pepsin breaks peptide bonds and cleaves proteins into smaller polypeptides; it also helps activate more pepsinogen, starting a positive feedback mechanism that generates more pepsin. Another cell type—parietal cells—secrete hydrogen and chloride ions, which combine in the lumen to form hydrochloric acid, the primary acidic component of the stomach juices. Hydrochloric acid helps to convert the inactive pepsinogen to pepsin. The highly acidic environment also kills many microorganisms in the food and, combined with the action of the enzyme pepsin, results in the hydrolysis of protein in the food. Chemical digestion is facilitated by the churning action of the stomach. Contraction and relaxation of smooth muscles mixes the stomach contents about every 20 minutes. The partially digested food and gastric juice mixture is called chyme. Chyme passes from the stomach to the small intestine. Further protein digestion takes place in the small intestine. Gastric emptying occurs within two to six hours after a meal. Only a small amount of chyme is released into the small intestine at a time. The movement of chyme from the stomach into the small intestine is regulated by the pyloric sphincter.

When digesting protein and some fats, the stomach lining must be protected from getting digested by pepsin. There are two points to consider when describing how the stomach lining is protected. First, as previously mentioned, the enzyme pepsin is synthesized in the inactive form. This protects the chief cells, because pepsinogen does not have the same enzyme functionality as pepsin. Second, the stomach has a thick mucus lining that protects the underlying tissue from the action of the digestive juices. When this mucus lining is ruptured, ulcers can form in the stomach. Ulcers are open wounds in or on an organ caused by bacteria (Helicobacter pylori) when the mucus lining is ruptured and fails to reform.

Small Intestine

Chyme moves from the stomach to the small intestine. The small intestine is the organ where the digestion of protein, fats, and carbohydrates is completed. The small intestine is a long tube-like organ with a highly folded surface containing finger-like projections called the villi. The apical surface of each villus has many microscopic projections called microvilli. These structures, illustrated in Figure 34.12, are lined with epithelial cells on the luminal side and allow the nutrients to be absorbed from the digested food and transferred into the bloodstream on the other side. The villi and microvilli, with their many folds, increase the surface area of the intestine and increase the absorption efficiency of the nutrients. Absorbed nutrients in the blood are carried into the hepatic portal vein, which leads to the liver. There, the liver regulates the distribution of nutrients to the rest of the body and removes toxic substances, including drugs, alcohol, and some pathogens.

Visual Connection
Which of the following statements about the small intestine is false?
a. Absorptive cells that line the small intestine have microvilli, small projections that increase surface area and aid in the absorption of food.
b. The inside of the small intestine has many folds, called villi.
c. Microvilli are lined with blood vessels as well as lymphatic vessels.
d. The inside of the small intestine is called the lumen.

The human small intestine is over 6 m long and is divided into three parts: the duodenum, the jejunum, and the ileum.
The "C-shaped," fixed part of the small intestine is called the duodenum and is shown in Figure 34.11. The duodenum is separated from the stomach by the pyloric sphincter, which opens to allow chyme to move from the stomach to the duodenum. In the duodenum, chyme is mixed with pancreatic juices in an alkaline solution rich in bicarbonate that neutralizes the acidity of chyme and acts as a buffer. Pancreatic juices also contain several digestive enzymes. Digestive juices from the pancreas, liver, and gallbladder, as well as from gland cells of the intestinal wall itself, enter the duodenum. Bile is produced in the liver and stored and concentrated in the gallbladder. Bile contains bile salts, which emulsify lipids, while the pancreas produces enzymes that catabolize starches, disaccharides, proteins, and fats. These digestive juices break down the food particles in the chyme into glucose, triglycerides, and amino acids. Some chemical digestion of food takes place in the duodenum. Absorption of fatty acids also takes place in the duodenum.

The second part of the small intestine is called the jejunum, shown in Figure 34.11. Here, hydrolysis of nutrients is continued while most of the carbohydrates and amino acids are absorbed through the intestinal lining. The bulk of chemical digestion and nutrient absorption occurs in the jejunum.

The ileum, also illustrated in Figure 34.11, is the last part of the small intestine and is where the bile salts and vitamins are absorbed into the bloodstream. The undigested food is sent to the colon from the ileum via peristaltic movements of the muscle. The ileum ends and the large intestine begins at the ileocecal valve. The vermiform, "worm-like," appendix is located at the ileocecal valve. The appendix of humans secretes no enzymes and has an insignificant role in immunity.

Large Intestine

The large intestine, illustrated in Figure 34.13, reabsorbs the water from the undigested food material and processes the waste material. The human large intestine is much shorter than the small intestine but larger in diameter. It has three parts: the cecum, the colon, and the rectum. The cecum joins the ileum to the colon and is the receiving pouch for the waste matter. The colon is home to many bacteria, or "intestinal flora," that aid in the digestive processes. The colon can be divided into four regions: the ascending colon, the transverse colon, the descending colon, and the sigmoid colon. The main functions of the colon are to extract the water and mineral salts from undigested food and to store waste material. Carnivorous mammals have a shorter large intestine compared to herbivorous mammals due to their diet.

Rectum and Anus

The rectum is the terminal end of the large intestine, as shown in Figure 34.13. The primary role of the rectum is to store the feces until defecation. The feces are propelled using peristaltic movements during elimination. The anus is an opening at the far end of the digestive tract and is the exit point for the waste material. Two sphincters between the rectum and anus control elimination: the inner sphincter is involuntary and the outer sphincter is voluntary.

Accessory Organs

The organs discussed above are the organs of the digestive tract through which food passes. Accessory organs are organs that add secretions (enzymes) that catabolize food into nutrients. Accessory organs include salivary glands, the liver, the pancreas, and the gallbladder.
The liver, pancreas, and gallbladder are regulated by hormones in response to the food consumed.

The liver is the largest internal organ in humans, and it plays a very important role in digestion of fats and detoxifying blood. The liver produces bile, a digestive juice that is required for the breakdown of fatty components of the food in the duodenum. The liver also processes the vitamins and fats and synthesizes many plasma proteins.

The pancreas is another important gland that secretes digestive juices. The chyme produced from the stomach is highly acidic in nature; the pancreatic juices contain high levels of bicarbonate, an alkali that neutralizes the acidic chyme. Additionally, the pancreatic juices contain a large variety of enzymes that are required for the digestion of protein and carbohydrates.

The gallbladder is a small organ that aids the liver by storing bile and concentrating bile salts. When chyme containing fatty acids enters the duodenum, the bile is secreted from the gallbladder into the duodenum.

34.2 Nutrition and Energy Production

Learning Objectives
By the end of this section, you will be able to:
- Explain why an animal's diet should be balanced and meet the needs of the body
- Define the primary components of food
- Describe the essential nutrients required for cellular function that cannot be synthesized by the animal body
- Explain how energy is produced through diet and digestion
- Describe how excess carbohydrates and energy are stored in the body

Given the diversity of animal life on our planet, it is not surprising that the animal diet would also vary substantially. The animal diet is the source of materials needed for building DNA and other complex molecules needed for growth, maintenance, and reproduction; collectively these processes are called biosynthesis. The diet is also the source of materials for ATP production in the cells. The diet must be balanced to provide the minerals and vitamins that are required for cellular function.

Food Requirements

What are the fundamental requirements of the animal diet? The animal diet should be well balanced and provide nutrients required for bodily function and the minerals and vitamins required for maintaining structure and regulation necessary for good health and reproductive capability. These requirements for a human are illustrated graphically in Figure 34.14.

Link to Learning
The first step in ensuring that you are meeting the food requirements of your body is an awareness of the food groups and the nutrients they provide. To learn more about each food group and the recommended daily amounts, explore this interactive site by the United States Department of Agriculture.

Everyday Connection
Let's Move! Campaign

Obesity is a growing epidemic, and the rate of obesity among children is rapidly rising in the United States. To combat childhood obesity and ensure that children get a healthy start in life, First Lady Michelle Obama has launched the Let's Move! campaign. The goal of this campaign is to educate parents and caregivers on providing healthy nutrition and encouraging active lifestyles in future generations. This program aims to involve the entire community, including parents, teachers, and healthcare providers, to ensure that children have access to healthy foods—more fruits, vegetables, and whole grains—and consume fewer calories from processed foods. Another goal is to ensure that children get physical activity.
With the increase in television viewing and stationary pursuits such as video games, sedentary lifestyles have become the norm. Learn more at www.letsmove.gov.

Organic Precursors

The organic molecules required for building cellular material and tissues must come from food. Carbohydrates or sugars are the primary source of organic carbons in the animal body. During digestion, digestible carbohydrates are ultimately broken down into glucose and used to provide energy through metabolic pathways. Complex carbohydrates, including polysaccharides, can be broken down into glucose through biochemical modification; however, humans do not produce the enzyme cellulase and lack the ability to derive glucose from the polysaccharide cellulose. In humans, these molecules provide the fiber required for moving waste through the large intestine and a healthy colon. The intestinal flora in the human gut are able to extract some nutrition from these plant fibers. The excess sugars in the body are converted into glycogen and stored in the liver and muscles for later use. Glycogen stores are used to fuel prolonged exertions, such as long-distance running, and to provide energy during food shortage. Excess glycogen can be converted to fats, which are stored in the lower layer of the skin of mammals for insulation and energy storage. Excess digestible carbohydrates are stored by mammals in order to survive famine and aid in mobility.

Another important requirement is that of nitrogen. Protein catabolism provides a source of organic nitrogen. Amino acids are the building blocks of proteins, and protein breakdown provides amino acids that are used for cellular function. The carbon and nitrogen derived from these become the building blocks for nucleotides, nucleic acids, proteins, cells, and tissues. Excess nitrogen must be excreted, as it is toxic.

Fats add flavor to food and promote a sense of satiety or fullness. Fatty foods are also significant sources of energy, because one gram of fat contains nine calories. Fats are required in the diet to aid the absorption of fat-soluble vitamins and the production of fat-soluble hormones.

Essential Nutrients

While the animal body can synthesize many of the molecules required for function from organic precursors, there are some nutrients that need to be consumed from food. These nutrients are termed essential nutrients, meaning they must be eaten, and the body cannot produce them. The omega-3 alpha-linolenic acid and the omega-6 linoleic acid are essential fatty acids needed to make some membrane phospholipids. Vitamins are another class of essential organic molecules that are required in small quantities for many enzymes to function and, for this reason, are considered to be co-enzymes. Absence or low levels of vitamins can have a dramatic effect on health, as outlined in Table 34.1 and Table 34.2. Both fat-soluble and water-soluble vitamins must be obtained from food. Minerals, listed in Table 34.3, are inorganic essential nutrients that must be obtained from food. Among their many functions, minerals help in structure and regulation and are considered co-factors. Certain amino acids also must be procured from food and cannot be synthesized by the body. These amino acids are the "essential" amino acids. The human body can synthesize only 11 of the 20 required amino acids; the rest must be obtained from food. The essential amino acids are listed in Table 34.4.
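As an illustration only (not part of the original text), the essential/nonessential classification in Table 34.4 can be encoded as two sets and queried programmatically; the function name below is invented for this sketch.

```python
# Illustrative sketch: Table 34.4 as Python sets, used to check which amino
# acids in a given collection must come from the diet.

ESSENTIAL = {
    "isoleucine", "leucine", "lysine", "methionine", "phenylalanine",
    "tryptophan", "valine", "threonine",
    # Histidine and arginine can be synthesized, but not in the quantities
    # required, especially for growing children (footnote to Table 34.4).
    "histidine", "arginine",
}

NONESSENTIAL = {
    "alanine", "selenocysteine", "aspartate", "cysteine", "glutamate",
    "glycine", "proline", "serine", "tyrosine", "asparagine",
}

def must_come_from_diet(amino_acids):
    """Return the subset of the given amino acids the body cannot fully synthesize."""
    return set(amino_acids) & ESSENTIAL

# Of the four amino acids below, only lysine and leucine are essential.
print(must_come_from_diet({"alanine", "lysine", "glycine", "leucine"}))
# -> {'lysine', 'leucine'} (set print order may vary)
```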
Table 34.1 Water-soluble Essential Vitamins

Vitamin | Function | Deficiencies Can Lead To | Sources
Vitamin B1 (Thiamine) | Needed by the body to process lipids, proteins, and carbohydrates; coenzyme removes CO2 from organic compounds | Muscle weakness; beriberi: reduced heart function, CNS problems | Milk, meat, dried beans, whole grains
Vitamin B2 (Riboflavin) | Takes an active role in metabolism, aiding in the conversion of food to energy (FAD and FMN) | Cracks or sores on the outer surface of the lips (cheilosis); inflammation and redness of the tongue; moist, scaly skin inflammation (seborrheic dermatitis) | Meat, eggs, enriched grains, vegetables
Vitamin B3 (Niacin) | Used by the body to release energy from carbohydrates and to process alcohol; required for the synthesis of sex hormones; component of coenzymes NAD+ and NADP+ | Pellagra, which can result in dermatitis, diarrhea, dementia, and death | Meat, eggs, grains, nuts, potatoes
Vitamin B5 (Pantothenic acid) | Assists in producing energy from foods (lipids, in particular); component of coenzyme A | Fatigue, poor coordination, retarded growth, numbness, tingling of hands and feet | Meat, whole grains, milk, fruits, vegetables
Vitamin B6 (Pyridoxine) | The principal vitamin for processing amino acids and lipids; also helps convert nutrients into energy | Irritability, depression, confusion, mouth sores or ulcers, anemia, muscular twitching | Meat, dairy products, whole grains, orange juice
Vitamin B7 (Biotin) | Used in energy and amino acid metabolism, fat synthesis, and fat breakdown; helps the body use blood sugar | Hair loss, dermatitis, depression, numbness and tingling in the extremities; neuromuscular disorders | Meat, eggs, legumes and other vegetables
Vitamin B9 (Folic acid) | Assists the normal development of cells, especially during fetal development; helps metabolize nucleic and amino acids | Deficiency during pregnancy is associated with birth defects, such as neural tube defects and anemia | Leafy green vegetables, whole wheat, fruits, nuts, legumes
Vitamin B12 (Cobalamin) | Maintains healthy nervous system and assists with blood cell formation; coenzyme in nucleic acid metabolism | Anemia, neurological disorders, numbness, loss of balance | Meat, eggs, animal products
Vitamin C (Ascorbic acid) | Helps maintain connective tissue: bone, cartilage, and dentin; boosts the immune system | Scurvy, which results in bleeding, hair and tooth loss; joint pain and swelling; delayed wound healing | Citrus fruits, broccoli, tomatoes, red sweet bell peppers

Table 34.2 Fat-soluble Essential Vitamins

Vitamin | Function | Deficiencies Can Lead To | Sources
Vitamin A (Retinol) | Critical to the development of bones, teeth, and skin; helps maintain eyesight; enhances the immune system, fetal development, gene expression | Night-blindness, skin disorders, impaired immunity | Dark green leafy vegetables, yellow-orange vegetables, fruits, milk, butter
Vitamin D | Critical for calcium absorption for bone development and strength; maintains a stable nervous system; maintains a normal and strong heartbeat; helps in blood clotting | Rickets, osteomalacia, impaired immunity | Cod liver oil, milk, egg yolk
Vitamin E (Tocopherol) | Lessens oxidative damage of cells and prevents lung damage from pollutants; vital to the immune system | Deficiency is rare; anemia, nervous system degeneration | Wheat germ oil, unrefined vegetable oils, nuts, seeds, grains
Vitamin K (Phylloquinone) | Essential to blood clotting | Bleeding and easy bruising | Leafy green vegetables, tea

Table 34.3 Minerals and Their Function in the Human Body

Mineral | Function | Deficiencies Can Lead To | Sources
*Calcium | Needed for muscle and neuron function; heart health; builds bone and supports synthesis and function of blood cells; nerve function | Osteoporosis, rickets, muscle spasms, impaired growth | Milk, yogurt, fish, green leafy vegetables, legumes
*Chlorine | Needed for production of hydrochloric acid (HCl) in the stomach and nerve function; osmotic balance | Muscle cramps, mood disturbances, reduced appetite | Table salt
Copper (trace amounts) | Required component of many redox enzymes, including cytochrome c oxidase; cofactor for hemoglobin synthesis | Copper deficiency is rare | Liver, oysters, cocoa, chocolate, sesame, nuts
Iodine | Required for the synthesis of thyroid hormones | Goiter | Seafood, iodized salt, dairy products
Iron | Required for many proteins and enzymes, notably hemoglobin, to prevent anemia | Anemia, which causes poor concentration, fatigue, and poor immune function | Red meat, leafy green vegetables, fish (tuna, salmon), eggs, dried fruits, beans, whole grains
*Magnesium | Required co-factor for ATP formation; bone formation; normal membrane functions; muscle function | Mood disturbances, muscle spasms | Whole grains, leafy green vegetables
Manganese (trace amounts) | A cofactor in enzyme functions; trace amounts are required | Manganese deficiency is rare | Common in most foods
Molybdenum (trace amounts) | Acts as a cofactor for three essential enzymes in humans: sulfite oxidase, xanthine oxidase, and aldehyde oxidase | Molybdenum deficiency is rare |
*Phosphorus | A component of bones and teeth; helps regulate acid-base balance; nucleotide synthesis | Weakness, bone abnormalities, calcium loss | Milk, hard cheese, whole grains, meats
*Potassium | Vital for muscles, heart, and nerve function | Cardiac rhythm disturbance, muscle weakness | Legumes, potato skin, tomatoes, bananas
Selenium (trace amounts) | A cofactor essential to activity of antioxidant enzymes like glutathione peroxidase; trace amounts are required | Selenium deficiency is rare | Common in most foods
*Sodium | Systemic electrolyte required for many functions; acid-base balance; water balance; nerve function | Muscle cramps, fatigue, reduced appetite | Table salt
Zinc (trace amounts) | Required for several enzymes such as carboxypeptidase, liver alcohol dehydrogenase, and carbonic anhydrase | Anemia, poor wound healing, can lead to short stature | Common in most foods
*Greater than 200 mg/day required

Table 34.4 Essential Amino Acids

Amino acids that must be consumed | Amino acids anabolized by the body
isoleucine | alanine
leucine | selenocysteine
lysine | aspartate
methionine | cysteine
phenylalanine | glutamate
tryptophan | glycine
valine | proline
histidine* | serine
threonine | tyrosine
arginine* | asparagine
*The human body can synthesize histidine and arginine, but not in the quantities required, especially for growing children.

Food Energy and ATP

Animals need food to obtain energy and maintain homeostasis. Homeostasis is the ability of a system to maintain a stable internal environment even in the face of external changes to the environment. For example, the normal body temperature of humans is 37°C (98.6°F). Humans maintain this temperature even when the external temperature is hot or cold. It takes energy to maintain this body temperature, and animals obtain this energy from food.

The primary source of energy for animals is carbohydrates, mainly glucose. Glucose is called the body's fuel. The digestible carbohydrates in an animal's diet are converted to glucose molecules through a series of catabolic chemical reactions.
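For reference, the catabolism described above can be summarized in the standard overall balanced equation for the aerobic respiration of glucose (this equation is not printed in the chapter itself and is included here as a point of reference):

```latex
% Net balanced equation for aerobic respiration of glucose; the released
% free energy is captured partly as ATP and lost partly as heat.
\[
\mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \longrightarrow
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} + \text{energy (ATP + heat)}
\]
```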
Adenosine triphosphate, or ATP, is the primary energy currency in cells; ATP stores energy in the bonds between its phosphate groups and releases energy when those bonds are broken, converting ATP to ADP and a phosphate group. ATP is produced by the oxidative reactions in the cytoplasm and mitochondria of the cell, where carbohydrates, proteins, and fats undergo a series of metabolic reactions collectively called cellular respiration. For example, glycolysis is a series of reactions in which glucose is converted to pyruvic acid and some of its chemical potential energy is transferred to NADH and ATP.

ATP is required for all cellular functions. It is used to build the organic molecules that are required for cells and tissues; it provides energy for muscle contraction and for the transmission of electrical signals in the nervous system. When ATP is available in excess of the body's requirements, the liver uses the excess ATP and excess glucose to produce molecules called glycogen. Glycogen is a polymeric form of glucose and is stored in the liver and skeletal muscle cells. When blood sugar drops, the liver releases glucose from stores of glycogen. Skeletal muscle converts glycogen to glucose during intense exercise. The process of converting glucose and excess ATP to glycogen and the storage of excess energy is an evolutionarily important step in helping animals deal with mobility, food shortages, and famine.

Everyday Connection
Obesity

Obesity is a major health concern in the United States, and there is a growing focus on reducing obesity and the diseases it may lead to, such as type-2 diabetes, cancers of the colon and breast, and cardiovascular disease. How does the food consumed contribute to obesity?

Fatty foods are calorie-dense, meaning that they have more calories per unit mass than carbohydrates or proteins. One gram of carbohydrates has four calories, one gram of protein has four calories, and one gram of fat has nine calories. For example, a snack with 10 g of carbohydrate, 2 g of protein, and 5 g of fat supplies (10 × 4) + (2 × 4) + (5 × 9) = 93 calories, nearly half of them from fat. Animals tend to seek lipid-rich food for their higher energy content. The signals of hunger ("time to eat") and satiety ("time to stop eating") are controlled in the hypothalamus region of the brain. Foods that are rich in fatty acids tend to promote satiety more than foods that are rich only in carbohydrates.

Excess carbohydrate and ATP are used by the liver to synthesize glycogen. The pyruvate produced during glycolysis is used to synthesize fatty acids. When there is more glucose in the body than required, the resulting excess pyruvate is converted into molecules that eventually result in the synthesis of fatty acids within the body. These fatty acids are stored in adipose cells—the fat cells in the mammalian body whose primary role is to store fat for later use.

It is important to note that some animals benefit from obesity. Polar bears and seals need body fat for insulation and to keep them from losing body heat during Arctic winters. When food is scarce, stored body fat provides energy for maintaining homeostasis. Fats prevent famine in mammals, allowing them to access energy when food is not available on a daily basis; fats are stored when a large kill is made or lots of food is available.

34.3 Digestive System Processes

Learning Objectives
By the end of this section, you will be able to:
- Describe the process of digestion
- Detail the steps involved in digestion and absorption
- Define elimination
- Explain the role of both the small and large intestines in absorption

Obtaining nutrition and energy from food is a multi-step process.
For true animals, the first step is ingestion, the act of taking in food. This is followed by digestion, absorption, and elimination. In the following sections, each of these steps will be discussed in detail.

Ingestion

The large molecules found in intact food cannot pass through the cell membranes. Food needs to be broken into smaller particles so that animals can harness the nutrients and organic molecules. The first step in this process is ingestion. Ingestion is the process of taking in food through the mouth. In vertebrates, the teeth, saliva, and tongue play important roles in mastication (preparing the food into a bolus). While the food is being mechanically broken down, the enzymes in saliva begin to chemically process the food as well. The combined action of these processes modifies the food from large particles to a soft mass that can be swallowed and can travel the length of the esophagus.

Digestion and Absorption

Digestion is the mechanical and chemical breakdown of food into small organic fragments. It is important to break down macromolecules into smaller fragments that are of suitable size for absorption across the digestive epithelium. Large, complex molecules of proteins, polysaccharides, and lipids must be reduced to simpler particles such as simple sugars before they can be absorbed by the digestive epithelial cells. Different organs play specific roles in the digestive process. The animal diet needs carbohydrates, protein, and fat, as well as vitamins and inorganic components for nutritional balance. How each of these components is digested is discussed in the following sections.

Carbohydrates

The digestion of carbohydrates begins in the mouth. The salivary enzyme amylase begins the breakdown of food starches into maltose, a disaccharide. As the bolus of food travels through the esophagus to the stomach, no significant digestion of carbohydrates takes place. The esophagus produces no digestive enzymes but does produce mucus for lubrication. The acidic environment in the stomach stops the action of the amylase enzyme.

The next step of carbohydrate digestion takes place in the duodenum. Recall that the chyme from the stomach enters the duodenum and mixes with the digestive secretions from the pancreas, liver, and gallbladder. Pancreatic juices also contain amylase, which continues the breakdown of starch and glycogen into maltose, a disaccharide. The disaccharides are broken down into monosaccharides by enzymes called maltases, sucrases, and lactases, which are also present in the brush border of the small intestinal wall. Maltase breaks down maltose into glucose. Other disaccharides, such as sucrose and lactose, are broken down by sucrase and lactase, respectively. Sucrase breaks down sucrose (or "table sugar") into glucose and fructose, and lactase breaks down lactose (or "milk sugar") into glucose and galactose. The monosaccharides (glucose) thus produced are absorbed and then can be used in metabolic pathways to harness energy. The monosaccharides are transported across the intestinal epithelium into the bloodstream to be transported to the different cells in the body. The steps in carbohydrate digestion are summarized in Figure 34.16 and Table 34.5.
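As a sketch only (not part of the original text), the carbohydrate-digestion steps just described, and tabulated in Table 34.5 below, can be modeled as a simple mapping from enzyme to site, substrate, and end products; the dictionary name is invented for this example.

```python
# Illustrative sketch: enzyme -> (site of action, substrate, end products),
# paraphrasing the paragraph above and Table 34.5.

CARB_DIGESTION = {
    "salivary amylase": ("mouth", "starch (polysaccharide)",
                         ["maltose (disaccharide)", "oligosaccharides"]),
    "pancreatic amylase": ("small intestine", "starch (polysaccharide)",
                           ["maltose (disaccharide)", "monosaccharides"]),
    # Brush-border oligosaccharidases; maltose yields two glucose units.
    "maltase": ("small intestine (brush border)", "maltose",
                ["glucose", "glucose"]),
    "sucrase": ("small intestine (brush border)", "sucrose",
                ["glucose", "fructose"]),
    "lactase": ("small intestine (brush border)", "lactose",
                ["glucose", "galactose"]),
}

for enzyme, (site, substrate, products) in CARB_DIGESTION.items():
    print(f"{enzyme}: {substrate} -> {' + '.join(products)} ({site})")
```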
Table 34.5 Digestion of Carbohydrates

Enzyme | Produced By | Site of Action | Substrate Acting On | End Products
Salivary amylase | Salivary glands | Mouth | Polysaccharides (starch) | Disaccharides (maltose), oligosaccharides
Pancreatic amylase | Pancreas | Small intestine | Polysaccharides (starch) | Disaccharides (maltose), monosaccharides
Oligosaccharidases | Lining of the intestine; brush border membrane | Small intestine | Disaccharides | Monosaccharides (e.g., glucose, fructose, galactose)

Protein

A large part of protein digestion takes place in the stomach. The enzyme pepsin plays an important role in the digestion of proteins by breaking down the intact protein to peptides, which are short chains of four to nine amino acids. In the duodenum, other enzymes—trypsin, elastase, and chymotrypsin—act on the peptides, reducing them to smaller peptides. Trypsin, elastase, carboxypeptidase, and chymotrypsin are produced by the pancreas and released into the duodenum, where they act on the chyme. Further breakdown of peptides to single amino acids is aided by enzymes called peptidases (those that break down peptides). Specifically, carboxypeptidase, dipeptidase, and aminopeptidase play important roles in reducing the peptides to free amino acids. The amino acids are absorbed into the bloodstream through the small intestine. The steps in protein digestion are summarized in Figure 34.17 and Table 34.6.

Table 34.6 Digestion of Protein

Enzyme | Produced By | Site of Action | Substrate Acting On | End Products
Pepsin | Stomach chief cells | Stomach | Proteins | Peptides
Trypsin, elastase, chymotrypsin | Pancreas | Small intestine | Proteins | Peptides
Carboxypeptidase | Pancreas | Small intestine | Peptides | Amino acids and peptides
Aminopeptidase, dipeptidase | Lining of intestine | Small intestine | Peptides | Amino acids

Lipids

Lipid digestion begins in the stomach with the aid of lingual lipase and gastric lipase. However, the bulk of lipid digestion occurs in the small intestine due to pancreatic lipase. When chyme enters the duodenum, the hormonal responses trigger the release of bile, which is produced in the liver and stored in the gallbladder. Bile aids in the digestion of lipids, primarily triglycerides, by emulsification. Emulsification is a process in which large lipid globules are broken down into several small lipid globules. These small globules are more widely distributed in the chyme rather than forming large aggregates. Lipids are hydrophobic substances: in the presence of water, they will aggregate to form globules to minimize exposure to water. Bile contains bile salts, which are amphipathic, meaning they contain hydrophobic and hydrophilic parts. Thus, the bile salts' hydrophilic side can interface with water on one side while the hydrophobic side interfaces with lipids on the other. By doing so, bile salts emulsify large lipid globules into small lipid globules.

Why is emulsification important for digestion of lipids? Pancreatic juices contain enzymes called lipases (enzymes that break down lipids). If the lipid in the chyme aggregates into large globules, very little surface area of the lipids is available for the lipases to act on, leaving lipid digestion incomplete. By forming an emulsion, bile salts increase the available surface area of the lipids many fold. The pancreatic lipases can then act on the lipids more efficiently and digest them, as detailed in Figure 34.18. Lipases break down the lipids into fatty acids and glycerides.
The fatty acids and glycerides produced by the lipases can pass through the plasma membrane of the cell and enter the epithelial cells of the intestinal lining. The bile salts surround long-chain fatty acids and monoglycerides, forming tiny spheres called micelles. The micelles move into the brush border of the small intestine absorptive cells, where the long-chain fatty acids and monoglycerides diffuse out of the micelles into the absorptive cells, leaving the micelles behind in the chyme. The long-chain fatty acids and monoglycerides recombine in the absorptive cells to form triglycerides, which aggregate into globules and become coated with proteins. These large spheres are called chylomicrons. Chylomicrons contain triglycerides, cholesterol, and other lipids and have proteins on their surface. The surface is also composed of the hydrophilic phosphate "heads" of phospholipids. Together, these features enable the chylomicron to move in an aqueous environment without exposing the lipids to water. Chylomicrons leave the absorptive cells via exocytosis, enter the lymphatic vessels, and then enter the blood in the subclavian vein.

Vitamins

Vitamins can be either water-soluble or lipid-soluble. Lipid-soluble vitamins are absorbed in the same manner as lipids, so it is important to consume some amount of dietary lipid to aid their absorption. Water-soluble vitamins can be directly absorbed into the bloodstream from the intestine.

Link to Learning: This website has an overview of the digestion of protein, fat, and carbohydrates.

Visual Connection: Which of the following statements about digestive processes is true?
a. Amylase, maltase, and lactase in the mouth digest carbohydrates.
b. Trypsin and lipase in the stomach digest protein.
c. Bile emulsifies lipids in the small intestine.
d. No food is absorbed until the small intestine.

Elimination

The final step in digestion is the elimination of undigested food content and waste products. The undigested food material enters the colon, where most of the water is reabsorbed. Recall that the colon is also home to the microflora called "intestinal flora" that aid in the digestion process. The semi-solid waste is moved through the colon by peristaltic movements of the muscle and is stored in the rectum. As the rectum expands in response to storage of fecal matter, it triggers the neural signals required to set up the urge to eliminate. The solid waste is eliminated through the anus using peristaltic movements of the rectum.

Common Problems with Elimination

Diarrhea and constipation are some of the most common health concerns that affect digestion. Constipation is a condition in which the feces are hardened because of excess water removal in the colon. In contrast, if not enough water is removed from the feces, the result is diarrhea. Many bacteria, including the ones that cause cholera, affect the proteins involved in water reabsorption in the colon and result in excessive diarrhea.

Emesis

Emesis, or vomiting, is the elimination of food by forceful expulsion through the mouth. It is often a response to an irritant that affects the digestive tract, including but not limited to viruses, bacteria, emotions, sights, and food poisoning. This forceful expulsion of the food is due to the strong contractions produced by the stomach muscles. The process of emesis is regulated by the medulla.
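The enzyme activities in Tables 34.5 and 34.6 are essentially lookup data: each enzyme maps to where it is made, where it acts, and what it converts. The Python sketch below consolidates a representative subset of those tables into a dictionary so the enzymes acting on a given substrate can be found programmatically. The data structure and helper function are our own illustration, not part of the text; the biological facts come from the tables above.

```python
# Illustrative lookup built from Tables 34.5 and 34.6 (subset shown).
DIGESTIVE_ENZYMES = {
    "salivary amylase":   {"source": "salivary glands",         "site": "mouth",
                           "substrate": "starch",   "products": ["maltose", "oligosaccharides"]},
    "pancreatic amylase": {"source": "pancreas",                "site": "small intestine",
                           "substrate": "starch",   "products": ["maltose", "monosaccharides"]},
    "maltase":            {"source": "intestinal brush border", "site": "small intestine",
                           "substrate": "maltose",  "products": ["glucose"]},
    "sucrase":            {"source": "intestinal brush border", "site": "small intestine",
                           "substrate": "sucrose",  "products": ["glucose", "fructose"]},
    "lactase":            {"source": "intestinal brush border", "site": "small intestine",
                           "substrate": "lactose",  "products": ["glucose", "galactose"]},
    "pepsin":             {"source": "stomach chief cells",     "site": "stomach",
                           "substrate": "proteins", "products": ["peptides"]},
    "trypsin":            {"source": "pancreas",                "site": "small intestine",
                           "substrate": "proteins", "products": ["peptides"]},
    "carboxypeptidase":   {"source": "pancreas",                "site": "small intestine",
                           "substrate": "peptides", "products": ["amino acids", "peptides"]},
}

def enzymes_acting_on(substrate: str) -> list[str]:
    """Return the names of all enzymes whose substrate matches."""
    return [name for name, info in DIGESTIVE_ENZYMES.items()
            if info["substrate"] == substrate]

print(enzymes_acting_on("starch"))    # ['salivary amylase', 'pancreatic amylase']
print(enzymes_acting_on("proteins"))  # ['pepsin', 'trypsin']
```

A dictionary keyed by enzyme mirrors how the tables are organized, while the small query function recovers the table's other reading (substrate to enzymes) without duplicating the data.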
34.4 Digestive System Regulation

Learning Objectives
By the end of this section, you will be able to:
Discuss the role of neural regulation in digestive processes
Explain how hormones regulate digestion

The brain is the control center for the sensation of hunger and satiety. The functions of the digestive system are regulated through neural and hormonal responses.

Neural Responses to Food

In reaction to the smell, sight, or thought of food, like that shown in Figure 34.20, the first response is salivation. The salivary glands secrete more saliva in response to stimulation by the autonomic nervous system, triggered by food, in preparation for digestion. Simultaneously, the stomach begins to produce hydrochloric acid to digest the food. Recall that the peristaltic movements of the esophagus and other organs of the digestive tract are under the control of the brain; the brain prepares these muscles for movement as well. When the stomach is full, the part of the brain that detects satiety signals fullness. There are three overlapping phases of gastric control—the cephalic phase, the gastric phase, and the intestinal phase—each of which requires many enzymes and is under neural control as well.

Digestive Phases

The response to food begins even before food enters the mouth. The first phase of ingestion, called the cephalic phase, is controlled by the neural response to the stimulus provided by food. All aspects—such as sight, sense, and smell—trigger the neural responses resulting in salivation and secretion of gastric juices. The gastric and salivary secretion in the cephalic phase can also take place merely at the thought of food: right now, if you think about a piece of chocolate or a crispy potato chip, the increase in salivation is a cephalic phase response to the thought. The central nervous system prepares the stomach to receive food.

The gastric phase begins once the food arrives in the stomach. It builds on the stimulation provided during the cephalic phase. Gastric acids and enzymes process the ingested materials. The gastric phase is stimulated by (1) distension of the stomach, (2) a decrease in the pH of the gastric contents, and (3) the presence of undigested material. This phase consists of local, hormonal, and neural responses. These responses stimulate secretions and powerful contractions.

The intestinal phase begins when chyme enters the small intestine, triggering digestive secretions. This phase controls the rate of gastric emptying. In addition to gastric emptying, when chyme enters the small intestine, it triggers other hormonal and neural events that coordinate the activities of the intestinal tract, pancreas, liver, and gallbladder.

Hormonal Responses to Food

The endocrine system controls the response of the various glands in the body and the release of hormones at the appropriate times. One of the important factors under hormonal control is the stomach acid environment. During the gastric phase, the hormone gastrin is secreted by G cells in the stomach in response to the presence of proteins. Gastrin stimulates the release of stomach acid, or hydrochloric acid (HCl), which aids in the digestion of the proteins. However, when the stomach is emptied, the acidic environment need not be maintained, and a hormone called somatostatin stops the release of hydrochloric acid. This is controlled by a negative feedback mechanism. In the duodenum, digestive secretions from the liver, pancreas, and gallbladder play an important role in digesting chyme during the intestinal phase.
In order to neutralize the acidic chyme, a hormone called secretin stimulates the pancreas to produce an alkaline bicarbonate solution and deliver it to the duodenum. Secretin acts in tandem with another hormone called cholecystokinin (CCK). Not only does CCK stimulate the pancreas to produce the requisite pancreatic juices, it also stimulates the gallbladder to release bile into the duodenum.

Link to Learning: Visit this website to learn more about the endocrine system. Review the text and watch the animation of how control is implemented in the endocrine system.

Another level of hormonal control occurs in response to the composition of food. Foods high in lipids take a long time to digest. A hormone called gastric inhibitory peptide is secreted by the small intestine to slow down the peristaltic movements of the intestine, allowing fatty foods more time to be digested and absorbed. Understanding the hormonal control of the digestive system is an important area of ongoing research. Scientists are exploring the role of each hormone in the digestive process and developing ways to target these hormones. Advances could lead to knowledge that may help to battle the obesity epidemic.
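The gastrin and somatostatin interaction described above is a classic negative feedback loop: the stimulus (protein in the stomach) drives acid secretion, and the disappearance of the stimulus shuts it off. The short Python sketch below is a toy model of that loop only; the function name, the boolean abstraction of "protein present," and the unit step sizes are invented for illustration and are not from the text.

```python
# Toy negative-feedback model of gastric acid control (illustrative only).
# Assumption: "protein present" is reduced to a boolean, and acid output
# to a relative level; real regulation is continuous and hormone-mediated.

def gastric_acid_step(protein_present: bool, acid_level: float) -> float:
    """Advance the stomach-acid level by one time step.

    While protein is present, gastrin stimulates HCl release; once the
    stomach empties, somatostatin stops the release and acid falls back.
    """
    if protein_present:
        return acid_level + 1.0                 # gastrin: more HCl
    return max(0.0, acid_level - 1.0)           # somatostatin: release halted

# Simulate a meal: protein present for 3 steps, then the stomach empties.
acid = 0.0
for t, protein in enumerate([True, True, True, False, False, False]):
    acid = gastric_acid_step(protein, acid)
    print(f"t={t}: protein={protein}, acid level={acid}")
# Acid rises while protein is present, then returns toward baseline.
```

The point of the sketch is the shape of the response, not the numbers: output tracks the stimulus and decays once the stimulus is removed, which is exactly what a negative feedback mechanism guarantees.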
Anatomy and Physiology
Chapter Objectives

After studying this chapter, you will be able to:
Describe the functions of the skeletal system and define its two major subdivisions
Identify the bones and bony structures of the skull, the cranial suture lines, the cranial fossae, and the openings in the skull
Discuss the vertebral column and regional variations in its bony components and curvatures
Describe the components of the thoracic cage
Discuss the embryonic development of the axial skeleton

Introduction

The skeletal system forms the rigid internal framework of the body. It consists of the bones, cartilages, and ligaments. Bones support the weight of the body, allow for body movements, and protect internal organs. Cartilage provides flexible strength and support for body structures such as the thoracic cage, the external ear, and the trachea and larynx. At joints of the body, cartilage can also unite adjacent bones or provide cushioning between them. Ligaments are the strong connective tissue bands that hold the bones at a moveable joint together and serve to prevent excessive movements of the joint that would result in injury. Providing movement of the skeleton are the muscles of the body, which are firmly attached to the skeleton via connective tissue structures called tendons. As muscles contract, they pull on the bones to produce movements of the body. Thus, without a skeleton, you would not be able to stand, run, or even feed yourself!

Each bone of the body serves a particular function, and therefore bones vary in size, shape, and strength based on these functions. For example, the bones of the lower back and lower limb are thick and strong to support your body weight. Similarly, the size of a bony landmark that serves as a muscle attachment site on an individual bone is related to the strength of this muscle. Muscles can apply very strong pulling forces to the bones of the skeleton. To resist these forces, bones have enlarged bony landmarks at sites where powerful muscles attach. This means that not only the size of a bone, but also its shape, is related to its function. For this reason, the identification of bony landmarks is important during your study of the skeletal system.

Bones are also dynamic organs that can modify their strength and thickness in response to changes in muscle strength or body weight. Thus, muscle attachment sites on bones will thicken if you begin a workout program that increases muscle strength. Similarly, the walls of weight-bearing bones will thicken if you gain body weight or begin pounding the pavement as part of a new running regimen. In contrast, a reduction in muscle strength or body weight will cause bones to become thinner. This may happen during a prolonged hospital stay, following limb immobilization in a cast, or during the weightlessness of outer space. Even a change in diet, such as eating only soft food due to the loss of teeth, will result in a noticeable decrease in the size and thickness of the jaw bones.
[ { "answer": { "ans_choice": 3, "ans_text": "vertebral column" }, "bloom": "1", "hl_context": "The skeleton is subdivided into two major divisions — the axial and appendicular . The axial skeleton forms the vertical , central axis of the body and includes all bones of the head , neck , chest , and back ( Figure 7.2 ) . It serves to protect the brain , spinal cord , heart , and lungs . It also serves as the attachment site for muscles that move the head , neck , and back , and for muscles that act across the shoulder and hip joints to move their corresponding limbs . <hl> The axial skeleton of the adult consists of 80 bones , including the skull , the vertebral column , and the thoracic cage . <hl> The skull is formed by 22 bones . Also associated with the head are an additional seven bones , including the hyoid bone and the ear ossicles ( three small bones found in each middle ear ) . The vertebral column consists of 24 bones , each called a vertebra , plus the sacrum and coccyx . The thoracic cage includes the 12 pairs of ribs , and the sternum , the flattened bone of the anterior chest .", "hl_sentences": "The axial skeleton of the adult consists of 80 bones , including the skull , the vertebral column , and the thoracic cage .", "question": { "cloze_format": "The ___is part of the axial skeleton.", "normal_format": "Which of the following is part of the axial skeleton?", "question_choices": [ "shoulder bones", "thigh bone", "foot bones", "vertebral column" ], "question_id": "fs-id2637677", "question_text": "Which of the following is part of the axial skeleton?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "supports trunk of body" }, "bloom": "1", "hl_context": "The skeletal system includes all of the bones , cartilages , and ligaments of the body that support and give shape to the body and body structures . The skeleton consists of the bones of the body . For adults , there are 206 bones in the skeleton . Younger individuals have higher numbers of bones because some bones fuse together during childhood and adolescence to form an adult bone . <hl> The primary functions of the skeleton are to provide a rigid , internal structure that can support the weight of the body against the force of gravity , and to provide a structure upon which muscles can act to produce movements of the body . <hl> The lower portion of the skeleton is specialized for stability during walking or running . <hl> In contrast , the upper skeleton has greater mobility and ranges of motion , features that allow you to lift and carry objects or turn your head and trunk . <hl> In addition to providing for support and movements of the body , the skeleton has protective and storage functions . It protects the internal organs , including the brain , spinal cord , heart , lungs , and pelvic organs . The bones of the skeleton serve as the primary storage site for important minerals such as calcium and phosphate . The bone marrow found within bones stores fat and houses the blood-cell producing tissue of the body .", "hl_sentences": "The primary functions of the skeleton are to provide a rigid , internal structure that can support the weight of the body against the force of gravity , and to provide a structure upon which muscles can act to produce movements of the body . 
In contrast , the upper skeleton has greater mobility and ranges of motion , features that allow you to lift and carry objects or turn your head and trunk .", "question": { "cloze_format": "A function of the axial skeleton is that it ___ .", "normal_format": "Which of the following is a function of the axial skeleton?", "question_choices": [ "allows for movement of the wrist and hand", "protects nerves and blood vessels at the elbow", "supports trunk of body", "allows for movements of the ankle and foot" ], "question_id": "fs-id1927640", "question_text": "Which of the following is a function of the axial skeleton?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "forms the vertical axis of the body" }, "bloom": null, "hl_context": "The skeleton is subdivided into two major divisions — the axial and appendicular . <hl> The axial skeleton forms the vertical , central axis of the body and includes all bones of the head , neck , chest , and back ( Figure 7.2 ) . <hl> It serves to protect the brain , spinal cord , heart , and lungs . It also serves as the attachment site for muscles that move the head , neck , and back , and for muscles that act across the shoulder and hip joints to move their corresponding limbs . The axial skeleton of the adult consists of 80 bones , including the skull , the vertebral column , and the thoracic cage . The skull is formed by 22 bones . Also associated with the head are an additional seven bones , including the hyoid bone and the ear ossicles ( three small bones found in each middle ear ) . The vertebral column consists of 24 bones , each called a vertebra , plus the sacrum and coccyx . The thoracic cage includes the 12 pairs of ribs , and the sternum , the flattened bone of the anterior chest .", "hl_sentences": "The axial skeleton forms the vertical , central axis of the body and includes all bones of the head , neck , chest , and back ( Figure 7.2 ) .", "question": { "cloze_format": "The axial skeleton ________.", "normal_format": "What is the characteristic of the axial skeleton?", "question_choices": [ "consists of 126 bones", "forms the vertical axis of the body", "includes all bones of the body trunk and limbs", "includes only the bones of the lower limbs" ], "question_id": "fs-id1888088", "question_text": "The axial skeleton ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "parietal bone" }, "bloom": "1", "hl_context": "The floor of the brain case is referred to as the base of the skull . This is a complex area that varies in depth and has numerous openings for the passage of cranial nerves , blood vessels , and the spinal cord . Inside the skull , the base is subdivided into three large spaces , called the anterior cranial fossa , middle cranial fossa , and posterior cranial fossa ( fossa = “ trench or ditch ” ) ( Figure 7.6 ) . From anterior to posterior , the fossae increase in depth . The shape and depth of each fossa corresponds to the shape and size of the brain region that each houses . The boundaries and openings of the cranial fossae ( singular = fossa ) will be described in a later section . <hl> The brain case consists of eight bones . <hl> <hl> These include the paired parietal and temporal bones , plus the unpaired frontal , occipital , sphenoid , and ethmoid bones . <hl>", "hl_sentences": "The brain case consists of eight bones . 
These include the paired parietal and temporal bones , plus the unpaired frontal , occipital , sphenoid , and ethmoid bones .", "question": { "cloze_format": "The ___ is a bone of the brain case.", "normal_format": "Which of the following is a bone of the brain case?", "question_choices": [ "parietal bone", "zygomatic bone", "maxillary bone", "lacrimal bone" ], "question_id": "fs-id2271040", "question_text": "Which of the following is a bone of the brain case?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "occipital bone" }, "bloom": "1", "hl_context": "The two suture lines seen on the top of the skull are the coronal and sagittal sutures . The coronal suture runs from side to side across the skull , within the coronal plane of section ( see Figure 7.5 ) . It joins the frontal bone to the right and left parietal bones . The sagittal suture extends posteriorly from the coronal suture , running along the midline at the top of the skull in the sagittal plane of section ( see Figure 7.9 ) . It unites the right and left parietal bones . On the posterior skull , the sagittal suture terminates by joining the lambdoid suture . The lambdoid suture extends downward and laterally to either side away from its junction with the sagittal suture . <hl> The lambdoid suture joins the occipital bone to the right and left parietal and temporal bones . <hl> This suture is named for its upside-down \" V \" shape , which resembles the capital letter version of the Greek letter lambda ( Λ ) . The squamous suture is located on the lateral skull . It unites the squamous portion of the temporal bone with the parietal bone ( see Figure 7.5 ) . At the intersection of four bones is the pterion , a small , capital-H-shaped suture line region that unites the frontal bone , parietal bone , squamous portion of the temporal bone , and greater wing of the sphenoid bone . It is the weakest part of the skull . The pterion is located approximately two finger widths above the zygomatic arch and a thumb ’ s width posterior to the upward portion of the zygomatic bone . Disorders of the ... Skeletal System", "hl_sentences": "The lambdoid suture joins the occipital bone to the right and left parietal and temporal bones .", "question": { "cloze_format": "The lambdoid suture joins the parietal bone to the ________.", "normal_format": "The lambdoid suture joins the parietal bone to which of the following?", "question_choices": [ "frontal bone", "occipital bone", "other parietal bone", "temporal bone" ], "question_id": "fs-id1521705", "question_text": "The lambdoid suture joins the parietal bone to the ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "has the foramen rotundum, foramen ovale, and foramen spinosum" }, "bloom": null, "hl_context": "<hl> Foramen spinosum — This small opening , located posterior-lateral to the foramen ovale , is the entry point for an important artery that supplies the covering layers surrounding the brain . <hl> The branching pattern of this artery forms readily visible grooves on the internal surface of the skull and these grooves can be traced back to their origin at the foramen spinosum . <hl> Foramen ovale of the middle cranial fossa — This large , oval-shaped opening in the floor of the middle cranial fossa provides passage for a major sensory nerve to the lateral head , cheek , chin , and lower teeth . 
<hl> <hl> Foramen rotundum — This rounded opening ( rotundum = “ round ” ) is located in the floor of the middle cranial fossa , just inferior to the superior orbital fissure . <hl> It is the exit point for a major sensory nerve that supplies the cheek , nose , and upper teeth .", "hl_sentences": "Foramen spinosum — This small opening , located posterior-lateral to the foramen ovale , is the entry point for an important artery that supplies the covering layers surrounding the brain . Foramen ovale of the middle cranial fossa — This large , oval-shaped opening in the floor of the middle cranial fossa provides passage for a major sensory nerve to the lateral head , cheek , chin , and lower teeth . Foramen rotundum — This rounded opening ( rotundum = “ round ” ) is located in the floor of the middle cranial fossa , just inferior to the superior orbital fissure .", "question": { "cloze_format": "The middle cranial fossa ________.", "normal_format": "Which of the following is correct about the middle cranial fossa?", "question_choices": [ "is bounded anteriorly by the petrous ridge", "is bounded posteriorly by the lesser wing of the sphenoid bone", "is divided at the midline by a small area of the ethmoid bone", "has the foramen rotundum, foramen ovale, and foramen spinosum" ], "question_id": "fs-id2326233", "question_text": "The middle cranial fossa ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "air-filled spaces found within the frontal, maxilla, sphenoid, and ethmoid bones only" }, "bloom": "1", "hl_context": "<hl> The paranasal sinuses are named for the skull bone that each occupies . <hl> <hl> The frontal sinus is located just above the eyebrows , within the frontal bone ( see Figure 7.17 ) . <hl> This irregular space may be divided at the midline into bilateral spaces , or these may be fused into a single sinus space . The frontal sinus is the most anterior of the paranasal sinuses . <hl> The largest sinus is the maxillary sinus . <hl> <hl> These are paired and located within the right and left maxillary bones , where they occupy the area just below the orbits . <hl> The maxillary sinuses are most commonly involved during sinus infections . Because their connection to the nasal cavity is located high on their medial wall , they are difficult to drain . <hl> The sphenoid sinus is a single , midline sinus . <hl> <hl> It is located within the body of the sphenoid bone , just anterior and inferior to the sella turcica , thus making it the most posterior of the paranasal sinuses . <hl> <hl> The lateral aspects of the ethmoid bone contain multiple small spaces separated by very thin bony walls . <hl> <hl> Each of these spaces is called an ethmoid air cell . <hl> These are located on both sides of the ethmoid bone , between the upper nasal cavity and medial orbit , just behind the superior nasal conchae .", "hl_sentences": "The paranasal sinuses are named for the skull bone that each occupies . The frontal sinus is located just above the eyebrows , within the frontal bone ( see Figure 7.17 ) . The largest sinus is the maxillary sinus . These are paired and located within the right and left maxillary bones , where they occupy the area just below the orbits . The sphenoid sinus is a single , midline sinus . It is located within the body of the sphenoid bone , just anterior and inferior to the sella turcica , thus making it the most posterior of the paranasal sinuses . 
The lateral aspects of the ethmoid bone contain multiple small spaces separated by very thin bony walls . Each of these spaces is called an ethmoid air cell .", "question": { "cloze_format": "The paranasal sinuses are ________.", "normal_format": "What are the paranasal sinuses?", "question_choices": [ "air-filled spaces found within the frontal, maxilla, sphenoid, and ethmoid bones only", "air-filled spaces found within all bones of the skull", "not connected to the nasal cavity", "divided at the midline by the nasal septum" ], "question_id": "fs-id1541226", "question_text": "The paranasal sinuses are ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "sella turcica" }, "bloom": "1", "hl_context": "The middle cranial fossa is deeper and situated posterior to the anterior fossa . It extends from the lesser wings of the sphenoid bone anteriorly , to the petrous ridges ( petrous portion of the temporal bones ) posteriorly . The large , diagonally positioned petrous ridges give the middle cranial fossa a butterfly shape , making it narrow at the midline and broad laterally . The temporal lobes of the brain occupy this fossa . <hl> The middle cranial fossa is divided at the midline by the upward bony prominence of the sella turcica , a part of the sphenoid bone . <hl> The middle cranial fossa has several openings for the passage of blood vessels and cranial nerves ( see Figure 7.8 ) .", "hl_sentences": "The middle cranial fossa is divided at the midline by the upward bony prominence of the sella turcica , a part of the sphenoid bone .", "question": { "cloze_format": "Parts of the sphenoid bone include the ________.", "normal_format": "What is included in parts of the sphenoid bone?", "question_choices": [ "sella turcica", "squamous portion", "glabella", "zygomatic process" ], "question_id": "fs-id2168003", "question_text": "Parts of the sphenoid bone include the ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "hypoglossal canal, which is located in the posterior cranial fossa" }, "bloom": "1", "hl_context": "Located on the medial wall of the petrous ridge in the posterior cranial fossa is the internal acoustic meatus ( see Figure 7.11 ) . This opening provides for passage of the nerve from the hearing and equilibrium organs of the inner ear , and the nerve that supplies the muscles of the face . <hl> Located at the anterior-lateral margin of the foramen magnum is the hypoglossal canal . <hl> These emerge on the inferior aspect of the skull at the base of the occipital condyle and provide passage for an important nerve to the tongue . <hl> The posterior cranial fossa is the most posterior and deepest portion of the cranial cavity . <hl> It contains the cerebellum of the brain . The posterior fossa is bounded anteriorly by the petrous ridges , while the occipital bone forms the floor and posterior wall . It is divided at the midline by the large foramen magnum ( “ great aperture ” ) , the opening that provides for passage of the spinal cord . The floor of the brain case is referred to as the base of the skull . This is a complex area that varies in depth and has numerous openings for the passage of cranial nerves , blood vessels , and the spinal cord . <hl> Inside the skull , the base is subdivided into three large spaces , called the anterior cranial fossa , middle cranial fossa , and posterior cranial fossa ( fossa = “ trench or ditch ” ) ( Figure 7.6 ) . 
<hl> From anterior to posterior , the fossae increase in depth . The shape and depth of each fossa corresponds to the shape and size of the brain region that each houses . The boundaries and openings of the cranial fossae ( singular = fossa ) will be described in a later section . The brain case consists of eight bones . These include the paired parietal and temporal bones , plus the unpaired frontal , occipital , sphenoid , and ethmoid bones .", "hl_sentences": "Located at the anterior-lateral margin of the foramen magnum is the hypoglossal canal . The posterior cranial fossa is the most posterior and deepest portion of the cranial cavity . Inside the skull , the base is subdivided into three large spaces , called the anterior cranial fossa , middle cranial fossa , and posterior cranial fossa ( fossa = “ trench or ditch ” ) ( Figure 7.6 ) .", "question": { "cloze_format": "The bony openings of the skull include the ________.", "normal_format": "What do the bony openings of the skull include?", "question_choices": [ "carotid canal, which is located in the anterior cranial fossa", "superior orbital fissure, which is located at the superior margin of the anterior orbit", "mental foramen, which is located just below the orbit", "hypoglossal canal, which is located in the posterior cranial fossa" ], "question_id": "fs-id1708493", "question_text": "The bony openings of the skull include the ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "seven vertebrae" }, "bloom": "1", "hl_context": "The vertebral column originally develops as a series of 33 vertebrae , but this number is eventually reduced to 24 vertebrae , plus the sacrum and coccyx . <hl> The vertebral column is subdivided into five regions , with the vertebrae in each area named for that region and numbered in descending order . <hl> <hl> In the neck , there are seven cervical vertebrae , each designated with the letter “ C ” followed by its number . <hl> Superiorly , the C1 vertebra articulates ( forms a joint ) with the occipital condyles of the skull . Inferiorly , C1 articulates with the C2 vertebra , and so on . Below these are the 12 thoracic vertebrae , designated T1 – T12 . The lower back contains the L1 – L5 lumbar vertebrae . The single sacrum , which is also part of the pelvis , is formed by the fusion of five sacral vertebrae . Similarly , the coccyx , or tailbone , results from the fusion of four small coccygeal vertebrae . However , the sacral and coccygeal fusions do not start until age 20 and are not completed until middle age .", "hl_sentences": "The vertebral column is subdivided into five regions , with the vertebrae in each area named for that region and numbered in descending order . In the neck , there are seven cervical vertebrae , each designated with the letter “ C ” followed by its number .", "question": { "cloze_format": "The cervical region of the vertebral column consists of ________.", "normal_format": "What does the cervical region of the vertebral column consist of?", "question_choices": [ "seven vertebrae", "12 vertebrae", "five vertebrae", "a single bone derived from the fusion of five vertebrae" ], "question_id": "fs-id2051547", "question_text": "The cervical region of the vertebral column consists of ________." 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "are remnants of the original fetal curvature" }, "bloom": "1", "hl_context": "During fetal development , the body is flexed anteriorly into the fetal position , giving the entire vertebral column a single curvature that is concave anteriorly . In the adult , this fetal curvature is retained in two regions of the vertebral column as the thoracic curve , which involves the thoracic vertebrae , and the sacrococcygeal curve , formed by the sacrum and coccyx . <hl> Each of these is thus called a primary curve because they are retained from the original fetal curvature of the vertebral column . <hl>", "hl_sentences": "Each of these is thus called a primary curve because they are retained from the original fetal curvature of the vertebral column .", "question": { "cloze_format": "The primary curvatures of the vertebral column ________.", "normal_format": "Which of the following is correct about the primary curvatures of the vertebral column?", "question_choices": [ "include the lumbar curve", "are remnants of the original fetal curvature", "include the cervical curve", "develop after the time of birth" ], "question_id": "fs-id1282419", "question_text": "The primary curvatures of the vertebral column ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "lamina that spans between the transverse process and spinous process" }, "bloom": "1", "hl_context": "<hl> Seven processes arise from the vertebral arch . <hl> <hl> Each paired transverse process projects laterally and arises from the junction point between the pedicle and lamina . <hl> <hl> The single spinous process ( vertebral spine ) projects posteriorly at the midline of the back . <hl> The vertebral spines can easily be felt as a series of bumps just under the skin down the middle of the back . The transverse and spinous processes serve as important muscle attachment sites . A superior articular process extends or faces upward , and an inferior articular process faces or projects downward on each side of a vertebrae . The paired superior articular processes of one vertebra join with the corresponding paired inferior articular processes from the next higher vertebra . These junctions form slightly moveable joints between the adjacent vertebrae . The shape and orientation of the articular processes vary in different regions of the vertebral column and play a major role in determining the type and range of motion available in each region .", "hl_sentences": "Seven processes arise from the vertebral arch . Each paired transverse process projects laterally and arises from the junction point between the pedicle and lamina . The single spinous process ( vertebral spine ) projects posteriorly at the midline of the back .", "question": { "cloze_format": "A typical vertebra has ________.", "normal_format": "What does a typical vertebra have?", "question_choices": [ "a vertebral foramen that passes through the body", "a superior articular process that projects downward to articulate with the superior portion of the next lower vertebra", "lamina that spans between the transverse process and spinous process", "a pair of laterally projecting spinous processes" ], "question_id": "fs-id1752256", "question_text": "A typical vertebra has ________." 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "a short, rounded spinous process" }, "bloom": "1", "hl_context": "<hl> Lumbar vertebrae carry the greatest amount of body weight and are thus characterized by the large size and thickness of the vertebral body ( Figure 7.28 ) . <hl> <hl> They have short transverse processes and a short , blunt spinous process that projects posteriorly . <hl> The articular processes are large , with the superior process facing backward and the inferior facing forward .", "hl_sentences": "Lumbar vertebrae carry the greatest amount of body weight and are thus characterized by the large size and thickness of the vertebral body ( Figure 7.28 ) . They have short transverse processes and a short , blunt spinous process that projects posteriorly .", "question": { "cloze_format": "A typical lumbar vertebra has ________.", "normal_format": "What does a typical lumbar vertebra have?", "question_choices": [ "a short, rounded spinous process", "a bifid spinous process", "articulation sites for ribs", "a transverse foramen" ], "question_id": "fs-id2079599", "question_text": "A typical lumbar vertebra has ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "nuchal ligament" }, "bloom": "1", "hl_context": "The supraspinous ligament is located on the posterior side of the vertebral column , where it interconnects the spinous processes of the thoracic and lumbar vertebrae . This strong ligament supports the vertebral column during forward bending motions . <hl> In the posterior neck , where the cervical spinous processes are short , the supraspinous ligament expands to become the nuchal ligament ( nuchae = “ nape ” or “ back of the neck ” ) . <hl> <hl> The nuchal ligament is attached to the cervical spinous processes and extends upward and posteriorly to attach to the midline base of the skull , out to the external occipital protuberance . <hl> It supports the skull and prevents it from falling forward . <hl> This ligament is much larger and stronger in four-legged animals such as cows , where the large skull hangs off the front end of the vertebral column . <hl> You can easily feel this ligament by first extending your head backward and pressing down on the posterior midline of your neck . Then tilt your head forward and you will fill the nuchal ligament popping out as it tightens to limit anterior bending of the head and neck .", "hl_sentences": "In the posterior neck , where the cervical spinous processes are short , the supraspinous ligament expands to become the nuchal ligament ( nuchae = “ nape ” or “ back of the neck ” ) . The nuchal ligament is attached to the cervical spinous processes and extends upward and posteriorly to attach to the midline base of the skull , out to the external occipital protuberance . This ligament is much larger and stronger in four-legged animals such as cows , where the large skull hangs off the front end of the vertebral column .", "question": { "cloze_format": "The ___ is found only in the cervical region of the vertebral column.", "normal_format": "Which is found only in the cervical region of the vertebral column?", "question_choices": [ "nuchal ligament", "ligamentum flavum", "supraspinous ligament", "anterior longitudinal ligament" ], "question_id": "fs-id2080555", "question_text": "Which is found only in the cervical region of the vertebral column?" 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "has the sternal angle located between the manubrium and body" }, "bloom": "1", "hl_context": "The elongated , central portion of the sternum is the body . <hl> The manubrium and body join together at the sternal angle , so called because the junction between these two components is not flat , but forms a slight bend . <hl> <hl> The second rib attaches to the sternum at the sternal angle . <hl> Since the first rib is hidden behind the clavicle , the second rib is the highest rib that can be identified by palpation . Thus , the sternal angle and second rib are important landmarks for the identification and counting of the lower ribs . Ribs 3 – 7 attach to the sternal body .", "hl_sentences": "The manubrium and body join together at the sternal angle , so called because the junction between these two components is not flat , but forms a slight bend . The second rib attaches to the sternum at the sternal angle .", "question": { "cloze_format": "The sternum ________.", "normal_format": "Which of the folllowing is correct about the sternum?", "question_choices": [ "consists of only two parts, the manubrium and xiphoid process", "has the sternal angle located between the manubrium and body", "receives direct attachments from the costal cartilages of all 12 pairs of ribs", "articulates directly with the thoracic vertebrae" ], "question_id": "fs-id1855309", "question_text": "The sternum ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "junction between the manubrium and body" }, "bloom": "1", "hl_context": "The elongated , central portion of the sternum is the body . <hl> The manubrium and body join together at the sternal angle , so called because the junction between these two components is not flat , but forms a slight bend . <hl> The second rib attaches to the sternum at the sternal angle . Since the first rib is hidden behind the clavicle , the second rib is the highest rib that can be identified by palpation . Thus , the sternal angle and second rib are important landmarks for the identification and counting of the lower ribs . Ribs 3 – 7 attach to the sternal body .", "hl_sentences": "The manubrium and body join together at the sternal angle , so called because the junction between these two components is not flat , but forms a slight bend .", "question": { "cloze_format": "The sternal angle is the ________.", "normal_format": "What is the sternal angle?", "question_choices": [ "junction between the body and xiphoid process", "site for attachment of the clavicle", "site for attachment of the floating ribs", "junction between the manubrium and body" ], "question_id": "fs-id2573730", "question_text": "The sternal angle is the ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "is for articulation with the transverse process of a thoracic vertebra" }, "bloom": "1", "hl_context": "<hl> Thoracic vertebrae have several additional articulation sites , each of which is called a facet , where a rib is attached . <hl> Most thoracic vertebrae have two facets located on the lateral sides of the body , each of which is called a costal facet ( costal = “ rib ” ) . These are for articulation with the head ( end ) of a rib . <hl> An additional facet is located on the transverse process for articulation with the tubercle of a rib . 
<hl>", "hl_sentences": "Thoracic vertebrae have several additional articulation sites , each of which is called a facet , where a rib is attached . An additional facet is located on the transverse process for articulation with the tubercle of a rib .", "question": { "cloze_format": "The tubercle of a rib ________.", "normal_format": "Which of the following is correct about the tubercle of a rib? ", "question_choices": [ "is for articulation with the transverse process of a thoracic vertebra", "is for articulation with the body of a thoracic vertebra", "provides for passage of blood vessels and a nerve", "is the area of greatest rib curvature" ], "question_id": "fs-id2267142", "question_text": "The tubercle of a rib ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "attached via their costal cartilage directly to the sternum" }, "bloom": "1", "hl_context": "<hl> Ribs 1 – 7 are classified as true ribs ( vertebrosternal ribs ) . <hl> <hl> The costal cartilage from each of these ribs attaches directly to the sternum . <hl> Ribs 8 – 12 are called false ribs ( vertebrochondral ribs ) . The costal cartilages from these ribs do not attach directly to the sternum . For ribs 8 – 10 , the costal cartilages are attached to the cartilage of the next higher rib . Thus , the cartilage of rib 10 attaches to the cartilage of rib 9 , rib 9 then attaches to rib 8 , and rib 8 is attached to rib 7 . The last two false ribs ( 11 – 12 ) are also called floating ribs ( vertebral ribs ) . These are short ribs that do not attach to the sternum at all . Instead , their small costal cartilages terminate within the musculature of the lateral abdominal wall . 7.5 Embryonic Development of the Axial Skeleton Learning Objectives By the end of this section , you will be able to :", "hl_sentences": "Ribs 1 – 7 are classified as true ribs ( vertebrosternal ribs ) . The costal cartilage from each of these ribs attaches directly to the sternum .", "question": { "cloze_format": "True ribs are ________.", "normal_format": "What are True ribs?", "question_choices": [ "ribs 8–12", "attached via their costal cartilage to the next higher rib", "made entirely of bone, and thus do not have a costal cartilage", "attached via their costal cartilage directly to the sternum" ], "question_id": "fs-id2005953", "question_text": "True ribs are ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "endochondral ossification, which forms the ribs and sternum" }, "bloom": null, "hl_context": "<hl> The ribs and sternum also develop from mesenchyme . <hl> <hl> The ribs initially develop as part of the cartilage model for each vertebra , but in the thorax region , the rib portion separates from the vertebra by the eighth week . <hl> The cartilage model of the rib then ossifies , except for the anterior portion , which remains as the costal cartilage . <hl> The sternum initially forms as paired hyaline cartilage models on either side of the anterior midline , beginning during the fifth week of development . <hl> The cartilage models of the ribs become attached to the lateral sides of the developing sternum . Eventually , the two halves of the cartilaginous sternum fuse together along the midline and then ossify into bone . The manubrium and body of the sternum are converted into bone first , with the xiphoid process remaining as cartilage until late in life . 
<hl> Development of the vertebrae begins with the accumulation of mesenchyme cells from each sclerotome around the notochord . <hl> <hl> These cells differentiate into a hyaline cartilage model for each vertebra , which then grow and eventually ossify into bone through the process of endochondral ossification . <hl> As the developing vertebrae grow , the notochord largely disappears . However , small areas of notochord tissue persist between the adjacent vertebrae and this contributes to the formation of each intervertebral disc . <hl> The axial skeleton begins to form during early embryonic development . <hl> However , growth , remodeling , and ossification ( bone formation ) continue for several decades after birth before the adult skeleton is fully formed . Knowledge of the developmental processes that give rise to the skeleton is important for understanding the abnormalities that may arise in skeletal structures .", "hl_sentences": "The ribs and sternum also develop from mesenchyme . The ribs initially develop as part of the cartilage model for each vertebra , but in the thorax region , the rib portion separates from the vertebra by the eighth week . The sternum initially forms as paired hyaline cartilage models on either side of the anterior midline , beginning during the fifth week of development . Development of the vertebrae begins with the accumulation of mesenchyme cells from each sclerotome around the notochord . These cells differentiate into a hyaline cartilage model for each vertebra , which then grow and eventually ossify into bone through the process of endochondral ossification . The axial skeleton begins to form during early embryonic development .", "question": { "cloze_format": "Embryonic development of the axial skeleton involves ________.", "normal_format": "What does embryonic development of the axial skeleton involve?", "question_choices": [ "intramembranous ossification, which forms the facial bones.", "endochondral ossification, which forms the ribs and sternum", "the notochord, which produces the cartilage models for the vertebrae", "the formation of hyaline cartilage models, which give rise to the flat bones of the skull" ], "question_id": "fs-id1894914", "question_text": "Embryonic development of the axial skeleton involves ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "is the area of fibrous connective tissue found at birth between the brain case bones" }, "bloom": "1", "hl_context": "The bones of the skull arise from mesenchyme during embryonic development in two different ways . The first mechanism produces the bones that form the top and sides of the brain case . This involves the local accumulation of mesenchymal cells at the site of the future bone . These cells then differentiate directly into bone producing cells , which form the skull bones through the process of intramembranous ossification . <hl> As the brain case bones grow in the fetal skull , they remain separated from each other by large areas of dense connective tissue , each of which is called a fontanelle ( Figure 7.33 ) . <hl> <hl> The fontanelles are the soft spots on an infant ’ s head . <hl> They are important during birth because these areas allow the skull to change shape as it squeezes through the birth canal . After birth , the fontanelles allow for continued growth and expansion of the skull as the brain enlarges . The largest fontanelle is located on the anterior head , at the junction of the frontal and parietal bones . 
The fontanelles decrease in size and disappear by age 2 . However , the skull bones remained separated from each other at the sutures , which contain dense fibrous connective tissue that unites the adjacent bones . The connective tissue of the sutures allows for continued growth of the skull bones as the brain enlarges during childhood growth .", "hl_sentences": "As the brain case bones grow in the fetal skull , they remain separated from each other by large areas of dense connective tissue , each of which is called a fontanelle ( Figure 7.33 ) . The fontanelles are the soft spots on an infant ’ s head .", "question": { "cloze_format": "A fontanelle ________.", "normal_format": "Which of the following is correct about a fontanelle?", "question_choices": [ "is the cartilage model for a vertebra that later is converted into bone", "gives rise to the facial bones and vertebrae", "is the rod-like structure that runs the length of the early embryo", "is the area of fibrous connective tissue found at birth between the brain case bones" ], "question_id": "fs-id1888187", "question_text": "A fontanelle ________." }, "references_are_paraphrase": null } ]
7.1 Divisions of the Skeletal System

Learning Objectives
By the end of this section, you will be able to:
Discuss the functions of the skeletal system
Distinguish between the axial skeleton and appendicular skeleton
Define the axial skeleton and its components
Define the appendicular skeleton and its components

The skeletal system includes all of the bones, cartilages, and ligaments of the body that support and give shape to the body and body structures. The skeleton consists of the bones of the body. For adults, there are 206 bones in the skeleton. Younger individuals have higher numbers of bones because some bones fuse together during childhood and adolescence to form an adult bone. The primary functions of the skeleton are to provide a rigid, internal structure that can support the weight of the body against the force of gravity, and to provide a structure upon which muscles can act to produce movements of the body. The lower portion of the skeleton is specialized for stability during walking or running. In contrast, the upper skeleton has greater mobility and ranges of motion, features that allow you to lift and carry objects or turn your head and trunk. In addition to providing for support and movements of the body, the skeleton has protective and storage functions. It protects the internal organs, including the brain, spinal cord, heart, lungs, and pelvic organs. The bones of the skeleton serve as the primary storage site for important minerals such as calcium and phosphate. The bone marrow found within bones stores fat and houses the blood-cell producing tissue of the body. The skeleton is subdivided into two major divisions—the axial and appendicular.

The Axial Skeleton

The axial skeleton forms the vertical, central axis of the body and includes all bones of the head, neck, chest, and back (Figure 7.2). It serves to protect the brain, spinal cord, heart, and lungs. It also serves as the attachment site for muscles that move the head, neck, and back, and for muscles that act across the shoulder and hip joints to move their corresponding limbs. The axial skeleton of the adult consists of 80 bones, including the skull, the vertebral column, and the thoracic cage. The skull is formed by 22 bones. Also associated with the head are an additional seven bones, including the hyoid bone and the ear ossicles (three small bones found in each middle ear). The vertebral column consists of 24 bones, each called a vertebra, plus the sacrum and coccyx. The thoracic cage includes the 12 pairs of ribs, and the sternum, the flattened bone of the anterior chest.

The Appendicular Skeleton

The appendicular skeleton includes all bones of the upper and lower limbs, plus the bones that attach each limb to the axial skeleton. There are 126 bones in the appendicular skeleton of an adult. The bones of the appendicular skeleton are covered in a separate chapter.
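The bone counts in this section are simple arithmetic worth verifying: the stated axial total of 80 should equal the sum of its listed parts, and together with the 126 appendicular bones it should give the adult total of 206. The Python snippet below checks both sums; the variable names are our own, while the figures come directly from the text.

```python
# Verify the adult bone counts stated in this section.

skull            = 22            # bones of the skull
head_associated  = 7             # hyoid bone + 6 ear ossicles (3 per ear)
vertebral_column = 24 + 1 + 1    # 24 vertebrae + sacrum + coccyx
thoracic_cage    = 12 * 2 + 1    # 12 pairs of ribs + sternum

axial = skull + head_associated + vertebral_column + thoracic_cage
assert axial == 80               # matches the stated axial total

appendicular = 126
assert axial + appendicular == 206   # matches the adult skeleton total
print(f"axial={axial}, appendicular={appendicular}, total={axial + appendicular}")
```

Note that the seven "associated" head bones are counted with the axial skeleton even though the chapter introduces them separately from the 22 skull bones; without them the parts would not sum to 80.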
7.2 The Skull

Learning Objectives
By the end of this section, you will be able to:
List and identify the bones of the brain case and face
Locate the major suture lines of the skull and name the bones associated with each
Locate and define the boundaries of the anterior, middle, and posterior cranial fossae, the temporal fossa, and infratemporal fossa
Define the paranasal sinuses and identify the location of each
Name the bones that make up the walls of the orbit and identify the openings associated with the orbit
Identify the bones and structures that form the nasal septum and nasal conchae, and locate the hyoid bone
Identify the bony openings of the skull

The cranium (skull) is the skeletal structure of the head that supports the face and protects the brain. It is subdivided into the facial bones and the brain case, or cranial vault (Figure 7.3). The facial bones underlie the facial structures, form the nasal cavity, enclose the eyeballs, and support the teeth of the upper and lower jaws. The rounded brain case surrounds and protects the brain and houses the middle and inner ear structures. In the adult, the skull consists of 22 individual bones, 21 of which are immobile and united into a single unit. The 22nd bone is the mandible (lower jaw), which is the only moveable bone of the skull.

Interactive Link: Watch this video to view a rotating and exploded skull, with color-coded bones. Which bone (yellow) is centrally located and joins with most of the other bones of the skull?

Anterior View of Skull

The anterior skull consists of the facial bones and provides the bony support for the eyes and structures of the face. This view of the skull is dominated by the openings of the orbits and the nasal cavity. Also seen are the upper and lower jaws, with their respective teeth (Figure 7.4). The orbit is the bony socket that houses the eyeball and muscles that move the eyeball or open the upper eyelid. The upper margin of the anterior orbit is the supraorbital margin. Located near the midpoint of the supraorbital margin is a small opening called the supraorbital foramen. This provides for passage of a sensory nerve to the skin of the forehead. Below the orbit is the infraorbital foramen, which is the point of emergence for a sensory nerve that supplies the anterior face below the orbit.

Inside the nasal area of the skull, the nasal cavity is divided into halves by the nasal septum. The upper portion of the nasal septum is formed by the perpendicular plate of the ethmoid bone and the lower portion is the vomer bone. Each side of the nasal cavity is triangular in shape, with a broad inferior space that narrows superiorly. When looking into the nasal cavity from the front of the skull, two bony plates are seen projecting from each lateral wall. The larger of these is the inferior nasal concha, an independent bone of the skull. Located just above the inferior concha is the middle nasal concha, which is part of the ethmoid bone. A third bony plate, also part of the ethmoid bone, is the superior nasal concha. It is much smaller and out of sight, above the middle concha. The superior nasal concha is located just lateral to the perpendicular plate, in the upper nasal cavity.

Lateral View of Skull

A view of the lateral skull is dominated by the large, rounded brain case above and the upper and lower jaws with their teeth below (Figure 7.5). Separating these areas is the bridge of bone called the zygomatic arch.
The zygomatic arch is the bony arch on the side of the skull that spans from the area of the cheek to just above the ear canal. It is formed by the junction of two bony processes: a short anterior component, the temporal process of the zygomatic bone (the cheekbone), and a longer posterior portion, the zygomatic process of the temporal bone, extending forward from the temporal bone. Thus the temporal process (anteriorly) and the zygomatic process (posteriorly) join together, like the two ends of a drawbridge, to form the zygomatic arch. One of the major muscles that pulls the mandible upward during biting and chewing arises from the zygomatic arch.

On the lateral side of the brain case, above the level of the zygomatic arch, is a shallow space called the temporal fossa. Below the level of the zygomatic arch and deep to the vertical portion of the mandible is another space called the infratemporal fossa. Both the temporal fossa and infratemporal fossa contain muscles that act on the mandible during chewing.

Bones of the Brain Case

The brain case contains and protects the brain. The interior space that is almost completely occupied by the brain is called the cranial cavity. This cavity is bounded superiorly by the rounded top of the skull, which is called the calvaria (skullcap), and the lateral and posterior sides of the skull. The bones that form the top and sides of the brain case are usually referred to as the "flat" bones of the skull. The floor of the brain case is referred to as the base of the skull. This is a complex area that varies in depth and has numerous openings for the passage of cranial nerves, blood vessels, and the spinal cord. Inside the skull, the base is subdivided into three large spaces, called the anterior cranial fossa, middle cranial fossa, and posterior cranial fossa (fossa = "trench or ditch") (Figure 7.6). From anterior to posterior, the fossae increase in depth. The shape and depth of each fossa corresponds to the shape and size of the brain region that each houses. The boundaries and openings of the cranial fossae (singular = fossa) will be described in a later section. The brain case consists of eight bones. These include the paired parietal and temporal bones, plus the unpaired frontal, occipital, sphenoid, and ethmoid bones.

Parietal Bone

The parietal bone forms most of the upper lateral side of the skull (see Figure 7.5). These are paired bones, with the right and left parietal bones joining together at the top of the skull. Each parietal bone is also bounded anteriorly by the frontal bone, inferiorly by the temporal bone, and posteriorly by the occipital bone.

Temporal Bone

The temporal bone forms the lower lateral side of the skull (see Figure 7.5). Common wisdom has it that the temporal bone (temporal = "time") is so named because this area of the head (the temple) is where hair typically first turns gray, indicating the passage of time. The temporal bone is subdivided into several regions (Figure 7.7). The flattened, upper portion is the squamous portion of the temporal bone. Below this area and projecting anteriorly is the zygomatic process of the temporal bone, which forms the posterior portion of the zygomatic arch. Posteriorly is the mastoid portion of the temporal bone. Projecting inferiorly from this region is a large prominence, the mastoid process, which serves as a muscle attachment site. The mastoid process can easily be felt on the side of the head just behind your earlobe.
On the interior of the skull, the petrous portion of each temporal bone forms the prominent, diagonally oriented petrous ridge in the floor of the cranial cavity. Located inside each petrous ridge are small cavities that house the structures of the middle and inner ears. Important landmarks of the temporal bone, as shown in Figure 7.8, include the following:

External acoustic meatus (ear canal)—This is the large opening on the lateral side of the skull that is associated with the ear.
Internal acoustic meatus—This opening is located inside the cranial cavity, on the medial side of the petrous ridge. It connects to the middle and inner ear cavities of the temporal bone.
Mandibular fossa—This is the deep, oval-shaped depression located on the external base of the skull, just in front of the external acoustic meatus. The mandible (lower jaw) joins with the skull at this site as part of the temporomandibular joint, which allows for movements of the mandible during opening and closing of the mouth.
Articular tubercle—The smooth ridge located immediately anterior to the mandibular fossa. Both the articular tubercle and mandibular fossa contribute to the temporomandibular joint, the joint that provides for movements between the temporal bone of the skull and the mandible.
Styloid process—Posterior to the mandibular fossa on the external base of the skull is an elongated, downward bony projection called the styloid process, so named because of its resemblance to a stylus (a pen or writing tool). This structure serves as an attachment site for several small muscles and for a ligament that supports the hyoid bone of the neck. (See also Figure 7.7.)
Stylomastoid foramen—This small opening is located between the styloid process and mastoid process. This is the point of exit for the cranial nerve that supplies the facial muscles.
Carotid canal—The carotid canal is a zig-zag shaped tunnel that provides passage through the base of the skull for one of the major arteries that supplies the brain. Its entrance is located on the outside base of the skull, anteromedial to the styloid process. The canal runs anteromedially within the bony base of the skull before turning upward to its exit in the floor of the middle cranial cavity, above the foramen lacerum.

Frontal Bone

The frontal bone is the single bone that forms the forehead. At its anterior midline, between the eyebrows, there is a slight depression called the glabella (see Figure 7.5). The frontal bone also forms the supraorbital margin of the orbit. Near the middle of this margin is the supraorbital foramen, the opening that provides passage for a sensory nerve to the forehead. The frontal bone is thickened just above each supraorbital margin, forming rounded brow ridges. These are located just behind your eyebrows and vary in size among individuals, although they are generally larger in males. Inside the cranial cavity, the frontal bone extends posteriorly. This flattened region forms both the roof of the orbit below and the floor of the anterior cranial cavity above (see Figure 7.8b).

Occipital Bone

The occipital bone is the single bone that forms the posterior skull and posterior base of the cranial cavity (Figure 7.9; see also Figure 7.8). On its outside surface, at the posterior midline, is a small protrusion called the external occipital protuberance, which serves as an attachment site for a ligament of the posterior neck. Lateral to either side of this bump is a superior nuchal line (nuchal = “nape” or “posterior neck”).
The nuchal lines represent the most superior point at which muscles of the neck attach to the skull, with only the scalp covering the skull above these lines. On the base of the skull, the occipital bone contains the large opening of the foramen magnum, which allows for passage of the spinal cord as it exits the skull. On either side of the foramen magnum is an oval-shaped occipital condyle. These condyles form joints with the first cervical vertebra and thus support the skull on top of the vertebral column.

Sphenoid Bone

The sphenoid bone is a single, complex bone of the central skull (Figure 7.10). It serves as a “keystone” bone, because it joins with almost every other bone of the skull. The sphenoid forms much of the base of the central skull (see Figure 7.8) and also extends laterally to contribute to the sides of the skull (see Figure 7.5). Inside the cranial cavity, the right and left lesser wings of the sphenoid bone, which resemble the wings of a flying bird, form the lip of a prominent ridge that marks the boundary between the anterior and middle cranial fossae. The sella turcica (“Turkish saddle”) is located at the midline of the middle cranial fossa. This bony region of the sphenoid bone is named for its resemblance to the horse saddles used by the Ottoman Turks, with a high back and a tall front. The rounded depression in the floor of the sella turcica is the hypophyseal (pituitary) fossa, which houses the pea-sized pituitary (hypophyseal) gland. The greater wings of the sphenoid bone extend laterally to either side away from the sella turcica, where they form the anterior floor of the middle cranial fossa. The greater wing is best seen on the outside of the lateral skull, where it forms a rectangular area immediately anterior to the squamous portion of the temporal bone. On the inferior aspect of the skull, each half of the sphenoid bone forms two thin, vertically oriented bony plates. These are the medial pterygoid plate and lateral pterygoid plate (pterygoid = “wing-shaped”). The right and left medial pterygoid plates form the posterior, lateral walls of the nasal cavity. The somewhat larger lateral pterygoid plates serve as attachment sites for chewing muscles that fill the infratemporal space and act on the mandible.

Ethmoid Bone

The ethmoid bone is a single, midline bone that forms the roof and lateral walls of the upper nasal cavity, forms the upper portion of the nasal septum, and contributes to the medial wall of the orbit (Figure 7.11 and Figure 7.12). On the interior of the skull, the ethmoid also forms a portion of the floor of the anterior cranial cavity (see Figure 7.8b). Within the nasal cavity, the perpendicular plate of the ethmoid bone forms the upper portion of the nasal septum. The ethmoid bone also forms the lateral walls of the upper nasal cavity. Extending from each lateral wall are the superior nasal concha and middle nasal concha, which are thin, curved projections that extend into the nasal cavity (Figure 7.13). In the cranial cavity, the ethmoid bone forms a small area at the midline in the floor of the anterior cranial fossa. This region also forms the narrow roof of the underlying nasal cavity. This portion of the ethmoid bone consists of two parts, the crista galli and cribriform plates. The crista galli (“rooster’s comb or crest”) is a small upward bony projection located at the midline. It functions as an anterior attachment point for one of the covering layers of the brain.
To either side of the crista galli is the cribriform plate (cribrum = “sieve”), a small, flattened area with numerous small openings termed olfactory foramina. Small nerve branches from the olfactory areas of the nasal cavity pass through these openings to enter the brain. The lateral portions of the ethmoid bone are located between the orbit and upper nasal cavity, and thus form the lateral nasal cavity wall and a portion of the medial orbit wall. Located inside this portion of the ethmoid bone are several small, air-filled spaces that are part of the paranasal sinus system of the skull.

Sutures of the Skull

A suture is an immobile joint between adjacent bones of the skull. The narrow gap between the bones is filled with dense, fibrous connective tissue that unites the bones. The long sutures located between the bones of the brain case are not straight, but instead follow irregular, tightly twisting paths. These twisting lines serve to tightly interlock the adjacent bones, thus adding strength to the skull for brain protection. The two suture lines seen on the top of the skull are the coronal and sagittal sutures. The coronal suture runs from side to side across the skull, within the coronal plane of section (see Figure 7.5). It joins the frontal bone to the right and left parietal bones. The sagittal suture extends posteriorly from the coronal suture, running along the midline at the top of the skull in the sagittal plane of section (see Figure 7.9). It unites the right and left parietal bones. On the posterior skull, the sagittal suture terminates by joining the lambdoid suture. The lambdoid suture extends downward and laterally to either side away from its junction with the sagittal suture. The lambdoid suture joins the occipital bone to the right and left parietal and temporal bones. This suture is named for its upside-down “V” shape, which resembles the capital letter version of the Greek letter lambda (Λ). The squamous suture is located on the lateral skull. It unites the squamous portion of the temporal bone with the parietal bone (see Figure 7.5). At the intersection of four bones is the pterion, a small, capital-H-shaped suture line region that unites the frontal bone, parietal bone, squamous portion of the temporal bone, and greater wing of the sphenoid bone. It is the weakest part of the skull. The pterion is located approximately two finger widths above the zygomatic arch and a thumb’s width posterior to the upward portion of the zygomatic bone.

Disorders of the... Skeletal System

Head and traumatic brain injuries are major causes of immediate death and disability, with bleeding and infections as possible additional complications. According to the Centers for Disease Control and Prevention (2010), approximately 30 percent of all injury-related deaths in the United States are caused by head injuries. The majority of head injuries involve falls. They are most common among young children (ages 0–4 years), adolescents (15–19 years), and the elderly (over 65 years). Additional causes vary, but prominent among these are automobile and motorcycle accidents. Strong blows to the brain-case portion of the skull can produce fractures. These may result in bleeding inside the skull with subsequent injury to the brain. The most common is a linear skull fracture, in which fracture lines radiate from the point of impact.
Other fracture types include a comminuted fracture, in which the bone is broken into several pieces at the point of impact, or a depressed fracture, in which the fractured bone is pushed inward. In a contrecoup (counterblow) fracture, the bone at the point of impact is not broken, but instead a fracture occurs on the opposite side of the skull. Fractures of the occipital bone at the base of the skull can occur in this manner, producing a basilar fracture that can damage the artery that passes through the carotid canal. A blow to the lateral side of the head may fracture the bones of the pterion. The pterion is an important clinical landmark because located immediately deep to it on the inside of the skull is a major branch of an artery that supplies the skull and covering layers of the brain. A strong blow to this region can fracture the bones around the pterion. If the underlying artery is damaged, bleeding can cause the formation of a hematoma (collection of blood) between the brain and interior of the skull. As blood accumulates, it will put pressure on the brain. Symptoms associated with a hematoma may not be apparent immediately following the injury, but if untreated, blood accumulation will exert increasing pressure on the brain and can result in death within a few hours.

Interactive Link

View this animation to see how a blow to the head may produce a contrecoup (counterblow) fracture of the basilar portion of the occipital bone on the base of the skull. Why may a basilar fracture be life threatening?

Facial Bones of the Skull

The facial bones of the skull form the upper and lower jaws, the nose, nasal cavity and nasal septum, and the orbit. The facial bones include 14 bones, with six paired bones and two unpaired bones. The paired bones are the maxilla, palatine, zygomatic, nasal, lacrimal, and inferior nasal conchae bones. The unpaired bones are the vomer and mandible bones. Although classified with the brain-case bones, the ethmoid bone also contributes to the nasal septum and the walls of the nasal cavity and orbit.

Maxillary Bone

The maxillary bone, often referred to simply as the maxilla (plural = maxillae), is one of a pair that together form the upper jaw, much of the hard palate, the medial floor of the orbit, and the lateral base of the nose (see Figure 7.4). The curved, inferior margin of the maxillary bone that forms the upper jaw and contains the upper teeth is the alveolar process of the maxilla (Figure 7.14). Each tooth is anchored into a deep socket called an alveolus. On the anterior maxilla, just below the orbit, is the infraorbital foramen. This is the point of exit for a sensory nerve that supplies the nose, upper lip, and anterior cheek. On the inferior skull, the palatine process from each maxillary bone can be seen joining together at the midline to form the anterior three-quarters of the hard palate (see Figure 7.8a). The hard palate is the bony plate that forms the roof of the mouth and floor of the nasal cavity, separating the oral and nasal cavities.

Palatine Bone

The palatine bone is one of a pair of irregularly shaped bones that contribute small areas to the lateral walls of the nasal cavity and the medial wall of each orbit. The largest region of each palatine bone is the horizontal plate. The plates from the right and left palatine bones join together at the midline to form the posterior quarter of the hard palate (see Figure 7.8a). Thus, the palatine bones are best seen in an inferior view of the skull and hard palate.
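The bone counts given above can be tallied as a quick arithmetic check (an added note, using only the numbers already stated in this section):

\[
\underbrace{6 \times 2}_{\text{paired facial bones}} + \underbrace{2}_{\text{unpaired}} = 14 \ \text{facial bones}, \qquad 8 \ \text{(brain case)} + 14 \ \text{(facial)} = 22 \ \text{skull bones}
\]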
Homeostatic Imbalances

Cleft Lip and Cleft Palate

During embryonic development, the right and left maxilla bones come together at the midline to form the upper jaw. At the same time, the muscle and skin overlying these bones join together to form the upper lip. Inside the mouth, the palatine processes of the maxilla bones, along with the horizontal plates of the right and left palatine bones, join together to form the hard palate. If an error occurs in these developmental processes, a birth defect of cleft lip or cleft palate may result. Cleft lip is a common developmental defect that affects approximately 1:1000 births, most of those affected being male. This defect involves a partial or complete failure of the right and left portions of the upper lip to fuse together, leaving a cleft (gap). A more severe developmental defect is cleft palate, which affects the hard palate. The hard palate is the bony structure that separates the nasal cavity from the oral cavity. It is formed during embryonic development by the midline fusion of the horizontal plates from the right and left palatine bones and the palatine processes of the maxilla bones. Cleft palate affects approximately 1:2500 births and is more common in females. It results from a failure of the two halves of the hard palate to completely come together and fuse at the midline, thus leaving a gap between them. This gap allows for communication between the nasal and oral cavities. In severe cases, the bony gap continues into the anterior upper jaw, where the alveolar processes of the maxilla bones also do not properly join together above the front teeth. If this occurs, a cleft lip will also be seen. Because of the communication between the oral and nasal cavities, a cleft palate makes it very difficult for an infant to generate the suckling needed for nursing, thus leaving the infant at risk for malnutrition. Surgical repair is required to correct cleft palate defects.

Zygomatic Bone

The zygomatic bone is also known as the cheekbone. Each of the paired zygomatic bones forms much of the lateral wall of the orbit and the lateral-inferior margins of the anterior orbital opening (see Figure 7.4). The short temporal process of the zygomatic bone projects posteriorly, where it forms the anterior portion of the zygomatic arch (see Figure 7.5).

Nasal Bone

The nasal bone is one of two small bones that articulate (join) with each other to form the bony base (bridge) of the nose. They also support the cartilages that form the lateral walls of the nose (see Figure 7.11). These are the bones that are damaged when the nose is broken.

Lacrimal Bone

Each lacrimal bone is a small, rectangular bone that forms the anterior, medial wall of the orbit (see Figure 7.4 and Figure 7.5). The anterior portion of the lacrimal bone forms a shallow depression called the lacrimal fossa, and extending inferiorly from this is the nasolacrimal canal. The lacrimal fluid (tears of the eye), which serves to maintain the moist surface of the eye, drains at the medial corner of the eye into the nasolacrimal canal. This duct then extends downward to open into the nasal cavity, behind the inferior nasal concha. In the nasal cavity, the lacrimal fluid normally drains posteriorly, but with an increased flow of tears due to crying or eye irritation, some fluid will also drain anteriorly, thus causing a runny nose.
Inferior Nasal Conchae

The right and left inferior nasal conchae each form a curved bony plate that projects into the nasal cavity space from the lower lateral wall (see Figure 7.13). The inferior concha is the largest of the nasal conchae and can easily be seen when looking into the anterior opening of the nasal cavity.

Vomer Bone

The unpaired vomer bone, often referred to simply as the vomer, is triangular-shaped and forms the posterior-inferior part of the nasal septum (see Figure 7.11). The vomer is best seen when looking from behind into the posterior openings of the nasal cavity (see Figure 7.8a). In this view, the vomer is seen to form the entire height of the nasal septum. A much smaller portion of the vomer can also be seen when looking into the anterior opening of the nasal cavity.

Mandible

The mandible forms the lower jaw and is the only moveable bone of the skull. At the time of birth, the mandible consists of paired right and left bones, but these fuse together during the first year to form the single U-shaped mandible of the adult skull. Each side of the mandible consists of a horizontal body and, posteriorly, a vertically oriented ramus of the mandible (ramus = “branch”). The outside margin of the mandible, where the body and ramus come together, is called the angle of the mandible (Figure 7.15). The ramus on each side of the mandible has two upward-going bony projections. The more anterior projection is the flattened coronoid process of the mandible, which provides attachment for one of the biting muscles. The posterior projection is the condylar process of the mandible, which is topped by the oval-shaped condyle. The condyle of the mandible articulates (joins) with the mandibular fossa and articular tubercle of the temporal bone. Together these articulations form the temporomandibular joint, which allows for opening and closing of the mouth (see Figure 7.5). The broad U-shaped curve located between the coronoid and condylar processes is the mandibular notch. Important landmarks for the mandible include the following:

Alveolar process of the mandible—This is the upper border of the mandibular body and serves to anchor the lower teeth.
Mental protuberance—The forward projection from the inferior margin of the anterior mandible that forms the chin (mental = “chin”).
Mental foramen—The opening located on each side of the anterior-lateral mandible, which is the exit site for a sensory nerve that supplies the chin.
Mylohyoid line—This bony ridge extends along the inner aspect of the mandibular body (see Figure 7.11). The muscle that forms the floor of the oral cavity attaches to the mylohyoid lines on both sides of the mandible.
Mandibular foramen—This opening is located on the medial side of the ramus of the mandible. The opening leads into a tunnel that runs down the length of the mandibular body. The sensory nerve and blood vessels that supply the lower teeth enter the mandibular foramen and then follow this tunnel. Thus, to numb the lower teeth prior to dental work, the dentist must inject anesthesia into the lateral wall of the oral cavity at a point prior to where this sensory nerve enters the mandibular foramen.
Lingula—This small flap of bone is named for its shape (lingula = “little tongue”). It is located immediately next to the mandibular foramen, on the medial side of the ramus. A ligament that anchors the mandible during opening and closing of the mouth extends down from the base of the skull and attaches to the lingula.
The Orbit

The orbit is the bony socket that houses the eyeball and contains the muscles that move the eyeball or open the upper eyelid. Each orbit is cone-shaped, with a narrow posterior region that widens toward the large anterior opening. To help protect the eye, the bony margins of the anterior opening are thickened and somewhat constricted. The medial walls of the two orbits are parallel to each other, but each lateral wall diverges away from the midline at a 45° angle. This divergence provides greater lateral peripheral vision. The walls of each orbit include contributions from seven skull bones (Figure 7.16). The frontal bone forms the roof and the zygomatic bone forms the lateral wall and lateral floor. The medial floor is primarily formed by the maxilla, with a small contribution from the palatine bone. The ethmoid bone and lacrimal bone make up much of the medial wall and the sphenoid bone forms the posterior orbit. At the posterior apex of the orbit is the opening of the optic canal, which allows for passage of the optic nerve from the retina to the brain. Lateral to this is the elongated and irregularly shaped superior orbital fissure, which provides passage for the artery that supplies the eyeball, sensory nerves, and the nerves that supply the muscles involved in eye movements.

The Nasal Septum and Nasal Conchae

The nasal septum consists of both bone and cartilage components (Figure 7.17; see also Figure 7.11). The upper portion of the septum is formed by the perpendicular plate of the ethmoid bone. The lower and posterior parts of the septum are formed by the triangular-shaped vomer bone. In an anterior view of the skull, the perpendicular plate of the ethmoid bone is easily seen inside the nasal opening as the upper nasal septum, but only a small portion of the vomer is seen as the inferior septum. A better view of the vomer bone is seen when looking into the posterior nasal cavity with an inferior view of the skull, where the vomer forms the full height of the nasal septum. The anterior nasal septum is formed by the septal cartilage, a flexible plate that fills in the gap between the perpendicular plate of the ethmoid and vomer bones. This cartilage also extends outward into the nose, where it separates the right and left nostrils. The septal cartilage is not found in the dry skull. Attached to the lateral wall on each side of the nasal cavity are the superior, middle, and inferior nasal conchae (singular = concha), which are named for their positions (see Figure 7.13). These are bony plates that curve downward as they project into the space of the nasal cavity. They serve to swirl the incoming air, which helps to warm and moisturize it before the air moves into the delicate air sacs of the lungs. This also allows mucus, secreted by the tissue lining the nasal cavity, to trap incoming dust, pollen, bacteria, and viruses. The largest of the conchae is the inferior nasal concha, which is an independent bone of the skull. The middle concha and the superior concha, which is the smallest, are both formed by the ethmoid bone. When looking into the anterior nasal opening of the skull, only the inferior and middle conchae can be seen. The small superior nasal concha is well hidden above and behind the middle concha.

Cranial Fossae

Inside the skull, the floor of the cranial cavity is subdivided into three cranial fossae (spaces), which increase in depth from anterior to posterior (see Figure 7.6, Figure 7.8b, and Figure 7.11).
Since the brain occupies these areas, the shape of each conforms to the shape of the brain regions that it contains. Each cranial fossa has anterior and posterior boundaries and is divided at the midline into right and left areas by a significant bony structure or opening.

Anterior Cranial Fossa

The anterior cranial fossa is the most anterior and the shallowest of the three cranial fossae. It overlies the orbits and contains the frontal lobes of the brain. Anteriorly, the anterior fossa is bounded by the frontal bone, which also forms the majority of the floor for this space. The lesser wings of the sphenoid bone form the prominent ledge that marks the boundary between the anterior and middle cranial fossae. Located in the floor of the anterior cranial fossa at the midline is a portion of the ethmoid bone, consisting of the upward-projecting crista galli and, to either side of this, the cribriform plates.

Middle Cranial Fossa

The middle cranial fossa is deeper and situated posterior to the anterior fossa. It extends from the lesser wings of the sphenoid bone anteriorly to the petrous ridges (petrous portion of the temporal bones) posteriorly. The large, diagonally positioned petrous ridges give the middle cranial fossa a butterfly shape, making it narrow at the midline and broad laterally. The temporal lobes of the brain occupy this fossa. The middle cranial fossa is divided at the midline by the upward bony prominence of the sella turcica, a part of the sphenoid bone. The middle cranial fossa has several openings for the passage of blood vessels and cranial nerves (see Figure 7.8). Openings in the middle cranial fossa are as follows:

Optic canal—This opening is located at the anterior lateral corner of the sella turcica. It provides for passage of the optic nerve into the orbit.
Superior orbital fissure—This large, irregular opening into the posterior orbit is located on the anterior wall of the middle cranial fossa, lateral to the optic canal and under the projecting margin of the lesser wing of the sphenoid bone. Nerves to the eyeball and associated muscles, and sensory nerves to the forehead, pass through this opening.
Foramen rotundum—This rounded opening (rotundum = “round”) is located in the floor of the middle cranial fossa, just inferior to the superior orbital fissure. It is the exit point for a major sensory nerve that supplies the cheek, nose, and upper teeth.
Foramen ovale of the middle cranial fossa—This large, oval-shaped opening in the floor of the middle cranial fossa provides passage for a major sensory nerve to the lateral head, cheek, chin, and lower teeth.
Foramen spinosum—This small opening, located posterior-lateral to the foramen ovale, is the entry point for an important artery that supplies the covering layers surrounding the brain. The branching pattern of this artery forms readily visible grooves on the internal surface of the skull, and these grooves can be traced back to their origin at the foramen spinosum.
Carotid canal—This is the zig-zag passageway through which a major artery to the brain enters the skull. The entrance to the carotid canal is located on the inferior aspect of the skull, anteromedial to the styloid process (see Figure 7.8a). From here, the canal runs anteromedially within the bony base of the skull. Just above the foramen lacerum, the carotid canal opens into the middle cranial cavity, near the posterior-lateral base of the sella turcica.
Foramen lacerum—This irregular opening is located in the base of the skull, immediately inferior to the exit of the carotid canal. This opening is an artifact of the dry skull, because in life it is completely filled with cartilage. All the openings of the skull that provide for passage of nerves or blood vessels have smooth margins; the word lacerum (“ragged” or “torn”) tells us that this opening has ragged edges and thus nothing passes through it.

Posterior Cranial Fossa

The posterior cranial fossa is the most posterior and deepest portion of the cranial cavity. It contains the cerebellum of the brain. The posterior fossa is bounded anteriorly by the petrous ridges, while the occipital bone forms the floor and posterior wall. It is divided at the midline by the large foramen magnum (“great aperture”), the opening that provides for passage of the spinal cord. Located on the medial wall of the petrous ridge in the posterior cranial fossa is the internal acoustic meatus (see Figure 7.11). This opening provides for passage of the nerve from the hearing and equilibrium organs of the inner ear, and the nerve that supplies the muscles of the face. Located at the anterior-lateral margin of the foramen magnum, on each side, is a hypoglossal canal. These canals emerge on the inferior aspect of the skull at the base of the occipital condyle and provide passage for an important nerve to the tongue. Immediately inferior to the internal acoustic meatus is the large, irregularly shaped jugular foramen (see Figure 7.8a). Several cranial nerves from the brain exit the skull via this opening. It is also the exit point through the base of the skull for all the venous return blood leaving the brain. The venous structures that carry blood inside the skull form large, curved grooves on the inner walls of the posterior cranial fossa, which terminate at each jugular foramen.

Paranasal Sinuses

The paranasal sinuses are hollow, air-filled spaces located within certain bones of the skull (Figure 7.18). All of the sinuses communicate with the nasal cavity (paranasal = “next to nasal cavity”) and are lined with nasal mucosa. They serve to reduce bone mass and thus lighten the skull, and they also add resonance to the voice. This second feature is most obvious when you have a cold or sinus congestion. These conditions produce swelling of the mucosa and excess mucus production, which can obstruct the narrow passageways between the sinuses and the nasal cavity, causing your voice to sound different to yourself and others. This blockage can also allow the sinuses to fill with fluid, with the resulting pressure producing pain and discomfort. The paranasal sinuses are named for the skull bone that each occupies. The frontal sinus is located just above the eyebrows, within the frontal bone (see Figure 7.17). This irregular space may be divided at the midline into bilateral spaces, or these may be fused into a single sinus space. The frontal sinus is the most anterior of the paranasal sinuses. The largest sinus is the maxillary sinus. These are paired and located within the right and left maxillary bones, where they occupy the area just below the orbits. The maxillary sinuses are most commonly involved during sinus infections. Because their connection to the nasal cavity is located high on their medial wall, they are difficult to drain. The sphenoid sinus is a single, midline sinus. It is located within the body of the sphenoid bone, just anterior and inferior to the sella turcica, thus making it the most posterior of the paranasal sinuses.
The lateral aspects of the ethmoid bone contain multiple small spaces separated by very thin bony walls. Each of these spaces is called an ethmoid air cell. These are located on both sides of the ethmoid bone, between the upper nasal cavity and medial orbit, just behind the superior nasal conchae.

Hyoid Bone

The hyoid bone is an independent bone that does not contact any other bone and thus is not part of the skull (Figure 7.19). It is a small U-shaped bone located in the upper neck near the level of the inferior mandible, with the tips of the “U” pointing posteriorly. The hyoid serves as the base for the tongue above, and is attached to the larynx below and the pharynx posteriorly. The hyoid is held in position by a series of small muscles that attach to it either from above or below. These muscles act to move the hyoid up/down or forward/back. Movements of the hyoid are coordinated with movements of the tongue, larynx, and pharynx during swallowing and speaking.

7.3 The Vertebral Column

Learning Objectives

By the end of this section, you will be able to:
Describe each region of the vertebral column and the number of bones in each region
Discuss the curves of the vertebral column and how these change after birth
Describe a typical vertebra and determine the distinguishing characteristics for vertebrae in each vertebral region and features of the sacrum and the coccyx
Define the structure of an intervertebral disc
Determine the location of the ligaments that provide support for the vertebral column

The vertebral column is also known as the spinal column or spine (Figure 7.20). It consists of a sequence of vertebrae (singular = vertebra), each of which is separated from and united to its neighbors by an intervertebral disc. Together, the vertebrae and intervertebral discs form the vertebral column. It is a flexible column that supports the head, neck, and body and allows for their movements. It also protects the spinal cord, which passes down the back through openings in the vertebrae.

Regions of the Vertebral Column

The vertebral column originally develops as a series of 33 vertebrae, but this number is eventually reduced to 24 vertebrae, plus the sacrum and coccyx. The vertebral column is subdivided into five regions, with the vertebrae in each area named for that region and numbered in descending order. In the neck, there are seven cervical vertebrae, each designated with the letter “C” followed by its number. Superiorly, the C1 vertebra articulates (forms a joint) with the occipital condyles of the skull. Inferiorly, C1 articulates with the C2 vertebra, and so on. Below these are the 12 thoracic vertebrae, designated T1–T12. The lower back contains the L1–L5 lumbar vertebrae. The single sacrum, which is also part of the pelvis, is formed by the fusion of five sacral vertebrae. Similarly, the coccyx, or tailbone, results from the fusion of four small coccygeal vertebrae. However, the sacral and coccygeal fusions do not start until age 20 and are not completed until middle age. An interesting anatomical fact is that almost all mammals have seven cervical vertebrae, regardless of body size. This means that there are large variations in the size of cervical vertebrae, ranging from the very small cervical vertebrae of a shrew to the greatly elongated vertebrae in the neck of a giraffe. In a full-grown giraffe, each cervical vertebra is 11 inches tall.
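The regional counts given above can be tallied as a quick arithmetic check (an added note, using only the numbers already stated in this section):

\[
7 \ \text{(cervical)} + 12 \ \text{(thoracic)} + 5 \ \text{(lumbar)} = 24 \ \text{vertebrae}
\]
\[
24 + 5 \ \text{(fused sacral)} + 4 \ \text{(fused coccygeal)} = 33 \ \text{vertebrae in the original developmental series}
\]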
Curvatures of the Vertebral Column

The adult vertebral column does not form a straight line, but instead has four curvatures along its length (see Figure 7.20). These curves increase the vertebral column’s strength, flexibility, and ability to absorb shock. When the load on the spine is increased, by carrying a heavy backpack for example, the curvatures increase in depth (become more curved) to accommodate the extra weight. They then spring back when the weight is removed. The four adult curvatures are classified as either primary or secondary curvatures. Primary curves are retained from the original fetal curvature, while secondary curvatures develop after birth. During fetal development, the body is flexed anteriorly into the fetal position, giving the entire vertebral column a single curvature that is concave anteriorly. In the adult, this fetal curvature is retained in two regions of the vertebral column as the thoracic curve, which involves the thoracic vertebrae, and the sacrococcygeal curve, formed by the sacrum and coccyx. Each of these is thus called a primary curve because they are retained from the original fetal curvature of the vertebral column. A secondary curve develops gradually after birth as the child learns to sit upright, stand, and walk. Secondary curves are concave posteriorly, opposite in direction to the original fetal curvature. The cervical curve of the neck region develops as the infant begins to hold their head upright when sitting. Later, as the child begins to stand and then to walk, the lumbar curve of the lower back develops. In adults, the lumbar curve is generally deeper in females. Disorders associated with the curvature of the spine include kyphosis (an excessive posterior curvature of the thoracic region), lordosis (an excessive anterior curvature of the lumbar region), and scoliosis (an abnormal, lateral curvature, accompanied by twisting of the vertebral column).

Disorders of the... Vertebral Column

Developmental anomalies, pathological changes, or obesity can enhance the normal vertebral column curves, resulting in the development of abnormal or excessive curvatures (Figure 7.21). Kyphosis, also referred to as humpback or hunchback, is an excessive posterior curvature of the thoracic region. This can develop when osteoporosis causes weakening and erosion of the anterior portions of the upper thoracic vertebrae, resulting in their gradual collapse (Figure 7.22). Lordosis, or swayback, is an excessive anterior curvature of the lumbar region and is most commonly associated with obesity or late pregnancy. The accumulation of body weight in the abdominal region results in an anterior shift in the line of gravity that carries the weight of the body. This causes an anterior tilt of the pelvis and a pronounced enhancement of the lumbar curve. Scoliosis is an abnormal, lateral curvature, accompanied by twisting of the vertebral column. Compensatory curves may also develop in other areas of the vertebral column to help maintain the head positioned over the feet. Scoliosis is the most common vertebral abnormality among girls. The cause is usually unknown, but it may result from weakness of the back muscles, defects such as differential growth rates in the right and left sides of the vertebral column, or differences in the length of the lower limbs. When present, scoliosis tends to get worse during adolescent growth spurts. Although most individuals do not require treatment, a back brace may be recommended for growing children.
In extreme cases, surgery may be required. Excessive vertebral curves can be identified while an individual stands in the anatomical position. Observe the vertebral profile from the side and then from behind to check for kyphosis or lordosis. Then have the person bend forward. If scoliosis is present, an individual will have difficulty in bending directly forward, and the right and left sides of the back will not be level with each other in the bent position.

Interactive Link

Osteoporosis is a common age-related bone disease in which bone density and strength are decreased. Watch this video to get a better understanding of how thoracic vertebrae may become weakened and may fracture due to this disease. How may vertebral osteoporosis contribute to kyphosis?

General Structure of a Vertebra

Within the different regions of the vertebral column, vertebrae vary in size and shape, but they all follow a similar structural pattern. A typical vertebra will consist of a body, a vertebral arch, and seven processes (Figure 7.23). The body is the anterior portion of each vertebra and is the part that supports the body weight. Because of this, the vertebral bodies progressively increase in size and thickness going down the vertebral column. The bodies of adjacent vertebrae are separated and strongly united by an intervertebral disc. The vertebral arch forms the posterior portion of each vertebra. It consists of four parts, the right and left pedicles and the right and left laminae. Each pedicle forms one of the lateral sides of the vertebral arch. The pedicles are anchored to the posterior side of the vertebral body. Each lamina forms part of the posterior roof of the vertebral arch. The large opening between the vertebral arch and body is the vertebral foramen, which contains the spinal cord. In the intact vertebral column, the vertebral foramina of all of the vertebrae align to form the vertebral (spinal) canal, which serves as the bony protection and passageway for the spinal cord down the back. When the vertebrae are aligned together in the vertebral column, notches in the margins of the pedicles of adjacent vertebrae together form an intervertebral foramen, the opening through which a spinal nerve exits from the vertebral column (Figure 7.24). Seven processes arise from the vertebral arch. Each paired transverse process projects laterally and arises from the junction point between the pedicle and lamina. The single spinous process (vertebral spine) projects posteriorly at the midline of the back. The vertebral spines can easily be felt as a series of bumps just under the skin down the middle of the back. The transverse and spinous processes serve as important muscle attachment sites. A superior articular process projects or faces upward, and an inferior articular process faces or projects downward, on each side of a vertebra. The paired superior articular processes of one vertebra join with the corresponding paired inferior articular processes from the next higher vertebra. These junctions form slightly moveable joints between the adjacent vertebrae. The shape and orientation of the articular processes vary in different regions of the vertebral column and play a major role in determining the type and range of motion available in each region.

Regional Modifications of Vertebrae

In addition to the general characteristics of a typical vertebra described above, vertebrae also display characteristic size and structural features that vary between the different vertebral column regions.
Thus, cervical vertebrae are smaller than lumbar vertebrae due to differences in the proportion of body weight that each supports. Thoracic vertebrae have sites for rib attachment, and the vertebrae that give rise to the sacrum and coccyx have fused together into single bones.

Cervical Vertebrae

Typical cervical vertebrae, such as C4 or C5, have several characteristic features that differentiate them from thoracic or lumbar vertebrae (Figure 7.25). Cervical vertebrae have a small body, reflecting the fact that they carry the least amount of body weight. Cervical vertebrae usually have a bifid (Y-shaped) spinous process. The spinous processes of the C3–C6 vertebrae are short, but the spine of C7 is much longer. You can find these vertebrae by running your finger down the midline of the posterior neck until you encounter the prominent C7 spine located at the base of the neck. The transverse processes of the cervical vertebrae are sharply curved (U-shaped) to allow for passage of the cervical spinal nerves. Each transverse process also has an opening called the transverse foramen. An important artery that supplies the brain ascends through the neck by passing through these openings. The superior and inferior articular processes of the cervical vertebrae are flattened and largely face upward or downward, respectively. The first and second cervical vertebrae are further modified, giving each a distinctive appearance. The first cervical (C1) vertebra is also called the atlas, because this is the vertebra that supports the skull on top of the vertebral column (in Greek mythology, Atlas was the Titan who supported the heavens on his shoulders). The C1 vertebra does not have a body or spinous process. Instead, it is ring-shaped, consisting of an anterior arch and a posterior arch. The transverse processes of the atlas are longer and extend more laterally than do the transverse processes of any other cervical vertebrae. The superior articular processes face upward and are deeply curved for articulation with the occipital condyles on the base of the skull. The inferior articular processes are flat and face downward to join with the superior articular processes of the C2 vertebra. The second cervical (C2) vertebra is called the axis, because it serves as the axis for rotation when turning the head toward the right or left. The axis resembles typical cervical vertebrae in most respects, but is easily distinguished by the dens (odontoid process), a bony projection that extends upward from the vertebral body. The dens joins with the inner aspect of the anterior arch of the atlas, where it is held in place by the transverse ligament.

Thoracic Vertebrae

The bodies of the thoracic vertebrae are larger than those of cervical vertebrae (Figure 7.26). The characteristic feature for a typical midthoracic vertebra is the spinous process, which is long and has a pronounced downward angle that causes it to overlap the next inferior vertebra. The superior articular processes of thoracic vertebrae face anteriorly and the inferior processes face posteriorly. These orientations are important determinants for the type and range of movements available to the thoracic region of the vertebral column. Thoracic vertebrae have several additional articulation sites, each of which is called a facet, where a rib is attached. Most thoracic vertebrae have two facets located on the lateral sides of the body, each of which is called a costal facet (costal = “rib”). These are for articulation with the head (end) of a rib.
An additional facet is located on the transverse process for articulation with the tubercle of a rib.

Lumbar Vertebrae

Lumbar vertebrae carry the greatest amount of body weight and are thus characterized by the large size and thickness of the vertebral body (Figure 7.28). They have short transverse processes and a short, blunt spinous process that projects posteriorly. The articular processes are large, with the superior process facing backward and the inferior facing forward.

Sacrum and Coccyx

The sacrum is a triangular-shaped bone that is thick and wide across its superior base, where it is weight bearing, and then tapers down to an inferior, non-weight-bearing apex (Figure 7.29). It is formed by the fusion of five sacral vertebrae, a process that does not begin until after the age of 20. On the anterior surface of the older adult sacrum, the lines of vertebral fusion can be seen as four transverse ridges. On the posterior surface, running down the midline, is the median sacral crest, a bumpy ridge that is the remnant of the fused spinous processes (median = “midline,” while medial = “toward, but not necessarily at, the midline”). Similarly, the fused transverse processes of the sacral vertebrae form the lateral sacral crest. The sacral promontory is the anterior lip of the superior base of the sacrum. Lateral to this is the roughened auricular surface, which joins with the ilium portion of the hipbone to form the immobile sacroiliac joints of the pelvis. Passing inferiorly through the sacrum is a bony tunnel called the sacral canal, which terminates at the sacral hiatus near the inferior tip of the sacrum. The anterior and posterior surfaces of the sacrum have a series of paired openings called sacral foramina (singular = foramen) that connect to the sacral canal. Each of these openings is called a posterior (dorsal) sacral foramen or anterior (ventral) sacral foramen. These openings allow for the anterior and posterior branches of the sacral spinal nerves to exit the sacrum. The superior articular processes of the sacrum, one of which is found on either side of the superior opening of the sacral canal, articulate with the inferior articular processes from the L5 vertebra. The coccyx, or tailbone, is derived from the fusion of four very small coccygeal vertebrae (see Figure 7.29). It articulates with the inferior tip of the sacrum. It is not weight bearing in the standing position, but may receive some body weight when sitting.

Intervertebral Discs and Ligaments of the Vertebral Column

The bodies of adjacent vertebrae are strongly anchored to each other by an intervertebral disc. This structure provides padding between the bones during weight bearing, and because it can change shape, also allows for movement between the vertebrae. Although the total amount of movement available between any two adjacent vertebrae is small, when these movements are summed together along the entire length of the vertebral column, large body movements can be produced. Ligaments that extend along the length of the vertebral column also contribute to its overall support and stability.

Intervertebral Disc

An intervertebral disc is a fibrocartilaginous pad that fills the gap between adjacent vertebral bodies (see Figure 7.24). Each disc is anchored to the bodies of its adjacent vertebrae, thus strongly uniting these. The discs also provide padding between vertebrae during weight bearing.
Because of this, intervertebral discs are thin in the cervical region and thickest in the lumbar region, which carries the most body weight. In total, the intervertebral discs account for approximately 25 percent of your body height between the top of the pelvis and the base of the skull. Intervertebral discs are also flexible and can change shape to allow for movements of the vertebral column. Each intervertebral disc consists of two parts. The anulus fibrosus is the tough, fibrous outer layer of the disc. It forms a circle (anulus = “ring” or “circle”) and is firmly anchored to the outer margins of the adjacent vertebral bodies. Inside is the nucleus pulposus, consisting of a softer, more gel-like material. It has a high water content that serves to resist compression and thus is important for weight bearing. With increasing age, the water content of the nucleus pulposus gradually declines. This causes the disc to become thinner, decreasing total body height somewhat, and reduces the flexibility and range of motion of the disc, making bending more difficult. The gel-like nature of the nucleus pulposus also allows the intervertebral disc to change shape as one vertebra rocks side to side or forward and back in relation to its neighbors during movements of the vertebral column. Thus, bending forward causes compression of the anterior portion of the disc but expansion of the posterior disc. If the posterior anulus fibrosus is weakened due to injury or increasing age, the pressure exerted on the disc when bending forward and lifting a heavy object can cause the nucleus pulposus to protrude posteriorly through the anulus fibrosus, resulting in a herniated disc (“ruptured” or “slipped” disc) (Figure 7.30). The posterior bulging of the nucleus pulposus can cause compression of a spinal nerve at the point where it exits through the intervertebral foramen, with resulting pain and/or muscle weakness in those body regions supplied by that nerve. The most common sites for disc herniation are the L4/L5 or L5/S1 intervertebral discs, which can cause sciatica, a widespread pain that radiates from the lower back down the thigh and into the leg. Similar injuries of the C5/C6 or C6/C7 intervertebral discs, following forcible hyperflexion of the neck from a collision accident or football injury, can produce pain in the neck, shoulder, and upper limb.

Interactive Link

Watch this animation to see what it means to “slip” a disc. Watch this second animation to see one possible treatment for a herniated disc, removing and replacing the damaged disc with an artificial one that allows for movement between the adjacent vertebrae. How could lifting a heavy object produce pain in a lower limb?

Ligaments of the Vertebral Column

Adjacent vertebrae are united by ligaments that run the length of the vertebral column along both its posterior and anterior aspects (Figure 7.31). These serve to resist excess forward or backward bending movements of the vertebral column, respectively. The anterior longitudinal ligament runs down the anterior side of the entire vertebral column, uniting the vertebral bodies. It serves to resist excess backward bending of the vertebral column. Protection against this movement is particularly important in the neck, where extreme posterior bending of the head and neck can stretch or tear this ligament, resulting in a painful whiplash injury. Prior to the mandatory installation of seat headrests, whiplash injuries were common for passengers involved in a rear-end automobile collision.
The supraspinous ligament is located on the posterior side of the vertebral column, where it interconnects the spinous processes of the thoracic and lumbar vertebrae. This strong ligament supports the vertebral column during forward bending motions. In the posterior neck, where the cervical spinous processes are short, the supraspinous ligament expands to become the nuchal ligament (nuchae = “nape” or “back of the neck”). The nuchal ligament is attached to the cervical spinous processes and extends upward and posteriorly to attach to the midline base of the skull, out to the external occipital protuberance. It supports the skull and prevents it from falling forward. This ligament is much larger and stronger in four-legged animals such as cows, where the large skull hangs off the front end of the vertebral column. You can easily feel this ligament by first extending your head backward and pressing down on the posterior midline of your neck. Then tilt your head forward and you will feel the nuchal ligament popping out as it tightens to limit anterior bending of the head and neck. Additional ligaments are located inside the vertebral canal, next to the spinal cord, along the length of the vertebral column. The posterior longitudinal ligament is found anterior to the spinal cord, where it is attached to the posterior sides of the vertebral bodies. Posterior to the spinal cord is the ligamentum flavum (“yellow ligament”). This consists of a series of short, paired ligaments, each of which interconnects the lamina regions of adjacent vertebrae. The ligamentum flavum has large numbers of elastic fibers, which have a yellowish color, allowing it to stretch and then pull back. Both of these ligaments provide important support for the vertebral column when bending forward.

Interactive Link

Use this tool to identify the bones, intervertebral discs, and ligaments of the vertebral column. The thickest portions of the anterior longitudinal ligament and the supraspinous ligament are found in which regions of the vertebral column?

Career Connection

Chiropractor

Chiropractors are health professionals who use nonsurgical techniques to help patients with musculoskeletal system problems that involve the bones, muscles, ligaments, tendons, or nervous system. They treat problems such as neck pain, back pain, joint pain, or headaches. Chiropractors focus on the patient’s overall health and can also provide counseling related to lifestyle issues, such as diet, exercise, or sleep problems. If needed, they will refer the patient to other medical specialists. Chiropractors use a drug-free, hands-on approach for patient diagnosis and treatment. They will perform a physical exam, assess the patient’s posture and spine, and may perform additional diagnostic tests, including taking X-ray images. They primarily use manual techniques, such as spinal manipulation, to adjust the patient’s spine or other joints. They can recommend therapeutic or rehabilitative exercises, and some also include acupuncture, massage therapy, or ultrasound as part of the treatment program. In addition to those in general practice, some chiropractors specialize in sports injuries, neurology, orthopaedics, pediatrics, nutrition, internal disorders, or diagnostic imaging. To become a chiropractor, students must have 3–4 years of undergraduate education, attend an accredited, four-year Doctor of Chiropractic (D.C.) degree program, and pass a licensure examination to be licensed for practice in their state.
With the aging of the baby-boom generation, employment for chiropractors is expected to increase.

7.4 The Thoracic Cage

Learning Objectives

By the end of this section, you will be able to:
Discuss the components that make up the thoracic cage
Identify the parts of the sternum and define the sternal angle
Discuss the parts of a rib and rib classifications

The thoracic cage (rib cage) forms the thorax (chest) portion of the body. It consists of the 12 pairs of ribs with their costal cartilages and the sternum (Figure 7.32). The ribs are anchored posteriorly to the 12 thoracic vertebrae (T1–T12). The thoracic cage protects the heart and lungs.

Sternum

The sternum is the elongated bony structure that anchors the anterior thoracic cage. It consists of three parts: the manubrium, body, and xiphoid process. The manubrium is the wider, superior portion of the sternum. The top of the manubrium has a shallow, U-shaped border called the jugular (suprasternal) notch. This can be easily felt at the anterior base of the neck, between the medial ends of the clavicles. The clavicular notch is the shallow depression located on either side at the superior-lateral margins of the manubrium. This is the site of the sternoclavicular joint, between the sternum and clavicle. The first ribs also attach to the manubrium. The elongated, central portion of the sternum is the body. The manubrium and body join together at the sternal angle, so called because the junction between these two components is not flat, but forms a slight bend. The second rib attaches to the sternum at the sternal angle. Since the first rib is hidden behind the clavicle, the second rib is the highest rib that can be identified by palpation. Thus, the sternal angle and second rib are important landmarks for the identification and counting of the lower ribs. Ribs 3–7 attach to the sternal body. The inferior tip of the sternum is the xiphoid process. This small structure is cartilaginous early in life, but gradually becomes ossified starting during middle age.

Ribs

Each rib is a curved, flattened bone that contributes to the wall of the thorax. The ribs articulate posteriorly with the T1–T12 thoracic vertebrae, and most attach anteriorly via their costal cartilages to the sternum. There are 12 pairs of ribs. The ribs are numbered 1–12 in accordance with the thoracic vertebrae.

Parts of a Typical Rib

The posterior end of a typical rib is called the head of the rib (see Figure 7.27). This region articulates primarily with the costal facet located on the body of the same numbered thoracic vertebra and, to a lesser degree, with the costal facet located on the body of the next higher vertebra. Lateral to the head is the narrowed neck of the rib. A small bump on the posterior rib surface is the tubercle of the rib, which articulates with the facet located on the transverse process of the same numbered vertebra. The remainder of the rib is the body of the rib (shaft). Just lateral to the tubercle is the angle of the rib, the point at which the rib has its greatest degree of curvature. The angles of the ribs form the most posterior extent of the thoracic cage. In the anatomical position, the angles align with the medial border of the scapula. A shallow costal groove for the passage of blood vessels and a nerve is found along the inferior margin of each rib.

Rib Classifications

The bony ribs do not extend anteriorly completely around to the sternum. Instead, each rib ends in a costal cartilage.
These cartilages are made of hyaline cartilage and can extend for several inches. Most ribs are then attached, either directly or indirectly, to the sternum via their costal cartilage (see Figure 7.32). The ribs are classified into three groups based on their relationship to the sternum. Ribs 1–7 are classified as true ribs (vertebrosternal ribs). The costal cartilage from each of these ribs attaches directly to the sternum. Ribs 8–12 are called false ribs (vertebrochondral ribs). The costal cartilages from these ribs do not attach directly to the sternum. For ribs 8–10, the costal cartilages are attached to the cartilage of the next higher rib. Thus, the cartilage of rib 10 attaches to the cartilage of rib 9, rib 9 then attaches to rib 8, and rib 8 is attached to rib 7. The last two false ribs (11–12) are also called floating ribs (vertebral ribs). These are short ribs that do not attach to the sternum at all. Instead, their small costal cartilages terminate within the musculature of the lateral abdominal wall. 7.5 Embryonic Development of the Axial Skeleton Learning Objectives By the end of this section, you will be able to: Discuss the two types of embryonic bone development within the skull Describe the development of the vertebral column and thoracic cage The axial skeleton begins to form during early embryonic development. However, growth, remodeling, and ossification (bone formation) continue for several decades after birth before the adult skeleton is fully formed. Knowledge of the developmental processes that give rise to the skeleton is important for understanding the abnormalities that may arise in skeletal structures. Development of the Skull During the third week of embryonic development, a rod-like structure called the notochord develops dorsally along the length of the embryo. The tissue overlying the notochord enlarges and forms the neural tube, which will give rise to the brain and spinal cord. By the fourth week, mesoderm tissue located on either side of the notochord thickens and separates into a repeating series of block-like tissue structures, each of which is called a somite. As the somites enlarge, each one will split into several parts. The most medial of these parts is called a sclerotome. The sclerotomes consist of an embryonic tissue called mesenchyme, which will give rise to the fibrous connective tissues, cartilages, and bones of the body. The bones of the skull arise from mesenchyme during embryonic development in two different ways. The first mechanism produces the bones that form the top and sides of the brain case. This involves the local accumulation of mesenchymal cells at the site of the future bone. These cells then differentiate directly into bone-producing cells, which form the skull bones through the process of intramembranous ossification. As the brain case bones grow in the fetal skull, they remain separated from each other by large areas of dense connective tissue, each of which is called a fontanelle (Figure 7.33). The fontanelles are the soft spots on an infant’s head. They are important during birth because these areas allow the skull to change shape as it squeezes through the birth canal. After birth, the fontanelles allow for continued growth and expansion of the skull as the brain enlarges. The largest fontanelle is located on the anterior head, at the junction of the frontal and parietal bones. The fontanelles decrease in size and disappear by age 2.
However, the skull bones remain separated from each other at the sutures, which contain dense fibrous connective tissue that unites the adjacent bones. The connective tissue of the sutures allows for continued growth of the skull bones as the brain enlarges during childhood growth. The second mechanism for bone development in the skull produces the facial bones and floor of the brain case. This also begins with the localized accumulation of mesenchymal cells. However, these cells differentiate into cartilage cells, which produce a hyaline cartilage model of the future bone. As this cartilage model grows, it is gradually converted into bone through the process of endochondral ossification. This is a slow process, and the cartilage is not completely converted to bone until the skull achieves its full adult size. At birth, the brain case and orbits of the skull are disproportionately large compared to the bones of the jaws and lower face. This reflects the relative underdevelopment of the maxilla and mandible, which lack teeth, and the small sizes of the paranasal sinuses and nasal cavity. During early childhood, the mastoid process enlarges, the two halves of the mandible and frontal bone fuse together to form single bones, and the paranasal sinuses enlarge. The jaws also expand as the teeth begin to appear. These changes all contribute to the rapid growth and enlargement of the face during childhood. Development of the Vertebral Column and Thoracic Cage Development of the vertebrae begins with the accumulation of mesenchyme cells from each sclerotome around the notochord. These cells differentiate into a hyaline cartilage model for each vertebra, which then grows and eventually ossifies into bone through the process of endochondral ossification. As the developing vertebrae grow, the notochord largely disappears. However, small areas of notochord tissue persist between the adjacent vertebrae, and this contributes to the formation of each intervertebral disc. The ribs and sternum also develop from mesenchyme. The ribs initially develop as part of the cartilage model for each vertebra, but in the thorax region, the rib portion separates from the vertebra by the eighth week. The cartilage model of the rib then ossifies, except for the anterior portion, which remains as the costal cartilage. The sternum initially forms as paired hyaline cartilage models on either side of the anterior midline, beginning during the fifth week of development. The cartilage models of the ribs become attached to the lateral sides of the developing sternum. Eventually, the two halves of the cartilaginous sternum fuse together along the midline and then ossify into bone. The manubrium and body of the sternum are converted into bone first, with the xiphoid process remaining as cartilage until late in life. Interactive Link View this video to review the two processes that give rise to the bones of the skull and body. What are the two mechanisms by which the bones of the body are formed, and which bones are formed by each mechanism? Homeostatic Imbalances Craniosynostosis The premature closure (fusion) of a suture line is a condition called craniosynostosis. This error in the normal developmental process results in abnormal growth of the skull and deformity of the head. It is produced either by defects in the ossification process of the skull bones or failure of the brain to properly enlarge. Genetic factors are involved, but the underlying cause is unknown.
It is a relatively common condition, occurring in approximately 1:2000 births, with males being more commonly affected. Primary craniosynostosis involves the early fusion of one cranial suture, whereas complex craniosynostosis results from the premature fusion of several sutures. The early fusion of a suture in primary craniosynostosis prevents any additional enlargement of the cranial bones and skull along this line. Continued growth of the brain and skull is therefore diverted to other areas of the head, causing an abnormal enlargement of these regions. For example, the early disappearance of the anterior fontanelle and premature closure of the sagittal suture prevents growth across the top of the head. This is compensated for by upward growth of the bones of the lateral skull, resulting in a long, narrow, wedge-shaped head. This condition, known as scaphocephaly, accounts for approximately 50 percent of craniosynostosis abnormalities. Although the skull is misshapen, the brain still has adequate room to grow, and thus there is no accompanying abnormal neurological development. In cases of complex craniosynostosis, several sutures close prematurely. The amount and degree of skull deformity are determined by the location and extent of the sutures involved. This results in more severe constraints on skull growth, which can alter or impede proper brain growth and development. Cases of craniosynostosis are usually treated with surgery. A team of physicians will open the skull along the fused suture, which will then allow the skull bones to resume their growth in this area. In some cases, parts of the skull will be removed and replaced with an artificial plate. The earlier after birth that surgery is performed, the better the outcome. After treatment, most children continue to grow and develop normally and do not exhibit any neurological problems.
Introduction to Sociology
Learning Objectives 1.1 What Is Sociology? Explain concepts central to sociology Understand how different sociological perspectives have developed 1.2 The History of Sociology Explain why sociology emerged when it did Describe how sociology became a separate academic discipline 1.3 Theoretical Perspectives Explain what sociological theories are and how they are used Understand the similarities and differences between structural functionalism, conflict theory, and symbolic interactionism 1.4 Why Study Sociology? Explain why it is worthwhile to study sociology Identify ways sociology is applied in the real world Introduction to Sociology Concerts, sports games, and political rallies can have very large crowds. When you attend one of these events, you may know only the people you came with. Yet you may experience a feeling of connection to the group. You are one of the crowd. You cheer and applaud when everyone else does. You boo and yell alongside them. You move out of the way when someone needs to get by, and you say "excuse me" when you need to leave. You know how to behave in this kind of crowd. It can be a very different experience if you are traveling in a foreign country and find yourself in a crowd moving down the street. You may have trouble figuring out what is happening. Is the crowd just the usual morning rush, or is it a political protest of some kind? Perhaps there was some sort of accident or disaster. Is it safe in this crowd, or should you try to extract yourself? How can you find out what is going on? Although you are in it, you may not feel like you are part of this crowd. You may not know what to do or how to behave. Even within one type of crowd, different groups exist and different behaviors are on display. At a rock concert, for example, some may enjoy singing along, others prefer to sit and observe, while still others may join in a mosh pit or try crowd surfing. Why do we feel and act differently in different types of social situations? Why might people of a single group exhibit different behaviors in the same situation? Why might people acting similarly not feel connected to others exhibiting the same behavior? These are some of the many questions sociologists ask as they study people and societies.
[ { "answer": { "ans_choice": 2, "ans_text": "The study of society and social interaction" }, "bloom": "1", "hl_context": "Sociologists study social events , interactions , and patterns . They then develop theories to explain why these occur and what can result from them . <hl> In sociology , a theory is a way to explain different aspects of social interactions and to create testable propositions about society ( Allan 2006 ) . <hl> <hl> A dictionary defines sociology as the systematic study of society and social interaction . <hl> The word “ sociology ” is derived from the Latin word socius ( companion ) and the Greek word logos ( study of ) , meaning “ the study of companionship . ” While this is a starting point for the discipline , sociology is actually much more complex . It uses many different methods to study a wide range of subject matter and to apply these studies to the real world .", "hl_sentences": "In sociology , a theory is a way to explain different aspects of social interactions and to create testable propositions about society ( Allan 2006 ) . A dictionary defines sociology as the systematic study of society and social interaction .", "question": { "cloze_format": "____ best describes sociology as a subject.", "normal_format": "Which of the following best describes sociology as a subject?", "question_choices": [ "The study of individual behavior", "The study of cultures", "The study of society and social interaction", "The study of economics" ], "question_id": "fs-id817674", "question_text": "Which of the following best describes sociology as a subject?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "imagination" }, "bloom": "1", "hl_context": "Although these studies and the methods of carrying them out are different , the sociologists involved in them all have something in common . <hl> Each of them looks at society using what pioneer sociologist C . Wright Mills called the sociological imagination , sometimes also referred to as the sociological lens or sociological perspective . <hl> Mills defined sociological imagination as how individuals understand their own and others ’ pasts in relation to history and social structure ( 1959 ) .", "hl_sentences": "Each of them looks at society using what pioneer sociologist C . Wright Mills called the sociological imagination , sometimes also referred to as the sociological lens or sociological perspective .", "question": { "cloze_format": "C. Wright Mills once said that sociologists need to develop a sociological __________ to study how society affects individuals.", "normal_format": "Which sociological C. Wright Mills once said that sociologists need to develop to study how society affects individuals?", "question_choices": [ "culture", "imagination", "method", "tool" ], "question_id": "fs-id1947852", "question_text": "C. Wright Mills once said that sociologists need to develop a sociological __________ to study how society affects individuals." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "interact" }, "bloom": null, "hl_context": "Sociologists study all aspects and levels of society . <hl> A society is a group of people whose members interact , reside in a definable area , and share a culture . <hl> A culture includes the group ’ s shared practices , values , and beliefs . One sociologist might analyze video of people from different societies as they carry on everyday conversations to study the rules of polite conversation from different world cultures . 
Another sociologist might interview a representative sample of people to see how texting has changed the way they communicate . Yet another sociologist might study how migration determined the way in which language spread and changed over time . A fourth sociologist might be part of a team developing signs to warn people living thousands of years in the future , and speaking many different languages , to stay away from still-dangerous nuclear waste .", "hl_sentences": "A society is a group of people whose members interact , reside in a definable area , and share a culture .", "question": { "cloze_format": "A sociologist defines society as a group of people who reside in a defined area, share a culture, and who ____.", "normal_format": "A sociologist defines society as a group of people who reside in a defined area, share a culture, and do what?", "question_choices": [ "interact", "work in the same industry", "speak different languages", "practice a recognized religion" ], "question_id": "fs-id1155804", "question_text": "A sociologist defines society as a group of people who reside in a defined area, share a culture, and who:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "identify similarities in how social groups respond to social pressure" }, "bloom": null, "hl_context": "All sociologists are interested in the experiences of individuals and how those experiences are shaped by interactions with social groups and society as a whole . To a sociologist , the personal decisions an individual makes do not exist in a vacuum . Cultural patterns and social forces put pressure on people to select one choice over another . <hl> Sociologists try to identify these general patterns by examining the behavior of large groups of people living in the same society and experiencing the same societal pressures . <hl>", "hl_sentences": "Sociologists try to identify these general patterns by examining the behavior of large groups of people living in the same society and experiencing the same societal pressures .", "question": { "cloze_format": "Seeing patterns means that a sociologist needs to be able to ____.", "normal_format": "Seeing patterns means that a sociologist needs to be able to do what?", "question_choices": [ "compare the behavior of individuals from different societies", "compare one society to another", "identify similarities in how social groups respond to social pressure", "compare individuals to groups" ], "question_id": "fs-id1414719", "question_text": "Seeing patterns means that a sociologist needs to be able to:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "Economics" }, "bloom": null, "hl_context": "In the 13th century , Ma Tuan-Lin , a Chinese historian , first recognized social dynamics as an underlying component of historical development in his seminal encyclopedia , General Study of Literary Remains . The next century saw the emergence of the historian some consider to be the world ’ s first sociologist : Ibn Khaldun ( 1332 – 1406 ) of Tunisia . <hl> He wrote about many topics of interest today , setting a foundation for both modern sociology and economics , including a theory of social conflict , a comparison of nomadic and sedentary life , a description of political economy , and a study connecting a tribe ’ s social cohesion to its capacity for power ( Hannoum 2003 ) . 
<hl>", "hl_sentences": "He wrote about many topics of interest today , setting a foundation for both modern sociology and economics , including a theory of social conflict , a comparison of nomadic and sedentary life , a description of political economy , and a study connecting a tribe ’ s social cohesion to its capacity for power ( Hannoum 2003 ) .", "question": { "cloze_format": "___ was not a topic of study in early sociology.", "normal_format": "Which of the following was a topic of study in early sociology?", "question_choices": [ "Astrology", "Economics", "Physics", "History" ], "question_id": "fs-id1169033060946", "question_text": "Which of the following was a topic of study in early sociology?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "Karl Marx" }, "bloom": "1", "hl_context": "Another theory with a macro-level view , called conflict theory , looks at society as a competition for limited resources . Conflict theory sees society as being made up of individuals who must compete for social , political , and material resources such as political power , leisure time , money , housing , and entertainment . Social structures and organizations such as religious groups , governments , and corporations reflect this competition in their inherent inequalities . Some individuals and organizations are able to obtain and keep more resources than others . These \" winners \" use their power and influence to maintain their positions of power in society and to suppress the advancement of other individuals and groups . <hl> Of the early founders of sociology , Karl Marx is most closely identified with this theory . <hl> <hl> He focused on the economic conflict between different social classes . <hl> As he and Fredrick Engels famously described in their Communist Manifesto , “ the history of all hitherto existing society is the history of class struggles . Freeman and slave , patrician and plebeian , lord and serf , guild-master and journeyman , in a word , oppressor and oppressed ” ( 1848 ) . <hl> Marx rejected Comte's positivism . <hl> <hl> He believed that societies grew and changed as a result of the struggles of different social classes over the means of production . <hl> At the time he was developing his theories , the Industrial Revolution and the rise of capitalism led to great disparities in wealth between the owners of the factories and workers . Capitalism , an economic system characterized by private or corporate ownership of goods and the means to produce them , grew in many nations .", "hl_sentences": "Of the early founders of sociology , Karl Marx is most closely identified with this theory . He focused on the economic conflict between different social classes . Marx rejected Comte's positivism . He believed that societies grew and changed as a result of the struggles of different social classes over the means of production .", "question": { "cloze_format": "___ believed societies changed due to class struggle.", "normal_format": "Which founder of sociology believed societies changed due to class struggle?", "question_choices": [ "Emile Comte", "Karl Marx", "Plato", "Herbert Spencer" ], "question_id": "fs-id1169033067407", "question_text": "Which founder of sociology believed societies changed due to class struggle?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "whether sociological studies can predict or improve society" }, "bloom": null, "hl_context": "<hl> The different approaches to research based on positivism or antipositivism are often considered the foundation for the differences found today between quantitative sociology and qualitative sociology . <hl> Quantitative sociology uses statistical methods such as surveys with large numbers of participants . Researchers analyze data using statistical techniques to see if they can uncover patterns of human behavior . Qualitative sociology seeks to understand human behavior by learning about it through in-depth interviews , focus groups , and analysis of content sources ( like books , magazines , journals , and popular media ) .", "hl_sentences": "The different approaches to research based on positivism or antipositivism are often considered the foundation for the differences found today between quantitative sociology and qualitative sociology .", "question": { "cloze_format": "The difference between positivism and antipositivsm relates to ____.", "normal_format": "What is the difference between positivism and antipositivism relate to?", "question_choices": [ "whether individuals like or dislike their society", "whether research methods use statistical data or person-to-person research", "whether sociological studies can predict or improve society", "all of the above" ], "question_id": "fs-id1169033122620", "question_text": "The difference between positivism and antipositivism relates to:" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A large survey" }, "bloom": null, "hl_context": "The different approaches to research based on positivism or antipositivism are often considered the foundation for the differences found today between quantitative sociology and qualitative sociology . <hl> Quantitative sociology uses statistical methods such as surveys with large numbers of participants . <hl> Researchers analyze data using statistical techniques to see if they can uncover patterns of human behavior . Qualitative sociology seeks to understand human behavior by learning about it through in-depth interviews , focus groups , and analysis of content sources ( like books , magazines , journals , and popular media ) .", "hl_sentences": "Quantitative sociology uses statistical methods such as surveys with large numbers of participants .", "question": { "cloze_format": "A quantitative sociologists use to gather date is ____.", "normal_format": "Which would a quantitative sociologists use to gather data?", "question_choices": [ "A large survey", "A literature search", "An in-depth interview", "A review of television programs" ], "question_id": "fs-id1169033106082", "question_text": "Which would a quantitative sociologists use to gather data?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "their culture" }, "bloom": "1", "hl_context": "Weber also made a major contribution to the methodology of sociological research . <hl> Along with other researchers such as Wilhelm Dilthey ( 1833 – 1911 ) and Heinrich Rickert ( 1863 – 1936 ) , Weber believed that it was difficult if not impossible to use standard scientific methods to accurately predict the behavior of groups as people hoped to do . <hl> <hl> They argued that the influence of culture on human behavior had to be taken into account . 
<hl> This even applied to the researchers themselves , who , they believed , should be aware of how their own cultural biases could influence their research . To deal with this problem , Weber and Dilthey introduced the concept of verstehen , a German word that means to understand in a deep way . In seeking verstehen , outside observers of a social world — an entire culture or a small setting — attempt to understand it from an insider ’ s point of view .", "hl_sentences": "Along with other researchers such as Wilhelm Dilthey ( 1833 – 1911 ) and Heinrich Rickert ( 1863 – 1936 ) , Weber believed that it was difficult if not impossible to use standard scientific methods to accurately predict the behavior of groups as people hoped to do . They argued that the influence of culture on human behavior had to be taken into account .", "question": { "cloze_format": "Weber believed humans could not be studied purely objectively because they were influenced by ____.", "normal_format": "According to Weber's beliefs, humans could not be studied objectively because they were influenced by what?", "question_choices": [ "drugs", "their culture", "their genetic makeup", "the researcher" ], "question_id": "fs-id1169033139889", "question_text": "Weber believed humans could not be studied purely objectively because they were influenced by:" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "Symbolic interactionism" }, "bloom": null, "hl_context": "<hl> Symbolic Interactionism provides a theoretical perspective that helps scholars examine the relationship of individuals within their society . <hl> <hl> This perspective is centered on the notion that communication — or the exchange of meaning through language and symbols — is how people make sense of their social worlds . <hl> As pointed out by Herman and Reynolds ( 1994 ) , this viewpoint sees people as active in shaping their world , rather than as entities who are acted upon by society ( Herman and Reynolds 1994 ) . <hl> This approach looks at society and people from a micro-level perspective . <hl>", "hl_sentences": "Symbolic Interactionism provides a theoretical perspective that helps scholars examine the relationship of individuals within their society . This perspective is centered on the notion that communication — or the exchange of meaning through language and symbols — is how people make sense of their social worlds . This approach looks at society and people from a micro-level perspective .", "question": { "cloze_format": "____ is most likely to look at the social world on a micro level.", "normal_format": "Which of these theories is most likely to look at the social world on a micro level?", "question_choices": [ "Structural functionalism", "Conflict theory", "Positivism", "Symbolic interactionism" ], "question_id": "fs-id2039108", "question_text": "Which of these theories is most likely to look at the social world on a micro level?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "Karl Marx" }, "bloom": "1", "hl_context": "Another theory with a macro-level view , called conflict theory , looks at society as a competition for limited resources . Conflict theory sees society as being made up of individuals who must compete for social , political , and material resources such as political power , leisure time , money , housing , and entertainment . 
Social structures and organizations such as religious groups , governments , and corporations reflect this competition in their inherent inequalities . Some individuals and organizations are able to obtain and keep more resources than others . These \" winners \" use their power and influence to maintain their positions of power in society and to suppress the advancement of other individuals and groups . Of the early founders of sociology , Karl Marx is most closely identified with this theory . <hl> He focused on the economic conflict between different social classes . <hl> <hl> As he and Fredrick Engels famously described in their Communist Manifesto , “ the history of all hitherto existing society is the history of class struggles . <hl> Freeman and slave , patrician and plebeian , lord and serf , guild-master and journeyman , in a word , oppressor and oppressed ” ( 1848 ) .", "hl_sentences": "He focused on the economic conflict between different social classes . As he and Fredrick Engels famously described in their Communist Manifesto , “ the history of all hitherto existing society is the history of class struggles .", "question": { "cloze_format": "____ believed that the history of society was one of class struggle.", "normal_format": "Who believed that the history of society was one of class struggle?", "question_choices": [ "Emile Durkheim", "Karl Marx", "Erving Goffmann", "George Herbert Mead" ], "question_id": "eip-917", "question_text": "Who believed that the history of society was one of class struggle?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "Herbert Blumer" }, "bloom": "1", "hl_context": "George Herbert Mead ( 1863 – 1931 ) is considered one of the founders of symbolic interactionism , though he never published his work on it ( LaRossa & Reitzes 1993 ) . It was up to his student Herbert Blumer ( 1900 – 1987 ) to interpret Mead's work and popularize the theory . <hl> Blumer coined the term “ symbolic interactionism ” and identified its three basic premises : <hl>", "hl_sentences": "Blumer coined the term “ symbolic interactionism ” and identified its three basic premises :", "question": { "cloze_format": "___ coined the phrase symbolic interactionism.", "normal_format": "Who coined the phrase symbolic interactionism?", "question_choices": [ "Herbert Blumer", "Max Weber", "Lester F. Ward", "W.I. Thomas" ], "question_id": "fs-id1574745", "question_text": "Who coined the phrase symbolic interactionism?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "theatrical roles" }, "bloom": "2", "hl_context": "These meanings are handled in , and modified through , an interpretative process used by the person in dealing with the things he / she encounters ( Blumer 1969 ) . Social scientists who apply symbolic-interactionist thinking look for patterns of interaction between individuals . Their studies often involve observation of one-on-one interactions . <hl> For example , while a conflict theorist studying a political protest might focus on class difference , a symbolic interactionist would be more interested in how individuals in the protesting group interact , as well as the signs and symbols protesters use to communicate their message . <hl> The focus on the importance of symbols in building a society led sociologists like Erving Goffman ( 1922-1982 ) to develop a technique called dramaturgical analysis . 
<hl> Goffman used theater as an analogy for social interaction and recognized that people ’ s interactions showed patterns of cultural “ scripts . ” Because it can be unclear what part a person may play in a given situation , he or she has to improvise his or her role as the situation unfolds ( Goffman 1958 ) . <hl> <hl> A sociologist viewing food consumption through a symbolic interactionist lens would be more interested in micro-level topics , such as the symbolic use of food in religious rituals , or the role it plays in the social interaction of a family dinner . <hl> This perspective might also study the interactions among group members who identify themselves based on their sharing a particular diet , such as vegetarians ( people who don ’ t eat meat ) or locavores ( people who strive to eat locally produced food ) .", "hl_sentences": "For example , while a conflict theorist studying a political protest might focus on class difference , a symbolic interactionist would be more interested in how individuals in the protesting group interact , as well as the signs and symbols protesters use to communicate their message . Goffman used theater as an analogy for social interaction and recognized that people ’ s interactions showed patterns of cultural “ scripts . ” Because it can be unclear what part a person may play in a given situation , he or she has to improvise his or her role as the situation unfolds ( Goffman 1958 ) . A sociologist viewing food consumption through a symbolic interactionist lens would be more interested in micro-level topics , such as the symbolic use of food in religious rituals , or the role it plays in the social interaction of a family dinner .", "question": { "cloze_format": "A symbolic interactionist may compare social interactions to ___ .", "normal_format": "What may a symbolic interactionist compare social interactions with?", "question_choices": [ "behaviors", "conflicts", "human organs", "theatrical roles" ], "question_id": "fs-id2195852", "question_text": "A symbolic interactionist may compare social interactions to:" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "Participant observation" }, "bloom": "2", "hl_context": "<hl> Studies that use the symbolic interactionist perspective are more likely to use qualitative research methods , such as in-depth interviews or participant observation , because they seek to understand the symbolic worlds in which research subjects live . <hl> These meanings are handled in , and modified through , an interpretative process used by the person in dealing with the things he / she encounters ( Blumer 1969 ) . <hl> Social scientists who apply symbolic-interactionist thinking look for patterns of interaction between individuals . <hl> <hl> Their studies often involve observation of one-on-one interactions . <hl> For example , while a conflict theorist studying a political protest might focus on class difference , a symbolic interactionist would be more interested in how individuals in the protesting group interact , as well as the signs and symbols protesters use to communicate their message . The focus on the importance of symbols in building a society led sociologists like Erving Goffman ( 1922-1982 ) to develop a technique called dramaturgical analysis . Goffman used theater as an analogy for social interaction and recognized that people ’ s interactions showed patterns of cultural “ scripts . 
” Because it can be unclear what part a person may play in a given situation , he or she has to improvise his or her role as the situation unfolds ( Goffman 1958 ) .", "hl_sentences": "Studies that use the symbolic interactionist perspective are more likely to use qualitative research methods , such as in-depth interviews or participant observation , because they seek to understand the symbolic worlds in which research subjects live . Social scientists who apply symbolic-interactionist thinking look for patterns of interaction between individuals . Their studies often involve observation of one-on-one interactions .", "question": { "cloze_format": "___ would most likely be used by a symbolic interactionist.", "normal_format": "Which research technique would most likely be used by a symbolic interactionist?", "question_choices": [ "Surveys", "Participant observation", "Quantitative data analysis", "None of the above" ], "question_id": "fs-id1352194", "question_text": "Which research technique would most likely be used by a symbolic interactionist?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "harmful" }, "bloom": "1", "hl_context": "When Elizabeth Eckford tried to enter Central High School in Little Rock , Arkansas , in September 1957 , she was met by an angry crowd . But she knew she had the law on her side . Three years earlier in the landmark Brown vs . the Board of Education case , the U . S . Supreme Court had overturned 21 state laws that allowed blacks and whites to be taught in separate school systems as long as the school systems were “ equal . ” One of the major factors influencing that decision was research conducted by the husband-and-wife team of sociologists , Kenneth and Mamie Clark . <hl> Their research showed that segregation was harmful to young black schoolchildren , and the Court found that harm to be unconstitutional . <hl>", "hl_sentences": "Their research showed that segregation was harmful to young black schoolchildren , and the Court found that harm to be unconstitutional .", "question": { "cloze_format": "Kenneth and Mamie Clark used sociological research to show that segregation was ____.", "normal_format": "What segregation was that Kenneth and Mamie Clark used sociological research to show?", "question_choices": [ "beneficial", "harmful", "illegal", "of no importance" ], "question_id": "fs-id1798206", "question_text": "Kenneth and Mamie Clark used sociological research to show that segregation was:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "all of the above" }, "bloom": "3", "hl_context": "Sociologists study social events , interactions , and patterns . <hl> They then develop theories to explain why these occur and what can result from them . <hl> In sociology , a theory is a way to explain different aspects of social interactions and to create testable propositions about society ( Allan 2006 ) . The different approaches to research based on positivism or antipositivism are often considered the foundation for the differences found today between quantitative sociology and qualitative sociology . Quantitative sociology uses statistical methods such as surveys with large numbers of participants . <hl> Researchers analyze data using statistical techniques to see if they can uncover patterns of human behavior . 
<hl> <hl> Qualitative sociology seeks to understand human behavior by learning about it through in-depth interviews , focus groups , and analysis of content sources ( like books , magazines , journals , and popular media ) . <hl>", "hl_sentences": "They then develop theories to explain why these occur and what can result from them . Researchers analyze data using statistical techniques to see if they can uncover patterns of human behavior . Qualitative sociology seeks to understand human behavior by learning about it through in-depth interviews , focus groups , and analysis of content sources ( like books , magazines , journals , and popular media ) .", "question": { "cloze_format": "Studying Sociology helps people analyze data because they learn ____.", "normal_format": "Sociology studies help people analyze data because they learn how to do what?", "question_choices": [ "interview techniques", "to apply statistics", "to generate theories", "all of the above" ], "question_id": "fs-id1904314", "question_text": "Studying Sociology helps people analyze data because they learn:" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "both a and b" }, "bloom": null, "hl_context": "The prominent sociologist Peter L . Berger ( 1929 – ) , in his 1963 book Invitation to Sociology : A Humanistic Perspective , describes a sociologist as \" someone concerned with understanding society in a disciplined way . \" <hl> He asserts that sociologists have a natural interest in the monumental moments of people ’ s lives , as well as a fascination with banal , everyday occurrences . <hl> Berger also describes the “ aha ” moment when a sociological theory becomes applicable and understood :", "hl_sentences": "He asserts that sociologists have a natural interest in the monumental moments of people ’ s lives , as well as a fascination with banal , everyday occurrences .", "question": { "cloze_format": "Berger describes sociologists as concerned with ___.", "normal_format": "What does Berger describe sociologists as concerned with?", "question_choices": [ "monumental moments in people’s lives", "common everyday life events", "both a and b", "none of the above" ], "question_id": "fs-id1823323", "question_text": "Berger describes sociologists as concerned with:" }, "references_are_paraphrase": null } ]
1
1.1 What Is Sociology? A dictionary defines sociology as the systematic study of society and social interaction. The word “sociology” is derived from the Latin word socius (companion) and the Greek word logos (study of), meaning “the study of companionship.” While this is a starting point for the discipline, sociology is actually much more complex. It uses many different methods to study a wide range of subject matter and to apply these studies to the real world. What Are Society and Culture? Sociologists study all aspects and levels of society. A society is a group of people whose members interact, reside in a definable area, and share a culture. A culture includes the group’s shared practices, values, and beliefs. One sociologist might analyze video of people from different societies as they carry on everyday conversations to study the rules of polite conversation from different world cultures. Another sociologist might interview a representative sample of people to see how texting has changed the way they communicate. Yet another sociologist might study how migration determined the way in which language spread and changed over time. A fourth sociologist might be part of a team developing signs to warn people living thousands of years in the future, and speaking many different languages, to stay away from still-dangerous nuclear waste. The Sociological Imagination Although these studies and the methods of carrying them out are different, the sociologists involved in them all have something in common. Each of them looks at society using what pioneer sociologist C. Wright Mills called the sociological imagination, sometimes also referred to as the sociological lens or sociological perspective. Mills defined sociological imagination as how individuals understand their own and others’ pasts in relation to history and social structure (1959). By looking at individuals and societies and how they interact through this lens, sociologists are able to examine what influences behavior, attitudes, and culture. By applying systematic and scientific methods to this process, they try to do so without letting their own biases and pre-conceived ideas influence their conclusions. Studying Patterns: How Sociologists View Society All sociologists are interested in the experiences of individuals and how those experiences are shaped by interactions with social groups and society as a whole. To a sociologist, the personal decisions an individual makes do not exist in a vacuum. Cultural patterns and social forces put pressure on people to select one choice over another. Sociologists try to identify these general patterns by examining the behavior of large groups of people living in the same society and experiencing the same societal pressures. The recent turmoil in the U.S. housing market and the high rate of foreclosures offer an example of how a sociologist might explore social patterns. Owning a home has long been considered an essential part of the American Dream. People often work for years to save for a down payment on what will be the largest investment they ever make. The monthly mortgage is often a person’s largest budget item. Missing one or more mortgage payments can result in serious consequences. The lender may foreclose on the mortgage and repossess the property. People may lose their homes and may not be able to borrow money in the future. Walking away from the responsibility to pay debts is not a choice most people make easily. 
About three million homes were repossessed in the United States between 2006 and 2011. Experts predict the number could double by 2013 (Levy and Gopal 2011). This is a much higher rate than the historical average. What social factors are contributing to this situation, and where might sociologists find patterns? Do Americans view debt, including mortgages, differently than in the past? What role do unemployment rates play? Might a shift in class structure be an influential factor? What about the way major economic players operate? To answer these questions, sociologists will look beyond individual foreclosures at national trends. They will see that in recent years unemployment has been at record highs. They will observe that many lenders approved subprime mortgages with adjustable rates that started low and ballooned. They may look into whether unemployment and lending practices were different for members of different social classes, races, or genders. By analyzing the impact of these external conditions on individuals’ choices, sociologists can better explain why people make the decisions they do. Another example of how society influences individual decisions can be seen in people’s opinions about and use of food stamps (also known as the Supplemental Nutrition Assistance Program, or SNAP benefits). Some people believe that those who receive food stamps are lazy and unmotivated. Statistics from the United States Department of Agriculture show a complex picture.

Table 1.1 Food Stamp Use by State (percent eligible, by reason for eligibility)

State                | Living in Waiver Area | Have Not Exceeded Time Limits | In E&T Program | Received Exemption | Total Percent Eligible for the FSP
Alabama              | 29                    | 62/72                         | 0              | 1                  | 73/80
Alaska               | 100                   | 62/72                         | 0              | 0                  | 100
California           | 6                     | 62/72                         | 0              | 0                  | 64/74
District of Columbia | 100                   | 62/72                         | 0              | 0                  | 100
Florida              | 48                    | 62/72                         | 0              | 0                  | 80/85
Mississippi          | 39                    | 62/72                         | 0              | 3                  | 100
Wyoming              | 7                     | 62/72                         | 0              | 0                  | 64/74

Sociologists examine social conditions in different states to explain differences in the number of people receiving food stamps. (Table courtesy of U.S. Department of Agriculture)

The percentage of the population receiving food stamps is much higher in certain states than in others. Does this mean, if the stereotype above were applied, that people in some states are lazier and less motivated than those in other states? Sociologists study the economies in each state—comparing unemployment rates, food, energy costs, and other factors—to explain differences in social issues like this. To identify social trends, sociologists also study how people use food stamps and how people react to their use. Research has found that for many people from all classes, there is a strong stigma attached to the use of food stamps. This stigma can prevent people who qualify for this type of assistance from using food stamps. According to Hanson and Gundersen (2002), how strongly this stigma is felt is linked to the general economic climate. This illustrates how sociologists observe a pattern in society. Sociologists identify and study patterns related to all kinds of contemporary social issues. The “don’t ask, don’t tell” policy, the emergence of the Tea Party as a political faction, how Twitter has influenced everyday communication—these are all examples of topics that sociologists might explore. Studying Part and Whole: How Sociologists View Social Structures A key basis of the sociological perspective is the concept that the individual and society are inseparable. It is impossible to study one without the other.
German sociologist Norbert Elias called the process of simultaneously analyzing the behavior of individuals and the society that shapes that behavior figuration. He described it through a metaphor of dancing. There can be no dance without the dancers, but there can be no dancers without the dance. Without the dancers, a dance is just an idea about motions in a choreographer’s head. Without a dance, there is just a group of people moving around a floor. Similarly, there is no society without the individuals that make it up, and there are also no individuals who are not affected by the society in which they live (Elias 1978). An application that makes this concept understandable is the practice of religion. While people experience their religion in a distinctly individual manner, religion exists in a larger social context. For instance, an individual’s religious practice may be influenced by what government dictates, holidays, teachers, places of worship, rituals, and so on. These influences underscore the important relationship between individual practices of religion and social pressures that influence that religious experience. Individual-Society Connections When sociologist Nathan Kierns spoke to his friend Ashley (a pseudonym) about the move she and her partner had made from an urban center to a small Midwestern town, he was curious how the social pressures placed on a lesbian couple differed from one community to the other. Ashley said that in the city they had been accustomed to getting looks and hearing comments when she and her partner walked hand in hand. Otherwise, she felt that they were at least being tolerated. There had been little to no outright discrimination. Things changed when they moved to the small town for her partner’s job. For the first time, Ashley found herself experiencing direct discrimination because of her sexual orientation. Some of it was particularly hurtful. Landlords would not rent to them. Ashley, who is a highly trained professional, had a great deal of difficulty finding a new job. When Nathan asked Ashley if she and her partner became discouraged or bitter about this new situation, Ashley said that rather than letting it get to them, they decided to do something about it. Ashley approached groups at a local college and several churches in the area. Together they decided to form the town's first gay-straight alliance. The alliance has worked successfully to educate their community about same-sex couples. It also worked to raise awareness about the kinds of discrimination Ashley and her partner experienced in the town and how those could be eliminated. The alliance has become a strong advocacy group, working to attain equal rights for LGBT individuals. Kierns observed that this is an excellent example of how negative social forces can result in a positive response from individuals to bring about social change (Kierns 2011). 1.2 The History of Sociology Since ancient times, people have been fascinated by the relationship between individuals and the societies to which they belong. Many of the topics that are central to modern sociological scholarship were studied by ancient philosophers. Many of these earlier thinkers were motivated by their desire to describe an ideal society. In the 13th century, Ma Tuan-Lin, a Chinese historian, first recognized social dynamics as an underlying component of historical development in his seminal encyclopedia, General Study of Literary Remains.
The next century saw the emergence of the historian some consider to be the world’s first sociologist: Ibn Khaldun (1332–1406) of Tunisia. He wrote about many topics of interest today, setting a foundation for both modern sociology and economics, including a theory of social conflict, a comparison of nomadic and sedentary life, a description of political economy, and a study connecting a tribe’s social cohesion to its capacity for power (Hannoum 2003). In the 18th century, Age of Enlightenment philosophers developed general principles that could be used to explain social life. Thinkers such as John Locke, Voltaire, Immanuel Kant, and Thomas Hobbes responded to what they saw as social ills by writing on topics that they hoped would lead to social reform. The early 19th century saw great changes with the Industrial Revolution, increased mobility, and new kinds of employment. It was also a time of great social and political upheaval with the rise of empires that exposed many people—for the first time—to societies and cultures other than their own. Millions of people were moving into cities and many people were turning away from their traditional religious beliefs. The Father of Sociology The term sociology was first coined in 1780 by the French essayist Emmanuel-Joseph Sieyès (1748–1836) in an unpublished manuscript (Fauré et al. 1999). In 1838, the term was reinvented by Auguste Comte (1798–1857). Comte originally studied to be an engineer, but later became a pupil of social philosopher Claude Henri de Rouvroy Comte de Saint-Simon (1760–1825). They both thought that society could be studied using the same scientific methods utilized in natural sciences. Comte also believed in the potential of social scientists to work toward the betterment of society. He held that once scholars identified the laws that governed society, sociologists could address problems such as poor education and poverty (Abercrombie et al. 2000). Comte named the scientific study of social patterns positivism. He described his philosophy in a series of books called The Course in Positive Philosophy (1830–1842) and A General View of Positivism (1848). He believed that using scientific methods to reveal the laws by which societies and individuals interact would usher in a new “positivist” age of history. While the field and its terminology have grown, sociologists still believe in the positive impact of their work. Karl Marx Karl Marx (1818–1883) was a German philosopher and economist. In 1848 he and Friedrich Engels (1820–1895) coauthored the Communist Manifesto. This book is one of the most influential political manuscripts in history. It also presents Marx's theory of society, which differed from what Comte proposed. Marx rejected Comte's positivism. He believed that societies grew and changed as a result of the struggles of different social classes over the means of production. At the time he was developing his theories, the Industrial Revolution and the rise of capitalism led to great disparities in wealth between the owners of the factories and workers. Capitalism, an economic system characterized by private or corporate ownership of goods and the means to produce them, grew in many nations. Marx predicted that inequalities of capitalism would become so extreme that workers would eventually revolt. This would lead to the collapse of capitalism, which would be replaced by communism.
Communism is an economic system under which there is no private or corporate ownership: everything is owned communally and distributed as needed. Marx believed that communism was a more equitable system than capitalism. While his economic predictions may not have come true in the time frame he predicted, Marx’s idea that social conflict leads to change in society is still one of the major theories used in modern sociology. Creating a Discipline In 1873, the English philosopher Herbert Spencer (1820–1903) published The Study of Sociology, the first book with the term “sociology” in the title. Spencer rejected much of Comte’s philosophy as well as Marx's theory of class struggle and his support of communism. Instead, he favored a form of government that allowed market forces to control capitalism. His work influenced many early sociologists including Émile Durkheim (1858–1917). Durkheim helped establish sociology as a formal academic discipline by establishing the first European department of sociology at the University of Bordeaux in 1895 and by publishing his Rules of the Sociological Method in 1895. In another important work, Division of Labour in Society (1893), Durkheim laid out his theory on how societies transformed from a primitive state into a capitalist, industrial society. According to Durkheim, people rise to their proper level in society based on merit. Durkheim believed that sociologists could study objective “social facts” (Poggi 2000). He also believed that through such studies it would be possible to determine if a society was “healthy” or “pathological.” He saw healthy societies as stable, while pathological societies experienced a breakdown in social norms between individuals and society. In 1897, Durkheim attempted to demonstrate the effectiveness of his rules of social research when he published a work titled Suicide. Durkheim examined suicide statistics in different police districts to research differences between Catholic and Protestant communities. He attributed the differences to socioreligious forces rather than to individual or psychological causes. Prominent sociologist Max Weber (1864–1920) established a sociology department in Germany at the Ludwig Maximilians University of Munich in 1919. Weber wrote on many topics related to sociology including political change in Russia and social forces that affect factory workers. He is known best for his 1904 book, The Protestant Ethic and the Spirit of Capitalism. The theory that Weber sets forth in this book is still controversial. Some believe that Weber was arguing that the beliefs of many Protestants, especially Calvinists, led to the creation of capitalism. Others interpret it as simply claiming that the ideologies of capitalism and Protestantism are complementary. Weber also made a major contribution to the methodology of sociological research. Along with other researchers such as Wilhelm Dilthey (1833–1911) and Heinrich Rickert (1863–1936), Weber believed that it was difficult if not impossible to use standard scientific methods to accurately predict the behavior of groups as people hoped to do. They argued that the influence of culture on human behavior had to be taken into account. This even applied to the researchers themselves, who, they believed, should be aware of how their own cultural biases could influence their research. To deal with this problem, Weber and Dilthey introduced the concept of verstehen, a German word that means to understand in a deep way.
In seeking verstehen, outside observers of a social world—an entire culture or a small setting—attempt to understand it from an insider’s point of view. In his book The Nature of Social Action (1922), Weber described sociology as striving to “interpret the meaning of social action and thereby give a causal explanation of the way in which action proceeds and the effects it produces.” He and other like-minded sociologists proposed a philosophy of antipositivism whereby social researchers would strive for subjectivity as they worked to represent social processes, cultural norms, and societal values. This approach led to some research methods whose aim was not to generalize or predict (traditional in science), but to systematically gain an in-depth understanding of social worlds.

The different approaches to research based on positivism or antipositivism are often considered the foundation for the differences found today between quantitative sociology and qualitative sociology. Quantitative sociology uses statistical methods such as surveys with large numbers of participants. Researchers analyze data using statistical techniques to see if they can uncover patterns of human behavior. Qualitative sociology seeks to understand human behavior by learning about it through in-depth interviews, focus groups, and analysis of content sources (like books, magazines, journals, and popular media).

Social Policy and Debate
How Do Working Moms Impact Society?

What constitutes a “typical family” in America has changed tremendously over the past decades. One of the most notable changes has been the increasing number of mothers who work outside the home. Earlier in U.S. society, most family households consisted of one parent working outside the home and the other being the primary childcare provider. Because of traditional gender roles and family structures, this was typically a working father and a stay-at-home mom. Quantitative research shows that in 1940 only 27 percent of all women worked outside the home. Today, 59.2 percent of all women do. Almost half of women with children younger than one year of age are employed (U.S. Congress Joint Economic Committee Report 2010).

Sociologists interested in this topic might approach its study from a variety of angles. One might be interested in its impact on a child’s development, another may explore related economic values, while a third might examine how other social institutions have responded to this shift in society.

A sociologist studying the impact of working mothers on a child’s development might ask questions about children raised in childcare settings. How is a child socialized differently when raised largely by a childcare provider rather than a parent? Do early experiences in a school-like childcare setting lead to improved academic performance later in life? How does a child with two working parents perceive gender roles compared to a child raised with a stay-at-home parent?

Another sociologist might be interested in the increase in working mothers from an economic perspective. Why do so many households today have dual incomes? Has there been a contributing change in social class expectations? What role does the larger economy play in the economic conditions of an individual household? Do people view money—savings, spending, debt—differently than they have in the past?

Curiosity about this trend’s influence on social institutions might lead a researcher to explore its effect on the nation’s educational system.
Has the increase in working mothers shifted traditional family responsibilities onto schools, such as providing lunch and even breakfast for students? How does the creation of after-school care programs shift resources away from traditional academics?

As these examples show, sociologists study many real-world topics. Their research often influences social policies and political issues. Results from sociological studies on this topic might play a role in developing federal laws like the Family and Medical Leave Act, or they might bolster the efforts of an advocacy group striving to reduce social stigmas placed on stay-at-home dads, or they might help governments determine how to best allocate funding for education.

1.3 Theoretical Perspectives

Sociologists study social events, interactions, and patterns. They then develop theories to explain why these occur and what can result from them. In sociology, a theory is a way to explain different aspects of social interactions and to create testable propositions about society (Allan 2006).

For example, early in the development of sociology, Émile Durkheim was interested in explaining the social phenomenon of suicide. He gathered data on large groups of people in Europe who had ended their lives. When he analyzed the data, he found that suicide rates differed among groups with different religious affiliations. For example, the data showed that Protestants were more likely to commit suicide than Catholics. To explain this, Durkheim developed the concept of social solidarity. Social solidarity described the social ties that bind a group of people together such as kinship, shared location, or religion. Durkheim combined these concepts with the data he analyzed to propose a theory that explained the religion-based differences in suicide rates. He suggested that differences in social solidarity between the two groups corresponded to the differences in suicide rates. Although some have disagreed with his methods and his conclusions, Durkheim’s work shows the importance of theory in sociology. Proposing theories supported by data gives sociologists a way to explain social patterns and to posit cause-and-effect relationships in social situations.

Theories vary in scope depending on the scale of the issues they are meant to explain. Grand theories, also described as macro-level, are attempts to explain large-scale relationships and answer fundamental questions such as why societies form and why they change. These theories tend to be abstract and can be difficult if not impossible to test empirically. Micro-level theories are at the other end of the scale and cover very specific relationships between individuals or small groups. They are dependent on their context and are more concrete. This means they are more scientifically testable. An example of a micro-theory would be a theory to explain why middle-class teenage girls text to communicate instead of making telephone calls. A sociologist might develop a hypothesis that the reason they do this is because they think texting is silent and therefore more private. A sociologist might then conduct interviews or design a survey to test this hypothesis. If there is enough supportive data, a hypothesis can become a theory. Sociological theory is constantly evolving and should never be considered complete. Classic sociological theories are still considered important and current, but new sociological theories build upon the work of their predecessors and add to them (Calhoun 2002).
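At its core, the quantitative side of this work begins with simple arithmetic: count the outcome of interest in each group, divide by the group’s size, and scale to a common base so the groups can be compared. The short Python sketch below illustrates only that first step; the group labels and counts are invented for demonstration and are not Durkheim’s actual figures.

# A toy illustration of comparing outcome rates between two groups,
# the starting point of the kind of quantitative analysis described above.
# All numbers are invented; they are not Durkheim's historical data.

def rate_per_100k(events, population):
    """Return events per 100,000 people, a common base for rare outcomes."""
    return events / population * 100_000

# Hypothetical group-level counts.
groups = {
    "Group A": {"events": 120, "population": 800_000},
    "Group B": {"events": 45, "population": 600_000},
}

for name, g in groups.items():
    print(f"{name}: {rate_per_100k(g['events'], g['population']):.1f} per 100,000")

# Group A works out to 15.0 per 100,000 and Group B to 7.5 per 100,000.
# The sociologist's question is then whether a social fact, such as
# differing levels of social solidarity, can explain a gap this large.

Comparing rates on a common base like this is what makes group differences visible at all; the harder work of ruling out chance and rival explanations is what the statistical techniques mentioned above are for.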
In sociology, a few theories provide broad perspectives that help to explain many different aspects of social life. These theories are so prominent that many consider them paradigms. Paradigms are philosophical and theoretical frameworks used within a discipline to formulate theories, generalizations, and the experiments performed in support of them. Three of these paradigms have come to dominate sociological thinking because they provide useful explanations: structural functionalism, conflict theory, and symbolic interactionism.

Table 1.2 Sociological Theories or Perspectives

Sociological Paradigm    | Level of Analysis | Focus
Structural Functionalism | Macro or mid      | How each part of society functions together to contribute to the whole
Conflict Theory          | Macro             | How inequalities contribute to social differences and perpetuate differences in power
Symbolic Interactionism  | Micro             | One-to-one interactions and communications

Different sociological perspectives enable sociologists to view social issues through a variety of useful lenses.

Functionalism

Functionalism, also called structural functional theory, sees society as a structure with interrelated parts designed to meet the biological and social needs of individuals who make up that society. It is the oldest of the main theories of sociology. In fact, its origins began before sociology emerged as a formal discipline. It grew out of the writings of English philosopher and biologist Herbert Spencer (1820–1903), who likened society to a human body. He argued that just as the various organs in the body work together to keep the entire system functioning and regulated, the various parts of society work together to keep the entire society functioning and regulated (Spencer 1898). By parts of society, Spencer was referring to such social institutions as the economy, political systems, healthcare, education, media, and religion. Spencer continued the analogy by pointing out that societies evolve just as the bodies of humans and other animals do (Maryanski and Turner 1992).

One of the founders of sociology, Émile Durkheim, applied Spencer’s analogy to explain the structure of societies and how they change and survive over time. Durkheim believed that earlier, more primitive societies were held together because most people performed similar tasks and shared values, language, and symbols. They exchanged goods and services in similar ways. Modern societies, according to Durkheim, were more complex. People served many different functions in society and their ability to carry out their function depended upon others being able to carry out theirs. Durkheim’s theory sees society as a complex system of interrelated parts, working together to maintain stability (Durkheim 1893). According to this sociological viewpoint, the parts of society are interdependent. This means each part influences the others. In a healthy society, all of these parts work together to produce a stable state called dynamic equilibrium (Parsons 1961).

Durkheim believed that individuals may make up society, but in order to study society, sociologists have to look beyond individuals to social facts. Social facts are the laws, morals, values, religious beliefs, customs, fashions, rituals, and all of the cultural rules that govern social life (Durkheim 1895). Each of these social facts serves one or more functions within a society.
For example, one function of a society’s laws may be to protect society from violence, while another is to punish criminal behavior, while another is to preserve public health.

The English sociologist Alfred Radcliffe-Brown (1881–1955) shared Comte’s and Durkheim’s views. He believed that how these functions worked together to maintain a stable society was controlled by laws that could be discovered through systematic comparison (Broce 1973). Like Durkheim, he argued that explanations of social interactions had to be made at the social level and not involve the wants and needs of individuals (Goldschmidt 1996). He defined the function of any recurrent activity as the part it plays in the social life as a whole, and thereby, the contribution it makes to structural continuity (Radcliffe-Brown 1952).

Another noted structural functionalist, Robert Merton (1910–2003), pointed out that social processes often have many functions. Manifest functions are the consequences of a social process that are sought or anticipated, while latent functions are the unsought consequences of a social process. A manifest function of college education, for example, includes gaining knowledge, preparing for a career, and finding a good job that utilizes that education. Latent functions of your college years include meeting new people, participating in extracurricular activities, or even finding a spouse or partner. Another latent function of education is creating a hierarchy of employment based on the level of education attained. Latent functions can be beneficial, neutral, or harmful. Social processes that have undesirable consequences for the operation of society are called dysfunctions. In education, examples of dysfunction include getting bad grades, truancy, dropping out, not graduating, and not finding suitable employment.

Criticism

Structural-functionalism was the sociological paradigm that prevailed between World War II and the Vietnam War. Its influence declined in the 1960s and 1970s because many sociologists believed that it could not adequately explain the many rapid social changes taking place at the time. Many sociologists now believe that structural functionalism is no longer useful as a macro-level theory, but that it does serve a useful purpose in many mid-range analyses.

Big Picture
A Global Culture?

Sociologists around the world are looking closely for signs of what would be an unprecedented event: the emergence of a global culture. In the past, empires such as those that existed in China, Europe, Africa, and Central and South America linked people from many different countries, but those people rarely became part of a common culture. They lived too far from each other, spoke different languages, practiced different religions, and traded few goods. Today, increases in communication, travel, and trade have made the world a much smaller place. More and more people are able to communicate with each other instantly—wherever they are located—by telephone, video, and text. They share movies, television shows, music, games, and information over the internet. Students can study with teachers and pupils from the other side of the globe. Governments find it harder to hide conditions inside their countries from the rest of the world. Sociologists are researching many different aspects of this potential global culture.
Some are exploring the dynamics involved in the social interactions of global online communities, such as when members feel a closer kinship to other group members than to people residing in their own country. Other sociologists are studying the impact this growing international culture has on smaller, less-powerful local cultures. Yet other researchers are exploring how international markets and the outsourcing of labor impact social inequalities. Sociology can play a key role in people’s ability to understand the nature of this emerging global culture and how to respond to it.

Conflict Theory

Another theory with a macro-level view, called conflict theory, looks at society as a competition for limited resources. Conflict theory sees society as being made up of individuals who must compete for social, political, and material resources such as political power, leisure time, money, housing, and entertainment. Social structures and organizations such as religious groups, governments, and corporations reflect this competition in their inherent inequalities. Some individuals and organizations are able to obtain and keep more resources than others. These “winners” use their power and influence to maintain their positions of power in society and to suppress the advancement of other individuals and groups.

Of the early founders of sociology, Karl Marx is most closely identified with this theory. He focused on the economic conflict between different social classes. As he and Friedrich Engels famously described in their Communist Manifesto, “the history of all hitherto existing society is the history of class struggles. Freeman and slave, patrician and plebeian, lord and serf, guild-master and journeyman, in a word, oppressor and oppressed” (1848).

Building on this foundation, the Polish-Austrian sociologist Ludwig Gumplowicz (1838–1909) expanded Marx’s ideas into his own version of conflict theory, drawing on his knowledge of how civilizations evolve. In Outlines of Sociology (1884), he argues that war and conquest are the basis on which civilizations have been shaped. He believed that cultural and ethnic conflicts led to states being identified and defined by a dominant group that had power over other groups (Irving 2007).

The German sociologist Max Weber agreed with Marx that the economic inequalities of the capitalist system were a source of widespread conflict. However, he disagreed that the conflict must lead to revolution and the collapse of capitalism. Weber theorized that there was more than one cause for conflict: besides economics, inequalities could exist over political power and social status. The level of inequalities could also be different for different groups based on education, race, or gender. As long as these conflicts remained separate, the system as a whole was not threatened. Weber also identified several factors that moderated people’s reaction to inequality. If the authority of the people in power was considered legitimate by those over whom they had power, then conflicts were less intense. Other moderating factors were high rates of social mobility and low rates of class difference.

Another German sociologist, Georg Simmel (1858–1918), wrote that conflict can in fact help integrate and stabilize a society. Like Weber, Simmel said that the nature of social conflict was highly variable.
The intensity and violence of the conflict depended upon the emotional involvement of the different sides, the degree of solidarity among the opposing groups, and whether there were clear and limited goals to be achieved. Simmel also said that frequent smaller conflicts would be less violent than a few large conflicts.

Simmel also studied how conflict changes the parties involved. He showed that during conflict, groups work to increase their internal solidarity, centralize power, reduce dissent, and become less tolerant of those not in the group. Resolving conflicts can release tension and hostility and pave the way for future agreements.

More recently, conflict theory has been used to explain inequalities between groups based on gender or race. Janet Saltzman Chafetz (1941–2006) was a leader in the field of feminist conflict theory. In her books Masculine/Feminine or Human (1974), Feminist Sociology (1988), and Gender Equity (1990), as well as in other studies, Chafetz used conflict theory to present a set of models to explain the forces maintaining a system of gender inequality as well as a theory of how such a system can be changed. She argues that two types of forces sustain a system of gender inequality. One type of force is coercive and is based on the advantages men have in finding, keeping, and advancing in positions within the workforce. The other depends on the voluntary choices individuals make based on the gender roles that have been passed down through their families. Chafetz argues that the system can be changed through changes in the number and types of jobs available to increasingly large numbers of well-educated women entering the workforce (Turner 2003).

Criticism

Just as structural functionalism was criticized for focusing too much on the stability of societies, conflict theory has been criticized because it tends to focus on conflict to the exclusion of recognizing stability. Many social structures are extremely stable or have gradually progressed over time rather than changing abruptly as conflict theory would suggest.

Sociology in the Real World
Farming and Locavores: How Sociological Perspectives Might View Food Consumption

The consumption of food is a commonplace, daily occurrence, yet it can also be associated with important moments in our lives. Eating can be an individual or a group action, and eating habits and customs are influenced by our cultures. In the context of society, our nation’s food system is at the core of numerous social movements, political issues, and economic debates. Any of these factors might become a topic of sociological study.

A structural-functional approach to the topic of food consumption might be interested in the role of the agriculture industry within the nation’s economy and how this has changed from the early days of manual-labor farming to modern mechanized production. Another examination might study the different functions that occur in food production: from farming and harvesting to flashy packaging and mass consumerism.

A conflict theorist might be interested in the power differentials present in the regulation of food, exploring where people’s right to information intersects with corporations’ drive for profit and how the government mediates those interests. Or a conflict theorist might be interested in the power and powerlessness experienced by local farmers versus large farming conglomerates, such as the documentary Food Inc. depicts as resulting from Monsanto’s patenting of seed technology.
Another topic of study might be how nutrition varies between different social classes.

A sociologist viewing food consumption through a symbolic interactionist lens would be more interested in micro-level topics, such as the symbolic use of food in religious rituals, or the role it plays in the social interaction of a family dinner. This perspective might also study the interactions among group members who identify themselves based on their sharing a particular diet, such as vegetarians (people who don’t eat meat) or locavores (people who strive to eat locally produced food).

Symbolic Interactionist Theory

Symbolic interactionism provides a theoretical perspective that helps scholars examine the relationship of individuals within their society. This perspective is centered on the notion that communication—or the exchange of meaning through language and symbols—is how people make sense of their social worlds. As pointed out by Herman and Reynolds (1994), this viewpoint sees people as active in shaping their world, rather than as entities who are acted upon by society. This approach looks at society and people from a micro-level perspective.

George Herbert Mead (1863–1931) is considered one of the founders of symbolic interactionism, though he never published his work on it (LaRossa and Reitzes 1993). It was up to his student Herbert Blumer (1900–1987) to interpret Mead’s work and popularize the theory. Blumer coined the term “symbolic interactionism” and identified its three basic premises:

1. Humans act toward things on the basis of the meanings they ascribe to those things.
2. The meaning of such things is derived from, or arises out of, the social interaction that one has with others and the society.
3. These meanings are handled in, and modified through, an interpretative process used by the person in dealing with the things he/she encounters (Blumer 1969).

Social scientists who apply symbolic-interactionist thinking look for patterns of interaction between individuals. Their studies often involve observation of one-on-one interactions. For example, while a conflict theorist studying a political protest might focus on class difference, a symbolic interactionist would be more interested in how individuals in the protesting group interact, as well as the signs and symbols protesters use to communicate their message.

The focus on the importance of symbols in building a society led sociologists like Erving Goffman (1922–1982) to develop a technique called dramaturgical analysis. Goffman used theater as an analogy for social interaction and recognized that people’s interactions showed patterns of cultural “scripts.” Because it can be unclear what part a person may play in a given situation, he or she has to improvise his or her role as the situation unfolds (Goffman 1958).

Studies that use the symbolic interactionist perspective are more likely to use qualitative research methods, such as in-depth interviews or participant observation, because they seek to understand the symbolic worlds in which research subjects live.

Criticism

Research done from this perspective is often scrutinized because of the difficulty of remaining objective. Others criticize the extremely narrow focus on symbolic interaction. Proponents, of course, consider this one of its greatest strengths.

1.4 Why Study Sociology?

When Elizabeth Eckford tried to enter Central High School in Little Rock, Arkansas, in September 1957, she was met by an angry crowd. But she knew she had the law on her side.
Three years earlier, in the landmark Brown vs. the Board of Education case, the U.S. Supreme Court had overturned 21 state laws that allowed blacks and whites to be taught in separate school systems as long as the school systems were “equal.” One of the major factors influencing that decision was research conducted by the husband-and-wife team of sociologists, Kenneth and Mamie Clark. Their research showed that segregation was harmful to young black schoolchildren, and the Court found that harm to be unconstitutional.

Since it was first founded, many people interested in sociology have been driven by the scholarly desire to contribute knowledge to this field, while others have seen it as a way not only to study society, but also to improve it. Besides desegregation, sociology has played a crucial role in many important social reforms such as equal opportunity for women in the workplace, improved treatment for individuals with mental handicaps or learning disabilities, increased accessibility and accommodation for people with physical handicaps, the right of native populations to preserve their land and culture, and prison system reforms.

The prominent sociologist Peter L. Berger (1929– ), in his 1963 book Invitation to Sociology: A Humanistic Perspective, describes a sociologist as “someone concerned with understanding society in a disciplined way.” He asserts that sociologists have a natural interest in the monumental moments of people’s lives, as well as a fascination with banal, everyday occurrences. Berger also describes the “aha” moment when a sociological theory becomes applicable and understood:

[T]here is a deceptive simplicity and obviousness about some sociological investigations. One reads them, nods at the familiar scene, remarks that one has heard all this before and don’t people have better things to do than to waste their time on truisms—until one is suddenly brought up against an insight that radically questions everything one had previously assumed about this familiar scene. This is the point at which one begins to sense the excitement of sociology. (Berger 1963)

Sociology can be exciting because it teaches people ways to recognize how they fit into the world and how others perceive them. Looking at themselves and society from a sociological perspective helps people see where they connect to different groups based on the many different ways they classify themselves and how society classifies them in turn. It raises awareness of how those classifications—such as economic and status levels, education, ethnicity, or sexual orientation—affect perceptions.

Sociology teaches people not to accept easy explanations. It teaches them a way to organize their thinking so that they can ask better questions and formulate better answers. It makes people more aware that there are many different kinds of people in the world who do not necessarily think the way they do. It increases their willingness and ability to try to see the world from other people’s perspectives. This prepares them to live and work in an increasingly diverse and integrated world.

Sociology in the Workplace

Employers continue to seek people with what are called “transferable skills.” This means that they want to hire people whose knowledge and education can be applied in a variety of settings and whose skills will contribute to various tasks.
Studying sociology can provide people with this wide knowledge and a skill set that can contribute to many workplaces, including:

- an understanding of social systems and large bureaucracies,
- the ability to devise and carry out research projects to assess whether a program or policy is working,
- the ability to collect, read, and analyze statistical information from polls or surveys,
- the ability to recognize important differences in people’s social, cultural, and economic backgrounds,
- skills in preparing reports and communicating complex ideas, and
- the capacity for critical thinking about social issues and problems that confront modern society. (Department of Sociology, University of Alabama)

Sociology prepares people for a wide variety of careers. Besides actually conducting social research or training others in the field, people who graduate from college with a degree in sociology are hired by government agencies and corporations in fields such as social services, counseling (e.g., family planning, career, substance abuse), community planning, health services, marketing, market research, and human resources. Even a small amount of training in sociology can be an asset in careers like sales, public relations, journalism, teaching, law, and criminal justice.

Please “Friend” Me: Students and Social Networking

The phenomenon known as Facebook was designed specifically for students. Whereas earlier generations wrote notes in each other’s printed yearbooks at the end of the academic year, modern technology and the internet ushered in dynamic new ways for people to interact socially. Instead of having to meet up on campus, students can call, text, and Skype from their dorm rooms. Instead of a study group gathering weekly in the library, online forums and chat rooms help learners connect. The availability and immediacy of computer technology has forever changed the ways students engage with each other.

Now, after several social networks have vied for primacy, a few have established their place in the market and some have attracted niche audiences. While Facebook launched the social networking trend geared toward teens and young adults, now people of all ages are actively “friending” each other. LinkedIn distinguished itself by focusing on professional connections, serving as a virtual world for workplace networking. Newer offshoots like Foursquare help people connect based on the real-world places they frequent, while Twitter has cornered the market on brevity.

These newer modes of social interaction have also spawned harmful consequences, such as cyberbullying and what some call FAD, or Facebook Addiction Disorder. Researchers have also examined other potential negative impacts, such as whether Facebooking lowers a student’s GPA, or whether there might be long-term effects of replacing face-to-face interaction with social media. All of these social networks demonstrate emerging ways that people interact, whether positive or negative. They illustrate how sociological topics are alive and changing today. Social media will most certainly be a developing topic in the study of sociology for decades to come.
Principles of Accounting, Volume 2: Managerial Accounting
Summary

13.1 Describe Sustainability and the Way It Creates Business Value

Users of financial reports want to know whether businesses are making appropriate decisions not only to increase shareholder wealth, but also to sustain the business, and the world around it, into the future. This management goal is called business sustainability. Although the U.S. has pulled out of the Paris Climate Agreement, many companies have announced their own commitment to maintain the spirit of the Agreement. Early ventures into sustainability practices and reporting often arose in response to negative events and even tragedies as communities demanded more accountability by companies that operated within those communities. Many businesses have chosen to develop sustainable business practices because they realize doing so can provide positive benefits, not just to society and the environment, but also to the long-term viability of their own business.

13.2 Identify User Needs for Information

Users of sustainability reporting information are not just primary users such as shareholders and lenders but can also be secondary users such as employees, customers, the community, governments, and regulators. Shareholders concern themselves with the future viability of the company and want profits to be sustained or increased over the long term. Lenders want to know the company borrowing from them does not have any going-concern risks that could affect its ability to repay the loan. Employees and potential employees want assurance that they will be fairly compensated, that the workplace is safe and the employer ethical, and that all employees have equal rights and opportunities, regardless of gender, race, religion, or sexual orientation. Customers want to know the companies to which they give their money reflect their own values and beliefs. Governments and regulators want to be able to see that a company is behaving responsibly. Communities want to know the organization is behaving at the level of society’s expectations. This information need reflects the existence of a social contract, the expectation that companies will hold to an unwritten contract with society as a whole.

13.3 Discuss Examples of Major Sustainability Initiatives

Materiality describes how significant an event or issue is to warrant its inclusion or discussion. The not-for-profit Global Reporting Initiative (GRI) provides companies with guidance about how to report sustainability, identifies common themes and components for reports, and in 2016 produced its first set of global reporting standards. According to GRI, 92% of the Global 250 produced sustainability reports in 2016. The Sustainability Accounting Standards Board (SASB) was established in 2011 to develop standards for disclosure of material sustainability information to investors. SASB adopted the view of materiality taken by the U.S. Supreme Court, that information is material if there is “a substantial likelihood that the disclosure of the omitted fact would have been viewed by the reasonable investor as having significantly altered the ‘total mix’ of information made available.”[87] SASB standards are available for 79 industries across 10 sectors.

[87] Sustainability Accounting Standards Board (SASB). Hardware: Sustainability Accounting Standard. April 2014.
https://www.sasb.org/wp-content/uploads/2014/04/SASB_Standard_Hardware_Provisional.pdf

The International Integrated Reporting Council (IIRC) was formed in 2010 to improve the quality of information provided to investors and lenders; promote a more cohesive and efficient approach to corporate reporting that draws on different reporting strands; enhance accountability and stewardship for six types of capital (financial, manufactured, intellectual, human, social and relationship, and natural); and support integrated thinking, decision-making, and actions so as to create value.

13.4 Future Issues in Sustainability

Innovation, security risks, and globalization mean that businesses must adapt quickly or risk becoming obsolete. Artificial intelligence is predicted to significantly change our lives in the future. Some of those changes may threaten the stability of employment for white-collar workers. Workers must learn to be multiskilled and innovative, and they must develop strong analytical minds.
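The frameworks summarized above define what a company should disclose; much of the underlying accounting work is aggregation and a first-pass materiality screen. As a purely illustrative sketch in Python (the facility names, emission categories, figures, and the 5% cutoff are all assumptions for demonstration, not anything GRI, SASB, or the IIRC prescribes), a preparer tallying greenhouse-gas data for a sustainability report might start like this:

# A minimal sketch of aggregating emissions data for a sustainability
# report and flagging large line items. Every name, number, and the 5%
# threshold here is hypothetical, chosen only to illustrate the idea.

from collections import defaultdict

# Hypothetical records: (facility, emission category, metric tons CO2e).
records = [
    ("Plant 1", "purchased electricity", 12_400),
    ("Plant 1", "on-site fuel use", 8_900),
    ("Plant 2", "purchased electricity", 15_100),
    ("Plant 2", "fleet vehicles", 1_500),
]

totals = defaultdict(float)
for _facility, category, tons in records:
    totals[category] += tons

grand_total = sum(totals.values())
print(f"Total: {grand_total:,.0f} metric tons CO2e")
for category, tons in sorted(totals.items(), key=lambda kv: -kv[1]):
    share = tons / grand_total
    flag = " (material under the illustrative 5% cutoff)" if share >= 0.05 else ""
    print(f"  {category}: {tons:,.0f} t ({share:.0%}){flag}")

A purely quantitative screen like this would only be a starting point: the standards discussed in this chapter treat materiality qualitatively as well, asking whether omitting the item would alter the “total mix” of information available to a reasonable investor.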
Chapter Outline

13.1 Describe Sustainability and the Way It Creates Business Value
13.2 Identify User Needs for Information
13.3 Discuss Examples of Major Sustainability Initiatives
13.4 Future Issues in Sustainability

Why It Matters

Gina studies supply chain management at a local university. Last summer, she worked at a manufacturing plant for a major auto manufacturer. She enjoyed her experience and learned quite a bit about the manufacturing and supply chain process, and she spent a significant amount of time on the production floor learning how the supply chain process affects the assembly of the vehicles. Gina felt she was well paid and she liked her colleagues. This summer, she has a comparable position and compensation with a different auto manufacturer. She is curious to see how the two companies compare.

One of the first things Gina notices is the number of reminders posted around the plant to save and conserve energy. There are procedures in place to save energy when machines are idle, and sensors that turn off lights when no one is in the offices or break room. Gina also heard fellow employees talking about taking paid time off to volunteer at local charities. Her supervisor has asked her to be one of the speakers at presentations given throughout the year at local schools as part of a project to promote school-age girls entering technical fields. She also visited the company’s research and development symposium and learned how the company is trying to improve fuel efficiency and move away from cars that use fossil fuels.

Gina never noticed initiatives like these at her position the prior summer. And though she enjoyed that job, she feels better about the current manufacturer because she realizes the company is trying to accomplish goals in addition to making money for its shareholders. Her current employer takes steps to promote the well-being of its employees, the community, and the environment. When Gina asks one of her professors about the difference, she learns that her current employer is more involved in corporate social responsibility and that the company’s sustainability reports will provide more information. Gina decides to learn more about sustainability reporting.
[ { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> In December 2015 , 196 nations adopted the Paris Climate Agreement , a historic plan to work together to limit the increase of global temperatures to 1.5 ° C . <hl> The Agreement aims to help delay or avoid some of the worst consequences of climate change within a system of transparency and accountability in which each nation can evaluate the progress of the others .", "hl_sentences": "In December 2015 , 196 nations adopted the Paris Climate Agreement , a historic plan to work together to limit the increase of global temperatures to 1.5 ° C .", "question": { "cloze_format": "The ___ is an agreement that 196 nations adopted in December 2015.", "normal_format": "Which agreement did 196 nations adopt in December 2015?", "question_choices": [ "Oslo Accord", "Paris Climate Agreement", "Kyoto Agreement", "Copenhagen Accord" ], "question_id": "fs-idm374084592", "question_text": "Which agreement did 196 nations adopt in December 2015?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "1.5 °C" }, "bloom": null, "hl_context": "<hl> In December 2015 , 196 nations adopted the Paris Climate Agreement , a historic plan to work together to limit the increase of global temperatures to 1.5 ° C . <hl> The Agreement aims to help delay or avoid some of the worst consequences of climate change within a system of transparency and accountability in which each nation can evaluate the progress of the others .", "hl_sentences": "In December 2015 , 196 nations adopted the Paris Climate Agreement , a historic plan to work together to limit the increase of global temperatures to 1.5 ° C .", "question": { "cloze_format": "The 2015 Paris Agreement on Climate Change aimed to limit the increase of global temperatures to ________.", "normal_format": "The 2015 Paris Agreement on Climate Change aimed to limit the increase of global temperatures to which of the following?", "question_choices": [ "0.5 °C", "1.0 °C", "1.5 °C", "2.0 °C" ], "question_id": "fs-idm366999376", "question_text": "The 2015 Paris Agreement on Climate Change aimed to limit the increase of global temperatures to ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "The fact that these companies and others are run by CEOs whose primary objective is to make a profit does not mean they live in a vacuum , unaware of their effects on the larger world . As mentioned , responsible companies today are concerned not only about their economic performance , but also about their effects on the environment and society . Recall , corporate social responsibility ( CSR ) is the set of steps that firms take to bear responsibility for their impact on the environment and social well-being . <hl> Even if some managers are not personally guided by these motivations , good corporate citizenship makes good business sense . 
<hl>", "hl_sentences": "Even if some managers are not personally guided by these motivations , good corporate citizenship makes good business sense .", "question": { "cloze_format": "Good corporate citizenship ________.", "normal_format": "Which of the following is correct about good corporate citizenship?", "question_choices": [ "is expensive to implement and does not guarantee returns", "must have management’s sincere convictions behind it in order to succeed", "is more relevant in countries with less regulation.", "makes good business sense" ], "question_id": "fs-idm373531632", "question_text": "Good corporate citizenship ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "It meets the needs of the present without compromising the ability of future generations to meet their own needs." }, "bloom": null, "hl_context": "A sustainability report presents the economic , environmental and social effects that a corporation or organization was responsible for during the course of everyday business . Sustainability reporting aims to respond to the idea that companies can be held accountable for sustainability . <hl> In 1987 , the former Norwegian Prime Minister , Gro Harlem Brundtland , chaired a World Commission on Environment and Development to both formulate proposals and increase understanding of and commitment to environment and development . <hl> <hl> The resulting Brundtland Commission Report laid the groundwork for the concept of sustainable development ( Figure 13.3 ) . <hl> <hl> This was defined as “ development that meets the needs of the present without compromising the ability of future generations to meet their own needs . ” 2 2 NGO Committee on Education . <hl> “ Report of the World Commission on Environment and Development : Our Common Future . ” UN Documents : Gathering a Body of Global Agreements . August 4 , 1987 . http://www.un-documents.net/wced-ocf.htm A primary goal of any business is to maximize shareholder or owner wealth and thus continue operating into the future . However , in making decisions to be profitable and to remain in business into the future , companies must think beyond their own organization and consider other stakeholders . <hl> This approach is a major goal of sustainability , which is meeting the needs of the present generation without compromising the ability of future generations to meet their own needs . <hl> 1 Another concept that is sometimes associated with sustainability is corporate social responsibility ( CSR ) , which is the set of actions that firms take to assume responsibility for their impact on the environment and social well-being . CSR can be used to describe the actions of an individual company or in comparing the actions of multiple corporations . 1 Brundtland Commission . Our Common Future . 1987 .", "hl_sentences": "In 1987 , the former Norwegian Prime Minister , Gro Harlem Brundtland , chaired a World Commission on Environment and Development to both formulate proposals and increase understanding of and commitment to environment and development . The resulting Brundtland Commission Report laid the groundwork for the concept of sustainable development ( Figure 13.3 ) . This was defined as “ development that meets the needs of the present without compromising the ability of future generations to meet their own needs . ” 2 2 NGO Committee on Education . 
This approach is a major goal of sustainability , which is meeting the needs of the present generation without compromising the ability of future generations to meet their own needs .", "question": { "cloze_format": "According to the World Commission on Environment and Development, sustainable development is defined as that ___.", "normal_format": "According to the World Commission on Environment and Development, how is sustainable development defined?", "question_choices": [ "It meets the needs of the future without compromising the ability of the present generations to meet their own needs.", "It applies the fairness doctrine that no generation, present or future, will be disadvantaged in their ability to meet their own needs.", "It meets the needs of the present without compromising the ability of future generations to meet their own needs.", "none of the above" ], "question_id": "fs-idm379358752", "question_text": "According to the World Commission on Environment and Development, how is sustainable development defined?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> A sustainability report presents the economic , environmental and social effects that a corporation or organization was responsible for during the course of everyday business . <hl> Sustainability reporting aims to respond to the idea that companies can be held accountable for sustainability . In 1987 , the former Norwegian Prime Minister , Gro Harlem Brundtland , chaired a World Commission on Environment and Development to both formulate proposals and increase understanding of and commitment to environment and development . The resulting Brundtland Commission Report laid the groundwork for the concept of sustainable development ( Figure 13.3 ) . This was defined as “ development that meets the needs of the present without compromising the ability of future generations to meet their own needs . ” 2 2 NGO Committee on Education . “ Report of the World Commission on Environment and Development : Our Common Future . ” UN Documents : Gathering a Body of Global Agreements . August 4 , 1987 . http://www.un-documents.net/wced-ocf.htm", "hl_sentences": "A sustainability report presents the economic , environmental and social effects that a corporation or organization was responsible for during the course of everyday business .", "question": { "cloze_format": "Sustainability reporting can incorporate ___ .", "normal_format": "Sustainability reporting can incorporate which of the following?", "question_choices": [ "environmental reporting", "social reporting", "business viability reporting", "all of the above" ], "question_id": "fs-idm382508432", "question_text": "Sustainability reporting can incorporate which of the following?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "a combination of low staff levels, numerous safety issues, and a lack of immediate employee attention to the problem as pressure built up inside the tank" }, "bloom": null, "hl_context": "<hl> Union Carbide asserts that a disgruntled employee sabotaged the plant by mixing water with the methyl isocyanate to create a reaction . <hl> <hl> Some employees claimed that a worker lacking proper training was ordered by a novice supervisor to wash out a pipe that had not been properly sealed . <hl> Although it was against plant rules , this action may have started the reaction . 17 17 Stuart Diamond . “ The Bhopal Disaster : How It Happened . 
” New York Times . January 28 , 1985 . http://www.nytimes.com/1985/01/28/world/the-bhopal-disaster-how-it-happened.html?pagewanted=all <hl> Employees discovered the leak around 11:30 pm on December 2 . <hl> However , they then decided to take a tea break and did not deal with the leak until two hours later . 12 12 Stuart Diamond . “ The Bhopal Disaster : How It Happened . ” New York Times . January 28 , 1985 . http://www.nytimes.com/1985/01/28/world/the-bhopal-disaster-how-it-happened.html?pagewanted=all <hl> A safety audit two years before had noted numerous problems at the plant , including several implicated in the accident . <hl> 10 10 Juanita Stuart . “ Union Carbide Bhopal Chemical Plant Explosion . ” Worksafe . 2015 . https://worksafe.govt.nz/data-and-research/research/role-of-information-management-disaster-prevention/#lf-doc-34129 Though the plant had ceased production a couple of years earlier , the plant still contained vast quantities of dangerous chemicals . There was still 60 tons of deadly MIC in tanks at the plant , and proper maintenance of the tanks and the containment systems was necessary . <hl> It was later discovered that all the safety systems put into place failed due to lack of maintenance after the plant closed . <hl> 8 8 The Bhopal Medical Appeal . “ Union Carbide ’ s Disaster . ” n . d . http://bhopal.org/what-happened/union-carbides-disaster/", "hl_sentences": "Union Carbide asserts that a disgruntled employee sabotaged the plant by mixing water with the methyl isocyanate to create a reaction . Some employees claimed that a worker lacking proper training was ordered by a novice supervisor to wash out a pipe that had not been properly sealed . Employees discovered the leak around 11:30 pm on December 2 . A safety audit two years before had noted numerous problems at the plant , including several implicated in the accident . It was later discovered that all the safety systems put into place failed due to lack of maintenance after the plant closed .", "question": { "cloze_format": "___ caused Union Carbide ’s deadly gas leak in Bhopal, India, which killed 3,000 and injured 42,000.", "normal_format": "What caused Union Carbide ’s deadly gas leak in Bhopal, India, which killed 3,000 and injured 42,000?", "question_choices": [ "a combination of low staff levels, corruption, pay-offs to employees to keep quiet, and the manager going on vacation the day before the leak", "diversion of funds and resources to a Northern India project that also took staff from the Bhopal plant, plus many safety issues, including fines imposed on community members who camped too close to the plant", "employees’ deciding to have lunch before dealing with the pressure buildup inside the tank and bribes paid to the government employees who inspected the plant", "a combination of low staff levels, numerous safety issues, and a lack of immediate employee attention to the problem as pressure built up inside the tank" ], "question_id": "fs-idm369437280", "question_text": "What caused Union Carbide ’s deadly gas leak in Bhopal, India, which killed 3,000 and injured 42,000?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "A second consequence arose from the fact that preparation of infant formula required sterilized equipment and clean water . 
Both clean water and sterilization were difficult to guarantee in developing nations where mothers may not have understood the requirements for sterilization or may have lacked the fuel or electricity to boil water . Lapses in preparing the formula led to increased risks of infections , including vomiting and diarrhea that , in some cases , proved fatal . UNICEF estimated that formula-fed infants were 14 times more likely 22 to die of diarrhea and four times more likely to die of pneumonia than breast-fed children . <hl> Advocacy groups also argued that dehydration could result if mothers used too much formula and malnutrition could occur if they used too little in an effort to save money . <hl> 23 22 Unicef . “ Improving Breastfeeding , Complementary Foods , and Feeding Practices . ” May 1 , 2018 . https://www.unicef.org/nutrition/index_breastfeeding.html 23 E . Ziegler . “ Adverse Effects of Cow ’ s Milk in Infants . ” Nestlé Nutrition Workshop Senior Pediatric Program . 2007 ( 60 ): 185 – 199 . https://www.ncbi.nlm.nih.gov/pubmed/17664905 <hl> The origins of the boycott go back to the mid - 1970s , when consumer concerns arose about Nestlé ’ s use of aggressive marketing tactics to sell its baby formula in developing countries in Asia , Africa , and Latin America . <hl> Initially new mothers were provided with free samples of formula to feed their babies , a common practice in many hospitals throughout the world . But in developing countries , this led to two negative consequences for mothers and their babies . First , once bottle feeding begins , the demand on the mother ’ s body is reduced and breast milk begins to dry up . Mothers in developing countries were often living in poverty and unable to afford the cost of artificial infant food . Action groups argued that , in Nigeria , the cost of bottle feeding a three-month-old infant was approximately 30 % of the minimum wage , and by the time the child reached six months old , the cost was 47 % . 21 21 Mike Muller . “ The Baby Killer . ” War on Want . March 1974 . http://archive.babymilkaction.org/pdfs/babykiller.pdf", "hl_sentences": "Advocacy groups also argued that dehydration could result if mothers used too much formula and malnutrition could occur if they used too little in an effort to save money . The origins of the boycott go back to the mid - 1970s , when consumer concerns arose about Nestlé ’ s use of aggressive marketing tactics to sell its baby formula in developing countries in Asia , Africa , and Latin America .", "question": { "cloze_format": "Nestlé ’s reputation was damaged when the company was accused of ___ .", "normal_format": "Nestlé ’s reputation was damaged when the company was accused of which of the following?", "question_choices": [ "forcing mothers to buy baby formula within days of delivering their babies", "promoting inadequate nutrition in developing countries", "providing cheap formula to mothers in developing countries, but more expensive to mothers in developed countries", "selling poor quality bottled water to developing countries" ], "question_id": "fs-idm365748512", "question_text": "Nestlé ’s reputation was damaged when the company was accused of which of the following?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "solar" }, "bloom": null, "hl_context": "In April 2017 , the company went several steps further and launched Project Gigaton , inviting their suppliers to commit to reducing GHG emissions by a billion tons by 2030 . 
This would be the equivalent of taking more than 211 million passenger vehicles off the roads for a year . 40 To do this , the company has initiated a number of endeavors to achieve reduced GHG emissions . <hl> These include sourcing 25 % of their total energy for operations from renewable energy sources ( energy that is not depleted when used ) and aiming to increase this to 50 % by 2025 . <hl> 40 Walmart . “ Walmart Launches Project Gigaton to Reduce Emissions in Company ’ s Supply Chain . ” April 19 , 2017 . https://news.walmart.com/2017/04/19/walmart-launches-project-gigaton-to-reduce-emissions-in-companys-supply-chain The environment , human rights , employee relations , and philanthropy are all examples of topics on which corporations often report . When you think of sustainability in business , environmental sustainability might be the first area that comes to mind . Environmental sustainability is defined as rates of resource exploitation can be continued indefinitely without permanently depleting those resources . If these resources cannot be exploited indefinitely at the current rate , then the rate is not considered sustainable . A recent focus of environmental sustainability is climate change impacts . This focus has developed over the past three decades ( although some contributors to climate change , such as pollution , have been a concern for much longer . ) . Climate change , in the context of sustainability , is a change in climate patterns caused by the increased levels of carbon dioxide ( CO 2 ) in the atmosphere attributed mainly to use of fossil fuels . Companies are increasingly expected to measure and reduce their carbon footprint , the amount of CO 2 and other greenhouse gases they generate , in addition to adopting policies that are more environmentally friendly . For example , according to the sustainability report for Coca-Cola , in 2016 the company reduced the amount of CO 2 embedded in the containers that hold their beverages by 14 % . 32 Such corporate policies to reduce their carbon footprint can include reducing waste , especially of resources like water ; switching to paperless record-keeping systems ; designing environmentally friendly packaging ; installing low-energy lighting , heating , and cooling in offices ; recycling ; and offering flexible working hours to minimize the time employees sit in traffic adding auto emissions to the environment . <hl> Industries that use or produce non-renewable resources as sources of energy , such as coal and oil , are significantly challenged to stay relevant in an era of new energy technologies like solar and wind power . <hl> 32 The Coca-Cola Company . “ Infographic : 2016 Sustainability Highlights . ” n . d . https://www.coca-colacompany.com/stories/2016-sustainability-highlights-infographic", "hl_sentences": "These include sourcing 25 % of their total energy for operations from renewable energy sources ( energy that is not depleted when used ) and aiming to increase this to 50 % by 2025 . Industries that use or produce non-renewable resources as sources of energy , such as coal and oil , are significantly challenged to stay relevant in an era of new energy technologies like solar and wind power .", "question": { "cloze_format": "___ energy is renewable.", "normal_format": "Which form of energy is renewable?", "question_choices": [ "solar", "oil", "coal", "nuclear" ], "question_id": "fs-idm362762208", "question_text": "Which form of energy is renewable?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "Following the Brundtland Report , financial statement preparers began to ask how they might communicate not just the financial status of a company ’ s operations but the social and environmental status as well . <hl> The concept of a triple bottom line , also known as TBL or 3BL , was first proposed in 1997 by John Elkington to expand the traditional financial reporting framework so as to capture a firm ’ s social and environmental performance . <hl> Elkington also used the phrase People , Planet , Profit to explain the three focuses of triple bottom line reporting . <hl> By the late 1990s , companies were becoming more aware of triple bottom line reporting and were preparing sustainability reports on their own social , environmental , and economic impact . <hl> Another innovation was life-cycle or full-cost accounting . This reporting method took a “ cradle to grave ” approach to costing that put a price on the disposal of products at the end of their lives and then considered ways to minimize these costs by making adjustments in the design phase . This method also incorporated potential social , environmental , and economic costs ( externalities in the language of economics ) to attempt to identify all of the costs involved in production . For example , one early adopter of life-cycle accounting , Chrysler Corporation , considered all costs associated with each design phase and then made adjustments to the design . When its engineers developed an oil filter for a new vehicle , they estimated the material costs and hidden manufacturing expenses and also looked at liabilities associated with disposal of the filter . They found that the option with the lowest direct costs had hidden disposal costs that meant it was not the cheapest alternative . 30 30 J . Fiksel , J . McDaniel , and D . Spitzley . “ Measuring Product Sustainability . ” The Journal of Sustainable Product Design July , no . 6 ( 1998 ): 7 – 18 .", "hl_sentences": "The concept of a triple bottom line , also known as TBL or 3BL , was first proposed in 1997 by John Elkington to expand the traditional financial reporting framework so as to capture a firm ’ s social and environmental performance . By the late 1990s , companies were becoming more aware of triple bottom line reporting and were preparing sustainability reports on their own social , environmental , and economic impact .", "question": { "cloze_format": "___ is a type of reporting that the Triple Bottom Line does not incorporate.", "normal_format": "Which of the following types of reporting does the Triple Bottom Line not incorporate?", "question_choices": [ "management", "social", "environmental", "economic" ], "question_id": "fs-idm350812080", "question_text": "Which of the following types of reporting does the Triple Bottom Line not incorporate?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "anyone directly or indirectly affected by the organization" }, "bloom": null, "hl_context": "<hl> The concept of the triple bottom line expanded the role of reporting beyond shareholders and investors to a broader range of stakeholders – that is , anyone directly or indirectly affected by the organization , including employees , customers , government entities , regulators , creditors , and the local community . <hl> Naturally , companies may feel their first obligation is to their present and potential investors . 
But it also makes good business sense to consider other stakeholders who can affect the company ’ s livelihood . Let ’ s examine the various users of sustainability reports and their particular information needs . Primary users would be considered shareholders and investors , whereas secondary users would be customers , suppliers , the community and regulators .", "hl_sentences": "The concept of the triple bottom line expanded the role of reporting beyond shareholders and investors to a broader range of stakeholders – that is , anyone directly or indirectly affected by the organization , including employees , customers , government entities , regulators , creditors , and the local community .", "question": { "cloze_format": "Stakeholders are best defined by ___ .", "normal_format": "Which of the following best defines stakeholders ?", "question_choices": [ "investors and lenders", "environmental groups", "anyone directly or indirectly affected by the organization", "groups or individuals financially impacted by the organization" ], "question_id": "fs-idm347029456", "question_text": "Which of the following best defines stakeholders ?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "Investors , including ethical investors , must look to the future of their investments , buying shares that are sustainable for the long term to provide better returns . <hl> A recent Harvard Business Review study showed that socially responsible companies post higher profits and stock performance than those that were not focused on social responsibility . <hl> 60 This result is supported by a Deutsche Bank analysis of more than 2,000 studies dating back to the 1970 ’ s , 90 % of which suggested that socially responsible investing gives better returns than passive investing . 61 60 MoneyShow . “ Socially-Responsible Investing : Earn Better Returns from Good Companies . ” Forbes . August 16 , 2017 . https://www.forbes.com/sites/moneyshow/2017/08/16/socially-responsible-investing-earn-better-returns-from-good-companies/#7f73a8a1623d 61 MoneyShow . “ Socially-Responsible Investing : Earn Better Returns from Good Companies . ” Forbes . August 16 , 2017 . https://www.forbes.com/sites/moneyshow/2017/08/16/socially-responsible-investing-earn-better-returns-from-good-companies/#7f73a8a1623d", "hl_sentences": "A recent Harvard Business Review study showed that socially responsible companies post higher profits and stock performance than those that were not focused on social responsibility .", "question": { "cloze_format": "The statement that is most often the case is ___.", "normal_format": "Which of the following statements is most often the case?", "question_choices": [ "Socially responsible businesses tend to post higher profits than those not focused on social responsibility.", "Companies that are not socially responsible will have better profits, but have a moral obligation to society.", "Socially responsible investing gives poorer returns than non-socially responsible investing.", "Investors are more short termed focus and so socially responsible investing should not be a factor in their investment portfolio." ], "question_id": "fs-idm342767360", "question_text": "Which of the following statements is most often the case?" 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "economic, environmental, social" }, "bloom": null, "hl_context": "Though the GRI has provided a framework , a firm ’ s decision about what to report rests on its definition of materiality . <hl> GRI defines materiality in the context of a sustainability report as follows : “ The report should cover Aspects that : Reflect the organization ’ s significant economic , environmental and social impacts ; or substantively influence the assessments and decisions of stakeholders . ” 72 In its 2016 report , Coca-Cola listed these areas as its primary sustainability goals : 72 Global Reporting Initiative ( GRI ) . <hl> “ G4 Sustainability Reporting Guidelines . Reporting Principles and Standard Disclosures . 2013 . <hl> In 1997 , a not-for-profit organization called the Global Reporting Initiative ( GRI ) was formed with the goal of increasing the number of companies that create sustainability reports as well as to provide those companies with guidance about how to report and establish some consistency in reporting ( such as identifying common themes and components for reports ) . <hl> The idea is that as companies begin to create these reports , they become more aware of their impact on the sustainability of our world and are more likely to make positive changes to improve that impact . According to GRI , 92 % of the Global 250 produced sustainability reports in 2016 .", "hl_sentences": "GRI defines materiality in the context of a sustainability report as follows : “ The report should cover Aspects that : Reflect the organization ’ s significant economic , environmental and social impacts ; or substantively influence the assessments and decisions of stakeholders . ” 72 In its 2016 report , Coca-Cola listed these areas as its primary sustainability goals : 72 Global Reporting Initiative ( GRI ) . In 1997 , a not-for-profit organization called the Global Reporting Initiative ( GRI ) was formed with the goal of increasing the number of companies that create sustainability reports as well as to provide those companies with guidance about how to report and establish some consistency in reporting ( such as identifying common themes and components for reports ) .", "question": { "cloze_format": "The ___ standards are considered universal under the GRI.", "normal_format": "Which standards are considered universal under the GRI?", "question_choices": [ "economic, environmental, social", "foundation, general disclosures, management approach", "foundation, economic, general disclosures", "management approach, economic, social" ], "question_id": "fs-idm353062960", "question_text": "Which standards are considered universal under the GRI?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> Whereas the GRI viewed materiality as the inclusion of information that reflects an organization ’ s significant economic , environmental , and social impacts or its substantial influence on the assessments and decisions of stakeholders , SASB adopted the US Supreme Court ’ s view that information is material if there is “ a substantial likelihood that the disclosure of the omitted fact would have been viewed by the reasonable investor as having significantly altered the ‘ total mix ’ of information made available ” . 
<hl> 79 It is up to the firms to determine whether something is material and needs reporting , and this determination would begin with the initial questions “ Is the topic important to the total mix of information ? ” and “ Would it be of interest to the reasonable investor . 80 79 TSC Indus . v . Northway , Inc . ( 426 U . S . 438 , 449 ( 1976 ) ) . 80 The explanation of SASB ’ s interpretation of “ total mix ” can be viewed on their website . Sustainability Accounting Standards Board ( SASB ) . SASB ’ s Approach to Materiality for the Purpose of Standards Development ( Staff Bulletin No . SB002 - 07062017 ) . July 6 , 2017 . http://library.sasb.org/wp-content/uploads/2017/01/ApproachMateriality-Staff-Bulletin-01192017.pdf?hsCtaTracking=9280788c-d775-4b34-8bc8-5447a06a6d38%7C2e22652a-5486-4854-b68f-73fea01a2414—Ed .", "hl_sentences": "Whereas the GRI viewed materiality as the inclusion of information that reflects an organization ’ s significant economic , environmental , and social impacts or its substantial influence on the assessments and decisions of stakeholders , SASB adopted the US Supreme Court ’ s view that information is material if there is “ a substantial likelihood that the disclosure of the omitted fact would have been viewed by the reasonable investor as having significantly altered the ‘ total mix ’ of information made available ” .", "question": { "cloze_format": "The SASB view on materiality has been adapted from ___ .", "normal_format": "The SASB view on materiality has been adapted from which of the following?", "question_choices": [ "the U.S. Executive branch", "the GRI definition", "a determination by U.S. Congress", "the U.S. Supreme Court" ], "question_id": "fs-idm334071264", "question_text": "The SASB view on materiality has been adapted from which of the following?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "evidence-based, industry-specific, and market-informed" }, "bloom": null, "hl_context": "<hl> The SASB standards , available for 79 industries across 10 sectors , help firms disclose material sustainability factors that are likely to affect financial performance . <hl> <hl> For example , a company that has operations in a developing nation may need to disclose its employment practices in that country to inform users of the risks to which the company is exposed because of its operations . <hl> SASB Standards and Framework to see the current SASB conceptual framework . <hl> Whereas the GRI viewed materiality as the inclusion of information that reflects an organization ’ s significant economic , environmental , and social impacts or its substantial influence on the assessments and decisions of stakeholders , SASB adopted the US Supreme Court ’ s view that information is material if there is “ a substantial likelihood that the disclosure of the omitted fact would have been viewed by the reasonable investor as having significantly altered the ‘ total mix ’ of information made available ” . <hl> <hl> 79 It is up to the firms to determine whether something is material and needs reporting , and this determination would begin with the initial questions “ Is the topic important to the total mix of information ? ” and “ Would it be of interest to the reasonable investor . <hl> 80 79 TSC Indus . v . Northway , Inc . ( 426 U . S . 438 , 449 ( 1976 ) ) . 80 The explanation of SASB ’ s interpretation of “ total mix ” can be viewed on their website . Sustainability Accounting Standards Board ( SASB ) . 
SASB ’ s Approach to Materiality for the Purpose of Standards Development ( Staff Bulletin No . SB002 - 07062017 ) . July 6 , 2017 . http://library.sasb.org/wp-content/uploads/2017/01/ApproachMateriality-Staff-Bulletin-01192017.pdf?hsCtaTracking=9280788c-d775-4b34-8bc8-5447a06a6d38%7C2e22652a-5486-4854-b68f-73fea01a2414—Ed . For this reason , the Sustainability Accounting Standards Board ( SASB ) was established in 2011 . <hl> SASB ’ s mission is to help businesses around the world identify , manage and report on the sustainability topics that matter most to their investors . <hl> <hl> The SASB develops standards for disclosure of material sustainability information to investors , which can meet the disclosure requirements for known trends and uncertainties in the Management Discussion and Analysis section filed with the Securities Exchange Commission . <hl> SASB ’ s version of materiality differs somewhat from the GRI ’ s version .", "hl_sentences": "The SASB standards , available for 79 industries across 10 sectors , help firms disclose material sustainability factors that are likely to affect financial performance . For example , a company that has operations in a developing nation may need to disclose its employment practices in that country to inform users of the risks to which the company is exposed because of its operations . Whereas the GRI viewed materiality as the inclusion of information that reflects an organization ’ s significant economic , environmental , and social impacts or its substantial influence on the assessments and decisions of stakeholders , SASB adopted the US Supreme Court ’ s view that information is material if there is “ a substantial likelihood that the disclosure of the omitted fact would have been viewed by the reasonable investor as having significantly altered the ‘ total mix ’ of information made available ” . 79 It is up to the firms to determine whether something is material and needs reporting , and this determination would begin with the initial questions “ Is the topic important to the total mix of information ? ” and “ Would it be of interest to the reasonable investor . SASB ’ s mission is to help businesses around the world identify , manage and report on the sustainability topics that matter most to their investors . The SASB develops standards for disclosure of material sustainability information to investors , which can meet the disclosure requirements for known trends and uncertainties in the Management Discussion and Analysis section filed with the Securities Exchange Commission .", "question": { "cloze_format": "The fundamental tenets of SASB’s Approach are considered ________.", "normal_format": "What are the fundamental tenets of SASB’s Approach considered?", "question_choices": [ "evidence-based, industry-specific, and market-informed", "industry-specific, interest-based, and value creating", "consensus-based, industry-specific, and actionable", "interest-based, value creating, and market-informed" ], "question_id": "fs-idm346559248", "question_text": "The fundamental tenets of SASB’s Approach are considered ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> As outlined , the Integrated Reporting framework identifies six broad categories of capital used by organizations which are : financial , manufactured , intellectual , human , social and relationship , and natural . 
<hl>", "hl_sentences": "As outlined , the Integrated Reporting framework identifies six broad categories of capital used by organizations which are : financial , manufactured , intellectual , human , social and relationship , and natural .", "question": { "cloze_format": "The number of broad categories of capital that are identified by the Integrated Reporting Framework is ___ .", "normal_format": "How many broad categories of capital are identified by the Integrated Reporting Framework?", "question_choices": [ "2", "4", "6", "8" ], "question_id": "fs-idm344533872", "question_text": "How many broad categories of capital are identified by the Integrated Reporting Framework?" }, "references_are_paraphrase": null } ]
13
13.1 Describe Sustainability and the Way It Creates Business Value A primary goal of any business is to maximize shareholder or owner wealth and thus continue operating into the future. However, in making decisions to be profitable and to remain in business into the future, companies must think beyond their own organization and consider other stakeholders. This approach is a major goal of sustainability, which is meeting the needs of the present generation without compromising the ability of future generations to meet their own needs. 1 Another concept that is sometimes associated with sustainability is corporate social responsibility (CSR), which is the set of actions that firms take to assume responsibility for their impact on the environment and social well-being. CSR can be used to describe the actions of an individual company or in comparing the actions of multiple corporations. 1 Brundtland Commission. Our Common Future. 1987. Just as individuals often make conscious decisions to recycle, reuse items, and reduce their individual negative effect on the environment, so too do most businesses. Corporations affect the world on many different levels—economic, environmental, and social—and many corporations have realized that being good stewards of the world can add value to their business. Companies increase their value, both financial and nonfinancial, in the eyes of consumers and shareholders by heralding their efforts to be good citizens of the globe and the results of those efforts. It is important to note that a corporation’s social and environmental influence is often affected by government policy, both local and federal, and sometimes even internationally through agreements and treaties. The global effort to limit climate change is an example of this influence. In December 2015, 196 nations adopted the Paris Climate Agreement, a historic plan to work together to limit the increase of global temperatures to 1.5 °C. The Agreement aims to help delay or avoid some of the worst consequences of climate change within a system of transparency and accountability in which each nation can evaluate the progress of the others. In June 2017, President Trump announced his intention that the United States withdraw from the Agreement. Five months later, Syria ratified the Agreement, leaving the United States as the only non-participating country in the world. By November 2017, however, a coalition of 20 U.S. states and 50 cities, led by California governor Jerry Brown and former New York City Mayor Michael Bloomberg, had formed (Figure 13.2). During the 23rd UN Climate Change Conference in Germany, the members of this coalition pledged to continue supporting the Agreement. They aim to do this by reducing their carbon output, which is a measure of their carbon dioxide and other greenhouse gas emissions into the atmosphere. In addition to these commitments at the local, state, and national levels, many U.S. companies have also committed to reducing their carbon output, including Walmart, Apple, Disney, Tesla, and Facebook. The fact that these companies and others are run by CEOs whose primary objective is to make a profit does not mean they live in a vacuum, unaware of their effects on the larger world. As mentioned, responsible companies today are concerned not only about their economic performance, but also about their effects on the environment and society.
Recall that corporate social responsibility (CSR) is the set of steps that firms take to bear responsibility for their impact on the environment and social well-being. Even if some managers are not personally guided by these motivations, good corporate citizenship makes good business sense. Historically, companies disclosed financial information in their annual reports to allow investors and creditors to assess how well managers had allocated their economic resources. The public usually learned little about a company’s hiring practices, environmental impact, or safety record unless a violation occurred that was serious enough to make the news. Companies that did not make the news were simply assumed to be doing the right thing. Today, however, as a consequence of social media platforms such as Facebook and Twitter, the public is more aware of corporate behavior, both good and bad. Investors and consumers alike can make financial decisions about firms that align with their own values and beliefs. Management decisions perceived to be detrimental to society can quickly put companies in a bad light and affect sales and profitability for many years. Thus, users of financial reports increasingly want to know whether businesses are making appropriate decisions not only to increase shareholder wealth, but also to sustain the business and minimize any future negative effects on the environment and the citizens of the world. This management goal is called business sustainability. The number of companies reporting sustainability outcomes has grown over the last two decades. This growth has made this non-financial component of reporting increasingly important to accountants. Sustainability Reporting A sustainability report presents the economic, environmental, and social effects that a corporation or organization was responsible for during the course of everyday business. Sustainability reporting aims to respond to the idea that companies can be held accountable for sustainability. In 1987, the former Norwegian Prime Minister, Gro Harlem Brundtland, chaired a World Commission on Environment and Development to both formulate proposals and increase understanding of and commitment to environment and development. The resulting Brundtland Commission Report laid the groundwork for the concept of sustainable development (Figure 13.3). This was defined as “development that meets the needs of the present without compromising the ability of future generations to meet their own needs.” 2 2 NGO Committee on Education. “Report of the World Commission on Environment and Development: Our Common Future.” UN Documents: Gathering a Body of Global Agreements. August 4, 1987. http://www.un-documents.net/wced-ocf.htm With that in mind, the early adopters of sustainability reporting attempted to construct a framework that could convey the good stewardship of companies, primarily their social and environmental effects. Since then, sustainability reporting has evolved to include the ways in which the sustainability practices of a company benefit its profitability and longevity. Indeed, adopting sustainable business practices may benefit business in many ways.
Companies can:
- save money by using less water and energy and reducing or recycling business waste
- reduce insurance costs by limiting their exposure to environmental risks
- attract investors who prefer to work with businesses that are environmentally and socially responsible
- reduce social risks, such as racial or gender discrimination
- improve customer sales and loyalty by enhancing reputation and brand value
- reduce the possibility of potentially costly regulation by proactively undertaking sustainability initiatives
- attract and retain employees who share similar values
- strengthen their relationship with the community
- contribute to improving environmental sustainability
In short, sustainability reporting has evolved to describe both how the company’s practices contribute to the social good and how they add value to the company, which ultimately provides better returns to its investors. The need for improved reporting by corporations on sustainability developed over time. The Union Carbide, Nestlé, and Johnson & Johnson cases are examples of corporate crises that contributed to the development of better sustainability reporting. And though each of these cases involved a negative public response toward the company, this led to a broader shift in business practices, changing how other corporations handle similar challenges. Historical Drivers of Contemporary Sustainability Reporting Much of the drive to adopt sustainability reporting has resulted from the publicity surrounding corporate responses to specific crises. The three featured cases, on Union Carbide, Nestlé, and Johnson & Johnson, look at events that had such an impact on communities and the social conscience that they have contributed to shaping modern sustainability reporting and what society’s expectations of corporations are today. We first look at Union Carbide, whose actions, or lack of action, resulted in the deaths of thousands of impoverished Indians who lived in the shanty communities next to a facility of the U.S.-owned conglomerate. This case highlighted the power disparity between corporations and poor individuals and became a stark emblem of corporate disregard for the human toll of the quest for profit. We then consider the long-running campaign against Nestlé Corporation, ongoing since the early 1980s. We will examine what Nestlé has attempted to do to mitigate the perception of exploitation, which, some activists argue, is still a superficial response. Finally, we look at the reaction by Johnson & Johnson to the Tylenol poisoning crisis, which, while not of their making, is seen as a rapid and responsible response to ensure the well-being of the community, even if it initially came at considerable financial cost to the company. Union Carbide A few hours before midnight on December 2, 1984, at the Union Carbide pesticide plant in Bhopal, India, pressure and heat built up in a tank that stored methyl isocyanate (MIC). Within two hours, approximately 27 tons 3, 4 of MIC had escaped into the surrounding community, exposing more than 600,000 5 people to the deadly gas cloud. By the next day, 1,700 people were dead. The official toll eventually rose to 3,598 dead 6 and another 42,000 injured, although some accounts estimate that the incident was responsible for 16,000–20,000 deaths. 7 3 The Bhopal Medical Appeal. “Union Carbide’s Disaster.” n.d. http://bhopal.org/what-happened/union-carbides-disaster/ 4 Paul Cullinan. “Case Study of the Bhopal Incident.” Environmental Toxicology and Human Health, Vol. I.
Encyclopedia of Life Support Systems. n.d. https://www.eolss.net/sample-chapters/C09/E4-12-02-04.pdf 5 Alan Taylor. “Bhopal: The World’s Worst Industrial Disaster, 30 Years Later.” The Atlantic. December 2, 2014. https://www.theatlantic.com/photo/2014/12/bhopal-the-worlds-worst-industrial-disaster-30-years-later/100864/ 6 Paul Cullinan. “Case Study of the Bhopal Incident.” Environmental Toxicology and Human Health, Vol. I. Encyclopedia of Life Support Systems. n.d. https://www.eolss.net/sample-chapters/C09/E4-12-02-04.pdf 7 The Bhopal Medical Appeal. “Basic Facts & Figures, Numbers of Dead and Injured, Bhopal Disaster.” n.d. http://bhopal.org/basic-facts-figures-numbers-of-dead-and-injured-bhopal-disaster/ Though the plant had ceased production a couple of years earlier, it still contained vast quantities of dangerous chemicals. There were still 60 tons of deadly MIC in tanks at the plant, and proper maintenance of the tanks and the containment systems was necessary. It was later discovered that all the safety systems put into place failed due to lack of maintenance after the plant closed. 8 8 The Bhopal Medical Appeal. “Union Carbide’s Disaster.” n.d. http://bhopal.org/what-happened/union-carbides-disaster/ Within days of the explosion, Warren Anderson, the CEO of Union Carbide, arrived in India, was arrested and released, and then immediately flew out of the country. Although he was subsequently charged with manslaughter, he never returned to India to face trial. 9 9 Douglas Martin. “Warren Anderson, 92, Dies; Faced India Plant Disaster.” New York Times. October 30, 2014. https://www.nytimes.com/2014/10/31/business/w-m-anderson-92-dies-led-union-carbide-in-80s-.html Some of the criticisms of Union Carbide’s handling of matters, both before and after the disaster, are:
- A safety audit two years before had noted numerous problems at the plant, including several implicated in the accident. 10
- Before the incident, staff were routinely ordered to deviate from safety regulations and fined if they refused to do so. 11
- Employees discovered the leak around 11:30pm on December 2. However, they then decided to take a tea break and did not deal with the leak until two hours later. 12
- Two of the plant’s main safety systems were out of action at the time of the accident; one of them had been inoperable for several weeks. 13
- Staffing had been cut from 12 operators a shift to six. Kamal K. Pareek, a chemical engineer employed by the plant, later argued that it was not possible to safely run the closed plant with only six people. 14
10 Juanita Stuart. “Union Carbide Bhopal Chemical Plant Explosion.” Worksafe. 2015. https://worksafe.govt.nz/data-and-research/research/role-of-information-management-disaster-prevention/#lf-doc-34129 11 Juanita Stuart. “Union Carbide Bhopal Chemical Plant Explosion.” Worksafe. 2015. https://worksafe.govt.nz/data-and-research/research/role-of-information-management-disaster-prevention/#lf-doc-34129 12 Stuart Diamond. “The Bhopal Disaster: How It Happened.” New York Times. January 28, 1985. http://www.nytimes.com/1985/01/28/world/the-bhopal-disaster-how-it-happened.html?pagewanted=all 13 Stuart Diamond. “The Bhopal Disaster: How It Happened.” New York Times. January 28, 1985. http://www.nytimes.com/1985/01/28/world/the-bhopal-disaster-how-it-happened.html?pagewanted=all 14 Stuart Diamond. “The Bhopal Disaster: How It Happened.” New York Times. January 28, 1985.
http://www.nytimes.com/1985/01/28/world/the-bhopal-disaster-how-it-happened.html?pagewanted=all There were no public education programs to inform the surrounding community about what to do in an emergency, 15 and on the night of the leak, there was no public warning of the disaster. An external alarm was turned on at 12:50am but ran for only a minute before it was turned off. 15 Stuart Diamond. “The Bhopal Disaster: How It Happened.” New York Times . January 28, 1985. http://www.nytimes.com/1985/01/28/world/the-bhopal-disaster-how-it-happened.html?pagewanted=all Beginning at 1:15am, workers denied to local police that they were aware of any problems. They restarted the public warning siren at 2:15am and then contacted police to report the leak. 16 16 “The Bhopal Disaster.” Chapter 8 in Health . n.d. http://cseindia.org/userfiles/THE%20BHOPAL%20DISASTER.pdf Union Carbide asserts that a disgruntled employee sabotaged the plant by mixing water with the methyl isocyanate to create a reaction. Some employees claimed that a worker lacking proper training was ordered by a novice supervisor to wash out a pipe that had not been properly sealed. Although it was against plant rules, this action may have started the reaction. 17 17 Stuart Diamond. “The Bhopal Disaster: How It Happened.” New York Times . January 28, 1985. http://www.nytimes.com/1985/01/28/world/the-bhopal-disaster-how-it-happened.html?pagewanted=all Union Carbide ’s disgruntled-employee theory appeared to many to be an effort to deflect blame and deny responsibility. Ultimately, the company agreed to pay the Indian Government $470 million in compensation to be distributed to Bhopal residents, 18 and seven former employees were jailed for two years. In 2001, the company was bought by Dow Chemical Company . Though Dow Chemical obtained the financial liabilities of Union Carbide , Dow maintains that it did not assume legal responsibility for the prior actions of Union Carbide . 19 More than thirty years later, many victims are still awaiting the compensation they were promised, after having paid doctors and lawyers to prove their injuries. “In a way, they were fighting their own government for adequate compensation, whereas the state should have fought with them against Union Carbide,” says a representative of the one of the groups fighting for the victims’ rights. 20 18 Business and Human Rights Resources Centre. “Union Carbide/Dow Lawsuit (re Bhopal).” n.d. https://business-humanrights.org/en/union-carbidedow-lawsuit-re-bhopal 19 Dow. “Dow and the Bhopal Tragedy.” n.d. https://www.dow.com/en-us/about-dow/issues-and-challenges/bhopal/dow-and-bhopal 20 Nita Bhalla. “Victims Call for Justice 30 Years after Bhopal Disaster.” Reuters . December 3, 2014. https://www.reuters.com/article/us-india-bhopal-anniversary/victims-call-for-justice-30-years-after-bhopal-disaster-idUSKCN0JH1L620141203 Nestlé Nestlé is the target of one of the longest-running consumer boycotts in modern history. Founded and headquartered in Switzerland, the company recently became the largest food company in the world. While there have been boycotts against a number of its products over the years, none has lasted as long as the baby formula boycott. The origins of the boycott go back to the mid-1970s, when consumer concerns arose about Nestlé’s use of aggressive marketing tactics to sell its baby formula in developing countries in Asia, Africa, and Latin America. 
Initially new mothers were provided with free samples of formula to feed their babies, a common practice in many hospitals throughout the world. But in developing countries, this led to two negative consequences for mothers and their babies. First, once bottle feeding begins, the demand on the mother’s body is reduced and breast milk begins to dry up. Mothers in developing countries were often living in poverty and unable to afford the cost of artificial infant food. Action groups argued that, in Nigeria, the cost of bottle feeding a three-month-old infant was approximately 30% of the minimum wage, and by the time the child reached six months old, the cost was 47%. 21 21 Mike Muller. “The Baby Killer.” War on Want . March 1974. http://archive.babymilkaction.org/pdfs/babykiller.pdf A second consequence arose from the fact that preparation of infant formula required sterilized equipment and clean water. Both clean water and sterilization were difficult to guarantee in developing nations where mothers may not have understood the requirements for sterilization or may have lacked the fuel or electricity to boil water. Lapses in preparing the formula led to increased risks of infections, including vomiting and diarrhea that, in some cases, proved fatal. UNICEF estimated that formula-fed infants were 14 times more likely 22 to die of diarrhea and four times more likely to die of pneumonia than breast-fed children. Advocacy groups also argued that dehydration could result if mothers used too much formula and malnutrition could occur if they used too little in an effort to save money. 23 22 Unicef. “Improving Breastfeeding, Complementary Foods, and Feeding Practices.” May 1, 2018. https://www.unicef.org/nutrition/index_breastfeeding.html 23 E. Ziegler. “Adverse Effects of Cow’s Milk in Infants.” Nestlé Nutrition Workshop Senior Pediatric Program. 2007 (60): 185–199. https://www.ncbi.nlm.nih.gov/pubmed/17664905 An active campaign against Nestlé ensued, and the company endures a backlash even today. One group distributed a report, Nestlé Toten Babies (“Nestlé Kills Babies”), which a Swiss court found to be libelous. Nonetheless, the judge warned Nestlé that perhaps it should change the way it did business if it did not want to face such accusations. 24 24 Mike Muller. “Nestlé Baby Milk Scandal Has Grown Up but Not Gone Away.” The Guardian . February 13, 2013. https://www.theguardian.com/sustainable-business/nestle-baby-milk-scandal-food-industry-standards The boycott and negative publicity precipitated a long-running campaign by Nestlé to improve its image. The company now explicitly states on its packaging that breastfeeding is best for babies and supports the World Health Organization’s recommendation that babies should be breastfed exclusively for at least the first six months of life. It distributes educational materials for healthcare professionals and parents on the benefits of breastfeeding and holds seminars on breastfeeding for the medical community. Nestlé established a global Maternity Protection Policy that provides its own employees with extended maternity leave (up to six months) and flexible work arrangements. It opened 945 breastfeeding rooms in India and another 1,500 in China in a partnership with several public and private organizations, and it developed a breastfeeding room locator app for mothers. 
25 In those countries considered to be at higher risk for infant mortality and malnutrition, Nestlé applies its own stringent policies, which it believes are stricter than national code and which were derived from the World Health Organization’s International Code of Marketing of Breast-Milk Substitutes. 26 Meanwhile, debate about whether Nestlé is a good corporate citizen continues. 25 Nestlé. “Supporting Breastfeeding.” n.d. https://www.nestle.com/csv/impact/healthier-lives/baby-milk 26 Nestlé. “The Nestlé Policy and Procedures for Implementation of the WHO International Code of Marketing and Breast Milk Substitutes.” September 2017. https://www.nestle.com/asset-library/documents/creating%20shared%20value/nutrition/nestle_policy_who_code_en.pdf Johnson & Johnson At 6:30 in the morning on Wednesday, September 29, 1982, twelve-year-old Mary Kellerman woke up feeling sick. Her parents gave her some Tylenol and decided to keep her home from school. Within an hour Mary had collapsed, and she was pronounced dead at 9:24. Within 24 hours, another six people were dead, poisoned, like Mary, by cyanide capsules in Tylenol bottles. In the early 1980s, Tylenol was the leader in over-the-counter pain relief, and during the first three quarters of 1982 the product was responsible for 19% of Johnson & Johnson’s profits. Then an unknown person replaced Tylenol Extra-Strength capsules with cyanide-laced capsules and deposited the bottles on the shelves of at least a half-dozen stores across Chicago. On learning of the deaths, Johnson & Johnson reacted swiftly. CEO James Burke formed a seven-member strategy team charged with answering two questions: “How do we protect the people?” and “How do we save the product?” The first step was to immediately warn consumers through a national announcement not to consume any type of Tylenol product until the extent of the tampering could be determined. All Tylenol capsules in Chicago were withdrawn, and upon discovering two more compromised bottles, Johnson & Johnson ordered a nationwide withdrawal of all Tylenol products. Less than a week had passed. At the same time, the company established a toll-free number for consumers and another one for news organizations that provided daily recorded updates about the crisis. Within two months, Tylenol was re-launched with three-way tamper-proof packaging (Figure 13.4). The carton was securely glued, the cap was wrapped with a plastic seal, and the bottle carried a foil seal. The company also began an extensive media campaign emphasizing trust. In addition, other companies, not only in the pharmaceutical industry but in other industries such as food production and packaging, began to implement the use of tamper-proof or double-sealed packaging after the Tylenol incident. Since the crisis, the company’s response has been lauded in business case studies and has formed the basis of crisis communications strategies developed by researchers. 27 Ultimately, Johnson & Johnson spent more than $100 million on the recall, an amount that might cripple some companies. Yet its share price returned to its previous high within six weeks. 28 In fact, if you had invested $1,000 in Johnson & Johnson in September 1982, it would have been worth almost $50,000 by late 2017. Today, the company ranks 35th in the Fortune 500, with revenues of almost $76 billion. 29 27 Department of Defense. “Case Study: The Johnson & Johnson Tylenol Crisis.” n.d. https://www.ou.edu/deptcomm/dodjcc/groups/02C2/Johnson%20&%20Johnson.htm 28 Judith Rehak.
“Tylenol Made a Hero of Johnson & Johnson: The Recall That Started Them All.” New York Times. March 23, 2002. http://www.nytimes.com/2002/03/23/your-money/tylenol-made-a-hero-of-johnson-johnson-the-recall-that-started.html 29 Fortune. “Fortune 500 Full List.” n.d. http://fortune.com/fortune500/list/ These are three early examples of the impact on businesses of management decisions that had unintended consequences, or of circumstances brought about by others that the company did not foresee. Each of these instances weakened the sustainability of the corporation, at least temporarily. These examples, as well as others, helped contribute to the CSR movement. Companies are concerned about the effects of their products and practices on all stakeholders from a moral and ethical standpoint and want to be socially responsible in addition to maintaining the sustainability of their business. Certainly, there have been many more examples of company responses to social and environmental impacts that have been either positively or negatively received by stakeholders, that is, those who have an interest or concern in the business. Nonetheless, the cases examined demonstrate a range of the types of events and company responses that can affect both the company’s reputation and the society in which they operate, sometimes for decades. Initial Sustainability Reports Following the Brundtland Report, financial statement preparers began to ask how they might communicate not just the financial status of a company’s operations but the social and environmental status as well. The concept of a triple bottom line, also known as TBL or 3BL, was first proposed in 1997 by John Elkington to expand the traditional financial reporting framework so as to capture a firm’s social and environmental performance. Elkington also used the phrase People, Planet, Profit to explain the three focuses of triple bottom line reporting. By the late 1990s, companies were becoming more aware of triple bottom line reporting and were preparing sustainability reports on their own social, environmental, and economic impact. Another innovation was life-cycle or full-cost accounting. This reporting method took a “cradle to grave” approach to costing that put a price on the disposal of products at the end of their lives and then considered ways to minimize these costs by making adjustments in the design phase. This method also incorporated potential social, environmental, and economic costs (externalities in the language of economics) to attempt to identify all of the costs involved in production. For example, one early adopter of life-cycle accounting, Chrysler Corporation, considered all costs associated with each design phase and then made adjustments to the design. When its engineers developed an oil filter for a new vehicle, they estimated the material costs and hidden manufacturing expenses and also looked at liabilities associated with disposal of the filter. They found that the option with the lowest direct costs had hidden disposal costs that meant it was not the cheapest alternative. 30 30 J. Fiksel, J. McDaniel, and D. Spitzley. “Measuring Product Sustainability.” The Journal of Sustainable Product Design July, no. 6 (1998): 7–18.
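The Chrysler example lends itself to a short numerical illustration. The following is a minimal full-cost accounting sketch in Python; the two design options and every dollar figure are hypothetical, invented only to show how a ranking by direct cost can reverse once disposal costs and externalities are priced in.

```python
# Minimal full-cost (life-cycle) accounting sketch.
# All design options and dollar figures are hypothetical.

# Per-unit costs for two hypothetical designs: the direct manufacturing
# cost plus "hidden" life-cycle costs (end-of-life disposal and priced
# externalities).
options = {
    "Design A": {"direct": 4.00, "disposal": 1.50, "externalities": 0.75},
    "Design B": {"direct": 4.40, "disposal": 0.40, "externalities": 0.30},
}

def full_cost(costs):
    """Life-cycle cost per unit: direct cost plus all hidden costs."""
    return costs["direct"] + costs["disposal"] + costs["externalities"]

for name, costs in options.items():
    print(f"{name}: direct ${costs['direct']:.2f}, "
          f"full life-cycle ${full_cost(costs):.2f}")

# Design A wins on direct cost ($4.00 < $4.40) but loses on full cost
# ($6.25 > $5.10), the same reversal Chrysler's engineers found once
# they priced the oil filter's disposal liabilities.
print("Cheapest by direct cost:", min(options, key=lambda n: options[n]["direct"]))
print("Cheapest by full cost:  ", min(options, key=lambda n: full_cost(options[n])))
```

The only accounting idea at work is the definition of full cost as direct cost plus priced end-of-life and externality costs; the reversal of the ranking is the whole point of the method.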
Much of the early sustainability reporting movement was driven by stakeholder concerns and protests. For example, throughout the 1990s, Nike drew accusations from consumers that its employees and subcontractors’ employees in developing countries were being subjected to inhumane working conditions. The “sweatshop” charge has since been made against many companies that use off-shore manufacturing, and some now pre-emptively respond by producing sustainability reports to assure stakeholders that they are maintaining a good track record in human rights. One of the earliest adopters of social reporting was The Body Shop, which released its first social report in 1995 based on surveys of stakeholders. BP (formerly British Petroleum) took a different approach, conducting a series of case studies in social impact assessment and releasing its social report in 1997. Early research into how to report on sustainability led researchers 31 to suggest that some performance indicators could be quantified. Figure 13.5 shows the sustainable product indicators identified by Fiksel and colleagues, with suggestions on how each element of economic output might also be measured from an environmental or societal stance. 31 J. Fiksel, J. McDaniel, and D. Spitzley. “Measuring Product Sustainability.” The Journal of Sustainable Product Design July, no. 6 (1998): 7–18. Fiksel’s research suggests that different elements can be categorized as economic, environmental, or societal. The study demonstrates how each element may have quantifiable costs or indicators that can be measured and reported so that users will be able to consider how those inputs and outputs contribute to the entire life cycle of a product. Although Fiksel’s model is rarely reported today, the creation of quantifiable and measurable social and environmental standards is the basis of the Sustainability Accounting Standards Board, which uses an approach similar to Fiksel’s model. Current Examples of Sustainability in Business The environment, human rights, employee relations, and philanthropy are all examples of topics on which corporations often report. When you think of sustainability in business, environmental sustainability might be the first area that comes to mind. Environmental sustainability means that rates of resource exploitation can be continued indefinitely without permanently depleting those resources. If these resources cannot be exploited indefinitely at the current rate, then the rate is not considered sustainable. A recent focus of environmental sustainability is climate change impacts. This focus has developed over the past three decades (although some contributors to climate change, such as pollution, have been a concern for much longer). Climate change, in the context of sustainability, is a change in climate patterns caused by the increased levels of carbon dioxide (CO2) in the atmosphere, attributed mainly to the use of fossil fuels. Companies are increasingly expected to measure and reduce their carbon footprint, the amount of CO2 and other greenhouse gases they generate, in addition to adopting policies that are more environmentally friendly. For example, according to the sustainability report for Coca-Cola, in 2016 the company reduced the amount of CO2 embedded in the containers that hold their beverages by 14%. 32 Such corporate policies to reduce their carbon footprint can include reducing waste, especially of resources like water; switching to paperless record-keeping systems; designing environmentally friendly packaging; installing low-energy lighting, heating, and cooling in offices; recycling; and offering flexible working hours to minimize the time employees sit in traffic adding auto emissions to the environment.
Industries that use or produce non-renewable resources as sources of energy, such as coal and oil, are significantly challenged to stay relevant in an era of new energy technologies like solar and wind power. 32 The Coca-Cola Company. “Infographic: 2016 Sustainability Highlights.” n.d. https://www.coca-colacompany.com/stories/2016-sustainability-highlights-infographic Your Turn Mars Inc. Read this article by Stephen Badger, chair of Mars Inc. Then visit the Mars Inc. website and review the sustainability discussion under “Sustainable in a Generation Plan.” Discuss four examples of sustainability that Mars is implementing. What types of costs might a company incur for each of these examples? Can you explain what type of savings the company might realize, now or in the future, from these investments and outlays? Solution Mars is implementing a number of endeavors. In their “Healthy Planet” category, they identify climate action, water stewardship, land use, and waste reduction. In “Thriving People,” they identify endeavors toward increasing income, respecting human rights, and increasing opportunities for women. In their “Nourishing Wellbeing” category, they identify product improvement, responsible marketing, and food safety and security. The company might incur significant expenses or make investments in each of the sustainability measures in the short term. Responses should provide examples of the types of programs that the company implements. For example, under Climate Plans, Mars discusses GHG emissions reduction targets of 67% by 2050 from 2015 levels. In reducing emissions, the company also explains that by improving raw material production practices, it can increase efficiencies, which should eventually lower costs. The company may realize substantial savings from investments in energy reduction or water management. The concept of sustainability in business also applies to a company’s human rights and employee relations records. From an employee relations perspective, businesses that are willing to demonstrate that they are good corporate citizens endeavor to maintain sound working conditions to ensure their workplaces are safe, ergonomically appropriate, and healthy, even if this means going above and beyond the rules and regulations set by local authorities. For example, good corporate citizens choose not to use child labor even in countries where it is accepted and choose to provide a working environment that exceeds local minimum standards for safety and cleanliness. Issues such as pay and promotion fairness across gender, race, and religion, otherwise known as equity issues, are also examined to ensure there are no inequities. For example, gender equity would exist when women are paid the same as men for performing the same duties. By other equity measures, a person would not be denied employment or equal pay simply because of their race or religion. Firms may also implement parental leave policies and flexible or remote work hours to improve the morale and productivity of employees with families. A number of organizations also offer health and wellness groups and healthy vending and cafeteria options for employees. Companies may also promote sustainability through philanthropic endeavors, or charitable giving. While charitable giving is responsible, it is only sustainable if the money given improves or alleviates the underlying issue for which it is being given.
Otherwise, the money is not being spent productively, and that goes against sustainable business practices. To enhance the amount given to charities, many companies offer matching programs wherein they will match charitable contributions made by employees. Some companies also offer from two to five paid work days per year for employees to perform volunteer work. Many companies also go further and contribute a portion of company earnings to charitable causes. Investors may not always approve of the manner in which charitable funds are spent, as they may prefer either that (1) the money be given to different charitable causes than the ones chosen by the company or (2) the money be applied to expansion and growth of the company, where they feel it could be more effective. However, as most shareholders realize, corporations take a significant role in funding charitable organizations, and many of these not-for-profit organizations could not perform the services they provide without corporate funding. Table 13.1 provides an example of philanthropic contributions by several public corporations. Table 13.2 shows a few of the best places to work if you are looking for an employer that gives back to the community.

Table 13.1 Examples of Corporate Charitable Giving (the top five charitable giving corporations in 2015) 33
Corporation | Amount Donated | Primary Causes Supported
Gilead Sciences | $446.7 million | HIV/AIDS, liver diseases
Walmart | $301 million | Worker economic mobility; Feeding America anti-hunger campaign
Wells Fargo | $281.3 million | Part to local charities and part to national charities such as NeighborWorks
Goldman Sachs | $276.4 million | Their own projects, called 10,000 Women and 10,000 Small Businesses
Exxon Mobil | $268 million | Education, malaria prevention, and economic opportunity for women

33 Caroline Preston. “The 20 Most Generous Companies of the Fortune 500.” Fortune. June 22, 2016. http://fortune.com/2016/06/22/fortune-500-most-charitable-companies/

Table 13.2 Top Places to Work That Give Back (the top five companies to work for if an employee is interested in community involvement and charitable contributions) 34
Company | Amount Given | Matches Employee Giving | Paid Time for Charitable Work
Salesforce | $137 million | Yes | 56 hours
NuStar Energy | $8.5 million | Yes | 50 hours
Veterans United Home Loans | $7.1 million | Yes | 40 hours
Intuit | $42 million | Yes | 32 hours
Autodesk | $20.4 million | Yes | 48 hours

34 Fortune. “The 50 Best Workplaces for Giving Back.” February 9, 2017. http://fortune.com/2017/02/09/best-workplaces-giving-back/

Coca-Cola Corporation has a program designed to empower female entrepreneurs through e-learning programs. The company launched the 5by20 initiative, which aims to empower 5 million women entrepreneurs across the company’s value chain of producers, distributors, recyclers, and retailers around the world by 2020. By the end of 2016, the program had reached 1.75 million women in 64 countries globally. Many corporations offer corporate giving programs through which employees are encouraged to participate in volunteerism or to make in-kind donations. Companies such as Intel, Pacific Gas and Electric Company, GE, General Mills, Intuit, Autodesk, and Salesforce have corporate giving programs that match dollar for dollar the amounts contributed by their employees.
For example, if an employee wishes to support their local school, and it is a registered tax-exempt 501(c)(3) charity, then if the employee donates $200, the employer will match that contribution. Additionally, companies may give their employees paid volunteer time. For example, Intuit gives each of its employees 32 paid hours to help out at local organizations. These volunteer hours can be used for many things, such as working at the local food bank for a few hours, volunteering for a fundraiser they believe in, or even something as simple as allowing an employee to participate in their child’s school. These programs tend to be most effective when employees have input into where they will donate or how they will dedicate their time. Business decisions that affect the environment, human rights, employee relations, and philanthropic activities represent actions that are, hopefully, responsible and, at the same time, contribute to business sustainability, which in turn adds value to the business. Creating Business Value In the past, firms increased business value by increasing revenue or reducing expenses. However, managers now are realizing that some consumers are willing to pay more to support a company whose philosophy aligns with their own values. If they believe a company is making a greater effort than its competitors to reduce its carbon emissions, or that it looks after its workers and their communities, consumers will pay more for the product or invest in the company because they believe the company is doing the right thing by the environment or society. Many investors demonstrate these same principles. Companies have many ways to inform investors and customers of their efforts to improve the three P’s—planet, people, and profit—as discussed in the triple bottom line coverage in the Initial Sustainability Reports section. While not every company officially reports a triple bottom line, many companies report their efforts to improve their impact on the planet and on people through various avenues, such as in a formal corporate social responsibility report, on their website, or even through their advertising. It is often difficult to translate the effects of these efforts into the profits of the corporation; nonetheless, a company can often quantify the effects of its actions to help the planet, employees, and communities in other ways. Next, let’s examine efforts by a few such companies and the results they have achieved. Patagonia For more than 30 years, the outdoor-clothing maker Patagonia has donated 1% of its annual sales or 10% of its pre-tax profits, whichever is greater, to environmental organizations. In 2010, the company helped found the Sustainable Apparel Coalition, whose members measure and score their environmental impact and then report the results in the Higg Index. The Higg Index is a social and environmental performance index that clothing industry executives use to make more sustainable decisions when sourcing materials and to protect the well-being of factory workers, local communities, and the environment. 35 35 Sustainable Apparel Coalition. “The Higg Index.” n.d. https://apparelcoalition.org/the-higg-index/ In 2012, Patagonia became one of California’s first B corporations. A B corporation is a benefit corporation, which, although profit-motivated, aims to make a positive impact on society, workers, the community, and the environment. Link to Learning This website on B Corporations will help you learn more.
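Returning to Patagonia’s giving pledge described above, the rule "the greater of 1% of annual sales or 10% of pre-tax profits" reduces to a one-line formula. The sketch below applies it to invented figures; all sales and profit numbers are hypothetical, chosen only to show when each branch of the rule binds.

```python
# Patagonia-style giving pledge: the greater of 1% of sales
# or 10% of pre-tax profit. All dollar figures are hypothetical.

def pledge(sales, pretax_profit):
    return max(0.01 * sales, 0.10 * pretax_profit)

# High-margin year: the 10%-of-profit branch binds.
print(pledge(sales=500_000_000, pretax_profit=80_000_000))   # 8,000,000.0

# Low-margin year: the 1%-of-sales floor still produces a donation.
print(pledge(sales=500_000_000, pretax_profit=10_000_000))   # 5,000,000.0
```

One design consequence worth noting: a pledge tied only to profits would pay nothing in a loss year, while the 1%-of-sales floor keeps the commitment alive regardless of profitability.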
In 2013, Patagonia's founder, Yvon Chouinard, launched the $20 Million and Change fund, now called Tin Shed Ventures, 36 which aimed to help start-up companies deliver positive environmental benefits. 37 In late 2017, Patagonia sued the U.S. government and President Donald Trump over the decision to undo federal protections of public lands in Utah's Bears Ears and Grand Staircase-Escalante national monuments. The company temporarily turned its homepage into a single graphic reading, "The President Stole Your Land."

36 Tin Shed Ventures. "About." n.d. http://www.tinshedventures.com/about/
37 Yvon Chouinard. "Introducing Patagonia Works, A New Kind of Holding Company." Patagonia. May 6, 2013. http://www.patagoniaworks.com/#index

Patagonia claims that it holds itself to a single cause: "Using business to help solve the environmental crisis." The company has encountered some criticism from animal rights groups over its use of live-plucked feathers and the mulesing of sheep (a controversial surgical procedure intended to help prevent parasitic infection), but it appears to have acted quickly to source down and wool according to strict animal welfare and land use standards. 38

38 Patagonia. "Our Wool Restart." July 26, 2018. https://www.patagonia.com/blog/2016/07/our-wool-restart/; Patagonia. "Patagonia Traceable Down." n.d. https://www.patagonia.com/traceable-down.html

Walmart's Greenhouse Gas Reduction Goals

In February 2010, Walmart announced its aim to eliminate 20 million metric tons of greenhouse gas (GHG) emissions from its global supply chain within five years. Environmentally, this would be equal to taking more than 3.8 million cars off the road for a year. 39 By 2015, the company announced that it had surpassed that goal, achieving a 28-million-ton reduction.

39 Walmart. "Walmart Announces Goal to Eliminate 20 Million Metric Tons of Greenhouse Gas Emissions from Global Supply Chain." February 25, 2010. https://corporate.walmart.com/_news_/news-archive/2010/02/25/walmart-announces-goal-to-eliminate-20-million-metric-tons-of-greenhouse-gas-emissions-from-global-supply-chain

In April 2017, the company went several steps further and launched Project Gigaton, inviting its suppliers to commit to reducing GHG emissions by a billion tons by 2030. This would be the equivalent of taking more than 211 million passenger vehicles off the roads for a year. 40 To do this, the company has initiated a number of endeavors to reduce GHG emissions. These include sourcing 25% of its total energy for operations from renewable energy sources (energy that is not depleted when used) and aiming to increase this to 50% by 2025.

40 Walmart. "Walmart Launches Project Gigaton to Reduce Emissions in Company's Supply Chain." April 19, 2017. https://news.walmart.com/2017/04/19/walmart-launches-project-gigaton-to-reduce-emissions-in-companys-supply-chain

The company also aims to achieve zero waste to landfill in key markets by 2025; by 2015, 75% of its global waste was already being diverted from landfills. 41

41 Walmart. "Walmart Offers New Vision for the Company's Role in Society." November 4, 2016. https://news.walmart.com/2016/11/04/walmart-offers-new-vision-for-the-companys-role-in-society

Walmart has gone to great lengths to measure the environmental implications of its supply chains, which has also saved the company money. One very simple example is the company's focus on selling more concentrated detergents so that it can reduce the number of ships bringing the detergent from China to the United States. 42

42 M.P. Vandenbergh and J.M. Gilligan. Beyond Politics: The Private Governance Response to Climate Change. (Cambridge, 2017), 198; Walmart. "Walmart Completes Goal to Sell Only Concentrated Liquid Laundry Detergent." May 29, 2008. https://corporate.walmart.com/_news_/news-archive/2008/05/29/wal-mart-completes-goal-to-sell-only-concentrated-liquid-laundry-detergent
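The car-equivalence figures above come from a simple conversion of tons of GHG into vehicle-years of emissions. Here is a minimal sketch of that arithmetic; the per-vehicle emissions factor is an assumption (the EPA estimates a typical passenger vehicle emits roughly 4.6 to 4.7 metric tons of CO2 per year), so the result only approximates the figures Walmart cites.

```python
# A minimal sketch converting a GHG reduction into passenger-vehicle-years.
# TONS_PER_VEHICLE_PER_YEAR is an assumed emissions factor, not Walmart's.

TONS_PER_VEHICLE_PER_YEAR = 4.7  # metric tons of CO2 per typical passenger vehicle

def car_equivalents(tons_ghg: float) -> float:
    """Return how many vehicle-years of emissions a reduction equals."""
    return tons_ghg / TONS_PER_VEHICLE_PER_YEAR

# Project Gigaton: one billion metric tons by 2030
print(f"{car_equivalents(1_000_000_000):,.0f} vehicles")
# ~213 million vehicles, close to the "more than 211 million" cited above
```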
Gravity Payments

Productivity, or the amount of output or income generated by an average hour of work, improved by 22% in the United States between 2000 and 2014. Yet during the same period, median wages rose only 1.8%, adjusted for inflation. 43 CEOs have reaped more of the benefits of productivity gains and now earn about 271 times more than typical workers (up from 59 times more in 1989). 44 CEO pay has been a controversial topic for many years. As leaders of their organizations, CEOs affect not only the culture of the company but its direction as well. Unethical CEOs can cause significant losses of shareholder wealth, as happened at Enron, Hewlett-Packard, and Merrill Lynch. 45 Ethical CEOs can help guide the company to greater wealth by being cognizant of the role they play within their corporation as well as in the world.

43 Josh Bivens and Lawrence Mishel. "Understanding the Historic Divergence between Productivity and a Typical Worker's Pay." Economic Policy Institute. September 2, 2015. http://www.epi.org/publication/understanding-the-historic-divergence-between-productivity-and-a-typical-workers-pay-why-it-matters-and-why-its-real/
44 Economic Policy Institute. "Top CEOs Took Home 271 Times More Than the Typical Worker in 2016." July 20, 2017. https://www.epi.org/press/top-ceos-took-home-271-times-more-than-the-typical-worker-in-2016/
45 Tomas Chamorro-Premuzic. "Are CEOs Overhyped and Overpaid?" Harvard Business Review. November 1, 2016. https://hbr.org/2016/11/are-ceos-overhyped-and-overpaid

In April 2015, Dan Price, the co-founder and CEO of Seattle-based credit-card processing firm Gravity Payments, decided to take a different path from other CEOs. Price announced he was slashing his own million-dollar salary to $70,000 and raising the minimum salary of all 120 of his employees, in stages, to $70,000 a year. 46 After a few minor bumps in the road, mostly resulting from the attendant publicity, in the year after his announcement profits doubled, the firm's employee turnover reached a record low, and another 50 employees were added to deal with the increased business. Team members were able to afford to move closer to their workplace, reducing commute time and the stress associated with it. 47

46 Gravity Payments. "$70K Minimum Wage Initial Results." n.d. https://gravitypayments.com/thegravityof70k/
47 Gravity Payments. "$70K Minimum Wage Initial Results." n.d. https://gravitypayments.com/thegravityof70k/

Part of Price's motivation was a conversation with a friend who was worried about a $200 rent increase. He also remembered a 2010 study by Princeton behavioral economist Daniel Kahneman finding that people became decidedly unhappier the further their earnings fell below $75,000. 48 After the pay increase, Gravity saw employee happiness, in terms of overall workplace satisfaction, increase significantly, although this tapered off somewhat to average levels in the year after (Figure 13.6).
Almost three years on, the company is still going strong. Time will tell whether the "Price of Gravity" is a continued success.

48 Paul Keegan. "Here's What Really Happened at That Company That Set a $70,000 Minimum Wage." Inc. November 2015. https://www.inc.com/magazine/201511/paul-keegan/does-more-pay-mean-more-growth.html

Grameen Bank

In 1974, Muhammad Yunus, then an economics professor in Bangladesh, began to lend small sums of money at minimal or no interest to a few dozen local women who were basket weavers. Eliminating the high interest charged by traditional lenders allowed the women to make enough profit to grow their businesses into income-generating activities and lift themselves out of poverty. Yunus continued helping poor entrepreneurs, usually women, and ultimately formalized his simple micro-lending system by founding Grameen Bank in 1983. The bank now has 8.9 million borrowers, most of them women, across 81,399 villages 49 and has distributed more than US$19.6 billion in loans since its inception; more than $17.9 billion of that has been repaid. The bank claims a rate of recovery of 99.25%. 50 Its profits are loaned to other borrowers or go to fund local development to enrich the lives of the community (Figure 13.7).

49 Grameen Bank. "Introduction." January 2018. http://www.grameen.com/introduction/
50 Grameen Bank. "Monthly Report: 2017-11 Issue 455 in BDT." December 5, 2017. http://www.grameen.com/data-and-report/monthly-report-2017-11-issue-455-in-bdt/

The average household income of Grameen's members is about 50% higher than that of a target group in a control village, and 25% higher than that of non-members. While 56% of non-Grameen members live below the poverty line, the bank's micro-financing efforts have meant that only 20% of members now live below that line. 51

51 Arjun Bhaskar. "Microfinance in South India: A Case Study." Wharton Research Scholars, Scholarly Commons, Penn Libraries. April 2015. https://repository.upenn.edu/wharton_research_scholars/122/

Although it has not avoided controversy, the bank has won many awards, including the 1997 World Habitat Award and the 2006 Nobel Peace Prize (awarded jointly to the bank and to Yunus) for efforts to create economic and social development through microcredit so that small entrepreneurs could break out of the cycle of poverty.

Link to Learning

You can learn more about the corporate social reporting of these companies online:
Patagonia corporate social reporting
Walmart corporate social reporting
Gravity Payments corporate social reporting
Grameen Bank corporate social reporting

Think It Through

Do Friedman's Ideas Stand the Test of Time?

In a 1970 New York Times Magazine article, economist Milton Friedman argued that for a manager acting as an agent of the business owner (principal), "there is one and only one social responsibility of business—to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud." Given what we have learned about Earth's environment since this article was published, do you think Friedman's statement that the "sole purpose of business is to make profits" is valid? Explain your answer.
13.2 Identify User Needs for Information

The concept of the triple bottom line expanded the role of reporting beyond shareholders and investors to a broader range of stakeholders, that is, anyone directly or indirectly affected by the organization, including employees, customers, government entities, regulators, creditors, and the local community. Naturally, companies may feel their first obligation is to their present and potential investors. But it also makes good business sense to consider other stakeholders who can affect the company's livelihood. Let's examine the various users of sustainability reports and their particular information needs. Shareholders and investors are generally considered the primary users, whereas customers, suppliers, the community, and regulators are secondary users.

Shareholders

Many consider the company's shareholders to be its primary information user group. These equity investors may be small single investors, or they may be part of an institutional investment fund charged with investing on behalf of its members. As shareholders, they concern themselves with the future viability of the company and want profits to be sustained or increased over the long term. Shareholders often use financial ratios, such as earnings per share (EPS), return on investment (ROI), and the price/earnings ratio, to evaluate the financial health of the company and the sustainability of its financial growth. Shareholders evaluate not only whether there is current value in owning stock of the company but also whether there will continue to be value in owning that company's stock; otherwise, the shareholder is likely to divest their ownership interest.

One ratio that shareholders often use to measure the value of the company's stock relative to the company's earnings is the price-earnings ratio, or P/E ratio. In the P/E ratio, the market price of the stock is divided by the earnings per share of the company's stock. This ratio indicates the amount an investor is willing to pay for one dollar of the company's earnings. For example, if a stock is trading at a P/E of 30, investors are willing to pay $30 for $1 of current earnings. A high P/E ratio indicates that investors expect high future earnings. A low P/E ratio has several interpretations but could indicate that a company is undervalued. Many investors use the P/E ratio as a measure of whether or not a stock should be purchased, but no single metric should be used alone. In addition, the P/E ratio is only useful when comparing changes across time for a single company to see trends or lack of trends. The P/E ratio is most useful when compared across companies within a given industry sector: growth will often vary widely between different sectors but will be more similar within a particular sector. The short sketch below works through the calculation.
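Here is a minimal sketch of the P/E calculation. The market price and earnings per share are hypothetical, chosen so the result matches the $30-per-$1-of-earnings example above.

```python
# A minimal sketch of the price-earnings (P/E) ratio.
# The market price and EPS below are hypothetical illustration values.

def price_earnings_ratio(market_price_per_share: float,
                         earnings_per_share: float) -> float:
    """Return market price per share divided by earnings per share."""
    if earnings_per_share <= 0:
        raise ValueError("P/E is not meaningful for zero or negative earnings")
    return market_price_per_share / earnings_per_share

market_price = 90.00  # hypothetical market price per share
eps = 3.00            # hypothetical earnings per share

pe = price_earnings_ratio(market_price, eps)
print(f"P/E ratio: {pe:.1f}")  # P/E ratio: 30.0 -> $30 paid per $1 of earnings
```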
Investors buy and sell stock for many reasons, both financial and non-financial. They can sell a stock due to lack of current growth in value or an expected drop in future earnings. They can also buy a stock because the company participates in activities that the shareholder values, such as fair wages and greenhouse gas emission reductions, even if the company has a low P/E ratio. Let's look at an example of an investment driven by more than just the company's current financial situation. In 2008, Warren Buffett's MidAmerican Energy Company, a subsidiary of Berkshire Hathaway, bought a $230 million stake in BYD, a Chinese battery maker about to begin auto production. 52 Although the auto industry initially ridiculed Buffett's investment in such a little-known company, he may well have the last laugh. Since 2008, the company has evolved into the world's leading producer of electric cars, and its shares now trade at almost 10 times what MidAmerican paid for them. This increase in value reflects the market's optimism about the future of the company based on the Chinese government's commitment to speeding up the phasing out of fossil fuels.

52 Keith Bradsher. "Buffett Buys Stake in Chinese Battery Manufacturer." New York Times. September 29, 2008. https://www.nytimes.com/2008/09/30/business/worldbusiness/30battery.html

The new ethical investing movement focuses on eliminating investments that conflict with shareholders' values, such as dependence on environmentally damaging fossil fuels. The movement is growing each year. In 2016, ethical investments topped $8.7 trillion, up 33% from 2014, and they now account for 20% of all investments under professional management. 53 Ethical investors are increasingly avoiding polluters, weapons manufacturers, and tobacco companies, as well as companies with a poor track record on human rights or philosophies that do not align with a fund's religious tenets. Pension funds, such as the New York City Pension Fund, 54 have announced a move away from investing in companies in the fossil fuel industry, a move that will put substantial pressure on these companies to seek out alternatives to the non-renewables business model. On the opposite side of the country, the California Public Employees' Retirement System announced that it had divested from most of its holdings in thermal coal stock. 55

53 Matt Whittaker. "Ethical Investing Continues to Grow." U.S. News and World Report. January 27, 2017. https://money.usnews.com/investing/articles/2017-01-27/ethical-investing-continues-to-grow
54 William Neuman. "To Fight Climate Change, New York City Takes on Oil Companies." New York Times. January 10, 2018. https://www.nytimes.com/2018/01/10/nyregion/new-york-city-fossil-fuel-divestment.html
55 Randy Diamond. "CalPERS Reveals It Divested from Most Thermal Coal Companies." Pensions & Investments. August 7, 2017. http://www.pionline.com/article/20170807/ONLINE/170809876/calpers-reveals-it-divested-from-most-thermal-coal-companies

Investors are also increasingly looking to the future to evaluate whether a firm's stock price is sustainable. Consider that, as renewable energy alternatives become cheaper, non-renewable resources become less able to compete: the price of the non-renewable commodity falls to a point where the cost of extraction becomes greater than the price that can be obtained for the asset, and so the non-renewable resource remains in the ground. 56 At that point, the value of the asset (the mine) is impaired, which leads to a reduced share price. A simple numeric sketch of this impairment logic follows.

56 M.K. Linnenluecke, J. Birt, J. Lyon, and B.K. Sidhu. "Planetary Boundaries: Implications for Asset Impairment." Accounting & Finance 55, no. 4 (2015).
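Here is a minimal sketch of the impairment arithmetic. The mine and all dollar amounts are hypothetical, and the test is deliberately simplified; actual impairment rules under IFRS and US GAAP involve more detailed measurement.

```python
# A minimal, simplified sketch of an asset impairment test.
# All figures are hypothetical; real standards (IFRS/US GAAP) add detail.

def impairment_loss(carrying_amount: float, recoverable_amount: float) -> float:
    """Return the required write-down, or 0.0 if the asset is not impaired."""
    return max(0.0, carrying_amount - recoverable_amount)

# A hypothetical mine: falling commodity prices reduce the amount
# recoverable from future production below the asset's carrying amount.
carrying = 500_000_000      # carrying amount on the balance sheet
recoverable = 320_000_000   # estimated recoverable amount after the price decline

print(f"Impairment loss: ${impairment_loss(carrying, recoverable):,.0f}")
# Impairment loss: $180,000,000
```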
For example, in mid-2017, Coal India, the largest coal-mining company in the world, announced that it would close 96 57 of its 394 mines 58 by March 2018 because they would no longer be economically viable after the Indian government announced it would cut its commitments to purchase coal after 2022. 59

57 IANS. "Coal India Could Close 53 Underground Mines This Fiscal." The Economic Times. September 12, 2018. https://economictimes.indiatimes.com/industry/indl-goods/svs/metals-mining/coal-india-could-close-53-underground-mines-this-fiscal/articleshow/65783526.cms
58 Coal India. Annual Report and Accounts 2016–2017. n.d. https://www.coalindia.in/DesktopModules/DocumentList/documents/Annual_Report_&_Accounts_2016_17_Deluxe_English_07112017.pdf
59 Harriet Agerholm. "World's Biggest Coal Company Closes 37 Mines as Solar Power's Influence Grows." Independent. June 21, 2017. http://www.independent.co.uk/news/world/asia/coal-india-closes-37-mines-solar-power-sustainable-energy-market-influence-pollution-a7800631.html

Investors, including ethical investors, must look to the future of their investments, buying shares that are sustainable for the long term to provide better returns. A recent Harvard Business Review study showed that socially responsible companies post higher profits and stock performance than those that are not focused on social responsibility. 60 This result is supported by a Deutsche Bank analysis of more than 2,000 studies dating back to the 1970s, 90% of which suggested that socially responsible investing gives better returns than passive investing. 61

60 MoneyShow. "Socially-Responsible Investing: Earn Better Returns from Good Companies." Forbes. August 16, 2017. https://www.forbes.com/sites/moneyshow/2017/08/16/socially-responsible-investing-earn-better-returns-from-good-companies/#7f73a8a1623d
61 MoneyShow. "Socially-Responsible Investing: Earn Better Returns from Good Companies." Forbes. August 16, 2017. https://www.forbes.com/sites/moneyshow/2017/08/16/socially-responsible-investing-earn-better-returns-from-good-companies/#7f73a8a1623d

Ethical Considerations

Millennials Are Demanding Sustainable Investments

According to the Forum for Sustainable and Responsible Investment, a U.S.-based membership organization, "sustainable, responsible and impact investing is an investment discipline that considers environmental, social and corporate governance criteria to generate long-term competitive financial returns and positive societal impact." 62 Demand for this type of sustainable investment is being driven in large part by millennials, who prefer that their investments align with their personal beliefs and values. Ethical companies see value in millennial investors because "millennials are poised to receive more than $30 trillion of inheritable wealth." 63 Forward-looking companies need to develop an awareness of millennial values.

62 US SIF. "SRI Basics." n.d. https://www.ussif.org/sribasics
63 Ernst & Young. Sustainable Investing: The Millennial Investor. 2017. https://www.ey.com/Publication/vwLUAssets/ey-sustainable-investing-the-millennial-investor-gl/$FILE/ey-sustainable-investing-the-millennial-investor.pdf

Forward-looking companies and investment advisory firms also need to adapt to a sustainable investment environment. This changes the perspective of accounting, because managers will need to look to factors besides profits to guide business decisions. Management and accountants will need to look beyond the numbers alone, and this will require changes in culture, in technology, and in operational and financial reporting to investors, potential investors, and stakeholders.

Lenders

Sustainability reports provide useful information for lenders. Lenders want to know that the company borrowing from them does not have any going-concern risks that could affect its ability to repay the loan (Figure 13.8).
They want to know that the company will not be sued for human rights violations at home or abroad, that it will not become unable to repay its loans because consumer boycotts have hurt its cash flow, and that it does not concentrate valuable property assets in high-risk areas. For example, after the 2017 Houston floods, an examination of Houston-based banks found that a number of them had a high level of exposure to commercial real estate in Houston. 64 This type of investment concentration in a single geographic area can be risky for lenders, because a single disaster can do more damage to such a portfolio than to one spread over a broader geographical area.

64 Ely Razin. "As Harvey Leaves Houston Reeling, These Banks Are More Exposed Than Others." Forbes. August 31, 2017. https://www.forbes.com/sites/elyrazin/2017/08/31/as-harvey-leaves-houston-reeling-these-banks-are-more-exposed-than-others/#423dcb5e6355

Employees

Employees and potential employees want to know that the company they work for is concerned about their safety and is an ethical organization. They want assurance that they will be fairly compensated and that all employees have equal rights and opportunities, regardless of gender, race, religion, or sexual orientation. Recent studies show that employees increasingly want to work for companies that align with their own values and will be more loyal to those organizations. In 2016, 76% of millennials said that a company's social and environmental commitments were considerations in employment, and 64% of millennials indicated that they would not work for a company that did not have strong corporate social responsibility practices. 65

65 Cone Communications (Whitney Dailey). "Three-Quarters of Millennials Would Take a Pay Cut to Work for a Socially Responsible Company, According to the Research from Cone Communications." November 2, 2016. http://www.conecomm.com/news-blog/2016-cone-communications-millennial-employee-engagement-study-press-release

Employees also report higher levels of satisfaction when their employers engage in corporate giving programs that are aligned with employee values or are chosen by employees. 66 For example, Intel will donate $10 to an educational institution, environmental program, or other community organization for every hour an employee volunteers there. More than 40% of Intel's U.S. employees have donated time totaling hundreds of thousands of volunteer hours. 67 Other firms have corporate giving programs that match employees' charitable donations dollar for dollar.

66 America's Charities. "Facts and Statistics on Workplace Giving, Matching Gifts, and Volunteer Programs." n.d. https://www.charities.org/facts-statistics-workplace-giving-matching-gifts-and-volunteer-programs
67 Intel. "Giving Back: How Our Employees Make a Difference." n.d. https://www.intel.com/content/www/us/en/jobs/life-at-intel/usa/giving-back.html

Customers

Customers often have many choices about where to spend their hard-earned dollars. They want to know that the companies to which they give that money reflect their own values and beliefs. If a company is seen to be uncaring about an issue, customers may organize campaigns to boycott the company (see the Nestlé story for an example of such consumer activism). A 2016 study by Unilever showed that 33% of consumers buy from brands they believe are doing social or environmental good, and that this presents a €966 billion (over US$1.1 trillion) opportunity for brands.
As such, it is important for a company to demonstrate its commitment to CSR, and sustainability reporting offers a medium to do this. 68

68 Unilever. "Report Shows a Third of Consumers Prefer Sustainable Brands." May 5, 2017. https://www.unilever.com/news/Press-releases/2017/report-shows-a-third-of-consumers-prefer-sustainable-brands.html

Governments and Regulators

Governments and regulators want to be able to see that a company is behaving responsibly. If they are confident that it is, there is less need to design laws and regulations that might restrict the company even more than if it undertook best-practice measures on its own. Many companies form industry alliance groups that aim to implement best practices in trade, social responsibility, or environmental initiatives.

Community

The community at large also wants to know that the organization is behaving at the level of society's expectations. This reflects the existence of a social contract, the expectation that companies will hold to an unwritten contract with society as a whole. If a firm undertakes actions that might harm society or that reject its general values, community backlash may cost the firm dearly. In summary, a company's accountability to a wider group of users is an element of stakeholder theory. This theory asserts that a corporation has an obligation to groups beyond just its shareholders.

Your Turn

Identifying Stakeholders

Locate the sustainability report of a Fortune 500 company and read the management discussion in it. Explain who you think the company considers its primary and secondary users. What information about itself and its operations does the company attempt to convey to each audience? Do you think its choices meet the information needs of these two groups of stakeholders? Why or why not?

Solution

Invariably, the primary users will be shareholders and creditors. Secondary users will be customers, employees, environmental groups, the community, and regulators. The strength or relevance of each user group will depend on the type of business discussed in the response.

Ethical Considerations

Public Benefit Corporations

Traditionally, standard American corporations consider their ultimate purpose to be maximizing the profits of the shareholders. In the United States, directors of for-profit corporations recognize that one of their major goals is to maximize shareholder value. While corporations generally have the ability to engage in any legal activities, including those that are socially responsible, corporate decision-making must be justified in terms of creating shareholder value. Mission-driven and other socially conscious businesses, impact investors, and social entrepreneurs are constrained by this inflexible legal framework, which does not accommodate for-profit entities whose mission and impact are central to their business model. In response, the benefit corporation model has emerged, which "broadens the perspective of traditional corporate law by incorporating concepts of purpose, accountability and transparency with respect to all corporate stakeholders, not just stockholders." 69 Public benefit corporations expand the obligations of boards, requiring them to consider environmental and social factors as well as the financial interests of shareholders. This gives directors and managers the legal protection to pursue a mission other than maximizing profit and to consider the impact their business has on society and the environment.

69 Morris, Nichols, Arsht & Tunnell.
"Understanding Delaware's Benefit Corporation Governance Model." The Public Benefit Corporation Guidebook. May 2016. http://news.mnat.com/rv/ff00272e4c8b3699806e25d24c48a286df5bf926

13.3 Discuss Examples of Major Sustainability Initiatives

In 2017, a KPMG report noted that 93% of the world's 250 largest companies by revenue produced corporate responsibility reports. Looking at the top 100 companies in each of 49 countries, the report found that 75% reported on corporate responsibility, up from 18% only 15 years earlier. 70 Given these figures, sustainability reporting is clearly responding to a need among investors, lenders, and other stakeholders for information beyond what financial reports can provide.

70 KPMG. The Road Ahead: The KPMG Survey of Corporate Responsibility Reporting 2017. 2017. https://assets.kpmg.com/content/dam/kpmg/xx/pdf/2017/10/kpmg-survey-of-corporate-responsibility-reporting-2017.pdf

However, for these reports to be comparable and useful, there needs to be a standard that users can rely on. Just as financial statements are produced using GAAP or IFRS, there is a need for some type of uniformity within corporate social responsibility reporting. The non-mandatory nature of CSR reporting has made the emergence of a single set of standards a challenge. Three of the most well-known reporting frameworks are the Global Reporting Initiative (GRI), the Sustainability Accounting Standards Board (SASB) standards, and Integrated Reporting. Each framework relies on materiality (whether an event or issue is significant enough to warrant its inclusion or discussion) as its basis of reporting, but each describes it slightly differently.

Global Reporting Initiative (GRI)

In 1997, a not-for-profit organization called the Global Reporting Initiative (GRI) was formed with the goals of increasing the number of companies that create sustainability reports, providing those companies with guidance about how to report, and establishing some consistency in reporting (such as identifying common themes and components for reports). The idea is that as companies begin to create these reports, they become more aware of their impact on the sustainability of our world and are more likely to make positive changes to improve that impact. According to GRI, 92% of the Global 250 produced sustainability reports in 2016. Although businesses have been preparing reports using GRI guidance for some time, in 2016 the GRI produced its first set of global reporting standards, 71 which have been designed as modular, interrelated standards. Every organization that produces a GRI sustainability report uses three universal standards: foundation, general disclosures, and management approach (Figure 13.9). The foundation standard (GRI 101) is the starting point: it introduces the 10 reporting principles and explains how to prepare a report in accordance with the standards. General Disclosures (GRI 102) is for reporting contextual information about the organization and its reporting practices. Management Approach (GRI 103) is used to report how a firm manages each of its material topics. Applying the materiality principle, the organization identifies its material topics, explains why each is material, and then shows where the impacts occur. Then it selects the topic-specific standards most significant to its own stakeholders. A short sketch of this modular structure follows.

71 Global Reporting Initiative (GRI). "GRI Standards." n.d. https://www.globalreporting.org/standards
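To visualize the modular structure just described, here is a minimal sketch of a GRI report outline expressed as a simple data structure. The universal standards are as named in the text; the topic-specific selections shown are hypothetical, since each reporting organization chooses the topics material to its own stakeholders.

```python
# A minimal sketch of the modular GRI standards structure.
# The topic-specific standards listed are a hypothetical selection.

gri_report_outline = {
    "universal_standards": {
        "GRI 101": "Foundation - reporting principles and preparing a report",
        "GRI 102": "General Disclosures - context about the organization",
        "GRI 103": "Management Approach - how each material topic is managed",
    },
    # Chosen by applying the materiality principle to the organization:
    "topic_specific_standards": [
        "GRI 303: Water and Effluents",
        "GRI 305: Emissions",
        "GRI 401: Employment",
    ],
}

for code, description in gri_report_outline["universal_standards"].items():
    print(f"{code}: {description}")
```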
Though the GRI has provided a framework, a firm's decision about what to report rests on its definition of materiality. GRI defines materiality in the context of a sustainability report as follows: "The report should cover Aspects that: Reflect the organization's significant economic, environmental and social impacts; or substantively influence the assessments and decisions of stakeholders." 72

72 Global Reporting Initiative (GRI). G4 Sustainability Reporting Guidelines: Reporting Principles and Standard Disclosures. 2013.

In its 2016 report, Coca-Cola listed these areas as its primary sustainability goals:

Agriculture
Human and Workplace Rights
Climate Protection
Giving Back
Water Stewardship
Packaging and Recycling
Women's Economic Development 73

73 Coca-Cola Company. "2016 Sustainability Report: Women's Economic Empowerment." August 17, 2017. https://www.coca-colacompany.com/stories/2016-womens-economic-empowerment

Dow Chemical issues a different type of report and lists these categories:

Who We Are—Strategy and Profile
Why We Do It—Global Challenges
What We Do—Our Products and Solutions
How We Do It—Our People and Operations
Awards and Recognitions 74

74 Dow Chemical Company. Redefining the Role of Business in Society: 2016 Sustainability Report. 2017. http://storage.dow.com.edgesuite.net/dow.com/sustainability/highlights/Dow_2016_Sustainability_Report.pdf

Sustainability reporting is not confined to manufacturing or merchandising; service organizations report as well. For example, Bank of America states in its 2016 sustainability report: "At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. We deliver on this through a focus on responsible growth and environmental, social and governance leadership. Through these efforts, we are driving growth—investing in the success of our employees, helping to create jobs, develop communities, foster economic mobility and address society's biggest challenges—while managing risk and providing a return to our clients and our business." 75 More information about the GRI can be found on the web.

75 Bank of America. "Responsible Growth." n.d. https://about.bankofamerica.com/en-us/what-guides-us/driving-responsible-growth.html

Concepts In Practice

Sustainability in Mobile Telecommunications

With more than 460,000 employees, China Mobile Limited is the largest mobile telecommunications company in the world. The company published its first GRI report in 2006, and since then it has been able to review and disclose key sustainability performance indicators. Wen Xuelian, who is responsible for CSR reporting and management, told GRI that sustainability reporting has helped the company keep track of material sustainability issues and improve overall performance each year. Xuelian notes that "at China Mobile we have built our CSR management systems by combining elements of the GRI framework with the operational infrastructure that we already had in place." 76

76 Xuelian, Wen. "China Mobile: Helping Build a Robust Sustainability Reporting Community in China." GRI. Nov. 7, 2017. https://www.globalreporting.org/information/news-and-press-center/Pages/China-Mobile-Helping-build-a-robust-sustainability-reporting-community-in-China.aspx

Another challenge, Xuelian explains, was quantifying the costs and benefits of the company's sustainability efforts.
"Over the years of reporting, we have gradually built up relevant systems and incorporated social and environmental impact assessments into the early stage of business development and introduced external assessment methods for better evaluation." 77

77 Xuelian, Wen. "China Mobile: Helping Build a Robust Sustainability Reporting Community in China." GRI. Nov. 7, 2017. https://www.globalreporting.org/information/news-and-press-center/Pages/China-Mobile-Helping-build-a-robust-sustainability-reporting-community-in-China.aspx

The company addressed material issues such as network connectivity, information security, using information to benefit society, energy conservation, GHG emissions, reduction of poverty, employee development, and anti-corruption efforts, and sustainability reporting helped it become more transparent in its operations. In the 10 years since implementation, the company has reduced its electricity consumption per unit of business volume by 94%, built over 13,000 new-energy base stations, reduced timber usage in packaging by over 600,000 cubic meters, and introduced smart digital solutions for community emissions reductions. 78

78 Xuelian, Wen. "China Mobile: Helping Build a Robust Sustainability Reporting Community in China." GRI. Nov. 7, 2017. https://www.globalreporting.org/information/news-and-press-center/Pages/China-Mobile-Helping-build-a-robust-sustainability-reporting-community-in-China.aspx

Link to Learning

Visit the GRI website and select one of the companies in the featured reports. Locate the company's sustainability report on its website and then find its oldest available sustainability report. How has the company improved its corporate social responsibility performance since it implemented GRI reporting?

Sustainability Accounting Standards Board (SASB)

GRI standards were targeted at a variety of stakeholders, from the community at large to investors and lenders. This meant that the scope of disclosure encouraged by the GRI standards was perhaps too broad for companies primarily focused on routine reporting to investors. Investors have their own unique needs for sustainability information. Their concerns relate to the price and value of the organization, whereas other stakeholders are interested in how the company might affect them specifically. This effect may not even be financial; it could be whether the company pollutes in its local community, or how the firm treats its workers. For this reason, the Sustainability Accounting Standards Board (SASB) was established in 2011. SASB's mission is to help businesses around the world identify, manage, and report on the sustainability topics that matter most to their investors. The SASB develops standards for the disclosure of material sustainability information to investors, which can meet the disclosure requirements for known trends and uncertainties in the Management Discussion and Analysis section filed with the Securities and Exchange Commission. SASB's version of materiality differs somewhat from the GRI's version.
Whereas the GRI viewed materiality as the inclusion of information that reflects an organization's significant economic, environmental, and social impacts or its substantial influence on the assessments and decisions of stakeholders, SASB adopted the U.S. Supreme Court's view that information is material if there is "a substantial likelihood that the disclosure of the omitted fact would have been viewed by the reasonable investor as having significantly altered the 'total mix' of information made available." 79 It is up to the firm to determine whether something is material and needs reporting, and this determination would begin with the initial questions "Is the topic important to the total mix of information?" and "Would it be of interest to the reasonable investor?" 80

79 TSC Indus. v. Northway, Inc. (426 U.S. 438, 449 (1976)).
80 The explanation of SASB's interpretation of "total mix" can be viewed on its website. Sustainability Accounting Standards Board (SASB). SASB's Approach to Materiality for the Purpose of Standards Development (Staff Bulletin No. SB002-07062017). July 6, 2017. http://library.sasb.org/wp-content/uploads/2017/01/ApproachMateriality-Staff-Bulletin-01192017.pdf?hsCtaTracking=9280788c-d775-4b34-8bc8-5447a06a6d38%7C2e22652a-5486-4854-b68f-73fea01a2414

The SASB standards, available for 79 industries across 10 sectors, help firms disclose material sustainability factors that are likely to affect financial performance. For example, a company that has operations in a developing nation may need to disclose its employment practices in that country to inform users of the risks to which the company is exposed because of those operations. Visit the SASB Standards and Framework page to see the current SASB conceptual framework.

Integrated Reporting

Even though companies were reporting through a range of mechanisms—sustainability reports, triple bottom line, and CSR reports—these methods of reporting were seen as fragmented, failing to integrate financial and non-financial information into one report (Figure 13.10). 81 The methods also "failed to make the connection between the organization's strategy, its financial performance and its performance on environmental, social and governance issues." 82

81 Wendy Stubbs, Colin Higgins, and Markus Milne. "Why Do Companies Not Produce Sustainability Reports?" November 12, 2012. Business Strategy and the Environment 22(7): 456–470.
82 Harold P. Roth. "Is Integrated Reporting in the Future?" April 22, 2014. CPA Journal 84(3): 62–67. https://insurancenewsnet.com/oarticle/Is-Integrated-Reporting-in-the-Future-a-493109#.XC6iGGm1vpw

In response to these criticisms, the International Integrated Reporting Council (IIRC) was formed in 2010, touting Integrated Reporting as a solution to the shortfalls of financial reporting. Its intent is to act as a catalyst for behavioral change and long-term thinking, 83 bringing together financial, social, environmental, and governance information in a clear, concise, consistent, and comparable format. 84

83 Stathis Gould. "Integrated Reporting <IR> Longs for Finance Professionals." International Federation of Accountants (IFAC). February 2, 2017. https://www.ifac.org/global-knowledge-gateway/business-reporting/discussion/integrated-reporting-longs-finance
84 International Federation of Accountants. "A4S and GRI Announce Formation of the IIRC." August 2, 2010.
https://www.ifac.org/news-events/a4s-and-gri-announces-formation-iirc-0

The goals of Integrated Reporting are to:

improve the quality of information provided to investors and lenders;
communicate the full range of factors that materially affect the ability of an organization to create value over time, by using a more cohesive and efficient approach to corporate reporting that draws on different reporting strands;
enhance accountability and stewardship for the broad base of six capitals (financial, manufactured, intellectual, human, social and relationship, and natural) and promote understanding of their interdependencies; and
support integrated thinking, decision-making, and actions so as to create value. 85

85 International Integrated Reporting Council (IIRC). The International Integrated Reporting Framework. 2013. http://integratedreporting.org/wp-content/uploads/2013/12/13-12-08-THE-INTERNATIONAL-IR-FRAMEWORK-2-1.pdf

As outlined, the Integrated Reporting framework identifies six broad categories of capital used by organizations: financial, manufactured, intellectual, human, social and relationship, and natural. Whether information should be prepared and presented, that is, whether its inclusion is material, is determined by:

Identifying relevant matters based on their ability to affect value creation, that is, how the organization's activities increase, decrease, or transform the capitals. This may be value created for the organization itself or for stakeholders, including society itself.
Evaluating the importance of relevant matters in terms of their known or potential effect on value creation. This includes evaluating the magnitude of an occurrence's effect and its likelihood of occurrence.
Prioritizing the matters based on their relative importance, so as to focus on the most important matters when determining how they should be reported.
Determining what information to disclose about material matters. This may require some judgment and discussion with stakeholders to ensure that the report meets its primary purpose. 86

A small sketch of the evaluation and prioritization steps appears after the citation below.

86 International Integrated Reporting Council (IIRC). The International Integrated Reporting Framework. 2013. http://integratedreporting.org/wp-content/uploads/2013/12/13-12-08-THE-INTERNATIONAL-IR-FRAMEWORK-2-1.pdf
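Here is a minimal sketch of the evaluate-and-prioritize steps described above. The topics, the 0-to-1 scores, and the inclusion threshold are all hypothetical; the framework itself calls for judgment, not a fixed formula.

```python
# A minimal sketch of materiality evaluation and prioritization.
# Topics, 0-1 scores, and the threshold are hypothetical illustrations;
# the <IR> Framework calls for judgment rather than a fixed formula.

matters = [
    {"topic": "water use",       "magnitude": 0.9, "likelihood": 0.8},
    {"topic": "data privacy",    "magnitude": 0.6, "likelihood": 0.9},
    {"topic": "packaging waste", "magnitude": 0.4, "likelihood": 0.5},
]

THRESHOLD = 0.40  # hypothetical cut-off for inclusion in the report

for m in matters:
    m["score"] = m["magnitude"] * m["likelihood"]  # effect x likelihood

material = sorted(
    (m for m in matters if m["score"] >= THRESHOLD),
    key=lambda m: m["score"],
    reverse=True,
)

for m in material:
    print(f"{m['topic']}: score {m['score']:.2f}")
# water use: score 0.72
# data privacy: score 0.54
```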
Integrated Reporting has been adopted by a number of companies throughout the world and is mandatory for listed companies in South Africa and Brazil. So far it has been slow to take hold in the United States; however, a number of companies have implemented Integrated Reporting, including Clorox, Entergy, General Electric, Jones Lang LaSalle, PepsiCo, Prudential Financial, and Southwest Airlines. You can find more information about the IR framework by visiting the Integrated Reporting website.

13.4 Future Issues in Sustainability

Sustainability reporting is still relatively new, and its use is not yet mandatory. But from the standpoint of materiality, companies should disclose information if it has become important enough to influence the decisions of users of financial information. The focus on sustainability has led to some notable innovation. For example, Tesla Corporation has become the United States' premier electric car manufacturer and is planning an electric semi-trailer to compete with diesel semi-trailers. The company has also made huge strides in the development of economically viable battery and solar technologies, including affordable, attractive glass solar tiles that can provide all the electricity necessary for a typical home. The Tesla Gigafactory, located in Sparks, Nevada, expects to be able to produce more lithium-ion batteries in one year than were produced globally in 2013.

If industries reduce carbon emissions and improve social responsibility, what issues remain to guide the quest for sustainability in the future? One possibility is the need for security against cyberattacks, which not only harm the company's functioning but also dent consumer confidence. Another issue will be whether companies can continue to become or remain global in their operations as political winds shift and the potential arises for backlash against the resulting economic changes in industrialized nations. A third issue is the role of artificial intelligence (AI). As AI gains prominence and robots become more capable of undertaking complex tasks, white-collar workers of the 21st century may find themselves losing jobs the way their 20th-century manufacturing counterparts did. This prospect raises a number of ethical questions, such as whether corporations have a greater responsibility to society than to shareholders, and whether the use of robots should be taxed so that governments can fund retraining for displaced workers and a universal basic income. 88

88 Catherine Clifford. "Automation Could Kill 2× More Jobs Than the Great Depression—so San Francisco Lawmaker Pushes for Bill Gates' 'Robot Tax.'" CNBC. August 24, 2017. https://www.cnbc.com/2017/08/24/san-francisco-lawmaker-pushes-forward-bill-gates-robot-tax.html

AI can herald positive change as well. It is expected, for instance, that 10 million self-driving cars will be on the road by 2020, 89 most of them electric and rechargeable using wind or solar power. In fact, you may not even need to own a vehicle at all. Instead, you can be taken to work in a driverless car that will drop you off and then collect other passengers.

89 Business Insider Intelligence. "10 Million Self-Driving Cars Will Be on the Road by 2020." Business Insider. June 15, 2016. http://www.businessinsider.com/report-10-million-self-driving-cars-will-be-on-the-road-by-2020-2015-5-6

These changes are examples of what some call the technological revolution. 90 To maintain relevance, today's workers must be multi-skilled and innovative, with analytical minds able to think critically and creatively. These types of shifts can increase stress for employees and mean that businesses will be subject to high degrees of scrutiny by stakeholders. As a result, stakeholders will demand that companies be more accountable than simply providing financial reports.

90 Klaus Schwab. "Are You Ready for the Technological Revolution?" World Economic Forum. February 19, 2015. https://www.weforum.org/agenda/2015/02/are-you-ready-for-the-technological-revolution/

Think It Through

Robot Tax

In 2017, Microsoft founder Bill Gates called for a "robot tax" to be introduced to offset the inequality expected to result from automation. 91 He called for the robot tax to finance a universal basic income (UBI). A universal basic income is a regular and unconditional amount of money, sufficient to meet basic needs, that citizens would receive from the government.
Another similar concept is that of a universal basic dividend (UBD), by which a portion of the initial public offerings (IPOs) of companies would go into a public trust that generates an income stream to pay the UBD. 92

91 Yanis Varoufakis. "Robot Taxes and Universal Basic Income." Acuity. June 16, 2017. https://www.acuitymag.com/technology/robot-taxes-and-universal-basic-income
92 Yanis Varoufakis. "Robot Taxes and Universal Basic Income." Acuity. June 16, 2017. https://www.acuitymag.com/technology/robot-taxes-and-universal-basic-income

What are the costs to society of increased automation? How might a robot tax be calculated and implemented?

The discussion of environmental and social responsibility in this chapter only touched on some of the issues that affect our world. Sustainability reporting not only allows companies to report what they are doing to be good global citizens; it also makes them more aware of the areas in which they need to improve. Awareness of those areas allows companies to create a plan to continually improve their role in society. In addition, as more and more companies assess their own social responsibility and move to improve their sustainability, attention is drawn to unreported sustainability issues as well as to companies that are not being socially aware. Social responsibility reporting has moved us a long way from merely reporting the financial results of businesses. It provides a foundation that links all businesses to all citizens, whether they are shareholders or not, and it helps bind us all in a way that says we are all truly part of a single, global environment shaped by the actions of both businesses and citizens.
u.s._history
Summary 32.1 The War on Terror George W. Bush’s first term in office began with al-Qaeda’s deadly attacks on the World Trade Center and the Pentagon on September 11, 2001. Shortly thereafter, the United States found itself at war with Afghanistan, which was accused of harboring the 9/11 mastermind, Osama bin Laden, and his followers. Claiming that Iraq’s president Saddam Hussein was building weapons of mass destruction, perhaps with the intent of attacking the United States, the president sent U.S. troops to Iraq as well in 2003. Thousands were killed, and many of the men captured by the United States were imprisoned and sometimes tortured for information. The ease with which Hussein was deposed led the president to declare that the mission in Iraq had been accomplished only a few months after it began. He was, however, mistaken. Meanwhile, the establishment of the Office of Homeland Security and the passage of the Homeland Security Act and USA Patriot Act created new means and levels of surveillance to identify potential threats. 32.2 The Domestic Mission When George W. Bush took office in January 2001, he was committed to a Republican agenda. He cut tax rates for the rich and tried to limit the role of government in people’s lives, in part by providing students with vouchers to attend charter and private schools, and encouraging religious organizations to provide social services instead of the government. While his tax cuts pushed the United States into a chronically large federal deficit, many of his supply-side economic reforms stalled during his second term. In 2005, Hurricane Katrina underscored the limited capacities of the federal government under Bush to assure homeland security. In combination with increasing discontent over the Iraq War, these events handed Democrats a majority in both houses in 2006. Largely as a result of a deregulated bond market and dubious innovations in home mortgages, the nation reached the pinnacle of a real estate boom in 2007. The threatened collapse of the nations’ banks and investment houses required the administration to extend aid to the financial sector. Many resented this bailout of the rich, as ordinary citizens lost jobs and homes in the Great Recession of 2008. 32.3 New Century, Old Disputes The nation’s increasing diversity—and with it, the fact that White Caucasians will soon be a demographic minority—prompted a conservative backlash that continues to manifest itself in debates about immigration. Questions of who is an American and what constitutes a marriage continue to be debated, although the answers are beginning to change. As some states broadened civil rights to include gays and lesbians, groups opposed to these developments sought to impose state constitutional restrictions. From this flurry of activity, however, a new political consensus for expanding marriage rights has begun to emerge. On the issue of climate change, however, polarization has increased. A strong distrust of science among Americans has divided the political parties and hampered scientific research. 32.4 Hope and Change Despite Republican resistance and political gridlock in Washington during his first term in office, President Barack Obama oversaw the distribution of the TARP program’s $7.77 trillion to help shore up the nation’s banking system, and Congress authorized $80 billion to help Chrysler and General Motors. 
The goals of Obama’s Patient Protection and Affordable Care Act (Obamacare) were to provide all Americans with access to affordable health insurance, to require that everyone in the United States had some form of health insurance, and to lower the costs of healthcare. During his second term, the nation struggled to grow modestly, the percentage of the population living in poverty remained around 15 percent, and unemployment was still high in some areas. Acceptance of same-sex marriage grew, and the United States sharply reduced its military commitments in Iraq and Afghanistan.
Chapter Outline 32.1 The War on Terror 32.2 The Domestic Mission 32.3 New Century, Old Disputes 32.4 Hope and Change Introduction On the morning of September 11, 2001, hopes that the new century would leave behind the conflicts of the previous one were dashed when two hijacked airliners crashed into the twin towers of New York’s World Trade Center. When the first plane struck the north tower, many assumed that the crash was a horrific accident. But then a second plane hit the south tower less than thirty minutes later. People on the street watched in horror, as some of those trapped in the burning buildings jumped to their deaths and the enormous towers collapsed into dust. In the photo above, the Statue of Liberty appears to look on helplessly, as thick plumes of smoke obscure the Lower Manhattan skyline ( Figure 32.1 ). The events set in motion by the September 11 attacks would raise fundamental questions about the United States’ role in the world, the extent to which privacy should be protected at the cost of security, the definition of exactly who is an American, and the cost of liberty.
[ { "answer": { "ans_choice": 3, "ans_text": "the NSA" }, "bloom": null, "hl_context": "While the CIA operates overseas , the Federal Bureau of Investigation ( FBI ) is the chief federal law enforcement agency within U . S . national borders . Its activities are limited by , among other things , the Fourth Amendment , which protects citizens against unreasonable searches and seizures . <hl> Beginning in 2002 , however , the Bush administration implemented a wide-ranging program of warrantless domestic wiretapping , known as the Terrorist Surveillance Program , by the National Security Agency ( NSA ) . <hl> The shaky constitutional basis for this program was ultimately revealed in August 2006 , when a federal judge in Detroit ordered the program ended immediately .", "hl_sentences": "Beginning in 2002 , however , the Bush administration implemented a wide-ranging program of warrantless domestic wiretapping , known as the Terrorist Surveillance Program , by the National Security Agency ( NSA ) .", "question": { "cloze_format": "Unwarranted wiretapping in the United States was conducted by ________.", "normal_format": "What was unwarranted wiretapping in the United States conducted by?", "question_choices": [ "the FBI", "the CIA", "the New York Times", "the NSA" ], "question_id": "fs-idm166676576", "question_text": "Unwarranted wiretapping in the United States was conducted by ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "Lehman Brothers" }, "bloom": null, "hl_context": "<hl> Realizing that the failure of major financial institutions could result in the collapse of the entire U . S . economy , the chairman of the Federal Reserve , Ben Bernanke , authorized a bailout of the Wall Street firm Bear Stearns , although months later , the financial services firm Lehman Brothers was allowed to file for the largest bankruptcy in the nation ’ s history . <hl> Members of Congress met with Bernanke and Secretary of the Treasury Henry Paulson in September 2008 , to find a way to head off the crisis . They agreed to use $ 700 billion in federal funds to bail out the troubled institutions , and Congress subsequently passed the Emergency Economic Stabilization Act , creating the Troubled Asset Relief Program ( TARP ) . One important element of this program was aid to the auto industry : The Bush administration responded to their appeal with an emergency loan of $ 17.4 billion — to be executed by his successor after the November election — to stave off the industry ’ s collapse . When the real estate market stalled after reaching a peak in 2007 , the house of cards built by the country ’ s largest financial institutions came tumbling down . People began to default on their loans , and more than one hundred mortgage lenders went out of business . American International Group ( AIG ) , a multinational insurance company that had insured many of the investments , faced collapse . Other large financial institutions , which had once been prevented by federal regulations from engaging in risky investment practices , found themselves in danger , as they either were besieged by demands for payment or found their demands on their own insurers unmet . <hl> The prestigious investment firm Lehman Brothers was completely wiped out in September 2008 . <hl> Some endangered companies , like Wall Street giant Merrill Lynch , sold themselves to other financial institutions to survive . A financial panic ensued that revealed other fraudulent schemes built on CDOs . 
The biggest among them was a pyramid scheme organized by the New York financier Bernard Madoff , who had defrauded his investors by at least $ 18 billion .", "hl_sentences": "Realizing that the failure of major financial institutions could result in the collapse of the entire U . S . economy , the chairman of the Federal Reserve , Ben Bernanke , authorized a bailout of the Wall Street firm Bear Stearns , although months later , the financial services firm Lehman Brothers was allowed to file for the largest bankruptcy in the nation ’ s history . The prestigious investment firm Lehman Brothers was completely wiped out in September 2008 .", "question": { "cloze_format": "___ went bankrupt in 2008, signaling the beginning of a major economic crisis.", "normal_format": "What investment banking firm went bankrupt in 2008, signaling the beginning of a major economic crisis?", "question_choices": [ "CitiBank", "Wells Fargo", "Lehman Brothers", "Price Waterhouse" ], "question_id": "fs-idm383557360", "question_text": "What investment banking firm went bankrupt in 2008, signaling the beginning of a major economic crisis?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "Notwithstanding economic growth in the 1990s and steadily increasing productivity , wages had remained largely flat relative to inflation since the end of the 1970s ; despite the mild recovery , they remained so . To compensate , many consumers were buying on credit , and with interest rates low , financial institutions were eager to oblige them . By 2008 , credit card debt had risen to over $ 1 trillion . <hl> More importantly , banks were making high-risk , high-interest mortgage loans called subprime mortgages to consumers who often misunderstood their complex terms and lacked the ability to make the required payments . <hl>", "hl_sentences": "More importantly , banks were making high-risk , high-interest mortgage loans called subprime mortgages to consumers who often misunderstood their complex terms and lacked the ability to make the required payments .", "question": { "cloze_format": "A subprime mortgage is ________.", "normal_format": "What is a subprime mortgage?", "question_choices": [ "a high-risk, high-interest loan", "a federal bailout for major banks", "a form of insurance on investments", "a form of political capital" ], "question_id": "fs-idm259226576", "question_text": "A subprime mortgage is ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Defining American Arizona Bans Mexican American Studies In 2010 , Arizona passed a law barring the teaching of any class that promoted “ resentment ” of students of other races or encouraged “ ethnic solidarity . ” The ban , to take effect on December 31 of that year , included a popular Mexican American studies program taught at elementary , middle , and high schools in the city of Tucson . <hl> <hl> The program , which focused on teaching students about Mexican American history and literature , was begun in 1998 , to convert high absentee rates and low academic performance among Latino students , and proved highly successful . <hl> Public school superintendent Tom Horne objected to the course , however , claiming it encouraged resentment of Whites and of the U . S . 
government , and improperly encouraged students to think of themselves as members of a race instead of as individuals .", "hl_sentences": "Defining American Arizona Bans Mexican American Studies In 2010 , Arizona passed a law barring the teaching of any class that promoted “ resentment ” of students of other races or encouraged “ ethnic solidarity . ” The ban , to take effect on December 31 of that year , included a popular Mexican American studies program taught at elementary , middle , and high schools in the city of Tucson . The program , which focused on teaching students about Mexican American history and literature , was begun in 1998 , to convert high absentee rates and low academic performance among Latino students , and proved highly successful .", "question": { "cloze_format": "A popular Mexican American studies program was banned by the state of ________, which accused it of causing resentment of White people.", "normal_format": "By which state was a popular Mexican American studies program banned, which accused it of causing resentment of White people?", "question_choices": [ "New Mexico", "California", "Arizona", "Texas" ], "question_id": "fs-idm228198608", "question_text": "A popular Mexican American studies program was banned by the state of ________, which accused it of causing resentment of White people." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "Massachusetts" }, "bloom": null, "hl_context": "Following Vermont ’ s lead , several other states legalized same-sex marriages or civil unions among gay and lesbian couples . In 2004 , the Massachusetts Supreme Judicial Court ruled that barring gays and lesbians from marrying violated the state constitution . <hl> The court held that offering same-sex couples the right to form civil unions but not marriage was an act of discrimination , and Massachusetts became the first state to allow same-sex couples to marry . <hl> Not all states followed suit , however , and there was a backlash in several states . Between 1998 and 2012 , thirty states banned same-sex marriage either by statute or by amending their constitutions . Other states attempted , unsuccessfully , to do the same . In 2007 , the Massachusetts State Legislature rejected a proposed amendment to the state ’ s constitution that would have prohibited such marriages .", "hl_sentences": "The court held that offering same-sex couples the right to form civil unions but not marriage was an act of discrimination , and Massachusetts became the first state to allow same-sex couples to marry .", "question": { "cloze_format": "The first state to allow same-sex marriage was ________.", "normal_format": "What was the first state to allow same-sex marriage?", "question_choices": [ "Massachusetts", "New York", "California", "Pennsylvania" ], "question_id": "fs-idm107004160", "question_text": "The first state to allow same-sex marriage was ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "2013" }, "bloom": null, "hl_context": "During Barack Obama ’ s second term in office , courts began to counter efforts by conservatives to outlaw same-sex marriage . <hl> A series of decisions declared nine states ’ prohibitions against same-sex marriage to be unconstitutional , and the Supreme Court rejected an attempt to overturn a federal court ruling to that effect in California in June 2013 . 
<hl> Shortly thereafter , the Supreme Court also ruled that the Defense of Marriage Act of 1996 was unconstitutional , because it violated the Equal Protection Clause of the Fourteenth Amendment . These decisions seem to allow legal challenges in all the states that persist in trying to block same-sex unions .", "hl_sentences": "A series of decisions declared nine states ’ prohibitions against same-sex marriage to be unconstitutional , and the Supreme Court rejected an attempt to overturn a federal court ruling to that effect in California in June 2013 .", "question": { "cloze_format": "The U.S. Supreme Court ruled the Defense of Marriage Act unconstitutional in ________.", "normal_format": "When the U.S. Supreme Court ruled the Defense of Marriage Act unconstitutional?", "question_choices": [ "2007", "2009", "2013", "2014" ], "question_id": "fs-idp283014320", "question_text": "The U.S. Supreme Court ruled the Defense of Marriage Act unconstitutional in ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "The act , which created the program known as Obamacare , represented the first significant overhaul of the American healthcare system since the passage of Medicaid in 1965 . <hl> Its goals were to provide all Americans with access to affordable health insurance , to require that everyone in the United States acquire some form of health insurance , and to lower the costs of healthcare . <hl> The plan , which made use of government funding , created private insurance company exchanges to market various insurance packages to enrollees .", "hl_sentences": "Its goals were to provide all Americans with access to affordable health insurance , to require that everyone in the United States acquire some form of health insurance , and to lower the costs of healthcare .", "question": { "cloze_format": "___ is not a goal of Obamacare.", "normal_format": "Which of the following is not a goal of Obamacare (the Patient Protection and Affordable Care Act)?", "question_choices": [ "to provide all Americans with access to affordable health insurance", "to require that everyone in the United States acquire some form of health insurance", "to lower the costs of healthcare", "to increase employment in the healthcare industry" ], "question_id": "fs-idm115353008", "question_text": "Which of the following is not a goal of Obamacare (the Patient Protection and Affordable Care Act)?" }, "references_are_paraphrase": null } ]
32.1 The War on Terror

Learning Objectives
By the end of this section, you will be able to:
Discuss how the United States responded to the terrorist attacks of September 11, 2001
Explain why the United States went to war against Afghanistan and Iraq
Describe the treatment of suspected terrorists by U.S. law enforcement agencies and the U.S. military

As a result of the narrow decision of the U.S. Supreme Court in Bush v. Gore, Republican George W. Bush was declared the winner of the 2000 presidential election with a majority in the Electoral College of 271 votes to 266, although he received approximately 540,000 fewer popular votes nationally than his Democratic opponent, Bill Clinton’s vice president, Al Gore. Bush had campaigned with a promise of “compassionate conservatism” at home and nonintervention abroad. These platform planks were designed to appeal to those who felt that the Clinton administration’s initiatives in the Balkans and Africa had unnecessarily entangled the United States in the conflicts of foreign nations. Bush’s 2001 education reform act, dubbed No Child Left Behind, had strong bipartisan support and reflected his domestic interests. But before the president could sign the bill into law, the world changed when four American airliners were hijacked and used in the single most deadly act of terrorism in the United States. Bush’s domestic agenda quickly took a backseat, as the president swiftly changed course from nonintervention in foreign affairs to a “war on terror.”

9/11

Shortly after takeoff on the morning of September 11, 2001, teams of hijackers from the Islamist terrorist group al-Qaeda seized control of four American airliners. Two of the airplanes were flown into the twin towers of the World Trade Center in Lower Manhattan. Morning news programs that were filming the moments after the first impact, then assumed to be an accident, captured and aired live footage of the second plane, as it barreled into the other tower in a flash of fire and smoke. Less than two hours later, the heat from the crash and the explosion of jet fuel caused the upper floors of both buildings to collapse onto the lower floors, reducing both towers to smoldering rubble. The passengers and crew on both planes, as well as 2,606 people in the two buildings, all died, including 343 New York City firefighters who rushed in to save victims shortly before the towers collapsed. The third hijacked plane was flown into the Pentagon building in northern Virginia, just outside Washington, DC, killing everyone on board and 125 people on the ground. The fourth plane, also heading towards Washington, crashed in a field near Shanksville, Pennsylvania, when passengers, aware of the other attacks, attempted to storm the cockpit and disarm the hijackers. Everyone on board was killed (Figure 32.3).

That evening, President Bush promised the nation that those responsible for the attacks would be brought to justice. Three days later, Congress issued a joint resolution authorizing the president to use all means necessary against the individuals, organizations, or nations involved in the attacks. On September 20, in an address to a joint session of Congress, Bush declared war on terrorism, blamed al-Qaeda leader Osama bin Laden for the attacks, and demanded that the radical Islamic fundamentalists who ruled Afghanistan, the Taliban, turn bin Laden over or face attack by the United States.
This speech encapsulated what became known as the Bush Doctrine, the belief that the United States has the right to protect itself from terrorist acts by engaging in pre-emptive wars or ousting hostile governments in favor of friendly, preferably democratic, regimes. World leaders and millions of their citizens expressed support for the United States and condemned the deadly attacks. Russian president Vladimir Putin characterized them as a bold challenge to humanity itself. German chancellor Gerhard Schröder said the events of that day were “not only attacks on the people in the United States, our friends in America, but also against the entire civilized world, against our own freedom, against our own values, values which we share with the American people.” Yasser Arafat, chairman of the Palestine Liberation Organization and a veteran of several bloody struggles against Israel, was dumbfounded by the news and announced to reporters in Gaza, “We completely condemn this very dangerous attack, and I convey my condolences to the American people, to the American president and to the American administration.”

GOING TO WAR IN AFGHANISTAN

When it became clear that the mastermind behind the attack was Osama bin Laden, a wealthy Saudi Arabian national who ran his terror network from Afghanistan, the full attention of the United States turned towards Central Asia and the Taliban. Bin Laden had deep roots in Afghanistan. Like many others from around the Islamic world, he had come to the country to oust the Soviet army, which invaded Afghanistan in 1979. Ironically, both bin Laden and the Taliban received material support from the United States at that time. By the late 1980s, the Soviets and the Americans had both left, although bin Laden, by that time the leader of his own terrorist organization, al-Qaeda, remained. The Taliban refused to turn bin Laden over, and the United States began a bombing campaign in October 2001, allying with the Afghan Northern Alliance, a coalition of tribal leaders opposed to the Taliban. U.S. air support was soon augmented by ground troops (Figure 32.4). By November 2001, the Taliban had been ousted from power in Afghanistan’s capital of Kabul, but bin Laden and his followers had already escaped across the Afghan border to mountain sanctuaries in northern Pakistan.

IRAQ

At the same time that the U.S. military was taking control of Afghanistan, the Bush administration was looking toward a new and larger war with Iraq. Relations between the United States and Iraq had been strained ever since the Gulf War a decade earlier. Economic sanctions imposed on Iraq by the United Nations, and American attempts to foster internal revolts against President Saddam Hussein’s government, had further tainted the relationship. A faction within the Bush administration, sometimes labeled neoconservatives, believed Iraq’s recalcitrance in the face of overwhelming U.S. military superiority represented a dangerous symbol to terrorist groups around the world, recently emboldened by the dramatic success of the al-Qaeda attacks in the United States. Powerful members of this faction, including Vice President Dick Cheney and Secretary of Defense Donald Rumsfeld, believed the time to strike Iraq and solve this festering problem was right then, in the wake of 9/11. Others, like Secretary of State Colin Powell, a highly respected veteran of the Vietnam War and former chair of the Joint Chiefs of Staff, were more cautious about initiating combat.
The more militant side won, and the argument for war was gradually laid out for the American people. The immediate impetus to the invasion, proponents argued, was the fear that Hussein was stockpiling weapons of mass destruction (WMDs): nuclear, chemical, or biological weapons capable of wreaking great havoc. Hussein had in fact used WMDs against Iranian forces during his war with Iran in the 1980s, and against the Kurds in northern Iraq in 1988—a time when the United States actively supported the Iraqi dictator. Following the Gulf War, inspectors from the United Nations Special Commission and International Atomic Energy Agency had located and destroyed stockpiles of Iraqi weapons. Those arguing for a new Iraqi invasion insisted, however, that weapons still existed. President Bush himself told the nation in October 2002 that the United States was “facing clear evidence of peril, we cannot wait for the final proof—the smoking gun—that could come in the form of a mushroom cloud.” The head of the United Nations Monitoring, Verification and Inspection Commission, Hans Blix, dismissed these claims. Blix argued that while Saddam Hussein was not being entirely forthright, he did not appear to be in possession of WMDs. Despite Blix’s findings and his own earlier misgivings, Powell argued in 2003 before the United Nations General Assembly that Hussein had violated UN resolutions. Much of his evidence relied on secret information provided by an informant that was later proven to be false. On March 17, 2003, the United States cut off all relations with Iraq. Two days later, in a coalition with Great Britain, Australia, and Poland, the United States began “Operation Iraqi Freedom” with an invasion of Iraq.

Other arguments supporting the invasion noted the ease with which the operation could be accomplished. In February 2002, some in the Department of Defense were suggesting the war would be “a cakewalk.” In November, referencing the short and successful Gulf War of 1990–1991, Secretary of Defense Rumsfeld told the American people it was absurd, as some were claiming, that the conflict would degenerate into a long, drawn-out quagmire. “Five days or five weeks or five months, but it certainly isn’t going to last any longer than that,” he insisted. “It won’t be a World War III.” And, just days before the start of combat operations in 2003, Vice President Cheney announced that U.S. forces would likely “be greeted as liberators,” and the war would be over in “weeks rather than months.”

Early in the conflict, these predictions seemed to be coming true. The march into Baghdad went fairly smoothly. Soon Americans back home were watching on television as U.S. soldiers and the Iraqi people worked together to topple statues of the deposed leader Hussein around the capital. The reality, however, was far more complex. While American deaths had been few, thousands of Iraqis had died, and the seeds of internal strife and resentment against the United States had been sown. The United States was not prepared for a long period of occupation; it was also not prepared for the inevitable problems of law and order, or for the violent sectarian conflicts that emerged. Thus, even though Bush proclaimed a U.S. victory in May 2003, on the deck of the USS Abraham Lincoln with the banner “Mission Accomplished” prominently displayed behind him, the celebration proved premature by more than seven years (Figure 32.5).

My Story
Lt. General James Conway on the Invasion of Baghdad
Lt. General James Conway, who commanded the First Marine Expeditionary Force in Iraq, answers a reporter’s questions about civilian casualties during the 2003 invasion of Baghdad.

“As a civilian in those early days, one definitely had the sense that the high command had expected something to happen which didn’t. Was that a correct perception?”
—We were told by our intelligence folks that the enemy is carrying civilian clothes in their packs because, as soon as the shooting starts, they’re going put on their civilian clothes and they’re going go home. Well, they put on their civilian clothes, but not to go home. They put on civilian clothes to blend with the civilians and shoot back at us. . . .

“There’s been some criticism of the behavior of the Marines at the Diyala bridge [across the Tigris River into Baghdad] in terms of civilian casualties.”
—Well, after the Third Battalion, Fourth Marines crossed, the resistance was not all gone. . . . They had just fought to take a bridge. They were being counterattacked by enemy forces. Some of the civilian vehicles that wound up with the bullet holes in them contained enemy fighters in uniform with weapons, some of them did not. Again, we’re terribly sorry about the loss of any civilian life where civilians are killed in a battlefield setting. I will guarantee you, it was not the intent of those Marines to kill civilians. [The civilian casualties happened because the Marines] felt threatened, [and] they were having a tough time distinguishing from an enemy that [is violating] the laws of land warfare by going to civilian clothes, putting his own people at risk. All of those things, I think, [had an] impact [on the behavior of the Marines], and in the end it’s very unfortunate that civilians died.

Who in your opinion bears primary responsibility for the deaths of Iraqi civilians?

DOMESTIC SECURITY

The attacks of September 11 awakened many to the reality that the end of the Cold War did not mean an end to foreign violent threats. Some Americans grew wary of possible enemies in their midst, and hate crimes against Muslim Americans—and those thought to be Muslims—surged in the aftermath. Fearing that terrorists might strike within the nation’s borders again, and aware of the chronic lack of cooperation among different federal law enforcement agencies, Bush created the Office of Homeland Security in October 2001. The next year, Congress passed the Homeland Security Act, creating the Department of Homeland Security, which centralized control over a number of different government functions in order to better control threats at home (Figure 32.6). The Bush administration also pushed the USA Patriot Act through Congress, which enabled law enforcement agencies to monitor citizens’ e-mails and phone conversations without a warrant.

The Bush administration was fiercely committed to rooting out threats to the United States wherever they originated, and in the weeks after September 11, the Central Intelligence Agency (CIA) scoured the globe, sweeping up thousands of young Muslim men. Because U.S. law prohibits the use of torture, the CIA transferred some of these prisoners to other nations—a practice known as rendition or extraordinary rendition—where the local authorities can use methods of interrogation not allowed in the United States.

While the CIA operates overseas, the Federal Bureau of Investigation (FBI) is the chief federal law enforcement agency within U.S. national borders.
Its activities are limited by, among other things, the Fourth Amendment, which protects citizens against unreasonable searches and seizures. Beginning in 2002, however, the Bush administration implemented a wide-ranging program of warrantless domestic wiretapping, known as the Terrorist Surveillance Program, by the National Security Agency (NSA). The shaky constitutional basis for this program was ultimately revealed in August 2006, when a federal judge in Detroit ordered the program ended immediately.

The use of warrantless wiretaps to prosecute the war on terrorism was only one way the new threat challenged authorities in the United States. Another problem was deciding what to do with foreign terrorists captured on the battlefields in Afghanistan and Iraq. In traditional conflicts, where both sides are uniformed combatants, the rules of engagement and the treatment of prisoners of war are clear. But in the new war on terror, extracting intelligence about upcoming attacks became a top priority that superseded human rights and constitutional concerns. For that purpose, the United States began transporting men suspected of being members of al-Qaeda to the U.S. naval base at Guantanamo Bay, Cuba, for questioning. The Bush administration labeled the detainees “unlawful combatants,” in an effort to avoid affording them the rights guaranteed to prisoners of war, such as protection from torture, by international treaties such as the Geneva Conventions. Furthermore, the Justice Department argued that the prisoners were unable to sue for their rights in U.S. courts on the grounds that constitutional protections did not extend to noncitizens held outside U.S. sovereign territory. It was only in 2006 that the Supreme Court ruled in Hamdan v. Rumsfeld that the military tribunals that tried Guantanamo prisoners violated both U.S. federal law and the Geneva Conventions.

32.2 The Domestic Mission

Learning Objectives
By the end of this section, you will be able to:
Discuss the Bush administration’s economic theories and tax policies, and their effects on the American economy
Explain how the federal government attempted to improve the American public education system
Describe the federal government’s response to Hurricane Katrina
Identify the causes of the Great Recession of 2008 and its effect on the average citizen

By the time George W. Bush became president, the concept of supply-side economics had become an article of faith within the Republican Party. The oft-repeated argument was that tax cuts for the wealthy would allow them to invest more and create jobs for everyone else. This belief in the self-regulatory powers of competition also served as the foundation of Bush’s education reform. By the end of 2008, however, Americans’ faith in the dynamics of the free market had been badly shaken. The failure of the homeland security apparatus during Hurricane Katrina and the ongoing challenge of the Iraq War compounded the effects of the bleak economic situation.

OPENING AND CLOSING THE GAP

The Republican Party platform for the 2000 election offered the American people an opportunity to once again test the rosy expectations of supply-side economics. In 2001, Bush and the Republicans pushed through a $1.35 trillion tax cut by lowering tax rates across the board but reserving the largest cuts for those in the highest tax brackets. The cuts flew in the face of Republican calls for a balanced budget, which Bush insisted would come when the so-called job creators expanded the economy by using their increased income to invest in business.
The cuts were controversial; the rich were getting richer while the middle and lower classes bore a proportionally larger share of the nation’s tax burden. Between 1966 and 2001, one-half of the nation’s income gained from increased productivity went to the top 0.01 percent of earners. By 2005, dramatic examples of income inequity were increasing; the chief executive of Wal-Mart earned $15 million that year, roughly 950 times what the company’s average associate made. The head of the construction company KB Home made $150 million, or four thousand times what the average construction worker earned that same year. Even as productivity climbed, workers’ incomes stagnated; with a larger share of the wealth, the very rich further solidified their influence on public policy. Left with a smaller share of the economic pie, average workers had fewer resources to improve their lives or contribute to the nation’s prosperity by, for example, educating themselves and their children.

Another gap that had been widening for years was the education gap. Some education researchers had argued that American students were being left behind. In 1983, a commission established by Ronald Reagan had published a sobering assessment of the American educational system entitled A Nation at Risk. The report argued that American students were more poorly educated than their peers in other countries, especially in areas such as math and science, and were thus unprepared to compete in the global marketplace. Furthermore, test scores revealed serious educational achievement gaps between White students and students of color. Touting himself as the “education president,” Bush sought to introduce reforms that would close these gaps.

His administration offered two potential solutions to these problems. First, it sought to hold schools accountable for raising standards and enabling students to meet them. The No Child Left Behind Act, signed into law in January 2002, erected a system of testing to measure and ultimately improve student performance in reading and math at all schools that received federal funds (Figure 32.7). Schools whose students performed poorly on the tests would be labeled “in need of improvement.” If poor performance continued, schools could face changes in curricula and teachers, or even the prospect of closure.

The second proposed solution was to give students the opportunity to attend schools with better performance records. Some of these might be charter schools, institutions funded by local tax monies in much the same way as public schools, but able to accept private donations and exempt from some of the rules public schools must follow. During the administration of George H. W. Bush, the development of charter schools had gathered momentum, and the American Federation of Teachers welcomed them as places to employ innovative teaching methods or offer specialized instruction in particular subjects. President George W. Bush now encouraged states to grant educational funding vouchers to parents, who could use them to pay for a private education for their children if they chose. These vouchers were funded by tax revenue that would otherwise have gone to public schools.

THE 2004 ELECTION AND BUSH’S SECOND TERM

In the wake of the 9/11 attacks, Americans had rallied around their president in a gesture of patriotic loyalty, giving Bush approval ratings of 90 percent. Even following the first few months of the Iraq war, his approval rating remained historically high at approximately 70 percent.
But as the 2004 election approached, opposition to the war in Iraq began to grow. While Bush could boast of a number of achievements at home and abroad during his first term, the narrow victory he achieved in 2000 augured poorly for his chances for reelection in 2004 and a successful second term.

Reelection

As the 2004 campaign ramped up, the president was persistently dogged by rising criticism of the violence of the Iraq war and the fact that his administration’s claims of WMDs had been greatly overstated. In the end, no such weapons were ever found. These criticisms were amplified by growing international concern over the treatment of prisoners at the Guantanamo Bay detention camp and widespread disgust over the torture conducted by U.S. troops at the prison in Abu Ghraib, Iraq, which surfaced only months before the election (Figure 32.8). In March 2004, an ambush by Iraqi insurgents of a convoy of private military contractors from Blackwater USA in the town of Fallujah west of Baghdad, and the subsequent torture and mutilation of the four captured mercenaries, shocked the American public. But the event also highlighted the growing insurgency against U.S. occupation, the escalating sectarian conflict between the newly empowered Shia Muslims and the formerly ruling Sunni minority, and the escalating costs of a war involving a large number of private contractors that, by conservative estimates, approached $1.7 trillion by 2013. Just as importantly, the American campaign in Iraq had diverted resources from the war against al-Qaeda in Afghanistan, where U.S. troops were no closer to capturing Osama bin Laden, the mastermind behind the 9/11 attacks.

With two hot wars overseas, one of which appeared to be spiraling out of control, the Democrats nominated a decorated Vietnam War veteran, Massachusetts senator John Kerry (Figure 32.9), to challenge Bush for the presidency. As someone with combat experience, three Purple Hearts, and a foreign policy background, Kerry seemed like the right challenger in a time of war. But his record of support for the invasion of Iraq made his criticism of the incumbent less compelling and earned him the nickname “Waffler” from Republicans. The Bush campaign also sought to characterize Kerry as an elitist out of touch with regular Americans—Kerry had studied overseas, spoke fluent French, and married a wealthy foreign-born heiress. Republican supporters also unleashed an attack on Kerry’s Vietnam War record, falsely claiming he had lied about his experience and fraudulently received his medals. Kerry’s reluctance to embrace his past leadership of Vietnam Veterans Against the War weakened the enthusiasm of antiwar Americans while opening him up to criticisms from veterans groups. This combination compromised the impact of his challenge to the incumbent in a time of war.

Urged by the Republican Party to “stay the course” with Bush, voters listened. Bush won another narrow victory, and the Republican Party did well overall, picking up four seats in the Senate and increasing its majority there to fifty-five. In the House, the Republican Party gained three seats, adding to its majority there as well. Across the nation, most governorships also went to Republicans, and Republicans dominated many state legislatures. Despite a narrow win, the president made a bold declaration in his first news conference following the election.
“I earned capital in this campaign, political capital, and now I intend to spend it.” The policies on which he chose to spend this political capital included the partial privatization of Social Security and new limits on court-awarded damages in medical malpractice lawsuits. In foreign affairs, Bush promised that the United States would work towards “ending tyranny in the world.” But at home and abroad, the president achieved few of his second-term goals. Instead, his second term in office became associated with the persistent challenge of pacifying Iraq, the failure of the homeland security apparatus during Hurricane Katrina, and the most severe economic crisis since the Great Depression.

A Failed Domestic Agenda

The Bush administration had planned a series of free-market reforms, but corruption, scandals, and Democrats in Congress made these goals hard to accomplish. Plans to convert Social Security into a private-market mechanism relied on the claim that demographic trends would eventually make the system unaffordable for the shrinking number of young workers, but critics countered that this was easily fixed. Privatization, on the other hand, threatened to derail the mission of the New Deal-era program and turn it into a fee generator for stock brokers and Wall Street financiers. Similarly unpopular was the attempt to abolish the estate tax. Labeled the “death tax” by its critics, the estate tax fell only on the largest inheritances, and its abolition would have benefitted only the wealthiest 1 percent. The federal deficit, which grew in part as a result of the 2003 tax cuts, did not help the Republicans make their case either.

The nation faced another policy crisis when the Republican-dominated House of Representatives approved a bill making the undocumented status of millions of immigrants a felony and criminalizing the act of employing or knowingly aiding illegal immigrants. In response, millions of illegal and legal immigrants, along with other critics of the bill, took to the streets in protest. What they saw as the civil rights challenge of their generation, conservatives read as a dangerous challenge to law and national security. Congress eventually agreed on a massive build-up of the U.S. Border Patrol and the construction of a seven-hundred-mile-long fence along the border with Mexico, but the deep divisions over immigration and the status of up to twelve million undocumented immigrants remained unresolved.

Hurricane Katrina

One event highlighted the nation’s economic inequality and racial divisions, as well as the Bush administration’s difficulty in addressing them effectively. On August 29, 2005, Hurricane Katrina came ashore and devastated coastal stretches of Alabama, Mississippi, and Louisiana. The city of New Orleans, no stranger to hurricanes and floods, suffered heavy damage when the levees, embankments designed to protect against flooding, failed during the storm surge, as the Army Corps of Engineers had warned they might. The flooding killed some fifteen hundred people and so overwhelmed parts of the city that tens of thousands more were trapped and unable to evacuate (Figure 32.10). Thousands who were elderly, ill, or too poor to own a car followed the mayor’s directions and sought refuge at the Superdome, which lacked adequate food, water, and sanitation. Public services collapsed under the weight of the crisis. Although the U.S. Coast Guard managed to rescue more than thirty-five thousand people from the stricken city, the response by other federal bodies was less effective.
The Federal Emergency Management Agency (FEMA), an agency charged with assisting state and local governments in times of natural disaster, proved inept at coordinating different agencies and utilizing the rescue infrastructure at its disposal. Critics argued that FEMA was to blame and that its director, Michael D. Brown, a Bush friend and appointee with no background in emergency management, was an example of cronyism at its worst. The failures of FEMA were particularly harmful for an administration that had made “homeland security” its top priority. Supporters of the president, however, argued that the scale of the disaster was such that no amount of preparedness or competence could have allowed federal agencies to cope.

While there was plenty of blame to go around—at the city, state, and national levels—FEMA and the Bush administration got the lion’s share. Even when the president attempted to demonstrate his concern with a personal appearance, the tactic largely backfired. Photographs of him looking down on a flooded New Orleans from the comfort of Air Force One only reinforced the impression of a president detached from the problems of everyday people. Despite his attempts to give an uplifting speech from Jackson Square, he was unable to shake this characterization, and it underscored the disappointments of his second term. On the eve of the 2006 midterm elections, President Bush’s popularity had reached a new low, as a result of the war in Iraq and Hurricane Katrina, and a growing number of Americans feared that his party’s economic policy benefitted the wealthy first and foremost. Young voters, non-White Americans, and women favored the Democratic ticket by large margins. The elections handed Democrats control of the Senate and House for the first time since 1994, and, in January 2007, California representative Nancy Pelosi became the first female Speaker of the House in the nation’s history.

THE GREAT RECESSION

For most Americans, the millennium had started with economic woes. In March 2001, the U.S. stock market had taken a sharp drop, and the ensuing recession triggered the loss of millions of jobs over the next two years. In response, the Federal Reserve Board cut interest rates to historic lows to encourage consumer spending. By 2002, the economy seemed to be stabilizing somewhat, but few of the manufacturing jobs lost were restored to the national economy. Instead, the “outsourcing” of jobs to China and India became an increasing concern, along with a surge in corporate scandals. After years of reaping tremendous profits in the deregulated energy markets, Houston-based Enron imploded in 2001 amid revelations of massive accounting fraud. Its top executives, Ken Lay and Jeff Skilling, were later convicted of fraud; Skilling received a long prison sentence, while Lay died before he could be sentenced. Their activities were illustrative of a larger trend in the nation’s corporate culture that embroiled reputable companies like JP Morgan Chase and the accounting firm Arthur Andersen. In 2002, Bernard Ebbers, the CEO of communications giant WorldCom, was discovered to have inflated his company’s assets by as much as $11 billion, making it the largest accounting scandal in the nation’s history. In 2008, however, Bernard Madoff’s Ponzi scheme would reveal even deeper cracks in the nation’s financial economy.

Banks Gone Wild

Notwithstanding economic growth in the 1990s and steadily increasing productivity, wages had remained largely flat relative to inflation since the end of the 1970s; despite the mild recovery, they remained so.
To compensate, many consumers were buying on credit, and with interest rates low, financial institutions were eager to oblige them. By 2008, credit card debt had risen to over $1 trillion. More importantly, banks were making high-risk, high-interest mortgage loans called subprime mortgages to consumers who often misunderstood their complex terms and lacked the ability to make the required payments.

These subprime loans had a devastating impact on the larger economy. In the past, a prospective home buyer went to a local bank for a mortgage loan. Because the bank expected to make a profit in the form of interest charged on the loan, it carefully vetted buyers for their ability to repay. Changes in finance and banking laws in the 1990s and early 2000s, however, allowed lending institutions to securitize their mortgage loans and sell them as bonds, thus separating the financial interests of the lender from the ability of the borrower to repay, and making highly risky loans more attractive to lenders. In other words, banks could afford to make bad loans, because they could sell them and not suffer the financial consequences when borrowers failed to repay.

Once they had purchased the loans, larger investment banks bundled them into huge packages known as collateralized debt obligations (CDOs) and sold them to investors around the world. Even though CDOs consisted of subprime mortgages, credit card debt, and other risky investments, credit ratings agencies had a financial incentive to rate them as very safe. Making matters worse, financial institutions created instruments called credit default swaps, which were essentially a form of insurance on investments. If the investment lost money, the investors would be compensated. This system, sometimes referred to as the securitization food chain, greatly swelled the housing loan market, especially the market for subprime mortgages, because these loans carried higher interest rates. The result was a housing bubble, in which the value of homes rose year after year based on the ease with which people now could buy them.

Banks Gone Broke

When the real estate market stalled after reaching a peak in 2007, the house of cards built by the country’s largest financial institutions came tumbling down. People began to default on their loans, and more than one hundred mortgage lenders went out of business. American International Group (AIG), a multinational insurance company that had insured many of the investments, faced collapse. Other large financial institutions, which had once been prevented by federal regulations from engaging in risky investment practices, found themselves in danger, as they either were besieged by demands for payment or found their demands on their own insurers unmet. The prestigious investment firm Lehman Brothers was completely wiped out in September 2008. Some endangered companies, like Wall Street giant Merrill Lynch, sold themselves to other financial institutions to survive. A financial panic ensued that revealed other fraudulent schemes built on CDOs. The biggest among them was a pyramid scheme organized by the New York financier Bernard Madoff, who had defrauded his investors by at least $18 billion.

Realizing that the failure of major financial institutions could result in the collapse of the entire U.S.
economy, the chairman of the Federal Reserve, Ben Bernanke, authorized a bailout of the Wall Street firm Bear Stearns, although months later, the financial services firm Lehman Brothers was allowed to file for the largest bankruptcy in the nation’s history. Members of Congress met with Bernanke and Secretary of the Treasury Henry Paulson in September 2008, to find a way to head off the crisis. They agreed to use $700 billion in federal funds to bail out the troubled institutions, and Congress subsequently passed the Emergency Economic Stabilization Act, creating the Troubled Asset Relief Program (TARP). One important element of this program was aid to the auto industry: The Bush administration responded to their appeal with an emergency loan of $17.4 billion—to be executed by his successor after the November election—to stave off the industry’s collapse.

The actions of the Federal Reserve, Congress, and the president prevented the complete disintegration of the nation’s financial sector and warded off a scenario like that of the Great Depression. However, the bailouts could not prevent a severe recession in the U.S. and world economy. As people lost faith in the economy, stock prices fell by 45 percent. Unable to receive credit from now-wary banks, smaller businesses found that they could not pay suppliers or employees. With houses at record prices and growing economic uncertainty, people stopped buying new homes. As the value of homes decreased, owners were unable to borrow against them to pay off other obligations, such as credit card debt or car loans. More importantly, millions of homeowners who had expected to sell their houses at a profit and pay off their adjustable-rate mortgages were now stuck in houses with values shrinking below their purchase price and forced to make mortgage payments they could no longer afford. Without access to credit, consumer spending declined. Some European nations had suffered housing speculation bubbles of their own, and many had also bought into the U.S. mortgage securities market, suffering losses of assets, jobs, and demand as a result. International trade slowed, hurting many American businesses. As the Great Recession of 2008 deepened, the situation of ordinary citizens became worse. During the last four months of 2008, one million American workers lost their jobs, and during 2009, another three million found themselves out of work. Under such circumstances, many resented the expensive federal bailout of banks and investment firms. It seemed as if the wealthiest were being rescued by the taxpayer from the consequences of their imprudent and even corrupt practices.

32.3 New Century, Old Disputes

Learning Objectives
By the end of this section, you will be able to:
Describe the efforts to reduce the influence of immigrants on American culture
Describe the evolution of twenty-first-century American attitudes towards same-sex marriage
Explain the clash over climate change

As the United States entered the twenty-first century, old disputes continued to rear their heads. Some revolved around what it meant to be American and the rights to full citizenship. Others arose from religious conservatism and the influence of the Religious Right on American culture and society. Debates over gay and lesbian rights continued, and arguments over abortion became more complex and contentious, as science and technology advanced.
The clash between faith and science also influenced attitudes about how the government should respond to climate change, with religious conservatives finding allies among political conservatives who favored business over potentially expensive measures to reduce harmful emissions.

WHO IS AN AMERICAN?

There is nothing new about anxiety over immigration in the United States. For its entire history, citizens have worried about who is entering the country and the changes that might result. Such concerns began to flare once again beginning in the 1980s, as Americans of European ancestry started to recognize the significant demographic changes on the horizon. The number of Americans of color and multiethnic Americans was growing, as was the percentage of people with other than European ancestry. It was clear the White majority would soon be a demographic minority (Figure 32.11).

The nation’s increasing diversity prompted some social conservatives to identify American culture as one of European heritage and fueled a drive to legally designate English the official language of the United States. This movement was particularly strong in areas of the country with large Spanish-speaking populations such as Arizona, where, in 2006, three-quarters of voters approved a proposition to make English the official language in the state. Proponents in Arizona and elsewhere argued that these laws were necessary, because recent immigrants, especially Hispanic newcomers, were not being sufficiently acculturated to White, middle-class culture. Opponents countered that English was already the de facto official language, and codifying it into law would only amount to unnecessary discrimination.

Defining American
Arizona Bans Mexican American Studies

In 2010, Arizona passed a law barring the teaching of any class that promoted “resentment” of students of other races or encouraged “ethnic solidarity.” The ban, to take effect on December 31 of that year, included a popular Mexican American studies program taught at elementary, middle, and high schools in the city of Tucson. The program, which focused on teaching students about Mexican American history and literature, was begun in 1998, to counter high absentee rates and low academic performance among Latino students, and proved highly successful. State superintendent of public instruction Tom Horne objected to the course, however, claiming it encouraged resentment of Whites and of the U.S. government, and improperly encouraged students to think of themselves as members of a race instead of as individuals. Tucson was ordered to end its Mexican American studies program or lose 10 percent of the school system’s funding, approximately $3 million each month. In 2012, the Tucson school board voted to end the program. A former student and his mother filed a suit in federal court, claiming that the law, which did not prohibit programs teaching Native American students about their culture, was discriminatory and violated the First Amendment rights of Tucson’s students. In March 2013, the court found in favor of the state, ruling that the law was not discriminatory, because it targeted classes, and not students or teachers, and that preventing the teaching of Mexican studies classes did not intrude on students’ constitutional rights. The court did, however, declare the part of the law prohibiting classes designed for members of particular ethnic groups to be unconstitutional.

What advantages or disadvantages can you see in an ethnic studies program?
How could an ethnic studies course add to our understanding of U.S. history? Explain.

The fear that English-speaking Americans were being outnumbered by a Hispanic population that was not forced to assimilate was sharpened by the concern that far too many were illegally emigrating from Latin America to the United States. The Comprehensive Immigration Reform Act proposed by Congress in 2006 sought to simultaneously strengthen security along the U.S.-Mexico border (a task for the Department of Homeland Security), increase the number of temporary “guest workers” allowed in the United States, and provide a pathway for long-term U.S. residents who had entered the country illegally to gain legal status. It also sought to establish English as a “common and unifying language” for the nation. The bill and a similar amended version both failed to become law.

With unemployment rates soaring during the Great Recession, anxiety over illegal immigration rose, even while the incoming flow slowed. State legislatures in Alabama and Arizona passed strict new laws that required police and other officials to verify the immigration status of those they thought had entered the country illegally. In Alabama, the new law made it a crime to rent housing to undocumented immigrants, thus making it difficult for these immigrants to live within the state. Both laws have been challenged in court, and portions have been deemed unconstitutional or otherwise blocked.

Beginning in October 2013, states along the U.S.-Mexico border faced an increase in the immigration of children from a handful of Central American countries. Approximately fifty-two thousand children, some unaccompanied, were taken into custody as they reached the United States. A study by the United Nations High Commissioner for Refugees estimated that 58 percent of those migrants, largely from El Salvador and Honduras, were propelled towards the United States by poverty, violence, and the potential for exploitation in their home countries. Because of a 2008 law originally intended to protect victims of human trafficking, these Central American children are guaranteed a court hearing. Predictably, the crisis has served to underline the need for comprehensive immigration reform. But, as of late 2014, a 2013 Senate immigration reform bill that combines border security with a guest worker program and a path to citizenship has yet to be enacted as law.

WHAT IS A MARRIAGE?

In the 1990s, the idea of legal, same-sex marriage seemed particularly unlikely; neither of the two main political parties expressed support for it. Things began to change, however, following Vermont’s decision to allow same-sex couples to form state-recognized civil unions in which they could enjoy all the legal rights and privileges of marriage. Although it was the intention of the state to create a type of legal relationship equivalent to marriage, it did not use the word “marriage” to describe it.

Following Vermont’s lead, several other states legalized same-sex marriages or civil unions among gay and lesbian couples. In 2004, the Massachusetts Supreme Judicial Court ruled that barring gays and lesbians from marrying violated the state constitution. The court held that offering same-sex couples the right to form civil unions but not marriage was an act of discrimination, and Massachusetts became the first state to allow same-sex couples to marry. Not all states followed suit, however, and there was a backlash in several states.
Between 1998 and 2012, thirty states banned same-sex marriage either by statute or by amending their constitutions. Other states attempted, unsuccessfully, to do the same. In 2007, the Massachusetts State Legislature rejected a proposed amendment to the state’s constitution that would have prohibited such marriages.

While those in support of broadening civil rights to include same-sex marriage were optimistic, those opposed employed new tactics. In 2008, opponents of same-sex marriage in California tried a ballot initiative to define marriage strictly as a union between a man and a woman. Despite strong support for broadening marriage rights, the proposition was successful. This change was just one of dozens that states had been putting in place since the late 1990s to make same-sex marriage unconstitutional at the state level. Like the California proposition, however, many new state constitutional amendments have faced challenges in court (Figure 32.12). As of 2014, leaders in both political parties are more receptive than ever before to the idea of same-sex marriage.

WHY FIGHT CLIMATE CHANGE?

Even as mainstream members of both political parties moved closer together on same-sex marriage, political divisions on scientific debates continued. One increasingly polarizing debate that baffles much of the rest of the world is about global climate change. Despite near unanimity in the scientific community that climate change is real and will have devastating consequences, large segments of the American population, predominantly on the right, continue to insist that it is little more than a complex hoax and a leftist conspiracy. Much of the Republican Party’s base denies that global warming is the result of human activity; some deny that the earth is getting hotter at all.

This popular denial has had huge global consequences. In 1998, the United States, which produces roughly 36 percent of greenhouse gases such as carbon dioxide, which prevent the earth’s heat from escaping into space, signed the Kyoto Protocol, an agreement among the world’s nations to reduce their emissions of these gases. President Bush objected to the requirement that major industrialized nations limit their emissions to a greater extent than other parts of the world and argued that doing so might hurt the American economy. He announced that the United States would not be bound by the agreement, and it was never ratified by the U.S. Senate. Instead, the Bush administration appeared to suppress scientific reporting on climate change. In 2006, the progressive-leaning Union of Concerned Scientists surveyed sixteen hundred climate scientists, asking them about the state of federal climate research. Of those who responded, nearly three-fourths believed that their research had been subjected to new administrative requirements, third-party editing to change their conclusions, or pressure not to use terms such as “global warming.” Republican politicians, citing the altered reports, argued that there was no unified opinion among members of the scientific community that humans were damaging the climate.

Countering this rejection of science were the activities of many environmentalists, including Al Gore, Clinton’s vice president and Bush’s opponent in the disputed 2000 election. As a new member of Congress in 1976, Gore had developed what proved a steady commitment to environmental issues.
In 2004, he established Generation Investment Management, which sought to promote an environmentally responsible system of equity analysis and investment. In 2006, the documentary film An Inconvenient Truth presented his efforts to educate people about the realities and dangers of global warming, and it won the 2007 Academy Award for Best Documentary. Though some of what Gore said was in error, the film’s main thrust is in keeping with the weight of scientific evidence. In 2007, as a result of these efforts to “disseminate greater knowledge about man-made climate change,” Gore shared the Nobel Peace Prize with the Intergovernmental Panel on Climate Change.

32.4 Hope and Change

Learning Objectives
By the end of this section, you will be able to:
Describe how Barack Obama’s domestic policies differed from those of George W. Bush
Discuss the important events of the war on terror during Obama’s two administrations
Discuss some of the specific challenges facing the United States as Obama’s second term draws to a close

In 2008, American voters, tired of war and dispirited by the economic downturn, elected a relative newcomer to the political scene who inspired them and made them believe that the United States could rise above political partisanship. Barack Obama’s story resembled that of many Americans: a multicultural background; a largely absent father; a single working mother; and care provided by maternal grandparents. As president, Obama would face significant challenges, including managing the economic recovery in the wake of the Great Recession, fighting the war on terror inherited from the previous administration, and implementing the healthcare reform upon which he had campaigned.

OBAMA TAKES OFFICE

Born in Hawaii in 1961 to a Kenyan father and an American woman from Kansas, Obama excelled at school, going on to attend Occidental College in Los Angeles, Columbia University, and finally Harvard Law School, where he became the first African American president of the Harvard Law Review . As part of his education, he also spent time in Chicago working as a community organizer to help those displaced by the decline of heavy industry in the early 1980s. Obama first came to national attention when he delivered the keynote address at the 2004 Democratic National Convention while running for his first term in the U.S. Senate. Just a couple of years later, he was running for president himself, the first African American nominee for the office from either major political party. Obama’s opponent in 2008 was John McCain, a Vietnam veteran and Republican senator with the reputation of a “maverick” who had occasionally broken ranks with his party to support bipartisan initiatives. The senator from Arizona faced a number of challenges. As the Republican nominee, he remained closely associated with the two disastrous foreign wars initiated under the Bush administration. His late recognition of the economic catastrophe on the eve of the election did not help matters and further damaged the Republican brand at the polls. At seventy-one, he also had to fight accusations that he was too old for the job, an impression made even more striking by his energetic young challenger. To minimize this weakness, McCain chose a young but inexperienced running mate, Governor Sarah Palin of Alaska. This tactic backfired, however, when a number of poor performances in television interviews convinced many voters that Palin was not prepared for higher office ( Figure 32.13 ).
Senator Obama, too, was criticized for his lack of experience with foreign policy, a deficit he sought to remedy by choosing the experienced politician Joseph Biden as his running mate. Unlike his Republican opponent, however, Obama offered promises of “hope and change.” By sending out voter reminders on Twitter and connecting with supporters on Facebook, he was able to harness social media and take advantage of grassroots enthusiasm for his candidacy. His youthful vigor drew independents and first-time voters, and he won 95 percent of the African American vote and 44 percent of the White vote ( Figure 32.14 ).

Defining American: Politicking in a New Century

Barack Obama’s campaign seemed to come out of nowhere to overcome the widely supported frontrunner Hillary Clinton in the Democratic primaries. Having won the nomination, Obama shot to the top with an exuberant base of youthful supporters who were encouraged and inspired by his appeal to hope and change. Behind the scenes, the Obama campaign was employing technological innovations and advances in social media to both inform and organize its base. The Obama campaign realized early that the key to political success in the twenty-first century was to energize young voters by reaching them where they were: online. The organizing potential of platforms like Facebook, YouTube, and Twitter had never before been tapped, and they were free. The results were groundbreaking. Using these social media platforms, the Obama campaign became an organizing and fundraising machine of epic proportions. During his almost two-year-long campaign, Obama accepted 6.5 million donations, totaling $500 million. The vast majority of online donations were less than $100. This accomplishment stunned the political establishment, which has been quick to adapt. Since 2008, nearly every political campaign has followed in Obama’s footsteps, effecting a revolution in campaigning in the United States.

ECONOMIC AND HEALTHCARE REFORMS

Barack Obama had been elected on a platform of healthcare reform and a wave of frustration over the sinking economy. As he entered office in 2009, he set out to deal with both. Taking charge of the TARP program instituted under George W. Bush to stabilize the country’s financial institutions, Obama oversaw the distribution of some $7.77 trillion designed to help shore up the nation’s banking system. Recognizing that the economic downturn also threatened major auto manufacturers in the United States, he sought and received congressional authorization for $80 billion to help Chrysler and General Motors. The action was controversial, and some characterized it as a government takeover of industry. The money did, however, help the automakers earn a profit by 2011, reversing the trend of consistent losses that had hurt the industry since 2004. It also helped prevent layoffs and wage cuts. By 2013, the automakers had repaid over $50 billion of bailout funds. Finally, through the 2009 American Recovery and Reinvestment Act (ARRA), the Obama administration pumped almost $800 billion into the economy to stimulate economic growth and job creation. As important to Obama’s supporters as his efforts to restore the economy was fulfilling his campaign promise to enact comprehensive healthcare reform. Many assumed such reforms would move quickly through Congress, since Democrats had comfortable majorities in both houses, and both Obama and McCain had campaigned on healthcare reform.
However, as had occurred years before during President Clinton’s first term, opposition groups saw attempts at reform as an opportunity to put the political brakes on the Obama presidency. After months of political wrangling and condemnations of the healthcare reform plan as socialism, the Patient Protection and Affordable Care Act ( Figure 32.15 ) was passed and signed into law in 2010. The act, which created the program known as Obamacare , represented the first significant overhaul of the American healthcare system since the passage of Medicaid in 1965. Its goals were to provide all Americans with access to affordable health insurance, to require that everyone in the United States acquire some form of health insurance, and to lower the costs of healthcare. The plan, which made use of government funding, created private insurance company exchanges to market various insurance packages to enrollees. Although the plan implemented market-based reforms that Republicans had supported for years, they refused to vote for it. Following its passage, they called numerous times for its repeal, and more than twenty-four states sued the federal government to stop its implementation. Discontent over the Affordable Care Act helped the Republicans capture the majority in the House of Representatives in the 2010 midterm elections. It also helped spawn the Tea Party , a conservative movement focused primarily on limiting government spending and the size of the federal government.

THE ELECTION OF 2012

By the 2012 presidential election, the Republicans, convinced Obama was vulnerable because of opposition to his healthcare program and a weak economy, nominated Mitt Romney, a well-known business executive turned politician who had earlier signed healthcare reform into state law as governor of Massachusetts ( Figure 32.16 ). Romney had unsuccessfully challenged McCain for the Republican nomination in 2008, but by 2012, he had remade himself politically by moving towards the party’s right wing and its newly created Tea Party faction, which was pulling the traditional conservative base further to the right with its strong opposition to abortion, gun control, and immigration. Romney appealed to a new attitude within the Republican Party. While the percentage of Democrats who agreed that the government should help people unable to provide for themselves had remained relatively stable from 1987 to 2012, at roughly 75 to 79 percent, the percentage of Republicans who felt the same way had decreased from 62 to 40 percent over the same period, with the greatest decline coming after 2007. Indeed, Romney himself revealed his disdain for people on the lower rungs of the socioeconomic ladder when, at a fundraising event attended by affluent Republicans, he remarked that he did not care to reach the 47 percent of Americans who would always vote for Obama because of their dependence on government assistance. In his eyes, this low-income portion of the population preferred to rely on government social programs instead of trying to improve their own lives. Starting out behind Obama in the polls, Romney significantly closed the gap in the first of three presidential debates, when he moved towards more centrist positions on many issues. Obama regained momentum in the remaining two debates and used his bailout of the auto industry to appeal to voters in the key states of Michigan and Ohio. Romney’s remarks about the 47 percent hurt his position among both poor Americans and those who sympathized with them.
A longtime critic of FEMA who claimed that it should be eliminated, Romney also likely lost votes in the Northeast when, a week before the election, Hurricane Sandy devastated the New England, New York, and New Jersey coasts. Obama and the federal government had largely rebuilt FEMA since its disastrous showing in New Orleans in 2005, and the agency quickly swung into action to assist the 8.5 million people affected by the disaster. Obama won the election, but the Republicans retained their hold on the House of Representatives and the Democratic majority in the Senate grew razor-thin. Political bickering and intractable Republican resistance, including a 70 percent increase in filibusters over the 1980s, a refusal to allow a vote on some legislation, such as the 2012 “jobs bill,” and the glacial pace at which the Senate confirmed the President’s judicial nominations, created political gridlock in Washington, interfering with Obama’s ability to secure any important legislative victories.

ONGOING CHALLENGES

As Obama entered his second term in office, the economy remained stagnant in many areas. On average, American students continued to fall behind their peers in the rest of the world, and the cost of a college education became increasingly unaffordable for many. Problems continued overseas in Iraq and Afghanistan, and another act of terrorism took place on American soil when bombs exploded at the 2013 Boston Marathon. At the same time, the cause of same-sex marriage made significant advances, and Obama was able to secure greater protection for the environment. He raised fuel-efficiency standards for automobiles to reduce the emissions of greenhouse gases and required coal-burning power plants to capture their carbon emissions.

Learning and Earning

The quality of American education remains a challenge. The global economy is dominated by those nations with the greatest number of “knowledge workers”: people with specialized knowledge and skills like engineers, scientists, doctors, teachers, financial analysts, and computer programmers. Furthermore, American students’ reading, math, and critical thinking skills are less developed than those of their peers in other industrialized nations, including small countries like Estonia. The Obama administration sought to make higher education more accessible by increasing the amount that students could receive under the federally funded Pell Grant Program, which, by the 2012–13 academic year, helped 9.5 million students pay for their college education. Obama also worked out a compromise with Congress in 2013, which lowered the interest rates charged on student loans. However, college tuition is still growing at a rate of 2 to 3 percent per year, and the debt burden has surpassed the $1 trillion mark and is likely to increase. With debt upon graduation averaging about $29,000, students may find their economic options limited. Instead of buying cars or paying for housing, they may have to join the boomerang generation and return to their parents’ homes in order to make their loan payments. Clearly, high levels of debt will affect their career choices and life decisions for the foreseeable future. Many other Americans continue to be challenged by the state of the economy. Most economists calculate that the Great Recession reached its lowest point in 2009, and the economy has gradually improved since then. The stock market ended 2013 at historic highs, having experienced its biggest percentage gain since 1997.
However, despite these gains, the nation struggled to maintain a modest annual growth rate of 2.5 percent after the Great Recession, and the percentage of the population living in poverty continues to hover around 15 percent. Income has decreased ( Figure 32.17 ), and, as late as 2011, the unemployment rate was still high in some areas. Eight million full-time workers have been forced into part-time work, whereas 26 million seem to have given up and left the job market.

LGBT Rights

During Barack Obama’s second term in office, courts began to counter efforts by conservatives to outlaw same-sex marriage. A series of decisions declared nine states’ prohibitions against same-sex marriage to be unconstitutional, and the Supreme Court rejected an attempt to overturn a federal court ruling to that effect in California in June 2013. Shortly thereafter, the Supreme Court also ruled that the Defense of Marriage Act of 1996 was unconstitutional, because it violated the equal protection principles embodied in the Fifth Amendment. These decisions seem to allow legal challenges in all the states that persist in trying to block same-sex unions. The struggle against discrimination based on gender identity has also won some significant victories. In 2014, the U.S. Department of Education ruled that schools receiving federal funds may not discriminate against transgender students, and a board within the Department of Health and Human Services decided that Medicare should cover sexual reassignment surgery. Although very few people eligible for Medicare are transgender, the decision is still important, because private insurance companies often base their coverage on what Medicare considers appropriate and necessary forms of treatment for various conditions. Undoubtedly, the fight for greater rights for LGBT (lesbian, gay, bisexual, and transgender) individuals will continue.

Violence

Another running debate questions the easy accessibility of firearms. Between the spring of 1999, when two teens killed twelve of their classmates, a teacher, and themselves at their high school in Columbine, Colorado, and the early summer of 2014, fifty-two additional shootings or attempted shootings had occurred at schools ( Figure 32.18 ). Nearly always, the violence was perpetrated by young people with severe mental health problems, as at Sandy Hook Elementary School in Newtown, Connecticut, in 2012. After killing his mother at home, twenty-year-old Adam Lanza went to the school and fatally shot twenty students, all of them six and seven years old, along with six adult staff members, before killing himself. Advocates of stricter gun control noted a clear relationship between access to guns and mass shootings. Gun rights advocates, however, disagreed. They argued that access to guns is merely incidental. Another shocking act of violence was the attack on the Boston Marathon. On April 15, 2013, shortly before 3:00 p.m., two bombs made from pressure cookers exploded near the finish line ( Figure 32.19 ). Three people were killed, and more than 250 were injured. Three days later, two suspects were identified, and a manhunt began. Later that night, the two young men, brothers of Chechen descent who had immigrated to the United States, killed a campus security officer at the Massachusetts Institute of Technology, stole a car, and fled. The older, Tamerlan Tsarnaev, was killed in a fight with the police, and Dzhokhar Tsarnaev was captured the next day.
In his statements to the police, Dzhokhar Tsarnaev reported that he and his brother, who he claimed had planned the attacks, had been influenced by the actions of fellow radical Islamists in Afghanistan and Iraq, but he denied that they had been affiliated with any larger terrorist group.

America and the World

In May 2014, President Obama announced that, for the most part, U.S. combat operations in Afghanistan were over. Although a residual force of ninety-eight hundred soldiers will remain to continue training the Afghan army, by 2016, all U.S. troops will have left the country, except for a small number to defend U.S. diplomatic posts. The years of warfare have brought the United States few rewards. In Iraq, 4,475 American soldiers died and 32,220 were wounded. In Afghanistan, the toll through February 2013 was 2,165 dead and 18,230 wounded. By some estimates, the total monetary cost of the wars in Iraq and Afghanistan could easily reach $4 trillion, and the Congressional Budget Office believes that the cost of providing medical care for the veterans might climb to $8 billion by 2020. In Iraq, the coalition led by then-Prime Minister Nouri al-Maliki was able to win 92 of the 328 seats in parliament in May 2014, and he seemed poised to begin another term as the country’s ruler. The elections, however, did not stem the tide of violence in the country. In June 2014, the Islamic State of Iraq and Syria (ISIS), a radical Islamist militant group consisting mostly of Sunni Muslims and once affiliated with al-Qaeda, seized control of Sunni-dominated areas of Iraq and Syria. On June 29, 2014, it proclaimed the formation of the Islamic State, with Abu Bakr al-Baghdadi as caliph, the state’s political and religious leader.
Summary 13.1 An Awakening of Religion and Individualism Evangelical Protestantism pervaded American culture in the antebellum era and fueled a belief in the possibility of changing society for the better. Leaders of the Second Great Awakening like Charles G. Finney urged listeners to take charge of their own salvation. This religious message dovetailed with the new economic possibilities created by the market and Industrial Revolution, making the Protestantism of the Second Great Awakening, with its emphasis on individual spiritual success, a reflection of the individualistic, capitalist spirit of the age. Transcendentalists took a different approach, but like their religiously oriented brethren, they too looked to create a better existence. These authors, most notably Emerson, identified a major tension in American life between the effort to be part of the democratic majority and the need to remain true to oneself as an individual. 13.2 Antebellum Communal Experiments Reformers who engaged in communal experiments aimed to recast economic and social relationships by introducing innovations designed to create a more stable and equitable society. Their ideas found many expressions, from early socialist experiments (such as those of the Fourierists and the Owenites) to the dreams of the New England intellectual elite (such as Brook Farm). The Second Great Awakening also prompted many religious utopias, like those of the Rappites and Shakers. By any measure, the Mormons emerged as the most successful of these. 13.3 Reforms to Human Health Reformers targeted vices that they believed corrupted both the individual body and the national soul. For many, alcohol appeared to be the most destructive and widespread. Indeed, in the years before the Civil War, the United States appeared to many to be a republic of drunkenness. To combat this national substance abuse problem, reformers created a host of temperance organizations that first targeted the middle and upper classes, and then the working classes. Thanks to Sylvester Graham and other health reformers, exercise and fresh air, combined with a good diet, became fashionable. Phrenologists focused on revealing the secrets of the mind and personality. In a fast-paced world, phrenology offered the possibility of quickly discerning different human characteristics. 13.4 Addressing Slavery Reformers in the antebellum United States addressed the thorny issue of slavery through contrasting proposals that offered profoundly different solutions to the dilemma of the institution. Many leading American statesmen, including slaveholders, favored colonization, relocating American Blacks to Africa, which abolitionists scorned. Slave rebellions sought the end of the institution through its violent overthrow, a tactic that horrified many in the North and the South. Abolitionists, especially those who followed William Lloyd Garrison, provoked equally strong reactions by envisioning a new United States without slavery, where Blacks and Whites stood on equal footing. Opponents saw abolition as the worst possible reform, a threat to all order and decency. Slaveholders, in particular, saw slavery as a positive aspect of American society, one that reformed the lives of enslaved people by exposing them to civilization and religion. 13.5 Women’s Rights The spirit of religious awakening and reform in the antebellum era impacted women’s lives by allowing them to think about their lives and their society in new and empowering ways.
Of all the various antebellum reforms, however, it was abolition that played the most significant role in generating the early feminist movement in the United States. Although this early phase of American feminism did not lead to political rights for women, it began the long process of overcoming gender inequalities in the republic.
Chapter Outline
13.1 An Awakening of Religion and Individualism
13.2 Antebellum Communal Experiments
13.3 Reforms to Human Health
13.4 Addressing Slavery
13.5 Women’s Rights

Introduction

This masthead for the abolitionist newspaper The Liberator shows two Americas ( Figure 13.1 ). On the left is the southern version, where enslaved people are being sold; on the right, free Black people enjoy the blessing of liberty. Reflecting the role of evangelical Protestantism in reforms such as abolition, the image features Jesus as the central figure. The caption reads, “I come to break the bonds of the oppressor,” and below the masthead, “Our country is the World, our Countrymen are all Mankind.” The reform efforts of the antebellum years, including abolitionism, aimed to perfect the national destiny and redeem the souls of individual Americans. A great deal of optimism, fueled by evangelical Protestant revivalism, underwrote the moral crusades of the first half of the nineteenth century. Some reformers targeted what they perceived as the shallow, materialistic, and democratic market culture of the United States and advocated a stronger sense of individualism and self-reliance. Others dreamed of a more equal society and established their own idealistic communities. Still others, who viewed slavery as the most serious flaw in American life, labored to end the institution. Women’s rights, temperance, health reforms, and a host of other efforts also came to the forefront during the heyday of reform in the 1830s and 1840s.
[ { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> The Second Great Awakening also brought significant changes to American culture . <hl> <hl> Church membership doubled in the years between 1800 and 1835 . <hl> <hl> Several new groups formed to promote and strengthen the message of religious revival . <hl> <hl> The American Bible Society , founded in 1816 , distributed Bibles in an effort to ensure that every family had access to the sacred text , while the American Sunday School Union , established in 1824 , focused on the religious education of children and published religious materials specifically for young readers . <hl> In 1825 , the American Tract Society formed with the goal of disseminating the Protestant revival message in a flurry of publications . The burst of religious enthusiasm that began in Kentucky and Tennessee in the 1790s and early 1800s among Baptists , Methodists , and Presbyterians owed much to the uniqueness of the early decades of the republic . These years saw swift population growth , broad western expansion , and the rise of participatory democracy . <hl> These political and social changes made many people anxious , and the more egalitarian , emotional , and individualistic religious practices of the Second Great Awakening provided relief and comfort for Americans experiencing rapid change . <hl> <hl> The awakening soon spread to the East , where it had a profound impact on Congregationalists and Presbyterians . <hl> <hl> The thousands swept up in the movement believed in the possibility of creating a much better world . <hl> Many adopted millennialism , the fervent belief that the Kingdom of God would be established on earth and that God would reign on earth for a thousand years , characterized by harmony and Christian morality . Those drawn to the message of the Second Great Awakening yearned for stability , decency , and goodness in the new and turbulent American republic .", "hl_sentences": "The Second Great Awakening also brought significant changes to American culture . Church membership doubled in the years between 1800 and 1835 . Several new groups formed to promote and strengthen the message of religious revival . The American Bible Society , founded in 1816 , distributed Bibles in an effort to ensure that every family had access to the sacred text , while the American Sunday School Union , established in 1824 , focused on the religious education of children and published religious materials specifically for young readers . These political and social changes made many people anxious , and the more egalitarian , emotional , and individualistic religious practices of the Second Great Awakening provided relief and comfort for Americans experiencing rapid change . The awakening soon spread to the East , where it had a profound impact on Congregationalists and Presbyterians . The thousands swept up in the movement believed in the possibility of creating a much better world .", "question": { "cloze_format": "___ is not a characteristic of the Second Great Awakening.", "normal_format": "Which of the following is not a characteristic of the Second Great Awakening?", "question_choices": [ "greater emphasis on nature", "greater emphasis on religious education of children", "greater church attendance", "belief in the possibility of a better world" ], "question_id": "eip-idm31610640", "question_text": "Which of the following is not a characteristic of the Second Great Awakening?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "the individual" }, "bloom": null, "hl_context": "<hl> Beginning in the 1820s , a new intellectual movement known as transcendentalism began to grow in the Northeast . <hl> In this context , to transcend means to go beyond the ordinary sensory world to grasp personal insights and gain appreciation of a deeper reality , and transcendentalists believed that all people could attain an understanding of the world that surpassed rational , sensory experience . Transcendentalists were critical of mainstream American culture . <hl> They reacted against the age of mass democracy in Jacksonian America — what Tocqueville called the “ tyranny of majority ” — by arguing for greater individualism against conformity . <hl> European romanticism , a movement in literature and art that stressed emotion over cold , calculating reason , also influenced transcendentalists in the United States , especially the transcendentalists ’ celebration of the uniqueness of individual feelings . Protestantism shaped the views of the vast majority of Americans in the antebellum years . The influence of religion only intensified during the decades before the Civil War , as religious camp meetings spread the word that people could bring about their own salvation , a direct contradiction to the Calvinist doctrine of predestination . <hl> Alongside this religious fervor , transcendentalists advocated a more direct knowledge of the self and an emphasis on individualism . <hl> The writers and thinkers devoted to transcendentalism , as well as the reactions against it , created a trove of writings , an outpouring that has been termed the American Renaissance .", "hl_sentences": "Beginning in the 1820s , a new intellectual movement known as transcendentalism began to grow in the Northeast . They reacted against the age of mass democracy in Jacksonian America — what Tocqueville called the “ tyranny of majority ” — by arguing for greater individualism against conformity . Alongside this religious fervor , transcendentalists advocated a more direct knowledge of the self and an emphasis on individualism .", "question": { "cloze_format": "Transcendentalists were most concerned with ________.", "normal_format": "What were transcendentalists most concerned with?", "question_choices": [ "the afterlife", "predestination", "the individual", "democracy" ], "question_id": "eip-idp206914896", "question_text": "Transcendentalists were most concerned with ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "Shakers" }, "bloom": null, "hl_context": "The most successful religious utopian community to arise in the antebellum years was begun by Joseph Smith . Smith came from a large Vermont family that had not prospered in the new market economy and moved to the town of Palmyra , in the “ burned over district ” of western New York . In 1823 , Smith claimed to have to been visited by the angel Moroni , who told him the location of a trove of golden plates or tablets . During the late 1820s , Smith translated the writing on the golden plates , and in 1830 , he published his finding as The Book of Mormon . <hl> That same year , he organized the Church of Christ , the progenitor of the Church of Jesus Christ of Latter-day Saints then popularly known as Mormons . 
<hl> <hl> He presented himself as a prophet and aimed to recapture what he viewed as the purity of the primitive Christian church , purity that had been lost over the centuries . <hl> <hl> Smith emphasized the importance of families being led by fathers . <hl> <hl> His vision of a reinvigorated patriarchy resonated with men and women who had not thrived during the market revolution , and his claims attracted those who hoped for a better future . <hl> Smith ’ s new church placed great stress on work and discipline . He aimed to create a New Jerusalem where the church exercised oversight of its members . Smith ’ s claims of translating the golden plates antagonized his neighbors in New York . Difficulties with anti-Mormons led him and his followers to move to Kirtland , Ohio , in 1831 . By 1838 , as the United States experienced continued economic turbulence following the Panic of 1837 , Smith and his followers were facing financial collapse after a series of efforts in banking and money-making ended in disaster . They moved to Missouri , but trouble soon developed there as well , as citizens reacted against the Church members ’ beliefs . Actual fighting broke out in 1838 , and the ten thousand or so members of the Church of Jesus Christ removed to Nauvoo , Illinois , where they founded a new center of Mormonism .", "hl_sentences": "That same year , he organized the Church of Christ , the progenitor of the Church of Jesus Christ of Latter-day Saints then popularly known as Mormons . He presented himself as a prophet and aimed to recapture what he viewed as the purity of the primitive Christian church , purity that had been lost over the centuries . Smith emphasized the importance of families being led by fathers . His vision of a reinvigorated patriarchy resonated with men and women who had not thrived during the market revolution , and his claims attracted those who hoped for a better future .", "question": { "cloze_format": "The religious community of ___ focused on the power of patriarchy.", "normal_format": "Which religious community focused on the power of patriarchy?", "question_choices": [ "Shakers", "Mormons", "Owenites", "Rappites" ], "question_id": "fs-idm200336", "question_text": "Which religious community focused on the power of patriarchy?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Not all utopian communities were prompted by the religious fervor of the Second Great Awakening ; some were outgrowths of the intellectual ideas of the time , such as romanticism with its emphasis on the importance of individualism over conformity . <hl> One of these , Brook Farm , took shape in West Roxbury , Massachusetts , in the 1840s . <hl> <hl> It was founded by George Ripley , a transcendentalist from Massachusetts . <hl> In the summer of 1841 , this utopian community gained support from Boston-area thinkers and writers , an intellectual group that included many important transcendentalists . Brook Farm is best characterized as a community of intensely individualistic personalities who combined manual labor , such as the growing and harvesting food , with intellectual pursuits . They opened a school that specialized in the liberal arts rather than rote memorization and published a weekly journal called The Harbinger , which was “ Devoted to Social and Political Progress ” ( Figure 13.11 ) . 
Members of Brook Farm never totaled more than one hundred , but it won renown largely because of the luminaries , such as Emerson and Thoreau , whose names were attached to it . Nathaniel Hawthorne , a Massachusetts writer who took issue with some of the transcendentalists ’ claims , was a founding member of Brook Farm , and he fictionalized some of his experiences in his novel The Blithedale Romance . In 1846 , a fire destroyed the main building of Brook Farm , and already hampered by financial problems , the Brook Farm experiment came to an end in 1847 .", "hl_sentences": "One of these , Brook Farm , took shape in West Roxbury , Massachusetts , in the 1840s . It was founded by George Ripley , a transcendentalist from Massachusetts .", "question": { "cloze_format": "___ is associated with transcendentalism.", "normal_format": "Which community or movement is associated with transcendentalism?", "question_choices": [ "the Oneida Community", "the Ephrata Cloister", "Brook Farm", "Fourierism" ], "question_id": "fs-idp2081328", "question_text": "Which community or movement is associated with transcendentalism?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "Still , by that time , temperance had risen to a major political issue . Reformers lobbied for laws limiting or prohibiting alcohol , and states began to pass the first temperance laws . The earliest , an 1838 law in Massachusetts , prohibited the sale of liquor in quantities less than fifteen gallons , a move designed to make it difficult for ordinary workmen of modest means to buy spirits . <hl> The law was repealed in 1840 , but Massachusetts towns then took the initiative by passing local laws banning alcohol . <hl> In 1845 , close to one hundred towns in the state went “ dry . ”", "hl_sentences": "The law was repealed in 1840 , but Massachusetts towns then took the initiative by passing local laws banning alcohol .", "question": { "cloze_format": "The first temperance laws were enacted by ________.", "normal_format": "Which political groups enacted the first temperance laws?", "question_choices": [ "state governments", "local governments", "the federal government", "temperance organizations" ], "question_id": "fs-idp202206080", "question_text": "The first temperance laws were enacted by ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "all of the above" }, "bloom": null, "hl_context": "<hl> Graham advocated baths and cleanliness in general to preserve health ; hydropathy , or water cures for various ailments , became popular in the United States in the 1840s and 1850s . <hl> <hl> He also viewed masturbation and excessive sex as a cause of disease and debility . <hl> <hl> His ideas led him to create what he believed to be a perfect food that would maintain health : the Graham cracker , which he invented in 1829 . <hl> Followers of Graham , known as Grahamites , established boardinghouses where lodgers followed the recommended strict diet and sexual regimen . <hl> Sylvester Graham stands out as a leading light among the health reformers in the antebellum years . <hl> A Presbyterian minister , Graham began his career as a reformer , lecturing against the evils of strong drink . <hl> He combined an interest in temperance with vegetarianism and sexuality into what he called a “ Science of Human Life , ” calling for a regimented diet of more vegetables , fruits , and grain , and no alcohol , meat , or spices . 
<hl>", "hl_sentences": "Graham advocated baths and cleanliness in general to preserve health ; hydropathy , or water cures for various ailments , became popular in the United States in the 1840s and 1850s . He also viewed masturbation and excessive sex as a cause of disease and debility . His ideas led him to create what he believed to be a perfect food that would maintain health : the Graham cracker , which he invented in 1829 . Sylvester Graham stands out as a leading light among the health reformers in the antebellum years . He combined an interest in temperance with vegetarianism and sexuality into what he called a “ Science of Human Life , ” calling for a regimented diet of more vegetables , fruits , and grain , and no alcohol , meat , or spices .", "question": { "cloze_format": "Sylvester Graham’s reformers targeted ________.", "normal_format": "What did Sylvester Graham’s reformers targeted?", "question_choices": [ "the human body", "nutrition", "sexuality", "all of the above" ], "question_id": "fs-idp78763280", "question_text": "Sylvester Graham’s reformers targeted ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "the relocation of African Americans to Africa" }, "bloom": null, "hl_context": "<hl> An early and popular “ reform ” to slavery was colonization , or a movement advocating the displacement of African Americans out of the country , usually to Africa . <hl> In 1816 , the Society for the Colonization of Free People of Color of America ( also called the American Colonization Society or ACS ) was founded with this goal . Leading statesmen including Thomas Jefferson endorsed the idea of colonization .", "hl_sentences": "An early and popular “ reform ” to slavery was colonization , or a movement advocating the displacement of African Americans out of the country , usually to Africa .", "question": { "cloze_format": "Colonization refers to ___ in the context of the antebellum era.", "normal_format": "In the context of the antebellum era, what does colonization refer to?", "question_choices": [ "Great Britain’s colonization of North America", "the relocation of African Americans to Africa", "American colonization of the Caribbean", "American colonization of Africa" ], "question_id": "fs-idm54326320", "question_text": "In the context of the antebellum era, what does colonization refer to?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Garrison also preached immediatism : the moral demand to take immediate action to end slavery . <hl> <hl> He wrote of equal rights and demanded that Blacks be treated as equal to Whites . <hl> He appealed to women and men , Black and White , to join the fight . The abolition press , which produced hundreds of tracts , helped to circulate moral suasion . <hl> Garrison and other abolitionists also used the power of petitions , sending hundreds of petitions to Congress in the early 1830s , demanding an end to slavery . <hl> <hl> Since most newspapers published congressional proceedings , the debate over abolition petitions reached readers throughout the nation . <hl> Garrison founded the New England Anti-Slavery Society in 1831 , and the American Anti-Slavery Society ( AASS ) in 1833 . By 1838 , the AASS had 250,000 members , sometimes called Garrisonians . They rejected colonization as a racist scheme and opposed the use of violence to end slavery . 
<hl> Influenced by evangelical Protestantism , Garrison and other abolitionists believed in moral suasion , a technique of appealing to the conscience of the public , especially slaveholders . <hl> Moral suasion relied on dramatic narratives , often from formerly enslaved people , about the horrors of slavery , arguing that slavery destroyed families , as children were sold and taken away from their mothers and fathers ( Figure 13.16 ) . Moral suasion resonated with many women , who condemned the sexual violence against enslaved women and the victimization of southern White women by adulterous husbands . <hl> William Lloyd Garrison of Massachusetts distinguished himself as the leader of the abolitionist movement . <hl> Although he had once been in favor of colonization , he came to believe that such a scheme only deepened racism and perpetuated the sinful practices of his fellow Americans . In 1831 , he founded the abolitionist newspaper The Liberator , whose first edition declared :", "hl_sentences": "Garrison also preached immediatism : the moral demand to take immediate action to end slavery . He wrote of equal rights and demanded that Blacks be treated as equal to Whites . Garrison and other abolitionists also used the power of petitions , sending hundreds of petitions to Congress in the early 1830s , demanding an end to slavery . Since most newspapers published congressional proceedings , the debate over abolition petitions reached readers throughout the nation . Influenced by evangelical Protestantism , Garrison and other abolitionists believed in moral suasion , a technique of appealing to the conscience of the public , especially slaveholders . William Lloyd Garrison of Massachusetts distinguished himself as the leader of the abolitionist movement .", "question": { "cloze_format": "William Lloyd Garrison dit not employ ___ in his abolitionist efforts.", "normal_format": "Which of the following did William Lloyd Garrison not employ in his abolitionist efforts?", "question_choices": [ "moral suasion", "immediatism", "political involvement", "pamphleteering" ], "question_id": "fs-idm213840880", "question_text": "Which of the following did William Lloyd Garrison not employ in his abolitionist efforts?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "They lectured to co-ed audiences." }, "bloom": null, "hl_context": "In the mid - 1830s , the sisters joined the abolitionist movement , and in 1837 , they embarked on a public lecture tour , speaking about immediate abolition to “ promiscuous assemblies , ” that is , to audiences of women and men . This public action thoroughly scandalized respectable society , where it was unheard of for women to lecture to men . <hl> William Lloyd Garrison endorsed the Grimké sisters ’ public lectures , but other abolitionists did not . <hl> <hl> Their lecture tour served as a turning point ; the reaction against them propelled the question of women ’ s proper sphere in society to the forefront of public debate . <hl>", "hl_sentences": "William Lloyd Garrison endorsed the Grimké sisters ’ public lectures , but other abolitionists did not . 
Their lecture tour served as a turning point ; the reaction against them propelled the question of women ’ s proper sphere in society to the forefront of public debate .", "question": { "cloze_format": "William Lloyd Garrison's endorsement of the Grimké sisters divided the abolitionist movement because ___.", "normal_format": "Why did William Lloyd Garrison’s endorsement of the Grimké sisters divide the abolitionist movement?", "question_choices": [ "They advocated equal rights for women.", "They supported colonization.", "They attended the Seneca Falls Convention.", "They lectured to co-ed audiences." ], "question_id": "fs-idm66023920", "question_text": "Why did William Lloyd Garrison’s endorsement of the Grimké sisters divide the abolitionist movement?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Catharine Beecher , the daughter of Lyman Beecher , pushed for women ’ s roles as educators . <hl> In her 1845 book , The Duty of American Women to Their Country , she argued that the United States had lost its moral compass due to democratic excess . Both “ intelligence and virtue ” were imperiled in an age of riots and disorder . Women , she argued , could restore the moral center by instilling in children a sense of right and wrong . Beecher represented a northern , middle-class female sensibility . The home , especially the parlor , became the site of northern female authority . <hl> Some northern female reformers saw new and vital roles for their sex in the realm of education . <hl> They believed in traditional gender roles , viewing women as inherently more moral and nurturing than men . <hl> Because of these attributes , the feminists argued , women were uniquely qualified to take up the roles of educators of children . <hl>", "hl_sentences": "Catharine Beecher , the daughter of Lyman Beecher , pushed for women ’ s roles as educators . Some northern female reformers saw new and vital roles for their sex in the realm of education . Because of these attributes , the feminists argued , women were uniquely qualified to take up the roles of educators of children .", "question": { "cloze_format": "___ focused on women's roles as the educators of children.", "normal_format": "Which female reformer focused on women’s roles as the educators of children?", "question_choices": [ "Lydia Maria Child", "Sarah Grimké", "Catherine Beecher", "Susan B. Anthony" ], "question_id": "fs-idm117627472", "question_text": "Which female reformer focused on women’s roles as the educators of children?" }, "references_are_paraphrase": null } ]
13
13.1 An Awakening of Religion and Individualism

Learning Objectives
By the end of this section, you will be able to:
Explain the connection between evangelical Protestantism and the Second Great Awakening
Describe the message of the transcendentalists

Protestantism shaped the views of the vast majority of Americans in the antebellum years. The influence of religion only intensified during the decades before the Civil War, as religious camp meetings spread the word that people could bring about their own salvation, a direct contradiction to the Calvinist doctrine of predestination. Alongside this religious fervor, transcendentalists advocated a more direct knowledge of the self and an emphasis on individualism. The writers and thinkers devoted to transcendentalism, as well as the reactions against it, created a trove of writings, an outpouring that has been termed the American Renaissance.

THE SECOND GREAT AWAKENING

The reform efforts of the antebellum era sprang from the Protestant revival fervor that found expression in what historians refer to as the Second Great Awakening . (The First Great Awakening of evangelical Protestantism had taken place in the 1730s and 1740s.) The Second Great Awakening emphasized an emotional religious style in which sinners grappled with their unworthy nature before concluding that they were born again, that is, turning away from their sinful past and devoting themselves to living a righteous, Christ-centered life. This emphasis on personal salvation, with its rejection of predestination (the Calvinist concept that God selected only a chosen few for salvation), was the religious embodiment of the Jacksonian celebration of the individual. Itinerant ministers preached the message of the awakening to hundreds of listeners at outdoor revival meetings ( Figure 13.3 ). The burst of religious enthusiasm that began in Kentucky and Tennessee in the 1790s and early 1800s among Baptists, Methodists, and Presbyterians owed much to the uniqueness of the early decades of the republic. These years saw swift population growth, broad western expansion, and the rise of participatory democracy. These political and social changes made many people anxious, and the more egalitarian, emotional, and individualistic religious practices of the Second Great Awakening provided relief and comfort for Americans experiencing rapid change. The awakening soon spread to the East, where it had a profound impact on Congregationalists and Presbyterians. The thousands swept up in the movement believed in the possibility of creating a much better world. Many adopted millennialism , the fervent belief that the Kingdom of God would be established on earth and that God would reign on earth for a thousand years, characterized by harmony and Christian morality. Those drawn to the message of the Second Great Awakening yearned for stability, decency, and goodness in the new and turbulent American republic. The Second Great Awakening also brought significant changes to American culture. Church membership doubled in the years between 1800 and 1835. Several new groups formed to promote and strengthen the message of religious revival. The American Bible Society, founded in 1816, distributed Bibles in an effort to ensure that every family had access to the sacred text, while the American Sunday School Union, established in 1824, focused on the religious education of children and published religious materials specifically for young readers.
In 1825, the American Tract Society formed with the goal of disseminating the Protestant revival message in a flurry of publications. Missionaries and circuit riders (ministers without a fixed congregation) brought the message of the awakening across the United States, including into the lives of the enslaved. The revival spurred many slaveholders to begin encouraging the people they enslaved to become Christians. Previously, many slaveholders feared allowing the enslaved to convert, due to a belief that Christians could not be enslaved and because of the fear that enslaved people might use Christian principles to oppose their enslavement. However, by the 1800s, Americans had established a legal foundation for the enslavement of Christians. Also, by this time, slaveholders had come to believe that if enslaved people learned the “right” (that is, White) form of Christianity, then they would be more obedient and hardworking. Allowing enslaved people access to Christianity also served to ease the consciences of Christian slaveholders, who argued that slavery was divinely ordained and that their faith nonetheless required them to bring the enslaved to the “truth.” Also important to this era was the creation of African American forms of worship as well as African American churches such as the African Methodist Episcopal Church, the first independent Black Protestant church in the United States. Formed in the 1790s by Richard Allen, the African Methodist Episcopal Church advanced the African American effort to express their faith apart from White Methodists ( Figure 13.4 ). In the Northeast, Presbyterian minister Charles Grandison Finney rose to prominence as one of the most important evangelicals in the movement ( Figure 13.4 ). Born in 1792 in western New York, Finney studied to be a lawyer until 1821, when he experienced a religious conversion and thereafter devoted himself to revivals. He led revival meetings in New York and Pennsylvania, but his greatest success occurred after he accepted a ministry in Rochester, New York, in 1830. At the time, Rochester was a boomtown because the Erie Canal had brought a lively shipping business. The new middle class—an outgrowth of the Industrial Revolution—embraced Finney’s message. It fit perfectly with their understanding of themselves as people shaping their own destiny. Workers also latched onto the message that they too could control their salvation, spiritually and perhaps financially as well. Western New York gained a reputation as the “burned over district,” a reference to the intense flames of religious fervor that swept the area during the Second Great Awakening.

TRANSCENDENTALISM

Beginning in the 1820s, a new intellectual movement known as transcendentalism began to grow in the Northeast. In this context, to transcend means to go beyond the ordinary sensory world to grasp personal insights and gain appreciation of a deeper reality, and transcendentalists believed that all people could attain an understanding of the world that surpassed rational, sensory experience. Transcendentalists were critical of mainstream American culture. They reacted against the age of mass democracy in Jacksonian America—what Tocqueville called the “tyranny of the majority”—by arguing for greater individualism against conformity.
European romanticism, a movement in literature and art that stressed emotion over cold, calculating reason, also influenced transcendentalists in the United States, especially the transcendentalists’ celebration of the uniqueness of individual feelings. Ralph Waldo Emerson emerged as the leading figure of this movement ( Figure 13.5 ). Born in Boston in 1803, Emerson came from a religious family. His father served as a Unitarian minister and, after graduating from Harvard Divinity School in the 1820s, Emerson followed in his father’s footsteps. However, after his wife died in 1831, he left the clergy. On a trip to Europe in 1832, he met leading figures of romanticism who rejected the hyper-rationalism of the Enlightenment, emphasizing instead emotion and the sublime. When Emerson returned home the following year, he began giving lectures on his romanticism-influenced ideas. In 1836, he published “Nature,” an essay arguing that humans can find their true spirituality in nature, not in the everyday bustling working world of Jacksonian democracy and industrial transformation. In 1841, Emerson published his essay “Self-Reliance,” which urged readers to think for themselves and reject the mass conformity and mediocrity he believed had taken root in American life. In this essay, he wrote, “Whoso would be a man must be a nonconformist,” demanding that his readers be true to themselves and not blindly follow a herd mentality. Emerson’s ideas dovetailed with those of the French aristocrat Alexis de Tocqueville, who wrote about the “tyranny of the majority” in his Democracy in America . Tocqueville, like Emerson, expressed concern that a powerful majority could overpower the will of individuals. Emerson’s ideas struck a chord with a class of literate adults who also were dissatisfied with mainstream American life and searching for greater spiritual meaning. Many writers were drawn to transcendentalism, and they started to express its ideas through new stories, poems, essays, and articles. The ideas of transcendentalism were able to permeate American thought and culture through a prolific print culture, which allowed magazines and journals to be widely disseminated. Among those attracted to Emerson’s ideas was his friend Henry David Thoreau, whom he encouraged to write about his own ideas. Thoreau placed a special emphasis on the role of nature as a gateway to the transcendentalist goal of greater individualism. In 1848, Thoreau gave a lecture in which he argued that individuals must stand up to governmental injustice, a topic he chose because of his disgust over the Mexican-American War and slavery. In 1849, he published his lecture “Civil Disobedience” and urged readers to refuse to support a government that was immoral. In 1854, he published Walden; Or, Life in the Woods , a book about the two years he spent in a small cabin on Walden Pond near Concord, Massachusetts ( Figure 13.6 ). Thoreau had lived there as an experiment in living apart, but not too far apart, from his conformist neighbors. Margaret Fuller also came to prominence as a leading transcendentalist and advocate for women’s equality. Fuller was a friend of Emerson, Thoreau, and other intellectuals of her day. Because she was a woman, she could not attend Harvard, as it was a male-only institution for undergraduate students until 1973. However, she was later granted the use of the library there because of her towering intellect.
In 1840, she became the editor of The Dial, a transcendentalist journal, and she later found employment as a book reviewer for the New York Tribune newspaper. Tragically, in 1850, she died at the age of forty in a shipwreck off Fire Island, New York.

Walt Whitman also added to the transcendentalist movement, most notably with Leaves of Grass, his 1855 collection of twelve poems that celebrated the subjective experience of the individual. One of the poems, “Song of Myself,” amplified the message of individualism while uniting the individual with all other people through a transcendent bond.

Americana

Walt Whitman’s “Song of Myself”

Walt Whitman (Figure 13.7) was a poet associated with the transcendentalists. His 1855 poem, “Song of Myself,” shocked many when it was first published, but it has been called one of the most influential poems in American literature.

I CELEBRATE myself, and sing myself,
And what I assume you shall assume,
For every atom belonging to me as good belongs to you.
I loafe and invite my soul,
I lean and loafe at my ease observing a spear of summer grass.
My tongue, every atom of my blood, form’d from this soil, this air,
Born here of parents born here from parents the same, and their parents the same,
I, now thirty-seven years old in perfect health begin,
Hoping to cease not till death. . . .
And I say to mankind, Be not curious about God,
For I who am curious about each am not curious about God,
(No array of terms can say how much I am at peace about God and about death.)
I hear and behold God in every object, yet understand God not in the least,
Nor do I understand who there can be more wonderful than myself. . . .
I too am not a bit tamed, I too am untranslatable,
I sound my barbaric yawp over the roofs of the world. . . .
You will hardly know who I am or what I mean,
But I shall be good health to you nevertheless,
And filter and fibre your blood.
Failing to fetch me at first keep encouraged,
Missing me one place search another,
I stop somewhere waiting for you.

What images does Whitman use to describe himself and the world around him? What might have been shocking about this poem in 1855? Why do you think it has endured?

Some critics took issue with transcendentalism’s emphasis on rampant individualism by pointing out the destructive consequences of compulsive human behavior. Herman Melville’s novel Moby Dick; or, The Whale emphasized the perils of individual obsession by telling the tale of Captain Ahab’s single-minded quest to kill a white whale, Moby Dick, which had destroyed Ahab’s original ship and caused him to lose one of his legs. Edgar Allan Poe, a popular author, critic, and poet, decried “the so-called poetry of the so-called transcendentalists.” These American writers who questioned transcendentalism illustrate the underlying tension between individualism and conformity in American life.

13.2 Antebellum Communal Experiments

Learning Objectives

By the end of this section, you will be able to:
Identify similarities and differences among utopian groups of the antebellum era
Explain how religious utopian communities differed from nonreligious ones

Prior to 1815, in the years before the market revolution and the Industrial Revolution, most Americans lived on farms where they produced much of the food and goods they used. This largely pre-capitalist culture centered on large family units whose members all lived in the same towns, counties, and parishes. Economic forces unleashed after 1815, however, forever altered that world.
More and more people now bought their food and goods in the thriving market economy, a shift that opened the door to a new way of life. These economic transformations generated various reactions; some people were nostalgic for what they viewed as simpler, earlier times, whereas others were willing to try new ways of living and working. In the early nineteenth century, experimental communities sprang up, created by men and women who hoped not just to create a better way of life but to recast American civilization, so that greater equality and harmony would prevail. Indeed, some of these reformers envisioned the creation of alternative ways of living, where people could attain perfection in human relations. The exact number of these societies is unknown because many of them were so short-lived, but the movement reached its apex in the 1840s.

RELIGIOUS UTOPIAN SOCIETIES

Most of those attracted to utopian communities had been profoundly influenced by evangelical Protestantism, especially the Second Great Awakening. However, their experience of revivalism had left them wanting to further reform society. The communities they formed and joined adhered to various socialist ideas and were considered radical, because members wanted to create a new social order, not reform the old.

German Protestant migrants formed several pietistic societies: communities that stressed transformative individual religious experience or piety over religious rituals and formality. One of the earliest of these, the Ephrata Cloister in Pennsylvania, was founded by a charismatic leader named Conrad Beissel in the 1730s. By the antebellum era, it was the oldest communal experiment in the United States. Its members devoted themselves to spiritual contemplation and a disciplined work regime while they awaited the millennium. They wore homespun rather than buying cloth or premade clothing, and they encouraged celibacy. Although the Ephrata Cloister remained small, it served as an early example of the type of community that antebellum reformers hoped to create.

In 1805, a second German religious society, led by George Rapp, took root in Pennsylvania with several hundred members called Rappites, who encouraged celibacy and adhered to the socialist principle of holding all goods in common (as opposed to allowing individual ownership). They not only built the town of Harmony but also produced surplus goods to sell to the outside world. In 1815, the group sold its Pennsylvania holdings and moved to Indiana, establishing New Harmony on a twenty-thousand-acre plot along the Wabash River. In 1825, members returned to Pennsylvania and established themselves in the town of Economy.

The Shakers provide another example of a community established with a religious mission. The Shakers started in England as an outgrowth of the Quaker religion in the middle of the eighteenth century. Ann Lee, a leader of the group in England, emigrated to New York in the 1770s, having experienced a profound religious awakening that convinced her that she was “mother in Christ.” She taught that God was both male and female; Jesus embodied the male side, while Mother Ann (as she came to be known by her followers) represented the female side. To Shakers in both England and the United States, Mother Ann represented the completion of divine revelation and the beginning of the millennium of heaven on earth. In practice, men and women in Shaker communities were held as equals—a radical departure at the time—and women often outnumbered men.
Equality extended to the possession of material goods as well; no one could hold private property. Shaker communities aimed for self-sufficiency, raising food and making all that was necessary, including furniture that emphasized excellent workmanship as a substitute for worldly pleasure. The defining features of the Shakers were their spiritual mysticism and their prohibition of sexual intercourse, which they held as an example of a lesser spiritual life and a source of conflict between women and men. Rapturous Shaker dances, for which the group gained notoriety, allowed for emotional release (Figure 13.8). The high point of the Shaker movement came in the 1830s, when about six thousand members populated communities in New England, New York, Ohio, Indiana, and Kentucky.

Another religious utopian experiment, the Oneida Community, began with the teachings of John Humphrey Noyes, a Vermonter who had graduated from Dartmouth, Andover Theological Seminary, and Yale. The Second Great Awakening exerted a powerful effect on him, and he came to believe in perfectionism, the idea that it is possible to be perfect and free of sin. Noyes claimed to have achieved this state of perfection in 1834.

Noyes applied his idea of perfection to relationships between men and women, earning notoriety for his unorthodox views on marriage and sexuality. Beginning in his hometown of Putney, Vermont, he began to advocate what he called “complex marriage”: a form of communal marriage in which women and men who had achieved perfection could engage in sexual intercourse without sin. Noyes also promoted “male continence,” whereby men would not ejaculate, thereby freeing women from pregnancy and the difficulty of determining paternity when they had many partners. Intercourse became fused with spiritual power among Noyes and his followers. The concept of complex marriage scandalized the townspeople in Putney, so Noyes and his followers removed to Oneida, New York. Individuals who wanted to join the Oneida Community underwent a tough screening process to weed out those who had not reached a state of perfection, which Noyes believed promoted self-control, not out-of-control behavior. The goal was a balance between individuals in a community of love and respect. The perfectionist community Noyes envisioned ultimately dissolved in 1881, although its joint-stock successor, today’s Oneida company, continues to operate (Figure 13.9).

The most successful religious utopian community to arise in the antebellum years was begun by Joseph Smith. Smith came from a large Vermont family that had not prospered in the new market economy and moved to the town of Palmyra, in the “burned-over district” of western New York. In 1823, Smith claimed to have been visited by the angel Moroni, who told him the location of a trove of golden plates or tablets. During the late 1820s, Smith translated the writing on the golden plates, and in 1830, he published his findings as The Book of Mormon. That same year, he organized the Church of Christ, the progenitor of the Church of Jesus Christ of Latter-day Saints, whose members were then popularly known as Mormons. He presented himself as a prophet and aimed to recapture what he viewed as the purity of the primitive Christian church, purity that had been lost over the centuries. Smith emphasized the importance of families being led by fathers. His vision of a reinvigorated patriarchy resonated with men and women who had not thrived during the market revolution, and his claims attracted those who hoped for a better future.
Smith’s new church placed great stress on work and discipline. He aimed to create a New Jerusalem where the church exercised oversight of its members. Smith’s claims of translating the golden plates antagonized his neighbors in New York. Difficulties with anti-Mormons led him and his followers to move to Kirtland, Ohio, in 1831. By 1838, as the United States experienced continued economic turbulence following the Panic of 1837, Smith and his followers were facing financial collapse after a series of efforts in banking and money-making ended in disaster. They moved to Missouri, but trouble soon developed there as well, as citizens reacted against the Church members’ beliefs. Actual fighting broke out in 1838, and the ten thousand or so members of the Church of Jesus Christ removed to Nauvoo, Illinois, where they founded a new center of Mormonism.

By the 1840s, Nauvoo boasted a population of thirty thousand, making it the largest utopian community in the United States. Thanks to some important conversions among powerful citizens in Illinois, the Church members had virtual autonomy in Nauvoo, which they used to create the largest armed force in the state. Smith also received further revelations there, including one that allowed male church leaders to practice polygamy. He also declared that all of North and South America would be the new Zion and announced that he would run for president in the 1844 election.

Smith and the Church members’ convictions and practices generated a great deal of opposition from neighbors in surrounding towns. Smith was arrested for treason (for his role in the destruction of the printing press of a newspaper that criticized Mormonism), and while he was in prison, an anti-Mormon mob stormed into his cell and killed him. Brigham Young (Figure 13.10) then assumed leadership of the group, which he led to a permanent home in what is now Salt Lake City, Utah.

SECULAR UTOPIAN SOCIETIES

Not all utopian communities were prompted by the religious fervor of the Second Great Awakening; some were outgrowths of the intellectual ideas of the time, such as romanticism with its emphasis on the importance of individualism over conformity. One of these, Brook Farm, took shape in West Roxbury, Massachusetts, in the 1840s. It was founded by George Ripley, a transcendentalist from Massachusetts. In the summer of 1841, this utopian community gained support from Boston-area thinkers and writers, an intellectual group that included many important transcendentalists. Brook Farm is best characterized as a community of intensely individualistic personalities who combined manual labor, such as growing and harvesting food, with intellectual pursuits. They opened a school that specialized in the liberal arts rather than rote memorization and published a weekly journal called The Harbinger, which was “Devoted to Social and Political Progress” (Figure 13.11). Members of Brook Farm never totaled more than one hundred, but it won renown largely because of the luminaries, such as Emerson and Thoreau, whose names were attached to it. Nathaniel Hawthorne, a Massachusetts writer who took issue with some of the transcendentalists’ claims, was a founding member of Brook Farm, and he fictionalized some of his experiences in his novel The Blithedale Romance. In 1846, a fire destroyed the main building of Brook Farm; already hampered by financial problems, the experiment came to an end in 1847.
Robert Owen, a British industrialist, helped inspire those who dreamed of a more equitable world in the face of the changes brought about by industrialization. Owen had risen to prominence before he turned thirty by running cotton mills in New Lanark, Scotland; these were considered the most successful cotton mills in Great Britain. Owen was deeply uneasy about the conditions of workers, and he devoted both his life and his fortune to trying to create cooperative societies where workers would lead meaningful, fulfilled lives. Unlike the founders of many utopian communities, he did not gain inspiration from religion; his vision derived instead from his faith in human reason to make the world better.

When the Rappite community in Harmony, Indiana, decided to sell its holdings and relocate to Pennsylvania, Owen seized the opportunity to put his ideas into action. In 1825, he bought the twenty-thousand-acre parcel in Indiana and renamed it New Harmony (Figure 13.12). After only a few years, however, a series of bad decisions by Owen and infighting over issues like the elimination of private property led to the dissolution of the community. But Owen’s ideas of cooperation and support inspired other “Owenite” communities in the United States, Canada, and Great Britain.

A French philosopher who advocated the creation of a new type of utopian community, Charles Fourier also inspired American readers, notably Arthur Brisbane, who popularized Fourier’s ideas in the United States. Fourier emphasized collective effort by groups of people or “associations.” Members of the association would be housed in large buildings or “phalanxes,” a type of communal living arrangement. Converts to Fourier’s ideas about a new science of living published and lectured vigorously. They believed labor was a type of capital, and the more unpleasant the job, the higher the wages should be. Fourierists in the United States created some twenty-eight communities between 1841 and 1858, but by the late 1850s, the movement had run its course.

13.3 Reforms to Human Health

Learning Objectives

By the end of this section, you will be able to:
Explain the different reforms aimed at improving the health of the human body
Describe the various factions and concerns within the temperance movement

Antebellum reform efforts aimed at perfecting the spiritual and social worlds of individuals, and as an outgrowth of those concerns, some reformers moved in the direction of ensuring the health of American citizens. Many Americans viewed drunkenness as a major national problem, and the battle against alcohol and the many problems associated with it led many to join the temperance movement. Other reformers offered plans to increase physical well-being, instituting regimens designed to restore vigor. Still others celebrated new sciences that would unlock the mysteries of human behavior and, by doing so, advance American civilization.

TEMPERANCE

According to many antebellum reformers, intemperance (drunkenness) stood as the most troubling problem in the United States, one that eroded morality and Christianity and played a starring role in corrupting American democracy. Americans consumed huge quantities of liquor in the early 1800s, including gin, whiskey, rum, and brandy. Indeed, scholars agree that the rate of consumption of these drinks during the first three decades of the 1800s reached levels that have never been equaled in American history.
A variety of reformers created organizations devoted to temperance, that is, moderation or self-restraint. Each of these organizations had its own distinct orientation and target audience. The earliest ones were formed in the 1810s in New England. The Massachusetts Society for the Suppression of Intemperance and the Connecticut Society for the Reformation of Morals were both formed in 1813. Protestant ministers led both organizations, which enjoyed support from New Englanders who clung to the ideals of the Federalist Party and later the Whigs. These early temperance societies called on individuals to lead pious lives and avoid sin, including the sin of overindulging in alcohol. They called not for the eradication of drinking but for a more restrained and genteel style of imbibing.

Americana

The Drunkard’s Progress

This 1840 temperance illustration (Figure 13.13) charts the path of destruction for those who drink. The step-by-step progression reads:
Step 1. A glass with a friend.
Step 2. A glass to keep the cold out.
Step 3. A glass too much.
Step 4. Drunk and riotous.
Step 5. The summit attained. Jolly companions. A confirmed drunkard.
Step 6. Poverty and disease.
Step 7. Forsaken by Friends.
Step 8. Desperation and crime.
Step 9. Death by suicide.

Who do you think was the intended audience for this engraving? How do you think different audiences (children, drinkers, nondrinkers) would react to the story it tells? Do you think it is an effective piece of propaganda? Why or why not?

In the 1820s, temperance gained ground largely through the work of Presbyterian minister Lyman Beecher. In 1825, Beecher delivered six sermons on temperance that were published the following year as Six Sermons on the Nature, Occasions, Signs, Evils, and Remedy of Intemperance. He urged total abstinence from hard liquor and called for the formation of voluntary associations to bring forth a new day without spirits (whiskey, rum, gin, brandy). Beecher’s work enjoyed a wide readership and support from leading Protestant ministers as well as the emerging middle class; temperance fit well with the middle-class ethic of encouraging hard work and a sober workforce.

In 1826, the American Temperance Society was formed, and by the early 1830s, thousands of similar societies had sprouted across the country. Members originally pledged to shun only hard liquor. By 1836, however, leaders of the temperance movement, including Beecher, called for a more comprehensive approach. Thereafter, most temperance societies advocated total abstinence; no longer would beer and wine be tolerated. Such total abstinence from alcohol is known as teetotalism. Teetotalism led to disagreement within the movement and a loss of momentum for reform after 1836. However, temperance enjoyed a revival in the 1840s, as a new type of reformer took up the cause against alcohol.

The engine driving the new burst of enthusiastic temperance reform was the Washington Temperance Society (named in deference to George Washington), which organized in 1840. The leaders of the Washingtonians came not from the ranks of Protestant ministers but from the working class. They aimed their efforts at confirmed alcoholics, unlike the early temperance advocates who mostly targeted the middle class. Washingtonians welcomed the participation of women and children, as they cast alcohol as the destroyer of families, and those who joined the group took a public pledge of teetotalism. Americans flocked to the Washingtonians; as many as 600,000 had taken the pledge by 1844.
The huge surge in membership had much to do with the style of this reform effort. The Washingtonians turned temperance into theater by dramatizing the plight of those who fell into the habit of drunkenness. Perhaps the most famous fictional drama put forward by the temperance movement was Ten Nights in a Bar-Room (1853), a novel that became the basis for popular theatrical productions. The Washingtonians also sponsored picnics and parades that drew whole families into the movement. The group’s popularity quickly waned in the late 1840s and early 1850s, when questions arose about the effectiveness of merely taking a pledge. Many who had done so soon relapsed into alcoholism.

Still, by that time, temperance had become a major political issue. Reformers lobbied for laws limiting or prohibiting alcohol, and states began to pass the first temperance laws. The earliest, an 1838 law in Massachusetts, prohibited the sale of liquor in quantities less than fifteen gallons, a move designed to make it difficult for ordinary workmen of modest means to buy spirits. The law was repealed in 1840, but Massachusetts towns then took the initiative by passing local laws banning alcohol. In 1845, close to one hundred towns in the state went “dry.” An 1839 Mississippi law, similar to Massachusetts’ original law, outlawed the sale of less than a gallon of liquor. Mississippi’s law illustrates the national popularity of temperance; regional differences notwithstanding, citizens in northern and southern states agreed on the issue of alcohol. Nonetheless, northern states pushed hardest for outlawing alcohol. Maine enacted the first statewide prohibition law in 1851. New England, New York, and states in the Midwest passed local laws in the 1850s, prohibiting the sale and manufacture of intoxicating beverages.

REFORMS FOR THE BODY AND THE MIND

Beyond temperance, other reformers looked to ways to maintain and improve health in a rapidly changing world. Without professional medical organizations or standards, health reform went in many different directions; although the American Medical Association was formed in 1847, it did not have much power to oversee medical practices. Too often, quack doctors prescribed regimens and medicines that did far more harm than good.

Sylvester Graham stands out as a leading light among the health reformers in the antebellum years. A Presbyterian minister, Graham began his career as a reformer, lecturing against the evils of strong drink. He combined an interest in temperance with vegetarianism and sexuality into what he called a “Science of Human Life,” calling for a regimented diet of more vegetables, fruits, and grain, and no alcohol, meat, or spices. Graham advocated baths and cleanliness in general to preserve health; hydropathy, or water cures for various ailments, became popular in the United States in the 1840s and 1850s. He also viewed masturbation and excessive sex as a cause of disease and debility. His ideas led him to create what he believed to be a perfect food that would maintain health: the Graham cracker, which he invented in 1829. Followers of Graham, known as Grahamites, established boardinghouses where lodgers followed the recommended strict diet and sexual regimen.

During the early nineteenth century, reformers also interested themselves in the workings of the mind in an effort to better understand the effects of a rapidly changing world awash with religious revivals and democratic movements.
Phrenology—the mapping of the cranium to specific human attributes—stands as an early type of science, related to what would become psychology and devoted to understanding how the mind worked. Phrenologists believed that the mind contained thirty-seven “faculties,” the strengths or weaknesses of which could be determined by a close examination of the size and shape of the cranium (Figure 13.14). Initially developed in Europe by Franz Joseph Gall, a German doctor, phrenology first came to the United States in the 1820s. In the 1830s and 1840s, it grew in popularity as lecturers crisscrossed the republic. It was sometimes used as an educational test, and like temperance, it also became a form of popular entertainment.

The popularity of phrenology offers us some insight into the emotional world of the antebellum United States. Its popularity speaks to the desire of those living in a rapidly changing society, where older ties to community and family were being challenged, to understand one another. It appeared to offer a way to quickly recognize an otherwise-unknown individual as a readily understood set of human faculties.

13.4 Addressing Slavery

Learning Objectives

By the end of this section, you will be able to:
Identify the different approaches to reforming the institution of slavery
Describe the abolitionist movement in the early to mid-nineteenth century

The issue of slavery proved especially combustible in the reform-minded antebellum United States. Those who hoped to end slavery had different ideas about how to do it. Some could not envision a biracial society and advocated sending Blacks to Africa or the Caribbean. Others promoted the use of violence as the best method to bring American slavery to an end. Abolitionists, by contrast, worked to end slavery and to create a multiracial society of equals using moral arguments—moral suasion—to highlight the immorality of slavery. In keeping with the religious fervor of the era, abolitionists hoped to bring about a mass conversion in public opinion to end slavery.

“REFORMS” TO SLAVERY

An early and popular “reform” to slavery was colonization, a movement advocating the removal of African Americans from the country, usually to Africa. In 1816, the Society for the Colonization of Free People of Color of America (also called the American Colonization Society or ACS) was founded with this goal. Leading statesmen including Thomas Jefferson endorsed the idea of colonization. Members of the ACS did not believe that Blacks and Whites could live as equals, so they targeted the roughly 200,000 free Blacks in the United States for relocation to Africa. For several years after the ACS’s founding, members raised money and pushed Congress for funds. In 1819, they succeeded in getting $100,000 from the federal government to further the colonization project. The ACS played a major role in the creation of the colony of Liberia, on the west coast of Africa. The country’s capital, Monrovia, was named in honor of President James Monroe. The ACS stands as an example of how White reformers, especially men of property and standing, addressed the issue of slavery. Their approach contrasted starkly with that of other reformers who sought to deal with slavery in the United States.

Although rebellion stretches the definition of reform, another potential solution to slavery was its violent overthrow. Nat Turner’s Rebellion, one of the largest slave uprisings in American history, took place in 1831, in Southampton County, Virginia.
Like many enslaved people, Nat Turner was inspired by the evangelical Protestant fervor sweeping the republic. He preached to fellow slaves in Southampton County, gaining a reputation among them as a prophet. He organized them for rebellion, awaiting a sign to begin; a solar eclipse in February 1831, followed by unusual atmospheric conditions that August, convinced him that the appointed time had come. Turner and as many as seventy other enslaved people killed their enslavers and their families, a total of around sixty-five people (Figure 13.15). Turner eluded capture until late October, when he was tried, hanged, and then beheaded and quartered. Virginia put to death fifty-six others whom they believed to have taken part in the rebellion. White vigilantes killed two hundred more as panic swept through Virginia and the rest of the South.

My Story

Nat Turner on His Battle against Slavery

Thomas R. Gray was a lawyer in Southampton, Virginia, where he visited Nat Turner in jail. He published The Confessions of Nat Turner, the leader of the late insurrection in Southampton, Va., as fully and voluntarily made to Thomas R. Gray in November 1831, after Turner had been executed.

For as the blood of Christ had been shed on this earth, and had ascended to heaven for the salvation of sinners, and was now returning to earth again in the form of dew . . . it was plain to me that the Saviour was about to lay down the yoke he had borne for the sins of men, and the great day of judgment was at hand. . . . And on the 12th of May, 1828, I heard a loud noise in the heavens, and the Spirit instantly appeared to me and said the Serpent was loosened, and Christ had laid down the yoke he had borne for the sins of men, and that I should take it on and fight against the Serpent, . . . Ques. Do you not find yourself mistaken now? Ans. Was not Christ crucified. And by signs in the heavens that it would make known to me when I should commence the great work—and on the appearance of the sign, (the eclipse of the sun last February) I should arise and prepare myself, and slay my enemies with their own weapons.

How did Turner interpret his fight against slavery? What did he mean by the “serpent”?

Nat Turner’s Rebellion provoked a heated discussion in Virginia over slavery. The Virginia legislature was already in the process of revising the state constitution, and some delegates advocated for an easier manumission process. The rebellion, however, rendered that reform impossible. Virginia and other slave states recommitted themselves to the institution of slavery, and defenders of slavery in the South increasingly blamed northerners for provoking the enslaved to rebel.

Literate, educated Black people, including David Walker, also favored rebellion. Walker was born a free Black man in North Carolina in 1796. He moved to Boston in the 1820s, lectured on slavery, and promoted the first African American newspaper, Freedom’s Journal. He called for Blacks to actively resist slavery and to use violence if needed. He published An Appeal to the Colored Citizens of the World in 1829, denouncing the scheme of colonization and urging Blacks to fight for equality in the United States and to take action against racism. Walker died months after the publication of his Appeal, and debate continues to this day over the cause of his death. Many believe he was murdered. Walker became a symbol of hope to free people in the North and a symbol of the terrors of literate, educated Black people to the slaveholders of the South.
ABOLITIONISM

Abolitionists took a far more radical approach to the issue of slavery by using moral arguments to advocate its immediate elimination. They publicized the atrocities committed under slavery and aimed to create a society characterized by equality of Black and White people. In a world of intense religious fervor, they hoped to bring about a mass awakening in the United States of the sin of slavery, confident that they could transform the national conscience against the South’s peculiar institution.

William Lloyd Garrison and Antislavery Societies

William Lloyd Garrison of Massachusetts distinguished himself as the leader of the abolitionist movement. Although he had once been in favor of colonization, he came to believe that such a scheme only deepened racism and perpetuated the sinful practices of his fellow Americans. In 1831, he founded the abolitionist newspaper The Liberator, whose first edition declared:

I am aware that many object to the severity of my language; but is there not cause for severity? I will be as harsh as truth, and as uncompromising as justice. On this subject, I do not wish to think, or speak, or write, with moderation. No! No! Tell a man whose house is on fire to give a moderate alarm; tell him to moderately rescue his wife from the hands of the ravisher; tell the mother to gradually extricate her babe from the fire into which it has fallen;—but urge me not to use moderation in a cause like the present. I am in earnest—I will not equivocate—I will not excuse—I will not retreat a single inch—AND I WILL BE HEARD.

White Virginians blamed Garrison for stirring up enslaved people and instigating slave rebellions like Nat Turner’s. Garrison founded the New England Anti-Slavery Society in 1831 and the American Anti-Slavery Society (AASS) in 1833. By 1838, the AASS had 250,000 members, sometimes called Garrisonians. They rejected colonization as a racist scheme and opposed the use of violence to end slavery. Influenced by evangelical Protestantism, Garrison and other abolitionists believed in moral suasion, a technique of appealing to the conscience of the public, especially slaveholders. Moral suasion relied on dramatic narratives, often from formerly enslaved people, about the horrors of slavery, arguing that slavery destroyed families, as children were sold and taken away from their mothers and fathers (Figure 13.16). Moral suasion resonated with many women, who condemned the sexual violence against enslaved women and the victimization of southern White women by adulterous husbands. Garrison also preached immediatism: the moral demand to take immediate action to end slavery. He wrote of equal rights and demanded that Blacks be treated as equal to Whites. He appealed to women and men, Black and White, to join the fight. The abolition press, which produced hundreds of tracts, helped to circulate moral suasion. Garrison and other abolitionists also used the power of petitions, sending hundreds of petitions to Congress in the early 1830s, demanding an end to slavery. Since most newspapers published congressional proceedings, the debate over abolition petitions reached readers throughout the nation.

Although Garrison rejected the U.S. political system as a tool of slaveholders, other abolitionists believed mainstream politics could bring about their goal, and they helped create the Liberty Party in 1840. Its first candidate was James G. Birney, who ran for president that year. Birney epitomized the ideals and goals of the abolitionist movement.
Born in Kentucky in 1792, Birney was an enslaver and, searching for a solution to what he eventually condemned as the immorality of slavery, initially endorsed colonization. In the 1830s, however, he rejected colonization, released the people he enslaved, and began to advocate the immediate end of slavery. The Liberty Party did not generate much support and remained a fringe third party. Many of its supporters turned to the Free-Soil Party in the aftermath of the Mexican Cession.

The vast majority of northerners rejected abolition entirely. Indeed, abolition generated a fierce backlash in the United States, especially during the Age of Jackson, when racism saturated American culture. Anti-abolitionists in the North saw Garrison and other abolitionists as the worst of the worst, a threat to the republic that might destroy all decency and order by upending time-honored distinctions between Blacks and Whites, and between women and men. Northern anti-abolitionists feared that if slavery ended, the North would be flooded with Blacks who would take jobs from Whites. Opponents made clear their resistance to Garrison and others of his ilk; Garrison nearly lost his life in 1835, when a Boston anti-abolitionist mob dragged him through the city streets. Anti-abolitionists tried to pass federal laws that made the distribution of abolitionist literature a criminal offense, fearing that such literature, with its engravings and simple language, could spark rebellious Black people to action. Their sympathizers in Congress passed a “gag rule” that forbade the consideration of the many hundreds of petitions sent to Washington by abolitionists. A mob in Illinois killed an abolitionist named Elijah Lovejoy in 1837, and the following year, ten thousand protestors destroyed the abolitionists’ newly built Pennsylvania Hall in Philadelphia, burning it to the ground.

Frederick Douglass

Many escaped enslaved people joined the abolitionist movement, including Frederick Douglass. Douglass was born in Maryland in 1818 and escaped to New York in 1838. He later moved to New Bedford, Massachusetts, with his wife. Douglass’s commanding presence and powerful speaking skills electrified his listeners when he began to provide public lectures on slavery. He came to the attention of Garrison and others, who encouraged him to publish his story. In 1845, Douglass published Narrative of the Life of Frederick Douglass, An American Slave Written by Himself, in which he recounted his life in slavery in Maryland (Figure 13.17). He identified by name the Whites who had brutalized him, and for that reason, along with the mere act of publishing his story, Douglass had to flee the United States to avoid being murdered. British abolitionist friends bought his freedom from his Maryland owner, and Douglass returned to the United States. He began to publish his own abolitionist newspaper, North Star, in Rochester, New York. During the 1840s and 1850s, Douglass labored to bring about the end of slavery by telling the story of his life and highlighting how slavery destroyed families, both Black and White.

My Story

Frederick Douglass on Slavery

White slaveholders frequently raped enslaved women. In this excerpt, Douglass explains the consequences for the children fathered by White masters with enslaved women.

Slaveholders have ordained, and by law established, that the children of slave women shall in all cases follow the condition of their mothers . . .
this is done too obviously to administer to their own lusts, and make a gratification of their wicked desires profitable as well as pleasurable . . . the slaveholder, in cases not a few, sustains to his slaves the double relation of master and father. . . . Such slaves [born of White masters] invariably suffer greater hardships . . . They are . . . a constant offence to their mistress . . . she is never better pleased than when she sees them under the lash, . . . The master is frequently compelled to sell this class of his slaves, out of deference to the feelings of his White wife; and, cruel as the deed may strike any one to be, for a man to sell his own children to human flesh-mongers, . . . for, unless he does this, he must not only whip them himself, but must stand by and see one White son tie up his brother, of but few shades darker . . . and ply the gory lash to his naked back.
—Frederick Douglass, Narrative of the Life of Frederick Douglass, An American Slave Written by Himself (1845)

What moral complications did slavery unleash upon White slaveholders in the South, according to Douglass? What imagery does he use?

13.5 Women’s Rights

Learning Objectives

By the end of this section, you will be able to:
Explain the connections between abolition, reform, and antebellum feminism
Describe the ways antebellum women’s movements were both traditional and revolutionary

Women took part in all the antebellum reforms, from transcendentalism to temperance to abolition. In many ways, traditional views of women as nurturers played a role in encouraging their participation. Women who joined the cause of temperance, for example, amplified their accepted role as moral guardians of the home. Some women advocated a much more expansive role for themselves and their peers by educating children and men in solid republican principles. But it was their work in antislavery efforts that served as a springboard for women to take action against gender inequality. Many, especially northern women, came to the conclusion that they, like enslaved people, were held in shackles in a society dominated by men.

Despite the radical nature of their effort to end slavery and create a biracial society, most abolitionist men clung to traditional notions of proper gender roles. White and Black women, as well as free Black men, were forbidden from occupying leadership positions in the AASS. Because women were not allowed to join the men in playing leading roles in the organization, they formed separate societies, such as the Boston Female Anti-Slavery Society, the Philadelphia Female Anti-Slavery Society, and similar groups.

THE GRIMKÉ SISTERS

Two leading abolitionist women, Sarah and Angelina Grimké, played major roles in combining the fight to end slavery with the struggle to achieve female equality. The sisters had been born into a prosperous slaveholding family in South Carolina. Both were caught up in the religious fervor of the Second Great Awakening, and they moved to the North and converted to Quakerism. In the mid-1830s, the sisters joined the abolitionist movement, and in 1837, they embarked on a public lecture tour, speaking about immediate abolition to “promiscuous assemblies,” that is, to audiences of women and men. This public action thoroughly scandalized respectable society, where it was unheard of for women to lecture to men. William Lloyd Garrison endorsed the Grimké sisters’ public lectures, but other abolitionists did not.
Their lecture tour served as a turning point; the reaction against them propelled the question of women’s proper sphere in society to the forefront of public debate.

THE DECLARATION OF RIGHTS AND SENTIMENTS

Participation in the abolitionist movement led some women to embrace feminism, the advocacy of women’s rights. Lydia Maria Child, an abolitionist and feminist, observed, “The comparison between women and the colored race is striking . . . both have been kept in subjection by physical force.” Other women, including Elizabeth Cady Stanton, Lucy Stone, and Susan B. Anthony, agreed (Figure 13.18).

In 1848, about three hundred male and female feminists, many of them veterans of the abolition campaign, gathered at the Seneca Falls Convention in New York for a conference on women’s rights that was organized by Lucretia Mott and Elizabeth Cady Stanton. It was the first of what became annual meetings that have continued to the present day. Attendees agreed to a “Declaration of Rights and Sentiments” based on the Declaration of Independence; it declared, “We hold these truths to be self-evident: that all men and women are created equal; that they are endowed by their Creator with certain inalienable rights; that among these are life, liberty, and the pursuit of happiness.” “The history of mankind,” the document continued, “is a history of repeated injuries and usurpations on the part of man toward woman, having in direct object the establishment of an absolute tyranny over her.”

REPUBLICAN MOTHERHOOD IN THE ANTEBELLUM YEARS

Some northern female reformers saw new and vital roles for their sex in the realm of education. They believed in traditional gender roles, viewing women as inherently more moral and nurturing than men. Because of these attributes, these reformers argued, women were uniquely qualified to take up the role of educators of children.

Catharine Beecher, the daughter of Lyman Beecher, pushed for women’s roles as educators. In her 1845 book, The Duty of American Women to Their Country, she argued that the United States had lost its moral compass due to democratic excess. Both “intelligence and virtue” were imperiled in an age of riots and disorder. Women, she argued, could restore the moral center by instilling in children a sense of right and wrong. Beecher represented a northern, middle-class female sensibility. The home, especially the parlor, became the site of northern female authority.
[ { "answer": { "ans_choice": 2, "ans_text": "special stains" }, "bloom": null, "hl_context": "To give you a sense of cell size , a typical human red blood cell is about eight millionths of a meter or eight micrometers ( abbreviated as eight μm ) in diameter ; the head of a pin of is about two thousandths of a meter ( two mm ) in diameter . That means about 250 red blood cells could fit on the head of a pin . Most student microscopes are classified as light microscopes ( Figure 4.2 a ) . Visible light passes and is bent through the lens system to enable the user to see the specimen . <hl> Light microscopes are advantageous for viewing living organisms , but since individual cells are generally transparent , their components are not distinguishable unless they are colored with special stains . <hl> Staining , however , usually kills the cells .", "hl_sentences": "Light microscopes are advantageous for viewing living organisms , but since individual cells are generally transparent , their components are not distinguishable unless they are colored with special stains .", "question": { "cloze_format": "When viewing a specimen through a light microscope, scientists use ________ to distinguish the individual components of cells.", "normal_format": "What do scientists use to distinguish the individual components of cells when viewing a specimen through a light microscope?", "question_choices": [ "a beam of electrons", "radioactive isotopes", "special stains", "high temperatures" ], "question_id": "fs-id1775146", "question_text": "When viewing a specimen through a light microscope, scientists use ________ to distinguish the individual components of cells." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "cell" }, "bloom": "2", "hl_context": "<hl> By the late 1830s , botanist Matthias Schleiden and zoologist Theodor Schwann were studying tissues and proposed the unified cell theory , which states that all living things are composed of one or more cells , the cell is the basic unit of life , and new cells arise from existing cells . <hl> Rudolf Virchow later made important contributions to this theory . <hl> A cell is the smallest unit of a living thing . <hl> A living thing , whether made of one cell ( like bacteria ) or many cells ( like a human ) , is called an organism . Thus , cells are the basic building blocks of all organisms .", "hl_sentences": "By the late 1830s , botanist Matthias Schleiden and zoologist Theodor Schwann were studying tissues and proposed the unified cell theory , which states that all living things are composed of one or more cells , the cell is the basic unit of life , and new cells arise from existing cells . A cell is the smallest unit of a living thing .", "question": { "cloze_format": "The ________ is the basic unit of life.", "normal_format": "What is the basic unit of life?", "question_choices": [ "organism", "cell", "tissue", "organ" ], "question_id": "fs-id1989359", "question_text": "The ________ is the basic unit of life." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "diffusion" }, "bloom": null, "hl_context": "At 0.1 to 5.0 μm in diameter , prokaryotic cells are significantly smaller than eukaryotic cells , which have diameters ranging from 10 to 100 μm ( Figure 4.6 ) . The small size of prokaryotes allows ions and organic molecules that enter them to quickly diffuse to other parts of the cell . <hl> Similarly , any wastes produced within a prokaryotic cell can quickly diffuse out . 
<hl> This is not the case in eukaryotic cells , which have developed different structural adaptations to enhance intracellular transport . Small size , in general , is necessary for all cells , whether prokaryotic or eukaryotic . Let ’ s examine why that is so . First , we ’ ll consider the area and volume of a typical cell . Not all cells are spherical in shape , but most tend to approximate a sphere . You may remember from your high school geometry course that the formula for the surface area of a sphere is 4πr 2 , while the formula for its volume is 4πr 3 / 3 . Thus , as the radius of a cell increases , its surface area increases as the square of its radius , but its volume increases as the cube of its radius ( much more rapidly ) . Therefore , as a cell increases in size , its surface area-to-volume ratio decreases . This same principle would apply if the cell had the shape of a cube ( Figure 4.7 ) . If the cell grows too large , the plasma membrane will not have sufficient surface area to support the rate of diffusion required for the increased volume . In other words , as a cell grows , it becomes less efficient . One way to become more efficient is to divide ; another way is to develop organelles that perform specific tasks . These adaptations lead to the development of more sophisticated cells called eukaryotic cells .", "hl_sentences": "Similarly , any wastes produced within a prokaryotic cell can quickly diffuse out .", "question": { "cloze_format": "Prokaryotes depend on ________ to obtain some materials and to get rid of wastes.", "normal_format": "What do prokaryotes depend on to obtain some materials and get rid of wastes?", "question_choices": [ "ribosomes", "flagella", "cell division", "diffusion" ], "question_id": "fs-id1617238", "question_text": "Prokaryotes depend on ________ to obtain some materials and to get rid of wastes." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "adhere to cell surfaces" }, "bloom": null, "hl_context": "A prokaryote is a simple , mostly single-celled ( unicellular ) organism that lacks a nucleus , or any other membrane-bound organelle . We will shortly come to see that this is significantly different in eukaryotes . Prokaryotic DNA is found in a central part of the cell : the nucleoid ( Figure 4.5 ) . Most prokaryotes have a peptidoglycan cell wall and many have a polysaccharide capsule ( Figure 4.5 ) . The cell wall acts as an extra layer of protection , helps the cell maintain its shape , and prevents dehydration . <hl> The capsule enables the cell to attach to surfaces in its environment . <hl> Some prokaryotes have flagella , pili , or fimbriae . Flagella are used for locomotion . Pili are used to exchange genetic material during a type of reproduction called conjugation . <hl> Fimbriae are used by bacteria to attach to a host cell . <hl> Career Connection Microbiologist", "hl_sentences": "The capsule enables the cell to attach to surfaces in its environment . Fimbriae are used by bacteria to attach to a host cell .", "question": { "cloze_format": "Bacteria that lack fimbriae are less likely to ________.", "normal_format": "Bacteria that lack fimbriae are less likely to what?", "question_choices": [ "adhere to cell surfaces", "swim through bodily fluids", "synthesize proteins", "retain the ability to divide" ], "question_id": "fs-id866358", "question_text": "Bacteria that lack fimbriae are less likely to ________." 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "the nucleoplasm" }, "bloom": "3", "hl_context": "<hl> The nuclear envelope is a double-membrane structure that constitutes the outermost portion of the nucleus ( Figure 4.11 ) . <hl> <hl> Both the inner and outer membranes of the nuclear envelope are phospholipid bilayers . <hl>", "hl_sentences": "The nuclear envelope is a double-membrane structure that constitutes the outermost portion of the nucleus ( Figure 4.11 ) . Both the inner and outer membranes of the nuclear envelope are phospholipid bilayers .", "question": { "cloze_format": "___ is surrounded by two phospholipid bilayers.", "normal_format": "Which of the following is surrounded by two phospholipid bilayers?", "question_choices": [ "the ribosomes", "the vesicles", "the cytoplasm", "the nucleoplasm" ], "question_id": "fs-id1798051", "question_text": "Which of the following is surrounded by two phospholipid bilayers?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "produced during their oxidation reactions" }, "bloom": null, "hl_context": "<hl> Peroxisomes are small , round organelles enclosed by single membranes . <hl> <hl> They carry out oxidation reactions that break down fatty acids and amino acids . <hl> They also detoxify many poisons that may enter the body . ( Many of these oxidation reactions release hydrogen peroxide , H 2 O 2 , which would be damaging to cells ; however , when these reactions are confined to peroxisomes , enzymes safely break down the H 2 O 2 into oxygen and water . ) For example , alcohol is detoxified by peroxisomes in liver cells . Glyoxysomes , which are specialized peroxisomes in plants , are responsible for converting stored fats into sugars .", "hl_sentences": "Peroxisomes are small , round organelles enclosed by single membranes . They carry out oxidation reactions that break down fatty acids and amino acids .", "question": { "cloze_format": "Peroxisomes got their name because hydrogen peroxide is ___ .", "normal_format": "How did peroxisomes get their name due to hydrogen peroxide?", "question_choices": [ "used in their detoxification reactions", "produced during their oxidation reactions", "incorporated into their membranes", "a cofactor for the organelles’ enzymes " ], "question_id": "fs-id2004118", "question_text": "Peroxisomes got their name because hydrogen peroxide is:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "vacuoles" }, "bloom": null, "hl_context": "<hl> Animal cells have another set of organelles not found in plant cells : lysosomes . <hl> <hl> The lysosomes are the cell ’ s “ garbage disposal . ” In plant cells , the digestive processes take place in vacuoles . <hl> Enzymes within the lysosomes aid the breakdown of proteins , polysaccharides , lipids , nucleic acids , and even worn-out organelles . These enzymes are active at a much lower pH than that of the cytoplasm . Therefore , the pH within lysosomes is more acidic than the pH of the cytoplasm . Many reactions that take place in the cytoplasm could not occur at a low pH , so again , the advantage of compartmentalizing the eukaryotic cell into organelles is apparent .", "hl_sentences": "Animal cells have another set of organelles not found in plant cells : lysosomes . The lysosomes are the cell ’ s “ garbage disposal . 
” In plant cells , the digestive processes take place in vacuoles .", "question": { "cloze_format": "In plant cells, the function of the lysosomes is carried out by __________.", "normal_format": "In plant cells, the function of the lysosomes is carried out by what?", "question_choices": [ "vacuoles", "peroxisomes", "ribosomes", "nuclei" ], "question_id": "fs-id1429843", "question_text": "In plant cells, the function of the lysosomes is carried out by __________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "ribosomes" }, "bloom": "2", "hl_context": "<hl> All cells share four common components : 1 ) a plasma membrane , an outer covering that separates the cell ’ s interior from its surrounding environment ; 2 ) cytoplasm , consisting of a jelly-like cytosol within the cell in which other cellular components are found ; 3 ) DNA , the genetic material of the cell ; and 4 ) ribosomes , which synthesize proteins . <hl> However , prokaryotes differ from eukaryotic cells in several ways .", "hl_sentences": "All cells share four common components : 1 ) a plasma membrane , an outer covering that separates the cell ’ s interior from its surrounding environment ; 2 ) cytoplasm , consisting of a jelly-like cytosol within the cell in which other cellular components are found ; 3 ) DNA , the genetic material of the cell ; and 4 ) ribosomes , which synthesize proteins .", "question": { "cloze_format": "(The) ___ is/are found both in eukaryotic and prokaryotic cells.", "normal_format": "Which of the following is found both in eukaryotic and prokaryotic cells?", "question_choices": [ "nucleus", "mitochondrion", "vacuole", "ribosomes" ], "question_id": "fs-id1624048", "question_text": "Which of the following is found both in eukaryotic and prokaryotic cells?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "mitochondrion" }, "bloom": "2", "hl_context": "<hl> The endomembrane system ( endo = “ within ” ) is a group of membranes and organelles ( Figure 4.18 ) in eukaryotic cells that works together to modify , package , and transport lipids and proteins . <hl> <hl> It includes the nuclear envelope , lysosomes , and vesicles , which we ’ ve already mentioned , and the endoplasmic reticulum and Golgi apparatus , which we will cover shortly . <hl> Although not technically within the cell , the plasma membrane is included in the endomembrane system because , as you will see , it interacts with the other endomembranous organelles . The endomembrane system does not include the membranes of either mitochondria or chloroplasts . Art Connection", "hl_sentences": "The endomembrane system ( endo = “ within ” ) is a group of membranes and organelles ( Figure 4.18 ) in eukaryotic cells that works together to modify , package , and transport lipids and proteins . It includes the nuclear envelope , lysosomes , and vesicles , which we ’ ve already mentioned , and the endoplasmic reticulum and Golgi apparatus , which we will cover shortly .", "question": { "cloze_format": "The ___ is not a component of the endomembrane system.", "normal_format": "Which of the following is not a component of the endomembrane system?", "question_choices": [ "mitochondrion", "Golgi apparatus", "endoplasmic reticulum", "lysosome" ], "question_id": "fs-id1486983", "question_text": "Which of the following is not a component of the endomembrane system?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "phagocytosis" }, "bloom": null, "hl_context": "In addition to their role as the digestive component and organelle-recycling facility of animal cells , lysosomes are considered to be parts of the endomembrane system . Lysosomes also use their hydrolytic enzymes to destroy pathogens ( disease-causing organisms ) that might enter the cell . A good example of this occurs in a group of white blood cells called macrophages , which are part of your body ’ s immune system . <hl> In a process known as phagocytosis or endocytosis , a section of the plasma membrane of the macrophage invaginates ( folds in ) and engulfs a pathogen . <hl> The invaginated section , with the pathogen inside , then pinches itself off from the plasma membrane and becomes a vesicle . The vesicle fuses with a lysosome . The lysosome ’ s hydrolytic enzymes then destroy the pathogen ( Figure 4.21 ) . 4.5 The Cytoskeleton Learning Objectives By the end of this section , you will be able to :", "hl_sentences": "In a process known as phagocytosis or endocytosis , a section of the plasma membrane of the macrophage invaginates ( folds in ) and engulfs a pathogen .", "question": { "cloze_format": "The process by which a cell engulfs a foreign particle is known as ___ .", "normal_format": "What is the process by which a cell engulfs a foreign particle known as?", "question_choices": [ "endosymbiosis", "phagocytosis", "hydrolysis", "membrane synthesis" ], "question_id": "fs-id1758285", "question_text": "The process by which a cell engulfs a foreign particle is known as:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "a cell that makes steroid hormones" }, "bloom": null, "hl_context": "The smooth endoplasmic reticulum ( SER ) is continuous with the RER but has few or no ribosomes on its cytoplasmic surface ( Figure 4.18 ) . <hl> Functions of the SER include synthesis of carbohydrates , lipids , and steroid hormones ; detoxification of medications and poisons ; and storage of calcium ions . <hl>", "hl_sentences": "Functions of the SER include synthesis of carbohydrates , lipids , and steroid hormones ; detoxification of medications and poisons ; and storage of calcium ions .", "question": { "cloze_format": "___ is most likely to have the greatest concentration of smooth endoplasmic reticulum.", "normal_format": "Which of the following is most likely to have the greatest concentration of smooth endoplasmic reticulum?", "question_choices": [ "a cell that secretes enzymes", "a cell that destroys pathogens", "a cell that makes steroid hormones", "a cell that engages in photosynthesis" ], "question_id": "fs-id8331650", "question_text": "Which of the following is most likely to have the greatest concentration of smooth endoplasmic reticulum?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "synthesis of the protein on the ribosome; modification in the Golgi apparatus; packaging in the endoplasmic reticulum; tagging in the vesicle" }, "bloom": null, "hl_context": "<hl> Finally , the modified and tagged proteins are packaged into secretory vesicles that bud from the trans face of the Golgi . <hl> While some of these vesicles deposit their contents into other parts of the cell where they will be used , other secretory vesicles fuse with the plasma membrane and release their contents outside the cell . The receiving side of the Golgi apparatus is called the cis face . 
The opposite side is called the trans face . The transport vesicles that formed from the ER travel to the cis face , fuse with it , and empty their contents into the lumen of the Golgi apparatus . <hl> As the proteins and lipids travel through the Golgi , they undergo further modifications that allow them to be sorted . <hl> The most frequent modification is the addition of short chains of sugar molecules . These newly modified proteins and lipids are then tagged with phosphate groups or other small molecules so that they can be routed to their proper destinations . We have already mentioned that vesicles can bud from the ER and transport their contents elsewhere , but where do the vesicles go ? <hl> Before reaching their final destination , the lipids or proteins within the transport vesicles still need to be sorted , packaged , and tagged so that they wind up in the right place . <hl> <hl> Sorting , tagging , packaging , and distribution of lipids and proteins takes place in the Golgi apparatus ( also called the Golgi body ) , a series of flattened membranes ( Figure 4.20 ) . <hl> Typically , the nucleus is the most prominent organelle in a cell ( Figure 4.8 ) . <hl> The nucleus ( plural = nuclei ) houses the cell ’ s DNA and directs the synthesis of ribosomes and proteins . <hl> Let ’ s look at it in more detail ( Figure 4.11 ) .", "hl_sentences": "Finally , the modified and tagged proteins are packaged into secretory vesicles that bud from the trans face of the Golgi . As the proteins and lipids travel through the Golgi , they undergo further modifications that allow them to be sorted . Before reaching their final destination , the lipids or proteins within the transport vesicles still need to be sorted , packaged , and tagged so that they wind up in the right place . Sorting , tagging , packaging , and distribution of lipids and proteins takes place in the Golgi apparatus ( also called the Golgi body ) , a series of flattened membranes ( Figure 4.20 ) . The nucleus ( plural = nuclei ) houses the cell ’ s DNA and directs the synthesis of ribosomes and proteins .", "question": { "cloze_format": "The sequences that correctly lists in order the steps involved in the incorporation of a proteinaceous molecule within a cell is ___.", "normal_format": "Which of the following sequences correctly lists in order the steps involved in the incorporation of a proteinaceous molecule within a cell?", "question_choices": [ "synthesis of the protein on the ribosome; modification in the Golgi apparatus; packaging in the endoplasmic reticulum; tagging in the vesicle", "synthesis of the protein on the lysosome; tagging in the Golgi; packaging in the vesicle; distribution in the endoplasmic reticulum", "synthesis of the protein on the ribosome; modification in the endoplasmic reticulum; tagging in the Golgi; distribution via the vesicle", "synthesis of the protein on the lysosome; packaging in the vesicle; distribution via the Golgi; tagging in the endoplasmic reticulum" ], "question_id": "fs-id1703517", "question_text": "Which of the following sequences correctly lists in order the steps involved in the incorporation of a proteinaceous molecule within a cell?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "microfilaments and microtubules" }, "bloom": "3", "hl_context": "As their name implies , microtubules are small hollow tubes . The walls of the microtubule are made of polymerized dimers of α-tubulin and β-tubulin , two globular proteins ( Figure 4.25 ) . 
With a diameter of about 25 nm , microtubules are the widest components of the cytoskeleton . They help the cell resist compression , provide a track along which vesicles move through the cell , and pull replicated chromosomes to opposite ends of a dividing cell . <hl> Like microfilaments , microtubules can dissolve and reform quickly . <hl> <hl> Microfilaments also provide some rigidity and shape to the cell . <hl> <hl> They can depolymerize ( disassemble ) and reform quickly , thus enabling a cell to change its shape and move . <hl> White blood cells ( your body ’ s infection-fighting cells ) make good use of this ability . They can move to the site of an infection and phagocytize the pathogen .", "hl_sentences": "Like microfilaments , microtubules can dissolve and reform quickly . Microfilaments also provide some rigidity and shape to the cell . They can depolymerize ( disassemble ) and reform quickly , thus enabling a cell to change its shape and move .", "question": { "cloze_format": "___ have the ability to disassemble and reform quickly.", "normal_format": "Which of the following have the ability to disassemble and reform quickly?", "question_choices": [ "microfilaments and intermediate filaments", "microfilaments and microtubules", "intermediate filaments and microtubules", "only intermediate filaments" ], "question_id": "fs-id1645274", "question_text": "Which of the following have the ability to disassemble and reform quickly?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "only intermediate filaments" }, "bloom": "3", "hl_context": "<hl> Intermediate filaments have no role in cell movement . <hl> Their function is purely structural . They bear tension , thus maintaining the shape of the cell , and anchor the nucleus and other organelles in place . Figure 4.22 shows how intermediate filaments create a supportive scaffolding inside the cell . The intermediate filaments are the most diverse group of cytoskeletal elements . Several types of fibrous proteins are found in the intermediate filaments . You are probably most familiar with keratin , the fibrous protein that strengthens your hair , nails , and the epidermis of the skin .", "hl_sentences": "Intermediate filaments have no role in cell movement .", "question": { "cloze_format": "___ do not play a role in intracellular movement.", "normal_format": "Which of the following do not play a role in intracellular movement?", "question_choices": [ "microfilaments and intermediate filaments", "microfilaments and microtubules", "intermediate filaments and microtubules", "only intermediate filaments" ], "question_id": "fs-id2219667", "question_text": "Which of the following do not play a role in intracellular movement?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "plasmodesmata" }, "bloom": "3", "hl_context": "In general , long stretches of the plasma membranes of neighboring plant cells cannot touch one another because they are separated by the cell wall that surrounds each cell ( Figure 4.8 b ) . How then , can a plant transfer water and other soil nutrients from its roots , through its stems , and to its leaves ? Such transport uses the vascular tissues ( xylem and phloem ) primarily . 
<hl> There also exist structural modifications called plasmodesmata ( singular = plasmodesma ) , numerous channels that pass between cell walls of adjacent plant cells , connect their cytoplasm , and enable materials to be transported from cell to cell , and thus throughout the plant ( Figure 4.28 ) . <hl>", "hl_sentences": "There also exist structural modifications called plasmodesmata ( singular = plasmodesma ) , numerous channels that pass between cell walls of adjacent plant cells , connect their cytoplasm , and enable materials to be transported from cell to cell , and thus throughout the plant ( Figure 4.28 ) .", "question": { "cloze_format": "___ are found only in plant cells.", "normal_format": "Which of the following are found only in plant cells?", "question_choices": [ "gap junctions", "desmosomes", "plasmodesmata", "tight junctions" ], "question_id": "fs-id1722302", "question_text": "Which of the following are found only in plant cells?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "intermediate filaments" }, "bloom": null, "hl_context": "<hl> Also found only in animal cells are desmosomes , which act like spot welds between adjacent epithelial cells ( Figure 4.30 ) . <hl> <hl> Short proteins called cadherins in the plasma membrane connect to intermediate filaments to create desmosomes . <hl> The cadherins join two adjacent cells together and maintain the cells in a sheet-like formation in organs and tissues that stretch , like the skin , heart , and muscles .", "hl_sentences": "Also found only in animal cells are desmosomes , which act like spot welds between adjacent epithelial cells ( Figure 4.30 ) . Short proteins called cadherins in the plasma membrane connect to intermediate filaments to create desmosomes .", "question": { "cloze_format": "The key components of desmosomes are cadherins and __________.", "normal_format": "What is the key component of desmosomes beside cadherins? ", "question_choices": [ "actin", "microfilaments", "intermediate filaments", "microtubules" ], "question_id": "fs-id1434588", "question_text": "The key components of desmosomes are cadherins and __________." }, "references_are_paraphrase": 0 } ]
4
4.1 Studying Cells Learning Objectives By the end of this section, you will be able to: Describe the role of cells in organisms Compare and contrast light microscopy and electron microscopy Summarize cell theory A cell is the smallest unit of a living thing. A living thing, whether made of one cell (like bacteria) or many cells (like a human), is called an organism. Thus, cells are the basic building blocks of all organisms. Several cells of one kind that interconnect with each other and perform a shared function form tissues, several tissues combine to form an organ (your stomach, heart, or brain), and several organs make up an organ system (such as the digestive system, circulatory system, or nervous system). Several systems that function together form an organism (like a human being). Here, we will examine the structure and function of cells. There are many types of cells, all grouped into one of two broad categories: prokaryotic and eukaryotic. For example, both animal and plant cells are classified as eukaryotic cells, whereas bacterial cells are classified as prokaryotic. Before discussing the criteria for determining whether a cell is prokaryotic or eukaryotic, let’s first examine how biologists study cells. Microscopy Cells vary in size. With few exceptions, individual cells cannot be seen with the naked eye, so scientists use microscopes (micro- = “small”; -scope = “to look at”) to study them. A microscope is an instrument that magnifies an object. Most photographs of cells are taken with a microscope, and these images can also be called micrographs. The optics of a microscope’s lenses change the orientation of the image that the user sees. A specimen that is right-side up and facing right on the microscope slide will appear upside-down and facing left when viewed through a microscope, and vice versa. Similarly, if the slide is moved left while looking through the microscope, it will appear to move right, and if moved down, it will seem to move up. This occurs because microscopes use two sets of lenses to magnify the image. Because of the manner by which light travels through the lenses, this system of two lenses produces an inverted image (binocular, or dissecting, microscopes work in a similar manner but include an additional magnification system that makes the final image appear to be upright). Light Microscopes To give you a sense of cell size, a typical human red blood cell is about eight millionths of a meter, or eight micrometers (abbreviated as eight μm), in diameter; the head of a pin is about two thousandths of a meter (two mm) in diameter. That means about 250 red blood cells could fit on the head of a pin. Most student microscopes are classified as light microscopes ( Figure 4.2 a ). Visible light passes through and is bent by the lens system to enable the user to see the specimen. Light microscopes are advantageous for viewing living organisms, but since individual cells are generally transparent, their components are not distinguishable unless they are colored with special stains. Staining, however, usually kills the cells. Light microscopes commonly used in the undergraduate college laboratory magnify up to approximately 400 times. Two parameters that are important in microscopy are magnification and resolving power. Magnification is the process of enlarging an object in appearance. Resolving power is the ability of a microscope to distinguish two adjacent structures as separate: the higher the resolution, the better the clarity and detail of the image.
When oil immersion lenses are used for the study of small objects, magnification is usually increased to 1,000 times. In order to gain a better understanding of cellular structure and function, scientists typically use electron microscopes. Electron Microscopes In contrast to light microscopes, electron microscopes ( Figure 4.2 b ) use a beam of electrons instead of a beam of light. Not only does this allow for higher magnification and, thus, more detail ( Figure 4.3 ), it also provides higher resolving power. The method used to prepare the specimen for viewing with an electron microscope kills the specimen. Electrons have shorter wavelengths than photons, and an electron beam travels best in a vacuum, so living cells cannot be viewed with an electron microscope. In a scanning electron microscope, a beam of electrons moves back and forth across a cell’s surface, revealing details of the cell’s surface characteristics. In a transmission electron microscope, the electron beam penetrates the cell and provides details of a cell’s internal structures. As you might imagine, electron microscopes are significantly bulkier and more expensive than light microscopes. Link to Learning For another perspective on cell size, try the HowBig interactive at this site. Cell Theory The microscopes we use today are far more complex than those used in the 1600s by Antony van Leeuwenhoek, a Dutch shopkeeper who had great skill in crafting lenses. Despite the limitations of his now-ancient lenses, van Leeuwenhoek observed the movements of protista (a type of single-celled organism) and sperm, which he collectively termed “animalcules.” In a 1665 publication called Micrographia, experimental scientist Robert Hooke coined the term “cell” for the box-like structures he observed when viewing cork tissue through a lens. In the 1670s, van Leeuwenhoek discovered bacteria and protozoa. Later advances in lenses, microscope construction, and staining techniques enabled other scientists to see some components inside cells. By the late 1830s, botanist Matthias Schleiden and zoologist Theodor Schwann were studying tissues and proposed the unified cell theory, which states that all living things are composed of one or more cells, the cell is the basic unit of life, and new cells arise from existing cells. Rudolf Virchow later made important contributions to this theory. Career Connection Cytotechnologist Have you ever heard of a medical test called a Pap smear ( Figure 4.4 )? In this test, a doctor takes a small sample of cells from the uterine cervix of a patient and sends it to a medical lab where a cytotechnologist stains the cells and examines them for any changes that could indicate cervical cancer or a microbial infection. Cytotechnologists (cyto- = “cell”) are professionals who study cells via microscopic examinations and other laboratory tests. They are trained to determine which cellular changes are within normal limits and which are abnormal. Their focus is not limited to cervical cells; they study cellular specimens that come from all organs. When they notice abnormalities, they consult a pathologist, who is a medical doctor who can make a clinical diagnosis. Cytotechnologists play a vital role in saving people’s lives. When abnormalities are discovered early, a patient’s treatment can begin sooner, which usually increases the chances of a successful outcome.
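Before leaving microscopy, the scale comparison from earlier in this section (an eight μm red blood cell versus a two mm pinhead) can be checked with a few lines of Python. This is a minimal illustrative sketch, not part of the original text; the variable names are our own, and the figure of 250 counts cells lined up across the pinhead’s diameter.

    # Scale check: how many 8-micrometer red blood cells span a 2-millimeter pinhead?
    red_blood_cell_diameter_m = 8e-6   # eight micrometers, from the text
    pinhead_diameter_m = 2e-3          # two millimeters, from the text

    cells_across = pinhead_diameter_m / red_blood_cell_diameter_m
    print(f"Cells across the pinhead: {cells_across:.0f}")  # prints 250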
4.2 Prokaryotic Cells Learning Objectives By the end of this section, you will be able to: Name examples of prokaryotic and eukaryotic organisms Compare and contrast prokaryotic cells and eukaryotic cells Describe the relative sizes of different kinds of cells Explain why cells must be small Cells fall into one of two broad categories: prokaryotic and eukaryotic. Only the predominantly single-celled organisms of the domains Bacteria and Archaea are classified as prokaryotes (pro- = “before”; -kary- = “nucleus”). Animals, plants, fungi, and protists are all eukaryotes (eu- = “true”) and are made up of eukaryotic cells. Components of Prokaryotic Cells All cells share four common components: 1) a plasma membrane, an outer covering that separates the cell’s interior from its surrounding environment; 2) cytoplasm, consisting of a jelly-like cytosol within the cell in which other cellular components are found; 3) DNA, the genetic material of the cell; and 4) ribosomes, which synthesize proteins. However, prokaryotes differ from eukaryotic cells in several ways. A prokaryote is a simple, mostly single-celled (unicellular) organism that lacks a nucleus, or any other membrane-bound organelle. We will shortly come to see that this is significantly different in eukaryotes. Prokaryotic DNA is found in a central part of the cell: the nucleoid ( Figure 4.5 ). Most prokaryotes have a peptidoglycan cell wall and many have a polysaccharide capsule ( Figure 4.5 ). The cell wall acts as an extra layer of protection, helps the cell maintain its shape, and prevents dehydration. The capsule enables the cell to attach to surfaces in its environment. Some prokaryotes have flagella, pili, or fimbriae. Flagella are used for locomotion. Pili are used to exchange genetic material during a type of reproduction called conjugation. Fimbriae are used by bacteria to attach to a host cell. Career Connection Microbiologist The most effective action anyone can take to prevent the spread of contagious illnesses is to wash his or her hands. Why? Because microbes (organisms so tiny that they can only be seen with microscopes) are ubiquitous. They live on doorknobs, money, your hands, and many other surfaces. If someone sneezes into his hand and touches a doorknob, and afterwards you touch that same doorknob, the microbes from the sneezer’s mucus are now on your hands. If you touch your hands to your mouth, nose, or eyes, those microbes can enter your body and could make you sick. However, not all microbes (also called microorganisms) cause disease; most are actually beneficial. You have microbes in your gut that make vitamin K. Other microorganisms are used to ferment beer and wine. Microbiologists are scientists who study microbes. Microbiologists can pursue a number of careers. Not only do they work in the food industry, they are also employed in the veterinary and medical fields. They can work in the pharmaceutical sector, serving key roles in research and development by identifying new sources of antibiotics that could be used to treat bacterial infections. Environmental microbiologists may look for new ways to use specially selected or genetically engineered microbes for the removal of pollutants from soil or groundwater, as well as hazardous elements from contaminated sites. These uses of microbes are called bioremediation technologies. 
Microbiologists can also work in the field of bioinformatics, providing specialized knowledge and insight for the design, development, and specificity of computer models of, for example, bacterial epidemics. Cell Size At 0.1 to 5.0 μm in diameter, prokaryotic cells are significantly smaller than eukaryotic cells, which have diameters ranging from 10 to 100 μm ( Figure 4.6 ). The small size of prokaryotes allows ions and organic molecules that enter them to quickly diffuse to other parts of the cell. Similarly, any wastes produced within a prokaryotic cell can quickly diffuse out. This is not the case in eukaryotic cells, which have developed different structural adaptations to enhance intracellular transport. Small size, in general, is necessary for all cells, whether prokaryotic or eukaryotic. Let’s examine why that is so. First, we’ll consider the area and volume of a typical cell. Not all cells are spherical in shape, but most tend to approximate a sphere. You may remember from your high school geometry course that the formula for the surface area of a sphere is 4πr², while the formula for its volume is 4πr³/3. Thus, as the radius of a cell increases, its surface area increases as the square of its radius, but its volume increases as the cube of its radius (much more rapidly). Therefore, as a cell increases in size, its surface area-to-volume ratio decreases. This same principle would apply if the cell had the shape of a cube ( Figure 4.7 ). If the cell grows too large, the plasma membrane will not have sufficient surface area to support the rate of diffusion required for the increased volume. In other words, as a cell grows, it becomes less efficient; the short calculation below works through the arithmetic. One way to become more efficient is to divide; another way is to develop organelles that perform specific tasks. These adaptations lead to the development of more sophisticated cells called eukaryotic cells. Art Connection Prokaryotic cells are much smaller than eukaryotic cells. What advantages might small cell size confer on a cell? What advantages might large cell size have? 4.3 Eukaryotic Cells Learning Objectives By the end of this section, you will be able to: Describe the structure of eukaryotic cells Compare animal cells with plant cells State the role of the plasma membrane Summarize the functions of the major cell organelles Have you ever heard the phrase “form follows function?” It’s a philosophy practiced in many industries. In architecture, this means that buildings should be constructed to support the activities that will be carried out inside them. For example, a skyscraper should be built with several elevator banks; a hospital should be built so that its emergency room is easily accessible. Our natural world also utilizes the principle of form following function, especially in cell biology, and this will become clear as we explore eukaryotic cells ( Figure 4.8 ). Unlike prokaryotic cells, eukaryotic cells have: 1) a membrane-bound nucleus; 2) numerous membrane-bound organelles such as the endoplasmic reticulum, Golgi apparatus, chloroplasts, mitochondria, and others; and 3) several rod-shaped chromosomes. Because a eukaryotic cell’s nucleus is surrounded by a membrane, it is often said to have a “true nucleus.” The word “organelle” means “little organ,” and, as already mentioned, organelles have specialized cellular functions, just as the organs of your body have specialized functions. At this point, it should be clear to you that eukaryotic cells have a more complex structure than prokaryotic cells.
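Here is the short numeric sketch promised in the Cell Size discussion above. It is a toy Python calculation using the sphere formulas quoted in the text; the sample radii are arbitrary illustrative values, not measurements.

    import math

    def surface_area(r):
        return 4 * math.pi * r ** 2      # sphere surface area, 4πr²

    def volume(r):
        return 4 * math.pi * r ** 3 / 3  # sphere volume, 4πr³/3

    # Surface area grows with r², volume with r³, so the ratio falls as the cell grows.
    for radius_um in (1, 10, 100):       # roughly prokaryote to large eukaryote
        ratio = surface_area(radius_um) / volume(radius_um)
        print(f"radius {radius_um:>3} μm: surface area/volume = {ratio:.2f} per μm")

Because the ratio works out to 3/r, a tenfold increase in radius leaves the cell with one tenth the membrane area per unit of volume, which is the sense in which a growing cell becomes less efficient.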
Organelles allow different functions to be compartmentalized in different areas of the cell. Before turning to organelles, let’s first examine two important components of the cell: the plasma membrane and the cytoplasm. Art Connection If the nucleolus were not able to carry out its function, what other cellular organelles would be affected? The Plasma Membrane Like prokaryotes, eukaryotic cells have a plasma membrane ( Figure 4.9 ), a phospholipid bilayer with embedded proteins that separates the internal contents of the cell from its surrounding environment. A phospholipid is a lipid molecule with two fatty acid chains and a phosphate-containing group. The plasma membrane controls the passage of organic molecules, ions, water, and oxygen into and out of the cell. Wastes (such as carbon dioxide and ammonia) also leave the cell by passing through the plasma membrane. The plasma membranes of cells that specialize in absorption are folded into fingerlike projections called microvilli (singular = microvillus) ( Figure 4.10 ). Such cells are typically found lining the small intestine, the organ that absorbs nutrients from digested food. This is an excellent example of form following function. People with celiac disease have an immune response to gluten, which is a protein found in wheat, barley, and rye. The immune response damages microvilli, and thus, afflicted individuals cannot absorb nutrients. This leads to malnutrition, cramping, and diarrhea. Patients suffering from celiac disease must follow a gluten-free diet. The Cytoplasm The cytoplasm is the entire region of a cell between the plasma membrane and the nuclear envelope (a structure to be discussed shortly). It is made up of organelles suspended in the gel-like cytosol, the cytoskeleton, and various chemicals ( Figure 4.8 ). Even though the cytoplasm consists of 70 to 80 percent water, it has a semi-solid consistency, which comes from the proteins within it. However, proteins are not the only organic molecules found in the cytoplasm. Glucose and other simple sugars, polysaccharides, amino acids, nucleic acids, fatty acids, and derivatives of glycerol are found there, too. Ions of sodium, potassium, calcium, and many other elements are also dissolved in the cytoplasm. Many metabolic reactions, including protein synthesis, take place in the cytoplasm. The Nucleus Typically, the nucleus is the most prominent organelle in a cell ( Figure 4.8 ). The nucleus (plural = nuclei) houses the cell’s DNA and directs the synthesis of ribosomes and proteins. Let’s look at it in more detail ( Figure 4.11 ). The Nuclear Envelope The nuclear envelope is a double-membrane structure that constitutes the outermost portion of the nucleus ( Figure 4.11 ). Both the inner and outer membranes of the nuclear envelope are phospholipid bilayers. The nuclear envelope is punctuated with pores that control the passage of ions, molecules, and RNA between the nucleoplasm and cytoplasm. The nucleoplasm is the semi-solid fluid inside the nucleus, where we find the chromatin and the nucleolus. Chromatin and Chromosomes To understand chromatin, it is helpful to first consider chromosomes. Chromosomes are structures within the nucleus that are made up of DNA, the hereditary material. You may remember that in prokaryotes, DNA is organized into a single circular chromosome. In eukaryotes, chromosomes are linear structures. Every eukaryotic species has a specific number of chromosomes in the nuclei of its body’s cells.
For example, in humans, the chromosome number is 46, while in fruit flies, it is eight. Chromosomes are only visible and distinguishable from one another when the cell is getting ready to divide. When the cell is in the growth and maintenance phases of its life cycle, proteins are attached to chromosomes, and they resemble an unwound, jumbled bunch of threads. These unwound protein-chromosome complexes are called chromatin ( Figure 4.12 ); chromatin describes the material that makes up the chromosomes both when condensed and decondensed. The Nucleolus We already know that the nucleus directs the synthesis of ribosomes, but how does it do this? Some chromosomes have sections of DNA that encode ribosomal RNA. A darkly staining area within the nucleus called the nucleolus (plural = nucleoli) aggregates the ribosomal RNA with associated proteins to assemble the ribosomal subunits that are then transported out through the pores in the nuclear envelope to the cytoplasm. Ribosomes Ribosomes are the cellular structures responsible for protein synthesis. When viewed through an electron microscope, ribosomes appear either as clusters (polyribosomes) or single, tiny dots that float freely in the cytoplasm. They may be attached to the cytoplasmic side of the plasma membrane or the cytoplasmic side of the endoplasmic reticulum and the outer membrane of the nuclear envelope ( Figure 4.8 ). Electron microscopy has shown us that ribosomes, which are large complexes of protein and RNA, consist of two subunits, aptly called large and small ( Figure 4.13 ). Ribosomes receive their “orders” for protein synthesis from the nucleus where the DNA is transcribed into messenger RNA (mRNA). The mRNA travels to the ribosomes, which translate the code provided by the sequence of the nitrogenous bases in the mRNA into a specific order of amino acids in a protein. Amino acids are the building blocks of proteins. Because the synthesis of proteins (including enzymes, hormones, antibodies, pigments, structural components, and surface receptors) is an essential function of all cells, ribosomes are found in practically every cell. Ribosomes are particularly abundant in cells that synthesize large amounts of protein. For example, the pancreas is responsible for creating several digestive enzymes, and the cells that produce these enzymes contain many ribosomes. Thus, we see another example of form following function. Mitochondria Mitochondria (singular = mitochondrion) are often called the “powerhouses” or “energy factories” of a cell because they are responsible for making adenosine triphosphate (ATP), the cell’s main energy-carrying molecule. ATP represents the short-term stored energy of the cell. Cellular respiration is the process of making ATP using the chemical energy found in glucose and other nutrients. In mitochondria, this process uses oxygen and produces carbon dioxide as a waste product. In fact, the carbon dioxide that you exhale with every breath comes from the cellular reactions that produce carbon dioxide as a byproduct. In keeping with our theme of form following function, it is important to point out that muscle cells have a very high concentration of mitochondria that produce ATP. Your muscle cells need a lot of energy to keep your body moving. When your cells don’t get enough oxygen, they do not make a lot of ATP. Instead, the small amount of ATP they make in the absence of oxygen is accompanied by the production of lactic acid.
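For reference, the aerobic cellular respiration just described is conventionally summarized by the standard balanced equation (the equation itself is not printed in this text, but it is the textbook summary of the process): C₆H₁₂O₆ + 6 O₂ → 6 CO₂ + 6 H₂O + energy (captured as ATP). Glucose and oxygen enter the reaction, and the carbon dioxide and water you exhale leave as byproducts.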
Mitochondria are oval-shaped, double-membrane organelles ( Figure 4.14 ) that have their own ribosomes and DNA. Each membrane is a phospholipid bilayer embedded with proteins. The inner layer has folds called cristae. The area surrounded by the folds is called the mitochondrial matrix. The cristae and the matrix have different roles in cellular respiration. Peroxisomes Peroxisomes are small, round organelles enclosed by single membranes. They carry out oxidation reactions that break down fatty acids and amino acids. They also detoxify many poisons that may enter the body. (Many of these oxidation reactions release hydrogen peroxide, H₂O₂, which would be damaging to cells; however, when these reactions are confined to peroxisomes, enzymes safely break down the H₂O₂ into oxygen and water.) For example, alcohol is detoxified by peroxisomes in liver cells. Glyoxysomes, which are specialized peroxisomes in plants, are responsible for converting stored fats into sugars. Vesicles and Vacuoles Vesicles and vacuoles are membrane-bound sacs that function in storage and transport. Other than the fact that vacuoles are somewhat larger than vesicles, there is a very subtle distinction between them: the membranes of vesicles can fuse with either the plasma membrane or other membrane systems within the cell, whereas the membrane of a vacuole does not fuse with the membranes of other cellular components. Additionally, some agents, such as enzymes within plant vacuoles, break down macromolecules. Animal Cells versus Plant Cells At this point, you know that each eukaryotic cell has a plasma membrane, cytoplasm, a nucleus, ribosomes, mitochondria, peroxisomes, and in some, vacuoles, but there are some striking differences between animal and plant cells. While both animal and plant cells have microtubule-organizing centers (MTOCs), animal cells also have centrioles associated with the MTOC: a complex called the centrosome. Animal cells each have a centrosome and lysosomes, whereas plant cells do not. Plant cells have a cell wall, chloroplasts and other specialized plastids, and a large central vacuole, whereas animal cells do not. The Centrosome The centrosome is a microtubule-organizing center found near the nuclei of animal cells. It contains a pair of centrioles, two structures that lie perpendicular to each other ( Figure 4.15 ). Each centriole is a cylinder of nine triplets of microtubules. The centrosome (the organelle where all microtubules originate) replicates itself before a cell divides, and the centrioles appear to have some role in pulling the duplicated chromosomes to opposite ends of the dividing cell. However, the exact function of the centrioles in cell division isn’t clear, because cells that have had the centrosome removed can still divide, and plant cells, which lack centrosomes, are capable of cell division. Lysosomes Animal cells have another set of organelles not found in plant cells: lysosomes. The lysosomes are the cell’s “garbage disposal.” In plant cells, the digestive processes take place in vacuoles. Enzymes within the lysosomes aid the breakdown of proteins, polysaccharides, lipids, nucleic acids, and even worn-out organelles. These enzymes are active at a much lower pH than that of the cytoplasm. Therefore, the pH within lysosomes is more acidic than the pH of the cytoplasm. Many reactions that take place in the cytoplasm could not occur at a low pH, so again, the advantage of compartmentalizing the eukaryotic cell into organelles is apparent.
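Because the pH scale is logarithmic, the gap between lysosomal and cytoplasmic pH is larger than it first appears. The short Python sketch below makes the point; note that the numeric pH values are typical textbook figures (roughly 4.8 for the lysosome interior and 7.2 for the cytosol), assumed for illustration and not given in this text.

    # Each pH unit is a tenfold difference in hydrogen ion concentration,
    # since pH = -log10([H+]).
    lysosome_ph = 4.8   # assumed typical value, not from this text
    cytosol_ph = 7.2    # assumed typical value, not from this text

    fold_difference = 10 ** (cytosol_ph - lysosome_ph)
    print(f"Lysosome interior holds about {fold_difference:.0f}x more H+ than the cytosol")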
The Cell Wall If you examine Figure 4.8 b, the diagram of a plant cell, you will see a structure external to the plasma membrane called the cell wall. The cell wall is a rigid covering that protects the cell, provides structural support, and gives shape to the cell. Fungal and protistan cells also have cell walls. While the chief component of prokaryotic cell walls is peptidoglycan, the major organic molecule in the plant cell wall is cellulose ( Figure 4.16 ), a polysaccharide made up of glucose units. Have you ever noticed that when you bite into a raw vegetable, like celery, it crunches? That’s because you are tearing the rigid cell walls of the celery cells with your teeth. Chloroplasts Like the mitochondria, chloroplasts have their own DNA and ribosomes, but chloroplasts have an entirely different function. Chloroplasts are plant cell organelles that carry out photosynthesis. Photosynthesis is the series of reactions that use carbon dioxide, water, and light energy to make glucose and oxygen. This is a major difference between plants and animals; plants (autotrophs) are able to make their own food, like sugars, while animals (heterotrophs) must ingest their food. Like mitochondria, chloroplasts have outer and inner membranes, but within the space enclosed by a chloroplast’s inner membrane is a set of interconnected and stacked fluid-filled membrane sacs called thylakoids ( Figure 4.17 ). Each stack of thylakoids is called a granum (plural = grana). The fluid enclosed by the inner membrane that surrounds the grana is called the stroma. The chloroplasts contain a green pigment called chlorophyll, which captures the light energy that drives the reactions of photosynthesis. Like plant cells, photosynthetic protists also have chloroplasts. Some bacteria perform photosynthesis, but their chlorophyll is not relegated to an organelle. Evolution Connection Endosymbiosis We have mentioned that both mitochondria and chloroplasts contain DNA and ribosomes. Have you wondered why? Strong evidence points to endosymbiosis as the explanation. Symbiosis is a relationship in which organisms from two separate species depend on each other for their survival. Endosymbiosis (endo- = “within”) is a mutually beneficial relationship in which one organism lives inside the other. Endosymbiotic relationships abound in nature. We have already mentioned that microbes that produce vitamin K live inside the human gut. This relationship is beneficial for us because we are unable to synthesize vitamin K. It is also beneficial for the microbes because they are protected from other organisms and from drying out, and they receive abundant food from the environment of the large intestine. Scientists have long noticed that bacteria, mitochondria, and chloroplasts are similar in size. We also know that bacteria have DNA and ribosomes, just as mitochondria and chloroplasts do. Scientists believe that host cells and bacteria formed an endosymbiotic relationship when the host cells ingested both aerobic and autotrophic bacteria (cyanobacteria) but did not destroy them. Through many millions of years of evolution, these ingested bacteria became more specialized in their functions, with the aerobic bacteria becoming mitochondria and the autotrophic bacteria becoming chloroplasts. The Central Vacuole Previously, we mentioned vacuoles as essential components of plant cells. If you look at Figure 4.8 b, you will see that plant cells each have a large central vacuole that occupies most of the area of the cell.
The central vacuole plays a key role in regulating the cell’s concentration of water in changing environmental conditions. Have you ever noticed that if you forget to water a plant for a few days, it wilts? That’s because as the water concentration in the soil becomes lower than the water concentration in the plant, water moves out of the central vacuoles and cytoplasm. As the central vacuole shrinks, it leaves the cell wall unsupported. This loss of support to the cell walls of plant cells results in the wilted appearance of the plant. The central vacuole also supports the expansion of the cell. When the central vacuole holds more water, the cell gets larger without having to invest a lot of energy in synthesizing new cytoplasm. 4.4 The Endomembrane System and Proteins Learning Objectives By the end of this section, you will be able to: List the components of the endomembrane system Recognize the relationship between the endomembrane system and its functions The endomembrane system (endo = “within”) is a group of membranes and organelles ( Figure 4.18 ) in eukaryotic cells that works together to modify, package, and transport lipids and proteins. It includes the nuclear envelope, lysosomes, and vesicles, which we’ve already mentioned, and the endoplasmic reticulum and Golgi apparatus, which we will cover shortly. Although not technically within the cell, the plasma membrane is included in the endomembrane system because, as you will see, it interacts with the other endomembranous organelles. The endomembrane system does not include the membranes of either mitochondria or chloroplasts. Art Connection If a peripheral membrane protein were synthesized in the lumen (inside) of the ER, would it end up on the inside or outside of the plasma membrane? The Endoplasmic Reticulum The endoplasmic reticulum (ER) ( Figure 4.18 ) is a series of interconnected membranous sacs and tubules that collectively modifies proteins and synthesizes lipids. However, these two functions are performed in separate areas of the ER: the rough ER and the smooth ER, respectively. The hollow portion of the ER tubules is called the lumen or cisternal space. The membrane of the ER, which is a phospholipid bilayer embedded with proteins, is continuous with the nuclear envelope. Rough ER The rough endoplasmic reticulum (RER) is so named because the ribosomes attached to its cytoplasmic surface give it a studded appearance when viewed through an electron microscope ( Figure 4.19 ). Ribosomes transfer their newly synthesized proteins into the lumen of the RER where they undergo structural modifications, such as folding or the acquisition of side chains. These modified proteins will be incorporated into cellular membranes—the membrane of the ER or those of other organelles—or secreted from the cell (such as protein hormones and enzymes). The RER also makes phospholipids for cellular membranes. If the phospholipids or modified proteins are not destined to stay in the RER, they will reach their destinations via transport vesicles that bud from the RER’s membrane ( Figure 4.18 ). Since the RER is engaged in modifying proteins (such as enzymes, for example) that will be secreted from the cell, you would be correct in assuming that the RER is abundant in cells that secrete proteins. This is the case with cells of the liver, for example. Smooth ER The smooth endoplasmic reticulum (SER) is continuous with the RER but has few or no ribosomes on its cytoplasmic surface ( Figure 4.18 ).
Functions of the SER include synthesis of carbohydrates, lipids, and steroid hormones; detoxification of medications and poisons; and storage of calcium ions. In muscle cells, a specialized SER called the sarcoplasmic reticulum is responsible for storage of the calcium ions that are needed to trigger the coordinated contractions of the muscle cells. Link to Learning You can watch an excellent animation of the endomembrane system here. At the end of the animation, there is a short self-assessment. Career Connection Cardiologist Heart disease is the leading cause of death in the United States. This is primarily due to our sedentary lifestyle and our high trans-fat diets. Heart failure is just one of many disabling heart conditions. Heart failure does not mean that the heart has stopped working. Rather, it means that the heart can’t pump with sufficient force to transport oxygenated blood to all the vital organs. Left untreated, heart failure can lead to kidney failure and failure of other organs. The wall of the heart is composed of cardiac muscle tissue. Heart failure occurs when the endoplasmic reticula of cardiac muscle cells do not function properly. As a result, an insufficient number of calcium ions are available to trigger a sufficient contractile force. Cardiologists (cardi- = “heart”; -ologist = “one who studies”) are doctors who specialize in treating heart diseases, including heart failure. Cardiologists can make a diagnosis of heart failure via physical examination, results from an electrocardiogram (ECG, a test that measures the electrical activity of the heart), a chest X-ray to see whether the heart is enlarged, and other tests. If heart failure is diagnosed, the cardiologist will typically prescribe appropriate medications and recommend a reduction in table salt intake and a supervised exercise program. The Golgi Apparatus We have already mentioned that vesicles can bud from the ER and transport their contents elsewhere, but where do the vesicles go? Before reaching their final destination, the lipids or proteins within the transport vesicles still need to be sorted, packaged, and tagged so that they wind up in the right place. Sorting, tagging, packaging, and distribution of lipids and proteins take place in the Golgi apparatus (also called the Golgi body), a series of flattened membranes ( Figure 4.20 ). The receiving side of the Golgi apparatus is called the cis face. The opposite side is called the trans face. The transport vesicles that formed from the ER travel to the cis face, fuse with it, and empty their contents into the lumen of the Golgi apparatus. As the proteins and lipids travel through the Golgi, they undergo further modifications that allow them to be sorted. The most frequent modification is the addition of short chains of sugar molecules. These newly modified proteins and lipids are then tagged with phosphate groups or other small molecules so that they can be routed to their proper destinations. Finally, the modified and tagged proteins are packaged into secretory vesicles that bud from the trans face of the Golgi. While some of these vesicles deposit their contents into other parts of the cell where they will be used, other secretory vesicles fuse with the plasma membrane and release their contents outside the cell.
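As a study aid, the route just described can be written out as an ordered list of stations. The following Python sketch is our own illustrative construct, with station labels paraphrasing the text rather than quoting it.

    # Stations of the secretory pathway, in the order described above.
    secretory_pathway = [
        "ribosome on the rough ER (protein synthesis)",
        "ER lumen (folding and initial modification)",
        "transport vesicle (buds from the ER membrane)",
        "cis face of the Golgi (vesicle fuses and empties its contents)",
        "Golgi interior (sugar chains added; tagged for sorting)",
        "secretory vesicle (buds from the trans face)",
        "plasma membrane (fusion and release outside the cell)",
    ]
    for step, station in enumerate(secretory_pathway, start=1):
        print(f"{step}. {station}")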
In another example of form following function, cells that engage in a great deal of secretory activity (such as cells of the salivary glands that secrete digestive enzymes or cells of the immune system that secrete antibodies) have an abundance of Golgi. In plant cells, the Golgi apparatus has the additional role of synthesizing polysaccharides, some of which are incorporated into the cell wall and some of which are used in other parts of the cell. Career Connection Geneticist Many diseases arise from genetic mutations that prevent the synthesis of critical proteins. One such disease is Lowe disease (also called oculocerebrorenal syndrome, because it affects the eyes, brain, and kidneys). In Lowe disease, there is a deficiency in an enzyme localized to the Golgi apparatus. Children with Lowe disease are born with cataracts, typically develop kidney disease after the first year of life, and may have impaired mental abilities. Lowe disease is a genetic disease caused by a mutation on the X chromosome. The X chromosome is one of the two human sex chromosomes, as these chromosomes determine a person's sex. Females possess two X chromosomes, while males possess one X and one Y chromosome. In females, the genes on only one of the two X chromosomes are expressed. Therefore, females who carry the Lowe disease gene on one of their X chromosomes have a 50/50 chance of having the disease. However, males have only one X chromosome, and the genes on this chromosome are always expressed. Therefore, males will always have Lowe disease if their X chromosome carries the Lowe disease gene. The location of the mutated gene, as well as the locations of many other mutations that cause genetic diseases, has now been identified. Through prenatal testing, a woman can find out if the fetus she is carrying may be afflicted with one of several genetic diseases. Geneticists analyze the results of prenatal genetic tests and may counsel pregnant women on available options. They may also conduct genetic research that leads to new drugs or foods, or perform DNA analyses that are used in forensic investigations. Lysosomes In addition to their role as the digestive component and organelle-recycling facility of animal cells, lysosomes are considered to be parts of the endomembrane system. Lysosomes also use their hydrolytic enzymes to destroy pathogens (disease-causing organisms) that might enter the cell. A good example of this occurs in a group of white blood cells called macrophages, which are part of your body’s immune system. In a process known as phagocytosis or endocytosis, a section of the plasma membrane of the macrophage invaginates (folds in) and engulfs a pathogen. The invaginated section, with the pathogen inside, then pinches itself off from the plasma membrane and becomes a vesicle. The vesicle fuses with a lysosome. The lysosome’s hydrolytic enzymes then destroy the pathogen ( Figure 4.21 ). 4.5 The Cytoskeleton Learning Objectives By the end of this section, you will be able to: Describe the cytoskeleton Compare the roles of microfilaments, intermediate filaments, and microtubules Compare and contrast cilia and flagella Summarize the differences among the components of prokaryotic cells, animal cells, and plant cells If you were to remove all the organelles from a cell, would the plasma membrane and the cytoplasm be the only components left? No.
Within the cytoplasm, there would still be ions and organic molecules, plus a network of protein fibers that help maintain the shape of the cell, secure some organelles in specific positions, allow cytoplasm and vesicles to move within the cell, and enable cells within multicellular organisms to move. Collectively, this network of protein fibers is known as the cytoskeleton. There are three types of fibers within the cytoskeleton: microfilaments, intermediate filaments, and microtubules ( Figure 4.22 ). Here, we will examine each. Microfilaments Of the three types of protein fibers in the cytoskeleton, microfilaments are the narrowest. They function in cellular movement, have a diameter of about 7 nm, and are made of two intertwined strands of a globular protein called actin ( Figure 4.23 ). For this reason, microfilaments are also known as actin filaments. Actin is powered by ATP to assemble its filamentous form, which serves as a track for the movement of a motor protein called myosin. This enables actin to engage in cellular events requiring motion, such as cell division in animal cells and cytoplasmic streaming, which is the circular movement of the cell cytoplasm in plant cells. Actin and myosin are plentiful in muscle cells. When your actin and myosin filaments slide past each other, your muscles contract. Microfilaments also provide some rigidity and shape to the cell. They can depolymerize (disassemble) and reform quickly, thus enabling a cell to change its shape and move. White blood cells (your body’s infection-fighting cells) make good use of this ability. They can move to the site of an infection and phagocytize the pathogen. Link to Learning To see an example of a white blood cell in action, watch a short time-lapse video of the cell capturing two bacteria. It engulfs one and then moves on to the other. Intermediate Filaments Intermediate filaments are made of several strands of fibrous proteins that are wound together ( Figure 4.24 ). These elements of the cytoskeleton get their name from the fact that their diameter, 8 to 10 nm, is between those of microfilaments and microtubules. Intermediate filaments have no role in cell movement. Their function is purely structural. They bear tension, thus maintaining the shape of the cell, and anchor the nucleus and other organelles in place. Figure 4.22 shows how intermediate filaments create a supportive scaffolding inside the cell. The intermediate filaments are the most diverse group of cytoskeletal elements. Several types of fibrous proteins are found in the intermediate filaments. You are probably most familiar with keratin, the fibrous protein that strengthens your hair, nails, and the epidermis of the skin. Microtubules As their name implies, microtubules are small hollow tubes. The walls of the microtubule are made of polymerized dimers of α-tubulin and β-tubulin, two globular proteins ( Figure 4.25 ).
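To keep the three fiber types straight, this small Python sketch gathers the diameters and roles given in the text into one structure; the dictionary layout and field names are our own, chosen for illustration.

    # Cytoskeletal fiber types, with diameters and roles as stated in the text.
    fibers = {
        "microfilaments": ("about 7 nm", "movement; rigidity and shape; disassemble and reform quickly"),
        "intermediate filaments": ("8 to 10 nm", "purely structural; bear tension; anchor nucleus and organelles"),
        "microtubules": ("about 25 nm", "resist compression; vesicle tracks; pull chromosomes apart"),
    }
    for name, (diameter, role) in fibers.items():
        print(f"{name} ({diameter}): {role}")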
In eukaryotic cells, flagella and cilia are quite different structurally from their counterparts in prokaryotes, as discussed below. Flagella and Cilia To refresh your memory, flagella (singular = flagellum) are long, hair-like structures that extend from the plasma membrane and are used to move an entire cell (for example, sperm, Euglena ). When present, the cell has just one flagellum or a few flagella. When cilia (singular = cilium) are present, however, many of them extend along the entire surface of the plasma membrane. They are short, hair-like structures that are used to move entire cells (such as paramecia) or substances along the outer surface of the cell (for example, the cilia of cells lining the Fallopian tubes that move the ovum toward the uterus, or cilia lining the cells of the respiratory tract that trap particulate matter and move it toward your nostrils.) Despite their differences in length and number, flagella and cilia share a common structural arrangement of microtubules called a “9 + 2 array.” This is an appropriate name because a single flagellum or cilium is made of a ring of nine microtubule doublets, surrounding a single microtubule doublet in the center ( Figure 4.26 ). You have now completed a broad survey of the components of prokaryotic and eukaryotic cells. For a summary of cellular components in prokaryotic and eukaryotic cells, see Table 4.1 . Components of Prokaryotic and Eukaryotic Cells Cell Component Function Present in Prokaryotes? Present in Animal Cells? Present in Plant Cells? Plasma membrane Separates cell from external environment; controls passage of organic molecules, ions, water, oxygen, and wastes into and out of cell Yes Yes Yes Cytoplasm Provides turgor pressure to plant cells as fluid inside the central vacuole; site of many metabolic reactions; medium in which organelles are found Yes Yes Yes Nucleolus Darkened area within the nucleus where ribosomal subunits are synthesized. No Yes Yes Nucleus Cell organelle that houses DNA and directs synthesis of ribosomes and proteins No Yes Yes Ribosomes Protein synthesis Yes Yes Yes Mitochondria ATP production/cellular respiration No Yes Yes Peroxisomes Oxidizes and thus breaks down fatty acids and amino acids, and detoxifies poisons No Yes Yes Vesicles and vacuoles Storage and transport; digestive function in plant cells No Yes Yes Centrosome Unspecified role in cell division in animal cells; source of microtubules in animal cells No Yes No Lysosomes Digestion of macromolecules; recycling of worn-out organelles No Yes No Cell wall Protection, structural support and maintenance of cell shape Yes, primarily peptidoglycan No Yes, primarily cellulose Chloroplasts Photosynthesis No No Yes Endoplasmic reticulum Modifies proteins and synthesizes lipids No Yes Yes Golgi apparatus Modifies, sorts, tags, packages, and distributes lipids and proteins No Yes Yes Cytoskeleton Maintains cell’s shape, secures organelles in specific positions, allows cytoplasm and vesicles to move within cell, and enables unicellular organisms to move independently Yes Yes Yes Flagella Cellular locomotion Some Some No, except for some plant sperm cells. 
Cilia | Cellular locomotion, movement of particles along extracellular surface of plasma membrane, and filtration | Some | Some | No
Table 4.1
4.6 Connections between Cells and Cellular Activities
Learning Objectives By the end of this section, you will be able to: Describe the extracellular matrix List examples of the ways that plant cells and animal cells communicate with adjacent cells Summarize the roles of tight junctions, desmosomes, gap junctions, and plasmodesmata
You already know that a group of similar cells working together is called a tissue. As you might expect, if cells are to work together, they must communicate with each other, just as you need to communicate with others if you work on a group project. Let's take a look at how cells communicate with each other.
Extracellular Matrix of Animal Cells
Most animal cells release materials into the extracellular space. The primary components of these materials are proteins, and the most abundant protein is collagen. Collagen fibers are interwoven with carbohydrate-containing protein molecules called proteoglycans. Collectively, these materials are called the extracellular matrix (Figure 4.27). Not only does the extracellular matrix hold the cells together to form a tissue, but it also allows the cells within the tissue to communicate with each other. How can this happen? Cells have protein receptors on the extracellular surfaces of their plasma membranes. When a molecule within the matrix binds to the receptor, it changes the molecular structure of the receptor. The receptor, in turn, changes the conformation of the microfilaments positioned just inside the plasma membrane. These conformational changes induce chemical signals inside the cell that reach the nucleus and turn "on" or "off" the transcription of specific sections of DNA, which affects the production of associated proteins, thus changing the activities within the cell. Blood clotting provides an example of the role of the extracellular matrix in cell communication. When the cells lining a blood vessel are damaged, they display a protein receptor called tissue factor. When tissue factor binds with another factor in the extracellular matrix, it causes platelets to adhere to the wall of the damaged blood vessel, stimulates the adjacent smooth muscle cells in the blood vessel to contract (thus constricting the blood vessel), and initiates a series of steps that stimulate the platelets to produce clotting factors.
Intercellular Junctions
Cells can also communicate with each other via direct contact, referred to as intercellular junctions. There are some differences in the ways that plant and animal cells do this. Plasmodesmata are junctions between plant cells, whereas animal cell contacts include tight junctions, gap junctions, and desmosomes.
Plasmodesmata
In general, long stretches of the plasma membranes of neighboring plant cells cannot touch one another because they are separated by the cell wall that surrounds each cell (Figure 4.8b). How, then, can a plant transfer water and other soil nutrients from its roots, through its stems, and to its leaves? Such transport relies primarily on the vascular tissues (xylem and phloem). There also exist structural modifications called plasmodesmata (singular = plasmodesma), numerous channels that pass between the cell walls of adjacent plant cells, connect their cytoplasm, and enable materials to be transported from cell to cell, and thus throughout the plant (Figure 4.28).
Tight Junctions
A tight junction is a watertight seal between two adjacent animal cells (Figure 4.29). The cells are held tightly against each other by proteins (predominantly two proteins called claudins and occludins). This tight adherence prevents materials from leaking between the cells; tight junctions are typically found in epithelial tissues that line internal organs and cavities, and comprise most of the skin. For example, the tight junctions of the epithelial cells lining your urinary bladder prevent urine from leaking out into the extracellular space.
Desmosomes
Also found only in animal cells are desmosomes, which act like spot welds between adjacent epithelial cells (Figure 4.30). Short proteins called cadherins in the plasma membrane connect to intermediate filaments to create desmosomes. The cadherins join two adjacent cells together and maintain the cells in a sheet-like formation in organs and tissues that stretch, like the skin, heart, and muscles.
Gap Junctions
Gap junctions in animal cells are like plasmodesmata in plant cells in that they are channels between adjacent cells that allow for the transport of ions, nutrients, and other substances that enable cells to communicate (Figure 4.31). Structurally, however, gap junctions and plasmodesmata differ. Gap junctions develop when a set of six proteins (called connexins) in the plasma membrane arrange themselves in an elongated doughnut-like configuration called a connexon. When the pores ("doughnut holes") of connexons in adjacent animal cells align, a channel between the two cells forms. Gap junctions are particularly important in cardiac muscle: the electrical signal for the muscle to contract is passed efficiently through gap junctions, allowing the heart muscle cells to contract in tandem.
Link to Learning
To conduct a virtual microscopy lab and review the parts of a cell, work through the steps of this interactive assignment.
u.s._history
Summary
29.1 The Kennedy Promise
The arrival of the Kennedys in the White House seemed to signal a new age of youth, optimism, and confidence. Kennedy spoke of a "new frontier" and promoted the expansion of programs to aid the poor, protect African Americans' right to vote, and improve African Americans' employment and education opportunities. For the most part, however, Kennedy focused on foreign policy and countering the threat of Communism—especially in Cuba, where he successfully defused the Cuban Missile Crisis, and in Vietnam, to which he sent advisors and troops to support the South Vietnamese government. The tragedy of Kennedy's assassination in Dallas brought an early end to the era, leaving Americans to wonder whether his vice president and successor, Lyndon Johnson, would bring Kennedy's vision for the nation to fruition.
29.2 Lyndon Johnson and the Great Society
Lyndon Johnson began his administration with dreams of fulfilling his fallen predecessor's civil rights initiative and accomplishing his own plans to improve lives by eradicating poverty in the United States. His social programs, investments in education, support for the arts, and commitment to civil rights changed the lives of countless people and transformed society in many ways. However, Johnson's insistence on maintaining American commitments in Vietnam, a policy begun by his predecessors, hurt both his ability to realize his vision of the Great Society and his support among the American people.
29.3 The Civil Rights Movement Marches On
The African American civil rights movement made significant progress in the 1960s. While Congress played a role by passing the Civil Rights Act of 1964, the Voting Rights Act of 1965, and the Civil Rights Act of 1968, the actions of civil rights groups such as CORE, the SCLC, and SNCC were instrumental in forging new paths, pioneering new techniques and strategies, and achieving breakthrough successes. Civil rights activists engaged in sit-ins, freedom rides, and protest marches, and registered African American voters. Despite the movement's many achievements, however, many grew frustrated with the slow pace of change, the failure of the Great Society to alleviate poverty, and the persistence of violence against African Americans, particularly the tragic 1968 assassination of Martin Luther King, Jr. Many African Americans in the mid- to late 1960s adopted the ideology of Black Power, which promoted their work within their own communities to redress problems without the aid of Whites. The Mexican American civil rights movement, led largely by Cesar Chavez, also made significant progress at this time. The emergence of the Chicano Movement signaled Mexican Americans' determination to seize their political power, celebrate their cultural heritage, and demand their citizenship rights.
29.4 Challenging the Status Quo
During the 1960s, many people rejected traditional roles and expectations. Influenced and inspired by the civil rights movement, college students of the baby boomer generation and women of all ages began to fight to secure a stronger role in American society. As members of groups like SDS and NOW asserted their rights and strove for equality for themselves and others, they upended many accepted norms and set groundbreaking social and legal changes in motion. Many of their successes continue to be felt today, while other goals remain unfulfilled.
Chapter Outline
29.1 The Kennedy Promise
29.2 Lyndon Johnson and the Great Society
29.3 The Civil Rights Movement Marches On
29.4 Challenging the Status Quo
Introduction
The 1960s was a decade of hope, change, and war that witnessed an important shift in American culture. Citizens from all walks of life sought to expand the meaning of the American promise. Their efforts helped unravel the national consensus and laid bare a far more fragmented society. As a result, men and women from all ethnic groups attempted to reform American society to make it more equitable. The United States also began to take unprecedented steps to exert what it believed to be a positive influence on the world. At the same time, the country's role in Vietnam revealed the limits of military power and the contradictions of U.S. foreign policy. The posthumous portrait of John F. Kennedy (Figure 29.1) captures this mix of the era's promise and defeat. His election encouraged many to work for a better future, for both the middle class and the marginalized. Kennedy's running mate, Lyndon B. Johnson, also envisioned a country characterized by the social and economic freedoms established during the New Deal years. Kennedy's assassination in 1963, and the assassinations five years later of Martin Luther King, Jr. and Robert F. Kennedy, made it dramatically clear that not all Americans shared this vision of a more inclusive democracy.
[ { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "On October 22 , Kennedy demanded that Soviet premier Nikita Khrushchev remove the missiles . <hl> He also ordered a naval quarantine placed around Cuba to prevent Soviet ships from approaching . <hl> Despite his use of the word “ quarantine ” instead of “ blockade , ” for a blockade was considered an act of war , a potential war with the Soviet Union was nevertheless on the president ’ s mind . As U . S . ships headed for Cuba , the army was told to prepare for war , and Kennedy appeared on national television to declare his intention to defend the Western Hemisphere from Soviet aggression .", "hl_sentences": "He also ordered a naval quarantine placed around Cuba to prevent Soviet ships from approaching .", "question": { "cloze_format": "The term Kennedy chose to describe his sealing off of Cuba to prevent Soviet shipments of weapons or supplies was ________.", "normal_format": "Which term did Kennedy choose to describe his sealing off of Cuba to prevent Soviet shipments of weapons or supplies?", "question_choices": [ "interdiction", "quarantine", "isolation", "blockade" ], "question_id": "fs-idm95471712", "question_text": "The term Kennedy chose to describe his sealing off of Cuba to prevent Soviet shipments of weapons or supplies was ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "outlaw poll taxes" }, "bloom": null, "hl_context": "His strongest focus was on securing the voting rights of African Americans . Kennedy feared the loss of support from southern White Democrats and the impact a struggle over civil rights could have on his foreign policy agenda as well as on his reelection in 1964 . But he thought voter registration drives far preferable to the boycotts , sit-ins , and integration marches that had generated such intense global media coverage in previous years . <hl> Encouraged by Congress ’ s passage of the Civil Rights Act of 1960 , which permitted federal courts to appoint referees to guarantee that qualified persons would be registered to vote , Kennedy focused on the passage of a constitutional amendment outlawing poll taxes , a tactic that southern states used to disenfranchise African American voters . <hl> Originally proposed by President Truman ’ s Committee on Civil Rights , the idea had been largely forgotten during Eisenhower ’ s time in office . Kennedy , however , revived it and convinced Spessard Holland , a conservative Florida senator , to introduce the proposed amendment in Congress . It passed both houses of Congress and was sent to the states for ratification in September 1962 .", "hl_sentences": "Encouraged by Congress ’ s passage of the Civil Rights Act of 1960 , which permitted federal courts to appoint referees to guarantee that qualified persons would be registered to vote , Kennedy focused on the passage of a constitutional amendment outlawing poll taxes , a tactic that southern states used to disenfranchise African American voters .", "question": { "cloze_format": "Kennedy proposed a constitutional amendment that would ________.", "normal_format": "What would the constitutional amendment that Kennedy proposed do?", "question_choices": [ "provide healthcare for all Americans", "outlaw poll taxes", "make English the official language of the United States", "require all American men to register for the draft" ], "question_id": "fs-idm291538896", "question_text": "Kennedy proposed a constitutional amendment that would ________." 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "Medicaid" }, "bloom": null, "hl_context": "Although the Great Society failed to eliminate suffering or increase civil rights to the extent that Johnson wished , it made a significant difference in people ’ s lives . <hl> By the end of Johnson ’ s administration , the percentage of people living below the poverty line had been cut nearly in half . <hl> <hl> While more people of color than Whites continued to live in poverty , the percentage of poor African Americans had decreased dramatically . <hl> <hl> The creation of Medicare and Medicaid as well as the expansion of Social Security benefits and welfare payments improved the lives of many , while increased federal funding for education enabled more people to attend college than ever before . <hl> Conservative critics argued that , by expanding the responsibilities of the federal government to care for the poor , Johnson had hurt both taxpayers and the poor themselves . Aid to the poor , many maintained , would not only fail to solve the problem of poverty but would also encourage people to become dependent on government “ handouts ” and lose their desire and ability to care for themselves — an argument that many found intuitively compelling but which lacked conclusive evidence . These same critics also accused Johnson of saddling the United States with a large debt as a result of the deficit spending ( funded by borrowing ) in which he had engaged . 29.3 The Civil Rights Movement Marches On Learning Objectives By the end of this section , you will be able to : The Johnson administration , realizing the nation ’ s elderly were among its poorest and most disadvantaged citizens , passed the Social Security Act of 1965 . The most profound change made by this act was the creation of Medicare , a program to pay the medical expenses of those over sixty-five . Although opposed by the American Medical Association , which feared the creation of a national healthcare system , the new program was supported by most citizens because it would benefit all social classes , not just the poor . The act and subsequent amendments to it also provided coverage for self-employed people in certain occupations and expanded the number of disabled who qualified for benefits . <hl> The following year , the Medicaid program allotted federal funds to pay for medical care for the poor . <hl>", "hl_sentences": "By the end of Johnson ’ s administration , the percentage of people living below the poverty line had been cut nearly in half . While more people of color than Whites continued to live in poverty , the percentage of poor African Americans had decreased dramatically . The creation of Medicare and Medicaid as well as the expansion of Social Security benefits and welfare payments improved the lives of many , while increased federal funding for education enabled more people to attend college than ever before . The following year , the Medicaid program allotted federal funds to pay for medical care for the poor .", "question": { "cloze_format": "________ was Johnson’s program to provide federal funding for healthcare for the poor.", "normal_format": "What was Johnson’s program to provide federal funding for healthcare for the poor?", "question_choices": [ "Medicare", "Social Security", "Medicaid", "AFDC" ], "question_id": "fs-idp182773472", "question_text": "________ was Johnson’s program to provide federal funding for healthcare for the poor." 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "Although North Vietnamese forces suffered far more casualties than the roughly forty-one hundred U . S . soldiers killed , public opinion in the United States , fueled by graphic images provided in unprecedented media coverage , turned against the war . <hl> Disastrous surprise attacks like the Tet Offensive persuaded many that the war would not be over soon and raised doubts about whether Johnson ’ s administration was telling the truth about the real state of affairs . <hl> In May 1968 , with over 400,000 U . S . soldiers in Vietnam , Johnson began peace talks with the North .", "hl_sentences": "Disastrous surprise attacks like the Tet Offensive persuaded many that the war would not be over soon and raised doubts about whether Johnson ’ s administration was telling the truth about the real state of affairs .", "question": { "cloze_format": "Many Americans began to doubt that the war in Vietnam could be won following ________.", "normal_format": "Following what many Americans did begin to doubt that the war in Vietnam could be won?", "question_choices": [ "Khe Sanh", "Dien Bien Phu", "the Tonkin Gulf incident", "the Tet Offensive" ], "question_id": "fs-idm48526064", "question_text": "Many Americans began to doubt that the war in Vietnam could be won following ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "On February 1 , 1960 , four sophomores at the North Carolina Agricultural & Technical College in Greensboro — Ezell Blair , Jr . , Joseph McNeil , David Richmond , and Franklin McCain — entered the local Woolworth ’ s and sat at the lunch counter . The lunch counter was segregated , and they were refused service as they knew they would be . They had specifically chosen Woolworth ’ s , because it was a national chain and was thus believed to be especially vulnerable to negative publicity . Over the next few days , more protesters joined the four sophomores . Hostile Whites responded with threats and taunted the students by pouring sugar and ketchup on their heads . <hl> The successful six-month-long Greensboro sit-in initiated the student phase of the African American civil rights movement and , within two months , the sit-in movement had spread to fifty-four cities in nine states ( Figure 29.14 ) . <hl>", "hl_sentences": "The successful six-month-long Greensboro sit-in initiated the student phase of the African American civil rights movement and , within two months , the sit-in movement had spread to fifty-four cities in nine states ( Figure 29.14 ) .", "question": { "cloze_format": "The new protest tactic against segregation used by students in Greensboro, North Carolina, in 1960 was the ________.", "normal_format": "What was the new protest tactic against segregation used by students in Greensboro, North Carolina, in 1960?", "question_choices": [ "boycott", "guerilla theater", "teach-in", "sit-in" ], "question_id": "fs-idm83595152", "question_text": "The new protest tactic against segregation used by students in Greensboro, North Carolina, in 1960 was the ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "the Black Panthers" }, "bloom": null, "hl_context": "Unlike Stokely Carmichael and the Nation of Islam , most Black Power advocates did not believe African Americans needed to separate themselves from White society . 
<hl> The Black Panther Party , founded in 1966 in Oakland , California , by Bobby Seale and Huey Newton , believed African Americans were as much the victims of capitalism as of White racism . <hl> <hl> Accordingly , the group espoused Marxist teachings , and called for jobs , housing , and education , as well as protection from police brutality and exemption from military service in their Ten Point Program . <hl> The Black Panthers also patrolled the streets of African American neighborhoods to protect residents from police brutality , yet sometimes beat and murdered those who did not agree with their cause and tactics . Their militant attitude and advocacy of armed self-defense attracted many young men but also led to many encounters with the police , which sometimes included arrests and even shootouts , such as those that took place in Los Angeles , Chicago and Carbondale , Illinois .", "hl_sentences": "The Black Panther Party , founded in 1966 in Oakland , California , by Bobby Seale and Huey Newton , believed African Americans were as much the victims of capitalism as of White racism . Accordingly , the group espoused Marxist teachings , and called for jobs , housing , and education , as well as protection from police brutality and exemption from military service in their Ten Point Program .", "question": { "cloze_format": "The African American group that advocated the use of violence and espoused a Marxist ideology was called ________.", "normal_format": "What was the African American group that advocated the use of violence and espoused a Marxist ideology called?", "question_choices": [ "the Black Panthers", "the Nation of Islam", "SNCC", "CORE" ], "question_id": "fs-idp181059280", "question_text": "The African American group that advocated the use of violence and espoused a Marxist ideology was called ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "The equivalent of the Black Power movement among Mexican Americans was the Chicano Movement . Proudly adopting a derogatory term for Mexican Americans , Chicano activists demanded increased political power for Mexican Americans , education that recognized their cultural heritage , and the restoration of lands taken from them at the end of the Mexican-American War in 1848 . <hl> One of the founding members , Rodolfo “ Corky ” Gonzales , launched the Crusade for Justice in Denver in 1965 , to provide jobs , legal services , and healthcare for Mexican Americans . <hl> From this movement arose La Raza Unida , a political party that attracted many Mexican American college students . Elsewhere , Reies López Tijerina fought for years to reclaim lost and illegally expropriated ancestral lands in New Mexico ; he was one of the co-sponsors of the Poor People ’ s March on Washington in 1967 . 
29.4 Challenging the Status Quo Learning Objectives By the end of this section , you will be able to :", "hl_sentences": "One of the founding members , Rodolfo “ Corky ” Gonzales , launched the Crusade for Justice in Denver in 1965 , to provide jobs , legal services , and healthcare for Mexican Americans .", "question": { "cloze_format": "___ founded the Crusade for Justice in Denver, Colorado in 1965.", "normal_format": "Who founded the Crusade for Justice in Denver, Colorado in 1965?", "question_choices": [ "Reies Lopez Tijerina", "Dolores Huerta", "Larry Itliong", "Rodolfo Gonzales" ], "question_id": "fs-idp208035024", "question_text": "Who founded the Crusade for Justice in Denver, Colorado in 1965?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> One of the most prominent New Left groups was Students for a Democratic Society ( SDS ) . <hl> Organized in 1960 , SDS held its first meeting at the University of Michigan , Ann Arbor . Its philosophy was expressed in its manifesto , the Port Huron Statement , written by Tom Hayden and adopted in 1962 , affirming the group ’ s dedication to fighting economic inequality and discrimination . It called for greater participation in the democratic process by ordinary people , advocated civil disobedience , and rejected the anti-Communist position held by most other groups committed to social reform in the United States . Meanwhile , baby boomers , many raised in this environment of affluence , streamed into universities across the nation in unprecedented numbers looking to “ find ” themselves . Instead , they found traditional systems that forced them to take required courses , confined them to rigid programs of study , and surrounded them with rules limiting what they could do in their free time . These young people were only too willing to take up Kennedy ’ s call to action , and many did so by joining the civil rights movement . To them , it seemed only right for the children of the “ greatest generation ” to help those less privileged to fight battles for justice and equality . The more radical aligned themselves with the New Left , activists of the 1960s who rejected the staid liberalism of the Democratic Party . <hl> New Left organizations sought reform in areas such as civil rights and women ’ s rights , campaigned for free speech and more liberal policies toward drug use , and condemned the war in Vietnam . <hl>", "hl_sentences": "One of the most prominent New Left groups was Students for a Democratic Society ( SDS ) . New Left organizations sought reform in areas such as civil rights and women ’ s rights , campaigned for free speech and more liberal policies toward drug use , and condemned the war in Vietnam .", "question": { "cloze_format": "___ was one of the major student organizations engaged in organizing protests and demonstrations against the Vietnam War.", "normal_format": "What was one of the major student organizations engaged in organizing protests and demonstrations against the Vietnam War?", "question_choices": [ "Committee for American Democracy", "Freedom Now Party", "Students for a Democratic Society", "Young Americans for Peace" ], "question_id": "fs-idp148855264", "question_text": "What was one of the major student organizations engaged in organizing protests and demonstrations against the Vietnam War?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "to de-criminalize the use of birth control" }, "bloom": null, "hl_context": "<hl> In 1966 , the National Organization for Women ( NOW ) formed and proceeded to set an agenda for the feminist movement ( Figure 29.22 ) . <hl> <hl> Framed by a statement of purpose written by Friedan , the agenda began by proclaiming NOW ’ s goal to make possible women ’ s participation in all aspects of American life and to gain for them all the rights enjoyed by men . <hl> <hl> Among the specific goals was the passage of the Equal Rights Amendment ( yet to be adopted ) . <hl>", "hl_sentences": "In 1966 , the National Organization for Women ( NOW ) formed and proceeded to set an agenda for the feminist movement ( Figure 29.22 ) . Framed by a statement of purpose written by Friedan , the agenda began by proclaiming NOW ’ s goal to make possible women ’ s participation in all aspects of American life and to gain for them all the rights enjoyed by men . Among the specific goals was the passage of the Equal Rights Amendment ( yet to be adopted ) .", "question": { "cloze_format": "___ was not a founding goal of NOW.", "normal_format": "Which of the following was not a founding goal of NOW?", "question_choices": [ "to gain for women all the rights enjoyed by men", "to ensure passage of the Equal Rights Amendment", "to de-criminalize the use of birth control", "to allow women to participate in all aspects of American life" ], "question_id": "fs-idm91045056", "question_text": "Which of the following was not a founding goal of NOW?" }, "references_are_paraphrase": null } ]
29
29.1 The Kennedy Promise
Learning Objectives By the end of this section, you will be able to: Assess Kennedy's Cold War strategy Describe Kennedy's contribution to the civil rights movement
In the 1950s, President Dwight D. Eisenhower presided over a United States that prized conformity over change. Although change naturally occurred, as it does in every era, it was slow and greeted warily. By the 1960s, however, the pace of change had quickened and its scope broadened, as restive and energetic waves of World War II veterans and baby boomers of both sexes and all ethnicities began to make their influence felt politically, economically, and culturally. No one symbolized the hopes and energies of the new decade more than John Fitzgerald Kennedy, the nation's new, young, and seemingly healthy president. Kennedy had emphasized the country's aspirations and challenges as a "new frontier" when accepting his party's nomination at the Democratic National Convention in Los Angeles, California.
THE NEW FRONTIER
The son of Joseph P. Kennedy, a wealthy Boston business owner and former ambassador to Great Britain, John F. Kennedy graduated from Harvard University and went on to win election to the U.S. House of Representatives in 1946. Even though he was young and inexperienced, his reputation as a war hero who had saved the crew of his PT boat after it was destroyed by the Japanese helped him to win election over more seasoned candidates, as did his father's fortune. In 1952, he was elected to the U.S. Senate for the first of two terms. For many, including Arthur M. Schlesinger, Jr., a historian and member of Kennedy's administration, Kennedy represented a bright, shining future in which the United States would lead the way in solving the most daunting problems facing the world. Kennedy's popular reputation as a great politician undoubtedly owes much to the style and attitude he personified. He and his wife Jacqueline conveyed a sense of optimism and youthfulness. "Jackie" was an elegant first lady who wore designer dresses, served French food in the White House, and invited classical musicians to entertain at state functions. "Jack" Kennedy, or JFK, went sailing off the coast of his family's Cape Cod estate and socialized with celebrities (Figure 29.3). Few knew that behind Kennedy's healthy and sporty image was a gravely ill man whose wartime injuries caused him daily agony. Nowhere was Kennedy's style more evident than in the first televised presidential debate, held on September 26, 1960, between him and his Republican opponent, Vice President Richard M. Nixon. Seventy million viewers watched the debate on television; millions more heard it on the radio. Radio listeners judged Nixon the winner, whereas those who watched the debate on television believed the more telegenic Kennedy made the better showing. Kennedy did not appeal to all voters, however. Many feared that because he was Roman Catholic, his decisions would be influenced by the Pope. Even traditional Democratic supporters, like the head of the United Auto Workers, Walter Reuther, feared that a Catholic candidate would lose the support of Protestants. Many southern Democrats also disliked Kennedy because of his liberal position on civil rights. To shore up support for Kennedy in the South, Lyndon B. Johnson, the Protestant Texan who was Senate majority leader, was added to the Democratic ticket as the vice presidential candidate.
In the end, Kennedy won the election by the closest popular-vote margin since 1888, defeating Nixon by less than 0.2 percent of the record turnout of nearly sixty-nine million votes cast. His victory in the Electoral College was greater: 303 electoral votes to Nixon's 219. Kennedy's win made him both the youngest man elected to the presidency and the first U.S. president born in the twentieth century. Kennedy dedicated his inaugural address to the theme of a new future for the United States. "Ask not what your country can do for you; ask what you can do for your country," he challenged his fellow Americans. His lofty goals ranged from fighting poverty to winning the space race against the Soviet Union with a moon landing. He assembled an administration of energetic people assured of their ability to shape the future. Dean Rusk was named secretary of state. Robert McNamara, the former president of Ford Motor Company, became secretary of defense. Kennedy appointed his younger brother Robert as attorney general, much to the chagrin of many who viewed the appointment as a blatant example of nepotism. Kennedy's domestic reform plans remained hampered, however, by his narrow victory and lack of support from members of his own party, especially southern Democrats. As a result, he remained hesitant to propose new civil rights legislation. His achievements came primarily in poverty relief and care for the disabled. Unemployment benefits were expanded, the food stamp program was piloted, and the school lunch program was extended to more students. In October 1963, the passage of the Mental Retardation Facilities and Community Mental Health Centers Construction Act increased support for public mental health services.
KENNEDY THE COLD WARRIOR
Kennedy focused most of his energies on foreign policy, an arena in which he had been interested since his college years and in which, like all presidents, he was less constrained by the dictates of Congress. Kennedy, who had promised in his inaugural address to protect the interests of the "free world," engaged in Cold War politics on a variety of fronts. For example, in response to the lead that the Soviets had taken in the space race when Yuri Gagarin became the first human to successfully orbit the earth, Kennedy urged Congress to not only put a man into space (Figure 29.4) but also land an American on the moon, a goal finally accomplished in 1969. This investment advanced a variety of military technologies, especially the nation's long-range missile capability, resulting in numerous profitable spin-offs for the aviation and communication industries. It also funded a growing middle class of government workers, engineers, and defense contractors in states ranging from California to Texas to Florida—a region that would come to be known as the Sun Belt—becoming a symbol of American technological superiority. At the same time, however, the use of massive federal resources for space technologies did not change the economic outlook for low-income communities and underprivileged regions. To counter Soviet influence in the developing world, Kennedy supported a variety of measures. One of these was the Alliance for Progress, which collaborated with the governments of Latin American countries to promote economic growth and social stability in nations whose populations might find themselves drawn to communism.
Kennedy also established the Agency for International Development to oversee the distribution of foreign aid, and he founded the Peace Corps, which recruited idealistic young people to undertake humanitarian projects in Asia, Africa, and Latin America. He hoped that by augmenting the food supply and improving healthcare and education, the U.S. government could encourage developing nations to align themselves with the United States and reject Soviet or Chinese overtures. The first group of Peace Corps volunteers departed for the four corners of the globe in 1961, serving as an instrument of "soft power" in the Cold War. Kennedy's various aid projects, like the Peace Corps, fit closely with his administration's flexible response, which Robert McNamara advocated as a better alternative to the all-or-nothing strategy of massive retaliation favored during Eisenhower's presidency. The plan was to develop different strategies, tactics, and even military capabilities to respond more appropriately to small or medium-sized insurgencies, and political or diplomatic crises. One component of flexible response was the Green Berets, a U.S. Army Special Forces unit trained in counterinsurgency—the military suppression of rebel and nationalist groups in foreign nations. Much of the Kennedy administration's new approach to defense, however, remained focused on the ability and willingness of the United States to wage both conventional and nuclear warfare, and Kennedy continued to call for increases in the American nuclear arsenal.
Cuba
Kennedy's multifaceted approach to national defense is exemplified by his careful handling of the Communist government of Fidel Castro in Cuba. In January 1959, following the overthrow of the corrupt and dictatorial regime of Fulgencio Batista, Castro assumed leadership of the new Cuban government. The progressive reforms he began indicated that he favored Communism, and his pro-Soviet foreign policy frightened the Eisenhower administration, which asked the Central Intelligence Agency (CIA) to find a way to remove him from power. Rather than have the U.S. military invade the small island nation, less than one hundred miles from Florida, and risk the world's criticism, the CIA instead trained a small force of Cuban exiles for the job. After landing at the Bay of Pigs on the Cuban coast, these insurgents, the CIA believed, would inspire their countrymen to rise up and topple Castro's regime. The United States also promised air support for the invasion. Kennedy agreed to support the previous administration's plans, and on April 17, 1961, approximately fourteen hundred Cuban exiles stormed ashore at the designated spot. However, Kennedy feared domestic criticism and worried about Soviet retaliation elsewhere in the world, such as Berlin. He cancelled the anticipated air support, which enabled the Cuban army to easily defeat the insurgents. The hoped-for uprising of the Cuban people also failed to occur. The surviving members of the exile army were taken into custody. The Bay of Pigs invasion was a major foreign policy disaster for President Kennedy. The event highlighted how difficult it would be for the United States to act against the Castro administration. The following year, the Soviet Union sent troops and technicians to Cuba to strengthen its new ally against further U.S. military plots. Then, on October 14, 1962, U.S. spy planes took aerial photographs that confirmed the presence of medium-range ballistic missile sites in Cuba.
The United States was now within easy reach of Soviet nuclear warheads (Figure 29.5). On October 22, Kennedy demanded that Soviet premier Nikita Khrushchev remove the missiles. He also ordered a naval quarantine placed around Cuba to prevent Soviet ships from approaching. Despite his use of the word "quarantine" instead of "blockade," for a blockade was considered an act of war, a potential war with the Soviet Union was nevertheless on the president's mind. As U.S. ships headed for Cuba, the army was told to prepare for war, and Kennedy appeared on national television to declare his intention to defend the Western Hemisphere from Soviet aggression. The world held its breath awaiting the Soviet reply. Realizing how serious the United States was, Khrushchev sought a peaceful solution to the crisis, overruling those in his government who urged a harder stance. Behind the scenes, Robert Kennedy and Soviet ambassador Anatoly Dobrynin worked toward a compromise that would allow both superpowers to back down without either side's seeming intimidated by the other. On October 26, Khrushchev offered to remove the Soviet missiles in exchange for Kennedy's promise not to invade Cuba. On October 28, the agreement was made public, and the crisis ended. Not made public, but nevertheless part of the agreement, was Kennedy's promise to remove U.S. warheads from Turkey, as close to Soviet targets as the Cuban missiles had been to American ones. The showdown between the United States and the Soviet Union over Cuba's missiles had put the world on the brink of a nuclear war. Both sides already had long-range bombers with nuclear weapons airborne or ready for launch, and seemed only hours away from a first strike. In the long run, this nearly catastrophic example of nuclear brinksmanship ended up making the world safer. A telephone "hot line" was installed, linking Washington and Moscow to avert future crises, and in 1963, Kennedy and Khrushchev signed the Limited Test Ban Treaty, prohibiting tests of nuclear weapons in the atmosphere, under water, and in outer space.
Vietnam
Cuba was not the only arena in which the United States sought to contain the advance of Communism. In Indochina, nationalist independence movements, most notably Vietnam's Viet Minh under the leadership of Ho Chi Minh, had strong Communist sympathies. President Harry S. Truman had no love for France's colonial regime in Southeast Asia but did not want to risk the loyalty of its Western European ally against the Soviet Union. In 1950, the Truman administration sent a small military advisory group to Vietnam and provided financial aid to help France defeat the Viet Minh. In 1954, Vietnamese forces finally defeated the French at Dien Bien Phu, and the country was temporarily divided at the seventeenth parallel. Ho Chi Minh and the Viet Minh controlled the North. In the South, the last Vietnamese emperor and ally to France, Bao Dai, named the French-educated, anti-Communist Ngo Dinh Diem as his prime minister. But Diem refused to abide by the Geneva Accords, the treaty ending the conflict that called for countrywide national elections in 1956, with the victor to rule a reunified nation. After a fraudulent referendum in the South in 1955, he ousted Bao Dai and proclaimed himself president of the Republic of Vietnam. He cancelled the 1956 elections in the South and began to round up Communists and supporters of Ho Chi Minh.
Realizing that Diem would never agree to the reunification of the country under Ho Chi Minh's leadership, the North Vietnamese began efforts to overthrow the government of the South by encouraging insurgents to attack South Vietnamese officials. In 1960, North Vietnam also created the National Liberation Front (NLF) to resist Diem and carry out an insurgency in the South. The United States, fearing the spread of Communism under Ho Chi Minh, supported Diem, assuming he would create a democratic, pro-Western government in South Vietnam. However, Diem's oppressive and corrupt government made him a very unpopular ruler, particularly with farmers, students, and Buddhists, and many in the South actively assisted the NLF and North Vietnam in trying to overthrow his government. When Kennedy took office, Diem's government was faltering. Continuing the policies of the Eisenhower administration, Kennedy supplied Diem with money and military advisors to prop up his government (Figure 29.6). By November 1963, there were sixteen thousand U.S. troops in Vietnam, training members of that country's special forces and flying air missions that dumped defoliant chemicals on the countryside to expose North Vietnamese and NLF forces and supply routes. A few weeks before Kennedy's own death, Diem and his brother Nhu were assassinated by South Vietnamese military officers after U.S. officials had indicated their support for a new regime.
TENTATIVE STEPS TOWARD CIVIL RIGHTS
Cold War concerns, which guided U.S. policy in Cuba and Vietnam, also motivated the Kennedy administration's steps toward racial equality. Realizing that legal segregation and widespread discrimination hurt the country's chances of gaining allies in Africa, Asia, and Latin America, the federal government increased efforts to secure the civil rights of African Americans in the 1960s. During his presidential campaign, Kennedy had intimated his support for civil rights, and his efforts to secure the release of civil rights leader Martin Luther King, Jr., who was arrested following a demonstration, won him the African American vote. Lacking widespread backing in Congress, however, and anxious not to offend White southerners, Kennedy was cautious in assisting African Americans in their fight for full citizenship rights. His strongest focus was on securing the voting rights of African Americans. Kennedy feared the loss of support from southern White Democrats and the impact a struggle over civil rights could have on his foreign policy agenda as well as on his reelection in 1964. But he thought voter registration drives far preferable to the boycotts, sit-ins, and integration marches that had generated such intense global media coverage in previous years. Encouraged by Congress's passage of the Civil Rights Act of 1960, which permitted federal courts to appoint referees to guarantee that qualified persons would be registered to vote, Kennedy focused on the passage of a constitutional amendment outlawing poll taxes, a tactic that southern states used to disenfranchise African American voters. Originally proposed by President Truman's Committee on Civil Rights, the idea had been largely forgotten during Eisenhower's time in office. Kennedy, however, revived it and convinced Spessard Holland, a conservative Florida senator, to introduce the proposed amendment in Congress. It passed both houses of Congress and was sent to the states for ratification in September 1962.
Kennedy also reacted to the demands of the civil rights movement for equality in education. For example, when African American student James Meredith, encouraged by Kennedy's speeches, attempted to enroll at the segregated University of Mississippi in 1962, riots broke out on campus (Figure 29.7). The president responded by sending the U.S. Army and National Guard to Oxford, Mississippi, to support the U.S. Marshals that his brother Robert, the attorney general, had dispatched. Following similar violence at the University of Alabama when two African American students, Vivian Malone and James Hood, attempted to enroll in 1963, Kennedy responded with a bill that would give the federal government greater power to enforce school desegregation, prohibit segregation in public accommodations, and outlaw discrimination in employment. Kennedy would not live to see his bill enacted; it would become law during Lyndon Johnson's administration as the 1964 Civil Rights Act.
TRAGEDY IN DALLAS
Although his stance on civil rights had won him support in the African American community and his steely performance during the Cuban Missile Crisis had led his overall popularity to surge, Kennedy understood that he had to solidify his base in the South to secure his reelection. On November 21, 1963, he accompanied Lyndon Johnson to Texas to rally his supporters. The next day, shots rang out as Kennedy's motorcade made its way through the streets of Dallas. Mortally wounded, Kennedy was rushed to Parkland Hospital and pronounced dead. The gunfire that killed Kennedy appeared to come from the upper stories of the Texas School Book Depository building; later that day, Lee Harvey Oswald, an employee at the depository and a former Marine, was arrested (Figure 29.8). Two days later, while being transferred from Dallas police headquarters to the county jail, Oswald was shot and killed by Jack Ruby, a local nightclub owner who claimed he acted to avenge the president. Almost immediately, rumors began to circulate regarding the Kennedy assassination, and conspiracy theorists, pointing to the unlikely coincidence of Oswald's murder a few days after Kennedy's, began to propose alternate theories about the events. To quiet the rumors and allay fears that the government was hiding evidence, Lyndon Johnson, Kennedy's successor, appointed a fact-finding commission headed by Earl Warren, chief justice of the U.S. Supreme Court, to examine all the evidence and render a verdict. The Warren Commission concluded that Lee Harvey Oswald had acted alone and there had been no conspiracy. The commission's ruling failed to satisfy many, and multiple theories have sprung up over time. No credible evidence has ever been uncovered, however, to prove either that someone other than Oswald murdered Kennedy or that Oswald acted with co-conspirators.
29.2 Lyndon Johnson and the Great Society
Learning Objectives By the end of this section, you will be able to: Describe the major accomplishments of Lyndon Johnson's Great Society Identify the legal advances made in the area of civil rights Explain how Lyndon Johnson deepened the American commitment in Vietnam
On November 27, 1963, a few days after taking the oath of office, President Johnson addressed a joint session of Congress and vowed to accomplish the goals that John F. Kennedy had set and to expand the role of the federal government in securing economic opportunity and civil rights for all.
Johnson brought to his presidency a vision of a Great Society in which everyone could share in the opportunities for a better life that the United States offered, and in which the words "liberty and justice for all" would have real meaning.
THE GREAT SOCIETY
In May 1964, in a speech at the University of Michigan, Lyndon Johnson described in detail his vision of the Great Society he planned to create (Figure 29.9). When the Eighty-Ninth Congress convened the following January, he and his supporters began their effort to turn the promise into reality. By combatting racial discrimination and attempting to eliminate poverty, the reforms of the Johnson administration changed the nation. One of the chief pieces of legislation that Congress passed in 1965 was the Elementary and Secondary Education Act (Figure 29.9). Johnson, a former teacher, believed that a lack of education was a primary cause of poverty and other social problems. Educational reform was thus an important pillar of the society he hoped to build. This act provided increased federal funding to both elementary and secondary schools, allocating more than $1 billion for the purchase of books and library materials, and the creation of educational programs for disadvantaged children. The Higher Education Act, signed into law the same year, provided scholarships and low-interest loans for the poor, increased federal funding for colleges and universities, and created a corps of teachers to serve schools in impoverished areas. Education was not the only area toward which Johnson directed his attention. Consumer protection laws were also passed that improved the safety of meat and poultry, placed warning labels on cigarette packages, required "truth in lending" by creditors, and set safety standards for motor vehicles. Funds were provided to improve public transportation and to develop high-speed mass transit. To protect the environment, the Johnson administration created laws protecting air and water quality, regulating the disposal of solid waste, preserving wilderness areas, and protecting endangered species. All of these laws fit within Johnson's plan to make the United States a better place to live. Perhaps influenced by Kennedy's commitment to the arts, Johnson also signed legislation creating the National Endowment for the Arts and the National Endowment for the Humanities, which provided funding for artists and scholars. The Public Broadcasting Act of 1967 authorized the creation of the private, not-for-profit Corporation for Public Broadcasting, which helped launch the Public Broadcasting Service (PBS) and National Public Radio (NPR) in 1970. In 1965, the Johnson administration also encouraged Congress to pass the Immigration and Nationality Act, which essentially overturned legislation from the 1920s that had favored immigrants from western and northern Europe over those from eastern and southern Europe. The law lifted severe restrictions on immigration from Asia and gave preference to immigrants with family ties in the United States and immigrants with desirable skills. Although the measure seemed less significant than many of the other legislative victories of the Johnson administration at the time, it opened the door for a new era in immigration and made possible the formation of Asian and Latin American immigrant communities in the following decades. While these laws touched on important aspects of the Great Society, the centerpiece of Johnson's plan was the eradication of poverty in the United States.
The war on poverty, as he termed it, was fought on many fronts. The 1965 Housing and Urban Development Act offered grants to improve city housing and subsidized rents for the poor. The Model Cities program likewise provided money for urban development projects and the building of public housing. The Economic Opportunity Act (EOA) of 1964 established and funded a variety of programs to assist the poor in finding jobs. The Office of Economic Opportunity (OEO), first administered by President Kennedy's brother-in-law Sargent Shriver, coordinated programs such as the Job Corps and the Neighborhood Youth Corps, which provided job training programs and work experience for the disadvantaged. Volunteers in Service to America (VISTA) recruited people to offer educational programs and other community services in poor areas, just as the Peace Corps did abroad. The Community Action Program, also under the OEO, funded local Community Action Agencies, organizations created and managed by residents of disadvantaged communities to improve their own lives and those of their neighbors. The Head Start program, intended to prepare low-income children for elementary school, was also under the OEO until it was transferred to the Department of Health, Education, and Welfare in 1969. The EOA fought rural poverty by providing low-interest loans to those wishing to improve their farms or start businesses (Figure 29.10). EOA funds were also used to provide housing and education for migrant farm workers. Other legislation created jobs in Appalachia, one of the poorest regions in the United States, and brought programs to Indian reservations. One of the EOA's successes was the Rough Rock Demonstration School on the Navajo Reservation that, while respecting Navajo traditions and culture, also trained people for careers and jobs outside the reservation. The Johnson administration, realizing the nation's elderly were among its poorest and most disadvantaged citizens, passed the Social Security Act of 1965. The most profound change made by this act was the creation of Medicare, a program to pay the medical expenses of those over sixty-five. Although opposed by the American Medical Association, which feared the creation of a national healthcare system, the new program was supported by most citizens because it would benefit all social classes, not just the poor. The act and subsequent amendments to it also provided coverage for self-employed people in certain occupations and expanded the number of disabled people who qualified for benefits. The following year, the Medicaid program allotted federal funds to pay for medical care for the poor.
JOHNSON'S COMMITMENT TO CIVIL RIGHTS
The eradication of poverty was matched in importance by the Great Society's advancement of civil rights. Indeed, the condition of the poor could not be alleviated if racial discrimination limited their access to jobs, education, and housing. Realizing this, Johnson drove the long-awaited civil rights act, proposed by Kennedy in June 1963 in the wake of riots at the University of Alabama, through Congress. Under Kennedy's leadership, the bill had passed the House of Representatives but was stalled in the Senate by a filibuster. Johnson, a master politician, marshaled his considerable personal influence and memories of his fallen predecessor to break the filibuster.
The Civil Rights Act of 1964, the most far-reaching civil rights act yet passed by Congress, banned discrimination in public accommodations, sought to aid schools in efforts to desegregate, and prohibited federal funding of programs that permitted racial segregation. Further, it barred discrimination in employment on the basis of race, color, national origin, religion, or gender, and established the Equal Employment Opportunity Commission. Protecting African Americans' right to vote was as important as ending racial inequality in the United States. In January 1964, the Twenty-Fourth Amendment, prohibiting the imposition of poll taxes on voters, was finally ratified. Poverty would no longer serve as an obstacle to voting. Other impediments remained, however. Attempts to register southern African American voters encountered White resistance, and protests against this interference often met with violence. On March 7, 1965, a planned protest march from Selma, Alabama, to the state capitol in Montgomery turned into "Bloody Sunday" when marchers crossing the Edmund Pettus Bridge encountered a cordon of state police wielding batons and tear gas (Figure 29.11). Images of White brutality appeared on television screens throughout the nation and in newspapers around the world. Deeply disturbed by the violence in Alabama and the refusal of Governor George Wallace to address it, Johnson introduced a bill in Congress that would remove obstacles for African American voters and lend federal support to their cause. His proposal, the Voting Rights Act of 1965, prohibited states and local governments from passing laws that discriminated against voters on the basis of race (Figure 29.12). Literacy tests and other barriers to voting that had kept ethnic minorities from the polls were thus outlawed. Following the passage of the act, a quarter of a million African Americans registered to vote, and by 1967, a majority of African Americans in the South had done so. Johnson's final piece of civil rights legislation was the Civil Rights Act of 1968, which prohibited discrimination in housing on the basis of race, color, national origin, or religion.
INCREASED COMMITMENT IN VIETNAM
Building the Great Society had been Lyndon Johnson's biggest priority, and he effectively used his decades of experience in building legislative majorities in a style that ranged from diplomacy to quid pro quo deals to bullying. In the summer of 1964, he deployed these political skills to secure congressional approval for a new strategy in Vietnam—with fateful consequences. President Johnson had never been the cold warrior Kennedy was, but believed that the credibility of the nation and his office depended on maintaining a foreign policy of containment. When, on August 2, the U.S. destroyer USS Maddox conducted an arguably provocative intelligence-gathering mission in the Gulf of Tonkin, it reported an attack by North Vietnamese torpedo boats. Two days later, the Maddox was supposedly struck again, and a second ship, the USS Turner Joy, reported that it also had been fired upon. The North Vietnamese denied the second attack, and Johnson himself doubted the reliability of the crews' report. The National Security Agency has since revealed that the August 4 attacks did not occur. Relying on information available at the time, however, Secretary of Defense Robert McNamara reported to Congress that U.S. ships had been fired upon in international waters while conducting routine operations.
On August 7, with only two dissenting votes, Congress passed the Gulf of Tonkin Resolution, and on August 10, the president signed the resolution into law. The resolution gave President Johnson the authority to use military force in Vietnam without asking Congress for a declaration of war. It dramatically increased the power of the U.S. president and transformed the American role in Vietnam from advisor to combatant. In 1965, large-scale U.S. bombing of North Vietnam began. The intent of the campaign, which lasted three years under various names, was to force the North to end its support for the insurgency in the South. More than 200,000 U.S. military personnel, including combat troops, were sent to South Vietnam. At first, most of the American public supported the president’s actions in Vietnam. Support began to ebb, however, as more troops were deployed. Frustrated by losses suffered by the South’s Army of the Republic of Vietnam (ARVN), General William Westmoreland called for the United States to take more responsibility for fighting the war. By April 1966, more Americans were being killed in battle than ARVN troops. Johnson, however, maintained that the war could be won if the United States stayed the course, and in November 1967, Westmoreland proclaimed the end was in sight. Westmoreland’s predictions were called into question, however, when in January 1968, the North Vietnamese launched their most aggressive assault on the South, deploying close to eighty-five thousand troops. During the Tet Offensive, as these attacks were known, nearly one hundred cities in the South were attacked, including the capital, Saigon ( Figure 29.13 ). In heavy fighting, U.S. and South Vietnamese forces recaptured all the points taken by the enemy. Although North Vietnamese forces suffered far more casualties than the roughly forty-one hundred U.S. soldiers killed, public opinion in the United States, fueled by graphic images provided in unprecedented media coverage, turned against the war. Disastrous surprise attacks like the Tet Offensive persuaded many that the war would not be over soon and raised doubts about whether Johnson’s administration was telling the truth about the real state of affairs. In May 1968, with over 400,000 U.S. soldiers in Vietnam, Johnson began peace talks with the North. It was too late to save Johnson himself, however. Many of the most outspoken critics of the war were Democratic politicians whose opposition began to erode unity within the party. Minnesota senator Eugene McCarthy, who had called for an end to the war and the withdrawal of troops from Vietnam, received nearly as many votes in the New Hampshire presidential primary as Johnson did, even though he had been expected to fare very poorly. McCarthy’s success in New Hampshire encouraged Robert Kennedy to announce his candidacy as well. Johnson, suffering health problems and realizing his actions in Vietnam had hurt his public standing, announced that he would not seek reelection and withdrew from the 1968 presidential race.

THE END OF THE GREAT SOCIETY

Perhaps the greatest casualty of the nation’s war in Vietnam was the Great Society. As the war escalated, the money spent to fund it also increased, leaving less to pay for the many social programs Johnson had created to lift Americans out of poverty. Johnson knew he could not achieve his Great Society while spending money to wage the war.
He was unwilling to withdraw from Vietnam, however, for fear that the world would perceive this action as evidence of American failure and doubt the ability of the United States to carry out its responsibilities as a superpower. Vietnam doomed the Great Society in other ways as well. Dreams of racial harmony suffered, as many African Americans, angered by the failure of Johnson’s programs to alleviate severe poverty in the inner cities, rioted in frustration. Their anger was heightened by the fact that a disproportionate number of African Americans were fighting and dying in Vietnam. Nearly two-thirds of eligible African Americans were drafted, whereas draft deferments for college, exemptions for skilled workers in the military-industrial complex, and officer training programs allowed White middle-class youth to either avoid the draft or volunteer for a military branch of their choice. As a result, less than one-third of White men were drafted. Although the Great Society failed to eliminate suffering or increase civil rights to the extent that Johnson wished, it made a significant difference in people’s lives. By the end of Johnson’s administration, the percentage of people living below the poverty line had been cut nearly in half. While more people of color than Whites continued to live in poverty, the percentage of poor African Americans had decreased dramatically. The creation of Medicare and Medicaid as well as the expansion of Social Security benefits and welfare payments improved the lives of many, while increased federal funding for education enabled more people to attend college than ever before. Conservative critics argued that, by expanding the responsibilities of the federal government to care for the poor, Johnson had hurt both taxpayers and the poor themselves. Aid to the poor, many maintained, would not only fail to solve the problem of poverty but would also encourage people to become dependent on government “handouts” and lose their desire and ability to care for themselves—an argument that many found intuitively compelling but which lacked conclusive evidence. These same critics also accused Johnson of saddling the United States with a large debt as a result of the deficit spending (funded by borrowing) in which he had engaged.

29.3 The Civil Rights Movement Marches On

Learning Objectives
By the end of this section, you will be able to:
Explain the strategies of the African American civil rights movement in the 1960s
Discuss the rise and philosophy of Black Power
Identify achievements of the Mexican American civil rights movement in the 1960s

During the 1960s, the federal government, encouraged by both genuine concern for the dispossessed and the realities of the Cold War, had increased its efforts to protect civil rights and ensure equal economic and educational opportunities for all. However, most of the credit for progress toward racial equality in the United States lies with grassroots activists. Indeed, it was campaigns and demonstrations by ordinary people that spurred the federal government to action. Although the African American civil rights movement was the most prominent of the crusades for racial justice, other ethnic minorities also worked to seize their piece of the American dream during the promising years of the 1960s. Many were influenced by the African American cause and often used similar tactics.

CHANGE FROM THE BOTTOM UP

For many people inspired by the victories of Brown v.
Board of Education and the Montgomery Bus Boycott, the glacial pace of progress in the segregated South was frustrating if not intolerable. In some places, such as Greensboro, North Carolina, local NAACP chapters had been influenced by Whites who provided financing for the organization. This aid, together with the belief that more forceful efforts at reform would only increase White resistance, had persuaded some African American organizations to pursue a “politics of moderation” instead of attempting to radically alter the status quo. Martin Luther King Jr.’s inspirational appeal for peaceful change in the city of Greensboro in 1958, however, planted the seed for a more assertive civil rights movement. On February 1, 1960, four sophomores at the North Carolina Agricultural & Technical College in Greensboro—Ezell Blair, Jr., Joseph McNeil, David Richmond, and Franklin McCain—entered the local Woolworth’s and sat at the lunch counter. The lunch counter was segregated, and they were refused service as they knew they would be. They had specifically chosen Woolworth’s because it was a national chain and was thus believed to be especially vulnerable to negative publicity. Over the next few days, more protesters joined the four sophomores. Hostile Whites responded with threats and taunted the students by pouring sugar and ketchup on their heads. The successful six-month-long Greensboro sit-in initiated the student phase of the African American civil rights movement and, within two months, the sit-in movement had spread to fifty-four cities in nine states ( Figure 29.14 ). In the words of grassroots civil rights activist Ella Baker, the students at Woolworth’s wanted more than a hamburger; the movement they helped launch was about empowerment. Baker pushed for a “participatory democracy” that built on the grassroots campaigns of active citizens instead of deferring to the leadership of educated elites and experts. As a result of her actions, in April 1960, the Student Nonviolent Coordinating Committee (SNCC) formed to carry the battle forward. Within a year, more than one hundred cities had desegregated at least some public accommodations in response to student-led demonstrations. The sit-ins inspired other forms of nonviolent protest intended to desegregate public spaces. “Sleep-ins” occupied motel lobbies, “read-ins” filled public libraries, and churches became the sites of “pray-ins.” Students also took part in the 1961 “freedom rides” sponsored by the Congress of Racial Equality (CORE) and SNCC. The intent of the African American and White volunteers who undertook these bus rides south was to test enforcement of a U.S. Supreme Court decision prohibiting segregation on interstate transportation and to protest segregated waiting rooms in southern terminals. Departing Washington, DC, on May 4, the volunteers headed south on buses that challenged the seating order of Jim Crow segregation. Whites would ride in the back and African Americans would sit in the front; on other occasions, riders of different races would share the same bench seat. The freedom riders encountered little difficulty until they reached Rock Hill, South Carolina, where a mob severely beat John Lewis, a freedom rider who later became chairman of SNCC ( Figure 29.15 ). The danger increased as the riders continued through Georgia into Alabama, where one of the two buses was firebombed outside the town of Anniston.
The second group continued to Birmingham, where the riders were attacked by the Ku Klux Klan as they attempted to disembark at the city bus station. The remaining volunteers continued to Mississippi, where they were arrested when they attempted to desegregate the waiting rooms in the Jackson bus terminal.

FREE BY ’63 (OR ’64 OR ’65)

The grassroots efforts of people like the Freedom Riders to change discriminatory laws and longstanding racist traditions grew more widely known in the mid-1960s. The approaching centennial of Abraham Lincoln’s Emancipation Proclamation spawned the slogan “Free by ’63” among civil rights activists. As African Americans increased their calls for full rights for all Americans, many civil rights groups changed their tactics to reflect this new urgency. Perhaps the most famous of the civil rights-era demonstrations was the March on Washington for Jobs and Freedom, held in August 1963, on the one hundredth anniversary of Abraham Lincoln’s Emancipation Proclamation. Its purpose was to pressure President Kennedy to act on his promises regarding civil rights. The date was the eighth anniversary of the brutal racist murder of fourteen-year-old Emmett Till in Money, Mississippi. As the crowd gathered outside the Lincoln Memorial and spilled across the National Mall ( Figure 29.16 ), Martin Luther King, Jr. delivered his most famous speech. In “I Have a Dream,” King called for an end to racial injustice in the United States and envisioned a harmonious, integrated society. The speech marked the high point of the civil rights movement and established the legitimacy of its goals. However, it did not prevent White terrorism in the South, nor did it permanently sustain the tactics of nonviolent civil disobedience. Other gatherings of civil rights activists ended tragically, and some demonstrations were intended to provoke a hostile response from Whites and thus reveal the inhumanity of the Jim Crow laws and their supporters. In 1963, the Southern Christian Leadership Conference (SCLC), led by Martin Luther King, Jr., mounted protests in some 186 cities throughout the South. The campaign in Birmingham that began in April and extended into the fall of 1963 attracted the most notice, however, when a peaceful protest was met with violence by police, who attacked demonstrators, including children, with fire hoses and dogs. The world looked on in horror as innocent people were assaulted and thousands arrested. King himself was jailed on Good Friday 1963, and, in response to the pleas of White clergymen for peace and patience, he penned one of the most significant documents of the struggle—“Letter from a Birmingham Jail.” In the letter, King argued that African Americans had waited patiently for more than three hundred years to be given the rights that all human beings deserved; the time for waiting was over.

Defining American
Letter from a Birmingham Jail

By 1963, Martin Luther King, Jr. had become one of the most prominent leaders of the civil rights movement, and he continued to espouse nonviolent civil disobedience as a way of registering African American resistance against unfair, discriminatory, and racist laws and behaviors. While the campaign in Birmingham began with an African American boycott of White businesses to end discrimination in employment practices and public segregation, it became a fight over free speech when King was arrested for violating a local injunction against demonstrations.
King wrote his “Letter from a Birmingham Jail” in response to an op-ed by eight White Alabama clergymen who complained about the SCLC’s fiery tactics and argued that social change needed to be pursued gradually. The letter criticizes those who did not support the cause of civil rights: In spite of my shattered dreams of the past, I came to Birmingham with the hope that the White religious leadership in the community would see the justice of our cause and, with deep moral concern, serve as the channel through which our just grievances could get to the power structure. I had hoped that each of you would understand. But again I have been disappointed. I have heard numerous religious leaders of the South call upon their worshippers to comply with a desegregation decision because it is the law, but I have longed to hear White ministers say follow this decree because integration is morally right and the Negro is your brother. In the midst of blatant injustices inflicted upon the Negro, I have watched White churches stand on the sideline and merely mouth pious irrelevancies and sanctimonious trivialities. In the midst of a mighty struggle to rid our nation of racial and economic injustice, I have heard so many ministers say, “Those are social issues with which the Gospel has no real concern,” and I have watched so many churches commit themselves to a completely other-worldly religion which made a strange distinction between body and soul, the sacred and the secular. Since its publication, the “Letter” has become one of the most cogent, impassioned, and succinct statements of the aspirations of the civil rights movement and the frustration over the glacial pace of progress in achieving justice and equality for all Americans. What civil rights tactics raised the objections of the White clergymen King addressed in his letter? Why? Some of the greatest violence during this era was aimed at those who attempted to register African Americans to vote. In 1964, SNCC, working with other civil rights groups, initiated its Mississippi Summer Project, also known as Freedom Summer. The purpose was to register African American voters in one of the most racist states in the nation. Volunteers also built “freedom schools” and community centers. SNCC invited hundreds of White middle-class students, mostly from the North, to help in the task. Many volunteers were harassed, beaten, and arrested, and African American homes and churches were burned. Three civil rights workers, James Chaney, Michael Schwerner, and Andrew Goodman, were killed by the Ku Klux Klan. That summer, civil rights activists Fannie Lou Hamer, Ella Baker, and Robert Parris Moses formally organized the Mississippi Freedom Democratic Party (MFDP) as an alternative to the all-White Mississippi Democratic Party. The Democratic National Convention’s organizers, however, would allow only two MFDP delegates to be seated, and they were confined to the roles of nonvoting observers. The vision of Whites and African Americans working together peacefully to end racial injustice suffered a severe blow with the death of Martin Luther King, Jr. in Memphis, Tennessee, in April 1968. King had gone there to support sanitation workers trying to unionize. In the city, he found a divided civil rights movement; older activists who supported his policy of nonviolence were being challenged by younger African Americans who advocated a more militant approach. On April 4, King was shot and killed while standing on the balcony of his motel. 
Within hours, the nation’s cities exploded with violence as angry African Americans, shocked by his murder, burned and looted inner-city neighborhoods across the country ( Figure 29.17 ). While Whites recoiled from news about the riots in fear and dismay, they also criticized African Americans for destroying their own neighborhoods; they did not realize that most of the violence was directed against businesses that were not owned by Black people and that treated African American customers with suspicion and hostility.

BLACK FRUSTRATION, BLACK POWER

The episodes of violence that accompanied Martin Luther King Jr.’s murder were but the latest in a string of urban riots that had shaken the United States since the mid-1960s. Between 1964 and 1968, there were 329 riots in 257 cities across the nation. In 1964, riots broke out in Harlem and other African American neighborhoods. In 1965, a traffic stop set in motion a chain of events that culminated in riots in Watts, an African American neighborhood in Los Angeles. Thousands of businesses were destroyed, and, by the time the violence ended, thirty-four people were dead, most of them African Americans killed by the Los Angeles police and the National Guard. More riots took place in 1966 and 1967. Frustration and anger lay at the heart of these disruptions. Despite the programs of the Great Society, good healthcare, job opportunities, and safe housing were abysmally lacking in urban African American neighborhoods in cities throughout the country, including in the North and West, where discrimination was less overt but just as crippling. In the eyes of many rioters, the federal government either could not or would not end their suffering, and most existing civil rights groups and their leaders had been unable to achieve significant results toward racial justice and equality. Disillusioned, many African Americans turned to those with more radical ideas about how best to obtain equality and justice. Within the chorus of voices calling for integration and legal equality were many that more stridently demanded empowerment and thus supported Black Power. Black Power meant a variety of things. One of the most famous users of the term was Stokely Carmichael, the chairman of SNCC, who later changed his name to Kwame Ture. For Carmichael, Black Power was the power of African Americans to unite as a political force and create their own institutions apart from White-dominated ones, an idea also espoused in the 1920s by political leader and orator Marcus Garvey. Like Garvey, Carmichael became an advocate of Black separatism, arguing that African Americans should live apart from Whites and solve their problems for themselves. In keeping with this philosophy, Carmichael expelled SNCC’s White members. He left SNCC in 1967 and later joined the Black Panthers (see below). Long before Carmichael began to call for separatism, the Nation of Islam, founded in 1930, had advocated the same thing. In the 1960s, its most famous member was Malcolm X, born Malcolm Little ( Figure 29.18 ). The Nation of Islam advocated the separation of White Americans and African Americans because of a belief that African Americans could not thrive in an atmosphere of White racism. Indeed, in a 1963 interview, Malcolm X, discussing the teachings of the head of the Nation of Islam in America, Elijah Muhammad, referred to White people as “devils” more than a dozen times.
Rejecting the nonviolent strategy of other civil rights activists, he maintained that violence in the face of violence was appropriate. In 1964, after a trip to Africa, Malcolm X left the Nation of Islam to found the Organization of Afro-American Unity with the goal of achieving freedom, justice, and equality “by any means necessary.” His views regarding Black-White relations changed somewhat thereafter, but he remained fiercely committed to the cause of African American empowerment. On February 21, 1965, he was killed by members of the Nation of Islam. Stokely Carmichael later recalled that Malcolm X had provided an intellectual basis for Black Nationalism and given legitimacy to the use of violence in achieving the goals of Black Power.

Defining American
The New Negro

In a roundtable conversation in October 1961, Malcolm X suggested that a “New Negro” was coming to the fore. The term and concept of a “New Negro” arose during the Harlem Renaissance of the 1920s and was revived during the civil rights movements of the 1960s. “I think there is a new so-called Negro. We don’t recognize the term ‘Negro’ but I really believe that there’s a new so-called Negro here in America. He not only is impatient. Not only is he dissatisfied, not only is he disillusioned, but he’s getting very angry. And whereas the so-called Negro in the past was willing to sit around and wait for someone else to change his condition or correct his condition, there’s a growing tendency on the part of a vast number of so-called Negroes today to take action themselves, not to sit and wait for someone else to correct the situation. This, in my opinion, is primarily what has produced this new Negro. He is not willing to wait. He thinks that what he wants is right, what he wants is just, and since these things are just and right, it’s wrong to sit around and wait for someone else to correct a nasty condition when they get ready.” In what ways were Martin Luther King, Jr. and the members of SNCC “New Negroes”?

Unlike Stokely Carmichael and the Nation of Islam, most Black Power advocates did not believe African Americans needed to separate themselves from White society. The Black Panther Party, founded in 1966 in Oakland, California, by Bobby Seale and Huey Newton, believed African Americans were as much the victims of capitalism as of White racism. Accordingly, the group espoused Marxist teachings and called for jobs, housing, and education, as well as protection from police brutality and exemption from military service, in their Ten Point Program. The Black Panthers also patrolled the streets of African American neighborhoods to protect residents from police brutality, yet sometimes beat and murdered those who did not agree with their cause and tactics. Their militant attitude and advocacy of armed self-defense attracted many young men but also led to many encounters with the police, which sometimes included arrests and even shootouts, such as those that took place in Los Angeles, Chicago, and Carbondale, Illinois. The self-empowerment philosophy of Black Power influenced mainstream civil rights groups such as the National Economic Growth Reconstruction Organization (NEGRO), which sold bonds and operated a clothing factory and construction company in New York, and the Opportunities Industrialization Center in Philadelphia, which provided job training and placement—by 1969, it had branches in seventy cities. Black Power was also part of a much larger process of cultural change.
The 1960s were a decade not only of Black Power but also of Black Pride. African American abolitionist John S. Rock had coined the phrase “Black Is Beautiful” in 1858, but in the 1960s, it became an important part of efforts within the African American community to raise self-esteem and encourage pride in African ancestry. Black Pride urged African Americans to reclaim their African heritage and, to promote group solidarity, to substitute African and African-inspired cultural practices, such as handshakes, hairstyles, and dress, for White practices. One of the many cultural products of this movement was the popular television music program Soul Train, created by Don Cornelius in 1969, which celebrated Black culture and aesthetics ( Figure 29.19 ).

THE MEXICAN AMERICAN FIGHT FOR CIVIL RIGHTS

The African American bid for full citizenship was surely the most visible of the battles for civil rights taking place in the United States. However, other minority groups that had been legally discriminated against or otherwise denied access to economic and educational opportunities began to increase efforts to secure their rights in the 1960s. Like the African American movement, the Mexican American civil rights movement won its earliest victories in the federal courts. In 1947, in Mendez v. Westminster, the U.S. Court of Appeals for the Ninth Circuit ruled that segregating children of Hispanic descent was unconstitutional. In 1954, the same year as Brown v. Board of Education, Mexican Americans prevailed in Hernandez v. Texas, when the U.S. Supreme Court extended the protections of the Fourteenth Amendment to all ethnic groups in the United States. The highest-profile struggle of the Mexican American civil rights movement was the fight that Cesar Chavez ( Figure 29.20 ) and Dolores Huerta waged in the fields of California to organize migrant farm workers. In 1962, Chavez and Huerta founded the National Farm Workers Association (NFWA). In 1965, when Filipino grape pickers led by Filipino American Larry Itliong went on strike to call attention to their plight, Chavez lent his support. Workers organized by the NFWA also went on strike, and the two organizations merged to form the United Farm Workers. When Chavez asked American consumers to boycott grapes, politically conscious people around the country heeded his call, and many unionized longshoremen refused to unload grape shipments. In 1966, Chavez led striking workers to the state capitol in Sacramento, further publicizing the cause. Martin Luther King, Jr. telegraphed words of encouragement to Chavez, whom he called a “brother.” The strike ended in 1970 when California farmers recognized the right of farm workers to unionize. However, the farm workers did not gain all they sought, and the larger struggle did not end. The equivalent of the Black Power movement among Mexican Americans was the Chicano Movement. Proudly adopting a derogatory term for Mexican Americans, Chicano activists demanded increased political power for Mexican Americans, education that recognized their cultural heritage, and the restoration of lands taken from them at the end of the Mexican-American War in 1848. One of the founding members, Rodolfo “Corky” Gonzales, launched the Crusade for Justice in Denver in 1965 to provide jobs, legal services, and healthcare for Mexican Americans. From this movement arose La Raza Unida, a political party that attracted many Mexican American college students.
Elsewhere, Reies López Tijerina fought for years to reclaim lost and illegally expropriated ancestral lands in New Mexico; he was one of the co-sponsors of the Poor People’s March on Washington in 1968.

29.4 Challenging the Status Quo

Learning Objectives
By the end of this section, you will be able to:
Describe the goals and activities of SDS, the Free Speech Movement, and the antiwar movement
Explain the rise, goals, and activities of the women’s movement

By the 1960s, a generation of White Americans raised in prosperity and steeped in the culture of conformity of the 1950s had come of age. However, many of these baby boomers (those born between 1946 and 1964) rejected the conformity and luxuries that their parents had provided. These young, middle-class Americans, especially those fortunate enough to attend college when many of their working-class and African American contemporaries were being sent to Vietnam, began to organize to fight for their own rights and end the war that was claiming the lives of so many.

THE NEW LEFT

By 1960, about one-third of the U.S. population was living in the suburbs; during the 1960s, the average family income rose by 33 percent. Material culture blossomed, and at the end of the decade, 70 percent of American families owned washing machines, 83 percent had refrigerators or freezers, and almost 80 percent had at least one car. Entertainment occupied a larger part of both working- and middle-class leisure hours. By 1960, American consumers were spending $85 billion a year on entertainment, double the spending of the preceding decade; by 1969, about 79 percent of American households had black-and-white televisions, and 31 percent could afford color sets. Movies and sports were regular aspects of the weekly routine, and the family vacation became an annual custom for both the middle and working class. Meanwhile, baby boomers, many raised in this environment of affluence, streamed into universities across the nation in unprecedented numbers looking to “find” themselves. Instead, they found traditional systems that forced them to take required courses, confined them to rigid programs of study, and surrounded them with rules limiting what they could do in their free time. These young people were only too willing to take up Kennedy’s call to action, and many did so by joining the civil rights movement. To them, it seemed only right for the children of the “greatest generation” to help those less privileged to fight battles for justice and equality. The more radical aligned themselves with the New Left, activists of the 1960s who rejected the staid liberalism of the Democratic Party. New Left organizations sought reform in areas such as civil rights and women’s rights, campaigned for free speech and more liberal policies toward drug use, and condemned the war in Vietnam. One of the most prominent New Left groups was Students for a Democratic Society (SDS). Organized in 1960, SDS held its first meeting at the University of Michigan, Ann Arbor. Its philosophy was expressed in its manifesto, the Port Huron Statement, written by Tom Hayden and adopted in 1962, affirming the group’s dedication to fighting economic inequality and discrimination. It called for greater participation in the democratic process by ordinary people, advocated civil disobedience, and rejected the anti-Communist position held by most other groups committed to social reform in the United States.
SDS members demanded that universities allow more student participation in university governance and shed their entanglements with the military-industrial complex. They sought to rouse the poor to political action to defeat poverty and racism. In the summer of 1964, a small group of SDS members moved into the Uptown district of Chicago and tried to take on racism and poverty through community organization. Under the umbrella of their Economic Research and Action Project, they created JOIN (Jobs or Income Now) to address problems of urban poverty and resisted plans to displace the poor under the guise of urban renewal. They also called for police review boards to end police brutality, organized free breakfast programs, and started social and recreational clubs for neighborhood youth. Eventually, the movement fissured over whether to remain a campus-based student organization or a community-based development organization. During the same time that SDS became active in Chicago, another student movement emerged on the West Coast, when actions by student activists at the University of California, Berkeley, led to the formation of Berkeley’s Free Speech Movement in 1964. University rules prohibited the solicitation of funds for political causes by anyone other than members of the student Democratic and Republican organizations, and restricted advocacy of political causes on campus. In October 1964, when a student handing out literature for CORE refused to show campus police officers his student ID card, he was promptly arrested. Instantly, the campus police car was surrounded by angry students, who refused to let the vehicle move for thirty-two hours until the student was released. In December, students organized a massive sit-in to resolve the issue of political activities on campus. While unsuccessful in the short term, the movement inspired student activism on campuses throughout the country. A target of many student groups was the war in Vietnam ( Figure 29.21 ). In April 1965, SDS organized a march on Washington for peace; about twenty thousand people attended. That same week, the faculty at the University of Michigan suspended classes and conducted a 24-hour “teach-in” on the war. The idea quickly spread, and on May 15, the first national “teach-in” was held at 122 colleges and universities across the nation. Originally designed to be a debate on the pros and cons of the war, the teach-ins at Berkeley became massive antiwar rallies. By the end of that year, there had been antiwar rallies in some sixty cities.

Americana
Blue Jeans: The Uniform of Nonconformist Radicalism

Overwhelmingly, young cultural warriors and social activists of the 1960s, trying to escape the shackles of what they perceived to be limits on their freedoms, adopted blue jeans as the uniform of their generation. Originally worn by manual laborers because of their near-indestructibility, blue jeans were commonly associated with cowboys, the quintessential icon of American independence. During the 1930s, jeans were adopted by a broader customer base as a result of the popularity of cowboy movies and dude ranch vacations. After World War II, Levi Strauss, their original manufacturer, began to market them east of the Mississippi, and competitors such as Wrangler and Lee fought for a share of the market. In the 1950s, youths testing the limits of middle-class conformity adopted them in imitation of movie stars like James Dean.
By the 1960s, jeans became even more closely associated with youthful rebellion against tradition, a symbol available to everyone, rich and poor, Black and White, men and women. What other styles and behaviors of the 1960s expressed nonconformity, and how?

WOMEN’S RIGHTS

On the national scene, the civil rights movement was creating a climate of protest and claiming rights and new roles in society for people of color. Women played significant roles in organizations fighting for civil rights like SNCC and SDS. However, they often found that those organizations, enlightened as they might be about racial issues or the war in Vietnam, could still be influenced by patriarchal ideas of male superiority. Two members of SNCC, Casey Hayden and Mary King, presented some of their concerns about their organization’s treatment of women in a document entitled “On the Position of Women in SNCC.” Stokely Carmichael responded that the appropriate position for women in SNCC was “prone.” Just as the abolitionist movement made nineteenth-century women more aware of their lack of power and encouraged them to form the first women’s rights movement, the protest movements of the 1960s inspired many White and middle-class women to create their own organized movement for greater rights. Not all were young women engaged in social protest. Many were older, married women who found the traditional roles of housewife and mother unfulfilling. In 1963, writer and feminist Betty Friedan published The Feminine Mystique, in which she contested the post-World War II belief that it was women’s destiny to marry and bear children. Friedan’s book was a best-seller and began to raise the consciousness of many women who agreed that homemaking in the suburbs sapped them of their individualism and left them unsatisfied. The Civil Rights Act of 1964, which prohibited discrimination in employment on the basis of race, color, national origin, and religion, also prohibited, in Title VII, discrimination on the basis of sex. Ironically, protection for women had been included at the suggestion of a Virginia congressman in an attempt to prevent the act’s passage; his reasoning seemed to be that, while a White man might accept that African Americans needed and deserved protection from discrimination, the idea that women deserved equality with men would be far too radical for any of his male colleagues to contemplate. Nevertheless, the act passed, although the struggle to achieve equal pay for equal work continues today. Medical science also contributed a tool to assist women in their liberation. In 1960, the U.S. Food and Drug Administration approved the birth control pill, freeing women from the restrictions of pregnancy and childbearing. Women who were able to limit, delay, and prevent reproduction were freer to work, attend college, and delay marriage. Within five years of the pill’s approval, some six million women were using it. The pill was the first medicine ever intended to be taken by people who were not sick. Even conservatives saw it as a possible means of making marriages stronger by removing the fear of an unwanted pregnancy and improving the health of women. Its opponents, however, argued that it would promote sexual promiscuity, undermine the institutions of marriage and the family, and destroy the moral code of the nation. By the early 1960s, thirty states had made it a criminal offense to sell contraceptive devices.
In 1966, the National Organization for Women (NOW) formed and proceeded to set an agenda for the feminist movement ( Figure 29.22 ). Framed by a statement of purpose written by Friedan, the agenda began by proclaiming NOW’s goal to make possible women’s participation in all aspects of American life and to gain for them all the rights enjoyed by men. Among the specific goals was the passage of the Equal Rights Amendment (yet to be adopted). More radical feminists, like their colleagues in other movements, were dissatisfied with merely redressing economic issues and devised their own brand of consciousness-raising events and symbolic attacks on women’s oppression. The most famous of these was an event staged in September 1968 by New York Radical Women. Protesting stereotypical notions of femininity and rejecting traditional gender expectations, the group demonstrated at the Miss America Pageant in Atlantic City, New Jersey, to bring attention to the contest’s—and society’s—exploitation of women. The protestors crowned a sheep Miss America and then tossed instruments of women’s oppression, including high-heeled shoes, curlers, girdles, and bras, into a “freedom trash can.” News accounts famously, and incorrectly, described the protest as a “bra burning.”
Biology
Chapter Outline
11.1 The Process of Meiosis
11.2 Sexual Reproduction

Introduction

The ability to reproduce in kind is a basic characteristic of all living things. In kind means that the offspring of any organism closely resemble their parent or parents. Hippopotamuses give birth to hippopotamus calves, Joshua trees produce seeds from which Joshua tree seedlings emerge, and adult flamingos lay eggs that hatch into flamingo chicks. In kind does not generally mean exactly the same. Whereas many unicellular organisms and a few multicellular organisms can produce genetically identical clones of themselves through cell division, many single-celled organisms and most multicellular organisms reproduce regularly using another method. Sexual reproduction is the production by parents of two haploid cells and the fusion of two haploid cells to form a single, unique diploid cell. In most plants and animals, through tens of rounds of mitotic cell division, this diploid cell will develop into an adult organism. Haploid cells that are part of the sexual reproductive cycle are produced by a type of cell division called meiosis. Sexual reproduction, specifically meiosis and fertilization, introduces variation into offspring that may account for the evolutionary success of sexual reproduction. The vast majority of eukaryotic organisms, both multicellular and unicellular, can or must employ some form of meiosis and fertilization to reproduce.
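To make the ploidy arithmetic in the paragraph above concrete, here is a minimal worked illustration in LaTeX. The human chromosome count (n = 23) is supplied as an assumed example for the sketch; the introduction itself does not state a specific number.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Ploidy arithmetic of sexual reproduction: two haploid gametes,
% each carrying n chromosomes, fuse to restore the diploid number 2n.
% Assumption: the human value n = 23 is used purely for illustration.
\[
  \underbrace{n}_{\text{egg}} + \underbrace{n}_{\text{sperm}} = 2n,
  \qquad \text{e.g. } 23 + 23 = 46 \text{ chromosomes in the diploid zygote.}
\]
\end{document}
```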
[ { "answer": { "ans_choice": 2, "ans_text": "four haploid" }, "bloom": null, "hl_context": "In some species , cells enter a brief interphase , or interkinesis , before entering meiosis II . Interkinesis lacks an S phase , so chromosomes are not duplicated . The two cells produced in meiosis I go through the events of meiosis II in synchrony . <hl> During meiosis II , the sister chromatids within the two daughter cells separate , forming four new haploid gametes . <hl> The mechanics of meiosis II is similar to mitosis , except that each dividing cell has only one set of homologous chromosomes . Therefore , each cell has half the number of sister chromatids to separate out as a diploid cell undergoing mitosis . Prophase II Two haploid cells are the end result of the first meiotic division . The cells are haploid because at each pole , there is just one of each pair of the homologous chromosomes . Therefore , only one full set of the chromosomes is present . This is why the cells are considered haploid — there is only one chromosome set , even though each homolog still consists of two sister chromatids . Recall that sister chromatids are merely duplicates of one of the two homologous chromosomes ( except for changes that occurred during crossing over ) . <hl> In meiosis II , these two sister chromatids will separate , creating four haploid daughter cells . <hl> Link to Learning", "hl_sentences": "During meiosis II , the sister chromatids within the two daughter cells separate , forming four new haploid gametes . In meiosis II , these two sister chromatids will separate , creating four haploid daughter cells .", "question": { "cloze_format": "Meiosis produces ________ daughter cells.", "normal_format": "Which daughter cells do meiosis produce?", "question_choices": [ "two haploid", "two diploid", "four haploid", "four diploid" ], "question_id": "fs-id1181228", "question_text": "Meiosis produces ________ daughter cells." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "synaptonemal complex" }, "bloom": "2", "hl_context": "The main differences between mitosis and meiosis occur in meiosis I , which is a very different nuclear division than mitosis . <hl> In meiosis I , the homologous chromosome pairs become associated with each other , are bound together with the synaptonemal complex , develop chiasmata and undergo crossover between sister chromatids , and line up along the metaphase plate in tetrads with kinetochore fibers from opposite spindle poles attached to each kinetochore of a homolog in a tetrad . <hl> All of these events occur only in meiosis I . Located at intervals along the synaptonemal complex are large protein assemblies called recombination nodules . These assemblies mark the points of later chiasmata and mediate the multistep process of crossover — or genetic recombination — between the non-sister chromatids . Near the recombination nodule on each chromatid , the double-stranded DNA is cleaved , the cut ends are modified , and a new connection is made between the non-sister chromatids . As prophase I progresses , the synaptonemal complex begins to break down and the chromosomes begin to condense . When the synaptonemal complex is gone , the homologous chromosomes remain attached to each other at the centromere and at chiasmata . The chiasmata remain until anaphase I . The number of chiasmata varies according to the species and the length of the chromosome . 
There must be at least one chiasma per chromosome for proper separation of homologous chromosomes during meiosis I , but there may be as many as 25 . <hl> Following crossover , the synaptonemal complex breaks down and the cohesin connection between homologous pairs is also removed . <hl> <hl> At the end of prophase I , the pairs are held together only at the chiasmata ( Figure 11.3 ) and are called tetrads because the four sister chromatids of each pair of homologous chromosomes are now visible . <hl>", "hl_sentences": "In meiosis I , the homologous chromosome pairs become associated with each other , are bound together with the synaptonemal complex , develop chiasmata and undergo crossover between sister chromatids , and line up along the metaphase plate in tetrads with kinetochore fibers from opposite spindle poles attached to each kinetochore of a homolog in a tetrad . Following crossover , the synaptonemal complex breaks down and the cohesin connection between homologous pairs is also removed . At the end of prophase I , the pairs are held together only at the chiasmata ( Figure 11.3 ) and are called tetrads because the four sister chromatids of each pair of homologous chromosomes are now visible .", "question": { "cloze_format": "The ___ structure is most important in forming the tetrads.", "normal_format": "What structure is most important in forming the tetrads?", "question_choices": [ "centromere", "synaptonemal complex", "chiasma", "kinetochore" ], "question_id": "fs-id1744220", "question_text": "What structure is most important in forming the tetrads?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "anaphase II" }, "bloom": "1", "hl_context": "Meiosis II is much more analogous to a mitotic division . In this case , the duplicated chromosomes ( only one set of them ) line up on the metaphase plate with divided kinetochores attached to kinetochore fibers from opposite poles . <hl> During anaphase II , as in mitotic anaphase , the kinetochores divide and one sister chromatid — now referred to as a chromosome — is pulled to one pole while the other sister chromatid is pulled to the other pole . <hl> If it were not for the fact that there had been crossover , the two products of each individual meiosis II division would be identical ( like in mitosis ) . Instead , they are different because there has always been at least one crossover per chromosome . Meiosis II is not a reduction division because although there are fewer copies of the genome in the resulting cells , there is still one set of chromosomes , as there was at the end of meiosis I .", "hl_sentences": "During anaphase II , as in mitotic anaphase , the kinetochores divide and one sister chromatid — now referred to as a chromosome — is pulled to one pole while the other sister chromatid is pulled to the other pole .", "question": { "cloze_format": "Sister chromatids are separated from each other at the ___ stage of meiosis.", "normal_format": "At which stage of meiosis are sister chromatids separated from each other?", "question_choices": [ "prophase I", "prophase II", "anaphase I", "anaphase II" ], "question_id": "fs-id933450", "question_text": "At which stage of meiosis are sister chromatids separated from each other?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "chiasmata" }, "bloom": "1", "hl_context": "The key event in prometaphase I is the attachment of the spindle fiber microtubules to the kinetochore proteins at the centromeres . 
Kinetochore proteins are multiprotein complexes that bind the centromeres of a chromosome to the microtubules of the mitotic spindle . Microtubules grow from centrosomes placed at opposite poles of the cell . The microtubules move toward the middle of the cell and attach to one of the two fused homologous chromosomes . The microtubules attach at each chromosomes ' kinetochores . With each member of the homologous pair attached to opposite poles of the cell , in the next phase , the microtubules can pull the homologous pair apart . A spindle fiber that has attached to a kinetochore is called a kinetochore microtubule . <hl> At the end of prometaphase I , each tetrad is attached to microtubules from both poles , with one homologous chromosome facing each pole . <hl> <hl> The homologous chromosomes are still held together at chiasmata . <hl> In addition , the nuclear membrane has broken down entirely . <hl> Located at intervals along the synaptonemal complex are large protein assemblies called recombination nodules . <hl> <hl> These assemblies mark the points of later chiasmata and mediate the multistep process of crossover — or genetic recombination — between the non-sister chromatids . <hl> Near the recombination nodule on each chromatid , the double-stranded DNA is cleaved , the cut ends are modified , and a new connection is made between the non-sister chromatids . As prophase I progresses , the synaptonemal complex begins to break down and the chromosomes begin to condense . <hl> When the synaptonemal complex is gone , the homologous chromosomes remain attached to each other at the centromere and at chiasmata . <hl> The chiasmata remain until anaphase I . The number of chiasmata varies according to the species and the length of the chromosome . There must be at least one chiasma per chromosome for proper separation of homologous chromosomes during meiosis I , but there may be as many as 25 . Following crossover , the synaptonemal complex breaks down and the cohesin connection between homologous pairs is also removed . <hl> At the end of prophase I , the pairs are held together only at the chiasmata ( Figure 11.3 ) and are called tetrads because the four sister chromatids of each pair of homologous chromosomes are now visible . <hl>", "hl_sentences": "At the end of prometaphase I , each tetrad is attached to microtubules from both poles , with one homologous chromosome facing each pole . The homologous chromosomes are still held together at chiasmata . Located at intervals along the synaptonemal complex are large protein assemblies called recombination nodules . These assemblies mark the points of later chiasmata and mediate the multistep process of crossover — or genetic recombination — between the non-sister chromatids . When the synaptonemal complex is gone , the homologous chromosomes remain attached to each other at the centromere and at chiasmata . At the end of prophase I , the pairs are held together only at the chiasmata ( Figure 11.3 ) and are called tetrads because the four sister chromatids of each pair of homologous chromosomes are now visible .", "question": { "cloze_format": "At metaphase I, homologous chromosomes are connected only at the structure of ___ .", "normal_format": "At metaphase I, homologous chromosomes are connected only at what structures?", "question_choices": [ "chiasmata", "recombination nodules", "microtubules", "kinetochores" ], "question_id": "fs-id1466651", "question_text": "At metaphase I, homologous chromosomes are connected only at what structures?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "Chiasmata are formed." }, "bloom": null, "hl_context": "<hl> The crossover events are the first source of genetic variation in the nuclei produced by meiosis . <hl> <hl> A single crossover event between homologous non-sister chromatids leads to a reciprocal exchange of equivalent DNA between a maternal chromosome and a paternal chromosome . <hl> <hl> Now , when that sister chromatid is moved into a gamete cell it will carry some DNA from one parent of the individual and some DNA from the other parent . <hl> <hl> The sister recombinant chromatid has a combination of maternal and paternal genes that did not exist before the crossover . <hl> <hl> Multiple crossovers in an arm of the chromosome have the same effect , exchanging segments of DNA to create recombinant chromosomes . <hl> <hl> Located at intervals along the synaptonemal complex are large protein assemblies called recombination nodules . <hl> <hl> These assemblies mark the points of later chiasmata and mediate the multistep process of crossover — or genetic recombination — between the non-sister chromatids . <hl> Near the recombination nodule on each chromatid , the double-stranded DNA is cleaved , the cut ends are modified , and a new connection is made between the non-sister chromatids . As prophase I progresses , the synaptonemal complex begins to break down and the chromosomes begin to condense . When the synaptonemal complex is gone , the homologous chromosomes remain attached to each other at the centromere and at chiasmata . The chiasmata remain until anaphase I . The number of chiasmata varies according to the species and the length of the chromosome . There must be at least one chiasma per chromosome for proper separation of homologous chromosomes during meiosis I , but there may be as many as 25 . Following crossover , the synaptonemal complex breaks down and the cohesin connection between homologous pairs is also removed . At the end of prophase I , the pairs are held together only at the chiasmata ( Figure 11.3 ) and are called tetrads because the four sister chromatids of each pair of homologous chromosomes are now visible . Early in prophase I , before the chromosomes can be seen clearly microscopically , the homologous chromosomes are attached at their tips to the nuclear envelope by proteins . As the nuclear envelope begins to break down , the proteins associated with homologous chromosomes bring the pair close to each other . Recall that , in mitosis , homologous chromosomes do not pair together . In mitosis , homologous chromosomes line up end-to-end so that when they divide , each daughter cell receives a sister chromatid from both members of the homologous pair . The synaptonemal complex , a lattice of proteins between the homologous chromosomes , first forms at specific locations and then spreads to cover the entire length of the chromosomes . The tight pairing of the homologous chromosomes is called synapsis . In synapsis , the genes on the chromatids of the homologous chromosomes are aligned precisely with each other . <hl> The synaptonemal complex supports the exchange of chromosomal segments between non-sister homologous chromatids , a process called crossing over . <hl> Crossing over can be observed visually after the exchange as chiasmata ( singular = chiasma ) ( Figure 11.2 ) . 
In species such as humans , even though the X and Y sex chromosomes are not homologous ( most of their genes differ ) , they have a small region of homology that allows the X and Y chromosomes to pair up during prophase I . A partial synaptonemal complex develops only between the regions of homology .", "hl_sentences": "The crossover events are the first source of genetic variation in the nuclei produced by meiosis . A single crossover event between homologous non-sister chromatids leads to a reciprocal exchange of equivalent DNA between a maternal chromosome and a paternal chromosome . Now , when that sister chromatid is moved into a gamete cell it will carry some DNA from one parent of the individual and some DNA from the other parent . The sister recombinant chromatid has a combination of maternal and paternal genes that did not exist before the crossover . Multiple crossovers in an arm of the chromosome have the same effect , exchanging segments of DNA to create recombinant chromosomes . Located at intervals along the synaptonemal complex are large protein assemblies called recombination nodules . These assemblies mark the points of later chiasmata and mediate the multistep process of crossover — or genetic recombination — between the non-sister chromatids . The synaptonemal complex supports the exchange of chromosomal segments between non-sister homologous chromatids , a process called crossing over .", "question": { "cloze_format": "___ is not true in regard to crossover.", "normal_format": "Which of the following is not true in regard to crossover?", "question_choices": [ "Spindle microtubules guide the transfer of DNA across the synaptonemal complex.", "Non-sister chromatids exchange genetic material.", "Chiasmata are formed.", "Recombination nodules mark the crossover point." ], "question_id": "fs-id1628288", "question_text": "Which of the following is not true in regard to crossover?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "S phase" }, "bloom": null, "hl_context": "<hl> In some species , cells enter a brief interphase , or interkinesis , before entering meiosis II . <hl> <hl> Interkinesis lacks an S phase , so chromosomes are not duplicated . <hl> The two cells produced in meiosis I go through the events of meiosis II in synchrony . During meiosis II , the sister chromatids within the two daughter cells separate , forming four new haploid gametes . The mechanics of meiosis II is similar to mitosis , except that each dividing cell has only one set of homologous chromosomes . Therefore , each cell has half the number of sister chromatids to separate out as a diploid cell undergoing mitosis . Prophase II", "hl_sentences": "In some species , cells enter a brief interphase , or interkinesis , before entering meiosis II . Interkinesis lacks an S phase , so chromosomes are not duplicated .", "question": { "cloze_format": "The phase of mitotic interphase that is missing from meiotic interkinesis is the ___.", "normal_format": "What phase of mitotic interphase is missing from meiotic interkinesis?", "question_choices": [ "G0 phase", "G1 phase", "S phase", "G2 phase" ], "question_id": "fs-id941668", "question_text": "What phase of mitotic interphase is missing from meiotic interkinesis?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "meiosis II" }, "bloom": null, "hl_context": "<hl> In some species , cells enter a brief interphase , or interkinesis , before entering meiosis II . 
<hl> <hl> Interkinesis lacks an S phase , so chromosomes are not duplicated . <hl> The two cells produced in meiosis I go through the events of meiosis II in synchrony . During meiosis II , the sister chromatids within the two daughter cells separate , forming four new haploid gametes . <hl> The mechanics of meiosis II is similar to mitosis , except that each dividing cell has only one set of homologous chromosomes . <hl> Therefore , each cell has half the number of sister chromatids to separate out as a diploid cell undergoing mitosis . Prophase II <hl> The nuclear division that forms haploid cells , which is called meiosis , is related to mitosis . <hl> As you have learned , mitosis is the part of a cell reproduction cycle that results in identical daughter nuclei that are also genetically identical to the original parent nucleus . In mitosis , both the parent and the daughter nuclei are at the same ploidy level — diploid for most plants and animals . <hl> Meiosis employs many of the same mechanisms as mitosis . <hl> However , the starting nucleus is always diploid and the nuclei that result at the end of a meiotic cell division are haploid . To achieve this reduction in chromosome number , meiosis consists of one round of chromosome duplication and two rounds of nuclear division . Because the events that occur during each of the division stages are analogous to the events of mitosis , the same stage names are assigned . However , because there are two rounds of division , the major process and the stages are designated with a “ I ” or a “ II . ” Thus , meiosis I is the first round of meiotic division and consists of prophase I , prometaphase I , and so on . Meiosis II , in which the second round of meiotic division takes place , includes prophase II , prometaphase II , and so on .", "hl_sentences": "In some species , cells enter a brief interphase , or interkinesis , before entering meiosis II . Interkinesis lacks an S phase , so chromosomes are not duplicated . The mechanics of meiosis II is similar to mitosis , except that each dividing cell has only one set of homologous chromosomes . The nuclear division that forms haploid cells , which is called meiosis , is related to mitosis . Meiosis employs many of the same mechanisms as mitosis .", "question": { "cloze_format": "The part of meiosis that is similar to mitosis is ________.", "normal_format": "Which part of meiosis is similar to mitosis?", "question_choices": [ "meiosis I", "anaphase I", "meiosis II", "interkinesis" ], "question_id": "fs-id1440220", "question_text": "The part of meiosis that is similar to mitosis is ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "16" }, "bloom": null, "hl_context": "<hl> Most animals and plants are diploid , containing two sets of chromosomes . <hl> <hl> In each somatic cell of the organism ( all cells of a multicellular organism except the gametes or reproductive cells ) , the nucleus contains two copies of each chromosome , called homologous chromosomes . <hl> <hl> Somatic cells are sometimes referred to as “ body ” cells . <hl> Homologous chromosomes are matched pairs containing the same genes in identical locations along their length . Diploid organisms inherit one copy of each homologous chromosome from each parent ; all together , they are considered a full set of chromosomes . <hl> Haploid cells , containing a single copy of each homologous chromosome , are found only within structures that give rise to either gametes or spores . 
<hl> Spores are haploid cells that can produce a haploid organism or can fuse with another spore to form a diploid cell . All animals and most plants produce eggs and sperm , or gametes . Some plants and all fungi produce spores .", "hl_sentences": "Most animals and plants are diploid , containing two sets of chromosomes . In each somatic cell of the organism ( all cells of a multicellular organism except the gametes or reproductive cells ) , the nucleus contains two copies of each chromosome , called homologous chromosomes . Somatic cells are sometimes referred to as “ body ” cells . Haploid cells , containing a single copy of each homologous chromosome , are found only within structures that give rise to either gametes or spores .", "question": { "cloze_format": "If a muscle cell of a typical organism has 32 chromosomes, the number of chromosomes that will be in a gamete of that same organism is ___ .", "normal_format": "If a muscle cell of a typical organism has 32 chromosomes, how many chromosomes will be in a gamete of that same organism?", "question_choices": [ "8", "16", "32", "64" ], "question_id": "fs-id1321696", "question_text": "If a muscle cell of a typical organism has 32 chromosomes, how many chromosomes will be in a gamete of that same organism?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "Sexual reproduction results in variation in the offspring." }, "bloom": "3", "hl_context": "<hl> It is not in dispute that sexual reproduction provides evolutionary advantages to organisms that employ this mechanism to produce offspring . <hl> <hl> But why , even in the face of fairly stable conditions , does sexual reproduction persist when it is more difficult and costly for individual organisms ? <hl> <hl> Variation is the outcome of sexual reproduction , but why are ongoing variations necessary ? <hl> Enter the Red Queen hypothesis , first proposed by Leigh Van Valen in 1973 . 3 The concept was named in reference to the Red Queen's race in Lewis Carroll's book , Through the Looking-Glass . 3 Leigh Van Valen , “ A New Evolutionary Law , ” Evolutionary Theory 1 ( 1973 ): 1 – 30 All species co-evolve with other organisms ; for example predators evolve with their prey , and parasites evolve with their hosts . Each tiny advantage gained by favorable variation gives a species an edge over close competitors , predators , parasites , or even prey . The only method that will allow a co-evolving species to maintain its own share of the resources is to also continually improve its fitness . As one species gains an advantage , this increases selection on the other species ; they must also develop an advantage or they will be outcompeted . No single species progresses too far ahead because genetic variation among the progeny of sexual reproduction provides all species with a mechanism to improve rapidly . Species that cannot keep up become extinct . The Red Queen ’ s catchphrase was , “ It takes all the running you can do to stay in the same place . ” This is an apt description of co-evolution between competing species . <hl> Identify variation among offspring as a potential evolutionary advantage to sexual reproduction <hl>", "hl_sentences": "It is not in dispute that sexual reproduction provides evolutionary advantages to organisms that employ this mechanism to produce offspring . But why , even in the face of fairly stable conditions , does sexual reproduction persist when it is more difficult and costly for individual organisms ? 
Variation is the outcome of sexual reproduction , but why are ongoing variations necessary ? Identify variation among offspring as a potential evolutionary advantage to sexual reproduction", "question": { "cloze_format": "A likely evolutionary advantage of sexual reproduction over asexual reproduction is that ___ .", "normal_format": "What is a likely evolutionary advantage of sexual reproduction over asexual reproduction?", "question_choices": [ "Sexual reproduction involves fewer steps.", "There is a lower chance of using up the resources in a given environment.", "Sexual reproduction results in variation in the offspring.", "Sexual reproduction is more cost-effective." ], "question_id": "fs-id1172211", "question_text": "What is a likely evolutionary advantage of sexual reproduction over asexual reproduction?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "alternation of generations" }, "bloom": "1", "hl_context": "<hl> The third life-cycle type , employed by some algae and all plants , is a blend of the haploid-dominant and diploid-dominant extremes . <hl> <hl> Species with alternation of generations have both haploid and diploid multicellular organisms as part of their life cycle . <hl> The haploid multicellular plants are called gametophytes , because they produce gametes from specialized cells . Meiosis is not directly involved in the production of gametes in this case , because the organism that produces the gametes is already a haploid . Fertilization between the gametes forms a diploid zygote . The zygote will undergo many rounds of mitosis and give rise to a diploid multicellular plant called a sporophyte . Specialized cells of the sporophyte will undergo meiosis and produce haploid spores . The spores will subsequently develop into the gametophytes ( Figure 11.10 ) . Fertilization and meiosis alternate in sexual life cycles . What happens between these two events depends on the organism . The process of meiosis reduces the chromosome number by half . Fertilization , the joining of two haploid gametes , restores the diploid condition . <hl> There are three main categories of life cycles in multicellular organisms : diploid-dominant , in which the multicellular diploid stage is the most obvious life stage , such as with most animals including humans ; haploid-dominant , in which the multicellular haploid stage is the most obvious life stage , such as with all fungi and some algae ; and alternation of generations , in which the two stages are apparent to different degrees depending on the group , as with plants and some algae . <hl>", "hl_sentences": "The third life-cycle type , employed by some algae and all plants , is a blend of the haploid-dominant and diploid-dominant extremes . Species with alternation of generations have both haploid and diploid multicellular organisms as part of their life cycle . 
There are three main categories of life cycles in multicellular organisms : diploid-dominant , in which the multicellular diploid stage is the most obvious life stage , such as with most animals including humans ; haploid-dominant , in which the multicellular haploid stage is the most obvious life stage , such as with all fungi and some algae ; and alternation of generations , in which the two stages are apparent to different degrees depending on the group , as with plants and some algae .", "question": { "cloze_format": "The type of life cycle that has both a haploid and diploid multicellular stage is the ___.", "normal_format": "Which type of life cycle has both a haploid and diploid multicellular stage?", "question_choices": [ "asexual", "diploid-dominant", "haploid-dominant", "alternation of generations" ], "question_id": "fs-id1338802", "question_text": "Which type of life cycle has both a haploid and diploid multicellular stage?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "haploid-dominant" }, "bloom": "1", "hl_context": "Fertilization and meiosis alternate in sexual life cycles . What happens between these two events depends on the organism . The process of meiosis reduces the chromosome number by half . Fertilization , the joining of two haploid gametes , restores the diploid condition . <hl> There are three main categories of life cycles in multicellular organisms : diploid-dominant , in which the multicellular diploid stage is the most obvious life stage , such as with most animals including humans ; haploid-dominant , in which the multicellular haploid stage is the most obvious life stage , such as with all fungi and some algae ; and alternation of generations , in which the two stages are apparent to different degrees depending on the group , as with plants and some algae . <hl>", "hl_sentences": "There are three main categories of life cycles in multicellular organisms : diploid-dominant , in which the multicellular diploid stage is the most obvious life stage , such as with most animals including humans ; haploid-dominant , in which the multicellular haploid stage is the most obvious life stage , such as with all fungi and some algae ; and alternation of generations , in which the two stages are apparent to different degrees depending on the group , as with plants and some algae .", "question": { "cloze_format": "Fungi typically display the life cycle of ___ .", "normal_format": "Fungi typically display which type of life cycle?", "question_choices": [ "diploid-dominant", "haploid-dominant", "alternation of generations", "asexual" ], "question_id": "fs-id1740750", "question_text": "Fungi typically display which type of life cycle?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "sporophyte" }, "bloom": null, "hl_context": "The third life-cycle type , employed by some algae and all plants , is a blend of the haploid-dominant and diploid-dominant extremes . Species with alternation of generations have both haploid and diploid multicellular organisms as part of their life cycle . <hl> The haploid multicellular plants are called gametophytes , because they produce gametes from specialized cells . <hl> <hl> Meiosis is not directly involved in the production of gametes in this case , because the organism that produces the gametes is already a haploid . <hl> <hl> Fertilization between the gametes forms a diploid zygote . 
<hl> <hl> The zygote will undergo many rounds of mitosis and give rise to a diploid multicellular plant called a sporophyte . <hl> <hl> Specialized cells of the sporophyte will undergo meiosis and produce haploid spores . <hl> <hl> The spores will subsequently develop into the gametophytes ( Figure 11.10 ) . <hl>", "hl_sentences": "The haploid multicellular plants are called gametophytes , because they produce gametes from specialized cells . Meiosis is not directly involved in the production of gametes in this case , because the organism that produces the gametes is already a haploid . Fertilization between the gametes forms a diploid zygote . The zygote will undergo many rounds of mitosis and give rise to a diploid multicellular plant called a sporophyte . Specialized cells of the sporophyte will undergo meiosis and produce haploid spores . The spores will subsequently develop into the gametophytes ( Figure 11.10 ) .", "question": { "cloze_format": "A diploid, multicellular life-cycle stage that gives rise to haploid cells by meiosis is called a ________.", "normal_format": "What is a diploid, multicellular life-cycle stage that gives rise to haploid cells by meiosis called?", "question_choices": [ "sporophyte", "gametophyte", "spore", "gamete" ], "question_id": "fs-id8722800", "question_text": "A diploid, multicellular life-cycle stage that gives rise to haploid cells by meiosis is called a ________." }, "references_are_paraphrase": null } ]
11
11.1 The Process of Meiosis

Learning Objectives
By the end of this section, you will be able to:
Describe the behavior of chromosomes during meiosis
Describe cellular events during meiosis
Explain the differences between meiosis and mitosis
Explain the mechanisms within meiosis that generate genetic variation among the products of meiosis

Sexual reproduction requires fertilization, the union of two cells from two individual organisms. If those two cells each contain one set of chromosomes, then the resulting cell contains two sets of chromosomes. Haploid cells contain one set of chromosomes. Cells containing two sets of chromosomes are called diploid. The number of sets of chromosomes in a cell is called its ploidy level. If the reproductive cycle is to continue, then the diploid cell must somehow reduce its number of chromosome sets before fertilization can occur again, or there will be a continual doubling in the number of chromosome sets in every generation. So, in addition to fertilization, sexual reproduction includes a nuclear division that reduces the number of chromosome sets.

Most animals and plants are diploid, containing two sets of chromosomes. In each somatic cell of the organism (all cells of a multicellular organism except the gametes or reproductive cells), the nucleus contains two copies of each chromosome, called homologous chromosomes. Somatic cells are sometimes referred to as “body” cells. Homologous chromosomes are matched pairs containing the same genes in identical locations along their length. Diploid organisms inherit one copy of each homologous chromosome from each parent; all together, they are considered a full set of chromosomes. Haploid cells, containing a single copy of each homologous chromosome, are found only within structures that give rise to either gametes or spores. Spores are haploid cells that can produce a haploid organism or can fuse with another spore to form a diploid cell. All animals and most plants produce eggs and sperm, or gametes. Some plants and all fungi produce spores.

The nuclear division that forms haploid cells, which is called meiosis, is related to mitosis. As you have learned, mitosis is the part of a cell reproduction cycle that results in identical daughter nuclei that are also genetically identical to the original parent nucleus. In mitosis, both the parent and the daughter nuclei are at the same ploidy level—diploid for most plants and animals. Meiosis employs many of the same mechanisms as mitosis. However, the starting nucleus is always diploid and the nuclei that result at the end of a meiotic cell division are haploid. To achieve this reduction in chromosome number, meiosis consists of one round of chromosome duplication and two rounds of nuclear division. Because the events that occur during each of the division stages are analogous to the events of mitosis, the same stage names are assigned. However, because there are two rounds of division, the major process and the stages are designated with a “I” or a “II.” Thus, meiosis I is the first round of meiotic division and consists of prophase I, prometaphase I, and so on. Meiosis II, in which the second round of meiotic division takes place, includes prophase II, prometaphase II, and so on.

Meiosis I
Meiosis is preceded by an interphase consisting of the G1, S, and G2 phases, which are nearly identical to the phases preceding mitosis. The G1 phase, which is also called the first gap phase, is the first phase of the interphase and is focused on cell growth.
The S phase is the second phase of interphase, during which the DNA of the chromosomes is replicated. Finally, the G2 phase, also called the second gap phase, is the third and final phase of interphase; in this phase, the cell undergoes the final preparations for meiosis. During DNA duplication in the S phase, each chromosome is replicated to produce two identical copies, called sister chromatids, that are held together at the centromere by cohesin proteins. Cohesin holds the chromatids together until anaphase II. The centrosomes, which are the structures that organize the microtubules of the meiotic spindle, also replicate. This prepares the cell to enter prophase I, the first meiotic phase.

Prophase I
Early in prophase I, before the chromosomes can be seen clearly microscopically, the homologous chromosomes are attached at their tips to the nuclear envelope by proteins. As the nuclear envelope begins to break down, the proteins associated with homologous chromosomes bring the pair close to each other. Recall that, in mitosis, homologous chromosomes do not pair together. In mitosis, homologous chromosomes line up end-to-end so that when they divide, each daughter cell receives a sister chromatid from both members of the homologous pair. The synaptonemal complex, a lattice of proteins between the homologous chromosomes, first forms at specific locations and then spreads to cover the entire length of the chromosomes. The tight pairing of the homologous chromosomes is called synapsis. In synapsis, the genes on the chromatids of the homologous chromosomes are aligned precisely with each other. The synaptonemal complex supports the exchange of chromosomal segments between non-sister homologous chromatids, a process called crossing over. Crossing over can be observed visually after the exchange as chiasmata (singular = chiasma) (Figure 11.2). In species such as humans, even though the X and Y sex chromosomes are not homologous (most of their genes differ), they have a small region of homology that allows the X and Y chromosomes to pair up during prophase I. A partial synaptonemal complex develops only between the regions of homology.
The crossover events are the first source of genetic variation in the nuclei produced by meiosis. A single crossover event between homologous non-sister chromatids leads to a reciprocal exchange of equivalent DNA between a maternal chromosome and a paternal chromosome. Now, when that sister chromatid is moved into a gamete cell it will carry some DNA from one parent of the individual and some DNA from the other parent. The sister recombinant chromatid has a combination of maternal and paternal genes that did not exist before the crossover. Multiple crossovers in an arm of the chromosome have the same effect, exchanging segments of DNA to create recombinant chromosomes.

Prometaphase I
The key event in prometaphase I is the attachment of the spindle fiber microtubules to the kinetochore proteins at the centromeres. Kinetochore proteins are multiprotein complexes that bind the centromeres of a chromosome to the microtubules of the mitotic spindle. Microtubules grow from centrosomes placed at opposite poles of the cell. The microtubules move toward the middle of the cell and attach to one of the two fused homologous chromosomes. The microtubules attach at each chromosome's kinetochores. With each member of the homologous pair attached to opposite poles of the cell, in the next phase, the microtubules can pull the homologous pair apart. A spindle fiber that has attached to a kinetochore is called a kinetochore microtubule. At the end of prometaphase I, each tetrad is attached to microtubules from both poles, with one homologous chromosome facing each pole. The homologous chromosomes are still held together at chiasmata. In addition, the nuclear membrane has broken down entirely.

Metaphase I
During metaphase I, the homologous chromosomes are arranged in the center of the cell with the kinetochores facing opposite poles. The homologous pairs orient themselves randomly at the equator. For example, if the two homologous members of chromosome 1 are labeled a and b, then the chromosomes could line up a-b, or b-a. This is important in determining the genes carried by a gamete, as each will only receive one of the two homologous chromosomes. Recall that homologous chromosomes are not identical. They contain slight differences in their genetic information, causing each gamete to have a unique genetic makeup. This randomness is the physical basis for the creation of the second form of genetic variation in offspring. Consider that the homologous chromosomes of a sexually reproducing organism are originally inherited as two separate sets, one from each parent. Using humans as an example, one set of 23 chromosomes is present in the egg donated by the mother. The father provides the other set of 23 chromosomes in the sperm that fertilizes the egg. Every cell of the multicellular offspring has copies of the original two sets of homologous chromosomes. In prophase I of meiosis, the homologous chromosomes form the tetrads. In metaphase I, these pairs line up at the midway point between the two poles of the cell to form the metaphase plate. Because there is an equal chance that a microtubule fiber will encounter a maternally or paternally inherited chromosome, the arrangement of the tetrads at the metaphase plate is random. Any maternally inherited chromosome may face either pole. Any paternally inherited chromosome may also face either pole. The orientation of each tetrad is independent of the orientation of the other 22 tetrads.
This event—the random (or independent) assortment of homologous chromosomes at the metaphase plate—is the second mechanism that introduces variation into the gametes or spores. In each cell that undergoes meiosis, the arrangement of the tetrads is different. The number of variations is dependent on the number of chromosomes making up a set. There are two possibilities for orientation at the metaphase plate; the possible number of alignments therefore equals 2^n, where n is the number of chromosomes per set. Humans have 23 chromosome pairs, which results in over eight million (2^23) possible genetically distinct gametes. This number does not include the variability that was previously created in the sister chromatids by crossover. (A short worked calculation of this combinatorial variation appears at the end of this chapter.) Given these two mechanisms, it is highly unlikely that any two haploid cells resulting from meiosis will have the same genetic composition (Figure 11.4).

To summarize the genetic consequences of meiosis I, the maternal and paternal genes are recombined by crossover events that occur between each homologous pair during prophase I. In addition, the random assortment of tetrads on the metaphase plate produces a unique combination of maternal and paternal chromosomes that will make their way into the gametes.

Anaphase I
In anaphase I, the microtubules pull the linked chromosomes apart. The sister chromatids remain tightly bound together at the centromere. The chiasmata are broken in anaphase I as the microtubules attached to the fused kinetochores pull the homologous chromosomes apart (Figure 11.5).

Telophase I and Cytokinesis
In telophase, the separated chromosomes arrive at opposite poles. The remainder of the typical telophase events may or may not occur, depending on the species. In some organisms, the chromosomes decondense and nuclear envelopes form around the chromatids in telophase I. In other organisms, cytokinesis—the physical separation of the cytoplasmic components into two daughter cells—occurs without reformation of the nuclei. In nearly all species of animals and some fungi, cytokinesis separates the cell contents via a cleavage furrow (constriction of the actin ring that leads to cytoplasmic division). In plants, a cell plate is formed during cell cytokinesis by Golgi vesicles fusing at the metaphase plate. This cell plate will ultimately lead to the formation of cell walls that separate the two daughter cells.

Two haploid cells are the end result of the first meiotic division. The cells are haploid because at each pole, there is just one of each pair of the homologous chromosomes. Therefore, only one full set of the chromosomes is present. This is why the cells are considered haploid—there is only one chromosome set, even though each homolog still consists of two sister chromatids. Recall that sister chromatids are merely duplicates of one of the two homologous chromosomes (except for changes that occurred during crossing over). In meiosis II, these two sister chromatids will separate, creating four haploid daughter cells.

Link to Learning
Review the process of meiosis, observing how chromosomes align and migrate, at Meiosis: An Interactive Animation.

Meiosis II
In some species, cells enter a brief interphase, or interkinesis, before entering meiosis II. Interkinesis lacks an S phase, so chromosomes are not duplicated. The two cells produced in meiosis I go through the events of meiosis II in synchrony. During meiosis II, the sister chromatids within the two daughter cells separate, forming four new haploid gametes.
The mechanics of meiosis II are similar to those of mitosis, except that each dividing cell has only one set of homologous chromosomes. Therefore, each cell has half the number of sister chromatids to separate out compared with a diploid cell undergoing mitosis.

Prophase II
If the chromosomes decondensed in telophase I, they condense again. If nuclear envelopes were formed, they fragment into vesicles. The centrosomes that were duplicated during interkinesis move away from each other toward opposite poles, and new spindles are formed.

Prometaphase II
The nuclear envelopes are completely broken down, and the spindle is fully formed. Each sister chromatid forms an individual kinetochore that attaches to microtubules from opposite poles.

Metaphase II
The sister chromatids are maximally condensed and aligned at the equator of the cell.

Anaphase II
The sister chromatids are pulled apart by the kinetochore microtubules and move toward opposite poles. Non-kinetochore microtubules elongate the cell.

Telophase II and Cytokinesis
The chromosomes arrive at opposite poles and begin to decondense. Nuclear envelopes form around the chromosomes. Cytokinesis separates the two cells into four unique haploid cells. At this point, the newly formed nuclei are both haploid. The cells produced are genetically unique because of the random assortment of paternal and maternal homologs and because of the recombining of maternal and paternal segments of chromosomes (with their sets of genes) that occurs during crossover. The entire process of meiosis is outlined in Figure 11.6.

Comparing Meiosis and Mitosis
Mitosis and meiosis are both forms of division of the nucleus in eukaryotic cells. They share some similarities, but also exhibit distinct differences that lead to very different outcomes (Figure 11.7). Mitosis is a single nuclear division that results in two nuclei that are usually partitioned into two new cells. The nuclei resulting from a mitotic division are genetically identical to the original nucleus. They have the same number of sets of chromosomes, one set in the case of haploid cells and two sets in the case of diploid cells. In most plants and all animal species, it is typically diploid cells that undergo mitosis to form new diploid cells. In contrast, meiosis consists of two nuclear divisions resulting in four nuclei that are usually partitioned into four new cells. The nuclei resulting from meiosis are not genetically identical and they contain one chromosome set only. This is half the number of chromosome sets in the original cell, which is diploid.

The main differences between mitosis and meiosis occur in meiosis I, which is a very different nuclear division than mitosis. In meiosis I, the homologous chromosome pairs become associated with each other, are bound together with the synaptonemal complex, develop chiasmata and undergo crossover between non-sister chromatids, and line up along the metaphase plate in tetrads with kinetochore fibers from opposite spindle poles attached to each kinetochore of a homolog in a tetrad. All of these events occur only in meiosis I. When the chiasmata resolve and the tetrad is broken up with the homologs moving to one pole or another, the ploidy level—the number of sets of chromosomes in each future nucleus—has been reduced from two to one. For this reason, meiosis I is referred to as a reduction division. There is no such reduction in ploidy level during mitosis. Meiosis II is much more analogous to a mitotic division.
In this case, the duplicated chromosomes (only one set of them) line up on the metaphase plate with divided kinetochores attached to kinetochore fibers from opposite poles. During anaphase II, as in mitotic anaphase, the kinetochores divide and one sister chromatid—now referred to as a chromosome—is pulled to one pole while the other sister chromatid is pulled to the other pole. If it were not for the fact that there had been crossover, the two products of each individual meiosis II division would be identical (as in mitosis). Instead, they are different because there has always been at least one crossover per chromosome. Meiosis II is not a reduction division because although there are fewer copies of the genome in the resulting cells, there is still one set of chromosomes, as there was at the end of meiosis I.

Evolution Connection
The Mystery of the Evolution of Meiosis
Some characteristics of organisms are so widespread and fundamental that it is sometimes difficult to remember that they evolved like other simpler traits. Meiosis is such an extraordinarily complex series of cellular events that biologists have had trouble hypothesizing and testing how it may have evolved. Although meiosis is inextricably entwined with sexual reproduction and its advantages and disadvantages, it is important to separate the questions of the evolution of meiosis and the evolution of sex, because early meiosis may have been advantageous for different reasons than it is now. Thinking outside the box and imagining what the early benefits from meiosis might have been is one approach to uncovering how it may have evolved.

Meiosis and mitosis share obvious cellular processes, and it makes sense that meiosis evolved from mitosis. The difficulty lies in the clear differences between meiosis I and mitosis. Adam Wilkins and Robin Holliday 1 summarized the unique events that needed to occur for the evolution of meiosis from mitosis. These steps are homologous chromosome pairing, crossover exchanges, sister chromatids remaining attached during anaphase, and suppression of DNA replication in interphase. They argue that the first step is the hardest and most important, and that understanding how it evolved would make the evolutionary process clearer. They suggest genetic experiments that might shed light on the evolution of synapsis.
1 Adam S. Wilkins and Robin Holliday, “The Evolution of Meiosis from Mitosis,” Genetics 181 (2009): 3–12.

There are other approaches to understanding the evolution of meiosis in progress. Different forms of meiosis exist in single-celled protists. Some appear to be simpler or more “primitive” forms of meiosis. Comparing the meiotic divisions of different protists may shed light on the evolution of meiosis. Marilee Ramesh and colleagues 2 compared the genes involved in meiosis in protists to understand when and where meiosis might have evolved. Although research is still ongoing, recent scholarship into meiosis in protists suggests that some aspects of meiosis may have evolved later than others. This kind of genetic comparison can tell us what aspects of meiosis are the oldest and what cellular processes they may have borrowed from in earlier cells.
2 Marilee A. Ramesh, Shehre-Banoo Malik, and John M. Logsdon, Jr., “A Phylogenetic Inventory of Meiotic Genes: Evidence for Sex in Giardia and an Early Eukaryotic Origin of Meiosis,” Current Biology 15 (2005): 185–91.
Link to Learning
Click through the steps of this interactive animation to compare the meiotic process of cell division to that of mitosis: How Cells Divide.

11.2 Sexual Reproduction

Learning Objectives
By the end of this section, you will be able to:
Explain that meiosis and sexual reproduction are evolved traits
Identify variation among offspring as a potential evolutionary advantage to sexual reproduction
Describe the three different life-cycle types among sexual multicellular organisms and their commonalities

Sexual reproduction was an early evolutionary innovation after the appearance of eukaryotic cells. It appears to have been very successful because most eukaryotes are able to reproduce sexually, and in many animals, it is the only mode of reproduction. And yet, scientists recognize some real disadvantages to sexual reproduction. On the surface, creating offspring that are genetic clones of the parent appears to be a better system. If the parent organism is successfully occupying a habitat, offspring with the same traits would be similarly successful. There is also the obvious benefit to an organism that can produce offspring whenever circumstances are favorable by asexual budding, fragmentation, or asexual eggs. These methods of reproduction do not require another organism of the opposite sex. Indeed, some organisms that lead a solitary lifestyle have retained the ability to reproduce asexually. In addition, in asexual populations, every individual is capable of reproduction. In sexual populations, the males are not producing the offspring themselves, so in theory an asexual population could grow twice as fast.

However, multicellular organisms that exclusively depend on asexual reproduction are exceedingly rare. Why is sexuality (and meiosis) so common? This is one of the important unanswered questions in biology and has been the focus of much research beginning in the latter half of the twentieth century. There are several possible explanations, one of which is that the variation that sexual reproduction creates among offspring is very important to the survival and reproduction of the population. Thus, on average, a sexually reproducing population will leave more descendants than an otherwise similar asexually reproducing population. The only source of variation in asexual organisms is mutation. This is the ultimate source of variation in sexual organisms, but in addition, those different mutations are continually reshuffled from one generation to the next when different parents combine their unique genomes and the genes are mixed into different combinations by crossovers during prophase I and random assortment at metaphase I.

Evolution Connection
The Red Queen Hypothesis
It is not in dispute that sexual reproduction provides evolutionary advantages to organisms that employ this mechanism to produce offspring. But why, even in the face of fairly stable conditions, does sexual reproduction persist when it is more difficult and costly for individual organisms? Variation is the outcome of sexual reproduction, but why are ongoing variations necessary? Enter the Red Queen hypothesis, first proposed by Leigh Van Valen in 1973. 3 The concept was named in reference to the Red Queen's race in Lewis Carroll's book, Through the Looking-Glass.
3 Leigh Van Valen, “A New Evolutionary Law,” Evolutionary Theory 1 (1973): 1–30.
All species co-evolve with other organisms; for example, predators evolve with their prey, and parasites evolve with their hosts.
Each tiny advantage gained by favorable variation gives a species an edge over close competitors, predators, parasites, or even prey. The only method that will allow a co-evolving species to maintain its own share of the resources is to also continually improve its fitness. As one species gains an advantage, this increases selection on the other species; they must also develop an advantage or they will be outcompeted. No single species progresses too far ahead because genetic variation among the progeny of sexual reproduction provides all species with a mechanism to improve rapidly. Species that cannot keep up become extinct. The Red Queen’s catchphrase was, “It takes all the running you can do to stay in the same place.” This is an apt description of co-evolution between competing species.

Life Cycles of Sexually Reproducing Organisms
Fertilization and meiosis alternate in sexual life cycles. What happens between these two events depends on the organism. The process of meiosis reduces the chromosome number by half. Fertilization, the joining of two haploid gametes, restores the diploid condition. There are three main categories of life cycles in multicellular organisms: diploid-dominant, in which the multicellular diploid stage is the most obvious life stage, such as with most animals including humans; haploid-dominant, in which the multicellular haploid stage is the most obvious life stage, such as with all fungi and some algae; and alternation of generations, in which the two stages are apparent to different degrees depending on the group, as with plants and some algae.

Diploid-Dominant Life Cycle
Nearly all animals employ a diploid-dominant life-cycle strategy in which the only haploid cells produced by the organism are the gametes. Early in the development of the embryo, specialized diploid cells, called germ cells, are produced within the gonads, such as the testes and ovaries. Germ cells are capable of mitosis to perpetuate the cell line and meiosis to produce gametes. Once the haploid gametes are formed, they lose the ability to divide again. There is no multicellular haploid life stage. Fertilization occurs with the fusion of two gametes, usually from different individuals, restoring the diploid state (Figure 11.8).

Haploid-Dominant Life Cycle
Most fungi and algae employ a life-cycle type in which the “body” of the organism—the ecologically important part of the life cycle—is haploid. The haploid cells that make up the tissues of the dominant multicellular stage are formed by mitosis. During sexual reproduction, specialized haploid cells from two individuals, designated the (+) and (−) mating types, join to form a diploid zygote. The zygote immediately undergoes meiosis to form four haploid cells called spores. Although haploid like the “parents,” these spores contain a new genetic combination from two parents. The spores can remain dormant for various time periods. Eventually, when conditions are conducive, the spores form multicellular haploid structures by many rounds of mitosis (Figure 11.9).

Art Connection
If a mutation occurs so that a fungus is no longer able to produce a minus mating type, will it still be able to reproduce?

Alternation of Generations
The third life-cycle type, employed by some algae and all plants, is a blend of the haploid-dominant and diploid-dominant extremes. Species with alternation of generations have both haploid and diploid multicellular organisms as part of their life cycle.
The haploid multicellular plants are called gametophytes, because they produce gametes from specialized cells. Meiosis is not directly involved in the production of gametes in this case, because the organism that produces the gametes is already haploid. Fertilization between the gametes forms a diploid zygote. The zygote will undergo many rounds of mitosis and give rise to a diploid multicellular plant called a sporophyte. Specialized cells of the sporophyte will undergo meiosis and produce haploid spores. The spores will subsequently develop into the gametophytes (Figure 11.10).

Although all plants utilize some version of the alternation of generations, the relative size of the sporophyte and the gametophyte and the relationship between them vary greatly. In plants such as moss, the gametophyte organism is the free-living plant, and the sporophyte is physically dependent on the gametophyte. In other plants, such as ferns, both the gametophyte and sporophyte plants are free-living; however, the sporophyte is much larger. In seed plants, such as magnolia trees and daisies, the gametophyte is composed of only a few cells and, in the case of the female gametophyte, is completely retained within the sporophyte.

Sexual reproduction takes many forms in multicellular organisms. However, at some point in each type of life cycle, meiosis produces haploid cells that will fuse with the haploid cell of another organism. The mechanisms of variation—crossover, random assortment of homologous chromosomes, and random fertilization—are present in all versions of sexual reproduction. The fact that nearly every multicellular organism on Earth employs sexual reproduction is strong evidence for the benefits of producing offspring with unique gene combinations, though there are other possible benefits as well.
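As a concrete footnote to the mechanisms of variation just listed, the short Python sketch below tallies the diversity available from independent assortment alone, using the values given earlier in this chapter (n = 23 chromosome pairs in humans, 2^n assortments per gamete). This is a minimal illustration, not part of the original text: the variable names are ours, and the count deliberately ignores crossover, which multiplies the true variation far beyond these figures.

```python
# Variation from independent assortment alone; crossover is ignored here,
# so these counts are conservative lower bounds on real genetic diversity.
n_pairs = 23  # chromosome pairs per human set, as stated in the chapter

gametes_per_parent = 2 ** n_pairs              # 2^n assortments per gamete
zygote_combinations = gametes_per_parent ** 2  # random fertilization: any egg x any sperm

print(f"Distinct assortments per gamete: {gametes_per_parent:,}")
print(f"Distinct zygotes from assortment alone: {zygote_combinations:,}")
# Distinct assortments per gamete: 8,388,608
# Distinct zygotes from assortment alone: 70,368,744,177,664
```

The first figure is the “over eight million” quoted in the chapter; squaring it reflects random fertilization, the third mechanism of variation named above.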
microbiology
Summary

13.1 Controlling Microbial Growth
Inanimate items that may harbor microbes and aid in their transmission are called fomites. The level of cleanliness required for a fomite depends both on the item’s use and the infectious agent with which the item may be contaminated. The CDC and the NIH have established four biological safety levels (BSLs) for laboratories performing research on infectious agents. Each level is designed to protect laboratory personnel and the community. These BSLs are determined by the agent’s infectivity, ease of transmission, and potential disease severity, as well as the type of work being performed with the agent. Disinfection removes potential pathogens from a fomite, whereas antisepsis uses antimicrobial chemicals safe enough for tissues; in both cases, microbial load is reduced, but microbes may remain unless the chemical used is strong enough to be a sterilant. The amount of cleanliness (sterilization versus high-level disinfection versus general cleanliness) required for items used clinically depends on whether the item will come into contact with sterile tissues (critical item), mucous membranes (semicritical item), or intact skin (noncritical item). Medical procedures with a risk for contamination should be carried out in a sterile field maintained by proper aseptic technique to prevent sepsis. Sterilization is necessary for some medical applications as well as in the food industry, where endospores of Clostridium botulinum are killed through commercial sterilization protocols. Physical or chemical methods to control microbial growth that result in death of the microbe are indicated by the suffixes -cide or -cidal (e.g., as with bactericides, viricides, and fungicides), whereas those that inhibit microbial growth are indicated by the suffixes -stat or -static (e.g., bacteriostatic, fungistatic). Microbial death curves display the logarithmic decline of living microbes exposed to a method of microbial control. The time it takes for a protocol to yield a 1-log (90%) reduction in the microbial population is the decimal reduction time, or D-value (a short worked example of this arithmetic follows these summaries). When choosing a microbial control protocol, factors to consider include the length of exposure time, the type of microbe targeted, its susceptibility to the protocol, the intensity of the treatment, the presence of organics that may interfere with the protocol, and the environmental conditions that may alter the effectiveness of the protocol.

13.2 Using Physical Methods to Control Microorganisms
Heat is a widely used and highly effective method for controlling microbial growth. Dry-heat sterilization protocols are used commonly in aseptic techniques in the laboratory. However, moist-heat sterilization is typically the more effective protocol because it penetrates cells better than dry heat does. Pasteurization is used to kill pathogens and reduce the number of microbes that cause food spoilage. High-temperature, short-time pasteurization is commonly used to pasteurize milk that will be refrigerated; ultra-high temperature pasteurization can be used to pasteurize milk for long-term storage without refrigeration. Refrigeration slows microbial growth; freezing stops growth, killing some organisms. Laboratory and medical specimens may be frozen on dry ice or at ultra-low temperatures for storage and transport. High-pressure processing can be used to kill microbes in food. Hyperbaric oxygen therapy to increase oxygen saturation has also been used to treat certain infections.
Desiccation has long been used to preserve foods and is accelerated through the addition of salt or sugar, which decrease water activity in foods. Lyophilization combines cold exposure and desiccation for the long-term storage of foods and laboratory materials, but microbes remain and can be rehydrated. Ionizing radiation, including gamma irradiation, is an effective way to sterilize heat-sensitive and packaged materials. Nonionizing radiation, like ultraviolet light, is unable to penetrate surfaces but is useful for surface sterilization. HEPA filtration is commonly used in hospital ventilation systems and biological safety cabinets in laboratories to prevent transmission of airborne microbes. Membrane filtration is commonly used to remove bacteria from heat-sensitive solutions.

13.3 Using Chemicals to Control Microorganisms
Heavy metals, including mercury, silver, copper, and zinc, have long been used for disinfection and preservation, although some have toxicity and environmental risks associated with them. Halogens, including chlorine, fluorine, and iodine, are also commonly used for disinfection. Chlorine compounds, including sodium hypochlorite, chloramines, and chlorine dioxide, are commonly used for water disinfection. Iodine, in both tincture and iodophor forms, is an effective antiseptic. Alcohols, including ethyl alcohol and isopropyl alcohol, are commonly used antiseptics that act by denaturing proteins and disrupting membranes. Phenolics are stable, long-acting disinfectants that denature proteins and disrupt membranes. They are commonly found in household cleaners, mouthwashes, and hospital disinfectants, and are also used to preserve harvested crops. The phenolic compound triclosan, found in antibacterial soaps, plastics, and textiles, is technically an antibiotic because of its specific mode of action of inhibiting bacterial fatty-acid synthesis. Surfactants, including soaps and detergents, lower the surface tension of water to create emulsions that mechanically carry away microbes. Soaps are long-chain fatty acids, whereas detergents are synthetic surfactants. Quaternary ammonium compounds (quats) are cationic detergents that disrupt membranes. They are used in household cleaners, skin disinfectants, oral rinses, and mouthwashes. Bisbiguanides disrupt cell membranes, causing cell contents to gel. Chlorhexidine and alexidine are commonly used for surgical scrubs, for handwashing in clinical settings, and in prescription oral rinses. Alkylating agents effectively sterilize materials at low temperatures but are carcinogenic and may also irritate tissue. Glutaraldehyde and o-phthalaldehyde are used as hospital disinfectants but not as antiseptics. Formaldehyde is used for the storage of tissue specimens, as an embalming fluid, and in vaccine preparation to inactivate infectious agents. Ethylene oxide is a gas sterilant that can permeate heat-sensitive packaged materials, but it is also explosive and carcinogenic. Peroxygens, including hydrogen peroxide, peracetic acid, benzoyl peroxide, and ozone gas, are strong oxidizing agents that produce free radicals in cells, damaging their macromolecules. They are environmentally safe and are highly effective disinfectants and antiseptics. Pressurized carbon dioxide in the form of a supercritical fluid easily permeates packaged materials and cells, forming carbonic acid and lowering intracellular pH.
Supercritical carbon dioxide is nonreactive, nontoxic, nonflammable, and effective at low temperatures for sterilization of medical devices, implants, and transplanted tissues. Chemical preservatives are added to a variety of foods. Sorbic acid, benzoic acid, propionic acid, and their more soluble salts inhibit enzymes or reduce intracellular pH. Sulfites are used in winemaking and food processing to prevent browning of foods. Nitrites are used to preserve meats and maintain color, but cooking nitrite-preserved meats may produce carcinogenic nitrosamines. Nisin and natamycin are naturally produced preservatives used in cheeses and meats. Nisin is effective against gram-positive bacteria and natamycin against fungi.

13.4 Testing the Effectiveness of Antiseptics and Disinfectants
Chemical disinfectants are grouped by the types of microbes and infectious agents they are effective against. High-level germicides kill vegetative cells, fungi, viruses, and endospores, and can ultimately lead to sterilization. Intermediate-level germicides cannot kill all viruses and are less effective against endospores. Low-level germicides kill vegetative cells and some enveloped viruses, but are ineffective against endospores. The effectiveness of a disinfectant is influenced by several factors, including length of exposure, concentration of disinfectant, temperature, and pH. Historically, the effectiveness of a chemical disinfectant was compared with that of phenol at killing Staphylococcus aureus and Salmonella enterica serovar Typhi, and a phenol coefficient was calculated. The disk-diffusion method is used to test the effectiveness of a chemical disinfectant against a particular microbe. The use-dilution test determines the effectiveness of a disinfectant on a surface. In-use tests can determine whether disinfectant solutions are being used correctly in clinical settings.
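To make the death-curve arithmetic from the Section 13.1 summary concrete, here is a minimal Python sketch of the standard D-value relationship, N(t) = N0 × 10^(−t/D), in which each D-value of exposure removes 90% of the surviving population. The starting population of one million cells and the 2-minute D-value are illustrative assumptions, not values taken from this chapter.

```python
# Logarithmic microbial death curve: N(t) = N0 * 10 ** (-t / D),
# where D is the decimal reduction time (time for a 1-log, 90% drop).
N0 = 1_000_000  # assumed starting population (cells)
D = 2.0         # assumed D-value (minutes)

for t in (0, 2, 4, 6, 12):
    survivors = N0 * 10 ** (-t / D)
    print(f"t = {t:>2} min ({t / D:.0f} D-values): {survivors:,.0f} survivors")
```

Plotted on a semilog axis, these points fall on the straight line characteristic of the microbial death curves described above; six D-values of exposure reduce a million cells to a single expected survivor.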
Chapter Outline
13.1 Controlling Microbial Growth
13.2 Using Physical Methods to Control Microorganisms
13.3 Using Chemicals to Control Microorganisms
13.4 Testing the Effectiveness of Antiseptics and Disinfectants

Introduction
How clean is clean? People wash their cars and vacuum the carpets, but most would not want to eat from these surfaces. Similarly, we might eat with silverware cleaned in a dishwasher, but we could not use the same dishwasher to clean surgical instruments. As these examples illustrate, “clean” is a relative term. Car washing, vacuuming, and dishwashing all reduce the microbial load on the items treated, thus making them “cleaner.” But whether they are “clean enough” depends on their intended use. Because people do not normally eat from cars or carpets, these items do not require the same level of cleanliness that silverware does. Likewise, because silverware is not used for invasive surgery, these utensils do not require the same level of cleanliness as surgical equipment, which requires sterilization to prevent infection. Why not play it safe and sterilize everything? Sterilizing everything we come in contact with is impractical, as well as potentially dangerous. As this chapter will demonstrate, sterilization protocols often require time- and labor-intensive treatments that may degrade the quality of the item being treated or have toxic effects on users. Therefore, the user must consider the item’s intended application when choosing a cleaning method to ensure that it is “clean enough.”
[ { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "The type of protocol required to achieve the desired level of cleanliness depends on the particular item to be cleaned . For example , those used clinically are categorized as critical , semicritical , and noncritical . <hl> Critical items must be sterile because they will be used inside the body , often penetrating sterile tissues or the bloodstream ; examples of critical item s include surgical instruments , catheters , and intravenous fluids . <hl> Gastrointestinal endoscope s and various types of equipment for respiratory therapies are examples of semicritical item s ; they may contact mucous membranes or nonintact skin but do not penetrate tissues . Semicritical items do not typically need to be sterilized but do require a high level of disinfection . Items that may contact but not penetrate intact skin are noncritical item s ; examples are bed linens , furniture , crutches , stethoscopes , and blood pressure cuffs . These articles need to be clean but not highly disinfected .", "hl_sentences": "Critical items must be sterile because they will be used inside the body , often penetrating sterile tissues or the bloodstream ; examples of critical item s include surgical instruments , catheters , and intravenous fluids .", "question": { "cloze_format": "The type of medical items that requires sterilization is ___.", "normal_format": "Which of the following types of medical items requires sterilization?", "question_choices": [ "needles", "bed linens", "respiratory masks", "blood pressure cuffs" ], "question_id": "fs-id1167582522439", "question_text": "Which of the following types of medical items requires sterilization?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> In addition to physical methods of microbial control , chemicals are also used to control microbial growth . <hl> <hl> A wide variety of chemicals can be used as disinfectants or antiseptics . <hl> When choosing which to use , it is important to consider the type of microbe targeted ; how clean the item needs to be ; the disinfectant ’ s effect on the item ’ s integrity ; its safety to animals , humans , and the environment ; its expense ; and its ease of use . This section describes the variety of chemicals used as disinfectants and antiseptics , including their mechanisms of action and common uses .", "hl_sentences": "In addition to physical methods of microbial control , chemicals are also used to control microbial growth . A wide variety of chemicals can be used as disinfectants or antiseptics .", "question": { "cloze_format": "___ is suitable for use on tissues for microbial control to prevent infection.", "normal_format": "Which of the following is suitable for use on tissues for microbial control to prevent infection?", "question_choices": [ "disinfectant", "antiseptic", "sterilant", "water" ], "question_id": "fs-id1167585883023", "question_text": "Which of the following is suitable for use on tissues for microbial control to prevent infection?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> Agents classified as BSL - 2 include those that pose moderate risk to laboratory workers and the community , and are typically “ indigenous , ” meaning that they are commonly found in that geographical area . <hl> These include bacteria such as Staphylococcus aureus and Salmonella spp . 
, and viruses like hepatitis , mumps , and measles viruses . BSL - 2 laboratories require additional precautions beyond those of BSL - 1 , including restricted access ; required PPE , including a face shield in some circumstances ; and the use of biological safety cabinets for procedures that may disperse agents through the air ( called “ aerosolization ” ) . BSL - 2 laboratories are equipped with self-closing doors , an eyewash station , and an autoclave , which is a specialized device for sterilizing materials with pressurized steam before use or disposal . BSL - 1 laboratories may also have an autoclave .", "hl_sentences": "Agents classified as BSL - 2 include those that pose moderate risk to laboratory workers and the community , and are typically “ indigenous , ” meaning that they are commonly found in that geographical area .", "question": { "cloze_format": "The biosafety level that is appropriate for research with microbes or infectious agents that pose moderate risk to laboratory workers and the community, and are typically indigenous is ___.", "normal_format": "Which biosafety level is appropriate for research with microbes or infectious agents that pose moderate risk to laboratory workers and the community, and are typically indigenous?", "question_choices": [ "BSL-1", "BSL-2", "BSL-3", "BSL-4" ], "question_id": "fs-id1167585782208", "question_text": "Which biosafety level is appropriate for research with microbes or infectious agents that pose moderate risk to laboratory workers and the community, and are typically indigenous?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "Physical and chemical methods of microbial control that kill the targeted microorganism are identified by the suffix - cide ( or - cidal ) . The prefix indicates the type of microbe or infectious agent killed by the treatment method : bactericide s kill bacteria , viricide s kill or inactivate viruses , and fungicide s kill fungi . Other methods do not kill organisms but , instead , stop their growth , making their population static ; such methods are identified by the suffix - stat ( or - static ) . <hl> For example , bacteriostatic treatments inhibit the growth of bacteria , whereas fungistatic treatments inhibit the growth of fungi . <hl> Factors that determine whether a particular treatment is - cidal or - static include the types of microorganisms targeted , the concentration of the chemical used , and the nature of the treatment applied .", "hl_sentences": "For example , bacteriostatic treatments inhibit the growth of bacteria , whereas fungistatic treatments inhibit the growth of fungi .", "question": { "cloze_format": "___ best describes a microbial control protocol that inhibits the growth of molds and yeast.", "normal_format": "Which of the following best describes a microbial control protocol that inhibits the growth of molds and yeast?", "question_choices": [ "bacteriostatic", "fungicidal", "bactericidal", "fungistatic" ], "question_id": "fs-id1167585795324", "question_text": "Which of the following best describes a microbial control protocol that inhibits the growth of molds and yeast?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "The degree of microbial control can be evaluated using a microbial death curve to describe the progress and effectiveness of a particular protocol . 
When exposed to a particular microbial control protocol , a fixed percentage of the microbes within the population will die . Because the rate of killing remains constant even when the population size varies , the percentage killed is more useful information than the absolute number of microbes killed . Death curves are often plotted as semilog plots just like microbial growth curves because the reduction in microorganisms is typically logarithmic ( Figure 13.5 ) . <hl> The amount of time it takes for a specific protocol to produce a one order-of-magnitude decrease in the number of organisms , or the death of 90 % of the population , is called the decimal reduction time ( DRT ) or D-value . <hl>", "hl_sentences": "The amount of time it takes for a specific protocol to produce a one order-of-magnitude decrease in the number of organisms , or the death of 90 % of the population , is called the decimal reduction time ( DRT ) or D-value .", "question": { "cloze_format": "The decimal reduction time refers to the amount of time it takes to ___.", "normal_format": "The decimal reduction time refers to the amount of time it takes to which of the following?", "question_choices": [ "reduce a microbial population by 10%", "reduce a microbial population by 0.1%", "reduce a microbial population by 90%", "completely eliminate a microbial population" ], "question_id": "fs-id1167585773874", "question_text": "The decimal reduction time refers to the amount of time it takes to which of the following?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> The use of high-frequency ultrasound waves to disrupt cell structures is called sonication . <hl> <hl> Application of ultrasound waves causes rapid changes in pressure within the intracellular liquid ; this leads to cavitation , the formation of bubbles inside the cell , which can disrupt cell structures and eventually cause the cell to lyse or collapse . <hl> Sonication is useful in the laboratory for efficiently lysing cells to release their contents for further research ; outside the laboratory , sonication is used for cleaning surgical instruments , lenses , and a variety of other objects such as coins , tools , and musical instruments .", "hl_sentences": "The use of high-frequency ultrasound waves to disrupt cell structures is called sonication . Application of ultrasound waves causes rapid changes in pressure within the intracellular liquid ; this leads to cavitation , the formation of bubbles inside the cell , which can disrupt cell structures and eventually cause the cell to lyse or collapse .", "question": { "cloze_format": "The ___ method brings about cell lysis due to cavitation induced by rapid localized pressure changes.", "normal_format": "Which of the following methods brings about cell lysis due to cavitation induced by rapid localized pressure changes?", "question_choices": [ "microwaving", "gamma irradiation", "ultraviolet radiation", "sonication" ], "question_id": "fs-id1167584856417", "question_text": "Which of the following methods brings about cell lysis due to cavitation induced by rapid localized pressure changes?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Heating is one of the most common — and oldest — forms of microbial control . It is used in simple techniques like cooking and canning . Heat can kill microbes by altering their membranes and denaturing proteins . 
The thermal death point ( TDP ) of a microorganism is the lowest temperature at which all microbes are killed in a 10 - minute exposure . Different microorganisms will respond differently to high temperatures , with some ( e . g . , endospore-formers such as C . botulinum ) being more heat tolerant . <hl> A similar parameter , the thermal death time ( TDT ) , is the length of time needed to kill all microorganisms in a sample at a given temperature . <hl> These parameters are often used to describe sterilization procedures that use high heat , such as autoclaving . Boiling is one of the oldest methods of moist-heat control of microbes , and it is typically quite effective at killing vegetative cells and some viruses . However , boiling is less effective at killing endospores ; some endospores are able to survive up to 20 hours of boiling . Additionally , boiling may be less effective at higher altitudes , where the boiling point of water is lower and the boiling time needed to kill microbes is therefore longer . For these reasons , boiling is not considered a useful sterilization technique in the laboratory or clinical setting .", "hl_sentences": "A similar parameter , the thermal death time ( TDT ) , is the length of time needed to kill all microorganisms in a sample at a given temperature .", "question": { "cloze_format": "___ is used to describe the time required to kill all of the microbes within a sample at a given temperature.", "normal_format": "Which of the following terms is used to describe the time required to kill all of the microbes within a sample at a given temperature?", "question_choices": [ "D-value", "thermal death point", "thermal death time", "decimal reduction time" ], "question_id": "fs-id1167584730248", "question_text": "Which of the following terms is used to describe the time required to kill all of the microbes within a sample at a given temperature?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> Filtration is a method of physically separating microbes from samples . <hl> Air is commonly filtered through high-efficiency particulate air ( HEPA ) filter s ( Figure 13.15 ) . HEPA filters have effective pore sizes of 0.3 µm , small enough to capture bacterial cells , endospores , and many viruses , as air passes through these filters , nearly sterilizing the air on the other side of the filter . HEPA filters have a variety of applications and are used widely in clinical settings , in cars and airplanes , and even in the home . For example , they may be found in vacuum cleaners , heating and air-conditioning systems , and air purifiers . <hl> Radiation in various forms , from high-energy radiation to sunlight , can be used to kill microbes or inhibit their growth . <hl> <hl> Ionizing radiation includes X-rays , gamma rays , and high-energy electron beams . <hl> Ionizing radiation is strong enough to pass into the cell , where it alters molecular structures and damages cell components . For example , ionizing radiation introduces double-strand breaks in DNA molecules . This may directly cause DNA mutations to occur , or mutations may be introduced when the cell attempts to repair the DNA damage . As these mutations accumulate , they eventually lead to cell death . <hl> In some cases , foods are dried in the sun , relying on evaporation to achieve desiccation . 
<hl> <hl> Freeze-drying , or lyophilization , is another method of dessication in which an item is rapidly frozen ( “ snap-frozen ” ) and placed under vacuum so that water is lost by sublimation . <hl> <hl> Lyophilization combines both exposure to cold temperatures and desiccation , making it quite effective for controlling microbial growth . <hl> In addition , lyophilization causes less damage to an item than conventional desiccation and better preserves the item ’ s original qualities . Lyophilized items may be stored at room temperature if packaged appropriately to prevent moisture acquisition . Lyophilization is used for preservation in the food industry and is also used in the laboratory for the long-term storage and transportation of microbial cultures .", "hl_sentences": "Filtration is a method of physically separating microbes from samples . Radiation in various forms , from high-energy radiation to sunlight , can be used to kill microbes or inhibit their growth . Ionizing radiation includes X-rays , gamma rays , and high-energy electron beams . In some cases , foods are dried in the sun , relying on evaporation to achieve desiccation . Freeze-drying , or lyophilization , is another method of dessication in which an item is rapidly frozen ( “ snap-frozen ” ) and placed under vacuum so that water is lost by sublimation . Lyophilization combines both exposure to cold temperatures and desiccation , making it quite effective for controlling microbial growth .", "question": { "cloze_format": "The microbial control method ___ does not actually kill microbes or inhibit their growth but instead removes them physically from samples.", "normal_format": "Which of the following microbial control methods does not actually kill microbes or inhibit their growth but instead removes them physically from samples?", "question_choices": [ "filtration", "desiccation", "lyophilization", "nonionizing radiation" ], "question_id": "fs-id1167581340939", "question_text": "Which of the following microbial control methods does not actually kill microbes or inhibit their growth but instead removes them physically from samples?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> Last , alcohols are used to make tincture s with other antiseptics , such as the iodine tinctures discussed previously in this chapter . <hl> <hl> All in all , alcohols are inexpensive and quite effective for the disinfection of a broad range of vegetative microbes . <hl> However , one disadvantage of alcohols is their high volatility , limiting their effectiveness to immediately after application .", "hl_sentences": "Last , alcohols are used to make tincture s with other antiseptics , such as the iodine tinctures discussed previously in this chapter . All in all , alcohols are inexpensive and quite effective for the disinfection of a broad range of vegetative microbes .", "question": { "cloze_format": "___ refers to a disinfecting chemical dissolved in alcohol.", "normal_format": "Which of the following refers to a disinfecting chemical dissolved in alcohol?", "question_choices": [ "iodophor", "tincture", "phenolic", "peroxygen" ], "question_id": "fs-id1167585091410", "question_text": "Which of the following refers to a disinfecting chemical dissolved in alcohol?" 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> Peroxygens are strong oxidizing agents that can be used as disinfectants or antiseptics . <hl> <hl> The most widely used peroxygen is hydrogen peroxide ( H 2 O 2 ) , which is often used in solution to disinfect surfaces and may also be used as a gaseous agent . <hl> <hl> Hydrogen peroxide solutions are inexpensive skin antiseptics that break down into water and oxygen gas , both of which are environmentally safe . <hl> This decomposition is accelerated in the presence of light , so hydrogen peroxide solutions typically are sold in brown or opaque bottles . One disadvantage of using hydrogen peroxide as an antiseptic is that it also causes damage to skin that may delay healing or lead to scarring . <hl> Contact lens cleaners often include hydrogen peroxide as a disinfectant . <hl>", "hl_sentences": "Peroxygens are strong oxidizing agents that can be used as disinfectants or antiseptics . The most widely used peroxygen is hydrogen peroxide ( H 2 O 2 ) , which is often used in solution to disinfect surfaces and may also be used as a gaseous agent . Hydrogen peroxide solutions are inexpensive skin antiseptics that break down into water and oxygen gas , both of which are environmentally safe . Contact lens cleaners often include hydrogen peroxide as a disinfectant .", "question": { "cloze_format": "The peroxygen that is widely used as a household disinfectant, is inexpensive, and breaks down into water and oxygen gas is the ___.", "normal_format": "Which of the following peroxygens is widely used as a household disinfectant, is inexpensive, and breaks down into water and oxygen gas?", "question_choices": [ "hydrogen peroxide", "peracetic acid", "benzoyl peroxide", "ozone" ], "question_id": "fs-id1167584688712", "question_text": "Which of the following peroxygens is widely used as a household disinfectant, is inexpensive, and breaks down into water and oxygen gas?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "Other commonly used chemical preservatives include sulfur dioxide and nitrites . <hl> Sulfur dioxide prevents browning of foods and is used for the preservation of dried fruits ; it has been used in winemaking since ancient times . <hl> Sulfur dioxide gas dissolves in water readily , forming sulfites . <hl> Although sulfites can be metabolized by the body , some people have sulfite allergies , including asthmatic reactions . <hl> Additionally , sulfites degrade thiamine , an important nutrient in some foods . The mode of action of sulfites is not entirely clear , but they may interfere with the disulfide bond ( see Figure 7.21 ) formation in proteins , inhibiting enzymatic activity . Alternatively , they may reduce the intracellular pH of the cell , interfering with proton motive force-driven mechanisms .", "hl_sentences": "Sulfur dioxide prevents browning of foods and is used for the preservation of dried fruits ; it has been used in winemaking since ancient times . 
Although sulfites can be metabolized by the body , some people have sulfite allergies , including asthmatic reactions .", "question": { "cloze_format": "___ is/are a chemical food preservative that is used in the wine industry but may cause asthmatic reactions in some individuals.", "normal_format": "Which of the following chemical food preservatives is used in the wine industry but may cause asthmatic reactions in some individuals?", "question_choices": [ "nitrites", "sulfites", "propionic acid", "benzoic acid" ], "question_id": "fs-id1167584886855", "question_text": "Which of the following chemical food preservatives is used in the wine industry but may cause asthmatic reactions in some individuals?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> Other chemicals commonly used for disinfection are the halogens iodine , chlorine , and fluorine . <hl> Iodine works by oxidizing cellular components , including sulfur-containing amino acids , nucleotides , and fatty acids , and destabilizing the macromolecules that contain these molecules . It is often used as a topical tincture , but it may cause staining or skin irritation . An iodophor is a compound of iodine complexed with an organic molecule , thereby increasing iodine ’ s stability and , in turn , its efficacy . One common iodophor is povidone-iodine , which includes a wetting agent that releases iodine relatively slowly . Betadine is a brand of povidone-iodine commonly used as a hand scrub by medical personnel before surgery and for topical antisepsis of a patient ’ s skin before incision ( Figure 13.22 ) . The process of disinfection inactivates most microbes on the surface of a fomite by using antimicrobial chemicals or heat . Because some microbes remain , the disinfected item is not considered sterile . Ideally , disinfectant s should be fast acting , stable , easy to prepare , inexpensive , and easy to use . An example of a natural disinfectant is vinegar ; its acidity kills most microbes . <hl> Chemical disinfectants , such as chlorine bleach or products containing chlorine , are used to clean nonliving surfaces such as laboratory benches , clinical surfaces , and bathroom sinks . <hl> Typical disinfection does not lead to sterilization because endospores tend to survive even when all vegetative cells have been killed .", "hl_sentences": "Other chemicals commonly used for disinfection are the halogens iodine , chlorine , and fluorine . Chemical disinfectants , such as chlorine bleach or products containing chlorine , are used to clean nonliving surfaces such as laboratory benches , clinical surfaces , and bathroom sinks .", "question": { "cloze_format": "___ are a group of chemicals used for disinfection of which bleach is an example.", "normal_format": "Bleach is an example of which group of chemicals used for disinfection?", "question_choices": [ "heavy metals", "halogens", "quats", "bisbiguanides" ], "question_id": "fs-id1167584667818", "question_text": "Bleach is an example of which group of chemicals used for disinfection?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> The alkylating agent s are a group of strong disinfecting chemicals that act by replacing a hydrogen atom within a molecule with an alkyl group ( C n H 2n + 1 ) , thereby inactivating enzymes and nucleic acids ( Figure 13.29 ) . 
<hl> <hl> The alkylating agent formaldehyde ( CH 2 OH ) is commonly used in solution at a concentration of 37 % ( known as formalin ) or as a gaseous disinfectant and biocide . <hl> It is a strong , broad-spectrum disinfectant and biocide that has the ability to kill bacteria , viruses , fungi , and endospores , leading to sterilization at low temperatures , which is sometimes a convenient alternative to the more labor-intensive heat sterilization methods . It also cross-links proteins and has been widely used as a chemical fixative . Because of this , it is used for the storage of tissue specimens and as an embalming fluid . It also has been used to inactivate infectious agents in vaccine preparation . <hl> Formaldehyde is very irritating to living tissues and is also carcinogenic ; therefore , it is not used as an antiseptic . <hl>", "hl_sentences": "The alkylating agent s are a group of strong disinfecting chemicals that act by replacing a hydrogen atom within a molecule with an alkyl group ( C n H 2n + 1 ) , thereby inactivating enzymes and nucleic acids ( Figure 13.29 ) . The alkylating agent formaldehyde ( CH 2 OH ) is commonly used in solution at a concentration of 37 % ( known as formalin ) or as a gaseous disinfectant and biocide . Formaldehyde is very irritating to living tissues and is also carcinogenic ; therefore , it is not used as an antiseptic .", "question": { "cloze_format": "___ is a chemical disinfectant that works by methylating enzymes and nucleic acids and is known for being toxic and carcinogenic.", "normal_format": "Which chemical disinfectant works by methylating enzymes and nucleic acids and is known for being toxic and carcinogenic?", "question_choices": [ "sorbic acid", "triclosan", "formaldehyde", "hexaclorophene" ], "question_id": "fs-id1167583731360", "question_text": "Which chemical disinfectant works by methylating enzymes and nucleic acids and is known for being toxic and carcinogenic?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> An in-use test can determine whether an actively used solution of disinfectant in a clinical setting is microbially contaminated ( Figure 13.32 ) . <hl> A 1 - mL sample of the used disinfectant is diluted into 9 mL of sterile broth medium that also contains a compound to inactivate the disinfectant . Ten drops , totaling approximately 0.2 mL of this mixture , are then inoculated onto each of two agar plates . One plate is incubated at 37 ° C for 3 days and the other is incubated at room temperature for 7 days . The plates are monitored for growth of microbial colonies . Growth of five or more colonies on either plate suggests that viable microbial cells existed in the disinfectant solution and that it is contaminated . 
Such in-use tests monitor the effectiveness of disinfectants in the clinical setting .", "hl_sentences": "An in-use test can determine whether an actively used solution of disinfectant in a clinical setting is microbially contaminated ( Figure 13.32 ) .", "question": { "cloze_format": "The type of test that is used to determine whether disinfectant solutions actively used in a clinical setting are being used correctly is the ___.", "normal_format": "Which type of test is used to determine whether disinfectant solutions actively used in a clinical setting are being used correctly?", "question_choices": [ "disk-diffusion assay", "phenol coefficient test", "in-use test", "use-dilution test" ], "question_id": "fs-id1167584702765", "question_text": "Which type of test is used to determine whether disinfectant solutions actively used in a clinical setting are being used correctly?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> The effectiveness of a disinfectant or antiseptic can be determined in a number of ways . <hl> <hl> Historically , a chemical agent ’ s effectiveness was often compared with that of phenol , the first chemical agent used by Joseph Lister . <hl> In 1903 , British chemists Samuel Rideal ( 1863 – 1929 ) and J . T . Ainslie Walker ( 1868 – 1930 ) established a protocol to compare the effectiveness of a variety of chemicals with that of phenol , using as their test organisms Staphylococcus aureus ( a gram-positive bacterium ) and Salmonella enterica serovar Typhi ( a gram-negative bacterium ) . They exposed the test bacteria to the antimicrobial chemical solutions diluted in water for 7.5 minutes . They then calculated a phenol coefficient for each chemical for each of the two bacteria tested . A phenol coefficient of 1.0 means that the chemical agent has about the same level of effectiveness as phenol . A chemical agent with a phenol coefficient of less than 1.0 is less effective than phenol . An example is formalin , with phenol coefficients of 0.3 ( S . aureus ) and 0.7 ( S . enterica serovar Typhi ) . A chemical agent with a phenol coefficient greater than 1.0 is more effective than phenol , such as chloramine , with phenol coefficients of 133 and 100 , respectively . Although the phenol coefficient was once a useful measure of effectiveness , it is no longer commonly used because the conditions and organisms used were arbitrarily chosen .", "hl_sentences": "The effectiveness of a disinfectant or antiseptic can be determined in a number of ways . Historically , a chemical agent ’ s effectiveness was often compared with that of phenol , the first chemical agent used by Joseph Lister .", "question": { "cloze_format": "The effectiveness of chemical disinfectants has historically been compared to that of ___.", "normal_format": "The effectiveness of chemical disinfectants has historically been compared to that of which of the following?", "question_choices": [ "phenol", "ethyl alcohol", "bleach", "formaldehyde" ], "question_id": "fs-id1167583599636", "question_text": "The effectiveness of chemical disinfectants has historically been compared to that of which of the following?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "The effectiveness of various chemical disinfectants is reflected in the terms used to describe them . 
Chemical disinfectants are grouped by the power of their activity , with each category reflecting the types of microbes and viruses its component disinfectants are effective against . High-level germicide s have the ability to kill vegetative cells , fungi , viruses , and endospores , leading to sterilization , with extended use . <hl> Intermediate-level germicides , as their name suggests , are less effective against endospores and certain viruses , and low-level germicides kill only vegetative cells and certain enveloped viruses , and are ineffective against endospores . <hl>", "hl_sentences": "Intermediate-level germicides , as their name suggests , are less effective against endospores and certain viruses , and low-level germicides kill only vegetative cells and certain enveloped viruses , and are ineffective against endospores .", "question": { "cloze_format": "A/an ___ refers to a germicide that can kill vegetative cells and certain enveloped viruses but not endospores.", "normal_format": "Which of the following refers to a germicide that can kill vegetative cells and certain enveloped viruses but not endospores?", "question_choices": [ "high-level germicide", "intermediate-level germicide", "low-level germicide", "sterilant" ], "question_id": "fs-id1167583718663", "question_text": "Which of the following refers to a germicide that can kill vegetative cells and certain enveloped viruses but not endospores?" }, "references_are_paraphrase": null } ]
13.1 Controlling Microbial Growth

Learning Objectives
Compare disinfectants, antiseptics, and sterilants
Describe the principles of controlling the presence of microorganisms through sterilization and disinfection
Differentiate between microorganisms of various biological safety levels and explain methods used for handling microbes at each level

Clinical Focus: Part 1
Roberta is a 46-year-old real estate agent who recently underwent a cholecystectomy (surgery to remove painful gallstones). The surgery was performed laparoscopically with the aid of a duodenoscope, a specialized endoscope that allows surgeons to see inside the body with the aid of a tiny camera. On returning home from the hospital, Roberta developed abdominal pain and a high fever. She also experienced a burning sensation during urination and noticed blood in her urine. She notified her surgeon of these symptoms, per her postoperative instructions.
What are some possible causes of Roberta's symptoms?

To prevent the spread of human disease, it is necessary to control the growth and abundance of microbes in or on various items frequently used by humans. Inanimate items, such as doorknobs, toys, or towels, which may harbor microbes and aid in disease transmission, are called fomites. Two factors heavily influence the level of cleanliness required for a particular fomite and, hence, the protocol chosen to achieve this level. The first factor is the application for which the item will be used. For example, invasive applications that require insertion into the human body require a much higher level of cleanliness than applications that do not. The second factor is the level of resistance to antimicrobial treatment by potential pathogens. For example, foods preserved by canning often become contaminated with the bacterium Clostridium botulinum, which produces the neurotoxin that causes botulism. Because C. botulinum can produce endospores that can survive harsh conditions, extreme temperatures and pressures must be used to eliminate the endospores. Other organisms may not require such extreme measures and can be controlled by a procedure such as washing clothes in a laundry machine.

Laboratory Biological Safety Levels
For researchers or laboratory personnel working with pathogens, the risks associated with specific pathogens determine the levels of cleanliness and control required. The Centers for Disease Control and Prevention (CDC) and the National Institutes of Health (NIH) have established four classification levels, called "biological safety levels" (BSLs). Various organizations around the world, including the World Health Organization (WHO) and the European Union (EU), use a similar classification scheme. According to the CDC, the BSL is determined by the agent's infectivity, ease of transmission, and potential disease severity, as well as the type of work being done with the agent. [2]

[2] US Centers for Disease Control and Prevention. "Recognizing the Biosafety Levels." http://www.cdc.gov/training/quicklearns/biosafety/. Accessed June 7, 2016.

Each BSL requires a different level of biocontainment to prevent contamination and spread of infectious agents to laboratory personnel and, ultimately, the community. For example, the lowest BSL, BSL-1, requires the fewest precautions because it applies to situations with the lowest risk for microbial infection. BSL-1 agents are those that generally do not cause infection in healthy human adults.
These include noninfectious bacteria, such as nonpathogenic strains of Escherichia coli and Bacillus subtilis, and viruses known to infect animals other than humans, such as baculoviruses (insect viruses). Because working with BSL-1 agents poses very little risk, few precautions are necessary. Laboratory workers use standard aseptic technique and may work with these agents at an open laboratory bench or table, wearing personal protective equipment (PPE) such as a laboratory coat, goggles, and gloves, as needed. Other than a sink for handwashing and doors to separate the laboratory from the rest of the building, no additional modifications are needed.

Agents classified as BSL-2 include those that pose moderate risk to laboratory workers and the community, and are typically "indigenous," meaning that they are commonly found in that geographical area. These include bacteria such as Staphylococcus aureus and Salmonella spp., and viruses like hepatitis, mumps, and measles viruses. BSL-2 laboratories require additional precautions beyond those of BSL-1, including restricted access; required PPE, including a face shield in some circumstances; and the use of biological safety cabinets for procedures that may disperse agents through the air (called "aerosolization"). BSL-2 laboratories are equipped with self-closing doors, an eyewash station, and an autoclave, which is a specialized device for sterilizing materials with pressurized steam before use or disposal. BSL-1 laboratories may also have an autoclave.

BSL-3 agents have the potential to cause lethal infections by inhalation. These may be either indigenous or "exotic," meaning that they are derived from a foreign location, and include pathogens such as Mycobacterium tuberculosis, Bacillus anthracis, West Nile virus, and human immunodeficiency virus (HIV). Because of the serious nature of the infections caused by BSL-3 agents, laboratories working with them require restricted access. Laboratory workers are under medical surveillance, possibly receiving vaccinations for the microbes with which they work. In addition to the standard PPE already mentioned, laboratory personnel in BSL-3 laboratories must also wear a respirator and work with microbes and infectious agents in a biological safety cabinet at all times. BSL-3 laboratories require a hands-free sink, an eyewash station near the exit, and two sets of self-closing and locking doors at the entrance. These laboratories are equipped with directional airflow, meaning that clean air is pulled through the laboratory from clean areas to potentially contaminated areas. This air cannot be recirculated, so a constant supply of clean air is required.

BSL-4 agents are the most dangerous and often fatal. These microbes are typically exotic, are easily transmitted by inhalation, and cause infections for which there are no treatments or vaccinations. Examples include Ebola virus and Marburg virus, both of which cause hemorrhagic fevers, and smallpox virus. There are only a small number of laboratories in the United States and around the world appropriately equipped to work with these agents. In addition to BSL-3 precautions, laboratory workers in BSL-4 facilities must also change their clothing on entering the laboratory, shower on exiting, and decontaminate all material on exiting.
While working in the laboratory, they must either wear a full-body protective suit with a designated air supply or conduct all work within a biological safety cabinet with a high-efficiency particulate air (HEPA)-filtered air supply and a doubly HEPA-filtered exhaust. If wearing a suit, the air pressure within the suit must be higher than that outside the suit, so that if a leak in the suit occurs, laboratory air that may be contaminated cannot be drawn into the suit (Figure 13.2). The laboratory itself must be located either in a separate building or in an isolated portion of a building and have its own air supply and exhaust system, as well as its own decontamination system. The BSLs are summarized in Figure 13.3.

Link to Learning
To learn more about the four BSLs, visit the CDC's website.

Check Your Understanding
What are some factors used to determine the BSL necessary for working with a specific pathogen?

Sterilization
The most extreme protocols for microbial control aim to achieve sterilization: the complete removal or killing of all vegetative cells, endospores, and viruses from the targeted item or environment. Sterilization protocols are generally reserved for laboratory, medical, manufacturing, and food industry settings, where it may be imperative for certain items to be completely free of potentially infectious agents. Sterilization can be accomplished through either physical means, such as exposure to high heat, pressure, or filtration through an appropriate filter, or by chemical means. Chemicals that can be used to achieve sterilization are called sterilants. Sterilants effectively kill all microbes and viruses and, with appropriate exposure time, can also kill endospores.

For many clinical purposes, aseptic technique is necessary to prevent contamination of sterile surfaces. Aseptic technique involves a combination of protocols that collectively maintain sterility, or asepsis, thus preventing contamination of the patient with microbes and infectious agents. Failure to practice aseptic technique during many types of clinical procedures may introduce microbes to the patient's body and put the patient at risk for sepsis, a systemic inflammatory response to an infection that results in high fever, increased heart and respiratory rates, shock, and, possibly, death. Medical procedures that carry risk of contamination must be performed in a sterile field, a designated area that is kept free of all vegetative microbes, endospores, and viruses. Sterile fields are created according to protocols requiring the use of sterilized materials, such as packaging and drapings, and strict procedures for washing and application of sterilants. Other protocols are followed to maintain the sterile field while the medical procedure is being performed.

One food sterilization protocol, commercial sterilization, uses heat at a temperature low enough to preserve food quality but high enough to destroy common pathogens responsible for food poisoning, such as C. botulinum. Because C. botulinum and its endospores are commonly found in soil, they may easily contaminate crops during harvesting, and these endospores can later germinate within the anaerobic environment once foods are canned. Metal cans of food contaminated with C. botulinum will bulge due to the microbe's production of gases; contaminated jars of food typically bulge at the metal lid. To eliminate the risk for C. botulinum contamination, commercial food-canning protocols are designed with a large margin of error. They assume an impossibly large population of endospores (10^12 per can) and aim to reduce this population to 1 endospore per can to ensure the safety of canned foods. For example, low- and medium-acid foods are heated to 121 °C for a minimum of 2.52 minutes, which is the time it would take to reduce a population of 10^12 endospores per can down to 1 endospore at this temperature. Even so, commercial sterilization does not eliminate the presence of all microbes; rather, it targets those pathogens that cause spoilage and foodborne diseases, while allowing many nonpathogenic organisms to survive. Therefore, "sterilization" is somewhat of a misnomer in this context, and commercial sterilization may be more accurately described as "quasi-sterilization."
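The 2.52-minute figure follows directly from this 12-order-of-magnitude safety target. If each tenfold reduction in the endospore population at 121 °C takes the same fixed time D (a quantity formalized later in this chapter as the decimal reduction time, or D-value), then a short calculation, using only the numbers stated above, recovers the treatment time:

\[
N(t) = N_0 \cdot 10^{-t/D}, \qquad
10^{0} = 10^{12} \cdot 10^{-t/D} \;\Rightarrow\; t = 12D
\]
\[
t = 2.52~\text{min at } 121~^{\circ}\text{C} \;\Rightarrow\; D = \frac{2.52~\text{min}}{12} = 0.21~\text{min}
\]

That is, each order-of-magnitude reduction at 121 °C takes about 0.21 minutes (roughly 13 seconds), and twelve of them take the stated 2.52 minutes.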
Check Your Understanding
What is the difference between sterilization and aseptic technique?

Link to Learning
The Association of Surgical Technologists publishes standards for aseptic technique, including creating and maintaining a sterile field.

Other Methods of Control
Sterilization protocols require procedures that are not practical, or necessary, in many settings. Various other methods are used in clinical and nonclinical settings to reduce the microbial load on items. Although the terms for these methods are often used interchangeably, there are important distinctions (Figure 13.4).

The process of disinfection inactivates most microbes on the surface of a fomite by using antimicrobial chemicals or heat. Because some microbes remain, the disinfected item is not considered sterile. Ideally, disinfectants should be fast acting, stable, easy to prepare, inexpensive, and easy to use. An example of a natural disinfectant is vinegar; its acidity kills most microbes. Chemical disinfectants, such as chlorine bleach or products containing chlorine, are used to clean nonliving surfaces such as laboratory benches, clinical surfaces, and bathroom sinks. Typical disinfection does not lead to sterilization because endospores tend to survive even when all vegetative cells have been killed.

Unlike disinfectants, antiseptics are antimicrobial chemicals safe for use on living skin or tissues. Examples of antiseptics include hydrogen peroxide and isopropyl alcohol. The process of applying an antiseptic is called antisepsis. In addition to the characteristics of a good disinfectant, antiseptics must also be selectively effective against microorganisms and able to penetrate tissue deeply without causing tissue damage.

The type of protocol required to achieve the desired level of cleanliness depends on the particular item to be cleaned. For example, those used clinically are categorized as critical, semicritical, and noncritical. Critical items must be sterile because they will be used inside the body, often penetrating sterile tissues or the bloodstream; examples of critical items include surgical instruments, catheters, and intravenous fluids. Gastrointestinal endoscopes and various types of equipment for respiratory therapies are examples of semicritical items; they may contact mucous membranes or nonintact skin but do not penetrate tissues. Semicritical items do not typically need to be sterilized but do require a high level of disinfection. Items that may contact but not penetrate intact skin are noncritical items; examples are bed linens, furniture, crutches, stethoscopes, and blood pressure cuffs. These articles need to be clean but not highly disinfected. A minimal decision sketch illustrating this three-way classification follows.
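The critical/semicritical/noncritical scheme is essentially a two-question decision rule: does the item penetrate tissue or the bloodstream, and if not, does it contact mucous membranes or nonintact skin? The following Python sketch encodes that rule. The category names and required treatment levels follow the text; the function names and example items are hypothetical, chosen only for illustration.

# Illustrative sketch of the clinical item classification described above.
from enum import Enum

class Category(Enum):
    CRITICAL = "critical"          # enters sterile tissue or the bloodstream
    SEMICRITICAL = "semicritical"  # contacts mucous membranes or nonintact skin
    NONCRITICAL = "noncritical"    # contacts intact skin only

# Required level of microbial control per category, per the text.
REQUIRED_TREATMENT = {
    Category.CRITICAL: "sterilization",
    Category.SEMICRITICAL: "high-level disinfection",
    Category.NONCRITICAL: "cleaning / low-level disinfection",
}

def classify(penetrates_tissue: bool, contacts_mucosa_or_broken_skin: bool) -> Category:
    """Classify an item from how it will contact the body."""
    if penetrates_tissue:
        return Category.CRITICAL
    if contacts_mucosa_or_broken_skin:
        return Category.SEMICRITICAL
    return Category.NONCRITICAL

# Example: a surgical instrument penetrates tissue -> critical -> sterilization.
cat = classify(penetrates_tissue=True, contacts_mucosa_or_broken_skin=False)
print(cat.value, "->", REQUIRED_TREATMENT[cat])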
The act of handwashing is an example of degerming, in which microbial numbers are significantly reduced by gently scrubbing living tissue, most commonly skin, with a mild chemical (e.g., soap) to avoid the transmission of pathogenic microbes. Wiping the skin with an alcohol swab at an injection site is another example of degerming. These degerming methods remove most (but not all) microbes from the skin's surface.

The term sanitization refers to the cleansing of fomites to remove enough microbes to achieve levels deemed safe for public health. For example, commercial dishwashers used in the food service industry typically use very hot water and air for washing and drying; the high temperatures kill most microbes, sanitizing the dishes. Surfaces in hospital rooms are commonly sanitized using a chemical disinfectant to prevent disease transmission between patients. Figure 13.4 summarizes common protocols, definitions, applications, and agents used to control microbial growth.

Check Your Understanding
What is the difference between a disinfectant and an antiseptic?
Which is most effective at removing microbes from a product: sanitization, degerming, or sterilization? Explain.

Clinical Focus: Part 2
Roberta's physician suspected that a bacterial infection was responsible for her sudden-onset high fever, abdominal pain, and bloody urine. Based on these symptoms, the physician diagnosed a urinary tract infection (UTI). A wide variety of bacteria may cause UTIs, which typically occur when bacteria from the lower gastrointestinal tract are introduced to the urinary tract. However, Roberta's recent gallstone surgery caused the physician to suspect that she had contracted a nosocomial (hospital-acquired) infection during her surgery. The physician took a urine sample and ordered a urine culture to check for the presence of white blood cells, red blood cells, and bacteria. The results of this test would help determine the cause of the infection. The physician also prescribed a course of the antibiotic ciprofloxacin, confident that it would clear Roberta's infection.
What are some possible ways that bacteria could have been introduced to Roberta's urinary tract during her surgery?

Measuring Microbial Control
Physical and chemical methods of microbial control that kill the targeted microorganism are identified by the suffix -cide (or -cidal). The prefix indicates the type of microbe or infectious agent killed by the treatment method: bactericides kill bacteria, viricides kill or inactivate viruses, and fungicides kill fungi. Other methods do not kill organisms but, instead, stop their growth, making their population static; such methods are identified by the suffix -stat (or -static). For example, bacteriostatic treatments inhibit the growth of bacteria, whereas fungistatic treatments inhibit the growth of fungi. Factors that determine whether a particular treatment is -cidal or -static include the types of microorganisms targeted, the concentration of the chemical used, and the nature of the treatment applied. Although -static treatments do not actually kill infectious agents, they are often less toxic to humans and other animals, and may also better preserve the integrity of the item treated. Such treatments are typically sufficient to keep the microbial population of an item in check.
The reduced toxicity of some of these -static chemicals also allows them to be impregnated safely into plastics to prevent the growth of microbes on these surfaces. Such plastics are used in products such as toys for children and cutting boards for food preparation. When used to treat an infection, -static treatments are typically sufficient in an otherwise healthy individual, preventing the pathogen from multiplying, thus allowing the individual's immune system to clear the infection.

The degree of microbial control can be evaluated using a microbial death curve to describe the progress and effectiveness of a particular protocol. When exposed to a particular microbial control protocol, a fixed percentage of the microbes within the population will die. Because the rate of killing remains constant even when the population size varies, the percentage killed is more useful information than the absolute number of microbes killed. Death curves are often plotted as semilog plots just like microbial growth curves because the reduction in microorganisms is typically logarithmic (Figure 13.5). The amount of time it takes for a specific protocol to produce a one order-of-magnitude decrease in the number of organisms, or the death of 90% of the population, is called the decimal reduction time (DRT) or D-value. A brief computational sketch of this relationship follows.
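Because killing is logarithmic, the population surviving after an exposure time t under a protocol with decimal reduction time D is N(t) = N0 x 10^(-t/D). The following is a minimal Python sketch of this relationship, assuming only the definition of the D-value given above; the function names and the example D-value of 2 minutes are hypothetical, chosen for illustration.

# Minimal sketch of logarithmic microbial death: N(t) = N0 * 10**(-t / D),
# where D is the decimal reduction time (time for a 90% reduction).
def survivors(n0: float, t: float, d_value: float) -> float:
    """Population remaining after exposure time t (same time units as d_value)."""
    return n0 * 10 ** (-t / d_value)

def time_for_log_reduction(log_reductions: float, d_value: float) -> float:
    """Exposure time needed for a given number of order-of-magnitude reductions."""
    return log_reductions * d_value

# Example: starting from 1e6 cells with a hypothetical D-value of 2 minutes,
# each 2-minute interval kills 90% of the remaining population.
n0 = 1e6
for t in (0, 2, 4, 6):
    print(f"t = {t} min: {survivors(n0, t, d_value=2):.0f} cells remain")

# A 12-log reduction (as in commercial canning) takes 12 * D time units.
print("time for 12-log reduction:", time_for_log_reduction(12, d_value=2), "min")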
Several factors contribute to the effectiveness of a disinfecting agent or microbial control protocol. First, as demonstrated in Figure 13.5, the length of time of exposure is important. Longer exposure times kill more microbes. Because microbial death of a population exposed to a specific protocol is logarithmic, it takes longer to kill a high-population load than a low-population load exposed to the same protocol. A shorter treatment time (measured in multiples of the D-value) is needed when starting with a smaller number of organisms. Effectiveness also depends on the susceptibility of the targeted microorganism to that disinfecting agent or protocol. The concentration of disinfecting agent or intensity of exposure is also important. For example, higher temperatures and higher concentrations of disinfectants kill microbes more quickly and effectively. Conditions that limit contact between the agent and the targeted cells, such as the presence of bodily fluids, tissue, organic debris (e.g., mud or feces), or biofilms on surfaces, increase the cleaning time or intensity of the microbial control protocol required to reach the desired level of cleanliness. All these factors must be considered when choosing the appropriate protocol to control microbial growth in a given situation.

Check Your Understanding
What are two possible reasons for choosing a bacteriostatic treatment over a bactericidal one?
Name at least two factors that can compromise the effectiveness of a disinfecting agent.

13.2 Using Physical Methods to Control Microorganisms

Learning Objectives
Understand and compare various physical methods of controlling microbial growth, including heating, refrigeration, freezing, high-pressure treatment, desiccation, lyophilization, irradiation, and filtration

For thousands of years, humans have used various physical methods of microbial control for food preservation. Common control methods include the application of high temperatures, radiation, filtration, and desiccation (drying), among others. Many of these methods nonspecifically kill cells by disrupting membranes, changing membrane permeability, or damaging proteins and nucleic acids by denaturation, degradation, or chemical modification. Various physical methods used for microbial control are described in this section.

Heat
Heating is one of the most common, and oldest, forms of microbial control. It is used in simple techniques like cooking and canning. Heat can kill microbes by altering their membranes and denaturing proteins. The thermal death point (TDP) of a microorganism is the lowest temperature at which all microbes are killed in a 10-minute exposure. Different microorganisms will respond differently to high temperatures, with some (e.g., endospore-formers such as C. botulinum) being more heat tolerant. A similar parameter, the thermal death time (TDT), is the length of time needed to kill all microorganisms in a sample at a given temperature. These parameters are often used to describe sterilization procedures that use high heat, such as autoclaving.

Boiling is one of the oldest methods of moist-heat control of microbes, and it is typically quite effective at killing vegetative cells and some viruses. However, boiling is less effective at killing endospores; some endospores are able to survive up to 20 hours of boiling. Additionally, boiling may be less effective at higher altitudes, where the boiling point of water is lower and the boiling time needed to kill microbes is therefore longer. For these reasons, boiling is not considered a useful sterilization technique in the laboratory or clinical setting.

Many different heating protocols can be used for sterilization in the laboratory or clinic, and these protocols can be broken down into two main categories: dry-heat sterilization and moist-heat sterilization. Aseptic technique in the laboratory typically involves some dry-heat sterilization protocols using direct application of high heat, such as sterilizing inoculating loops (Figure 13.6). Incineration at very high temperatures destroys all microorganisms. Dry heat can also be applied for relatively long periods of time (at least 2 hours) at temperatures up to 170 °C by using a dry-heat sterilizer, such as an oven. However, moist-heat sterilization is typically the more effective protocol because it penetrates cells better than dry heat does.

Autoclaves
Autoclaves rely on moist-heat sterilization. They are used to raise temperatures above the boiling point of water to sterilize items such as surgical equipment from vegetative cells, viruses, and especially endospores, which are known to survive boiling temperatures, without damaging the items. Charles Chamberland (1851–1908) designed the modern autoclave in 1879 while working in the laboratory of Louis Pasteur. The autoclave is still considered the most effective method of sterilization (Figure 13.7). Outside laboratory and clinical settings, large industrial autoclaves called retorts allow for moist-heat sterilization on a large scale.

In general, the air in the chamber of an autoclave is removed and replaced with increasing amounts of steam trapped within the enclosed chamber, resulting in increased interior pressure and temperatures above the boiling point of water. The two main types of autoclaves differ in the way that air is removed from the chamber. In gravity displacement autoclaves, steam is introduced into the chamber from the top or sides. Air, which is heavier than steam, sinks to the bottom of the chamber, where it is forced out through a vent. Complete displacement of air is difficult, especially in larger loads, so longer cycles may be required for such loads.
In prevacuum sterilizers, air is removed completely using a high-speed vacuum before introducing steam into the chamber. Because air is more completely eliminated, the steam can more easily penetrate wrapped items. Many autoclaves are capable of both gravity and prevacuum cycles, using the former for the decontamination of waste and sterilization of media and unwrapped glassware, and the latter for sterilization of packaged instruments.

Standard operating temperatures for autoclaves are 121 °C or, in some cases, 132 °C, typically at a pressure of 15 to 20 pounds per square inch (psi). The length of exposure depends on the volume and nature of material being sterilized, but it is typically 20 minutes or more, with larger volumes requiring longer exposure times to ensure sufficient heat transfer to the materials being sterilized. The steam must directly contact the liquids or dry materials being sterilized, so containers are left loosely closed and instruments are loosely wrapped in paper or foil. The key to autoclaving is that the temperature must be high enough to kill endospores to achieve complete sterilization.

Because sterilization is so important to safe medical and laboratory protocols, quality control is essential. Autoclaves may be equipped with recorders to document the pressures and temperatures achieved during each run. Additionally, internal indicators of various types should be autoclaved along with the materials to be sterilized to ensure that the proper sterilization temperature has been reached (Figure 13.8). One common type of indicator is the use of heat-sensitive autoclave tape, which has white stripes that turn black when the appropriate temperature is achieved during a successful autoclave run. This type of indicator is relatively inexpensive and can be used during every run. However, autoclave tape provides no indication of length of exposure, so it cannot be used as an indicator of sterility. Another type of indicator, a biological indicator spore test, uses either a strip of paper or a liquid suspension of the endospores of Geobacillus stearothermophilus to determine whether the endospores are killed by the process. The endospores of the obligate thermophilic bacterium G. stearothermophilus are the gold standard used for this purpose because of their extreme heat resistance. Biological spore indicators can also be used to test the effectiveness of other sterilization protocols, including ethylene oxide, dry heat, formaldehyde, gamma radiation, and hydrogen peroxide plasma sterilization using either G. stearothermophilus, Bacillus atrophaeus, B. subtilis, or B. pumilus spores. In the case of validating autoclave function, the endospores are incubated after autoclaving to ensure no viable endospores remain. Bacterial growth subsequent to endospore germination can be monitored by biological indicator spore tests that detect acid metabolites or fluorescence produced by enzymes derived from viable G. stearothermophilus. A third type of autoclave indicator is the Diack tube, a glass ampule containing a temperature-sensitive pellet that melts at the proper sterilization temperature. Spore strips or Diack tubes are used periodically to ensure the autoclave is functioning properly. The sketch below illustrates how a recorded cycle might be checked against these operating parameters.
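Since autoclaves may record temperature over the course of a run, one simple automated check is whether the chamber held the setpoint continuously for the required time. The following is a hypothetical Python sketch of such a check, using the operating parameters stated above (121 °C held for at least 20 minutes); the data format and function name are invented for illustration, and real validation still relies on the physical and biological indicators just described, not on software alone.

# Hypothetical check of a recorded autoclave cycle against the parameters in the
# text: temperature at or above 121 °C for a continuous 20 minutes or more.
def cycle_reached_setpoint(readings, setpoint_c=121.0, hold_minutes=20.0):
    """readings: list of (minutes_elapsed, temperature_C) samples in time order.
    Returns True if the temperature stayed at/above the setpoint for a
    continuous stretch of at least hold_minutes."""
    run_start = None
    for t, temp in readings:
        if temp >= setpoint_c:
            if run_start is None:
                run_start = t       # start of a continuous at-temperature run
            if t - run_start >= hold_minutes:
                return True
        else:
            run_start = None        # temperature dipped; restart the clock
    return False

# Example with one-minute samples: ramp up, then hold at 121.5 °C for 25 minutes.
log = [(m, 25 + 8 * m) for m in range(13)]      # ramp from 25 °C to ~121 °C
log += [(13 + m, 121.5) for m in range(25)]     # 25-minute hold
print(cycle_reached_setpoint(log))               # True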
Pasteurization
Although complete sterilization is ideal for many medical applications, it is not always practical for other applications and may also alter the quality of the product. Boiling and autoclaving are not ideal ways to control microbial growth in many foods because these methods may ruin the consistency and other organoleptic (sensory) qualities of the food. Pasteurization is a form of microbial control for food that uses heat but does not render the food sterile. Traditional pasteurization kills pathogens and reduces the number of spoilage-causing microbes while maintaining food quality. The process of pasteurization was first developed by Louis Pasteur in the 1860s as a method for preventing the spoilage of beer and wine. Today, pasteurization is most commonly used to kill heat-sensitive pathogens in milk and other food products (e.g., apple juice and honey) (Figure 13.9). However, because pasteurized food products are not sterile, they will eventually spoil.

The methods used for milk pasteurization balance the temperature and the length of time of treatment. One method, high-temperature short-time (HTST) pasteurization, exposes milk to a temperature of 72 °C for 15 seconds, which lowers bacterial numbers while preserving the quality of the milk. An alternative is ultra-high-temperature (UHT) pasteurization, in which the milk is exposed to a temperature of 138 °C for 2 or more seconds. UHT-pasteurized milk can be stored for a long time in sealed containers without being refrigerated; however, the very high temperatures alter the proteins in the milk, causing slight changes in the taste and smell. Still, this method of pasteurization is advantageous in regions where access to refrigeration is limited. The two time-temperature combinations are summarized in the sketch below.
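The trade-off between the two milk pasteurization protocols can be captured in a few lines. This is a minimal illustrative sketch: the temperatures and times come directly from the text, while the data structure and names are hypothetical.

# Time-temperature combinations for milk pasteurization, per the text.
PASTEURIZATION = {
    "HTST": {"temp_c": 72, "seconds": 15,
             "notes": "preserves milk quality; product still requires refrigeration"},
    "UHT":  {"temp_c": 138, "seconds": 2,   # 2 or more seconds
             "notes": "long unrefrigerated shelf life; slight taste/smell changes"},
}

for method, p in PASTEURIZATION.items():
    print(f"{method}: {p['temp_c']} °C for {p['seconds']} s -- {p['notes']}")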
These ultra-low temperatures can be achieved by storing specimens on dry ice in an ultra-low freezer or in special liquid nitrogen tanks, which maintain temperatures lower than −196 °C (Figure 13.10). Check Your Understanding Does placing food in a refrigerator kill bacteria on the food? Pressure Exposure to high pressure kills many microbes. In the food industry, high-pressure processing (also called pascalization) is used to kill bacteria, yeast, molds, parasites, and viruses in foods while maintaining food quality and extending shelf life. The application of high pressure between 100 and 800 MPa (sea level atmospheric pressure is about 0.1 MPa) is sufficient to kill vegetative cells by protein denaturation, but endospores may survive these pressures. 4 5 4 C. Ferstl. “High Pressure Processing: Insights on Technology and Regulatory Requirements.” Food for Thought/White Paper. Series Volume 10. Livermore, CA: The National Food Lab; July 2013. 5 US Food and Drug Administration. “Kinetics of Microbial Inactivation for Alternative Food Processing Technologies: High Pressure Processing.” 2000. http://www.fda.gov/Food/FoodScienceResearch/SafePracticesforFoodProcesses/ucm101456.htm. Accessed July 19, 2016. In clinical settings, hyperbaric oxygen therapy is sometimes used to treat infections. In this form of therapy, a patient breathes pure oxygen at a pressure higher than normal atmospheric pressure, typically between 1 and 3 atmospheres (atm). This is achieved by placing the patient in a hyperbaric chamber or by supplying the pressurized oxygen through a breathing tube. Hyperbaric oxygen therapy helps increase oxygen saturation in tissues that become hypoxic due to infection and inflammation. This increased oxygen concentration enhances the body’s immune response by increasing the activities of neutrophils and macrophages, white blood cells that fight infections. Increased oxygen levels also contribute to the formation of toxic free radicals that inhibit the growth of oxygen-sensitive or anaerobic bacteria such as Clostridium perfringens, a common cause of gas gangrene. In C. perfringens infections, hyperbaric oxygen therapy can also reduce secretion of a bacterial toxin that causes tissue destruction. Hyperbaric oxygen therapy also seems to enhance the effectiveness of antibiotic treatments. Unfortunately, some rare risks include oxygen toxicity and effects on delicate tissues, such as the eyes, middle ear, and lungs, which may be damaged by the increased air pressure. High-pressure processing is not commonly used for disinfection or sterilization of fomites. Although the application of pressure and steam in an autoclave is effective for killing endospores, it is the high temperature achieved, and not the pressure directly, that results in endospore death. Case in Point A Streak of Bad Potluck One Monday in spring 2015, an Ohio woman began to experience blurred, double vision; difficulty swallowing; and drooping eyelids. She was rushed to the emergency department of her local hospital. During the examination, she began to experience abdominal cramping, nausea, paralysis, dry mouth, weakness of facial muscles, and difficulty speaking and breathing. Based on these symptoms, the hospital’s incident command center was activated, and Ohio public health officials were notified of a possible case of botulism. Meanwhile, other patients with similar symptoms began showing up at other local hospitals.
Because of the suspicion of botulism, antitoxin was shipped overnight from the CDC to these medical facilities, to be administered to the affected patients. The first patient died of respiratory failure as a result of paralysis, and about half of the remaining victims required additional hospitalization following antitoxin administration, with at least two requiring ventilators for breathing. Public health officials investigated each of the cases and determined that all of the patients had attended the same church potluck the day before. Moreover, they traced the source of the outbreak to a potato salad made with home-canned potatoes. More than likely, the potatoes were canned using boiling water, a method that allows endospores of Clostridium botulinum to survive. C. botulinum produces botulinum toxin, a neurotoxin that is often deadly once ingested. According to the CDC, the Ohio case was the largest botulism outbreak in the United States in nearly 40 years. 6 6 CL McCarty et al. “Large Outbreak of Botulism Associated with a Church Potluck Meal-Ohio, 2015.” Morbidity and Mortality Weekly Report 64, no. 29 (2015):802–803. Killing C. botulinum endospores requires a minimum temperature of 116 °C (240 °F), well above the boiling point of water. This temperature can only be reached in a pressure canner, which is recommended for home canning of low-acid foods such as meat, fish, poultry, and vegetables (Figure 13.11). Additionally, the CDC recommends boiling home-canned foods for about 10 minutes before consumption. Because the botulinum toxin is heat labile (meaning that it is denatured by heat), 10 minutes of boiling will render nonfunctional any botulinum toxin that the food may contain. Link to Learning To learn more about proper home-canning techniques, visit the CDC’s website. Desiccation Drying, also known as desiccation or dehydration, is a method that has been used for millennia to preserve foods such as raisins, prunes, and jerky. It works because all cells, including microbes, require water for their metabolism and survival. Although drying controls microbial growth, it might not kill all microbes or their endospores, which may start to regrow when conditions are more favorable and water content is restored. In some cases, foods are dried in the sun, relying on evaporation to achieve desiccation. Freeze-drying, or lyophilization, is another method of desiccation in which an item is rapidly frozen (“snap-frozen”) and placed under vacuum so that water is lost by sublimation. Lyophilization combines both exposure to cold temperatures and desiccation, making it quite effective for controlling microbial growth. In addition, lyophilization causes less damage to an item than conventional desiccation and better preserves the item’s original qualities. Lyophilized items may be stored at room temperature if packaged appropriately to prevent moisture acquisition. Lyophilization is used for preservation in the food industry and is also used in the laboratory for the long-term storage and transportation of microbial cultures. The water content of foods and materials, called the water activity, can be lowered without physical drying by the addition of solutes such as salts or sugars. At very high concentrations of salts or sugars, the amount of available water in microbial cells is reduced dramatically because water will be drawn from an area of low solute concentration (inside the cell) to an area of high solute concentration (outside the cell) (Figure 13.12), as expressed quantitatively below.
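Water activity is commonly defined quantitatively (a standard food-microbiology expression, not given explicitly in this chapter) as the ratio of the vapor pressure of water above the food to that above pure water at the same temperature:

\[ a_w = \frac{p_{\text{food}}}{p_{\text{pure water}}} \]

Pure water therefore has a water activity of 1.0, and dissolved salts or sugars lower the value. As rough illustrative figures, most bacteria require a water activity above approximately 0.9 to grow, whereas high-sugar foods such as honey fall near 0.6, which helps explain their resistance to bacterial spoilage.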
Many microorganisms do not survive these conditions of high osmotic pressure. Honey, for example, is about 80% sugar, an environment in which very few microorganisms are capable of growing, thereby eliminating the need for refrigeration. Salted meats and fish, like ham and cod, respectively, were critically important foods before the age of refrigeration. Fruits were preserved by adding sugar, making jams and jellies. However, certain microbes, such as molds and yeasts, tend to be more tolerant of desiccation and high osmotic pressures, and, thus, may still contaminate these types of foods. Check Your Understanding How does the addition of salt or sugar to food affect its water activity? Radiation Radiation in various forms, from high-energy radiation to sunlight, can be used to kill microbes or inhibit their growth. Ionizing radiation includes X-rays, gamma rays, and high-energy electron beams. Ionizing radiation is strong enough to pass into the cell, where it alters molecular structures and damages cell components. For example, ionizing radiation introduces double-strand breaks in DNA molecules. This may directly cause DNA mutations to occur, or mutations may be introduced when the cell attempts to repair the DNA damage. As these mutations accumulate, they eventually lead to cell death. Both X-rays and gamma rays easily penetrate paper and plastic and can therefore be used to sterilize many packaged materials. In the laboratory, ionizing radiation is commonly used to sterilize materials that cannot be autoclaved, such as plastic Petri dishes and disposable plastic inoculating loops. For clinical use, ionizing radiation is used to sterilize gloves, intravenous tubing, and other latex and plastic items used for patient care. Ionizing radiation is also used for the sterilization of other types of delicate, heat-sensitive materials used clinically, including tissues for transplantation, pharmaceutical drugs, and medical equipment. In Europe, gamma irradiation for food preservation is widely used, although it has been slow to catch on in the United States (see the Micro Connections box on this topic). Packaged dried spices are also often gamma-irradiated. Because of their ability to penetrate paper, plastic, thin sheets of wood and metal, and tissue, great care must be taken when using X-rays and gamma irradiation. These types of ionizing irradiation cannot penetrate thick layers of iron or lead, so these metals are commonly used to protect humans who may be potentially exposed. Another type of radiation, nonionizing radiation, is commonly used for disinfection and uses less energy than ionizing radiation. It does not penetrate cells or packaging. Ultraviolet (UV) light is one example; it causes thymine dimers to form between adjacent thymines within a single strand of DNA (Figure 13.13). When DNA polymerase encounters the thymine dimer, it does not always incorporate the appropriate complementary nucleotides (two adenines), and this leads to formation of mutations that can ultimately kill microorganisms. UV light can be used effectively by both consumers and laboratory personnel to control microbial growth. UV lamps are now commonly incorporated into water purification systems for use in homes. In addition, small portable UV lights are commonly used by campers to purify water from natural environments before drinking. Germicidal lamps are also used in surgical suites, biological safety cabinets, and transfer hoods, typically emitting UV light at a wavelength of 260 nm.
Because UV light does not penetrate surfaces and will not pass through plastics or glass, cells must be exposed directly to the light source. Sunlight has a very broad spectrum that includes UV and visible light. In some cases, sunlight can be effective against certain bacteria because of both the formation of thymine dimers by UV light and the production of reactive oxygen products induced in low amounts by exposure to visible light. Check Your Understanding What are two advantages of ionizing radiation as a sterilization method? How does the effectiveness of ionizing radiation compare with that of nonionizing radiation? Micro Connections Irradiated Food: Would You Eat That? Of all the ways to prevent food spoilage and foodborne illness, gamma irradiation may be the most unappetizing. Although gamma irradiation is a proven method of eliminating potentially harmful microbes from food, the public has yet to buy in. Most of these concerns, however, stem from misinformation and a poor understanding of the basic principles of radiation. The most common method of irradiation is to expose food to cobalt-60 or cesium-137 by passing it through a radiation chamber on a conveyor belt. The food does not directly contact the radioactive material and does not become radioactive itself. Thus, there is no risk for exposure to radioactive material through eating gamma-irradiated foods. Additionally, irradiated foods are not significantly altered in terms of nutritional quality, aside from the loss of certain vitamins, which is also exacerbated by extended storage. Alterations in taste or smell may occur in irradiated foods with high fat content, such as fatty meats and dairy products, but this effect can be minimized by using lower doses of radiation at colder temperatures. In the United States, the CDC, Environmental Protection Agency (EPA), and the Food and Drug Administration (FDA) have deemed irradiation safe and effective for various types of meats, poultry, shellfish, fresh fruits and vegetables, eggs with shells, and spices and seasonings. Gamma irradiation of foods has also been approved for use in many other countries, including France, the Netherlands, Portugal, Israel, Russia, China, Thailand, Belgium, Australia, and South Africa. To help ameliorate consumer concern and assist with education efforts, irradiated foods are now clearly labeled and marked with the international irradiation symbol, called the “radura” (Figure 13.14). Consumer acceptance seems to be rising, as indicated by several recent studies. 7 7 AM Johnson et al. “Consumer Acceptance of Electron-Beam Irradiated Ready-to-Eat Poultry Meats.” Food Processing Preservation 28, no. 4 (2004):302–319. Sonication The use of high-frequency ultrasound waves to disrupt cell structures is called sonication. Application of ultrasound waves causes rapid changes in pressure within the intracellular liquid; this leads to cavitation, the formation of bubbles inside the cell, which can disrupt cell structures and eventually cause the cell to lyse or collapse. Sonication is useful in the laboratory for efficiently lysing cells to release their contents for further research; outside the laboratory, sonication is used for cleaning surgical instruments, lenses, and a variety of other objects such as coins, tools, and musical instruments. Filtration Filtration is a method of physically separating microbes from samples. Air is commonly filtered through high-efficiency particulate air (HEPA) filters (Figure 13.15).
HEPA filters have effective pore sizes of 0.3 µm, small enough to capture bacterial cells, endospores, and many viruses; as air passes through one of these filters, the air on the other side is nearly sterilized. HEPA filters have a variety of applications and are used widely in clinical settings, in cars and airplanes, and even in the home. For example, they may be found in vacuum cleaners, heating and air-conditioning systems, and air purifiers. Biological Safety Cabinets Biological safety cabinets are a good example of the use of HEPA filters. HEPA filters in biological safety cabinets (BSCs) are used to remove particulates in the air either entering the cabinet (air intake), leaving the cabinet (air exhaust), or treating both the intake and exhaust. Use of an air-intake HEPA filter prevents environmental contaminants from entering the BSC, creating a clean area for handling biological materials. Use of an air-exhaust HEPA filter prevents laboratory pathogens from contaminating the laboratory, thus maintaining a safe work area for laboratory personnel. There are three classes of BSCs: I, II, and III. Each class is designed to provide a different level of protection for laboratory personnel and the environment; Class II and Class III BSCs are also designed to protect the materials or devices in the cabinet. Table 13.1 summarizes the level of safety provided by each class of BSC for each BSL.

Table 13.1 Biological Risks and BSCs
Biological Risk Assessed | BSC Class | Protection of Personnel | Protection of Environment | Protection of Product
BSL-1, BSL-2, BSL-3 | I | Yes | Yes | No
BSL-1, BSL-2, BSL-3 | II | Yes | Yes | Yes
BSL-4 | III; II when used in suit room with suit | Yes | Yes | Yes

Class I BSCs protect laboratory workers and the environment from a low to moderate risk for exposure to biological agents used in the laboratory. Air is drawn into the cabinet and then filtered before exiting through the building’s exhaust system. Class II BSCs use directional air flow and partial barrier systems to contain infectious agents. Class III BSCs are designed for working with highly infectious agents like those used in BSL-4 laboratories. They are gas tight, and materials entering or exiting the cabinet must be passed through a double-door system, allowing the intervening space to be decontaminated between uses. All air is passed through one or two HEPA filters and an air incineration system before being exhausted directly to the outdoors (not through the building’s exhaust system). Personnel can manipulate materials inside the Class III cabinet by using long rubber gloves sealed to the cabinet. Link to Learning This video shows how BSCs are designed and explains how they protect personnel, the environment, and the product. Filtration in Hospitals HEPA filters are also commonly used in hospitals and surgical suites to prevent contamination and the spread of airborne microbes through ventilation systems. HEPA filtration systems may be designed for entire buildings or for individual rooms. For example, burn units, operating rooms, or isolation units may require special HEPA-filtration systems to remove opportunistic pathogens from the environment because patients in these rooms are particularly vulnerable to infection. Membrane Filters Filtration can also be used to remove microbes from liquid samples using membrane filtration. Membrane filters for liquids function similarly to HEPA filters for air.
Typically, membrane filters that are used to remove bacteria have an effective pore size of 0.2 µm, smaller than the average size of a bacterium (1 µm), but filters with smaller pore sizes are available for more specific needs. Membrane filtration is useful for removing bacteria from various types of heat-sensitive solutions used in the laboratory, such as antibiotic solutions and vitamin solutions. Large volumes of culture media may also be filter sterilized rather than autoclaved to protect heat-sensitive components. Often when filtering small volumes, syringe filters are used, but vacuum filters are typically used for filtering larger volumes (Figure 13.16). Check Your Understanding Would membrane filtration with a 0.2-µm filter likely remove viruses from a solution? Explain. Name at least two common uses of HEPA filtration in clinical or laboratory settings. Figure 13.17 and Figure 13.18 summarize the physical methods of control discussed in this section. 13.3 Using Chemicals to Control Microorganisms Learning Objectives Understand and compare various chemicals used to control microbial growth, including their uses, advantages and disadvantages, chemical structure, and mode of action In addition to physical methods of microbial control, chemicals are also used to control microbial growth. A wide variety of chemicals can be used as disinfectants or antiseptics. When choosing which to use, it is important to consider the type of microbe targeted; how clean the item needs to be; the disinfectant’s effect on the item’s integrity; its safety to animals, humans, and the environment; its expense; and its ease of use. This section describes the variety of chemicals used as disinfectants and antiseptics, including their mechanisms of action and common uses. Phenolics In the 1800s, scientists began experimenting with a variety of chemicals for disinfection. In the 1860s, British surgeon Joseph Lister (1827–1912) began using carbolic acid, known as phenol, as a disinfectant for the treatment of surgical wounds (see Foundations of Modern Cell Theory). In 1879, Lister’s work inspired the American chemist Joseph Lawrence (1836–1909) to develop Listerine, an alcohol-based mixture of several related compounds that is still used today as an oral antiseptic. Today, carbolic acid is no longer used as a surgical disinfectant because it is a skin irritant, but chemically related compounds known as phenolics are found in antiseptic mouthwashes and throat lozenges. Chemically, phenol consists of a benzene ring with an –OH group, and phenolics are compounds that have this group as part of their chemical structure (Figure 13.19). Phenolics such as thymol and eucalyptol occur naturally in plants. Other phenolics can be derived from creosote, a component of coal tar. Phenolics tend to be stable, persistent on surfaces, and less toxic than phenol. They inhibit microbial growth by denaturing proteins and disrupting membranes. Since Lister’s time, several phenolic compounds have been used to control microbial growth. Phenolics like cresols (methylated phenols) and o-phenylphenol have been active ingredients in various formulations of Lysol since its invention in 1889. o-Phenylphenol was also commonly used in agriculture to control bacterial and fungal growth on harvested crops, especially citrus fruits, but its use in the United States is now far more limited.
The bisphenol hexachlorophene, a disinfectant, is the active ingredient in pHisoHex, a topical cleansing detergent widely used for handwashing in hospital settings. pHisoHex is particularly effective against gram-positive bacteria, including those causing staphylococcal and streptococcal skin infections. pHisoHex was formerly used for bathing infants, but this practice has been discontinued because it has been shown that exposure to hexachlorophene can lead to neurological problems. Triclosan is another bisphenol compound that has seen widespread application in antibacterial products over the last several decades. Initially used in toothpastes, triclosan has also been used in hand soaps and impregnated into a wide variety of other products, including cutting boards, knives, shower curtains, clothing, and concrete, to make them antimicrobial. However, in 2016 the FDA banned the marketing of over-the-counter antiseptic products containing triclosan and 18 other chemicals. This ruling was based on the lack of evidence of safety or efficacy, as well as concerns about the health risks of long-term exposure (see Micro Connections below). In 2019 the FDA issued an updated ban ruling to include 28 chemicals. Rulings on benzalkonium chloride, ethyl alcohol, and isopropyl alcohol have been deferred to allow for the submission of additional safety and efficacy data. 8 8 US Food and Drug Administration. "FDA Issues Final Rule on Safety and Effectiveness of Antibacterial Soaps." 2016. https://www.fda.gov/news-events/press-announcements/fda-issues-final-rule-safety-and-effectiveness-antibacterial-soaps. Accessed October 29, 2020. Micro Connections Triclosan: Antibacterial Overkill? Hand soaps and other cleaning products are often marketed as “antibacterial,” suggesting that they provide a level of cleanliness superior to that of conventional soaps and cleansers. But are the antibacterial ingredients in these products really safe and effective? About 75% of antibacterial liquid hand soaps and 30% of bar soaps contain the chemical triclosan, a phenolic (Figure 13.20). 9 Triclosan blocks an enzyme in the bacterial fatty acid-biosynthesis pathway that is not found in the comparable human pathway. Although the use of triclosan in the home increased dramatically during the 1990s, more than 40 years of research by the FDA have turned up no conclusive evidence that washing with triclosan-containing products provides increased health benefits compared with washing with traditional soap. Although some studies indicate that fewer bacteria may remain on a person’s hands after washing with triclosan-based soap, compared with traditional soap, no evidence points to any reduction in the transmission of bacteria that cause respiratory and gastrointestinal illness. In short, soaps with triclosan may remove or kill a few more germs but not enough to reduce the spread of disease. 9 J. Stromberg. “Five Reasons Why You Should Probably Stop Using Antibacterial Soap.” Smithsonian.com January 3, 2014. http://www.smithsonianmag.com/science-nature/five-reasons-why-you-should-probably-stop-using-antibacterial-soap-180948078/?no-ist. Accessed June 9, 2016. Perhaps more disturbing, some clear risks associated with triclosan-based soaps have come to light. The widespread use of triclosan has led to an increase in triclosan-resistant bacterial strains, including those of clinical importance, such as Salmonella enterica; this resistance may render triclosan useless as an antibacterial in the long run.
10 11 Bacteria can easily gain resistance to triclosan through a change to a single gene encoding the targeted enzyme in the bacterial fatty acid-synthesis pathway. Other disinfectants with a less specific mode of action are much less prone to engendering resistance because overcoming them would require much more than a single genetic change. 10 SP Yazdankhah et al. “Triclosan and Antimicrobial Resistance in Bacteria: An Overview.” Microbial Drug Resistance 12 no. 2 (2006):83–90. 11 L. Birošová, M. Mikulášová. “Development of Triclosan and Antibiotic Resistance in Salmonella enterica serovar Typhimurium.” Journal of Medical Microbiology 58 no. 4 (2009):436–441. Use of triclosan over the last several decades has also led to a buildup of the chemical in the environment. Triclosan in hand soap is directly introduced into wastewater and sewage systems as a result of the handwashing process. There, its antibacterial properties can inhibit or kill bacteria responsible for the decomposition of sewage, causing septic systems to clog and back up. Eventually, triclosan in wastewater finds its way into surface waters, streams, lakes, sediments, and soils, disrupting natural populations of bacteria that carry out important environmental functions, such as inhibiting algae. Triclosan also finds its way into the bodies of amphibians and fish, where it can act as an endocrine disruptor. Detectable levels of triclosan have also been found in various human bodily fluids, including breast milk, plasma, and urine. 12 In fact, a study conducted by the CDC found detectable levels of triclosan in the urine of 75% of 2,517 people tested in 2003–2004. 13 This finding is even more troubling given the evidence that triclosan may affect immune function in humans. 14 12 AB Dann, A. Hontela. “Triclosan: Environmental Exposure, Toxicity and Mechanisms of Action.” Journal of Applied Toxicology 31 no. 4 (2011):285–311. 13 US Centers for Disease Control and Prevention. “Triclosan Fact Sheet.” 2013. http://www.cdc.gov/biomonitoring/Triclosan_FactSheet.html. Accessed June 9, 2016. 14 EM Clayton et al. “The Impact of Bisphenol A and Triclosan on Immune Parameters in the US Population, NHANES 2003-2006.” Environmental Health Perspectives 119 no. 3 (2011):390. In December 2013, the FDA gave soap manufacturers until 2016 to prove that antibacterial soaps provide a significant benefit over traditional soaps; if unable to do so, manufacturers would be forced to remove these products from the market. Check Your Understanding Why is triclosan more like an antibiotic than a traditional disinfectant? Heavy Metals Some of the first chemical disinfectants and antiseptics to be used were heavy metals. Heavy metals kill microbes by binding to proteins, thus inhibiting enzymatic activity (Figure 13.21). Heavy metals are oligodynamic, meaning that very small concentrations show significant antimicrobial activity. Ions of heavy metals bind to sulfur-containing amino acids strongly and bioaccumulate within cells, allowing these metals to reach high localized concentrations. This causes proteins to denature. Heavy metals are not selectively toxic to microbial cells. They may bioaccumulate in human or animal cells, as well, and excessive concentrations can have toxic effects on humans. If too much silver accumulates in the body, for example, it can result in a condition called argyria, in which the skin turns irreversibly blue-gray.
One way to reduce the potential toxicity of heavy metals is by carefully controlling the duration of exposure and concentration of the heavy metal. Mercury Mercury is an example of a heavy metal that has been used for many years to control microbial growth. It was used for many centuries to treat syphilis. Mercury compounds like mercuric chloride are mainly bacteriostatic and have a very broad spectrum of activity. Various forms of mercury bind to sulfur-containing amino acids within proteins, inhibiting their functions. In recent decades, the use of such compounds has diminished because of mercury’s toxicity. It is toxic to the central nervous, digestive, and renal systems at high concentrations, and has negative environmental effects, including bioaccumulation in fish. Topical antiseptics such as mercurochrome, which contains mercury in low concentrations, and merthiolate, a tincture (a solution of mercury dissolved in alcohol), were once commonly used. However, because of concerns about using mercury compounds, these antiseptics are no longer sold in the United States. Silver Silver has long been used as an antiseptic. In ancient times, drinking water was stored in silver jugs. 15 Silvadene cream is commonly used to treat topical wounds and is particularly helpful in preventing infection in burn wounds. Silver nitrate drops were once routinely applied to the eyes of newborns to protect against ophthalmia neonatorum, eye infections that can occur due to exposure to pathogens in the birth canal, but antibiotic creams are now more commonly used. Silver is often combined with antibiotics, making the antibiotics thousands of times more effective. 16 Silver is also commonly incorporated into catheters and bandages, rendering them antimicrobial; however, there is evidence that heavy metals may also enhance selection for antibiotic resistance. 17 15 N. Silvestry-Rodriguez et al. “Silver as a Disinfectant.” In Reviews of Environmental Contamination and Toxicology, pp. 23-45. Edited by GW Ware and DM Whitacre. New York: Springer, 2007. 16 B. Owens. “Silver Makes Antibiotics Thousands of Times More Effective.” Nature June 19 2013. http://www.nature.com/news/silver-makes-antibiotics-thousands-of-times-more-effective-1.13232 17 C. Seiler, TU Berendonk. “Heavy Metal Driven Co-Selection of Antibiotic Resistance in Soil and Water Bodies Impacted by Agriculture and Aquaculture.” Frontiers in Microbiology 3 (2012):399. Copper, Nickel, and Zinc Several other heavy metals also exhibit antimicrobial activity. Copper sulfate is a common algicide used to control algal growth in swimming pools and fish tanks. The use of metallic copper to minimize microbial growth is also becoming more widespread. Copper linings in incubators help reduce contamination of cell cultures. The use of copper pots for water storage in underdeveloped countries is being investigated as a way to combat diarrheal diseases. Copper coatings are also becoming popular for frequently handled objects such as doorknobs, cabinet hardware, and other fixtures in health-care facilities in an attempt to reduce the spread of microbes. Nickel and zinc coatings are now being used in a similar way. Other forms of zinc, including zinc chloride and zinc oxide, are also used commercially. Zinc chloride is quite safe for humans and is commonly found in mouthwashes, substantially increasing their duration of effectiveness.
Zinc oxide is found in a variety of products, including topical antiseptic creams such as calamine lotion, diaper ointments, baby powder, and dandruff shampoos. Check Your Understanding Why are many heavy metals both antimicrobial and toxic to humans? Halogens Other chemicals commonly used for disinfection are the halogens iodine, chlorine, and fluorine. Iodine works by oxidizing cellular components, including sulfur-containing amino acids, nucleotides, and fatty acids, and destabilizing the macromolecules that contain these molecules. It is often used as a topical tincture, but it may cause staining or skin irritation. An iodophor is a compound of iodine complexed with an organic molecule, thereby increasing iodine’s stability and, in turn, its efficacy. One common iodophor is povidone-iodine, which includes a wetting agent that releases iodine relatively slowly. Betadine is a brand of povidone-iodine commonly used as a hand scrub by medical personnel before surgery and for topical antisepsis of a patient’s skin before incision (Figure 13.22). Chlorine is another halogen commonly used for disinfection. When chlorine gas is mixed with water, it produces a strong oxidant called hypochlorous acid (Cl2 + H2O ⇌ HOCl + H+ + Cl−), which is uncharged and enters cells easily. Chlorine gas is commonly used in municipal drinking water and wastewater treatment plants, with the resulting hypochlorous acid producing the actual antimicrobial effect. Those working at water treatment facilities need to take great care to minimize personal exposure to chlorine gas. Sodium hypochlorite is the chemical component of common household bleach, and it is also used for a wide variety of disinfecting purposes. Hypochlorite salts, including sodium and calcium hypochlorites, are used to disinfect swimming pools. Chlorine gas, sodium hypochlorite, and calcium hypochlorite are also commonly used disinfectants in the food processing and restaurant industries to reduce the spread of foodborne diseases. Workers in these industries also need to take care to use these products correctly to ensure their own safety as well as the safety of consumers. A recent joint statement published by the Food and Agriculture Organization (FAO) of the United Nations and WHO indicated that none of the many beneficial uses of chlorine products in food processing to reduce the spread of foodborne illness posed risks to consumers. 18 18 World Health Organization. “Benefits and Risks of the Use of Chlorine-Containing Disinfectants in Food Production and Food Processing: Report of a Joint FAO/WHO Expert Meeting.” Geneva, Switzerland: World Health Organization, 2009. Another class of chlorinated compounds called chloramines are widely used as disinfectants. Chloramines are relatively stable, releasing chlorine over long periods of time. Chloramines are derivatives of ammonia in which one, two, or all three hydrogen atoms are replaced with chlorine atoms (Figure 13.23). Chloramines and other chlorine compounds may be used for disinfection of drinking water, and chloramine tablets are frequently used by the military for this purpose. After a natural disaster or other event that compromises the public water supply, the CDC recommends disinfecting tap water by adding small amounts of regular household bleach. Recent research suggests that sodium dichloroisocyanurate (NaDCC) may also be a good alternative for drinking water disinfection.
Currently, NaDCC tablets are available for general use and for use by the military, campers, or those with emergency needs; for these uses, NaDCC is preferable to chloramine tablets. Chlorine dioxide, a gaseous agent used for fumigation and sterilization of enclosed areas, is also commonly used for the disinfection of water. Although chlorinated compounds are relatively effective disinfectants, they have their disadvantages. Some may irritate the skin, nose, or eyes of some individuals, and they may not completely eliminate certain hardy organisms from contaminated drinking water. The protozoan parasite Cryptosporidium, for example, has a protective outer shell that makes it resistant to chlorinated disinfectants. Thus, boiling of drinking water in emergency situations is recommended when possible. The halogen fluorine is also known to have antimicrobial properties that contribute to the prevention of dental caries (cavities). 19 Fluoride is the main active ingredient of toothpaste and is also commonly added to tap water to help communities maintain oral health. Chemically, fluoride can become incorporated into the hydroxyapatite of tooth enamel, making it more resistant to corrosive acids produced by the fermentation of oral microbes. Fluoride also enhances the uptake of calcium and phosphate ions in tooth enamel, promoting remineralization. In addition to strengthening enamel, fluoride also seems to be bacteriostatic. It accumulates in plaque-forming bacteria, interfering with their metabolism and reducing their production of the acids that contribute to tooth decay. 19 RE Marquis. “Antimicrobial Actions of Fluoride for Oral Bacteria.” Canadian Journal of Microbiology 41 no. 11 (1995):955–964. Check Your Understanding What is a benefit of a chloramine over hypochlorite for disinfecting? Alcohols Alcohols make up another group of chemicals commonly used as disinfectants and antiseptics. They work by rapidly denaturing proteins, which inhibits cell metabolism, and by disrupting membranes, which leads to cell lysis. Once denatured, the proteins may potentially refold if enough water is present in the solution. Alcohols are typically used at concentrations of about 70% aqueous solution and, in fact, work better in aqueous solutions than as 100% alcohol solutions (see the worked dilution example below). This is because alcohols coagulate proteins; in higher alcohol concentrations, rapid coagulation of surface proteins prevents effective penetration of cells. The most commonly used alcohols for disinfection are ethyl alcohol (ethanol) and isopropyl alcohol (isopropanol, rubbing alcohol) (Figure 13.24). Alcohols tend to be bactericidal and fungicidal; they may also be viricidal, but against enveloped viruses only. Although alcohols are not sporicidal, they do inhibit the processes of sporulation and germination. Alcohols are volatile and dry quickly, but they may also cause skin irritation because they dehydrate the skin at the site of application. One common clinical use of alcohols is swabbing the skin for degerming before needle injection. Alcohols also are the active ingredients in instant hand sanitizers, which have gained popularity in recent years. The alcohol in these hand sanitizers works both by denaturing proteins and by disrupting the microbial cell membrane, but it will not work effectively in the presence of visible dirt. Last, alcohols are used to make tinctures with other antiseptics, such as the iodine tinctures discussed previously in this chapter.
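Preparing a 70% working solution from a more concentrated stock is a simple dilution calculation. As a brief worked example (standard dilution arithmetic; the 95% stock concentration is chosen here purely for illustration):

\[ C_1 V_1 = C_2 V_2 \quad\Rightarrow\quad V_1 = \frac{(70\%)(100\ \text{mL})}{95\%} \approx 73.7\ \text{mL} \]

That is, to prepare 100 mL of 70% ethanol from a 95% stock, measure about 73.7 mL of stock and bring the final volume to 100 mL with water.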
All in all, alcohols are inexpensive and quite effective for the disinfection of a broad range of vegetative microbes. However, one disadvantage of alcohols is their high volatility, limiting their effectiveness to the period immediately after application. Check Your Understanding Name at least three advantages of alcohols as disinfectants. Describe several specific applications of alcohols used in disinfectant products. Surfactants Surface-active agents, or surfactants, are a group of chemical compounds that lower the surface tension of water. Surfactants are the major ingredients in soaps and detergents. Soaps are salts of long-chain fatty acids and have both polar and nonpolar regions, allowing them to interact with polar and nonpolar regions in other molecules (Figure 13.25). They can interact with nonpolar oils and grease to create emulsions in water, loosening and lifting away dirt and microbes from surfaces and skin. Soaps do not kill or inhibit microbial growth and so are not considered antiseptics or disinfectants. However, proper use of soaps mechanically carries away microorganisms, effectively degerming a surface. Some soaps contain added bacteriostatic agents such as triclocarban or cloflucarban, compounds structurally related to triclosan, that introduce antiseptic or disinfectant properties to the soaps. Soaps, however, often form films that are difficult to rinse away, especially in hard water, which contains high concentrations of calcium and magnesium mineral salts. Detergents contain synthetic surfactant molecules with both polar and nonpolar regions that have strong cleansing activity but are more soluble, even in hard water, and, therefore, leave behind no soapy deposits. Anionic detergents, such as those used for laundry, have a negatively charged anion at one end attached to a long hydrophobic chain, whereas cationic detergents have a positively charged cation instead. Cationic detergents include an important class of disinfectants and antiseptics called the quaternary ammonium salts (quats), named for the characteristic quaternary nitrogen atom that confers the positive charge (Figure 13.26). Overall, quats have properties similar to phospholipids, having hydrophilic and hydrophobic ends. As such, quats have the ability to insert into the bacterial phospholipid bilayer and disrupt membrane integrity. The cationic charge of quats appears to confer their antimicrobial properties, which are diminished when neutralized. Quats have several useful properties. They are stable, nontoxic, inexpensive, colorless, odorless, and tasteless. They tend to be bactericidal by disrupting membranes. They are also active against fungi, protozoans, and enveloped viruses, but endospores are unaffected. In clinical settings, they may be used as antiseptics or to disinfect surfaces. Mixtures of quats are also commonly found in household cleaners and disinfectants, including many current formulations of Lysol brand products, which contain benzalkonium chlorides as the active ingredients. Benzalkonium chlorides, along with the quat cetylpyridinium chloride, are also found in products such as skin antiseptics, oral rinses, and mouthwashes. Check Your Understanding Why are soaps not considered disinfectants? Micro Connections Handwashing the Right Way Handwashing is critical for public health and should be emphasized in a clinical setting.
For the general public, the CDC recommends handwashing before, during, and after food handling; before eating; before and after interacting with someone who is ill; before and after treating a wound; after using the toilet or changing diapers; after coughing, sneezing, or blowing the nose; after handling garbage; and after interacting with an animal, its feed, or its waste. Figure 13.27 illustrates the five steps of proper handwashing recommended by the CDC. Handwashing is even more important for health-care workers, who should wash their hands thoroughly between every patient contact, after the removal of gloves, after contact with bodily fluids and potentially infectious fomites, and before and after assisting a surgeon with invasive procedures. Even with the use of proper surgical attire, including gloves, scrubbing for surgery is more involved than routine handwashing. The goal of surgical scrubbing is to reduce the normal microbiota on the skin’s surface to prevent the introduction of these microbes into a patient’s surgical wounds. There is no single widely accepted protocol for surgical scrubbing. Protocols for length of time spent scrubbing may depend on the antimicrobial used; health-care workers should always check the manufacturer’s recommendations. According to the Association of Surgical Technologists (AST), surgical scrubs may be performed with or without the use of brushes (Figure 13.27). Link to Learning To learn more about proper handwashing, visit the CDC’s website. Bisbiguanides Bisbiguanides were first synthesized in the 20th century and are cationic (positively charged) molecules known for their antiseptic properties (Figure 13.28). One important bisbiguanide antiseptic is chlorhexidine. It has broad-spectrum activity against yeasts, gram-positive bacteria, and gram-negative bacteria, with the exception of Pseudomonas aeruginosa, which may develop resistance on repeated exposure. 20 Chlorhexidine disrupts cell membranes and is bacteriostatic at lower concentrations or bactericidal at higher concentrations, in which it actually causes the cells’ cytoplasmic contents to congeal. It also has activity against enveloped viruses. However, chlorhexidine is poorly effective against Mycobacterium tuberculosis and nonenveloped viruses, and it is not sporicidal. Chlorhexidine is typically used in the clinical setting as a surgical scrub and for other handwashing needs for medical personnel, as well as for topical antisepsis for patients before surgery or needle injection. It is more persistent than iodophors, providing long-lasting antimicrobial activity. Chlorhexidine solutions may also be used as oral rinses after oral procedures or to treat gingivitis. Another bisbiguanide, alexidine, is gaining popularity as a surgical scrub and an oral rinse because it acts faster than chlorhexidine. 20 L. Thomas et al. “Development of Resistance to Chlorhexidine Diacetate in Pseudomonas aeruginosa and the Effect of a ‘Residual’ Concentration.” Journal of Hospital Infection 46 no. 4 (2000):297–303. Check Your Understanding What two effects does chlorhexidine have on bacterial cells? Alkylating Agents The alkylating agents are a group of strong disinfecting chemicals that act by replacing a hydrogen atom within a molecule with an alkyl group (CnH2n+1), thereby inactivating enzymes and nucleic acids (Figure 13.29). The alkylating agent formaldehyde (CH2O) is commonly used in solution at a concentration of 37% (known as formalin) or as a gaseous disinfectant and biocide.
It is a strong, broad-spectrum disinfectant and biocide that has the ability to kill bacteria, viruses, fungi, and endospores, leading to sterilization at low temperatures, which is sometimes a convenient alternative to the more labor-intensive heat sterilization methods. It also cross-links proteins and has been widely used as a chemical fixative. Because of this, it is used for the storage of tissue specimens and as an embalming fluid. It also has been used to inactivate infectious agents in vaccine preparation. Formaldehyde is very irritating to living tissues and is also carcinogenic; therefore, it is not used as an antiseptic. Glutaraldehyde is structurally similar to formaldehyde but has two reactive aldehyde groups, allowing it to act more quickly than formaldehyde. It is commonly used as a 2% solution for sterilization and is marketed under the brand name Cidex. It is used to disinfect a variety of surfaces and surgical and medical equipment. However, similar to formaldehyde, glutaraldehyde irritates the skin and is not used as an antiseptic. A newer type of disinfectant gaining popularity for the disinfection of medical equipment is o-phthalaldehyde (OPA), which is found in some newer formulations of Cidex and similar products, replacing glutaraldehyde. o-Phthalaldehyde also has two reactive aldehyde groups, but they are linked by an aromatic bridge. o-Phthalaldehyde is thought to work similarly to glutaraldehyde and formaldehyde, but it is much less irritating to skin and nasal passages, produces a minimal odor, does not require processing before use, and is more effective against mycobacteria. Ethylene oxide is a type of alkylating agent that is used for gaseous sterilization. It is highly penetrating and can sterilize items within plastic bags such as catheters, disposable items in laboratories and clinical settings (like packaged Petri dishes), and other pieces of equipment. Ethylene oxide exposure is a form of cold sterilization, making it useful for the sterilization of heat-sensitive items. Great care needs to be taken with the use of ethylene oxide, however; it is carcinogenic, like the other alkylating agents, and is also highly explosive. With careful use and proper aeration of the products after treatment, ethylene oxide is highly effective, and ethylene oxide sterilizers are commonly found in medical settings for sterilizing packaged materials. β-Propiolactone is an alkylating agent with a chemical structure different from that of the others already discussed. Like other alkylating agents, β-propiolactone binds to DNA, thereby inactivating it (Figure 13.29). It is a clear liquid with a strong odor and has the ability to kill endospores. As such, it has been used in either liquid form or as a vapor for the sterilization of medical instruments and tissue grafts, and it is a common component of vaccines, used to maintain their sterility. It has also been used for the sterilization of nutrient broth, as well as blood plasma, milk, and water. It is quickly metabolized by animals and humans to lactic acid. It is also an irritant, however, and may lead to permanent damage of the eyes, kidneys, or liver. Additionally, it has been shown to be carcinogenic in animals; thus, precautions are necessary to minimize human exposure to β-propiolactone. 21 21 Institute of Medicine. “Long-Term Health Effects of Participation in Project SHAD (Shipboard Hazard and Defense).” Washington, DC: The National Academies Press, 2007.
Check Your Understanding What chemical reaction do alkylating agents participate in? Why are alkylating agents not used as antiseptics? Micro Connections Diehard Prions Prions, the acellular, misfolded proteins responsible for incurable and fatal diseases such as kuru and Creutzfeldt-Jakob disease (see Viroids, Virusoids, and Prions), are notoriously difficult to destroy. Prions are extremely resistant to heat, chemicals, and radiation. They are also extremely infectious and deadly; thus, handling and disposing of prion-infected items requires extensive training and extreme caution. Typical methods of disinfection can reduce but not eliminate the infectivity of prions. Autoclaving is not completely effective, nor are chemicals such as phenol, alcohols, formalin, and β-propiolactone. Even when fixed in formalin, affected brain and spinal cord tissues remain infectious. Personnel who handle contaminated specimens or equipment or work with infected patients must wear a protective coat, face protection, and cut-resistant gloves. Any contact with skin must be immediately washed with detergent and warm water without scrubbing. The skin should then be washed with 1 N NaOH or a 1:10 dilution of bleach for 1 minute. Contaminated waste must be incinerated or autoclaved in a strong basic solution, and instruments must be cleaned and soaked in a strong basic solution. Link to Learning For more information on the handling of animals and prion-contaminated materials, visit the guidelines published on the WHO website. Peroxygens Peroxygens are strong oxidizing agents that can be used as disinfectants or antiseptics. The most widely used peroxygen is hydrogen peroxide (H2O2), which is often used in solution to disinfect surfaces and may also be used as a gaseous agent. Hydrogen peroxide solutions are inexpensive skin antiseptics that break down into water and oxygen gas (2 H2O2 → 2 H2O + O2), both of which are environmentally safe. This decomposition is accelerated in the presence of light, so hydrogen peroxide solutions typically are sold in brown or opaque bottles. One disadvantage of using hydrogen peroxide as an antiseptic is that it also causes damage to skin that may delay healing or lead to scarring. Contact lens cleaners often include hydrogen peroxide as a disinfectant. Hydrogen peroxide works by producing free radicals that damage cellular macromolecules. Hydrogen peroxide has broad-spectrum activity, working against gram-positive and gram-negative bacteria (with slightly greater efficacy against gram-positive bacteria), fungi, viruses, and endospores. However, bacteria that produce the oxygen-detoxifying enzymes catalase or peroxidase may have inherent tolerance to low hydrogen peroxide concentrations (Figure 13.30). To kill endospores, the length of exposure or concentration of solutions of hydrogen peroxide must be increased. Gaseous hydrogen peroxide has greater efficacy and can be used as a sterilant for rooms or equipment. Plasma, a hot, ionized gas, described as the fourth state of matter, is useful for sterilizing equipment because it penetrates surfaces and kills vegetative cells and endospores. Hydrogen peroxide and peracetic acid, another commonly used peroxygen, each may be introduced as a plasma. Peracetic acid can be used as a liquid or plasma sterilant because it readily kills endospores, is more effective than hydrogen peroxide even at rather low concentrations, and is not inactivated by catalases and peroxidases.
It also breaks down to environmentally innocuous compounds; in this case, acetic acid and oxygen. Other examples of peroxygens include benzoyl peroxide and carbamide peroxide. Benzoyl peroxide is a peroxygen that is used in acne medication solutions. It kills the bacterium Propionibacterium acnes, which is associated with acne. Carbamide peroxide, an ingredient used in toothpaste, is a peroxygen that combats oral biofilms that cause tooth discoloration and halitosis (bad breath). 22 Last, ozone gas is a peroxygen with disinfectant qualities and is used to clean air or water supplies. Overall, peroxygens are highly effective and commonly used, with no associated environmental hazard. 22 Yao, C.S. et al. “In vitro antibacterial effect of carbamide peroxide on oral biofilm.” Journal of Oral Microbiology Jun 12, 2013. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3682087/. doi: 10.3402/jom.v5i0.20392. Check Your Understanding How do peroxides kill cells? Supercritical Fluids Within the last 15 years, the use of supercritical fluids, especially supercritical carbon dioxide (scCO2), has gained popularity for certain sterilizing applications. When carbon dioxide is brought above its critical point of approximately 31 °C and 73 atmospheres (about 73 times sea-level atmospheric pressure), it reaches a supercritical state that has physical properties between those of liquids and gases. Materials put into a chamber in which carbon dioxide is pressurized in this way can be sterilized because of the ability of scCO2 to penetrate surfaces. Supercritical carbon dioxide works by penetrating cells and forming carbonic acid, thereby lowering the cell pH considerably. This technique is effective against vegetative cells and is also used in combination with peracetic acid to kill endospores. Its efficacy can also be augmented with increased temperature or by rapid cycles of pressurization and depressurization, which make cell lysis more likely. Benefits of scCO2 include the nonreactive, nontoxic, and nonflammable properties of carbon dioxide, and this protocol is effective at low temperatures. Unlike other methods, such as heat and irradiation, that can degrade the object being sterilized, the use of scCO2 preserves the object’s integrity and is commonly used for treating foods (including spices and juices) and medical devices such as endoscopes. It is also gaining popularity for disinfecting tissues such as skin, bones, tendons, and ligaments prior to transplantation. scCO2 can also be used for pest control because it can kill insect eggs and larvae within products. Check Your Understanding Why is the use of supercritical carbon dioxide gaining popularity for commercial and medical uses? Chemical Food Preservatives Chemical preservatives are used to inhibit microbial growth and minimize spoilage in some foods. Commonly used chemical preservatives include sorbic acid, benzoic acid, and propionic acid, and their more soluble salts potassium sorbate, sodium benzoate, and calcium propionate, all of which are used to control the growth of molds in acidic foods. Each of these preservatives is nontoxic and readily metabolized by humans. They are also flavorless, so they do not compromise the flavor of the foods they preserve. Sorbic and benzoic acids exhibit increased efficacy as the pH decreases (see the worked calculation below). Sorbic acid is thought to work by inhibiting various cellular enzymes, including those in the citric acid cycle, as well as catalases and peroxidases. It is added as a preservative in a wide variety of foods, including dairy, bread, fruit, and vegetable products.
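The pH dependence of these weak-acid preservatives follows from acid-base chemistry: only the uncharged, protonated form of the acid readily crosses microbial membranes. A back-of-the-envelope Henderson-Hasselbalch calculation illustrates the effect, using the standard pKa of benzoic acid (approximately 4.2, a value not given in this chapter):

\[ \text{fraction undissociated} = \frac{1}{1 + 10^{\,\mathrm{pH} - \mathrm{p}K_a}} \]

At pH 3, this fraction is \( 1/(1 + 10^{3-4.2}) \approx 0.94 \), whereas at pH 6 it is \( 1/(1 + 10^{6-4.2}) \approx 0.016 \). In other words, roughly 94% of the benzoic acid is in its membrane-permeant form in a strongly acidic food, versus less than 2% near neutral pH, consistent with the greater preservative efficacy at low pH.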
Benzoic acid is found naturally in many types of fruits and berries, spices, and fermented products. It is thought to work by decreasing intracellular pH, interfering with mechanisms such as oxidative phosphorylation and the uptake of molecules such as amino acids into cells. Foods preserved with benzoic acid or sodium benzoate include fruit juices, jams, ice creams, pastries, soft drinks, chewing gum, and pickles. Propionic acid is thought to both inhibit enzymes and decrease intracellular pH, working similarly to benzoic acid. However, propionic acid is a more effective preservative at a higher pH than either sorbic acid or benzoic acid. Propionic acid is naturally produced by some cheeses during their ripening and is added to other types of cheese and baked goods to prevent mold contamination. It is also added to raw dough to prevent contamination by the bacterium Bacillus mesentericus, which causes bread to become ropy. Other commonly used chemical preservatives include sulfur dioxide and nitrites. Sulfur dioxide prevents browning of foods and is used for the preservation of dried fruits; it has been used in winemaking since ancient times. Sulfur dioxide gas dissolves in water readily, forming sulfites. Although sulfites can be metabolized by the body, some people have sulfite allergies, including asthmatic reactions. Additionally, sulfites degrade thiamine, an important nutrient in some foods. The mode of action of sulfites is not entirely clear, but they may interfere with disulfide bond formation (see Figure 7.21) in proteins, inhibiting enzymatic activity. Alternatively, they may reduce the intracellular pH of the cell, interfering with proton motive force-driven mechanisms. Nitrites are added to processed meats to maintain color and stop the germination of Clostridium botulinum endospores. Nitrites are reduced to nitric oxide, which reacts with heme groups and iron-sulfur groups. When nitric oxide reacts with the heme group within the myoglobin of meats, a red product forms, giving meat its red color. Alternatively, it is thought that when nitric oxide reacts with the iron-sulfur enzyme ferredoxin within bacteria, this electron transport-chain carrier is destroyed, preventing ATP synthesis. Nitrosamines, however, are carcinogenic and can be produced through exposure of nitrite-preserved meats (e.g., hot dogs, lunch meat, breakfast sausage, bacon, meat in canned soups) to heat during cooking. Natural Chemical Food Preservatives The discovery of natural antimicrobial substances produced by other microbes has added to the arsenal of preservatives used in food. Nisin is an antimicrobial peptide produced by the bacterium Lactococcus lactis and is particularly effective against gram-positive organisms. Nisin works by disrupting cell wall production, leaving cells more prone to lysis. It is used to preserve cheeses, meats, and beverages. Natamycin is an antifungal macrolide antibiotic produced by the bacterium Streptomyces natalensis. It was approved by the FDA in 1982 and is used to prevent fungal growth in various types of dairy products, including cottage cheese, sliced cheese, and shredded cheese. Natamycin is also used for meat preservation in countries outside the United States. Check Your Understanding What are the advantages and drawbacks of using sulfites and nitrites as food preservatives?
Chemical Disinfectants (summary table: chemical class, mode of action, example uses)
- Phenolics (cresols, o-phenylphenol, hexachlorophene, triclosan). Mode of action: denature proteins and disrupt membranes. Example uses: disinfectant in Lysol; prevention of contamination of crops (citrus); antibacterial soap; pHisoHex for handwashing in hospitals.
- Metals (mercury, silver, copper, nickel, zinc). Mode of action: bind to proteins and inhibit enzyme activity. Example uses: topical antiseptic; treatment of wounds and burns; prevention of eye infections in newborns; antibacterial agent in catheters and bandages; mouthwash; algicide for pools and fish tanks; containers for long-term water storage.
- Halogens (iodine, chlorine, fluorine). Mode of action: oxidation and destabilization of cellular macromolecules. Example uses: topical antiseptic; hand scrub for medical personnel; water disinfectant; water treatment plants; household bleach; food processing; prevention of dental caries.
- Alcohols (ethanol, isopropanol). Mode of action: denature proteins and disrupt membranes. Example uses: disinfectant; antiseptic.
- Surfactants (quaternary ammonium salts). Mode of action: lower the surface tension of water to help wash away microbes, and disrupt cell membranes. Example uses: soaps and detergents; disinfectant; antiseptic; mouthwash.
- Bisbiguanides (chlorhexidine, alexidine). Mode of action: disruption of cell membranes. Example uses: oral rinse; hand scrub for medical personnel.
- Alkylating agents (formaldehyde, glutaraldehyde, o-phthalaldehyde, ethylene oxide, β-propiolactone). Mode of action: inactivation of enzymes and nucleic acids. Example uses: disinfectant; tissue specimen storage; embalming; sterilization of medical equipment; vaccine component for sterility.
- Peroxygens (hydrogen peroxide, peracetic acid, benzoyl peroxide, carbamide peroxide, ozone gas). Mode of action: oxidation and destabilization of cellular macromolecules. Example uses: antiseptic; disinfectant; acne medication; toothpaste ingredient.
- Supercritical gases (carbon dioxide). Mode of action: penetrates cells, forms carbonic acid, lowers intracellular pH. Example uses: food preservation; disinfection of medical devices; disinfection of transplant tissues.
- Chemical food preservatives (sorbic acid, benzoic acid, propionic acid, potassium sorbate, sodium benzoate, calcium propionate, sulfur dioxide, nitrites). Mode of action: decrease pH and inhibit enzymatic function. Example uses: preservation of food products.
- Natural food preservatives (nisin, natamycin). Mode of action: inhibition of cell wall synthesis (nisin). Example uses: preservation of dairy products, meats, and beverages.
13.4 Testing the Effectiveness of Antiseptics and Disinfectants Learning Objectives
- Describe why the phenol coefficient is used
- Compare and contrast the disk-diffusion, use-dilution, and in-use methods for testing the effectiveness of antiseptics, disinfectants, and sterilants
The effectiveness of various chemical disinfectants is reflected in the terms used to describe them. Chemical disinfectants are grouped by the power of their activity, with each category reflecting the types of microbes and viruses its component disinfectants are effective against. High-level germicides have the ability to kill vegetative cells, fungi, viruses, and endospores and, with extended use, can achieve sterilization. Intermediate-level germicides, as their name suggests, are less effective against endospores and certain viruses; low-level germicides kill only vegetative cells and certain enveloped viruses and are ineffective against endospores. In addition, several environmental conditions influence the potency of an antimicrobial agent and its effectiveness. For example, length of exposure is particularly important, with longer exposure increasing efficacy. Similarly, the concentration of the chemical agent is also important, with higher concentrations being more effective than lower ones.
Temperature, pH, and other factors can also affect the potency of a disinfecting agent. One method to determine the effectiveness of a chemical agent includes swabbing surfaces before and after use to confirm whether a sterile field was maintained during use. Additional tests are described in the sections that follow. These tests allow for the maintenance of appropriate disinfection protocols in clinical settings, controlling microbial growth to protect patients, health-care workers, and the community. Phenol Coefficient The effectiveness of a disinfectant or antiseptic can be determined in a number of ways. Historically, a chemical agent’s effectiveness was often compared with that of phenol, the first chemical agent used by Joseph Lister. In 1903, British chemists Samuel Rideal (1863–1929) and J. T. Ainslie Walker (1868–1930) established a protocol to compare the effectiveness of a variety of chemicals with that of phenol, using as their test organisms Staphylococcus aureus (a gram-positive bacterium) and Salmonella enterica serovar Typhi (a gram-negative bacterium). They exposed the test bacteria to the antimicrobial chemical solutions diluted in water for 7.5 minutes. They then calculated a phenol coefficient for each chemical for each of the two bacteria tested: the ratio of the greatest dilution of the test chemical that killed the bacteria under these conditions to the greatest dilution of phenol that did the same. (A short worked sketch of this calculation appears at the end of this chapter.) A phenol coefficient of 1.0 means that the chemical agent has about the same level of effectiveness as phenol. A chemical agent with a phenol coefficient of less than 1.0 is less effective than phenol. An example is formalin, with phenol coefficients of 0.3 (S. aureus) and 0.7 (S. enterica serovar Typhi). A chemical agent with a phenol coefficient greater than 1.0 is more effective than phenol, such as chloramine, with phenol coefficients of 133 (S. aureus) and 100 (S. enterica serovar Typhi). Although the phenol coefficient was once a useful measure of effectiveness, it is no longer commonly used because the conditions and organisms used were arbitrarily chosen. Check Your Understanding What are the differences between the three levels of disinfectant effectiveness? Disk-Diffusion Method The disk-diffusion method involves applying different chemicals to separate, sterile filter paper disks (Figure 13.31). The disks are then placed on an agar plate that has been inoculated with the targeted bacterium, and the chemicals diffuse out of the disks into the agar where the bacteria have been inoculated. As the “lawn” of bacteria grows, zones of inhibition of microbial growth are observed as clear areas around the disks. Although there are other factors that contribute to the sizes of zones of inhibition (e.g., whether the agent is water soluble and able to diffuse in the agar), larger zones typically correlate with greater inhibitory effectiveness of the chemical agent. The diameter across each zone is measured in millimeters. Check Your Understanding When comparing the activities of two disinfectants against the same microbe, using the disk-diffusion assay, and assuming both are water soluble and can easily diffuse in the agar, would a more effective disinfectant have a larger zone of inhibition or a smaller one? Use-Dilution Test Other methods are also used for measuring the effectiveness of a chemical agent in clinical settings. The use-dilution test is commonly used to determine a chemical’s disinfection effectiveness on an inanimate surface. For this test, a cylinder of stainless steel is dipped in a culture of the targeted microorganism and then dried.
The cylinder is then dipped in solutions of disinfectant at various concentrations for a specified amount of time. Finally, the cylinder is transferred to a new test tube containing fresh sterile medium that does not contain disinfectant, and this test tube is incubated. Bacterial survival is demonstrated by the presence of turbidity in the medium, whereas killing of the target organism on the cylinder by the disinfectant will produce no turbidity. AOAC International (originally the Association of Official Agricultural Chemists), a nonprofit group that establishes many protocol standards, has determined that a minimum of 59 of 60 replicates must show no growth in such a test to achieve a passing result, and the results must be repeatable from different batches of disinfectant and when performed on different days. Disinfectant manufacturers perform use-dilution tests to validate the efficacy claims for their products, as designated by the EPA. Check Your Understanding Is the use-dilution test performed in a clinical setting? Why? In-Use Test An in-use test can determine whether an actively used solution of disinfectant in a clinical setting is microbially contaminated (Figure 13.32). A 1-mL sample of the used disinfectant is diluted into 9 mL of sterile broth medium that also contains a compound to inactivate the disinfectant. Ten drops, totaling approximately 0.2 mL of this mixture, are then inoculated onto each of two agar plates. One plate is incubated at 37 °C for 3 days and the other is incubated at room temperature for 7 days. The plates are monitored for growth of microbial colonies. Growth of five or more colonies on either plate suggests that viable microbial cells existed in the disinfectant solution and that it is contaminated. (The colony-count arithmetic behind this threshold is sketched at the end of this chapter.) Such in-use tests monitor the effectiveness of disinfectants in the clinical setting. Check Your Understanding What does a positive in-use test indicate? Clinical Focus Resolution Despite antibiotic treatment, Roberta’s symptoms worsened. She developed pyelonephritis, a severe kidney infection, and was rehospitalized in the intensive care unit (ICU). Her condition continued to deteriorate, and she developed symptoms of septic shock. At this point, her physician ordered a culture from her urine to determine the exact cause of her infection, as well as a drug sensitivity test to determine what antibiotics would be effective against the causative bacterium. The results of this test indicated resistance to a wide range of antibiotics, including the carbapenems, a class of antibiotics that are used as a last resort for many types of bacterial infections. This was an alarming outcome, suggesting that Roberta’s infection was caused by a so-called superbug: a bacterial strain that has developed resistance to the majority of commonly used antibiotics. In this case, the causative agent belonged to the carbapenem-resistant Enterobacteriaceae (CRE), a drug-resistant family of bacteria normally found in the digestive system (Figure 13.33). When CRE is introduced to other body systems, as might occur through improperly cleaned surgical instruments, catheters, or endoscopes, aggressive infections can occur. CRE infections are notoriously difficult to treat, with a 40%–50% fatality rate. To treat her kidney infection and septic shock, Roberta was treated with dialysis, intravenous fluids, and medications to maintain blood pressure and prevent blood clotting.
She was also started on aggressive treatment with intravenous administration of a new drug called tigecycline, which has been successful in treating infections caused by drug-resistant bacteria. After several weeks in the ICU, Roberta recovered from her CRE infection. However, public health officials soon noticed that Roberta’s case was not isolated. Several patients who underwent similar procedures at the same hospital also developed CRE infections, some dying as a result. Ultimately, the source of the infection was traced to the duodenoscopes used in the procedures. Despite the hospital staff’s meticulous adherence to manufacturer protocols for disinfection, bacteria, including CRE, remained within the instruments and were introduced to patients during procedures. Go back to the previous Clinical Focus box. Eye on Ethics Who Is Responsible? Carbapenem-resistant Enterobacteriaceae infections due to contaminated endoscopes have become a high-profile problem in recent years. Several CRE outbreaks have been traced to endoscopes, including a case at Ronald Reagan UCLA Medical Center in early 2015 in which 179 patients may have been exposed to a contaminated endoscope. Seven of the patients developed infections, and two later died. Several lawsuits have been filed against Olympus, the manufacturer of the endoscopes. Some claim that Olympus did not obtain FDA approval for design changes that may have led to contamination, and others claim that the manufacturer knowingly withheld information from hospitals concerning defects in the endoscopes. Lawsuits like these raise difficult-to-answer questions about liability. Invasive procedures are inherently risky, but negative outcomes can be minimized by strict adherence to established protocols. Who is responsible, however, when negative outcomes occur due to flawed protocols or faulty equipment? Can hospitals or health-care workers be held liable if they have strictly followed a flawed procedure? Should manufacturers be held liable—and perhaps be driven out of business—if their lifesaving equipment fails or is found defective? What is the government’s role in ensuring that use and maintenance of medical equipment and protocols are fail-safe? Protocols for cleaning or sterilizing medical equipment are often developed by government agencies like the FDA and by other groups, like the AOAC, a nonprofit scientific organization that establishes many protocols for standard use globally. These procedures and protocols are then adopted by medical device and equipment manufacturers. Ultimately, the end users (hospitals and their staff) are responsible for following these procedures and can be held liable if a breach occurs and patients become ill from improperly cleaned equipment. Unfortunately, protocols are not infallible, and sometimes it takes negative outcomes to reveal their flaws. In 2008, the FDA approved a disinfection protocol for endoscopes, using glutaraldehyde (at a lower concentration when mixed with phenol), o-phthalaldehyde, hydrogen peroxide, peracetic acid, and a mix of hydrogen peroxide with peracetic acid. However, subsequent CRE outbreaks from endoscope use showed that this protocol alone was inadequate. As a result of CRE outbreaks, hospitals, manufacturers, and the FDA are investigating solutions. Many hospitals are instituting more rigorous cleaning procedures than those mandated by the FDA.
Manufacturers are looking for ways to redesign duodenoscopes to minimize hard-to-reach crevices where bacteria can escape disinfectants, and the FDA is updating its protocols. In February 2015, the FDA added new recommendations for careful hand cleaning of the duodenoscope elevator mechanism (the location where microbes are most likely to escape disinfection) and called for more careful documentation of quality control for disinfection protocols (Figure 13.34). There is no guarantee that new procedures, protocols, or equipment will completely eliminate the risk for infection associated with endoscopes. Yet these devices are used successfully in 500,000–650,000 procedures annually in the United States, many of them lifesaving. At what point do the risks outweigh the benefits of these devices, and who should be held responsible when negative outcomes occur?
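Before leaving this chapter, it may help to see the two quantitative ideas from the testing section reduced to arithmetic. The sketch below is illustrative and not part of the original text: the phenol coefficient is computed as the ratio of greatest effective dilutions (test chemical versus phenol), and the in-use test back-calculates viable cells per mL of disinfectant from a colony count on a plated volume of the 1:10 dilution. The dilution factors used here are hypothetical examples.

```python
# Illustrative sketch of the two calculations described in this chapter's
# testing section. All numeric inputs are hypothetical, not chapter data.

def phenol_coefficient(test_dilution: float, phenol_dilution: float) -> float:
    """Rideal-Walker style ratio: greatest effective dilution of the test
    chemical divided by the greatest effective dilution of phenol.
    Dilutions are expressed as factors, e.g. 300 for a 1:300 dilution."""
    return test_dilution / phenol_dilution

# A chemical effective at a 1:300 dilution versus phenol effective at 1:100
# would have a phenol coefficient of 3.0 (more effective than phenol).
print(phenol_coefficient(300, 100))  # 3.0

def in_use_cfu_per_ml(colonies: int, plated_ml: float = 0.2,
                      dilution_factor: int = 10) -> float:
    """Estimate viable cells per mL of the original used disinfectant:
    1 mL of disinfectant is diluted into 9 mL of broth (1:10), then
    approximately 0.2 mL of that mixture is plated."""
    return colonies / plated_ml * dilution_factor

# Five colonies on a plate, the pass/fail threshold in the text, works out
# to roughly 250 viable cells per mL of the original disinfectant.
print(in_use_cfu_per_ml(5))  # 250.0
```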
principles_of_accounting,_volume_1:_financial_accounting
Summary 11.1 Distinguish between Tangible and Intangible Assets Tangible assets are assets that have physical substance. Long-term tangible assets are assets used in the normal course of operation of businesses that last for more than one year and are not intended to be resold. Examples of long-term tangible assets are land, building, and machinery. Intangible assets lack physical substance but often have value and legal rights and protections, and therefore are still assets to the firm. Examples of intangible assets are patents, trademarks, copyrights, and goodwill. 11.2 Analyze and Classify Capitalized Costs versus Expenses Costs incurred to purchase an asset that will be used in the day-to-day operations of the business will be capitalized and then depreciated over the useful life of that asset. Costs incurred to purchase an asset that will not be used in the day-to-day operations, but was purchased for investment purposes, will be considered an investment asset. Investments are short term (can be converted to cash in one year) or long term (held for over a year). Costs incurred during the life of the asset are expensed right away if they do not extend the useful life of that asset or are capitalized if they extend the asset’s useful life. 11.3 Explain and Apply Depreciation Methods to Allocate Capitalized Costs Fixed assets are recorded at the historical (initial) cost, including any costs to acquire the asset and get it ready for use. Depreciation is the process of allocating the cost of using a long-term asset over its anticipated economic (useful) life. To determine depreciation, one needs the fixed asset’s historical cost, salvage value, and useful life (in years or units). There are three main methods to calculate depreciation: the straight-line method, units-of-production method, and double-declining-balance method. Natural resources are tangible assets occurring in nature that a company owns, which are consumed when used. Natural resources are depleted over the life of the asset, using a units-consumed method. Intangible assets are amortized over the life of the asset. Amortization is different from depreciation as there is typically no salvage value, the straight-line method is typically used, and no accumulated amortization account is required. 11.4 Describe Accounting for Intangible Assets and Record Related Transactions Intangible assets are expensed using amortization. This is similar to depreciation but is credited to the intangible asset rather than to a contra account. Finite intangible assets are typically amortized using the straight-line method over the useful life of the asset. Intangible assets with an indefinite life are not amortized but are assessed yearly for impairment. 11.5 Describe Some Special Issues in Accounting for Long-Term Assets Because estimates are used to calculate depreciation of fixed assets, sometimes adjustments may need to be made to the asset’s useful life or to its salvage value. To make these adjustments, the asset’s net book value is updated, and then the adjustments are made for the remaining years. Assets are sometimes sold before the end of their useful life. These sales can result in a gain, a loss, or neither, depending on the cash received and the asset’s net book value.
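The last point in the summary of 11.5, that a sale yields a gain, a loss, or neither depending on the cash received versus the asset's net book value, reduces to one subtraction. The following minimal sketch is illustrative rather than an example from the text; the dollar amounts are hypothetical.

```python
# Minimal sketch of the disposal arithmetic summarized in 11.5.
# All amounts are hypothetical.

def gain_or_loss_on_sale(cash_received: float, cost: float,
                         accumulated_depreciation: float) -> float:
    """Positive result = gain, negative = loss, zero = neither.
    Net book value = historical cost - accumulated depreciation."""
    net_book_value = cost - accumulated_depreciation
    return cash_received - net_book_value

# An asset bought for $10,000 with $7,000 of accumulated depreciation
# has a net book value of $3,000:
print(gain_or_loss_on_sale(3_500, 10_000, 7_000))  # 500.0   -> gain
print(gain_or_loss_on_sale(2_000, 10_000, 7_000))  # -1000.0 -> loss
print(gain_or_loss_on_sale(3_000, 10_000, 7_000))  # 0.0     -> neither
```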
Chapter Outline 11.1 Distinguish between Tangible and Intangible Assets 11.2 Analyze and Classify Capitalized Costs versus Expenses 11.3 Explain and Apply Depreciation Methods to Allocate Capitalized Costs 11.4 Describe Accounting for Intangible Assets and Record Related Transactions 11.5 Describe Some Special Issues in Accounting for Long-Term Assets Why It Matters Liam is excited to be graduating from his MBA program and looks forward to having more time to pursue his business venture. During one of his courses, Liam came up with the business idea of creating trendy workout attire. For his class project, he started silk-screening vintage album cover designs onto tanks, tees, and yoga pants. He tested the market by selling his wares on campus and was surprised how quickly and how often he sold out. In fact, sales were high enough that he decided to go into business for himself. One of his first decisions involved whether he should continue to pay someone else to silk-screen his designs or do his own silk-screening. To do his own silk-screening, he would need to invest in a silk-screening machine. Liam will need to analyze the purchase of a silk-screening machine to determine the impact on his business in the short term as well as the long term, including the accounting implications related to the expense of this machine. Liam knows that over time, the value of the machine will decrease, but he also knows that an asset is supposed to be recorded on the books at its historical cost. He also wonders what costs are considered part of this asset. Additionally, Liam has learned about the matching principle (expense recognition) but needs to learn how that relates to a machine that is purchased in one year and used for many years to help generate revenue. Liam has a lot of information to consider before making this decision.
[ { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Businesses typically need many different types of these assets to meet their objectives . These assets differ from the company ’ s products . For example , the computers that Apple Inc . intends to sell are considered inventory ( a short-term asset ) , whereas the computers Apple ’ s employees use for day-to-day operations are long-term assets . In Liam ’ s case , the new silk-screening machine would be considered a long-term tangible asset as he plans to use it over many years to help him generate revenue for his business . <hl> Long-term tangible assets are listed as noncurrent assets on a company ’ s balance sheet . <hl> <hl> Typically , these assets are listed under the category of Property , Plant , and Equipment ( PP & E ) , but they may be referred to as fixed assets or plant assets . <hl> <hl> An asset is considered a tangible asset when it is an economic resource that has physical substance — it can be seen and touched . <hl> <hl> Tangible assets can be either short term , such as inventory and supplies , or long term , such as land , buildings , and equipment . <hl> To be considered a long-term tangible asset , the item needs to be used in the normal operation of the business for more than one year , not be near the end of its useful life , and the company must have no plan to sell the item in the near future . The useful life is the time period over which an asset cost is allocated . <hl> Long-term tangible assets are known as fixed assets . <hl>", "hl_sentences": "Long-term tangible assets are listed as noncurrent assets on a company ’ s balance sheet . Typically , these assets are listed under the category of Property , Plant , and Equipment ( PP & E ) , but they may be referred to as fixed assets or plant assets . An asset is considered a tangible asset when it is an economic resource that has physical substance — it can be seen and touched . Tangible assets can be either short term , such as inventory and supplies , or long term , such as land , buildings , and equipment . Long-term tangible assets are known as fixed assets .", "question": { "cloze_format": "The asset type of Property, Plant, and Equipment is called ___ .", "normal_format": "Property, Plant, and Equipment is considered which type of asset?", "question_choices": [ "current asset", "contra asset", "tangible asset", "intangible asset" ], "question_id": "fs-idm261966352", "question_text": "Property, Plant, and Equipment is considered which type of asset?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "inventory" }, "bloom": null, "hl_context": "Companies may have other long-term assets used in the operations of the business that they do not intend to sell , but that do not have physical substance ; these assets still provide specific rights to the owner and are called intangible assets . These assets typically appear on the balance sheet following long-term tangible assets ( see Figure 11.3 . ) <hl> 3 Examples of intangible assets are patents , copyrights , franchises , licenses , goodwill , sometimes software , and trademarks ( Table 11.1 ) . <hl> Because the value of intangible assets is very subjective , it is usually not shown on the balance sheet until there is an event that indicates value objectively , such as the purchase of an intangible asset . 3 Apple , Inc . U . S . Securities and Exchange Commission 10 - K Filing . November 3 , 2017 . 
http://pdf.secdatabase.com/2624/0000320193-17-000070.pdf", "hl_sentences": "3 Examples of intangible assets are patents , copyrights , franchises , licenses , goodwill , sometimes software , and trademarks ( Table 11.1 ) .", "question": { "cloze_format": "___ would not be considered an intangible asset.", "normal_format": "Which of the following would not be considered an intangible asset?", "question_choices": [ "goodwill", "patent", "copyright", "inventory" ], "question_id": "fs-idm781221392", "question_text": "Which of the following would not be considered an intangible asset?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> A patent is a contract that provides a company exclusive rights to produce and sell a unique product . <hl> The rights are granted to the inventor by the federal government and provide exclusivity from competition for twenty years . Patents are common within the pharmaceutical industry as they provide an opportunity for drug companies to recoup the significant financial investment on research and development of a new drug . Once the new drug is produced , the company can sell it for twenty years with no direct competition .", "hl_sentences": "A patent is a contract that provides a company exclusive rights to produce and sell a unique product .", "question": { "cloze_format": "The legal protection that provides a company exclusive rights to produce and sell a unique product is known as ___.", "normal_format": "The legal protection that provides a company exclusive rights to produce and sell a unique product is known as which of the following?", "question_choices": [ "trademark", "copyright", "patent", "goodwill" ], "question_id": "fs-idm261941744", "question_text": "The legal protection that provides a company exclusive rights to produce and sell a unique product is known as which of the following?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "Capitalizing a cost means to record it as an asset." }, "bloom": null, "hl_context": "When a business purchases a long-term asset ( used for more than one year ) , it classifies the asset based on whether the asset is used in the business ’ s operations . If a long-term asset is used in the business operations , it will belong in property , plant , and equipment or intangible assets . In this situation the asset is typically capitalized . <hl> Capitalization is the process by which a long-term asset is recorded on the balance sheet and its allocated costs are expensed on the income statement over the asset ’ s economic life . <hl> Explain and Apply Depreciation Methods to Allocate Capitalized Costs addresses the available methods that companies may choose for expensing capitalized assets .", "hl_sentences": "Capitalization is the process by which a long-term asset is recorded on the balance sheet and its allocated costs are expensed on the income statement over the asset ’ s economic life .", "question": { "cloze_format": "The correct statement about capitalizing costs is that ___ .", "normal_format": "Which of the following statements about capitalizing costs is correct?", "question_choices": [ "Capitalizing costs refers to the process of converting assets to expenses.", "Only the purchase price of the asset is capitalized.", "Capitalizing a cost means to record it as an asset.", "Capitalizing costs results in an immediate decrease in net income." 
], "question_id": "fs-idm237255120", "question_text": "Which of the following statements about capitalizing costs is correct?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> The expense recognition principle that requires that the cost of the asset be allocated over the asset ’ s useful life is the process of depreciation . <hl> For example , if we buy a delivery truck to use for the next five years , we would allocate the cost and record depreciation expense across the entire five-year period . The calculation of the depreciation expense for a period is not based on anticipated changes in the fair market value of the asset ; instead , the depreciation is based on the allocation of the cost of owning the asset over the period of its useful life . <hl> Depreciation is the process of allocating the cost of a tangible asset over its useful life , or the period of time that the business believes it will use the asset to help generate revenue . <hl> This process will be described in Explain and Apply Depreciation Methods to Allocate Capitalized Costs .", "hl_sentences": "The expense recognition principle that requires that the cost of the asset be allocated over the asset ’ s useful life is the process of depreciation . Depreciation is the process of allocating the cost of a tangible asset over its useful life , or the period of time that the business believes it will use the asset to help generate revenue .", "question": { "cloze_format": "Depreciation of a plant asset is the process of ________.", "normal_format": "Depreciation of a plant asset is the process of which of the following?", "question_choices": [ "asset valuation for statement of financial position purposes", "allocation of the asset’s cost to the periods of use", "fund accumulation for the replacement of the asset", "asset valuation based on current replacement cost data" ], "question_id": "fs-idm391075456", "question_text": "Depreciation of a plant asset is the process of ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "double-declining-balance depreciation" }, "bloom": null, "hl_context": "<hl> The double-declining-balance depreciation method is the most complex of the three methods because it accounts for both time and usage and takes more expense in the first few years of the asset ’ s life . <hl> <hl> Double-declining considers time by determining the percentage of depreciation expense that would exist under straight-line depreciation . <hl> To calculate this , divide 100 % by the estimated life in years . For example , a five-year asset would be 100/5 , or 20 % a year . A four-year asset would be 100/4 , or 25 % a year . Next , because assets are typically more efficient and “ used ” more heavily early in their life span , the double-declining method takes usage into account by doubling the straight-line percentage . For a four-year asset , multiply 25 % ( 100 % / 4 - year life ) × 2 , or 50 % . For a five-year asset , multiply 20 % ( 100 % / 5 - year life ) × 2 , or 40 % .", "hl_sentences": "The double-declining-balance depreciation method is the most complex of the three methods because it accounts for both time and usage and takes more expense in the first few years of the asset ’ s life . 
Double-declining considers time by determining the percentage of depreciation expense that would exist under straight-line depreciation .", "question": { "cloze_format": "An accelerated depreciation method that takes more expense in the first few years of the asset’s life is ________.", "normal_format": "What is an accelerated depreciation method that takes more expense in the first few years of the asset’s life?", "question_choices": [ "units-of-production depreciation", "double-declining-balance depreciation", "accumulated depreciation", "straight-line depreciation" ], "question_id": "fs-idm383115984", "question_text": "An accelerated depreciation method that takes more expense in the first few years of the asset’s life is ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "An asset is considered a tangible asset when it is an economic resource that has physical substance — it can be seen and touched . Tangible assets can be either short term , such as inventory and supplies , or long term , such as land , buildings , and equipment . <hl> To be considered a long-term tangible asset , the item needs to be used in the normal operation of the business for more than one year , not be near the end of its useful life , and the company must have no plan to sell the item in the near future . <hl> <hl> The useful life is the time period over which an asset cost is allocated . <hl> Long-term tangible assets are known as fixed assets .", "hl_sentences": "To be considered a long-term tangible asset , the item needs to be used in the normal operation of the business for more than one year , not be near the end of its useful life , and the company must have no plan to sell the item in the near future . The useful life is the time period over which an asset cost is allocated .", "question": { "cloze_format": "The estimated economic life of an asset is also known as ________.", "normal_format": "What is the estimated economic life of an asset also known as?", "question_choices": [ "residual value", "book value", "salvage life", "useful life" ], "question_id": "fs-idm398377568", "question_text": "The estimated economic life of an asset is also known as ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "depreciation" }, "bloom": null, "hl_context": "Recall that intangible assets are recorded as long-term assets at their cost . As with tangible assets , many intangible assets have a finite ( limited ) life span so their costs must be allocated over their useful lives : this process is amortization . <hl> Depreciation and amortization are similar in nature but have some important differences . <hl> First , amortization is typically only done using the straight-line method . Second , there is usually no salvage value for intangible assets because they are completely used up over their life span . 
Finally , an accumulated amortization account is not required to record yearly expenses ( as is needed with depreciation ); instead , the intangible asset account is written down each period .", "hl_sentences": "Depreciation and amortization are similar in nature but have some important differences .", "question": { "cloze_format": "The amortization process is like the process of ___ .", "normal_format": "The amortization process is like what other process?", "question_choices": [ "depreciation", "valuation", "recognizing revenue", "capitalization" ], "question_id": "fs-idm263715680", "question_text": "The amortization process is like what other process?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> Goodwill does not have an expected life span and therefore is not amortized . <hl> <hl> However , a company is required to compare the book value of goodwill to its market value at least annually to determine if it needs to be adjusted . <hl> <hl> This comparison process is called testing for impairment . <hl> If the market value of goodwill is found to be lower than the book value , then goodwill needs to be reduced to its market value . If goodwill is impaired , it is reduced with a credit , and an impairment loss is debited . Goodwill is never increased beyond its original cost . For example , if the new owner of London Hoops assesses that London Hoops now has a fair value of $ 9,000 , 000 rather than the $ 10,000 , 000 of the original purchase , the owner would need to record the impairment as shown in the following journal entry .", "hl_sentences": "Goodwill does not have an expected life span and therefore is not amortized . However , a company is required to compare the book value of goodwill to its market value at least annually to determine if it needs to be adjusted . This comparison process is called testing for impairment .", "question": { "cloze_format": "The way in which intangible assets with an indefinite life are treated is that ___.", "normal_format": "How are intangible assets with an indefinite life treated?", "question_choices": [ "They are depreciated.", "They are amortized.", "They are depleted.", "They are tested yearly for impairment." ], "question_id": "fs-idm7753083680", "question_text": "How are intangible assets with an indefinite life treated?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "impaired; reducing it with a credit" }, "bloom": null, "hl_context": "Goodwill does not have an expected life span and therefore is not amortized . However , a company is required to compare the book value of goodwill to its market value at least annually to determine if it needs to be adjusted . This comparison process is called testing for impairment . <hl> If the market value of goodwill is found to be lower than the book value , then goodwill needs to be reduced to its market value . <hl> <hl> If goodwill is impaired , it is reduced with a credit , and an impairment loss is debited . <hl> Goodwill is never increased beyond its original cost . For example , if the new owner of London Hoops assesses that London Hoops now has a fair value of $ 9,000 , 000 rather than the $ 10,000 , 000 of the original purchase , the owner would need to record the impairment as shown in the following journal entry .", "hl_sentences": "If the market value of goodwill is found to be lower than the book value , then goodwill needs to be reduced to its market value . 
If goodwill is impaired , it is reduced with a credit , and an impairment loss is debited .", "question": { "cloze_format": "If the market value of goodwill is found to be lower than the book value, goodwill is __________ and must be adjusted by __________.", "normal_format": "If the market value of goodwill is found to be lower than the book value, goodwill is what? And how must it be adjusted? ", "question_choices": [ "worthless; reducing it with a credit", "impaired; reducing it with a credit", "impaired; increasing it with a credit", "worthless; increasing it with a credit" ], "question_id": "fs-idm260890288", "question_text": "If the market value of goodwill is found to be lower than the book value, goodwill is __________ and must be adjusted by __________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> A company will account for some events for long-term assets that are less routine than recording purchase and depreciation or amortization . <hl> <hl> For example , a company may realize that its original estimate of useful life or salvage value is no longer accurate . <hl> A long-term asset may lose its value , or a company may sell a long-term asset .", "hl_sentences": "A company will account for some events for long-term assets that are less routine than recording purchase and depreciation or amortization . For example , a company may realize that its original estimate of useful life or salvage value is no longer accurate .", "question": { "cloze_format": "___ represents an event that is less routine when accounting for long-term assets.", "normal_format": "Which of the following represents an event that is less routine when accounting for long-term assets?", "question_choices": [ "recording an asset purchase", "recording depreciation on an asset", "recording accumulated depreciation for an asset or asset category", "changing the estimated useful life of an asset" ], "question_id": "fs-idm388457440", "question_text": "Which of the following represents an event that is less routine when accounting for long-term assets?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "Depreciation expense calculations may need to be updated using new and more accurate estimates." }, "bloom": null, "hl_context": "<hl> As you have learned , depreciation is based on estimating both the useful life of an asset and the salvage value of that asset . <hl> <hl> Over time , these estimates may be proven inaccurate and need to be adjusted based on new information . <hl> <hl> When this occurs , the depreciation expense calculation should be changed to reflect the new ( more accurate ) estimates . <hl> <hl> For this entry , the remaining depreciable balance of the net book value is allocated over the new useful life of the asset . <hl> To work through this process with data , let ’ s return to the example of Kenzie Company .", "hl_sentences": "As you have learned , depreciation is based on estimating both the useful life of an asset and the salvage value of that asset . Over time , these estimates may be proven inaccurate and need to be adjusted based on new information . When this occurs , the depreciation expense calculation should be changed to reflect the new ( more accurate ) estimates . 
For this entry , the remaining depreciable balance of the net book value is allocated over the new useful life of the asset .", "question": { "cloze_format": "A true statement regarding special issues in accounting for long-term assets is that ___ .", "normal_format": "Which of the following is true regarding special issues in accounting for long-term assets?", "question_choices": [ "An asset’s useful life can never be changed.", "An asset’s salvage value can never be changed.", "Depreciation expense calculations may need to be updated using new and more accurate estimates.", "Asset values are never reduced in value due to physical deterioration." ], "question_id": "fs-idm339939952", "question_text": "Which of the following is true regarding special issues in accounting for long-term assets?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "Obsolescence refers to the reduction in value and / or use of the asset . Obsolescence has traditionally resulted from the physical deterioration of the asset — called physical obsolescence . In current application — and considering the role of modern technology and tech assets — accounting for functional obsolescence is becoming more common . <hl> Functional obsolescence is the loss of value from all causes within a property except those due to physical deterioration . <hl> With functional obsolescence , the useful life still needs to be adjusted downward : although the asset physically still works , its functionality makes it less useful for the company . Also , an adjustment might be necessary in the salvage value . This potential adjustment depends on the specific details of each obsolescence determination or decision .", "hl_sentences": "Functional obsolescence is the loss of value from all causes within a property except those due to physical deterioration .", "question": { "cloze_format": "The loss in value from all causes within a property except those due to physical deterioration is known as ___ .", "normal_format": "The loss in value from all causes within a property except those due to physical deterioration is known as which of the following?", "question_choices": [ "functional obsolescence", "obsolescence", "true obsolescence", "deterioration" ], "question_id": "fs-idm394959376", "question_text": "The loss in value from all causes within a property except those due to physical deterioration is known as which of the following?" }, "references_are_paraphrase": null } ]
11
11.1 Distinguish between Tangible and Intangible Assets Assets are items a business owns. 1 For accounting purposes, assets are categorized as current versus long term, and tangible versus intangible. Assets that are expected to be used by the business for more than one year are considered long-term assets. They are not intended for resale and are anticipated to help generate revenue for the business in the future. Some common long-term assets are computers and other office machines, buildings, vehicles, software, computer code, and copyrights. Although these are all considered long-term assets, some are tangible and some are intangible. 1 The Financial Accounting Standards Board (FASB) defines assets as “probable future economic benefits obtained or controlled by a particular entity as a result of past transactions or events” (SFAC No. 6, p. 12). Tangible Assets An asset is considered a tangible asset when it is an economic resource that has physical substance—it can be seen and touched. Tangible assets can be either short term, such as inventory and supplies, or long term, such as land, buildings, and equipment. To be considered a long-term tangible asset, the item needs to be used in the normal operation of the business for more than one year, not be near the end of its useful life, and the company must have no plan to sell the item in the near future. The useful life is the time period over which an asset cost is allocated. Long-term tangible assets are known as fixed assets. Businesses typically need many different types of these assets to meet their objectives. These assets differ from the company’s products. For example, the computers that Apple Inc. intends to sell are considered inventory (a short-term asset), whereas the computers Apple’s employees use for day-to-day operations are long-term assets. In Liam’s case, the new silk-screening machine would be considered a long-term tangible asset as he plans to use it over many years to help him generate revenue for his business. Long-term tangible assets are listed as noncurrent assets on a company’s balance sheet. Typically, these assets are listed under the category of Property, Plant, and Equipment (PP&E), but they may be referred to as fixed assets or plant assets. Apple Inc. lists $33,783,000,000 in total Property, Plant and Equipment (net) on its 2017 consolidated balance sheet (see Figure 11.2). 2 As shown in the figure, this net total includes land and buildings, machinery, equipment and internal-use software, and leasehold improvements, resulting in a gross PP&E of $75,076,000,000—less accumulated depreciation and amortization of $41,293,000,000—to arrive at the net amount of $33,783,000,000. 2 Apple, Inc. U.S. Securities and Exchange Commission 10-K Filing. November 3, 2017. http://pdf.secdatabase.com/2624/0000320193-17-000070.pdf Link to Learning Recently, the number of intangibles on companies’ balance sheets has been increasing. As a result, investors need a better understanding of how this will affect their valuation of these companies. Read this article on intangible assets from The Economist for more information. Intangible Assets Companies may have other long-term assets used in the operations of the business that they do not intend to sell, but that do not have physical substance; these assets still provide specific rights to the owner and are called intangible assets.
These assets typically appear on the balance sheet following long-term tangible assets (see Figure 11.3). 3 Examples of intangible assets are patents, copyrights, franchises, licenses, goodwill, sometimes software, and trademarks (Table 11.1). Because the value of intangible assets is very subjective, it is usually not shown on the balance sheet until there is an event that indicates value objectively, such as the purchase of an intangible asset. 3 Apple, Inc. U.S. Securities and Exchange Commission 10-K Filing. November 3, 2017. http://pdf.secdatabase.com/2624/0000320193-17-000070.pdf A company often records the costs of developing an intangible asset internally as expenses, not assets, especially if there is ambiguity in the expense amounts or economic life of the asset. However, there are also conditions under which the costs can be allocated over the anticipated life of the asset. (The treatment of intangible asset costs can be quite complex and is taught in advanced accounting courses.) Table 11.1 Types of Intangible Assets (asset and useful life):
- Patents: twenty years
- Trademarks: renewable every ten years
- Copyrights: seventy years beyond death of creator
- Goodwill: indefinite
Think It Through Categorizing Intangible Assets Your company has recently hired a star scientist who has a history of developing new technologies. The company president is excited about the new hire and questions you, the company accountant, why the scientist cannot be recorded as an intangible asset, as the scientist will probably provide more value to the company in the future than any of its other assets. Discuss why the scientist, and employees in general, who often provide the greatest value for a company, are not recorded as intangible assets. Patents A patent is a contract that provides a company exclusive rights to produce and sell a unique product. The rights are granted to the inventor by the federal government and provide exclusivity from competition for twenty years. Patents are common within the pharmaceutical industry as they provide an opportunity for drug companies to recoup the significant financial investment in research and development of a new drug. Once the new drug is produced, the company can sell it for twenty years with no direct competition. Think It Through Research and Development Costs Jane works in product development for a technology company. She just heard that her employer is slashing research and development costs. When she asks why, the marketing senior vice president tells her that current research and development costs are reducing net income in the current year for a potential but unknown benefit in future years, and that management is concerned about the effect on stock price. Jane wonders why research and development costs are not capitalized so that the cost would be matched with the future revenues. Why do you think research and development costs are not capitalized? Trademarks and Copyrights A company’s trademark is the exclusive right to the name, term, or symbol it uses to identify itself or its products. Federal law allows companies to register their trademarks to protect them from use by others. Trademark registration lasts for ten years, with optional ten-year renewal periods. This protection helps prevent impersonators from selling a product similar to another or using its name.
For example, a burger joint could not start selling the “Big Mac.” Although it has no physical substance, the exclusive right to a term or logo has value to a company and is therefore recorded as an asset. A copyright provides the exclusive right to reproduce and sell artistic, literary, or musical compositions. Anyone who owns the copyright to a specific piece of work has exclusive rights to that work. Copyrights in the United States last seventy years beyond the death of the original author. While you might not be overly interested in what seems to be an obscure law, it actually directly affects you and your fellow students. It is one of the primary reasons that your copy of the Collected Works of William Shakespeare costs about $40 in your bookstore or online, while a textbook, such as Principles of Biology or Principles of Accounting, can run in the hundreds of dollars. Goodwill Goodwill is a unique intangible asset. Goodwill refers to the value of certain favorable factors that a business possesses and that allow it to generate a greater rate of return or profit. Such factors include superior management, a skilled workforce, quality products or service, great geographic location, and overall reputation. Companies typically record goodwill when they acquire another business for a purchase price in excess of the fair value of the identifiable net assets. The difference is recorded as goodwill on the purchaser’s balance sheet. For example, the goodwill of $5,717,000,000 that we see on Apple’s consolidated balance sheets for 2017 (see Figure 11.3) was created when Apple purchased another business for a purchase price exceeding the book value of its net assets. Your Turn Classifying Long-Term Assets as Tangible or Intangible Your cousin started her own business and wants to get a small loan from a local bank to expand production in the next year. The bank has asked her to prepare a balance sheet, and she is having trouble classifying the assets properly. Help her sort through the list below and note the assets that are tangible long-term assets and those that are intangible long-term assets:
- Cash
- Patent
- Accounts Receivable
- Land
- Investments
- Software
- Inventory
- Note Receivable
- Machinery
- Equipment
- Marketable Securities
- Owner Capital
- Copyright
- Building
- Accounts Payable
- Mortgage Payable
Solution Tangible long-term assets include land, machinery, equipment, and building. Intangible long-term assets include patent, software, and copyright. 11.2 Analyze and Classify Capitalized Costs versus Expenses When a business purchases a long-term asset (used for more than one year), it classifies the asset based on whether the asset is used in the business’s operations. If a long-term asset is used in the business operations, it will belong in property, plant, and equipment or intangible assets. In this situation the asset is typically capitalized. Capitalization is the process by which a long-term asset is recorded on the balance sheet and its allocated costs are expensed on the income statement over the asset’s economic life. Explain and Apply Depreciation Methods to Allocate Capitalized Costs addresses the available methods that companies may choose for expensing capitalized assets. Long-term assets that are not used in daily operations are typically classified as an investment. For example, if a business owns land on which it operates a store, warehouse, factory, or offices, the cost of that land would be included in property, plant, and equipment.
However, if a business owns a vacant piece of land on which the business conducts no operations (and assuming no current or intermediate-term plans for development), the land would be considered an investment. Your Turn Classifying Assets and Related Expenditures You work at a business consulting firm. Your new colleague, Marielena, is helping a client organize his accounting records by types of assets and expenditures. Marielena is a bit stumped on how to classify certain assets and related expenditures, such as capitalized costs versus expenses. She has given you the following list and asked for your help to sort through it. Help her classify the expenditures as either capitalized or expensed, and note which assets are property, plant, and equipment. Expenditures:
- normal repair and maintenance on the manufacturing facility
- cost of taxes on new equipment used in business operations
- shipping costs on new equipment used in business operations
- cost of a minor repair on existing equipment used in business operations
Assets:
- land next to the production facility held for use next year as a place to build a warehouse
- land held for future resale when the value increases
- equipment used in the production process
Solution Expenditures:
- normal repair and maintenance on the manufacturing facility: expensed
- cost of taxes on new equipment used in business operations: capitalized
- shipping costs on new equipment used in business operations: capitalized
- cost of a minor repair on existing equipment used in business operations: expensed
Assets:
- land next to the production facility held for use next year as a place to build a warehouse: property, plant, and equipment
- land held for future resale when the value increases: investment
- equipment used in the production process: property, plant, and equipment
Property, Plant, and Equipment (Fixed Assets) Why are the costs of putting a long-term asset into service capitalized and written off as expenses (depreciated) over the economic life of the asset? Let’s return to Liam’s start-up business as an example. Liam plans to buy a silk-screening machine to help create clothing that he will sell. The machine is a long-term asset because it will be used in the business’s daily operation for many years. If the machine costs Liam $5,000 and it is expected to be used in his business for several years, generally accepted accounting principles (GAAP) require the allocation of the machine’s costs over its useful life, which is the period over which it will produce revenues. Overall, in determining a company’s financial performance, we would not expect Liam to show a $5,000 expense this year and no expense for this machine in the future years in which it is used. GAAP addressed this through the expense recognition (matching) principle, which states that expenses should be recorded in the same period with the revenues that the expense helped create. In Liam’s case, the $5,000 for this machine should be allocated over the years in which it helps to generate revenue for the business. Capitalizing the machine allows this to occur. As stated previously, to capitalize is to record a long-term asset on the balance sheet and expense its allocated costs on the income statement over the asset’s economic life. Therefore, when Liam purchases the machine, he will record it as an asset on the financial statements. When capitalizing an asset, the total cost of acquiring the asset is included in the cost of the asset.
This includes additional costs beyond the purchase price, such as shipping costs, taxes, assembly, and legal fees. For example, if a real estate broker is paid $8,000 as part of a transaction to purchase land for $100,000, the land would be recorded at a cost of $108,000. Over time as the asset is used to generate revenue, Liam will need to depreciate the asset. Depreciation is the process of allocating the cost of a tangible asset over its useful life, or the period of time that the business believes it will use the asset to help generate revenue. This process will be described in Explain and Apply Depreciation Methods to Allocate Capitalized Costs. Ethical Considerations How WorldCom’s Improper Capitalization of Costs Almost Shut Down the Internet In 2002, telecommunications giant WorldCom filed for the largest Chapter 11 bankruptcy to date, a situation resulting from manipulation of its accounting records. At the time, WorldCom operated nearly a third of the bandwidth of the twenty largest US internet backbone routes, connecting over 3,400 global networks that serviced more than 70,000 businesses in 114 countries. 4 4 Cybertelecom. “WorldCom (UNNET).” n.d. http://www.cybertelecom.org/industry/wcom.htm WorldCom used a number of accounting gimmicks to defraud investors, chiefly by capitalizing costs that should have been expensed. Under normal circumstances, this might have been considered just another accounting fiasco leading to the end of a company. However, WorldCom controlled a large percentage of backbone routes, a major component of the hardware supporting the internet, as even the Securities and Exchange Commission recognized. 5 If WorldCom’s bankruptcy due to accounting malfeasance had shut the company down, the internet would no longer have been functional. 5 Dennis R. Beresford, Nicholas DeB. Katzenbach, and C.B. Rogers, Jr. “Special Investigative Committee of the Board of Directors of WorldCom.” Report of Investigation. March 31, 2003. https://www.sec.gov/Archives/edgar/data/723527/000093176303001862/dex991.htm If such an event were to happen today, it could shut down international commerce and would be considered a national emergency. As demonstrated by WorldCom, the unethical behavior of a few accountants could have shut down the world’s online businesses and international commerce. An accountant’s job is fundamental and important: keep businesses operating in a transparent fashion. Investments A short-term or long-term asset that is not used in the day-to-day operations of the business is considered an investment and is not expensed, since the company does not expect to use up the asset over time. On the contrary, the company hopes that the asset (investment) will grow in value over time. Short-term investments are investments that are expected to be sold within a year and are recorded as current assets. Continuing Application Investment in Property in the Grocery Industry To remain viable, companies constantly look to invest in upgrades in long-term assets. Such acquisitions might include new machinery, buildings, warehouses, or even land in order to expand operations or make the work process more efficient. Think back to the last time you walked through a grocery store. Were you mostly focused on getting the food items on your list? Or did you plan to pick up a prescription and maybe a coffee once you finished? Grocery stores have become a one-stop shopping environment, and investments encompass more than just shelving and floor arrangement.
Some grocery chains purchase warehouses to distribute inventory as needed to various stores. Machinery upgrades can help automate various departments. Some supermarkets even purchase large parcels of land to build not only their stores, but also surrounding shopping plazas to draw in customers. All such investments help increase the company's net profit.

Concepts In Practice
Vehicle Repairs and Enhancements
Automobiles are a useful way of looking at the difference between repair and maintenance expenses and capitalized modifications. Routine repairs such as brake pad replacements are recorded as repair and maintenance expense. They are an expected part of owning a vehicle. However, a car may be modified to change its appearance or performance. For example, if a supercharger is added to a car to increase its horsepower, the car's performance is increased, and the cost should be included as a part of the vehicle asset. Likewise, if replacing the engine of an older car extends its useful life, that cost would also be capitalized.

Repair and Maintenance Costs of Property, Plant, and Equipment
Long-term assets may have additional costs associated with them over time. These additional costs may be capitalized or expensed based on the nature of the cost. For example, Walmart's financial statements explain that major improvements are capitalized, while costs of normal repairs and maintenance are charged to expense as incurred. An amount spent is considered a current expense, or an amount charged in the current period, if the amount incurred did not help to extend the life of or improve the asset. For example, if a service company cleans and maintains Liam's silk-screening machine every six months, that service does not extend the useful life of the machine beyond the original estimate, increase the capacity of the machine, or improve the quality of the silk-screening performed by the machine. Therefore, this maintenance would be expensed within the current period. In contrast, if Liam had the company upgrade the circuit board of the silk-screening machine, thereby increasing the machine's future capabilities, this would be capitalized and depreciated over its useful life.

Think It Through
Correcting Errors in Classifying Assets
You work at a business consulting firm. Your new colleague, Marielena, helped a client organize his accounting records last year by types of assets and expenditures. Even though Marielena was a bit stumped on how to classify certain assets and related expenditures, such as capitalized costs versus expenses, she did not come to you or any other more experienced colleagues for help. Instead, she made the following classifications and gave them to the client, who used this as the basis for accounting transactions over the last year. Thankfully, you have been asked this year to help prepare the client's financial reports and correct errors that were made. Explain what impact these errors would have had over the last year and how you will correct them so you can prepare accurate financial statements.
Expenditures:
Normal repair and maintenance on the manufacturing facility were capitalized.
The cost of taxes on new equipment used in business operations was expensed.
The shipping costs on new equipment used in business operations were expensed.
The cost of a minor repair on existing equipment used in business operations was capitalized.
Assets:
Land next to the production facility held for use next year as a place to build a warehouse was depreciated.
Land held for future resale when the value increases was classified as Property, Plant, and Equipment but not depreciated.
Equipment used in the production process was classified as an investment.

Link to Learning
Many businesses invest a lot of money in production facilities and operations. Some production processes are more automated than others, and they require a greater investment in property, plant, and equipment than production facilities that may be more labor intensive. Watch this video of the operation of a Georgia-Pacific lumber mill and note where you see all components of property, plant, and equipment in operations in this fascinating production process. There's even a reference to an intangible asset; if you watch and listen closely, you just might catch it.

11.3 Explain and Apply Depreciation Methods to Allocate Capitalized Costs
In this section, we concentrate on the major characteristics of determining capitalized costs and some of the options for allocating these costs on an annual basis using the depreciation process. In determining capitalized costs, we do not consider just the initial cost of the asset; instead, we determine all of the costs necessary to place the asset into service. For example, if our company purchased a drill press for $22,000 and spent $2,500 on sales taxes and $800 for delivery and setup, the depreciation calculation would be based on a cost of $22,000 plus $2,500 plus $800, for a total cost of $25,300. We also address some of the terminology used in determining depreciation that you will want to become familiar with. Finally, in terms of allocating the costs, there are alternatives available to the company. We consider three of the most popular options: the straight-line method, the units-of-production method, and the double-declining-balance method.

Your Turn
Calculating Depreciation Costs
Liam buys his silk-screening machine for $10,000. He estimates that he can use this machine for five years or 100,000 presses, and that the machine will only be worth $1,000 at the end of its life. He also estimates that he will make 20,000 clothing items in year one and 30,000 clothing items in year two. Determine Liam's depreciation costs for his first two years of business under the straight-line, units-of-production, and double-declining-balance methods. Also, record the journal entries.
Solution
Straight-line method: ($10,000 – $1,000)/5 = $1,800 per year for both years.
Units-of-production method: ($10,000 – $1,000)/100,000 = $0.09 per press
Year 1 expense: $0.09 × 20,000 = $1,800
Year 2 expense: $0.09 × 30,000 = $2,700
Double-declining-balance method:
Year 1 expense: [($10,000 – 0)/5] × 2 = $4,000
Year 2 expense: [($10,000 – $4,000)/5] × 2 = $2,400

Fundamentals of Depreciation
As you have learned, when accounting for a long-term fixed asset, we cannot simply record an expense for the cost of the asset and record the entire outflow of cash in one accounting period. Like all other assets, a long-term asset must be recorded at its historical (initial) cost when purchased or acquired, and that cost includes all costs to acquire the asset and put it into use. The initial recording of an asset has two steps:
Record the initial purchase on the date of purchase, which places the asset on the balance sheet (as property, plant, and equipment) at cost, and record the amount as notes payable, accounts payable, or an outflow of cash.
At the end of the period, make an adjusting entry to recognize the depreciation expense.
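The depreciation amount recognized in that periodic adjusting entry comes from whichever allocation method the company has chosen. As a quick check on the Your Turn solution above, here is a minimal Python sketch of the three methods using Liam's figures; the function names are hypothetical, and this double-declining-balance version omits the salvage-value floor discussed later in this section:

```python
# Liam's machine: $10,000 cost, $1,000 salvage, 5-year life or 100,000 presses.

def straight_line(cost, salvage, life_years):
    # Equal expense each year: (cost - salvage) / useful life.
    return (cost - salvage) / life_years

def units_of_production(cost, salvage, total_units, units_this_period):
    # Per-unit rate applied to actual usage for the period.
    rate = (cost - salvage) / total_units
    return rate * units_this_period

def double_declining(cost, life_years, year):
    # Twice the straight-line rate applied to beginning-of-year book value;
    # salvage value is not subtracted when setting the rate.
    rate = 2 / life_years
    book_value = cost
    for _ in range(year - 1):
        book_value -= book_value * rate
    return book_value * rate

print(straight_line(10_000, 1_000, 5))                      # 1800.0 per year
print(units_of_production(10_000, 1_000, 100_000, 20_000))  # 1800.0 (year 1)
print(units_of_production(10_000, 1_000, 100_000, 30_000))  # 2700.0 (year 2)
print(double_declining(10_000, 5, 1))                       # 4000.0 (year 1)
print(double_declining(10_000, 5, 2))                       # 2400.0 (year 2)
```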
Companies may record depreciation expense incurred annually, quarterly, or monthly. Following GAAP and the expense recognition principle, the depreciation expense is recognized over the asset's estimated useful life.

Recording the Initial Purchase of an Asset
Assets are recorded on the balance sheet at cost, meaning that all costs to purchase the asset and to prepare the asset for operation should be included. Costs outside of the purchase price may include shipping, taxes, installation, and modifications to the asset. The journal entry to record the purchase of a fixed asset (assuming that a note payable is used for financing and not a short-term account payable) is shown here. Applying this to Liam's silk-screening business, we learn that he purchased his silk-screening machine for $5,000 by paying $1,000 cash and the remainder in a note payable over five years. The journal entry to record the purchase is shown here.

Concepts In Practice
Estimating Useful Life and Salvage Value
Useful life and salvage value are estimates made at the time an asset is placed in service. It is common and expected that the estimates are inaccurate, given the uncertainty involved in predicting the future. Sometimes, however, a company may attempt to take advantage of estimating salvage value and useful life to improve earnings. A larger salvage value and longer useful life decrease annual depreciation expense and increase annual net income. An example of this behavior is Waste Management, which was disciplined by the Securities and Exchange Commission for fraudulently altering its estimates to reduce depreciation expense and overstate net income by $1.7 billion. 6
6 U.S. Securities and Exchange Commission. "Judge Enters Final Judgment against Former CFO of Waste Management, Inc. Following Jury Verdict in SEC's Favor." January 3, 2008. https://www.sec.gov/news/press/2008/2008-2.htm

Components Used in Calculating Depreciation
Depreciation is the process by which the expense recognition principle is applied: the cost of the asset is allocated over the asset's useful life. For example, if we buy a delivery truck to use for the next five years, we would allocate the cost and record depreciation expense across the entire five-year period. The calculation of the depreciation expense for a period is not based on anticipated changes in the fair market value of the asset; instead, the depreciation is based on the allocation of the cost of owning the asset over the period of its useful life. The following items are important in determining and recording depreciation:
Book value: the asset's original cost less accumulated depreciation.
Useful life: the length of time the asset will be productively used within operations.
Salvage (residual) value: the price the asset will sell for or be worth as a trade-in when its useful life expires. The determination of salvage value can be an inexact science, since it requires anticipating what will occur in the future. Often, the salvage value is estimated based on past experiences with similar assets.
Depreciable base (cost): the total depreciation expense to be taken over the asset's useful life, calculated as cost less salvage value. For example, if we paid $50,000 for an asset and anticipate a salvage value of $10,000, the depreciable base is $40,000. We expect $40,000 in depreciation over the time period in which the asset is used, after which it would be sold for $10,000.
Depreciation records an expense for the value of an asset consumed and removes that portion of the asset from the balance sheet.
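These components combine with simple arithmetic. Before looking at the journal entry, here is a minimal sketch using the $50,000 example just given; the accumulated depreciation figure is hypothetical, chosen only to illustrate book value:

```python
cost = 50_000
salvage_value = 10_000

# Depreciable base: the total amount that will be expensed over the life.
depreciable_base = cost - salvage_value
print(depreciable_base)   # 40000

# Book value at any point: original cost less accumulated depreciation.
accumulated_depreciation = 8_000   # hypothetical amount taken to date
book_value = cost - accumulated_depreciation
print(book_value)         # 42000
```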
The journal entry to record depreciation is shown here. Depreciation expense is a common operating expense that appears on an income statement. Accumulated depreciation is a contra account, meaning it is attached to another account and is used to offset that account's balance; it accumulates the total depreciation expense recorded for a fixed asset over its life. In this case, the asset account stays recorded at the historical value but is offset on the balance sheet by accumulated depreciation. Accumulated depreciation is subtracted from the historical cost of the asset on the balance sheet to show the asset at book value. Book value is the amount of the asset that has not been allocated to expense through depreciation. In this case, the asset's book value is $20,000: the historical cost of $25,000 less the accumulated depreciation of $5,000.
It is important to note, however, that not all long-term assets are depreciated. For example, land is not depreciated because depreciation is the allocation of an asset's cost over its useful life. How can one determine a useful life for land? It is assumed that land has an unlimited useful life; therefore, it is not depreciated, and it remains on the books at historical cost.
Once it is determined that depreciation should be accounted for, there are three methods that are most commonly used to calculate the allocation of depreciation expense: the straight-line method, the units-of-production method, and the double-declining-balance method. A fourth method, the sum-of-the-years-digits method, is another accelerated option that has been losing popularity and can be learned in intermediate accounting courses.
Let's use the following scenario involving Kenzie Company to work through these three methods. Assume that on January 1, 2019, Kenzie Company bought a printing press for $54,000. Kenzie pays shipping costs of $1,500 and setup costs of $2,500, and assumes a useful life of five years or 960,000 pages. Based on experience, Kenzie Company anticipates a salvage value of $10,000. Recall that determining the costs to be depreciated requires including all costs that prepare the asset for use by the company. The Kenzie example would include shipping and setup costs. Any costs for maintaining or repairing the equipment would be treated as regular expenses, so the total cost would be $58,000, and, after allowing for an anticipated salvage value of $10,000 in five years, the business could take $48,000 in depreciation over the machine's economic life.

Concepts In Practice
Fixed Assets
You work for Georgia-Pacific as an accountant in charge of the fixed assets subsidiary ledger at a production and warehouse facility in Pennsylvania. The facility is in the process of updating and replacing several asset categories, including warehouse storage units, fork trucks, and equipment on the production line. It is your job to keep the information in the fixed assets subsidiary ledger up to date and accurate. You need information on original historical cost, estimated useful life, salvage value, depreciation methods, and additional capital expenditures. You are excited about the new purchases and upgrades to the facility and how they will help the company serve its customers better. However, you have been in your current position for only a few years and have never overseen extensive updates, and you realize that you will have to gather a lot of information at once to keep the accounting records accurate.
You feel overwhelmed and take a minute to catch your breath and think through what you need. After a few minutes, you realize that you have many people and many resources to work with to tackle this project. Whom will you work with, and how will you go about gathering what you need?

Straight-Line Depreciation
Straight-line depreciation is a method of depreciation that evenly splits the depreciable amount across the useful life of the asset. Therefore, we must determine the yearly depreciation expense by dividing the depreciable base of $48,000 by the economic life of five years, giving an annual depreciation expense of $9,600. The journal entries to record the first two years of expenses are shown, along with the balance sheet information. Here are the journal entry and information for year one: After the journal entry in year one, the press would have a book value of $48,400. This is the original cost of $58,000 less the accumulated depreciation of $9,600. Here are the journal entry and information for year two: Kenzie records an annual depreciation expense of $9,600. Each year, the accumulated depreciation balance increases by $9,600, and the press's book value decreases by the same $9,600. At the end of five years, the asset will have a book value of $10,000, which is calculated by subtracting the accumulated depreciation of $48,000 (5 × $9,600) from the cost of $58,000.

Units-of-Production Depreciation
Straight-line depreciation is simple to apply and works well for assets used consistently over their lifetime, but what about assets that are used with less regularity? The units-of-production depreciation method bases depreciation on the actual usage of the asset, which is more appropriate when an asset's life is a function of usage instead of time. For example, this method could account for depreciation of a printing press for which the depreciable base is $48,000 (as in the straight-line method), but now the number of pages the press prints is important. In our example, the press will have total depreciation of $48,000 over its useful life of 960,000 pages. Therefore, we would divide $48,000 by 960,000 pages to get a cost per page of $0.05. If Kenzie printed 180,000 pages in the first year, the depreciation expense would be 180,000 pages × $0.05 per page, or $9,000. The journal entry to record this expense would be the same as with straight-line depreciation: only the dollar amount would have changed. The presentation of accumulated depreciation and the calculation of the book value would also be the same. Kenzie would continue to depreciate the asset until a total of $48,000 in depreciation was taken after printing 960,000 total pages.

Think It Through
Deciding on a Depreciation Method
Liam is struggling to determine which depreciation method he should use for his new silk-screening machine. He expects sales to increase over the next five years. He also expects (hopes) that in two years he will need to buy a second silk-screening machine to keep up with the demand for products of his growing company. Which depreciation method makes more sense for Liam: higher expenses in the first few years, or keeping expenses consistent over time? Or would it be better for him to not think in terms of time, but rather in the usage of the machine?

Double-Declining-Balance Depreciation
The double-declining-balance depreciation method is the most complex of the three methods because it accounts for both time and usage and takes more expense in the first few years of the asset's life.
Double-declining considers time by determining the percentage of depreciation expense that would exist under straight-line depreciation. To calculate this, divide 100% by the estimated life in years. For example, a five-year asset would be 100/5, or 20% a year. A four-year asset would be 100/4, or 25% a year. Next, because assets are typically more efficient and "used" more heavily early in their life span, the double-declining method takes usage into account by doubling the straight-line percentage. For a four-year asset, multiply 25% (100%/4-year life) × 2, or 50%. For a five-year asset, multiply 20% (100%/5-year life) × 2, or 40%.
One unique feature of the double-declining-balance method is that in the first year, the estimated salvage value is not subtracted from the total asset cost before calculating the first year's depreciation expense. Instead, the total cost is multiplied by the calculated percentage. In our example, the first year's double-declining-balance depreciation expense would be $58,000 × 40%, or $23,200. For the remaining years, the double-declining percentage is multiplied by the remaining book value of the asset. Kenzie would continue to depreciate the asset until the book value and the estimated salvage value are the same (in this case, $10,000).
However, depreciation expense is not permitted to take the book value below the estimated salvage value. Notice that in year four, the remaining book value of $12,528 would not be multiplied by 40%. This is because the expense would have been $5,011.20, and since we cannot depreciate the asset below the estimated salvage value of $10,000, the expense cannot exceed $2,528, which is the amount left to depreciate (the difference between the book value of $12,528 and the salvage value of $10,000). Since the asset has been depreciated to its salvage value at the end of year four, no depreciation can be taken in year five.
The net effect of the differences in straight-line depreciation versus double-declining-balance depreciation is that under the double-declining-balance method, the allowable depreciation expenses are greater in the earlier years than those allowed for straight-line depreciation. However, over the depreciable life of the asset, the total depreciation expense taken will be the same, no matter which method the entity chooses. For example, in the current example both straight-line and double-declining-balance depreciation will provide a total depreciation expense of $48,000 over its five-year depreciable life.
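The full schedule, including the year-four cap at the salvage value, can be generated mechanically. Here is a minimal Python sketch using the Kenzie figures; the function name is illustrative, not from the text:

```python
def ddb_schedule(cost, salvage, life_years):
    # Double-declining-balance: twice the straight-line rate applied to
    # beginning-of-year book value, never depreciating below salvage value.
    rate = 2 / life_years
    book_value = cost
    rows = []
    for year in range(1, life_years + 1):
        expense = book_value * rate
        if book_value - expense < salvage:
            # Cap the expense at the amount left to depreciate.
            expense = book_value - salvage
        expense = round(expense, 2)
        book_value = round(book_value - expense, 2)
        rows.append((year, expense, book_value))
    return rows

for year, expense, book_value in ddb_schedule(58_000, 10_000, 5):
    print(year, expense, book_value)
# 1 23200.0 34800.0
# 2 13920.0 20880.0
# 3 8352.0 12528.0
# 4 2528.0 10000.0
# 5 0.0 10000.0
```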
IFRS Connection
Accounting for Depreciation
Both US GAAP and International Financial Reporting Standards (IFRS) account for long-term assets (tangible and intangible) by recording the asset at the cost necessary to make the asset ready for its intended use. Additionally, both sets of standards require that the cost of the asset be recognized over the economic, useful, or legal life of the asset through an allocation process such as depreciation. However, there are some significant differences in how the allocation process is used as well as how the assets are carried on the balance sheet.
IFRS and US GAAP allow companies to choose between different methods of depreciation, such as even allocation (straight-line method), depreciation based on usage (production methods), or an accelerated method (double-declining balance). The mechanics of applying these methods do not differ between the two standards. However, IFRS requires companies to use "component depreciation" if it is feasible. Component depreciation would apply to assets with components that have differing lives. Consider the following example using a plane owned by Southwest Airlines. Let's divide this plane into three components: the interior, the engines, and the fuselage. Suppose the average life of the interior of a plane is ten years, the average life of the engines is fifteen years, and the average life of the fuselage is twenty-five years. Given this, what should be the depreciable life of the asset? Under IFRS, the costs associated with the interior would be depreciated over ten years, the costs associated with the engines would be depreciated over fifteen years, and the costs associated with the fuselage would be depreciated over twenty-five years. Under US GAAP, the total cost of the airplane would likely be depreciated over twenty years. Obviously, component depreciation involves more record keeping and differing amounts of depreciation per year for the life of the asset. But the same amount of total depreciation, the cost of the asset less residual value, would be taken over the life of the asset under both US GAAP and IFRS.
One of the most significant differences between IFRS and US GAAP affects long-lived assets: the ability, under IFRS, to adjust the value of those assets to their fair value as of the balance sheet date. The adjustment to fair value is to be done by "class" of asset, such as real estate, for example. A company can adjust some classes of assets to fair value but not others. Under US GAAP, almost all long-lived assets are carried on the balance sheet at their depreciated historical cost, regardless of how the actual fair value of the asset changes. Consider the following example. Suppose your company owns a single building that you bought for $1,000,000. That building currently has $200,000 in accumulated depreciation. This building now has a book value of $800,000. Under US GAAP, this is how this building would appear in the balance sheet. Even if the fair value of the building is $875,000, the building would still appear on the balance sheet at its depreciated historical cost of $800,000 under US GAAP. Alternatively, if the company used IFRS and elected to carry real estate on the balance sheet at fair value, the building would appear on the company's balance sheet at its new fair value of $875,000.
It is difficult to determine an accurate fair value for long-lived assets. This is one reason US GAAP has not permitted the fair valuing of long-lived assets. Different appraisals can result in different determinations of "fair value." Thus, the Financial Accounting Standards Board (FASB) elected to continue with the current method of carrying assets at their depreciated historical cost. The thought process behind the adjustments to fair value under IFRS is that fair value more accurately represents true value. Even if the fair value reported is not known with certainty, reporting the class of assets at a reasonable representation of fair value enhances decision-making by users of the financial statements.

Summary of Depreciation
Table 11.2 compares the three methods discussed. Note that although the annual depreciation expense under each time-based method (straight-line and double-declining-balance) differs, after five years the total amount depreciated (accumulated depreciation) is the same.
This occurs because at the end of the asset's useful life, it was expected to be worth $10,000: thus, both methods depreciated the asset's value by $48,000 over that time period. The units-of-production method differs from the two methods above in that while those methods are based on time factors, units-of-production depreciation is based on usage. However, the total amount of depreciation taken over an asset's economic life will still be the same. In our example, the total depreciation will be $48,000, even though under the units-of-production method the allocation could take only two or three years, or possibly six or seven years, depending on usage.

Table 11.2 Calculation of Depreciation Expense
Straight line: (Cost – Salvage value)/Useful life
Units of production: (Cost – Salvage value) × (Units produced in current period/Estimated total units to be produced)
Double declining balance: Book value × Straight-line annual depreciation percentage × 2

Ethical Considerations
Depreciation Analysis Requires Careful Evaluation
When analyzing depreciation, accountants are required to make a supportable estimate of an asset's useful life and its salvage value. However, "management teams typically fail to invest either time or attention into making or periodically revisiting and revising reasonably supportable estimates of asset lives or salvage values, or the selection of depreciation methods, as prescribed by GAAP." 7 This failure is not an ethical approach to properly accounting for the use of assets.
7 Howard B. Levy. "Depreciable Asset Lives." The CPA Journal. September 2016. https://www.cpajournal.com/2016/09/08/depreciable-asset-lives/
Accountants need to analyze depreciation of an asset over the entire useful life of the asset. As an asset supports the cash flow of the organization, expensing its cost needs to be allocated, not just recorded as an arbitrary calculation. An asset's depreciation may change over its life according to its use. If asset depreciation is arbitrarily determined, the recorded "gains or losses on the disposition of depreciable property assets seen in financial statements" 8 are not true best estimates. Due to operational changes, the depreciation expense needs to be periodically reevaluated and adjusted.
8 Howard B. Levy. "Depreciable Asset Lives." The CPA Journal. September 2016. https://www.cpajournal.com/2016/09/08/depreciable-asset-lives/
Any mischaracterization of asset usage is not proper GAAP and is not proper accrual accounting. Therefore, "financial statement preparers, as well as their accountants and auditors, should pay more attention to the quality of depreciation-related estimates and their possible mischaracterization and losses of credits and charges to operations as disposal gains." 9 An accountant should always follow GAAP guidelines and allocate the expense of an asset according to its usage.
9 Howard B. Levy. "Depreciable Asset Lives." The CPA Journal. September 2016. https://www.cpajournal.com/2016/09/08/depreciable-asset-lives/

Partial-Year Depreciation
A company will usually only own depreciable assets for a portion of a year in the year of purchase or disposal. Companies must be consistent in how they record depreciation for assets owned for a partial year. A common method is to allocate depreciation expense based on the number of months the asset is owned in a year. For example, a company purchases an asset with a total cost of $58,000, a five-year useful life, and a salvage value of $10,000.
The annual depreciation is $9,600 ([$58,000 – 10,000]/5). However, the asset is purchased at the beginning of the fourth month of the fiscal year. The company will own the asset for nine months of the first year. The depreciation expense for the first year is therefore $7,200 ($9,600 × 9/12). The company will depreciate the asset $9,600 for each of the next four years, but only $2,400 in the sixth year, so that the total depreciation of the asset over its useful life equals the depreciable amount of $48,000 ($7,200 + 9,600 + 9,600 + 9,600 + 9,600 + 2,400).

Think It Through
Choosing Appropriate Depreciation Methods
You are part of a team reviewing the financial statements of a new computer company. Looking over the fixed assets accounts, one long-term tangible asset sticks out. It is labeled "USB" and valued at $10,000. You ask the company's accountant for more detail, and he explains that the asset is a USB drive that holds the original coding for a game the company developed during the year. The company expects the game to be fairly popular for the next few years, and then sales are expected to trail off. Because of this, they are planning on depreciating this asset over the next five years using the double-declining method. Does this recording seem appropriate, or is there a better way to categorize the asset? How should this asset be expensed over time?

Special Issues in Depreciation
While you have now learned the basic foundation of the major available depreciation methods, there are a few special issues. Until now, we have assumed a definite physical or economically functional useful life for the depreciable assets. However, in some situations, depreciable assets can be used beyond their useful life. If so desired, the company could continue to use the asset beyond the original estimated economic life. In this case, a new remaining depreciation expense would be calculated based on the remaining depreciable base and estimated remaining economic life.
Assume in the earlier Kenzie example that after five years and $48,000 in accumulated depreciation, the company estimated that it could use the asset for two more years, at which point the salvage value would be $0. The company would be able to take an additional $10,000 in depreciation over the extended two-year period, or $5,000 a year, using the straight-line method. As with the straight-line example, the asset could be used for more than five years, with depreciation recalculated at the end of year five using the double-declining-balance method. While the process of calculating the additional depreciation for the double-declining-balance method would differ from that of the straight-line method, it would also allow the company to take an additional $10,000 after year five, as with the other methods, so long as the cost of $58,000 is not exceeded.
As a side note, there often is a difference in useful lives for assets when following GAAP versus the guidelines for depreciation under federal tax law, as enforced by the Internal Revenue Service (IRS). This difference is not unexpected when you consider that tax law is typically determined by the United States Congress, and there often is an economic reason for tax policy. For example, if we want to increase investment in real estate, shortening the economic lives of real estate for taxation calculations can have a positive increasing effect on new construction. If we want to slow down new production, extending the economic life can have the desired slowing effect. In this course, we concentrate on financial accounting depreciation principles rather than tax depreciation.
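Returning to the month-count proration shown at the start of this discussion, the partial-year schedule can be sketched in a few lines of Python (variable names are illustrative; figures come from the $58,000 example above):

```python
cost, salvage, life_years = 58_000, 10_000, 5
annual_expense = (cost - salvage) / life_years   # 9,600 per full year

months_owned_first_year = 9                      # placed in service in month 4
first_year = annual_expense * months_owned_first_year / 12          # 7,200.0
final_year = annual_expense * (12 - months_owned_first_year) / 12   # 2,400.0

schedule = [first_year] + [annual_expense] * (life_years - 1) + [final_year]
print(schedule)       # [7200.0, 9600.0, 9600.0, 9600.0, 9600.0, 2400.0]
print(sum(schedule))  # 48000.0, the full depreciable base
```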
Fundamentals of Depletion of Natural Resources
Another type of fixed asset is natural resources, assets a company owns that are consumed when used. Examples include lumber, mineral deposits, and oil/gas fields. These assets are considered natural resources while they are still part of the land; as they are extracted from the land and converted into products, they are then accounted for as inventory (raw materials). Natural resources are recorded on the company's books like a fixed asset, at cost, with total costs including all expenses to acquire and prepare the resource for its intended use. As the resource is consumed (converted to a product), the cost of the asset must be expensed: this process is called depletion. As with depreciation of nonnatural resource assets, a contra account called accumulated depletion, which records the total depletion expense for a natural resource over its life, offsets the natural resource asset account. Depletion expense is typically calculated based on the number of units extracted from cutting, mining, or pumping the resource from the land, similar to the units-of-production method. For example, assume a company has an oil well with an estimated 10,000 gallons of crude oil. The company purchased this well for $1,000,000, and the well is expected to have no salvage value once it is pumped dry. The depletion cost per gallon will be $1,000,000/10,000 = $100. If the company extracts 4,000 gallons of oil in a given year, the depletion expense will be $400,000.

Fundamentals of Amortization of an Intangible
Recall that intangible assets are recorded as long-term assets at their cost. As with tangible assets, many intangible assets have a finite (limited) life span, so their costs must be allocated over their useful lives: this process is amortization. Depreciation and amortization are similar in nature but have some important differences. First, amortization is typically only done using the straight-line method. Second, there is usually no salvage value for intangible assets because they are completely used up over their life span. Finally, an accumulated amortization account is not required to record yearly expenses (as is needed with depreciation); instead, the intangible asset account is written down each period. For example, a company called Patents-R-Us purchased a product patent for $10,000, granting the company exclusive use of that product for the next twenty years. If the company expects the product to be useful for all twenty years, it will record amortization expense of $500 a year ($10,000/20 years); if it expects the product's useful life to be shorter, it would amortize the cost over that shorter period instead. Assuming that it was placed into service on October 1, 2019, the journal entry would be as follows:

Link to Learning
See the Form 10-K filed with the SEC to determine which depreciation method McDonald's Corporation used for its long-term assets in 2017.
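The depletion and amortization calculations above share the same allocation logic as depreciation. Here is a minimal Python sketch of both, using the oil-well and patent figures from this section; the function names are hypothetical:

```python
def depletion_expense(cost, salvage, estimated_units, units_extracted):
    # Units-of-production style: cost per unit times units extracted.
    cost_per_unit = (cost - salvage) / estimated_units
    return cost_per_unit * units_extracted

def annual_amortization(cost, useful_life_years):
    # Straight-line with no salvage value, as is typical for intangibles.
    return cost / useful_life_years

print(depletion_expense(1_000_000, 0, 10_000, 4_000))  # 400000.0
print(annual_amortization(10_000, 20))                 # 500.0
```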
11.4 Describe Accounting for Intangible Assets and Record Related Transactions
Intangible assets can be difficult to understand and incorporate into the decision-making process. In this section we explain them in more detail and provide examples of how to amortize each type of intangible asset.

Fundamentals of Intangible Assets
Intangibles are recorded at their acquisition cost, as are tangible assets. The costs of internally generated intangible assets, such as a patent developed through research and development, are recorded as expenses when incurred. An exception is legal costs to register or defend an intangible asset. For example, if a company incurs legal costs to defend a patent it has developed internally, the costs associated with developing the patent are recorded as an expense, but the legal costs associated with defending the patent would be capitalized as a patent intangible asset.
Amortization of intangible assets is handled differently than depreciation of tangible assets. Intangible assets are typically amortized using the straight-line method; there is typically no salvage value, as the usefulness of the asset is used up over its lifetime, and no accumulated amortization account is needed. Additionally, based on regulations, certain intangible assets are restricted and given limited life spans, while others are infinite in their economic life and not amortized.

Copyrights
While copyrights have a finite life span of 70 years beyond the author's death, they are amortized over their estimated useful life. Therefore, if a company acquired a copyright on a new graphic novel for $10,000 and estimated it would be able to sell that graphic novel for the next ten years, it would amortize $1,000 a year ($10,000/ten years), and the journal entry would be as shown. Assume that the novel began sales on January 1, 2019.

Patents
Patents are issued to the inventor of the product by the federal government and last twenty years. All costs associated with creating the product being patented (such as research and development costs) are expensed; however, direct costs to obtain the patent could be capitalized. Otherwise, patents are capitalized only when purchased. Like copyrights, patents are amortized over their useful life, which can be shorter than twenty years due to changing technology. Assume Mech Tech purchased the patent for a new pump system. The patent cost $20,000, and the company expects the pump to be a useful product for the next twenty years. Mech Tech will then amortize the $20,000 over the next twenty years, which is $1,000 a year.

Trademarks
Companies can register their trademarks with the federal government for ten years, with the opportunity to renew the trademark every ten years. Trademarks are recorded as assets only when they are purchased from another company and are valued based on market price at the time of purchase. In this case, these trademarks are amortized over the expected useful life. In some cases, the trademark may be seen as having an indefinite life, in which case there would be no amortization.

Goodwill
From an accounting standpoint, goodwill is internally generated and is not recorded as an asset unless it is purchased during the acquisition of another company. The purchase of goodwill occurs when one company buys another company for an amount greater than the total value of the company's net assets. The value difference between net assets and the purchase price is then recorded as goodwill on the purchaser's financial statements. For example, say the London Hoops professional basketball team was sold for $10 million. The new owner received net assets of $7 million, so the goodwill (value of the London Hoops above its net assets) is $3 million. The following journal entry shows how the new owner would record this purchase.
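Measuring goodwill is a single subtraction: purchase price less the net assets acquired. A minimal sketch of the London Hoops numbers:

```python
purchase_price = 10_000_000
net_assets_received = 7_000_000

# Goodwill is the excess paid over the net assets acquired.
goodwill = purchase_price - net_assets_received
print(goodwill)   # 3000000, recorded as an asset on the purchaser's books
```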
Goodwill does not have an expected life span and therefore is not amortized. However, a company is required to compare the book value of goodwill to its market value at least annually to determine if it needs to be adjusted. This comparison process is called testing for impairment. If the market value of goodwill is found to be lower than the book value, then goodwill needs to be reduced to its market value. If goodwill is impaired, it is reduced with a credit, and an impairment loss is debited. Goodwill is never increased beyond its original cost. For example, if the new owner of London Hoops assesses that London Hoops now has a fair value of $9,000,000 rather than the $10,000,000 of the original purchase, the owner would need to record the impairment as shown in the following journal entry.

Concepts In Practice
Microsoft's Goodwill
In 2016, Microsoft bought LinkedIn for $25 billion. Microsoft wanted the brand, website platform, and software, which are intangible assets of LinkedIn, and therefore Microsoft only received $4 billion in net assets. The overpayment by Microsoft is not necessarily a bad business decision, but rather the premium, or value, of those intangible assets that LinkedIn owned and Microsoft wanted. The $21 billion difference will be listed on Microsoft's balance sheet as goodwill.

Link to Learning
Apple Inc. had goodwill of $5,717,000,000 on its 2017 balance sheet. Explore Apple, Inc.'s U.S. Securities and Exchange Commission 10-K filing for notes that discuss goodwill and whether Apple has had to adjust for the impairment of this asset in recent years.

11.5 Describe Some Special Issues in Accounting for Long-Term Assets
A company must also account for some long-term asset events that are less routine than recording purchases and depreciation or amortization. For example, a company may realize that its original estimate of useful life or salvage value is no longer accurate. A long-term asset may lose its value, or a company may sell a long-term asset.

Revision of Remaining Life or Salvage Value
As you have learned, depreciation is based on estimating both the useful life of an asset and the salvage value of that asset. Over time, these estimates may be proven inaccurate and need to be adjusted based on new information. When this occurs, the depreciation expense calculation should be changed to reflect the new (more accurate) estimates. For this entry, the remaining depreciable balance of the net book value is allocated over the new remaining useful life of the asset. To work through this process with data, let's return to the example of Kenzie Company. Kenzie has a press that cost $58,000. Its salvage value was originally estimated to be $10,000. Its economic life was originally estimated to be five years. Kenzie uses straight-line depreciation. After three years, Kenzie determines that the estimated useful life would have been more accurately estimated at eight years, and the salvage value at that time would be $6,000. The revised depreciation expense is calculated as shown: the remaining book value of $29,200 ($58,000 cost less $28,800 of accumulated depreciation) minus the revised salvage value of $6,000, divided by the five years remaining, gives ($29,200 – $6,000)/5 = $4,640. These revised calculations show that Kenzie should now be recording depreciation of $4,640 per year for the next five years.

Your Turn
Useful Life
Georgia-Pacific is a global company that employs a wide variety of property, plant, and equipment assets in its production facilities. You work for Georgia-Pacific as an accountant in charge of the fixed assets subsidiary ledger at a warehouse facility in Pennsylvania. You find out that the useful lives for the fork trucks need to be adjusted.
As an asset category, the trucks were bought at the same time and had original useful lives of seven years. However, after depreciating them for two years, the company makes improvements to the trucks that allow them to be used outdoors in what can be harsh winters. The improvements also extend their useful lives by two additional years. What is the remaining useful life after the improvements?
Solution
Seven original years – two years depreciated + two additional years = seven years remaining.

Obsolescence
Obsolescence refers to the reduction in value and/or use of the asset. Obsolescence has traditionally resulted from the physical deterioration of the asset, called physical obsolescence. In current application, considering the role of modern technology and tech assets, accounting for functional obsolescence is becoming more common. Functional obsolescence is the loss of value from all causes within a property except those due to physical deterioration. With functional obsolescence, the useful life still needs to be adjusted downward: although the asset physically still works, its functionality makes it less useful for the company. Also, an adjustment might be necessary in the salvage value. This potential adjustment depends on the specific details of each obsolescence determination or decision.

Sale of an Asset
When an asset is sold, the company must account for its depreciation up to the date of sale. This means companies may be required to record a depreciation entry before the sale of the asset to ensure it is current. After ensuring that the net book value of an asset is current, the company must determine if the asset has sold at a gain, at a loss, or at book value. We look at examples of each accounting alternative using the Kenzie Company data. Recall that Kenzie's press has a depreciable base of $48,000 and an economic life of five years. If Kenzie sells the press at the end of the third year, the company would have taken three years of depreciation amounting to $28,800 ($9,600 × 3 years). With an original cost of $58,000, and after subtracting the accumulated depreciation of $28,800, the press would have a book value of $29,200. If the company sells the press for $31,000, it would realize a gain of $1,800, as shown. The journal entry to record the sale is shown here.
If Kenzie sells the printing press for $27,100, what would the journal entries be? The book value of the press is $29,200, so Kenzie would be selling the press at a loss. The journal entry to record the sale is shown here.
What if Kenzie sells the press at exactly book value? In this case, the company will realize neither a gain nor a loss. Here is the journal entry to record the sale.
While it would be ideal to estimate a salvage value that provides neither a gain nor a loss upon the retirement and sale of a long-term asset, this type of accuracy is virtually impossible to reach, unless you negotiate a fixed future sales price. For example, you might buy a truck for $80,000 and lock in a five-year life with 100,000 or fewer miles driven. Under these conditions, the dealer might agree to pay you $20,000 for the truck in five years. You could then justify calculating your depreciation over a five-year period, using a depreciable base of $60,000. Under the straight-line method, this would provide an annual depreciation amount of $12,000. Also, when you sell the truck to the dealer after five years, the sales price will be $20,000, and the book value will be $20,000, so there would be neither a gain nor a loss on the sale.
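Each of the three Kenzie outcomes reduces to comparing the sales price with the press's net book value at the date of sale. A minimal Python sketch (illustrative only, using the figures above):

```python
cost = 58_000
accumulated_depreciation = 9_600 * 3           # 28,800 after three years
book_value = cost - accumulated_depreciation   # 29,200

for sales_price in (31_000, 27_100, 29_200):
    difference = sales_price - book_value
    if difference > 0:
        print(sales_price, "gain of", difference)    # 31000 gain of 1800
    elif difference < 0:
        print(sales_price, "loss of", -difference)   # 27100 loss of 2100
    else:
        print(sales_price, "no gain or loss")        # 29200 no gain or loss
```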
In the Kenzie example where the asset was sold for $31,000 after three years, Kenzie should have recorded a total of $27,000 in depreciation (cost of $58,000 less the sales value of $31,000). However, the company recorded $28,800 in depreciation over the three-year period. Subtracting the gain of $1,800 from the total depreciation expense of $28,800 shows the true cost of using the asset as $27,000, and not the depreciation amount of $28,800. When the asset was sold for $27,100, the accounting records would show $30,900 in depreciation (cost of $58,000 less the sales price of $27,100). However, depreciation is listed as $28,800 over the three-year period. Adding the loss of $2,100 to the total depreciation expense of $28,800 results in a cost of $30,900 for use of the asset rather than the $28,800 depreciation. If the asset sells for exactly the book value, its depreciation expense was estimated perfectly, and there is no gain or loss. If it sells for $29,200 and had a book value of $29,200, its depreciation expense of $28,800 matches the original estimate.

Think It Through
Depreciation of Long-Term Assets
You are a new staff accountant at a large construction company. After a rough year, management is seeking ways to minimize expenses or increase revenues before year-end to help increase the company's earnings per share. Your boss has asked staff to think "outside the box" and has asked you to look through the list of long-term assets to find ones that have been fully depreciated in value but may still have market value. Why would your manager be looking for these specific assets? How significantly might these items impact your company's overall performance? What ethical issues might come into play in the task you have been assigned?

Link to Learning
The management of fixed assets can be quite a challenge for any business, from sole proprietorships to global corporations. Not only do companies need to track their asset purchases, depreciation, sales, disposals, and capital expenditures, they also need to be able to generate a variety of reports. Read this Finances Online post for more details on software packages that help companies steward their fixed assets no matter what their size.
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "synthesis" }, "bloom": "1", "hl_context": "Notice that , in the first example , a nitrogen ( N ) atom and three hydrogen ( H ) atoms bond to form a compound . This anabolic reaction requires energy , which is then stored within the compound ’ s bonds . Such reactions are referred to as synthesis reactions . <hl> A synthesis reaction is a chemical reaction that results in the synthesis ( joining ) of components that were formerly separate ( Figure 2.12 a ) . <hl> Again , nitrogen and hydrogen are reactants in a synthesis reaction that yields ammonia as the product . The general equation for a synthesis reaction is", "hl_sentences": "A synthesis reaction is a chemical reaction that results in the synthesis ( joining ) of components that were formerly separate ( Figure 2.12 a ) .", "question": { "cloze_format": "The bonding of calcium, phosphorus, and other elements produces mineral crystals that are found in bone. This is an example of a(n) ________ reaction.", "normal_format": "The bonding of calcium, phosphorus, and other elements produces mineral crystals that are found in bone. What type of reaction is this an example for?", "question_choices": [ "catabolic", "synthesis", "decomposition", "exchange" ], "question_id": "fs-id2308224", "question_text": "The bonding of calcium, phosphorus, and other elements produces mineral crystals that are found in bone. This is an example of a(n) ________ reaction." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "Catabolic, exergonic, and decomposition" }, "bloom": "1", "hl_context": ". <hl> An exchange reaction is a chemical reaction in which both synthesis and decomposition occur , chemical bonds are both formed and broken , and chemical energy is absorbed , stored , and released ( see Figure 2.12 c ) . <hl> The simplest form of an exchange reaction might be : <hl> In the second example , ammonia is catabolized into its smaller components , and the potential energy that had been stored in its bonds is released . <hl> <hl> Such reactions are referred to as decomposition reactions . <hl> A decomposition reaction is a chemical reaction that breaks down or “ de-composes ” something larger into its constituent parts ( see Figure 2.12 b ) . The general equation for a decomposition reaction is : <hl> Chemical reactions that release more energy than they absorb are characterized as exergonic . <hl> <hl> The catabolism of the foods in your energy bar is an example . <hl> Some of the chemical energy stored in the bar is absorbed into molecules your body uses for fuel , but some of it is released — for example , as heat . In contrast , chemical reactions that absorb more energy than they release are endergonic . These reactions require energy input , and the resulting molecule stores not only the chemical energy in the original components , but also the energy that fueled the reaction . Because energy is neither created nor destroyed , where does the energy needed for endergonic reactions come from ? In many cases , it comes from exergonic reactions .", "hl_sentences": "An exchange reaction is a chemical reaction in which both synthesis and decomposition occur , chemical bonds are both formed and broken , and chemical energy is absorbed , stored , and released ( see Figure 2.12 c ) . In the second example , ammonia is catabolized into its smaller components , and the potential energy that had been stored in its bonds is released . 
Such reactions are referred to as decomposition reactions . Chemical reactions that release more energy than they absorb are characterized as exergonic . The catabolism of the foods in your energy bar is an example .", "question": { "cloze_format": "________ reactions release energy.", "normal_format": "Which reactions release energy?", "question_choices": [ "Catabolic", "Exergonic", "Decomposition", "Catabolic, exergonic, and decomposition" ], "question_id": "fs-id1266974", "question_text": "________ reactions release energy." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "hydrogen and hydrogen" }, "bloom": "3", "hl_context": "Instead , atoms link by forming a chemical bond . A bond is a weak or strong electrical attraction that holds atoms in the same vicinity . <hl> The new grouping is typically more stable — less likely to react again — than its component atoms were when they were separate . <hl> A more or less stable grouping of two or more atoms held together by chemical bonds is called a molecule . <hl> The bonded atoms may be of the same element , as in the case of H 2 , which is called molecular hydrogen or hydrogen gas . <hl> When a molecule is made up of two or more atoms of different elements , it is called a chemical compound . Thus , a unit of water , or H 2 O , is a compound , as is a single molecule of the gas methane , or CH 4 .", "hl_sentences": "The new grouping is typically more stable — less likely to react again — than its component atoms were when they were separate . The bonded atoms may be of the same element , as in the case of H 2 , which is called molecular hydrogen or hydrogen gas .", "question": { "cloze_format": "The combination of atoms that is most likely to result in a chemical reaction is ___.", "normal_format": "Which of the following combinations of atoms is most likely to result in a chemical reaction?", "question_choices": [ "hydrogen and hydrogen", "hydrogen and helium", "helium and helium", "neon and helium" ], "question_id": "fs-id1250576", "question_text": "Which of the following combinations of atoms is most likely to result in a chemical reaction?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "saliva contains enzymes" }, "bloom": "3", "hl_context": "<hl> Enzymes are critical to the body ’ s healthy functioning . <hl> <hl> They assist , for example , with the breakdown of food and its conversion to energy . <hl> In fact , most of the chemical reactions in the body are facilitated by enzymes . 2.4 Inorganic Compounds Essential to Human Functioning Learning Objectives By the end of this section , you will be able to :", "hl_sentences": "Enzymes are critical to the body ’ s healthy functioning . They assist , for example , with the breakdown of food and its conversion to energy .", "question": { "cloze_format": "Chewing a bite of bread mixes it with saliva and facilitates its chemical breakdown. This is most likely due to the fact that ________.", "normal_format": "Chewing a bite of bread mixes it with saliva and facilitates its chemical breakdown. What is most likely due to?", "question_choices": [ "the inside of the mouth maintains a very high temperature", "chewing stores potential energy", "chewing facilitates synthesis reactions", "saliva contains enzymes" ], "question_id": "fs-id2177358", "question_text": "Chewing a bite of bread mixes it with saliva and facilitates its chemical breakdown. This is most likely due to the fact that ________." 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "organic" }, "bloom": null, "hl_context": "<hl> An organic compound , then , is a substance that contains both carbon and hydrogen . <hl> Organic compounds are synthesized via covalent bonds within living organisms , including the human body . Recall that carbon and hydrogen are the second and third most abundant elements in your body . You will soon discover how these two elements combine in the foods you eat , in the compounds that make up your body structure , and in the chemicals that fuel your functioning .", "hl_sentences": "An organic compound , then , is a substance that contains both carbon and hydrogen .", "question": { "cloze_format": "CH4 is methane. This compound is ________.", "normal_format": "CH4 is methane. What is this compound?", "question_choices": [ "inorganic", "organic", "reactive", "a crystal" ], "question_id": "fs-id2153414", "question_text": "CH4 is methane. This compound is ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "sodium ions and chloride ions" }, "bloom": null, "hl_context": "<hl> A typical salt , NaCl , dissociates completely in water ( Figure 2.15 ) . <hl> <hl> The positive and negative regions on the water molecule ( the hydrogen and oxygen ends respectively ) attract the negative chloride and positive sodium ions , pulling them away from each other . <hl> Again , whereas nonpolar and polar covalently bonded compounds break apart into molecules in solution , salts dissociate into ions . These ions are electrolytes ; they are capable of conducting an electrical current in solution . This property is critical to the function of ions in transmitting nerve impulses and prompting muscle contraction .", "hl_sentences": "A typical salt , NaCl , dissociates completely in water ( Figure 2.15 ) . The positive and negative regions on the water molecule ( the hydrogen and oxygen ends respectively ) attract the negative chloride and positive sodium ions , pulling them away from each other .", "question": { "cloze_format": "___ are most likely to be found evenly distributed in water in a homogeneous solution.", "normal_format": "Which of the following is most likely to be found evenly distributed in water in a homogeneous solution?", "question_choices": [ "sodium ions and chloride ions", "NaCl molecules", "salt crystals", "red blood cells" ], "question_id": "fs-id1404738", "question_text": "Which of the following is most likely to be found evenly distributed in water in a homogeneous solution?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "salt" }, "bloom": null, "hl_context": "<hl> A typical salt , NaCl , dissociates completely in water ( Figure 2.15 ) . <hl> The positive and negative regions on the water molecule ( the hydrogen and oxygen ends respectively ) attract the negative chloride and positive sodium ions , pulling them away from each other . Again , whereas nonpolar and polar covalently bonded compounds break apart into molecules in solution , salts dissociate into ions . These ions are electrolytes ; they are capable of conducting an electrical current in solution . This property is critical to the function of ions in transmitting nerve impulses and prompting muscle contraction .", "hl_sentences": "A typical salt , NaCl , dissociates completely in water ( Figure 2.15 ) .", "question": { "cloze_format": "A substance dissociates into K+ and Cl– in solution. 
The substance is a(n) ________.", "normal_format": "Which substance dissociates into K+ and Cl– in solution?", "question_choices": [ "acid", "base", "salt", "buffer" ], "question_id": "fs-id1408445", "question_text": "A substance dissociates into K+ and Cl– in solution. The substance is a(n) ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "Ty’s blood is slightly alkaline." }, "bloom": "4", "hl_context": "The relative acidity or alkalinity of a solution can be indicated by its pH . A solution ’ s pH is the negative , base - 10 logarithm of the hydrogen ion ( H + ) concentration of the solution . As an example , a pH 4 solution has an H + concentration that is ten times greater than that of a pH 5 solution . That is , a solution with a pH of 4 is ten times more acidic than a solution with a pH of 5 . The concept of pH will begin to make more sense when you study the pH scale , like that shown in Figure 2.17 . The scale consists of a series of increments ranging from 0 to 14 . A solution with a pH of 7 is considered neutral — neither acidic nor basic . Pure water has a pH of 7 . The lower the number below 7 , the more acidic the solution , or the greater the concentration of H + . The concentration of hydrogen ions at each pH value is 10 times different than the next pH . For instance , a pH value of 4 corresponds to a proton concentration of 10 – 4 M , or 0.0001 M , while a pH value of 5 corresponds to a proton concentration of 10 – 5 M , or 0.00001 M . <hl> The higher the number above 7 , the more basic ( alkaline ) the solution , or the lower the concentration of H + . <hl> Human urine , for example , is ten times more acidic than pure water , and HCl is 10,000 , 000 times more acidic than water .", "hl_sentences": "The higher the number above 7 , the more basic ( alkaline ) the solution , or the lower the concentration of H + .", "question": { "cloze_format": "Ty is three years old and as a result of a “stomach bug” has been vomiting for about 24 hours. His blood pH is 7.48. This means that ___ .", "normal_format": "Ty is three years old and as a result of a “stomach bug” has been vomiting for about 24 hours. His blood pH is 7.48. What does this mean?", "question_choices": [ "Ty’s blood is slightly acidic.", "Ty’s blood is slightly alkaline.", "Ty’s blood is highly acidic.", "Ty’s blood is within the normal range" ], "question_id": "fs-id1637943", "question_text": "Ty is three years old and as a result of a “stomach bug” has been vomiting for about 24 hours. His blood pH is 7.48. What does this mean?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "hexose monosaccharide" }, "bloom": null, "hl_context": "<hl> A monosaccharide is a monomer of carbohydrates . <hl> Five monosaccharides are important in the body . <hl> Three of these are the hexose sugars , so called because they each contain six atoms of carbon . <hl> These are glucose , fructose , and galactose , shown in Figure 2.18 a . The remaining monosaccharides are the two pentose sugars , each of which contains five atoms of carbon . They are ribose and deoxyribose , shown in Figure 2.18 b .", "hl_sentences": "A monosaccharide is a monomer of carbohydrates . 
Three of these are the hexose sugars , so called because they each contain six atoms of carbon .", "question": { "cloze_format": "C6H12O6 is the chemical formula for a ________.", "normal_format": "What is C6H12O6 the chemical formula for?", "question_choices": [ "polymer of carbohydrate", "pentose monosaccharide", "hexose monosaccharide", "all of the above" ], "question_id": "fs-id809987", "question_text": "C6H12O6 is the chemical formula for a ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "glucose" }, "bloom": null, "hl_context": "<hl> Although most body cells can break down other organic compounds for fuel , all body cells can use glucose . <hl> <hl> Moreover , nerve cells ( neurons ) in the brain , spinal cord , and through the peripheral nervous system , as well as red blood cells , can use only glucose for fuel . <hl> In the breakdown of glucose for energy , molecules of adenosine triphosphate , better known as ATP , are produced . Adenosine triphosphate ( ATP ) is composed of a ribose sugar , an adenine base , and three phosphate groups . ATP releases free energy when its phosphate bonds are broken , and thus supplies ready energy to the cell . More ATP is produced in the presence of oxygen ( O 2 ) than in pathways that do not use oxygen . The overall reaction for the conversion of the energy in glucose to energy stored in ATP can be written :", "hl_sentences": "Although most body cells can break down other organic compounds for fuel , all body cells can use glucose . Moreover , nerve cells ( neurons ) in the brain , spinal cord , and through the peripheral nervous system , as well as red blood cells , can use only glucose for fuel .", "question": { "cloze_format": "Brain cells primarily rely on the organic compound ___ for fuel.", "normal_format": "What organic compound do brain cells primarily rely on for fuel?", "question_choices": [ "glucose", "glycogen", "galactose", "glycerol" ], "question_id": "fs-id2233845", "question_text": "What organic compound do brain cells primarily rely on for fuel?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "amino" }, "bloom": "1", "hl_context": "<hl> Amino groups are found within amino acids , the building blocks of proteins . <hl>", "hl_sentences": "Amino groups are found within amino acids , the building blocks of proteins .", "question": { "cloze_format": "___ is a functional group that is part of a building block of proteins.", "normal_format": "Which of the following is a functional group that is part of a building block of proteins?", "question_choices": [ "phosphate", "adenine", "amino", "ribose" ], "question_id": "fs-id2123086", "question_text": "Which of the following is a functional group that is part of a building block of proteins?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "nucleic acids" }, "bloom": "1", "hl_context": "<hl> The nucleic acids differ in their type of pentose sugar . <hl> Deoxyribonucleic acid ( DNA ) is nucleotide that stores genetic information . DNA contains deoxyribose ( so-called because it has one less atom of oxygen than ribose ) plus one phosphate group and one nitrogen-containing base . The “ choices ” of base for DNA are adenine , cytosine , guanine , and thymine . Ribonucleic acid ( RNA ) is a ribose-containing nucleotide that helps manifest the genetic code as protein . 
RNA contains ribose , one phosphate group , and one nitrogen-containing base , but the “ choices ” of base for RNA are adenine , cytosine , guanine , and uracil . The nitrogen-containing bases adenine and guanine are classified as purines . A purine is a nitrogen-containing molecule with a double ring structure , which accommodates several nitrogen atoms . The bases cytosine , thymine ( found in DNA only ) and uracil ( found in RNA only ) are pyramidines . A pyramidine is a nitrogen-containing base with a single ring structure", "hl_sentences": "The nucleic acids differ in their type of pentose sugar .", "question": { "cloze_format": "A pentose sugar is a part of the monomer used to build the ___ macromolecule.", "normal_format": "A pentose sugar is a part of the monomer used to build which type of macromolecule?", "question_choices": [ "polysaccharides", "nucleic acids", "phosphorylated glucose", "glycogen" ], "question_id": "fs-id971341", "question_text": "A pentose sugar is a part of the monomer used to build which type of macromolecule?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "has both polar and nonpolar regions" }, "bloom": "1", "hl_context": "As its name suggests , a phospholipid is a bond between the glycerol component of a lipid and a phosphorous molecule . In fact , phospholipids are similar in structure to triglycerides . However , instead of having three fatty acids , a phospholipid is generated from a diglyceride , a glycerol with just two fatty acid chains ( Figure 2.23 ) . The third binding site on the glycerol is taken up by the phosphate group , which in turn is attached to a polar “ head ” region of the molecule . Recall that triglycerides are nonpolar and hydrophobic . This still holds for the fatty acid portion of a phospholipid compound . <hl> However , the head of a phospholipid contains charges on the phosphate groups , as well as on the nitrogen atom . <hl> <hl> These charges make the phospholipid head hydrophilic . <hl> <hl> Therefore , phospholipids are said to have hydrophobic tails , containing the neutral fatty acids , and hydrophilic heads , containing the charged phosphate groups and nitrogen atom . <hl>", "hl_sentences": "However , the head of a phospholipid contains charges on the phosphate groups , as well as on the nitrogen atom . These charges make the phospholipid head hydrophilic . Therefore , phospholipids are said to have hydrophobic tails , containing the neutral fatty acids , and hydrophilic heads , containing the charged phosphate groups and nitrogen atom .", "question": { "cloze_format": "A phospholipid ________.", "normal_format": "Which of the following is correct about a phospholid?", "question_choices": [ "has both polar and nonpolar regions", "is made up of a triglyceride bonded to a phosphate group", "is a building block of ATP", "can donate both cations and anions in solution" ], "question_id": "fs-id2291079", "question_text": "A phospholipid ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "double helix" }, "bloom": "1", "hl_context": "Bonds formed by dehydration synthesis between the pentose sugar of one nucleic acid monomer and the phosphate group of another form a “ backbone , ” from which the components ’ nitrogen-containing bases protrude . <hl> In DNA , two such backbones attach at their protruding bases via hydrogen bonds . <hl> <hl> These twist to form a shape known as a double helix ( Figure 2.29 ) . 
<hl> The sequence of nitrogen-containing bases within a strand of DNA form the genes that act as a molecular code instructing cells in the assembly of amino acids into proteins . Humans have almost 22,000 genes in their DNA , locked up in the 46 chromosomes inside the nucleus of each cell ( except red blood cells which lose their nuclei during development ) . These genes carry the genetic code to build one ’ s body , and are unique for each individual except identical twins .", "hl_sentences": "In DNA , two such backbones attach at their protruding bases via hydrogen bonds . These twist to form a shape known as a double helix ( Figure 2.29 ) .", "question": { "cloze_format": "In DNA, nucleotide bonding forms a compound with a characteristic shape known as a(n) ________.", "normal_format": "In DNA, nucleotide bonding forms a compound with a characteristic shape known as which of the following?", "question_choices": [ "beta chain", "pleated sheet", "alpha helix", "double helix" ], "question_id": "fs-id1610975", "question_text": "In DNA, nucleotide bonding forms a compound with a characteristic shape known as a(n) ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "all of the above" }, "bloom": "1", "hl_context": "The nucleic acids differ in their type of pentose sugar . Deoxyribonucleic acid ( DNA ) is nucleotide that stores genetic information . DNA contains deoxyribose ( so-called because it has one less atom of oxygen than ribose ) plus one phosphate group and one nitrogen-containing base . The “ choices ” of base for DNA are adenine , cytosine , guanine , and thymine . Ribonucleic acid ( RNA ) is a ribose-containing nucleotide that helps manifest the genetic code as protein . <hl> RNA contains ribose , one phosphate group , and one nitrogen-containing base , but the “ choices ” of base for RNA are adenine , cytosine , guanine , and uracil . <hl> <hl> The nitrogen-containing bases adenine and guanine are classified as purines . <hl> A purine is a nitrogen-containing molecule with a double ring structure , which accommodates several nitrogen atoms . <hl> The bases cytosine , thymine ( found in DNA only ) and uracil ( found in RNA only ) are pyramidines . <hl> A pyramidine is a nitrogen-containing base with a single ring structure", "hl_sentences": "RNA contains ribose , one phosphate group , and one nitrogen-containing base , but the “ choices ” of base for RNA are adenine , cytosine , guanine , and uracil . The nitrogen-containing bases adenine and guanine are classified as purines . The bases cytosine , thymine ( found in DNA only ) and uracil ( found in RNA only ) are pyramidines .", "question": { "cloze_format": "Uracil ________.", "normal_format": "Which of the following is correct about uracil?", "question_choices": [ "contains nitrogen", "is a pyrimidine", "is found in RNA", "all of the above" ], "question_id": "fs-id2022623", "question_text": "Uracil ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "specificity" }, "bloom": "1", "hl_context": "Due to this jigsaw puzzle-like match between an enzyme and its substrates , enzymes are known for their specificity . <hl> In fact , as an enzyme binds to its substrate ( s ) , the enzyme structure changes slightly to find the best fit between the transition state ( a structural intermediate between the substrate and product ) and the active site , just as a rubber glove molds to a hand inserted into it . 
<hl> This active-site modification in the presence of substrate , along with the simultaneous formation of the transition state , is called induced fit . Overall , there is a specifically matched enzyme for each substrate and , thus , for each chemical reaction ; however , there is some flexibility as well . Some enzymes have the ability to act on several different structurally related substrates . Binding of a substrate produces an enzyme – substrate complex . It is likely that enzymes speed up chemical reactions in part because the enzyme – substrate complex undergoes a set of temporary and reversible changes that cause the substrates to be oriented toward each other in an optimal position to facilitate their interaction . This promotes increased reaction speed . The enzyme then releases the product ( s ) , and resumes its original shape . The enzyme is then free to engage in the process again , and will do so as long as substrate remains . Enzymatic reactions — chemical reactions catalyzed by enzymes — begin when substrates bind to the enzyme . A substrate is a reactant in an enzymatic reaction . This occurs on regions of the enzyme known as active sites ( Figure 2.27 ) . Any given enzyme catalyzes just one type of chemical reaction . <hl> This characteristic , called specificity , is due to the fact that a substrate with a particular shape and electrical charge can bind only to an active site corresponding to that substrate . <hl>", "hl_sentences": "In fact , as an enzyme binds to its substrate ( s ) , the enzyme structure changes slightly to find the best fit between the transition state ( a structural intermediate between the substrate and product ) and the active site , just as a rubber glove molds to a hand inserted into it . This characteristic , called specificity , is due to the fact that a substrate with a particular shape and electrical charge can bind only to an active site corresponding to that substrate .", "question": { "cloze_format": "The ability of an enzyme’s active sites to bind only substrates of compatible shape and charge is known as ________.", "normal_format": "The ability of an enzyme’s active sites to bind only substrates of compatible shape and charge is known as what?", "question_choices": [ "selectivity", "specificity", "subjectivity", "specialty" ], "question_id": "fs-id2365039", "question_text": "The ability of an enzyme’s active sites to bind only substrates of compatible shape and charge is known as ________." }, "references_are_paraphrase": 0 } ]
2.1 Elements and Atoms: The Building Blocks of Matter

Learning Objectives

By the end of this section, you will be able to:
- Discuss the relationships between matter, mass, elements, compounds, atoms, and subatomic particles
- Distinguish between atomic number and mass number
- Identify the key distinction between isotopes of the same element
- Explain how electrons occupy electron shells and their contribution to an atom’s relative stability

The substance of the universe—from a grain of sand to a star—is called matter. Scientists define matter as anything that occupies space and has mass. An object’s mass and its weight are related concepts, but not quite the same. An object’s mass is the amount of matter contained in the object, and the object’s mass is the same whether that object is on Earth or in the zero-gravity environment of outer space. An object’s weight, on the other hand, is its mass as affected by the pull of gravity. Where gravity strongly pulls on an object’s mass, its weight is greater than it is where gravity is less strong. An object of a certain mass weighs less on the moon, for example, than it does on Earth because the gravity of the moon is less than that of Earth. In other words, weight is variable and is influenced by gravity. A piece of cheese that weighs a pound on Earth weighs only a few ounces on the moon.

Elements and Compounds

All matter in the natural world is composed of one or more of the 92 fundamental substances called elements. An element is a pure substance that is distinguished from all other matter by the fact that it cannot be created or broken down by ordinary chemical means. While your body can assemble many of the chemical compounds needed for life from their constituent elements, it cannot make elements. They must come from the environment. A familiar example of an element that you must take in is calcium (Ca). Calcium is essential to the human body; it is absorbed and used for a number of processes, including strengthening bones. When you consume dairy products, your digestive system breaks down the food into components small enough to cross into the bloodstream. Among these is calcium, which, because it is an element, cannot be broken down further. The elemental calcium in cheese, therefore, is the same as the calcium that forms your bones. Some other elements you might be familiar with are oxygen, sodium, and iron. The elements in the human body are shown in Figure 2.2, beginning with the most abundant: oxygen (O), carbon (C), hydrogen (H), and nitrogen (N). Each element’s name can be replaced by a one- or two-letter symbol; you will become familiar with some of these during this course. All the elements in your body are derived from the foods you eat and the air you breathe.

In nature, elements rarely occur alone. Instead, they combine to form compounds. A compound is a substance composed of two or more elements joined by chemical bonds. For example, the compound glucose is an important body fuel. It is always composed of the same three elements: carbon, hydrogen, and oxygen. Moreover, the elements that make up any given compound always occur in the same relative amounts. In glucose, there are always six carbon and six oxygen units for every twelve hydrogen units. But what, exactly, are these “units” of elements?
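Before turning to those units, the fixed-ratio idea is easy to verify mechanically. The short sketch below is illustrative only and not part of the original text; the helper name atom_counts is hypothetical, and the parser is deliberately simplistic.

```python
import re

# Tally the atoms in a simple molecular formula such as C6H12O6.
# Deliberately simplistic: assumes symbols like C, H, O, Na with optional
# counts, and no parentheses or hydrates.
def atom_counts(formula: str) -> dict[str, int]:
    counts: dict[str, int] = {}
    for symbol, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[symbol] = counts.get(symbol, 0) + (int(number) if number else 1)
    return counts

print(atom_counts("C6H12O6"))  # {'C': 6, 'H': 12, 'O': 6}: six C and six O per twelve H
print(atom_counts("H2O"))      # {'H': 2, 'O': 1}
```

Whatever the sample size, glucose always shows the same 6:12:6 proportion, which is exactly what the definition of a compound requires.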
Atoms and Subatomic Particles

An atom is the smallest quantity of an element that retains the unique properties of that element. In other words, an atom of hydrogen is a unit of hydrogen—the smallest amount of hydrogen that can exist. As you might guess, atoms are almost unfathomably small. The period at the end of this sentence is millions of atoms wide.

Atomic Structure and Energy

Atoms are made up of even smaller subatomic particles, three types of which are important: the proton, neutron, and electron. The number of positively charged protons and non-charged (“neutral”) neutrons gives mass to the atom, and the number of each in the nucleus of the atom determines the element. The number of negatively charged electrons that “spin” around the nucleus at close to the speed of light equals the number of protons. An electron has about 1/2000th the mass of a proton or neutron.

Figure 2.3 shows two models that can help you imagine the structure of an atom—in this case, helium (He). In the planetary model, helium’s two electrons are shown circling the nucleus in a fixed orbit depicted as a ring. Although this model is helpful in visualizing atomic structure, in reality, electrons do not travel in fixed orbits, but whiz around the nucleus erratically in a so-called electron cloud.

An atom’s protons and electrons carry electrical charges. Protons, with their positive charge, are designated p+. Electrons, which have a negative charge, are designated e–. An atom’s neutrons have no charge: they are electrically neutral. Just as a magnet sticks to a steel refrigerator because their opposite charges attract, the positively charged protons attract the negatively charged electrons. This mutual attraction gives the atom some structural stability. The attraction by the positively charged nucleus helps keep electrons from straying far. The numbers of protons and electrons within a neutral atom are equal; thus, the atom’s overall charge is balanced.

Atomic Number and Mass Number

An atom of carbon is unique to carbon, but a proton of carbon is not. One proton is the same as another, whether it is found in an atom of carbon, sodium (Na), or iron (Fe). The same is true for neutrons and electrons. So, what gives an element its distinctive properties—what makes carbon so different from sodium or iron? The answer is the unique quantity of protons each contains. Carbon by definition is an element whose atoms contain six protons. No other element has exactly six protons in its atoms. Moreover, all atoms of carbon, whether found in your liver or in a lump of coal, contain six protons. Thus, the atomic number, which is the number of protons in the nucleus of the atom, identifies the element. Because an atom usually has the same number of electrons as protons, the atomic number identifies the usual number of electrons as well.

In their most common form, many elements also contain the same number of neutrons as protons. The most common form of carbon, for example, has six neutrons as well as six protons, for a total of 12 subatomic particles in its nucleus. An element’s mass number is the sum of the number of protons and neutrons in its nucleus. So the most common form of carbon’s mass number is 12. (Electrons have so little mass that they do not appreciably contribute to the mass of an atom.) Carbon is a relatively light element. Uranium (U), in contrast, has a mass number of 238 and is referred to as a heavy metal. Its atomic number is 92 (it has 92 protons) but it contains 146 neutrons; it has the most mass of all the naturally occurring elements.
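Because the mass number is simply protons plus neutrons, the neutron count of any atom follows by subtraction. A minimal sketch of that arithmetic (the function name is illustrative):

```python
# Mass number = protons + neutrons, so neutrons = mass number - atomic number.
def neutron_count(mass_number: int, atomic_number: int) -> int:
    return mass_number - atomic_number

print(neutron_count(12, 6))    # 6: the most common form of carbon
print(neutron_count(238, 92))  # 146: uranium-238, matching the text
```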
The periodic table of the elements, shown in Figure 2.4, is a chart identifying the 92 elements found in nature, as well as several larger, unstable elements discovered experimentally. The elements are arranged in order of their atomic number, with hydrogen and helium at the top of the table, and the more massive elements below. The periodic table is a useful device because for each element, it identifies the chemical symbol, the atomic number, and the mass number, while organizing elements according to their propensity to react with other elements. The numbers of protons and electrons in an element are equal. The numbers of protons and neutrons may be equal for some elements, but are not equal for all.

Interactive Link

Visit this website to view the periodic table. In the periodic table of the elements, elements in a single column have the same number of electrons that can participate in a chemical reaction. These electrons are known as “valence electrons.” For example, the elements in the first column all have a single valence electron, an electron that can be “donated” in a chemical reaction with another atom. What is the meaning of a mass number shown in parentheses?

Isotopes

Although each element has a unique number of protons, it can exist as different isotopes. An isotope is one of the different forms of an element, distinguished from one another by different numbers of neutrons. The standard isotope of carbon is 12C, commonly called carbon twelve. 12C has six protons and six neutrons, for a mass number of twelve. All of the isotopes of carbon have the same number of protons; therefore, 13C has seven neutrons, and 14C has eight neutrons. The different isotopes of an element can also be indicated with the mass number hyphenated (for example, C-12 instead of 12C). Hydrogen has three common isotopes, shown in Figure 2.5.

An isotope that contains more than the usual number of neutrons is referred to as a heavy isotope. An example is 14C. Heavy isotopes tend to be unstable, and unstable isotopes are radioactive. A radioactive isotope is an isotope whose nucleus readily decays, giving off subatomic particles and electromagnetic energy. Different radioactive isotopes (also called radioisotopes) differ in their half-life, the time it takes for half of any size sample of an isotope to decay. For example, the half-life of tritium—a radioisotope of hydrogen—is about 12 years, indicating it takes 12 years for half of the tritium nuclei in a sample to decay.
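The half-life concept can be stated as a formula: after an elapsed time t, the fraction of a radioisotope remaining is (1/2)^(t/T), where T is the half-life. A minimal sketch using the approximate 12-year tritium half-life quoted above (names and values illustrative):

```python
# Fraction of a radioisotope remaining after a given time,
# where each half-life halves the sample: remaining = 0.5 ** (t / T).
def fraction_remaining(elapsed_years: float, half_life_years: float) -> float:
    return 0.5 ** (elapsed_years / half_life_years)

TRITIUM_HALF_LIFE = 12.0  # years, the approximate value quoted above

print(fraction_remaining(12.0, TRITIUM_HALF_LIFE))  # 0.5: half remains
print(fraction_remaining(24.0, TRITIUM_HALF_LIFE))  # 0.25: a quarter remains
```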
Excessive exposure to radioactive isotopes can damage human cells and even cause cancer and birth defects, but when exposure is controlled, some radioactive isotopes can be useful in medicine. For more information, see the Career Connection below.

Career Connection

Interventional Radiologist

The controlled use of radioisotopes has advanced medical diagnosis and treatment of disease. Interventional radiologists are physicians who treat disease by using minimally invasive techniques involving radiation. Many conditions that could once only be treated with a lengthy and traumatic operation can now be treated non-surgically, reducing the cost, pain, length of hospital stay, and recovery time for patients. For example, in the past, the only options for a patient with one or more tumors in the liver were surgery and chemotherapy (the administration of drugs to treat cancer). Some liver tumors, however, are difficult to access surgically, and others could require the surgeon to remove too much of the liver. Moreover, chemotherapy is highly toxic to the liver, and certain tumors do not respond well to it anyway. In some such cases, an interventional radiologist can treat the tumors by disrupting their blood supply, which they need if they are to continue to grow. In this procedure, called radioembolization, the radiologist accesses the liver with a fine needle, threaded through one of the patient’s blood vessels. The radiologist then inserts tiny radioactive “seeds” into the blood vessels that supply the tumors. In the days and weeks following the procedure, the radiation emitted from the seeds destroys the vessels and directly kills the tumor cells in the vicinity of the treatment.

Radioisotopes emit subatomic particles that can be detected and tracked by imaging technologies. One of the most advanced uses of radioisotopes in medicine is the positron emission tomography (PET) scanner, which detects the activity in the body of a very small injection of radioactive glucose, the simple sugar that cells use for energy. The PET camera reveals to the medical team which of the patient’s tissues are taking up the most glucose. Thus, the most metabolically active tissues show up as bright “hot spots” on the images (Figure 2.6). PET can reveal some cancerous masses because cancer cells consume glucose at a high rate to fuel their rapid reproduction.

The Behavior of Electrons

In the human body, atoms do not exist as independent entities. Rather, they are constantly reacting with other atoms to form and to break down more complex substances. To fully understand anatomy and physiology you must grasp how atoms participate in such reactions. The key is understanding the behavior of electrons.

Although electrons do not follow rigid orbits a set distance away from the atom’s nucleus, they do tend to stay within certain regions of space called electron shells. An electron shell is a layer of electrons that encircle the nucleus at a distinct energy level. The atoms of the elements found in the human body have from one to five electron shells, and all electron shells hold eight electrons except the first shell, which can only hold two. This configuration of electron shells is the same for all atoms. The precise number of shells depends on the number of electrons in the atom. Hydrogen and helium have just one and two electrons, respectively. If you take a look at the periodic table of the elements, you will notice that hydrogen and helium are placed alone on either side of the top row; they are the only elements that have just one electron shell (Figure 2.7). A second shell is necessary to hold the electrons in all elements larger than hydrogen and helium. Lithium (Li), whose atomic number is 3, has three electrons. Two of these fill the first electron shell, and the third spills over into a second shell. The second electron shell can accommodate as many as eight electrons. Carbon, with its six electrons, entirely fills its first shell, and half-fills its second. With ten electrons, neon (Ne) entirely fills its two electron shells. Again, a look at the periodic table reveals that all of the elements in the second row, from lithium to neon, have just two electron shells. Atoms with more than ten electrons require more than two shells. These elements occupy the third and subsequent rows of the periodic table.
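Under the simplified filling rule used here (two electrons in the first shell, eight in each shell after it), an atom's shell configuration can be derived directly from its atomic number. The sketch below is a rough illustration of that rule only; it ignores the subshell ordering that governs heavier elements.

```python
# Distribute electrons into shells under the simplified rule in the text:
# the first shell holds 2 electrons, each subsequent shell holds 8.
def electron_shells(atomic_number: int) -> list[int]:
    shells, remaining, capacity = [], atomic_number, 2
    while remaining > 0:
        shells.append(min(remaining, capacity))
        remaining -= shells[-1]
        capacity = 8  # every shell after the first
    return shells

print(electron_shells(3))   # [2, 1]: lithium's third electron spills into shell two
print(electron_shells(6))   # [2, 4]: carbon half-fills its second shell
print(electron_shells(7))   # [2, 5]: nitrogen has two shells
print(electron_shells(10))  # [2, 8]: neon fills both of its shells
```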
The factor that most strongly governs the tendency of an atom to participate in chemical reactions is the number of electrons in its valence shell. A valence shell is an atom’s outermost electron shell. If the valence shell is full, the atom is stable, meaning its electrons are unlikely to be pulled away from the nucleus by the electrical charge of other atoms. If the valence shell is not full, the atom is reactive, meaning it will tend to react with other atoms in ways that make the valence shell full. Consider hydrogen, with its one electron only half-filling its valence shell. This single electron is likely to be drawn into relationships with the atoms of other elements, so that hydrogen’s single valence shell can be stabilized.

All atoms (except hydrogen and helium with their single electron shells) are most stable when there are exactly eight electrons in their valence shell. This principle is referred to as the octet rule, and it states that an atom will give up, gain, or share electrons with another atom so that it ends up with eight electrons in its own valence shell. For example, oxygen, with six electrons in its valence shell, is likely to react with other atoms in a way that results in the addition of two electrons to oxygen’s valence shell, bringing the number to eight. When two hydrogen atoms each share their single electron with oxygen, covalent bonds are formed, resulting in a molecule of water, H2O.

In nature, atoms of one element tend to join with atoms of other elements in characteristic ways. For example, carbon commonly fills its valence shell by linking up with four atoms of hydrogen. In so doing, the two elements form the simplest of organic molecules, methane, which also is one of the most abundant and stable carbon-containing compounds on Earth. As stated above, another example is water; oxygen needs two electrons to fill its valence shell. It commonly interacts with two atoms of hydrogen, forming H2O. Incidentally, the name “hydrogen” reflects its contribution to water (hydro- = “water”; -gen = “maker”). Thus, hydrogen is the “water maker.”
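Continuing the earlier sketch, the octet rule can be phrased as a stability check on the valence shell. Again, this is a simplification for illustration (the shell-filling helper is repeated here so the block stands alone), not a general chemistry routine.

```python
# A stability check based on the octet rule, reusing the simplified
# shell-filling rule from the previous sketch (illustrative only).
def electron_shells(atomic_number: int) -> list[int]:
    shells, remaining, capacity = [], atomic_number, 2
    while remaining > 0:
        shells.append(min(remaining, capacity))
        remaining -= shells[-1]
        capacity = 8
    return shells

def is_stable(atomic_number: int) -> bool:
    shells = electron_shells(atomic_number)
    full = 2 if len(shells) == 1 else 8  # first shell fills at 2, later shells at 8
    return shells[-1] == full

print(is_stable(2))   # True:  helium, full first shell
print(is_stable(10))  # True:  neon, full octet
print(is_stable(8))   # False: oxygen has six valence electrons and is reactive
print(is_stable(1))   # False: hydrogen's valence shell is only half-full
```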
2.2 Chemical Bonds

Learning Objectives

By the end of this section, you will be able to:
- Explain the relationship between molecules and compounds
- Distinguish between ions, cations, and anions
- Identify the key difference between ionic and covalent bonds
- Distinguish between nonpolar and polar covalent bonds
- Explain how water molecules link via hydrogen bonds

Atoms separated by a great distance cannot link; rather, they must come close enough for the electrons in their valence shells to interact. But do atoms ever actually touch one another? Most physicists would say no, because the negatively charged electrons in their valence shells repel one another. No force within the human body—or anywhere in the natural world—is strong enough to overcome this electrical repulsion. So when you read about atoms linking together or colliding, bear in mind that the atoms are not merging in a physical sense.

Instead, atoms link by forming a chemical bond. A bond is a weak or strong electrical attraction that holds atoms in the same vicinity. The new grouping is typically more stable—less likely to react again—than its component atoms were when they were separate. A more or less stable grouping of two or more atoms held together by chemical bonds is called a molecule. The bonded atoms may be of the same element, as in the case of H2, which is called molecular hydrogen or hydrogen gas. When a molecule is made up of two or more atoms of different elements, it is called a chemical compound. Thus, a unit of water, or H2O, is a compound, as is a single molecule of the gas methane, or CH4.

Three types of chemical bonds are important in human physiology, because they hold together substances that are used by the body for critical aspects of homeostasis, signaling, and energy production, to name just a few important processes. These are ionic bonds, covalent bonds, and hydrogen bonds.

Ions and Ionic Bonds

Recall that an atom typically has the same number of positively charged protons and negatively charged electrons. As long as this situation remains, the atom is electrically neutral. But when an atom participates in a chemical reaction that results in the donation or acceptance of one or more electrons, the atom will then become positively or negatively charged. This happens frequently for most atoms in order to have a full valence shell, as described previously. This can happen either by gaining electrons to fill a shell that is more than half-full, or by giving away electrons to empty a shell that is less than half-full, thereby leaving the next smaller electron shell as the new, full, valence shell. An atom that has an electrical charge—whether positive or negative—is an ion.

Interactive Link

Visit this website to learn about electrical energy and the attraction/repulsion of charges. What happens to the charged electroscope when a conductor is moved between its plastic sheets, and why?

Potassium (K), for instance, is an important element in all body cells. Its atomic number is 19. It has just one electron in its valence shell. This characteristic makes potassium highly likely to participate in chemical reactions in which it donates one electron. (It is easier for potassium to donate one electron than to gain seven electrons.) The loss will cause the positive charge of potassium’s protons to be more influential than the negative charge of potassium’s electrons. In other words, the resulting potassium ion will be slightly positive. A potassium ion is written K+, indicating that it has lost a single electron. A positively charged ion is known as a cation.

Now consider fluorine (F), a component of bones and teeth. Its atomic number is nine, and it has seven electrons in its valence shell. Thus, it is highly likely to bond with other atoms in such a way that fluorine accepts one electron (it is easier for fluorine to gain one electron than to donate seven electrons). When it does, its electrons will outnumber its protons by one, and it will have an overall negative charge. The ionized form of fluorine is called fluoride, and is written as F–. A negatively charged ion is known as an anion.

Atoms that have more than one electron to donate or accept will end up with stronger positive or negative charges. A cation that has donated two electrons has a net charge of +2. Using magnesium (Mg) as an example, this can be written Mg++ or Mg2+. An anion that has accepted two electrons has a net charge of –2. The ionic form of selenium (Se), for example, is typically written Se2–.

The opposite charges of cations and anions exert a moderately strong mutual attraction that keeps the atoms in close proximity, forming an ionic bond. An ionic bond is an ongoing, close association between ions of opposite charge. The table salt you sprinkle on your food owes its existence to ionic bonding. As shown in Figure 2.8, sodium commonly donates an electron to chlorine, becoming the cation Na+. When chlorine accepts the electron, it becomes the chloride anion, Cl–. With their opposing charges, these two ions strongly attract each other.

Water is an essential component of life because it is able to break the ionic bonds in salts to free the ions. In fact, in biological fluids, most individual atoms exist as ions. These dissolved ions produce electrical charges within the body. The behavior of these ions produces the tracings of heart and brain function observed as waves on an electrocardiogram (EKG or ECG) or an electroencephalogram (EEG). The electrical activity that derives from the interactions of the charged ions is why they are also called electrolytes.
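An ion's net charge is simply its proton count minus its electron count, so the examples above reduce to one line of arithmetic. A minimal illustrative sketch:

```python
# Net charge of an ion = protons - electrons.
def ion_charge(protons: int, electrons: int) -> int:
    return protons - electrons

# Potassium (atomic number 19) donates one electron, becoming K+.
print(ion_charge(19, 18))  # +1, a cation
# Fluorine (atomic number 9) accepts one electron, becoming fluoride, F-.
print(ion_charge(9, 10))   # -1, an anion
# Magnesium (atomic number 12) donates two electrons, becoming Mg2+.
print(ion_charge(12, 10))  # +2
```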
Covalent Bonds

Unlike ionic bonds formed by the attraction between a cation’s positive charge and an anion’s negative charge, molecules formed by a covalent bond share electrons in a mutually stabilizing relationship. Like next-door neighbors whose kids hang out first at one home and then at the other, the atoms do not lose or gain electrons permanently. Instead, the electrons move back and forth between the elements. Because of the close sharing of pairs of electrons (one electron from each of two atoms), covalent bonds are stronger than ionic bonds.

Nonpolar Covalent Bonds

Figure 2.9 shows several common types of covalent bonds. Notice that the two covalently bonded atoms typically share just one or two electron pairs, though larger numbers of shared pairs are possible. The important concept to take from this is that in covalent bonds, electrons in the two atoms’ overlapping atomic orbitals are shared to fill the valence shells of both atoms, ultimately stabilizing both of the atoms involved. In a single covalent bond, a single electron pair is shared between two atoms, while in a double covalent bond, two pairs of electrons are shared between two atoms. There are even triple covalent bonds, where three electron pairs are shared between two atoms.

You can see that the covalent bonds shown in Figure 2.9 are balanced. The sharing of the negative electrons is relatively equal, as is the electrical pull of the positive protons in the nucleus of the atoms involved. This is why covalently bonded molecules that are electrically balanced in this way are described as nonpolar; that is, no region of the molecule is either more positive or more negative than any other.

Polar Covalent Bonds

Groups of legislators with completely opposite views on a particular issue are often described as “polarized” by news writers. In chemistry, a polar molecule is a molecule that contains regions that have opposite electrical charges. Polar molecules occur when atoms share electrons unequally, in polar covalent bonds.

The most familiar example of a polar molecule is water (Figure 2.10). The molecule has three parts: one atom of oxygen, the nucleus of which contains eight protons, and two hydrogen atoms, whose nuclei each contain only one proton. Because every proton exerts an identical positive charge, a nucleus that contains eight protons exerts a charge eight times greater than a nucleus that contains one proton. This means that the negatively charged electrons present in the water molecule are more strongly attracted to the oxygen nucleus than to the hydrogen nuclei. Each hydrogen atom’s single negative electron therefore migrates toward the oxygen atom, making the oxygen end of their bond slightly more negative than the hydrogen end of their bond. What is true for the bonds is true for the water molecule as a whole; that is, the oxygen region has a slightly negative charge and the regions of the hydrogen atoms have a slightly positive charge.
These charges are often referred to as “partial charges” because the strength of the charge is less than one full electron, as would occur in an ionic bond. As shown in Figure 2.10, regions of weak polarity are indicated with the Greek letter delta (δ) and a plus (+) or minus (–) sign.

Even though a single water molecule is unimaginably tiny, it has mass, and the opposing electrical charges on the molecule pull that mass in such a way that it creates a shape somewhat like a triangular tent (see Figure 2.10b). This dipole, with the positive charges at one end formed by the hydrogen atoms at the “bottom” of the tent and the negative charge at the opposite end (the oxygen atom at the “top” of the tent), makes the charged regions highly likely to interact with charged regions of other polar molecules. For human physiology, the resulting bond is one of the most important formed by water—the hydrogen bond.

Hydrogen Bonds

A hydrogen bond is formed when a weakly positive hydrogen atom already bonded to one electronegative atom (for example, the oxygen in the water molecule) is attracted to another electronegative atom from another molecule. In other words, hydrogen bonds always include hydrogen that is already part of a polar molecule. The most common example of hydrogen bonding in the natural world occurs between molecules of water. It happens before your eyes whenever two raindrops merge into a larger bead, or a creek spills into a river. Hydrogen bonding occurs because the weakly negative oxygen atom in one water molecule is attracted to the weakly positive hydrogen atoms of two other water molecules (Figure 2.11).

Water molecules also strongly attract other types of charged molecules as well as ions. This explains why table salt, which in chemistry is a molecule called a salt consisting of equal numbers of positively charged sodium ions (Na+) and negatively charged chloride ions (Cl–), dissolves so readily in water; in this case, dipole-ion bonds form between the water and the electrically charged ions (electrolytes). Water molecules also repel molecules with nonpolar covalent bonds, like fats, lipids, and oils. You can demonstrate this with a simple kitchen experiment: pour a teaspoon of vegetable oil, a compound formed by nonpolar covalent bonds, into a glass of water. Instead of instantly dissolving in the water, the oil forms a distinct bead because the polar water molecules repel the nonpolar oil.

2.3 Chemical Reactions

Learning Objectives

By the end of this section, you will be able to:
- Distinguish between kinetic and potential energy, and between exergonic and endergonic chemical reactions
- Identify four forms of energy important in human functioning
- Describe the three basic types of chemical reactions
- Identify several factors influencing the rate of chemical reactions

One characteristic of a living organism is metabolism, which is the sum total of all of the chemical reactions that go on to maintain that organism’s health and life. The bonding processes you have learned thus far are anabolic chemical reactions; that is, they form larger molecules from smaller molecules or atoms. But recall that metabolism can proceed in another direction: in catabolic chemical reactions, bonds between components of larger molecules break, releasing smaller molecules or atoms. Both types of reaction involve exchanges not only of matter, but of energy.
The Role of Energy in Chemical Reactions

Chemical reactions require a sufficient amount of energy to cause the matter to collide with enough precision and force that old chemical bonds can be broken and new ones formed. In general, kinetic energy is the form of energy powering any type of matter in motion. Imagine you are building a brick wall. The energy it takes to lift and place one brick atop another is kinetic energy—the energy matter possesses because of its motion. Once the wall is in place, it stores potential energy. Potential energy is the energy of position, or the energy matter possesses because of the positioning or structure of its components. If the brick wall collapses, the stored potential energy is released as kinetic energy as the bricks fall.

In the human body, potential energy is stored in the bonds between atoms and molecules. Chemical energy is the form of potential energy in which energy is stored in chemical bonds. When those bonds are formed, chemical energy is invested, and when they break, chemical energy is released. Notice that chemical energy, like all energy, is neither created nor destroyed; rather, it is converted from one form to another. When you eat an energy bar before heading out the door for a hike, the honey, nuts, and other foods the bar contains are broken down and rearranged by your body into molecules that your muscle cells convert to kinetic energy.

Chemical reactions that release more energy than they absorb are characterized as exergonic. The catabolism of the foods in your energy bar is an example. Some of the chemical energy stored in the bar is absorbed into molecules your body uses for fuel, but some of it is released—for example, as heat. In contrast, chemical reactions that absorb more energy than they release are endergonic. These reactions require energy input, and the resulting molecule stores not only the chemical energy in the original components, but also the energy that fueled the reaction. Because energy is neither created nor destroyed, where does the energy needed for endergonic reactions come from? In many cases, it comes from exergonic reactions.

Forms of Energy Important in Human Functioning

You have already learned that chemical energy is absorbed, stored, and released by chemical bonds. In addition to chemical energy, mechanical, radiant, and electrical energy are important in human functioning. Mechanical energy, which is stored in physical systems such as machines, engines, or the human body, directly powers the movement of matter. When you lift a brick into place on a wall, your muscles provide the mechanical energy that moves the brick. Radiant energy is energy emitted and transmitted as waves rather than matter. These waves vary in length from long radio waves and microwaves to short gamma waves emitted from decaying atomic nuclei. The full spectrum of radiant energy is referred to as the electromagnetic spectrum. The body uses the ultraviolet energy of sunlight to convert a compound in skin cells to vitamin D, which is essential to human functioning. The human eye evolved to see the wavelengths that comprise the colors of the rainbow, from red to violet, so that range in the spectrum is called “visible light.” Electrical energy, supplied by electrolytes in cells and body fluids, contributes to the voltage changes that help transmit impulses in nerve and muscle cells.
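Returning to the brick-wall example, the two kinds of energy can be made quantitative with the standard physics formulas KE = (1/2)mv^2 and PE = mgh. The Python sketch below is purely illustrative; the brick’s mass, the wall height, and the impact speed are made-up values:

```python
def kinetic_energy(mass_kg: float, speed_m_s: float) -> float:
    """Energy of matter in motion: KE = (1/2) * m * v^2, in joules."""
    return 0.5 * mass_kg * speed_m_s ** 2

def potential_energy(mass_kg: float, height_m: float, g: float = 9.81) -> float:
    """Energy of position (gravitational): PE = m * g * h, in joules."""
    return mass_kg * g * height_m

# A hypothetical 2 kg brick lifted onto a 1.5 m wall stores about 29 J;
# if the wall collapses, that stored energy reappears as kinetic energy.
print(potential_energy(2.0, 1.5))   # ~29.4 J
print(kinetic_energy(2.0, 5.4))     # ~29.2 J, roughly the brick's energy just before impact
```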
Characteristics of Chemical Reactions

All chemical reactions begin with a reactant, the general term for the one or more substances that enter into the reaction. Sodium and chloride ions, for example, are the reactants in the production of table salt. The one or more substances produced by a chemical reaction are called the product. In chemical reactions, the components of the reactants—the elements involved and the number of atoms of each—are all present in the product(s). Similarly, there is nothing present in the products that is not present in the reactants. This is because chemical reactions are governed by the law of conservation of mass, which states that matter cannot be created or destroyed in a chemical reaction.

Just as you can express mathematical calculations in equations such as 2 + 7 = 9, you can use chemical equations to show how reactants become products. As in math, chemical equations proceed from left to right, but instead of an equal sign, they employ an arrow or arrows indicating the direction in which the chemical reaction proceeds. For example, the chemical reaction in which one atom of nitrogen and three atoms of hydrogen produce ammonia would be written as N + 3H → NH3. Correspondingly, the breakdown of ammonia into its components would be written as NH3 → N + 3H.

Notice that, in the first example, a nitrogen (N) atom and three hydrogen (H) atoms bond to form a compound. This anabolic reaction requires energy, which is then stored within the compound’s bonds. Such reactions are referred to as synthesis reactions. A synthesis reaction is a chemical reaction that results in the synthesis (joining) of components that were formerly separate (Figure 2.12a). Again, nitrogen and hydrogen are reactants in a synthesis reaction that yields ammonia as the product. The general equation for a synthesis reaction is A + B → AB.

In the second example, ammonia is catabolized into its smaller components, and the potential energy that had been stored in its bonds is released. Such reactions are referred to as decomposition reactions. A decomposition reaction is a chemical reaction that breaks down or “de-composes” something larger into its constituent parts (see Figure 2.12b). The general equation for a decomposition reaction is AB → A + B.
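The law of conservation of mass can be checked mechanically: count the atoms of each element on both sides of the equation. Below is a minimal Python sketch of such a checker (illustrative only; it handles simple species such as N, 3H, and NH3, with no parentheses, hydrates, or charges):

```python
import re
from collections import Counter

def atom_counts(species: str) -> Counter:
    """Count atoms in a simple species like 'NH3' or '3H' (no parentheses)."""
    m = re.match(r"^(\d+)?(.*)$", species)
    coeff = int(m.group(1)) if m.group(1) else 1  # leading coefficient, e.g. '3H'
    counts = Counter()
    for element, num in re.findall(r"([A-Z][a-z]?)(\d*)", m.group(2)):
        counts[element] += coeff * (int(num) if num else 1)
    return counts

def is_balanced(reactants, products) -> bool:
    """Conservation of mass: the same atoms must appear on both sides."""
    left = sum((atom_counts(s) for s in reactants), Counter())
    right = sum((atom_counts(s) for s in products), Counter())
    return left == right

print(is_balanced(["N", "3H"], ["NH3"]))  # True: the synthesis reaction above
print(is_balanced(["NH3"], ["N", "2H"]))  # False: a hydrogen atom would vanish
```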
An exchange reaction is a chemical reaction in which both synthesis and decomposition occur, chemical bonds are both formed and broken, and chemical energy is absorbed, stored, and released (see Figure 2.12c). The simplest form of an exchange reaction might be A + BC → AB + C. Notice that, to produce these products, B and C had to break apart in a decomposition reaction, whereas A and B had to bond in a synthesis reaction. A more complex exchange reaction might be AB + CD → AC + BD. Another example might be AB + CD → AD + BC.

In theory, any chemical reaction can proceed in either direction under the right conditions. Reactants may synthesize into a product that is later decomposed. Reversibility is also a quality of exchange reactions. For instance, A + BC → AB + C could then reverse to AB + C → A + BC. This reversibility of a chemical reaction is indicated with a double arrow: A + BC ⇄ AB + C. Still, in the human body, many chemical reactions do proceed in a predictable direction, either one way or the other. You can think of this more predictable path as the path of least resistance because, typically, the alternate direction requires more energy.
Factors Influencing the Rate of Chemical Reactions

If you pour vinegar into baking soda, the reaction is instantaneous; the concoction will bubble and fizz. But many chemical reactions take time. A variety of factors influence the rate of chemical reactions. This section, however, will consider only the most important in human functioning.

Properties of the Reactants

If chemical reactions are to occur quickly, the atoms in the reactants have to have easy access to one another. Thus, the greater the surface area of the reactants, the more readily they will interact. When you pop a cube of cheese into your mouth, you chew it before you swallow it. Among other things, chewing increases the surface area of the food so that digestive chemicals can more easily get at it. As a general rule, gases tend to react faster than liquids or solids, again because it takes energy to separate particles of a substance, and gases by definition already have space between their particles. Similarly, the larger the molecule, the greater the number of total bonds, so reactions involving smaller molecules, with fewer total bonds, would be expected to proceed faster. In addition, recall that some elements are more reactive than others. Reactions that involve highly reactive elements like hydrogen proceed more quickly than reactions that involve less reactive elements. Reactions involving stable elements like helium are not likely to happen at all.

Temperature

Nearly all chemical reactions occur at a faster rate at higher temperatures. Recall that kinetic energy is the energy of matter in motion. The kinetic energy of subatomic particles increases in response to increases in thermal energy. The higher the temperature, the faster the particles move, and the more likely they are to come in contact and react.

Concentration and Pressure

If just a few people are dancing at a club, they are unlikely to step on each other’s toes. But as more and more people get up to dance—especially if the music is fast—collisions are likely to occur. It is the same with chemical reactions: the more particles present within a given space, the more likely those particles are to bump into one another. This means that chemists can speed up chemical reactions not only by increasing the concentration of particles—the number of particles in the space—but also by decreasing the volume of the space, which would correspondingly increase the pressure. If there were 100 dancers in that club, and the manager abruptly moved the party to a room half the size, the concentration of the dancers would double in the new space, and the likelihood of collisions would increase accordingly.

Enzymes and Other Catalysts

For two chemicals in nature to react with each other they first have to come into contact, and this occurs through random collisions. Because heat helps increase the kinetic energy of atoms, ions, and molecules, it promotes their collision. But in the body, extremely high heat—such as a very high fever—can damage body cells and be life-threatening. On the other hand, normal body temperature is not high enough to promote the chemical reactions that sustain life. That is where catalysts come in. In chemistry, a catalyst is a substance that increases the rate of a chemical reaction without itself undergoing any change. You can think of a catalyst as a chemical change agent. They help increase the rate and force at which atoms, ions, and molecules collide, thereby increasing the probability that their valence shell electrons will interact.
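The influence of temperature on reaction rate, and (as the next paragraphs explain) the effect of lowering a reaction’s activation energy, are commonly summarized by the Arrhenius equation, k = A·exp(−Ea/(R·T)). The Python sketch below is illustrative only; the pre-exponential factor A and the activation energy Ea are made-up values:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_rate(A: float, Ea_j_per_mol: float, T_kelvin: float) -> float:
    """Arrhenius equation: k = A * exp(-Ea / (R * T))."""
    return A * math.exp(-Ea_j_per_mol / (R * T_kelvin))

A, Ea = 1e7, 50_000  # hypothetical pre-exponential factor and activation energy

# Raising temperature speeds the reaction (room temperature vs. body temperature):
print(arrhenius_rate(A, Ea, 298.15))        # ~0.017 per second
print(arrhenius_rate(A, Ea, 310.15))        # ~0.038 per second

# Lowering the activation energy, which is what a catalyst does, speeds it far more:
print(arrhenius_rate(A, Ea * 0.8, 310.15))  # ~1.8 per second
```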
The most important catalysts in the human body are enzymes. An enzyme is a catalyst composed of protein or ribonucleic acid (RNA), both of which will be discussed later in this chapter. Like all catalysts, enzymes work by lowering the level of energy that needs to be invested in a chemical reaction. A chemical reaction’s activation energy is the “threshold” level of energy needed to break the bonds in the reactants. Once those bonds are broken, new arrangements can form. Without an enzyme to act as a catalyst, a much larger investment of energy is needed to ignite a chemical reaction (Figure 2.13). Enzymes are critical to the body’s healthy functioning. They assist, for example, with the breakdown of food and its conversion to energy. In fact, most of the chemical reactions in the body are facilitated by enzymes.

2.4 Inorganic Compounds Essential to Human Functioning

Learning Objectives

By the end of this section, you will be able to:
- Compare and contrast inorganic and organic compounds
- Identify the properties of water that make it essential to life
- Explain the role of salts in body functioning
- Distinguish between acids and bases, and explain their role in pH
- Discuss the role of buffers in helping the body maintain pH homeostasis

The concepts you have learned so far in this chapter govern all forms of matter, and would work as a foundation for geology as well as biology. This section of the chapter narrows the focus to the chemistry of human life; that is, the compounds important for the body’s structure and function. In general, these compounds are either inorganic or organic. An inorganic compound is a substance that does not contain both carbon and hydrogen. A great many inorganic compounds do contain hydrogen atoms, such as water (H2O) and the hydrochloric acid (HCl) produced by your stomach. In contrast, only a handful of inorganic compounds contain carbon atoms. Carbon dioxide (CO2) is one of the few examples. An organic compound, then, is a substance that contains both carbon and hydrogen. Organic compounds are synthesized via covalent bonds within living organisms, including the human body. Recall that carbon and hydrogen are the second and third most abundant elements in your body. You will soon discover how these two elements combine in the foods you eat, in the compounds that make up your body structure, and in the chemicals that fuel your functioning. The following section examines the three groups of inorganic compounds essential to life: water; salts; and acids and bases. Organic compounds are covered later in the chapter.

Water

As much as 70 percent of an adult’s body weight is water. This water is contained both within the cells and between the cells that make up tissues and organs. Its several roles make water indispensable to human functioning.

Water as a Lubricant and Cushion

Water is a major component of many of the body’s lubricating fluids. Just as oil lubricates the hinge on a door, water in synovial fluid lubricates the actions of body joints, and water in pleural fluid helps the lungs expand and recoil with breathing. Watery fluids help keep food flowing through the digestive tract, and ensure that the movement of adjacent abdominal organs is friction free. Water also protects cells and organs from physical trauma, cushioning the brain within the skull, for example, and protecting the delicate nerve tissue of the eyes. Water cushions a developing fetus in the mother’s womb as well.
Water as a Heat Sink

A heat sink is a substance or object that absorbs and dissipates heat but does not experience a corresponding increase in temperature. In the body, water absorbs the heat generated by chemical reactions without greatly increasing in temperature. Moreover, when the environmental temperature soars, the water stored in the body helps keep the body cool. This cooling effect happens as warm blood from the body’s core flows to the blood vessels just under the skin and is transferred to the environment. At the same time, sweat glands release warm water in sweat. As the water evaporates into the air, it carries away heat, and then the cooler blood from the periphery circulates back to the body core.

Water as a Component of Liquid Mixtures

A mixture is a combination of two or more substances, each of which maintains its own chemical identity. In other words, the constituent substances are not chemically bonded into a new, larger chemical compound. The concept is easy to imagine if you think of powdery substances such as flour and sugar; when you stir them together in a bowl, they obviously do not bond to form a new compound. The room air you breathe is a gaseous mixture, containing three discrete elements—nitrogen, oxygen, and argon—and one compound, carbon dioxide. There are three types of liquid mixtures, all of which contain water as a key component. These are solutions, colloids, and suspensions.

For cells in the body to survive, they must be kept moist in a water-based liquid called a solution. In chemistry, a liquid solution consists of a solvent that dissolves a substance called a solute. An important characteristic of solutions is that they are homogeneous; that is, the solute molecules are distributed evenly throughout the solution. If you were to stir a teaspoon of sugar into a glass of water, the sugar would dissolve into sugar molecules separated by water molecules. The ratio of sugar to water in the left side of the glass would be the same as the ratio of sugar to water in the right side of the glass. If you were to add more sugar, the ratio of sugar to water would change, but the distribution—provided you had stirred well—would still be even.

Water is considered the “universal solvent,” and it is believed that life cannot exist without water because of this. Water is certainly the most abundant solvent in the body; essentially all of the body’s chemical reactions occur among compounds dissolved in water. Because water molecules are polar, with regions of positive and negative electrical charge, water readily dissolves ionic compounds and polar covalent compounds. Such compounds are referred to as hydrophilic, or “water-loving.” As mentioned above, sugar dissolves well in water. This is because sugar molecules contain regions of hydrogen-oxygen polar bonds, making it hydrophilic. Nonpolar molecules, which do not readily dissolve in water, are called hydrophobic, or “water-fearing.”

Concentrations of Solutes

Various mixtures of solutes and water are described in chemistry. The concentration of a given solute is the number of particles of that solute in a given space (for example, oxygen makes up about 21 percent of atmospheric air). In the bloodstream of humans, glucose concentration is usually measured in milligrams (mg) per deciliter (dL), and in a healthy adult averages about 100 mg/dL. Another method of measuring the concentration of a solute is by its molarity, which is moles (mol) of the molecule per liter (L) of solution.
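The paragraph that follows works this calculation out for glucose by hand. In code form, the short Python sketch below does the same arithmetic; the atomic weights are standard periodic-table values, and the 90.078 g sample is a made-up amount chosen to give half a mole:

```python
# Standard atomic weights (g/mol) from the periodic table.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}

def molecular_weight(composition: dict) -> float:
    """Sum the atomic weights of every atom in the molecule."""
    return sum(ATOMIC_WEIGHT[element] * count for element, count in composition.items())

GLUCOSE = {"C": 6, "H": 12, "O": 6}          # C6H12O6
mw = molecular_weight(GLUCOSE)               # 180.156 g/mol
AVOGADRO = 6.02e23                           # particles per mole

grams, liters = 90.078, 1.0                  # a made-up half-mole sample
molarity = (grams / mw) / liters             # 0.5 M
print(f"{mw:.3f} g/mol; {molarity:.1f} M; {molarity * AVOGADRO:.2e} molecules per liter")
```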
The mole of an element is its atomic weight, while a mole of a compound is the sum of the atomic weights of its components, called the molecular weight. An often-used example is calculating a mole of glucose, with the chemical formula C6H12O6. Using the periodic table, the atomic weight of carbon (C) is 12.011 grams (g), and there are six carbons in glucose, for a total atomic weight of 72.066 g. Doing the same calculations for hydrogen (H) and oxygen (O), the molecular weight equals 180.156 g (the “gram molecular weight” of glucose). When water is added to make one liter of solution, you have one mole (1 M) of glucose. This is particularly useful in chemistry because of the relationship of moles to “Avogadro’s number.” A mole of any substance contains the same number of particles: 6.02 × 10^23. Many substances in the bloodstream and other tissues of the body are measured in thousandths of a mole, or millimoles (mM).

A colloid is a mixture that is somewhat like a heavy solution. The solute particles consist of tiny clumps of molecules large enough to make the liquid mixture opaque (because the particles are large enough to scatter light). Familiar examples of colloids are milk and cream. In the thyroid glands, the thyroid hormone is stored as a thick protein mixture also called a colloid. A suspension is a liquid mixture in which a heavier substance is suspended temporarily in a liquid, but over time, settles out. This separation of particles from a suspension is called sedimentation. An example of sedimentation occurs in the blood test that establishes sedimentation rate, or sed rate. The test measures how quickly red blood cells in a test tube settle out of the watery portion of blood (known as plasma) over a set period of time. Rapid sedimentation of blood cells does not normally happen in the healthy body, but aspects of certain diseases can cause blood cells to clump together, and these heavy clumps of blood cells settle to the bottom of the test tube more quickly than do normal blood cells.

The Role of Water in Chemical Reactions

Two types of chemical reactions involve the creation or the consumption of water: dehydration synthesis and hydrolysis. In dehydration synthesis, one reactant gives up an atom of hydrogen and another reactant gives up a hydroxyl group (OH) in the synthesis of a new product. In the formation of their covalent bond, a molecule of water is released as a byproduct (Figure 2.14). This is also sometimes referred to as a condensation reaction. In hydrolysis, a molecule of water disrupts a compound, breaking its bonds. The water is itself split into H and OH. One portion of the severed compound then bonds with the hydrogen atom, and the other portion bonds with the hydroxyl group. These reactions are reversible, and play an important role in the chemistry of organic compounds (which will be discussed shortly).

Salts

Recall that salts are formed when ions form ionic bonds. In these reactions, one atom gives up one or more electrons, and thus becomes positively charged, whereas the other accepts one or more electrons and becomes negatively charged. You can now define a salt as a substance that, when dissolved in water, dissociates into ions other than H+ or OH–. This fact is important in distinguishing salts from acids and bases, discussed next. A typical salt, NaCl, dissociates completely in water (Figure 2.15).
The positive and negative regions on the water molecule (the hydrogen and oxygen ends, respectively) attract the negative chloride and positive sodium ions, pulling them away from each other. Again, whereas nonpolar and polar covalently bonded compounds break apart into molecules in solution, salts dissociate into ions. These ions are electrolytes; they are capable of conducting an electrical current in solution. This property is critical to the function of ions in transmitting nerve impulses and prompting muscle contraction. Many other salts are important in the body. For example, bile salts produced by the liver help break apart dietary fats, and calcium phosphate salts form the mineral portion of teeth and bones.

Acids and Bases

Acids and bases, like salts, dissociate in water into electrolytes. Acids and bases can very much change the properties of the solutions in which they are dissolved.

Acids

An acid is a substance that releases hydrogen ions (H+) in solution (Figure 2.16a). Because an atom of hydrogen has just one proton and one electron, a positively charged hydrogen ion is simply a proton. This solitary proton is highly likely to participate in chemical reactions. Strong acids are compounds that release all of their H+ in solution; that is, they ionize completely. Hydrochloric acid (HCl), which is released from cells in the lining of the stomach, is a strong acid because it releases all of its H+ in the stomach’s watery environment. This strong acid aids in digestion and kills ingested microbes. Weak acids do not ionize completely; that is, some of their hydrogen ions remain bonded within a compound in solution. An example of a weak acid is vinegar, or acetic acid; it is called acetate after it gives up a proton.

Bases

A base is a substance that releases hydroxyl ions (OH–) in solution, or one that accepts H+ already present in solution (see Figure 2.16b). The hydroxyl ions (also known as hydroxide ions) or other basic substances combine with H+ present to form a water molecule, thereby removing H+ and reducing the solution’s acidity. Strong bases release most or all of their hydroxyl ions; weak bases release only some hydroxyl ions or absorb only a few H+. Food mixed with hydrochloric acid from the stomach would burn the small intestine, the next portion of the digestive tract after the stomach, if it were not for the release of bicarbonate (HCO3–), a weak base that attracts H+. Bicarbonate accepts some of the H+ protons, thereby reducing the acidity of the solution.

The Concept of pH

The relative acidity or alkalinity of a solution can be indicated by its pH. A solution’s pH is the negative, base-10 logarithm of the hydrogen ion (H+) concentration of the solution. As an example, a pH 4 solution has an H+ concentration that is ten times greater than that of a pH 5 solution. That is, a solution with a pH of 4 is ten times more acidic than a solution with a pH of 5. The concept of pH will begin to make more sense when you study the pH scale, like that shown in Figure 2.17. The scale consists of a series of increments ranging from 0 to 14. A solution with a pH of 7 is considered neutral—neither acidic nor basic. Pure water has a pH of 7. The lower the number below 7, the more acidic the solution, or the greater the concentration of H+. The concentration of hydrogen ions at each pH value is 10 times different than the next pH.
For instance, a pH value of 4 corresponds to a proton concentration of 10^-4 M, or 0.0001 M, while a pH value of 5 corresponds to a proton concentration of 10^-5 M, or 0.00001 M. The higher the number above 7, the more basic (alkaline) the solution, or the lower the concentration of H+. Human urine, for example, is ten times more acidic than pure water, and HCl is 10,000,000 times more acidic than water.

Buffers

The pH of human blood normally ranges from 7.35 to 7.45, although it is typically identified as pH 7.4. At this slightly basic pH, blood can reduce the acidity resulting from the carbon dioxide (CO2) constantly being released into the bloodstream by the trillions of cells in the body. Homeostatic mechanisms (along with exhaling CO2 while breathing) normally keep the pH of blood within this narrow range. This is critical, because fluctuations—either too acidic or too alkaline—can lead to life-threatening disorders. All cells of the body depend on homeostatic regulation of acid–base balance at a pH of approximately 7.4. The body therefore has several mechanisms for this regulation, involving breathing, the excretion of chemicals in urine, and the internal release of chemicals collectively called buffers into body fluids. A buffer is a solution of a weak acid and its conjugate base. A buffer can neutralize small amounts of acids or bases in body fluids. For example, if there is even a slight decrease below 7.35 in the pH of a bodily fluid, the buffer in the fluid—in this case, acting as a weak base—will bind the excess hydrogen ions. In contrast, if pH rises above 7.45, the buffer will act as a weak acid and contribute hydrogen ions.

Homeostatic Imbalances: Acids and Bases

Excessive acidity of the blood and other body fluids is known as acidosis. Common causes of acidosis are situations and disorders that reduce the effectiveness of breathing, especially the person’s ability to exhale fully, which causes a buildup of CO2 (and H+) in the bloodstream. Acidosis can also be caused by metabolic problems that reduce the level or function of buffers that act as bases, or that promote the production of acids. For instance, with severe diarrhea, too much bicarbonate can be lost from the body, allowing acids to build up in body fluids. In people with poorly managed diabetes (ineffective regulation of blood sugar), acids called ketones are produced as a form of body fuel. These can build up in the blood, causing a serious condition called diabetic ketoacidosis. Kidney failure, liver failure, heart failure, cancer, and other disorders also can prompt metabolic acidosis.

In contrast, alkalosis is a condition in which the blood and other body fluids are too alkaline (basic). As with acidosis, respiratory disorders are a major cause; however, in respiratory alkalosis, carbon dioxide levels fall too low. Lung disease, aspirin overdose, shock, and ordinary anxiety can cause respiratory alkalosis, which reduces the normal concentration of H+. Metabolic alkalosis often results from prolonged, severe vomiting, which causes a loss of hydrogen and chloride ions (as components of HCl). Medications also can prompt alkalosis. These include diuretics that cause the body to lose potassium ions, as well as antacids when taken in excessive amounts, for instance by someone with persistent heartburn or an ulcer.
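Both the logarithmic pH scale and buffer behavior can be verified numerically. The Python sketch below first computes pH from a hydrogen-ion concentration, then estimates blood pH using the Henderson-Hasselbalch relation for the bicarbonate buffer described above. The pKa of 6.1, the bicarbonate concentration of 24 mmol/L, and the dissolved CO2 term (0.03 × a pCO2 of 40 mmHg) are standard textbook values assumed for illustration; they are not given in this chapter:

```python
import math

def pH_from_H(h_molar: float) -> float:
    """pH is the negative base-10 logarithm of the H+ concentration."""
    return -math.log10(h_molar)

print(pH_from_H(1e-4))  # 4.0
print(pH_from_H(1e-5))  # 5.0: a tenfold drop in H+ raises pH by one unit

def buffer_pH(pKa: float, base: float, acid: float) -> float:
    """Henderson-Hasselbalch: pH = pKa + log10([conjugate base] / [weak acid])."""
    return pKa + math.log10(base / acid)

# Assumed textbook values for the bicarbonate buffer of blood:
bicarbonate = 24.0         # mmol/L of HCO3-, the weak-base side
dissolved_co2 = 0.03 * 40  # mmol/L, from a typical pCO2 of 40 mmHg (the acid side)
print(round(buffer_pH(6.1, bicarbonate, dissolved_co2), 2))  # ~7.4, mid-range of 7.35-7.45
```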
2.5 Organic Compounds Essential to Human Functioning

Learning Objectives

By the end of this section, you will be able to:
- Identify four types of organic molecules essential to human functioning
- Explain the chemistry behind carbon’s affinity for covalently bonding in organic compounds
- Provide examples of three types of carbohydrates, and identify the primary functions of carbohydrates in the body
- Discuss four types of lipids important in human functioning
- Describe the structure of proteins, and discuss their importance to human functioning
- Identify the building blocks of nucleic acids, and the roles of DNA, RNA, and ATP in human functioning

Organic compounds typically consist of groups of carbon atoms covalently bonded to hydrogen, usually oxygen, and often other elements as well. They are found throughout the world, in soils and seas, commercial products, and every cell of the human body. The four types most important to human structure and function are carbohydrates, lipids, proteins, and nucleic acids. Before exploring these compounds, you need to first understand the chemistry of carbon.

The Chemistry of Carbon

What makes organic compounds ubiquitous is the chemistry of their carbon core. Recall that carbon atoms have four electrons in their valence shell, and that the octet rule dictates that atoms tend to react in such a way as to complete their valence shell with eight electrons. Carbon atoms do not complete their valence shells by donating or accepting four electrons. Instead, they readily share electrons via covalent bonds. Commonly, carbon atoms share with other carbon atoms, often forming a long carbon chain referred to as a carbon skeleton. When they do share, however, they do not share all their electrons exclusively with each other. Rather, carbon atoms tend to share electrons with a variety of other elements, one of which is always hydrogen. Carbon and hydrogen groupings are called hydrocarbons. If you study the figures of organic compounds in the remainder of this chapter, you will see several with chains of hydrocarbons in one region of the compound.

Many combinations are possible to fill carbon’s four “vacancies.” Carbon may share electrons with oxygen or nitrogen or other atoms in a particular region of an organic compound. Moreover, the atoms to which carbon atoms bond may also be part of a functional group. A functional group is a group of atoms linked by strong covalent bonds and tending to function in chemical reactions as a single unit. You can think of functional groups as tightly knit “cliques” whose members are unlikely to be parted. Five functional groups are important in human physiology; these are the hydroxyl, carboxyl, amino, methyl, and phosphate groups (Table 2.1).

Table 2.1 Functional Groups Important in Human Physiology
- Hydroxyl (—OH): Hydroxyl groups are polar. They are components of all four types of organic compounds discussed in this chapter. They are involved in dehydration synthesis and hydrolysis reactions.
- Carboxyl (—COOH): Carboxyl groups are found within fatty acids, amino acids, and many other acids.
- Amino (—NH2): Amino groups are found within amino acids, the building blocks of proteins.
- Methyl (—CH3): Methyl groups are found within amino acids.
- Phosphate (—PO4^2–): Phosphate groups are found within phospholipids and nucleotides.

Carbon’s affinity for covalent bonding means that many distinct and relatively stable organic molecules nevertheless readily form larger, more complex molecules.
Any large molecule is referred to as a macromolecule (macro- = “large”), and the organic compounds in this section all fit this description. However, some macromolecules are made up of several “copies” of single units called monomers (mono- = “one”; -mer = “part”). Like beads in a long necklace, these monomers link by covalent bonds to form long polymers (poly- = “many”). There are many examples of monomers and polymers among the organic compounds. Monomers form polymers by engaging in dehydration synthesis (see Figure 2.14). As was noted earlier, this reaction results in the release of a molecule of water. Each monomer contributes: one gives up a hydrogen atom and the other gives up a hydroxyl group. Polymers are split into monomers by hydrolysis (-lysis = “rupture”). The bonds between their monomers are broken via the donation of a molecule of water, which contributes a hydrogen atom to one monomer and a hydroxyl group to the other.

Carbohydrates

The term carbohydrate means “hydrated carbon.” Recall that the root hydro- indicates water. A carbohydrate is a molecule composed of carbon, hydrogen, and oxygen; in most carbohydrates, hydrogen and oxygen are found in the same two-to-one relative proportions they have in water. In fact, the chemical formula for a “generic” molecule of carbohydrate is (CH2O)n. Carbohydrates are referred to as saccharides, a word meaning “sugars.” Three forms are important in the body. Monosaccharides are the monomers of carbohydrates. Disaccharides (di- = “two”) are made up of two monomers. Polysaccharides are the polymers, and can consist of hundreds to thousands of monomers.

Monosaccharides

A monosaccharide is a monomer of carbohydrates. Five monosaccharides are important in the body. Three of these are the hexose sugars, so called because they each contain six atoms of carbon. These are glucose, fructose, and galactose, shown in Figure 2.18a. The remaining monosaccharides are the two pentose sugars, each of which contains five atoms of carbon. They are ribose and deoxyribose, shown in Figure 2.18b.

Disaccharides

A disaccharide is a pair of monosaccharides. Disaccharides are formed via dehydration synthesis, and the bond linking them is referred to as a glycosidic bond (glyco- = “sugar”). Three disaccharides (shown in Figure 2.19) are important to humans. These are sucrose, commonly referred to as table sugar; lactose, or milk sugar; and maltose, or malt sugar. As you can tell from their common names, you consume these in your diet; however, your body cannot use them directly. Instead, in the digestive tract, they are split into their component monosaccharides via hydrolysis.

Interactive Link

Watch this video to observe the formation of a disaccharide. What happens when water encounters a glycosidic bond?

Polysaccharides

Polysaccharides can contain a few to a thousand or more monosaccharides. Three are important to the body (Figure 2.20):
- Starches are polymers of glucose. They occur in long chains called amylose or branched chains called amylopectin, both of which are stored in plant-based foods and are relatively easy to digest.
- Glycogen is also a polymer of glucose, but it is stored in the tissues of animals, especially in the muscles and liver. It is not considered a dietary carbohydrate because very little glycogen remains in animal tissues after slaughter; however, the human body stores excess glucose as glycogen, again, in the muscles and liver.
- Cellulose, a polysaccharide that is the primary component of the cell wall of green plants, is the component of plant food referred to as “fiber.” In humans, cellulose/fiber is not digestible; however, dietary fiber has many health benefits. It helps you feel full so you eat less, it promotes a healthy digestive tract, and a diet high in fiber is thought to reduce the risk of heart disease and possibly some forms of cancer.

Functions of Carbohydrates

The body obtains carbohydrates from plant-based foods. Grains, fruits, and legumes and other vegetables provide most of the carbohydrate in the human diet, although lactose is found in dairy products. Although most body cells can break down other organic compounds for fuel, all body cells can use glucose. Moreover, nerve cells (neurons) in the brain, spinal cord, and through the peripheral nervous system, as well as red blood cells, can use only glucose for fuel. In the breakdown of glucose for energy, molecules of adenosine triphosphate, better known as ATP, are produced. Adenosine triphosphate (ATP) is composed of a ribose sugar, an adenine base, and three phosphate groups. ATP releases free energy when its phosphate bonds are broken, and thus supplies ready energy to the cell. More ATP is produced in the presence of oxygen (O2) than in pathways that do not use oxygen. The overall reaction for the conversion of the energy in glucose to energy stored in ATP can be written C6H12O6 + 6O2 → 6CO2 + 6H2O + ATP.

In addition to being a critical fuel source, carbohydrates are present in very small amounts in cells’ structure. For instance, some carbohydrate molecules bind with proteins to produce glycoproteins, and others combine with lipids to produce glycolipids, both of which are found in the membrane that encloses the contents of body cells.

Lipids

A lipid is one of a highly diverse group of compounds made up mostly of hydrocarbons. The few oxygen atoms they contain are often at the periphery of the molecule. Their nonpolar hydrocarbons make all lipids hydrophobic. In water, lipids do not form a true solution, but they may form an emulsion, which is the term for a mixture of solutions that do not mix well.

Triglycerides

A triglyceride is one of the most common dietary lipid groups, and the type found most abundantly in body tissues. This compound, which is commonly referred to as a fat, is formed from the synthesis of two types of molecules (Figure 2.21):
- A glycerol backbone, which at the core of triglycerides consists of three carbon atoms.
- Three fatty acids, long chains of hydrocarbons with a carboxyl group and a methyl group at opposite ends, which extend from each of the carbons of the glycerol.

Triglycerides form via dehydration synthesis. Glycerol gives up hydrogen atoms from its hydroxyl groups at each bond, and the carboxyl group on each fatty acid chain gives up a hydroxyl group. A total of three water molecules are thereby released. Fatty acid chains that have no double carbon bonds anywhere along their length and therefore contain the maximum number of hydrogen atoms are called saturated fatty acids. These straight, rigid chains pack tightly together and are solid or semi-solid at room temperature (Figure 2.22a). Butter and lard are examples, as is the fat found on a steak or in your own body. In contrast, fatty acids with one double carbon bond are kinked at that bond (Figure 2.22b).
These monounsaturated fatty acids are therefore unable to pack together tightly, and are liquid at room temperature. Polyunsaturated fatty acids contain two or more double carbon bonds, and are also liquid at room temperature. Plant oils such as olive oil typically contain both mono- and polyunsaturated fatty acids. Whereas a diet high in saturated fatty acids increases the risk of heart disease, a diet high in unsaturated fatty acids is thought to reduce the risk. This is especially true for the omega-3 unsaturated fatty acids found in cold-water fish such as salmon. These fatty acids have their first double carbon bond at the third hydrocarbon from the methyl group (referred to as the omega end of the molecule). Finally, trans fatty acids found in some processed foods, including some stick and tub margarines, are thought to be even more harmful to the heart and blood vessels than saturated fatty acids. Trans fats are created from unsaturated fatty acids (such as corn oil) when chemically treated to produce partially hydrogenated fats.

As a group, triglycerides are a major fuel source for the body. When you are resting or asleep, a majority of the energy used to keep you alive is derived from triglycerides stored in your fat (adipose) tissues. Triglycerides also fuel long, slow physical activity such as gardening or hiking, and contribute a modest percentage of energy for vigorous physical activity. Dietary fat also assists the absorption and transport of the nonpolar fat-soluble vitamins A, D, E, and K. Additionally, stored body fat protects and cushions the body’s bones and internal organs, and acts as insulation to retain body heat. Fatty acids are also components of glycolipids, which are sugar-fat compounds found in the cell membrane. Lipoproteins are compounds in which the hydrophobic triglycerides are packaged in protein envelopes for transport in body fluids.

Phospholipids

As its name suggests, a phospholipid is formed by a bond between the glycerol component of a lipid and a phosphorus-containing group. In fact, phospholipids are similar in structure to triglycerides. However, instead of having three fatty acids, a phospholipid is generated from a diglyceride, a glycerol with just two fatty acid chains (Figure 2.23). The third binding site on the glycerol is taken up by the phosphate group, which in turn is attached to a polar “head” region of the molecule. Recall that triglycerides are nonpolar and hydrophobic. This still holds for the fatty acid portion of a phospholipid compound. However, the head of a phospholipid contains charges on the phosphate groups, as well as on the nitrogen atom. These charges make the phospholipid head hydrophilic. Therefore, phospholipids are said to have hydrophobic tails, containing the neutral fatty acids, and hydrophilic heads, containing the charged phosphate groups and nitrogen atom.

Steroids

A steroid compound (referred to as a sterol) has as its foundation a set of four hydrocarbon rings bonded to a variety of other atoms and molecules (see Figure 2.23b). Although both plants and animals synthesize sterols, the type that makes the most important contribution to human structure and function is cholesterol, which is synthesized by the liver in humans and animals and is also present in most animal-based foods. Like other lipids, cholesterol’s hydrocarbons make it hydrophobic; however, it has a polar hydroxyl head that is hydrophilic. Cholesterol is an important component of bile acids, compounds that help emulsify dietary fats.
In fact, the word root chole- refers to bile. Cholesterol is also a building block of many hormones, signaling molecules that the body releases to regulate processes at distant sites. Finally, like phospholipids, cholesterol molecules are found in the cell membrane, where their hydrophobic and hydrophilic regions help regulate the flow of substances into and out of the cell.

Prostaglandins

Like a hormone, a prostaglandin is one of a group of signaling molecules, but prostaglandins are derived from unsaturated fatty acids (see Figure 2.23c). One reason that the omega-3 fatty acids found in fish are beneficial is that they stimulate the production of certain prostaglandins that help regulate aspects of blood pressure and inflammation, and thereby reduce the risk for heart disease. Prostaglandins also sensitize nerves to pain. One class of pain-relieving medications called nonsteroidal anti-inflammatory drugs (NSAIDs) works by reducing the effects of prostaglandins.

Proteins

You might associate proteins with muscle tissue, but in fact, proteins are critical components of all tissues and organs. A protein is an organic molecule composed of amino acids linked by peptide bonds. Proteins include the keratin in the epidermis of skin that protects underlying tissues and the collagen found in the dermis of skin, in bones, and in the meninges that cover the brain and spinal cord. Proteins are also components of many of the body’s functional chemicals, including digestive enzymes in the digestive tract, antibodies, the neurotransmitters that neurons use to communicate with other cells, and the peptide-based hormones that regulate certain body functions (for instance, growth hormone). While carbohydrates and lipids are composed of hydrocarbons and oxygen, all proteins also contain nitrogen (N), and many contain sulfur (S), in addition to carbon, hydrogen, and oxygen.

Microstructure of Proteins

Proteins are polymers made up of nitrogen-containing monomers called amino acids. An amino acid is a molecule composed of an amino group and a carboxyl group, together with a variable side chain. Just 20 different amino acids contribute to nearly all of the thousands of different proteins important in human structure and function. Body proteins contain a unique combination of a few dozen to a few hundred of these 20 amino acid monomers. All 20 of these amino acids share a similar structure (Figure 2.24). All consist of a central carbon atom to which the following are bonded:
- a hydrogen atom
- an alkaline (basic) amino group, NH2 (see Table 2.1)
- an acidic carboxyl group, COOH (see Table 2.1)
- a variable group

Notice that all amino acids contain both an acid (the carboxyl group) and a base (the amino group) (amine = “nitrogen-containing”). For this reason, they make excellent buffers, helping the body regulate acid–base balance. What distinguishes the 20 amino acids from one another is their variable group, which is referred to as a side chain or an R-group. This group can vary in size and can be polar or nonpolar, giving each amino acid its unique characteristics. For example, the side chains of two amino acids—cysteine and methionine—contain sulfur. Sulfur does not readily participate in hydrogen bonds, whereas all other amino acids do. This variation influences the way that proteins containing cysteine and methionine are assembled.

Amino acids join via dehydration synthesis to form protein polymers (Figure 2.25). The unique bond holding amino acids together is called a peptide bond.
A peptide bond is a covalent bond between two amino acids that forms by dehydration synthesis. A peptide, in fact, is a very short chain of amino acids. Strands containing fewer than about 100 amino acids are generally referred to as polypeptides rather than proteins. The body is able to synthesize most of the amino acids from components of other molecules; however, nine cannot be synthesized and have to be consumed in the diet. These are known as the essential amino acids. Free amino acids available for protein construction are said to reside in the amino acid pool within cells. Structures within cells use these amino acids when assembling proteins. If a particular essential amino acid is not available in sufficient quantities in the amino acid pool, however, synthesis of proteins containing it can slow or even cease.

Shape of Proteins

Just as a fork cannot be used to eat soup and a spoon cannot be used to spear meat, a protein’s shape is essential to its function. A protein’s shape is determined, most fundamentally, by the sequence of amino acids of which it is made (Figure 2.26a). The sequence is called the primary structure of the protein. Although some polypeptides exist as linear chains, most are twisted or folded into more complex secondary structures that form when bonding occurs between amino acids with different properties at different regions of the polypeptide. The most common secondary structure is a spiral called an alpha-helix. If you were to take a length of string and simply twist it into a spiral, it would not hold the shape. Similarly, a strand of amino acids could not maintain a stable spiral shape without the help of hydrogen bonds, which create bridges between different regions of the same strand (see Figure 2.26b). Less commonly, a polypeptide chain can form a beta-pleated sheet, in which hydrogen bonds form bridges between different regions of a single polypeptide that has folded back upon itself, or between two or more adjacent polypeptide chains.

The secondary structure of proteins further folds into a compact three-dimensional shape, referred to as the protein’s tertiary structure (see Figure 2.26c). In this configuration, amino acids that had been very distant in the primary chain can be brought quite close via hydrogen bonds or, in proteins containing cysteine, via disulfide bonds. A disulfide bond is a covalent bond between sulfur atoms in a polypeptide. Often, two or more separate polypeptides bond to form an even larger protein with a quaternary structure (see Figure 2.26d). The polypeptide subunits forming a quaternary structure can be identical or different. For instance, hemoglobin, the protein found in red blood cells, is composed of four tertiary polypeptides, two of which are called alpha chains and two of which are called beta chains.

When they are exposed to extreme heat, acids, bases, and certain other substances, proteins will denature. Denaturation is a change in the structure of a molecule through physical or chemical means. Denatured proteins lose their functional shape and are no longer able to carry out their jobs. An everyday example of protein denaturation is the curdling of milk when acidic lemon juice is added.

The contribution of the shape of a protein to its function can hardly be exaggerated. For example, the long, slender shape of protein strands that make up muscle tissue is essential to their ability to contract (shorten) and relax (lengthen).
As another example, bones contain long threads of a protein called collagen that acts as scaffolding upon which bone minerals are deposited. These elongated proteins, called fibrous proteins, are strong and durable and typically hydrophobic. In contrast, globular proteins are globes or spheres that tend to be highly reactive and are hydrophilic. The hemoglobin proteins packed into red blood cells are an example (see Figure 2.26d); however, globular proteins are abundant throughout the body, playing critical roles in most body functions. Enzymes, introduced earlier as protein catalysts, are examples of this. The next section takes a closer look at the action of enzymes.

Proteins Function as Enzymes

If you were trying to type a paper, and every time you hit a key on your laptop there was a delay of six or seven minutes before you got a response, you would probably get a new laptop. In a similar way, without enzymes to catalyze chemical reactions, the human body would be nonfunctional. It functions only because enzymes function. Enzymatic reactions—chemical reactions catalyzed by enzymes—begin when substrates bind to the enzyme. A substrate is a reactant in an enzymatic reaction. This occurs on regions of the enzyme known as active sites (Figure 2.27). Any given enzyme catalyzes just one type of chemical reaction. This characteristic, called specificity, is due to the fact that a substrate with a particular shape and electrical charge can bind only to an active site corresponding to that substrate.

Due to this jigsaw puzzle-like match between an enzyme and its substrates, enzymes are known for their specificity. In fact, as an enzyme binds to its substrate(s), the enzyme structure changes slightly to find the best fit between the transition state (a structural intermediate between the substrate and product) and the active site, just as a rubber glove molds to a hand inserted into it. This active-site modification in the presence of substrate, along with the simultaneous formation of the transition state, is called induced fit. Overall, there is a specifically matched enzyme for each substrate and, thus, for each chemical reaction; however, there is some flexibility as well. Some enzymes have the ability to act on several different structurally related substrates.

Binding of a substrate produces an enzyme–substrate complex. It is likely that enzymes speed up chemical reactions in part because the enzyme–substrate complex undergoes a set of temporary and reversible changes that cause the substrates to be oriented toward each other in an optimal position to facilitate their interaction. This promotes increased reaction speed. The enzyme then releases the product(s), and resumes its original shape. The enzyme is then free to engage in the process again, and will do so as long as substrate remains.

Other Functions of Proteins

Advertisements for protein bars, powders, and shakes all say that protein is important in building, repairing, and maintaining muscle tissue, but the truth is that proteins contribute to all body tissues, from the skin to the brain cells. Also, certain proteins act as hormones, chemical messengers that help regulate body functions. For example, growth hormone is important for skeletal growth, among other roles. As was noted earlier, the basic and acidic components enable proteins to function as buffers in maintaining acid–base balance, but they also help regulate fluid–electrolyte balance.
Other Functions of Proteins Advertisements for protein bars, powders, and shakes all say that protein is important in building, repairing, and maintaining muscle tissue, but the truth is that proteins contribute to all body tissues, from the skin to the brain cells. Also, certain proteins act as hormones, chemical messengers that help regulate body functions. For example, growth hormone is important for skeletal growth, among other roles. As was noted earlier, the basic and acidic components enable proteins to function as buffers in maintaining acid–base balance, but they also help regulate fluid–electrolyte balance. Proteins attract fluid, and a healthy concentration of proteins in the blood, the cells, and the spaces between cells helps ensure a balance of fluids in these various “compartments.” Moreover, proteins in the cell membrane help to transport electrolytes in and out of the cell, keeping these ions in a healthy balance. Like lipids, proteins can bind with carbohydrates. They can thereby produce glycoproteins or proteoglycans, both of which have many functions in the body. The body can use proteins for energy when carbohydrate and fat intake is inadequate, and stores of glycogen and adipose tissue become depleted. However, since there is no storage site for protein except functional tissues, using protein for energy causes tissue breakdown and results in body wasting. Nucleotides The fourth type of organic compound important to human structure and function is the nucleotides ( Figure 2.28 ). A nucleotide is one of a class of organic compounds composed of three subunits: one or more phosphate groups; a pentose sugar, either deoxyribose or ribose; and a nitrogen-containing base (adenine, cytosine, guanine, thymine, or uracil). Nucleotides can be assembled into nucleic acids (DNA or RNA) or the energy compound adenosine triphosphate. Nucleic Acids The nucleic acids differ in their type of pentose sugar. Deoxyribonucleic acid (DNA) is a nucleotide that stores genetic information. DNA contains deoxyribose (so-called because it has one less atom of oxygen than ribose) plus one phosphate group and one nitrogen-containing base. The “choices” of base for DNA are adenine, cytosine, guanine, and thymine. Ribonucleic acid (RNA) is a ribose-containing nucleotide that helps manifest the genetic code as protein. RNA contains ribose, one phosphate group, and one nitrogen-containing base, but the “choices” of base for RNA are adenine, cytosine, guanine, and uracil. The nitrogen-containing bases adenine and guanine are classified as purines. A purine is a nitrogen-containing molecule with a double ring structure, which accommodates several nitrogen atoms. The bases cytosine, thymine (found in DNA only), and uracil (found in RNA only) are pyrimidines. A pyrimidine is a nitrogen-containing base with a single ring structure. Bonds formed by dehydration synthesis between the pentose sugar of one nucleic acid monomer and the phosphate group of another form a “backbone,” from which the components’ nitrogen-containing bases protrude. In DNA, two such backbones attach at their protruding bases via hydrogen bonds. These twist to form a shape known as a double helix ( Figure 2.29 ). The sequence of nitrogen-containing bases within a strand of DNA forms the genes that act as a molecular code instructing cells in the assembly of amino acids into proteins. Humans have almost 22,000 genes in their DNA, locked up in the 46 chromosomes inside the nucleus of each cell (except red blood cells, which lose their nuclei during development). These genes carry the genetic code to build one’s body, and are unique for each individual except identical twins. In contrast, RNA consists of a single strand of sugar-phosphate backbone studded with bases. Messenger RNA (mRNA) is created during protein synthesis to carry the genetic instructions from the DNA to the cell’s protein manufacturing plants in the cytoplasm, the ribosomes.
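The base rules above are concrete enough to state as code. Here is a minimal sketch, assuming a hypothetical template strand, of how an mRNA copy complements DNA, with uracil standing in where thymine would pair in DNA:

```python
# Template-strand transcription: each DNA base pairs with its RNA complement.
# Note that RNA uses uracil (U) where DNA would use thymine (T).
DNA_TO_RNA = {"A": "U", "T": "A", "C": "G", "G": "C"}

PURINES = {"A", "G"}           # double-ring bases
PYRIMIDINES = {"C", "T", "U"}  # single-ring bases (T in DNA only, U in RNA only)

def transcribe(template_strand: str) -> str:
    """Build the mRNA complementary to a DNA template strand."""
    return "".join(DNA_TO_RNA[base] for base in template_strand)

print(transcribe("TACGGAT"))  # AUGCCUA
```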
Adenosine Triphosphate The nucleotide adenosine triphosphate (ATP) is composed of a ribose sugar, an adenine base, and three phosphate groups ( Figure 2.30 ). ATP is classified as a high-energy compound because the two covalent bonds linking its three phosphates store a significant amount of potential energy. In the body, the energy released from these high-energy bonds helps fuel the body’s activities, from muscle contraction to the transport of substances in and out of cells to anabolic chemical reactions. When a phosphate group is cleaved from ATP, the products are adenosine diphosphate (ADP) and inorganic phosphate (Pᵢ). This hydrolysis reaction can be written: ATP + H₂O → ADP + Pᵢ + energy. Removal of a second phosphate leaves adenosine monophosphate (AMP) and two phosphate groups. Again, these reactions also liberate the energy that had been stored in the phosphate-phosphate bonds. They are reversible, too, as when ADP undergoes phosphorylation. Phosphorylation is the addition of a phosphate group to an organic compound, in this case, resulting in ATP. In such cases, the same level of energy that had been released during hydrolysis must be reinvested to power dehydration synthesis. Cells can also transfer a phosphate group from ATP to another organic compound. For example, when glucose first enters a cell, a phosphate group is transferred from ATP, forming glucose phosphate (C₆H₁₂O₆—P) and ADP. Once glucose is phosphorylated in this way, it can be stored as glycogen or metabolized for immediate energy.
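Collected in one place, the reactions described in this passage can be written as a short series of equations (the last line shows the phosphate transfer schematically, with P standing for the transferred phosphate group):

```latex
\begin{align*}
\text{ATP} + \mathrm{H_2O} &\rightarrow \text{ADP} + \mathrm{P_i} + \text{energy} && \text{(hydrolysis)}\\
\text{ADP} + \mathrm{H_2O} &\rightarrow \text{AMP} + \mathrm{P_i} + \text{energy} && \text{(second hydrolysis)}\\
\text{ADP} + \mathrm{P_i} + \text{energy} &\rightarrow \text{ATP} + \mathrm{H_2O} && \text{(phosphorylation)}\\
\text{glucose} + \text{ATP} &\rightarrow \text{glucose-P} + \text{ADP} && \text{(phosphate transfer)}
\end{align*}
```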
biology
Chapter Outline 20.1 Organizing Life on Earth 20.2 Determining Evolutionary Relationships 20.3 Perspectives on the Phylogenetic Tree Introduction This bee and Echinacea flower ( Figure 20.1 ) could not look more different, yet they are related, as are all living organisms on Earth. By following pathways of similarities and changes—both visible and genetic—scientists seek to map the evolutionary past of how life developed from single-celled organisms to the tremendous collection of creatures that have germinated, crawled, floated, swum, flown, and walked on this planet.
[ { "answer": { "ans_choice": 2, "ans_text": "evolutionary history" }, "bloom": null, "hl_context": "<hl> In scientific terms , the evolutionary history and relationship of an organism or group of organisms is called its phylogeny . <hl> A phylogeny describes the relationships of an organism , such as from which organisms it is thought to have evolved , to which species it is most closely related , and so forth . Phylogenetic relationships provide information on shared ancestry but not necessarily on how organisms are similar or different . Phylogenetic Trees", "hl_sentences": "In scientific terms , the evolutionary history and relationship of an organism or group of organisms is called its phylogeny .", "question": { "cloze_format": "___ is used to determine phylogeny.", "normal_format": "What is used to determine phylogeny?", "question_choices": [ "mutations", "DNA", "evolutionary history", "organisms on earth" ], "question_id": "fs-idp28640208", "question_text": "What is used to determine phylogeny?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "organize and classify organisms" }, "bloom": null, "hl_context": "<hl> Many disciplines within the study of biology contribute to understanding how past and present life evolved over time ; these disciplines together contribute to building , updating , and maintaining the “ tree of life . ” Information is used to organize and classify organisms based on evolutionary relationships in a scientific field called systematics . <hl> Data may be collected from fossils , from studying the structure of body parts or molecules used by an organism , and by DNA analysis . By combining data from many sources , scientists can put together the phylogeny of an organism ; since phylogenetic trees are hypotheses , they will continue to change as new types of life are discovered and new information is learned .", "hl_sentences": "Many disciplines within the study of biology contribute to understanding how past and present life evolved over time ; these disciplines together contribute to building , updating , and maintaining the “ tree of life . ” Information is used to organize and classify organisms based on evolutionary relationships in a scientific field called systematics .", "question": { "cloze_format": "___ is what scientists in the field of systematics accomplish.", "normal_format": "What do scientists in the field of systematics accomplish?", "question_choices": [ "discover new fossil sites", "organize and classify organisms", "name new species", "communicate among field biologists" ], "question_id": "fs-idp40672624", "question_text": "What do scientists in the field of systematics accomplish?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "Subspecies are the most specific category of classification." }, "bloom": null, "hl_context": "<hl> The taxonomic classification system ( also called the Linnaean system after its inventor , Carl Linnaeus , a Swedish botanist , zoologist , and physician ) uses a hierarchical model . <hl> <hl> Moving from the point of origin , the groups become more specific , until one branch ends as a single species . <hl> For example , after the common beginning of all life , scientists divide organisms into three large categories called a domain : Bacteria , Archaea , and Eukarya . Within each domain is a second category called a kingdom . 
After kingdoms , the subsequent categories of increasing specificity are : phylum , class , order , family , genus , and species ( Figure 20.5 ) .", "hl_sentences": "The taxonomic classification system ( also called the Linnaean system after its inventor , Carl Linnaeus , a Swedish botanist , zoologist , and physician ) uses a hierarchical model . Moving from the point of origin , the groups become more specific , until one branch ends as a single species .", "question": { "cloze_format": "The statement about the taxonomic classification system that is correct is that ___.", "normal_format": "Which statement about the taxonomic classification system is correct?", "question_choices": [ "There are more domains than kingdoms.", "Kingdoms are the top category of classification.", "Classes are divisions of orders.", "Subspecies are the most specific category of classification." ], "question_id": "fs-idp177641920", "question_text": "Which statement about the taxonomic classification system is correct?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "sister taxa" }, "bloom": null, "hl_context": "In a rooted tree , the branching indicates evolutionary relationships ( Figure 20.3 ) . <hl> The point where a split occurs , called a branch point , represents where a single lineage evolved into a distinct new one . <hl> <hl> A lineage that evolved early from the root and remains unbranched is called basal taxon . <hl> <hl> When two lineages stem from the same branch point , they are called sister taxa . <hl> A branch with more than two lineages is called a polytomy and serves to illustrate where scientists have not definitively determined all of the relationships . It is important to note that although sister taxa and polytomy do share an ancestor , it does not mean that the groups of organisms split or evolved from each other . Organisms in two taxa may have split apart at a specific branch point , but neither taxa gave rise to the other .", "hl_sentences": "The point where a split occurs , called a branch point , represents where a single lineage evolved into a distinct new one . A lineage that evolved early from the root and remains unbranched is called basal taxon . When two lineages stem from the same branch point , they are called sister taxa .", "question": { "cloze_format": "On a phylogenetic tree, the term that refers to lineages that diverged from the same place is ___.", "normal_format": "On a phylogenetic tree, which term refers to lineages that diverged from the same place?", "question_choices": [ "sister taxa", "basal taxa", "rooted taxa", "dichotomous taxa" ], "question_id": "fs-idp32779440", "question_text": "On a phylogenetic tree, which term refers to lineages that diverged from the same place?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "They are derived by similar environmental constraints." }, "bloom": null, "hl_context": "Some organisms may be very closely related , even though a minor genetic change caused a major morphological difference to make them look quite different . Similarly , unrelated organisms may be distantly related , but appear very much alike . This usually happens because both organisms were in common adaptations that evolved within similar environmental conditions . <hl> When similar characteristics occur because of environmental constraints and not due to a close evolutionary relationship , it is called an analogy or homoplasy . 
<hl> For example , insects use wings to fly like bats and birds , but the wing structure and embryonic origin is completely different . These are called analogous structures ( Figure 20.8 ) .", "hl_sentences": "When similar characteristics occur because of environmental constraints and not due to a close evolutionary relationship , it is called an analogy or homoplasy .", "question": { "cloze_format": "The statement about analogies that is correct is that ___.", "normal_format": "Which statement about analogies is correct?", "question_choices": [ "They occur only as errors.", "They are synonymous with homologous traits.", "They are derived by similar environmental constraints.", "They are a form of mutation." ], "question_id": "fs-idp83844208", "question_text": "Which statement about analogies is correct?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "homologous traits" }, "bloom": null, "hl_context": "How do scientists construct phylogenetic trees ? <hl> After the homologous and analogous traits are sorted , scientists often organize the homologous traits using a system called cladistics . <hl> This system sorts organisms into clades : groups of organisms that descended from a single ancestor . For example , in Figure 20.10 , all of the organisms in the orange region evolved from a single ancestor that had amniotic eggs . Consequently , all of these organisms also have amniotic eggs and make a single clade , also called a monophyletic group . Clades must include all of the descendants from a branch point .", "hl_sentences": "After the homologous and analogous traits are sorted , scientists often organize the homologous traits using a system called cladistics .", "question": { "cloze_format": "To apply cladistics scientists use ___.", "normal_format": "What do scientists use to apply cladistics?", "question_choices": [ "homologous traits", "homoplasies", "analogous traits", "monophyletic groups" ], "question_id": "fs-idm4710848", "question_text": "What do scientists use to apply cladistics?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "They evolved from a shared ancestor." }, "bloom": null, "hl_context": "<hl> If a characteristic is found in the ancestor of a group , it is considered a shared ancestral character because all of the organisms in the taxon or clade have that trait . <hl> The vertebrate in Figure 20.10 is a shared ancestral character . Now consider the amniotic egg characteristic in the same figure . Only some of the organisms in Figure 20.10 have this trait , and to those that do , it is called a shared derived character because this trait derived at some point but does not include all of the ancestors in the tree . The tricky aspect to shared ancestral and shared derived characters is the fact that these terms are relative . The same trait can be considered one or the other depending on the particular diagram being used . Returning to Figure 20.10 , note that the amniotic egg is a shared ancestral character for the Amniota clade , while having hair is a shared derived character for some organisms in this group . These terms help scientists distinguish between clades in the building of phylogenetic trees . 
Choosing the Right Relationships", "hl_sentences": "If a characteristic is found in the ancestor of a group , it is considered a shared ancestral character because all of the organisms in the taxon or clade have that trait .", "question": { "cloze_format": "___ is a true statement about organisms that are a part of the same clade.", "normal_format": "What is true about organisms that are a part of the same clade?", "question_choices": [ "They all share the same basic characteristics.", "They evolved from a shared ancestor.", "They usually fall into the same classification taxa.", "They have identical phylogenies." ], "question_id": "fs-idp127733920", "question_text": "What is true about organisms that are a part of the same clade?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "to decipher accurate phylogenies" }, "bloom": null, "hl_context": "<hl> To aid in the tremendous task of describing phylogenies accurately , scientists often use a concept called maximum parsimony , which means that events occurred in the simplest , most obvious way . <hl> For example , if a group of people entered a forest preserve to go hiking , based on the principle of maximum parsimony , one could predict that most of the people would hike on established trails rather than forge new ones .", "hl_sentences": "To aid in the tremendous task of describing phylogenies accurately , scientists often use a concept called maximum parsimony , which means that events occurred in the simplest , most obvious way .", "question": { "cloze_format": "Scientists apply the concept of maximum parsimony ___ .", "normal_format": "Why do scientists apply the concept of maximum parsimony?", "question_choices": [ "to decipher accurate phylogenies", "to eliminate analogous traits", "to identify mutations in DNA codes", "to locate homoplasies" ], "question_id": "fs-idp149139024", "question_text": "Why do scientists apply the concept of maximum parsimony?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "horizontal gene transfer" }, "bloom": null, "hl_context": "Classical thinking about prokaryotic evolution , included in the classic tree model , is that species evolve clonally . That is , they produce offspring themselves with only random mutations causing the descent into the variety of modern-day and extinct species known to science . This view is somewhat complicated in eukaryotes that reproduce sexually , but the laws of Mendelian genetics explain the variation in offspring , again , to be a result of a mutation within the species . <hl> The concept of genes being transferred between unrelated species was not considered as a possibility until relatively recently . <hl> <hl> Horizontal gene transfer ( HGT ) , also known as lateral gene transfer , is the transfer of genes between unrelated species . <hl> <hl> HGT has been shown to be an ever-present phenomenon , with many evolutionists postulating a major role for this process in evolution , thus complicating the simple tree model . <hl> Genes have been shown to be passed between species which are only distantly related using standard phylogeny , thus adding a layer of complexity to the understanding of phylogenetic relationships .", "hl_sentences": "The concept of genes being transferred between unrelated species was not considered as a possibility until relatively recently . Horizontal gene transfer ( HGT ) , also known as lateral gene transfer , is the transfer of genes between unrelated species . 
HGT has been shown to be an ever-present phenomenon , with many evolutionists postulating a major role for this process in evolution , thus complicating the simple tree model .", "question": { "cloze_format": "The transfer of genes by a mechanism not involving asexual reproduction is called ___.", "normal_format": "What is the transfer of genes by a mechanism not involving asexual reproduction called?", "question_choices": [ "meiosis", "web of life", "horizontal gene transfer", "gene fusion" ], "question_id": "fs-idm11427024", "question_text": "The transfer of genes by a mechanism not involving asexual reproduction is called:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "gene transfer agents" }, "bloom": null, "hl_context": "<hl> More recently , a fourth mechanism of gene transfer between prokaryotes has been discovered . <hl> <hl> Small , virus-like particles called gene transfer agents ( GTAs ) transfer random genomic segments from one species of prokaryote to another . <hl> <hl> GTAs have been shown to be responsible for genetic changes , sometimes at a very high frequency compared to other evolutionary processes . <hl> The first GTA was characterized in 1974 using purple , non-sulfur bacteria . These GTAs , which are thought to be bacteriophages that lost the ability to reproduce on their own , carry random pieces of DNA from one organism to another . The ability of GTAs to act with high frequency has been demonstrated in controlled studies using marine bacteria . Gene transfer events in marine prokaryotes , either by GTAs or by viruses , have been estimated to be as high as 10 13 per year in the Mediterranean Sea alone . GTAs and viruses are thought to be efficient HGT vehicles with a major impact on prokaryotic evolution .", "hl_sentences": "More recently , a fourth mechanism of gene transfer between prokaryotes has been discovered . Small , virus-like particles called gene transfer agents ( GTAs ) transfer random genomic segments from one species of prokaryote to another . GTAs have been shown to be responsible for genetic changes , sometimes at a very high frequency compared to other evolutionary processes .", "question": { "cloze_format": "The transfer of genetic material by particles from one species to another, especially in marine prokaryotes, is called ___. ", "normal_format": "What are particles that transfer genetic material from one species to another, especially in marine prokaryotes?", "question_choices": [ "horizontal gene transfer", "lateral gene transfer", "genome fusion device", "gene transfer agents" ], "question_id": "fs-idm62610816", "question_text": "Particles that transfer genetic material from one species to another, especially in marine prokaryotes:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "single common ancestor" }, "bloom": "2", "hl_context": "Many phylogenetic trees have been shown as models of the evolutionary relationship among species . Phylogenetic trees originated with Charles Darwin , who sketched the first phylogenetic tree in 1837 ( Figure 20.12 a ) , which served as a pattern for subsequent studies for more than a century . <hl> The concept of a phylogenetic tree with a single trunk representing a common ancestor , with the branches representing the divergence of species from this ancestor , fits well with the structure of many common trees , such as the oak ( Figure 20.12 b ) .
<hl> However , evidence from modern DNA sequence analysis and newly developed computer algorithms has caused skepticism about the validity of the standard tree model in the scientific community .", "hl_sentences": "The concept of a phylogenetic tree with a single trunk representing a common ancestor , with the branches representing the divergence of species from this ancestor , fits well with the structure of many common trees , such as the oak ( Figure 20.12 b ) .", "question": { "cloze_format": "The trunk of the classic phylogenetic tree represents (a) ___.", "normal_format": "What does the trunk of the classic phylogenetic tree represent?", "question_choices": [ "single common ancestor", "pool of ancestral organisms", "new species", "old species" ], "question_id": "fs-idp131254784", "question_text": "What does the trunk of the classic phylogenetic tree represent?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "ring of life" }, "bloom": null, "hl_context": "<hl> Others have proposed abandoning any tree-like model of phylogeny in favor of a ring structure , the so-called “ ring of life ” ( Figure 20.17 ); a phylogenetic model where all three domains of life evolved from a pool of primitive prokaryotes . <hl> Lake , again using the conditioned reconstruction algorithm , proposes a ring-like model in which species of all three domains — Archaea , Bacteria , and Eukarya — evolved from a single pool of gene-swapping prokaryotes . His laboratory proposes that this structure is the best fit for data from extensive DNA analyses performed in his laboratory , and that the ring model is the only one that adequately takes HGT and genomic fusion into account . However , other phylogeneticists remain highly skeptical of this model .", "hl_sentences": "Others have proposed abandoning any tree-like model of phylogeny in favor of a ring structure , the so-called “ ring of life ” ( Figure 20.17 ); a phylogenetic model where all three domains of life evolved from a pool of primitive prokaryotes .", "question": { "cloze_format": "The phylogenetic model that proposes that all three domains of life evolved from a pool of primitive prokaryotes is the ___ .", "normal_format": "Which phylogenetic model proposes that all three domains of life evolved from a pool of primitive prokaryotes?", "question_choices": [ "tree of life", "web of life", "ring of life", "network model" ], "question_id": "fs-idm63059136", "question_text": "Which phylogenetic model proposes that all three domains of life evolved from a pool of primitive prokaryotes?" }, "references_are_paraphrase": null } ]
20
20.1 Organizing Life on Earth Learning Objectives By the end of this section, you will be able to: Discuss the need for a comprehensive classification system List the different levels of the taxonomic classification system Describe how systematics and taxonomy relate to phylogeny Discuss the components and purpose of a phylogenetic tree In scientific terms, the evolutionary history and relationship of an organism or group of organisms is called its phylogeny. A phylogeny describes the relationships of an organism, such as from which organisms it is thought to have evolved, to which species it is most closely related, and so forth. Phylogenetic relationships provide information on shared ancestry but not necessarily on how organisms are similar or different. Phylogenetic Trees Scientists use a tool called a phylogenetic tree to show the evolutionary pathways and connections among organisms. A phylogenetic tree is a diagram used to reflect evolutionary relationships among organisms or groups of organisms. Scientists consider phylogenetic trees to be a hypothesis of the evolutionary past since one cannot go back to confirm the proposed relationships. In other words, a “tree of life” can be constructed to illustrate when different organisms evolved and to show the relationships among different organisms ( Figure 20.2 ). Unlike a taxonomic classification diagram, a phylogenetic tree can be read like a map of evolutionary history. Many phylogenetic trees have a single lineage at the base representing a common ancestor. Scientists call such trees rooted, which means there is a single ancestral lineage (typically drawn from the bottom or left) to which all organisms represented in the diagram relate. Notice in the rooted phylogenetic tree that the three domains—Bacteria, Archaea, and Eukarya—diverge from a single point and branch off. The small branch that plants and animals (including humans) occupy in this diagram shows how recent and minuscule these groups are compared with other organisms. Unrooted trees don’t show a common ancestor but do show relationships among species. In a rooted tree, the branching indicates evolutionary relationships ( Figure 20.3 ). The point where a split occurs, called a branch point, represents where a single lineage evolved into a distinct new one. A lineage that evolved early from the root and remains unbranched is called a basal taxon. When two lineages stem from the same branch point, they are called sister taxa. A branch with more than two lineages is called a polytomy and serves to illustrate where scientists have not definitively determined all of the relationships. It is important to note that although sister taxa and polytomy do share an ancestor, it does not mean that the groups of organisms split or evolved from each other. Organisms in two taxa may have split apart at a specific branch point, but neither taxon gave rise to the other. The diagrams above can serve as a pathway to understanding evolutionary history. The pathway can be traced from the origin of life to any individual species by navigating through the evolutionary branches between the two points. Also, by starting with a single species and tracing back towards the "trunk" of the tree, one can discover that species' ancestors, as well as where lineages share a common ancestry. In addition, the tree can be used to study entire groups of organisms.
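Because a rooted tree is just nodes joined at branch points, the vocabulary above maps directly onto a small data structure. The taxa below are invented placeholders; the sketch shows how sister taxa share a branch point and how a lineage traces back to the root:

```python
# A minimal rooted phylogenetic tree with hypothetical taxa.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    children: list = field(default_factory=list)
    parent: "Node" = None

    def add(self, child: "Node") -> "Node":
        child.parent = self
        self.children.append(child)
        return child

def sister_taxa(node: Node) -> list:
    """Lineages stemming from the same branch point as `node`."""
    if node.parent is None:
        return []
    return [c.name for c in node.parent.children if c is not node]

def lineage(node: Node) -> list:
    """Trace a taxon back toward the root, listing its ancestors."""
    path = []
    while node.parent is not None:
        node = node.parent
        path.append(node.name)
    return path

root = Node("common ancestor")
split = root.add(Node("branch point 1"))
root.add(Node("basal taxon"))   # evolved early from the root, remains unbranched
a = split.add(Node("taxon A"))
split.add(Node("taxon B"))      # taxon A and taxon B are sister taxa

print(sister_taxa(a))  # ['taxon B']
print(lineage(a))      # ['branch point 1', 'common ancestor']
```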
Another point to mention on phylogenetic tree structure is that rotation at branch points does not change the information. For example, if a branch point was rotated and the taxon order changed, this would not alter the information because the evolution of each taxon from the branch point was independent of the other. Many disciplines within the study of biology contribute to understanding how past and present life evolved over time; these disciplines together contribute to building, updating, and maintaining the “tree of life.” Information is used to organize and classify organisms based on evolutionary relationships in a scientific field called systematics. Data may be collected from fossils, from studying the structure of body parts or molecules used by an organism, and by DNA analysis. By combining data from many sources, scientists can put together the phylogeny of an organism; since phylogenetic trees are hypotheses, they will continue to change as new types of life are discovered and new information is learned. Limitations of Phylogenetic Trees It may be easy to assume that more closely related organisms look more alike, and while this is often the case, it is not always true. If two closely related lineages evolved under significantly varied surroundings or after the evolution of a major new adaptation, it is possible for the two groups to appear more different than other groups that are not as closely related. For example, the phylogenetic tree in Figure 20.4 shows that lizards and rabbits both have amniotic eggs, whereas frogs do not; yet lizards and frogs appear more similar than lizards and rabbits. Another aspect of phylogenetic trees is that, unless otherwise indicated, the branches do not account for length of time, only the evolutionary order. In other words, the length of a branch does not typically mean more time passed, nor does a short branch mean less time passed—unless specified on the diagram. For example, in Figure 20.4 , the tree does not indicate how much time passed between the evolution of amniotic eggs and hair. What the tree does show is the order in which things took place. Again using Figure 20.4 , the tree shows that the oldest trait is the vertebral column, followed by hinged jaws, and so forth. Remember that any phylogenetic tree is a part of the greater whole, and like a real tree, it does not grow in only one direction after a new branch develops. So, for the organisms in Figure 20.4 , just because a vertebral column evolved does not mean that invertebrate evolution ceased; it only means that a new branch formed. Also, groups that are not closely related, but evolve under similar conditions, may appear more phenotypically similar to each other than to a close relative. Link to Learning Head to this website to see interactive exercises that allow you to explore the evolutionary relationships among species. The Levels of Classification Taxonomy (which literally means “arrangement law”) is the science of classifying organisms to construct internationally shared classification systems with each organism placed into more and more inclusive groupings. Think about how a grocery store is organized. One large space is divided into departments, such as produce, dairy, and meats. Then each department further divides into aisles, then each aisle into categories and brands, and then finally a single product. This organization from larger to smaller, more specific categories is called a hierarchical system. The taxonomic classification system (also called the Linnaean system after its inventor, Carl Linnaeus, a Swedish botanist, zoologist, and physician) uses a hierarchical model.
Moving from the point of origin, the groups become more specific, until one branch ends as a single species. For example, after the common beginning of all life, scientists divide organisms into three large categories called domains: Bacteria, Archaea, and Eukarya. Within each domain is a second category called a kingdom. After kingdoms, the subsequent categories of increasing specificity are: phylum, class, order, family, genus, and species ( Figure 20.5 ). The kingdom Animalia stems from the Eukarya domain. For the common dog, the classification levels would be as shown in Figure 20.5 . Therefore, the full name of an organism technically has eight terms. For the dog, it is: Eukarya, Animalia, Chordata, Mammalia, Carnivora, Canidae, Canis, and lupus. Notice that each name is capitalized except for species, and the genus and species names are italicized. Scientists generally refer to an organism only by its genus and species, which is its two-word scientific name, in what is called binomial nomenclature. Therefore, the scientific name of the dog is Canis lupus. The name at each level is also called a taxon. In other words, dogs are in order Carnivora. Carnivora is the name of the taxon at the order level; Canidae is the taxon at the family level, and so forth. Organisms also have a common name that people typically use, in this case, dog. Note that the dog is additionally a subspecies: the “familiaris” in Canis lupus familiaris. Subspecies are members of the same species that are capable of mating and reproducing viable offspring, but they are considered separate subspecies due to geographic or behavioral isolation or other factors. Figure 20.6 shows how the levels move toward specificity with other organisms. Notice how the dog shares a domain with the widest diversity of organisms, including plants and butterflies. At each sublevel, the organisms become more similar because they are more closely related. Historically, scientists classified organisms using characteristics, but as DNA technology developed, more precise phylogenies have been determined. Visual Connection At what levels are cats and dogs considered to be part of the same group? Link to Learning Visit this website to classify three organisms—bear, orchid, and sea cucumber—from kingdom to species. To launch the game, under Classifying Life, click the picture of the bear or the Launch Interactive button. Recent genetic analysis and other advancements have found that some earlier phylogenetic classifications do not align with the evolutionary past; therefore, changes and updates must be made as new discoveries occur. Recall that phylogenetic trees are hypotheses and are modified as data becomes available. In addition, classification historically has focused on grouping organisms mainly by shared characteristics and does not necessarily illustrate how the various groups relate to each other from an evolutionary perspective. For example, despite the fact that a hippopotamus resembles a pig more than a whale, the hippopotamus may be the closest living relative of the whale.
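Because the eight levels are strictly ordered, the dog's classification given above can be written down as a simple ordered structure, with the binomial name read off the last two levels. A minimal sketch:

```python
# The eight Linnaean levels for the common dog, from most inclusive
# (domain) to most specific (species), as listed in the text.
DOG_CLASSIFICATION = [
    ("domain", "Eukarya"),
    ("kingdom", "Animalia"),
    ("phylum", "Chordata"),
    ("class", "Mammalia"),
    ("order", "Carnivora"),
    ("family", "Canidae"),
    ("genus", "Canis"),
    ("species", "lupus"),
]

def binomial_name(classification) -> str:
    """Binomial nomenclature: the genus and species names together."""
    taxa = dict(classification)
    return f"{taxa['genus']} {taxa['species']}"

print(binomial_name(DOG_CLASSIFICATION))  # Canis lupus
```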
20.2 Determining Evolutionary Relationships Learning Objectives By the end of this section, you will be able to: Compare homologous and analogous traits Discuss the purpose of cladistics Describe maximum parsimony Scientists must collect accurate information that allows them to make evolutionary connections among organisms. Similar to detective work, scientists must use evidence to uncover the facts. In the case of phylogeny, evolutionary investigations focus on two types of evidence: morphologic (form and function) and genetic. Two Options for Similarities In general, organisms that share similar physical features and genomes tend to be more closely related than those that do not. Such features that overlap both morphologically (in form) and genetically are referred to as homologous structures; they stem from developmental similarities that are based on evolution. For example, the bones in the wings of bats and birds have homologous structures ( Figure 20.7 ). Notice it is not simply a single bone, but rather a grouping of several bones arranged in a similar way. The more complex the feature, the more likely any kind of overlap is due to a common evolutionary past. Imagine two people from different countries both inventing a car with all the same parts and in exactly the same arrangement without any previous or shared knowledge. That outcome would be highly improbable. However, if two people both invented a hammer, it would be reasonable to conclude that both could have the original idea without the help of the other. The same relationship between complexity and shared evolutionary history is true for homologous structures in organisms. Misleading Appearances Some organisms may be very closely related, even though a minor genetic change caused a major morphological difference to make them look quite different. Conversely, organisms may be only distantly related yet appear very much alike. This usually happens because both organisms evolved common adaptations within similar environmental conditions. When similar characteristics occur because of environmental constraints and not due to a close evolutionary relationship, it is called an analogy or homoplasy. For example, insects use wings to fly like bats and birds, but the wing structure and embryonic origin are completely different. These are called analogous structures ( Figure 20.8 ). Similar traits can be either homologous or analogous. Homologous structures share a similar embryonic origin; analogous organs have a similar function. For example, the bones in the front flipper of a whale are homologous to the bones in the human arm. These structures are not analogous. The wings of a butterfly and the wings of a bird are analogous but not homologous. Some structures are both analogous and homologous: the wings of a bird and the wings of a bat are both homologous and analogous. Scientists must determine which type of similarity a feature exhibits to decipher the phylogeny of the organisms being studied. Link to Learning This website has several examples to show how appearances can be misleading in understanding the phylogenetic relationships of organisms. Molecular Comparisons With the advancement of DNA technology, the area of molecular systematics, which describes the use of information on the molecular level including DNA analysis, has blossomed. New computer programs not only confirm the classification of many organisms but also uncover errors in earlier classifications. As with physical characteristics, even the DNA sequence can be tricky to read in some cases. In some situations, two very closely related organisms can appear unrelated if a mutation occurred that caused a shift in the genetic code. An insertion or deletion mutation would move each nucleotide base over one place, causing two similar codes to appear unrelated.
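That shifting effect is easy to demonstrate. In the sketch below (the sequences are invented for illustration), one inserted base moves every downstream position, so a naive position-by-position comparison reports low similarity between two nearly identical sequences:

```python
# How one insertion makes near-identical sequences look unrelated to a
# naive, unaligned position-by-position comparison.
def percent_identity(seq1: str, seq2: str) -> float:
    """Percentage of positions with the same base, compared without alignment."""
    n = min(len(seq1), len(seq2))
    matches = sum(seq1[i] == seq2[i] for i in range(n))
    return 100.0 * matches / n

original = "ATGGCCTAAGGC"
shifted = "A" + original  # a single inserted base shifts every later position

print(percent_identity(original, original))  # 100.0
print(percent_identity(original, shifted))   # far lower, despite near-identity
```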
Sometimes two segments of DNA code in distantly related organisms randomly share a high percentage of bases in the same locations, causing these organisms to appear closely related when they are not. For both of these situations, computer technologies have been developed to help identify the actual relationships, and, ultimately, the coupled use of both morphologic and molecular information is more effective in determining phylogeny. Evolution Connection Why Does Phylogeny Matter? Evolutionary biologists could list many reasons why understanding phylogeny is important to everyday life in human society. For botanists, phylogeny acts as a guide to discovering new plants that can be used to benefit people. Think of all the ways humans use plants—food, medicine, and clothing are a few examples. If a plant contains a compound that is effective in treating cancer, scientists might want to examine all of the relatives of that plant for other useful drugs. A research team in China identified a segment of DNA thought to be common to some medicinal plants in the family Fabaceae (the legume family) and worked to identify which species had this segment ( Figure 20.9 ). After testing plant species in this family, the team found a DNA marker (a known location on a chromosome that enabled them to identify the species) to be present. Then, using the DNA to uncover phylogenetic relationships, the team could identify whether a newly discovered plant was in this family and assess its potential medicinal properties. Building Phylogenetic Trees How do scientists construct phylogenetic trees? After the homologous and analogous traits are sorted, scientists often organize the homologous traits using a system called cladistics. This system sorts organisms into clades: groups of organisms that descended from a single ancestor. For example, in Figure 20.10 , all of the organisms in the orange region evolved from a single ancestor that had amniotic eggs. Consequently, all of these organisms also have amniotic eggs and make a single clade, also called a monophyletic group. Clades must include all of the descendants from a branch point. Visual Connection Which animals in this figure belong to a clade that includes animals with hair? Which evolved first, hair or the amniotic egg? Clades can vary in size depending on which branch point is being referenced. The important factor is that all of the organisms in the clade or monophyletic group stem from a single point on the tree. This can be remembered because monophyletic breaks down into “mono,” meaning one, and “phyletic,” meaning evolutionary relationship. Figure 20.11 shows various examples of clades. Notice how each clade comes from a single point, whereas the non-clade groups show branches that do not share a single point. Visual Connection What is the largest clade in this diagram? Shared Characteristics Organisms evolve from common ancestors and then diversify. Scientists use the phrase “descent with modification” because even though related organisms have many of the same characteristics and genetic codes, changes occur. This pattern repeats over and over as one goes through the phylogenetic tree of life: 1. A change in the genetic makeup of an organism leads to a new trait, which becomes prevalent in the group. 2. Many organisms descend from this point and have this trait. 3. New variations continue to arise: some are adaptive and persist, leading to new traits. 4. With new traits, a new branch point is determined (go back to step 1 and repeat).
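The defining rule of a clade, a branch point plus all of its descendants, is naturally recursive. The sketch below encodes a tree loosely based on Figure 20.10 (groupings simplified for illustration) and collects every member of a monophyletic group:

```python
# A clade (monophyletic group) is a branch point plus ALL of its descendants.
# Internal branch points map to their children; anything else is a leaf taxon.
TREE = {
    "vertebrates": ["lampreys", "jawed vertebrates"],
    "jawed vertebrates": ["fish", "tetrapods"],
    "tetrapods": ["amphibians", "amniotes"],
    "amniotes": ["reptiles and birds", "mammals"],  # the amniotic-egg clade
}

def clade(branch_point: str, tree=TREE) -> list:
    """Every organism descending from a single branch point."""
    members = []
    for child in tree.get(branch_point, []):
        if child in tree:
            members.extend(clade(child, tree))  # descend through inner nodes
        else:
            members.append(child)
    return members

print(clade("amniotes"))   # ['reptiles and birds', 'mammals']
print(clade("tetrapods"))  # ['amphibians', 'reptiles and birds', 'mammals']
```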
If a characteristic is found in the ancestor of a group, it is considered a shared ancestral character because all of the organisms in the taxon or clade have that trait. The vertebral column in Figure 20.10 is a shared ancestral character. Now consider the amniotic egg characteristic in the same figure. Only some of the organisms in Figure 20.10 have this trait, and to those that do, it is called a shared derived character because this trait was derived at some point in the lineage but is not found in all of the ancestors in the tree. The tricky aspect of shared ancestral and shared derived characters is the fact that these terms are relative. The same trait can be considered one or the other depending on the particular diagram being used. Returning to Figure 20.10 , note that the amniotic egg is a shared ancestral character for the Amniota clade, while having hair is a shared derived character for some organisms in this group. These terms help scientists distinguish between clades in the building of phylogenetic trees. Choosing the Right Relationships Imagine being the person responsible for organizing all of the items in a department store properly—an overwhelming task. Organizing the evolutionary relationships of all life on Earth proves much more difficult: scientists must span enormous blocks of time and work with information from long-extinct organisms. Trying to decipher the proper connections, especially given the presence of homologies and analogies, makes the task of building an accurate tree of life extraordinarily difficult. Add to that the advancement of DNA technology, which now provides large quantities of genetic sequences to be used and analyzed. Taxonomy is a subjective discipline: many organisms have more than one connection to each other, so each taxonomist will decide the order of connections. To aid in the tremendous task of describing phylogenies accurately, scientists often use a concept called maximum parsimony, which means that events occurred in the simplest, most obvious way. For example, if a group of people entered a forest preserve to go hiking, based on the principle of maximum parsimony, one could predict that most of the people would hike on established trails rather than forge new ones. For scientists deciphering evolutionary pathways, the same idea is used: the pathway of evolution probably includes the fewest major events that coincide with the evidence at hand. Starting with all of the homologous traits in a group of organisms, scientists look for the most obvious and simple order of evolutionary events that led to the occurrence of those traits. Link to Learning Head to this website to learn how maximum parsimony is used to create phylogenetic trees. These tools and concepts are only a few of the strategies scientists use to tackle the task of revealing the evolutionary history of life on Earth. Recently, newer technologies have uncovered surprising discoveries with unexpected relationships, such as the fact that people seem to be more closely related to fungi than fungi are to plants. Sound unbelievable? As the information about DNA sequences grows, scientists will become closer to mapping the evolutionary history of all life on Earth.
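Parsimony can be made concrete with a small scoring exercise. The sketch below applies Fitch's counting method (a standard way to score parsimony, not something introduced in this chapter) to two hypothetical four-taxon trees for a single trait such as the amniotic egg: the preferred tree is the one requiring the fewest trait changes.

```python
# Score candidate trees by the minimum number of trait changes each requires
# (Fitch's method), then prefer the most parsimonious one. Trees and trait
# states here are hypothetical; 'T'/'F' mark presence/absence of the trait.
def fitch_changes(tree, states):
    """Return (possible_states, change_count) for a nested-tuple tree."""
    if isinstance(tree, str):  # a leaf taxon
        return {states[tree]}, 0
    (left, lc), (right, rc) = (fitch_changes(t, states) for t in tree)
    overlap = left & right
    if overlap:
        return overlap, lc + rc        # no change needed at this node
    return left | right, lc + rc + 1   # one inferred trait change

states = {"lizard": "T", "rabbit": "T", "frog": "F", "fish": "F"}
tree_a = (("lizard", "rabbit"), ("frog", "fish"))  # amniotes grouped together
tree_b = (("lizard", "frog"), ("rabbit", "fish"))

print(fitch_changes(tree_a, states)[1])  # 1 change: the more parsimonious tree
print(fitch_changes(tree_b, states)[1])  # 2 changes
```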
20.3 Perspectives on the Phylogenetic Tree Learning Objectives By the end of this section, you will be able to: Describe horizontal gene transfer Illustrate how prokaryotes and eukaryotes transfer genes horizontally Identify the web and ring models of phylogenetic relationships and describe how they differ from the original phylogenetic tree concept The concepts of phylogenetic modeling are constantly changing. It is one of the most dynamic fields of study in all of biology. Over the last several decades, new research has challenged scientists’ ideas about how organisms are related. New models of these relationships have been proposed for consideration by the scientific community. Many phylogenetic trees have been shown as models of the evolutionary relationship among species. Phylogenetic trees originated with Charles Darwin, who sketched the first phylogenetic tree in 1837 ( Figure 20.12 a ), which served as a pattern for subsequent studies for more than a century. The concept of a phylogenetic tree with a single trunk representing a common ancestor, with the branches representing the divergence of species from this ancestor, fits well with the structure of many common trees, such as the oak ( Figure 20.12 b ). However, evidence from modern DNA sequence analysis and newly developed computer algorithms has caused skepticism about the validity of the standard tree model in the scientific community. Limitations to the Classic Model Classical thinking about prokaryotic evolution, included in the classic tree model, is that species evolve clonally. That is, they produce offspring themselves with only random mutations causing the descent into the variety of modern-day and extinct species known to science. This view is somewhat complicated in eukaryotes that reproduce sexually, but the laws of Mendelian genetics explain the variation in offspring, again, to be a result of a mutation within the species. The concept of genes being transferred between unrelated species was not considered as a possibility until relatively recently. Horizontal gene transfer (HGT), also known as lateral gene transfer, is the transfer of genes between unrelated species. HGT has been shown to be an ever-present phenomenon, with many evolutionists postulating a major role for this process in evolution, thus complicating the simple tree model. Genes have been shown to be passed between species that are only distantly related according to standard phylogeny, thus adding a layer of complexity to the understanding of phylogenetic relationships. The various ways that HGT occurs in prokaryotes are important to understanding phylogenies. Although at present HGT is not viewed as important to eukaryotic evolution, HGT does occur in this domain as well. Finally, as an example of the ultimate gene transfer, theories of genome fusion between symbiotic or endosymbiotic organisms have been proposed to explain an event of great importance—the evolution of the first eukaryotic cell, without which humans could not have come into existence. Horizontal Gene Transfer Horizontal gene transfer (HGT) is the introduction of genetic material from one species to another species by mechanisms other than the vertical transmission from parent(s) to offspring. These transfers allow even distantly related species to share genes, influencing their phenotypes. It is thought that HGT is more prevalent in prokaryotes, but that only about 2% of the prokaryotic genome may be transferred by this process.
Some researchers believe such estimates are premature: the actual importance of HGT to evolutionary processes must be viewed as a work in progress. As the phenomenon is investigated more thoroughly, it may be revealed to be more common. Many scientists believe that HGT and mutation appear to be (especially in prokaryotes) a significant source of genetic variation, which is the raw material for the process of natural selection. These transfers may occur between any two species that share an intimate relationship ( Table 20.1 ).

Table 20.1 Summary of Mechanisms of Prokaryotic and Eukaryotic HGT

| Group | Mechanism | Mode of Transmission | Example |
| --- | --- | --- | --- |
| Prokaryotes | transformation | DNA uptake | many prokaryotes |
| Prokaryotes | transduction | bacteriophage (virus) | bacteria |
| Prokaryotes | conjugation | pilus | many prokaryotes |
| Prokaryotes | gene transfer agents | phage-like particles | purple non-sulfur bacteria |
| Eukaryotes | from food organisms | unknown | aphid |
| Eukaryotes | jumping genes | transposons | rice and millet plants |
| Eukaryotes | epiphytes/parasites | unknown | yew tree fungi |
| Eukaryotes | from viral infections | | |

HGT in Prokaryotes The mechanism of HGT has been shown to be quite common in the prokaryotic domains of Bacteria and Archaea, significantly changing the way their evolution is viewed. The majority of evolutionary models, such as the Endosymbiont Theory, propose that eukaryotes descended from multiple prokaryotes, which makes HGT all the more important to understanding the phylogenetic relationships of all extant and extinct species. The fact that genes are transferred among common bacteria is well known to microbiology students. These gene transfers between species are the major mechanism whereby bacteria acquire resistance to antibiotics. Classically, this type of transfer has been thought to occur by three different mechanisms: Transformation: naked DNA is taken up by a bacterium. Transduction: genes are transferred using a virus. Conjugation: the use of a hollow tube called a pilus to transfer genes between organisms. More recently, a fourth mechanism of gene transfer between prokaryotes has been discovered. Small, virus-like particles called gene transfer agents (GTAs) transfer random genomic segments from one species of prokaryote to another. GTAs have been shown to be responsible for genetic changes, sometimes at a very high frequency compared to other evolutionary processes. The first GTA was characterized in 1974 using purple, non-sulfur bacteria. These GTAs, which are thought to be bacteriophages that lost the ability to reproduce on their own, carry random pieces of DNA from one organism to another. The ability of GTAs to act with high frequency has been demonstrated in controlled studies using marine bacteria. Gene transfer events in marine prokaryotes, either by GTAs or by viruses, have been estimated to be as high as 10¹³ per year in the Mediterranean Sea alone. GTAs and viruses are thought to be efficient HGT vehicles with a major impact on prokaryotic evolution. As a consequence of this modern DNA analysis, the idea that eukaryotes evolved directly from Archaea has fallen out of favor. While eukaryotes share many features that are absent in bacteria, such as the TATA box (found in the promoter region of many genes), the discovery that some eukaryotic genes were more homologous with bacterial DNA than Archaea DNA made this idea less tenable. Furthermore, the fusion of genomes from Archaea and Bacteria by endosymbiosis has been proposed as the ultimate event in eukaryotic evolution.
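For quick reference, the four prokaryotic HGT routes described in this section can be kept straight with a small lookup structure. The one-line descriptions below paraphrase the text and are purely illustrative:

```python
# The four prokaryotic HGT mechanisms and their vehicles, per the text above.
HGT_MECHANISMS = {
    "transformation": "naked DNA is taken up from the environment",
    "transduction": "genes are carried between cells by a bacteriophage",
    "conjugation": "DNA passes through a hollow pilus between cells",
    "gene transfer agents": "phage-like particles move random genomic segments",
}

for mechanism, vehicle in HGT_MECHANISMS.items():
    print(f"{mechanism}: {vehicle}")
```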
HGT in Eukaryotes Although it is easy to see how prokaryotes exchange genetic material by HGT, it was initially thought that this process was absent in eukaryotes. After all, prokaryotes are but single cells exposed directly to their environment, whereas the sex cells of multicellular organisms are usually sequestered in protected parts of the body. It follows from this idea that gene transfers between multicellular eukaryotes should be more difficult. Indeed, it is thought that this process is rarer in eukaryotes and has a much smaller evolutionary impact than in prokaryotes. In spite of this fact, HGT between distantly related organisms has been demonstrated in several eukaryotic species, and it is possible that more examples will be discovered in the future. In plants, gene transfer has been observed in species that cannot cross-pollinate by normal means. Transposons or “jumping genes” have been shown to transfer between rice and millet plant species. Furthermore, fungal species feeding on yew trees, from whose bark the anti-cancer drug TAXOL® is derived, have acquired the ability to make taxol themselves, a clear example of gene transfer. In animals, a particularly interesting example of HGT occurs within the aphid species ( Figure 20.13 ). Aphids are insects that vary in color based on carotenoid content. Carotenoids are pigments made by a variety of plants, fungi, and microbes, and they serve a variety of functions in animals, which obtain these chemicals from their food. Humans require carotenoids to synthesize vitamin A, and we obtain them by eating orange fruits and vegetables: carrots, apricots, mangoes, and sweet potatoes. On the other hand, aphids have acquired the ability to make the carotenoids on their own. According to DNA analysis, this ability is due to the transfer of fungal genes into the insect by HGT, presumably as the insect consumed fungi for food. A carotenoid enzyme called a desaturase is responsible for the red coloration seen in certain aphids, and it has been further shown that when this gene is inactivated by mutation, the aphids revert to their more common green color ( Figure 20.13 ). Genome Fusion and the Evolution of Eukaryotes Scientists believe the ultimate in HGT occurs through genome fusion between different species of prokaryotes when two symbiotic organisms become endosymbiotic. This occurs when one species is taken inside the cytoplasm of another species, which ultimately results in a genome consisting of genes from both the endosymbiont and the host. This mechanism is an aspect of the Endosymbiont Theory, which is accepted by a majority of biologists as the mechanism whereby eukaryotic cells obtained their mitochondria and chloroplasts. However, the role of endosymbiosis in the development of the nucleus is more controversial. Nuclear and mitochondrial DNA are thought to be of different (separate) evolutionary origin, with the mitochondrial DNA being derived from the circular genomes of bacteria that were engulfed by ancient prokaryotic cells. Mitochondrial DNA can be regarded as the smallest chromosome. Interestingly enough, mitochondrial DNA is inherited only from the mother. The mitochondrial DNA degrades in sperm when the sperm degrades in the fertilized egg or in other instances when the mitochondria located in the flagellum of the sperm fail to enter the egg.
Within the past decade, the process of genome fusion by endosymbiosis has been proposed by James Lake of the UCLA/NASA Astrobiology Institute to be responsible for the evolution of the first eukaryotic cells ( Figure 20.14 a ). Using DNA analysis and a new mathematical algorithm called conditioned reconstruction (CR), his laboratory proposed that eukaryotic cells developed from an endosymbiotic gene fusion between two species, one an Archaea and the other a Bacteria. As mentioned, some eukaryotic genes resemble those of Archaea, whereas others resemble those from Bacteria. An endosymbiotic fusion event, such as Lake has proposed, would clearly explain this observation. On the other hand, this work is new and the CR algorithm is relatively unsubstantiated, which causes many scientists to resist this hypothesis. More recent work by Lake ( Figure 20.14 b ) proposes that gram-negative bacteria, which are unique within their domain in that they contain two lipid bilayer membranes, indeed resulted from an endosymbiotic fusion of archaeal and bacterial species. The double membrane would be a direct result of the endosymbiosis, with the endosymbiont picking up the second membrane from the host as it was internalized. This mechanism has also been used to explain the double membranes found in mitochondria and chloroplasts. Lake’s work is not without skepticism, and the ideas are still debated within the biological science community. In addition to Lake’s hypothesis, there are several other competing theories as to the origin of eukaryotes. How did the eukaryotic nucleus evolve? One theory is that the prokaryotic cells produced an additional membrane that surrounded the bacterial chromosome. Some bacteria have the DNA enclosed by two membranes; however, there is no evidence of a nucleolus or nuclear pores. Other proteobacteria also have membrane-bound chromosomes. If the eukaryotic nucleus evolved this way, we would expect one of the two types of prokaryotes to be more closely related to eukaryotes. The nucleus-first hypothesis proposes that the nucleus evolved in prokaryotes first ( Figure 20.15 a ), followed by a later fusion of the new eukaryote with bacteria that became mitochondria. The mitochondria-first hypothesis proposes that mitochondria were first established in a prokaryotic host ( Figure 20.15 b ), which subsequently acquired a nucleus, by fusion or other mechanisms, to become the first eukaryotic cell. Most interestingly, the eukaryote-first hypothesis proposes that prokaryotes actually evolved from eukaryotes by losing genes and complexity ( Figure 20.15 c ). All of these hypotheses are testable. Only time and more experimentation will determine which hypothesis is best supported by data. Web and Network Models The recognition of the importance of HGT, especially in the evolution of prokaryotes, has caused some to propose abandoning the classic “tree of life” model. In 1999, W. Ford Doolittle proposed a phylogenetic model that resembles a web or a network more than a tree. The hypothesis is that eukaryotes evolved not from a single prokaryotic ancestor, but from a pool of many species that were sharing genes by HGT mechanisms. As shown in Figure 20.16 a , some individual prokaryotes were responsible for transferring the bacteria that caused mitochondrial development to the new eukaryotes, whereas other species transferred the bacteria that gave rise to chloroplasts. 
This model is often called the "web of life." In an effort to save the tree analogy, some have proposed using the Ficus tree (Figure 20.16b), with its multiple trunks, as a phylogenetic model to represent a diminished evolutionary role for HGT.

Ring of Life Models

Others have proposed abandoning any tree-like model of phylogeny in favor of a ring structure, the so-called "ring of life" (Figure 20.17): a phylogenetic model in which all three domains of life evolved from a pool of primitive prokaryotes. Lake, again using the conditioned reconstruction algorithm, proposes a ring-like model in which species of all three domains (Archaea, Bacteria, and Eukarya) evolved from a single pool of gene-swapping prokaryotes. His laboratory proposes that this structure is the best fit for data from extensive DNA analyses performed in his laboratory, and that the ring model is the only one that adequately takes HGT and genomic fusion into account. However, other phylogeneticists remain highly skeptical of this model. In summary, the "tree of life" model proposed by Darwin must be modified to include HGT. Does this mean abandoning the tree model completely? Even Lake argues that all attempts should be made to discover some modification of the tree model that allows it to accurately fit his data, and that only the inability to do so will sway people toward his ring proposal. Nor does this mean that a tree, web, or ring will correspond completely to an accurate description of the phylogenetic relationships of life. A consequence of the new thinking about phylogenetic models is the idea that Darwin's original conception of the phylogenetic tree was too simple, but made sense based on what was known at the time. The search for a more useful model moves on, however: each model serves as a hypothesis to be tested, with the possibility of developing new models. This is how science advances. These models are used as visualizations to help construct hypothetical evolutionary relationships and understand the massive amounts of data being analyzed.
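To make the tree-versus-web contrast concrete, a phylogeny can be encoded as a graph: purely vertical descent yields a tree (a connected graph with exactly one fewer edge than nodes, and a unique path between any two taxa), while each HGT event adds an extra edge and, with it, an alternative path between lineages. The sketch below is a toy illustration only; the labels stand in for whatever lineages a real analysis would use.

```python
# Toy contrast between a tree-of-life and a web-of-life representation.
tree_edges = {
    ("ancestor", "Bacteria"),
    ("ancestor", "Archaea"),
    ("Archaea", "Eukarya"),
}

# A single horizontal transfer (e.g., the endosymbiont that became the
# mitochondrion) adds a second route between lineages:
hgt_edges = {("Bacteria", "Eukarya")}

def is_tree(edges):
    """A connected graph is a tree exactly when it has (nodes - 1) edges."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) == len(nodes) - 1

print(is_tree(tree_edges))              # True: strict tree of life
print(is_tree(tree_edges | hgt_edges))  # False: a web/network of life
```

The point of the sketch is only that a single added edge is enough to break the tree property, which is why heavy HGT pushes phylogenetic models from trees toward webs and rings.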
Microbiology
Summary

7.1 Organic Molecules The most abundant elements in cells are hydrogen, carbon, oxygen, nitrogen, phosphorus, and sulfur. Life is carbon based. Each carbon atom can bind to another one, producing a carbon skeleton that can be straight, branched, or ring shaped. The same numbers and types of atoms may bond together in different ways to yield different molecules called isomers. Isomers may differ in the bonding sequence of their atoms (structural isomers) or in the spatial arrangement of atoms whose bonding sequences are the same (stereoisomers), and their physical and chemical properties may vary slightly or drastically. Functional groups confer specific chemical properties to molecules bearing them. Common functional groups in biomolecules are hydroxyl, methyl, carbonyl, carboxyl, amino, phosphate, and sulfhydryl. Macromolecules are polymers assembled from individual units, the monomers, which bind together like building blocks. Many biologically significant macromolecules are formed by dehydration synthesis, a process in which monomers bind together by combining their functional groups and generating water molecules as byproducts.

7.2 Carbohydrates Carbohydrates, the most abundant biomolecules on earth, are widely used by organisms for structural and energy-storage purposes. Carbohydrates include individual sugar molecules (monosaccharides) as well as two or more molecules chemically linked by glycosidic bonds. Monosaccharides are classified, based on the number of carbons in the molecule, as trioses (3 C), tetroses (4 C), pentoses (5 C), and hexoses (6 C). They are the building blocks for the synthesis of polymers or complex carbohydrates. Disaccharides such as sucrose, lactose, and maltose are molecules composed of two monosaccharides linked together by a glycosidic bond. Polysaccharides, or glycans, are polymers composed of hundreds of monosaccharide monomers linked together by glycosidic bonds. The energy-storage polymers starch and glycogen are examples of polysaccharides, and both are composed of branched chains of glucose molecules. The polysaccharide cellulose is a common structural component of the cell walls of organisms. Other structural polysaccharides, such as those built from N-acetyl glucosamine (NAG) and N-acetyl muramic acid (NAM), incorporate modified glucose molecules and are used in the construction of peptidoglycan or chitin.

7.3 Lipids Lipids are composed mainly of carbon and hydrogen, but they can also contain oxygen, nitrogen, sulfur, and phosphorus. They provide nutrients for organisms, store carbon and energy, play structural roles in membranes, and function as hormones, pharmaceuticals, fragrances, and pigments. Fatty acids are long-chain hydrocarbons with a carboxylic acid functional group. Their relatively long nonpolar hydrocarbon chains make them hydrophobic. Fatty acids with no double bonds are saturated; those with double bonds are unsaturated. Fatty acids chemically bond to glycerol to form structurally essential lipids such as triglycerides and phospholipids. Triglycerides comprise three fatty acids bonded to glycerol, yielding a hydrophobic molecule. Phospholipids contain both hydrophobic hydrocarbon chains and polar head groups, making them amphipathic and capable of forming uniquely functional large-scale structures. Biological membranes are large-scale structures based on phospholipid bilayers that provide hydrophilic exterior and interior surfaces suitable for aqueous environments, separated by an intervening hydrophobic layer.
These bilayers are the structural basis for cell membranes in most organisms, as well as for subcellular components such as vesicles. Isoprenoids are lipids derived from isoprene molecules that have many physiological roles and a variety of commercial applications. A wax is a long-chain isoprenoid that is typically water resistant; an example of a wax-containing substance is sebum, produced by sebaceous glands in the skin. Steroids are lipids with complex, ringed structures that function as structural components of cell membranes and as hormones. Sterols are a subclass of steroids containing a hydroxyl group at a specific location on one of the molecule's rings; one example is cholesterol. Bacteria produce hopanoids, structurally similar to cholesterol, to strengthen bacterial membranes. Fungi and protozoa produce a strengthening agent called ergosterol.

7.4 Proteins Amino acids are small molecules essential to all life. Each has an α carbon to which a hydrogen atom, a carboxyl group, and an amine group are bonded. The fourth bonded group, represented by R, varies in chemical composition, size, polarity, and charge among different amino acids, providing variation in properties. Peptides are polymers formed by the linkage of amino acids via dehydration synthesis. The bonds between the linked amino acids are called peptide bonds. The number of amino acids linked together may vary from a few to many. Proteins are polymers formed by the linkage of a very large number of amino acids. They perform many important functions in a cell, serving as nutrients and enzymes; storage molecules for carbon, nitrogen, and energy; and structural components. The structure of a protein is a critical determinant of its function and is described by a graduated classification: primary, secondary, tertiary, and quaternary. The native structure of a protein may be disrupted by denaturation, resulting in loss of its higher-order structure and its biological function. Some proteins are formed by several separate protein subunits, the interaction of these subunits composing the quaternary structure of the protein complex. Conjugated proteins have a nonpolypeptide portion that can be a carbohydrate (forming a glycoprotein) or a lipid fraction (forming a lipoprotein). These proteins are important components of membranes.

7.5 Using Biochemistry to Identify Microorganisms Accurate identification of bacteria is essential in a clinical laboratory for the diagnosis and management of epidemics, pandemics, and food poisoning caused by bacterial outbreaks. The phenotypic identification of microorganisms involves using observable traits, including profiles of structural components such as lipids, biosynthetic products such as sugars or amino acids, or storage compounds such as poly-β-hydroxybutyrate. An unknown microbe may be identified from the unique mass spectrum produced when it is analyzed by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF). Microbes can be identified by determining their lipid compositions, using fatty acid methyl ester (FAME) or phospholipid-derived fatty acids (PLFA) analysis. Proteomic analysis, the study of all accumulated proteins of an organism, can also be used for bacterial identification. Glycoproteins in the plasma membrane or cell wall structures can bind to lectins or antibodies and can be used for identification.
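The database-matching step behind a FAME identification can be pictured as a nearest-neighbor search over fatty acid profiles. The sketch below illustrates that idea only; the reference library, its profile numbers, and the species labels are invented for illustration, whereas real systems use curated commercial libraries and more sophisticated statistics.

```python
import math

# Hypothetical reference library: fractions of selected fatty acids
# (chain length : double bonds) in each organism's membrane.
reference = {
    "Species A": {"16:0": 0.40, "18:1": 0.35, "18:0": 0.25},
    "Species B": {"16:0": 0.10, "18:1": 0.20, "18:0": 0.70},
}

def distance(p, q):
    """Euclidean distance between two fatty acid profiles."""
    keys = set(p) | set(q)
    return math.sqrt(sum((p.get(k, 0.0) - q.get(k, 0.0)) ** 2 for k in keys))

def identify(unknown):
    """Return the reference organism whose profile is closest to the unknown."""
    return min(reference, key=lambda name: distance(reference[name], unknown))

print(identify({"16:0": 0.38, "18:1": 0.33, "18:0": 0.29}))  # -> "Species A"
```

The same closest-match logic underlies MALDI-TOF identification as well, except that the features compared are peaks of a mass spectrum rather than fatty acid fractions.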
Chapter Outline 7.1 Organic Molecules 7.2 Carbohydrates 7.3 Lipids 7.4 Proteins 7.5 Using Biochemistry to Identify Microorganisms

Introduction The earth is estimated to be 4.6 billion years old, but for the first 2 billion years the atmosphere lacked oxygen, without which the earth could not support life as we know it. One hypothesis about how life emerged on earth involves the concept of a "primordial soup." This idea proposes that life began in a body of water when metals and gases from the atmosphere combined with a source of energy, such as lightning or ultraviolet light, to form the carbon compounds that are the chemical building blocks of life. In 1952, Stanley Miller (1930–2007), a graduate student at the University of Chicago, and his professor Harold Urey (1893–1981) set out to confirm this hypothesis in a now-famous experiment. Miller and Urey combined what they believed to be the major components of the earth's early atmosphere (water [H2O], methane [CH4], hydrogen [H2], and ammonia [NH3]) and sealed them in a sterile flask. Next, they heated the flask to produce water vapor and passed electric sparks through the mixture to mimic lightning in the atmosphere (Figure 7.1). When they analyzed the contents of the flask a week later, they found amino acids, the structural units of proteins and molecules essential to the function of all organisms.
[ { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "The most abundant element in cells is hydrogen ( H ) , followed by carbon ( C ) , oxygen ( O ) , nitrogen ( N ) , phosphorous ( P ) , and sulfur ( S ) . We call these elements macronutrient s , and they account for about 99 % of the dry weight of cells . <hl> Some elements , such as sodium ( Na ) , potassium ( K ) , magnesium ( Mg ) , zinc ( Zn ) , iron ( Fe ) , calcium ( Ca ) , molybdenum ( Mo ) , copper ( Cu ) , cobalt ( Co ) , manganese ( Mn ) , or vanadium ( V ) , are required by some cells in very small amounts and are called micronutrient s or trace element s . <hl> All of these elements are essential to the function of many biochemical reactions , and , therefore , are essential to life . The four most abundant elements in living matter ( C , N , O , and H ) have low atomic numbers and are thus light elements capable of forming strong bonds with other atoms to produce molecules ( Figure 7.2 ) . Carbon forms four chemical bonds , whereas nitrogen forms three , oxygen forms two , and hydrogen forms one . When bonded together within molecules , oxygen , sulfur , and nitrogen often have one or more “ lone pairs ” of electrons that play important roles in determining many of the molecules ’ physical and chemical properties ( see Appendix A ) . These traits in combination permit the formation of a vast number of diverse molecular species necessary to form the structures and enable the functions of living organisms .", "hl_sentences": "Some elements , such as sodium ( Na ) , potassium ( K ) , magnesium ( Mg ) , zinc ( Zn ) , iron ( Fe ) , calcium ( Ca ) , molybdenum ( Mo ) , copper ( Cu ) , cobalt ( Co ) , manganese ( Mn ) , or vanadium ( V ) , are required by some cells in very small amounts and are called micronutrient s or trace element s .", "question": { "cloze_format": "The element that is not a micronutrient is ___ .", "normal_format": "Which of these elements is not a micronutrient?", "question_choices": [ "C", "Ca", "Co", "Cu" ], "question_id": "fs-id1167663723473", "question_text": "Which of these elements is not a micronutrient?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "Isomers that differ in the spatial arrangements of atoms are called stereoisomers ; one unique type is enantiomers . The properties of enantiomers were originally discovered by Louis Pasteur in 1848 while using a microscope to analyze crystallized fermentation products of wine . <hl> Enantiomers are molecules that have the characteristic of chirality , in which their structures are nonsuperimposable mirror images of each other . 
<hl> Chirality is an important characteristic in many biologically important molecules , as illustrated by the examples of structural differences in the enantiomeric forms of the monosaccharide glucose or the amino acid alanine ( Figure 7.5 ) .", "hl_sentences": "Enantiomers are molecules that have the characteristic of chirality , in which their structures are nonsuperimposable mirror images of each other .", "question": { "cloze_format": "___ is the name for molecules whose structures are nonsuperimposable mirror images.", "normal_format": "Which of the following is the name for molecules whose structures are nonsuperimposable mirror images?", "question_choices": [ "structural isomers", "monomers", "polymers", "enantiomers" ], "question_id": "fs-id1167663905700", "question_text": "Which of the following is the name for molecules whose structures are nonsuperimposable mirror images?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "The most abundant biomolecules on earth are carbohydrate s . From a chemical viewpoint , carbohydrates are primarily a combination of carbon and water , and many of them have the empirical formula ( CH 2 O ) n , where n is the number of repeated units . <hl> This view represents these molecules simply as “ hydrated ” carbon atom chains in which water molecules attach to each carbon atom , leading to the term “ carbohydrates . ” Although all carbohydrates contain carbon , hydrogen , and oxygen , there are some that also contain nitrogen , phosphorus , and / or sulfur . <hl> Carbohydrates have myriad different functions . They are abundant in terrestrial ecosystems , many forms of which we use as food sources . These molecules are also vital parts of macromolecular structures that store and transmit genetic information ( i . e . , DNA and RNA ) . They are the basis of biological polymers that impart strength to various structural components of organisms ( e . g . , cellulose and chitin ) , and they are the primary source of energy storage in the form of starch and glycogen .", "hl_sentences": "This view represents these molecules simply as “ hydrated ” carbon atom chains in which water molecules attach to each carbon atom , leading to the term “ carbohydrates . ” Although all carbohydrates contain carbon , hydrogen , and oxygen , there are some that also contain nitrogen , phosphorus , and / or sulfur .", "question": { "cloze_format": "By definition, carbohydrates contain ___ .", "normal_format": "By definition, carbohydrates contain which elements?", "question_choices": [ "carbon and hydrogen", "carbon, hydrogen, and nitrogen", "carbon, hydrogen, and oxygen", "carbon and oxygen" ], "question_id": "fs-id1167663732757", "question_text": "By definition, carbohydrates contain which elements?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "Polysaccharides , also called glycans , are large polymers composed of hundreds of monosaccharide monomers . Unlike mono - and disaccharides , polysaccharides are not sweet and , in general , they are not soluble in water . <hl> Like disaccharides , the monomeric units of polysaccharides are linked together by glycosidic bond s . <hl> Two monosaccharide molecules may chemically bond to form a disaccharide . <hl> The name given to the covalent bond between the two monosaccharides is a glycosidic bond . 
<hl> <hl> Glycosidic bonds form between hydroxyl groups of the two saccharide molecules , an example of the dehydration synthesis described in the previous section of this chapter : <hl>", "hl_sentences": "Like disaccharides , the monomeric units of polysaccharides are linked together by glycosidic bond s . The name given to the covalent bond between the two monosaccharides is a glycosidic bond . Glycosidic bonds form between hydroxyl groups of the two saccharide molecules , an example of the dehydration synthesis described in the previous section of this chapter :", "question": { "cloze_format": "Monosaccharides may link together to form polysaccharides by forming the ___ type of bond.", "normal_format": "Monosaccharides may link together to form polysaccharides by forming which type of bond?", "question_choices": [ "hydrogen", "peptide", "ionic", "glycosidic" ], "question_id": "fs-id1167663611699", "question_text": "Monosaccharides may link together to form polysaccharides by forming which type of bond?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 4, "ans_text": "E" }, "bloom": null, "hl_context": "Although they are composed primarily of carbon and hydrogen , lipid molecules may also contain oxygen , nitrogen , sulfur , and phosphorous . <hl> Lipids serve numerous and diverse purposes in the structure and functions of organisms . <hl> <hl> They can be a source of nutrients , a storage form for carbon , energy-storage molecules , or structural components of membranes and hormones . <hl> Lipids comprise a broad class of many chemically distinct compounds , the most common of which are discussed in this section .", "hl_sentences": "Lipids serve numerous and diverse purposes in the structure and functions of organisms . They can be a source of nutrients , a storage form for carbon , energy-storage molecules , or structural components of membranes and hormones .", "question": { "cloze_format": "Lipids are described as ___ .", "normal_format": "Which of the following describes lipids?", "question_choices": [ "a source of nutrients for organisms", "energy-storage molecules", "molecules having structural role in membranes", "molecules that are part of hormones and pigments", "all of the above" ], "question_id": "fs-id1167662451034", "question_text": "Which of the following describes lipids?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> The amphipathic nature of phospholipids enables them to form uniquely functional structures in aqueous environments . <hl> <hl> As mentioned , the polar heads of these molecules are strongly attracted to water molecules , and the nonpolar tails are not . <hl> Because of their considerable lengths , these tails are , in fact , strongly attracted to one another . As a result , energetically stable , large-scale assemblies of phospholipid molecules are formed in which the hydrophobic tails congregate within enclosed regions , shielded from contact with water by the polar heads ( Figure 7.14 ) . The simplest of these structures are micelle s , spherical assemblies containing a hydrophobic interior of phospholipid tails and an outer surface of polar head groups . Larger and more complex structures are created from lipid-bilayer sheets , or unit membranes , which are large , two-dimensional assemblies of phospholipid s congregated tail to tail . 
The cell membranes of nearly all organisms are made from lipid-bilayer sheets , as are the membranes of many intracellular components . These sheets may also form lipid-bilayer spheres that are the structural basis of vesicle s and liposome s , subcellular components that play a role in numerous physiological functions . The molecular structure of lipids results in unique behavior in aqueous environments . Figure 7.12 depicts the structure of a triglyceride . Because all three substituents on the glycerol backbone are long hydrocarbon chains , these compounds are nonpolar and not significantly attracted to polar water molecules — they are hydrophobic . Conversely , phospholipid s such as the one shown in Figure 7.13 have a negatively charged phosphate group . <hl> Because the phosphate is charged , it is capable of strong attraction to water molecules and thus is hydrophilic , or “ water loving . ” The hydrophilic portion of the phospholipid is often referred to as a polar “ head , ” and the long hydrocarbon chains as nonpolar “ tails . ” A molecule presenting a hydrophobic portion and a hydrophilic moiety is said to be amphipathic . <hl> Notice the “ R ” designation within the hydrophilic head depicted in Figure 7.13 , indicating that a polar head group can be more complex than a simple phosphate moiety . Glycolipids are examples in which carbohydrates are bonded to the lipids ’ head groups .", "hl_sentences": "The amphipathic nature of phospholipids enables them to form uniquely functional structures in aqueous environments . As mentioned , the polar heads of these molecules are strongly attracted to water molecules , and the nonpolar tails are not . Because the phosphate is charged , it is capable of strong attraction to water molecules and thus is hydrophilic , or “ water loving . ” The hydrophilic portion of the phospholipid is often referred to as a polar “ head , ” and the long hydrocarbon chains as nonpolar “ tails . ” A molecule presenting a hydrophobic portion and a hydrophilic moiety is said to be amphipathic .", "question": { "cloze_format": "Molecules bearing both polar and nonpolar groups are said to be ___ .", "normal_format": "Molecules bearing both polar and nonpolar groups are said to be which of the following?", "question_choices": [ "hydrophilic", "amphipathic", "hydrophobic", "polyfunctional" ], "question_id": "fs-id1167662552733", "question_text": "Molecules bearing both polar and nonpolar groups are said to be which of the following?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Amino acids may chemically bond together by reaction of the carboxylic acid group of one molecule with the amine group of another . This reaction forms a peptide bond and a water molecule and is another example of dehydration synthesis ( Figure 7.18 ) . Molecules formed by chemically linking relatively modest numbers of amino acid s ( approximately 50 or fewer ) are called peptide s , and prefixes are often used to specify these numbers : dipeptide s ( two amino acids ) , tripeptide s ( three amino acids ) , and so forth . More generally , the approximate number of amino acids is designated : oligopeptide s are formed by joining up to approximately 20 amino acids , whereas polypeptide s are synthesized from up to approximately 50 amino acids . 
<hl> When the number of amino acids linked together becomes very large , or when multiple polypeptides are used as building subunits , the macromolecules that result are called proteins . <hl> <hl> The continuously variable length ( the number of monomers ) of these biopolymers , along with the variety of possible R group s on each amino acid , allows for a nearly unlimited diversity in the types of proteins that may be formed . <hl> <hl> An amino acid is an organic molecule in which a hydrogen atom , a carboxyl group ( – COOH ) , and an amino group ( – NH 2 ) are all bonded to the same carbon atom , the so-called α carbon . <hl> The fourth group bonded to the α carbon varies among the different amino acids and is called a residue or a side chain , represented in structural formulas by the letter R . A residue is a monomer that results when two or more amino acids combine and remove water molecules . The primary structure of a protein , a peptide chain , is made of amino acid residues . <hl> The unique characteristics of the functional groups and R group s allow these components of the amino acids to form hydrogen , ionic , and disulfide bonds , along with polar / nonpolar interactions needed to form secondary , tertiary , and quaternary protein structures . <hl> These groups are composed primarily of carbon , hydrogen , oxygen , nitrogen , and sulfur , in the form of hydrocarbons , acids , amides , alcohols , and amines . A few examples illustrating these possibilities are provided in Figure 7.17 .", "hl_sentences": "When the number of amino acids linked together becomes very large , or when multiple polypeptides are used as building subunits , the macromolecules that result are called proteins . The continuously variable length ( the number of monomers ) of these biopolymers , along with the variety of possible R group s on each amino acid , allows for a nearly unlimited diversity in the types of proteins that may be formed . An amino acid is an organic molecule in which a hydrogen atom , a carboxyl group ( – COOH ) , and an amino group ( – NH 2 ) are all bonded to the same carbon atom , the so-called α carbon . The unique characteristics of the functional groups and R group s allow these components of the amino acids to form hydrogen , ionic , and disulfide bonds , along with polar / nonpolar interactions needed to form secondary , tertiary , and quaternary protein structures .", "question": { "cloze_format": "The group that varies among different amino acids is the ___ .", "normal_format": "Which of the following groups varies among different amino acids?", "question_choices": [ "hydrogen atom", "carboxyl group", "R group", "amino group" ], "question_id": "fs-id1167663973313", "question_text": "Which of the following groups varies among different amino acids?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> An amino acid is an organic molecule in which a hydrogen atom , a carboxyl group ( – COOH ) , and an amino group ( – NH 2 ) are all bonded to the same carbon atom , the so-called α carbon . <hl> <hl> The fourth group bonded to the α carbon varies among the different amino acids and is called a residue or a side chain , represented in structural formulas by the letter R . <hl> <hl> A residue is a monomer that results when two or more amino acids combine and remove water molecules . <hl> <hl> The primary structure of a protein , a peptide chain , is made of amino acid residues . 
<hl> <hl> The unique characteristics of the functional groups and R group s allow these components of the amino acids to form hydrogen , ionic , and disulfide bonds , along with polar / nonpolar interactions needed to form secondary , tertiary , and quaternary protein structures . <hl> These groups are composed primarily of carbon , hydrogen , oxygen , nitrogen , and sulfur , in the form of hydrocarbons , acids , amides , alcohols , and amines . A few examples illustrating these possibilities are provided in Figure 7.17 .", "hl_sentences": "An amino acid is an organic molecule in which a hydrogen atom , a carboxyl group ( – COOH ) , and an amino group ( – NH 2 ) are all bonded to the same carbon atom , the so-called α carbon . The fourth group bonded to the α carbon varies among the different amino acids and is called a residue or a side chain , represented in structural formulas by the letter R . A residue is a monomer that results when two or more amino acids combine and remove water molecules . The primary structure of a protein , a peptide chain , is made of amino acid residues . The unique characteristics of the functional groups and R group s allow these components of the amino acids to form hydrogen , ionic , and disulfide bonds , along with polar / nonpolar interactions needed to form secondary , tertiary , and quaternary protein structures .", "question": { "cloze_format": "The amino acids present in proteins differ in ___ .", "normal_format": "The amino acids present in proteins differ in which of the following?", "question_choices": [ "size", "shape", "side groups", "all of the above" ], "question_id": "fs-id1167663508187", "question_text": "The amino acids present in proteins differ in which of the following?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "The next level of protein organization is the tertiary structure , which is the large-scale three-dimensional shape of a single polypeptide chain . Tertiary structure is determined by interactions between amino acid residues that are far apart in the chain . <hl> A variety of interactions give rise to protein tertiary structure , such as disulfide bridge s , which are bonds between the sulfhydryl ( – SH ) functional groups on amino acid side groups ; hydrogen bonds ; ionic bonds ; and hydrophobic interactions between nonpolar side chains . <hl> All these interactions , weak and strong , combine to determine the final three-dimensional shape of the protein and its function ( Figure 7.21 ) .", "hl_sentences": "A variety of interactions give rise to protein tertiary structure , such as disulfide bridge s , which are bonds between the sulfhydryl ( – SH ) functional groups on amino acid side groups ; hydrogen bonds ; ionic bonds ; and hydrophobic interactions between nonpolar side chains .", "question": { "cloze_format": "The ___ are bonds that are not involved in tertiary structure.", "normal_format": "Which of the following bonds are not involved in tertiary structure?", "question_choices": [ "peptide bonds", "ionic bonds", "hydrophobic interactions", "hydrogen bonds" ], "question_id": "fs-id1167663810015", "question_text": "Which of the following bonds are not involved in tertiary structure?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "Microbes can also be identified by measuring their unique lipid profiles . 
As we have learned , fatty acids of lipids can vary in chain length , presence or absence of double bonds , and number of double bonds , hydroxyl groups , branches , and rings . <hl> To identify a microbe by its lipid composition , the fatty acids present in their membranes are analyzed . <hl> <hl> A common biochemical analysis used for this purpose is a technique used in clinical , public health , and food laboratories . <hl> It relies on detecting unique differences in fatty acids and is called fatty acid methyl ester ( FAME ) analysis . In a FAME analysis , fatty acids are extracted from the membranes of microorganisms , chemically altered to form volatile methyl esters , and analyzed by gas chromatography ( GC ) . The resulting GC chromatogram is compared with reference chromatograms in a database containing data for thousands of bacterial isolates to identify the unknown microorganism ( Figure 7.27 ) . <hl> Other systems rely on biochemical characteristics to identify microorganisms by their biochemical reactions , such as carbon utilization and other metabolic tests . <hl> In small laboratory settings or in teaching laboratories , those assays are carried out using a limited number of test tubes . However , more modern systems , such as the one developed by Biolog , Inc . , are based on panels of biochemical reactions performed simultaneously and analyzed by software . Biolog ’ s system identifies cells based on their ability to metabolize certain biochemicals and on their physiological properties , including pH and chemical sensitivity . It uses all major classes of biochemicals in its analysis . Identifications can be performed manually or with the semi - or fully automated instruments . <hl> Some microorganisms store certain compounds as granules within their cytoplasm , and the contents of these granules can be used for identification purposes . <hl> <hl> For example , poly-β-hydroxybutyrate ( PHB ) is a carbon - and energy-storage compound found in some nonfluorescent bacteria of the genus Pseudomonas . <hl> Different species within this genus can be classified by the presence or the absence of PHB and fluorescent pigments . The human pathogen P . aeruginosa and the plant pathogen P . syringae are two examples of fluorescent Pseudomonas species that do not accumulate PHB granules .", "hl_sentences": "To identify a microbe by its lipid composition , the fatty acids present in their membranes are analyzed . A common biochemical analysis used for this purpose is a technique used in clinical , public health , and food laboratories . Other systems rely on biochemical characteristics to identify microorganisms by their biochemical reactions , such as carbon utilization and other metabolic tests . Some microorganisms store certain compounds as granules within their cytoplasm , and the contents of these granules can be used for identification purposes . 
For example , poly-β-hydroxybutyrate ( PHB ) is a carbon - and energy-storage compound found in some nonfluorescent bacteria of the genus Pseudomonas .", "question": { "cloze_format": "The characteristic/compound that is not considered to be a phenotypic biochemical characteristic used of microbial identification is ___.", "normal_format": "Which of the following characteristics/compounds is not considered to be a phenotypic biochemical characteristic used of microbial identification?", "question_choices": [ "poly-β-hydroxybutyrate", "small-subunit (16S) rRNA gene", "carbon utilization", "lipid composition" ], "question_id": "fs-id1167663611303", "question_text": "Which of the following characteristics/compounds is not considered to be a phenotypic biochemical characteristic used of microbial identification?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> Bacterial identification can also be based on the proteins produced under specific growth conditions within the human body . <hl> <hl> These types of identification procedures are called proteomic analysis . <hl> To perform proteomic analysis , proteins from the pathogen are first separated by high-pressure liquid chromatography ( HPLC ) , and the collected fractions are then digested to yield smaller peptide fragments . These peptides are identified by mass spectrometry and compared with those of known microorganisms to identify the unknown microorganism in the original specimen .", "hl_sentences": "Bacterial identification can also be based on the proteins produced under specific growth conditions within the human body . These types of identification procedures are called proteomic analysis .", "question": { "cloze_format": "Proteomic analysis is a methodology that deals with (the) ___.", "normal_format": "Proteomic analysis is a methodology that deals with which of the following?", "question_choices": [ "the analysis of proteins functioning as enzymes within the cell", "analysis of transport proteins in the cell", "the analysis of integral proteins of the cell membrane", "the study of all accumulated proteins of an organism" ], "question_id": "fs-id1167663528409", "question_text": "Proteomic analysis is a methodology that deals with which of the following?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Another automated system identifies microorganisms by determining the specimen ’ s mass spectrum and then comparing it to a database that contains known mass spectra for thousands of microorganisms . <hl> This method is based on matrix-assisted laser desorption / ionization time-of-flight mass spectrometry ( MALDI-TOF ) and uses disposable MALDI plates on which the microorganism is mixed with a specialized matrix reagent ( Figure 7.26 ) . <hl> <hl> The sample / reagent mixture is irradiated with a high-intensity pulsed ultraviolet laser , resulting in the ejection of gaseous ions generated from the various chemical constituents of the microorganism . <hl> These gaseous ions are collected and accelerated through the mass spectrometer , with ions traveling at a velocity determined by their mass-to-charge ratio ( m / z ) , thus , reaching the detector at different times . A plot of detector signal versus m / z yields a mass spectrum for the organism that is uniquely related to its biochemical composition . 
Comparison of the mass spectrum to a library of reference spectra obtained from identical analyses of known microorganisms permits identification of the unknown microbe .", "hl_sentences": "This method is based on matrix-assisted laser desorption / ionization time-of-flight mass spectrometry ( MALDI-TOF ) and uses disposable MALDI plates on which the microorganism is mixed with a specialized matrix reagent ( Figure 7.26 ) . The sample / reagent mixture is irradiated with a high-intensity pulsed ultraviolet laser , resulting in the ejection of gaseous ions generated from the various chemical constituents of the microorganism .", "question": { "cloze_format": "The method that involves the generation of gas phase ions from intact microorganisms is ___.", "normal_format": "Which method involves the generation of gas phase ions from intact microorganisms?", "question_choices": [ "FAME", "PLFA", "MALDI-TOF", "Lancefield group testing" ], "question_id": "fs-id1167663985930", "question_text": "Which method involves the generation of gas phase ions from intact microorganisms?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "Microorganisms can also be identified by the carbohydrates attached to proteins ( glycoproteins ) in the plasma membrane or cell wall . Antibodies and other carbohydrate-binding proteins can attach to specific carbohydrates on cell surfaces , causing the cells to clump together . <hl> Serological tests ( e . g . , the Lancefield groups tests , which are used for identification of Streptococcus species ) are performed to detect the unique carbohydrates located on the surface of the cell . <hl>", "hl_sentences": "Serological tests ( e . g . , the Lancefield groups tests , which are used for identification of Streptococcus species ) are performed to detect the unique carbohydrates located on the surface of the cell .", "question": { "cloze_format": "The ___ method involves the analysis of membrane-bound carbohydrates.", "normal_format": "Which method involves the analysis of membrane-bound carbohydrates?", "question_choices": [ "FAME", "PLFA", "MALDI-TOF", "Lancefield group testing" ], "question_id": "fs-id1167663589138", "question_text": "Which method involves the analysis of membrane-bound carbohydrates?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "Microbes can also be identified by measuring their unique lipid profiles . As we have learned , fatty acids of lipids can vary in chain length , presence or absence of double bonds , and number of double bonds , hydroxyl groups , branches , and rings . To identify a microbe by its lipid composition , the fatty acids present in their membranes are analyzed . A common biochemical analysis used for this purpose is a technique used in clinical , public health , and food laboratories . It relies on detecting unique differences in fatty acids and is called fatty acid methyl ester ( FAME ) analysis . <hl> In a FAME analysis , fatty acids are extracted from the membranes of microorganisms , chemically altered to form volatile methyl esters , and analyzed by gas chromatography ( GC ) . 
<hl> The resulting GC chromatogram is compared with reference chromatograms in a database containing data for thousands of bacterial isolates to identify the unknown microorganism ( Figure 7.27 ) .", "hl_sentences": "In a FAME analysis , fatty acids are extracted from the membranes of microorganisms , chemically altered to form volatile methyl esters , and analyzed by gas chromatography ( GC ) .", "question": { "cloze_format": "The method that involves conversion of a microbe’s lipids to volatile compounds for analysis by gas chromatography is ___ .", "normal_format": "Which method involves conversion of a microbe’s lipids to volatile compounds for analysis by gas chromatography?", "question_choices": [ "FAME", "proteomic analysis", "MALDI-TOF", "Lancefield group testing" ], "question_id": "fs-id1167663929444", "question_text": "Which method involves conversion of a microbe’s lipids to volatile compounds for analysis by gas chromatography?" }, "references_are_paraphrase": 0 } ]
Chapter 7
7.1 Organic Molecules

Learning Objectives Identify common elements and structures found in organic molecules Explain the concept of isomerism Identify examples of functional groups Describe the role of functional groups in synthesizing polymers

Clinical Focus Part 1 Penny is a 16-year-old student who visited her doctor, complaining about an itchy skin rash. She had a history of allergic episodes. The doctor looked at her sun-tanned skin and asked her if she had switched to a different sunscreen. She said she had, so the doctor diagnosed an allergic eczema. The symptoms were mild, so the doctor told Penny to avoid using the sunscreen that caused the reaction and prescribed an over-the-counter moisturizing cream to keep her skin hydrated and to help with itching. What kinds of substances would you expect to find in a moisturizing cream? What physical or chemical properties of these substances would help alleviate itching and inflammation of the skin? Jump to the next Clinical Focus box.

Biochemistry is the discipline that studies the chemistry of life, and its objective is to explain form and function based on chemical principles. Organic chemistry is the discipline devoted to the study of carbon-based chemistry, which is the foundation for the study of biomolecules and the discipline of biochemistry. Both biochemistry and organic chemistry are based on the concepts of general chemistry, some of which are presented in Appendix A.

Elements in Living Cells The most abundant element in cells is hydrogen (H), followed by carbon (C), oxygen (O), nitrogen (N), phosphorus (P), and sulfur (S). We call these elements macronutrients, and they account for about 99% of the dry weight of cells. Some elements, such as sodium (Na), potassium (K), magnesium (Mg), zinc (Zn), iron (Fe), calcium (Ca), molybdenum (Mo), copper (Cu), cobalt (Co), manganese (Mn), or vanadium (V), are required by some cells in very small amounts and are called micronutrients or trace elements. All of these elements are essential to the function of many biochemical reactions and, therefore, are essential to life. The four most abundant elements in living matter (C, N, O, and H) have low atomic numbers and are thus light elements capable of forming strong bonds with other atoms to produce molecules (Figure 7.2). Carbon forms four chemical bonds, whereas nitrogen forms three, oxygen forms two, and hydrogen forms one. When bonded together within molecules, oxygen, sulfur, and nitrogen often have one or more "lone pairs" of electrons that play important roles in determining many of the molecules' physical and chemical properties (see Appendix A). These traits in combination permit the formation of a vast number of diverse molecular species necessary to form the structures and enable the functions of living organisms. Living organisms contain inorganic compounds (mainly water and salts; see Appendix A) and organic molecules. Organic molecules contain carbon; inorganic compounds do not. Carbon oxides and carbonates are exceptions; they contain carbon but are considered inorganic because they do not contain hydrogen. The atoms of an organic molecule are typically organized around chains of carbon atoms. Inorganic compounds make up 1%–1.5% of the dry weight of living cells. They are small, simple compounds that play important roles in the cell, although they do not form cell structures.
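The bond-counting rule stated above (carbon four, nitrogen three, oxygen two, hydrogen one) lends itself to simple bookkeeping. The sketch below is a toy illustration of that rule only: it treats every bond as a single bond and ignores lone pairs, multiple bonds, and charged species.

```python
# Typical single-bond capacities of the light elements discussed above.
VALENCE = {"H": 1, "C": 4, "N": 3, "O": 2}

def bond_slots(formula: dict) -> int:
    """Total bonding 'slots' in a molecule; each covalent bond uses two."""
    return sum(VALENCE[element] * count for element, count in formula.items())

methane = {"C": 1, "H": 4}  # CH4: 4 + 4*1 = 8 slots -> 4 C-H bonds
water = {"O": 1, "H": 2}    # H2O: 2 + 2*1 = 4 slots -> 2 O-H bonds

print(bond_slots(methane) // 2)  # 4
print(bond_slots(water) // 2)    # 2
```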
Most of the carbon found in organic molecules originates from inorganic carbon sources such as carbon dioxide captured via carbon fixation by microorganisms.

Check Your Understanding Describe the most abundant elements in nature. What are the differences between organic and inorganic molecules?

Organic Molecules and Isomerism Organic molecules in organisms are generally larger and more complex than inorganic molecules. Their carbon skeletons are held together by covalent bonds. They form the cells of an organism and perform the chemical reactions that facilitate life. All of these molecules, called biomolecules because they are part of living matter, contain carbon, which is the building block of life. Carbon is a unique element in that it has four valence electrons in its outer orbitals and can form four single covalent bonds with up to four other atoms at the same time (see Appendix A). These atoms are usually oxygen, hydrogen, nitrogen, sulfur, phosphorus, and carbon itself; the simplest organic compound is methane, in which carbon binds only to hydrogen (Figure 7.3). As a result of carbon's unique combination of size and bonding properties, carbon atoms can bind together in large numbers, thus producing a chain or carbon skeleton. The carbon skeleton of organic molecules can be straight, branched, or ring shaped (cyclic). Organic molecules are built on chains of carbon atoms of varying lengths; most are typically very long, which allows for a huge number and variety of compounds. No other element has the ability to form so many different molecules of so many different sizes and shapes. Molecules with the same atomic makeup but a different structural arrangement of atoms are called isomers. The concept of isomerism is very important in chemistry because the structure of a molecule is always directly related to its function. Slight changes in the structural arrangements of atoms in a molecule may lead to very different properties. Chemists represent molecules by their structural formula, which is a graphic representation of the molecular structure, showing how the atoms are arranged. Compounds that have identical molecular formulas but differ in the bonding sequence of the atoms are called structural isomers. The monosaccharides glucose, galactose, and fructose all have the same molecular formula, C6H12O6, but we can see from Figure 7.4 that the atoms are bonded together differently. Isomers that differ in the spatial arrangements of atoms are called stereoisomers; one unique type is enantiomers. The properties of enantiomers were originally discovered by Louis Pasteur in 1848 while using a microscope to analyze crystallized fermentation products of wine. Enantiomers are molecules that have the characteristic of chirality, in which their structures are nonsuperimposable mirror images of each other. Chirality is an important characteristic in many biologically important molecules, as illustrated by the examples of structural differences in the enantiomeric forms of the monosaccharide glucose or the amino acid alanine (Figure 7.5). Many organisms are only able to use one enantiomeric form of certain types of molecules as nutrients and as building blocks to make structures within a cell. Some enantiomeric forms of amino acids have distinctly different tastes and smells when consumed as food. For example, L-aspartame, commonly called aspartame, tastes sweet, whereas D-aspartame is tasteless. Drug enantiomers can have very different pharmacologic effects.
For example, the compound methorphan exists as two enantiomers, one of which acts as an antitussive (dextromethorphan, a cough suppressant), whereas the other acts as an analgesic (levomethorphan, a drug similar in effect to codeine). Enantiomers are also called optical isomers because they can rotate the plane of polarized light. Some of the crystals Pasteur observed from wine fermentation rotated light clockwise, whereas others rotated the light counterclockwise. Today, we denote enantiomers that rotate polarized light clockwise (+) as d forms, and the mirror image of the same molecule that rotates polarized light counterclockwise (−) as the l form. The d and l labels are derived from the Latin words dexter (on the right) and laevus (on the left), respectively. These two different optical isomers often have very different biological properties and activities. Certain species of molds, yeast, and bacteria, such as Rhizopus, Yarrowia, and Lactobacillus spp., respectively, can only metabolize one type of optical isomer; the opposite isomer is not suitable as a source of nutrients. Another important reason to be aware of optical isomers is the therapeutic use of these types of chemicals for drug treatment, because some microorganisms can only be affected by one specific optical isomer.

Check Your Understanding We say that life is carbon based. What makes carbon so suitable to be part of all the macromolecules of living organisms?

Biologically Significant Functional Groups In addition to containing carbon atoms, biomolecules also contain functional groups: groups of atoms within molecules that are categorized by their specific chemical composition and the chemical reactions they perform, regardless of the molecule in which the group is found. Some of the most common functional groups are listed in Figure 7.6. In the formulas, the symbol R stands for "residue" and represents the remainder of the molecule. R might symbolize just a single hydrogen atom, or it may represent a group of many atoms. Notice that some functional groups are relatively simple, consisting of just one or two atoms, while some comprise two of these simpler functional groups. For example, a carbonyl group is a functional group composed of a carbon atom double bonded to an oxygen atom: C=O. It is present in several classes of organic compounds as part of larger functional groups such as ketones, aldehydes, carboxylic acids, and amides. In ketones, the carbonyl is present as an internal group, whereas in aldehydes it is a terminal group.

Macromolecules Carbon chains form the skeletons of most organic molecules. Functional groups combine with the chain to form biomolecules. Because these biomolecules are typically large, we call them macromolecules. Many biologically relevant macromolecules are formed by linking together a great number of identical, or very similar, smaller organic molecules. The smaller molecules act as building blocks and are called monomers, and the macromolecules that result from their linkage are called polymers. Cells and cell structures include four main groups of carbon-containing macromolecules: polysaccharides, proteins, lipids, and nucleic acids. The first three groups of molecules will be studied throughout this chapter. The biochemistry of nucleic acids will be discussed in Biochemistry of the Genome. Of the many possible ways that monomers may be combined to yield polymers, one common approach encountered in the formation of biological macromolecules is dehydration synthesis.
In this chemical reaction, monomer molecules bind end to end in a process that results in the formation of water molecules as a byproduct: H—monomer—OH + H—monomer—OH ⟶ H—monomer—monomer—OH + H2O Figure 7.7 shows the dehydration synthesis of two glucose molecules binding together to form maltose and a water molecule. Table 7.1 summarizes macromolecules and some of their functions.

Some Functions of Macromolecules
Macromolecule: Functions
Carbohydrates: Energy storage, receptors, food, structural role in plants, fungal cell walls, exoskeletons of insects
Lipids: Energy storage, membrane structure, insulation, hormones, pigments
Nucleic acids: Storage and transfer of genetic information
Proteins: Enzymes, structure, receptors, transport, structural role in the cytoskeleton of a cell and the extracellular matrix
Table 7.1

Check Your Understanding What is the byproduct of a dehydration synthesis reaction?

7.2 Carbohydrates

Learning Objectives Give examples of monosaccharides and polysaccharides Describe the function of monosaccharides and polysaccharides within a cell

The most abundant biomolecules on earth are carbohydrates. From a chemical viewpoint, carbohydrates are primarily a combination of carbon and water, and many of them have the empirical formula (CH2O)n, where n is the number of repeated units. This view represents these molecules simply as "hydrated" carbon atom chains in which water molecules attach to each carbon atom, leading to the term "carbohydrates." Although all carbohydrates contain carbon, hydrogen, and oxygen, there are some that also contain nitrogen, phosphorus, and/or sulfur. Carbohydrates have myriad different functions. They are abundant in terrestrial ecosystems, many forms of which we use as food sources. These molecules are also vital parts of macromolecular structures that store and transmit genetic information (i.e., DNA and RNA). They are the basis of biological polymers that impart strength to various structural components of organisms (e.g., cellulose and chitin), and they are the primary source of energy storage in the form of starch and glycogen.

Monosaccharides: The Sweet Ones In biochemistry, carbohydrates are often called saccharides, from the Greek sakcharon, meaning sugar, although not all the saccharides are sweet. The simplest carbohydrates are called monosaccharides, or simple sugars. They are the building blocks (monomers) for the synthesis of polymers or complex carbohydrates, as will be discussed further in this section. Monosaccharides are classified based on the number of carbons in the molecule. General categories are identified using a prefix that indicates the number of carbons and the suffix -ose, which indicates a saccharide; for example, triose (three carbons), tetrose (four carbons), pentose (five carbons), and hexose (six carbons) (Figure 7.8). The hexose D-glucose is the most abundant monosaccharide in nature. Other very common and abundant hexose monosaccharides are galactose, used to make the disaccharide milk sugar lactose, and the fruit sugar fructose. Monosaccharides of four or more carbon atoms are typically more stable when they adopt cyclic, or ring, structures. These ring structures result from a chemical reaction between functional groups on opposite ends of the sugar's flexible carbon chain, namely the carbonyl group and a relatively distant hydroxyl group. Glucose, for example, forms a six-membered ring (Figure 7.9).
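As a worked check on the dehydration synthesis shown in Figure 7.7, note that glucose fits the (CH2O)n pattern with n = 6, and that joining two glucose molecules removes exactly one water, so the atoms balance:

C6H12O6 + C6H12O6 ⟶ C12H22O11 + H2O

Twelve carbons, twenty-four hydrogens, and twelve oxygens enter on the left; maltose retains all twelve carbons, while two hydrogens and one oxygen leave as water. The same bookkeeping extends to longer polymers: n monomers joined end to end release n − 1 water molecules. Below is a minimal sketch of that arithmetic, using the standard molar masses of glucose and water; the linkage chemistry itself is not modeled.

```python
GLUCOSE = 180.16  # g/mol, C6H12O6
WATER = 18.02     # g/mol, H2O

def polymer_mass(n_monomers: int, monomer_mass: float = GLUCOSE) -> float:
    """Molar mass of a linear polymer built by dehydration synthesis:
    n monomers joined by n - 1 bonds, each bond releasing one water."""
    return n_monomers * monomer_mass - (n_monomers - 1) * WATER

print(polymer_mass(2))    # maltose: 2 * 180.16 - 18.02 = 342.30 g/mol
print(polymer_mass(100))  # a 100-unit glucose polymer: about 16,232 g/mol
```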
Check Your Understanding Why do monosaccharides form ring structures?

Disaccharides Two monosaccharide molecules may chemically bond to form a disaccharide. The name given to the covalent bond between the two monosaccharides is a glycosidic bond. Glycosidic bonds form between hydroxyl groups of the two saccharide molecules, an example of the dehydration synthesis described in the previous section of this chapter: monosaccharide—OH + HO—monosaccharide ⟶ monosaccharide—O—monosaccharide (a disaccharide) Common disaccharides are the grain sugar maltose, made of two glucose molecules; the milk sugar lactose, made of a galactose and a glucose molecule; and the table sugar sucrose, made of a glucose and a fructose molecule (Figure 7.10).

Polysaccharides Polysaccharides, also called glycans, are large polymers composed of hundreds of monosaccharide monomers. Unlike mono- and disaccharides, polysaccharides are not sweet and, in general, they are not soluble in water. Like disaccharides, the monomeric units of polysaccharides are linked together by glycosidic bonds. Polysaccharides are very diverse in their structure. Three of the most biologically important polysaccharides (starch, glycogen, and cellulose) are all composed of repetitive glucose units, although they differ in their structure (Figure 7.11). Cellulose consists of a linear chain of glucose molecules and is a common structural component of cell walls in plants and other organisms. Glycogen and starch are branched polymers; glycogen is the primary energy-storage molecule in animals and bacteria, whereas plants primarily store energy in starch. The orientation of the glycosidic linkages in these three polymers is different as well and, as a consequence, linear and branched macromolecules have different properties. Modified glucose molecules can be fundamental components of other structural polysaccharides. Examples of these types of structural polysaccharides are N-acetyl glucosamine (NAG) and N-acetyl muramic acid (NAM), found in bacterial cell wall peptidoglycan. Polymers of NAG form chitin, which is found in fungal cell walls and in the exoskeleton of insects.

Check Your Understanding What are the most biologically important polysaccharides and why are they important?

7.3 Lipids

Learning Objectives Describe the chemical composition of lipids Describe the unique characteristics and diverse structures of lipids Compare and contrast triacylglycerides (triglycerides) and phospholipids Describe how phospholipids are used to construct biological membranes

Although they are composed primarily of carbon and hydrogen, lipid molecules may also contain oxygen, nitrogen, sulfur, and phosphorus. Lipids serve numerous and diverse purposes in the structure and functions of organisms. They can be a source of nutrients, a storage form for carbon, energy-storage molecules, or structural components of membranes and hormones. Lipids comprise a broad class of many chemically distinct compounds, the most common of which are discussed in this section.

Fatty Acids and Triacylglycerides The fatty acids are lipids that contain long-chain hydrocarbons terminated with a carboxylic acid functional group. Because of the long hydrocarbon chain, fatty acids are hydrophobic ("water fearing") or nonpolar.
Fatty acids with hydrocarbon chains that contain only single bonds are called saturated fatty acids because they have the greatest number of hydrogen atoms possible and are, therefore, "saturated" with hydrogen. Fatty acids with hydrocarbon chains containing at least one double bond are called unsaturated fatty acids because they have fewer hydrogen atoms. Saturated fatty acids have a straight, flexible carbon backbone, whereas unsaturated fatty acids have "kinks" in their carbon skeleton because each double bond causes a rigid bend of the carbon skeleton. These differences in saturated versus unsaturated fatty acid structure result in different properties for the corresponding lipids in which the fatty acids are incorporated. For example, lipids containing saturated fatty acids are solids at room temperature, whereas lipids containing unsaturated fatty acids are liquids.

A triacylglycerol, or triglyceride, is formed when three fatty acids are chemically linked to a glycerol molecule (Figure 7.12). Triglycerides are the primary components of adipose tissue (body fat) and are major constituents of sebum (skin oils). They play an important metabolic role, serving as efficient energy-storage molecules that can provide more than double the caloric content of both carbohydrates and proteins.

Check Your Understanding
Explain why fatty acids with hydrocarbon chains that contain only single bonds are called saturated fatty acids.

Phospholipids and Biological Membranes

Triglycerides are classified as simple lipids because they are formed from just two types of compounds: glycerol and fatty acids. In contrast, complex lipids contain at least one additional component, for example, a phosphate group (phospholipids) or a carbohydrate moiety (glycolipids). Figure 7.13 depicts a typical phospholipid composed of two fatty acids linked to glycerol (a diglyceride). The two fatty acid carbon chains may be both saturated, both unsaturated, or one of each. Instead of another fatty acid molecule (as for triglycerides), the third binding position on the glycerol molecule is occupied by a modified phosphate group.

The molecular structure of lipids results in unique behavior in aqueous environments. Figure 7.12 depicts the structure of a triglyceride. Because all three substituents on the glycerol backbone are long hydrocarbon chains, these compounds are nonpolar and not significantly attracted to polar water molecules; they are hydrophobic. Conversely, phospholipids such as the one shown in Figure 7.13 have a negatively charged phosphate group. Because the phosphate is charged, it is capable of strong attraction to water molecules and thus is hydrophilic, or "water loving." The hydrophilic portion of the phospholipid is often referred to as a polar "head," and the long hydrocarbon chains as nonpolar "tails." A molecule presenting a hydrophobic portion and a hydrophilic moiety is said to be amphipathic. Notice the "R" designation within the hydrophilic head depicted in Figure 7.13, indicating that a polar head group can be more complex than a simple phosphate moiety. Glycolipids are examples in which carbohydrates are bonded to the lipids' head groups.

The amphipathic nature of phospholipids enables them to form uniquely functional structures in aqueous environments. As mentioned, the polar heads of these molecules are strongly attracted to water molecules, and the nonpolar tails are not. Because of their considerable lengths, these tails are, in fact, strongly attracted to one another.
As a result, energetically stable, large-scale assemblies of phospholipid molecules form in which the hydrophobic tails congregate within enclosed regions, shielded from contact with water by the polar heads (Figure 7.14). The simplest of these structures are micelles, spherical assemblies containing a hydrophobic interior of phospholipid tails and an outer surface of polar head groups. Larger and more complex structures are created from lipid-bilayer sheets, or unit membranes, which are large, two-dimensional assemblies of phospholipids congregated tail to tail. The cell membranes of nearly all organisms are made from lipid-bilayer sheets, as are the membranes of many intracellular components. These sheets may also form lipid-bilayer spheres that are the structural basis of vesicles and liposomes, subcellular components that play a role in numerous physiological functions.

Check Your Understanding
How is the amphipathic nature of phospholipids significant?

Isoprenoids and Sterols

The isoprenoids are branched lipids, also referred to as terpenoids, that are formed by chemical modifications of the isoprene molecule (Figure 7.15). These lipids play a wide variety of physiological roles in plants and animals, and they have many technological uses as pharmaceuticals (capsaicin), pigments (e.g., orange beta carotene, xanthophylls), and fragrances (e.g., menthol, camphor, limonene [lemon fragrance], and pinene [pine fragrance]). Long-chain isoprenoids are also found in hydrophobic oils and waxes. Waxes are typically water resistant and hard at room temperature, but they soften when heated and liquefy if warmed adequately. In humans, the main wax production occurs within the sebaceous glands of hair follicles in the skin, resulting in a secreted material called sebum, which consists mainly of triacylglycerol, wax esters, and the hydrocarbon squalene. Many bacteria in the microbiota of the skin feed on these lipids. One of the most prominent lipid-feeding bacteria is Propionibacterium acnes, which uses the skin's lipids to generate short-chain fatty acids and is involved in the production of acne.

Another class of lipids is the steroids, complex, ringed structures that are found in cell membranes; some function as hormones. The most common types of steroids are sterols, which are steroids containing an OH group. These are mainly hydrophobic molecules, but they also have hydrophilic hydroxyl groups. The most common sterol found in animal tissues is cholesterol. Its structure consists of four rings with a double bond in one of the rings and a hydroxyl group at the sterol-defining position. The function of cholesterol is to strengthen cell membranes in eukaryotes and in bacteria without cell walls, such as Mycoplasma. Prokaryotes generally do not produce cholesterol, although bacteria produce similar compounds called hopanoids, which are also multiringed structures that strengthen bacterial membranes (Figure 7.16). Fungi and some protozoa produce a similar compound called ergosterol, which strengthens the cell membranes of these organisms.

Link to Learning
Liposomes: this video provides additional information about phospholipids and liposomes.

Check Your Understanding
How are isoprenoids used in technology?

Clinical Focus: Part 2

The moisturizing cream prescribed by Penny's doctor was a topical corticosteroid cream containing hydrocortisone. Hydrocortisone is a synthetic form of cortisol, a corticosteroid hormone produced in the adrenal glands from cholesterol.
When applied directly to the skin, it can reduce inflammation and temporarily relieve minor skin irritations, itching, and rashes by reducing the secretion of histamine, a compound produced by cells of the immune system in response to the presence of pathogens or other foreign substances. Because histamine triggers the body's inflammatory response, the ability of hydrocortisone to reduce the local production of histamine in the skin effectively suppresses the immune system and helps limit inflammation and accompanying symptoms such as pruritus (itching) and rashes.

Does the corticosteroid cream treat the cause of Penny's rash, or just the symptoms?

7.4 Proteins

Learning Objectives
Describe the fundamental structure of an amino acid
Describe the chemical structures of proteins
Summarize the unique characteristics of proteins

At the beginning of this chapter, a famous experiment was described in which scientists synthesized amino acids under conditions simulating those present on earth long before the evolution of life as we know it. These compounds are capable of bonding together in essentially any number, yielding molecules of essentially any size that possess a wide array of physical and chemical properties and perform numerous functions vital to all organisms. The molecules derived from amino acids can function as structural components of cells and subcellular entities, as sources of nutrients, as atom- and energy-storage reservoirs, and as functional species such as hormones, enzymes, receptors, and transport molecules.

Amino Acids and Peptide Bonds

An amino acid is an organic molecule in which a hydrogen atom, a carboxyl group (–COOH), and an amino group (–NH₂) are all bonded to the same carbon atom, the so-called α carbon. The fourth group bonded to the α carbon varies among the different amino acids and is called a residue or a side chain, represented in structural formulas by the letter R. A residue is the monomer that results when two or more amino acids combine through dehydration synthesis, losing water molecules in the process; the primary structure of a protein, a peptide chain, is made of amino acid residues. The unique characteristics of the functional groups and R groups allow these components of the amino acids to form hydrogen, ionic, and disulfide bonds, along with the polar/nonpolar interactions needed to form secondary, tertiary, and quaternary protein structures. These groups are composed primarily of carbon, hydrogen, oxygen, nitrogen, and sulfur, in the form of hydrocarbons, acids, amides, alcohols, and amines. A few examples illustrating these possibilities are provided in Figure 7.17.

Amino acids may chemically bond together by reaction of the carboxylic acid group of one molecule with the amine group of another. This reaction forms a peptide bond and a water molecule and is another example of dehydration synthesis (Figure 7.18). Molecules formed by chemically linking relatively modest numbers of amino acids (approximately 50 or fewer) are called peptides, and prefixes are often used to specify these numbers: dipeptides (two amino acids), tripeptides (three amino acids), and so forth. More generally, the approximate number of amino acids is designated: oligopeptides are formed by joining up to approximately 20 amino acids, whereas polypeptides are synthesized from up to approximately 50 amino acids.
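To make the dehydration synthesis of Figure 7.18 explicit, the condensation of two amino acids can be written schematically as follows (a standard textbook scheme rather than a reproduction of the figure; R₁ and R₂ stand for the two side chains):

```latex
% Peptide bond formation (dehydration synthesis): the -COOH of one amino
% acid reacts with the -NH2 of the next, releasing one water molecule.
\[
\mathrm{H_2N{-}CHR_1{-}COOH} \;+\; \mathrm{H_2N{-}CHR_2{-}COOH}
\;\longrightarrow\;
\mathrm{H_2N{-}CHR_1{-}CO{-}NH{-}CHR_2{-}COOH} \;+\; \mathrm{H_2O}
\]
```

The new CO–NH linkage in the product is the peptide bond; repeating the reaction extends the chain one residue at a time.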
When the number of amino acids linked together becomes very large, or when multiple polypeptides are used as building subunits, the macromolecules that result are called proteins. The continuously variable length (the number of monomers) of these biopolymers, along with the variety of possible R groups on each amino acid, allows for a nearly unlimited diversity in the types of proteins that may be formed.

Check Your Understanding
How many amino acids are in polypeptides?

Protein Structure

The size (length) and specific amino acid sequence of a protein are major determinants of its shape, and the shape of a protein is critical to its function. For example, in the process of biological nitrogen fixation (see Biogeochemical Cycles), soil microorganisms collectively known as rhizobia symbiotically interact with roots of legume plants such as soybeans, peanuts, or beans to form a novel structure called a nodule on the plant roots. The plant then produces a carrier protein called leghemoglobin, a protein that binds oxygen. Leghemoglobin binds with a very high affinity to its substrate oxygen at a specific region of the protein where the shape and amino acid sequence are appropriate (the active site). If the shape or chemical environment of the active site is altered, even slightly, the substrate may not be able to bind as strongly, or it may not bind at all. Thus, for the protein to be fully active, it must have the appropriate shape for its function.

Protein structure is categorized in terms of four levels: primary, secondary, tertiary, and quaternary. The primary structure is simply the sequence of amino acids that make up the polypeptide chain. Figure 7.19 depicts the primary structure of a protein.

The chain of amino acids that defines a protein's primary structure is not rigid, but instead is flexible because of the nature of the bonds that hold the amino acids together. When the chain is sufficiently long, hydrogen bonding may occur between amine and carbonyl functional groups within the peptide backbone (excluding the R side group), resulting in localized folding of the polypeptide chain into helices and sheets. These shapes constitute a protein's secondary structure. The most common secondary structures are the α-helix and the β-pleated sheet. In the α-helix structure, the helix is held by hydrogen bonds between the oxygen atom in a carbonyl group of one amino acid and the hydrogen atom of the amino group that is just four amino acid units farther along the chain. In the β-pleated sheet, the pleats are formed by similar hydrogen bonds between continuous sequences of carbonyl and amino groups that are further separated on the backbone of the polypeptide chain (Figure 7.20).

The next level of protein organization is the tertiary structure, which is the large-scale three-dimensional shape of a single polypeptide chain. Tertiary structure is determined by interactions between amino acid residues that are far apart in the chain. A variety of interactions give rise to protein tertiary structure, such as disulfide bridges, which are bonds between the sulfhydryl (–SH) functional groups on amino acid side groups; hydrogen bonds; ionic bonds; and hydrophobic interactions between nonpolar side chains. All these interactions, weak and strong, combine to determine the final three-dimensional shape of the protein and its function (Figure 7.21). The process by which a polypeptide chain assumes a large-scale, three-dimensional shape is called protein folding.
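The "nearly unlimited diversity" noted at the start of this passage is easy to quantify (a back-of-the-envelope count, not taken from the text): with 20 standard amino acids, a chain of n residues has 20ⁿ possible sequences.

```latex
% Sequence diversity of a modest, 100-residue polypeptide:
\[
20^{100} \;=\; 10^{\,100\log_{10}20} \;\approx\; 10^{130}
\]
```

For comparison, the number of atoms in the observable universe is commonly estimated at around 10⁸⁰, so living cells sample only a vanishing fraction of the possible sequence space.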
Folded proteins that are fully functional in their normal biological role are said to possess a native structure. When a protein loses its three-dimensional shape, it may no longer be functional. These unfolded proteins are denatured. Denaturation implies the loss of the secondary and tertiary structure (and, if present, the quaternary structure) without the loss of the primary structure.

Some proteins are assemblies of several separate polypeptides, also known as protein subunits. These proteins function adequately only when all subunits are present and appropriately configured. The interactions that hold these subunits together constitute the quaternary structure of the protein. The overall quaternary structure is stabilized by relatively weak interactions. Hemoglobin, for example, has a quaternary structure of four globular protein subunits: two α and two β polypeptides, each one containing an iron-based heme (Figure 7.22).

Another important class of proteins is the conjugated proteins, which have a nonprotein portion. If the conjugated protein has a carbohydrate attached, it is called a glycoprotein. If it has a lipid attached, it is called a lipoprotein. These proteins are important components of membranes. Figure 7.23 summarizes the four levels of protein structure.

Check Your Understanding
What can happen if a protein's primary, secondary, tertiary, or quaternary structure is changed?

Micro Connections
Primary Structure, Dysfunctional Proteins, and Cystic Fibrosis

Proteins associated with biological membranes are classified as extrinsic or intrinsic. Extrinsic proteins, also called peripheral proteins, are loosely associated with one side of the membrane. Intrinsic proteins, or integral proteins, are embedded in the membrane and often function as part of transport systems as transmembrane proteins. Cystic fibrosis (CF) is a human genetic disorder caused by a change in a transmembrane protein. It affects mostly the lungs but may also affect the pancreas, liver, kidneys, and intestine. CF is caused by the loss of the amino acid phenylalanine in the cystic fibrosis transmembrane conductance regulator (CFTR) protein. The loss of this one amino acid changes the primary structure of a protein that normally helps transport salt and water in and out of cells (Figure 7.24).

The change in the primary structure prevents the protein from functioning properly, which causes the body to produce unusually thick, sticky mucus that clogs the lungs. The mucus obstructs the pancreas and stops natural enzymes from helping the body break down food and absorb vital nutrients.

In the lungs of individuals with cystic fibrosis, the altered mucus provides an environment where bacteria can thrive. This colonization leads to the formation of biofilms in the small airways of the lungs. The most common pathogens found in the lungs of patients with cystic fibrosis are Pseudomonas aeruginosa (Figure 7.25) and Burkholderia cepacia. Pseudomonas differentiates within the biofilm in the lung and forms large colonies, called "mucoid" Pseudomonas. The colonies have a unique pigmentation that shows up in laboratory tests (Figure 7.25) and provides physicians with the first clue that the patient has CF (such colonies are rare in healthy individuals).

Link to Learning
For more information about cystic fibrosis, visit the Cystic Fibrosis Foundation website.
7.5 Using Biochemistry to Identify Microorganisms

Learning Objectives
Describe examples of biosynthesis products within a cell that can be detected to identify bacteria

Accurate identification of bacterial isolates is essential in a clinical microbiology laboratory because the results often inform decisions about treatment that directly affect patient outcomes. For example, cases of food poisoning require accurate identification of the causative agent so that physicians can prescribe appropriate treatment. Likewise, it is important to accurately identify the causative pathogen during an outbreak of disease so that appropriate strategies can be employed to contain the epidemic.

There are many ways to detect, characterize, and identify microorganisms. Some methods rely on phenotypic biochemical characteristics, while others use genotypic identification. The biochemical characteristics of a bacterium provide many traits that are useful for classification and identification. Analyzing the nutritional and metabolic capabilities of the bacterial isolate is a common approach for determining the genus and the species of the bacterium. Some of the most important metabolic pathways that bacteria use to survive will be discussed in Microbial Metabolism. In this section, we will discuss a few methods that use biochemical characteristics to identify microorganisms.

Some microorganisms store certain compounds as granules within their cytoplasm, and the contents of these granules can be used for identification purposes. For example, poly-β-hydroxybutyrate (PHB) is a carbon- and energy-storage compound found in some nonfluorescent bacteria of the genus Pseudomonas. Different species within this genus can be classified by the presence or the absence of PHB and fluorescent pigments. The human pathogen P. aeruginosa and the plant pathogen P. syringae are two examples of fluorescent Pseudomonas species that do not accumulate PHB granules.

Other systems identify microorganisms by their biochemical reactions, such as carbon utilization and other metabolic tests. In small laboratory settings or in teaching laboratories, these assays are carried out using a limited number of test tubes. However, more modern systems, such as the one developed by Biolog, Inc., are based on panels of biochemical reactions performed simultaneously and analyzed by software. Biolog's system identifies cells based on their ability to metabolize certain biochemicals and on their physiological properties, including pH and chemical sensitivity. It uses all major classes of biochemicals in its analysis. Identifications can be performed manually or with semi- or fully automated instruments.

Another automated system identifies microorganisms by determining the specimen's mass spectrum and then comparing it to a database that contains known mass spectra for thousands of microorganisms. This method is based on matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF) and uses disposable MALDI plates on which the microorganism is mixed with a specialized matrix reagent (Figure 7.26). The sample/reagent mixture is irradiated with a high-intensity pulsed ultraviolet laser, resulting in the ejection of gaseous ions generated from the various chemical constituents of the microorganism; the flight-time relation behind the next step is sketched below.
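The following sentences describe the ions' flight to the detector in words; the underlying relation, written here from standard time-of-flight physics (it is not stated in this text), is that an ion accelerated through a fixed potential V and drifting over a flight tube of length L arrives at a time proportional to the square root of its mass-to-charge ratio:

```latex
% An ion of mass m and charge ze accelerated through potential V gains
% kinetic energy zeV = (1/2) m v^2, so its drift time over length L is
\[
t \;=\; \frac{L}{v} \;=\; L\sqrt{\frac{m}{2\,zeV}} \;\propto\; \sqrt{m/z}
\]
```

Heavier (higher m/z) ions therefore arrive later, which is what lets the detector separate the organism's chemical constituents by mass.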
These gaseous ions are collected and accelerated through the mass spectrometer, with ions traveling at a velocity determined by their mass-to-charge ratio (m/z) and thus reaching the detector at different times. A plot of detector signal versus m/z yields a mass spectrum for the organism that is uniquely related to its biochemical composition. Comparison of the mass spectrum to a library of reference spectra obtained from identical analyses of known microorganisms permits identification of the unknown microbe.

Microbes can also be identified by measuring their unique lipid profiles. As we have learned, fatty acids of lipids can vary in chain length; in the presence, absence, and number of double bonds; and in the number of hydroxyl groups, branches, and rings. To identify a microbe by its lipid composition, the fatty acids present in its membranes are analyzed. A common biochemical analysis used for this purpose in clinical, public health, and food laboratories is fatty acid methyl ester (FAME) analysis, which relies on detecting unique differences in fatty acids. In a FAME analysis, fatty acids are extracted from the membranes of microorganisms, chemically altered to form volatile methyl esters, and analyzed by gas chromatography (GC). The resulting GC chromatogram is compared with reference chromatograms in a database containing data for thousands of bacterial isolates to identify the unknown microorganism (Figure 7.27).

A related method for microorganism identification is called phospholipid-derived fatty acids (PLFA) analysis. Membranes are mostly composed of phospholipids, which can be saponified (hydrolyzed with alkali) to release the fatty acids. The resulting fatty acid mixture is then subjected to FAME analysis, and the measured lipid profiles can be compared with those of known microorganisms to identify the unknown microorganism.

Bacterial identification can also be based on the proteins produced under specific growth conditions within the human body. These types of identification procedures are called proteomic analysis. To perform proteomic analysis, proteins from the pathogen are first separated by high-pressure liquid chromatography (HPLC), and the collected fractions are then digested to yield smaller peptide fragments. These peptides are identified by mass spectrometry and compared with those of known microorganisms to identify the unknown microorganism in the original specimen.

Microorganisms can also be identified by the carbohydrates attached to proteins (glycoproteins) in the plasma membrane or cell wall. Antibodies and other carbohydrate-binding proteins can attach to specific carbohydrates on cell surfaces, causing the cells to clump together. Serological tests (e.g., the Lancefield group tests, which are used for identification of Streptococcus species) are performed to detect the unique carbohydrates located on the surface of the cell.

Clinical Focus: Resolution

Penny stopped using her new sunscreen and applied the corticosteroid cream to her rash as directed. However, after several days, her rash had not improved and actually seemed to be getting worse. She made a follow-up appointment with her doctor, who observed a bumpy red rash and pus-filled blisters around hair follicles (Figure 7.28). The rash was especially concentrated in areas that would have been covered by a swimsuit. After some questioning, Penny told the physician that she had recently attended a pool party and spent some time in a hot tub.
In light of this new information, the doctor suspected a case of hot tub rash, an infection frequently caused by the bacterium Pseudomonas aeruginosa, an opportunistic pathogen that can thrive in hot tubs and swimming pools, especially when the water is not sufficiently chlorinated. P. aeruginosa is the same bacterium that is associated with infections in the lungs of patients with cystic fibrosis.

The doctor collected a specimen from Penny's rash to be sent to the clinical microbiology lab. Confirmatory tests were carried out to distinguish P. aeruginosa from enteric pathogens that can also be present in pool and hot-tub water. These tests included production of the blue-green pigment pyocyanin on cetrimide agar and growth at 42 °C. Cetrimide is a selective agent that inhibits the growth of other species of microbial flora; it also enhances the production of the P. aeruginosa pigments pyocyanin and fluorescein, which are a characteristic blue-green and yellow-green, respectively.

Tests confirmed the presence of P. aeruginosa in Penny's skin sample, but the doctor decided not to prescribe an antibiotic. Even though P. aeruginosa is a bacterium, Pseudomonas species are generally resistant to many antibiotics. Luckily, skin infections like Penny's are usually self-limiting; the rash typically lasts about two weeks and resolves on its own, with or without medical treatment. The doctor advised Penny to wait it out and keep using the corticosteroid cream. The cream will not kill the P. aeruginosa on Penny's skin, but it should calm her rash and minimize the itching by suppressing her body's inflammatory response to the bacteria.
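As a closing illustration of the database-matching idea behind the automated identification systems described in this section (a deliberately simplified sketch, not any vendor's actual algorithm; the reference profiles below are invented for illustration), the snippet matches an isolate's panel of positive/negative biochemical test results against stored reference profiles:

```python
# Toy profile-matching sketch: 1 = positive reaction, 0 = negative reaction
# for a hypothetical five-test biochemical panel.
REFERENCE_PROFILES = {
    "Pseudomonas aeruginosa": [0, 1, 1, 1, 0],
    "Escherichia coli":       [1, 0, 0, 1, 1],
    "Burkholderia cepacia":   [0, 1, 0, 1, 0],
}

def identify(unknown):
    """Return the species whose profile agrees with the most test results."""
    best_species, best_score = None, -1.0
    for species, profile in REFERENCE_PROFILES.items():
        score = sum(u == p for u, p in zip(unknown, profile)) / len(profile)
        if score > best_score:
            best_species, best_score = species, score
    return best_species, best_score

species, score = identify([0, 1, 1, 1, 0])
print(f"Best match: {species} ({score:.0%} of tests agree)")
# Best match: Pseudomonas aeruginosa (100% of tests agree)
```

Real systems compare far richer data (full metabolic panels, mass spectra, or chromatograms) against libraries of thousands of reference organisms, but the match-against-a-library logic is the same.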
principles_of_accounting,_volume_2:_managerial_accounting
Summary

8.1 Explain How and Why a Standard Cost Is Developed
Standards are budgeted unit amounts for price paid and amount used. Variances are the difference between actual and standard amounts. A favorable variance is when the actual price or quantity is less than the standard amount. An unfavorable variance is when the actual price or amount is greater than the standard amount.

8.2 Compute and Evaluate Materials Variances
There are two components to material variances: the direct materials price variance and the direct materials quantity variance. The direct materials price variance is caused by paying too much or too little for material. The direct materials quantity variance is caused by using too much or too little material.

8.3 Compute and Evaluate Labor Variances
There are two labor variances: the direct labor rate variance and the direct labor time variance. The direct labor rate variance determines if the rate paid is greater than or less than the standard rate. The direct labor time variance determines if the actual hours used are greater than or less than the standard hours that should have been used.

8.4 Compute and Evaluate Overhead Variances
There are two sets of overhead variances: variable and fixed. The variable variances are caused by the overhead application rate and the activity level against which the rate was applied. The variable overhead rate variance is the difference between the actual variable manufacturing overhead and the variable overhead that was expected given the number of hours worked. The variable overhead efficiency variance is driven by the difference between the actual hours worked and the standard hours expected for the units produced. There are two fixed overhead variances. One is caused by spending too much or too little on fixed overhead. The other is caused by actual production being above or below the expected production level.

8.5 Describe How Companies Use Variance Analysis
The key to analyzing variances is to determine why the variance occurred. If a company cannot determine why there is a variance, it will not know if the variance is indicative of a problem or not. All firms (manufacturing, retail, and service) use standards and variances.
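The four materials and labor variances summarized above reduce to simple arithmetic. The sketch below uses the usual textbook formulas, with the sign convention that a positive result (actual exceeding standard) is unfavorable for a cost; all input figures are hypothetical:

```python
# Minimal sketch of the four materials and labor cost variances.
# Convention: positive = unfavorable (spent or used more than standard),
# negative = favorable. All example figures are hypothetical.

def price_variance(actual_price, standard_price, actual_qty):
    """Direct materials price variance: (AP - SP) x AQ."""
    return (actual_price - standard_price) * actual_qty

def quantity_variance(actual_qty, standard_qty, standard_price):
    """Direct materials quantity variance: (AQ - SQ) x SP."""
    return (actual_qty - standard_qty) * standard_price

def rate_variance(actual_rate, standard_rate, actual_hours):
    """Direct labor rate variance: (AR - SR) x AH."""
    return (actual_rate - standard_rate) * actual_hours

def time_variance(actual_hours, standard_hours, standard_rate):
    """Direct labor time variance: (AH - SH) x SR."""
    return (actual_hours - standard_hours) * standard_rate

# Example: paid $7.20/lb for 1,000 lb (standard $7.00/lb); used 1,000 lb
# where 950 lb was standard; paid $10/hr for 2,100 hrs (standard $11/hr
# and 2,000 hrs for the output achieved).
print(price_variance(7.20, 7.00, 1_000))    #  200.0 -> unfavorable
print(quantity_variance(1_000, 950, 7.00))  #  350.0 -> unfavorable
print(rate_variance(10, 11, 2_100))         # -2100  -> favorable
print(time_variance(2_100, 2_000, 11))      #  1100  -> unfavorable
```

The overhead variances follow the same actual-versus-standard pattern, with the flexible budget supplying the expected amounts at the actual activity level.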
Chapter Outline
8.1 Explain How and Why a Standard Cost Is Developed
8.2 Compute and Evaluate Materials Variances
8.3 Compute and Evaluate Labor Variances
8.4 Compute and Evaluate Overhead Variances
8.5 Describe How Companies Use Variance Analysis

Why It Matters

Sam saw how much coffee his fellow students were drinking and decided to open a student-run coffee shop on campus. Sam knew that developing a plan for the coffee shop would help make it successful. By researching other coffee shops near campus, he determined what types of coffee to offer, the hours the shop would be open, and the number of employees needed. He brewed coffee to determine the cost of the coffee and the time it took to brew, and he served several friends to determine how long it would take to serve customers. He observed, in other coffee shops, how much cream and other additives customers used. He talked to several coffee suppliers to get prices for his various materials. He looked at empty stores near campus to determine what his rent would be. Now that he has this information, he is not sure how to make it useful to him. How could he use this information to plan and control the operations of the shop? One calculation he can do is determine his standard costs.

What is the difference between a budget and a standard? A budget usually refers to a company's projections for costs, revenues, and cash flows associated with the overall operations of the organization, or a subsection of the corporation such as a division. A standard usually refers to a company's projected costs for a single unit of a product or service and includes the expected, or standard, cost for the various cost components of each unit, such as materials, labor, and overhead. A worked example of a per-unit standard follows below.
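To make the budget-versus-standard distinction concrete, here is a minimal sketch (all figures hypothetical) of how Sam might assemble the standard cost of a single cup of coffee from the materials, labor, and overhead data he gathered:

```python
# Hypothetical standard cost card for one cup of coffee.
direct_materials = 0.55          # coffee, cup, lid, cream (per cup)
direct_labor = 12.00 * (2 / 60)  # 2 minutes of labor at $12.00 per hour
overhead = 0.25                  # rent and utilities applied per cup

standard_cost_per_cup = direct_materials + direct_labor + overhead
print(f"Standard cost per cup: ${standard_cost_per_cup:.2f}")
# Standard cost per cup: $1.20
```

A budget would then scale per-unit standards like this across the expected sales volume for the period; the standard is the per-unit building block that later sections compare against actual results to compute variances.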
[ { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> Standard costing provides many benefits and challenges , and a thorough analysis of each variance and the possible unfavorable or favorable outcomes is required to set future expectations and adjust current production goals . <hl>", "hl_sentences": "Standard costing provides many benefits and challenges , and a thorough analysis of each variance and the possible unfavorable or favorable outcomes is required to set future expectations and adjust current production goals .", "question": { "cloze_format": "A company uses a standard costing system ___ .", "normal_format": "Why does a company use a standard costing system?", "question_choices": [ "to identify variances from actual cost that assist them in maintaining profits", "to identify nonperformers in the workplace", "to identify what vendors are unreliable", "to identify defective materials" ], "question_id": "fs-idm162308912", "question_text": "Why does a company use a standard costing system?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "attainable standard" }, "bloom": null, "hl_context": "<hl> Instead of these two extremes , a company would set an attainable standard , which is one that employees can reach with reasonable effort . <hl> The standards are not so high that employees will not try to reach them and not so low that they do not give any incentive for employees to achieve profitability . <hl> Standard costs are typically established for reasonably attainable levels of efficiency ( production ) . <hl> <hl> They serve as a target and are useful in motivating standard performance . <hl> An ideal standard level is set assuming that everything is perfect , machines do not break down , employees show up on time , there are no defects , there is no scrap , and materials are perfect . This level of standard is not the best motivator , because employees may see this level as unattainable . For example , consider whether you would take a course if the letter grades were as follows : an A is 99 – 100 % , a B is 98 – 99 % , a C is 97 – 98 % , a D is 96 – 97 % , and below 96 % is an F . These standards are unreasonable and unrealistic , and they would not motivate students to do well in the course .", "hl_sentences": "Instead of these two extremes , a company would set an attainable standard , which is one that employees can reach with reasonable effort . Standard costs are typically established for reasonably attainable levels of efficiency ( production ) . They serve as a target and are useful in motivating standard performance .", "question": { "cloze_format": "The standard that is set at a level that may be reached with reasonable effort is the ___.", "normal_format": "Which standard is set at a level that may be reached with reasonable effort?", "question_choices": [ "ideal standard", "attainable standard", "unattainable standard", "variance from standard" ], "question_id": "fs-idm220309872", "question_text": "This standard is set at a level that may be reached with reasonable effort." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "Standard costs are typically established for reasonably attainable levels of efficiency ( production ) . They serve as a target and are useful in motivating standard performance . 
<hl> An ideal standard level is set assuming that everything is perfect , machines do not break down , employees show up on time , there are no defects , there is no scrap , and materials are perfect . <hl> This level of standard is not the best motivator , because employees may see this level as unattainable . For example , consider whether you would take a course if the letter grades were as follows : an A is 99 – 100 % , a B is 98 – 99 % , a C is 97 – 98 % , a D is 96 – 97 % , and below 96 % is an F . These standards are unreasonable and unrealistic , and they would not motivate students to do well in the course .", "hl_sentences": "An ideal standard level is set assuming that everything is perfect , machines do not break down , employees show up on time , there are no defects , there is no scrap , and materials are perfect .", "question": { "cloze_format": "The standard that is set at a level that could be achieved if everything ran perfectly is called the ___ .", "normal_format": "Which standard is set at a level that could be achieved if everything ran perfectly?", "question_choices": [ "ideal standard", "attainable standard", "unattainable standard", "variance from standard" ], "question_id": "fs-idm460465856", "question_text": "This standard is set at a level that could be achieved if everything ran perfectly." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "unfavorable variance" }, "bloom": null, "hl_context": "Once a company determines a standard cost , they can then evaluate any variances . A variance is the difference between a standard cost and actual performance . There are favorable and unfavorable variances . A favorable variance involves spending less , or using less , than the anticipated or estimated standard . <hl> An unfavorable variance involves spending more , or using more , than the anticipated or estimated standard . <hl> Before determining whether the variance is favorable or unfavorable , it is often helpful for the company to determine why the variance exists .", "hl_sentences": "An unfavorable variance involves spending more , or using more , than the anticipated or estimated standard .", "question": { "cloze_format": "The variance that is the difference involving spending more or using more than the standard amount is (the) ___.", "normal_format": "Which variance is the difference involving spending more or using more than the standard amount?", "question_choices": [ "favorable variance", "unfavorable variance", "no variance", "variance" ], "question_id": "fs-idm222685248", "question_text": "This variance is the difference involving spending more or using more than the standard amount." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> If the actual quantity of materials used is less than the standard quantity used at the actual production output level , the variance will be a favorable variance . <hl> A favorable outcome means you used fewer materials than anticipated , to make the actual number of production units . If , however , the actual quantity of materials used is greater than the standard quantity used at the actual production output level , the variance will be unfavorable . An unfavorable outcome means you used more materials than anticipated to make the actual number of production units . Once a company determines a standard cost , they can then evaluate any variances . 
A variance is the difference between a standard cost and actual performance . There are favorable and unfavorable variances . <hl> A favorable variance involves spending less , or using less , than the anticipated or estimated standard . <hl> An unfavorable variance involves spending more , or using more , than the anticipated or estimated standard . Before determining whether the variance is favorable or unfavorable , it is often helpful for the company to determine why the variance exists .", "hl_sentences": "If the actual quantity of materials used is less than the standard quantity used at the actual production output level , the variance will be a favorable variance . A favorable variance involves spending less , or using less , than the anticipated or estimated standard .", "question": { "cloze_format": "The variance that is the difference involving spending less, or using less than the standard amount is (the) ___.", "normal_format": "Which variance is the difference involving spending less, or using less than the standard amount?", "question_choices": [ "favorable variance", "unfavorable variance", "no variance", "variance" ], "question_id": "fs-idm175122960", "question_text": "This variance is the difference involving spending less, or using less than the standard amount." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "substandard material" }, "bloom": null, "hl_context": "Different factors may produce a variance . The company could have paid too much or too little for production . It may have purchased the wrong grade of material or hired employees with more or less experience than required . Sometimes the variances are interrelated . <hl> For example , purchasing substandard materials may lead to using more time to make the product and may produce more scrap . <hl> <hl> The substandard material may have been more difficult to work with or had more defects than the proper grade material . <hl> <hl> In such a situation , a favorable material price variance could cause an unfavorable labor efficiency variance and an unfavorable material quantity variance . <hl> Employees who do not have the expected experience level may save money in the wage rate but may require more hours to be worked and more material to be used because of their inexperience .", "hl_sentences": "For example , purchasing substandard materials may lead to using more time to make the product and may produce more scrap . The substandard material may have been more difficult to work with or had more defects than the proper grade material . In such a situation , a favorable material price variance could cause an unfavorable labor efficiency variance and an unfavorable material quantity variance .", "question": { "cloze_format": "___ is some possible reason for a material price variance.", "normal_format": "What are some possible reasons for a material price variance?", "question_choices": [ "substandard material", "labor rate increases", "labor rate decreases", "labor efficiency" ], "question_id": "fs-idm214870784", "question_text": "What are some possible reasons for a material price variance?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "If the actual price paid per unit of material is lower than the standard price per unit , the variance will be a favorable variance . A favorable outcome means you spent less on the purchase of materials than you anticipated . 
<hl> If , however , the actual price paid per unit of material is greater than the standard price per unit , the variance will be unfavorable . <hl> An unfavorable outcome means you spent more on the purchase of materials than you anticipated .", "hl_sentences": "If , however , the actual price paid per unit of material is greater than the standard price per unit , the variance will be unfavorable .", "question": { "cloze_format": "The material price variance is unfavorable ___.", "normal_format": "When is the material price variance unfavorable?", "question_choices": [ "when the actual quantity used is greater than the standard quantity", "when the actual quantity used is less than the standard quantity", "when the actual price paid is greater than the standard price", "when the actual price is less than the standard price" ], "question_id": "fs-idm174768512", "question_text": "When is the material price variance unfavorable?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "when the actual price is less than the standard price" }, "bloom": null, "hl_context": "<hl> If the actual price paid per unit of material is lower than the standard price per unit , the variance will be a favorable variance . <hl> A favorable outcome means you spent less on the purchase of materials than you anticipated . If , however , the actual price paid per unit of material is greater than the standard price per unit , the variance will be unfavorable . An unfavorable outcome means you spent more on the purchase of materials than you anticipated .", "hl_sentences": "If the actual price paid per unit of material is lower than the standard price per unit , the variance will be a favorable variance .", "question": { "cloze_format": "The material price variance is favorable ___ .", "normal_format": "When is the material price variance favorable?", "question_choices": [ "when the actual quantity used is greater than the standard quantity", "when the actual quantity used is less than the standard quantity", "when the actual price paid is greater than the standard price", "when the actual price is less than the standard price" ], "question_id": "fs-idm216470464", "question_text": "When is the material price variance favorable?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Different factors may produce a variance . The company could have paid too much or too little for production . It may have purchased the wrong grade of material or hired employees with more or less experience than required . Sometimes the variances are interrelated . For example , purchasing substandard materials may lead to using more time to make the product and may produce more scrap . The substandard material may have been more difficult to work with or had more defects than the proper grade material . <hl> In such a situation , a favorable material price variance could cause an unfavorable labor efficiency variance and an unfavorable material quantity variance . <hl> <hl> Employees who do not have the expected experience level may save money in the wage rate but may require more hours to be worked and more material to be used because of their inexperience . <hl>", "hl_sentences": "In such a situation , a favorable material price variance could cause an unfavorable labor efficiency variance and an unfavorable material quantity variance . 
Employees who do not have the expected experience level may save money in the wage rate but may require more hours to be worked and more material to be used because of their inexperience .", "question": { "cloze_format": "A reason for a material quantity variance includes ___ .", "normal_format": "What are some reasons for a material quantity variance?", "question_choices": [ "building rental charges increase", "labor rate decreases", "more qualified workers", "change in the actual cost of materials" ], "question_id": "fs-idm494214224", "question_text": "What are some reasons for a material quantity variance?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "when the actual quantity used is less than the standard quantity" }, "bloom": null, "hl_context": "<hl> In this case , the actual quantity of materials used is 0.20 pounds , the standard price per unit of materials is $ 7.00 , and the standard quantity used is 0.25 pounds . <hl> <hl> This computes as a favorable outcome . <hl> This is a favorable outcome because the actual quantity of materials used was less than the standard quantity expected at the actual production output level . As a result of this favorable outcome information , the company may consider continuing operations as they exist , or could change future budget projections to reflect higher profit margins , among other things . <hl> If the actual quantity of materials used is less than the standard quantity used at the actual production output level , the variance will be a favorable variance . <hl> A favorable outcome means you used fewer materials than anticipated , to make the actual number of production units . If , however , the actual quantity of materials used is greater than the standard quantity used at the actual production output level , the variance will be unfavorable . An unfavorable outcome means you used more materials than anticipated to make the actual number of production units .", "hl_sentences": "In this case , the actual quantity of materials used is 0.20 pounds , the standard price per unit of materials is $ 7.00 , and the standard quantity used is 0.25 pounds . This computes as a favorable outcome . If the actual quantity of materials used is less than the standard quantity used at the actual production output level , the variance will be a favorable variance .", "question": { "cloze_format": "The material quantity variance is favorable ___.", "normal_format": "When is the material quantity variance favorable?", "question_choices": [ "when the actual quantity used is greater than the standard quantity", "when the actual quantity used is less than the standard quantity", "when the actual price paid is greater than the standard price", "when the actual price is less than the standard price" ], "question_id": "fs-idm199404976", "question_text": "When is the material quantity variance favorable?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "If the actual quantity of materials used is less than the standard quantity used at the actual production output level , the variance will be a favorable variance . A favorable outcome means you used fewer materials than anticipated , to make the actual number of production units . <hl> If , however , the actual quantity of materials used is greater than the standard quantity used at the actual production output level , the variance will be unfavorable . 
<hl> <hl> An unfavorable outcome means you used more materials than anticipated to make the actual number of production units . <hl>", "hl_sentences": "If , however , the actual quantity of materials used is greater than the standard quantity used at the actual production output level , the variance will be unfavorable . An unfavorable outcome means you used more materials than anticipated to make the actual number of production units .", "question": { "cloze_format": "The material is quantity unfavorable ___.", "normal_format": "When is the material quantity unfavorable?", "question_choices": [ "when the actual quantity used is greater than the standard quantity", "when the actual quantity used is less than the standard quantity", "when the actual price paid is greater than the standard price", "when the actual price is less than the standard price" ], "question_id": "fs-idm195262432", "question_text": "When is the material quantity unfavorable?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "hiring of less qualified workers" }, "bloom": null, "hl_context": "<hl> A favorable labor rate variance occurred because the rate paid per hour was less than the rate expected to be paid ( standard ) per hour . <hl> <hl> This could occur because the company was able to hire workers at a lower rate , because of negotiated union contracts , or because of a poor labor rate estimate used in creating the standard . <hl>", "hl_sentences": "A favorable labor rate variance occurred because the rate paid per hour was less than the rate expected to be paid ( standard ) per hour . This could occur because the company was able to hire workers at a lower rate , because of negotiated union contracts , or because of a poor labor rate estimate used in creating the standard .", "question": { "cloze_format": "___ is a possible reason for a labor rate variance.", "normal_format": "What are some possible reasons for a labor rate variance?", "question_choices": [ "hiring of less qualified workers", "an excess of material usage", "material price increase", "utilities usage change" ], "question_id": "fs-idm218951024", "question_text": "What are some possible reasons for a labor rate variance?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> A favorable labor rate variance occurred because the rate paid per hour was less than the rate expected to be paid ( standard ) per hour . <hl> <hl> This could occur because the company was able to hire workers at a lower rate , because of negotiated union contracts , or because of a poor labor rate estimate used in creating the standard . <hl> If the actual rate of pay per hour is less than the standard rate of pay per hour , the variance will be a favorable variance . A favorable outcome means you paid workers less than anticipated . <hl> If , however , the actual rate of pay per hour is greater than the standard rate of pay per hour , the variance will be unfavorable . <hl> <hl> An unfavorable outcome means you paid workers more than anticipated . <hl> <hl> The direct labor rate variance compares the actual rate per hour of direct labor to the standard rate per hour of labor for the hours worked . <hl> The direct labor rate variance is calculated using this formula :", "hl_sentences": "A favorable labor rate variance occurred because the rate paid per hour was less than the rate expected to be paid ( standard ) per hour . 
Multiple Choice

1. When is the labor rate variance unfavorable?
A. when the actual quantity used is greater than the standard quantity
B. when the actual quantity used is less than the standard quantity
C. when the actual price paid is greater than the standard price
D. when the actual price is less than the standard price
Answer: C

2. When is the labor rate variance favorable?
A. when the actual quantity used is greater than the standard quantity
B. when the actual quantity used is less than the standard quantity
C. when the actual price is greater than the standard price
D. when the actual price is less than the standard price
Answer: D

3. What are some possible reasons for a direct labor time variance?
A. utility usage decrease
B. less qualified workers
C. office supplies spending
D. sales decline
Answer: B

4. When is the direct labor time variance favorable?
A. when the actual quantity used is greater than the standard quantity
B. when the actual quantity used is less than the standard quantity
C. when the actual price paid is greater than the standard price
D. when the actual price is less than the standard price
Answer: B

5. When is the direct labor time variance unfavorable?
A. when the actual quantity used is greater than the standard quantity
B. when the actual quantity used is less than the standard quantity
C. when the actual price paid is greater than the standard price
D. when the actual price is less than the standard price
Answer: A

6. A flexible budget ________.
A. predicts estimated revenues and costs at varying levels of production
B. gives actual figures for selling price
C. gives actual figures for variable and fixed overhead
D. is not used in overhead variance calculations
Answer: A

7. The variable overhead efficiency variance is caused by the difference between which of the following?
A. actual and budgeted units
B. actual and standard allocation base
C. actual and standard overhead rates
D. actual units and actual overhead rates
Answer: B

8. The fixed factory overhead variance is caused by the difference between which of the following?
A. actual and standard allocation base
B. actual and budgeted units
C. actual fixed overhead and applied fixed overhead
D. actual and standard overhead rates
Answer: C

9. Which of the following is a possible cause of an unfavorable material price variance?
A. purchasing too much material
B. purchasing higher-quality material
C. hiring substandard workers
D. buying substandard material
Answer: B

10. Which of the following is a possible cause of an unfavorable material quantity variance?
A. purchasing substandard material
B. hiring higher-quality workers
C. paying more than should have been paid for workers
D. purchasing too much material
Answer: A

11. Which of the following is a possible cause of an unfavorable labor efficiency variance?
A. hiring substandard workers
B. making too many units
C. buying higher-quality material
D. paying too much for workers
Answer: A

12. Which of the following is a possible cause of an unfavorable labor rate variance?
A. hiring too many workers
B. hiring higher-quality workers at a higher wage
C. making too many units
D. purchasing too much material
Answer: B
8
8.1 Explain How and Why a Standard Cost Is Developed A syllabus is one way an instructor can communicate expectations to students. Students can use the syllabus to plan their studying to maximize their grade and to coordinate the amount and timing of studying for each course. Knowing what is expected, and when it is expected, allows for better plans and performance. When your performance does not match your expectations, a variance arises—a difference between the standard and the actual performance. You then need to determine why the difference occurred. You want to know why you did not receive the grade you expected so you can make adjustments for the next assignment to earn a better grade. Companies operate in a similar manner. They have an expectation, or standard, for production. For example, if a company is producing tables, it might establish standards for such components as the amount of board feet of lumber expected to be used in producing each table or the number of direct labor hours it expects to use in the table’s production. These standards can then be used in establishing standard costs that can be used in creating an assortment of different types of budgets. When a variance from these standards occurs, the company investigates to determine the causes so that it can perform better in the future. For example, General Motors has standards for each item on a vehicle. It can determine the cost and selling price of a power antenna by knowing the standard material cost for the antenna and the standard labor cost of adding the antenna to the vehicle. General Motors also can add up all of the standard times for all vehicles it makes to determine if too much or too little labor was used in production. Link to Learning Developing standards is a complicated and costly process. Review this article on how to develop a standard cost system for more details. Fundamentals of Standard Costs It is important to establish standards for cost at the beginning of a period to prepare the budget; manage material, labor, and overhead costs; and create a reasonable sales price for a good. A standard cost is an expected cost that a company usually establishes at the beginning of a fiscal year for prices paid and amounts used. The standard cost is the expected amount to be paid for materials or for labor rates. The standard quantity is the expected usage amount of materials or labor. A standard cost may be determined from past experience or industry norms. The company can then compare the standard costs against its actual results to measure its efficiency. Sometimes when comparing standard costs against actual results, there is a difference. This difference can be attributed to many reasons. For example, the coffee company mentioned in the opening vignette may expect to pay $0.50 per ounce for coffee grounds. After the company purchased the coffee grounds, it discovered it paid $0.60 per ounce. This variance would need to be accounted for, and possible operational changes would occur as a result. Cost accounting systems become more useful to management when they include budgeted amounts to serve as a point of comparison with actual results. Many departments help determine standard cost. Product design, in conjunction with production, purchasing, and sales, determines what the product will look like and what materials will be used. Production works with purchasing to determine what material will work best in production and will be the most cost-efficient.
Sales will also help decide the material in terms of customer demand. Production will work with personnel to determine labor costs for the product, which is based on how long it will take to make the product, which departments will be involved, and what type and number of employees it will take. Consider how many different materials can go into a product. For example, there are approximately 14,000 parts that comprise the average automobile. The manufacturer will set a standard price and a quantity used per automobile for each part, and it will determine the labor required to install the part. At Fiat Chrysler Automobiles’ Belvidere Assembly Plant, for example, there are approximately 5,000 employees assembling automobiles. 1 In addition to having standard costs associated with each part, each employee has standards for the job he or she performs. 1 “Belvidere Assembly Plant and Belvidere Satellite Stamping Plant.” Fiat Chrysler Automobiles. June 2018. http://media.fcanorthamerica.com/newsrelease.do?id=323&mid=1 Standard costs are typically established for reasonably attainable levels of efficiency (production). They serve as a target and are useful in motivating standard performance. An ideal standard level is set assuming that everything is perfect, machines do not break down, employees show up on time, there are no defects, there is no scrap, and materials are perfect. This level of standard is not the best motivator, because employees may see this level as unattainable. For example, consider whether you would take a course if the letter grades were as follows: an A is 99–100%, a B is 98–99%, a C is 97–98%, a D is 96–97%, and below 96% is an F. These standards are unreasonable and unrealistic, and they would not motivate students to do well in the course. At the other end of the spectrum, if the standards are too easy, there is little motivation to do better, and products may not be properly built, may be built with inferior materials, or both. For example, consider how you would handle the following grading scale for your course: an A is 50–100%, a B is 35–50%, a C is 10–35%, a D is 2–10%, and below 2% is an F. Would you learn anything? Would you try very hard? The same considerations come into play for employees with standards that are too easy. Instead of these two extremes, a company would set an attainable standard, which is one that employees can reach with reasonable effort. The standards are not so high that employees will not try to reach them and not so low that they do not give any incentive for employees to achieve profitability. In order for a company to establish its attainable standard cost for each product, it must consider the standard costs for materials, labor, and overhead. The material standard cost consists of a standard price per unit of material and a standard amount of material per unit. Returning to the opening vignette, let us say the coffee shop is trying to establish the standard materials cost for one cup of regular coffee. To keep the example simple, we are not incorporating the cost of water or the ceramic cup cost (since they are reused).
Two components for the cup of coffee will need to be considered:
Price per ounce of coffee grounds
Amount of coffee grounds (materials) used per cup of coffee
To determine the standards for labor, the coffee shop would need to consider two additional components:
Labor rate per minute
Amount of time to make one cup of regular coffee
To determine the standard for overhead, the coffee shop would first need to consider the fact that it has two types of overhead, as shown in Figure 8.2. Greater detail about the calculation of the variable and fixed overhead is provided in Compute and Evaluate Overhead Variances.
Fixed overhead (does not change in total with production)
Variable overhead (does change in total with production)
All of this information is entered on a standard cost card. Once a company determines a standard cost, it can then evaluate any variances. A variance is the difference between a standard cost and actual performance. There are favorable and unfavorable variances. A favorable variance involves spending less, or using less, than the anticipated or estimated standard. An unfavorable variance involves spending more, or using more, than the anticipated or estimated standard. Before determining whether the variance is favorable or unfavorable, it is often helpful for the company to determine why the variance exists. Your Turn Developing a Standard Cost Card Use the information provided to create a standard cost card for production of one deluxe bicycle from Bicycles Unlimited. To make one bicycle it takes four pounds of material. The material can usually be purchased for $5.25 per pound. The labor necessary to build a bicycle consists of two types. The first type of labor is assembly, which takes 2.75 hours. These workers are paid $11.00 per hour. The second type of labor is finishing, which takes 4 hours. These workers are paid $15.00 per hour. Overhead is applied using labor hours. The variable overhead rate is $5.00 per labor hour. The fixed overhead rate is $3.00 per hour. Solution Direct materials: 4 pounds × $5.25 = $21.00. Direct labor: assembly, 2.75 hours × $11.00 = $30.25; finishing, 4 hours × $15.00 = $60.00. Overhead, applied on the 6.75 total labor hours: variable, 6.75 hours × $5.00 = $33.75; fixed, 6.75 hours × $3.00 = $20.25. Total standard cost per bicycle: $165.25.
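The arithmetic on a standard cost card is simple enough to script. The following is a minimal Python sketch that rebuilds the Bicycles Unlimited card from the Your Turn solution above; the function and field names are illustrative choices of ours, not anything defined in the text.

# A minimal sketch of a standard cost card, using the Bicycles Unlimited
# "Your Turn" data above. Function and field names are illustrative only.

def standard_cost_card(materials, labor, variable_oh_rate, fixed_oh_rate):
    # materials and labor are lists of (description, quantity, rate) tuples;
    # overhead is applied per direct labor hour, as the exercise states.
    lines = [(desc, qty, rate, qty * rate) for desc, qty, rate in materials + labor]
    labor_hours = sum(qty for _, qty, _ in labor)  # 2.75 + 4 = 6.75 hours
    lines.append(("Variable overhead", labor_hours, variable_oh_rate,
                  labor_hours * variable_oh_rate))
    lines.append(("Fixed overhead", labor_hours, fixed_oh_rate,
                  labor_hours * fixed_oh_rate))
    for desc, qty, rate, cost in lines:
        print(f"{desc:<20} {qty:>5} x ${rate:>5.2f} = ${cost:>7.2f}")
    print(f"{'Total standard cost':<20}            ${sum(c for *_, c in lines):>7.2f}")

standard_cost_card(
    materials=[("Material", 4, 5.25)],
    labor=[("Assembly labor", 2.75, 11.00), ("Finishing labor", 4, 15.00)],
    variable_oh_rate=5.00,
    fixed_oh_rate=3.00,
)  # prints a total standard cost of $165.25 per bicycle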
Ethical Considerations Ethical Variance Analysis Variance analysis allows managers to see whether costs are different than planned. Once a difference between expected and actual costs is identified, variance analysis should delve into why the costs differ and what the magnitude of the difference means. To determine why a cost differs, it should be established if the additional cost provides a benefit or detriment to an organization’s stakeholders, the people or entities that are affected by the organization’s actions or inactions. Not all stakeholders are equal in the analysis, but an organization should recognize each stakeholder’s interest in the organization’s business and operational decisions, while ranking the importance of the stakeholder in relation to any decision made. Ranking should look to how stakeholders are affected by costs and any decisions related to cost variance, or why the variance occurred. For example, if a cost variance is due to an additional cost to make a product eco-friendly, then an organization may determine that incurring the cost is a benefit to its stakeholders. However, if the additional cost creates an unfavorable situation for a stakeholder, the process incurring the cost should be investigated. Remember that the owners of a company, including shareholders, are also stakeholders. To determine the best course of action for an organization, cost analysis should help inform stakeholder analysis—the process of systematically gathering and analyzing all of the information related to a business decision. Different factors may produce a variance. The company could have paid too much or too little for production. It may have purchased the wrong grade of material or hired employees with more or less experience than required. Sometimes the variances are interrelated. For example, purchasing substandard materials may lead to using more time to make the product and may produce more scrap. The substandard material may have been more difficult to work with or had more defects than the proper grade material. In such a situation, a favorable material price variance could cause an unfavorable labor efficiency variance and an unfavorable material quantity variance. Employees who do not have the expected experience level may save money in the wage rate but may require more hours to be worked and more material to be used because of their inexperience. Another situation in which a variance may occur is when the cost of labor and/or material changes after the standard was established. Toward the end of the fiscal year, standards often become less reliable because time has passed and the environment has changed. It is not reasonable to expect the price of all materials and labor to remain constant for 12 months. For example, the grade of material used to establish the standard may no longer be available. Manufacturing Cost Variances As you’ve learned, the standard price and standard quantity are anticipated amounts. Any change from these budgeted amounts will produce a variance. There can be variances for materials, labor, and overhead. Direct materials may have a variance in price of materials or quantity of materials used. Direct labor may have a variance in the rate paid to workers or the amount of time used to make a product. Overhead may produce a variance in expected fixed or variable costs, leading to possible differences in production capacity and management’s ability to control overhead. More specifics on the formulas, processes, and interpretations of the direct materials, direct labor, and overhead variances are discussed in each of this chapter’s following sections. Concepts In Practice Qualcomm 2 Qualcomm Inc. is a large producer of telecommunications equipment focusing mainly on wireless products and services. As with any company, Qualcomm sets labor standards and must address any variances in labor costs to stay on budget and control overall manufacturing costs. In 2018, Qualcomm announced a reduction to its labor force, affecting many of its full-time and temporary workers. The reduction in labor was necessary to suppress rising expenses that could not be controlled through overhead or materials cost-cutting measures. The variances between standard and actual labor rates, along with diminishing profit margins, will have contributed to this decision. It is important for Qualcomm management to keep labor variances minimal in the future so that large workforce reductions are not required to control costs. 2 Munsif Vengattil. “Qualcomm Begins Layoffs as Part of Cost Cuts.” Reuters. April 18, 2018.
https://www.reuters.com/article/us-qualcomm-layoffs/qualcomm-begins-layoffs-as-part-of-cost-cuts-idUSKBN1HP33L Think It Through Chocolate Cow Ice Cream Company The Chocolate Cow Ice Cream Company has grown substantially recently, and management now feels the need to develop standards and compute variances. A consulting firm was hired to develop the standards and the format for the variance computation. One standard in particular that the consulting firm developed seemed excessive to plant management. The consulting firm’s standard was production of 100 gallons of ice cream every 45 minutes. The plant’s middle level of management thought the standard should be 100 gallons every 55 minutes, while the top management of the company thought that the consulting firm’s standard would provide more motivation to the employees. Why is the company establishing a standard for production? What are some factors the company may need to consider before selecting one of the proposed standards? 8.2 Compute and Evaluate Materials Variances As you’ve learned, direct materials are those materials used in the production of goods that are easily traceable and are a major component of the product. The amount of materials used and the price paid for those materials may differ from the standard costs determined at the beginning of a period. A company can compute these materials variances and, from these calculations, can interpret the results and decide how to address these differences. Concepts In Practice Buttering Popcorn In a movie theater, management uses standards to determine if the proper amount of butter is being used on the popcorn. They train the employees to put two tablespoons of butter on each bag of popcorn, so total butter usage is based on the number of bags of popcorn sold. Therefore, if the theater sells 300 bags of popcorn with two tablespoons of butter on each, the total amount of butter that should be used is 600 tablespoons. Management can then compare the predicted use of 600 tablespoons of butter to the actual amount used. If the actual usage of butter was less than 600, customers may not be happy, because they may feel that they did not get enough butter. If more than 600 tablespoons of butter were used, management would investigate to determine why. Some reasons why more butter was used than expected (an unfavorable outcome) include inexperienced workers pouring too much or a standard that was set too low, producing unrealistic expectations that do not satisfy customers. Fundamentals of Direct Materials Variances The direct materials variances measure how efficient the company is at using materials as well as how effective it is at purchasing materials. There are two components to a direct materials variance: the direct materials price variance and the direct materials quantity variance, which both compare the actual price or amount used to the standard amount. Direct Materials Price Variance The direct materials price variance compares the actual price per unit (pound or yard, for example) of the direct materials to the standard price per unit of direct materials. The formula for direct materials price variance is calculated as: Direct Materials Price Variance = (Actual Price × Actual Quantity Used) – (Standard Price × Actual Quantity Used). Factoring out actual quantity used from both components of the formula, it can be rewritten as: Direct Materials Price Variance = (Actual Price – Standard Price) × Actual Quantity Used. With either of these formulas, the actual quantity used refers to the actual amount of materials used to create one unit of product. The standard price is the expected price paid for materials per unit.
The actual price paid is the actual amount paid for materials per unit. If there is no difference between the standard price and the actual price paid, the outcome will be zero, and no price variance exists. If the actual price paid per unit of material is lower than the standard price per unit, the variance will be a favorable variance. A favorable outcome means you spent less on the purchase of materials than you anticipated. If, however, the actual price paid per unit of material is greater than the standard price per unit, the variance will be unfavorable. An unfavorable outcome means you spent more on the purchase of materials than you anticipated. The actual price can differ from the standard or expected price because of such factors as supply and demand of the material, increased labor costs to the supplier that are passed along to the customer, or improvements in technology that make the material cheaper. The producer must be aware that the difference between what it expects to happen and what actually happens will affect all of the goods produced using these particular materials. Therefore, the sooner management is aware of a problem, the sooner it can fix it. For that reason, the material price variance is computed at the time of purchase and not when the material is used in production. Let us consider an example. Connie’s Candy Company produces various types of candies that it sells to retailers. Connie’s Candy establishes a standard price for candy-making materials of $7.00 per pound. Each box of candy is expected to use 0.25 pounds of candy-making materials. Connie’s Candy found that the actual price of materials was $6.00 per pound. They still actually use 0.25 pounds of materials to make each box. The direct materials price variance computes as: Direct Materials Price Variance = ($6.00 – $7.00) × 0.25 lb. = –$0.25, or $0.25 (Favorable). In this case, the actual price per unit of materials is $6.00, the standard price per unit of materials is $7.00, and the actual quantity used is 0.25 pounds. This computes as a favorable outcome. This is a favorable outcome because the actual price for materials was less than the standard price. As a result of this favorable outcome information, the company may consider continuing operations as they exist, or could change future budget projections to reflect higher profit margins, among other things. Let us take the same example except now the actual price for candy-making materials is $9.00 per pound. The direct materials price variance computes as: Direct Materials Price Variance = ($9.00 – $7.00) × 0.25 lbs. = $0.50 (Unfavorable). In this case, the actual price per unit of materials is $9.00, the standard price per unit of materials is $7.00, and the actual quantity used is 0.25 pounds. This computes as an unfavorable outcome. This is an unfavorable outcome because the actual price for materials was more than the standard price. As a result of this unfavorable outcome information, the company may consider using cheaper materials, changing suppliers, or increasing prices to cover costs. Another element this company and others must consider is a direct materials quantity variance.
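Since the price variance reduces to (actual price – standard price) × actual quantity, the two Connie’s Candy computations above can be verified in a few lines of Python. This is a sketch for checking the arithmetic only; the function name is our own, not from the text.

# Direct materials price variance: (actual price - standard price) x actual quantity.
# A negative result is favorable, a positive result unfavorable, matching the text.

def dm_price_variance(actual_price, standard_price, actual_qty):
    return (actual_price - standard_price) * actual_qty

print(dm_price_variance(6.00, 7.00, 0.25))  # -0.25 -> $0.25 favorable
print(dm_price_variance(9.00, 7.00, 0.25))  #  0.50 -> $0.50 unfavorable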
Think It Through Don’t “Skirt” the Issue You run a fabric store and order materials through a supplier. At the end of the month, you review your materials cost and discover that your direct materials price and quantity variances produced unfavorable results. What could be attributed to these unfavorable outcomes? How would these unfavorable outcomes impact the total direct materials variance? Direct Materials Quantity Variance The direct materials quantity variance compares the actual quantity of materials used to the standard materials that were expected to be used to make the actual units produced. The variance is calculated using this formula: Direct Materials Quantity Variance = (Actual Quantity Used × Standard Price) – (Standard Quantity × Standard Price). Factoring out standard price from both components of the formula, it can be rewritten as: Direct Materials Quantity Variance = (Actual Quantity Used – Standard Quantity) × Standard Price. With either of these formulas, the actual quantity used refers to the actual amount of materials used at the actual production output. The standard price is the expected price paid for materials per unit. The standard quantity is the expected amount of materials used at the actual production output. If there is no difference between the actual quantity used and the standard quantity, the outcome will be zero, and no variance exists. If the actual quantity of materials used is less than the standard quantity used at the actual production output level, the variance will be a favorable variance. A favorable outcome means you used fewer materials than anticipated to make the actual number of production units. If, however, the actual quantity of materials used is greater than the standard quantity used at the actual production output level, the variance will be unfavorable. An unfavorable outcome means you used more materials than anticipated to make the actual number of production units. The actual quantity used can differ from the standard quantity because of improved efficiencies in production, carelessness or inefficiencies in production, or poor estimation when creating the standard usage. Consider the previous example with Connie’s Candy Company. Connie’s Candy established a standard price for candy-making materials of $7.00 per pound. Each box of candy is expected to use 0.25 pounds of candy-making materials. Connie’s Candy found that the actual quantity of candy-making materials used to produce one box of candy was 0.20 pounds. The direct materials quantity variance computes as: Direct Materials Quantity Variance = (0.20 lb. – 0.25 lb.) × $7.00 = –$0.35, or $0.35 (Favorable). In this case, the actual quantity of materials used is 0.20 pounds, the standard price per unit of materials is $7.00, and the standard quantity used is 0.25 pounds. This computes as a favorable outcome. This is a favorable outcome because the actual quantity of materials used was less than the standard quantity expected at the actual production output level. As a result of this favorable outcome information, the company may consider continuing operations as they exist, or could change future budget projections to reflect higher profit margins, among other things. Let us take the same example except now the actual quantity of candy-making materials used to produce one box of candy was 0.50 pounds. The direct materials quantity variance computes as: Direct Materials Quantity Variance = (0.50 lb. – 0.25 lb.) × $7.00 = $1.75 (Unfavorable).
In this case, the actual quantity of materials used is 0.50 pounds, the standard price per unit of materials is $7.00, and the standard quantity used is 0.25 pounds. This computes as an unfavorable outcome. This is an unfavorable outcome because the actual quantity of materials used was more than the standard quantity expected at the actual production output level. As a result of this unfavorable outcome information, the company may consider retraining workers to reduce waste or changing its production process to decrease materials needs per box. The combination of the two variances can produce one overall total direct materials cost variance. Link to Learning Watch this video featuring a professor of accounting walking through the steps involved in calculating a material price variance and a material quantity variance to learn more. Total Direct Materials Cost Variance When a company makes a product and compares the actual materials cost to the standard materials cost, the result is the total direct materials cost variance. An unfavorable outcome means the actual costs related to materials were more than the expected (standard) costs. If the outcome is a favorable outcome, this means the actual costs related to materials are less than the expected (standard) costs. The total direct materials cost variance is also found by combining the direct materials price variance and the direct materials quantity variance. By showing the total materials variance as the sum of the two components, management can better analyze the two variances and enhance decision-making. Figure 8.3 shows the connection between the direct materials price variance and direct materials quantity variance to total direct materials cost variance. For example, Connie’s Candy Company expects to pay $7.00 per pound for candy-making materials but actually pays $9.00 per pound. The company expected to use 0.25 pounds of materials per box but actually used 0.50 pounds per box. The total direct materials variance is computed as: Total Direct Materials Variance = (0.50 lbs. × $9.00) – (0.25 lbs. × $7.00) = $4.50 – $1.75 = $2.75 (Unfavorable). In this case, two elements contribute to the unfavorable outcome. Connie’s Candy paid $2.00 per pound more for materials than expected and used 0.25 pounds more of materials than expected to make one box of candy. The same calculation is shown using the outcomes of the direct materials price and quantity variances. As with the interpretations for the materials price and quantity variances, the company would review the individual components contributing to the overall unfavorable outcome for the total direct materials variance, and possibly make changes to production elements as a result.
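The quantity and total variances lend themselves to the same treatment. Here is a companion Python sketch (again with hypothetical helper names) that reproduces the Connie’s Candy totals above and confirms that the price and quantity components sum to the total.

# Direct materials quantity and total variances, per the formulas above.
# Negative = favorable, positive = unfavorable.

def dm_quantity_variance(actual_qty, standard_qty, standard_price):
    return (actual_qty - standard_qty) * standard_price

def dm_total_variance(actual_qty, actual_price, standard_qty, standard_price):
    return actual_qty * actual_price - standard_qty * standard_price

print(dm_quantity_variance(0.50, 0.25, 7.00))     # 1.75 -> $1.75 unfavorable
print(dm_total_variance(0.50, 9.00, 0.25, 7.00))  # 2.75 -> $2.75 unfavorable

# The total decomposes into a price component plus a quantity component:
price_component = (9.00 - 7.00) * 0.50            # 1.00 unfavorable
print(price_component + dm_quantity_variance(0.50, 0.25, 7.00))  # 2.75 again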
Your Turn Sweet and Fresh Shampoo Materials Biglow Company makes a hair shampoo called Sweet and Fresh. Each bottle has a standard material cost of 8 ounces at $0.85 per ounce. During May, Biglow manufactured 11,000 bottles. They bought 89,000 ounces of material at a cost of $74,760. All 89,000 ounces were used to make the 11,000 bottles. Calculate the material price variance and the material quantity variance. Solution Actual price per ounce: $74,760/89,000 = $0.84. Material price variance: 89,000 × ($0.84 − $0.85) = –$890, or $890 favorable. Standard quantity: 11,000 bottles × 8 ounces = 88,000 ounces. Material quantity variance: $0.85 × (89,000 – 88,000) = $850 unfavorable. 8.3 Compute and Evaluate Labor Variances In addition to evaluating materials usage, companies must assess how efficiently and effectively they are using labor in the production of their products. Direct labor is a cost associated with workers working directly in the production process. The company must look at both the quantity of hours used and the rate of the labor and compare outcomes to standard costs. Determining efficiency and effectiveness of labor leads to individual labor variances. A company can compute these labor variances and make informed decisions about labor operations based on these differences. Fundamentals of Direct Labor Variances The direct labor variance measures how efficiently the company uses labor as well as how effective it is at pricing labor. There are two components to a labor variance: the direct labor rate variance and the direct labor time variance. Direct Labor Rate Variance The direct labor rate variance compares the actual rate per hour of direct labor to the standard rate per hour of labor for the hours worked. The direct labor rate variance is calculated using this formula: Direct Labor Rate Variance = (Actual Rate per Hour × Actual Hours Worked) – (Standard Rate per Hour × Actual Hours Worked). Factoring out the actual hours worked from both components of the formula, it can be rewritten as: Direct Labor Rate Variance = (Actual Rate per Hour – Standard Rate per Hour) × Actual Hours Worked. With either of these formulas, the actual rate per hour refers to the actual rate of pay for workers to create one unit of product. The standard rate per hour is the expected rate of pay for workers to create one unit of product. The actual hours worked are the actual number of hours worked to create one unit of product. If there is no difference between the standard rate and the actual rate, the outcome will be zero, and no variance exists. If the actual rate of pay per hour is less than the standard rate of pay per hour, the variance will be a favorable variance. A favorable outcome means you paid workers less than anticipated. If, however, the actual rate of pay per hour is greater than the standard rate of pay per hour, the variance will be unfavorable. An unfavorable outcome means you paid workers more than anticipated. The actual rate can differ from the standard or expected rate because of supply and demand of the workers, increased labor costs due to economic changes or union contracts, or the ability to hire employees at a different skill level. Once the manufacturer makes the products, the labor costs will follow the goods through production, so the company should evaluate how the difference between what it expected to happen and what actually happened will affect all the goods produced using these particular labor rates. Let us again consider Connie’s Candy Company with respect to labor. Connie’s Candy establishes a standard rate per hour for labor of $8.00. Each box of candy is expected to require 0.10 hours of labor (6 minutes). Connie’s Candy found that the actual rate of pay per hour for labor was $7.50. They still actually required 0.10 hours of labor to make each box. The direct labor rate variance computes as: Direct Labor Rate Variance = ($7.50 – $8.00) × 0.10 hours = –$0.05, or $0.05 (Favorable). In this case, the actual rate per hour is $7.50, the standard rate per hour is $8.00, and the actual hours worked are 0.10 hours per box. This computes as a favorable outcome.
This is a favorable outcome because the actual rate of pay was less than the standard rate of pay. As a result of this favorable outcome information, the company may consider continuing operations as they exist, or could change future budget projections to reflect higher profit margins, among other things. Let us take the same example except now the actual rate of pay per hour is $9.50. The direct labor rate variance computes as: Direct Labor Rate Variance = ($9.50 – $8.00) × 0.10 hours = $0.15 (Unfavorable). In this case, the actual rate per hour is $9.50, the standard rate per hour is $8.00, and the actual hours worked per box are 0.10 hours. This computes as an unfavorable outcome. This is an unfavorable outcome because the actual rate per hour was more than the standard rate per hour. As a result of this unfavorable outcome information, the company may consider using cheaper labor, changing the production process to be more efficient, or increasing prices to cover labor costs. Another element this company and others must consider is a direct labor time variance.
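Before moving on to the time variance, the rate variance arithmetic can be checked with a short Python sketch; the function name is illustrative rather than anything defined in the text.

# Direct labor rate variance: (actual rate - standard rate) x actual hours worked.
# Negative = favorable, positive = unfavorable.

def dl_rate_variance(actual_rate, standard_rate, actual_hours):
    return (actual_rate - standard_rate) * actual_hours

print(dl_rate_variance(7.50, 8.00, 0.10))  # about -0.05 -> $0.05 favorable per box
print(dl_rate_variance(9.50, 8.00, 0.10))  # about  0.15 -> $0.15 unfavorable per box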
Direct Labor Time Variance The direct labor time variance compares the actual labor hours used to the standard labor hours that were expected to be used to make the actual units produced. The variance is calculated using this formula: Direct Labor Time Variance = (Actual Hours Worked × Standard Rate per Hour) – (Standard Hours × Standard Rate per Hour). Factoring out the standard rate per hour from both components of the formula, it can be rewritten as: Direct Labor Time Variance = (Actual Hours Worked – Standard Hours) × Standard Rate per Hour. With either of these formulas, the actual hours worked refers to the actual number of hours used at the actual production output. The standard rate per hour is the expected hourly rate paid to workers. The standard hours are the expected number of hours used at the actual production output. If there is no difference between the actual hours worked and the standard hours, the outcome will be zero, and no variance exists. If the actual hours worked are less than the standard hours at the actual production output level, the variance will be a favorable variance. A favorable outcome means you used fewer hours than anticipated to make the actual number of production units. If, however, the actual hours worked are greater than the standard hours at the actual production output level, the variance will be unfavorable. An unfavorable outcome means you used more hours than anticipated to make the actual number of production units. The actual hours used can differ from the standard hours because of improved efficiencies in production, carelessness or inefficiencies in production, or poor estimation when creating the standard usage. Consider the previous example with Connie’s Candy Company. Connie’s Candy establishes a standard rate per hour for labor of $8.00. Each box of candy is expected to require 0.10 hours of labor (6 minutes). Connie’s Candy found that the actual hours worked per box were 0.05 hours (3 minutes). The actual rate per hour for labor remained at $8.00 to make each box. The direct labor time variance computes as: Direct Labor Time Variance = (0.05 – 0.10) × $8.00 per hour = –$0.40, or $0.40 (Favorable). In this case, the actual hours worked are 0.05 per box, the standard hours are 0.10 per box, and the standard rate per hour is $8.00. This computes as a favorable outcome. This is a favorable outcome because the actual hours worked were less than the standard hours expected. As a result of this favorable outcome information, the company may consider continuing operations as they exist, or could change future budget projections to reflect higher profit margins, among other things. Let us take the same example except now the actual hours worked are 0.20 hours per box. The direct labor time variance computes as: Direct Labor Time Variance = (0.20 – 0.10) × $8.00 per hour = $0.80 (Unfavorable). In this case, the actual hours worked per box are 0.20, the standard hours per box are 0.10, and the standard rate per hour is $8.00. This computes as an unfavorable outcome. This is an unfavorable outcome because the actual hours worked were more than the standard hours expected per box. As a result of this unfavorable outcome information, the company may consider retraining its workers, changing the production process to be more efficient, or increasing prices to cover labor costs. The combination of the two variances can produce one overall total direct labor cost variance. Think It Through Package Deliveries UPS drivers are evaluated on how many miles they drive and how quickly they deliver packages. The drivers are given the route and time they are expected to take, so they are expected to complete their route in a timely and efficient manner. They also work until all packages are delivered. A GPS tracking system tracks the trucks throughout the day. The system keeps track of how much they back up and if they take any left turns, because right turns are much more time efficient. 3 Tracking drivers like this does not leave them very much time to deal with customers. Customer service is a major part of the driver’s job. Can the driver service the customer and drive the route in the time and distance allotted? Which is more important: customer service or driving the route in a timely and efficient manner? 3 Graham Kendall. “Why UPS Drivers Don’t Turn Left and Why You Shouldn’t Either.” The Conversation. January 20, 2017. http://theconversation.com/why-ups-drivers-dont-turn-left-and-you-probably-shouldnt-either-71432 Link to Learning Watch this video presenting an instructor walking through the steps involved in calculating direct labor variances to learn more. Total Direct Labor Variance When a company makes a product and compares the actual labor cost to the standard labor cost, the result is the total direct labor variance. If the outcome is unfavorable, the actual costs related to labor were more than the expected (standard) costs. If the outcome is favorable, the actual costs related to labor are less than the expected (standard) costs. The total direct labor variance is also found by combining the direct labor rate variance and the direct labor time variance. By showing the total direct labor variance as the sum of the two components, management can better analyze the two variances and enhance decision-making. Figure 8.4 shows the connection between the direct labor rate variance and direct labor time variance to total direct labor variance. For example, Connie’s Candy Company expects to pay a rate of $8.00 per hour for labor but actually pays $9.50 per hour. The company expected to use 0.10 hours of labor per box but actually used 0.20 hours per box.
The total direct labor variance is computed as: Total Direct Labor Variance = (0.20 hours × $9.50) – (0.10 hours × $8.00) = $1.90 – $0.80 = $1.10 (Unfavorable). In this case, two elements are contributing to the unfavorable outcome. Connie’s Candy paid $1.50 per hour more for labor than expected and used 0.10 hours more than expected to make one box of candy. The same calculation is shown as follows using the outcomes of the direct labor rate and time variances. As with the interpretations for the labor rate and time variances, the company would review the individual components contributing to the overall unfavorable outcome for the total direct labor variance, and possibly make changes to production elements as a result. Your Turn Sweet and Fresh Shampoo Labor Biglow Company makes a hair shampoo called Sweet and Fresh. Each bottle has a standard labor cost of 1.5 hours at $35.00 per hour. During May, Biglow manufactured 11,000 bottles. They used 16,000 hours at a cost of $565,600. Calculate the labor rate variance, labor time variance, and total labor variance. Solution Actual rate per hour: $565,600/16,000 hours = $35.35. Labor rate variance: ($35.35 – $35.00) × 16,000 hours = $5,600 unfavorable. Standard hours: 11,000 bottles × 1.5 hours = 16,500 hours. Labor time variance: (16,000 – 16,500) × $35.00 = –$17,500, or $17,500 favorable. Total labor variance: $565,600 – (16,500 × $35.00) = –$11,900, or $11,900 favorable.
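As a check on the Sweet and Fresh labor solution above, the same figures can be run through a short Python script; the variable names are ours, and the final line simply confirms that the rate and time variances sum to the total.

# Verifies the Sweet and Fresh shampoo labor variances computed above.
# Negative results are favorable, positive results unfavorable.

actual_hours, actual_cost = 16_000, 565_600
standard_rate = 35.00
standard_hours = 11_000 * 1.5  # 16,500 standard hours for 11,000 bottles

actual_rate = actual_cost / actual_hours                         # $35.35 per hour
rate_variance = (actual_rate - standard_rate) * actual_hours     # 5,600 unfavorable
time_variance = (actual_hours - standard_hours) * standard_rate  # -17,500 favorable
total_variance = actual_cost - standard_hours * standard_rate    # -11,900 favorable

print(rate_variance, time_variance, total_variance)
print(abs(rate_variance + time_variance - total_variance) < 1e-6)  # True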
Concepts In Practice Labor Costs in Service Industries In the service industry, labor is the main cost. Doctors, for example, have a time allotment for a physical exam and base their fee on the expected time. Insurance companies pay doctors according to a set schedule, so they set the labor standard. They pay a set rate for a physical exam, no matter how long it takes. If the exam takes longer than expected, the doctor is not compensated for that extra time. This would produce an unfavorable labor variance for the doctor. Doctors know the standard and try to schedule accordingly so a variance does not exist. If anything, they try to produce a favorable variance by seeing more patients in a quicker time frame to maximize their compensation potential. 8.4 Compute and Evaluate Overhead Variances Recall that the standard cost of a product includes not only materials and labor but also variable and fixed overhead. It is likely that the amounts determined for standard overhead costs will differ from what actually occurs. This will lead to overhead variances. Determination and Evaluation of Overhead Variance In a standard cost system, overhead is applied to the goods based on a standard overhead rate. This is similar to the predetermined overhead rate used previously. The standard overhead rate is calculated by dividing budgeted overhead at a given level of production (known as normal capacity) by the level of activity required for that particular level of production. Usually, the level of activity is either direct labor hours or direct labor cost, but it could be machine hours or units of production. Creation of Flexible Overhead Budget To determine the overhead standard cost, companies prepare a flexible budget that gives estimated revenues and costs at varying levels of production. The standard overhead cost is usually expressed as the sum of its component parts, fixed and variable costs per unit. Note that at different levels of production, total fixed costs are the same, so the standard fixed cost per unit will change for each production level. However, the variable standard cost per unit is the same per unit for each level of production, but the total variable costs will change. We continue to use Connie’s Candy Company to illustrate. Suppose Connie’s Candy budgets capacity of production at 100% and determines expected overhead at this capacity. Connie’s Candy also wants to understand what overhead cost outcomes will be at 90% capacity and 110% capacity. The following information is the flexible budget Connie’s Candy prepared to show expected overhead at each capacity level. Units of output at 100% capacity are 1,000 candy boxes (units). The standard overhead rate is the total budgeted overhead of $10,000 divided by the level of activity (direct labor hours) of 2,000 hours. Notice that fixed overhead remains constant at each of the production levels, but variable overhead changes based on unit output. If Connie’s Candy only produced at 90% capacity, for example, it should expect total overhead to be $9,600 and a standard overhead rate of $5.33 (rounded). If Connie’s Candy produced at 110% capacity (2,200 direct labor hours), it should expect total overhead to be $10,400 and a standard overhead rate of $4.73 (rounded). In addition to the total standard overhead rate, Connie’s Candy will want to know the variable overhead rates at each activity level. Using the flexible budget, we can determine the standard variable cost per unit at each level of production by taking the total expected variable overhead divided by the level of activity, which can still be direct labor hours or machine hours. Looking at Connie’s Candy, the following table shows the variable overhead rate at each of the production capacity levels.
Production Capacity | Variable Overhead/Direct Labor Hours = Variable Rate per Hour
90% | $3,600/1,800 = $2
100% | $4,000/2,000 = $2
110% | $4,400/2,200 = $2
Sometimes these flexible budget figures and overhead rates differ from the actual results, which produces a variance.
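Because the flexible budget table itself did not survive into this text, here is a small Python reconstruction. The $6,000 fixed overhead figure is inferred from the totals quoted above (for example, $10,000 total overhead less $4,000 variable overhead at 100% capacity), so treat it as a derived figure rather than a quoted one.

# Flexible overhead budget for Connie's Candy, reconstructed from the totals
# quoted in the text. The $6,000 fixed overhead is inferred, not quoted.

FIXED_OH = 6_000       # constant in total at every capacity level
VARIABLE_RATE = 2.00   # per direct labor hour ($4,000 / 2,000 hours at 100%)

for capacity, hours in ((0.90, 1_800), (1.00, 2_000), (1.10, 2_200)):
    variable_oh = VARIABLE_RATE * hours
    total_oh = FIXED_OH + variable_oh
    standard_rate = total_oh / hours
    print(f"{capacity:.0%}: variable ${variable_oh:,.0f}, "
          f"total ${total_oh:,.0f}, standard rate ${standard_rate:.2f}/hour")

# Prints $9,600 at $5.33, $10,000 at $5.00, and $10,400 at $4.73, matching the text.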
The standard variable overhead rate per hour is $2.00 ($4,000/2,000 hours), taken from the flexible budget at 100% capacity. The actual variable overhead rate is $2.80 ($7,000/2,500 hours), taken from the actual results at 100% capacity. Therefore,

Variable Overhead Rate Variance = ($2.80 – $2.00) × 2,500 = $2,000 (Unfavorable)

This produces an unfavorable outcome. This could be for many reasons, and the production supervisor would need to determine where the variable cost difference is occurring to make production changes. Let us look at another example producing a favorable outcome. Connie’s Candy had this data available in the flexible budget:

Connie’s Candy also had this actual output information:

To determine the variable overhead rate variance, the standard variable overhead rate per hour and the actual variable overhead rate per hour must be determined. The standard variable overhead rate per hour is $2.00 ($4,000/2,000 hours), taken from the flexible budget at 100% capacity. The actual variable overhead rate is $1.75 ($3,500/2,000 hours), taken from the actual results at 100% capacity. Therefore,

Variable Overhead Rate Variance = ($1.75 – $2.00) × 2,000 = –$500, or $500 (Favorable)

This produces a favorable outcome. This could be for many reasons, and the production supervisor would need to determine where the variable cost difference is occurring to better understand the variable overhead reduction. Interpretation of the variable overhead rate variance is often difficult because the cost of one overhead item, such as indirect labor, could go up while another overhead cost, such as indirect materials, goes down. Often, explanation of this variance will need clarification from the production supervisor. Another variable overhead variance to consider is the variable overhead efficiency variance.

Determination of Variable Overhead Efficiency Variance

The variable overhead efficiency variance, also known as the controllable variance, is driven by the difference between the actual hours worked and the standard hours expected for the units produced. This variance measures whether the allocation base was efficiently used. The variable overhead efficiency variance is calculated using this formula:

Variable Overhead Efficiency Variance = (Actual Hours × Standard Rate) – (Standard Hours × Standard Rate)

Factoring out the standard overhead rate, the formula can be written as

Variable Overhead Efficiency Variance = (Actual Hours – Standard Hours) × Standard Rate

If the outcome is favorable (a negative outcome occurs in the calculation), this means the company was more efficient than it had anticipated for variable overhead. If the outcome is unfavorable (a positive outcome occurs in the calculation), this means the company was less efficient than it had anticipated for variable overhead. Connie’s Candy Company wants to determine if its variable overhead efficiency was more or less than anticipated. Connie’s Candy had the following data available in the flexible budget:

Connie’s Candy also had the following actual output information:

To determine the variable overhead efficiency variance, the actual hours worked and the standard hours worked at the production capacity of 100% must be determined. Actual hours worked are 2,500, and standard hours are 2,000. The standard variable overhead rate per hour is $2.00 ($4,000/2,000 hours), taken from the flexible budget at 100% capacity.
Therefore,

Variable Overhead Efficiency Variance = (2,500 – 2,000) × $2.00 = $1,000 (Unfavorable)

This produces an unfavorable outcome. This could be for many reasons, and the production supervisor would need to determine where the variable cost difference is occurring to make production changes. Let us look at another example producing a favorable outcome. Connie’s Candy had the following data available in the flexible budget:

Connie’s Candy also had the following actual output information:

To determine the variable overhead efficiency variance, the actual hours worked and the standard hours worked at the production capacity of 100% must be determined. Actual hours worked are 1,800, and standard hours are 2,000. The standard variable overhead rate per hour is $2.00 ($4,000/2,000 hours), taken from the flexible budget at 100% capacity. Therefore,

Variable Overhead Efficiency Variance = (1,800 – 2,000) × $2.00 = –$400, or $400 (Favorable)

This produces a favorable outcome. This could be for many reasons, and the production supervisor would need to determine where the variable cost difference is occurring to better understand the variable overhead efficiency reduction.

The total variable overhead cost variance is also found by combining the variable overhead rate variance and the variable overhead efficiency variance. By showing the total variable overhead cost variance as the sum of the two components, management can better analyze the two variances and enhance decision-making. Figure 8.5 shows the connection between the variable overhead rate variance and variable overhead efficiency variance to the total variable overhead cost variance. For example, Connie’s Candy Company had the following data available in the flexible budget:

Connie’s Candy also had the following actual output information:

The variable overhead rate variance is calculated as (1,800 × $1.94) – (1,800 × $2.00) = –$108, or $108 (favorable). The variable overhead efficiency variance is calculated as (1,800 × $2.00) – (2,000 × $2.00) = –$400, or $400 (favorable). The total variable overhead cost variance is computed as:

Total Variable Overhead Cost Variance = (–$108) + (–$400) = –$508, or $508 (Favorable)

In this case, two elements are contributing to the favorable outcome. Connie’s Candy used fewer direct labor hours and less variable overhead to produce 1,000 candy boxes (units). The same calculation is shown in diagram format. As with the interpretations for the variable overhead rate and efficiency variances, the company would review the individual components contributing to the overall favorable outcome for the total variable overhead cost variance before making any decisions about production in the future. Other variances companies consider are fixed factory overhead variances.

Fundamentals of Fixed Factory Overhead Variances

The fixed factory overhead variance represents the difference between the actual fixed overhead and the applied fixed overhead. There are two fixed overhead variances. One variance determines if too much or too little was spent on fixed overhead. The other variance computes whether or not actual production was above or below the expected production level.
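Before moving to the exercises, here is a hedged sketch tying the two variable overhead variances to the total, using the final Connie’s Candy example above. The actual variable overhead of $3,492 is inferred from the $1.94 actual rate times 1,800 hours; the function name is this sketch’s own.

```python
# Combines the variable overhead rate and efficiency variances into the
# total variable overhead cost variance. Negative = favorable.

def voh_variances(actual_voh, actual_hours, standard_hours, standard_rate):
    # Rate variance, with actual hours factored out:
    # actual VOH - (standard rate x actual hours)
    rate_var = actual_voh - standard_rate * actual_hours
    # Efficiency variance: (actual hours - standard hours) x standard rate
    efficiency_var = (actual_hours - standard_hours) * standard_rate
    return rate_var, efficiency_var, rate_var + efficiency_var

print(voh_variances(actual_voh=3_492,      # inferred: 1,800 hrs x $1.94
                    actual_hours=1_800,
                    standard_hours=2_000,
                    standard_rate=2.00))
# (-108.0, -400.0, -508.0)  -> $108 F, $400 F, $508 F
```

The same function reproduces the earlier unfavorable example: voh_variances(7_000, 2_500, 2_000, 2.00) returns (2000.0, 1000.0, 3000.0), that is, a $2,000 unfavorable rate variance plus a $1,000 unfavorable efficiency variance.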
Your Turn

Sweet and Fresh Shampoo Overhead

Biglow Company makes a hair shampoo called Sweet and Fresh. They have the following flexible budget data: What is the standard overhead rate at the 90%, 100%, and 110% capacity levels?

Solution

90% = $315,000/14,000 = $22.50; 100% = $346,000/16,000 = $21.63 (rounded); 110% = $378,000/18,000 = $21.00.

Think It Through

Purchasing Planes

The XYZ Firm is bidding on a contract for a new plane for the military. As the management team goes over the bid, they conclude it is too high on a per-plane basis, but they cannot find any costs they feel can be reduced. The information from the military states they will purchase between 50 and 100 planes, but will more likely purchase 50. XYZ’s bid is based on 50 planes. The controller suggests that they base their bid on 100 planes. This would spread the fixed costs over more planes and reduce the bid price. The lower bid price would substantially increase the chances of XYZ winning the bid. Should XYZ Firm keep the bid at 50 planes or increase its bid to 100 planes? What are the pros and cons of keeping the bid at 50 or increasing it to 100 planes?

8.5 Describe How Companies Use Variance Analysis

Companies use variance analysis in different ways. The starting point is the determination of standards against which to compare actual results. Many companies produce variance reports, and the management responsible for the variances must explain any variances outside of a certain range. Some companies only require that unfavorable variances be explained, while many companies require both favorable and unfavorable variances to be explained. Requiring managers to determine what caused unfavorable variances forces them to identify potential problem areas or consider if the variance was a one-time occurrence. Requiring managers to explain favorable variances allows them to assess whether the favorable variance is sustainable. Knowing what caused the favorable variance allows management to plan for it in the future, depending on whether it was a one-time variance or it will be ongoing.

Another possibility is that management may have built the favorable variance into the standards. Management may overestimate the material price, labor rate, material quantity, or labor hours per unit, for example. This method of overestimation, sometimes called budget slack, is built into the standards so management can still look good even if costs are higher than planned. In either case, managers potentially can help other managers and the company overall by noticing particular problem areas or by sharing knowledge that can improve variances.

Often, management will manage “to the variances,” meaning they will make decisions that may not be advantageous to the company’s best interests over the long run, in order to meet the variance report threshold limits. This can occur when the standards are improperly established, causing significant differences between actual and standard numbers.

Ethical Considerations

Ethical Long-Term Decisions in Variance Analysis

The proper use of variance analysis is a significant tool for an organization to reach its long-term goals. When its accounting system recognizes a variance, an organization needs to understand the significant influence of accounting not only in recording its financial results, but also in how reacting to that variance can shape management’s behavior toward reaching its goals.
Many managers use variance analysis only to determine a short-term reaction, and do not analyze why the variance occurred from a long-term perspective. A more long-term analysis of variances allows an approach that “is responsibility accounting in which authority and accountability for tasks is delegated downward to those managers with the most influence and control over them.” It is important for managers to analyze the reported variances with more than just a short-term perspective.

Managers sometimes focus only on making numbers for the current period. For example, a manager might decide to make a manufacturing division’s results look profitable in the short term at the expense of reaching the organization’s long-term goals. A recognizable cost variance could be an increase in repair costs as a percentage of sales on an increasing basis. This variance could indicate that equipment is not operating efficiently and is increasing overall cost. However, the expense of implementing new, more efficient equipment might be higher than repairing the current equipment. In the short term, it might be more economical to repair the outdated equipment, but in the long term, purchasing more efficient equipment would help the organization reach its goal of eco-friendly manufacturing. If the system used for controlling costs is not aligned to reinforce management of the organization with a long-term perspective, “the manager has no organizational incentive to be concerned with important issues unrelated to anything but the immediate costs” related to the variance. A manager needs to be cognizant of his or her organization’s goals when making decisions based on variance analysis.

Source for the quotations in this feature: Jeffrey R. Cohen and Laurie W. Pant. “The Only Thing That Counts Is That Which Is Counted: A Discussion of Behavioral and Ethical Issues in Cost Accounting That Are Relevant for the OB Professor.” September 18, 2018. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1026.5569&rep=rep1&type=pdf

Management can use standard costs to prepare the budget for the upcoming period, using the past information to possibly make changes to production elements. Standard costs are a measurement tool and can thus be used to evaluate performance. As you’ve learned, management may manage “to the variances” and can manipulate results to meet expectations. To reduce this possibility, performance should be measured on multiple outcomes, not simply on standard cost variances. As shown in Table 8.1, standard costs have pros and cons to consider when using them in the decision-making and evaluation processes.
Table 8.1 Standard Costs

Pros:
- Useful when developing a future budget
- Can be used as a benchmark for performance and quality expectations
- Can individually identify areas of success and areas for improvement

Cons:
- Might ignore customer and employee satisfaction rates
- Information could be historical data and not useful in real-time decision-making needs
- The system to manage and develop standard costs requires a lot of resources, which could be costly and time consuming

Standard costing provides many benefits and challenges, and a thorough analysis of each variance and the possible unfavorable or favorable outcomes is required to set future expectations and adjust current production goals. The following is a summary of all direct materials variances (Figure 8.6), direct labor variances (Figure 8.7), and overhead variances (Figure 8.8) presented as both formulas and tree diagrams. Note that for some of the formulas, there are two presentations of the same formula; for example, there are two presentations of the direct materials price variance. While both arrive at the same answer, students usually prefer one formula structure over the other.

Your Turn

Barley, Inc. Production

Barley, Inc., produces a product and has the following as standard costs per unit for materials and labor: For the month of October, the following information was gathered related to production:

Compute:
- The materials price and quantity variances
- The labor rate and efficiency variances

Provide possible explanations for each variance. (A short computational check of these variances appears at the end of this section.)

Solution

A. Materials price variance: $50,000 unfavorable = ($16* – $15) × 50,000 lb. (*$800,000/50,000). An unfavorable materials price variance occurred because the actual cost of materials was greater than the expected or standard cost. This could occur if a higher-quality material was purchased or the suppliers raised their prices.

Materials quantity variance: $150,000 unfavorable = (50,000 lb. – 40,000* lb.) × $15 per lb. (*4 lb. × 10,000 units). An unfavorable materials quantity variance occurred because the pounds of materials used were greater than the pounds expected to be used. This could occur if there were inefficiencies in production or the quality of the materials was such that more needed to be used to meet safety or other standards.

Materials inputs:

B. Labor rate variance: $50,000 favorable = ($18* per hour – $20 per hour) × 25,000 hours (*$450,000/25,000). A favorable labor rate variance occurred because the rate paid per hour was less than the rate expected to be paid (standard) per hour. This could occur because the company was able to hire workers at a lower rate, because of negotiated union contracts, or because of a poor labor rate estimate used in creating the standard.

Labor efficiency (quantity) variance: $100,000 unfavorable = (25,000 hours – 20,000* hours) × $20 per hour (*2 hours × 10,000 units). An unfavorable labor efficiency variance occurred because the actual hours worked to make the 10,000 units were greater than the expected hours to make that many units. This could occur because of inefficiencies of the workers, defects and errors that caused additional time reworking items, or the use of new workers who were less efficient.

Labor inputs:

Think It Through

Explaining Differences in Expected and Actual Operational Outcomes

The manager of a plant has called operations, purchasing, and personnel into her office to discuss the results of the last month. She notes that there was more than normal scrap, and employees worked more hours than expected.
She is looking for an explanation for these results. What system might she have used to determine these material and labor issues? Why might these variances have occurred? What should she do about it for future periods?

Link to Learning

Standard Costing Advantages Explained

See this article on the four major advantages of standard costing to learn more.
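As promised in the Barley, Inc. exercise above, here is a hedged computational check of its four variances. The helper names and the sign convention (positive = unfavorable) are this sketch’s own, not the text’s.

```python
# Check of the Barley, Inc. variances. Positive = unfavorable,
# negative = favorable, matching the chapter's sign convention.

def price_variance(actual_price, standard_price, actual_qty):
    return (actual_price - standard_price) * actual_qty

def quantity_variance(actual_qty, standard_qty, standard_price):
    return (actual_qty - standard_qty) * standard_price

# Materials: 50,000 lb. used at a total cost of $800,000;
# standard is 4 lb. per unit at $15 per lb. for 10,000 units.
print(price_variance(800_000 / 50_000, 15, 50_000))    # 50000.0  U
print(quantity_variance(50_000, 4 * 10_000, 15))       # 150000   U

# Labor: 25,000 hours at a total cost of $450,000;
# standard is 2 hours per unit at $20 per hour for 10,000 units.
print(price_variance(450_000 / 25_000, 20, 25_000))    # -50000.0 F
print(quantity_variance(25_000, 2 * 10_000, 20))       # 100000   U
```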
Introduction to Sociology
Learning Objectives

10.1 Global Stratification and Classification
- Describe global stratification
- Understand how different classification systems have developed
- Use terminology from Wallerstein’s world systems approach
- Explain the World Bank’s classification of economies

10.2 Global Wealth and Poverty
- Understand the differences between relative, absolute, and subjective poverty
- Describe the economic situation of some of the world’s most impoverished areas
- Explain the cyclical impact of the consequences of poverty

10.3 Theoretical Perspectives on Global Stratification
- Describe the modernization and dependency theory perspectives on global stratification

Introduction to Global Inequality

In 2000, the world entered a new millennium. In the spirit of a grand-scale New Year’s resolution, it was a time for lofty aspirations and dreams of changing the world. It was also the time of the Millennium Development Goals, a series of ambitious goals set by UN member nations. The MDGs, as they became known, sought to provide a practical and specific plan for eradicating extreme poverty around the world. Nearly 200 countries signed on, and they worked to create a series of 21 targets with 60 indicators, aiming to reach them by 2015. The goals spanned eight categories:

- To eradicate extreme poverty and hunger
- To achieve universal primary education
- To promote gender equality and empower women
- To reduce child mortality
- To improve maternal health
- To combat HIV/AIDS, malaria, and other diseases
- To ensure environmental sustainability
- To develop a global partnership for development (United Nations 2010)

There’s no question that these were well-thought-out objectives to work toward. So 11 years later, what has happened? As of the 2010 Outcome Document, much progress has been made toward some MDGs, while others are still lagging far behind. Goals related to poverty, education, child mortality, and access to clean water have seen much progress. But these successes show a disparity: some nations have seen great strides made, while others have seen virtually no progress. Improvements have been erratic, with hunger and malnutrition increasing from 2007 through 2009, undoing earlier achievements. Employment has also been slow to progress, as has a reduction in HIV infection rates, which have continued to outpace the number of people getting treatment. The mortality and healthcare rates for mothers and infants also show little advancement. Even in the areas that made gains, the successes are tenuous. And with the global recession slowing both institutional and personal funding, the attainment of the goals is very much in question (United Nations 2010).

As we consider the global effort to meet these ambitious goals, we can think about how the world’s people have ended up in such disparate circumstances. How did wealth become concentrated in some nations? What motivates companies to globalize? Is it fair for powerful countries to make rules that make it difficult for less-powerful nations to compete on the global scene? How can we address the needs of the world’s population?
[ { "answer": { "ans_choice": 3, "ans_text": "want to interview women working in factories to understand how they manage the expectations of their supervisors, make ends meet, and support their households on a day-to-day basis" }, "bloom": "3", "hl_context": "<hl> The symbolic interaction perspective studies the day-to-day impact of global inequality , the meanings individuals attach to global stratification , and the subjective nature of poverty . <hl> Someone applying this view to global inequality would probably focus on understanding the difference between what someone living in a core nation defines as poverty ( relative poverty , defined as being unable to live the lifestyle of the average person in your country ) and what someone living in a peripheral nation defines as poverty ( absolute poverty , defined as being barely able , or unable , to afford basic necessities , such as food ) .", "hl_sentences": "The symbolic interaction perspective studies the day-to-day impact of global inequality , the meanings individuals attach to global stratification , and the subjective nature of poverty .", "question": { "cloze_format": "A sociologist working from a symbolic interaction perspective would ___.", "normal_format": "What would do a sociologist working from a symbolic interaction perspective?", "question_choices": [ "study how inequality is created and reproduced", "study how corporations can improve the lives of their low-income workers", "try to understand how companies provide an advantage to high-income nations compared to low-income nations", "want to interview women working in factories to understand how they manage the expectations of their supervisors, make ends meet, and support their households on a day-to-day basis" ], "question_id": "fs-id1772752", "question_text": "A sociologist working from a symbolic interaction perspective would:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "Core" }, "bloom": null, "hl_context": "<hl> Semi-peripheral nations are in-between nations , not powerful enough to dictate policy but nevertheless acting as a major source for raw material and an expanding middle-class marketplace for core nations , while also exploiting peripheral nations . <hl> Mexico is an example , providing abundant cheap agricultural labor to the U . S . , and supplying goods to the U . S . market at a rate dictated by the U . S . without the constitutional protections offered to U . S . workers . <hl> Peripheral nations have very little industrialization ; what they do have often represents the outdated castoffs of core nations or the factories and means of production owned by core nations . <hl> They typically have unstable government , inadequate social programs , and are economically dependent on core nations for jobs and aid . There are abundant examples of countries in this category . Check the label of your jeans or sweatshirt and see where it was made . Chances are it was a peripheral nation such as Guatemala , Bangladesh , Malaysia , or Colombia . One can be sure the workers in these factories , which are owned or leased by global core nation companies , are not enjoying the same privileges and rights of American workers . <hl> Core nations are dominant capitalist countries , highly industrialized , technological , and urbanized . <hl> For example , Wallerstein contends that the U . S . 
is an economic powerhouse that can support or deny support to important economic legislation with far-reaching implications , thus exerting control over every aspect of the global economy and exploiting both semi-peripheral and peripheral nations . One can look at free trade agreements such as the North American Free Trade Agreement ( NAFTA ) as an example of how a core nation is able to leverage its power to gain the most advantageous position in the matter of global trade .", "hl_sentences": "Semi-peripheral nations are in-between nations , not powerful enough to dictate policy but nevertheless acting as a major source for raw material and an expanding middle-class marketplace for core nations , while also exploiting peripheral nations . Peripheral nations have very little industrialization ; what they do have often represents the outdated castoffs of core nations or the factories and means of production owned by core nations . Core nations are dominant capitalist countries , highly industrialized , technological , and urbanized .", "question": { "cloze_format": "The kind of nation France might be classified as is a ___ nation.", "normal_format": "France might be classified as which kind of nation?", "question_choices": [ "Global", "Core", "Semi-peripheral", "Peripheral" ], "question_id": "fs-id3656886", "question_text": "France might be classified as which kind of nation?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "capital flight" }, "bloom": "3", "hl_context": "<hl> Big Picture Capital Flight , Outsourcing , and Jobs in America As mentioned above , capital flight describes jobs and infrastructure moving from one nation to another . <hl> Look at the American automobile industry . In the early 20th century , the cars driven in America were made in America , employing thousands of workers in Detroit , and providing an abundance of jobs in the factories and companies that produced everything that made building cars possible . However , once the fuel crisis of the 1970s hit and Americans increasingly looked to imported cars with better gas mileage , American auto manufacturing began to decline . During the recession of 2008 , the U . S . government bailed out the three main auto companies , underscoring their vulnerability . At the same time , Japanese-owned Toyota and Honda and South Korean Kia maintained stable sales levels . The World Bank defines high-income nations as having a gross national income of at least $ 12,276 per capita . It separates out the OECD ( Organization for Economic and Cooperative Development ) countries , a group of 34 nations whose governments work together to promote economic growth and sustainability . According to the Work Bank ( 2011 ) , in 2010 , the average GNI of a high-income nation belonging to the OECD was $ 40,136 per capita and the total population was over one billion ( 1,032 , 856,261 ); on average , 77 percent of the population in these nations was urban . Some of these countries include the United States , Germany , Canada , and the United Kingdom ( World Bank 2011 ) . In 2010 , the average GNI of a high-income nation that did not belong to the OECD was $ 23,839 per capita and the average population was about 94 million , of which 83 percent was urban . Examples of these countries include Saudi Arabia and Qatar ( World Bank 2011 ) . There are two major issues facing high-income countries : capital flight and deindustrialization . 
<hl> Capital flight refers to the movement ( flight ) of capital from one nation to another , as when General Motors automotive company closed American factories in Michigan and opened factories in Mexico . <hl> <hl> Deindustrialization , a related issue , occurs as a consequence of capital flight , as no new companies open to replace jobs lost to foreign nations . <hl> As expected , global companies move their industrial processes to the places where they can get the most production with the least cost , including the building of infrastructure , training of workers , shipment of goods , and , of course , employee wages . This means that as emerging economies create their own industrial zones , global companies see the opportunity for existing infrastructure and much lower costs . Those opportunities lead to businesses closing the factories that provide jobs to the middle-class within core nations and moving their industrial production to peripheral and semi-peripheral nations .", "hl_sentences": "Big Picture Capital Flight , Outsourcing , and Jobs in America As mentioned above , capital flight describes jobs and infrastructure moving from one nation to another . Capital flight refers to the movement ( flight ) of capital from one nation to another , as when General Motors automotive company closed American factories in Michigan and opened factories in Mexico . Deindustrialization , a related issue , occurs as a consequence of capital flight , as no new companies open to replace jobs lost to foreign nations .", "question": { "cloze_format": "This is an example of ___ .", "normal_format": "In the past, the United States manufactured clothes. Many clothing corporations have shut down their American factories and relocated to China. What is this an example of?", "question_choices": [ "conflict theory", "OECD", "global inequality", "capital flight" ], "question_id": "fs-id2880933", "question_text": "In the past, the United States manufactured clothes. Many clothing corporations have shut down their American factories and relocated to China. This is an example of:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "chattel Slavery" }, "bloom": null, "hl_context": "While most of us are accustomed to thinking of slavery in terms of the antebellum South , modern day slavery goes hand-in-hand with global inequality . In short , slavery refers to any time people are sold , treated as property , or forced to work for little or no pay . Just as in pre-Civil War America , these humans are at the mercy of their employers . <hl> Chattel slavery , the form of slavery practiced in the pre-Civil War American South , is when one person owns another as property . <hl> Child slavery , which may include child prostitution , is a form of chattel slavery . Debt bondage , or bonded labor , involves the poor pledging themselves as servants in exchange for the cost of basic necessities like transportation , room , and board . In this scenario , people are paid less than they are charged for room and board . 
When travel is involved , people can arrive in debt for their travel expenses and be unable to work their way free , since their wages do not allow them to ever get ahead .", "hl_sentences": "Chattel slavery , the form of slavery practiced in the pre-Civil War American South , is when one person owns another as property .", "question": { "cloze_format": "Slavery in the pre-Civil War American South most closely resembled ___ .", "normal_format": "What did slavery in the pre-Civil War American South most closely resemble?", "question_choices": [ "chattel Slavery", "debt Bondage", "relative Poverty", "peonage" ], "question_id": "fs-id1177568", "question_text": "Slavery in the pre-Civil War American South most closely resembled" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "that previously low-income nations such as China have successfully developed their economies and can no longer be classified as dependent on core nations" }, "bloom": "1", "hl_context": "<hl> Dependency theory was created in part as a response to the western-centric mindset of modernization theory . <hl> It states that global inequality is primarily caused by core nations ( or high-income nations ) exploiting semi-peripheral and peripheral nations ( or middle-income and low-income nations ) , creating a cycle of dependence ( Hendricks 2010 ) . As long as peripheral nations are dependent on core nations for economic stimulus and access to a larger piece of the global economy , they will never achieve stable and consistent economic growth . Further , the theory states that since core nations , as well as the World Bank , choose which countries to make loans to , and for what they will loan funds , they are creating highly segmented labor markets that are built to benefit the dominant market countries . <hl> At first glance , it seems this theory ignores the formerly low-income nations that are now considered middle-income nations and are on their way to becoming high-income nations and major players in the global economy , such as China . <hl> But some dependency theorists would state that it is in the best interests of core nations to ensure the long-term usefulness of their peripheral and semi-peripheral partners . Following that theory , sociologists have found that entities are more likely to outsource a significant portion of a company ’ s work if they are the dominant player in the equation ; in other words , companies want to see their partner countries healthy enough to provide work , but not so healthy as to establish a threat ( Caniels and Roeleveld 2009 ) .", "hl_sentences": "Dependency theory was created in part as a response to the western-centric mindset of modernization theory . 
At first glance , it seems this theory ignores the formerly low-income nations that are now considered middle-income nations and are on their way to becoming high-income nations and major players in the global economy , such as China .", "question": { "cloze_format": "One flaw in dependency theory is the unwillingness to recognize _______.", "normal_format": "One flaw in dependency theory is the unwillingness to recognize which of the following?", "question_choices": [ "that previously low-income nations such as China have successfully developed their economies and can no longer be classified as dependent on core nations", "that previously high-income nations such as China have been economically overpowered by low-income nations entering the global marketplace", "that countries such as China are growing more dependent on core nations", "that countries such as China do not necessarily want to be more like core nations" ], "question_id": "fs-id2627114", "question_text": "One flaw in dependency theory is the unwillingness to recognize _______." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "its inherent ethnocentric bias" }, "bloom": "1", "hl_context": "<hl> Critics point out the inherent ethnocentric bias of this theory . <hl> It supposes all countries have the same resources and are capable of following the same path . In addition , it assumes that the goal of all countries is to be as “ developed ” as possible . There is no room within this theory for the possibility that industrialization and technology are not the best goals . <hl> Modernization Theory <hl>", "hl_sentences": "Critics point out the inherent ethnocentric bias of this theory . Modernization Theory", "question": { "cloze_format": "One flaw in modernization theory is the unwillingness to recognize _________.", "normal_format": "One flaw in modernization theory is the unwillingness to recognize which of the following?", "question_choices": [ "that semi-peripheral nations are incapable of industrializing", "that peripheral nations prevent semi-peripheral nations from entering the global market", "its inherent ethnocentric bias", "the importance of semi-peripheral nations industrializing" ], "question_id": "fs-id2218331", "question_text": "One flaw in modernization theory is the unwillingness to recognize _________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "core nations exploit peripheral nations" }, "bloom": null, "hl_context": "<hl> Dependency theory was created in part as a response to the western-centric mindset of modernization theory . <hl> <hl> It states that global inequality is primarily caused by core nations ( or high-income nations ) exploiting semi-peripheral and peripheral nations ( or middle-income and low-income nations ) , creating a cycle of dependence ( Hendricks 2010 ) . <hl> As long as peripheral nations are dependent on core nations for economic stimulus and access to a larger piece of the global economy , they will never achieve stable and consistent economic growth . Further , the theory states that since core nations , as well as the World Bank , choose which countries to make loans to , and for what they will loan funds , they are creating highly segmented labor markets that are built to benefit the dominant market countries . 
At first glance , it seems this theory ignores the formerly low-income nations that are now considered middle-income nations and are on their way to becoming high-income nations and major players in the global economy , such as China . But some dependency theorists would state that it is in the best interests of core nations to ensure the long-term usefulness of their peripheral and semi-peripheral partners . Following that theory , sociologists have found that entities are more likely to outsource a significant portion of a company ’ s work if they are the dominant player in the equation ; in other words , companies want to see their partner countries healthy enough to provide work , but not so healthy as to establish a threat ( Caniels and Roeleveld 2009 ) .", "hl_sentences": "Dependency theory was created in part as a response to the western-centric mindset of modernization theory . It states that global inequality is primarily caused by core nations ( or high-income nations ) exploiting semi-peripheral and peripheral nations ( or middle-income and low-income nations ) , creating a cycle of dependence ( Hendricks 2010 ) .", "question": { "cloze_format": "Dependency theorists explain global inequality and global stratification by focusing on the way that ___.", "normal_format": "by focusing on what dependency theorists explain global inequality and global stratification?", "question_choices": [ "core nations and peripheral nations exploit semi-peripheral nations", "semi-peripheral nations exploit core nations", "peripheral nations exploit core nations", "core nations exploit peripheral nations" ], "question_id": "fs-id1844162", "question_text": "Dependency theorists explain global inequality and global stratification by focusing on the way that:" }, "references_are_paraphrase": null } ]
Chapter 10
10.1 Global Stratification and Classification

Just as America’s wealth is increasingly concentrated among its richest citizens while the middle class slowly disappears, global inequality involves the concentration of resources in certain nations, significantly affecting the opportunities of individuals in poorer and less powerful countries. But before we delve into the complexities of global inequality, let’s consider how the three major sociological perspectives might contribute to our understanding of it.

The functionalist perspective is a macroanalytical view that focuses on the way that all aspects of society are integral to the continued health and viability of the whole. A functionalist might focus on why we have global inequality and what social purposes it serves. This view might assert, for example, that we have global inequality because some nations are better than others at adapting to new technologies and profiting from a globalized economy, and that when core nation companies locate in peripheral nations, they expand the local economy and benefit the workers.

Conflict theory focuses on the creation and reproduction of inequality. A conflict theorist would likely address the systematic inequality created when core nations exploit the resources of peripheral nations. For example, how many American companies take advantage of overseas workers who lack the constitutional protection and guaranteed minimum wages that exist in the United States? Doing so allows them to maximize profits, but at what cost?

The symbolic interaction perspective studies the day-to-day impact of global inequality, the meanings individuals attach to global stratification, and the subjective nature of poverty. Someone applying this view to global inequality would probably focus on understanding the difference between what someone living in a core nation defines as poverty (relative poverty, defined as being unable to live the lifestyle of the average person in your country) and what someone living in a peripheral nation defines as poverty (absolute poverty, defined as being barely able, or unable, to afford basic necessities, such as food).

Global Stratification

While stratification in the United States refers to the unequal distribution of resources among individuals, global stratification refers to this unequal distribution among nations. There are two dimensions to this stratification: gaps between nations and gaps within nations. When it comes to global inequality, both economic inequality and social inequality may concentrate the burden of poverty among certain segments of the earth’s population (Myrdal 1970). As the table below illustrates, people’s life expectancy depends heavily on where they happen to be born.

Country | Infant Mortality Rate | Life Expectancy
Canada | 4.9 deaths per 1,000 live births | 81 years
Mexico | 17.2 deaths per 1,000 live births | 76 years
Democratic Republic of Congo | 78.4 deaths per 1,000 live births | 55 years

Table 10.1 Statistics such as infant mortality rates and life expectancy vary greatly by country of origin. (Central Intelligence Agency 2011)

Most of us are accustomed to thinking of global stratification as economic inequality. For example, we can compare China’s average worker’s wage to America’s average wage. Social inequality, however, is just as harmful as economic discrepancies. Prejudice and discrimination—whether against a certain race, ethnicity, religion, or the like—can create and aggravate conditions of economic inequality, both within and between nations.
Think about the inequity that existed for decades within the nation of South Africa. Apartheid, one of the most extreme cases of institutionalized and legal racism, created a social inequality that earned it the world’s condemnation. When looking at inequity between nations, think also about the disregard of the crisis in Darfur by most Western nations. Since few citizens of Western nations identified with the impoverished, non-white victims of the genocide, there has been little push to provide aid.

Gender inequity is another global concern. Consider the controversy surrounding female genital mutilation. Nations that practice this female circumcision procedure defend it as a longstanding cultural tradition in certain tribes and argue that the West shouldn’t interfere. Western nations, however, decry the practice and are working to stop it.

Inequalities based on sexual orientation and gender identity exist around the globe. According to Amnesty International, a number of crimes are committed against individuals who do not conform to traditional gender roles or sexual orientations (however those are culturally defined). From culturally sanctioned rape to state-sanctioned executions, the abuses are serious. These legalized and culturally accepted forms of prejudice and discrimination exist everywhere—from the United States to Somalia to Tibet—restricting the freedom of individuals and often putting their lives at risk (Amnesty International 2012).

Global Classification

A major concern when discussing global inequality is how to avoid an ethnocentric bias implying that less developed nations want to be like those who’ve attained post-industrial global power. Terms such as developing (non-industrialized) and developed (industrialized) imply that unindustrialized countries are somehow inferior and must improve to participate successfully in the global economy, a term indicating that all aspects of the economy cross national borders. We must take care in how we delineate different countries. Over time, terminology has shifted to make way for a more inclusive view of the world.

Cold War Terminology

Cold War terminology was developed during the Cold War era (1945–1980). Familiar and still used by many, it classifies countries into first world, second world, and third world nations based on their respective economic development and standards of living. When this nomenclature was developed, capitalistic democracies such as the U.S. and Japan were considered part of the first world. The poorest, most undeveloped countries were referred to as the third world and included most of sub-Saharan Africa, Latin America, and Asia. The second world was the in-between category: nations not as limited in development as the third world, but not as well off as the first world, having moderate economies and standards of living, such as China or Cuba. Later, sociologist Manuel Castells (1998) added the term fourth world to refer to stigmatized minority groups that were denied a political voice all over the globe (indigenous minority populations, prisoners, and the homeless, for example).

Also during the Cold War, global inequality was described in terms of economic development. Along with developing and developed nations, the terms less-developed nation and underdeveloped nation were used.
This was the era when the idea of noblesse oblige (first-world responsibility) took root, suggesting that the so-termed developed nations should provide foreign aid to the less-developed and underdeveloped nations in order to raise their standard of living.

Immanuel Wallerstein: World Systems Approach

Wallerstein’s (1979) world systems approach uses an economic basis to understand global inequality. He conceived of the global economy as a complex system supporting an economic hierarchy that placed some nations in positions of power with numerous resources and other nations in a state of economic subordination. Those that were in a state of subordination faced significant obstacles to mobilization.

Core nations are dominant capitalist countries, highly industrialized, technological, and urbanized. For example, Wallerstein contends that the U.S. is an economic powerhouse that can support or deny support to important economic legislation with far-reaching implications, thus exerting control over every aspect of the global economy and exploiting both semi-peripheral and peripheral nations. One can look at free trade agreements such as the North American Free Trade Agreement (NAFTA) as an example of how a core nation is able to leverage its power to gain the most advantageous position in the matter of global trade.

Peripheral nations have very little industrialization; what they do have often represents the outdated castoffs of core nations or the factories and means of production owned by core nations. They typically have unstable governments, inadequate social programs, and are economically dependent on core nations for jobs and aid. There are abundant examples of countries in this category. Check the label of your jeans or sweatshirt and see where it was made. Chances are it was a peripheral nation such as Guatemala, Bangladesh, Malaysia, or Colombia. One can be sure the workers in these factories, which are owned or leased by global core nation companies, are not enjoying the same privileges and rights as American workers.

Semi-peripheral nations are in-between nations, not powerful enough to dictate policy but nevertheless acting as a major source of raw material and an expanding middle-class marketplace for core nations, while also exploiting peripheral nations. Mexico is an example, providing abundant cheap agricultural labor to the U.S., and supplying goods to the U.S. market at a rate dictated by the U.S. without the constitutional protections offered to U.S. workers.

World Bank Economic Classification by Income

While there is often criticism of the World Bank, both for its policies and its method of calculating data, it is still a common source for global economic data. When using the World Bank categorization to classify economies, the measure of GNI, or gross national income, provides a picture of the overall economic health of a nation. Gross national income equals the value of all goods and services produced within a country plus net income earned outside the country by its nationals and by corporations headquartered in the country doing business abroad, measured in U.S. dollars. In other words, the GNI of a country includes not only the value of goods and services produced inside the country, but also the value of income earned outside the country if it is earned by U.S. nationals or U.S. businesses. That means that multinational corporations that might earn billions in offices and factories around the globe are considered part of the United States’ GNI if they have headquarters in the U.S.
Along with tracking the economy, the World Bank tracks demographics and environmental health to provide a complete picture of whether a nation is high-income, middle-income, or low-income.

High-Income Nations

The World Bank defines high-income nations as having a gross national income of at least $12,276 per capita. It separates out the OECD (Organisation for Economic Co-operation and Development) countries, a group of 34 nations whose governments work together to promote economic growth and sustainability. According to the World Bank (2011), in 2010, the average GNI of a high-income nation belonging to the OECD was $40,136 per capita and the total population was over one billion (1,032,856,261); on average, 77 percent of the population in these nations was urban. Some of these countries include the United States, Germany, Canada, and the United Kingdom (World Bank 2011). In 2010, the average GNI of a high-income nation that did not belong to the OECD was $23,839 per capita and the average population was about 94 million, of which 83 percent was urban. Examples of these countries include Saudi Arabia and Qatar (World Bank 2011).

There are two major issues facing high-income countries: capital flight and deindustrialization. Capital flight refers to the movement (flight) of capital from one nation to another, as when General Motors automotive company closed American factories in Michigan and opened factories in Mexico. Deindustrialization, a related issue, occurs as a consequence of capital flight, as no new companies open to replace jobs lost to foreign nations. As expected, global companies move their industrial processes to the places where they can get the most production with the least cost, including the building of infrastructure, training of workers, shipment of goods, and, of course, employee wages. This means that as emerging economies create their own industrial zones, global companies see the opportunity for existing infrastructure and much lower costs. Those opportunities lead to businesses closing the factories that provide jobs to the middle class within core nations and moving their industrial production to peripheral and semi-peripheral nations.

Big Picture

Capital Flight, Outsourcing, and Jobs in America

As mentioned above, capital flight describes jobs and infrastructure moving from one nation to another. Look at the American automobile industry. In the early 20th century, the cars driven in America were made in America, employing thousands of workers in Detroit, and providing an abundance of jobs in the factories and companies that produced everything that made building cars possible. However, once the fuel crisis of the 1970s hit and Americans increasingly looked to imported cars with better gas mileage, American auto manufacturing began to decline. During the recession of 2008, the U.S. government bailed out the three main auto companies, underscoring their vulnerability. At the same time, Japanese-owned Toyota and Honda and South Korean Kia maintained stable sales levels.

Capital flight also occurs when services (as opposed to manufacturing) are relocated. Chances are if you have called the tech support line for your cell phone or internet provider, you’ve spoken to someone halfway across the globe. This professional might tell you her name is Susan or Joan, but her accent makes it clear that her real name might be Parvati or Indira.
It might be the middle of the night in that country, yet these service providers pick up the line saying, “good morning,” as though they are in the next town over. They know everything about your phone or your modem, often using a remote server to log in to your home computer to accomplish what is needed. These are the workers of the 21st century. They are not on factory floors or in traditional sweatshops; they are educated, speak at least two languages, and usually have significant technology skills. They are skilled workers, but they are paid a fraction of what similar workers are paid in the U.S. For American and multinational companies, the equation makes sense. India and other semi-peripheral countries have emerging infrastructures and education systems to fill their needs, without core nation costs.

As services are relocated, so are jobs. In the United States, unemployment is high. Many college-educated people are unable to find work, and those with only a high school diploma are in even worse shape. We have, as a country, outsourced ourselves out of jobs, and not just menial jobs, but white-collar work as well. But before we complain too bitterly, we must look at the culture of consumerism that Americans embrace. A flat screen television that might have cost $1,000 a few years ago is now $350. That cost savings has to come from somewhere. When Americans seek the lowest possible price, shop at big box stores for the biggest discount they can get, and generally ignore other factors in exchange for low cost, they are building the market for outsourcing. And as the demand is built, the market will ensure it is met, even at the expense of the people who wanted it in the first place.

Middle-Income Nations

The World Bank defines lower middle income countries as having a GNI that ranges from $1,006 to $3,975 per capita and upper middle income countries as having a GNI ranging from $3,976 to $12,275 per capita. According to the World Bank (2011), in 2010, the average GNI of an upper middle income nation was $5,886 per capita with a total population of 2,452,168,701, of which 57 percent was urban. Thailand, China, and Namibia are examples of middle-income nations (World Bank 2011).

Perhaps the most pressing issue for middle-income nations is the problem of debt accumulation. As the name suggests, debt accumulation is the buildup of external debt, wherein countries borrow money from other nations to fund their expansion or growth goals. As the uncertainties of the global economy make repaying these debts, or even paying the interest on them, more challenging, nations can find themselves in trouble. Once global markets have reduced the value of a country’s goods, it can be very difficult to ever manage the debt burden. Such issues have plagued middle-income countries in Latin America and the Caribbean, as well as East Asian and Pacific nations (Dogruel and Dogruel 2007). By way of example, even in the European Union, which is composed of more core nations than semi-peripheral nations, the semi-peripheral nations of Italy and Greece face increasing debt burdens. The economic downturns in both Greece and Italy are threatening the economy of the entire European Union.

Low-Income Nations

The World Bank defines low-income countries as nations whose GNI was $1,005 per capita or less in 2010. According to the World Bank (2011), in 2010, the average GNI of a low-income nation was $528 per capita and the total population was 796,261,360, with 28 percent located in urban areas. (The full set of World Bank income bands is summarized in the short sketch below.)
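Since the thresholds above define a simple partition, here is a hedged sketch (not from the text) encoding the World Bank’s 2010 GNI-per-capita bands; the function name and the sample values printed (the average GNIs cited in this section) are this sketch’s own arrangement.

```python
# World Bank income bands quoted above (2010 GNI per capita, U.S. dollars).

def income_group(gni_per_capita):
    if gni_per_capita <= 1_005:
        return "low income"
    elif gni_per_capita <= 3_975:
        return "lower middle income"
    elif gni_per_capita <= 12_275:
        return "upper middle income"
    else:                              # $12,276 and above
        return "high income"

for gni in (528, 5_886, 23_839, 40_136):   # averages cited in the text
    print(gni, "->", income_group(gni))
# 528 -> low income
# 5886 -> upper middle income
# 23839 -> high income
# 40136 -> high income
```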
For example, Myanmar, Ethiopia, and Somalia are considered low-income countries. Low-income economies are primarily found in Asia and Africa (World Bank 2011), where most of the world’s population lives. There are two major challenges that these countries face: women are disproportionately affected by poverty (in a trend toward a global feminization of poverty), and much of the population lives in absolute poverty.

In some ways, the term global feminization of poverty says it all: around the world, women are bearing a disproportionate percentage of the burden of poverty. This means more women live in poor conditions, receive inadequate healthcare, bear the brunt of malnutrition and inadequate drinking water, and so on. Throughout the 1990s, data indicated that while overall poverty rates were rising, especially in peripheral nations, the rates of impoverishment increased for women nearly 20 percent more than for men (Moghadam 2005). Why is this happening? While there are myriad variables affecting women’s poverty, research specializing in this issue identifies three causes:

- The expansion of female-headed households
- The persistence and consequences of intra-household inequalities and biases against women
- The implementation of neoliberal economic policies around the world (Moghadam 2005)

In short, this means that within an impoverished household, women are more likely to go hungry than men; in agricultural aid programs, women are less likely to receive help than men; and often, women are left taking care of families with no male counterpart.

10.2 Global Wealth and Poverty

What does it mean to be poor? Does it mean being a single mother with two kids in New York City, waiting for her next paycheck before she can buy groceries? Does it mean living with almost no furniture in your apartment because your income doesn’t allow for extras like beds or chairs? Or does it mean the distended bellies of the chronically malnourished throughout the peripheral nations of Sub-Saharan Africa and South Asia? Poverty has a thousand faces and a thousand gradations; there is no single definition that pulls together every part of the spectrum. You might feel you are poor if you can’t afford cable television or your own car. Every time you see a fellow student with a new laptop and smartphone you might feel that you, with your ten-year-old desktop computer, are barely keeping up. However, someone else might look at the clothes you wear and the calories you consume and consider you rich.

Types of Poverty

Social scientists define global poverty in different ways, taking into account the complexities and the issues of relativism described above. Relative poverty is a state of living in which people can afford necessities but are unable to meet their society’s average standard of living. People often disparage “keeping up with the Joneses”—the idea that you must keep up with the neighbors’ standard of living to not feel deprived. But it is true that you might feel “poor” if you are living without a car to drive to and from work, without any money for a safety net should a family member fall ill, and without any “extras” beyond just making ends meet.

In contrast to relative poverty, people who live in absolute poverty lack even the basic necessities, which typically include adequate food, clean water, safe housing, and access to health care. Absolute poverty is defined by the World Bank (2011) as when someone lives on less than a dollar a day.
A shocking number of people, 88 million, live in absolute poverty, and close to 3 billion people live on less than $2.50 a day (Shah 2011). If you were forced to live on $2.50 a day, how would you do it? What would you deem worthy of spending money on, and what could you do without? How would you manage the necessities—and how would you make up the gap between what you need to live and what you can afford? Subjective poverty describes poverty that is composed of many dimensions; it is subjectively present when your actual income does not meet your expectations and perceptions. With the concept of subjective poverty, the poor themselves have a greater say in recognizing when it is present. In short, subjective poverty has more to do with how a person or a family defines themselves. This means that a family subsisting on a few dollars a day in Nepal might think of themselves as doing well, within their perception of normal. However, a westerner traveling to Nepal might visit the same family and see extreme need.
Big Picture
The Underground Economy Around the World
What do the driver of an unlicensed hack cab in New York, a piecework seamstress working from her home in Mumbai, and a street tortilla vendor in Mexico City have in common? They are all members of the underground economy, a loosely defined unregulated market unhindered by taxes, government permits, or human protections. Official statistics before the worldwide recession posited that the underground economy accounted for over 50 percent of non-agricultural work in Latin America; the figure went as high as 80 percent in parts of Asia and Africa (Chen 2001). A recent article in the Wall Street Journal discusses the challenges, parameters, and surprising benefits of this informal marketplace. The wages earned in most underground economy jobs, especially in peripheral nations, are a pittance: a few rupees for a handmade bracelet at a market, or maybe 250 rupees (around five U.S. dollars) for a day’s worth of fruit and vegetable sales (Barta 2009). But these tiny sums mark the difference between survival and extinction for the world’s poor. The underground economy has never been viewed very positively by global economists. After all, its members don’t pay taxes, don’t take out loans to grow their businesses, and rarely earn enough to put money back into the economy in the form of consumer spending. But according to the International Labor Organization (an agency of the United Nations), some 52 million people worldwide will lose their jobs due to the ongoing worldwide recession. And while those in core nations know that unemployment rates and limited government safety nets can be frightening, it is nothing compared to the loss of a job for those barely eking out an existence. Once that job disappears, the chance of staying afloat is very slim. Within the context of this recession, some see the underground economy as a key player in keeping people alive. Indeed, an economist at the World Bank credits jobs created by the informal economy as a primary reason why peripheral nations are not in worse shape during this recession. Women in particular benefit from the informal sector. The majority of economically active women in peripheral nations are engaged in the informal sector, which is somewhat buffered from the economic downturn. The flip side, of course, is that it is equally buffered from the possibility of economic growth.
Even in the United States, the informal economy exists, although not on the same scale as in peripheral and semi-peripheral nations. It might include under-the-table nannies, gardeners, and housecleaners, as well as unlicensed street vendors and taxi drivers. There are also those who run informal businesses, like daycares or salons, from their houses. Analysts estimate that this type of labor may make up 10 percent of the overall U.S. economy, a number that will likely grow as companies reduce head counts, leaving more workers to seek other options. In the end, the article suggests that, whether selling medicinal wines in Thailand or woven bracelets in India, the workers of the underground economy at least have what most people want most of all: a chance to stay afloat (Barta 2009).
Who Are the Impoverished?
Who are the impoverished? Who is living in absolute poverty? The truth that most of us would guess is that the richest countries are often those with the fewest people. Compare the United States, which possesses a relatively small slice of the population pie and owns by far the largest slice of the wealth pie, with India. These disparities have the expected consequence. The poorest people in the world are women and those in peripheral and semi-peripheral nations. For women, the rate of poverty is particularly exacerbated by the pressure on their time. In general, time is one of the few luxuries the very poor have, but study after study has shown that women in poverty, who are responsible for all family comforts as well as any earnings they can make, have less of it. The result is that while men and women may have the same rate of economic poverty, women are suffering more in terms of overall wellbeing (Buvinic 1997). It is harder for women to get credit to expand businesses, to take the time to learn a new skill, or to spend extra hours improving their craft so as to be able to earn at a higher rate.
Africa
The majority of the poorest countries in the world are in Africa. That is not to say there is not diversity within the countries of that continent; countries like South Africa and Egypt have much lower rates of poverty than Angola and Ethiopia, for instance. Overall, African income levels have been dropping relative to the rest of the world, meaning that Africa as a whole is getting relatively poorer. Exacerbating the problem, 2011 saw the beginning of a drought in Northeast Africa that could bring starvation to millions in the region. Why is Africa in such dire straits? Much of the continent’s poverty can be traced to the availability of land, especially arable land (land that can be farmed). Centuries of struggle over land ownership have meant that much usable land has been ruined or left unfarmed, while many countries with inadequate rainfall have never set up an infrastructure to irrigate. Many of Africa’s natural resources were long ago taken by colonial forces, leaving little agricultural and mineral wealth on the continent. Further, African poverty is worsened by civil wars and inadequate governance that are the result of a continent re-imagined with artificial colonial borders and leaders. Consider the example of Rwanda. There, two ethnic groups cohabitated with their own system of hierarchy and management until Belgians took control of the country in 1915 and rigidly confined members of the population into two unequal ethnic groups.
While, historically, members of the Tutsi group held positions of power, the involvement of Belgians led to the Hutus seizing power during a 1960s revolt. This ultimately led to a repressive government and genocide against Tutsis that left hundreds of thousands of Rwandans dead or living in diaspora (U.S. Department of State 2011c). The painful rebirth of a self-ruled Africa has meant many countries bear ongoing scars as they try to see their way towards the future (World Poverty 2012a).
Asia
While the majority of the world’s poorest countries are in Africa, the majority of the world’s poorest people are in Asia. As in Africa, Asia finds itself with disparity in the distribution of poverty, with Japan and South Korea holding much more wealth than India and Cambodia. In fact, most poverty is concentrated in South Asia. One of the most pressing causes of poverty in Asia is simply the pressure that the size of the population puts on its resources. In fact, many believe that China’s success in recent times has much to do with its draconian population control rules. According to the U.S. State Department, China’s market-oriented reforms have contributed to its significant reduction of poverty and the speed at which it has experienced an increase in income levels (U.S. Department of State 2011b). However, every part of Asia is feeling the current global recession, from the poorest countries whose aid packages will be hit, to the more industrialized ones whose own industries are slowing down. These factors make the poverty on the ground unlikely to improve any time soon (World Poverty 2012b).
Latin America
Poverty rates in some Latin American countries like Mexico have improved recently, in part due to investment in education. But other countries like Paraguay and Peru continue to struggle. Although there is a large amount of foreign investment in this part of the world, it tends to be higher-risk speculative investment (rather than the more stable long-term investment Europe often makes in Africa and Asia). The volatility of these investments means that the region has been unable to leverage them, especially when mixed with high interest rates for aid loans. Further, internal political struggles, illegal drug trafficking, and corrupt governments have added to the pressure (World Poverty 2012c). Argentina is one nation that suffered from an increasing debt load in the early 2000s, as the country tried to fight hyperinflation by fixing the peso to the U.S. dollar. The move hurt the nation’s ability to be competitive in the world market and ultimately created chronic deficits that could only be financed by massive borrowing from other countries and markets. By 2001, so much money was leaving the country that there was a financial panic, leading to riots and, ultimately, the resignation of the president.
Sociology in the Real World
Sweatshops and Student Protests: Who’s Making Your Team Spirit?
Most of us don’t pay too much attention to where our favorite products are made. And certainly when you’re shopping for a college sweatshirt or ball cap to wear to a school football game, you probably don’t turn over the label, check who produced the item, and then research whether or not the company has fair labor practices. But for the members of USAS (United Students Against Sweatshops), that’s exactly what they do.
The organization, which was founded in 1997, has waged countless battles against both apparel makers and other multinational corporations that do not meet what USAS considers fair working conditions and wages (USAS 2009). Sometimes their demonstrations take on a sensationalist tone, as in 2006 when 20 Penn State students protested while naked or nearly naked in order to draw attention to the issue of sweatshop labor. The school is actually already a member of an independent monitoring organization called the Worker Rights Consortium (WRC) that monitors working conditions and works to assist colleges and universities with maintaining compliance with their labor code. But the students were protesting in order to have the same code of conduct applied to the factories that provide materials for the goods, not just where the final product is assembled (Chronicle of Higher Education 2006). USAS has chapters on over 250 campuses in the United States and Canada and has waged numerous campaigns against companies like Nike and Forever 21 apparel, Taco Bell restaurants, and Sodexo food service. In 2000, members of USAS helped to create the WRC. Schools that affiliate with the WRC pay annual fees that help offset the organization’s costs. Over 180 schools are affiliated with the organization. Yet USAS still sees signs of inequality everywhere. And the members feel that, as current and future workers, it is within their scope of responsibility to ensure that workers of the world are treated fairly. For them, at least, the global inequality that we see everywhere should not be ignored for a team spirit sweatshirt.
Consequences of Poverty
Not surprisingly, the consequences of poverty are often also causes. The poor often experience inadequate health care, limited education, and the inaccessibility of birth control. But those born into these conditions are incredibly challenged in their efforts to break out, since these consequences of poverty are also causes of poverty, perpetuating a cycle of disadvantage. According to sociologists Neckerman and Torche (2007), in their analysis of global inequality studies, the consequences of poverty are many, and they divide these consequences into three areas. The first, termed “the sedimentation of global inequality,” relates to the fact that once poverty becomes entrenched in an area, it is typically very difficult to reverse. As mentioned above, poverty exists in a cycle where the consequences and causes are intertwined. The second consequence of poverty is its effect on physical and mental health. Poor people face physical health challenges, including malnutrition and high infant mortality rates. Mental health is also detrimentally affected by the emotional stresses of poverty, with relative deprivation carrying the most robust effect. Again, as with the ongoing inequality, the effects of poverty on mental and physical health become more entrenched as time goes on. Neckerman and Torche’s third consequence of poverty is the prevalence of crime. Cross-nationally, crime rates, particularly rates of violent crime, are higher in countries with higher levels of income inequality (Fajnzylber, Lederman, and Loayza 2002).
Slavery
While most of us are accustomed to thinking of slavery in terms of the antebellum South, modern-day slavery goes hand-in-hand with global inequality. In short, slavery refers to any situation in which people are sold, treated as property, or forced to work for little or no pay.
Just as in pre-Civil War America, these humans are at the mercy of their employers. Chattel slavery, the form of slavery practiced in the pre-Civil War American South, is when one person owns another as property. Child slavery, which may include child prostitution, is a form of chattel slavery. Debt bondage, or bonded labor, involves the poor pledging themselves as servants in exchange for the cost of basic necessities like transportation, room, and board. In this scenario, people are paid less than they are charged for room and board. When travel is involved, people can arrive in debt for their travel expenses and be unable to work their way free, since their wages do not allow them to ever get ahead. The global watchdog group Anti-Slavery International recognizes other forms of slavery: human trafficking (where people are moved away from their communities and forced to work against their will), child domestic work and child labor, and certain forms of servile marriage, in which women are little more than chattel slaves (Anti-Slavery International 2012).
10.3 Theoretical Perspectives on Global Stratification
As with any social issue, global or otherwise, there are a variety of theories that scholars develop to study the topic. The two most widely applied perspectives on global stratification are modernization theory and dependency theory.
Modernization Theory
According to modernization theory, low-income countries are affected by their lack of industrialization and can improve their global economic standing through:
an adjustment of cultural values and attitudes to work
industrialization and other forms of economic growth (Armer and Katsillis 2010)
Critics point out the inherent ethnocentric bias of this theory. It supposes all countries have the same resources and are capable of following the same path. In addition, it assumes that the goal of all countries is to be as “developed” as possible. There is no room within this theory for the possibility that industrialization and technology are not the best goals. There is, of course, some basis for this assumption. Data show that core nations tend to have lower maternal and child mortality rates, longer life spans, and less absolute poverty. It is also true that in the poorest countries, millions of people die from the lack of clean drinking water and sanitation facilities, which are benefits most of us take for granted. At the same time, the issue is more complex than the numbers might suggest. Cultural equality, history, community, and local traditions are all at risk as modernization pushes into peripheral countries. The challenge, then, is to allow the benefits of modernization while maintaining a cultural sensitivity to what already exists.
Dependency Theory
Dependency theory was created in part as a response to the western-centric mindset of modernization theory. It states that global inequality is primarily caused by core nations (or high-income nations) exploiting semi-peripheral and peripheral nations (or middle-income and low-income nations), creating a cycle of dependence (Hendricks 2010). As long as peripheral nations are dependent on core nations for economic stimulus and access to a larger piece of the global economy, they will never achieve stable and consistent economic growth.
Further, the theory states that since core nations, as well as the World Bank, choose which countries to make loans to, and for what they will loan funds, they are creating highly segmented labor markets that are built to benefit the dominant market countries. At first glance, it seems this theory ignores the formerly low-income nations that are now considered middle-income nations and are on their way to becoming high-income nations and major players in the global economy, such as China. But some dependency theorists would state that it is in the best interests of core nations to ensure the long-term usefulness of their peripheral and semi-peripheral partners. Following that theory, sociologists have found that companies are more likely to outsource a significant portion of their work when they are the dominant player in the relationship; in other words, companies want to see their partner countries healthy enough to provide work, but not so healthy as to establish a threat (Caniels and Roeleveld 2009).
Factory Girls
We’ve examined functionalist and conflict theorist perspectives on global inequality, as well as modernization and dependency theories. How might a symbolic interactionist approach this topic? The book Factory Girls: From Village to City in Changing China, by Leslie T. Chang, provides this opportunity. Chang follows two young women (Min and Chunming) employed at a handbag plant. They help manufacture coveted purses and bags for the global market. As part of the growing population of young people who are leaving behind the homesteads and farms of rural China, these female factory workers are ready to enter the urban fray and pursue an ambitious income. Although Chang’s study is based in a town many have never heard of (Dongguan), this city produces one-third of all shoes on the planet (Nike and Reebok are major manufacturers here) and 30 percent of the world’s computer disk drives, in addition to a plethora of apparel (Chang 2008). But Chang’s focus is less centered on this global phenomenon on a large scale, and more concerned with how it affects these two women. As a symbolic interactionist would do, Chang examines the daily lives and interactions of Min and Chunming—their workplace friendships, family relations, gadgets and goods—in this evolving global space where young women can leave tradition behind and fashion their own futures. Their story is one that all people, not just scholars, can learn from as we contemplate sociological issues like global economies, cultural traditions and innovations, and opportunities for women in the workforce.
biology
Chapter Outline
45.1 Population Demography
45.2 Life Histories and Natural Selection
45.3 Environmental Limits to Population Growth
45.4 Population Dynamics and Regulation
45.5 Human Population Growth
45.6 Community Ecology
45.7 Behavioral Biology: Proximate and Ultimate Causes of Behavior
Introduction
Imagine sailing down a river in a small motorboat on a weekend afternoon; the water is smooth and you are enjoying the warm sunshine and cool breeze when suddenly you are hit in the head by a 20-pound silver carp. This is a risk now on many rivers and canal systems in Illinois and Missouri because of the presence of Asian carp. This fish—actually a group of species including the silver, black, grass, and bighead carp—has been farmed and eaten in China for over 1000 years. It is one of the most important aquaculture food resources worldwide. In the United States, however, Asian carp is considered a dangerous invasive species that disrupts community structure and composition to the point of threatening native species.
[ { "answer": { "ans_choice": 2, "ans_text": "quadrat" }, "bloom": null, "hl_context": "The most accurate way to determine population size is to simply count all of the individuals within the habitat . However , this method is often not logistically or economically feasible , especially when studying large habitats . Thus , scientists usually study populations by sampling a representative portion of each habitat and using this data to make inferences about the habitat as a whole . <hl> A variety of methods can be used to sample populations to determine their size and density . <hl> <hl> For immobile organisms such as plants , or for very small and slow-moving organisms , a quadrat may be used ( Figure 45.3 ) . <hl> A quadrat is a way of marking off square areas within a habitat , either by staking out an area with sticks and string , or by the use of a wood , plastic , or metal square placed on the ground . After setting the quadrats , researchers then count the number of individuals that lie within their boundaries . Multiple quadrat samples are performed throughout the habitat at several random locations . All of this data can then be used to estimate the population size and population density within the entire habitat . The number and size of quadrat samples depends on the type of organisms under study and other factors , including the density of the organism . For example , if sampling daffodils , a 1 m 2 quadrat might be used whereas with giant redwoods , which are larger and live much further apart from each other , a larger quadrat of 100 m 2 might be employed . This ensures that enough individuals of the species are counted to get an accurate sample that correlates with the habitat , including areas not sampled . For mobile organisms , such as mammals , birds , or fish , a technique called mark and recapture is often used . This method involves marking a sample of captured animals in some way ( such as tags , bands , paint , or other body markings ) , and then releasing them back into the environment to allow them to mix with the rest of the population ; later , a new sample is collected , including some individuals that are marked ( recaptures ) and some individuals that are unmarked ( Figure 45.4 ) .", "hl_sentences": "A variety of methods can be used to sample populations to determine their size and density . For immobile organisms such as plants , or for very small and slow-moving organisms , a quadrat may be used ( Figure 45.3 ) .", "question": { "cloze_format": "The method that will tell an ecologist about both the size and density of a population is ___.", "normal_format": "Which of the following methods will tell an ecologist about both the size and density of a population?", "question_choices": [ "mark and recapture", "mark and release", "quadrat", "life table" ], "question_id": "fs-idp107793584", "question_text": "Which of the following methods will tell an ecologist about both the size and density of a population?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "life table" }, "bloom": null, "hl_context": "In addition , the demographic characteristics of a population can influence how the population grows or declines over time . If birth and death rates are equal , the population remains stable . However , the population size will increase if birth rates exceed death rates ; the population will decrease if birth rates are less than death rates . 
Life expectancy is another important factor ; the length of time individuals remain in the population impacts local resources , reproduction , and the overall health of the population . These demographic characteristics are often displayed in the form of a life table . <hl> Life Tables Life tables provide important information about the life history of an organism . <hl> <hl> Life tables divide the population into age groups and often sexes , and show how long a member of that group is likely to live . <hl> <hl> They are modeled after actuarial tables used by the insurance industry for estimating human life expectancy . <hl> Life tables may include the probability of individuals dying before their next birthday ( i . e . , their mortality rate ) , the percentage of surviving individuals dying at a particular age interval , and their life expectancy at each interval . An example of a life table is shown in Table 45.1 from a study of Dall mountain sheep , a species native to northwestern North America . Notice that the population is divided into age intervals ( column A ) . The mortality rate ( per 1000 ) , shown in column D , is based on the number of individuals dying during the age interval ( column B ) divided by the number of individuals surviving at the beginning of the interval ( Column C ) , multiplied by 1000 . Populations are dynamic entities . Populations consist all of the species living within a specific area , and populations fluctuate based on a number of factors : seasonal and yearly changes in the environment , natural disasters such as forest fires and volcanic eruptions , and competition for resources between and within species . The statistical study of population dynamics , demography , uses a series of mathematical tools to investigate how populations respond to changes in their biotic and abiotic environments . Many of these tools were originally designed to study human populations . <hl> For example , life tables , which detail the life expectancy of individuals within a population , were initially developed by life insurance companies to set insurance rates . <hl> In fact , while the term “ demographics ” is commonly used when discussing humans , all living populations can be studied using this approach .", "hl_sentences": "Life Tables Life tables provide important information about the life history of an organism . Life tables divide the population into age groups and often sexes , and show how long a member of that group is likely to live . They are modeled after actuarial tables used by the insurance industry for estimating human life expectancy . For example , life tables , which detail the life expectancy of individuals within a population , were initially developed by life insurance companies to set insurance rates .", "question": { "cloze_format": "A ___ is best at showing the life expectancy of an individual within a population.", "normal_format": "Which of the following is best at showing the life expectancy of an individual within a population?", "question_choices": [ "quadrat", "mark and recapture", "survivorship curve", "life table" ], "question_id": "fs-idp188104608", "question_text": "Which of the following is best at showing the life expectancy of an individual within a population?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "Type I" }, "bloom": null, "hl_context": "Another tool used by population ecologists is a survivorship curve , which is a graph of the number of individuals surviving at each age interval plotted versus time ( usually with data compiled from a life table ) . These curves allow us to compare the life histories of different populations ( Figure 45.6 ) . <hl> Humans and most primates exhibit a Type I survivorship curve because a high percentage of offspring survive their early and middle years — death occurs predominantly in older individuals . <hl> These types of species usually have small numbers of offspring at one time , and they give a high amount of parental care to them to ensure their survival . Birds are an example of an intermediate or Type II survivorship curve because birds die more or less equally at each age interval . These organisms also may have relatively few offspring and provide significant parental care . Trees , marine invertebrates , and most fishes exhibit a Type III survivorship curve because very few of these organisms survive their younger years ; however , those that make it to an old age are more likely to survive for a relatively long period of time . Organisms in this category usually have a very large number of offspring , but once they are born , little parental care is provided . Thus these offspring are “ on their own ” and vulnerable to predation , but their sheer numbers assure the survival of enough individuals to perpetuate the species . 45.2 Life Histories and Natural Selection Learning Objectives By the end of this section , you will be able to :", "hl_sentences": "Humans and most primates exhibit a Type I survivorship curve because a high percentage of offspring survive their early and middle years — death occurs predominantly in older individuals .", "question": { "cloze_format": "Humans have ___of survivorship curve.", "normal_format": "Humans have which type of survivorship curve?", "question_choices": [ "Type I", "Type II", "Type III", "Type IV" ], "question_id": "fs-idp82590032", "question_text": "Humans have which type of survivorship curve?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "few offspring" }, "bloom": "2", "hl_context": "<hl> Animal species that have few offspring during a reproductive event usually give extensive parental care , devoting much of their energy budget to these activities , sometimes at the expense of their own health . <hl> This is the case with many mammals , such as humans , kangaroos , and pandas . The offspring of these species are relatively helpless at birth and need to develop before they achieve self-sufficiency .", "hl_sentences": "Animal species that have few offspring during a reproductive event usually give extensive parental care , devoting much of their energy budget to these activities , sometimes at the expense of their own health .", "question": { "cloze_format": "___ is associated with long-term parental care.", "normal_format": "Which of the following is associated with long-term parental care?", "question_choices": [ "few offspring", "many offspring", "semelparity", "fecundity" ], "question_id": "fs-idm26065904", "question_text": "Which of the following is associated with long-term parental care?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "iteroparity" }, "bloom": null, "hl_context": "<hl> Iteroparity describes species that reproduce repeatedly during their lives . <hl> Some animals are able to mate only once per year , but survive multiple mating seasons . The pronghorn antelope is an example of an animal that goes into a seasonal estrus cycle ( “ heat ” ): a hormonally induced physiological condition preparing the body for successful mating ( Figure 45.7 b ) . Females of these species mate only during the estrus phase of the cycle . A different pattern is observed in primates , including humans and chimpanzees , which may attempt reproduction at any time during their reproductive years , even though their menstrual cycles make pregnancy likely only a few days per month during ovulation ( Figure 45.7 c ) .", "hl_sentences": "Iteroparity describes species that reproduce repeatedly during their lives .", "question": { "cloze_format": "___ is associated with multiple reproductive episodes during a species’ lifetime.", "normal_format": "Which of the following is associated with multiple reproductive episodes during a species’ lifetime?", "question_choices": [ "semiparity", "iteroparity", "semelparity", "fecundity" ], "question_id": "fs-idm109424304", "question_text": "Which of the following is associated with multiple reproductive episodes during a species’ lifetime?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "fecundity" }, "bloom": null, "hl_context": "<hl> Fecundity is the potential reproductive capacity of an individual within a population . <hl> <hl> In other words , fecundity describes how many offspring could ideally be produced if an individual has as many offspring as possible , repeating the reproductive cycle as soon as possible after the birth of the offspring . <hl> In animals , fecundity is inversely related to the amount of parental care given to an individual offspring . Species , such as many marine invertebrates , that produce many offspring usually provide little if any care for the offspring ( they would not have the energy or the ability to do so anyway ) . Most of their energy budget is used to produce many tiny offspring . Animals with this strategy are often self-sufficient at a very early age . This is because of the energy tradeoff these organisms have made to maximize their evolutionary fitness . Because their energy is used for producing offspring instead of parental care , it makes sense that these offspring have some ability to be able to move within their environment and find food and perhaps shelter . Even with these abilities , their small size makes them extremely vulnerable to predation , so the production of many offspring allows enough of them to survive to maintain the species .", "hl_sentences": "Fecundity is the potential reproductive capacity of an individual within a population . 
In other words , fecundity describes how many offspring could ideally be produced if an individual has as many offspring as possible , repeating the reproductive cycle as soon as possible after the birth of the offspring .", "question": { "cloze_format": "___ is associated with the reproductive potential of a species.", "normal_format": "Which of the following is associated with the reproductive potential of a species?", "question_choices": [ "few offspring", "many offspring", "semelparity", "fecundity" ], "question_id": "fs-idp18708688", "question_text": "Which of the following is associated with the reproductive potential of a species?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "logistic" }, "bloom": null, "hl_context": "Exponential growth is possible only when infinite natural resources are available ; this is not the case in the real world . Charles Darwin recognized this fact in his description of the “ struggle for existence , ” which states that individuals will compete ( with members of their own or other species ) for limited resources . The successful ones will survive to pass on their own characteristics and traits ( which we know now are transferred by genes ) to the next generation at a greater rate ( natural selection ) . <hl> To model the reality of limited resources , population ecologists developed the logistic growth model . <hl>", "hl_sentences": "To model the reality of limited resources , population ecologists developed the logistic growth model .", "question": { "cloze_format": "Species with limited resources usually exhibit a(n) ________ growth curve.", "normal_format": "What kind of growth curve do species with limited resources usually exhibit?", "question_choices": [ "logistic", "logical", "experimental", "exponential" ], "question_id": "fs-idm150035168", "question_text": "Species with limited resources usually exhibit a(n) ________ growth curve." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "biotic potential" }, "bloom": null, "hl_context": "The value “ r ” can be positive , meaning the population is increasing in size ; or negative , meaning the population is decreasing in size ; or zero , where the population ’ s size is unchanging , a condition known as zero population growth . A further refinement of the formula recognizes that different species have inherent differences in their intrinsic rate of increase ( often thought of as the potential for reproduction ) , even under ideal conditions . Obviously , a bacterium can reproduce more rapidly and have a higher intrinsic rate of growth than a human . <hl> The maximal growth rate for a species is its biotic potential , or r max , thus changing the equation to : <hl>", "hl_sentences": "The maximal growth rate for a species is its biotic potential , or r max , thus changing the equation to :", "question": { "cloze_format": "The maximum rate of increased characteristic of a species is called its ________.", "normal_format": "What is the maximum rate of increased characteristic of a species called?", "question_choices": [ "limit", "carrying capacity", "biotic potential", "exponential growth pattern" ], "question_id": "fs-idm266378560", "question_text": "The maximum rate of increased characteristic of a species is called its ________." 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "carrying capacity" }, "bloom": null, "hl_context": "In the real world , with its limited resources , exponential growth cannot continue indefinitely . Exponential growth may occur in environments where there are few individuals and plentiful resources , but when the number of individuals gets large enough , resources will be depleted , slowing the growth rate . Eventually , the growth rate will plateau or level off ( Figure 45.9 ) . <hl> This population size , which represents the maximum population size that a particular environment can support , is called the carrying capacity , or K . <hl> The formula we use to calculate logistic growth adds the carrying capacity as a moderating force in the growth rate . The expression “ K – N ” is indicative of how many individuals may be added to a population at a given stage , and “ K – N ” divided by “ K ” is the fraction of the carrying capacity available for further growth . Thus , the exponential growth model is restricted by this factor to generate the logistic growth equation :", "hl_sentences": "This population size , which represents the maximum population size that a particular environment can support , is called the carrying capacity , or K .", "question": { "cloze_format": "The population size of a species capable of being supported by the environment is called its ________.", "normal_format": "What is the population size of a species capable of being supported by the environment called?", "question_choices": [ "limit", "carrying capacity", "biotic potential", "logistic growth pattern" ], "question_id": "fs-idm257702896", "question_text": "The population size of a species capable of being supported by the environment is called its ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "r-selected" }, "bloom": null, "hl_context": "<hl> In contrast , r - selected species have a large number of small offspring ( hence their r designation ( Table 45.2 ) . <hl> This strategy is often employed in unpredictable or changing environments . Animals that are r - selected do not give long-term parental care and the offspring are relatively mature and self-sufficient at birth . Examples of r - selected species are marine invertebrates , such as jellyfish , and plants , such as the dandelion ( Figure 45.13 b ) . Dandelions have small seeds that are wind dispersed long distances . Many seeds are produced simultaneously to ensure that at least some of them reach a hospitable environment . Seeds that land in inhospitable environments have little chance for survival since their seeds are low in energy content . 
Note that survival is not necessarily a function of energy stored in the seed itself .", "hl_sentences": "In contrast , r - selected species have a large number of small offspring ( hence their r designation ( Table 45.2 ) .", "question": { "cloze_format": "Species that have many offspring at one time are usually ___.", "normal_format": "What are species that have many offspring at one time?", "question_choices": [ "r-selected", "K-selected", "both r- and K-selected", "not selected" ], "question_id": "fs-idm74770800", "question_text": "Species that have many offspring at one time are usually:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "density-independent" }, "bloom": null, "hl_context": "<hl> Density-independent Regulation and Interaction with Density-dependent Factors Many factors , typically physical or chemical in nature ( abiotic ) , influence the mortality of a population regardless of its density , including weather , natural disasters , and pollution . <hl> <hl> An individual deer may be killed in a forest fire regardless of how many deer happen to be in that area . <hl> Its chances of survival are the same whether the population density is high or low . The same holds true for cold winter weather .", "hl_sentences": "Density-independent Regulation and Interaction with Density-dependent Factors Many factors , typically physical or chemical in nature ( abiotic ) , influence the mortality of a population regardless of its density , including weather , natural disasters , and pollution . An individual deer may be killed in a forest fire regardless of how many deer happen to be in that area .", "question": { "cloze_format": "A forest fire is an example of ________ regulation.", "normal_format": "A forest fire is an example of which regulation?", "question_choices": [ "density-dependent", "density-independent", "r-selected", "K-selected" ], "question_id": "fs-idm96228320", "question_text": "A forest fire is an example of ________ regulation." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "K-selected species" }, "bloom": null, "hl_context": "K - selected species are species selected by stable , predictable environments . Populations of K - selected species tend to exist close to their carrying capacity ( hence the term K - selected ) where intraspecific competition is high . These species have few , large offspring , a long gestation period , and often give long-term care to their offspring ( Table B45_04_01 ) . While larger in size when born , the offspring are relatively helpless and immature at birth . By the time they reach adulthood , they must develop skills to compete for natural resources . In plants , scientists think of parental care more broadly : how long fruit takes to develop or how long it remains on the plant are determining factors in the time to the next reproductive event . <hl> Examples of K - selected species are primates including humans ) , elephants , and plants such as oak trees ( Figure 45.13 a ) . 
<hl>", "hl_sentences": "Examples of K - selected species are primates including humans ) , elephants , and plants such as oak trees ( Figure 45.13 a ) .", "question": { "cloze_format": "Primates are examples of ___.", "normal_format": "What are primates examples of?", "question_choices": [ "density-dependent species", "density-independent species", "r-selected species", "K-selected species" ], "question_id": "fs-idm88520640", "question_text": "Primates are examples of:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "economically underdeveloped" }, "bloom": null, "hl_context": "The age structure of a population is an important factor in population dynamics . Age structure is the proportion of a population at different age ranges . Age structure allows better prediction of population growth , plus the ability to associate this growth with the level of economic development in the region . <hl> Countries with rapid growth have a pyramidal shape in their age structure diagrams , showing a preponderance of younger individuals , many of whom are of reproductive age or will be soon ( Figure 45.16 ) . <hl> <hl> This pattern is most often observed in underdeveloped countries where individuals do not live to old age because of less-than-optimal living conditions . <hl> Age structures of areas with slow growth , including developed countries such as the United States , still have a pyramidal structure , but with many fewer young and reproductive-aged individuals and a greater proportion of older individuals . Other developed countries , such as Italy , have zero population growth . The age structure of these populations is more conical , with an even greater percentage of middle-aged and older individuals . The actual growth rates in different countries are shown in Figure 45.17 , with the highest rates tending to be in the less economically developed countries of Africa and Asia .", "hl_sentences": "Countries with rapid growth have a pyramidal shape in their age structure diagrams , showing a preponderance of younger individuals , many of whom are of reproductive age or will be soon ( Figure 45.16 ) . This pattern is most often observed in underdeveloped countries where individuals do not live to old age because of less-than-optimal living conditions .", "question": { "cloze_format": "The type of country that has the greatest proportion of young individuals is ___.", "normal_format": "Which type of country has the greatest proportion of young individuals?", "question_choices": [ "economically developed", "economically underdeveloped", "countries with zero population growth", "countries in Europe" ], "question_id": "fs-idm80509840", "question_text": "Which type of country has the greatest proportion of young individuals?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "using large amounts of natural resources" }, "bloom": null, "hl_context": "Humans are unique in their ability to alter their environment with the conscious purpose of increasing its carrying capacity . This ability is a major factor responsible for human population growth and a way of overcoming density-dependent growth regulation . Much of this ability is related to human intelligence , society , and communication . <hl> Humans can construct shelter to protect them from the elements and have developed agriculture and domesticated animals to increase their food supplies . 
<hl> <hl> In addition , humans use language to communicate this technology to new generations , allowing them to improve upon previous accomplishments . <hl> <hl> Although humans have increased the carrying capacity of their environment , the technologies used to achieve this transformation have caused unprecedented changes to Earth ’ s environment , altering ecosystems to the point where some may be in danger of collapse . <hl> The depletion of the ozone layer , erosion due to acid rain , and damage from global climate change are caused by human activities . The ultimate effect of these changes on our carrying capacity is unknown . <hl> As some point out , it is likely that the negative effects of increasing carrying capacity will outweigh the positive ones — the carrying capacity of the world for human beings might actually decrease . <hl>", "hl_sentences": "Humans can construct shelter to protect them from the elements and have developed agriculture and domesticated animals to increase their food supplies . In addition , humans use language to communicate this technology to new generations , allowing them to improve upon previous accomplishments . Although humans have increased the carrying capacity of their environment , the technologies used to achieve this transformation have caused unprecedented changes to Earth ’ s environment , altering ecosystems to the point where some may be in danger of collapse . As some point out , it is likely that the negative effects of increasing carrying capacity will outweigh the positive ones — the carrying capacity of the world for human beings might actually decrease .", "question": { "cloze_format": "___ is not a way that humans have increased the carrying capacity of the environment.", "normal_format": "Which of the following is not a way that humans have increased the carrying capacity of the environment?", "question_choices": [ "agriculture", "using large amounts of natural resources", "domestication of animals", "use of language" ], "question_id": "fs-idm205371408", "question_text": "Which of the following is not a way that humans have increased the carrying capacity of the environment?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "pioneer species" }, "bloom": null, "hl_context": "Primary succession occurs when new land is formed or rock is exposed : for example , following the eruption of volcanoes , such as those on the Big Island of Hawaii . As lava flows into the ocean , new land is continually being formed . On the Big Island , approximately 32 acres of land is added each year . <hl> First , weathering and other natural forces break down the substrate enough for the establishment of certain hearty plants and lichens with few soil requirements , known as pioneer species ( Figure 45.32 ) . <hl> These species help to further break down the mineral rich lava into soil where other , less hardy species will grow and eventually replace the pioneer species . In addition , as these early species grow and die , they add to an ever-growing layer of decomposing organic material and contribute to soil formation . 
Over time the area will reach an equilibrium state , with a set of organisms quite different from the pioneer species .", "hl_sentences": "First , weathering and other natural forces break down the substrate enough for the establishment of certain hearty plants and lichens with few soil requirements , known as pioneer species ( Figure 45.32 ) .", "question": { "cloze_format": "The first species to live on new land, such as that formed from volcanic lava, are called ________.", "normal_format": "What are the first species to live on new land, such as that formed from volcanic lava, called?", "question_choices": [ "climax community", "keystone species", "foundation species", "pioneer species" ], "question_id": "fs-idm20868240", "question_text": "The first species to live on new land, such as that formed from volcanic lava, are called ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "Müllerian mimicry" }, "bloom": null, "hl_context": "<hl> In Müllerian mimicry , multiple species share the same warning coloration , but all of them actually have defenses . <hl> Figure 45.23 shows a variety of foul-tasting butterflies with similar coloration . In Emsleyan / Mertensian mimicry , a deadly prey mimics a less dangerous one , such as the venomous coral snake mimicking the non-venomous milk snake . This type of mimicry is extremely rare and more difficult to understand than the previous two types . For this type of mimicry to work , it is essential that eating the milk snake has unpleasant but not fatal consequences . Then , these predators learn not to eat snakes with this coloration , protecting the coral snake as well . If the snake were fatal to the predator , there would be no opportunity for the predator to learn not to eat it , and the benefit for the less toxic species would disappear .", "hl_sentences": "In Müllerian mimicry , multiple species share the same warning coloration , but all of them actually have defenses .", "question": { "cloze_format": "The type of mimicry that involves multiple species with similar warning coloration that are all toxic to predators is the ___.", "normal_format": "Which type of mimicry involves multiple species with similar warning coloration that are all toxic to predators?", "question_choices": [ "Batesian mimicry", "Müllerian mimicry", "Emsleyan/Mertensian mimicry", "Mertensian mimicry" ], "question_id": "fs-idm10859360", "question_text": "Which type of mimicry involves multiple species with similar warning coloration that are all toxic to predators?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "mutualism" }, "bloom": null, "hl_context": "<hl> A second type of symbiotic relationship is called mutualism , where two species benefit from their interaction . <hl> Some scientists believe that these are the only true examples of symbiosis . For example , termites have a mutualistic relationship with protozoa that live in the insect ’ s gut ( Figure 45.26 a ) . The termite benefits from the ability of bacterial symbionts within the protozoa to digest cellulose . The termite itself cannot do this , and without the protozoa , it would not be able to obtain energy from its food ( cellulose from the wood it chews and eats ) . The protozoa and the bacterial symbionts benefit by having a protective environment and a constant supply of food from the wood chewing actions of the termite . 
45 Population and Community Ecology
45.1 Population Demography

Learning Objectives
By the end of this section, you will be able to:
Describe how ecologists measure population size and density
Describe three different patterns of population distribution
Use life tables to calculate mortality rates
Describe the three types of survivorship curves and relate them to specific populations

Populations are dynamic entities. A population consists of all of the individuals of a species living within a specific area, and it fluctuates based on a number of factors: seasonal and yearly changes in the environment, natural disasters such as forest fires and volcanic eruptions, and competition for resources between and within species. The statistical study of population dynamics, demography, uses a series of mathematical tools to investigate how populations respond to changes in their biotic and abiotic environments. Many of these tools were originally designed to study human populations. For example, life tables, which detail the life expectancy of individuals within a population, were initially developed by life insurance companies to set insurance rates. In fact, while the term "demographics" is commonly used when discussing humans, all living populations can be studied using this approach.

Population Size and Density

The study of any population usually begins by determining how many individuals of a particular species exist, and how closely associated they are with each other. Within a particular habitat, a population can be characterized by its population size (N), the total number of individuals, and its population density, the number of individuals within a specific area or volume. Population size and density are the two main characteristics used to describe and understand populations. For example, populations with more individuals may be more stable than smaller populations based on their genetic variability, and thus their potential to adapt to the environment. Alternatively, a member of a population with low population density (more spread out in the habitat) might have more difficulty finding a mate to reproduce compared to a member of a population of higher density. As is shown in Figure 45.2, smaller organisms tend to be more densely distributed than larger organisms.

Visual Connection
As this graph shows, population density typically decreases with increasing body size. Why do you think this is the case?

Population Research Methods

The most accurate way to determine population size is to simply count all of the individuals within the habitat. However, this method is often not logistically or economically feasible, especially when studying large habitats. Thus, scientists usually study populations by sampling a representative portion of each habitat and using this data to make inferences about the habitat as a whole. A variety of methods can be used to sample populations to determine their size and density. For immobile organisms such as plants, or for very small and slow-moving organisms, a quadrat may be used (Figure 45.3). A quadrat is a way of marking off square areas within a habitat, either by staking out an area with sticks and string, or by the use of a wood, plastic, or metal square placed on the ground. After setting the quadrats, researchers then count the number of individuals that lie within their boundaries. Multiple quadrat samples are performed throughout the habitat at several random locations. All of this data can then be used to estimate the population size and population density within the entire habitat.
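To make the extrapolation step concrete, here is a minimal Python sketch of how quadrat counts might be scaled up to a habitat-wide estimate. It is not part of the original text, and the counts and areas are invented for illustration:

```python
# Estimating population size from quadrat counts (hypothetical numbers).
# The mean density across quadrats is extrapolated to the whole habitat.

quadrat_counts = [4, 7, 5, 6, 3, 5]   # individuals counted in each quadrat
quadrat_area_m2 = 1.0                 # each quadrat covers 1 m^2
habitat_area_m2 = 10_000.0            # total habitat area (one hectare)

mean_count = sum(quadrat_counts) / len(quadrat_counts)
density = mean_count / quadrat_area_m2        # individuals per m^2
estimated_N = density * habitat_area_m2       # extrapolated population size

print(f"Density: {density:.2f} individuals/m^2")
print(f"Estimated population size: {estimated_N:.0f}")
```

The key assumption, as in the text, is that the sampled quadrats are representative of the unsampled parts of the habitat.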
The number and size of quadrat samples depends on the type of organisms under study and other factors, including the density of the organism. For example, if sampling daffodils, a 1 m2 quadrat might be used, whereas with giant redwoods, which are larger and live much further apart from each other, a larger quadrat of 100 m2 might be employed. This ensures that enough individuals of the species are counted to get an accurate sample that correlates with the habitat, including areas not sampled.

For mobile organisms, such as mammals, birds, or fish, a technique called mark and recapture is often used. This method involves marking a sample of captured animals in some way (such as tags, bands, paint, or other body markings), and then releasing them back into the environment to allow them to mix with the rest of the population; later, a new sample is collected, including some individuals that are marked (recaptures) and some individuals that are unmarked (Figure 45.4). Using the ratio of marked and unmarked individuals in the second sample, scientists can estimate the total population size. This method assumes that the larger the population, the lower the percentage of tagged organisms that will be recaptured, since they will have mixed with more untagged individuals. For example, if 80 deer are captured, tagged, and released into the forest, and later 100 deer are captured and 20 of them are already marked, we can determine the population size (N) using the following equation:

N = (number marked first catch × total number of second catch) / (number marked second catch)

Using our example, the population size would be estimated at 400:

N = (80 × 100) / 20 = 400

Therefore, there are an estimated 400 total individuals in the original population. There are some limitations to the mark and recapture method. Some animals from the first catch may learn to avoid capture in the second round, thus inflating population estimates. Alternatively, animals may preferentially be retrapped (especially if a food reward is offered), resulting in an underestimate of population size. Also, some species may be harmed by the marking technique, reducing their survival. A variety of other techniques have been developed, including the electronic tracking of animals tagged with radio transmitters and the use of data from commercial fishing and trapping operations to estimate the size and health of populations and communities.
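The formula above (commonly known as the Lincoln-Petersen estimator) translates directly into code. A minimal sketch, using the deer numbers from the worked example:

```python
def mark_recapture_estimate(marked_first, total_second, marked_second):
    """Estimate population size N = (M * C) / R, the formula given in the text:
    M = number marked in the first catch, C = total caught the second time,
    R = number of marked individuals recaptured."""
    return (marked_first * total_second) / marked_second

# The deer example from the text: 80 tagged, 100 caught later, 20 of them marked.
N = mark_recapture_estimate(80, 100, 20)
print(N)  # 400.0
```

Note that the estimate inherits all of the assumptions listed above: if marked animals avoid recapture, R shrinks and N is inflated; if they are preferentially retrapped, R grows and N is underestimated.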
Uniform dispersion is observed in plants that secrete substances inhibiting the growth of nearby individuals (such as the release of toxic chemicals by the sage plant Salvia leucophylla, a phenomenon called allelopathy) and in animals like the penguin that maintain a defined territory. An example of random dispersion occurs with dandelions and other plants that have wind-dispersed seeds that germinate wherever they happen to fall in a favorable environment. A clumped dispersion may be seen in plants that drop their seeds straight to the ground, such as oak trees, or in animals that live in groups (schools of fish or herds of elephants). Clumped dispersions may also be a function of habitat heterogeneity. Thus, the dispersion of the individuals within a population provides more information about how they interact with each other than does a simple density measurement. Just as lower-density species might have more difficulty finding a mate, solitary species with a random distribution might have a similar difficulty when compared to social species clumped together in groups.

Demography

While population size and density describe a population at one particular point in time, scientists must use demography to study the dynamics of a population. Demography is the statistical study of population changes over time: birth rates, death rates, and life expectancies. Each of these measures, especially birth rates, may be affected by the population characteristics described above. For example, a large population size results in a higher birth rate because more potentially reproductive individuals are present. In contrast, a large population size can also result in a higher death rate because of competition, disease, and the accumulation of waste. Similarly, a higher population density or a clumped dispersion pattern results in more potential reproductive encounters between individuals, which can increase birth rate. Lastly, a female-biased sex ratio (the sex ratio is the ratio of males to females) or an age structure (the proportion of population members at specific age ranges) composed of many individuals of reproductive age can increase birth rates.

In addition, the demographic characteristics of a population can influence how the population grows or declines over time. If birth and death rates are equal, the population remains stable. However, the population size will increase if birth rates exceed death rates; the population will decrease if birth rates are less than death rates. Life expectancy is another important factor; the length of time individuals remain in the population impacts local resources, reproduction, and the overall health of the population. These demographic characteristics are often displayed in the form of a life table.

Life Tables

Life tables provide important information about the life history of an organism. Life tables divide the population into age groups and often sexes, and show how long a member of that group is likely to live. They are modeled after actuarial tables used by the insurance industry for estimating human life expectancy. Life tables may include the probability of individuals dying before their next birthday (i.e., their mortality rate), the percentage of surviving individuals dying at a particular age interval, and their life expectancy at each interval. An example of a life table is shown in Table 45.1 from a study of Dall mountain sheep, a species native to northwestern North America. Notice that the population is divided into age intervals (column A).
The mortality rate (per 1000), shown in column D, is based on the number of individuals dying during the age interval (column B) divided by the number of individuals surviving at the beginning of the interval (column C), multiplied by 1000:

mortality rate = (number of individuals dying / number of individuals surviving) × 1000

For example, between ages three and four, 12 individuals die out of the 776 that were remaining from the original 1000 sheep. This number is then multiplied by 1000 to get the mortality rate per thousand:

mortality rate = (12 / 776) × 1000 ≈ 15.5

As can be seen from the mortality rate data (column D), a high death rate occurred when the sheep were between 6 and 12 months old; the rate then climbed even higher from 8 to 12 years old, after which there were few survivors. The data indicate that if a sheep in this population were to survive to age one, it could be expected to live another 7.7 years on average, as shown by the life expectancy numbers in column E.

Life Table of Dall Mountain Sheep 1

1 Data adapted from Edward S. Deevey, Jr., "Life Tables for Natural Populations of Animals," The Quarterly Review of Biology 22, no. 4 (December 1947): 283-314.

Age interval (years) | Number dying in age interval out of 1000 born | Number surviving at beginning of age interval out of 1000 born | Mortality rate per 1000 alive at beginning of age interval | Life expectancy or mean lifetime remaining to those attaining age interval
0-0.5  | 54  | 1000 | 54.0  | 7.06
0.5-1  | 145 | 946  | 153.3 | --
1-2    | 12  | 801  | 15.0  | 7.7
2-3    | 13  | 789  | 16.5  | 6.8
3-4    | 12  | 776  | 15.5  | 5.9
4-5    | 30  | 764  | 39.3  | 5.0
5-6    | 46  | 734  | 62.7  | 4.2
6-7    | 48  | 688  | 69.8  | 3.4
7-8    | 69  | 640  | 107.8 | 2.6
8-9    | 132 | 571  | 231.2 | 1.9
9-10   | 187 | 439  | 426.0 | 1.3
10-11  | 156 | 252  | 619.0 | 0.9
11-12  | 90  | 96   | 937.5 | 0.6
12-13  | 3   | 6    | 500.0 | 1.2
13-14  | 3   | 3    | 1000  | 0.7

Table 45.1 This life table of Ovis dalli shows the number of deaths, number of survivors, mortality rate, and life expectancy at each age interval for the Dall mountain sheep.

Survivorship Curves

Another tool used by population ecologists is a survivorship curve, which is a graph of the number of individuals surviving at each age interval plotted versus time (usually with data compiled from a life table). These curves allow us to compare the life histories of different populations (Figure 45.6). Humans and most primates exhibit a Type I survivorship curve because a high percentage of offspring survive their early and middle years; death occurs predominantly in older individuals. These types of species usually have small numbers of offspring at one time, and they give a high amount of parental care to them to ensure their survival. Birds are an example of an intermediate or Type II survivorship curve because birds die more or less equally at each age interval. These organisms also may have relatively few offspring and provide significant parental care. Trees, marine invertebrates, and most fishes exhibit a Type III survivorship curve because very few of these organisms survive their younger years; however, those that make it to an old age are more likely to survive for a relatively long period of time. Organisms in this category usually have a very large number of offspring, but once they are born, little parental care is provided. Thus these offspring are "on their own" and vulnerable to predation, but their sheer numbers assure the survival of enough individuals to perpetuate the species.
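As a check on the arithmetic, the mortality-rate column of Table 45.1 can be recomputed programmatically. A minimal Python sketch (not part of the original text) using a few rows from the table:

```python
# Recomputing column D of the Dall sheep life table:
# mortality rate = (number dying / number surviving at start of interval) * 1000

rows = [  # (age interval, number dying, number surviving at start)
    ("0-0.5", 54, 1000),
    ("0.5-1", 145, 946),
    ("1-2", 12, 801),
    ("3-4", 12, 776),
]

for interval, dying, surviving in rows:
    rate = dying / surviving * 1000
    print(f"{interval:>6}: {rate:.1f} per 1000")
# The 3-4 row prints 15.5, matching the worked example in the text.
```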
45.2 Life Histories and Natural Selection

Learning Objectives
By the end of this section, you will be able to:
Describe how life history patterns are influenced by natural selection
Explain different life history patterns and how different reproductive strategies affect species' survival

A species' life history describes the series of events over its lifetime, such as how resources are allocated for growth, maintenance, and reproduction. Life history traits affect the life table of an organism. A species' life history is genetically determined and shaped by the environment and natural selection.

Life History Patterns and Energy Budgets

Energy is required by all living organisms for their growth, maintenance, and reproduction; at the same time, energy is often a major limiting factor in determining an organism's survival. Plants, for example, acquire energy from the sun via photosynthesis, but must expend this energy to grow, maintain health, and produce energy-rich seeds to produce the next generation. Animals have the additional burden of using some of their energy reserves to acquire food. Furthermore, some animals must expend energy caring for their offspring. Thus, all species have an energy budget: they must balance energy intake with their use of energy for metabolism, reproduction, parental care, and energy storage (such as bears building up body fat for winter hibernation).

Parental Care and Fecundity

Fecundity is the potential reproductive capacity of an individual within a population. In other words, fecundity describes how many offspring could ideally be produced if an individual has as many offspring as possible, repeating the reproductive cycle as soon as possible after the birth of the offspring. In animals, fecundity is inversely related to the amount of parental care given to an individual offspring. Species, such as many marine invertebrates, that produce many offspring usually provide little if any care for the offspring (they would not have the energy or the ability to do so anyway). Most of their energy budget is used to produce many tiny offspring. Animals with this strategy are often self-sufficient at a very early age. This is because of the energy tradeoff these organisms have made to maximize their evolutionary fitness. Because their energy is used for producing offspring instead of parental care, it makes sense that these offspring have some ability to be able to move within their environment and find food and perhaps shelter. Even with these abilities, their small size makes them extremely vulnerable to predation, so the production of many offspring allows enough of them to survive to maintain the species.

Animal species that have few offspring during a reproductive event usually give extensive parental care, devoting much of their energy budget to these activities, sometimes at the expense of their own health. This is the case with many mammals, such as humans, kangaroos, and pandas. The offspring of these species are relatively helpless at birth and need to develop before they achieve self-sufficiency. Plants with low fecundity produce few energy-rich seeds (such as coconuts and chestnuts) with each having a good chance to germinate into a new organism; plants with high fecundity usually have many small, energy-poor seeds (like orchids) that have a relatively poor chance of surviving. Although it may seem that coconuts and chestnuts have a better chance of surviving, the energy tradeoff of the orchid is also very effective.
It is a matter of where the energy is used: for large numbers of seeds or for fewer seeds with more energy.

Early versus Late Reproduction

The timing of reproduction in a life history also affects species survival. Organisms that reproduce at an early age have a greater chance of producing offspring, but this is usually at the expense of their growth and the maintenance of their health. Conversely, organisms that start reproducing later in life often have greater fecundity or are better able to provide parental care, but they risk that they will not survive to reproductive age. Examples of this can be seen in fishes. Small fish like guppies use their energy to reproduce rapidly, but never attain the size that would give them defense against some predators. Larger fish, like the bluegill or shark, use their energy to attain a large size, but do so with the risk that they will die before they can reproduce, or at least before they reproduce to their maximum. These different energy strategies and tradeoffs are key to understanding the evolution of each species as it maximizes its fitness and fills its niche. In terms of energy budgeting, some species "blow it all" and use up most of their energy reserves to reproduce early before they die. Other species delay reproduction in order to become stronger, more experienced individuals and to make sure that they are strong enough to provide parental care if necessary.

Single versus Multiple Reproductive Events

Some life history traits, such as fecundity, timing of reproduction, and parental care, can be grouped together into general strategies that are used by multiple species. Semelparity occurs when a species reproduces only once during its lifetime and then dies. Such species use most of their resource budget during a single reproductive event, sacrificing their health to the point that they do not survive. Examples of semelparity are bamboo, which flowers once and then dies, and the Chinook salmon (Figure 45.7a), which uses most of its energy reserves to migrate from the ocean to its freshwater nesting area, where it reproduces and then dies. Scientists have posited alternate explanations for the evolutionary advantage of the Chinook's post-reproduction death: a programmed suicide caused by a massive release of corticosteroid hormones, presumably so the parents can become food for the offspring, or simple exhaustion caused by the energy demands of reproduction; these are still being debated.

Iteroparity describes species that reproduce repeatedly during their lives. Some animals are able to mate only once per year, but survive multiple mating seasons. The pronghorn antelope is an example of an animal that goes into a seasonal estrus cycle ("heat"): a hormonally induced physiological condition preparing the body for successful mating (Figure 45.7b). Females of these species mate only during the estrus phase of the cycle. A different pattern is observed in primates, including humans and chimpanzees, which may attempt reproduction at any time during their reproductive years, even though their menstrual cycles make pregnancy likely only a few days per month during ovulation (Figure 45.7c).

Link to Learning
Play this interactive PBS evolution-based mating game to learn more about reproductive strategies.

Evolution Connection
Energy Budgets, Reproductive Costs, and Sexual Selection in Drosophila

Research into how animals allocate their energy resources for growth, maintenance, and reproduction has used a variety of experimental animal models.
Some of this work has been done using the common fruit fly, Drosophila melanogaster. Studies have shown that not only does reproduction have a cost as far as how long male fruit flies live, but also fruit flies that have already mated several times have limited sperm remaining for reproduction. Fruit flies maximize their last chances at reproduction by selecting optimal mates.

In a 1981 study, male fruit flies were placed in enclosures with either virgin or inseminated females. The males that mated with virgin females had shorter life spans than those in contact with the same number of inseminated females with which they were unable to mate. This effect occurred regardless of how large (indicative of their age) the males were. Thus, males that did not mate lived longer, allowing them more opportunities to find mates in the future.

More recent studies, performed in 2006, show how males select the female with which they will mate and how this is affected by previous matings (Figure 45.8). 2 Males were allowed to select between smaller and larger females. Findings showed that larger females had greater fecundity, producing twice as many offspring per mating as the smaller females did. Males that had previously mated, and thus had lower supplies of sperm, were termed "resource-depleted," while males that had not mated were termed "non-resource-depleted." The study showed that although non-resource-depleted males preferentially mated with larger females, this selection of partners was more pronounced in the resource-depleted males. Thus, males with depleted sperm supplies, which were limited in the number of times that they could mate before they replenished their sperm supply, selected larger, more fecund females, thus maximizing their chances for offspring. This study was one of the first to show that the physiological state of the male affected its mating behavior in a way that clearly maximizes its use of limited reproductive resources.

2 Adapted from Phillip G. Byrne and William R. Rice, "Evidence for adaptive male mate choice in the fruit fly Drosophila melanogaster," Proc Biol Sci. 273, no. 1589 (2006): 917-922, doi:10.1098/rspb.2005.3372.

These studies demonstrate two ways in which the energy budget is a factor in reproduction. First, energy expended on mating may reduce an animal's lifespan, but by this time they have already reproduced, so in the context of natural selection this early death is not of much evolutionary importance. Second, when resources such as sperm (and the energy needed to replenish it) are low, an organism's behavior can change to give them the best chance of passing their genes on to the next generation. These changes in behavior, so important to evolution, are studied in a discipline known as behavioral biology, or ethology, at the interface between population biology and psychology.

45.3 Environmental Limits to Population Growth

Learning Objectives
By the end of this section, you will be able to:
Explain the characteristics of and differences between exponential and logistic growth patterns
Give examples of exponential and logistic growth in natural populations
Describe how natural selection and environmental adaptation led to the evolution of particular life history patterns

Although life histories describe the way many characteristics of a population (such as their age structure) change over time in a general way, population ecologists make use of a variety of methods to model population dynamics mathematically.
These more precise models can then be used to accurately describe changes occurring in a population and better predict future changes. Certain models that have been accepted for decades are now being modified or even abandoned due to their lack of predictive ability, and scholars strive to create effective new models.

Exponential Growth

Charles Darwin, in his theory of natural selection, was greatly influenced by the English clergyman Thomas Malthus. Malthus published a book in 1798 stating that populations with unlimited natural resources grow very rapidly, and then population growth decreases as resources become depleted. This accelerating pattern of increasing population size is called exponential growth.

The best example of exponential growth is seen in bacteria. Bacteria are prokaryotes that reproduce by prokaryotic fission. This division takes about an hour for many bacterial species. If 1000 bacteria are placed in a large flask with an unlimited supply of nutrients (so the nutrients will not become depleted), after an hour there is one round of division and each organism divides, resulting in 2000 organisms, an increase of 1000. In another hour, each of the 2000 organisms will double, producing 4000, an increase of 2000 organisms. After the third hour, there should be 8000 bacteria in the flask, an increase of 4000 organisms. The important concept of exponential growth is that the population growth rate, the number of organisms added in each reproductive generation, is accelerating; that is, it is increasing at a greater and greater rate. After 1 day and 24 of these cycles, the population would have increased from 1000 to more than 16 billion. When the population size, N, is plotted over time, a J-shaped growth curve is produced (Figure 45.9).

The bacteria example is not representative of the real world, where resources are limited. Furthermore, some bacteria will die during the experiment and thus not reproduce, lowering the growth rate. Therefore, when calculating the growth rate of a population, the death rate (D, the number of organisms that die during a particular time interval) is subtracted from the birth rate (B, the number of organisms that are born during that interval). This is shown in the following formula:

ΔN (change in number) / ΔT (change in time) = B (birth rate) − D (death rate)

The birth rate is usually expressed on a per capita (for each individual) basis. Thus, B (birth rate) = bN (the per capita birth rate "b" multiplied by the number of individuals "N") and D (death rate) = dN (the per capita death rate "d" multiplied by the number of individuals "N"). Additionally, ecologists are interested in the population at a particular point in time, an infinitely small time interval. For this reason, the terminology of differential calculus is used to obtain the "instantaneous" growth rate, replacing the change in number and time with an instant-specific measurement of number and time.
dN/dT = bN − dN = (b − d)N

Notice that the "d" associated with the first term refers to the derivative (as the term is used in calculus) and is different from the death rate, also called "d." The difference between birth and death rates is further simplified by substituting the term "r" (intrinsic rate of increase) for the relationship between birth and death rates:

dN/dT = rN

The value "r" can be positive, meaning the population is increasing in size; negative, meaning the population is decreasing in size; or zero, in which case the population's size is unchanging, a condition known as zero population growth. A further refinement of the formula recognizes that different species have inherent differences in their intrinsic rate of increase (often thought of as the potential for reproduction), even under ideal conditions. Obviously, a bacterium can reproduce more rapidly and have a higher intrinsic rate of growth than a human. The maximal growth rate for a species is its biotic potential, or r_max, thus changing the equation to:

dN/dT = r_max N

Logistic Growth

Exponential growth is possible only when infinite natural resources are available; this is not the case in the real world. Charles Darwin recognized this fact in his description of the "struggle for existence," which states that individuals will compete (with members of their own or other species) for limited resources. The successful ones will survive to pass on their own characteristics and traits (which we know now are transferred by genes) to the next generation at a greater rate (natural selection). To model the reality of limited resources, population ecologists developed the logistic growth model.

Carrying Capacity and the Logistic Model

In the real world, with its limited resources, exponential growth cannot continue indefinitely. Exponential growth may occur in environments where there are few individuals and plentiful resources, but when the number of individuals gets large enough, resources will be depleted, slowing the growth rate. Eventually, the growth rate will plateau or level off (Figure 45.9). This population size, which represents the maximum population size that a particular environment can support, is called the carrying capacity, or K.

The formula we use to calculate logistic growth adds the carrying capacity as a moderating force in the growth rate. The expression "K − N" is indicative of how many individuals may be added to a population at a given stage, and "K − N" divided by "K" is the fraction of the carrying capacity available for further growth. Thus, the exponential growth model is restricted by this factor to generate the logistic growth equation:

dN/dT = r_max N (K − N)/K

Notice that when N is very small, (K − N)/K becomes close to K/K, or 1, and the right side of the equation reduces to r_max N, which means the population is growing exponentially and is not influenced by carrying capacity. On the other hand, when N is large, (K − N)/K comes close to zero, which means that population growth will be slowed greatly or even stopped. Thus, population growth is greatly slowed in large populations by the carrying capacity K. This model also allows for negative population growth, or a population decline.
This occurs when the number of individuals in the population exceeds the carrying capacity (because the value of (K − N)/K is negative). A graph of this equation yields an S-shaped curve (Figure 45.9), and it is a more realistic model of population growth than exponential growth. There are three different sections to an S-shaped curve. Initially, growth is exponential because there are few individuals and ample resources available. Then, as resources begin to become limited, the growth rate decreases. Finally, growth levels off at the carrying capacity of the environment, with little change in population size over time.

Role of Intraspecific Competition

The logistic model assumes that every individual within a population will have equal access to resources and, thus, an equal chance for survival. For plants, the amount of water, sunlight, nutrients, and the space to grow are the important resources, whereas in animals, important resources include food, water, shelter, nesting space, and mates. In the real world, phenotypic variation among individuals within a population means that some individuals will be better adapted to their environment than others. The resulting competition between population members of the same species for resources is termed intraspecific competition (intra- = "within"; -specific = "species"). Intraspecific competition for resources may not affect populations that are well below their carrying capacity; resources are plentiful and all individuals can obtain what they need. However, as population size increases, this competition intensifies. In addition, the accumulation of waste products can reduce an environment's carrying capacity.

Examples of Logistic Growth

Yeast, a microscopic fungus used to make bread and alcoholic beverages, exhibits the classical S-shaped curve when grown in a test tube (Figure 45.10a). Its growth levels off as the population depletes the nutrients that are necessary for its growth. In the real world, however, there are variations to this idealized curve. Examples in wild populations include sheep and harbor seals (Figure 45.10b). In both examples, the population size exceeds the carrying capacity for short periods of time and then falls below the carrying capacity afterwards. This fluctuation in population size continues to occur as the population oscillates around its carrying capacity. Still, even with this oscillation, the logistic model is confirmed.

Visual Connection
If the major food source of the seals declines due to pollution or overfishing, which of the following would likely occur?
a. The carrying capacity of seals would decrease, as would the seal population.
b. The carrying capacity of seals would decrease, but the seal population would remain the same.
c. The number of seal deaths would increase but the number of births would also increase, so the population size would remain the same.
d. The carrying capacity of seals would remain the same, but the population of seals would decrease.
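The exponential and logistic equations above lend themselves to a quick numerical experiment. The following Python sketch (not from the text) steps both models forward with a simple Euler approximation; the values of r_max, K, N0, and the step size are illustrative choices, not taken from any dataset in this chapter:

```python
# Euler-method sketch of exponential vs. logistic growth.
# Parameter values are invented for illustration.

r_max = 0.5     # intrinsic rate of increase
K = 1000.0      # carrying capacity
N0 = 10.0       # starting population size
dt = 0.1        # time step
steps = 200     # simulate 20 time units

N_exp, N_log = N0, N0
for step in range(1, steps + 1):
    N_exp += r_max * N_exp * dt                      # dN/dT = r_max * N
    N_log += r_max * N_log * (K - N_log) / K * dt    # dN/dT = r_max * N * (K-N)/K
    if step % 50 == 0:
        print(f"t={step * dt:4.1f}  exponential={N_exp:10.1f}  logistic={N_log:7.1f}")
```

Running this shows the contrast described above: the exponential population keeps accelerating (the J-shaped curve), while the logistic population levels off near K (the S-shaped curve). Shrinking dt brings the discrete approximation closer to the continuous equations.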
45.4 Population Dynamics and Regulation

Learning Objectives
By the end of this section, you will be able to:
Give examples of how the carrying capacity of a habitat may change
Compare and contrast density-dependent growth regulation and density-independent growth regulation, giving examples
Give examples of exponential and logistic growth in wild animal populations
Describe how natural selection and environmental adaptation lead to the evolution of particular life-history patterns

The logistic model of population growth, while valid in many natural populations and a useful model, is a simplification of real-world population dynamics. Implicit in the model is that the carrying capacity of the environment does not change, which is not the case. The carrying capacity varies annually: for example, some summers are hot and dry whereas others are cold and wet. In many areas, the carrying capacity during the winter is much lower than it is during the summer. Also, natural events such as earthquakes, volcanoes, and fires can alter an environment and hence its carrying capacity. Additionally, populations do not usually exist in isolation. They engage in interspecific competition: that is, they share the environment with other species, competing with them for the same resources. These factors are also important to understanding how a specific population will grow.

Nature regulates population growth in a variety of ways. These are grouped into density-dependent factors, in which the density of the population at a given time affects growth rate and mortality, and density-independent factors, which influence mortality in a population regardless of population density. Note that in the former, the effect of the factor on the population depends on the density of the population at onset. Conservation biologists want to understand both types because this helps them manage populations and prevent extinction or overpopulation.

Density-dependent Regulation

Most density-dependent factors are biological in nature (biotic), and include predation, inter- and intraspecific competition, accumulation of waste, and diseases such as those caused by parasites. Usually, the denser a population is, the greater its mortality rate. For example, during intra- and interspecific competition, the reproductive rates of the individuals will usually be lower, reducing their population's rate of growth. In addition, low prey density increases the mortality of its predator because it has more difficulty locating its food source. An example of density-dependent regulation is shown in Figure 45.11 with results from a study focusing on the giant intestinal roundworm (Ascaris lumbricoides), a parasite of humans and other mammals. 3 Denser populations of the parasite exhibited lower fecundity: they contained fewer eggs. One possible explanation for this is that females would be smaller in more dense populations (due to limited resources) and that smaller females would have fewer eggs. This hypothesis was tested and disproved in a 2009 study which showed that female weight had no influence. 4 The actual cause of the density-dependence of fecundity in this organism is still unclear and awaiting further investigation.

3 N.A. Croll et al., "The Population Biology and Control of Ascaris lumbricoides in a Rural Community in Iran." Transactions of the Royal Society of Tropical Medicine and Hygiene 76, no. 2 (1982): 187-197, doi:10.1016/0035-9203(82)90272-3.
4 Martin Walker et al., "Density-Dependent Effects on the Weight of Female Ascaris lumbricoides Infections of Humans and its Impact on Patterns of Egg Production." Parasites & Vectors 2, no. 11 (February 2009), doi:10.1186/1756-3305-2-11.

Density-independent Regulation and Interaction with Density-dependent Factors

Many factors, typically physical or chemical in nature (abiotic), influence the mortality of a population regardless of its density, including weather, natural disasters, and pollution. An individual deer may be killed in a forest fire regardless of how many deer happen to be in that area. Its chances of survival are the same whether the population density is high or low. The same holds true for cold winter weather.

In real-life situations, population regulation is very complicated and density-dependent and independent factors can interact. A dense population that is reduced in a density-independent manner by some environmental factor(s) will be able to recover differently than a sparse population. For example, a population of deer affected by a harsh winter will recover faster if there are more deer remaining to reproduce.

Evolution Connection
Why Did the Woolly Mammoth Go Extinct?

It's easy to get lost in the discussion of dinosaurs and theories about why they went extinct 65 million years ago. Was it due to a meteor slamming into Earth near the coast of modern-day Mexico, or was it from some long-term weather cycle that is not yet understood? One hypothesis that will never be proposed is that humans had something to do with it. Mammals were small, insignificant creatures of the forest 65 million years ago, and no humans existed. Woolly mammoths, however, began to go extinct about 10,000 years ago, when they shared the Earth with humans who were no different anatomically than humans today (Figure 45.12). Mammoths survived in isolated island populations as recently as 1700 BC. We know a lot about these animals from carcasses found frozen in the ice of Siberia and other regions of the north. Scientists have sequenced at least 50 percent of the mammoth genome and believe mammoths are between 98 and 99 percent identical to modern elephants.

It is commonly thought that climate change and human hunting led to their extinction. A 2008 study estimated that climate change reduced the mammoth's range from 3,000,000 square miles 42,000 years ago to 310,000 square miles 6,000 years ago. 6 It is also well documented that humans hunted these animals. A 2012 study showed that no single factor was exclusively responsible for the extinction of these magnificent creatures. 7 In addition to human hunting, climate change, and reduction of habitat, these scientists demonstrated another important factor in the mammoth's extinction was the migration of humans across the Bering Strait to North America during the last ice age 20,000 years ago.

6 David Nogués-Bravo et al., "Climate Change, Humans, and the Extinction of the Woolly Mammoth." PLoS Biol 6 (April 2008): e79, doi:10.1371/journal.pbio.0060079.
7 G.M. MacDonald et al., "Pattern of Extinction of the Woolly Mammoth in Beringia." Nature Communications 3, no. 893 (June 2012), doi:10.1038/ncomms1881.

The maintenance of stable populations was and is very complex, with many interacting factors determining the outcome. It is important to remember that humans are also part of nature. Once, we contributed to species' declines using only primitive hunting technology.
Life Histories of K-selected and r-selected Species

While reproductive strategies play a key role in life histories, they do not account for important factors like limited resources and competition. The regulation of population growth by these factors can be used to introduce a classical concept in population biology, that of K-selected versus r-selected species.

Early Theories about Life History: K-selected and r-selected Species

By the second half of the twentieth century, the concept of K- and r-selected species was used extensively and successfully to study populations. The concept relates not only to reproductive strategies, but also to a species' habitat and behavior, especially in the way that they obtain resources and care for their young. It includes length of life and survivorship factors as well. For this analysis, population biologists have grouped species into the two large categories, K-selected and r-selected, although they are really two ends of a continuum.

K-selected species are species selected by stable, predictable environments. Populations of K-selected species tend to exist close to their carrying capacity (hence the term K-selected) where intraspecific competition is high. These species have few, large offspring, a long gestation period, and often give long-term care to their offspring (Table 45.2). While larger in size when born, the offspring are relatively helpless and immature at birth. By the time they reach adulthood, they must develop skills to compete for natural resources. In plants, scientists think of parental care more broadly: how long fruit takes to develop or how long it remains on the plant are determining factors in the time to the next reproductive event. Examples of K-selected species are primates (including humans), elephants, and plants such as oak trees (Figure 45.13a).

Oak trees grow very slowly and take, on average, 20 years to produce their first seeds, known as acorns. As many as 50,000 acorns can be produced by an individual tree, but the germination rate is low as many of these rot or are eaten by animals such as squirrels. In some years, oaks may produce an exceptionally large number of acorns, and these years may be on a two- or three-year cycle depending on the species of oak (r-selection). As oak trees grow to a large size and for many years before they begin to produce acorns, they devote a large percentage of their energy budget to growth and maintenance. The tree's height and size allow it to dominate other plants in the competition for sunlight, the oak's primary energy resource. Furthermore, when it does reproduce, the oak produces large, energy-rich seeds that use their energy reserve to become quickly established (K-selection).

In contrast, r-selected species have a large number of small offspring (hence their r designation) (Table 45.2). This strategy is often employed in unpredictable or changing environments. Animals that are r-selected do not give long-term parental care and the offspring are relatively mature and self-sufficient at birth. Examples of r-selected species are marine invertebrates, such as jellyfish, and plants, such as the dandelion (Figure 45.13b). Dandelions have small seeds that are wind dispersed long distances. Many seeds are produced simultaneously to ensure that at least some of them reach a hospitable environment. Seeds that land in inhospitable environments have little chance for survival, since they are low in energy content.
Note that survival is not necessarily a function of energy stored in the seed itself.

Characteristics of K-selected species | Characteristics of r-selected species
Mature late | Mature early
Greater longevity | Lower longevity
Increased parental care | Decreased parental care
Increased competition | Decreased competition
Fewer offspring | More offspring
Larger offspring | Smaller offspring

Table 45.2 Characteristics of K-selected and r-selected species.

Modern Theories of Life History

The r- and K-selection theory, although accepted for decades and used for much groundbreaking research, has now been reconsidered, and many population biologists have abandoned or modified it. Over the years, several studies attempted to confirm the theory, but these attempts have largely failed. Many species were identified that did not follow the theory's predictions. Furthermore, the theory ignored the age-specific mortality of the populations, which scientists now know is very important. New demographic-based models of life history evolution have been developed which incorporate many ecological concepts included in r- and K-selection theory as well as population age structure and mortality factors.

45.5 Human Population Growth

Learning Objectives
By the end of this section, you will be able to:
Discuss how human population growth can be exponential
Explain how humans have expanded the carrying capacity of their habitat
Relate population growth and age structure to the level of economic development in different countries
Discuss the long-term implications of unchecked human population growth

Concepts of animal population dynamics can be applied to human population growth. Humans are not unique in their ability to alter their environment. For example, beaver dams alter the stream environment where they are built. Humans, however, have the ability to alter their environment to increase its carrying capacity, sometimes to the detriment of other species (e.g., via artificial selection for crops that have a higher yield). Earth's human population is growing rapidly, to the extent that some worry about the ability of the earth's environment to sustain this population, as long-term exponential growth carries the potential risks of famine, disease, and large-scale death.

Although humans have increased the carrying capacity of their environment, the technologies used to achieve this transformation have caused unprecedented changes to Earth's environment, altering ecosystems to the point where some may be in danger of collapse. The depletion of the ozone layer, erosion due to acid rain, and damage from global climate change are caused by human activities. The ultimate effect of these changes on our carrying capacity is unknown. As some point out, it is likely that the negative effects of increasing carrying capacity will outweigh the positive ones; the carrying capacity of the world for human beings might actually decrease.

The world's human population is currently experiencing exponential growth even though human reproduction is far below its biotic potential (Figure 45.14). To reach its biotic potential, all females would have to become pregnant every nine months or so during their reproductive years. Also, resources would have to be such that the environment would support such growth. Neither of these two conditions exists. In spite of this fact, human population is still growing exponentially.
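A small calculation illustrates what sustained exponential growth implies for the pattern described next: the time needed to add each successive billion keeps shrinking. This sketch is not from the text; it assumes a constant per-capita growth rate (the 1.1 percent per year used here is purely illustrative) and uses the fact that, under dN/dT = rN, the time to grow from N1 to N2 is ln(N2/N1)/r:

```python
import math

# Time for an exponentially growing population to add each successive billion,
# assuming a constant, hypothetical per-capita growth rate.
r = 0.011  # per year (~1.1%, illustrative only)

for billions in range(1, 8):
    N1 = billions * 1e9
    N2 = (billions + 1) * 1e9
    years = math.log(N2 / N1) / r
    print(f"{billions} -> {billions + 1} billion: {years:5.1f} years")
```

Because the ratio N2/N1 shrinks as the population grows, each additional billion arrives faster than the one before, even at a fixed growth rate.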
A consequence of exponential human population growth is that the time it takes to add a particular number of humans to the Earth is becoming shorter. Figure 45.15 shows that 123 years were necessary to add the second billion humans (reached around 1930), but it took only 24 years to add two billion people between 1975 and 1999. As already discussed, our ability to increase our carrying capacity indefinitely on a finite world is uncertain. Without new technological advances, the human growth rate has been predicted to slow in the coming decades. However, the population will still be increasing and the threat of overpopulation remains.

Link to Learning
Click through this interactive view of how human populations have changed over time.

Overcoming Density-Dependent Regulation

Humans are unique in their ability to alter their environment with the conscious purpose of increasing its carrying capacity. This ability is a major factor responsible for human population growth and a way of overcoming density-dependent growth regulation. Much of this ability is related to human intelligence, society, and communication. Humans can construct shelter to protect them from the elements and have developed agriculture and domesticated animals to increase their food supplies. In addition, humans use language to communicate this technology to new generations, allowing them to improve upon previous accomplishments.

Other factors in human population growth are migration and public health. Humans originated in Africa, but have since migrated to nearly all inhabitable land on the Earth. Public health, sanitation, and the use of antibiotics and vaccines have decreased the ability of infectious disease to limit human population growth. In the past, diseases such as the bubonic plague of the fourteenth century killed between 30 and 60 percent of Europe's population and reduced the overall world population by as many as 100 million people. Today, the threat of infectious disease, while not gone, is certainly less severe. According to the World Health Organization, global deaths from infectious disease declined from 16.4 million in 1993 to 14.7 million in 2002. To compare to some of the epidemics of the past, the percentage of the world's population killed between 1993 and 2002 decreased from 0.30 percent of the world's population to 0.24 percent. Thus, it appears that the influence of infectious disease on human population growth is becoming less significant.

Age Structure, Population Growth, and Economic Development

The age structure of a population is an important factor in population dynamics. Age structure is the proportion of a population at different age ranges. Age structure allows better prediction of population growth, plus the ability to associate this growth with the level of economic development in the region. Countries with rapid growth have a pyramidal shape in their age structure diagrams, showing a preponderance of younger individuals, many of whom are of reproductive age or will be soon (Figure 45.16). This pattern is most often observed in underdeveloped countries where individuals do not live to old age because of less-than-optimal living conditions. Age structures of areas with slow growth, including developed countries such as the United States, still have a pyramidal structure, but with many fewer young and reproductive-aged individuals and a greater proportion of older individuals. Other developed countries, such as Italy, have zero population growth.
The age structure of these populations is more conical, with an even greater percentage of middle-aged and older individuals. The actual growth rates in different countries are shown in Figure 45.17, with the highest rates tending to be in the less economically developed countries of Africa and Asia.

Visual Connection
Age structure diagrams for rapidly growing, slow growing, and stable populations are shown in stages 1 through 3. What type of population change do you think stage 4 represents?

Long-Term Consequences of Exponential Human Population Growth

Many dire predictions have been made about the world's population leading to a major crisis called the "population explosion." In the 1968 book The Population Bomb, biologist Dr. Paul R. Ehrlich wrote, "The battle to feed all of humanity is over. In the 1970s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now. At this late date nothing can prevent a substantial increase in the world death rate." 8 While many critics view this statement as an exaggeration, the laws of exponential population growth are still in effect, and unchecked human population growth cannot continue indefinitely.

8 Paul R. Ehrlich, prologue to The Population Bomb, (1968; repr., New York: Ballantine, 1970).

Efforts to control population growth led to the one-child policy in China, which used to include more severe consequences, but now imposes fines on urban couples who have more than one child. Because some couples wish to have a male heir, many Chinese couples continue to have more than one child. The policy itself, its social impacts, and the effectiveness of limiting overall population growth are controversial. In spite of population control policies, the human population continues to grow. At some point the food supply may run out because of the subsequent need to produce more and more food to feed our population. The United Nations estimates that future world population growth may vary from 6 billion (a decrease) to 16 billion people by the year 2100. There is no way to know whether human population growth will moderate to the point where the crisis described by Dr. Ehrlich will be averted.

Another result of population growth is the endangerment of the natural environment. Many countries have attempted to reduce the human impact on climate change by reducing their emission of the greenhouse gas carbon dioxide. However, these treaties have not been ratified by every country, and many underdeveloped countries trying to improve their economic condition may be less likely to agree with such provisions if it means slower economic development. Furthermore, the role of human activity in causing climate change has become a hotly debated socio-political issue in some developed countries, including the United States. Thus, we enter the future with considerable uncertainty about our ability to curb human population growth and protect our environment.

Link to Learning
Visit this website and select "Launch movie" for an animation discussing the global impacts of human population growth.

45.6 Community Ecology

Learning Objectives
By the end of this section, you will be able to:
Discuss the predator-prey cycle
Give examples of defenses against predation and herbivory
Describe the competitive exclusion principle
Give examples of symbiotic relationships between species
Describe community structure and succession

Populations rarely, if ever, live in isolation from populations of other species.
In most cases, numerous species share a habitat. The interactions between these populations play a major role in regulating population growth and abundance. All populations occupying the same habitat form a community: populations inhabiting a specific area at the same time. The number of species occupying the same habitat and their relative abundance is known as species diversity. Areas with low diversity, such as the glaciers of Antarctica, still contain a wide variety of living things, whereas the diversity of tropical rainforests is so great that it cannot be counted. Ecology is studied at the community level to understand how species interact with each other and compete for the same resources.

Predation and Herbivory

Perhaps the classical example of species interaction is predation: the hunting of prey by its predator. Nature shows on television highlight the drama of one living organism killing another. Populations of predators and prey in a community are not constant over time: in most cases, they vary in cycles that appear to be related. The most often cited example of predator-prey dynamics is seen in the cycling of the lynx (predator) and the snowshoe hare (prey), using nearly 200 years of trapping data from North American forests (Figure 45.18). This cycle of predator and prey lasts approximately 10 years, with the predator population lagging 1–2 years behind that of the prey population. As the hare numbers increase, there is more food available for the lynx, allowing the lynx population to increase as well. When the lynx population grows to a threshold level, however, they kill so many hares that the hare population begins to decline, followed by a decline in the lynx population because of scarcity of food. When the lynx population is low, the hare population size begins to increase due, at least in part, to low predation pressure, starting the cycle anew.

The idea that the population cycling of the two species is entirely controlled by predation models has come under question. More recent studies have pointed to undefined density-dependent factors as being important in the cycling, in addition to predation. One possibility is that the cycling is inherent in the hare population due to density-dependent effects such as lower fecundity (maternal stress) caused by crowding when the hare population gets too dense. The hare cycling would then induce the cycling of the lynx because it is the lynxes' major food source. The more we study communities, the more complexities we find, allowing ecologists to derive more accurate and sophisticated models of population dynamics.

Herbivory describes the consumption of plants by insects and other animals, and it is another interspecific relationship that affects populations. Unlike animals, most plants cannot outrun predators or use mimicry to hide from hungry animals. Some plants have developed mechanisms to defend against herbivory. Other species have developed mutualistic relationships; for example, herbivory provides a mechanism of seed distribution that aids in plant reproduction.

Defense Mechanisms against Predation and Herbivory

The study of communities must consider evolutionary forces that act on the members of the various populations contained within it. Species are not static, but slowly changing and adapting to their environment by natural selection and other evolutionary forces. Species have evolved numerous mechanisms to escape predation and herbivory. These defenses may be mechanical, chemical, physical, or behavioral.
Herbivory describes the consumption of plants by insects and other animals, and it is another interspecific relationship that affects populations. Unlike animals, most plants cannot outrun predators or use mimicry to hide from hungry animals. Some plants have developed mechanisms to defend against herbivory. Other species have developed mutualistic relationships; for example, herbivory provides a mechanism of seed distribution that aids in plant reproduction. Defense Mechanisms against Predation and Herbivory The study of communities must consider the evolutionary forces that act on the members of the various populations contained within them. Species are not static but are slowly changing and adapting to their environment by natural selection and other evolutionary forces. Species have evolved numerous mechanisms to escape predation and herbivory. These defenses may be mechanical, chemical, physical, or behavioral. Mechanical defenses, such as the presence of thorns on plants or the hard shell on turtles, discourage animal predation and herbivory by causing physical pain to the predator or by physically preventing the predator from being able to eat the prey. Chemical defenses are produced by many animals as well as plants, such as the foxglove, which is extremely toxic when eaten. Figure 45.19 shows some organisms’ defenses against predation and herbivory. Many species use their body shape and coloration to avoid being detected by predators. The tropical walking stick is an insect with the coloration and body shape of a twig, which makes it very hard to see when stationary against a background of real twigs ( Figure 45.20 a ). In another example, the chameleon can change its color to match its surroundings ( Figure 45.20 b ). Both of these are examples of camouflage , or avoiding detection by blending in with the background. Some species use coloration as a way of warning predators that they are not good to eat. For example, the cinnabar moth caterpillar, the fire-bellied toad, and many species of beetle have bright colors that warn of a foul taste, the presence of toxic chemicals, or the ability to sting or bite, respectively. Predators that ignore this coloration and eat the organisms will experience the unpleasant taste or toxic chemicals and learn not to eat them in the future. This type of defensive mechanism is called aposematic coloration , or warning coloration ( Figure 45.21 ). While some predators learn to avoid eating certain potential prey because of their coloration, other species have evolved mechanisms to mimic this coloration to avoid being eaten, even though they themselves may not be unpleasant to eat or contain toxic chemicals. In Batesian mimicry , a harmless species imitates the warning coloration of a harmful one. Assuming they share the same predators, this coloration then protects the harmless ones, even though they do not have the same level of physical or chemical defenses against predation as the organism they mimic. Many insect species mimic the coloration of wasps or bees, which are stinging, venomous insects, thereby discouraging predation ( Figure 45.22 ). In Müllerian mimicry , multiple species share the same warning coloration, but all of them actually have defenses. Figure 45.23 shows a variety of foul-tasting butterflies with similar coloration. In Emsleyan/Mertensian mimicry , a deadly prey mimics a less dangerous one, such as the venomous coral snake mimicking the non-venomous milk snake. This type of mimicry is extremely rare and more difficult to understand than the previous two types. For this type of mimicry to work, it is essential that eating the milk snake has unpleasant but not fatal consequences. Then, these predators learn not to eat snakes with this coloration, protecting the coral snake as well. If the snake were fatal to the predator, there would be no opportunity for the predator to learn not to eat it, and the benefit for the less toxic species would disappear. Link to Learning Go to this website to view stunning examples of mimicry. Competitive Exclusion Principle Resources are often limited within a habitat, and multiple species may compete to obtain them. All species have an ecological niche in the ecosystem, which describes how they acquire the resources they need and how they interact with other species in the community. The competitive exclusion principle states that two species cannot occupy the same niche in a habitat.
In other words, different species cannot coexist in a community if they are competing for all the same resources. An example of this principle is shown in Figure 45.24 , with two protozoan species, Paramecium aurelia and Paramecium caudatum . When grown individually in the laboratory, they both thrive. But when they are placed together in the same test tube (habitat), P. aurelia outcompetes P. caudatum for food, leading to the latter’s eventual extinction. This exclusion may be avoided if a population evolves to make use of a different resource or a different area of the habitat, or to feed during a different time of day, a strategy called resource partitioning. The two organisms are then said to occupy different microniches. These organisms coexist by minimizing direct competition.
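The outcome of the Paramecium experiment can be captured with a pair of competition equations of the Lotka-Volterra type, in which each species is slowed both by its own crowding and by the presence of the other. The chapter presents no equations, so the Python sketch below is only a hypothetical illustration with made-up coefficients, but it shows the characteristic result: when one species exerts the stronger competitive effect, the other declines toward extinction.

```python
# A minimal sketch of competitive exclusion using Lotka-Volterra
# competition equations. Carrying capacities and competition
# coefficients are illustrative, not measurements from Paramecium.

def step(n1, n2, dt=0.05):
    K = 100.0                # shared carrying capacity
    a12, a21 = 1.2, 0.8      # effect of species 2 on 1, and of 1 on 2
    dn1 = 0.9 * n1 * (1 - (n1 + a12 * n2) / K)
    dn2 = 0.9 * n2 * (1 - (n2 + a21 * n1) / K)
    return n1 + dn1 * dt, n2 + dn2 * dt

n1, n2 = 10.0, 10.0
for t in range(4001):
    if t % 1000 == 0:
        print(f"step {t:4d}  species 1: {n1:5.1f}  species 2: {n2:5.1f}")
    n1, n2 = step(n1, n2)
```

Because species 1 feels competition more strongly here (a12 > a21), it is driven out while species 2 approaches its carrying capacity; resource partitioning corresponds to lowering the competition coefficients until both species can persist.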
Symbiosis Symbiotic relationships, or symbioses (plural), are close interactions between individuals of different species over an extended period of time that impact the abundance and distribution of the associating populations. Most scientists accept this definition, but some restrict the term to only those species that are mutualistic, where both individuals benefit from the interaction. In this discussion, the broader definition will be used. Commensalism A commensal relationship occurs when one species benefits from the close, prolonged interaction, while the other neither benefits nor is harmed. Birds nesting in trees provide an example of a commensal relationship ( Figure 45.25 ). The tree is not harmed by the presence of the nest among its branches. The nests are light and produce little strain on the structural integrity of the branch, and most of the leaves, which the tree uses to get energy by photosynthesis, are above the nest so they are unaffected. The bird, on the other hand, benefits greatly. If the bird had to nest in the open, its eggs and young would be vulnerable to predators. Another example of a commensal relationship is the clown fish and the sea anemone. The sea anemone is not harmed by the fish, and the fish benefits from protection from predators, which would be stung upon nearing the sea anemone. Mutualism A second type of symbiotic relationship is called mutualism , where two species benefit from their interaction. Some scientists believe that these are the only true examples of symbiosis. For example, termites have a mutualistic relationship with protozoa that live in the insect’s gut ( Figure 45.26 a ). The termite benefits from the ability of bacterial symbionts within the protozoa to digest cellulose. The termite itself cannot do this, and without the protozoa, it would not be able to obtain energy from its food (cellulose from the wood it chews and eats). The protozoa and the bacterial symbionts benefit by having a protective environment and a constant supply of food from the wood-chewing actions of the termite. Lichens have a mutualistic relationship between fungus and photosynthetic algae or bacteria ( Figure 45.26 b ). As these symbionts grow together, the glucose produced by the algae provides nourishment for both organisms, whereas the physical structure of the lichen protects the algae from the elements and makes certain nutrients in the atmosphere more available to the algae. Parasitism A parasite is an organism that lives in or on another living organism and derives nutrients from it. In this relationship, the parasite benefits, but the organism being fed upon, the host, is harmed. The host is usually weakened by the parasite as it siphons resources the host would normally use to maintain itself. The parasite, however, is unlikely to kill the host, especially not quickly, because this would allow no time for the organism to complete its reproductive cycle by spreading to another host. The reproductive cycles of parasites are often very complex, sometimes requiring more than one host species. A tapeworm is a parasite that causes disease in humans when contaminated, undercooked meat such as pork, fish, or beef is consumed ( Figure 45.27 ). The tapeworm can live inside the intestine of the host for several years, benefiting from the food the host brings into its gut by eating, and may grow to be over 50 ft long by adding segments. The parasite moves from species to species in a cycle, making two hosts necessary to complete its life cycle. Another common parasite is Plasmodium falciparum , the protozoan cause of malaria, a significant disease in many parts of the world. Living and reproducing asexually in human liver and red blood cells, the organism completes its life cycle in the gut of blood-feeding mosquitoes. Thus malaria is spread from human to human by mosquitoes, one of many arthropod-borne infectious diseases. Characteristics of Communities Communities are complex entities that can be characterized by their structure (the types and numbers of species present) and dynamics (how communities change over time). Understanding community structure and dynamics enables community ecologists to manage ecosystems more effectively. Foundation Species Foundation species are considered the “base” or “bedrock” of a community, having the greatest influence on its overall structure. They are usually the primary producers: organisms that bring most of the energy into the community. Kelp, a brown alga, is a foundation species, forming the basis of the kelp forests off the coast of California. Foundation species may physically modify the environment to produce and maintain habitats that benefit the other organisms that use them. An example is the photosynthetic corals of the coral reef ( Figure 45.28 ). Corals themselves are not photosynthetic, but harbor symbionts within their body tissues (dinoflagellates called zooxanthellae) that perform photosynthesis; this is another example of a mutualism. The exoskeletons of living and dead coral make up most of the reef structure, which protects many other species from waves and ocean currents. Biodiversity, Species Richness, and Relative Species Abundance Biodiversity describes a community’s biological complexity: it is measured by the number of different species (species richness) in a particular area and their relative abundance (species evenness). The area in question could be a habitat, a biome, or the entire biosphere. Species richness is the term that is used to describe the number of species living in a habitat or biome. Species richness varies across the globe ( Figure 45.29 ). One factor in determining species richness is latitude, with the greatest species richness occurring in ecosystems near the equator, which often have warmer temperatures, large amounts of rainfall, and low seasonality. The lowest species richness occurs near the poles, which are much colder and drier, and thus less conducive to life. Geologic time (time since glaciations) and the predictability of climate or productivity are also important factors. Other factors influence species richness as well.
For example, the study of island biogeography attempts to explain the relatively high species richness found in certain isolated island chains, including the Galápagos Islands that inspired the young Darwin. Relative species abundance is the number of individuals in a species relative to the total number of individuals in all species within a habitat, ecosystem, or biome. Foundation species often have the highest relative abundance within a community.
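Both measures are simple to compute from survey counts. The short Python sketch below uses hypothetical count data (not a real survey) to show how species richness and relative abundances fall out of the same tally.

```python
# Species richness and relative species abundance computed from
# illustrative (hypothetical) count data for a single habitat.

counts = {"oak": 40, "hickory": 25, "maple": 20, "dogwood": 15}

richness = len(counts)            # number of species recorded
total = sum(counts.values())      # total individuals across all species

print(f"species richness: {richness}")
for species, n in counts.items():
    print(f"  {species}: relative abundance = {n / total:.2f}")
```

Two communities can share the same richness while differing sharply in evenness, one dominated by a single abundant species and the other with near-equal counts, which is why both numbers are reported.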
Keystone Species A keystone species is one whose presence is key to maintaining biodiversity within an ecosystem and to upholding an ecological community’s structure. The intertidal sea star, Pisaster ochraceus , of the northwestern United States is a keystone species ( Figure 45.30 ). Studies have shown that when this organism is removed from communities, populations of its natural prey (mussels) increase, completely altering the species composition and reducing biodiversity. Another keystone species is the banded tetra, a fish in tropical streams, which supplies nearly all of the phosphorus, a necessary inorganic nutrient, to the rest of the community. If these fish were to become extinct, the community would be greatly affected. Everyday Connection Invasive Species Invasive species are non-native organisms that, when introduced to an area out of their native range, threaten the ecosystem balance of that habitat. Many such species exist in the United States, as shown in Figure 45.31 . Whether enjoying a forest hike, taking a summer boat trip, or simply walking down an urban street, you have likely encountered an invasive species. One of the many recent proliferations of an invasive species concerns the growth of Asian carp populations. Asian carp were introduced to the United States in the 1970s by fisheries and sewage treatment facilities that used the fish’s excellent filter-feeding capabilities to clean their ponds of excess plankton. Some of the fish escaped, however, and by the 1980s they had colonized many waterways of the Mississippi River basin, including the Illinois and Missouri Rivers. Voracious eaters and rapid reproducers, Asian carp may outcompete native species for food, potentially leading to their extinction. For example, black carp are voracious eaters of native mussels and snails, limiting this food source for native fish species. Silver carp eat plankton that native mussels and snails feed on, reducing this food source through a different alteration of the food web. In some areas of the Mississippi River, Asian carp species have become predominant, effectively outcompeting native fishes for habitat. In some parts of the Illinois River, Asian carp constitute 95 percent of the community's biomass. Although edible, the fish is bony and not a desired food in the United States. Moreover, their presence threatens the native fish and fisheries of the Great Lakes, which are important to local economies and recreational anglers. Asian carp have even injured humans. The fish, frightened by the sound of approaching motorboats, thrust themselves into the air, often landing in the boat or directly hitting the boaters. The Great Lakes and their prized salmon and lake trout fisheries are also being threatened by these invasive fish. Asian carp have already colonized rivers and canals that lead into Lake Michigan. One infested waterway of particular importance is the Chicago Sanitary and Ship Channel, the major supply waterway linking the Great Lakes to the Mississippi River. To prevent the Asian carp from leaving the canal, a series of electric barriers has been used successfully to discourage their migration; however, the threat is significant enough that several states and Canada have sued to have the Chicago channel permanently cut off from Lake Michigan. Local and national politicians have weighed in on how to solve the problem, but no one knows whether the Asian carp will ultimately be considered a nuisance, like other invasive species such as the water hyacinth and zebra mussel, or whether it will be the destroyer of the largest freshwater fishery in the world. The issues associated with Asian carp show how population and community ecology, fisheries management, and politics intersect on issues of vital importance to the human food supply and economy. Socio-political issues like this make extensive use of the sciences of population ecology (the study of members of a particular species occupying a particular area known as a habitat) and community ecology (the study of the interaction of all species within a habitat). Community Dynamics Community dynamics are the changes in community structure and composition over time. Sometimes these changes are induced by environmental disturbances such as volcanoes, earthquakes, storms, fires, and climate change. Communities with a stable structure are said to be at equilibrium. Following a disturbance, the community may or may not return to the equilibrium state. Succession describes the sequential appearance and disappearance of species in a community over time. In primary succession , newly exposed or newly formed land is colonized by living things; in secondary succession , part of an ecosystem is disturbed and remnants of the previous community remain. Primary Succession and Pioneer Species Primary succession occurs when new land is formed or rock is exposed: for example, following the eruption of volcanoes, such as those on the Big Island of Hawaii. As lava flows into the ocean, new land is continually being formed. On the Big Island, approximately 32 acres of land are added each year. First, weathering and other natural forces break down the substrate enough for the establishment of certain hardy plants and lichens with few soil requirements, known as pioneer species ( Figure 45.32 ). These species help to further break down the mineral-rich lava into soil where other, less hardy species will grow and eventually replace the pioneer species. In addition, as these early species grow and die, they add to an ever-growing layer of decomposing organic material and contribute to soil formation. Over time the area will reach an equilibrium state, with a set of organisms quite different from the pioneer species. Secondary Succession A classic example of secondary succession occurs in oak and hickory forests cleared by wildfire ( Figure 45.33 ). Wildfires will burn most vegetation and kill those animals unable to flee the area. Their nutrients, however, are returned to the ground in the form of ash. Thus, even when areas are devoid of life due to severe fires, the area will soon be ready for new life to take hold. Before the fire, the vegetation was dominated by tall trees with access to the major plant energy resource: sunlight. Their height gave them access to sunlight while also shading the ground and other low-lying species. After the fire, though, these trees are no longer dominant.
Thus, the first plants to grow back are usually annual plants, followed within a few years by quickly growing and spreading grasses and other pioneer species. Due, at least in part, to changes in the environment brought on by the growth of the grasses and other species, shrubs will emerge over many years, along with small pine, oak, and hickory trees. These organisms are called intermediate species. Eventually, over 150 years, the forest will reach its equilibrium point where species composition is no longer changing and resembles the community before the fire. This equilibrium state is referred to as the climax community , which will remain stable until the next disturbance. 45.7 Behavioral Biology: Proximate and Ultimate Causes of Behavior Learning Objectives By the end of this section, you will be able to: Compare innate and learned behavior Discuss how movement and migration behaviors are a result of natural selection Discuss the different ways members of a population communicate with each other Give examples of how species use energy for mating displays and other courtship behaviors Differentiate between various mating systems Describe different ways that species learn Behavior is the change in activity of an organism in response to a stimulus. Behavioral biology is the study of the biological and evolutionary bases for such changes. The idea that behaviors evolved as a result of the pressures of natural selection is not new. Animal behavior has been studied for decades by biologists in the science of ethology , by psychologists in the science of comparative psychology, and by scientists of many disciplines in the study of neurobiology. Although there is overlap between these disciplines, scientists in these behavioral fields take different approaches. Comparative psychology is an extension of work done in human and behavioral psychology. Ethology is an extension of genetics, evolution, anatomy, physiology, and other biological disciplines. Still, one cannot study behavioral biology without touching on both comparative psychology and ethology. One goal of behavioral biology is to dissect out the innate behaviors , which have a strong genetic component and are largely independent of environmental influences, from the learned behaviors , which result from environmental conditioning. Innate behavior, or instinct, is important because there is no risk of an incorrect behavior being learned. Such behaviors are “hard wired” into the system. On the other hand, learned behaviors, although riskier, are flexible, dynamic, and can be altered according to changes in the environment. Innate Behaviors: Movement and Migration Innate or instinctual behaviors rely on response to stimuli. The simplest example of this is a reflex action , an involuntary and rapid response to a stimulus. To test the “knee-jerk” reflex, a doctor taps the patellar tendon below the kneecap with a rubber hammer. The stimulation of the nerves there leads to the reflex of extending the leg at the knee. This is similar to the reaction of someone who touches a hot stove and instinctually pulls his or her hand away. Even humans, with our great capacity to learn, still exhibit a variety of innate behaviors. Kinesis and Taxis Another activity or movement of innate behavior is kinesis , or the undirected movement in response to a stimulus. Orthokinesis is the increased or decreased speed of movement of an organism in response to a stimulus. Woodlice, for example, increase their speed of movement when exposed to high or low temperatures.
This movement, although random, increases the probability that the animal spends less time in the unfavorable environment. Another example is klinokinesis, an increase in turning behaviors. It is exhibited by bacteria such as E. coli which, in association with orthokinesis, helps the organisms randomly find a more hospitable environment. A similar, but more directed version of kinesis is taxis : the directed movement towards or away from a stimulus. This movement can be in response to light (phototaxis), chemical signals (chemotaxis), or gravity (geotaxis) and can be directed toward (positive) or away (negative) from the source of the stimulus. An example of positive chemotaxis is exhibited by the unicellular protozoan Tetrahymena thermophila . This organism swims using its cilia, at times moving in a straight line, and at other times making turns. The attracting chemotactic agent alters the frequency of turning as the organism moves directly toward the source, following the increasing concentration gradient. Fixed Action Patterns A fixed action pattern is a series of movements elicited by a stimulus such that even when the stimulus is removed, the pattern goes on to completion. An example of such a behavior occurs in the three-spined stickleback, a small freshwater fish ( Figure 45.34 ). Males of this species develop a red belly during breeding season and show instinctual aggressiveness to other males during this time. In laboratory experiments, researchers exposed such fish to objects that in no way resembled a fish in shape but were painted red on their lower halves. The male sticklebacks responded aggressively to the objects just as if they were real male sticklebacks. Migration Migration is the long-range seasonal movement of animals. It is an evolved, adapted response to variation in resource availability, and it is a common phenomenon found in all major groups of animals. Birds fly south for the winter to get to warmer climates with sufficient food, and salmon migrate to their spawning grounds. The popular 2005 documentary March of the Penguins followed the 62-mile migration of emperor penguins through Antarctica to bring food back to their breeding site and to their young. Wildebeests ( Figure 45.35 ) migrate over 1800 miles each year in search of new grasslands. Although migration is thought of as innate behavior, only some migrating species always migrate (obligate migration). Animals that exhibit facultative migration can choose to migrate or not. Additionally, in some animals, only a portion of the population migrates, whereas the rest does not migrate (incomplete migration). For example, owls that live in the tundra may migrate in years when their food source, small rodents, is relatively scarce, but not migrate during the years when rodents are plentiful. Foraging Foraging is the act of searching for and exploiting food resources. Feeding behaviors that maximize energy gain and minimize energy expenditure are called optimal foraging behaviors, and these are favored by natural selection. The painted stork, for example, uses its long beak to search the bottom of a freshwater marshland for crabs and other food ( Figure 45.36 ).
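The trade-off behind optimal foraging can be stated as a simple rate comparison: an efficient forager favors the option with the highest net energy gained per unit of time spent. The Python sketch below uses entirely hypothetical patch values to make the comparison concrete.

```python
# A minimal optimal-foraging comparison: choose the food patch with the
# best energy return per unit handling time. All values are hypothetical.

patches = {
    "patch A": {"energy": 50.0, "handling_time": 10.0},
    "patch B": {"energy": 30.0, "handling_time": 4.0},
    "patch C": {"energy": 80.0, "handling_time": 20.0},
}

def rate(patch):
    return patch["energy"] / patch["handling_time"]  # energy per unit time

for name, patch in patches.items():
    print(f"{name}: {rate(patch):.1f} energy per unit time")

best = max(patches, key=lambda name: rate(patches[name]))
print(f"optimal choice: {best}")
```

Note that the patch with the largest absolute reward (patch C) is not the best choice once handling time is counted, which is the core insight of optimal foraging theory.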
Innate Behaviors: Living in Groups Not all animals live in groups, but even those that live relatively solitary lives, with the exception of those that can reproduce asexually, must mate. Mating usually involves one animal signaling another so as to communicate the desire to mate. There are several types of energy-intensive behaviors or displays associated with mating, called mating rituals. Other behaviors found in populations that live in groups are described in terms of which animal benefits from the behavior. In selfish behavior, only the animal in question benefits; in altruistic behavior, one animal’s actions benefit another animal; cooperative behavior describes when both animals benefit. All of these behaviors involve some sort of communication between population members. Communication within a Species Animals communicate with each other using stimuli known as signals . An example of this is seen in the three-spined stickleback, where the visual signal of a red region in the lower half of a fish signals males to become aggressive and signals females to mate. Other signals are chemical (pheromones), aural (sound), visual (courtship and aggressive displays), or tactile (touch). These types of communication may be instinctual or learned or a combination of both. These are not the same as the communication we associate with language, which has been observed only in humans and perhaps in some species of primates and cetaceans. A pheromone is a secreted chemical signal used to obtain a response from another individual of the same species. The purpose of pheromones is to elicit a specific behavior from the receiving individual. Pheromones are especially common among social insects, but they are used by many species to attract the opposite sex, to sound alarms, to mark food trails, and to elicit other, more complex behaviors. Even humans are thought to respond to certain pheromones called axillary steroids. These chemicals influence human perception of other people, and in one study were responsible for a group of women synchronizing their menstrual cycles. The role of pheromones in human-to-human communication is still somewhat controversial and continues to be researched. Songs are an example of an aural signal, one that needs to be heard by the recipient. Perhaps the best known of these are the songs of birds, which identify the species and are used to attract mates. Other well-known songs are those of whales, which are of such low frequency that they can travel long distances underwater. Dolphins communicate with each other using a wide variety of vocalizations. Male crickets make chirping sounds using a specialized organ to attract a mate, repel other males, and announce a successful mating. Courtship displays are a series of ritualized visual behaviors (signals) designed to attract and convince a member of the opposite sex to mate. These displays are ubiquitous in the animal kingdom. Often these displays involve a series of steps, including an initial display by one member followed by a response from the other. If, at any point, the display is performed incorrectly or a proper response is not given, the mating ritual is abandoned and the mating attempt will be unsuccessful. The mating display of the common stork is shown in Figure 45.37 . Aggressive displays are also common in the animal kingdom. A dog baring its teeth when it wants another dog to back down is an example. Presumably, these displays communicate not only the willingness of the animal to fight, but also its fighting ability.
Although these displays do signal aggression on the part of the sender, it is thought that these displays are really a mechanism to reduce the amount of actual fighting that occurs between members of the same species: they allow individuals to assess the fighting ability of their opponent and thus decide whether it is “worth the fight.” The testing of certain hypotheses using game theory has led to the conclusion that some of these displays may overstate an animal’s actual fighting ability and are used to “bluff” the opponent. This type of interaction, even if “dishonest,” would be favored by natural selection if it is successful more often than not.
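Game-theoretic reasoning about contests of this kind is usually introduced with the hawk-dove game, a standard model from evolutionary game theory that this chapter does not develop. The Python sketch below uses illustrative payoff values: when the cost of an escalated fight exceeds the value of the contested resource, always fighting is a poor strategy, which is exactly the situation in which displays and bluffs can pay off.

```python
# A minimal hawk-dove payoff table from evolutionary game theory.
# V is the value of the contested resource, C the cost of an escalated
# fight; both numbers are illustrative.

V, C = 4.0, 10.0  # fighting costs more than the resource is worth

def payoff(me, other):
    if me == "hawk" and other == "hawk":
        return (V - C) / 2   # escalated fight: split the value, pay the cost
    if me == "hawk" and other == "dove":
        return V             # the dove retreats; the hawk takes everything
    if me == "dove" and other == "hawk":
        return 0.0           # retreat: nothing gained, nothing lost
    return V / 2             # two doves share the resource by display alone

for me in ("hawk", "dove"):
    for other in ("hawk", "dove"):
        print(f"{me:4s} vs {other:4s}: payoff {payoff(me, other):+5.1f}")
```

With C greater than V, the hawk-versus-hawk payoff is negative, so a population of pure fighters is unstable and ritualized displays can persist.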
Distraction displays are seen in birds and some fish. They are designed to attract a predator away from the nest that contains their young. This is an example of an altruistic behavior: it benefits the young more than the individual performing the display, which is putting itself at risk by doing so. Many animals, especially primates, communicate with other members in the group through touch. Activities such as grooming, touching the shoulder or root of the tail, embracing, lip contact, and greeting ceremonies have all been observed in the Indian langur, an Old World monkey. Similar behaviors are found in other primates, especially in the great apes. Link to Learning The killdeer bird distracts predators from its eggs by faking a broken wing display in this video taken in Boise, Idaho. Click to view content Altruistic Behaviors Behaviors that lower the fitness of the individual but increase the fitness of another individual are termed altruistic. Examples of such behaviors are seen widely across the animal kingdom. Social insects such as worker bees have no ability to reproduce, yet they maintain the queen so she can populate the hive with her offspring. Meerkats keep a sentry standing guard to warn the rest of the colony about intruders, even though the sentry is putting itself at risk. Wolves and wild dogs bring meat to pack members not present during a hunt. Lemurs take care of infants unrelated to them. Although, on the surface, these behaviors appear to be altruistic, the situation may not be so simple. There has been much discussion over why altruistic behaviors exist. Do these behaviors lead to overall evolutionary advantages for their species? Do they help the altruistic individual pass on its own genes? And what about such activities between unrelated individuals? One explanation for altruistic-type behaviors is found in the genetics of natural selection. In his 1976 book The Selfish Gene , scientist Richard Dawkins attempted to explain many seemingly altruistic behaviors from the viewpoint of the gene itself. Although a gene obviously cannot be selfish in the human sense, it may appear that way if the sacrifice of an individual benefits related individuals that share genes that are identical by descent (present in relatives because of common lineage). Mammal parents make this sacrifice to take care of their offspring. Emperor penguins migrate miles in harsh conditions to bring food back for their young. Selfish gene theory has been controversial over the years and is still discussed among scientists in related fields. Even less-related individuals, those with less genetic identity than that shared by parent and offspring, benefit from seemingly altruistic behavior. The activities of social insects such as bees, wasps, ants, and termites are good examples. Sterile workers in these societies take care of the queen because they are closely related to her, and as the queen has offspring, she is passing on genes from the workers indirectly. Thus, it is of fitness benefit for the worker to maintain the queen without having any direct chance of passing on its genes due to its sterility. The lowering of individual fitness to enhance the reproductive fitness of a relative, and thus one’s inclusive fitness, evolves through kin selection . This phenomenon can explain many superficially altruistic behaviors seen in animals. However, these behaviors may not be truly defined as altruism in these cases because the actor is actually increasing its own fitness either directly (through its own offspring) or indirectly (through the inclusive fitness it gains through relatives that share genes with it).
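Kin selection is commonly summarized by Hamilton's rule, a standard result that this chapter does not state explicitly: an apparently altruistic act is favored when r x B > C, where r is the genetic relatedness between actor and recipient, B is the fitness benefit to the recipient, and C is the fitness cost to the actor. The Python sketch below checks the inequality for a few illustrative values.

```python
# Hamilton's rule (a standard formalization of kin selection): an
# altruistic act is favored by selection when r * B > C. The benefit
# and cost values below are illustrative.

def favored(r, benefit, cost):
    """Return True if kin selection favors the altruistic act."""
    return r * benefit > cost

print(favored(0.50, 3.0, 1.0))   # toward a full sibling (r = 0.5): True
print(favored(0.25, 3.0, 1.0))   # toward a half sibling (r = 0.25): False
print(favored(0.125, 3.0, 1.0))  # toward a first cousin (r = 0.125): False
```

The same act can thus be favored toward a close relative and disfavored toward a distant one, which matches the observation that such "sacrifices" cluster among kin.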
Unrelated individuals may also act altruistically to each other, and this seems to defy the “selfish gene” explanation. An example of this is observed in many monkey species, where a monkey will present its back to an unrelated monkey to have that individual pick the parasites from its fur. After a certain amount of time, the roles are reversed and the first monkey now grooms the second monkey. Thus, there is reciprocity in the behavior. Both benefit from the interaction, and their fitness is raised more than it would be if neither cooperated or if only one of them did. This behavior is still not necessarily altruism, as the “giving” behavior of the actor is based on the expectation that it will be the “receiver” of the behavior in the future, termed reciprocal altruism. Reciprocal altruism requires that individuals repeatedly encounter each other, often the result of living in the same social group, and that cheaters (those that never “give back”) are punished. Evolutionary game theory, a modification of classical game theory in mathematics, has shown that many of these so-called “altruistic behaviors” are not altruistic at all. The definition of “pure” altruism, based on human behavior, is an action that benefits another without any direct benefit to oneself. Most of the behaviors previously described do not seem to satisfy this definition, and game theorists are good at finding “selfish” components in them. Others have argued that the terms “selfish” and “altruistic” should be dropped completely when discussing animal behavior, as they describe human behavior and may not be directly applicable to instinctual animal activity. What is clear, though, is that heritable behaviors that improve the chances of passing on one’s genes or a portion of one’s genes are favored by natural selection and will be retained in future generations as long as those behaviors convey a fitness advantage. These instinctual behaviors may then be applied, in special circumstances, to other species, as long as doing so does not lower the animal’s fitness. Finding Sex Partners Not all animals reproduce sexually, but many that do have the same challenge: they need to find a suitable mate and often have to compete with other individuals to obtain one. Significant energy is spent in the process of locating, attracting, and mating with the sex partner. Two types of selection occur during this process and can lead to traits that are important to reproduction, called secondary sexual characteristics: intersexual selection , the choosing of a mate where individuals of one sex choose mates of the other sex, and intrasexual selection , the competition for mates between members of the same sex. Intersexual selection is often complex because choosing a mate may be based on a variety of visual, aural, tactile, and chemical cues. An example of intersexual selection is when peahens choose to mate with the male with the brightest plumage. This type of selection often leads to traits in the chosen sex that do not enhance survival, but are those traits most attractive to the opposite sex (often at the expense of survival). Intrasexual selection involves mating displays and aggressive mating rituals such as rams butting heads; the winner of these battles is the one that is able to mate. Many of these rituals use up considerable energy but result in the selection of the healthiest, strongest, and/or most dominant individuals for mating. Three general mating systems, all involving innate as opposed to learned behaviors, are seen in animal populations: monogamous, polygynous, and polyandrous. Link to Learning Visit this website for informative videos on sexual selection. In monogamous systems, one male and one female are paired for at least one breeding season. In some animals, such as the gray wolf, these associations can last much longer, even a lifetime. Several explanations have been proposed for this type of mating system. The “mate-guarding hypothesis” states that males stay with the female to prevent other males from mating with her. This behavior is advantageous in such situations where mates are scarce and difficult to find. Another explanation is the “male-assistance hypothesis,” where males that remain with a female to help guard and rear their young will have more and healthier offspring. Monogamy is observed in many bird populations where, in addition to the parental care from the female, the male is also a major provider of parental care for the chicks. A third explanation for the evolutionary advantages of monogamy is the “female-enforcement hypothesis.” In this scenario, the female ensures that the male does not have other offspring that might compete with her own, so she actively interferes with the male’s signaling to attract other mates. Polygynous mating refers to one male mating with multiple females. In these situations, the female must be responsible for most of the parental care, as the single male is not capable of providing care to that many offspring. In resource-based polygyny, males compete for territories with the best resources, and then mate with females that enter the territory, drawn to its resource richness. The female benefits by mating with a dominant, genetically fit male; however, it is at the cost of having no male help in caring for the offspring. An example is seen in the yellow-rumped honeyguide, a bird whose males defend beehives because the females feed on their wax. As the females approach, the male defending the nest will mate with them. Harem mating structures are a type of polygynous system where certain males dominate mating while controlling a territory with resources. Elephant seals, where the alpha male dominates the mating within the group, are an example. A third type of polygyny is a lek system.
Here there is a communal courting area where several males perform elaborate displays for females, and the females choose their mate from this group. This behavior is observed in several bird species including the sage grouse and the prairie chicken. In polyandrous mating systems, one female mates with many males. These types of systems are much rarer than monogamous and polygynous mating systems. In pipefishes and seahorses, males receive the eggs from the female, fertilize them, protect them within a pouch, and give birth to the offspring ( Figure 45.38 ). Therefore, the female is able to provide eggs to several males without the burden of carrying the fertilized eggs. Simple Learned Behaviors The majority of the behaviors previously discussed were innate or at least have an innate component (variations on the innate behaviors may be learned). They are inherited, and the behaviors do not change in response to signals from the environment. Conversely, learned behaviors, even though they may have instinctive components, allow an organism to adapt to changes in the environment and are modified by previous experiences. Simple learned behaviors include habituation and imprinting; both are important to the maturation process of young animals. Habituation Habituation is a simple form of learning in which an animal stops responding to a stimulus after a period of repeated exposure. This is a form of non-associative learning, as the stimulus is not associated with any punishment or reward. Prairie dogs typically sound an alarm call when threatened by a predator, but they become habituated to the sound of human footsteps when no harm is associated with this sound; therefore, they no longer respond to footsteps with an alarm call. In this example, habituation is specific to the sound of human footsteps, as the animals still respond to the sounds of potential predators. Imprinting Imprinting is a type of learning that occurs at a particular age or life stage that is rapid and independent of the species involved. Hatchling ducks recognize the first adult they see, their mother, and make a bond with her. A familiar sight is ducklings walking or swimming after their mothers ( Figure 45.39 ). This is another type of non-associative learning, but it is very important in the maturation process of these animals as it encourages them to stay near their mother so they will be protected, greatly increasing their chances of survival. However, if newborn ducks see a human before they see their mother, they will imprint on the human and follow it in just the same manner as they would follow their real mother. Link to Learning The International Crane Foundation has helped raise the world’s population of whooping cranes from 21 individuals to about 600. Imprinting hatchlings has been a key to success: biologists wear full crane costumes so the birds never “see” humans. Watch this video to learn more. Click to view content Conditioned Behavior Conditioned behaviors are types of associative learning, where a stimulus becomes associated with a consequence. During operant conditioning, the behavioral response is modified by its consequences, with regard to its form, strength, or frequency. Classical Conditioning In classical conditioning , a response called the conditioned response is associated with a stimulus that it had previously not been associated with, the conditioned stimulus. The response to the original, unconditioned stimulus is called the unconditioned response.
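The gradual build-up of an association over repeated pairings can be captured in a toy model. The Python sketch below follows the form of the Rescorla-Wagner learning rule, a standard model of conditioning that this chapter does not present; the learning rate and asymptote are illustrative.

```python
# A toy model of classical conditioning: the associative strength V of
# a neutral stimulus grows with each pairing, following a
# Rescorla-Wagner-style update. Parameters are illustrative.

alpha = 0.3  # learning rate per pairing
lam = 1.0    # maximum associative strength the pairing can support
V = 0.0      # strength of the association before any training

for trial in range(1, 11):
    V += alpha * (lam - V)  # each pairing closes part of the remaining gap
    print(f"pairing {trial:2d}: associative strength = {V:.3f}")
```

Each pairing produces a smaller gain than the last, matching the familiar negatively accelerated learning curve.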
The most cited example of classical conditioning is Ivan Pavlov’s experiments with dogs ( Figure 45.40 ). In Pavlov’s experiments, the unconditioned response was the salivation of dogs in response to the unconditioned stimulus of seeing or smelling their food. The stimulus that researchers paired with this unconditioned stimulus was the ringing of a bell. During conditioning, every time the animal was given food, the bell was rung. This was repeated during several trials. After some time, the dog learned to associate the ringing of the bell with food and to respond by salivating. After the conditioning period was finished, the dog would respond by salivating when the bell was rung, even when the unconditioned stimulus, the food, was absent. Thus, the ringing of the bell became the conditioned stimulus and the salivation became the conditioned response. Although it is thought by some scientists that the unconditioned and conditioned responses are identical, even Pavlov discovered that the saliva in the conditioned dogs had characteristic differences when compared to that of unconditioned dogs. It had been thought by some scientists that this type of conditioning required multiple exposures to the paired stimulus and response, but it is now known that this is not necessary in all cases, and that some conditioning can be learned in a single pairing experiment. Classical conditioning is a major tenet of behaviorism, a branch of psychological philosophy that proposes that all actions, thoughts, and emotions of living things are behaviors that can be treated by behavior modification and changes in the environment. Operant Conditioning In operant conditioning , the conditioned behavior is gradually modified by its consequences as the animal responds to the stimulus. A major proponent of such conditioning was psychologist B.F. Skinner, the inventor of the Skinner box. Skinner put rats in his boxes, which contained a lever that would dispense food to the rat when depressed. While initially the rat would push the lever a few times by accident, it eventually associated pushing the lever with getting the food. This type of learning is an example of operant conditioning. Operant learning is the basis of most animal training. The conditioned behavior is continually modified by its consequences: positive reinforcement, often a reward such as food, or some type of punishment. In this way, the animal is conditioned to associate a type of behavior with the punishment or reward, and, over time, can be induced to perform behaviors that it would not have done in the wild, such as the “tricks” dolphins perform at marine amusement park shows ( Figure 45.41 ).
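The same reinforcement loop can be written as a few lines of code. The Python sketch below is a hypothetical Skinner-box simulation, not a reproduction of Skinner's procedure: an accidental lever press is rewarded, and each reward strengthens the tendency to press again.

```python
# A toy simulation of operant conditioning in a Skinner box: rewarded
# lever presses raise the probability of pressing again. The initial
# probability and learning rate are illustrative.

import random

random.seed(1)
p_press = 0.05  # chance of an accidental press before any training

for trial in range(1, 201):
    if random.random() < p_press:         # the rat happens to press
        p_press += 0.1 * (1.0 - p_press)  # the food reward reinforces it
    if trial % 50 == 0:
        print(f"trial {trial:3d}: press probability = {p_press:.2f}")
```

Early trials change little because presses are rare; once a few presses have been reinforced, the probability climbs quickly, mirroring the accidental-press-then-association sequence described above.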
Cognitive Learning Classical and operant conditioning are inefficient ways for humans and other intelligent animals to learn. Some primates, including humans, are able to learn by imitating the behavior of others and by taking instructions. The development of complex language by humans has made cognitive learning , the manipulation of information using the mind, the most prominent method of human learning. In fact, that is how students are learning right now by reading this book. As students read, they can make mental images of objects or organisms and imagine changes to them, or behaviors by them, and anticipate the consequences. In addition to visual processing, cognitive learning is also enhanced by remembering past experiences, touching physical objects, hearing sounds, tasting food, and a variety of other sensory-based inputs. Cognitive learning is so powerful that it can be used to understand conditioning in detail. In the reverse scenario, conditioning cannot help someone learn about cognition. Classic work on cognitive learning was done by Wolfgang Köhler with chimpanzees. He demonstrated that these animals were capable of abstract thought by showing that they could learn how to solve a puzzle. When a banana was hung in their cage too high for them to reach, and several boxes were placed randomly on the floor, some of the chimps were able to stack the boxes one on top of the other, climb on top of them, and get the banana. This implies that they could visualize the result of stacking the boxes even before they had performed the action. This type of learning is much more powerful and versatile than conditioning. Cognitive learning is not limited to primates, although they are the most efficient in using it. Maze-running experiments done with rats by H.C. Blodgett in the 1920s were the first to show cognitive skills in a simple mammal. The motivation for the animals to work their way through the maze was a piece of food at its end. In these studies, the animals in Group I were run in one trial per day and had food available to them each day on completion of the run ( Figure 45.42 ). Group II rats were not fed in the maze for the first six days, and then subsequent runs were done with food for several days after. Group III rats had food available on the third day and every day thereafter. The results were that the control rats, Group I, learned quickly and figured out how to run the maze in seven days. Group III did not learn much during the three days without food, but rapidly caught up to the control group when given the food reward. Group II learned very slowly for the six days with no reward to motivate them, and they did not begin to catch up to the control group until the day food was given; it then took them two days longer than the control group to learn the maze. It may not be immediately obvious that this type of learning is different from conditioning. Although one might be tempted to believe that the rats simply learned how to find their way through a conditioned series of right and left turns, E.C. Tolman proved a decade later that the rats were making a representation of the maze in their minds, which he called a “cognitive map.” This was an early demonstration of the power of cognitive learning and how these abilities were not just limited to humans. Sociobiology Sociobiology is an interdisciplinary science originally popularized by social insect researcher E.O. Wilson in the 1970s. Wilson defined the science as “the extension of population biology and evolutionary theory to social organization.” 9 The main thrust of sociobiology is that animal and human behavior, including aggressiveness and other social interactions, can be explained almost solely in terms of genetics and natural selection. This science is controversial; noted scientists such as the late Stephen Jay Gould have criticized the approach for ignoring the environmental effects on behavior. This is another example of the “nature versus nurture” debate of the role of genetics versus the role of environment in determining an organism’s characteristics. 9 Edward O. Wilson. On Human Nature (1978; repr., Cambridge: Harvard University Press, 2004), xx. Sociobiology also links genes with behaviors and has been associated with “biological determinism,” the belief that all behaviors are hardwired into our genes.
No one disputes that certain behaviors can be inherited and that natural selection plays a role in retaining them. It is the application of such principles to human behavior that sparks this controversy, which remains active today.
biology
Chapter Outline 10.1 Cell Division 10.2 The Cell Cycle 10.3 Control of the Cell Cycle 10.4 Cancer and the Cell Cycle 10.5 Prokaryotic Cell Division Introduction A human, as well as every sexually reproducing organism, begins life as a fertilized egg, or zygote. Trillions of cell divisions subsequently occur in a controlled manner to produce a complex, multicellular human. In other words, that original single cell is the ancestor of every other cell in the body. Once a being is fully grown, cell reproduction is still necessary to repair or regenerate tissues. For example, new blood and skin cells are constantly being produced. All multicellular organisms use cell division for growth and the maintenance and repair of cells and tissues. Cell division is tightly regulated, and the occasional failure of regulation can have life-threatening consequences. Single-celled organisms use cell division as their method of reproduction.
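The phrase "trillions of cell divisions" is easier to appreciate with a quick doubling calculation. The Python sketch below is illustrative arithmetic only: real development is not a perfectly synchronized series of doublings, and the ~37 trillion figure is a commonly cited rough estimate of the number of cells in an adult human.

```python
# Back-of-the-envelope arithmetic: how many rounds of synchronized
# doubling would it take to go from 1 cell to trillions? (Development
# is not actually synchronized doubling; this is illustrative only.)

TARGET = 37_000_000_000_000  # a commonly cited rough estimate of adult cells

cells, divisions = 1, 0
while cells < TARGET:
    cells *= 2       # one round of division doubles the cell count
    divisions += 1

print(f"{divisions} doubling rounds yield {cells:,} cells")
```

Only 46 rounds of doubling are needed, which underscores how explosive unregulated division can be and why the cell-cycle controls discussed in this chapter matter.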
[ { "answer": { "ans_choice": 2, "ans_text": "twice" }, "bloom": null, "hl_context": "Before discussing the steps a cell must undertake to replicate , a deeper understanding of the structure and function of a cell ’ s genetic information is necessary . A cell ’ s DNA , packaged as a double-stranded DNA molecule , is called its genome . In prokaryotes , the genome is composed of a single , double-stranded DNA molecule in the form of a loop or circle ( Figure 10.2 ) . The region in the cell containing this genetic material is called a nucleoid . Some prokaryotes also have smaller loops of DNA called plasmids that are not essential for normal growth . Bacteria can exchange these plasmids with other bacteria , sometimes receiving beneficial new genes that the recipient can add to their chromosomal DNA . Antibiotic resistance is one trait that often spreads through a bacterial colony through plasmid exchange . In eukaryotes , the genome consists of several double-stranded linear DNA molecules ( Figure 10.3 ) . Each species of eukaryotes has a characteristic number of chromosomes in the nuclei of its cells . <hl> Human body cells have 46 chromosomes , while human gametes ( sperm or eggs ) have 23 chromosomes each . <hl> <hl> A typical body cell , or somatic cell , contains two matched sets of chromosomes , a configuration known as diploid . <hl> <hl> The letter n is used to represent a single set of chromosomes ; therefore , a diploid organism is designated 2 n . <hl> <hl> Human cells that contain one set of chromosomes are called gametes , or sex cells ; these are eggs and sperm , and are designated 1n , or haploid . <hl> Matched pairs of chromosomes in a diploid organism are called homologous ( “ same knowledge ” ) chromosomes . Homologous chromosomes are the same length and have specific nucleotide segments called genes in exactly the same location , or locus . Genes , the functional units of chromosomes , determine specific characteristics by coding for specific proteins . Traits are the variations of those characteristics . For example , hair color is a characteristic with traits that are blonde , brown , or black .", "hl_sentences": "Human body cells have 46 chromosomes , while human gametes ( sperm or eggs ) have 23 chromosomes each . A typical body cell , or somatic cell , contains two matched sets of chromosomes , a configuration known as diploid . The letter n is used to represent a single set of chromosomes ; therefore , a diploid organism is designated 2 n . Human cells that contain one set of chromosomes are called gametes , or sex cells ; these are eggs and sperm , and are designated 1n , or haploid .", "question": { "cloze_format": "A diploid cell has ________ the number of chromosomes as a haploid cell.", "normal_format": "A diploid cell has how many times the number of chromosomes as a haploid cell?", "question_choices": [ "one-fourth", "half", "twice", "four times" ], "question_id": "fs-id1414909", "question_text": "A diploid cell has _______ the number of chromosomes as a haploid cell." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "genes." }, "bloom": null, "hl_context": "<hl> Each copy of a homologous pair of chromosomes originates from a different parent ; therefore , the genes themselves are not identical . <hl> <hl> The variation of individuals within a species is due to the specific combination of the genes inherited from both parents . <hl> <hl> Even a slightly altered sequence of nucleotides within a gene can result in an alternative trait .
Review Questions

1. An organism’s traits are determined by the specific combination of inherited _____. (a) cells (b) genes (c) proteins (d) chromatids. Answer: (b) genes.
2. The first level of DNA organization in a eukaryotic cell is maintained by which molecule? (a) cohesin (b) condensin (c) chromatin (d) histone. Answer: (d) histone.
3. Identical copies of chromatin held together by cohesin at the centromere are called _____. (a) histones (b) nucleosomes (c) chromatin (d) sister chromatids. Answer: (d) sister chromatids.
4. Chromosomes are duplicated during what stage of the cell cycle? (a) G1 phase (b) S phase (c) prophase (d) prometaphase. Answer: (b) S phase.
5. Which of the following events does not occur during some stages of interphase? (a) DNA duplication (b) organelle duplication (c) increase in cell size (d) separation of sister chromatids. Answer: (d) separation of sister chromatids.
6. The mitotic spindles arise from which cell structure? (a) centromere (b) centrosome (c) kinetochore (d) cleavage furrow. Answer: (b) centrosome.
7. Attachment of the mitotic spindle fibers to the kinetochores is a characteristic of which stage of mitosis? (a) prophase (b) prometaphase (c) metaphase (d) anaphase. Answer: (b) prometaphase.
8. Unpacking of chromosomes and the formation of a new nuclear envelope is a characteristic of which stage of mitosis? (a) prometaphase (b) metaphase (c) anaphase (d) telophase. Answer: (d) telophase.
9. Separation of the sister chromatids is a characteristic of which stage of mitosis? (a) prometaphase (b) metaphase (c) anaphase (d) telophase. Answer: (c) anaphase.
10. The chromosomes become visible under a light microscope during which stage of mitosis? (a) prophase (b) prometaphase (c) metaphase (d) anaphase. Answer: (a) prophase.
11. The fusing of Golgi vesicles at the metaphase plate of dividing plant cells forms what structure? (a) cell plate (b) actin ring (c) cleavage furrow (d) mitotic spindle. Answer: (a) cell plate.
12. At which of the cell cycle checkpoints do external forces have the greatest influence? (a) G1 checkpoint (b) G2 checkpoint (c) M checkpoint (d) G0 checkpoint. Answer: (a) G1 checkpoint.
13. What is the main prerequisite for clearance at the G2 checkpoint? (a) the cell has reached a sufficient size (b) an adequate stockpile of nucleotides (c) accurate and complete DNA replication (d) proper attachment of mitotic spindle fibers to kinetochores. Answer: (c) accurate and complete DNA replication.
14. If the M checkpoint is not cleared, what stage of mitosis will be blocked? (a) prophase (b) prometaphase (c) metaphase (d) anaphase. Answer: (d) anaphase.
15. Which protein is a positive regulator that phosphorylates other proteins when activated? (a) p53 (b) retinoblastoma protein (Rb) (c) cyclin (d) cyclin-dependent kinase (Cdk). Answer: (d) cyclin-dependent kinase (Cdk).
16. Many of the negative regulator proteins of the cell cycle were discovered in what type of cells? (a) gametes (b) cells in G0 (c) cancer cells (d) stem cells. Answer: (c) cancer cells.
17. Which negative regulatory molecule can trigger cell suicide (apoptosis) if vital cell cycle events do not occur? (a) p53 (b) p21 (c) retinoblastoma protein (Rb) (d) cyclin-dependent kinase (Cdk). Answer: (a) p53.
18. ___________ are changes to the order of nucleotides in a segment of DNA that codes for a protein. (a) Proto-oncogenes (b) Tumor suppressor genes (c) Gene mutations (d) Negative regulators. Answer: (c) Gene mutations.
19. A gene that codes for a positive cell cycle regulator is called a(n) _____. (a) kinase inhibitor (b) tumor suppressor gene (c) proto-oncogene (d) oncogene. Answer: (c) proto-oncogene.
20. A mutated gene that codes for an altered version of Cdk that is active in the absence of cyclin is a(n) _____. (a) kinase inhibitor (b) tumor suppressor gene (c) proto-oncogene (d) oncogene. Answer: (d) oncogene.
21. Which molecule is a Cdk inhibitor that is controlled by p53? (a) cyclin (b) anti-kinase (c) Rb (d) p21. Answer: (d) p21.
22. Which eukaryotic cell cycle event is missing in binary fission? (a) cell growth (b) DNA duplication (c) karyokinesis (d) cytokinesis. Answer: (c) karyokinesis.
23. FtsZ proteins direct the formation of a _______ that will eventually form the new cell walls of the daughter cells. (a) contractile ring (b) cell plate (c) cytoskeleton (d) septum. Answer: (d) septum.
Chapter 10: Cell Reproduction
10.1 Cell Division

Learning Objectives
By the end of this section, you will be able to:
- Describe the structure of prokaryotic and eukaryotic genomes
- Distinguish between chromosomes, genes, and traits
- Describe the mechanisms of chromosome compaction

The continuity of life from one cell to another has its foundation in the reproduction of cells by way of the cell cycle. The cell cycle is an orderly sequence of events that describes the stages of a cell’s life from the division of a single parent cell to the production of two new daughter cells. The mechanisms involved in the cell cycle are highly regulated.

Genomic DNA

Before discussing the steps a cell must undertake to replicate, a deeper understanding of the structure and function of a cell’s genetic information is necessary. A cell’s DNA, packaged as a double-stranded DNA molecule, is called its genome. In prokaryotes, the genome is composed of a single, double-stranded DNA molecule in the form of a loop or circle (Figure 10.2). The region in the cell containing this genetic material is called a nucleoid. Some prokaryotes also have smaller loops of DNA called plasmids that are not essential for normal growth. Bacteria can exchange these plasmids with other bacteria, sometimes receiving beneficial new genes that the recipient can add to their chromosomal DNA. Antibiotic resistance is one trait that often spreads through a bacterial colony through plasmid exchange.

In eukaryotes, the genome consists of several double-stranded linear DNA molecules (Figure 10.3). Each species of eukaryotes has a characteristic number of chromosomes in the nuclei of its cells. Human body cells have 46 chromosomes, while human gametes (sperm or eggs) have 23 chromosomes each. A typical body cell, or somatic cell, contains two matched sets of chromosomes, a configuration known as diploid. The letter n is used to represent a single set of chromosomes; therefore, a diploid organism is designated 2n. Human cells that contain one set of chromosomes are called gametes, or sex cells; these are eggs and sperm, and are designated 1n, or haploid.

Matched pairs of chromosomes in a diploid organism are called homologous (“same knowledge”) chromosomes. Homologous chromosomes are the same length and have specific nucleotide segments called genes in exactly the same location, or locus. Genes, the functional units of chromosomes, determine specific characteristics by coding for specific proteins. Traits are the variations of those characteristics. For example, hair color is a characteristic with traits that are blonde, brown, or black.

Each copy of a homologous pair of chromosomes originates from a different parent; therefore, the genes themselves are not identical. The variation of individuals within a species is due to the specific combination of the genes inherited from both parents. Even a slightly altered sequence of nucleotides within a gene can result in an alternative trait. For example, there are three possible gene sequences on the human chromosome that code for blood type: sequence A, sequence B, and sequence O. Because all diploid human cells have two copies of the chromosome that determines blood type, the blood type (the trait) is determined by which two versions of the marker gene are inherited. It is possible to have two copies of the same gene sequence on both homologous chromosomes, with one on each (for example, AA, BB, or OO), or two different sequences, such as AB.
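Because each person carries exactly two of these sequences, the full set of possible combinations is small enough to enumerate directly. The short Python sketch below is an illustrative aside, not part of the original text; it simply lists every unordered pair of the A, B, and O sequences:

from itertools import combinations_with_replacement

# The three gene sequences (alleles) for the human blood-type gene.
alleles = ["A", "B", "O"]

# Every unordered pair a person could inherit, one sequence from each parent.
genotypes = ["".join(pair) for pair in combinations_with_replacement(alleles, 2)]
print(genotypes)  # ['AA', 'AB', 'AO', 'BB', 'BO', 'OO']

Six combinations are possible: the AA, BB, OO, and AB examples given above, plus the mixed AO and BO pairs.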
Minor variations of traits, such as blood type, eye color, and handedness, contribute to the natural variation found within a species. However, if the entire DNA sequence from any pair of human homologous chromosomes is compared, the difference is less than one percent. The sex chromosomes, X and Y, are the single exception to the rule of homologous chromosome uniformity: Other than a small amount of homology that is necessary to accurately produce gametes, the genes found on the X and Y chromosomes are different.

Eukaryotic Chromosomal Structure and Compaction

If the DNA from all 46 chromosomes in a human cell nucleus were laid out end to end, it would measure approximately two meters; however, its diameter would be only 2 nm. Considering that the size of a typical human cell is about 10 µm (100,000 cells lined up to equal one meter), DNA must be tightly packaged to fit in the cell’s nucleus. At the same time, it must also be readily accessible for the genes to be expressed. During some stages of the cell cycle, the long strands of DNA are condensed into compact chromosomes. There are a number of ways that chromosomes are compacted.

In the first level of compaction, short stretches of the DNA double helix wrap around a core of eight histone proteins at regular intervals along the entire length of the chromosome (Figure 10.4). The DNA-histone complex is called chromatin. The beadlike, histone DNA complex is called a nucleosome, and DNA connecting the nucleosomes is called linker DNA. A DNA molecule in this form is about seven times shorter than the double helix without the histones, and the beads are about 10 nm in diameter, in contrast with the 2-nm diameter of a DNA double helix. The next level of compaction occurs as the nucleosomes and the linker DNA between them are coiled into a 30-nm chromatin fiber. This coiling further shortens the chromosome so that it is now about 50 times shorter than the extended form. Even at this 50-fold compaction, however, two meters of DNA would still span roughly four centimeters, vastly larger than a 10-µm cell, so additional packing is required. In the third level of packing, a variety of fibrous proteins is used to pack the chromatin. These fibrous proteins also ensure that each chromosome in a non-dividing cell occupies a particular area of the nucleus that does not overlap with that of any other chromosome (see the top image in Figure 10.3).

DNA replicates in the S phase of interphase. After replication, the chromosomes are composed of two linked sister chromatids. When fully compact, the pairs of identically packed chromosomes are bound to each other by cohesin proteins. The connection between the sister chromatids is closest in a region called the centromere. The conjoined sister chromatids, with a diameter of about 1 µm, are visible under a light microscope. The centromeric region is highly condensed and thus will appear as a constricted area.

Link to Learning
This animation illustrates the different levels of chromosome packing.

10.2 The Cell Cycle

Learning Objectives
By the end of this section, you will be able to:
- Describe the three stages of interphase
- Discuss the behavior of chromosomes during karyokinesis
- Explain how the cytoplasmic content is divided during cytokinesis
- Define the quiescent G0 phase

The cell cycle is an ordered series of events involving cell growth and cell division that produces two new daughter cells. Cells on the path to cell division proceed through a series of precisely timed and carefully regulated stages of growth, DNA replication, and division that produces two identical (clone) cells.
The cell cycle has two major phases: interphase and the mitotic phase ( Figure 10.5 ). During interphase , the cell grows and DNA is replicated. During the mitotic phase , the replicated DNA and cytoplasmic contents are separated, and the cell divides. Interphase During interphase, the cell undergoes normal growth processes while also preparing for cell division. In order for a cell to move from interphase into the mitotic phase, many internal and external conditions must be met. The three stages of interphase are called G 1 , S, and G 2 . G 1 Phase (First Gap) The first stage of interphase is called the G 1 phase (first gap) because, from a microscopic aspect, little change is visible. However, during the G 1 stage, the cell is quite active at the biochemical level. The cell is accumulating the building blocks of chromosomal DNA and the associated proteins as well as accumulating sufficient energy reserves to complete the task of replicating each chromosome in the nucleus. S Phase (Synthesis of DNA) Throughout interphase, nuclear DNA remains in a semi-condensed chromatin configuration. In the S phase , DNA replication can proceed through the mechanisms that result in the formation of identical pairs of DNA molecules—sister chromatids—that are firmly attached to the centromeric region. The centrosome is duplicated during the S phase. The two centrosomes will give rise to the mitotic spindle , the apparatus that orchestrates the movement of chromosomes during mitosis. At the center of each animal cell, the centrosomes of animal cells are associated with a pair of rod-like objects, the centrioles , which are at right angles to each other. Centrioles help organize cell division. Centrioles are not present in the centrosomes of other eukaryotic species, such as plants and most fungi. G 2 Phase (Second Gap) In the G 2 phase , the cell replenishes its energy stores and synthesizes proteins necessary for chromosome manipulation. Some cell organelles are duplicated, and the cytoskeleton is dismantled to provide resources for the mitotic phase. There may be additional cell growth during G 2 . The final preparations for the mitotic phase must be completed before the cell is able to enter the first stage of mitosis. The Mitotic Phase The mitotic phase is a multistep process during which the duplicated chromosomes are aligned, separated, and move into two new, identical daughter cells. The first portion of the mitotic phase is called karyokinesis , or nuclear division. The second portion of the mitotic phase, called cytokinesis, is the physical separation of the cytoplasmic components into the two daughter cells. Link to Learning Revisit the stages of mitosis at this site . Karyokinesis (Mitosis) Karyokinesis, also known as mitosis , is divided into a series of phases—prophase, prometaphase, metaphase, anaphase, and telophase—that result in the division of the cell nucleus ( Figure 10.6 ). Karyokinesis is also called mitosis. Visual Connection Which of the following is the correct order of events in mitosis? Sister chromatids line up at the metaphase plate. The kinetochore becomes attached to the mitotic spindle. The nucleus reforms and the cell divides. Cohesin proteins break down and the sister chromatids separate. The kinetochore becomes attached to the mitotic spindle. Cohesin proteins break down and the sister chromatids separate. Sister chromatids line up at the metaphase plate. The nucleus reforms and the cell divides. The kinetochore becomes attached to the cohesin proteins. 
Sister chromatids line up at the metaphase plate. The kinetochore breaks down and the sister chromatids separate. The nucleus reforms and the cell divides.
d. The kinetochore becomes attached to the mitotic spindle. Sister chromatids line up at the metaphase plate. Cohesin proteins break down and the sister chromatids separate. The nucleus reforms and the cell divides.

During prophase, the “first phase,” the nuclear envelope starts to dissociate into small vesicles, and the membranous organelles (such as the Golgi complex, or Golgi apparatus, and the endoplasmic reticulum) fragment and disperse toward the periphery of the cell. The nucleolus disappears (disperses). The centrosomes begin to move to opposite poles of the cell. Microtubules that will form the mitotic spindle extend between the centrosomes, pushing them farther apart as the microtubule fibers lengthen. The sister chromatids begin to coil more tightly with the aid of condensin proteins and become visible under a light microscope.

During prometaphase, the “first change phase,” many processes that were begun in prophase continue to advance. The remnants of the nuclear envelope fragment. The mitotic spindle continues to develop as more microtubules assemble and stretch across the length of the former nuclear area. Chromosomes become more condensed and discrete. Each sister chromatid develops a protein structure called a kinetochore in the centromeric region (Figure 10.7). The proteins of the kinetochore attract and bind mitotic spindle microtubules. As the spindle microtubules extend from the centrosomes, some of these microtubules come into contact with and firmly bind to the kinetochores. Once a mitotic fiber attaches to a chromosome, the chromosome will be reoriented until the kinetochores of the sister chromatids face opposite poles. Eventually, all the sister chromatids will be attached via their kinetochores to microtubules from opposing poles. Spindle microtubules that do not engage the chromosomes are called polar microtubules. These microtubules overlap each other midway between the two poles and contribute to cell elongation. Astral microtubules are located near the poles, aid in spindle orientation, and are required for the regulation of mitosis.

During metaphase, the “change phase,” all the chromosomes are aligned in a plane called the metaphase plate, or the equatorial plane, midway between the two poles of the cell. The sister chromatids are still tightly attached to each other by cohesin proteins. At this time, the chromosomes are maximally condensed.

During anaphase, the “upward phase,” the cohesin proteins degrade, and the sister chromatids separate at the centromere. Each chromatid, now called a chromosome, is pulled rapidly toward the centrosome to which its microtubule is attached. The cell becomes visibly elongated (oval shaped) as the polar microtubules slide against each other at the metaphase plate where they overlap.

During telophase, the “distance phase,” the chromosomes reach the opposite poles and begin to decondense (unravel), relaxing into a chromatin configuration. The mitotic spindles are depolymerized into tubulin monomers that will be used to assemble cytoskeletal components for each daughter cell. Nuclear envelopes form around the chromosomes, and nucleoli reappear within the nuclear area.
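The ordering logic in the Visual Connection above can be made explicit: kinetochore attachment must precede metaphase alignment, alignment must precede chromatid separation, and separation must precede re-formation of the nucleus. A minimal sketch in Python; the event strings are illustrative labels, not from any standard source:

# Canonical order of the four events named in the question above.
CANONICAL_ORDER = [
    "kinetochores attach to the mitotic spindle",       # prometaphase
    "chromatids line up at the metaphase plate",        # metaphase
    "cohesins break down; sister chromatids separate",  # anaphase
    "nucleus re-forms and the cell divides",            # telophase/cytokinesis
]

def is_valid_order(events):
    # An answer choice is correct only if its events appear in the
    # same relative order as the canonical sequence.
    positions = [CANONICAL_ORDER.index(e) for e in events]
    return positions == sorted(positions)

# Choice d lists the events in exactly the canonical order;
# a shuffled listing (like choice a) fails the check.
assert is_valid_order(CANONICAL_ORDER)
assert not is_valid_order(list(reversed(CANONICAL_ORDER)))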
Cytokinesis

Cytokinesis, or “cell motion,” is the second main stage of the mitotic phase, during which cell division is completed via the physical separation of the cytoplasmic components into two daughter cells. Division is not complete until the cell components have been apportioned and completely separated into the two daughter cells. Although the stages of mitosis are similar for most eukaryotes, the process of cytokinesis is quite different for eukaryotes that have cell walls, such as plant cells.

In cells such as animal cells that lack cell walls, cytokinesis follows the onset of anaphase. A contractile ring composed of actin filaments forms just inside the plasma membrane at the former metaphase plate. The actin filaments pull the equator of the cell inward, forming a fissure. This fissure, or “crack,” is called the cleavage furrow. The furrow deepens as the actin ring contracts, and eventually the membrane is cleaved in two (Figure 10.8).

In plant cells, a new cell wall must form between the daughter cells. During interphase, the Golgi apparatus accumulates enzymes, structural proteins, and glucose molecules prior to breaking into vesicles and dispersing throughout the dividing cell. During telophase, these Golgi vesicles are transported on microtubules to form a phragmoplast (a vesicular structure) at the metaphase plate. There, the vesicles fuse and coalesce from the center toward the cell walls; this structure is called a cell plate. As more vesicles fuse, the cell plate enlarges until it merges with the cell walls at the periphery of the cell. Enzymes use the glucose that has accumulated between the membrane layers to build a new cell wall. The Golgi membranes become parts of the plasma membrane on either side of the new cell wall (Figure 10.8).

G0 Phase

Not all cells adhere to the classic cell cycle pattern in which a newly formed daughter cell immediately enters the preparatory phases of interphase, closely followed by the mitotic phase. Cells in the G0 phase are not actively preparing to divide. The cell is in a quiescent (inactive) stage that occurs when cells exit the cell cycle. Some cells enter G0 temporarily until an external signal triggers the onset of G1. Other cells that never or rarely divide, such as mature cardiac muscle and nerve cells, remain in G0 permanently.

Scientific Method Connection: Determine the Time Spent in Cell Cycle Stages

Problem: How long does a cell spend in interphase compared to each stage of mitosis?

Background: A prepared microscope slide of blastula cross-sections will show cells arrested in various stages of the cell cycle. It is not visually possible to separate the stages of interphase from each other, but the mitotic stages are readily identifiable. If 100 cells are examined, the number of cells in each identifiable cell cycle stage will give an estimate of the time it takes for the cell to complete that stage.

Problem Statement: Given the events included in all of interphase and those that take place in each stage of mitosis, estimate the length of each stage based on a 24-hour cell cycle. Before proceeding, state your hypothesis.

Test your hypothesis by doing the following:
Place a fixed and stained microscope slide of whitefish blastula cross-sections under the scanning objective of a light microscope.
Locate and focus on one of the sections using the scanning objective of your microscope. Notice that the section is a circle composed of dozens of closely packed individual cells.
Switch to the low-power objective and refocus. With this objective, individual cells are visible.
Switch to the high-power objective and slowly move the slide left to right, and up and down, to view all the cells in the section (Figure 10.9). As you scan, you will notice that most of the cells are not undergoing mitosis but are in the interphase period of the cell cycle.
Practice identifying the various stages of the cell cycle, using the drawings of the stages as a guide (Figure 10.6).
Once you are confident about your identification, begin to record the stage of each cell you encounter as you scan left to right, and top to bottom, across the blastula section.
Keep a tally of your observations and stop when you reach 100 cells identified. The larger the sample size (total number of cells counted), the more accurate the results. If possible, gather and record group data prior to calculating percentages and making estimates.

Record your observations: Make a table similar to Table 10.1 in which you record your observations.

Table 10.1 Results of Cell Stage Identification
Phase or Stage | Individual Totals | Group Totals | Percent
Interphase | | |
Prophase | | |
Metaphase | | |
Anaphase | | |
Telophase | | |
Cytokinesis | | |
Totals | 100 | 100 | 100 percent

Analyze your data/report your results: To find the length of time whitefish blastula cells spend in each stage, multiply the percent (recorded as a decimal) by 24 hours. Make a table similar to Table 10.2 to illustrate your data.

Table 10.2 Estimate of Cell Stage Length
Phase or Stage | Percent (as Decimal) | Time in Hours
Interphase | |
Prophase | |
Metaphase | |
Anaphase | |
Telophase | |
Cytokinesis | |

Draw a conclusion: Did your results support your estimated times? Were any of the outcomes unexpected? If so, discuss which events in that stage might contribute to the calculated time.

10.3 Control of the Cell Cycle

Learning Objectives
By the end of this section, you will be able to:
Understand how the cell cycle is controlled by mechanisms both internal and external to the cell
Explain how the three internal control checkpoints occur at the end of G1, at the G2/M transition, and during metaphase
Describe the molecules that control the cell cycle through positive and negative regulation

The length of the cell cycle is highly variable, even within the cells of a single organism. In humans, the frequency of cell turnover ranges from a few hours in early embryonic development, to an average of two to five days for epithelial cells, and to an entire human lifetime spent in G0 by specialized cells, such as cortical neurons or cardiac muscle cells. There is also variation in the time that a cell spends in each phase of the cell cycle. When fast-dividing mammalian cells are grown in culture (outside the body under optimal growing conditions), the length of the cycle is about 24 hours. In rapidly dividing human cells with a 24-hour cell cycle, the G1 phase lasts approximately nine hours, the S phase lasts 10 hours, the G2 phase lasts about four and one-half hours, and the M phase lasts approximately one-half hour. In early embryos of fruit flies, the cell cycle is completed in about eight minutes. The timing of events in the cell cycle is controlled by mechanisms that are both internal and external to the cell.
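The "multiply the percent by 24 hours" arithmetic from the Scientific Method Connection above is easy to script. A minimal sketch in Python; the counts below are invented for illustration, not real data:

# Hypothetical tally from scoring 100 whitefish blastula cells.
counts = {
    "interphase": 90,
    "prophase": 4,
    "metaphase": 2,
    "anaphase": 2,
    "telophase": 1,
    "cytokinesis": 1,
}
CYCLE_HOURS = 24  # assume one complete cell cycle takes 24 hours

total = sum(counts.values())
for stage, n in counts.items():
    fraction = n / total            # percent of cells, as a decimal
    hours = fraction * CYCLE_HOURS  # estimated time spent in this stage
    print(f"{stage:12s} {fraction:7.1%}  ~{hours:5.2f} h")

On these made-up numbers, interphase comes out to 21.6 hours; the figures quoted above for rapidly dividing human cells (about 9 + 10 + 4.5 hours for G1, S, and G2) imply an interphase closer to 23.5 hours, so a tally like this one would slightly overcount mitotic cells.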
Regulation of the Cell Cycle by External Events

Both the initiation and inhibition of cell division are triggered by events external to the cell when it is about to begin the replication process. An event may be as simple as the death of a nearby cell or as sweeping as the release of growth-promoting hormones, such as human growth hormone (HGH). A lack of HGH can inhibit cell division, resulting in dwarfism, whereas too much HGH can result in gigantism. Crowding of cells can also inhibit cell division. Another factor that can initiate cell division is the size of the cell; as a cell grows, it becomes less efficient due to its decreasing surface-to-volume ratio. The solution to this problem is to divide.

Whatever the source of the message, the cell receives the signal, and a series of events within the cell allows it to proceed into interphase. Moving forward from this initiation point, every parameter required during each cell cycle phase must be met or the cycle cannot progress.

Regulation at Internal Checkpoints

It is essential that the daughter cells produced be exact duplicates of the parent cell. Mistakes in the duplication or distribution of the chromosomes lead to mutations that may be passed forward to every new cell produced from an abnormal cell. To prevent a compromised cell from continuing to divide, there are internal control mechanisms that operate at three main cell cycle checkpoints. A checkpoint is one of several points in the eukaryotic cell cycle at which the progression of a cell to the next stage in the cycle can be halted until conditions are favorable. These checkpoints occur near the end of G1, at the G2/M transition, and during metaphase (Figure 10.10).

The G1 Checkpoint

The G1 checkpoint determines whether all conditions are favorable for cell division to proceed. The G1 checkpoint, also called the restriction point, is the point at which the cell irreversibly commits to the cell division process. External influences, such as growth factors, play a large role in carrying the cell past the G1 checkpoint. In addition to adequate reserves and cell size, there is a check for genomic DNA damage at the G1 checkpoint. A cell that does not meet all the requirements will not be allowed to progress into the S phase. The cell can halt the cycle and attempt to remedy the problematic condition, or the cell can advance into G0 and await further signals when conditions improve.

The G2 Checkpoint

The G2 checkpoint bars entry into the mitotic phase if certain conditions are not met. As at the G1 checkpoint, cell size and protein reserves are assessed. However, the most important role of the G2 checkpoint is to ensure that all of the chromosomes have been replicated and that the replicated DNA is not damaged. If the checkpoint mechanisms detect problems with the DNA, the cell cycle is halted, and the cell attempts to either complete DNA replication or repair the damaged DNA.

The M Checkpoint

The M checkpoint occurs near the end of the metaphase stage of karyokinesis. The M checkpoint is also known as the spindle checkpoint because it determines whether all the sister chromatids are correctly attached to the spindle microtubules. Because the separation of the sister chromatids during anaphase is an irreversible step, the cycle will not proceed until the kinetochores of each pair of sister chromatids are firmly anchored to at least two spindle fibers arising from opposite poles of the cell.

Link to Learning: Watch what occurs at the G1, G2, and M checkpoints by visiting this website to see an animation of the cell cycle.
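One way to summarize the three checkpoints described above is as a series of guard conditions, all of which must pass before the cycle advances. A toy model in Python; the field names and all-or-nothing booleans are simplifications invented for illustration:

from dataclasses import dataclass

@dataclass
class CellState:
    adequate_size: bool
    adequate_reserves: bool
    dna_undamaged: bool
    replication_complete: bool
    kinetochores_attached: bool  # every sister-chromatid pair anchored

def passes_g1(c):  # G1 (restriction point): commit only if conditions favor division
    return c.adequate_size and c.adequate_reserves and c.dna_undamaged

def passes_g2(c):  # G2/M: all chromosomes replicated, DNA undamaged
    return c.replication_complete and c.dna_undamaged

def passes_m(c):   # M (spindle) checkpoint: anaphase is irreversible
    return c.kinetochores_attached

cell = CellState(True, True, True, True, True)
for name, check in (("G1", passes_g1), ("G2/M", passes_g2), ("M", passes_m)):
    if not check(cell):
        print(f"cycle halted at the {name} checkpoint")
        break
else:
    print("all checkpoints passed; division completes")

In the real cell these "conditions" are graded biochemical signals rather than booleans, and a failed check can lead to repair, arrest in G0, or apoptosis rather than a simple halt.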
Regulator Molecules of the Cell Cycle

In addition to the internally controlled checkpoints, there are two groups of intracellular molecules that regulate the cell cycle. These regulatory molecules either promote progress of the cell to the next phase (positive regulation) or halt the cycle (negative regulation). Regulator molecules may act individually, or they can influence the activity or production of other regulatory proteins. Therefore, the failure of a single regulator may have almost no effect on the cell cycle, especially if more than one mechanism controls the same event. Conversely, the effect of a deficient or non-functioning regulator can be wide-ranging and possibly fatal to the cell if multiple processes are affected.

Positive Regulation of the Cell Cycle

Two groups of proteins, called cyclins and cyclin-dependent kinases (Cdks), are responsible for the progress of the cell through the various checkpoints. The levels of the four cyclin proteins fluctuate throughout the cell cycle in a predictable pattern (Figure 10.11). Increases in the concentration of cyclin proteins are triggered by both external and internal signals. After the cell moves to the next stage of the cell cycle, the cyclins that were active in the previous stage are degraded.

Cyclins regulate the cell cycle only when they are tightly bound to Cdks. To be fully active, the Cdk/cyclin complex must also be phosphorylated in specific locations. Like all kinases, Cdks are enzymes that phosphorylate other proteins. Phosphorylation activates the protein by changing its shape. The proteins phosphorylated by Cdks are involved in advancing the cell to the next phase (Figure 10.12). The levels of Cdk proteins are relatively stable throughout the cell cycle; however, the concentrations of cyclin fluctuate and determine when Cdk/cyclin complexes form. The different cyclins and Cdks bind at specific points in the cell cycle and thus regulate different checkpoints. Because the cyclic fluctuations of cyclin levels are based on the timing of the cell cycle and not on specific events, regulation of the cell cycle usually occurs through either the Cdk molecules alone or the Cdk/cyclin complexes. Without a specific concentration of fully activated cyclin/Cdk complexes, the cell cycle cannot proceed through the checkpoints.

Although the cyclins are the main regulatory molecules that determine the forward momentum of the cell cycle, there are several other mechanisms that fine-tune the progress of the cycle with negative, rather than positive, effects. These mechanisms essentially block the progression of the cell cycle until problematic conditions are resolved. Molecules that prevent the full activation of Cdks are called Cdk inhibitors. Many of these inhibitor molecules directly or indirectly monitor a particular cell cycle event. The block placed on Cdks by inhibitor molecules will not be removed until the specific event that the inhibitor monitors is completed.
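The activation requirements described in this section (a bound cyclin, activating phosphorylation at specific sites, and no attached Cdk inhibitor) combine as a simple conjunction. A minimal sketch in Python; this is a deliberate simplification of the biology, and the parameter names are invented:

def cdk_drives_cycle(cyclin_bound, phosphorylated, inhibitor_bound):
    # A Cdk advances the cycle only when it is bound to its cyclin,
    # phosphorylated at the activating sites, and not blocked by a
    # Cdk inhibitor (such as p21, discussed below).
    return cyclin_bound and phosphorylated and not inhibitor_bound

# Cdk levels stay roughly constant; what varies is the cyclin and inhibitor state.
print(cdk_drives_cycle(cyclin_bound=True, phosphorylated=True, inhibitor_bound=False))   # True
print(cdk_drives_cycle(cyclin_bound=True, phosphorylated=False, inhibitor_bound=False))  # False
print(cdk_drives_cycle(cyclin_bound=True, phosphorylated=True, inhibitor_bound=True))    # False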
Negative Regulation of the Cell Cycle

The second group of cell cycle regulatory molecules comprises the negative regulators, which halt the cell cycle. Remember that in positive regulation, active molecules cause the cycle to progress. The best understood negative regulatory molecules are retinoblastoma protein (Rb), p53, and p21. Retinoblastoma proteins are a group of tumor-suppressor proteins common in many cells. The 53 and 21 designations refer to the functional molecular masses of the proteins (p) in kilodaltons.

Much of what is known about cell cycle regulation comes from research conducted with cells that have lost regulatory control. All three of these regulatory proteins were discovered to be damaged or non-functional in cells that had begun to replicate uncontrollably (became cancerous). In each case, the main cause of the unchecked progress through the cell cycle was a faulty copy of the regulatory protein.

Rb, p53, and p21 act primarily at the G1 checkpoint. p53 is a multi-functional protein that has a major impact on the commitment of a cell to division because it acts when there is damaged DNA in cells that are undergoing the preparatory processes during G1. If damaged DNA is detected, p53 halts the cell cycle and recruits enzymes to repair the DNA. If the DNA cannot be repaired, p53 can trigger apoptosis, or cell suicide, to prevent the duplication of damaged chromosomes. As p53 levels rise, the production of p21 is triggered. p21 enforces the halt in the cycle dictated by p53 by binding to and inhibiting the activity of the Cdk/cyclin complexes. As a cell is exposed to more stress, higher levels of p53 and p21 accumulate, making it less likely that the cell will move into the S phase.

Rb exerts its regulatory influence on other positive regulator proteins. Chiefly, Rb monitors cell size. In the active, dephosphorylated state, Rb binds to proteins called transcription factors, most commonly E2F (Figure 10.13). Transcription factors “turn on” specific genes, allowing the production of the proteins those genes encode. When Rb is bound to E2F, production of proteins necessary for the G1/S transition is blocked. As the cell increases in size, Rb is slowly phosphorylated until it becomes inactivated. Rb then releases E2F, which can turn on the gene that produces the transition protein, and this particular block is removed. For the cell to move past each of the checkpoints, all positive regulators must be “turned on,” and all negative regulators must be “turned off.”

Visual Connection: Rb and other proteins that negatively regulate the cell cycle are sometimes called tumor suppressors. Why do you think the name tumor suppressor might be appropriate for these proteins?

10.4 Cancer and the Cell Cycle

Learning Objectives
By the end of this section, you will be able to:
Describe how cancer is caused by uncontrolled cell growth
Understand how proto-oncogenes are normal cell genes that, when mutated, become oncogenes
Describe how tumor suppressors function
Explain how mutant tumor suppressors cause cancer

Cancer comprises many different diseases caused by a common mechanism: uncontrolled cell growth. Despite the redundancy and overlapping levels of cell cycle control, errors do occur. One of the critical processes monitored by the cell cycle checkpoint surveillance mechanism is the proper replication of DNA during the S phase. Even when all of the cell cycle controls are fully functional, a small percentage of replication errors (mutations) will be passed on to the daughter cells. If changes to the DNA nucleotide sequence occur within a coding portion of a gene and are not corrected, a gene mutation results. All cancers start when a gene mutation gives rise to a faulty protein that plays a key role in cell reproduction. The change in the cell that results from the malformed protein may be minor: perhaps a slight delay in the binding of Cdk to cyclin, or an Rb protein that detaches from its target while still dephosphorylated, prematurely lifting the block on the G1/S transition.
Even minor mistakes, however, may allow subsequent mistakes to occur more readily. Over and over, small uncorrected errors are passed from the parent cell to the daughter cells and amplified as each generation produces more non-functional proteins from uncorrected DNA damage. Eventually, the pace of the cell cycle speeds up as the effectiveness of the control and repair mechanisms decreases. Uncontrolled growth of the mutated cells outpaces the growth of normal cells in the area, and a tumor (“-oma”) can result.

Proto-oncogenes

The genes that code for the positive cell cycle regulators are called proto-oncogenes. Proto-oncogenes are normal genes that, when mutated in certain ways, become oncogenes, genes that cause a cell to become cancerous. Consider what might happen to the cell cycle in a cell with a recently acquired oncogene. In most instances, the alteration of the DNA sequence will result in a less functional (or non-functional) protein. The result is detrimental to the cell and will likely prevent the cell from completing the cell cycle; however, the organism is not harmed because the mutation will not be carried forward. If a cell cannot reproduce, the mutation is not propagated and the damage is minimal. Occasionally, however, a gene mutation causes a change that increases the activity of a positive regulator. For example, a mutation that allows Cdk to be activated without being partnered with cyclin could push the cell cycle past a checkpoint before all of the required conditions are met. If the resulting daughter cells are too damaged to undergo further cell divisions, the mutation would not be propagated and no harm would come to the organism. However, if the atypical daughter cells are able to undergo further cell divisions, subsequent generations of cells will probably accumulate even more mutations, some possibly in additional genes that regulate the cell cycle.

The Cdk gene in the above example is only one of many genes that are considered proto-oncogenes. In addition to the cell cycle regulatory proteins, any protein that influences the cycle can be altered in such a way as to override cell cycle checkpoints. An oncogene is any gene that, when altered, leads to an increase in the rate of cell cycle progression.

Tumor Suppressor Genes

Like proto-oncogenes, many of the negative cell cycle regulatory proteins were discovered in cells that had become cancerous. Tumor suppressor genes are segments of DNA that code for negative regulator proteins, the type of regulators that, when activated, can prevent the cell from undergoing uncontrolled division. The collective function of the best-understood tumor suppressor gene proteins, Rb, p53, and p21, is to put up a roadblock to cell cycle progression until certain events are completed. A cell that carries a mutated form of a negative regulator might not be able to halt the cell cycle if there is a problem. Tumor suppressors are similar to brakes in a vehicle: malfunctioning brakes can contribute to a car crash.

Mutated p53 genes have been identified in more than one-half of all human tumor cells. This discovery is not surprising in light of the multiple roles that the p53 protein plays at the G1 checkpoint. A cell with a faulty p53 may fail to detect errors present in the genomic DNA (Figure 10.14). Even if a partially functional p53 does identify the mutations, it may no longer be able to signal the necessary DNA repair enzymes. Either way, damaged DNA will remain uncorrected.
At this point, a functional p53 will deem the cell unsalvageable and trigger programmed cell death (apoptosis). The damaged version of p53 found in cancer cells, however, cannot trigger apoptosis.

Visual Connection: Human papillomavirus can cause cervical cancer. The virus encodes E6, a protein that binds p53. Based on this fact and what you know about p53, what effect do you think E6 binding has on p53 activity?
a. E6 activates p53
b. E6 inactivates p53
c. E6 mutates p53
d. E6 binding marks p53 for degradation

The loss of p53 function has other repercussions for the cell cycle. Mutated p53 might lose its ability to trigger p21 production. Without adequate levels of p21, there is no effective block on Cdk activation. Essentially, without a fully functional p53, the G1 checkpoint is severely compromised and the cell proceeds directly from G1 to S regardless of internal and external conditions. At the completion of this shortened cell cycle, two daughter cells are produced that have inherited the mutated p53 gene. Given the non-optimal conditions under which the parent cell reproduced, it is likely that the daughter cells will have acquired other mutations in addition to the faulty tumor suppressor gene. Cells such as these daughter cells quickly accumulate both oncogenes and non-functional tumor suppressor genes. Again, the result is tumor growth.

Link to Learning: Watch an animation of how cancer results from errors in the cell cycle.

10.5 Prokaryotic Cell Division

Learning Objectives
By the end of this section, you will be able to:
Describe the process of binary fission in prokaryotes
Explain how FtsZ and tubulin proteins are examples of homology

Prokaryotes, such as bacteria, propagate by binary fission. For unicellular organisms, cell division is the only method used to produce new individuals. In both prokaryotic and eukaryotic cells, the outcome of cell reproduction is a pair of daughter cells that are genetically identical to the parent cell. In unicellular organisms, daughter cells are individuals. To achieve this outcome of cloned offspring, certain steps are essential. The genomic DNA must be replicated and then allocated into the daughter cells; the cytoplasmic contents must also be divided to give both new cells the machinery to sustain life. In bacterial cells, the genome consists of a single, circular DNA chromosome; therefore, the process of cell division is simplified. Karyokinesis is unnecessary because there is no nucleus and thus no need to direct one copy of each of multiple chromosomes into each daughter cell, as there is in eukaryotes. This type of cell division is called binary (prokaryotic) fission.

Binary Fission

Due to the relative simplicity of the prokaryotes, the cell division process, called binary fission, is a less complicated and much more rapid process than cell division in eukaryotes. The single, circular DNA chromosome of bacteria is not enclosed in a nucleus but instead occupies a specific location, the nucleoid, within the cell (Figure 10.2). Although the DNA of the nucleoid is associated with proteins that aid in packaging the molecule into a compact size, there are no histone proteins and thus no nucleosomes in prokaryotes. The packing proteins of bacteria are, however, related to the cohesin and condensin proteins involved in the chromosome compaction of eukaryotes. The bacterial chromosome is attached to the plasma membrane at about the midpoint of the cell.
The starting point of replication, the origin, is close to the binding site of the chromosome to the plasma membrane (Figure 10.15). Replication of the DNA is bidirectional, moving away from the origin on both strands of the loop simultaneously. As the new double strands are formed, each origin point moves away from the cell wall attachment toward the opposite ends of the cell. As the cell elongates, the growing membrane aids in the transport of the chromosomes. After the chromosomes have cleared the midpoint of the elongated cell, cytoplasmic separation begins. The formation of a ring composed of repeating units of a protein called FtsZ directs the partition between the nucleoids. Formation of the FtsZ ring triggers the accumulation of other proteins that work together to recruit new membrane and cell wall materials to the site. A septum is formed between the nucleoids, extending gradually from the periphery toward the center of the cell. When the new cell walls are in place, the daughter cells separate.

Evolution Connection: Mitotic Spindle Apparatus

The precise timing and formation of the mitotic spindle is critical to the success of eukaryotic cell division. Prokaryotic cells, on the other hand, do not undergo karyokinesis and therefore have no need for a mitotic spindle. However, the FtsZ protein that plays such a vital role in prokaryotic cytokinesis is structurally and functionally very similar to tubulin, the building block of the microtubules that make up the mitotic spindle fibers that are necessary for eukaryotes. FtsZ proteins can form filaments, rings, and other three-dimensional structures that resemble the way tubulin forms microtubules, centrioles, and various cytoskeletal components. In addition, both FtsZ and tubulin employ the same energy source, GTP (guanosine triphosphate), to rapidly assemble and disassemble complex structures. FtsZ and tubulin are homologous structures derived from common evolutionary origins. In this example, FtsZ is the ancestor protein to tubulin (a modern protein). While both proteins are found in extant organisms, tubulin function has evolved and diversified tremendously since evolving from its FtsZ prokaryotic origin. A survey of mitotic assembly components found in present-day unicellular eukaryotes reveals crucial intermediary steps to the complex membrane-enclosed genomes of multicellular eukaryotes (Table 10.3).

Table 10.3 Cell Division Apparatus among Various Organisms

Prokaryotes
Structure of genetic material: There is no nucleus. The single, circular chromosome exists in a region of cytoplasm called the nucleoid.
Division of nuclear material: Occurs through binary fission. As the chromosome is replicated, the two copies move to opposite ends of the cell by an unknown mechanism.
Separation of daughter cells: FtsZ proteins assemble into a ring that pinches the cell in two.

Some protists
Structure of genetic material: Linear chromosomes exist in the nucleus.
Division of nuclear material: Chromosomes attach to the nuclear envelope, which remains intact. The mitotic spindle passes through the envelope and elongates the cell. No centrioles exist.
Separation of daughter cells: Microfilaments form a cleavage furrow that pinches the cell in two.

Other protists
Structure of genetic material: Linear chromosomes exist in the nucleus.
Division of nuclear material: A mitotic spindle forms from the centrioles and passes through the nuclear membrane, which remains intact. Chromosomes attach to the mitotic spindle, which separates the chromosomes and elongates the cell.
Separation of daughter cells: Microfilaments form a cleavage furrow that pinches the cell in two.

Animal cells
Structure of genetic material: Linear chromosomes exist in the nucleus.
Division of nuclear material: A mitotic spindle forms from the centrosomes. The nuclear envelope dissolves. Chromosomes attach to the mitotic spindle, which separates the chromosomes and elongates the cell.
Separation of daughter cells: Microfilaments form a cleavage furrow that pinches the cell in two.
Chapter Outline
7.1 Agreement, Consideration, and Promissory Estoppel
7.2 Capacity and Legality
7.3 Breach of Contract and Remedies

Introduction

Learning Outcome
Analyze the principles of contract law and how they apply to businesses.
Assessment Questions

1. The elements of a contract include all but the following element:
a. Offer and acceptance. b. Consideration. c. Capacity. d. Promissory estoppel.

2. What are the ways an agreement can be invalidated?
a. Fraud. b. Misrepresentation. c. Undue influence. d. All of the above.

3. Consideration may include any of the following except:
a. A promise. b. A gift. c. Property. d. Money.

4. Which of the following is most likely to be classified as a necessity for which a minor will be held liable on a contract?
a. A television. b. School supplies. c. Education. d. Food.

5. A minor can avoid a contract to purchase a car if:
a. The car has been destroyed. b. The car has been damaged. c. He or she grows tired of it. d. All of the above.

6. Examples of illegal contracts include all but the following:
a. Contracts for the sale or distribution of heroin. b. Contracts for loansharking. c. Contracts in consideration of marriage. d. Employment contracts for the hiring of undocumented workers.

7. Typical remedies available for a breach of contract include:
a. Money damages. b. Rescission. c. Specific performance. d. All of the above.

8. Courts of equity will not grant specific performance of contracts:
a. For a personal service contract. b. For the sale of real estate. c. For the sale of the original manuscript of a rare edition book. d. All of these are correct.

Answers: 1. d; 2. d; 3. b; 4. d; 5. d; 6. c; 7. d; 8. a.
7.1 Agreement, Consideration, and Promissory Estoppel

A contract is defined as an agreement between two or more parties that is enforceable by law. To be considered enforceable by law, a contract must contain several elements, including offer and acceptance, genuine agreement, consideration, capacity, and legality. The key to a contract is that there must be an offer and an acceptance of the terms of that offer.

An offer is a proposal made to demonstrate an intent to enter a contract. Acceptance is the agreement to be bound by the terms of the offer. Offers must be made with intent, must be definite and certain (i.e., the offer must be clearly expressed for it to be enforceable), and must be communicated to the offeree. An acceptance must demonstrate the willingness to consent to all of the terms of the offer.

Genuine agreement, i.e., “a meeting of the minds,” is also required. Agreement can be destroyed by fraud, misrepresentation, mistake, duress, or undue influence.

Consideration must be included in contracts. Consideration is a thing of value promised in exchange for something else of value. This mutual exchange binds the parties together.

Capacity to contract is the next element required for a valid agreement. The law presumes that anyone entering a contract has the legal capacity to do so. Minors are generally excused from contractual responsibility, as are mentally incompetent and drugged or drunk individuals.

Finally, legality is the last element considered. Parties entering into contracts that involve illegal conduct may not expect judicial relief to have that contract enforced. This principle has also been applied to conduct that would be considered in opposition to public policy.

Consideration and Promissory Estoppel

Contract law employs the principles of consideration and promissory estoppel.

Consideration

In most cases, consideration need not be pecuniary (monetary). Most contracts are enforceable only if each party gets consideration from the agreement. Consideration can be money, property, a promise, or some right. For instance, when a music company sells studio equipment, the promised equipment is the consideration for the buyer. The seller’s consideration is the money the buyer promises to pay for the equipment.

Promissory Estoppel

The promissory estoppel doctrine is an exception to the requirement of consideration for contracts. Promissory estoppel is triggered when one party acts on the other party’s promise. In cases where it is triggered, there is harm or severe injustice to the party who acted because they relied on the other party’s broken promise. The doctrine of promissory estoppel allows aggrieved parties to pursue justice or fairness for the performance of a contract in court, or other equitable remedies, even in the absence of any consideration. Its legal application may vary from state to state, but the basic elements include:
A legal relationship existed between the parties.
A promise was made.
There was reliance on the promise that caused one party to act before any real consideration was exchanged.
A substantial and measurable detriment occurred as a result of the failure to perform on the contract.
An unconscionable result, or gross injustice, resulted from the broken promise.

If it is found that these elements are satisfied and that the doctrine of estoppel is applicable, then the court will issue the appropriate damages in the form of reliance damages to restore the aggrieved party to the position they were in prior to the broken promise.
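In practice, the reliance-damages remedy amounts to adding up the costs the promisee incurred because of the promise, while excluding the value of the promised bargain itself. A toy calculation in Python; every figure and label below is hypothetical, loosely modeled on the example that follows:

# Hypothetical out-of-pocket costs incurred in reliance on a broken promise.
reliance_costs = {
    "penalty for breaking the old lease": 3_000,
    "moving and travel": 13_000,
    "extra rent and deposits": 4_500,
}
expected_salary = 150_000  # value of the promised bargain itself

reliance_damages = sum(reliance_costs.values())
print(f"reliance damages: ${reliance_damages:,}")  # $20,500
# Expectation damages (the lost salary) are generally NOT recoverable
# under promissory estoppel, so expected_salary is deliberately unused.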
Expectation damages are not usually available if promissory estoppel is being claimed. An example of how this principle would apply is: After a bidding war for his services, Bob turns down a job offer with We are the Best, LLC in Miami, Florida (where he lives), and accepts a dream job offer from MegaCorp Co. in San Francisco, California. The offer contains a specific start date, compensation terms, benefits outline, and more. However, it does not include relocation expenses or provisions. The company is aware of his plans to move across the country for the sole purpose of taking this dream role. Bob breaks his Miami lease with penalty and spends approximately $13,000 in moving and travel costs. As the cost of living in San Francisco is much higher than in Miami, he puts down a much pricier first and last month’s rent and security deposit payment than he is used to. Within two days of his planned start date, he receives a call from management at MegaCorp Co. stating that the company has changed its mind and decided to go in a different direction.

If Bob brings a promissory estoppel suit, he will likely be entitled to all of the costs that he incurred while anticipating the start of the promised role (i.e., the penalty for the broken lease, moving costs, the difference in rental costs, the cost of breaking the new lease if necessary, etc.). Following reimbursement of his costs, Bob will be returned to the same position he was in prior to the broken promise. However, the company will not likely be required to reopen the role for him or give him the job, as originally anticipated. Also, he will not likely be awarded any damages for the job that he turned down with We are the Best, LLC, as expectation damages are not usually available.

The doctrines of consideration and promissory estoppel are essential to an understanding of how contracts are formed and enforced in the United States.

7.2 Capacity and Legality

For a contract to be legally binding, the parties entering into the contract must have the capacity to do so. As a legal matter, there are certain classes of people who are presumed to have no capacity to contract. These include legal minors, the mentally ill, and those who are intoxicated. If people meeting these criteria enter into a contract, the agreement is considered voidable. If a contract is voidable, then the person who lacked capacity has the choice to either end the contract or continue with it as agreed upon. This design is meant to protect the party lacking capacity.

Following are some examples of the application of these rules.

Minors Have No Capacity to Contract

In most states, minors under the age of 18 lack the capacity to make a contract and may therefore either honor an agreement or void the contract. However, there are a few exceptions to this rule. In most states, a contract for necessities (i.e., food and clothing) may not be voided. Also, in most states, the contract can no longer be voided when the minor turns 18.

Example: Mary, 16, an athlete, signs a long-term endorsement deal with a well-known brand and is compensated for several years. At age 20, she decides she wants to take a better endorsement deal, so she tries to void the agreement on the grounds that it was made when she was a minor and that she lacked capacity at that time. Mary will not likely succeed in having her agreement voided, as she has passed the period of incapacity.
Mental Incapacity

If a person lacks the mental capacity to enter a contract, then either he or she, or his or her legal guardian, may void it, except in cases where the contract involved necessities. In most states, mental capacity is measured against the “cognitive standard” of whether the party understood the contract’s meaning and effect.

Example: Mr. Williams contracted to sell a patent. Later, however, he claimed that he lacked capacity to enter the agreement. He, therefore, sought to have the contract voided. Williams based his claim on the fact that he had been diagnosed as manic-depressive and had received treatment from a variety of mental hospitals for this condition. His doctor stated that he was unable to properly evaluate business opportunities and contracts while in a “manic” state. A California Court of Appeals, evaluating a similar situation, refused to terminate the contract and stated that even in his manic state, the party was capable of contracting, as his condition may have impaired his judgment but not his understanding of the contract. With other mental conditions, a different legal conclusion could be reached.

Voluntary Intoxication – Drugs and Alcohol

Courts generally do not find lack of capacity to contract for people who are voluntarily intoxicated. The rationale for this decision is found in the reasoning that individuals should not be allowed to side-step their contractual obligations by virtue of their self-induced states. At the same time, however, courts also seek to avoid the undesirable result of allowing the sober party to take advantage of the other person’s condition. Therefore, if a party is so inebriated that he or she is unable to understand the nature and consequences of the agreement, then the contract may be voided by the inebriated party.

Example: In the late 1900s, the owner of a significant amount of stock went on a three-month drinking binge. A local bank that was aware of his consistent inebriation hired a third party to contract with him. The third party succeeded in getting him to sell his stock for about 1.5% of its total value. When the duped seller ended his binge a month later, he learned that the third party had sold the stock to the local bank behind the deal. He then sued the third party. Ultimately, the case was decided by the U.S. Supreme Court, which found that the agreement was void because both the bank and the third party knew that the plaintiff was unaware of what he was doing when he entered the contract. The bank was required to return the shares to the plaintiff, minus the 1.5% of their real value that he had already been paid for the shares.

Legality

Contracts must be created for the exchange of legal goods and services to be enforced. An agreement is void if it violates the law or is formed for the purpose of violating the law. Contracts may also be found voidable if they violate public policy, although this is rarer. Typically, this conclusion is invoked only in clear cases where the potential harm to the public is substantially incontestable, eluding the idiosyncrasies of particular judges. For a contract to be binding, it must not have a criminal or immoral purpose or go against public policy. For example, a contract to commit murder in exchange for money will not be enforced by the courts.
If performing the terms of the agreement, or the formation of the contract itself, would cause the parties to engage in illegal activity, then the contract will be deemed illegal and will be considered void or “unenforceable,” similar to a nonexistent contract. In this case, there will not be any relief available to either party if they breach the contract. Indeed, it is a defense to a breach of contract claim that the contract itself was illegal.

Example: In a state where gambling is illegal, two parties enter into an employment contract for the hiring of a blackjack dealer. The contract would be void because it requires the employee to perform illegal gambling activities. If the blackjack dealer tries to recover any unpaid wages for work completed, his claim will not be recognized because the courts will treat the contract as if it never existed. By contrast, parties enter a contract that involves the sale of dice to a known dealer in a state where gambling is unlawful. The contract would not be considered void because the act of selling dice, in and of itself, is not illegal.

Some examples of contracts that would be considered illegal are contracts for the sale or distribution of illegal drugs, contracts for illegal activities such as loansharking, and employment contracts for the hiring of undocumented workers.

An understanding of the several theories outlined herein for establishing (or challenging) capacity and legality in contract law is essential to this area of law.

7.3 Breach of Contract and Remedies

Once a contract is legally formed, both parties are generally expected to perform according to the terms of the contract. A breach of contract claim arises when either (or both) parties claim that there was a failure, without legal excuse, to perform on any, or all, parts and promises of the contract.

Several inquiries are triggered when a breach of contract claim is initiated. The first step is to determine whether a contract existed in the first place. If it did, the following questions may be asked:
What did the terms of the contract require of the parties?
Were the contractual terms modified at any point?
Did the breach actually occur?
Was the claimed breach material to the contract?
Does any legal excuse or defense to enforcement of the contract exist?
What damages were caused by the breach?

Material vs. Minor Breach

The parties’ obligations and remedies for a breach of contract depend on whether the breach is considered material or minor. When something substantially different from what was expected under the terms of the contract is delivered, the breach will be considered material. For example, the breach will be considered material if the contract promises the delivery of Christmas ornaments but the buyer receives a box of candies. In the case of a material breach, the non-breaching party has the right to all remedies for breach of the entire contract and is no longer expected to perform their obligations. In considering whether a breach is material, courts will determine whether the non-breaching party still received a benefit, and if so, how much was received; whether the non-breaching party can be adequately compensated for the damages; the extent of the performance (if any) by the breaching party; any hardship to the breaching party; the negligence or intent behind the behavior of the breaching party; and finally, the possibility that the breaching party will perform the remainder of the contract.
There are times, however, when despite the breaching party’s failure to perform some of the contract, the other party still receives a majority of the goods or services specified in the contract. In this case, the breach will be considered minor. For example, the breaching party may be late in delivering goods or services promised under a contract that does not specify a firm delivery date and that does not state that time is of the essence. In this case, a reasonably short delay would likely be considered only a minor breach of the contract. Consequently, the non-breaching party would still be required to perform pursuant to the contract. However, damages may be available to them if they suffered some harm as a result of the delay.

Remedies

Typically, the remedies that will be available if a breach of contract is found are money damages, restitution, rescission, reformation, and specific performance.

Money damages include compensation for financial losses caused by the breach.

Restitution restores the injured party to the status quo, or the position they had prior to the formation of the contract, by returning to the plaintiff any money or property given pursuant to the contract. This type of relief is typically sought when a contract is voided by courts due to a finding that the defendant is incompetent or lacks capacity.

Rescission or reformation may be available to parties who enter into contracts by mistake, fraud, undue influence, or duress. Rescission terminates the duties of both parties under the contract, while reformation allows courts to equitably change the contract’s substance.

Specific performance compels one party to perform the promises stated in the contract as nearly as practicable. Specific performance is only mandated when money damages do not adequately compensate for the breach. Personal service, however, may not be used to compel specific performance, since doing so would constitute forced labor, i.e., slavery, which is in violation of the U.S. Constitution.

Inevitably, when valid contracts are created, the potential for breach exists. An understanding of what happens when a contract’s terms are breached is fundamental to an understanding of contract law.
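The remedy-selection rules sketched in this section (money damages as the default; specific performance only when money is inadequate and the contract is not for personal services) can be written as simple guards. A toy model in Python; this is a simplification for illustration, not legal advice:

def likely_remedy(money_adequate, personal_services):
    if money_adequate:
        return "money damages"
    if personal_services:
        # Courts will not compel personal service (it would amount to
        # forced labor), so the injured party is left with damages.
        return "money damages"
    return "specific performance"

print(likely_remedy(money_adequate=True, personal_services=False))   # money damages
print(likely_remedy(money_adequate=False, personal_services=True))   # money damages
print(likely_remedy(money_adequate=False, personal_services=False))  # specific performance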
American Government
Summary
4.1 What Are Civil Liberties?
The Bill of Rights is designed to protect the freedoms of individuals from interference by government officials. Originally these protections were applied only to actions by the national government; different sets of rights and liberties were protected by state constitutions and laws, and even when the rights themselves were the same, the level of protection for them often differed by definition across the states. Since the Civil War, as a result of the passage and ratification of the Fourteenth Amendment and a series of Supreme Court decisions, most of the Bill of Rights’ protections of civil liberties have been expanded to cover actions by state governments as well through a process of selective incorporation. Nonetheless, there is still vigorous debate about what these rights entail and how they should be balanced against the interests of others and of society as a whole.
4.2 Securing Basic Freedoms
The first four amendments of the Bill of Rights protect citizens’ key freedoms from governmental intrusion. The First Amendment limits the government’s ability to impose certain religious beliefs on the people, or to limit the practice of one’s own religion. The First Amendment also protects freedom of expression by the public, the media, and organized groups via rallies, protests, and the petition of grievances. The Second Amendment today protects an individual’s right to keep and bear arms for personal defense in the home, while the Third Amendment limits the ability of the government to allow the military to occupy civilians’ homes except under extraordinary circumstances. Finally, the Fourth Amendment protects our persons, homes, and property from unreasonable searches and seizures, and it protects the people from unlawful arrests. However, all these provisions are subject to limitations, often to protect the interests of public order, the good of society as a whole, or to balance the rights of some citizens against those of others.
4.3 The Rights of Suspects
The rights of those suspected, accused, and convicted of crimes, along with rights in civil cases and economic liberties, are protected by the second major grouping of amendments within the Bill of Rights. The Fifth Amendment secures various procedural safeguards, protects suspects’ right to remain silent, forbids trying someone twice at the same level of government for the same criminal act, and limits the taking of property for public uses. The Sixth Amendment ensures fairness in criminal trials, including through a fair and speedy trial by an impartial jury, the right to assistance of counsel, and the right to examine and compel testimony from witnesses. The Seventh Amendment ensures the right to jury trials in most civil cases (but only at the federal level). Finally, the Eighth Amendment prohibits excessive fines and bail, as well as “cruel and unusual punishments,” although the scope of what is cruel and unusual is subject to debate.
4.4 Interpreting the Bill of Rights
The interrelationship of constitutional amendments continues to be settled through key court cases over time. Because the right to privacy was not explicitly laid out in the Constitution, privacy rights required clarification through public laws and court precedents. Important cases addressing the right to privacy relate to abortion, sexual behavior, internet activity, and the privacy of personal texts and cell phone calls.
Where to draw the line between privacy and public safety is an ongoing discussion in which the courts are a significant player.
Chapter Outline
4.1 What Are Civil Liberties?
4.2 Securing Basic Freedoms
4.3 The Rights of Suspects
4.4 Interpreting the Bill of Rights
Introduction
Americans have recently confronted situations in which government officials appeared not to provide citizens their basic freedoms and rights. Protests have erupted nationwide in response to the deaths of African Americans during interactions with police. Many people were deeply troubled by the revelations of Edward Snowden (Figure 4.1) that U.S. government agencies are conducting widespread surveillance, capturing not only the conversations of foreign leaders and suspected terrorists but also the private communications of U.S. citizens, even those not suspected of criminal activity. These situations are hardly unique in U.S. history. The framers of the Constitution wanted a government that would not repeat the abuses of individual liberties and rights that caused them to declare independence from Britain. However, laws and other “parchment barriers” (or written documents) alone have not protected freedoms over the years; instead, citizens have learned the truth of the old saying (often attributed to Thomas Jefferson but actually said by Irish politician John Philpot Curran), “Eternal vigilance is the price of liberty.” The actions of ordinary citizens, lawyers, and politicians have been at the core of a vigilant effort to protect constitutional liberties. But what are those freedoms? And how should we balance them against the interests of society and other individuals? These are the key questions we will tackle in this chapter.
[ { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> Beginning in 1897 , the Supreme Court has found that various provisions of the Bill of Rights protecting these fundamental liberties must be upheld by the states , even if their state constitutions and laws do not protect them as fully as the Bill of Rights does — or at all . <hl> <hl> This means there has been a process of selective incorporation of the Bill of Rights into the practices of the states ; in other words , the Constitution effectively inserts parts of the Bill of Rights into state laws and constitutions , even though it doesn ’ t do so explicitly . <hl> When cases arise to clarify particular issues and procedures , the Supreme Court decides whether state laws violate the Bill of Rights and are therefore unconstitutional .", "hl_sentences": "Beginning in 1897 , the Supreme Court has found that various provisions of the Bill of Rights protecting these fundamental liberties must be upheld by the states , even if their state constitutions and laws do not protect them as fully as the Bill of Rights does — or at all . This means there has been a process of selective incorporation of the Bill of Rights into the practices of the states ; in other words , the Constitution effectively inserts parts of the Bill of Rights into state laws and constitutions , even though it doesn ’ t do so explicitly .", "question": { "cloze_format": "The Bill of Rights was added to the Constitution because ________.", "normal_format": "The Bill of Rights was added to the Constitution because of what?", "question_choices": [ "key states refused to ratify the Constitution unless it was added", "Alexander Hamilton believed it was necessary", "it was part of the Articles of Confederation", "it was originally part of the Declaration of Independence" ], "question_id": "fs-id1163758655917", "question_text": "The Bill of Rights was added to the Constitution because ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "right to a writ of habeas corpus" }, "bloom": null, "hl_context": "Moreover , the framers thought that they had adequately covered rights issues in the main body of the document . Indeed , the Federalists did include in the Constitution some protections against legislative acts that might restrict the liberties of citizens , based on the history of real and perceived abuses by both British kings and parliaments as well as royal governors . <hl> In Article I , Section 9 , the Constitution limits the power of Congress in three ways : prohibiting the passage of bills of attainder , prohibiting ex post facto laws , and limiting the ability of Congress to suspend the writ of habeas corpus . 
<hl>", "hl_sentences": "In Article I , Section 9 , the Constitution limits the power of Congress in three ways : prohibiting the passage of bills of attainder , prohibiting ex post facto laws , and limiting the ability of Congress to suspend the writ of habeas corpus .", "question": { "cloze_format": "An example of a right explicitly protected by the Constitution as drafted at the Constitutional Convention is the ________.", "normal_format": "What is an example of a right explicitly protected by the Constitution as drafted at the Constitutional Convention?", "question_choices": [ "right to free speech", "right to keep and bear arms", "right to a writ of habeas corpus", "right not to be subjected to cruel and unusual punishment" ], "question_id": "fs-id1163758448114", "question_text": "An example of a right explicitly protected by the Constitution as drafted at the Constitutional Convention is the ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Beginning in 1897 , the Supreme Court has found that various provisions of the Bill of Rights protecting these fundamental liberties must be upheld by the states , even if their state constitutions and laws do not protect them as fully as the Bill of Rights does — or at all . <hl> This means there has been a process of selective incorporation of the Bill of Rights into the practices of the states ; in other words , the Constitution effectively inserts parts of the Bill of Rights into state laws and constitutions , even though it doesn ’ t do so explicitly . When cases arise to clarify particular issues and procedures , the Supreme Court decides whether state laws violate the Bill of Rights and are therefore unconstitutional . With the ratification of the Fourteenth Amendment in 1868 , civil liberties gained more clarification . First , the amendment says , “ no State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States , ” which is a provision that echoes the privileges and immunities clause in Article IV , Section 2 , of the original Constitution ensuring that states treat citizens of other states the same as their own citizens . ( To use an example from today , the punishment for speeding by an out-of-state driver cannot be more severe than the punishment for an in-state driver ) . Legal scholars and the courts have extensively debated the meaning of this privileges or immunities clause over the years ; some have argued that it was supposed to extend the entire Bill of Rights ( or at least the first eight amendments ) to the states , while others have argued that only some rights are extended . In 1999 , Justice John Paul Stevens , writing for a majority of the Supreme Court , argued in Saenz v . Roe that the clause protects the right to travel from one state to another . 7 More recently , Justice Clarence Thomas argued in the 2010 McDonald v . Chicago ruling that the individual right to bear arms applied to the states because of this clause . <hl> 8 The second provision of the Fourteenth Amendment that pertains to applying the Bill of Rights to the states is the due process clause , which says , “ nor shall any State deprive any person of life , liberty , or property , without due process of law . 
” This provision is similar to the Fifth Amendment in that it also refers to “ due process , ” a term that generally means people must be treated fairly and impartially by government officials ( or with what is commonly referred to as substantive due process ) . <hl> <hl> Although the text of the provision does not mention rights specifically , the courts have held in a series of cases that it indicates there are certain fundamental liberties that cannot be denied by the states . <hl> For example , in Sherbert v . Verner ( 1963 ) , the Supreme Court ruled that states could not deny unemployment benefits to an individual who turned down a job because it required working on the Sabbath . 9", "hl_sentences": "Beginning in 1897 , the Supreme Court has found that various provisions of the Bill of Rights protecting these fundamental liberties must be upheld by the states , even if their state constitutions and laws do not protect them as fully as the Bill of Rights does — or at all . 8 The second provision of the Fourteenth Amendment that pertains to applying the Bill of Rights to the states is the due process clause , which says , “ nor shall any State deprive any person of life , liberty , or property , without due process of law . ” This provision is similar to the Fifth Amendment in that it also refers to “ due process , ” a term that generally means people must be treated fairly and impartially by government officials ( or with what is commonly referred to as substantive due process ) . Although the text of the provision does not mention rights specifically , the courts have held in a series of cases that it indicates there are certain fundamental liberties that cannot be denied by the states .", "question": { "cloze_format": "The Fourteenth Amendment was critically important for civil liberties because it ________.", "normal_format": "The Fourteenth Amendment was critically important for civil liberties because it what?", "question_choices": [ "guaranteed freed slaves the right to vote", "outlawed slavery", "helped start the process of selective incorporation of the Bill of Rights", "allowed the states to continue to enact black codes" ], "question_id": "fs-id1163758448256", "question_text": "The Fourteenth Amendment was critically important for civil liberties because it ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "the right to keep and bear arms" }, "bloom": null, "hl_context": "<hl> Right to keep and bear arms to maintain a well-regulated militia <hl> <hl> Second Amendment <hl>", "hl_sentences": "Right to keep and bear arms to maintain a well-regulated militia Second Amendment", "question": { "cloze_format": "The provision that is not part of the First Amendment is ___ .", "normal_format": "Which of the following provisions is not part of the First Amendment?", "question_choices": [ "the right to keep and bear arms", "the right to peaceably assemble", "the right to free speech", "the protection of freedom of religion" ], "question_id": "fs-id1163758706982", "question_text": "Which of the following provisions is not part of the First Amendment?" 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Although the term privacy does not appear in the Constitution or Bill of Rights , scholars have interpreted several Bill of Rights provisions as an indication that James Madison and Congress sought to protect a common-law right to privacy as it would have been understood in the late eighteenth century : a right to be free of government intrusion into our personal life , particularly within the bounds of the home . <hl> For example , we could perhaps see the Second Amendment as standing for the common-law right to self-defense in the home ; the Third Amendment as a statement that government soldiers should not be housed in anyone ’ s home ; the Fourth Amendment as setting a high legal standard for allowing agents of the state to intrude on someone ’ s home ; and the due process and takings clauses of the Fifth Amendment as applying an equally high legal standard to the government ’ s taking a home or property ( reinforced after the Civil War by the Fourteenth Amendment ) . <hl> Alternatively , we could argue that the Ninth Amendment anticipated the existence of a common-law right to privacy , among other rights , when it acknowledged the existence of basic , natural rights not listed in the Bill of Rights or the body of the Constitution itself . 60 Lawyers Samuel D . Warren and Louis Brandeis ( the latter a future Supreme Court justice ) famously developed the concept of privacy rights in a law review article published in 1890 . 61 <hl> Today it seems unlikely the federal government would need to house military forces in civilian lodgings against the will of property owners or tenants ; however , perhaps in the same way we consider the Second and Fourth amendments , we can think of the Third Amendment as reflecting a broader idea that our homes lie within a “ zone of privacy ” that government officials should not violate unless absolutely necessary . <hl> The First Amendment protects the right to freedom of religious conscience and practice and the right to free expression , particularly of political and social beliefs . <hl> The Second Amendment — perhaps the most controversial today — protects the right to defend yourself in your home or other property , as well as the collective right to protect the community as part of the militia . <hl> <hl> The Third Amendment prohibits the government from commandeering people ’ s homes to house soldiers , particularly in peacetime . <hl> <hl> Finally , the Fourth Amendment prevents the government from searching our persons or property or taking evidence without a warrant issued by a judge , with certain exceptions . <hl>", "hl_sentences": "For example , we could perhaps see the Second Amendment as standing for the common-law right to self-defense in the home ; the Third Amendment as a statement that government soldiers should not be housed in anyone ’ s home ; the Fourth Amendment as setting a high legal standard for allowing agents of the state to intrude on someone ’ s home ; and the due process and takings clauses of the Fifth Amendment as applying an equally high legal standard to the government ’ s taking a home or property ( reinforced after the Civil War by the Fourteenth Amendment ) . 
Today it seems unlikely the federal government would need to house military forces in civilian lodgings against the will of property owners or tenants ; however , perhaps in the same way we consider the Second and Fourth amendments , we can think of the Third Amendment as reflecting a broader idea that our homes lie within a “ zone of privacy ” that government officials should not violate unless absolutely necessary . The Second Amendment — perhaps the most controversial today — protects the right to defend yourself in your home or other property , as well as the collective right to protect the community as part of the militia . The Third Amendment prohibits the government from commandeering people ’ s homes to house soldiers , particularly in peacetime . Finally , the Fourth Amendment prevents the government from searching our persons or property or taking evidence without a warrant issued by a judge , with certain exceptions .", "question": { "cloze_format": "The Third Amendment can be thought of as ________.", "normal_format": "As what can The Third Amendment be thought of?", "question_choices": [ "reinforcing the right to keep and bear arms guaranteed by the Second Amendment", "ensuring the right to freedom of the press", "forming part of a broader conception of privacy in the home that is also protected by the Second and Fourth Amendments", "strengthening the right to a jury trial in criminal cases" ], "question_id": "fs-id1163758389607", "question_text": "The Third Amendment can be thought of as ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "does not apply when there is a serious risk that evidence will be destroyed before a warrant can be issued" }, "bloom": null, "hl_context": "<hl> In either case , the amendment indicates that government officials are required to apply for and receive a search warrant prior to a search or seizure ; this warrant is a legal document , signed by a judge , allowing police to search and / or seize persons or property . <hl> Since the 1960s , however , the Supreme Court has issued a series of rulings limiting the warrant requirement in situations where a person can be said to lack a “ reasonable expectation of privacy ” outside the home . <hl> Police can also search and / or seize people or property without a warrant if the owner or renter consents to the search , if there is a reasonable expectation that evidence may be destroyed or tampered with before a warrant can be issued ( i . e . , exigent circumstances ) , or if the items in question are in plain view of government officials . <hl> <hl> The text of the Fourth Amendment is as follows : <hl>", "hl_sentences": "In either case , the amendment indicates that government officials are required to apply for and receive a search warrant prior to a search or seizure ; this warrant is a legal document , signed by a judge , allowing police to search and / or seize persons or property . Police can also search and / or seize people or property without a warrant if the owner or renter consents to the search , if there is a reasonable expectation that evidence may be destroyed or tampered with before a warrant can be issued ( i . e . , exigent circumstances ) , or if the items in question are in plain view of government officials . 
The text of the Fourth Amendment is as follows :", "question": { "cloze_format": "The Fourth Amendment’s requirement for a warrant ________.", "normal_format": "Which of the following is correct about the Fourth Amendment’s requirement for a warrant?", "question_choices": [ "applies only to searches of the home", "applies only to the seizure of property as evidence", "does not protect people who rent or lease property", "does not apply when there is a serious risk that evidence will be destroyed before a warrant can be issued" ], "question_id": "fs-id1163758523699", "question_text": "The Fourth Amendment’s requirement for a warrant ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "However , increasingly eminent domain has been used to allow economic development , with beneficiaries ranging from politically connected big businesses such as car manufacturers building new factories to highly profitable sports teams seeking ever-more-luxurious stadiums ( Figure 4.15 ) . And , while we traditionally think of property owners as relatively well-off people whose rights don ’ t necessarily need protecting since they can fend for themselves in the political system , frequently these cases pit lower - and middle-class homeowners against multinational corporations or multimillionaires with the ear of city and state officials . <hl> In a notorious 2005 case , Kelo v . City of New London , the Supreme Court sided with municipal officials taking homes in a middle-class neighborhood to obtain land for a large pharmaceutical company ’ s corporate campus . <hl> <hl> 42 The case led to a public backlash against the use of eminent domain and legal changes in many states , making it harder for cities to take property from one private party and give it to another for economic redevelopment purposes . <hl>", "hl_sentences": "In a notorious 2005 case , Kelo v . City of New London , the Supreme Court sided with municipal officials taking homes in a middle-class neighborhood to obtain land for a large pharmaceutical company ’ s corporate campus . 42 The case led to a public backlash against the use of eminent domain and legal changes in many states , making it harder for cities to take property from one private party and give it to another for economic redevelopment purposes .", "question": { "cloze_format": "The Supreme Court case known as Kelo v. City of New London was controversial because it ________.", "normal_format": "Why was the Supreme Court case known as Kelo v. City of New London controversial?", "question_choices": [ "allowed greater use of the power of eminent domain", "regulated popular ride-sharing services like Lyft and Uber", "limited the application of the death penalty", "made it harder for police to use evidence obtained without a warrant" ], "question_id": "fs-id1163757575069", "question_text": "The Supreme Court case known as Kelo v. City of New London was controversial because it ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "the right to remain silent" }, "bloom": null, "hl_context": "<hl> The Sixth Amendment guarantees the right of those accused of crimes to present witnesses in their own defense ( if necessary , compelling them to testify ) and to confront and cross-examine witnesses presented by the prosecution . 
<hl> In general , the only testimony acceptable in a criminal trial must be given in a courtroom and be subject to cross-examination ; hearsay , or testimony by one person about what another person has said , is generally inadmissible , although hearsay may be presented as evidence when it is an admission of guilt by the defendant or a “ dying declaration ” by a person who has passed away . Although both sides in a trial have the opportunity to examine and cross-examine witnesses , the judge may exclude testimony deemed irrelevant or prejudicial . <hl> Right to a speedy trial by an impartial jury <hl> <hl> Sixth Amendment <hl>", "hl_sentences": "The Sixth Amendment guarantees the right of those accused of crimes to present witnesses in their own defense ( if necessary , compelling them to testify ) and to confront and cross-examine witnesses presented by the prosecution . Right to a speedy trial by an impartial jury Sixth Amendment", "question": { "cloze_format": "The right that is not protected by the Sixth Amendment is ___.", "normal_format": "Which of the following rights is not protected by the Sixth Amendment?", "question_choices": [ "the right to trial by an impartial jury", "the right to cross-examine witnesses in a trial", "the right to remain silent", "the right to a speedy trial" ], "question_id": "fs-id1163757576822", "question_text": "Which of the following rights is not protected by the Sixth Amendment?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> The double jeopardy rule does not prevent someone from recovering damages in a civil case — a legal dispute between individuals over a contract or compensation for an injury — that results from a criminal act , even if the person accused of that act is found not guilty . <hl> One famous case from the 1990s involved former football star and television personality O . J . Simpson . Simpson , although acquitted of the murders of his ex-wife Nicole Brown and her friend Ron Goldman in a criminal court , was later found to be responsible for their deaths in a subsequent civil case and as a result was forced to forfeit most of his wealth to pay damages to their families . The Fifth Amendment also protects individuals against double jeopardy , a process that subjects a suspect to prosecution twice for the same criminal act . No one who has been acquitted ( found not guilty ) of a crime can be prosecuted again for that crime . <hl> But the prohibition against double jeopardy has its own exceptions . <hl> <hl> The most notable is that it prohibits a second prosecution only at the same level of government ( federal or state ) as the first ; the federal government can try you for violating federal law , even if a state or local court finds you not guilty of the same action . <hl> For example , in the early 1990s , several Los Angeles police officers accused of brutally beating motorist Rodney King during his arrest were acquitted of various charges in a state court , but some were later convicted in a federal court of violating King ’ s civil rights .", "hl_sentences": "The double jeopardy rule does not prevent someone from recovering damages in a civil case — a legal dispute between individuals over a contract or compensation for an injury — that results from a criminal act , even if the person accused of that act is found not guilty . But the prohibition against double jeopardy has its own exceptions . 
The most notable is that it prohibits a second prosecution only at the same level of government ( federal or state ) as the first ; the federal government can try you for violating federal law , even if a state or local court finds you not guilty of the same action .", "question": { "cloze_format": "The double jeopardy rule in the Bill of Rights forbids ___.", "normal_format": "The double jeopardy rule in the Bill of Rights forbids which of the following?", "question_choices": [ "prosecuting someone in a state court for a criminal act he or she had been acquitted of in federal court", "prosecuting someone in federal court for a criminal act he or she had been acquitted of in a state court", "suing someone for damages for an act the person was found not guilty of", "none of these options" ], "question_id": "fs-id1163757282769", "question_text": "The double jeopardy rule in the Bill of Rights forbids which of the following?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "may not be applied to those who were under 18 when they committed a crime" }, "bloom": null, "hl_context": "<hl> In recent years the Supreme Court has issued a series of rulings substantially narrowing the application of the death penalty . <hl> <hl> As a result , defendants who have mental disabilities may not be executed . <hl> 49 Also , defendants who were under eighteen when they committed an offense that is otherwise subject to the death penalty may not be executed . 50 The court has generally rejected the application of the death penalty to crimes that did not result in the death of another human being , most notably in the case of rape . 51 And , while permitting the death penalty to be applied to murder in some cases , the Supreme Court has generally struck down laws that require the application of the death penalty in certain circumstances . Still , the United States is among ten countries with the most executions worldwide ( Figure 4.18 ) .", "hl_sentences": "In recent years the Supreme Court has issued a series of rulings substantially narrowing the application of the death penalty . As a result , defendants who have mental disabilities may not be executed .", "question": { "cloze_format": "The Supreme Court has decided that the death penalty ________.", "normal_format": "What has the Supreme Court decided about the death penalty?", "question_choices": [ "is always cruel and unusual punishment", "is never cruel and unusual punishment", "may be applied only to acts of terrorism", "may not be applied to those who were under 18 when they committed a crime" ], "question_id": "fs-id1163757301948", "question_text": "The Supreme Court has decided that the death penalty ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "However , the Tenth Amendment also allows states to guarantee rights and liberties more fully or extensively than the federal government does , or to include additional rights . <hl> For example , many state constitutions guarantee the right to a free public education , several states give victims of crimes certain rights , and eighteen states include the right to hunt game and / or fish . <hl> 57 A number of state constitutions explicitly guarantee equal rights for men and women . 
Some permitted women to vote before that right was expanded to all women with the Nineteenth Amendment in 1920 , and people aged 18 – 20 could vote in a few states before the Twenty-Sixth Amendment came into force in 1971 . <hl> As we will see below , several states also explicitly recognize a right to privacy . <hl> State courts at times have interpreted state constitutional provisions to include broader protections for basic liberties than their federal counterparts . For example , although in general people do not have the right to free speech and assembly on private property owned by others without their permission , California ’ s constitutional protection of freedom of expression was extended to portions of some privately owned shopping centers by the state ’ s supreme court ( Figure 4.19 ) . 58", "hl_sentences": "For example , many state constitutions guarantee the right to a free public education , several states give victims of crimes certain rights , and eighteen states include the right to hunt game and / or fish . As we will see below , several states also explicitly recognize a right to privacy .", "question": { "cloze_format": "The right that is not explicitly protected by some state constitutions is ___ .", "normal_format": "Which of the following rights is not explicitly protected by some state constitutions?", "question_choices": [ "the right to hunt", "the right to privacy", "the right to polygamous marriage", "the right to a free public education" ], "question_id": "fs-id1163757306760", "question_text": "Which of the following rights is not explicitly protected by some state constitutions?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "most U.S. citizens today believe the government should be allowed to outlaw birth control" }, "bloom": null, "hl_context": "<hl> The legal landscape changed dramatically as a result of the 1973 ruling in Roe v . Wade , 65 in which the Supreme Court decided the right to privacy encompassed a right for women to terminate a pregnancy , at least under certain scenarios . <hl> <hl> The justices ruled that while the government did have an interest in protecting the “ potentiality of human life , ” nonetheless this had to be balanced against the interests of both women ’ s health and women ’ s right to decide whether to have an abortion . <hl> Accordingly , the court established a framework for deciding whether abortions could be regulated based on the fetus ’ s viability ( i . e . , potential to survive outside the womb ) and the stage of pregnancy , with no restrictions permissible during the first three months of pregnancy ( i . e . , the first trimester ) , during which abortions were deemed safer for women than childbirth itself . <hl> Although the term privacy does not appear in the Constitution or Bill of Rights , scholars have interpreted several Bill of Rights provisions as an indication that James Madison and Congress sought to protect a common-law right to privacy as it would have been understood in the late eighteenth century : a right to be free of government intrusion into our personal life , particularly within the bounds of the home . 
<hl> For example , we could perhaps see the Second Amendment as standing for the common-law right to self-defense in the home ; the Third Amendment as a statement that government soldiers should not be housed in anyone ’ s home ; the Fourth Amendment as setting a high legal standard for allowing agents of the state to intrude on someone ’ s home ; and the due process and takings clauses of the Fifth Amendment as applying an equally high legal standard to the government ’ s taking a home or property ( reinforced after the Civil War by the Fourteenth Amendment ) . Alternatively , we could argue that the Ninth Amendment anticipated the existence of a common-law right to privacy , among other rights , when it acknowledged the existence of basic , natural rights not listed in the Bill of Rights or the body of the Constitution itself . 60 Lawyers Samuel D . Warren and Louis Brandeis ( the latter a future Supreme Court justice ) famously developed the concept of privacy rights in a law review article published in 1890 . 61 However , the Tenth Amendment also allows states to guarantee rights and liberties more fully or extensively than the federal government does , or to include additional rights . For example , many state constitutions guarantee the right to a free public education , several states give victims of crimes certain rights , and eighteen states include the right to hunt game and / or fish . 57 A number of state constitutions explicitly guarantee equal rights for men and women . Some permitted women to vote before that right was expanded to all women with the Nineteenth Amendment in 1920 , and people aged 18 – 20 could vote in a few states before the Twenty-Sixth Amendment came into force in 1971 . <hl> As we will see below , several states also explicitly recognize a right to privacy . <hl> <hl> State courts at times have interpreted state constitutional provisions to include broader protections for basic liberties than their federal counterparts . <hl> For example , although in general people do not have the right to free speech and assembly on private property owned by others without their permission , California ’ s constitutional protection of freedom of expression was extended to portions of some privately owned shopping centers by the state ’ s supreme court ( Figure 4.19 ) . 58", "hl_sentences": "The legal landscape changed dramatically as a result of the 1973 ruling in Roe v . Wade , 65 in which the Supreme Court decided the right to privacy encompassed a right for women to terminate a pregnancy , at least under certain scenarios . The justices ruled that while the government did have an interest in protecting the “ potentiality of human life , ” nonetheless this had to be balanced against the interests of both women ’ s health and women ’ s right to decide whether to have an abortion . Although the term privacy does not appear in the Constitution or Bill of Rights , scholars have interpreted several Bill of Rights provisions as an indication that James Madison and Congress sought to protect a common-law right to privacy as it would have been understood in the late eighteenth century : a right to be free of government intrusion into our personal life , particularly within the bounds of the home . As we will see below , several states also explicitly recognize a right to privacy . 
State courts at times have interpreted state constitutional provisions to include broader protections for basic liberties than their federal counterparts .", "question": { "cloze_format": "The right to privacy has been controversial for all the following reasons except ________.", "normal_format": "The right to privacy has not been controversial for which of the following reasons? ", "question_choices": [ "it is not explicitly included in the Constitution or Bill of Rights", "it has been interpreted to protect women’s right to have an abortion", "it has been used to overturn laws that have substantial public support", "most U.S. citizens today believe the government should be allowed to outlaw birth control" ], "question_id": "fs-id1163757282759", "question_text": "The right to privacy has been controversial for all the following reasons except ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Starting in the 1980s , Supreme Court justices appointed by Republican presidents began to roll back the Roe decision . A key turning point was the court ’ s ruling in Planned Parenthood v . Casey in 1992 , in which a plurality of the court rejected Roe ’ s framework based on trimesters of pregnancy and replaced it with the undue burden test , which allows restrictions prior to viability that are not “ substantial obstacle [ s ] ” ( undue burdens ) to women seeking an abortion . 66 Thus , the court upheld some state restrictions , including a required waiting period between arranging and having an abortion , parental consent ( or , if not possible for some reason such as incest , authorization of a judge ) for minors , and the requirement that women be informed of the health consequences of having an abortion . <hl> Other restrictions such as a requirement that a married woman notify her spouse prior to an abortion were struck down as an undue burden . <hl> Since the Casey decision , many states have passed other restrictions on abortions , such as banning certain procedures , requiring women to have and view an ultrasound before having an abortion , and implementing more stringent licensing and inspection requirements for facilities where abortions are performed . Although no majority of Supreme Court justices has ever moved to overrule Roe , the restrictions on abortion the Court has upheld in the last few decades have made access to abortions more difficult in many areas of the country , particularly in rural states and communities along the U . S . – Mexico border ( Figure 4.20 ) . However , in Whole Woman ’ s Health v . Hellerstedt ( 2016 ) , the Court reinforced Roe 5 – 3 by disallowing two Texas state regulations regarding the delivery of abortion services . 67", "hl_sentences": "Other restrictions such as a requirement that a married woman notify her spouse prior to an abortion were struck down as an undue burden .", "question": { "cloze_format": "The rule that has the Supreme Court said is an undue burden on the right to have an abortion is that ___.", "normal_format": "Which of the following rules has the Supreme Court said is an undue burden on the right to have an abortion?", "question_choices": [ "Women must make more than one visit to an abortion clinic before the procedure can be performed.", "Minors must gain the consent of a parent or judge before seeking an abortion.", "Women must notify their spouses before having an abortion.", "Women must be informed of the health consequences of having an abortion." 
], "question_id": "fs-id1163757315717", "question_text": "Which of the following rules has the Supreme Court said is an undue burden on the right to have an abortion?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "laws in Europe more strictly regulate how government officials can use tracking technology" }, "bloom": null, "hl_context": "In the United States , many advocates of civil liberties are concerned that laws such as the USA PATRIOT Act ( i . e . , Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act ) , passed weeks after the 9/11 attacks in 2001 , have given the federal government too much power by making it easy for officials to seek and obtain search warrants or , in some cases , to bypass warrant requirements altogether . Critics have argued that the Patriot Act has largely been used to prosecute ordinary criminals , in particular drug dealers , rather than terrorists as intended . Most European countries , at least on paper , have opted for laws that protect against such government surveillance , perhaps mindful of past experience with communist and fascist regimes . <hl> European countries also tend to have stricter laws limiting the collection , retention , and use of private data by companies , which makes it harder for governments to obtain and use that data . <hl> Most recently , the battle between Apple Inc . and the National Security Agency ( NSA ) over whether Apple should allow the government access to key information that is encrypted has made the discussion of this tradeoff salient once again .", "hl_sentences": "European countries also tend to have stricter laws limiting the collection , retention , and use of private data by companies , which makes it harder for governments to obtain and use that data .", "question": { "cloze_format": "A major difference between most European countries and the United States today is ________.", "normal_format": "What is a major difference between most European countries and the United States today?", "question_choices": [ "most Europeans don’t use technologies that can easily be tracked", "laws in Europe more strictly regulate how government officials can use tracking technology", "there are more legal restrictions on how the U.S. government uses tracking technology than in Europe", "companies based in Europe don’t have to comply with U.S. privacy laws" ], "question_id": "fs-id1163757312923", "question_text": "A major difference between most European countries and the United States today is ________." }, "references_are_paraphrase": null } ]
Chapter 4
4.1 What Are Civil Liberties?
Learning Objectives
By the end of this section, you will be able to:
Define civil liberties and civil rights
Describe the origin of civil liberties in the U.S. context
Identify the key positions on civil liberties taken at the Constitutional Convention
Explain the Civil War origin of concern that the states should respect civil liberties
The U.S. Constitution—in particular, the first ten amendments that form the Bill of Rights—protects the freedoms and rights of individuals. It does not limit this protection just to citizens or adults; instead, in most cases, the Constitution simply refers to “persons,” which over time has grown to mean that even children, visitors from other countries, and immigrants—permanent or temporary, legal or undocumented—enjoy the same freedoms when they are in the United States or its territories as adult citizens do. So, whether you are a Japanese tourist visiting Disney World or someone who has stayed beyond the limit of days allowed on your visa, you do not sacrifice your liberties. In everyday conversation, we tend to treat freedoms, liberties, and rights as being effectively the same thing—similar to how separation of powers and checks and balances are often used as if they are interchangeable, when in fact they are distinct concepts.
DEFINING CIVIL LIBERTIES
To be more precise in their language, political scientists and legal experts make a distinction between civil liberties and civil rights, even though the Constitution has been interpreted to protect both. We typically envision civil liberties as being limitations on government power, intended to protect freedoms that governments may not legally intrude on. For example, the First Amendment denies the government the power to prohibit “the free exercise” of religion; the states and the national government cannot forbid people to follow a religion of their choice, even if politicians and judges think the religion is misguided, blasphemous, or otherwise inappropriate. You are free to create your own religion and recruit followers to it (subject to the U.S. Supreme Court deeming it a religion), even if both society and government disapprove of its tenets. That said, the way you practice your religion may be regulated if it impinges on the rights of others. Similarly, the Eighth Amendment says the government cannot impose “cruel and unusual punishments” on individuals for their criminal acts. Although the definitions of cruel and unusual have expanded over the years, as we will see later in this chapter, the courts have generally and consistently interpreted this provision as making it unconstitutional for government officials to torture suspects. Civil rights, on the other hand, are guarantees that government officials will treat people equally and that decisions will be made on the basis of merit rather than race, gender, or other personal characteristics. Because of the Constitution’s civil rights guarantee, it is unlawful for a school or university run by a state government to treat students differently based on their race, ethnicity, age, sex, or national origin. In the 1960s and 1970s, many states had separate schools where only students of a certain race or gender were able to study. However, the courts decided that these policies violated the civil rights of students who could not be admitted because of those rules. 1 The idea that Americans—indeed, people in general—have fundamental rights and liberties was at the core of the arguments in favor of their independence.
In writing the Declaration of Independence in 1776, Thomas Jefferson drew on the ideas of John Locke to express the colonists’ belief that they had certain inalienable or natural rights that no ruler had the power or authority to deny to his or her subjects. It was a scathing legal indictment of King George III for violating the colonists’ liberties. Although the Declaration of Independence does not guarantee specific freedoms, its language was instrumental in inspiring many of the states to adopt protections for civil liberties and rights in their own constitutions, and in expressing principles of the founding era that have resonated in the United States since its independence. In particular, Jefferson’s words “all men are created equal” became the centerpiece of struggles for the rights of women and minorities (Figure 4.2).
Link to Learning
Founded in 1920, the American Civil Liberties Union (ACLU) is one of the oldest interest groups in the United States. The mission of this non-partisan, not-for-profit organization is “to defend and preserve the individual rights and liberties guaranteed to every person in this country by the Constitution and laws of the United States.” Many of the Supreme Court cases in this chapter were litigated by, or with the support of, the ACLU. The ACLU offers a listing of state and local chapters on their website.
CIVIL LIBERTIES AND THE CONSTITUTION
The Constitution as written in 1787 did not include a Bill of Rights, although the idea of including one was proposed and, after brief discussion, dismissed in the final week of the Constitutional Convention. The framers of the Constitution believed they faced much more pressing concerns than the protection of civil rights and liberties, most notably keeping the fragile union together in the light of internal unrest and external threats. Moreover, the framers thought that they had adequately covered rights issues in the main body of the document. Indeed, the Federalists did include in the Constitution some protections against legislative acts that might restrict the liberties of citizens, based on the history of real and perceived abuses by both British kings and parliaments as well as royal governors. In Article I, Section 9, the Constitution limits the power of Congress in three ways: prohibiting the passage of bills of attainder, prohibiting ex post facto laws, and limiting the ability of Congress to suspend the writ of habeas corpus. A bill of attainder is a law that convicts or punishes someone for a crime without a trial, a tactic used fairly frequently in England against the king’s enemies. Prohibition of such laws means that the U.S. Congress cannot simply punish people who are unpopular or seem to be guilty of crimes. An ex post facto law has a retroactive effect: it can be used to punish crimes that were not crimes at the time they were committed, or it can be used to increase the severity of punishment after the fact. Finally, the writ of habeas corpus is used in our common-law legal system to demand that a neutral judge decide whether someone has been lawfully detained. Particularly in times of war, or even in response to threats against national security, the government has held suspected enemy agents without access to civilian courts, often without access to lawyers or a defense, seeking instead to try them before military tribunals or detain them indefinitely without trial.
For example, during the Civil War, President Abraham Lincoln detained suspected Confederate saboteurs and sympathizers in Union-controlled states and attempted to have them tried in military courts, leading the Supreme Court to rule in Ex parte Milligan that the government could not bypass the civilian court system in states where it was operating. 2 During World War II, the Roosevelt administration interned Japanese Americans and had other suspected enemy agents—including U.S. citizens—tried by military courts rather than by the civilian justice system, a choice the Supreme Court upheld in Ex parte Quirin (Figure 4.3). 3 More recently, in the wake of the 9/11 attacks on the World Trade Center and the Pentagon, the Bush and Obama administrations detained suspected terrorists captured both within and outside the United States and sought, with mixed results, to avoid trials in civilian courts. Hence, there have been times in our history when national security issues trumped individual liberties. Debate has always swirled over these issues. The Federalists reasoned that the limited set of enumerated powers of Congress, along with the limitations on those powers in Article I, Section 9, would suffice, and no separate bill of rights was needed. Alexander Hamilton, writing as Publius in Federalist No. 84, argued that the Constitution was “merely intended to regulate the general political interests of the nation,” rather than to concern itself with “the regulation of every species of personal and private concerns.” Hamilton went on to argue that listing some rights might actually be dangerous, because it would provide a pretext for people to claim that rights not included in such a list were not protected. Later, James Madison, in his speech introducing the proposed amendments that would become the Bill of Rights, acknowledged another Federalist argument: “It has been said, that a bill of rights is not necessary, because the establishment of this government has not repealed those declarations of rights which are added to the several state constitutions.” 4 For that matter, the Articles of Confederation had not included a specific listing of rights either. However, the Anti-Federalists argued that the Federalists’ position was incorrect and perhaps even insincere. The Anti-Federalists believed provisions such as the elastic clause in Article I, Section 8, of the Constitution would allow Congress to legislate on matters well beyond the limited ones foreseen by the Constitution’s authors; thus, they held that a bill of rights was necessary. One of the Anti-Federalists, Brutus, whom most scholars believe to be Robert Yates, wrote: “The powers, rights, and authority, granted to the general government by this Constitution, are as complete, with respect to every object to which they extend, as that of any state government—It reaches to every thing which concerns human happiness—Life, liberty, and property, are under its controul [sic]. There is the same reason, therefore, that the exercise of power, in this case, should be restrained within proper limits, as in that of the state governments.” 5 The experience of the past two centuries has suggested that the Anti-Federalists may have been correct in this regard; while the states retain a great deal of importance, the scope and powers of the national government are much broader today than in 1787—likely beyond even the imaginings of the Federalists themselves.
The struggle to have rights clearly delineated and the decision of the framers to omit a bill of rights nearly derailed the ratification process. While some of the states were willing to ratify without any further guarantees, in some of the larger states—New York and Virginia in particular—the Constitution’s lack of specified rights became a serious point of contention. The Constitution could go into effect with the support of only nine states, but the Federalists knew it could not be effective without the participation of the largest states. To secure majorities in favor of ratification in New York and Virginia, as well as Massachusetts, they agreed to consider incorporating provisions suggested by the ratifying states as amendments to the Constitution. Ultimately, James Madison delivered on this promise by proposing a package of amendments in the First Congress, drawing from the Declaration of Rights in the Virginia state constitution, suggestions from the ratification conventions, and other sources, which were extensively debated in both houses of Congress and ultimately proposed as twelve separate amendments for ratification by the states. Ten of the amendments were successfully ratified by the requisite 75 percent of the states and became known as the Bill of Rights (Table 4.1).
Table 4.1 Rights and Liberties Protected by the First Ten Amendments
First Amendment: Right to freedoms of religion and speech; right to assemble and to petition the government for redress of grievances
Second Amendment: Right to keep and bear arms to maintain a well-regulated militia
Third Amendment: Right to not house soldiers during time of war
Fourth Amendment: Right to be secure from unreasonable search and seizure
Fifth Amendment: Rights in criminal cases, including due process and indictment by grand jury for capital crimes, as well as the right not to testify against oneself
Sixth Amendment: Right to a speedy trial by an impartial jury
Seventh Amendment: Right to a jury trial in civil cases
Eighth Amendment: Right to not face excessive bail, excessive fines, or cruel and unusual punishment
Ninth Amendment: Rights retained by the people, even if they are not specifically enumerated by the Constitution
Tenth Amendment: States’ rights to powers not specifically delegated to the federal government
Finding a Middle Ground
Debating the Need for a Bill of Rights
One of the most serious debates between the Federalists and the Anti-Federalists was over the necessity of limiting the power of the new federal government with a Bill of Rights. As we saw in this section, the Federalists believed a Bill of Rights was unnecessary—and perhaps even dangerous to liberty, because it might invite violations of rights that weren’t included in it—while the Anti-Federalists thought the national government would prove adept at expanding its powers and influence and that citizens couldn’t depend on the good judgment of Congress alone to protect their rights. As George Washington’s call for a bill of rights in his first inaugural address suggested, the Federalists ultimately had to add the Bill of Rights to the Constitution in order to win ratification, and the Anti-Federalists would soon be proved right that the national government might intrude on civil liberties. In 1798, at the behest of President John Adams during the Quasi-War with France, Congress passed a series of four laws collectively known as the Alien and Sedition Acts.
These were drafted to allow the president to imprison or deport foreign citizens he believed were “dangerous to the peace and safety of the United States” and to restrict speech and newspaper articles that were critical of the federal government or its officials; the laws were primarily used against members and supporters of the opposition Democratic-Republican Party. State laws and constitutions protecting free speech and freedom of the press proved ineffective in limiting this new federal power. Although the courts did not decide on the constitutionality of these laws at the time, most scholars believe the Sedition Act, in particular, would have been found unconstitutional had it remained in effect. Three of the four laws were repealed during the Jefferson administration, but one—the Alien Enemies Act—remains on the books today. Two centuries later, the issue of free speech and freedom of the press during times of international conflict remains a subject of public debate. Should the government be able to restrict or censor unpatriotic, disloyal, or critical speech in times of international conflict? How much freedom should journalists have to report on stories from the perspective of enemies or to repeat propaganda from opposing forces?

EXTENDING THE BILL OF RIGHTS TO THE STATES

In the decades following the Constitution’s ratification, the Supreme Court declined to expand the Bill of Rights to curb the power of the states, most notably in the 1833 case of Barron v. Baltimore. 6 In this case, which dealt with property rights under the Fifth Amendment, the Supreme Court unanimously decided that the Bill of Rights applied only to actions by the federal government. Explaining the court’s ruling, Chief Justice John Marshall wrote that it was incorrect to argue that “the Constitution was intended to secure the people of the several states against the undue exercise of power by their respective state governments; as well as against that which might be attempted by their [Federal] government.” In the wake of the Civil War, however, the prevailing thinking about the application of the Bill of Rights to the states changed. Soon after slavery was abolished by the Thirteenth Amendment, state governments—particularly those in the former Confederacy—began to pass “black codes” that restricted the rights of former slaves and effectively relegated them to second-class citizenship under their state laws and constitutions. Angered by these actions, members of the Radical Republican faction in Congress demanded that the laws be overturned. In the short term, they advocated suspending civilian government in most of the southern states and replacing politicians who had enacted the black codes. Their long-term solution was to propose two amendments to the Constitution to guarantee the rights of freed slaves on an equal standing with white people; these proposals became the Fourteenth Amendment, which dealt with civil liberties and rights in general, and the Fifteenth Amendment, which protected the right to vote in particular (Figure 4.4). But the right to vote did not yet apply to women or to Native Americans. With the ratification of the Fourteenth Amendment in 1868, civil liberties gained more clarification.
First, the amendment says, “no State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States,” which is a provision that echoes the privileges and immunities clause in Article IV, Section 2, of the original Constitution ensuring that states treat citizens of other states the same as their own citizens. (To use an example from today, the punishment for speeding by an out-of-state driver cannot be more severe than the punishment for an in-state driver.) Legal scholars and the courts have extensively debated the meaning of this privileges or immunities clause over the years; some have argued that it was supposed to extend the entire Bill of Rights (or at least the first eight amendments) to the states, while others have argued that only some rights are extended. In 1999, Justice John Paul Stevens, writing for a majority of the Supreme Court, argued in Saenz v. Roe that the clause protects the right to travel from one state to another. 7 More recently, Justice Clarence Thomas argued in the 2010 McDonald v. Chicago ruling that the individual right to bear arms applied to the states because of this clause. 8 The second provision of the Fourteenth Amendment that pertains to applying the Bill of Rights to the states is the due process clause, which says, “nor shall any State deprive any person of life, liberty, or property, without due process of law.” This provision is similar to the Fifth Amendment in that it also refers to “due process,” a term that generally means people must be treated fairly and impartially by government officials (or with what is commonly referred to as substantive due process). Although the text of the provision does not mention rights specifically, the courts have held in a series of cases that it indicates there are certain fundamental liberties that cannot be denied by the states. For example, in Sherbert v. Verner (1963), the Supreme Court ruled that states could not deny unemployment benefits to an individual who turned down a job because it required working on the Sabbath. 9 Beginning in 1897, the Supreme Court has found that various provisions of the Bill of Rights protecting these fundamental liberties must be upheld by the states, even if their state constitutions and laws do not protect them as fully as the Bill of Rights does—or at all. This means there has been a process of selective incorporation of the Bill of Rights into the practices of the states; in other words, the Constitution effectively inserts parts of the Bill of Rights into state laws and constitutions, even though it doesn’t do so explicitly. When cases arise to clarify particular issues and procedures, the Supreme Court decides whether state laws violate the Bill of Rights and are therefore unconstitutional. For example, under the Fifth Amendment a person can be tried in federal court for a felony—a serious crime—only after a grand jury issues an indictment indicating that it is reasonable to try the person for the crime in question. (A grand jury is a group of citizens charged with deciding whether there is enough evidence of a crime to prosecute someone.) But the Supreme Court has ruled that states don’t have to use grand juries as long as they ensure people accused of crimes are indicted using an equally fair process. Selective incorporation is an ongoing process.
When the Supreme Court initially decided in 2008 that the Second Amendment protects an individual’s right to keep and bear arms, it did not decide then that it was a fundamental liberty the states must uphold as well. It was only in the McDonald v. Chicago case two years later that the Supreme Court incorporated the Second Amendment into state law. Another area in which the Supreme Court gradually moved to incorporate the Bill of Rights regards censorship and the Fourteenth Amendment. In Near v. Minnesota (1931), the Court disagreed with state courts regarding censorship and ruled that prior restraint of the press was unconstitutional except in rare cases. 10

4.2 Securing Basic Freedoms

Learning Objectives
By the end of this section, you will be able to:
Identify the liberties and rights guaranteed by the first four amendments to the Constitution
Explain why in practice these rights and liberties are limited
Explain why interpreting some amendments has been controversial

We can broadly divide the provisions of the Bill of Rights into three categories. The First, Second, Third, and Fourth Amendments protect basic individual freedoms; the Fourth (partly), Fifth, Sixth, Seventh, and Eighth protect people suspected or accused of criminal activity or facing civil litigation; and the Ninth and Tenth are consistent with the framers’ view that the Bill of Rights is not necessarily an exhaustive list of all the rights people have and guarantees a role for state as well as federal government (Figure 4.5). The First Amendment protects the right to freedom of religious conscience and practice and the right to free expression, particularly of political and social beliefs. The Second Amendment—perhaps the most controversial today—protects the right to defend yourself in your home or other property, as well as the collective right to protect the community as part of the militia. The Third Amendment prohibits the government from commandeering people’s homes to house soldiers, particularly in peacetime. Finally, the Fourth Amendment prevents the government from searching our persons or property or taking evidence without a warrant issued by a judge, with certain exceptions.

THE FIRST AMENDMENT

The First Amendment is perhaps the most famous provision of the Bill of Rights; it is arguably also the most extensive, because it guarantees both religious freedoms and the right to express your views in public. Specifically, the First Amendment says: “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.” Given the broad scope of this amendment, it is helpful to break it into its two major parts. The first portion deals with religious freedom. However, it actually protects two related sorts of freedom: first, it protects people from having a set of religious beliefs imposed on them by the government, and second, it protects people from having their own religious beliefs restricted by government authorities.

The Establishment Clause
The first of these two freedoms is known as the establishment clause. Congress is prohibited from creating or promoting a state-sponsored religion (this now includes the states too). When the United States was founded, most countries around the world had an established church or religion, an officially sponsored set of religious beliefs and values.
In Europe, bitter wars were fought between and within states, often because the established church of one territory was in conflict with that of another; wars and civil strife were common, particularly between states with Protestant and Catholic churches that had differing interpretations of Christianity. Even today, the legacy of these wars remains, most notably in Ireland, which has been divided between a mostly Catholic south and a largely Protestant north for nearly a century. Many settlers in the United States found themselves on this continent as refugees from such wars; others came to find a place where they could follow their own religion with like-minded people in relative peace. So as a practical matter, even if the early United States had wanted to establish a single national religion, the diversity of religious beliefs would already have prevented it. Nonetheless, the differences were small; most people were of European origin and professed some form of Christianity (although in private some of the founders, most notably Thomas Jefferson, Thomas Paine, and Benjamin Franklin, held what today would be seen as Unitarian and/or deistic views). So for much of U.S. history, the establishment clause was not particularly important—the vast majority of citizens were Protestant Christians of some form, and since the federal government was relatively uninvolved in the day-to-day lives of the people, there was little opportunity for conflict. That said, there were some citizenship and office-holding restrictions on Jews within some of the states. Worry about state sponsorship of religion in the United States began to reemerge in the latter part of the nineteenth century. An influx of immigrants from Ireland and eastern and southern Europe brought large numbers of Catholics, and states—fearing the new immigrants and their children would not assimilate—passed laws forbidding government aid to religious schools. New religious organizations, such as The Church of Jesus Christ of Latter-day Saints (the Mormon Church), Seventh-day Adventists, Jehovah’s Witnesses, and many others, also emerged, blending aspects of Protestant beliefs with other ideas and teachings at odds with the more traditional Protestant churches of the era. At the same time, public schooling was beginning to take root on a wide scale. Since most states had traditional Protestant majorities and most state officials were Protestants themselves, the public school curriculum incorporated many Protestant features; at times, these features would come into conflict with the beliefs of children from other Christian sects or from other religious traditions. The establishment clause today tends to be interpreted a bit more broadly than in the past; it not only forbids the creation of a “Church of the United States” or “Church of Ohio,” it also forbids the government from favoring one set of religious beliefs over others or favoring religion (of any variety) over non-religion. Thus, the government cannot promote, say, Islamic beliefs over Sikh beliefs or belief in God over atheism or agnosticism (Figure 4.6). The key question that faces the courts is whether the establishment clause should be understood as imposing, in Thomas Jefferson’s words, “a wall of separation between church and state.” In a 1971 case known as Lemon v. Kurtzman, the Supreme Court established the Lemon test for deciding whether a law or other government action that might promote a particular religious practice should be allowed to stand. 11
The Lemon test has three criteria that must be satisfied for such a law or action to be found constitutional and remain in effect:
1. The action or law must not lead to excessive government entanglement with religion; in other words, policing the boundary between government and religion should be relatively straightforward and not require extensive effort by the government.
2. The action or law cannot either inhibit or advance religious practice; it should be neutral in its effects on religion.
3. The action or law must have some secular purpose; there must be some non-religious justification for the law.
For example, imagine your state decides to fund a school voucher program that allows children to attend private and parochial schools at public expense; the vouchers can be used to pay for school books and transportation to and from school. Would this voucher program be constitutional? Let’s start with the secular-purpose prong of the test. Educating children is a clear, non-religious purpose, so the law has a secular purpose. The law would neither inhibit nor advance religious practice, so that prong would be satisfied. The remaining question—and usually the one on which court decisions turn—is whether the law leads to excessive government entanglement with religious practice. Given that transportation and school books generally have no religious purpose, there is little risk that paying for them would lead the state into much entanglement with religion. The decision would become more difficult if the funding were unrestricted in use or helped to pay for facilities or teacher salaries; if that were the case, it might indeed be used for a religious purpose, and it would be harder for the government to ensure that it wasn’t without audits or other investigations that could lead to too much government entanglement with religion. The use of education as an example is not an accident; in fact, many of the court’s cases dealing with the establishment clause have involved education, particularly public education, because school-age children are considered a special and vulnerable population. Perhaps no subject affected by the First Amendment has been more controversial than the issue of prayer in public schools. Discussion about school prayer has been particularly fraught because in many ways it appears to bring the two religious liberty clauses into conflict with each other. The free exercise clause, discussed below, guarantees the right of individuals to practice their religion without government interference—and while the rights of children are not as extensive in all areas as those of adults, the courts have consistently ruled that the free exercise clause’s guarantee of religious freedom applies to children as well. At the same time, however, government actions that require or encourage particular religious practices might infringe upon children’s rights to follow their own religious beliefs and thus, in effect, be unconstitutional establishments of religion. For example, a teacher, an athletic coach, or even a student reciting a prayer in front of a class or leading students in prayer as part of organized school activities constitutes an illegal establishment of religion. 12 Yet a school cannot prohibit voluntary, non-disruptive prayer by its students, because that would impair the free exercise of religion.
So although the blanket statement that “prayer in schools is illegal” or unconstitutional is incorrect, the establishment clause does limit official endorsement of religion, including prayers organized or otherwise facilitated by school authorities, even as part of off-campus or extracurricular activities. 13 But some laws that may appear to establish certain religious practices are allowed. For example, the courts have permitted religiously inspired blue laws that limit working hours or even shutter businesses on Sunday, the Christian day of rest, because by allowing people to practice their (Christian) faith, such rules may help ensure the “health, safety, recreation, and general well-being” of citizens. They have allowed restrictions on the sale of alcohol and sometimes other goods on Sunday for similar reasons. The meaning of the establishment clause has been controversial at times because, as a matter of course, government officials acknowledge that we live in a society with vigorous religious practice where most people believe in God—even if we disagree on what God is. Disputes often arise over how much the government can acknowledge this widespread religious belief. The courts have generally allowed for a certain tolerance of what is described as ceremonial deism, an acknowledgement of God or a creator that generally lacks any substantive religious content. For example, the national motto “In God We Trust,” which appears on our coins and paper money (Figure 4.7), is seen as more an acknowledgment that most citizens believe in God than any serious effort by government officials to promote religious belief and practice. This reasoning has also been used to permit the inclusion of the phrase “under God” in the Pledge of Allegiance—a change that came about during the early years of the Cold War as a means of contrasting the United States with the “godless” Soviet Union. In addition, the courts have allowed some religiously motivated actions by government agencies, such as clergy delivering prayers to open city council meetings and legislative sessions, on the presumption that—unlike school children—adult participants can distinguish between the government’s allowing someone to speak and endorsing that person’s speech. Yet, while some displays of religious codes (e.g., the Ten Commandments) are permitted in the context of showing the evolution of law over the centuries (Figure 4.7), in other cases, these displays have been removed after state supreme court rulings. In Oklahoma, the courts ordered the removal of a Ten Commandments sculpture at the state capitol when other groups, including Satanists and the Church of the Flying Spaghetti Monster, attempted to get their own sculptures allowed there.

The Free Exercise Clause
The free exercise clause, on the other hand, limits the ability of the government to control or restrict religious practices. This portion of the First Amendment regulates not the government’s promotion of religion, but rather government suppression of religious beliefs and practices. Much of the controversy surrounding the free exercise clause reflects the way laws or rules that apply to everyone might apply to people with particular religious beliefs. For example, can a Jewish police officer whose religious belief, if followed strictly, requires her to observe Shabbat be compelled to work on a Friday night or during the day on Saturday?
Or must the government accommodate this religious practice, even if it means the general law or rule in question is not applied equally to everyone? In the 1930s and 1940s, cases involving Jehovah’s Witnesses demonstrated the difficulty of striking the right balance. In addition to following their church’s teaching that they should not participate in military combat, members refuse to participate in displays of patriotism, including saluting the flag and reciting the Pledge of Allegiance, and they regularly engage in door-to-door evangelism to recruit converts. These activities have led to frequent conflict with local authorities. Jehovah’s Witness children were punished in public schools for failing to salute the flag or recite the Pledge of Allegiance, and members attempting to evangelize were arrested for violating laws against door-to-door solicitation of customers. In early legal challenges brought by Jehovah’s Witnesses, the Supreme Court was reluctant to overturn state and local laws that burdened their religious beliefs. 14 However, in later cases, the court was willing to uphold the rights of Jehovah’s Witnesses to proselytize and refuse to salute the flag or recite the Pledge. 15 The rights of conscientious objectors—individuals who claim the right to refuse to perform military service on the grounds of freedom of thought, conscience, or religion—have also been controversial, although many conscientious objectors have contributed service as non-combatant medics during wartime. To avoid serving in the Vietnam War, many people claimed to have a conscientious objection to military service on the basis that they believed this particular war was unwise or unjust. However, the Supreme Court ruled in Gillette v. United States that to claim to be a conscientious objector, a person must be opposed to serving in any war, not just some wars. 16 Establishing a general framework for deciding whether a religious belief can trump general laws and policies has been a challenge for the Supreme Court. In the 1960s and 1970s, the court decided two cases in which it laid out a general test for deciding similar cases in the future. In both Sherbert v. Verner, a case dealing with unemployment compensation, and Wisconsin v. Yoder, which dealt with the right of Amish parents to withdraw their children from formal schooling after the eighth grade, the court said that for a law to be allowed to limit or burden a religious practice, the government must meet two criteria. 17 It must demonstrate both that it had a “compelling governmental interest” in limiting that practice and that the restriction was “narrowly tailored.” In other words, it must show there was a very good reason for the law in question and that the law was the only feasible way of achieving that goal. This standard became known as the Sherbert test. Since the burden of proof in these cases was on the government, the Supreme Court made it very difficult for the federal and state governments to enforce laws against individuals that would infringe upon their religious beliefs. In 1990, the Supreme Court made a controversial decision substantially narrowing the Sherbert test in Employment Division v. Smith, more popularly known as “the peyote case.” 18 This case involved two men who were members of the Native American Church, a religious organization that uses the hallucinogenic peyote plant as part of its sacraments. After being arrested for possession of peyote, the two men were fired from their jobs as counselors at a private drug rehabilitation clinic.
When they applied for unemployment benefits, the state refused to pay on the basis that they had been dismissed for work-related reasons. The men appealed the denial of benefits and were initially successful, since the state courts applied the Sherbert test and found that the denial of unemployment benefits burdened their religious beliefs. However, the Supreme Court ruled in a 6–3 decision that the “compelling governmental interest” standard should not apply; instead, so long as the law was not designed to target a person’s religious beliefs in particular, it was not up to the courts to decide that those beliefs were more important than the law in question. On the surface, a case involving the Native American Church seems unlikely to arouse much controversy. But because it replaced the Sherbert test with one that allowed more government regulation of religious practices, followers of other religious traditions grew concerned that state and local laws, even ones neutral on their face, might be used to curtail their religious practices. In 1993, in response to this decision, Congress passed a law known as the Religious Freedom Restoration Act (RFRA), which was followed in 2000 by the Religious Land Use and Institutionalized Persons Act after part of the RFRA was struck down by the Supreme Court. In addition, since 1990, twenty-one states have passed state RFRAs that include the Sherbert test in state law, and state court decisions in eleven states have enshrined the Sherbert test’s compelling governmental interest interpretation of the free exercise clause into state law. 19 However, the RFRA itself has not been without its critics. While it has been relatively uncontroversial as applied to the rights of individuals, debate has emerged about whether businesses and other groups can be said to have religious liberty. In explicitly religious organizations, such as a fundamentalist congregation (fundamentalists adhere very strictly to biblical absolutes) or the Roman Catholic Church, it is fairly obvious members have a meaningful, shared religious belief. But the application of the RFRA has become more problematic in businesses and non-profit organizations whose owners or organizers may share a religious belief while the organization has some secular, non-religious purpose. Such a conflict emerged in the 2014 Supreme Court case known as Burwell v. Hobby Lobby. 20 The Hobby Lobby chain of stores sells arts and crafts merchandise at hundreds of stores; its founder, David Green, is a devout fundamentalist Christian whose beliefs include opposition to abortion and contraception. Consistent with these beliefs, he used his business to object to a provision of the Patient Protection and Affordable Care Act (ACA or Obamacare) requiring employer-backed insurance plans to include no-charge access to the morning-after pill, a form of emergency contraception, arguing that this requirement infringed on his conscience. Based in part on the federal RFRA, the Supreme Court agreed 5–4 with Green and Hobby Lobby’s position and said that Hobby Lobby and other closely held businesses did not have to provide employees free access to emergency contraception or other birth control if doing so would violate the religious beliefs of the business’ owners, because there were other less restrictive ways the government could ensure access to these services for Hobby Lobby’s employees (e.g., paying for them directly).
In 2015, state RFRAs became controversial when individuals and businesses that provided wedding services (e.g., catering and photography) were compelled to provide these for same-sex weddings in states where the practice had been newly legalized (Figure 4.8). Proponents of state RFRA laws argued that people and businesses ought not be compelled to endorse practices their religious beliefs held to be immoral or indecent and feared clergy might be compelled to officiate same-sex marriages against their religion’s teachings. Opponents of RFRA laws argued that individuals and businesses should be required, per Obergefell v. Hodges, to serve same-sex weddings on an equal basis as a matter of ensuring the civil rights of gays and lesbians, just as they would be obliged to cater or photograph an interracial marriage. 21 Despite ongoing controversy, however, the courts have consistently found some public interests sufficiently compelling to override the free exercise clause. For example, since the late nineteenth century, the courts have consistently held that people’s religious beliefs do not exempt them from the general laws against polygamy. Other acts carried out in the name of religion, such as drug use and human sacrifice, are likewise out of the question.

Freedom of Expression
Although the remainder of the First Amendment protects four distinct rights—free speech, press, assembly, and petition—we generally think of these rights today as encompassing a right to freedom of expression, particularly since the world’s technological evolution has blurred the lines between oral and written communication (i.e., speech and press) in the centuries since the First Amendment was written and adopted. Controversies over freedom of expression were rare until the 1900s, even though government censorship was quite common. For example, during the Civil War, the Union post office refused to deliver newspapers that opposed the war or sympathized with the Confederacy, while allowing pro-war newspapers to be mailed. The emergence of photography and movies, in particular, led to new public concerns about morality, causing both state and federal politicians to censor lewd and otherwise improper content. At the same time, writers became more ambitious in their subject matter by including explicit references to sex and using obscene language, leading to government censorship of books and magazines. Censorship reached its height during World War I. The United States was swept up in two waves of hysteria. Anti-German feeling was provoked by the actions of Germany and its allies leading up to the war, including the sinking of the RMS Lusitania and the Zimmermann Telegram, an effort by the Germans to conclude an alliance with Mexico against the United States. This concern was compounded in 1917 by the Bolshevik revolution against the more moderate provisional government of Russia; the leaders of the Bolsheviks, most notably Vladimir Lenin, Leon Trotsky, and Joseph Stalin, withdrew from the war against Germany and called for communist revolutionaries to overthrow the capitalist, democratic governments in western Europe and North America. Americans who vocally supported the communist cause or opposed the war often found themselves in jail. In Schenck v.
United States, the Supreme Court ruled that people encouraging young men to dodge the draft could be imprisoned for doing so, arguing that recommending that people disobey the law was tantamount to “falsely shouting fire in a theatre and causing a panic” and thus presented a “clear and present danger” to public order. 22 Similarly, communists and other revolutionary anarchists and socialists during the Red Scare after the war were prosecuted under various state and federal laws for supporting the forceful or violent overthrow of government. This general approach to political speech remained in place for the next fifty years. In the 1960s, however, the Supreme Court’s rulings on free expression became more liberal, in response to the Vietnam War and the growing antiwar movement. In a 1969 case involving the Ku Klux Klan, Brandenburg v. Ohio, the Supreme Court found that only speech or writing that constituted a direct call or plan to imminent lawless action, an illegal act in the immediate future, could be suppressed; the mere advocacy of a hypothetical revolution was not enough. 23 The Supreme Court also found that various forms of symbolic speech—wearing clothing like an armband that carried a political symbol or raising a fist in the air, for example—were subject to the same protections as written and spoken communication.

Milestone
Burning the U.S. Flag
Perhaps no act of symbolic speech has been as controversial in U.S. history as the burning of the flag (Figure 4.9). Citizens tend to revere the flag as a unifying symbol of the country in much the same way most people in Britain would treat the reigning queen (or king). States and the federal government have long had laws protecting the flag from being desecrated—defaced, damaged, or otherwise treated with disrespect. Perhaps in part because of these laws, people who have wanted to drive home a point in opposition to U.S. government policies have found desecrating the flag a useful way to gain public and press attention to their cause. One such person was Gregory Lee Johnson, a member of various pro-communist and antiwar groups. In 1984, as part of a protest near the Republican National Convention in Dallas, Texas, Johnson set fire to a U.S. flag that another protestor had torn from a flagpole. He was arrested, charged with “desecration of a venerated object” (among other offenses), and eventually convicted of that offense. However, in 1989, the Supreme Court decided in Texas v. Johnson that burning the flag was a form of symbolic speech protected by the First Amendment and found the law, as applied to flag desecration, to be unconstitutional. 24 This court decision was strongly criticized, and Congress responded by passing a federal law, the Flag Protection Act, intended to overrule it; the act, too, was struck down as unconstitutional in 1990. 25 Since then, Congress has attempted on several occasions to propose constitutional amendments allowing the states and federal government to re-criminalize flag desecration—to no avail. Should we amend the Constitution to allow Congress or the states to pass laws protecting the U.S. flag from desecration? Should we protect other symbols as well? Why or why not?

Freedom of the press is an important component of the right to free expression as well. In Near v.
Minnesota, an early case regarding press freedoms, the Supreme Court ruled that the government generally could not engage in prior restraint; that is, states and the federal government could not in advance prohibit someone from publishing something without a very compelling reason. 26 This standard was reinforced in 1971 in the Pentagon Papers case, in which the Supreme Court found that the government could not prohibit the New York Times and Washington Post newspapers from publishing the Pentagon Papers. 27 These papers included materials from a secret history of the Vietnam War that had been compiled by the military. More specifically, the papers were compiled at the request of Secretary of Defense Robert McNamara and provided a study of U.S. political and military involvement in Vietnam from 1945 to 1967. Daniel Ellsberg famously released passages of the Papers to the press to show that the United States had secretly enlarged the scope of the war by bombing Cambodia and Laos, among other actions, while lying to the American public about doing so. Although people who leak secret information to the media can still be prosecuted and punished, this does not generally extend to reporters and news outlets that pass that information on to the public. The Edward Snowden case is another good example: Snowden himself, rather than the journalists and news outlets that published the information he shared, has been the target of criminal prosecution. Furthermore, the courts have recognized that government officials and other public figures might try to silence press criticism and avoid unfavorable news coverage by threatening a lawsuit for defamation of character. In the 1964 New York Times v. Sullivan case, the Supreme Court decided that public figures needed to demonstrate not only that a negative press statement about them was untrue but also that the statement was published or made with “actual malice,” that is, with knowledge of its falsity or with reckless disregard for the truth. 28 This ruling made it much harder for politicians to silence potential critics or to bankrupt their political opponents through the courts. The right to freedom of expression is not absolute; several key restrictions limit our ability to speak or publish opinions under certain circumstances. We have seen that the Constitution protects most forms of offensive and unpopular expression, particularly political speech; however, incitement of a criminal act, “fighting words,” and genuine threats are not protected. So, for example, you can’t point at someone in front of an angry crowd and shout, “Let’s beat up that guy!” And the Supreme Court has allowed laws that ban threatening symbolic speech, such as burning a cross on the lawn of an African American family’s home (Figure 4.10). 29 Finally, as we’ve just seen, defamation of character—whether in written form (libel) or spoken form (slander)—is not protected by the First Amendment, so people who are subject to false accusations can sue to recover damages, although criminal prosecutions of libel and slander are uncommon. Another key exception to the right to freedom of expression is obscenity, acts or statements that are extremely offensive under current societal standards.
Defining obscenity has been something of a challenge for the courts; Supreme Court Justice Potter Stewart famously said of obscenity, having watched pornography in the Supreme Court building, “I know it when I see it.” Into the early twentieth century, written work was frequently banned as being obscene, including works by noted authors such as James Joyce and Henry Miller, although today it is rare for the courts to uphold obscenity charges for written material alone. In 1973, the Supreme Court established the Miller test for deciding whether something is obscene: “(a) whether the average person, applying contemporary community standards, would find that the work, taken as a whole, appeals to the prurient interest, (b) whether the work depicts or describes, in a patently offensive way, sexual conduct specifically defined by the applicable state law; and (c) whether the work, taken as a whole, lacks serious literary, artistic, political, or scientific value.” 30 However, the application of this standard has at times been problematic. In particular, the concept of “contemporary community standards” raises the possibility that obscenity varies from place to place; many people in New York or San Francisco might not bat an eye at something people in Memphis or Salt Lake City would consider offensive. The one form of obscenity that has been banned almost without challenge is child pornography, although even in this area the courts have found exceptions. The courts have allowed censorship of less-than-obscene content when it is broadcast over the airwaves, particularly when it is available for anyone to receive. In general, these restrictions on indecency—a quality of acts or statements that offend societal norms or may be harmful to minors—apply only to radio and television programming broadcast when children might be in the audience, although most cable and satellite channels follow similar standards for commercial reasons. An infamous case of televised indecency occurred during the halftime show of the 2004 Super Bowl, during a performance by singer Janet Jackson in which a part of her clothing was removed by fellow performer Justin Timberlake, revealing her right breast. The network responsible for the broadcast, CBS, was fined $550,000 by the Federal Communications Commission, the government agency that regulates television broadcasting, although CBS was ultimately not required to pay. On the other hand, in 1997, the NBC network broadcast Schindler’s List, a film depicting events during the Holocaust in Nazi Germany, without any editing, including its graphic nudity and depictions of violence. NBC was not fined or otherwise punished, suggesting there is no uniform standard for indecency. Similarly, in the 1990s Congress compelled television broadcasters to implement a television ratings system, enforced by a “V-chip” in televisions and cable boxes, so parents could better control the television programming their children might watch. However, similar efforts to regulate indecent content on the Internet to protect children from pornography have largely been struck down as unconstitutional. This outcome suggests that technology has created new avenues for obscene material to be disseminated.
The Children’s Internet Protection Act, however, requires K–12 schools and public libraries receiving Internet access using special E-rate discounts to filter or block access to obscene material and other material deemed harmful to minors, with certain exceptions. The courts have also allowed laws that forbid or compel certain forms of expression by businesses, such as laws that require the disclosure of nutritional information on food and beverage containers and warning labels on tobacco products (Figure 4.11). The federal government requires the prices advertised for airline tickets to include all taxes and fees. Many states regulate advertising by lawyers. And, in general, false or misleading statements made in connection with a commercial transaction can be illegal if they constitute fraud. Furthermore, the courts have ruled that, although public school officials are government actors, the First Amendment freedom of expression rights of children attending public schools are somewhat limited. In particular, in Tinker v. Des Moines (1969) and Hazelwood v. Kuhlmeier (1988), the Supreme Court upheld restrictions on speech that creates “substantial interference with school discipline or the rights of others” 31 or is “reasonably related to legitimate pedagogical concerns.” 32 For example, the content of school-sponsored activities like school newspapers and speeches delivered by students can be controlled, either for the purposes of instructing students in proper adult behavior or to deter conflict between students. Free expression includes the right to assemble peaceably and the right to petition government officials. This right even extends to members of groups whose views most people find abhorrent, such as American Nazis and the vehemently anti-gay Westboro Baptist Church, whose members have become known for their protests at the funerals of U.S. soldiers who have died fighting in the war on terror (Figure 4.12). 33 Free expression—although a broad right—is subject to certain constraints to balance it against the interests of public order. In particular, the time, place, and manner of protests—but not their substantive content—are subject to reasonable limits. The courts have ruled that while people may peaceably assemble in a place that is a public forum, not all public property is a public forum. For example, the inside of a government office building or a college classroom—particularly while someone is teaching—is not generally considered a public forum. Rallies and protests on land that has other dedicated uses, such as roads and highways, can be limited to groups that have secured a permit in advance, and those organizing large gatherings may be required to give sufficient notice so government authorities can ensure there is enough security available. However, any such regulation must be viewpoint-neutral; the government may not treat one group differently than another because of its opinions or beliefs. For example, the government can’t permit a rally by a group that favors a government policy but forbid opponents from staging a similar rally. Finally, there have been controversial situations in which government agencies have established free-speech zones for protesters during political conventions, presidential visits, and international meetings in areas that are arguably selected to minimize their public audience or to ensure that the subjects of the protests do not have to encounter the protesters.
Link to Learning
Since 2011, as part of the White House website, the Obama administration has included a dedicated system, “We the People: Your Voice in our Government,” for people to submit petitions that will be reviewed by administration officials.

THE SECOND AMENDMENT

There has been increased conflict over the Second Amendment in recent years due to school shootings and gun violence. As a result, gun rights have become a highly charged political issue. The text of the Second Amendment is among the shortest of those included in the Constitution: “A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.” But the relative simplicity of its text has not kept it from controversy; arguably, the Second Amendment has become controversial in large part because of its text. Is this amendment merely a protection of the right of the states to organize and arm a “well regulated militia” for civil defense, or is it a protection of a “right of the people” as a whole to individually bear arms? Before the Civil War, this would have been a nearly meaningless distinction. In most states at that time, white males of military age were considered part of the militia, liable to be called for service to put down rebellions or invasions, and the right “to keep and bear Arms” was considered a common-law right inherited from English law that predated the federal and state constitutions. The Constitution was not seen as a limitation on state power, and since the states expected all able-bodied free men to keep arms as a matter of course, what gun control there was mostly revolved around ensuring slaves (and their abolitionist allies) didn’t have guns. With the beginning of selective incorporation after the Civil War, debates over the Second Amendment were reinvigorated. In the meantime, as part of their black codes designed to reintroduce most of the trappings of slavery, several southern states adopted laws that restricted the carrying and ownership of weapons by former slaves. Despite acknowledging a common-law individual right to keep and bear arms, in 1876 the Supreme Court declined, in United States v. Cruikshank, to intervene to ensure the states would respect it. 34 In the following decades, states gradually began to introduce laws to regulate gun ownership. Federal gun control laws began to be introduced in the 1930s in response to organized crime, with stricter laws that regulated most commerce and trade in guns coming into force in the wake of the street protests of the 1960s. Following the 1981 assassination attempt on President Ronald Reagan, laws requiring background checks for prospective gun buyers were eventually passed in 1993. During this period, the Supreme Court’s decisions regarding the meaning of the Second Amendment were ambiguous at best. In United States v. Miller, the Supreme Court upheld the 1934 National Firearms Act’s restrictions on sawed-off shotguns, largely on the basis that possession of such a gun was not related to the goal of promoting a “well regulated militia.” 35 This finding was generally interpreted as meaning that the Second Amendment protected the right of the states to organize a militia, rather than an individual right, and thus lower courts generally found most firearm regulations—including some city and state laws that virtually outlawed the private ownership of firearms—to be constitutional. However, in 2008, in a narrow 5–4 decision in District of Columbia v.
Heller, the Supreme Court found that at least some gun control laws did violate the Second Amendment and that this amendment does protect an individual’s right to keep and bear arms, at least in some circumstances—in particular, “for traditionally lawful purposes, such as self-defense within the home.” 36 Because the District of Columbia is not a state, this decision immediately applied the right only to the federal government and territorial governments. Two years later, in McDonald v. Chicago, the Supreme Court overturned the Cruikshank decision (5–4) and again found that the right to bear arms was a fundamental right incorporated against the states, meaning that state regulation of firearms might, in some circumstances, be unconstitutional. In 2015, however, the Supreme Court allowed several of San Francisco’s strict gun control laws to remain in place, suggesting that—as in the case of rights protected by the First Amendment—the courts will not treat gun rights as absolute (Figure 4.13). 37

THE THIRD AMENDMENT

The Third Amendment says in full: “No Soldier shall, in time of peace be quartered in any house, without the consent of the Owner, nor in time of war, but in a manner to be prescribed by law.” Most people consider this provision of the Constitution obsolete and unimportant. However, it is worthwhile to note its relevance in the context of the time: citizens remembered having their cities and towns occupied by British soldiers and mercenaries during the Revolutionary War, and they viewed the British laws that required the colonists to house soldiers as particularly offensive, to the point that this practice was among the grievances listed in the Declaration of Independence. Today it seems unlikely the federal government would need to house military forces in civilian lodgings against the will of property owners or tenants; however, perhaps in the same way we consider the Second and Fourth Amendments, we can think of the Third Amendment as reflecting a broader idea that our homes lie within a “zone of privacy” that government officials should not violate unless absolutely necessary.

THE FOURTH AMENDMENT

The Fourth Amendment sits at the boundary between general individual freedoms and the rights of those suspected of crimes. We saw earlier that perhaps it reflects James Madison’s broader concern about establishing an expectation of privacy from government intrusion at home. Another way to think of the Fourth Amendment is that it protects us from overzealous efforts by law enforcement to root out crime by ensuring that police have good reason before they intrude on people’s lives with criminal investigations. The text of the Fourth Amendment is as follows: “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.” The amendment places limits on both searches and seizures: Searches are efforts to locate documents and contraband. Seizures are the taking of these items by the government for use as evidence in a criminal prosecution (or, in the case of a person, the detention or taking of the person into custody).
In either case, the amendment indicates that government officials are required to apply for and receive a search warrant prior to a search or seizure; this warrant is a legal document, signed by a judge, allowing police to search and/or seize persons or property. Since the 1960s, however, the Supreme Court has issued a series of rulings limiting the warrant requirement in situations where a person can be said to lack a “reasonable expectation of privacy” outside the home. Police can also search and/or seize people or property without a warrant if the owner or renter consents to the search, if there is a reasonable expectation that evidence may be destroyed or tampered with before a warrant can be issued (i.e., exigent circumstances), or if the items in question are in plain view of government officials. Furthermore, the courts have found that police do not generally need a warrant to search the passenger compartment of a car (Figure 4.14), or to search people entering the United States from another country. 38 When a warrant is needed, law enforcement officers do not need enough evidence to secure a conviction, but they must demonstrate to a judge that there is probable cause to believe a crime has been committed or evidence will be found. Probable cause is the legal standard for determining whether a search or seizure is constitutional or a crime has been committed; it is a lower threshold than the standard of proof at a criminal trial. Critics have argued that this requirement is not very meaningful because law enforcement officers are almost always able to get a search warrant when they request one; on the other hand, since we wouldn’t expect the police to waste their time or a judge’s time trying to get search warrants that are unlikely to be granted, perhaps the high rate at which they get them should not be so surprising. What happens if the police conduct an illegal search or seizure without a warrant and find evidence of a crime? In the 1961 Supreme Court case Mapp v. Ohio, the court decided that evidence obtained without a warrant that didn’t fall under one of the exceptions mentioned above could not be used as evidence in a state criminal trial, giving rise to the broad application of what is known as the exclusionary rule, which was first established in 1914 on a federal level in Weeks v. United States. 39 The exclusionary rule doesn’t just apply to evidence found or to items or people seized without a warrant (or falling under an exception noted above); it also applies to any evidence developed or discovered as a result of the illegal search or seizure. For example, if police search your home without a warrant, find bank statements showing large cash deposits on a regular basis, and discover you are engaged in some other crime of which they were previously unaware (e.g., blackmail, drugs, or prostitution), not only can they not use the bank statements as evidence of criminal activity—they also can’t prosecute you for the crimes they discovered during the illegal search. This extension of the exclusionary rule is sometimes called the “fruit of the poisonous tree,” because just as the metaphorical tree (i.e., the original search or seizure) is poisoned, so is anything that grows out of it. 40 However, like the requirement for a search warrant, the exclusionary rule does have exceptions.
The courts have allowed evidence to be used that was obtained without the necessary legal procedures in circumstances where police executed warrants they believed were correctly granted but in fact were not (“good faith” exception), and when the evidence would have been found anyway had they followed the law (“inevitable discovery”). The requirement of probable cause also applies to arrest warrants. A person cannot generally be detained by police or taken into custody without a warrant, although most states allow police to arrest someone suspected of a felony crime without a warrant so long as probable cause exists, and police can arrest people for minor crimes or misdemeanors they have witnessed themselves.

4.3 The Rights of Suspects

Learning Objectives
By the end of this section, you will be able to:
Identify the rights of those suspected or accused of criminal activity
Explain how Supreme Court decisions transformed the rights of the accused
Explain why the Eighth Amendment is controversial regarding capital punishment

In addition to protecting the personal freedoms of individuals, the Bill of Rights protects those suspected or accused of crimes from various forms of unfair or unjust treatment. The prominence of these protections in the Bill of Rights may seem surprising. Given the colonists’ experience of what they believed to be unjust rule by British authorities, however, and the use of the legal system to punish rebels and their sympathizers for political offenses, the impetus to ensure fair, just, and impartial treatment to everyone accused of a crime—no matter how unpopular—is perhaps more understandable. What is more, the revolutionaries, and the eventual framers of the Constitution, wanted to keep the best features of English law as well. In addition to the protections outlined in the Fourth Amendment, which largely pertain to investigations conducted before someone has been charged with a crime, the next four amendments pertain to those suspected, accused, or convicted of crimes, as well as people engaged in other legal disputes. At every stage of the legal process, the Bill of Rights incorporates protections for these people.

THE FIFTH AMENDMENT

Many of the provisions dealing with the rights of the accused are included in the Fifth Amendment; accordingly, it is one of the longest in the Bill of Rights. The Fifth Amendment states in full: “No person shall be held to answer for a capital, or otherwise infamous crime, unless on a presentment or indictment of a Grand Jury, except in cases arising in the land or naval forces, or in the Militia, when in actual service in time of War or public danger; nor shall any person be subject for the same offence to be twice put in jeopardy of life or limb; nor shall be compelled in any criminal case to be a witness against himself, nor be deprived of life, liberty, or property, without due process of law; nor shall private property be taken for public use, without just compensation.” The first clause requires that serious crimes be prosecuted only after an indictment has been issued by a grand jury. However, several exceptions are permitted as a result of the courts’ evolving interpretation and understanding of this amendment, given that the Constitution is a living document. First, the courts have generally found this requirement to apply only to felonies; less serious crimes can be tried without a grand jury proceeding.
Second, this provision of the Bill of Rights does not apply to the states because it has not been incorporated; many states instead require a judge to hold a preliminary hearing to decide whether there is enough evidence to hold a full trial. Finally, members of the armed forces who are accused of crimes are not entitled to a grand jury proceeding. The Fifth Amendment also protects individuals against double jeopardy , a process that subjects a suspect to prosecution twice for the same criminal act. No one who has been acquitted (found not guilty) of a crime can be prosecuted again for that crime. But the prohibition against double jeopardy has its own exceptions. The most notable is that it prohibits a second prosecution only at the same level of government (federal or state) as the first; the federal government can try you for violating federal law, even if a state or local court finds you not guilty of the same action. For example, in the early 1990s, several Los Angeles police officers accused of brutally beating motorist Rodney King during his arrest were acquitted of various charges in a state court, but some were later convicted in a federal court of violating King’s civil rights. The double jeopardy rule does not prevent someone from recovering damages in a civil case—a legal dispute between individuals over a contract or compensation for an injury—that results from a criminal act, even if the person accused of that act is found not guilty. One famous case from the 1990s involved former football star and television personality O. J. Simpson . Simpson, although acquitted of the murders of his ex-wife Nicole Brown and her friend Ron Goldman in a criminal court, was later found to be responsible for their deaths in a subsequent civil case and as a result was forced to forfeit most of his wealth to pay damages to their families. Perhaps the most famous provision of the Fifth Amendment is its protection against self-incrimination , or the right to remain silent. This provision is so well known that we have a phrase for it: “taking the Fifth.” People have the right not to give evidence in court or to law enforcement officers that might constitute an admission of guilt or responsibility for a crime. Moreover, in a criminal trial, if someone does not testify in his or her own defense, the prosecution cannot use that failure to testify as evidence of guilt or imply that an innocent person would testify. This provision became embedded in the public consciousness following the Supreme Court’s 1966 ruling in Miranda v. Arizona , whereby suspects were required to be informed of their most important rights, including the right against self-incrimination, before being interrogated in police custody. 41 However, contrary to some media depictions of the Miranda warning , law enforcement officials do not necessarily have to inform suspects of their rights before they are questioned in situations where they are free to leave. Like the Fourteenth Amendment’s due process clause, the Fifth Amendment prohibits the federal government from depriving people of their “life, liberty, or property, without due process of law.” Recall that due process is a guarantee that people will be treated fairly and impartially by government officials when the government seeks to fine or imprison them or take their personal property away from them. 
The courts have interpreted this provision to mean that government officials must establish consistent, fair procedures to decide when people's freedoms are limited; in other words, citizens cannot be detained, their freedom limited, or their property taken arbitrarily or on a whim by police or other government officials. As a result, an entire body of procedural safeguards comes into play for the legal prosecution of crimes. However, the Patriot Act, passed into law after the 9/11 terrorist attacks, loosened some of these safeguards in the context of terrorism investigations, most notably by expanding the government's authority to conduct searches and surveillance. The final provision of the Fifth Amendment has little to do with crime at all. The takings clause says that "private property [cannot] be taken for public use, without just compensation." This provision, along with the due process clause's provisions limiting the taking of property, can be viewed as a protection of individuals' economic liberty : their right to obtain, use, and trade tangible and intangible property for their own benefit. For example, you have the right to trade your knowledge, skills, and labor for money through work or the use of your property, or trade money or goods for other things of value, such as clothing, housing, education, or food. The greatest recent controversy over economic liberty has been sparked by cities' and states' use of the power of eminent domain to take property for redevelopment. Traditionally, the main use of eminent domain was to obtain property for transportation corridors like railroads, highways, canals and reservoirs, and pipelines, which require fairly straight routes to be efficient. Because any single property owner could effectively block a particular route or extract an unfair price for land if it was the last piece needed to assemble a route, there are reasonable arguments for using eminent domain as a last resort in these circumstances, particularly for projects that convey substantial benefits to the public at large. However, eminent domain has increasingly been used to allow economic development, with beneficiaries ranging from politically connected big businesses such as car manufacturers building new factories to highly profitable sports teams seeking ever-more-luxurious stadiums ( Figure 4.15 ). And, while we traditionally think of property owners as relatively well-off people whose rights don't necessarily need protecting since they can fend for themselves in the political system, frequently these cases pit lower- and middle-class homeowners against multinational corporations or multimillionaires with the ear of city and state officials. In a notorious 2005 case, Kelo v. City of New London , the Supreme Court sided with municipal officials taking homes in a middle-class neighborhood to obtain land for a large pharmaceutical company's corporate campus. 42 The case led to a public backlash against the use of eminent domain and legal changes in many states, making it harder for cities to take property from one private party and give it to another for economic redevelopment purposes. Some disputes over economic liberty have gone beyond the idea of eminent domain. In the past few years, the emergence of on-demand ride-sharing services like Lyft and Uber, direct sales by electric car manufacturer Tesla Motors, and short-term property rentals through companies like Airbnb have led to conflicts between people seeking to offer profitable services online, states and cities trying to regulate these businesses, and the incumbent service providers that compete with these new business models.
In the absence of new public policies to clarify rights, the path forward is often determined through norms established in practice, by governments, or by court cases. THE SIXTH AMENDMENT Once someone has been charged with a crime and indicted, the next stage in a criminal case is typically the trial itself, unless a plea bargain is reached. The Sixth Amendment contains the provisions that govern criminal trials; in full, it states: “In all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial, by an impartial jury of the State and district wherein the crime shall have been committed, which district shall have been previously ascertained by law, and to be informed of the nature and cause of the accusation; to be confronted with the witnesses against him; to have compulsory process for obtaining witnesses in his favor, and to have the Assistance of Counsel for his defence [sic].” The first of these guarantees is the right to have a speedy, public trial by an impartial jury. Although there is no absolute limit on the length of time that may pass between an indictment and a trial, the Supreme Court has said that excessively lengthy delays must be justified and balanced against the potential harm to the defendant. 43 In effect, the speedy trial requirement protects people from being detained indefinitely by the government. Yet the courts have ruled that there are exceptions to the public trial requirement; if a public trial would undermine the defendant’s right to a fair trial, it can be held behind closed doors, while prosecutors can request closed proceedings only in certain, narrow circumstances (generally, to protect witnesses from retaliation or to guard classified information). In general, a prosecution must also be made in the “state and district” where the crime was committed; however, people accused of crimes may ask for a change of venue for their trial if they believe pre-trial publicity or other factors make it difficult or impossible for them to receive a fair trial where the crime occurred. Link to Learning Although the Supreme Court’s proceedings are not televised and there is no video of the courtroom, audio recordings of the oral arguments and decisions announced in cases have been made since 1955. A complete collection of these recordings can be found at the Oyez Project website along with full information about each case. Most people accused of crimes decline their right to a jury trial. This choice is typically the result of a plea bargain , an agreement between the defendant and the prosecutor in which the defendant pleads guilty to the charge(s) in question, or perhaps to less serious charges, in exchange for more lenient punishment than he or she might receive if convicted after a full trial. There are a number of reasons why this might happen. The evidence against the accused may be so overwhelming that conviction is a near-certainty, so he or she might decide that avoiding the more serious penalty (perhaps even the death penalty) is better than taking the small chance of being acquitted after a trial. Someone accused of being part of a larger crime or criminal organization might agree to testify against others in exchange for lighter punishment. At the same time, prosecutors might want to ensure a win in a case that might not hold up in court by securing convictions for offenses they know they can prove, while avoiding a lengthy trial on other charges they might lose. 
The requirement that a jury be impartial is a critical requirement of the Sixth Amendment. Both the prosecution and the defense are permitted to reject potential jurors who they believe are unable to fairly decide the case without prejudice. However, the courts have also said that the composition of the jury as a whole may in itself be prejudicial; potential jurors may not be excluded simply because of their race or sex, for example. 44 The Sixth Amendment guarantees the right of those accused of crimes to present witnesses in their own defense (if necessary, compelling them to testify) and to confront and cross-examine witnesses presented by the prosecution. In general, the only testimony acceptable in a criminal trial must be given in a courtroom and be subject to cross-examination; hearsay, or testimony by one person about what another person has said, is generally inadmissible, although hearsay may be presented as evidence when it is an admission of guilt by the defendant or a “dying declaration” by a person who has passed away. Although both sides in a trial have the opportunity to examine and cross-examine witnesses, the judge may exclude testimony deemed irrelevant or prejudicial. Finally, the Sixth Amendment guarantees the right of those accused of crimes to have the assistance of an attorney in their defense. Historically, many states did not provide attorneys to those accused of most crimes who could not afford one themselves; even when an attorney was provided, his or her assistance was often inadequate at best. This situation changed as a result of the Supreme Court’s decision in Gideon v. Wainwright (1963). 45 Clarence Gideon , a poor drifter, was accused of breaking into and stealing money and other items from a pool hall in Panama City, Florida. Denied a lawyer, Gideon was tried and convicted and sentenced to a five-year prison term. While in prison—still without assistance of a lawyer—he drafted a handwritten appeal and sent it to the Supreme Court, which agreed to hear his case ( Figure 4.16 ). The justices unanimously ruled that Gideon, and anyone else accused of a serious crime, was entitled to the assistance of a lawyer, even if they could not afford one, as part of the general due process right to a fair trial. The Supreme Court later extended the Gideon v. Wainwright ruling to apply to any case in which an accused person faced the possibility of “loss of liberty,” even for one day. The courts have also overturned convictions in which people had incompetent or ineffective lawyers through no fault of their own. The Gideon ruling has led to an increased need for professional public defenders, lawyers who are paid by the government to represent those who cannot afford an attorney themselves, although some states instead require practicing lawyers to represent poor defendants on a pro bono basis (essentially, donating their time and energy to the case). Link to Learning The National Association for Public Defense represents public defenders, lobbying for better funding for public defense and improvements in the justice system in general. Insider Perspective Criminal Justice: Theory Meets Practice Typically a person charged with a serious crime will have a brief hearing before a judge to be informed of the charges against him or her, to be made aware of the right to counsel, and to enter a plea. Other hearings may be held to decide on the admissibility of evidence seized or otherwise obtained by prosecutors. 
If the two sides cannot agree on a plea bargain during this period, the next stage is the selection of a jury. A pool of potential jurors is summoned to the court and screened for impartiality, with the goal of seating twelve (in most states) and one or two alternates. All hear the evidence in the trial; unless an alternate must serve, the original twelve decide whether the prosecution has proven the defendant's guilt beyond a reasonable doubt. In the trial itself, the lawyers for the prosecution and defense make opening arguments, followed by testimony by witnesses for the prosecution (and any cross-examination), and then testimony by witnesses for the defense, including the defendant if he or she chooses. Additional prosecution witnesses may be called to rebut testimony by the defense. Finally, both sides make closing arguments. The judge then issues instructions to the jury, including an admonition not to discuss the case with anyone outside the jury room. The jury members leave the courtroom to enter the jury room and begin their deliberations ( Figure 4.17 ). The jurors pick a foreman or forewoman to coordinate their deliberations. They may ask to review evidence or to hear transcripts of testimony. They deliberate in secret and their decision must be unanimous; if they are unable to agree on a verdict after extensive deliberation, a mistrial may be declared, and the prosecution must then decide whether to try the case all over again. A defendant found not guilty of all charges will be immediately released unless other charges are pending (e.g., the defendant is wanted for a crime in another jurisdiction). If the defendant is found guilty of one or more offenses, the judge will choose an appropriate sentence based on the law and the circumstances; in the federal system, this sentence will typically be based on guidelines that assign point values to various offenses and facts in the case. If the prosecution is pursuing the death penalty, the jury will decide whether the defendant should be subject to capital punishment or life imprisonment. The reality of court procedure is much less dramatic and exciting than what is typically portrayed in television shows and movies. Nonetheless, most Americans will participate in the legal system at least once in their lives as a witness, juror, or defendant. Have you or any member of your family served on a jury? If so, was the experience a positive one? Did the trial proceed as expected? If you haven't served on a jury, is it something you look forward to? Why or why not? THE SEVENTH AMENDMENT The Seventh Amendment deals with the rights of those engaged in civil disputes; as noted earlier, these are disagreements between individuals or businesses in which people are typically seeking compensation for some harm caused. For example, in an automobile accident, the person responsible is compelled to compensate any others (either directly or through his or her insurance company). Much of the work of the legal system consists of efforts to resolve civil disputes. The Seventh Amendment, in full, reads: "In Suits at common law, where the value in controversy shall exceed twenty dollars, the right of trial by jury shall be preserved, and no fact tried by a jury, shall be otherwise re-examined in any Court of the United States, than according to the rules of the common law." Because of this provision, all trials in civil cases must take place before a jury unless both sides waive their right to a jury trial.
However, this right has not been incorporated to apply to the states; in many states, civil disputes—particularly those involving small sums of money, which may be heard by a dedicated small claims court—need not be tried in front of a jury and may instead be decided by a judge working alone. The Seventh Amendment limits the ability of judges to reconsider questions of fact, rather than of law, that were originally decided by a jury. For example, if a jury decides a person was responsible for an action and the case is appealed, the appeals judge cannot decide someone else was responsible. This preserves the traditional common-law distinction that judges are responsible for deciding questions of law while jurors are responsible for determining the facts of a particular case. THE EIGHTH AMENDMENT The Eighth Amendment says, in full: "Excessive bail shall not be required, nor excessive fines imposed, nor cruel and unusual punishments inflicted." Bail is a payment of money that allows a person accused of a crime to be freed pending trial; if you "make bail" in a case and do not show up for your trial, you will forfeit the money you paid. Since many people cannot afford to pay bail directly, they may instead get a bail bond , which allows them to pay a fraction of the money (typically 10 percent) to a person who sells bonds and who pays the full bail amount. On $10,000 bail, for example, a defendant might pay a nonrefundable $1,000 to a bond seller, who in turn guarantees the full $10,000 to the court. (In most states, the bond seller makes money because the defendant does not get back the money for the bond, and most people show up for their trials.) However, people believed likely to flee or who represent a risk to the community while free may be denied bail and held in jail until their trial takes place. It is rare for bail to be successfully challenged for being excessive. The Supreme Court has defined an excessive fine as one "so grossly excessive as to amount to deprivation of property without due process of law" or "grossly disproportional to the gravity of a defendant's offense." 46 In practice the courts have rarely struck down fines as excessive either. The most controversial provision of the Eighth Amendment is the ban on "cruel and unusual punishments." Various torturous forms of execution common in the past—drawing and quartering, burning people alive, and the like—are prohibited by this provision. 47 Recent controversies over lethal injections and firing squads to administer the death penalty suggest the topic is still salient. While the Supreme Court has never established a definitive test for what constitutes a cruel and unusual punishment, it has generally allowed most penalties short of death for adults, even when to outside observers the punishment might be reasonably seen as disproportionate or excessive. 48 In recent years the Supreme Court has issued a series of rulings substantially narrowing the application of the death penalty. As a result, defendants who have mental disabilities may not be executed. 49 Also, defendants who were under eighteen when they committed an offense that is otherwise subject to the death penalty may not be executed. 50 The court has generally rejected the application of the death penalty to crimes that did not result in the death of another human being, most notably in the case of rape. 51 And, while permitting the death penalty to be applied to murder in some cases, the Supreme Court has generally struck down laws that require the application of the death penalty in certain circumstances. Still, the United States is among the ten countries with the most executions worldwide ( Figure 4.18 ).
At the same time, however, it appears that the public mood may have shifted somewhat against the death penalty, perhaps due in part to an overall decline in violent crime. The reexamination of past cases through DNA evidence has revealed dozens of cases in which people were wrongly convicted, including some who had already been executed. 52 For example, Claude Jones was executed for murder based in part on 1990-era forensic analysis of a single hair that was determined at the time to be his; however, with better DNA testing technology, the hair was later found to be that of the victim. 53 Perhaps as a result of this and other cases, seven additional states have abolished capital punishment since 2007. As of 2015, nineteen states and the District of Columbia no longer apply the death penalty in new cases, and several other states do not carry out executions despite sentencing people to death. 54 It remains to be seen whether this gradual trend toward the elimination of the death penalty by the states will continue, or whether the Supreme Court will eventually decide to follow former Justice Harry Blackmun's decision to "no longer… tinker with the machinery of death" and abolish it completely. 4.4 Interpreting the Bill of Rights Learning Objectives By the end of this section, you will be able to: Describe how the Ninth and Tenth Amendments reflect on our other rights Identify the two senses of "right to privacy" embodied in the Constitution Explain the controversy over privacy when applied to abortion and same-sex relationships As this chapter has suggested, the provisions of the Bill of Rights have been interpreted and reinterpreted repeatedly over the past two centuries. However, the first eight amendments are largely silent on the status of traditional common law, which was the legal basis for many of the natural rights claimed by the framers in the Declaration of Independence. These amendments largely reflect the worldview of the time in which they were written; new technology and an evolving society and economy have presented us with novel situations that do not fit neatly into the framework established in the late eighteenth century. In this section, we consider the final two amendments of the Bill of Rights and the way they affect our understanding of the Constitution as a whole. Rather than protecting specific rights and liberties, the Ninth and Tenth Amendments indicate how the Constitution and the Bill of Rights should be interpreted, and they lay out the residual powers of the state governments. We will also examine privacy rights, an area the Bill of Rights does not address directly; instead, the emergence of defined privacy rights demonstrates how the Ninth and Tenth Amendments have been applied to expand the scope of rights protected by the Constitution. THE NINTH AMENDMENT We saw above that James Madison and the other framers were aware they might endanger some rights if they listed a few in the Constitution and omitted others. To ensure that those interpreting the Constitution would recognize that the listing of freedoms and rights in the Bill of Rights was not exhaustive, the Ninth Amendment states: "The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people." These rights "retained by the people" include the common-law and natural rights inherited from the laws, traditions, and past court decisions of England.
To this day, we regularly exercise and take for granted rights that aren’t written down in the federal constitution, like the right to marry, the right to seek opportunities for employment and education, and the right to have children and raise a family. Supreme Court justices over the years have interpreted the Ninth Amendment in different ways; some have argued that it was intended to extend the rights protected by the Constitution to those natural and common-law rights, while others have argued that it does not prohibit states from changing their constitutions and laws to modify or limit those rights as they see fit. Critics of a broad interpretation of the Ninth Amendment point out that the Constitution provides ways to protect newly formalized rights through the amendment process. For example, in the nineteenth and twentieth centuries, the right to vote was gradually expanded by a series of constitutional amendments (the Fifteenth and Nineteenth), even though at times this expansion was the subject of great public controversy. However, supporters of a broad interpretation of the Ninth Amendment point out that the rights of the people—particularly people belonging to political or demographic minorities—should not be subject to the whims of popular majorities. One right the courts have said may be at least partially based on the Ninth Amendment is a general right to privacy, discussed later in the chapter. THE TENTH AMENDMENT The Tenth Amendment is as follows: “The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.” Unlike the other provisions of the Bill of Rights, this amendment focuses on power rather than rights. The courts have generally read the Tenth Amendment as merely stating, as Chief Justice Harlan Stone put it, a “truism that all is retained which has not been surrendered.” 55 In other words, rather than limiting the power of the federal government in any meaningful way, it simply restates what is made obvious elsewhere in the Constitution: the federal government has both enumerated and implied powers, but where the federal government does not (or chooses not to) exercise power, the states may do so. At times, politicians and state governments have argued that the Tenth Amendment means states can engage in interposition or nullification by blocking federal government laws and actions they deem to exceed the constitutional powers of the national government. But the courts have rarely been sympathetic to these arguments, except when the federal government appears to be directly requiring state and local officials to do something. For example, in 1997 the Supreme Court struck down part of a federal law that required state and local law enforcement to participate in conducting background checks for prospective gun purchasers, while in 2012 the court ruled that the government could not compel states to participate in expanding the joint state-federal Medicaid program by taking away all their existing Medicaid funding if they refused to do so. 56 However, the Tenth Amendment also allows states to guarantee rights and liberties more fully or extensively than the federal government does, or to include additional rights. For example, many state constitutions guarantee the right to a free public education, several states give victims of crimes certain rights, and eighteen states include the right to hunt game and/or fish. 
57 A number of state constitutions explicitly guarantee equal rights for men and women. Some permitted women to vote before that right was expanded to all women with the Nineteenth Amendment in 1920, and people aged 18–20 could vote in a few states before the Twenty-Sixth Amendment came into force in 1971. As we will see below, several states also explicitly recognize a right to privacy. State courts at times have interpreted state constitutional provisions to include broader protections for basic liberties than their federal counterparts. For example, although in general people do not have the right to free speech and assembly on private property owned by others without their permission, California's constitutional protection of freedom of expression was extended to portions of some privately owned shopping centers by the state's supreme court ( Figure 4.19 ). 58 These state protections do not extend the other way, however. If the federal government passes a law or adopts a constitutional amendment that restricts rights or liberties, or a Supreme Court decision interprets the Constitution in a way that narrows these rights, the state's protection no longer applies. For example, if Congress decided to outlaw hunting and fishing and the Supreme Court decided this law was a valid exercise of federal power, the state constitutional provisions that protect the right to hunt and fish would effectively be meaningless. More concretely, federal laws that control weapons and drugs override state laws and constitutional provisions that otherwise permit them. Marijuana policy is a prominent exception: states such as Colorado and Washington have legalized marijuana even though federal law still prohibits it, a conflict that persists largely because the federal prohibition is not strictly enforced. Get Connected! Student-Led Constitutional Change Although the United States has not had a national constitutional convention since 1787, the states have generally been much more willing to revise their constitutions. In 1998, two politicians in Texas decided to do something a little bit different: they enlisted the help of college students at Angelo State University to draft a completely new constitution for the state of Texas, which was then formally proposed to the state legislature. 59 Although the proposal failed, it was certainly a valuable learning experience for the students who took part. Each state has a different process for changing its constitution. In some, like California and Mississippi, voters can propose amendments to their state constitution directly, bypassing the state legislature. In others, such as Tennessee and Texas, the state legislature controls the process of initiation. The process can affect the sorts of amendments likely to be considered; it shouldn't be surprising, for example, that amendments limiting the number of terms legislators can serve in office have been much more common in states where the legislators themselves have no say in whether such provisions are adopted. What rights or liberties do you think ought to be protected by your state constitution that aren't already? Or would you get rid of some of these protections instead? Find a copy of your current state constitution, read through it, and decide. Then find out what steps would be needed to amend your state's constitution to make the changes you would like to see.
THE RIGHT TO PRIVACY Although the term privacy does not appear in the Constitution or Bill of Rights, scholars have interpreted several Bill of Rights provisions as an indication that James Madison and Congress sought to protect a common-law right to privacy as it would have been understood in the late eighteenth century: a right to be free of government intrusion into our personal life, particularly within the bounds of the home. For example, we could perhaps see the Second Amendment as standing for the common-law right to self-defense in the home; the Third Amendment as a statement that government soldiers should not be housed in anyone’s home; the Fourth Amendment as setting a high legal standard for allowing agents of the state to intrude on someone’s home; and the due process and takings clauses of the Fifth Amendment as applying an equally high legal standard to the government’s taking a home or property (reinforced after the Civil War by the Fourteenth Amendment). Alternatively, we could argue that the Ninth Amendment anticipated the existence of a common-law right to privacy, among other rights, when it acknowledged the existence of basic, natural rights not listed in the Bill of Rights or the body of the Constitution itself. 60 Lawyers Samuel D. Warren and Louis Brandeis (the latter a future Supreme Court justice) famously developed the concept of privacy rights in a law review article published in 1890. 61 Although several state constitutions do list the right to privacy as a protected right, the explicit recognition by the Supreme Court of a right to privacy in the U.S. Constitution emerged only in the middle of the twentieth century. In 1965, the court spelled out the right to privacy for the first time in Griswold v. Connecticut , a case that struck down a state law forbidding even married individuals to use any form of contraception. 62 Although many subsequent cases before the Supreme Court also dealt with privacy in the course of intimate, sexual conduct, the issue of privacy matters as well in the context of surveillance and monitoring by government and private parties of our activities, movements, and communications. Both these senses of privacy are examined below. Sexual Privacy Although the Griswold case originally pertained only to married couples, in 1972 it was extended to apply the right to obtain contraception to unmarried people as well. 63 Although neither decision was entirely without controversy, the “sexual revolution” taking place at the time may well have contributed to a sense that anti-contraception laws were at the very least dated, if not in violation of people’s rights. The contraceptive coverage controversy surrounding the Hobby Lobby case shows that this topic remains relevant. The Supreme Court’s application of the right to privacy doctrine to abortion rights proved far more problematic, legally and politically. In 1972, four states permitted abortions without restrictions, while thirteen allowed abortions “if the pregnant woman’s life or physical or mental health were endangered, if the fetus would be born with a severe physical or mental defect, or if the pregnancy had resulted from rape or incest”; abortions were completely illegal in Pennsylvania and heavily restricted in the remaining states. 64 On average, several hundred American women a year died as a result of “back alley abortions” in the 1960s. The legal landscape changed dramatically as a result of the 1973 ruling in Roe v. 
Wade , 65 in which the Supreme Court decided the right to privacy encompassed a right for women to terminate a pregnancy, at least under certain scenarios. The justices ruled that while the government did have an interest in protecting the “potentiality of human life,” nonetheless this had to be balanced against the interests of both women’s health and women’s right to decide whether to have an abortion. Accordingly, the court established a framework for deciding whether abortions could be regulated based on the fetus’s viability (i.e., potential to survive outside the womb) and the stage of pregnancy, with no restrictions permissible during the first three months of pregnancy (i.e., the first trimester), during which abortions were deemed safer for women than childbirth itself. Starting in the 1980s, Supreme Court justices appointed by Republican presidents began to roll back the Roe decision. A key turning point was the court’s ruling in Planned Parenthood v. Casey in 1992, in which a plurality of the court rejected Roe’s framework based on trimesters of pregnancy and replaced it with the undue burden test , which allows restrictions prior to viability that are not “substantial obstacle[s]” (undue burdens) to women seeking an abortion. 66 Thus, the court upheld some state restrictions, including a required waiting period between arranging and having an abortion, parental consent (or, if not possible for some reason such as incest, authorization of a judge) for minors, and the requirement that women be informed of the health consequences of having an abortion. Other restrictions such as a requirement that a married woman notify her spouse prior to an abortion were struck down as an undue burden. Since the Casey decision, many states have passed other restrictions on abortions, such as banning certain procedures, requiring women to have and view an ultrasound before having an abortion, and implementing more stringent licensing and inspection requirements for facilities where abortions are performed. Although no majority of Supreme Court justices has ever moved to overrule Roe , the restrictions on abortion the Court has upheld in the last few decades have made access to abortions more difficult in many areas of the country, particularly in rural states and communities along the U.S.–Mexico border ( Figure 4.20 ). However, in Whole Woman’s Health v. Hellerstedt (2016), the Court reinforced Roe 5–3 by disallowing two Texas state regulations regarding the delivery of abortion services. 67 Beyond the issues of contraception and abortion, the right to privacy has been interpreted to encompass a more general right for adults to have noncommercial, consensual sexual relationships in private. However, this legal development is relatively new; as recently as 1986, the Supreme Court ruled that states could still criminalize sex acts between two people of the same sex. 68 That decision was overturned in 2003 in Lawrence v. Texas , which invalidated state laws that criminalized sodomy. 69 The state and national governments still have leeway to regulate sexual morality to some degree; “anything goes” is not the law of the land, even for actions that are consensual. The Supreme Court has declined to strike down laws in a few states that outlaw the sale of vibrators and other sex toys. Prostitution remains illegal in every state except in certain rural counties in Nevada; both polygamy (marriage to more than one other person) and bestiality (sex with animals) are illegal everywhere. 
And, as we saw earlier, the states may regulate obscene materials and, in certain situations, material that may be harmful to minors or otherwise indecent; to this end, states and localities have sought to ban or regulate the production, distribution, and sale of pornography. Privacy of Communications and Property Another example of heightened concerns about privacy in the modern era is the reality that society is under pervasive surveillance. In the past, monitoring the public was difficult at best. During the Cold War, regimes in the Soviet bloc employed millions of people as domestic spies and informants in an effort to suppress internal dissent through constant monitoring of the general public. Not only was this effort extremely expensive in terms of the human and monetary capital it required, but it also proved remarkably ineffective. Groups like the East German Stasi and the Romanian Securitate were unable to suppress the popular uprisings that undermined communist one-party rule in most of those countries in the late 1980s. Technology has now made it much easier to track and monitor people. Police cars and roadways are equipped with cameras that can photograph the license plate of every passing car or truck and record it in a database; while allowing police to recover stolen vehicles and catch fleeing suspects, this data can also be used to track the movements of law-abiding citizens. But law enforcement officials don’t even have to go to this much work; millions of car and truck drivers pay tolls electronically without stopping at toll booths thanks to transponders attached to their vehicles, which can be read by scanners well away from any toll road or bridge to monitor traffic flow or any other purpose ( Figure 4.21 ). The pervasive use of GPS (Global Positioning System) raises similar issues. Even pedestrians and cyclists are relatively easy to track today. Cameras pointed at sidewalks and roadways can employ facial recognition software to identify people as they walk or bike around a city. Many people carry smartphones that constantly report their location to the nearest cell phone tower and broadcast a beacon signal to nearby wireless hotspots and Bluetooth devices. Police can set up a small device called a Stingray that identifies and tracks all cell phones that attempt to connect to it within a radius of several thousand feet. With the right software, law enforcement and criminals can remotely activate a phone’s microphone and camera, effectively planting a bug in someone’s pocket without the person even knowing it. These aren’t just gimmicks in a bad science fiction movie; businesses and governments have openly admitted they are using these methods. Research shows that even metadata—information about the messages we send and the calls we make and receive, such as time, location, sender, and recipient but excluding their content—can tell governments and businesses a lot about what someone is doing. Even when this information is collected in an anonymous way, it is often still possible to trace it back to specific individuals, since people travel and communicate in largely predictable patterns. The next frontier of privacy issues may well be the increased use of drones, small preprogrammed or remotely piloted aircraft. Drones can fly virtually undetected and monitor events from overhead. They can peek into backyards surrounded by fences, and using infrared cameras they can monitor activity inside houses and other buildings. 
The Fourth Amendment was written in an era when finding out what was going on in someone's home meant either going inside or peeking through a window; applying its protections today, when seeing into someone's house can be as easy as looking at a computer screen miles away, is no longer simple. In the United States, many advocates of civil liberties are concerned that laws such as the USA PATRIOT Act (i.e., Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act), passed weeks after the 9/11 attacks in 2001, have given the federal government too much power by making it easy for officials to seek and obtain search warrants or, in some cases, to bypass warrant requirements altogether. Critics have argued that the Patriot Act has largely been used to prosecute ordinary criminals, in particular drug dealers, rather than terrorists as intended. Most European countries, at least on paper, have opted for laws that protect against such government surveillance, perhaps mindful of past experience with communist and fascist regimes. European countries also tend to have stricter laws limiting the collection, retention, and use of private data by companies, which makes it harder for governments to obtain and use that data. Most recently, the battle between Apple Inc. and the Federal Bureau of Investigation (FBI) over whether Apple should help the government gain access to encrypted information on an iPhone has made the discussion of this tradeoff salient once again. Link to Learning Several groups, such as the Electronic Frontier Foundation and the Electronic Privacy Information Center , lobby the government on issues related to privacy in the information age, particularly on the Internet. All this is not to say that technological surveillance tools do not have value or are inherently bad. They can be used for many purposes that would benefit society and, perhaps, even enhance our freedoms. Spending less time stuck in traffic because we know there's been an accident—detected automatically because the cell phones that normally whiz by at the speed limit are now crawling along—gives us time to spend on more valuable activities. Capturing criminals and terrorists by recognizing them or their vehicles before they can continue their agendas will protect the life, liberty, and property of the public at large. At the same time, however, the emergence of these technologies warrants calls for vigilance and for limits on what businesses and governments can do with the information they collect and on the length of time they may retain it. We might also be concerned about how this technology could be used by more oppressive regimes. If the technological resources that are at the disposal of today's governments had been available to the East German Stasi and the Romanian Securitate, would those repressive regimes have fallen? How much privacy and freedom should citizens sacrifice in order to feel safe?
biology
Chapter Outline 47.1 The Biodiversity Crisis 47.2 The Importance of Biodiversity to Human Life 47.3 Threats to Biodiversity 47.4 Preserving Biodiversity Introduction In the 1980s, biologists working in Lake Victoria in Africa discovered one of the most extraordinary products of evolution on the planet. Located in the Great Rift Valley, Lake Victoria is a large lake about 68,900 km² in area (larger than Lake Huron, the second largest of North America's Great Lakes). Biologists were studying species of a family of fish called cichlids. They found that as they sampled for fish in different locations of the lake, they never stopped finding new species, and they identified nearly 500 evolved types of cichlids. But while studying these variations, they quickly discovered that the invasive Nile perch was destroying the lake's cichlid population, bringing hundreds of cichlid species to extinction with devastating rapidity.
[ { "answer": { "ans_choice": 0, "ans_text": "a burst of speciation" }, "bloom": null, "hl_context": "The Lake Victoria cichlids provide an example through which we can begin to understand biodiversity . The biologists studying cichlids in the 1980s discovered hundreds of cichlid species representing a variety of specializations to particular habitat types and specific feeding strategies : eating plankton floating in the water , scraping and then eating algae from rocks , eating insect larvae from the bottom , and eating the eggs of other species of cichlid . The cichlids of Lake Victoria are the product of an adaptive radiation . <hl> An adaptive radiation is a rapid ( less than three million years in the case of the Lake Victoria cichlids ) branching through speciation of a phylogenetic tree into many closely related species ; typically , the species “ radiate ” into different habitats and niches . <hl> The Galápagos finches are an example of a modest adaptive radiation with 15 species . The cichlids of Lake Victoria are an example of a spectacular adaptive radiation that includes about 500 species .", "hl_sentences": "An adaptive radiation is a rapid ( less than three million years in the case of the Lake Victoria cichlids ) branching through speciation of a phylogenetic tree into many closely related species ; typically , the species “ radiate ” into different habitats and niches .", "question": { "cloze_format": "An adaptive radiation is________.", "normal_format": "What is an adaptive radiation?", "question_choices": [ "a burst of speciation", "a healthy level of UV radiation", "a hypothesized cause of a mass extinction", "evidence of an asteroid impact" ], "question_id": "fs-idp36989440", "question_text": "An adaptive radiation is________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "1.5 million" }, "bloom": null, "hl_context": "Current Species Diversity Despite considerable effort , knowledge of the species that inhabit the planet is limited . <hl> A recent estimate suggests that the eukaryote species for which science has names , about 1.5 million species , account for less than 20 percent of the total number of eukaryote species present on the planet (8 . 7 million species , by one estimate ) . <hl> Estimates of numbers of prokaryotic species are largely guesses , but biologists agree that science has only begun to catalog their diversity . Even with what is known , there is no central repository of names or samples of the described species ; therefore , there is no way to be sure that the 1.5 million descriptions is an accurate number . It is a best guess based on the opinions of experts in different taxonomic groups . Given that Earth is losing species at an accelerating pace , science is very much in the place it was with the Lake Victoria cichlids : knowing little about what is being lost . Table 47.1 presents recent estimates of biodiversity in different groups .", "hl_sentences": "A recent estimate suggests that the eukaryote species for which science has names , about 1.5 million species , account for less than 20 percent of the total number of eukaryote species present on the planet (8 . 
7 million species , by one estimate ) .", "question": { "cloze_format": "The number of currently described species on the planet is about ________.", "normal_format": "The number of currently described species on the planet is about how many?", "question_choices": [ "17,000", "150,000", "1.5 million", "10 million" ], "question_id": "fs-idm50545184", "question_text": "The number of currently described species on the planet is about ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "a new drug" }, "bloom": "2", "hl_context": "Contemporary societies that live close to the land often have a broad knowledge of the medicinal uses of plants growing in their area . <hl> Most plants produce secondary plant compounds , which are toxins used to protect the plant from insects and other animals that eat them , but some of which also work as medication . <hl> For centuries in Europe , older knowledge about the medical uses of plants was compiled in herbals — books that identified plants and their uses . Humans are not the only species to use plants for medicinal reasons : the great apes , orangutans , chimpanzees , bonobos , and gorillas have all been observed self-medicating with plants .", "hl_sentences": "Most plants produce secondary plant compounds , which are toxins used to protect the plant from insects and other animals that eat them , but some of which also work as medication .", "question": { "cloze_format": "A secondary plant compound might be used for ___.", "normal_format": "A secondary plant compound might be used for which of the following?", "question_choices": [ "a new crop variety", "a new drug", "a soil nutrient", "a pest of a crop pest" ], "question_id": "fs-idp136212048", "question_text": "A secondary plant compound might be used for which of the following?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "an ecosystem service" }, "bloom": null, "hl_context": "Crop success is largely dependent on the quality of the soil . Although some agricultural soils are rendered sterile using controversial cultivation and chemical treatments , most contain a huge diversity of organisms that maintain nutrient cycles — breaking down organic matter into nutrient compounds that crops need for growth . These organisms also maintain soil texture that affects water and oxygen dynamics in the soil that are necessary for plant growth . If farmers had to maintain arable soil using alternate means , the cost of food would be much higher than it is now . <hl> These kinds of processes are called ecosystem services . <hl> They occur within ecosystems , such as soil ecosystems , as a result of the diverse metabolic activities of the organisms living there , but they provide benefits to human food production , drinking water availability , and breathable air . <hl> Other key ecosystem services related to food production are plant pollination and crop pest control . <hl> Over 150 crops in the United States require pollination to produce . One estimate of the benefit of honeybee pollination within the United States is $ 1.6 billion per year ; other pollinators contribute up to $ 6.7 billion more .", "hl_sentences": "These kinds of processes are called ecosystem services . 
47
47.1 The Biodiversity Crisis

Learning Objectives

By the end of this section, you will be able to:
Define biodiversity
Describe biodiversity as the equilibrium of naturally fluctuating rates of extinction and speciation
Identify historical causes of high extinction rates in Earth's history

Traditionally, ecologists have measured biodiversity, a general term for the variety present in the biosphere, by taking into account both the number of species and their commonness. Biodiversity can be estimated at a number of levels of organization of living things. These estimation indexes, which came from information theory, are most useful as a first step in quantifying biodiversity between and within ecosystems; they are less useful when the main concern among conservation biologists is simply the loss of biodiversity. (A short illustrative sketch of one such index appears at the end of this passage.) However, biologists recognize that measures of biodiversity, in terms of species diversity, may help focus efforts to preserve the biologically or technologically important elements of biodiversity.

The Lake Victoria cichlids provide an example through which we can begin to understand biodiversity. The biologists studying cichlids in the 1980s discovered hundreds of cichlid species representing a variety of specializations to particular habitat types and specific feeding strategies: eating plankton floating in the water, scraping and then eating algae from rocks, eating insect larvae from the bottom, and eating the eggs of other species of cichlid. The cichlids of Lake Victoria are the product of an adaptive radiation. An adaptive radiation is a rapid (less than three million years in the case of the Lake Victoria cichlids) branching through speciation of a phylogenetic tree into many closely related species; typically, the species "radiate" into different habitats and niches. The Galápagos finches are an example of a modest adaptive radiation with 15 species. The cichlids of Lake Victoria are an example of a spectacular adaptive radiation that includes about 500 species.

At the time biologists were making this discovery, some species began to quickly disappear. A culprit in these declines was a species of large fish that was introduced to Lake Victoria by fisheries to feed the people living around the lake. The Nile perch was introduced in 1963, but lay low until the 1980s, when its populations began to surge. The Nile perch population grew by consuming cichlids, driving species after species to the point of extinction (the disappearance of a species). In fact, there were several factors that played a role in the extinction of perhaps 200 cichlid species in Lake Victoria: the Nile perch, declining lake water quality due to agriculture and land clearing on the shores of Lake Victoria, and increased fishing pressure. Scientists had not even catalogued all of the species present—so many were lost before they were ever named. The diversity is now a shadow of what it once was.

The cichlids of Lake Victoria are a thumbnail sketch of contemporary rapid species loss that occurs all over Earth and is caused by human activity. Extinction is a natural process of macroevolution that occurs at the rate of about one out of 1 million species becoming extinct per year. The fossil record reveals that there have been five periods of mass extinction in history with much higher rates of species loss, and the rate of species loss today is comparable to those periods of mass extinction.
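The diversity "estimation indexes" from information theory mentioned above are not named in the text; the most widely used is the Shannon diversity index, which combines the number of species with the evenness of their abundances. The following minimal Python sketch is illustrative only, and the community counts in it are invented:

```python
import math

def shannon_index(counts):
    """Shannon diversity index H' = -sum(p_i * ln p_i), where p_i is the
    proportion of individuals in the community belonging to species i."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Two hypothetical communities, each with 4 species and 100 individuals:
even_community = [25, 25, 25, 25]    # all species equally common
skewed_community = [97, 1, 1, 1]     # one species dominates

print(round(shannon_index(even_community), 3))    # 1.386, the maximum for 4 species
print(round(shannon_index(skewed_community), 3))  # 0.168, low despite equal richness
```

The two communities have identical species counts but very different index values; that sensitivity to commonness, and not just to the raw species tally, is what such indexes contribute.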
However, there is a major difference between the previous mass extinctions and the current extinction we are experiencing: human activity. Specifically, three human activities have a major impact: destruction of habitat, introduction of exotic species, and overharvesting. Predictions of species loss within the next century, a tiny amount of time on geological timescales, range from 10 percent to 50 percent. Extinctions on this scale have only happened five other times in the history of the planet, and they have been caused by cataclysmic events that changed the course of the history of life in each instance. Earth is now in one of those times.

Types of Biodiversity

Scientists generally accept that the term biodiversity describes the number and kinds of species in a location or on the planet. Species can be difficult to define, but most biologists still feel comfortable with the concept and are able to identify and count eukaryotic species in most contexts. Biologists have also identified alternate measures of biodiversity, some of which are important for planning how to preserve biodiversity.

Genetic diversity is one of those alternate concepts. Genetic diversity, or variation, is the raw material for adaptation in a species. A species' future potential for adaptation depends on the genetic diversity held in the genomes of the individuals in populations that make up the species. The same is true for higher taxonomic categories. A genus with very different types of species will have more genetic diversity than a genus with species that look alike and have similar ecologies. If only one of two such genera could be preserved, the more genetically diverse one would have the greater potential for subsequent evolution. It would be ideal not to have to make such choices, but increasingly this may be the norm.

Many genes code for proteins, which in turn carry out the metabolic processes that keep organisms alive and reproducing. Genetic diversity can therefore be measured as chemical diversity: different species produce a variety of chemicals in their cells, both proteins and the products and byproducts of metabolism. This chemical diversity has potential benefit for humans as a source of pharmaceuticals, so it provides one way to measure diversity that is important to human health and welfare.

Humans have generated diversity in domestic animals, plants, and fungi. This diversity is also suffering losses because of migration, market forces, and increasing globalism in agriculture, especially in heavily populated regions such as China, India, and Japan. The human population directly depends on this diversity as a stable food source, and its decline is troubling biologists and agricultural scientists.

It is also useful to define ecosystem diversity, meaning the number of different ecosystems on the planet or in a given geographic area (Figure 47.2). Whole ecosystems can disappear even if some of the species might survive by adapting to other ecosystems. The loss of an ecosystem means the loss of interactions between species, the loss of unique features of coadaptation, and the loss of biological productivity that an ecosystem is able to create. An example of a largely extinct ecosystem in North America is the prairie ecosystem. Prairies once spanned central North America from the boreal forest in northern Canada down into Mexico. They are now all but gone, replaced by crop fields, pasture lands, and suburban sprawl.
Many of the species survive, but the hugely productive ecosystem that was responsible for creating the most productive agricultural soils is now gone. As a consequence, soils are disappearing or must be maintained at greater expense.

Current Species Diversity

Despite considerable effort, knowledge of the species that inhabit the planet is limited. A recent estimate suggests that the eukaryote species for which science has names, about 1.5 million species, account for less than 20 percent of the total number of eukaryote species present on the planet (8.7 million species, by one estimate). Estimates of numbers of prokaryotic species are largely guesses, but biologists agree that science has only begun to catalog their diversity. Even with what is known, there is no central repository of names or samples of the described species; therefore, there is no way to be sure that 1.5 million is an accurate count. It is a best guess based on the opinions of experts in different taxonomic groups. Given that Earth is losing species at an accelerating pace, science is very much in the place it was with the Lake Victoria cichlids: knowing little about what is being lost. Table 47.1 presents recent estimates of biodiversity in different groups.

Table 47.1 Estimates of the Numbers of Described and Predicted Species by Taxonomic Group

              Mora et al. 2011 [1]      Chapman 2009 [2]          Groombridge & Jenkins 2002 [3]
              Described   Predicted     Described   Predicted     Described   Predicted
Animalia      1,124,516   9,920,000     1,424,153   6,836,330     1,225,500   10,820,000
Chromista        17,892      34,900        25,044     200,500             —            —
Fungi            44,368     616,320        98,998   1,500,000        72,000    1,500,000
Plantae         224,244     314,600       310,129     390,800       270,000      320,000
Protozoa         16,236      72,800        28,871   1,000,000        80,000      600,000
Prokaryotes           —           —        10,307   1,000,000        10,175            —
Total         1,438,769  10,960,000     1,897,502  10,897,630     1,657,675   13,240,000

[1] Mora Camilo et al., "How Many Species Are There on Earth and in the Ocean?" PLoS Biology (2011), doi:10.1371/journal.pbio.1001127.
[2] Arthur D. Chapman, Numbers of Living Species in Australia and the World, 2nd ed. (Canberra, AU: Australian Biological Resources Study, 2009). http://www.environment.gov.au/biodiversity/abrs/publications/other/species-numbers/2009/pubs/nlsaw-2nd-complete.pdf.
[3] Brian Groombridge and Martin D. Jenkins, World Atlas of Biodiversity: Earth's Living Resources in the 21st Century (Berkeley: University of California Press, 2002).

There are various initiatives to catalog described species in accessible ways, and the internet is facilitating that effort. Nevertheless, it has been pointed out that at the current rate of species description, which according to the State of Observed Species Report is 17,000 to 20,000 new species per year, it will take close to 500 years to finish describing life on this planet [4]: with roughly nine million species still undescribed under the higher estimates in Table 47.1, about 18,500 descriptions per year works out to nearly five centuries. Over time, the task becomes both increasingly impossible and increasingly easier as extinction removes species from the planet.

[4] International Institute for Species Exploration (IISE), 2011 State of Observed Species (SOS). Tempe, AZ: IISE, 2011. Accessed May 20, 2012. http://species.asu.edu/SOS.

Naming and counting species may seem an unimportant pursuit given the other needs of humanity, but it is not simply an accounting. Describing species is a complex process by which biologists determine an organism's unique characteristics and whether or not that organism belongs to any other described species.
It allows biologists to find and recognize the species after the initial discovery, and allows them to follow up on questions about its biology. In addition, the unique characteristics of each species make it potentially valuable to humans or other species on which humans depend. Understanding these characteristics is the value of finding and naming species.

Patterns of Biodiversity

Biodiversity is not evenly distributed on Earth. Lake Victoria contained almost 500 species of cichlids alone, ignoring the other fish families present in the lake. All of these species were found only in Lake Victoria; therefore, the 500 species of cichlids were endemic. Endemic species are found in only one location. Endemics with highly restricted distributions are particularly vulnerable to extinction. Higher taxonomic levels, such as genera and families, can also be endemic. Lake Huron contains about 79 species of fish, all of which are found in many other lakes in North America. What accounts for the difference in fish diversity in these two lakes? Lake Victoria is a tropical lake, while Lake Huron is a temperate lake. Lake Huron in its present form is only about 7,000 years old, while Lake Victoria in its present form is about 15,000 years old. Biogeographers have suggested these two factors, latitude and age, are two of several hypotheses to explain biodiversity patterns on the planet.

Career Connection

Biogeographer

Biogeography is the study of the distribution of the world's species—both in the past and in the present. The work of biogeographers is critical to understanding our physical environment, how the environment affects species, and how environmental changes impact the distribution of a species; it has also been critical to developing evolutionary theory. Biogeographers need to understand both biology and ecology. They also need to be well-versed in evolutionary studies, soil science, and climatology. There are three main fields of study under the heading of biogeography: ecological biogeography, historical biogeography (called paleobiogeography), and conservation biogeography. Ecological biogeography studies the current factors affecting the distribution of plants and animals. Historical biogeography, as the name implies, studies the past distribution of species. Conservation biogeography, on the other hand, is focused on the protection and restoration of species based upon known historical and current ecological information. Each of these fields considers both zoogeography and phytogeography—the past and present distribution of animals and plants.

One of the oldest observed patterns in ecology is that species biodiversity in almost every taxonomic group increases as latitude declines. In other words, biodiversity increases closer to the equator (Figure 47.3). It is not yet clear why biodiversity increases closer to the equator, but hypotheses include the greater age of the ecosystems in the tropics versus temperate regions that were largely devoid of life or drastically impoverished during the last glaciation. The idea is that greater age provides more time for speciation. Another possible explanation is the increased energy the tropics receive from the sun versus the decreased energy that temperate and polar regions receive. It is not entirely clear how greater energy input could translate into more species. The complexity of tropical ecosystems may promote speciation by increasing the heterogeneity, or number of ecological niches, in the tropics relative to higher latitudes.
The greater heterogeneity provides more opportunities for coevolution, specialization, and perhaps greater selection pressures leading to population differentiation. However, this hypothesis suffers from some circularity—ecosystems with more species encourage speciation, but how did they get more species to begin with? The tropics have been perceived as being more stable than temperate regions, which have a pronounced climate and day-length seasonality. The tropics have their own forms of seasonality, such as rainfall, but they are generally assumed to be more stable environments and this stability might promote speciation. Regardless of the mechanisms, it is certainly true that all levels of biodiversity are greatest in the tropics. Additionally, the rate of endemism is highest, and there are more biodiversity hotspots. However, this richness of diversity also means that knowledge of species is lowest, and there is a high potential for biodiversity loss.

Conservation of Biodiversity

In 1988, British environmentalist Norman Myers developed a conservation concept to identify areas rich in species and at significant risk for species loss: biodiversity hotspots. Biodiversity hotspots are geographical areas that contain high numbers of endemic species. The purpose of the concept was to identify important locations on the planet for conservation efforts, a kind of conservation triage. By protecting hotspots, governments are able to protect a larger number of species. The original criteria for a hotspot included the presence of 1,500 or more endemic plant species and 70 percent of the area disturbed by human activity. There are now 34 biodiversity hotspots (Figure 47.4) containing large numbers of endemic species, which include half of Earth's endemic plants.

Biodiversity Change through Geological Time

The number of species on the planet, or in any geographical area, is the result of an equilibrium of two evolutionary processes that are ongoing: speciation and extinction. Both are natural "birth" and "death" processes of macroevolution. When speciation rates begin to outstrip extinction rates, the number of species will increase; likewise, the number of species will decrease when extinction rates begin to overtake speciation rates. Throughout Earth's history, these two processes have fluctuated—sometimes leading to dramatic changes in the number of species on Earth as reflected in the fossil record (Figure 47.5). Paleontologists have identified five strata in the fossil record that appear to show sudden and dramatic (greater than half of all extant species disappearing from the fossil record) losses in biodiversity. These are called mass extinctions. There are many lesser, yet still dramatic, extinction events, but the five mass extinctions have attracted the most research. An argument can be made that the five mass extinctions are only the five most extreme events in a continuous series of large extinction events throughout the Phanerozoic (since 542 million years ago). In most cases, the hypothesized causes are still controversial; however, the most recent event seems clear.

The Five Mass Extinctions

The fossil record of the mass extinctions was the basis for defining periods of geological history, so they typically occur at the transition point between geological periods. The transition in fossils from one period to another reflects the dramatic loss of species and the gradual origin of new species. These transitions can be seen in the rock strata.
Table 47.2 provides data on the five mass extinctions.

Table 47.2 Mass Extinctions

Geological Period      Mass Extinction Name        Time (millions of years ago)
Ordovician–Silurian    end-Ordovician O–S          450–440
Late Devonian          end-Devonian                375–360
Permian–Triassic       end-Permian                 251
Triassic–Jurassic      end-Triassic                205
Cretaceous–Paleogene   end-Cretaceous K–Pg (K–T)   65.5

This table shows the names and dates for the five mass extinctions in Earth's history.

The Ordovician–Silurian extinction event is the first recorded mass extinction and the second largest. During this period, about 85 percent of marine species (few species lived outside the oceans) became extinct. The main hypothesis for its cause is a period of glaciation and then warming. The extinction event actually consists of two extinction events separated by about 1 million years. The first event was caused by cooling, and the second event was due to the subsequent warming. The climate changes affected temperatures and sea levels. Some researchers have suggested that a gamma-ray burst, caused by a nearby supernova, is a possible cause of the Ordovician–Silurian extinction. The gamma-ray burst would have stripped away Earth's ozone layer, allowing intense ultraviolet radiation from the sun to reach the surface, and may account for the climate changes observed at the time. The hypothesis is speculative, but extraterrestrial influences on Earth's history are an active line of research. Recovery of biodiversity after the mass extinction took from 5 to 20 million years, depending on the location.

The late Devonian extinction may have occurred over a relatively long period of time. It appears to have affected marine species and not the plants or animals inhabiting terrestrial habitats. The causes of this extinction are poorly understood.

The end-Permian extinction was the largest in the history of life. Indeed, an argument could be made that Earth nearly became devoid of life during this extinction event. The planet looked very different before and after this event. Estimates are that 96 percent of all marine species and 70 percent of all terrestrial species were lost. It was at this time, for example, that the trilobites, a group that survived the Ordovician–Silurian extinction, became extinct. The causes for this mass extinction are not clear, but the leading suspect is extended and widespread volcanic activity that led to a runaway global-warming event. The oceans became largely anoxic, suffocating marine life. Terrestrial tetrapod diversity took 30 million years to recover after the end-Permian extinction. The Permian extinction dramatically altered Earth's biodiversity makeup and the course of evolution.

The causes of the Triassic–Jurassic extinction event are not clear, and hypotheses of climate change, asteroid impact, and volcanic eruptions have been argued. The extinction event occurred just before the breakup of the supercontinent Pangaea, although recent scholarship suggests that the extinctions may have occurred more gradually throughout the Triassic.

The causes of the end-Cretaceous extinction event are the ones that are best understood. It was during this extinction event about 65 million years ago that the dinosaurs, the dominant vertebrate group for millions of years, disappeared from the planet (with the exception of a theropod clade that gave rise to birds). Indeed, every land animal that weighed more than 25 kg became extinct.
The cause of this extinction is now understood to be the result of a cataclysmic impact of a large meteorite, or asteroid, off the coast of what is now the Yucatán Peninsula. This hypothesis, proposed first in 1980, was a radical explanation based on a sharp spike in the levels of iridium (which rains down from space in meteors at a fairly constant rate but is otherwise absent on Earth's surface) at the rock stratum that marks the boundary between the Cretaceous and Paleogene periods (Figure 47.6). This boundary marked the disappearance of the dinosaurs in fossils as well as many other taxa. The researchers who discovered the iridium spike interpreted it as a rapid influx of iridium from space to the atmosphere (in the form of a large asteroid) rather than a slowing in the deposition of sediments during that period. It was a radical explanation, but the report of an appropriately aged and sized impact crater in 1991 made the hypothesis more believable. Now an abundance of geological evidence supports the theory. Recovery times for biodiversity after the end-Cretaceous extinction are shorter, in geological time, than for the end-Permian extinction, on the order of 10 million years.

Visual Connection

Scientists measured the relative abundance of fern spores above and below the K–Pg boundary in this rock sample. Which of the following statements most likely represents their findings?

a. An abundance of fern spores from several species was found below the K–Pg boundary, but none was found above.
b. An abundance of fern spores from several species was found above the K–Pg boundary, but none was found below.
c. An abundance of fern spores was found both above and below the K–Pg boundary, but only one species was found below the boundary, and many species were found above the boundary.
d. Many species of fern spores were found both above and below the boundary, but the total number of spores was greater below the boundary.

Link to Learning

Explore this interactive website about mass extinctions.
Finally, on many remote oceanic islands, the extinctions of many species occurred coincident with human arrivals. Not all of the islands had large animals, but when there were large animals, they were lost. Madagascar was colonized about 2,000 years ago and the large mammals that lived there became extinct. Eurasia and Africa do not show this pattern, but they also did not experience a recent arrival of humans. Humans arrived in Eurasia hundreds of thousands of years ago after the origin of the species in Africa. This topic remains an area of active research and hypothesizing. It seems clear that even if climate played a role, in most cases human hunting precipitated the extinctions.

Present-Time Extinctions

The sixth, or Holocene, mass extinction appears to have begun earlier than previously believed and has mostly to do with the activities of Homo sapiens. Since the beginning of the Holocene period, there are numerous recent extinctions of individual species that are recorded in human writings. Most of these are coincident with the expansion of the European colonies since the 1500s.

One of the earlier and popularly known examples is the dodo bird. The dodo bird lived in the forests of Mauritius, an island in the Indian Ocean. The dodo bird became extinct around 1662. It was hunted for its meat by sailors and was easy prey because the dodo, which did not evolve with humans, would approach people without fear. Introduced pigs, rats, and dogs brought to the island by European ships also killed dodo young and eggs.

Steller's sea cow became extinct in 1768; it was related to the manatee and probably once lived along the northwest coast of North America. Steller's sea cow was first discovered by Europeans in 1741 and was hunted for meat and oil. The last sea cow was killed in 1768. That amounts to 27 years between the sea cow's first contact with Europeans and extinction of the species.

In 1914, the last living passenger pigeon died in a zoo in Cincinnati, Ohio. This species had once darkened the skies of North America during its migrations, but it was hunted and suffered from habitat loss through the clearing of forests for farmland. In 1918, the last living Carolina parakeet died in captivity. This species was once common in the eastern United States, but it suffered from habitat loss. The species was also hunted because it ate orchard fruit when its native foods were destroyed to make way for farmland. The Japanese sea lion, which inhabited a broad area around Japan and the coast of Korea, became extinct in the 1950s due to fishermen. The Caribbean monk seal was distributed throughout the Caribbean Sea but was driven to extinction via hunting by 1952.

These are only a few of the recorded extinctions in the past 500 years. The International Union for Conservation of Nature (IUCN) keeps a list of extinct and endangered species called the Red List. The list is not complete, but it describes 380 extinct species of vertebrates after 1500 AD, 86 of which were driven extinct by overhunting or overfishing.

Estimates of Present-Time Extinction Rates

Estimates of extinction rates are hampered by the fact that most extinctions are probably happening without observation. The extinction of a bird or mammal is likely to be noticed by humans, especially if it has been hunted or used in some other way. But there are many organisms that are of less interest to humans (not necessarily of less value) and many that are undescribed.
The background extinction rate is estimated to be about one per million species per year (E/MSY). For example, assuming there are about ten million species in existence, the expectation is that ten species would become extinct each year (ten million species × one extinction per million species per year = ten extinctions per year).

One contemporary extinction rate estimate uses the extinctions in the written record since the year 1500. For birds alone this method yields an estimate of 26 E/MSY. However, this value is likely an underestimate for three reasons. First, many species would not have been described until much later in the time period, so their loss would have gone unnoticed. Second, the number of recently extinct species is increasing because extinct species now are being described from skeletal remains. And third, some species are probably already extinct even though conservationists are reluctant to name them as such. Taking these factors into account raises the estimated extinction rate closer to 100 E/MSY. The predicted rate by the end of the century is 1500 E/MSY.

A second approach to estimating present-time extinction rates is to correlate species loss with habitat loss by measuring forest-area loss and understanding species-area relationships. The species-area relationship is the rate at which new species are seen when the area surveyed is increased. Studies have shown, for example, that the number of species present on an island increases with the island's area; the same relationship holds in other habitats as well. Turning this relationship around, if the habitat area is reduced, the number of species living there will also decline. Estimates of extinction rates based on habitat loss and species-area relationships have suggested that with about 90 percent habitat loss an expected 50 percent of species would become extinct. Species-area estimates have led to species extinction rate calculations of about 1000 E/MSY and higher. In general, actual observations do not show this amount of loss, and suggestions have been made that there is a delay in extinction. Recent work has also called into question the applicability of the species-area relationship when estimating the loss of species. This work argues that the species-area relationship leads to an overestimate of extinction rates. A better relationship to use may be the endemics-area relationship. Using this method would bring estimates down to around 500 E/MSY in the coming century. Note that this value is still 500 times the background rate. (A short worked sketch of these rate calculations appears below.)

Link to Learning

Check out this interactive exploration of endangered and extinct species, their ecosystems, and the causes of the endangerment or extinction.

47.2 The Importance of Biodiversity to Human Life

Learning Objectives

By the end of this section, you will be able to:
Identify chemical diversity benefits to humans
Identify biodiversity components that support human agriculture
Describe ecosystem services

It may not be clear why biologists are concerned about biodiversity loss. When biodiversity loss is thought of as the extinction of the passenger pigeon, the dodo bird, and even the woolly mammoth, the loss may appear to be an emotional one. But is the loss practically important for the welfare of the human species? From the perspective of evolution and ecology, the loss of a particular individual species is unimportant (however, the loss of a keystone species can lead to ecological disaster). Extinction is a normal part of macroevolution.
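As a quick check on the arithmetic of the two estimation approaches described in the preceding section, here is a minimal sketch. Every input is a figure quoted in the text except the species-area exponent z and the power-law form of the relationship, which the text describes only qualitatively; the z value below is an assumed, literature-typical one, chosen because it reproduces the quoted 90-percent-loss example:

```python
# 1. Background extinction rate: ~1 extinction per million species per year (E/MSY).
background_rate = 1 / 1_000_000      # extinctions per species per year
n_species = 10_000_000               # assumed total number of species, as in the text
print(n_species * background_rate)   # -> 10.0 expected extinctions per year

# 2. Species-area relationship, assumed power-law form S = c * A**z.
# The fraction of species persisting when habitat shrinks from area A
# to area A' is then (A'/A)**z, independent of the constant c.
z = 0.30                             # assumed exponent; commonly cited values are ~0.15-0.35
habitat_remaining = 0.10             # 90 percent habitat loss, as in the text
persisting = habitat_remaining ** z
print(round(1 - persisting, 2))      # -> 0.5, i.e., about 50 percent of species lost
```

With z = 0.30 the sketch reproduces the "90 percent habitat loss, 50 percent of species lost" figure quoted above, which is why that exponent was chosen here.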
But the accelerated extinction rate means the loss of tens of thousands of species within our lifetimes, and it is likely to have dramatic effects on human welfare through the collapse of ecosystems and in added costs to maintain food production, clean air and water, and human health.

Agriculture began after early hunter-gatherer societies first settled in one place and heavily modified their immediate environment. This cultural transition has made it difficult for humans to recognize their dependence on undomesticated living things on the planet. Biologists recognize the human species is embedded in ecosystems and is dependent on them, just as every other species on the planet is dependent. Technology smooths out the extremes of existence, but ultimately the human species cannot exist without its ecosystem.

Human Health

Contemporary societies that live close to the land often have a broad knowledge of the medicinal uses of plants growing in their area. Most plants produce secondary plant compounds, which are toxins used to protect the plant from insects and other animals that eat them, but some of which also work as medication. For centuries in Europe, older knowledge about the medical uses of plants was compiled in herbals—books that identified plants and their uses. Humans are not the only species to use plants for medicinal reasons: the great apes, orangutans, chimpanzees, bonobos, and gorillas have all been observed self-medicating with plants.

Modern pharmaceutical science also recognizes the importance of these plant compounds. Examples of significant medicines derived from plant compounds include aspirin, codeine, digoxin, atropine, and vincristine (Figure 47.8). Many medicines were once derived from plant extracts but are now synthesized. It is estimated that, at one time, 25 percent of modern drugs contained at least one plant extract. That number has probably decreased to about 10 percent as natural plant ingredients are replaced by synthetic versions. Antibiotics, which are responsible for extraordinary improvements in health and lifespans in developed countries, are compounds largely derived from fungi and bacteria.

In recent years, animal venoms and poisons have excited intense research for their medicinal potential. By 2007, the FDA had approved five drugs based on animal toxins to treat diseases such as hypertension, chronic pain, and diabetes. Another five drugs are undergoing clinical trials, and at least six drugs are being used in other countries. Other toxins under investigation come from mammals, snakes, lizards, various amphibians, fish, snails, octopuses, and scorpions.

Aside from representing billions of dollars in profits, these medicines improve people's lives. Pharmaceutical companies are actively looking for new compounds synthesized by living organisms that can function as medicine. It is estimated that one-third of pharmaceutical research and development is spent on natural compounds and that about 35 percent of new drugs brought to market between 1981 and 2002 were from natural compounds. The opportunities for new medications will be reduced in direct proportion to the disappearance of species.

Agricultural Diversity

Since the beginning of human agriculture more than 10,000 years ago, human groups have been breeding and selecting crop varieties. This crop diversity matched the cultural diversity of highly subdivided populations of humans. For example, potatoes were domesticated beginning around 7,000 years ago in the central Andes of Peru and Bolivia.
The potatoes grown in that region belong to seven species, and the number of varieties is likely in the thousands. Each variety has been bred to thrive at particular elevations and soil and climate conditions. The diversity is driven by the diverse demands of the topography, the limited movement of people, and the demands created by crop rotation for different varieties that will do well in different fields.

Potatoes are only one example of human-generated diversity. Every plant, animal, and fungus that has been cultivated by humans has been bred from original wild ancestor species into diverse varieties arising from the demands for food value, adaptation to growing conditions, and resistance to pests. The potato demonstrates a well-known example of the risks of low crop diversity: the tragic Irish potato famine, in which the single variety grown in Ireland became susceptible to a potato blight that wiped out the crop. The loss of the crop led to famine, death, and mass emigration.

Resistance to disease is a chief benefit to maintaining crop biodiversity, and lack of diversity in contemporary crop species carries similar risks. Seed companies, which are the source of most crop varieties in developed countries, must continually breed new varieties to keep up with evolving pest organisms. These same seed companies, however, have participated in the decline of the number of varieties available as they focus on selling fewer varieties in more areas of the world.

The ability to create new crop varieties relies on the diversity of varieties available and the accessibility of wild forms related to the crop plant. These wild forms are often the source of new gene variants that can be bred with existing varieties to create varieties with new attributes. Loss of wild species related to a crop will mean the loss of potential in crop improvement. Maintaining the genetic diversity of wild species related to domesticated species ensures our continued food supply.

Since the 1920s, government agriculture departments have maintained seed banks of crop varieties as a way to maintain crop diversity. This system has flaws because, over time, seed banks are lost through accidents, and there is no way to replace them. In 2008, the Svalbard Global Seed Vault (Figure 47.9) began storing seeds from around the world as a backup system to the regional seed banks. If a regional seed bank stores varieties in Svalbard, losses can be replaced from Svalbard. The seed vault is located deep in the rock of an arctic island. Conditions within the vault are maintained at ideal temperature and humidity for seed survival, but the deep underground location of the vault in the arctic means that failure of the vault's systems will not compromise the climatic conditions inside.

Visual Connection

The Svalbard Global Seed Vault is located on Spitsbergen island in Norway, which has an arctic climate. Why might an arctic climate be good for seed storage?

Crop success is largely dependent on the quality of the soil. Although some agricultural soils are rendered sterile using controversial cultivation and chemical treatments, most contain a huge diversity of organisms that maintain nutrient cycles—breaking down organic matter into nutrient compounds that crops need for growth. These organisms also maintain soil texture that affects water and oxygen dynamics in the soil that are necessary for plant growth. If farmers had to maintain arable soil using alternate means, the cost of food would be much higher than it is now.
These kinds of processes are called ecosystem services. They occur within ecosystems, such as soil ecosystems, as a result of the diverse metabolic activities of the organisms living there, but they provide benefits to human food production, drinking water availability, and breathable air.

Other key ecosystem services related to food production are plant pollination and crop pest control. Over 150 crops in the United States require pollination to produce. One estimate of the benefit of honeybee pollination within the United States is $1.6 billion per year; other pollinators contribute up to $6.7 billion more. Many honeybee populations are managed by apiarists who rent out their hives' services to farmers. Honeybee populations in North America have been suffering large losses caused by a syndrome known as colony collapse disorder, whose cause is unclear. Other pollinators include a diverse array of other bee species and various insects and birds. Loss of these species would make growing crops requiring pollination impossible, increasing dependence on other crops.

Finally, humans compete for their food with crop pests, most of which are insects. Pesticides control these competitors; however, pesticides are costly and lose their effectiveness over time as pest populations adapt. They also lead to collateral damage by killing non-pest species and risking the health of consumers and agricultural workers. Ecologists believe that the bulk of the work in removing pests is actually done by predators and parasites of those pests, but the impact has not been well studied. A review found that in 74 percent of studies that looked for an effect of landscape complexity on natural enemies of pests, the greater the complexity, the greater the effect of pest-suppressing organisms. An experimental study found that introducing multiple enemies of pea aphids (an important alfalfa pest) increased the yield of alfalfa significantly. This study addressed the question of whether a diversity of pest enemies is more effective at controlling pests than a single enemy species; the results showed this to be the case. Loss of diversity in pest enemies will inevitably make it more difficult and costly to grow food.

Wild Food Sources

In addition to growing crops and raising animals for food, humans obtain food resources from wild populations, primarily fish populations. For approximately 1 billion people, aquatic resources provide the main source of animal protein. But since 1990, global fish production has declined. Despite considerable effort, few fisheries on the planet are managed for sustainability. Fishery extinctions rarely lead to complete extinction of the harvested species, but rather to a radical restructuring of the marine ecosystem in which a dominant species is so over-harvested that it becomes a minor player, ecologically. In addition to humans losing the food source, these alterations affect many other species in ways that are difficult or impossible to predict. The collapse of fisheries has dramatic and long-lasting effects on local populations that work in the fishery. In addition, the loss of an inexpensive protein source to populations that cannot afford to replace it will increase the cost of living and limit societies in other ways. In general, the fish taken from fisheries have shifted to smaller species as larger species are fished to extinction. The ultimate outcome could clearly be the loss of aquatic systems as food sources.
Link to Learning

View a brief video discussing declining fish stocks.

Psychological and Moral Value

Finally, it has been argued that humans benefit psychologically from living in a biodiverse world. A chief proponent of this idea is entomologist E. O. Wilson. He argues that human evolutionary history has adapted us to live in a natural environment and that built environments generate stressors that affect human health and well-being. There is considerable research into the psychological regenerative benefits of natural landscapes suggesting the hypothesis may hold some truth. In addition, there is a moral argument that humans have a responsibility to inflict as little harm as possible on other species.

47.3 Threats to Biodiversity

Learning Objectives

By the end of this section, you will be able to:
Identify significant threats to biodiversity
Explain the effects of habitat loss, exotic species, and hunting on biodiversity
Identify the early and predicted effects of climate change on biodiversity

The core threat to biodiversity on the planet, and therefore a threat to human welfare, is the combination of human population growth and resource exploitation. The human population requires resources to survive and grow, and those resources are being removed unsustainably from the environment. The three greatest proximate threats to biodiversity are habitat loss, overharvesting, and introduction of exotic species. The first two of these are a direct result of human population growth and resource use. The third results from increased mobility and trade. A fourth major cause of extinction, anthropogenic climate change, has not yet had a large impact, but it is predicted to become significant during this century. Global climate change is also a consequence of human population needs for energy and the use of fossil fuels to meet those needs (Figure 47.10). Environmental issues, such as toxic pollution, have specific targeted effects on species, but they are not generally seen as threats at the magnitude of the others.

Habitat Loss

Humans rely on technology to modify their environment and replace certain functions that were once performed by the natural ecosystem. Other species cannot do this. Elimination of their ecosystem—whether it is a forest, a desert, a grassland, a freshwater or estuarine system, or a marine environment—will kill the individuals in the species. Remove the entire habitat within the range of a species and, unless it is one of the few species that do well in human-built environments, the species will become extinct. Human destruction of habitats accelerated in the latter half of the twentieth century. Consider the exceptional biodiversity of Sumatra: it is home to one species of orangutan, a species of critically endangered elephant, and the Sumatran tiger, but half of Sumatra's forest is now gone. The neighboring island of Borneo, home to the other species of orangutan, has lost a similar area of forest. Forest loss continues in protected areas of Borneo. The orangutan in Borneo is listed as endangered by the International Union for Conservation of Nature (IUCN), but it is simply the most visible of thousands of species that will not survive the disappearance of the forests of Borneo. The forests are removed for timber and to plant palm oil plantations (Figure 47.11). Palm oil is used in many products including food products, cosmetics, and biodiesel in Europe. A five-year estimate of global forest cover loss for the years 2000–2005 was 3.1 percent.
In the humid tropics, where forest loss is primarily from timber extraction, 272,000 km² was lost out of a global total of 11,564,000 km² (or 2.4 percent). In the tropics, these losses certainly also represent the extinction of species because of high levels of endemism.

Everyday Connection

Preventing Habitat Destruction with Wise Wood Choices

Most consumers do not imagine that the home improvement products they buy might be contributing to habitat loss and species extinctions. Yet the market for illegally harvested tropical timber is huge, and the wood products often end up in building supply stores in the United States. One estimate is that 10 percent of the imported timber stream in the United States, which is the world's largest consumer of wood products, is potentially illegally logged. In 2006, this amounted to $3.6 billion in wood products. Most of the illegal products are imported from countries that act as intermediaries and are not the originators of the wood.

How is it possible to determine if a wood product, such as flooring, was harvested sustainably or even legally? The Forest Stewardship Council (FSC) certifies sustainably harvested forest products; therefore, looking for its certification on flooring and other hardwood products is one way to ensure that the wood has not been taken illegally from a tropical forest. Certification applies to specific products, not to a producer; some producers' products may not have certification while other products are certified. There are other, industry-backed certifications besides the FSC's, but these are unreliable because they lack independence from the industry. Another approach is to buy domestic wood species. While it would be great if there were a list of legal versus illegal wood products, it is not that simple. Logging and forest management laws vary from country to country; what is illegal in one country may be legal in another. Where and how a product is harvested and whether the forest from which it comes is being maintained sustainably all factor into whether a wood product will be certified by the FSC. It is always a good idea to ask questions about where a wood product came from and how the supplier knows that it was harvested legally.

Habitat destruction can affect ecosystems other than forests. Rivers and streams are important ecosystems and are frequently modified through land development and from damming or water removal. Damming of rivers affects the water flow and access to all parts of a river. Altered flow regimes can reduce or eliminate populations that are adapted to a river's natural flow patterns. For example, an estimated 91 percent of river lengths in the United States have been developed: they have modifications like dams, to create energy or store water; levees, to prevent flooding; or dredging or rerouting, to create land that is more suitable for human development. Many fish species in the United States, especially rare species or species with restricted distributions, have seen declines caused by river damming and habitat loss. Research has confirmed that species of amphibians that must carry out parts of their life cycles in both aquatic and terrestrial habitats have a greater chance of suffering population declines and extinction because of the increased likelihood that one of their habitats or access between them will be lost.

Overharvesting

Overharvesting is a serious threat to many species, but particularly to aquatic species; the brief numerical sketch below illustrates why unrestrained harvest is so destructive, and the examples that follow show it happening in real fisheries.
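The following toy model is purely illustrative: the parameters are invented and it is not drawn from the text or from any real fishery assessment. It combines logistic population growth with a fixed annual quota to show how a catch set above the maximum sustainable yield drives a stock to collapse, while a slightly smaller catch does not:

```python
def simulate_stock(stock, r=0.4, K=1000.0, quota=90.0, years=60):
    """Logistic growth with a fixed annual catch (all parameters invented)."""
    for _ in range(years):
        stock += r * stock * (1 - stock / K) - quota  # growth minus harvest
        stock = max(stock, 0.0)                       # a stock cannot go negative
    return stock

# Maximum sustainable yield for this model is r * K / 4 = 100 units per year.
print(round(simulate_stock(500.0, quota=90.0)))   # ~658: below MSY, the stock persists
print(round(simulate_stock(500.0, quota=110.0)))  # 0: above MSY, the stock collapses
```

Because each fisher keeps the full benefit of an extra unit of catch while the cost of depletion is shared by everyone, the commons dynamic described below pushes the total harvest toward, and past, the sustainable limit.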
There are many examples of regulated commercial fisheries monitored by fisheries scientists that have nevertheless collapsed. The western Atlantic cod fishery is the most spectacular recent collapse. While it was a hugely productive fishery for 400 years, the introduction of modern factory trawlers in the 1980s and the pressure on the fishery led to it becoming unsustainable. The causes of fishery collapse are both economic and political in nature. Most fisheries are managed as a common (shared) resource even when the fishing territory lies within a country's territorial waters. Common resources are subject to an economic pressure known as the tragedy of the commons, in which essentially no fisher has a motivation to exercise restraint in harvesting a fishery when it is not owned by that fisher. The natural outcome of harvests of resources held in common is their overexploitation. While large fisheries are regulated to attempt to avoid this pressure, it still exists in the background. This overexploitation is exacerbated when access to the fishery is open and unregulated and when technology gives fishers the ability to overfish. In a few fisheries, the biological growth of the resource is less than the potential growth of the profits made from fishing if that time and money were invested elsewhere. In these cases—whales are an example—economic forces will always drive toward fishing the population to extinction.

Link to Learning

Explore a U.S. Fish & Wildlife Service interactive map of critical habitat for endangered and threatened species in the United States. To begin, select "Visit the online mapper."

For the most part, fishery extinction is not equivalent to biological extinction—the last fish of a species is rarely fished out of the ocean. At the same time, fishery extinction is still harmful to fish species and their ecosystems. There are some instances in which true extinction is a possibility. Whales have slow-growing populations and are at risk of complete extinction through hunting. There are some species of sharks with restricted distributions that are at risk of extinction. The groupers are another population of generally slow-growing fishes that, in the Caribbean, includes a number of species that are at risk of extinction from overfishing.

Coral reefs are extremely diverse marine ecosystems that face peril from several processes. Reefs are home to 1/3 of the world's marine fish species—about 4,000 species—despite making up only 1 percent of marine habitat. Most home marine aquaria are stocked with wild-caught organisms, not cultured organisms. Although no species is known to have been driven extinct by the pet trade in marine species, there are studies showing that populations of some species have declined in response to harvesting, indicating that the harvest is not sustainable at those levels. There are concerns about the effect of the pet trade on some terrestrial species such as turtles, amphibians, birds, plants, and even the orangutan.

Link to Learning

View a brief video discussing the role of marine ecosystems in supporting human welfare and the decline of ocean ecosystems.

Bush meat is the generic term used for wild animals killed for food. Hunting is practiced throughout the world, but hunting practices, particularly in equatorial Africa and parts of Asia, are believed to threaten several species with extinction.
Traditionally, bush meat in Africa was hunted to feed families directly; however, recent commercialization of the practice now has bush meat available in grocery stores, which has increased harvest rates to the level of unsustainability. Additionally, human population growth has increased the need for protein foods that are not being met from agriculture. Species threatened by the bush meat trade are mostly mammals including many primates living in the Congo basin. Exotic Species Exotic species are species that have been intentionally or unintentionally introduced by humans into an ecosystem in which they did not evolve. Such introductions likely occur frequently as natural phenomena. For example, Kudzu ( Pueraria lobata ), which is native to Japan, was introduced in the United States in 1876. It was later planted for soil conservation. Problematically, it grows too well in the southeastern United States—up to a foot a day. It is now a pest species and covers over 7 million acres in the southeastern United States. If an introduced species is able to survive in its new habitat, that introduction is now reflected in the observed range of the species. Human transportation of people and goods, including the intentional transport of organisms for trade, has dramatically increased the introduction of species into new ecosystems, sometimes at distances that are well beyond the capacity of the species to ever travel itself and outside the range of the species’ natural predators. Most exotic species introductions probably fail because of the low number of individuals introduced or poor adaptation to the ecosystem they enter. Some species, however, possess preadaptations that can make them especially successful in a new ecosystem. These exotic species often undergo dramatic population increases in their new habitat and reset the ecological conditions in the new environment, threatening the species that exist there. For this reason, exotic species are also called invasive species. Exotic species can threaten other species through competition for resources, predation, or disease. Link to Learning Explore an interactive global database of exotic or invasive species. Lakes and islands are particularly vulnerable to extinction threats from introduced species. In Lake Victoria, as mentioned earlier, the intentional introduction of the Nile perch was largely responsible for the extinction of about 200 species of cichlids. The accidental introduction of the brown tree snake via aircraft ( Figure 47.12 ) from the Solomon Islands to Guam in 1950 has led to the extinction of three species of birds and three to five species of reptiles endemic to the island. Several other species are still threatened. The brown tree snake is adept at exploiting human transportation as a means to migrate; one was even found on an aircraft arriving in Corpus Christi, Texas. Constant vigilance on the part of airport, military, and commercial aircraft personnel is required to prevent the snake from moving from Guam to other islands in the Pacific, especially Hawaii. Islands do not make up a large area of land on the globe, but they do contain a disproportionate number of endemic species because of their isolation from mainland ancestors. It now appears that the global decline in amphibian species recognized in the 1990s is, in some part, caused by the fungus Batrachochytrium dendrobatidis , which causes the disease chytridiomycosis ( Figure 47.13 ). 
There is evidence that the fungus is native to Africa and may have been spread throughout the world by transport of a commonly used laboratory and pet species: the African clawed toad ( Xenopus laevis ). It may well be that biologists themselves are responsible for spreading this disease worldwide. The North American bullfrog, Rana catesbeiana , which has also been widely introduced as a food animal but which easily escapes captivity, survives most infections of Batrachochytrium dendrobatidis and can act as a reservoir for the disease. Early evidence suggests that another fungal pathogen, Geomyces destructans , introduced from Europe, is responsible for white-nose syndrome, which infects cave-hibernating bats in eastern North America and has spread from a point of origin in western New York State ( Figure 47.14 ). The disease has decimated bat populations and threatens extinction of species already listed as endangered: the Indiana bat, Myotis sodalis , and potentially the Virginia big-eared bat, Corynorhinus townsendii virginianus . How the fungus was introduced is unclear, but one logical presumption would be that recreational cavers unintentionally brought the fungus on clothes or equipment from Europe. Climate Change Climate change, and specifically the anthropogenic (meaning, caused by humans) warming trend presently underway, is recognized as a major extinction threat, particularly when combined with other threats such as habitat loss. Scientists disagree about the likely magnitude of the effects, with extinction rate estimates ranging from 15 percent to 40 percent of species committed to extinction by 2050. Scientists do agree, however, that climate change will alter regional climates, including rainfall and snowfall patterns, making habitats less hospitable to the species living in them. The warming trend will shift colder climates toward the north and south poles, forcing species to track the climate conditions to which they are adapted while facing habitat gaps along the way. The shifting ranges will impose new competitive regimes on species as they find themselves in contact with other species not present in their historic range. One such unexpected species contact is between polar bears and grizzly bears. Previously, these two species had separate ranges. Now, their ranges are overlapping, and there are documented cases of these two species mating and producing viable offspring. Changing climates also throw off species’ delicate timing adaptations to seasonal food resources and breeding times. Many such mismatches, in which the timing of species’ activities no longer lines up with shifted resource availability, have already been documented. Range shifts are already being observed: for example, some European bird species’ ranges have moved 91 km northward. The same study suggested that the optimal shift based on warming trends was double that distance, suggesting that the populations are not moving quickly enough. Range shifts have also been observed in plants, butterflies, other insects, freshwater fishes, reptiles, and mammals. Climate gradients will also move up mountains, eventually crowding species higher in altitude and eliminating the habitat for those species adapted to the highest elevations. Some climates will completely disappear. The rate of warming appears to be accelerated in the Arctic, which is recognized as a serious threat to polar bear populations that require sea ice to hunt seals during the winter months: seals are the only source of protein available to polar bears.
A trend toward decreasing sea ice coverage has occurred since observations began in the mid-twentieth century. The rate of decline observed in recent years is far greater than previously predicted by climate models. Finally, global warming will raise ocean levels due to meltwater from glaciers and the greater volume of warmer water. Shorelines will be inundated, reducing island size, which will have an effect on some species, and a number of islands will disappear entirely. Additionally, the gradual melting and subsequent refreezing of the poles, glaciers, and higher elevation mountains—a cycle that has provided freshwater to environments for centuries—will also be jeopardized. This could result in an overabundance of salt water and a shortage of fresh water. 47.4 Preserving Biodiversity Learning Objectives By the end of this section, you will be able to: Identify new technologies for describing biodiversity Explain the legislative framework for conservation Describe principles and challenges of conservation preserve design Identify examples of the effects of habitat restoration Discuss the role of zoos in biodiversity conservation Preserving biodiversity is an extraordinary challenge that must be met by greater understanding of biodiversity itself, changes in human behavior and beliefs, and various preservation strategies. Measuring Biodiversity The technologies of molecular genetics and of data processing and storage are maturing to the point where cataloguing the planet’s species in an accessible way is close to feasible. DNA barcoding is one molecular genetic method; it takes advantage of rapid evolution in a mitochondrial gene present in eukaryotes (excepting the plants) to identify species using the sequence of portions of the gene. Plants may be barcoded using a combination of chloroplast genes. Rapid mass sequencing machines make the molecular genetics portion of the work relatively inexpensive and quick. Computer resources store and make available the large volumes of data. Projects are currently underway to use DNA barcoding to catalog museum specimens, which have already been named and studied, as well as to test the method on less studied groups. As of mid-2012, close to 150,000 named species had been barcoded. Early studies suggest there are significant numbers of undescribed species that looked too much like sibling species to previously be recognized as different. These now can be identified with DNA barcoding. Numerous computer databases now provide information about named species and a framework for adding new species. However, as already noted, at the present rate of description of new species, it will take close to 500 years before the complete catalog of life is known. Many, perhaps most, species on the planet do not have that much time. There is also the problem of understanding which species known to science are threatened and to what degree they are threatened. This task is carried out by the non-profit IUCN, which, as previously mentioned, maintains the Red List—an online listing of endangered species categorized by taxonomy, type of threat, and other criteria ( Figure 47.16 ). The Red List is supported by scientific research. In 2011, the list contained 61,000 species, all with supporting documentation. Visual Connection Which of the following statements is not supported by this graph? There are more vulnerable fishes than critically endangered and endangered fishes combined.
There are more critically endangered amphibians than vulnerable, endangered, and critically endangered reptiles combined. Within each group, there are more critically endangered species than vulnerable species. A greater percentage of bird species are critically endangered than mollusk species. Changing Human Behavior Legislation throughout the world has been enacted to protect species. The legislation includes international treaties as well as national and state laws. The Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) treaty came into force in 1975. The treaty, and the national legislation that supports it, provides a legal framework for preventing approximately 33,000 listed species from being transported across nations’ borders, thus protecting them from being caught or killed when international trade is involved. The treaty is limited in its reach because it only deals with the international movement of organisms or their parts. It is also limited by various countries’ ability or willingness to enforce the treaty and supporting legislation. The illegal trade in organisms and their parts is probably a market in the hundreds of millions of dollars. Illegal wildlife trade is monitored by another non-profit: Trade Records Analysis of Flora and Fauna in Commerce (TRAFFIC). Within many countries there are laws that protect endangered species and regulate hunting and fishing. In the United States, the Endangered Species Act (ESA) was enacted in 1973. Species at risk are listed under the Act; the U.S. Fish & Wildlife Service is required by law to develop management plans that protect the listed species and bring them back to sustainable numbers. The Act, and others like it in other countries, is a useful tool, but it suffers because it is often difficult to get a species listed, or to get an effective management plan in place once a species is listed. Additionally, species may be controversially taken off the list without necessarily having had a change in their situation. More fundamentally, the approach of protecting individual species rather than entire ecosystems is inefficient, and it focuses efforts on a few highly visible and often charismatic species, perhaps at the expense of other species that go unprotected. At the same time, the Act has a critical habitat provision outlined in the recovery mechanism that may benefit species other than the one targeted for management. The Migratory Bird Treaty Act (MBTA) is an agreement between the United States and Canada that was signed into law in 1918 in response to declines in North American bird species caused by hunting. The Act now lists over 800 protected species. It makes it illegal to disturb or kill the protected species or distribute their parts (much of the hunting of birds in the past was for their feathers). The international response to global warming has been mixed. The Kyoto Protocol, an international agreement that came out of the United Nations Framework Convention on Climate Change and that committed countries to reducing greenhouse gas emissions by 2012, was ratified by some countries but spurned by others. Two countries that were important in terms of their potential impact and that did not take on binding reductions were the United States, which never ratified the protocol, and China, which as a developing country was exempt from binding targets. The United States rejected the protocol largely as a result of pressure from a powerful fossil fuel industry, and China resisted binding commitments out of concern that they would stifle the nation’s growth.
Some goals for the reduction of greenhouse gases were met and exceeded by individual countries, but worldwide, the effort to limit greenhouse gas production is not succeeding. The intended replacement for the Kyoto Protocol has not materialized because governments cannot agree on timelines and benchmarks. Meanwhile, climate scientists predict the resulting costs to human societies and biodiversity will be high. As already mentioned, the private non-profit sector plays a large role in the conservation effort both in North America and around the world. The approaches range from species-specific organizations to the broadly focused IUCN and TRAFFIC. The Nature Conservancy takes a novel approach: it purchases land and protects it in an attempt to set up preserves for ecosystems. Ultimately, human behavior will change when human values change. At present, the growing urbanization of the human population is a force that poses challenges to the valuing of biodiversity. Conservation in Preserves Establishment of wildlife and ecosystem preserves is one of the key tools in conservation efforts. A preserve is an area of land set aside with varying degrees of protection for the organisms that exist within the boundaries of the preserve. Preserves can be effective in the short term for protecting both species and ecosystems, but they face challenges that scientists are still exploring to strengthen their viability as long-term solutions. How Much Area to Preserve? Due to the way protected lands are allocated (they tend to contain less economically valuable resources rather than being set aside specifically for the species or ecosystems at risk) and the way biodiversity is distributed, determining a target percentage of land or marine habitat that should be protected to maintain biodiversity levels is challenging. The IUCN World Parks Congress estimated that 11.5 percent of Earth’s land surface was covered by preserves of various kinds in 2003. This area is greater than previous goals; however, it represents only 9 out of 14 recognized major biomes. Research has shown that 12 percent of all species live only outside preserves; these percentages are much higher when only threatened species and high-quality preserves are considered. For example, high-quality preserves include only about 50 percent of threatened amphibian species. The conclusion must be that the percentage of area protected must increase, the percentage of high-quality preserves must increase, or preserves must be targeted with greater attention to biodiversity protection. Researchers argue that more attention to the latter solution is required. Preserve Design There has been extensive research into optimal preserve designs for maintaining biodiversity. The fundamental principle behind much of the research has been the seminal theoretical work of Robert H. MacArthur and Edward O. Wilson published in 1967 on island biogeography. 5 This work sought to understand the factors affecting biodiversity on islands. The fundamental conclusion was that biodiversity on an island was a function of the origin of species through migration, speciation, and extinction on that island. Islands farther from a mainland are harder to get to, so migration is lower and the equilibrium number of species is lower. Within island populations, evidence suggests that the number of species gradually increases to a level similar to the numbers on the mainland from which the species is suspected to have migrated.
In addition, smaller islands are harder to find, so their immigration rates for new species are lower. Smaller islands are also less geographically diverse, so there are fewer niches to promote speciation. And finally, smaller islands support smaller populations, so the probability of extinction is higher. 5 Robert H. MacArthur and Edward O. Wilson, The Theory of Island Biogeography (Princeton, N.J.: Princeton University Press, 1967). As islands get larger, the number of species increases, although the effect of island area on species numbers is not a direct correlation. Conservation preserves can be seen as “islands” of habitat within “an ocean” of non-habitat. For a species to persist in a preserve, the preserve must be large enough. The critical size depends, in part, on the home range that is characteristic of the species. A preserve for wolves, which range hundreds of kilometers, must be much larger than a preserve for butterflies, which might range within ten kilometers during their lifetimes. But larger preserves have more core area of optimal habitat for individual species, they have more niches to support more species, and they attract more species because they can be found and reached more easily. Preserves perform better when there are buffer zones around them of suboptimal habitat. The buffer allows organisms to exit the boundaries of the preserve without immediate negative consequences from predation or lack of resources. One large preserve is better than the same area of several smaller preserves because there is more core habitat unaffected by edges. For this same reason, preserves in the shape of a square or circle will be better than a preserve with many thin “arms.” If preserves must be smaller, then providing wildlife corridors between them so that individuals and their genes can move between the preserves, for example along rivers and streams, will make the smaller preserves behave more like a large one. All of these factors are taken into consideration when planning the nature of a preserve before the land is set aside. In addition to the physical, biological, and ecological specifications of a preserve, there are a variety of policy, legislative, and enforcement specifications related to uses of the preserve for functions other than protection of species. These can include timber extraction, mineral extraction, regulated hunting, human habitation, and nondestructive human recreation. Many of these policy decisions are made based on political pressures rather than conservation considerations. In some cases, wildlife protection policies have been so strict that subsistence-living indigenous populations have been forced from ancestral lands that fell within a preserve. In other cases, even if a preserve is designed to protect wildlife, if the protections are not or cannot be enforced, the preserve status will have little meaning in the face of illegal poaching and timber extraction. This is a widespread problem with preserves in areas of the tropics. Limitations on Preserves Some of the limitations on preserves as conservation tools are evident from the discussion of preserve design. Political and economic pressures typically make preserves smaller, never larger, so setting aside areas that are large enough is difficult. Even if the area set aside is sufficiently large, there may not be sufficient additional area to create a buffer around the preserve.
In this case, an area on the outer edges of the preserve inevitably becomes riskier, suboptimal habitat for the species in the preserve. Enforcement of protections is also a significant issue in countries without the resources or political will to prevent poaching and illegal resource extraction. Climate change will create inevitable problems with the location of preserves. The species within them will migrate to higher latitudes as the habitat of the preserve becomes less favorable. Scientists are planning for the effects of global warming on future preserves and striving to predict the need for new preserves to accommodate anticipated changes to habitats; however, the ultimate effectiveness of this planning is uncertain, since these efforts are based on predictions. Finally, an argument can be made that conservation preserves reinforce the cultural perception that humans are separate from nature, can exist outside of it, and can only operate in ways that do damage to biodiversity. Creating preserves reduces the pressure on human activities outside the preserves to be sustainable and non-damaging to biodiversity. Ultimately, the political, economic, and human demographic pressures will degrade and reduce the size of conservation preserves if the activities outside them are not altered to be less damaging to biodiversity. Link to Learning An interactive global data system of protected areas can be found at website. Review data about individual protected areas by location or study statistics on protected areas by country or region. Habitat Restoration Habitat restoration holds considerable promise as a mechanism for restoring and maintaining biodiversity. Of course, once a species has become extinct, its restoration is impossible. However, restoration can improve the biodiversity of degraded ecosystems. Reintroducing wolves, a top predator, to Yellowstone National Park in 1995 led to dramatic changes in the ecosystem that increased biodiversity. The wolves ( Figure 47.17 ) function to suppress elk and coyote populations and provide more abundant resources to the guild of carrion eaters. Reducing elk populations has allowed revegetation of riparian areas, which has increased the diversity of species in that habitat. Decreasing the coyote population has increased the populations of species that were previously suppressed by this predator. The number of species of carrion eaters has increased because of the predatory activities of the wolves. In this habitat, the wolf is a keystone species, meaning a species that is instrumental in maintaining diversity in an ecosystem. Removing a keystone species from an ecological community may cause a collapse in diversity. The results from the Yellowstone experiment suggest that restoring a keystone species can have the effect of restoring biodiversity in the community. Ecologists have argued for the identification of keystone species where possible and for focusing protection efforts on those species; likewise, it also makes sense to attempt to return them to their ecosystem if they have been removed. Other large-scale restoration experiments underway involve dam removal. In the United States, since the mid-1980s, many aging dams have been considered for removal rather than replacement because of shifting beliefs about the ecological value of free-flowing rivers and because many dams no longer provide the benefits and functions that they did when they were first built.
The measured benefits of dam removal include restoration of naturally fluctuating water levels (the purpose of dams is frequently to reduce variation in river flows), which leads to increased fish diversity and improved water quality. In the Pacific Northwest, dam removal projects are expected to increase populations of salmon, which is considered a keystone species because it transports key nutrients to inland ecosystems during its annual spawning migrations. In other regions, such as the Atlantic coast, dam removal has allowed the return of spawning anadromous fish species (species that are born in fresh water, live most of their lives in salt water, and return to fresh water to spawn). Some of the largest dam removal projects have yet to occur or have happened too recently for the consequences to be measured. The large-scale ecological experiments that these removal projects constitute will provide valuable data for other dam projects slated either for removal or construction. The Role of Captive Breeding Zoos have sought to play a role in conservation efforts through both captive breeding programs and education. The transformation of the missions of zoos from collection and exhibition facilities to organizations that are dedicated to conservation is ongoing. In general, it has been recognized that, except in some specific targeted cases, captive breeding programs for endangered species are inefficient and often prone to failure when the species are reintroduced to the wild. Zoo facilities are far too limited to contemplate captive breeding programs for the numbers of species that are now at risk. Education is another potential positive impact of zoos on conservation efforts, particularly given the global trend toward urbanization and the consequent reduction in contacts between people and wildlife. A number of studies have been performed to look at the effectiveness of zoos in changing people’s attitudes and actions regarding conservation; at present, the results tend to be mixed.
principles_of_accounting,_volume_2:_managerial_accounting
Summary 12.1 Explain the Importance of Performance Measurement Well-designed performance measurement systems help businesses achieve goal congruence between the company and the employees. Managers should be evaluated only on factors over which they have control. Performance measures can be based on financial measures and/or nonfinancial measures. Performance measurement systems should help the company meet its strategic goals while helping the employee meet his or her professional goals. 12.2 Identify the Characteristics of an Effective Performance Measure A good performance measurement system uses measures over which a manager has control, provides timely and consistent feedback, compares the measures to standards of some form, has both short- and long-term measures, and puts the goals of the business and the individual on an equal level. 12.3 Evaluate an Operating Segment or a Project Using Return on Investment, Residual Income, and Economic Value Added Three common performance measures based on financial numbers are return on investment, residual income, and economic value added. Return on investment measures how effectively a company generates income using its assets. ROI can be broken into two separate measures: sales margin and asset turnover. Residual income measures whether or not a project or a division is exceeding a minimum return that has been determined by management. Economic value added is used to measure how well a project or division is contributing to shareholder wealth. A big challenge with ROI, RI, and EVA is determining which values of income and assets to use in calculating these measures. 12.4 Describe the Balanced Scorecard and Explain How It Is Used Balanced scorecards use both financial and nonfinancial measures to evaluate employees. The four categories of a balanced scorecard are financial perspective, internal business perspective, customer perspective, and learning and growth perspective. Financial perspective measures are usually traditional measures, based on financial statement information such as EPS or ROI. Internal business perspective measures are those that evaluate management’s operational goals, such as quality control or on-time production. Customer perspective measures are those that evaluate how the customer perceives the business and how the business interacts with customers. Learning and growth perspective measures are those that evaluate how effectively the company is growing by innovating and creating value. This is often done through employee training. Well-designed balanced scorecards can be very effective at achieving goal congruence through the utilization of both financial and nonfinancial measures.
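Because ROI, RI, and EVA are each defined by a short formula, a small numerical sketch may make the summary above more concrete. The Python fragment below is illustrative only: the division's dollar amounts, the 12 percent minimum return, and the 9 percent weighted average cost of capital (WACC) are all invented, and the EVA line simplifies by treating invested capital as equal to average operating assets.

```python
def roi(operating_income, avg_operating_assets):
    """Return on investment: income generated per dollar of invested assets."""
    return operating_income / avg_operating_assets

def sales_margin(operating_income, sales):
    """Operating income earned per dollar of sales."""
    return operating_income / sales

def asset_turnover(sales, avg_operating_assets):
    """Sales generated per dollar of invested assets."""
    return sales / avg_operating_assets

def residual_income(operating_income, avg_operating_assets, minimum_rate):
    """Income earned above management's minimum required return on assets."""
    return operating_income - minimum_rate * avg_operating_assets

def economic_value_added(after_tax_income, invested_capital, wacc):
    """After-tax income left over once the cost of capital is covered."""
    return after_tax_income - wacc * invested_capital

# Hypothetical division: $200,000 operating income on $1,000,000 of sales,
# using $1,250,000 of average operating assets.
income, sales, assets = 200_000, 1_000_000, 1_250_000

print(f"ROI            = {roi(income, assets):.1%}")            # 16.0%
print(f"Sales margin   = {sales_margin(income, sales):.1%}")    # 20.0%
print(f"Asset turnover = {asset_turnover(sales, assets):.2f}")  # 0.80
print(f"RI  (12% min)  = ${residual_income(income, assets, 0.12):,.0f}")        # $50,000
print(f"EVA (9% WACC)  = ${economic_value_added(150_000, assets, 0.09):,.0f}")  # $37,500
```

Note how the decomposition described in the summary holds in the example: sales margin times asset turnover (0.20 × 0.80) reproduces the 16 percent ROI, so a manager can see whether a change in ROI came from profitability per sale or from how hard the assets are working.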
Chapter Outline 12.1 Explain the Importance of Performance Measurement 12.2 Identify the Characteristics of an Effective Performance Measure 12.3 Evaluate an Operating Segment or a Project Using Return on Investment, Residual Income, and Economic Value Added 12.4 Describe the Balanced Scorecard and Explain How It Is Used Why It Matters A friend comes to you in a panic. His parents are coming to visit, and his apartment is a complete mess. Although he and his roommates frequently say they should clean, now the apartment has gotten so messy that they don’t even know where to begin. He knows your place always looks clean and orderly, so he is seeking your help. You offer to help your friend, and, in the process, come up with a business idea. To address the needs of students like your friend, you create a company—Passing Inspection Cleaning and Organizing—that will clean and/or organize dorm rooms and apartments. You set up a list of ten standard cleaning tasks that will be performed for a flat fee, and put together a list of à la carte services, such as laundry, closet organization, and refrigerator cleaning. Four students sign on with you as employees. Because it is important for your company to have a good reputation, you want to motivate your employees to perform their tasks to a high standard. You also want your employees to solicit additional business whenever possible by handing out flyers or business cards to nearby rooms and apartments when on a cleaning assignment. How can you motivate your employees to perform to your standards so that your company goals are met? Will an hourly wage be sufficient? Should you pay per task or per job? What motivation will they have to sell your company’s services to others? How will you know if they are performing the tasks in the manner you set forth? To answer these questions, you will need to be aware of the goals of your company, such as increasing the number of clients, as well as the goals of your employees, such as receiving raises, bonuses, or promotions. Armed with this knowledge, you will be able to design relevant evaluation measures and tie those measures to appropriate performance rewards so that both your goals and the goals of your employees are met. This same need—to measure the performance of a business and its employees so that each party’s goals are met—is an issue that affects all businesses, regardless of size or type.
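To connect the questions above to the balanced scorecard summarized in the previous section, here is one way the hypothetical cleaning company's goals could be laid out across the four perspectives. This is only a sketch; every measure and target below is invented for illustration.

```python
# A hypothetical balanced scorecard for Passing Inspection Cleaning and
# Organizing, organized by the four perspectives named in the summary.
balanced_scorecard = {
    "Financial":           {"monthly revenue growth": "5%",
                            "cost per standard cleaning job": "$35 or less"},
    "Customer":            {"average satisfaction survey score": "4.5 out of 5",
                            "repeat-booking rate": "60%"},
    "Internal business":   {"jobs passing the ten-task checklist": "100%",
                            "jobs completed on schedule": "95%"},
    "Learning and growth": {"employees trained on à la carte services": "4 of 4",
                            "flyers or cards handed out per job": "10"},
}

for perspective, measures in balanced_scorecard.items():
    print(perspective)
    for measure, target in measures.items():
        print(f"  {measure}: target {target}")
```

Tying employee rewards, such as a per-job bonus, to a mix of these measures rather than to hours worked alone is one way to pursue the goal congruence the summary describes.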
[ { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Using the previous revenue center example , the manager of the reservation department should be evaluated on how well his team generates revenues . The proper incentives will motivate the team to perform better at their jobs . <hl> Evaluating a manager on the outcome of decisions over which he or she has no control , or uncontrollable factors , will be demotivating and does not promote goal congruence between the organization and the manager . <hl> The reservations manager has no control over fuel costs , plane maintenance costs , or pilot salaries . Thus , it would not be logical to evaluate the manager on flight costs .", "hl_sentences": "Evaluating a manager on the outcome of decisions over which he or she has no control , or uncontrollable factors , will be demotivating and does not promote goal congruence between the organization and the manager .", "question": { "cloze_format": "Components of the organization that are demotivating for purposes of performance management are known as ________.", "normal_format": "What are components of the organization that are demotivating for purposes of performance management known as?", "question_choices": [ "business goals", "strategic plans", "uncontrollable factors", "incentives" ], "question_id": "fs-idm252235520", "question_text": "Components of the organization that are demotivating for purposes of performance management are known as ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "It is important to identify the characteristics that make a performance measure a good assessment of goal congruence . <hl> A good performance measurement system will align the goals of management with the goals of the corporation , and both parties will benefit . <hl> <hl> A lack of goal congruence in a performance measurement system can be detrimental to a business in many ways . <hl> <hl> Without proper performance measures , goal congruence is almost impossible to achieve and will likely lead to lost profits and dissatisfied employees , <hl> Managerial accountants therefore must design a framework of responsibility accounting in which the evaluation system is based on criteria for which a manager is responsible . The framework should be structured to encourage managers to make decisions that will meet the goals of the company as well as their own professional goals . In your study of managerial accounting , you have learned about company goals such as increasing market share , increasing revenues , decreasing costs , and decreasing defects . Managers and employees have their own goals . These goals can be work related such as promotions or awards , or they can be more personal such as receiving raises , receiving bonuses , the privilege of telecommuting , or shares of company stock . <hl> This aligning of goals between a corporation ’ s strategy and a manager ’ s personal goals is known as goal congruence . <hl> Managers should make the best decisions for the benefit of the corporation , and the best way to motivate a manager to make those decisions is to link a reward system to performance results . 
To accomplish this , a business establishes performance evaluation measures that align the decisions made by management with the goals of the corporation and the professional goals of the manager .", "hl_sentences": "A good performance measurement system will align the goals of management with the goals of the corporation , and both parties will benefit . A lack of goal congruence in a performance measurement system can be detrimental to a business in many ways . Without proper performance measures , goal congruence is almost impossible to achieve and will likely lead to lost profits and dissatisfied employees , This aligning of goals between a corporation ’ s strategy and a manager ’ s personal goals is known as goal congruence .", "question": { "cloze_format": "Goal congruence in well-designed performance measurement systems best explains a congruence between ________.", "normal_format": "Goal congruence in well-designed performance measurement systems best explains a congruence between which of the following?", "question_choices": [ "employees and the company", "strategic plans and the future", "decisions and outcomes", "feedback and measurement" ], "question_id": "fs-idm241806784", "question_text": "Goal congruence in well-designed performance measurement systems best explains a congruence between ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "there is a baseline against which to compare the measured results" }, "bloom": null, "hl_context": "<hl> Performance measures are only useful if there is a baseline against which to compare the measured results . <hl> For example , students often evaluate how well they performed on a test by comparing their grade to the average for the test . If a student scored 65 out of 100 on a test , the initial response may be that this is a less than stellar grade unless that score is compared to the average . Suppose the average on that particular test was a 50 . Obviously , in this example , the student performed above average on this test , but this could not be interpreted correctly until the score was compared to a baseline . In evaluating performance measures , a standard , baseline , or threshold is typically used as a basis against which to compare the actual results of the manager . What types of measures are used to evaluate management performance ? Historically , performance measurement systems have been based on accounting or other quantitative numbers . One reason for this is that most accounting-based measures are easy to use due to their availability , since many accounting measures can be found in or generated from a company ’ s financial statements . <hl> Although this type of information is readily available , it does not mean the use of accounting numbers as performance measures is the best or only way to measure performance . <hl> One issue is that some accounting numbers can be affected by the actions of managers , and this may result in distorted performance results . <hl> Performance measurement is used to motivate managers to make decisions that benefit the corporation and themselves . <hl> Therefore , the key to good performance measurement techniques is to set goals that are realistic and that incorporate decisions over which the manager has control . 
<hl> Then , the company can evaluate the manager based on controllable factors , which are the components of the organization for which the manager is responsible and that the manager can control , such as revenues , costs and procurement of long-term assets , and other possible factors . <hl> Recall that in Responsibility Accounting and Decentralization , you learned about responsibility centers , which are a means by which an organization can be divided based on factors that the manager can control . This makes it easier to align the goals of the manager with those of the organization and to design effective performance measures . The four types of responsibility centers are revenue centers , cost centers , profit centers , and investment centers .", "hl_sentences": "Performance measures are only useful if there is a baseline against which to compare the measured results . Although this type of information is readily available , it does not mean the use of accounting numbers as performance measures is the best or only way to measure performance . Performance measurement is used to motivate managers to make decisions that benefit the corporation and themselves . Then , the company can evaluate the manager based on controllable factors , which are the components of the organization for which the manager is responsible and that the manager can control , such as revenues , costs and procurement of long-term assets , and other possible factors .", "question": { "cloze_format": "Performance measures are only useful if ________.", "normal_format": "When are performance measures are only useful?", "question_choices": [ "there are both controllable and uncontrollable factors to evaluate managers", "manager reward systems are designed by the chief financial officer prior to implementation", "all of the measures used are accounting numbers", "there is a baseline against which to compare the measured results" ], "question_id": "fs-idm253865472", "question_text": "Performance measures are only useful if ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "based on activities over which managers have no control or influence" }, "bloom": null, "hl_context": "<hl> The measurements must not favor the manager over the goals of the entire organization . <hl> Often , managers have the ability to make decisions that favor their individual units but that may be detrimental to the overall performance of the organization . <hl> When appropriate , the actual results should be compared with the budgeted results , standards , or past performance . <hl> <hl> It should be consistent in its application . <hl> <hl> It should be timely . <hl> <hl> It should be measurable . <hl> <hl> It should be based on activities over which managers have control or influence . <hl> <hl> A good performance measurement system should have the following characteristics : <hl>", "hl_sentences": "The measurements must not favor the manager over the goals of the entire organization . When appropriate , the actual results should be compared with the budgeted results , standards , or past performance . It should be consistent in its application . It should be timely . It should be measurable . It should be based on activities over which managers have control or influence . 
A good performance measurement system should have the following characteristics :", "question": { "cloze_format": "___ is not a characteristic of a good performance measurement system.", "normal_format": "Which of the following is not a characteristic of a good performance measurement system?", "question_choices": [ "timely", "consistent", "based on activities over which managers have no control or influence", "uses both long- and short-term performances and standards" ], "question_id": "fs-idm252880544", "question_text": "Which of the following is not a characteristic of a good performance measurement system?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "It is important to identify the characteristics that make a performance measure a good assessment of goal congruence . <hl> A good performance measurement system will align the goals of management with the goals of the corporation , and both parties will benefit . <hl> A lack of goal congruence in a performance measurement system can be detrimental to a business in many ways . Without proper performance measures , goal congruence is almost impossible to achieve and will likely lead to lost profits and dissatisfied employees ,", "hl_sentences": "A good performance measurement system will align the goals of management with the goals of the corporation , and both parties will benefit .", "question": { "cloze_format": "A good performance measurement system will align the goals of management with ________.", "normal_format": "A good performance measurement system will align the goals of management with which of the following?", "question_choices": [ "the goals of the city manager and the mayoral staff", "the goals of the corporation, and both parties will benefit", "the priorities of the stockholders as listed at the annual meeting", "the investment department’s response to the annual audit" ], "question_id": "fs-idm241527536", "question_text": "A good performance measurement system will align the goals of management with ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "Make sure that the manager being evaluated is aware of the measurement change, as this may affect his or her decision-making." }, "bloom": null, "hl_context": "In addition to being timely , performance measures need to be applied or measured consistently . The accounting variables or other measures that are used to evaluate a manager should be measured the same way from period to period . For example , if a performance measure includes some form of income , such as operating income , then that measure should be used each time and not replaced with another income measure for the current measurement cycle ( usually one year ) . If , upon further analysis , it seems that net income is a better measure to use in the evaluation of a manager , then the new measure can be implemented during the next measurement cycle . <hl> When measures are changed , it is imperative that the manager being evaluated is aware of the measurement change , as this may affect his or her decision-making . <hl> <hl> The idea is to keep the targets stable for a period . <hl> <hl> Otherwise , the measurements might be inconsistent , and thus misleading . <hl> A good performance measurement plan would include the manager ’ s input in the design discussion . Not only does this help to ensure that the plan is clear to all parties involved in the process , it also helps to motivate managers . 
Rather than being told what goals are to be met , managers will be more motivated to achieve the goals if they have input into the process , the goals to be reached , and the measurements or metrics being used .", "hl_sentences": "When measures are changed , it is imperative that the manager being evaluated is aware of the measurement change , as this may affect his or her decision-making . The idea is to keep the targets stable for a period . Otherwise , the measurements might be inconsistent , and thus misleading .", "question": { "cloze_format": "An organization should ___ if performance measures change.", "normal_format": "What should an organization do if performance measures change?", "question_choices": [ "Make sure that the manager being evaluated is aware of the measurement change, as this may affect his or her decision-making.", "Make sure that the manager benefits without the corporation also benefitting.", "Make sure that there are significant overriding opportunities for each manager, if the manager is unaware of the change.", "Obtain customer surveys on the change before communicating the change to the manager." ], "question_id": "fs-idm239142432", "question_text": "What should an organization do if performance measures change?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "A company has both short - and long-term goals . Short-term goals include reducing costs of production by a certain percentage for the current year or increasing year-over-year sales by a certain percentage . Long-term goals may include expanding into new territories or adding new products . Employees also have short - and long-term goals . Short-term goals can include a beach vacation , and long-term goals can include saving for retirement or college . <hl> A good performance measurement system will include both short - and long-term measures in order to motivate managers to make decisions that will fulfill both the corporations and their own short - and long-term goals . <hl>", "hl_sentences": "A good performance measurement system will include both short - and long-term measures in order to motivate managers to make decisions that will fulfill both the corporations and their own short - and long-term goals .", "question": { "cloze_format": "A good performance measurement system will include ___.", "normal_format": "A good performance measurement system will include which of the following?", "question_choices": [ "short-term goals", "long-term goals", "short-term and long-term goals", "no goals at all" ], "question_id": "fs-idm237890416", "question_text": "A good performance measurement system will include which of the following?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "lost profits and dissatisfied employees" }, "bloom": null, "hl_context": "It is important to identify the characteristics that make a performance measure a good assessment of goal congruence . A good performance measurement system will align the goals of management with the goals of the corporation , and both parties will benefit . A lack of goal congruence in a performance measurement system can be detrimental to a business in many ways . 
<hl> Without proper performance measures , goal congruence is almost impossible to achieve and will likely lead to lost profits and dissatisfied employees , <hl>", "hl_sentences": "Without proper performance measures , goal congruence is almost impossible to achieve and will likely lead to lost profits and dissatisfied employees ,", "question": { "cloze_format": "Without proper performance measures, goal congruence is almost impossible to achieve and will likely lead to ________.", "normal_format": "Without proper performance measures, goal congruence is almost impossible to achieve and will likely lead to which of the following?", "question_choices": [ "more stable targets", "decreased defects", "lost profits and dissatisfied employees", "employees satisfied with the status quo" ], "question_id": "fs-idm247319696", "question_text": "Without proper performance measures, goal congruence is almost impossible to achieve and will likely lead to ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "the rate of return required by investors to incentivize them to invest in a company" }, "bloom": null, "hl_context": "What about the cost component for each ? A company raises capital ( money ) in three primary ways : borrowing ( debt ) , issuing stock ( equity ) , or earning it ( income ) . The cost of debt is the after-tax interest rate associated with borrowing money . <hl> The cost of equity is the rate associated with what the shareholders expect the corporation to earn in order for that shareholder to maintain ownership in the company . <hl> For example , shareholders of Apple stock may on average expect the company to earn a return of 10 % per year ; otherwise , they will sell their stock .", "hl_sentences": "The cost of equity is the rate associated with what the shareholders expect the corporation to earn in order for that shareholder to maintain ownership in the company .", "question": { "cloze_format": "The cost of equity is ________.", "normal_format": "What is the cost of equity?", "question_choices": [ "the interest associated with debt", "the rate of return required by investors to incentivize them to invest in a company", "the weighted average cost of capital", "equal to the amount of asset turnover" ], "question_id": "fs-idm213861264", "question_text": "The cost of equity is ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> One way to measure how effective a company is at using its invested profits to be profitable is by measuring its return on investment ( ROI ) , which shows the percentage of income generated by profits that were invested in capital assets . 
Multiple Choice

1. Which of the following measures the profitability of a division relative to the size of its investment in capital assets?
A. residual income (RI)
B. sales margin
C. return on investment (ROI)
D. economic value added (EVA)

2. Which of the following statements is false?
A. The four dimensions of performance that are considered in a balanced scorecard are financial, customer, internal process, and learning and growth.
B. A balanced scorecard will include qualitative and quantitative measures.
C. Stakeholders cannot include stockholders.
D. A balanced scorecard is the compatibility between personal goals and the goals of the organization.

3. The metrics based on nonfinancial information are known as ________.
A. quantitative factors
B. qualitative factors
C. stakeholders
D. stockholders

4. The metrics based on financial numbers produced by the accounting system are ________.
A. quantitative factors
B. qualitative factors
C. stakeholders
D. stockholders

5. People affected by decisions made by a company, including investors, creditors, employees, managers, regulators, customers, suppliers, and laypeople, are known as ________.
A. quantitative factors
B. qualitative factors
C. stakeholders
D. stockholders

6. The owners of company stock are ________.
A. quantitative factors
B. qualitative factors
C. stakeholders
D. stockholders

Answers: 1. C; 2. C; 3. B; 4. A; 5. C; 6. D
12
12.1 Explain the Importance of Performance Measurement As you learned in Responsibility Accounting and Decentralization , as a company grows, it will often decentralize to better control operations and therefore improve decision-making. Remember, a decentralized organization is one in which the decision-making is spread among various managers throughout the organization and does not solely rest with the chief executive officer (CEO). However, with this dispersion of decision-making comes an even greater need to monitor the results of the decisions made by the many managers at the various levels of the organization to ensure that the overall goals of the organization are still being met. Ethical Considerations Ethical Evaluation of Performance Measures To evaluate whether decisions made by management are both effective and ethical, performance is measured through responsibility accounting. This is a double-layer ethical analysis that requires some thought to establish and implement, as the evaluation system must also operate in an ethical fashion, just as the decision-making process itself does. In most organizations, the overall results of choices made by management, not just the resulting profit, need to be examined to determine whether or not the decisions are ethical. When an organization’s customers and other stakeholders are happy, and the corporate assets are in good condition, these are indicators that the customers, stakeholders, and assets are being treated ethically. Evaluation of customer and stakeholder satisfaction should come directly from the customer, such as through surveys or other direct questionnaires. Proper treatment of organizational assets can be determined by viewing the physical condition of such assets, or the loss rates and productivity of equipment. Customer satisfaction and positive results in the utilization of corporate assets typically indicate ethical decision-making and behavior, while negative results typically indicate the opposite. An organization with a satisfied group of stakeholders and customers, as well as assets that operate efficiently, is often more profitable in the long term. Managerial accountants therefore must design a framework of responsibility accounting in which the evaluation system is based on criteria for which a manager is responsible. The framework should be structured to encourage managers to make decisions that will meet the goals of the company as well as their own professional goals. In your study of managerial accounting, you have learned about company goals such as increasing market share, increasing revenues, decreasing costs, and decreasing defects. Managers and employees have their own goals. These goals can be work related such as promotions or awards, or they can be more personal such as receiving raises, receiving bonuses, the privilege of telecommuting, or shares of company stock. This aligning of goals between a corporation’s strategy and a manager’s personal goals is known as goal congruence . Managers should make the best decisions for the benefit of the corporation, and the best way to motivate a manager to make those decisions is to link a reward system to performance results. To accomplish this, a business establishes performance evaluation measures that align the decisions made by management with the goals of the corporation and the professional goals of the manager. 
Fundamentals of Performance Measurement

Performance measurement is used to motivate managers to make decisions that benefit the corporation and themselves. Therefore, the key to good performance measurement techniques is to set goals that are realistic and that incorporate decisions over which the manager has control. Then, the company can evaluate the manager based on controllable factors, which are the components of the organization for which the manager is responsible and that the manager can control, such as revenues, costs and procurement of long-term assets, and other possible factors. Recall that in Responsibility Accounting and Decentralization, you learned about responsibility centers, which are a means by which an organization can be divided based on factors that the manager can control. This makes it easier to align the goals of the manager with those of the organization and to design effective performance measures. The four types of responsibility centers are revenue centers, cost centers, profit centers, and investment centers.

In a revenue center, the manager has control over the revenues that are generated for the corporation but not over the costs of the organization. For example, the reservations department of an airline is a revenue center because the reservationists can control revenues by selling customers upgrades such as meals or first-class seating, by selling trip insurance, or by trying to keep customers from going to another airline. However, reservationists cannot control the costs of the flights the airline is offering and reserving because the reservations department cannot control the cost of the planes, airport space rental, or jet fuel. Therefore, the manager of the reservations department should have performance evaluation measures closely related to revenue generation.

In a cost center, the manager has control over costs but not over revenues. An example of a cost center would be the accounting department of a grocery store chain. The manager can control the types of people hired, the wages that are paid, and the hours that are worked within that department, and each of these costs contributes to the total cost of the department. However, the manager of the accounting department has no control over the generation of revenues.

In a profit center, the manager has control over both revenues and costs. An example would be a single location of Best Buy. The manager at that store has control over both revenues and costs; therefore, one component of evaluation for that manager will be store profits.

An investment center is a component of a business for which the manager has control over revenues, costs, and capital assets. This means the manager not only can make decisions regarding generating revenues and controlling costs but also has authority to make decisions regarding assets, such as buying new machines, expanding facilities, or selling old assets.

With each of these types of centers, designing the appropriate performance measures begins with evaluating management based on which business areas they oversee. Using the previous revenue center example, the manager of the reservations department should be evaluated on how well his team generates revenues. The proper incentives will motivate the team to perform better at their jobs. Evaluating a manager on the outcome of decisions over which he or she has no control, or uncontrollable factors, will be demotivating and does not promote goal congruence between the organization and the manager.
The reservations manager has no control over fuel costs, plane maintenance costs, or pilot salaries. Thus, it would not be logical to evaluate the manager on flight costs. A good performance measurement system uses appropriate performance measures, which are performance metrics used to evaluate a specific attribute of a manager's role, to evaluate management in a way that links the goals of the corporation with those of the manager. A metric is simply a means to measure something. For example, high school grade point average is a metric used by colleges when considering admission of prospective students, as it is considered a measure of prior academic success. In the business environment, individuals who design the performance measurement system must have extensive knowledge of the corporate strategic plan and the overall goals set by the organization, and a clear understanding of the job descriptions, responsibilities of each manager, and trends in rewards and compensation.

Think It Through
Motivating Dental Industry Employees

As a dentist and owner of your own practice, you are considering ways to both reward and motivate your staff. The obvious choice is to simply give each employee a raise. However, you have heard that many businesses are compensating their employees for meeting various goals that are beneficial to the business. What types of goals might the dental practice have? What are several ideas for ways to motivate the staff, which consists of a receptionist, dental assistants, and dental hygienists? What are possible rewards for meeting goals?

Advantages Derived from Performance Measurement

Every business has a strategic plan, or a broad vision of how it will be in the future. This plan leads to goals that must be achieved to fulfill that vision. As shown in Figure 12.2, a business will use the strategic plan to determine the goals needed to achieve the strategic vision. Once goals are determined, the business will decide on the appropriate actions necessary to meet the goals. Then, the business will implement, review, and adjust the goals as needed. Properly designed performance measures will help move the company toward meeting the goals of its strategic plan. Advantages of a good performance management system include increased employee retention and loyalty, better communication between the various levels of management, increased productivity, and increased efficiencies. In addition, a well-designed performance plan should lead to improved job satisfaction for the manager and increased personal wealth if the rewards are monetarily based. In summary, a company needs to first identify and create a strategy and then set the necessary goals, which will lead to actions, and finally to an applicable evaluation process.

Your Turn
Measuring Employee Performance

All companies need ways to measure the performance of employees. These measures should be designed in a way that the rewards for performance will motivate the employees to make decisions that are good for the business. Reflecting on the Why It Matters scenario, if this were your company, what are five goals you would have for your business? What are some measures you could use to see if you are meeting those goals? What types of incentives could you offer to motivate your employees to help meet these goals? Use Table 12.1 for your answers.
Motivating Employees toward Business Goals
Five Business Goals | Measures to Meet Goals | Incentives to Motivate Employees toward Goals
(five blank rows for your answers)
Table 12.1

Solution

Answers will vary. Sample answer:

Motivating Employees toward Business Goals
Five Business Goals | Measures to Meet Goals | Incentives to Motivate Employees toward Goals
Grow customer base | Number of new customers | Give a gift card to employees for each new customer they get
Increase company name recognition | Number of "likes" on Facebook, number of reviews on Google | Host a party or take employees to dinner after a certain number of likes or positive reviews occur
Grow revenue each quarter | Percent change in revenue from prior quarter | Have a bonus pool that is shared after a targeted percentage increase in revenue is reached
Lower cost of supplies used per job | Compare supplies used to a standard for each type of job | Provide a paid day off for suggestions that successfully reduce cost of supplies per job by 5% | 
Decrease time at each job/increase efficiency | Measure time on job using a call-in system of entering and leaving the job | Pay a flat additional amount each time the employee performs a job within the allotted time and customer satisfaction is 5/5
Table 12.1

Potential Limitations of Traditional Performance Measurement

What types of measures are used to evaluate management performance? Historically, performance measurement systems have been based on accounting or other quantitative numbers. One reason for this is that most accounting-based measures are easy to use due to their availability, since many accounting measures can be found in or generated from a company's financial statements. Although this type of information is readily available, it does not mean the use of accounting numbers as performance measures is the best or only way to measure performance.

One issue is that some accounting numbers can be affected by the actions of managers, and this may result in distorted performance results. For example, as shown in Figure 12.3, if a retail company uses a last-in, first-out (LIFO) inventory system and the manager of the retail store is evaluated based on either cost containment or profit, the manager can postpone a decision to purchase inventory at the end of the year until the beginning of the next fiscal year if prices of the inventory have risen. This decision will postpone the effect of that purchase and, in turn, the higher costs associated with that inventory, until the next accounting cycle. As you can see, in either scenario, the company ordered 500,000 units of inventory, but the timing of those orders, given the changing prices of the inventory, has a significant effect on income from operations. This scenario is an example of the possibility of an unintended conflict of interests between procurement and production decisions by an individual manager or department and the overall best interests of the company. A well-designed performance measurement system should eliminate these potential conflicts, as much as possible.
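The timing effect is easy to see in a short sketch. The Python illustration below is minimal and hypothetical; Figure 12.3's actual figures are not reproduced here, so every unit count and price in it is invented:

```python
# Hypothetical illustration of the LIFO purchase-timing effect described above.

def lifo_cogs(purchases, units_sold):
    """Cost of goods sold under LIFO: the most recent purchases are expensed first.
    `purchases` is a list of (units, unit_cost) pairs in chronological order."""
    cogs, remaining = 0, units_sold
    for units, cost in reversed(purchases):
        used = min(units, remaining)
        cogs += used * cost
        remaining -= used
        if remaining == 0:
            break
    return cogs

revenue = 500_000 * 12.00  # 500,000 units sold at an assumed $12 selling price

# Scenario A: manager buys the final 100,000 units in December at the new, higher cost.
buy_now = [(400_000, 8.00), (100_000, 10.00)]
# Scenario B: manager postpones the December purchase to January, so only the older
# $8 layer is expensed (assumes sufficient inventory carried at the old cost).
postpone = [(500_000, 8.00)]

for label, purchases in (("Buy in December", buy_now), ("Postpone to January", postpone)):
    income = revenue - lifo_cogs(purchases, 500_000)
    print(f"{label}: operating income = ${income:,.0f}")
```

Deferring the December purchase keeps the older $8 layer in cost of goods sold, so in this sketch the manager reports $200,000 more operating income now and pushes the higher cost into the next accounting cycle.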
Accounting numbers are often affected by economic conditions, but these economic effects are beyond the control of the manager. For example, if the parts used in a manufacturing process are ordered from another country, the manager cannot control the exchange rate that occurs between the two currencies, yet this can impact the cost of the components to the manager and thus affect the cost of the product the company is producing.

Some management decisions affect multiple periods, or the decision being made will have the greatest impact in a future period. For example, capital budgeting decisions affect not only the current but future periods as well. This may compel a manager to have a short-term focus, because increasing his immediate remuneration, or compensation, is often his goal. Many long-term decisions, such as capital budgeting decisions, maintenance on equipment, or advertising campaigns, may most significantly affect future accounting numbers and, in turn, the compensation of the manager in future periods. If a manager cannot see himself reaping the rewards of that decision in future years, the decision becomes less attractive. If a performance measurement system is not designed properly, it can lead to managers having a short-term focus or making decisions that have the greatest impact on their individual goals (such as reaching a bonus goal), even if these decisions are not in the best long-term interest of the corporation.

Last, a manager focused solely on accounting numbers may miss opportunities for future benefits because making the decision will have a negative impact on accounting measures in the current period. For example, spending money to build a potential customer database may decrease income in the current year. If the manager's performance is measured based on the profitability of his division, he may avoid spending the money to create the customer database. However, that database may result in a significant increase in profitability in future years if the potential customers become actual customers.

Is there a way to prevent these issues associated with using accounting measures as performance measures? The use of nonaccounting measures in conjunction with accounting-based measures can help mitigate the problems of using accounting-based measures alone. Therefore, most performance measurement systems today use a combination of accounting-based measures and non-accounting-based measures, short-term or long-term indicators, or quantitative and qualitative components. Let's first look at the use of accounting-based measures, and then we'll consider a methodology that also incorporates non-accounting-based measures.

Think It Through
Balancing Customer Needs with Company Needs

Noah Barnes just graduated from college and took a position as production supervisor for Morgensen Machines, which manufactures sewing machine and vacuum cleaner parts. On his first day at work, one of Morgensen's sales managers asked Noah if it would be OK to rearrange his manufacturing job schedule so that a special order from a new customer could be pushed to the front of the line. This new customer requires fast turnarounds; unfortunately, this also means running the production equipment for all three shifts at maximum output for at least one week, possibly more. This would completely disrupt the schedule that management told Noah to implement. Noah does not want to make the sales manager angry at him, but he also does not want to lose his job in the first month out of college. He knows that the manager is focused on landing this new customer, who could reward the company with a needed increase in overall sales and plant output.
The problems, as Noah sees them, are that (1) current jobs will be delayed; (2) there will be greater demand on the machines during all three shifts, increasing the possibility that they will fail; (3) there will not be time for needed maintenance; and (4) eventually all of these factors will snowball into significant delays for the new customer, as well as extensive delays for the previously scheduled orders. How should Noah handle this problem? What managerial principles would you advise him to use from his college studies to help him develop better policies for future events like this?

12.2 Identify the Characteristics of an Effective Performance Measure

It is important to identify the characteristics that make a performance measure a good assessment of goal congruence. A good performance measurement system will align the goals of management with the goals of the corporation, and both parties will benefit. A lack of goal congruence in a performance measurement system can be detrimental to a business in many ways. Without proper performance measures, goal congruence is almost impossible to achieve, and its absence will likely lead to lost profits and dissatisfied employees. A good performance measurement system should have the following characteristics:

- It should be based on activities over which managers have control or influence.
- It should be measurable.
- It should be timely.
- It should be consistent in its application.
- When appropriate, the actual results should be compared with the budgeted results, standards, or past performance.
- The measurements must not favor the manager over the goals of the entire organization. Often, managers have the ability to make decisions that favor their individual units but that may be detrimental to the overall performance of the organization.

As you've learned, it is important that the activities on which managers are evaluated are within that manager's control. In addition, it is very important for the information that is used in the performance measurement system to be gathered, evaluated, and presented in a timely manner. Performance measurement systems provide an indication of how well the evaluated managers are doing their jobs. Remember, the organization wants managers to make decisions that are in the best interest of the organization as a whole, and hence the need for the performance management system. If managers do not receive appropriate feedback in a timely manner, they will not know which decisions they should continue to make in the same manner and which are less effective. The same is true from the corporation's perspective. Timely information allows the evaluation team to determine the effects of individual management decisions on the corporation as a whole.

In addition to being timely, performance measures need to be applied or measured consistently. The accounting variables or other measures that are used to evaluate a manager should be measured the same way from period to period. For example, if a performance measure includes some form of income, such as operating income, then that measure should be used each time and not replaced with another income measure for the current measurement cycle (usually one year). If, upon further analysis, it seems that net income is a better measure to use in the evaluation of a manager, then the new measure can be implemented during the next measurement cycle. When measures are changed, it is imperative that the manager being evaluated is aware of the measurement change, as this may affect his or her decision-making.
The idea is to keep the targets stable for a period. Otherwise, the measurements might be inconsistent, and thus misleading. A good performance measurement plan would include the manager's input in the design discussion. Not only does this help to ensure that the plan is clear to all parties involved in the process, it also helps to motivate managers. Rather than being told what goals are to be met, managers will be more motivated to achieve the goals if they have input into the process, the goals to be reached, and the measurements or metrics being used.

Performance measures are only useful if there is a baseline against which to compare the measured results. For example, students often evaluate how well they performed on a test by comparing their grade to the average for the test. If a student scored 65 out of 100 on a test, the initial response may be that this is a less than stellar grade unless that score is compared to the average. Suppose the average on that particular test was a 50. Obviously, in this example, the student performed above average on this test, but this could not be interpreted correctly until the score was compared to a baseline. In evaluating performance measures, a standard, baseline, or threshold is typically used as a basis against which to compare the actual results of the manager.

A company has both short- and long-term goals. Short-term goals include reducing costs of production by a certain percentage for the current year or increasing year-over-year sales by a certain percentage. Long-term goals may include expanding into new territories or adding new products. Employees also have short- and long-term goals. Short-term goals can include a beach vacation, and long-term goals can include saving for retirement or college. A good performance measurement system will include both short- and long-term measures in order to motivate managers to make decisions that will fulfill both the corporation's and their own short- and long-term goals.

You've learned about the human factor that causes managers to make what is typically the best decision for themselves rather than the best decision for the overall good of the corporation, especially if the decision that benefits the corporation is not beneficial to the manager. Again, this means the performance measurement system must attempt to prevent the manager from benefitting without the corporation also benefitting. This is one of the trickiest parts of performance measurement system design. For example, suppose the manager of the used car department at an automobile dealership is responsible for the profit he makes selling used cars that were taken as trade-ins on new car sales. Some of these used cars need a few repairs to prepare them for sale. The manager has the option of getting the cars fixed using the service department at the dealership or outsourcing the repairs to another company. If the manager can get the repairs completed at a lower cost at another repair shop, and if he is evaluated and receives a bonus based on his profit, then he is likely to use the outside repair shop. Is this a good thing to do? Obviously, it is good for the manager of the used car department, who will have fewer costs getting the used car ready to sell and therefore will make more of a profit from the sale of that car. Higher profits for the used car department mean a higher bonus for the manager. But what about for the dealership? Was outsourcing the repairs the right decision?
It depends on several factors, but here are points to ponder. What if the dealership's service department is more expensive because it provides higher-quality parts and the mechanics are certified? Does the reputation of the quality of the used cars sold by the dealership affect more than just the used car department? What if the service department could have completed the work at cost? As you can tell by these questions, without further information, we do not know whether or not the used car manager should outsource the repairs. But we do know that his decision was based on his bonus being tied to his profitability and not linked to other factors such as dealership profitability or dealership reputation (customer satisfaction). Therefore, it is important that the performance management system not promote decisions that only benefit the manager to the detriment of the corporation.

Concepts In Practice
Performance Measures at NASA 1

In the mid-1980s, the National Aeronautics and Space Administration (NASA), along with five NASA contractors, undertook a project to derive performance measures. As a result, they developed a series of five models for measures. These measures included effectiveness, quantity, quality, value, and change, and are as follows:

- Effectiveness was measured as actual/projected. An example was number of tests completed/number of tests planned.
- Quantity was measured as process or product unit/sources of cost. An example was total number of wind tunnel tests run/facilities management cost.
- Quality was measured as indicators of error or loss/process or product unit. An example of quality measures is mistakes in work packages issued/work packages issued in total.
- Value was measured as desirability/source of cost. An example of value measures is savings from suggestion program/man-hours to review suggestions.
- Change was measured as the information provided by the indexes that are developed by tracking the same performance measures over time. Examples would be improvement measures such as "reduction by X percent in downtime of facilities/tests accomplished or attempted" or "increase by X percent of documents prepared/procurement clerk."

These measures have some distinct advantages but also may be met with some resistance from employees and contractors. Advantages likely included a better understanding of processes as well as an understanding of the amount of time wasted and the value emanating from these processes. Development and implementation become an opportunity to discover what may be wrong with processes, to start a dialogue concerning ongoing change and improvement, and to communicate and brainstorm about organizational inefficiencies. The networking involved in developing the performance measures can become an equalizer among processes that breaks down silos and complexity. Resistance would likely come from the measurements being seen as too time consuming and the processes as too complex to be charted for these measurement objectives. How can upper management judge the complex progress on projects if they have little to no involvement? And if these measures were so important, why had NASA, an organization started around 1960, not already developed them? Resistance like this, in which the prior absence of measures becomes the primary argument against developing them, is common.

1 D. Kinlaw. "Developing Performance Measures with Aerospace Managers." National Productivity Review. December 1, 1986.
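The NASA models above are all simple ratios of tracked quantities. The brief Python sketch below shows how such indices might be computed and trended; every sample value in it is invented for illustration:

```python
# Sketch of NASA-style ratio measures; all sample values here are invented.
effectiveness = 47 / 50       # tests completed / tests planned
quality       = 3 / 85        # work-package errors / packages issued
value         = 42_000 / 350  # dollars saved by suggestions / review man-hours

# "Change" tracks the same index over successive periods.
downtime_per_test = [4.0, 3.6, 3.1]  # hours of facility downtime per test
change = (downtime_per_test[-1] - downtime_per_test[0]) / downtime_per_test[0]

print(f"Effectiveness: {effectiveness:.0%}")            # 94% of planned tests done
print(f"Quality: {quality:.1%} error rate")             # 3.5% of packages had errors
print(f"Value: ${value:,.0f} saved per review hour")    # $120 per man-hour
print(f"Change: downtime per test moved {change:.0%}")  # about -22%, an improvement
```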
Link to Learning

General Electric is changing its performance measurement practices to more closely align with the goals of millennials. Read the Impraise blog on GE Performance Reviews for more details.

12.3 Evaluate an Operating Segment or a Project Using Return on Investment, Residual Income, and Economic Value Added

There are three performance measures commonly used when a manager has control over investments, such as the buying and selling of inventory and equipment: return on investment, residual income, and economic value added. These measures use financial accounting data to evaluate how well a manager is meeting certain goals.

Introduction to Return on Investment, Residual Income, and Economic Value Added as Evaluative Tools

One of the primary goals of a company is to be profitable. There are many ways a company can use profits. For example, companies can retain profits for future use, they can distribute them to shareholders in the form of dividends, or they can use the profits to pay off debts. However, none of these options actually contributes to the growth of the company. In order to stay profitable, a company must continuously evolve. A fourth option for the use of company profits is to reinvest the profits into the company in order to help it grow. For example, a company can buy new assets such as equipment, buildings, or patents; finance research and development; acquire other companies; or implement a vigorous advertising campaign. There are many options that will help the company to grow and to continue to be profitable. One way to measure how effective a company is at using its invested profits to be profitable is by measuring its return on investment (ROI), which shows the percentage of income generated by profits that were invested in capital assets. It is calculated using the following formula:

ROI = Income ÷ Average capital assets

Capital assets are those tangible and intangible assets that have lives longer than one year; they are also called fixed assets. ROI in its basic form is useful; however, there are really two components of ROI: sales margin and asset turnover. This is known as the DuPont Model. It originated in the 1920s when the DuPont company implemented it for internal measurement purposes. The DuPont model can be expressed using this formula:

ROI = Sales Margin × Asset Turnover

Sales margin indicates how much profit is generated by each dollar of sales and is computed as shown:

Sales Margin = Income ÷ Sales

Asset turnover indicates the number of sales dollars produced by every dollar invested in capital assets—in other words, how efficiently the company is using its capital assets to generate sales. It is computed as:

Asset Turnover = Sales ÷ Average capital assets

Using ROI represented as Sales Margin × Asset Turnover, we can get another formula for ROI. Substituting the formulas for each of these individual ratios, ROI can be expressed as:

ROI = (Income ÷ Sales) × (Sales ÷ Average capital assets)

To visualize this ROI formula in another way, we can deconstruct it into its components, as in Figure 12.4. When sales margin and asset turnover are multiplied by each other, the sales components of each measure cancel out, leaving ROI = Income ÷ Average capital assets. ROI captures the nuances of both elements. A good sales margin and a proper asset turnover are both needed for a successful operation. As an example, a jewelry store typically has a very low turnover but is profitable because of its high sales margin. A grocery store has a much lower sales margin but is successful because of high turnover. You can see it is important to understand each of these individual components of ROI.
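Because the three ratios are built from only three inputs, a small Python sketch can confirm that the DuPont decomposition and the direct calculation agree. The division figures below are hypothetical, not drawn from the Scrumptious Sweets example that follows:

```python
# Minimal sketch of the ROI formulas above; the division figures are hypothetical.

def sales_margin(income, sales):
    """Profit generated by each dollar of sales."""
    return income / sales

def asset_turnover(sales, avg_capital_assets):
    """Sales dollars produced by each dollar invested in capital assets."""
    return sales / avg_capital_assets

def roi(income, avg_capital_assets):
    """Income earned per dollar of average capital assets."""
    return income / avg_capital_assets

income, sales = 1_200_000, 10_000_000
avg_assets = 5_000_000  # (beginning capital assets + ending capital assets) / 2

margin = sales_margin(income, sales)          # 12%
turnover = asset_turnover(sales, avg_assets)  # 2.0 times
print(f"Sales margin:   {margin:.0%}")
print(f"Asset turnover: {turnover:.2f} times")
print(f"ROI (direct):   {roi(income, avg_assets):.0%}")
print(f"ROI (DuPont):   {margin * turnover:.0%}")  # sales cancels: same 24%
```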
Calculation and Interpretation of the Return on Investment

To put these concepts in context, consider a bakery called Scrumptious Sweets, Inc., that has three divisions and evaluates the managers of each of these divisions based on ROI. The following information is available for these divisions: This information can be used to find the sales margin, asset turnover, and ROI for each division: Alternatively, ROI could have been calculated by multiplying Sales Margin × Asset Turnover: ROI measures the return in a percentage form rather than in absolute dollars, which is helpful when comparing projects, divisions, or departments of different sizes.

How do we interpret the ROIs for Scrumptious Sweets? Suppose Scrumptious has set a target ROI for each division at 30% in order to share in the bonus pool. In this case, both the donut division and the bagel division would participate in the company bonus pool. What does the analysis regarding the brownie division show? By looking at the breakdown of ROI into its component parts of sales margin and asset turnover, it is apparent that the brownie division has a higher sales margin than the donut division, but it has a lower asset turnover than the other divisions, and this is affecting the brownie division's ROI. This would provide direction for management of the brownie division to investigate why their asset turnover is significantly lower than the other two divisions. Again, ROI is useful if there is a benchmark against which to compare, but it cannot be judged as a stand-alone measure without that comparison.

Managers want a high ROI, so they strive to increase it. Looking at its components, there are certain decisions managers can make to increase their ROI. For example, the sales margin component can be increased by increasing income, which can be done by either increasing sales revenue or decreasing expenses. Sales revenue can be increased by increasing sales price per unit without losing volume, or by maintaining current sales price but increasing the volume of sales. Asset turnover can be increased by increasing sales revenue or decreasing the amount of capital assets. Capital assets can be decreased by selling off assets such as equipment. For example, suppose the manager of the brownie division has been running a new advertising campaign and is estimating that his sales volume will increase by 5% over the next year due to this ad campaign. This increase in sales volume will lead to an increase in income of $140,000. What does this do to his ROI? Division income will increase from $1,300,000 to $1,440,000, and the division average assets will stay the same, at $4,835,000. This will lead to an ROI of 30% ($1,440,000/$4,835,000, rounded), which is the ROI that must be achieved to participate in the bonus pool.

Another factor to consider is the effect of depreciation on ROI. Assets are depreciated over time, and this will reduce the value of the capital assets. A reduction in the capital assets results in an increase in ROI. Looking at the bagel division, suppose the assets in that division depreciated $500,000 from the beginning of the year to the end of the year and that no capital assets were sold and none were purchased. Look at the effect on ROI: Notice that depreciation helped to improve the division's ROI even though management made no new decisions. Some companies will calculate ROI based on historical cost, while others keep the calculation based on depreciated assets with the idea that the manager is efficiently using the assets as they age.
However, if depreciated values are used in the calculation of ROI, then as older assets are replaced with new ones, the asset base rises again and the ROI will drop from the prior period.

One drawback to using ROI is the potential of decreased goal congruence. For example, assume that one of the goals of a corporation is to have ROI of at least 15% (the cost of capital) on all new projects. Suppose one of the divisions within this corporation currently has an ROI of 20%, and the manager is evaluating the production of a new product in his division. If analysis shows that the new project is predicted to have an ROI of 18%, would the manager move forward with the project? Top management would opt to accept the production of the new product. However, since the project would decrease the division's current ROI, the division manager may reject the project to avoid decreasing his overall performance and possibly his overall compensation. The division manager is making an intentional choice based on his division's ROI relative to corporate ROI.

In other situations, the use of ROI can unintentionally lead to improper decision-making. For example, look at the ROI for the following investment opportunities faced by a manager: In this example, though investment opportunity 1 has a higher ROI, it does not generate any significant income. Therefore, it is important to look at ROI among other factors in order to make an informed decision.

Calculation and Interpretation of Residual Income

Another performance measure is residual income (RI), which shows the amount of income a given division (or project) is expected to earn in excess of a firm's minimum return goal. Every company sets a minimum required rate of return on projects and investments, representing the minimum return, usually in percentage form, that a project or investment must produce in order for the company to be willing to undertake it. This return is used as a basis for evaluating investments so that the firm may meet its targets and goals, and it ensures that only profitable projects will be accepted. (You will learn the theory and mechanics behind establishing a minimum required rate of return in advanced accounting courses.)

Think about this concept in your own life. If you plan to invest in stocks, bonds, a work of art, precious stones, a graduate degree, or a business, you would want to know what your expected return would be before you made that investment. Most people shy away from investing time or money in things that do not provide a certain return, whether that return is money, happiness, or satisfaction. A company has to make similar decisions and decide where to spend its money, and it does not want to spend it in areas that will not return a minimum profit to the company and its shareholders. Companies will determine a minimum required rate of return as a basis against which to compare investment opportunities to aid in the decision of whether or not to accept a project. This minimum required rate of return is used to calculate residual income, which uses this formula:

RI = Income − (Invested Capital × Minimum Required Rate of Return)

Suppose the donut division of Scrumptious Sweets is considering acquiring new machinery to speed up the production of donuts and make the donuts more uniform in shape and size. The cost of the machine is $1,500,000, and it is expected to generate a profit of $250,000. Scrumptious has a corporate policy of a required minimum rate of return on projects of 18%. Based on residual income, should the donut division move forward on this project?
RI = $250,000 − ($1,500,000 × 0.18)
RI = −$20,000

A project will be accepted as long as the RI is a positive number, because that implies the project is earning more than the minimum required by the company. Therefore, the manager of the donut division would not accept this project based on RI alone. Note that RI is measured in absolute dollars. This makes it almost impossible to compare firms of different sizes or projects of different sizes to one another. Both ROI and RI are useful, but as shown, both tools have drawbacks. Therefore, many companies will use a combination of ROI and RI (as well as other measures) to evaluate performance.

Calculation and Interpretation of Economic Value Added

Economic value added (EVA) is similar to RI but is a measure of shareholder wealth that is being created by a project, segment, or division. Companies want to maximize shareholder wealth, and to do that, they have to generate enough income to cover their cost of debt and their cost of equity, but also to have income available to shareholders. Just as in residual income, the goal is a positive EVA. A positive EVA indicates management has effectively used its capital assets to increase the value of the firm and thus the wealth of shareholders. EVA is computed as shown:

EVA = After-Tax Income − (Invested Capital × Weighted Average Cost of Capital)

After-tax income is the income reduced by tax expenses. The weighted average cost of capital (WACC) is the cost that the company expects to pay on average to finance assets and growth using either debt or equity. WACC is based on the proportion of debt and equity held by a company and the costs of each of those. For example, if a company has a total of $1,000,000 in debt and equity, consisting of $400,000 in debt and $600,000 in stock, then the proportion of the company's capital structure that is debt is 40% ($400,000/$1,000,000), and the proportion that is equity is 60% ($600,000/$1,000,000). What about the cost component for each? A company raises capital (money) in three primary ways: borrowing (debt), issuing stock (equity), or earning it (income). The cost of debt is the after-tax interest rate associated with borrowing money. The cost of equity is the rate associated with what the shareholders expect the corporation to earn in order for that shareholder to maintain ownership in the company. For example, shareholders of Apple stock may on average expect the company to earn a return of 10% per year; otherwise, they will sell their stock. For some companies, the weighted average cost of capital and the required rate of return are the same, but often they will differ.

Suppose Scrumptious Sweets, for example, has both debt capital and equity capital. Table 12.2 lists the cost of each type of capital as well as what proportion of the capital is made up of each of the two types. Notice that debt makes up 45% of the capital of Scrumptious Sweets and that the cost of debt is 8%. Equity makes up the other 55% of the capital structure of Scrumptious, and the cost of equity is 9.8%. The weighted average cost of capital is the sum of the weighted cost of each type of capital. Thus, the weighted cost of debt is 0.08 × 0.45 = 0.036, or 3.6%, and the weighted cost of equity is 0.098 × 0.55 = 0.054, or 5.4%. This results in a weighted average cost of capital of 3.6% plus 5.4%, or 9%.

Scrumptious Sweets' Weighted Average Cost of Capital
Type of Capital | (A) Cost of Capital | (B) Proportion of Total Capital | (A × B) Weighted Cost
Debt | 8% | 45% | 3.6%
Equity | 9.8% | 55% | 5.4%
Weighted Average Cost of Capital | | | 9%
Table 12.2
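The weighted average in Table 12.2 is simply the sum of each capital component's cost multiplied by its proportion of total capital. Here is a minimal Python sketch of that computation (the function name is ours, not from any standard library):

```python
# Weighted average cost of capital, as computed in Table 12.2.

def wacc(components):
    """components: (cost_of_capital, proportion_of_total_capital) pairs."""
    total_proportion = sum(p for _, p in components)
    assert abs(total_proportion - 1.0) < 1e-9, "proportions must sum to 100%"
    return sum(cost * p for cost, p in components)

scrumptious = [
    (0.080, 0.45),  # debt: 8% cost on 45% of capital -> 3.6% weighted
    (0.098, 0.55),  # equity: 9.8% cost on 55% of capital -> 5.4% weighted
]
print(f"WACC = {wacc(scrumptious):.1%}")  # 9.0%
```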
Reconsidering the new machine the donut division wants to buy, and using EVA to evaluate the project decision, would the decision change? Remember, the cost of the machine is $1,500,000, and it is expected to generate a profit of $250,000. Assume the tax rate for Scrumptious is 40%. To calculate EVA for the project, we need the following:

After-Tax Income = $250,000 × (1 − 0.40) = $150,000
EVA = $150,000 − ($1,500,000 × 0.09)
EVA = $150,000 − $135,000 = $15,000

The positive EVA of $15,000 indicates that the project is generating income for the shareholders and should be accepted. As you can see, though RI and EVA look similar, they can lead to different decisions. This difference stems from two sources. First, RI is calculated based on management's choice for the required rate of return, which can be determined from many different variables, whereas the weighted average cost of capital is based on the actual cost of debt and the estimated cost of equity, weighted by the actual percentages of both components. Second, when used to evaluate unit managers, RI often is based on pretax income, whereas EVA is based on after-tax income to the company itself. EVA and RI do not always lead to different decisions, but it is important that managers understand the components of both measures to ensure they make the best decision for the company.
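To see in one place why the two measures can disagree about the same machine, here is a short Python sketch of the donut division's decision (the function names are ours):

```python
# RI and EVA for the donut division's proposed $1,500,000 machine.

def residual_income(income, invested_capital, min_required_rate):
    """RI uses pretax income and management's minimum required rate."""
    return income - invested_capital * min_required_rate

def economic_value_added(income, tax_rate, invested_capital, wacc):
    """EVA uses after-tax income and the weighted average cost of capital."""
    after_tax_income = income * (1 - tax_rate)
    return after_tax_income - invested_capital * wacc

machine_cost, expected_profit = 1_500_000, 250_000
ri = residual_income(expected_profit, machine_cost, 0.18)
eva = economic_value_added(expected_profit, 0.40, machine_cost, 0.09)
print(f"RI  = ${ri:+,.0f}")   # -20,000 -> reject on RI alone
print(f"EVA = ${eva:+,.0f}")  # +15,000 -> accept on EVA
```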
Considerations in Using the Three Evaluative Tools

One of the most challenging aspects of using ROI, RI, and EVA lies in the determination of the variables used to calculate these measures. Income and invested capital are factors in the ROI, RI, and EVA performance models, and each can be defined in several ways. Invested capital can be defined as fixed assets, productive assets, or operating assets. Fixed assets typically include only tangible long-term assets. Productive assets typically include inventory plus the fixed assets. Operating assets include productive assets plus intangible assets and current assets. One problem is determining which assets the manager can control with his or her decision-making authority. Each definition of invested capital will have a different impact on the performance measure, whether that measure is ROI, RI, or EVA.

Deciding how to define invested capital is further complicated when combined with the additional decision of whether to use net book value (depreciated value) or gross book value (nondepreciated value) of long-lived assets. Net book value is the historical cost of an asset minus any accumulated depreciation, whereas gross book value is merely the historical cost of the asset. Obviously, at the time of acquisition of an asset, these two numbers are the same, but over time, net book value will decrease for any given asset, while gross book value will stay the same for that asset. Using gross book value will result in a higher value for invested capital than using net book value. Remember, net book value will vary based on the depreciation method employed—straight line versus double declining balance, for example. Thus, gross book value removes the effect of choosing different depreciation methods. Despite this, most companies use net book value in the computation of ROI, since net book value aligns with their financial reporting of capital assets on the balance sheet at their net value. Assets can also be measured at fair value, also known as market value. This is the value at which the assets could be sold. Fair value is only used in special cases of computing ROI, such as in computing ROI for a real estate investment. The reason fair value is not typically used for ROI is that the fair or market value is rarely known or determinable with certainty and is often very subjective, whereas both gross and net book value are readily known and determinable.

The second major component of these performance measures involves which income measure to use. First and foremost, no matter how a company measures income, the most important point is that the income the company uses as a measure should be controllable income if the performance model is to be a motivator and if the company uses responsibility accounting. Income, sometimes referred to as earnings, can be measured in many ways, and there are often common acronyms given for some of these measures. Common ways to measure income are operating income (income before taxes); earnings before interest and taxes (EBIT); earnings before interest, taxes, depreciation, and amortization (EBITDA); net income (income after taxes); or return on funds employed (ROFE), which adds working capital to any of the other income measures. Companies must decide which income measure they want to use in their determination of these various performance metrics. They must consider how the metric is being used, who they are evaluating by that metric, and whether the income and capital asset chosen capture the decision-making authority of the individual or division whose performance is being evaluated.

Your Turn
SkyHigh Superball Decisions

The manager of the SkyHigh division of Superball Corp. is faced with a decision on whether or not to buy a new machine that will mix the ingredients used in the SkyHigh superball produced by the SkyHigh division. This ball bounces as high as a two-story building upon first bounce and is so popular that the SkyHigh division barely keeps up with demand. The manager is hoping the new machine will allow the balls to be produced more quickly and therefore increase the volume of production within the same time currently being used in production. The manager wants to evaluate the effect of the purchase of the machine on his compensation. He receives a base salary plus a bonus of 25% of his salary if he meets certain income goals. The information he has available for the analysis is shown here:

The manager is looking at several different measures to evaluate this decision. Answer the following questions:
1. What is the sales margin without the new machine?
2. What is the asset turnover without the new machine?
3. What is ROI without the new machine?
4. What is RI without the new machine?
5. What is EVA without the new machine?
6. What is the sales margin with the new machine?
7. What is the asset turnover with the new machine?
8. What is ROI with the new machine?
9. What is RI with the new machine?
10. What is EVA with the new machine?
11. Should the manager buy the new machine? Why or why not?
12. How would ROI be affected if the invested capital were measured at gross book value, and the gross book values of the beginning and end of the year assets with the new machine were $13,000,000 and $13,800,000, respectively?
Solution
1. Income/Sales: $7,000,000/$18,000,000 = 39%
2. Sales/Average Assets: $18,000,000/[($12,000,000 + $12,400,000)/2] = 1.48 times
3. Income/Average Assets: $7,000,000/[($12,000,000 + $12,400,000)/2] = 58%; or #1 × #2: 39% × 1.48 = 58%
4. Income − (Invested Capital × Minimum Required Rate of Return): $7,000,000 − ($12,200,000 × 0.15) = $5,170,000
5. After-Tax Income − (Invested Capital × Weighted Average Cost of Capital): [$7,000,000 × (1 − 0.30)] − ($12,200,000 × 0.09) = $3,802,000
6. Income/Sales: $8,000,000/$19,400,000 = 41%
7. Sales/Average Assets: $19,400,000/[($12,000,000 + $12,400,000)/2] = 1.59 times
8. Income/Average Assets: $8,000,000/[($12,000,000 + $12,400,000)/2] = 66%; or #6 × #7: 41% × 1.59 = 66%
9. Income − (Invested Capital × Minimum Required Rate of Return): $8,000,000 − ($12,200,000 × 0.15) = $6,170,000
10. After-Tax Income − (Invested Capital × Weighted Average Cost of Capital): [$8,000,000 × (1 − 0.30)] − ($12,200,000 × 0.09) = $4,502,000
11. The manager of the SkyHigh division of Superball Corp. should accept the project, as the project improves all of his performance measures.
12. Income/Average Assets: $8,000,000/[($13,000,000 + $13,800,000)/2] = 60%. This shows that the choice used as the measure of assets can affect the analysis.

12.4 Describe the Balanced Scorecard and Explain How It Is Used

The performance measures considered up to this point have relied only on financial accounting measures as the means to evaluate performance. Over time, the trend has been to incorporate both quantitative and qualitative measures and short- and long-term goals when evaluating the performance of managers as well as the company as a whole. One approach to evaluating both financial and nonfinancial measures is to use a balanced scorecard.

History and Function of the Balanced Scorecard

Suppose you work in retail and your compensation consists of an hourly wage plus a bonus based on your sales. You have excellent interpersonal skills, and customers appreciate your help and often seek you out when they come to the store. Some of your customers will return on a different day, even making an extra trip to the store to make sure you are the employee who helps them. Sometimes these customers buy items and other times they do not, but they always come back. Your compensation does not include any acknowledgment of your attention to customers and your ability to keep them returning to the store, but consider how much more you could earn if this were the case. However, in order for compensation to include nonfinancial, or qualitative, factors, the store would need to track nonfinancial information, in addition to the financial, or quantitative, information already tracked in the accounting system.

One way to track both qualitative and quantitative measures is to use a balanced scorecard. The idea for using a balanced scorecard to evaluate employees was first suggested by Art Schneiderman of Analog Devices in 1987 as a means to improve corporate performance by using metrics to measure improvements in areas in which Analog Devices was struggling, such as a high number of defects. Schneiderman went through different iterations of a balanced scorecard design over several years, but the final design chosen measured three different categories: financial, customer, and internal.
The financial category included measures such as return on assets and revenue growth, the customer category included measures such as customer satisfaction and on-time delivery, and the internal category included measures such as reduced defects and improved throughput time. Eventually, Robert Kaplan and David Norton, both Harvard University faculty, expanded upon Schneiderman's ideas to create the current concept of the balanced scorecard and four general categories for evaluation: financial perspective, customer perspective, internal perspective, and learning and growth. These categories are sometimes modified for particular industries. Therefore, a balanced scorecard evaluates employees on an assortment of quantitative factors, or metrics based on financial information, and qualitative factors, or those based on nonfinancial information, in several significant areas. The quantitative or financial measurements tend to emphasize past results, often based on financial statements, while the qualitative or nonfinancial measurements center on current results or activities, with the intent to evaluate activities that will influence future financial performance.

Ethical Considerations
Use of a Balanced Scorecard Leads to Ethical Decision-Making

Managers and employees generally strive to create and work in an ethical environment. In order to develop such an environment, employees need to be informed of the organization's ethical standards and values and have an understanding of the laws and regulations under which the organization operates. If employees do not know the standards by which they will be measured, they might not be aware if their behavior is ethical. A balanced scorecard allows employees to understand their organization's obligations, and to evaluate their own obligations in the workplace. To evaluate their ethical environment, organizations can hold meetings that use ethical analysis metrics. Kaplan and Norton, leaders in balanced scorecard use, explain the use of the balanced scorecard in the context of strategy review meetings: "Companies conduct strategy review meetings to discuss the indicators and initiatives from the unit's Balanced Scorecard and assess the progress of and barriers to strategy execution." 2 In such meetings, the metrics analyzed should include, but not be limited to, the availability of a hotline; employee participation in ethics training; satisfaction of customers, employees, and other stakeholders; employee turnover rate; regulation compliance; community involvement; environmental awareness; diversity; legal expenses; efficient asset usage; condition of assets; and social responsibility. 3 Metrics should be tailored to an organization's values and desired operational results. The use of a balanced scorecard helps lead to an ethical environment for employees and managers.

2 Alistair Craven. An Interview with Robert Kaplan & David Norton (Emerald Publishing, 2008). http://www.emeraldgrouppublishing.com/learning/management_thinking/interviews/kaplan_norton.htm
3 Paul Arveson. The Ethics Perspective (Balanced Scorecard Institute, Strategy Management Group, 2002). https://www.balancedscorecard.org/The-Ethics-Perspective

Four Components of a Balanced Scorecard

To create a balanced scorecard, a company will start with its strategic goals and organize them into key areas. The four key areas used by Kaplan and Norton were financial perspective, internal operations perspective, customer perspective, and learning and growth (Figure 12.5).
These areas were chosen by Kaplan and Norton because the success of a company is dependent on how it performs financially, which is directly related to the company’s internal operations, how the customer perceives and interacts with the company, and the direction in which the company is headed.

The use of the balanced scorecard allows the company to take a stakeholder perspective as compared to a stockholder perspective. Stockholders are the owners of the company stock and often are most concerned with the profitability of the company and thus focus primarily on financial results. Stakeholders are people who are affected by the decisions made by a company, such as investors, creditors, managers, regulators, employees, customers, suppliers, and even laypeople who are concerned about whether or not the company is a good world citizen. This is why social responsibility factors are sometimes included in balanced scorecards. To understand where these types of factors might fit in a balanced scorecard framework, let’s look at the four sections or categories of a balanced scorecard.

Financial Perspective

The financial performance section of a balanced scorecard retains the types of metrics that have historically been set by companies to evaluate performance. The particular metric used in the scorecard will vary depending on the type of company involved, who is being evaluated, and what is being measured. You’ve learned that ROI, RI, and EVA can be used to evaluate performance. There are other financial measures that can be used as well, for example, earnings per share (EPS), revenue growth, sales growth, inventory turnover, and many others. The type of financial measures used should capture the components of the decision-making tasks of the person being evaluated. Financial measures can be very broad and general, such as sales growth, or they can be more specific, such as seat revenue. Looking back at the Scrumptious Sweets example, financial measures could include baked goods revenue growth, drink revenue growth, and product cost containment.

Internal Business Perspective

A successful company should operate like a well-tuned machine. This requires that the company monitor its internal operations and evaluate them to ensure they are meeting the strategic goals of the corporation. There are many variables that could be used as internal business measures, including number of defects produced, machine downtime, transaction efficiency, and number of products completed per day per employee, or more refined measures, such as percent of time planes are on the ground, or ensuring air tanks are well stocked for a scuba diving business. For Scrumptious Sweets, internal measures could include time between production and sale of the baked goods or amount of waste.

Customer Perspective

All businesses have customers or clients—a business will cease to operate without them—thus, it is important for a company to measure how well it is doing with respect to customers. Examples of common variables that could be measured include customer satisfaction, number of repeat customers, number of new customers, number of new customers from customer referrals, and market share. Variables that are more specific to a particular business include factors such as being ranked first in the industry by customers and providing a safe diving environment for scuba diving. Customer measures for Scrumptious Sweets might include customer loyalty, customer satisfaction, and number of new customers.
Learning and Growth

The business environment is a very dynamic one and requires a company to constantly evolve in order to survive, let alone grow. To reach strategic targets such as increased market share, management must focus on ways to grow the company. The learning and growth measures are a means to assess how the employees and management are working together to grow the company and to help the employees grow within the company. Examples of measures in this category include the number of employee suggestions that are adopted, turnover rates, hours of employee training, scope of process improvements, and number of new products. Scrumptious Sweets may use learning and growth measures such as hours of customer service training and hours of workforce relationship training.

Combining the Four Components of a Balanced Scorecard

Balanced scorecards can be created for any type of business and can be used at any level of the organization. An effective and successful balanced scorecard will start with the strategic plan or goals of the organization. Those goals are then restated based on the level of the organization to which the balanced scorecard pertains. A balanced scorecard for an entire organization will be broader and more general in terms of goals and measures than a balanced scorecard designed for a division manager. Balanced scorecards can even be created at the individual employee level, either as an evaluation mechanism or as a means for the employee to set and monitor individual goals.

Once the strategic goals of the organization are stated for the appropriate level for which the balanced scorecard is being created, then the measures for each of the categories of the balanced scorecard should be defined, being sure to consider the areas over which the division or individual does or does not have control. In addition, the variables have to be obtainable and measurable. Last, the measures must be useful, meaning that what is actually being measured must be informative, and there must be a basis of comparison—either company standards or individual targets. Using both quantitative and nonquantitative performance measures, along with long- and short-term measurements, can be very beneficial, as they can serve to motivate an employee while providing a clear framework of how that employee fits into the company’s strategic plan.

As an example, let’s examine several balanced scorecards for Scrumptious Sweets. First, Figure 12.6 shows an overall organizational balanced scorecard, the broadest and most general balanced scorecard. Notice that this scorecard starts with the overall corporate mission. It then contains very broad goals and measures in each of the four categories: financial, customer, internal, and learning and growth. In this scorecard, there are three general goals for each of these four categories. For example, the goals related to customers are to improve customer satisfaction, improve customer loyalty, and increase market share. For each of the goals, there is a general measure that will be used to assess if the goal has been met. In this example, the goal to improve customer satisfaction will be assessed using customer satisfaction surveys. But remember, measures are only useful as a management tool if there is a target to work toward. In this case, the goal is to achieve an overall 95% customer satisfaction rating. Obviously, the goals on this scorecard and the associated measures seem almost vague due to their general nature.
However, these goals match with the overall corporate strategy and provide guidance for management at lower levels to begin dissecting these goals into more specific ones that pertain to their particular area or division. This allows them to create more detailed balanced scorecards that will help them meet the overall corporate goals laid out in the corporate scorecard.

Figure 12.7 shows how the corporate balanced scorecard previously presented could be further detailed for the manager of the brownie division. As you can see from the balanced scorecard for the brownie division, the same corporate mission is included, as are the same four categories; however, the divisional goals are more specific, as are the measures and the targets. For example, related to the overall corporate goal to increase customer satisfaction, the divisional goal is to meet customers’ unique needs. The division will assess how well it is accomplishing this goal by tracking the number of customer suggestions and customer special requests, such as when a customer requests a special flavor of brownie not normally produced by the brownie division. The target set by the management of the brownie division is to meet 95% of customer special requests and to track the number of customer suggestions that are implemented by the division. The idea is that if the division is meeting customer needs and requests, this will result in high customer satisfaction, which is an overriding corporate goal.

The success of the division will be based on each employee doing his or her best at his or her specific job. Therefore, it is useful to see how the balanced scorecard can be used at an individual employee level. Figure 12.8 shows a balanced scorecard for the brownie division’s employees who work in the front end or store portion of the division. In this balanced scorecard the same categories are used, but there is more detail about each of the business objectives, and each objective has more refined measures than the prior two scorecards. Again, in the customer category, one of the objectives of the storefront employees is to improve the customer experience. Notice that there are three initiatives listed to help drive this goal. The measures that would be used to evaluate the success of these initiatives, as well as their specific targets, are detailed. Again, the idea is that if the employees who work in the store portion of the brownie division make the customer experience great, this will translate into high scores on the customer satisfaction surveys and help the company meet its overriding goal to increase customer satisfaction. In order to ensure that this occurs, the specific goals and metrics are created.

As previously expressed, it is best if these objectives, measures, and targets are determined by a process that includes management and the employees. Without employee input, employees may feel resentful of targets over which they had no say. But the employees alone cannot set their own goals and targets, as there could be a tendency to set easy targets, or the employee may not be aware of how his or her efforts affect the division and overall corporation. Thus, a collaborative approach is best in creating balanced scorecards.

The three scorecards presented show that the process of creating appropriate and viable scorecards can be quite complicated and challenging. Determining the appropriate qualitative and quantitative measures can be a daunting process, but the results can be extremely beneficial.
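Since the scorecards just described are, structurally, collections of goals, measures, and targets grouped under the four perspectives, it can help to see that structure written out explicitly. The sketch below is one hypothetical way to represent a single entry from the brownie division scorecard; the class name, field names, and example strings are illustrative only and do not come from the chapter.

```python
# Hypothetical sketch of a balanced scorecard entry; all names are illustrative.
from dataclasses import dataclass

PERSPECTIVES = ("financial", "customer", "internal", "learning and growth")

@dataclass
class ScorecardEntry:
    perspective: str  # one of the four Kaplan-Norton perspectives above
    goal: str         # what the unit is trying to achieve
    measure: str      # how progress toward the goal is assessed
    target: str       # the standard or benchmark used for comparison

# Paraphrased from the brownie division example discussed above.
meet_unique_needs = ScorecardEntry(
    perspective="customer",
    goal="Meet customers' unique needs",
    measure="Customer special requests fulfilled",
    target="Fulfill 95% of special requests",
)

# A divisional scorecard is then simply a list of such entries, ideally
# with goals, measures, and targets under every one of the four perspectives.
brownie_division_scorecard = [meet_unique_needs]
```

Writing the scorecard down this way also makes the chapter's point about usefulness concrete: every entry must carry a measurable quantity and an explicit basis of comparison, or it cannot be evaluated.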
The scorecards can be useful tools at all levels of the organization if they are adequately thought out and if there is buy-in at all levels being evaluated by a scorecard. Next, we’ll consider how the use of the balanced scorecard and performance measures are not mutually exclusive and can work well together.

Continuing Application
Balanced Scorecard

Let’s revisit Gearhead Outfitters in the context of their operating results, internal processes, growth, and customer satisfaction. Recall that the company was founded as a single store in 1997 and grew to multiple locations, mainly in the southern United States. How did Gearhead get there? How did the company gather information to make expansion decisions? Now that Gearhead has expanded, should it keep all current locations open? Is the company meeting the desires of its customers? Questions such as these are addressed through performance measures detailed in a balanced scorecard.

Financial metrics such as return on investment and residual income give Gearhead information on whether or not dollars invested have translated into additional income, and if current income can support needed cash flow for current and future operations. While financial measures are important, they are only one aspect of evaluating the effectiveness of a company’s strategy. Value provided to customers should also be considered, as well as the success of internal processes, and whether or not the company adequately provides growth opportunities for employees. Sales from new products, employee turnover, and customer satisfaction surveys can also provide valuable data for measuring success. The idea of a balanced scorecard is to give a business both financial and nonfinancial information to use in its strategic decisions.

The Our Story page of Gearhead’s website reads: “Gearhead Outfitters exists to create a positive shopping experience for our guests. Gearhead is known for its relaxed environment, specialized inventory and customer service for those pursuing an active lifestyle. True to our local roots, we employ local residents of each city we operate in, support local organizations, and strive to build relationships within our communities.” 4

4 Gearhead Outfitters. “Our Story.” https://www.gearheadoutfitters.com/about-us/our-story/

Given how Gearhead describes itself, and the performance measures discussed previously, what other information might the company want to gather for its balanced scorecard?

Final Summary of Quantitative and Qualitative Performance Measurement Tools

As the business environment changes, one thing stays the same: businesses want to be successful, to be profitable, and to meet their strategic goals. With these changes in the business environment come more varied responsibilities placed on managers. These changes occur due to an increased use of technology along with ever-increasing globalization. It is very important that an organization can appropriately measure whether employees are meeting these various responsibilities and reward them accordingly. You’ve learned about some common performance measures such as ROI, RI, EVA, and the balanced scorecard. The more accurately and efficiently a company can monitor and measure its decision-making processes at all levels, the more quickly it can respond to change or problems, and the more likely the company will be able to meet its strategic goals. Most companies will use some combination of the quantitative and nonquantitative measures described.
ROI, RI, and EVA are typically used to evaluate specific projects, but ROI is sometimes used as a divisional measure. These measures are all quantitative measures. The balanced scorecard not only has quantitative measures but adds qualitative measures to address more of the goals of the organization. The combination of these different types of quantitative and qualitative measures—project-specific measures, employee-level measures, divisional measures, and corporate measures—enables an organization to more adequately assess how it is progressing toward meeting short- and long-term goals. Remember, the best performance measurement system will contain multiple measures and consist of both quantitative and qualitative factors, which allows for better assessment of managers and better results for the corporation. A short computational sketch of the quantitative measures follows the Think It Through feature below.

Think It Through
Nonfinancial Measurements of Success

For each of the following businesses, what are four nonfinancial measures that might be useful for helping management evaluate the success of its strategies?

Grocery store
Hospital
Auto manufacturer
Law office
Coffee shop
Movie theater
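To see the arithmetic behind those quantitative measures in one place, here is a minimal sketch that recomputes the SkyHigh division figures from the worked solution earlier in this chapter. The function names are hypothetical, the 30% tax rate and other inputs are taken from that solution, and average assets stand in for invested capital, as they do in the solution; none of this is a prescribed implementation.

```python
# Minimal sketch: recompute the SkyHigh division measures from the worked
# solution in this chapter. Function names are hypothetical, not from the text.

def roi(income, average_assets):
    # ROI = income / average assets (equivalently, profit margin x asset turnover)
    return income / average_assets

def residual_income(income, invested_capital, minimum_rate):
    # RI = income - (invested capital x minimum required rate of return)
    return income - invested_capital * minimum_rate

def economic_value_added(income, tax_rate, invested_capital, wacc):
    # EVA = after-tax income - (invested capital x weighted average cost of capital)
    return income * (1 - tax_rate) - invested_capital * wacc

average_assets = (12_000_000 + 12_400_000) / 2  # $12,200,000

for label, income in [("Without project", 7_000_000), ("With project", 8_000_000)]:
    print(label)
    print(f"  ROI: {roi(income, average_assets):.1%}")
    print(f"  RI:  ${residual_income(income, average_assets, 0.15):,.0f}")
    print(f"  EVA: ${economic_value_added(income, 0.30, average_assets, 0.09):,.0f}")
```

Run as written, the sketch reproduces the residual income and EVA figures exactly ($5,170,000 and $3,802,000 without the project; $6,170,000 and $4,502,000 with it). The ROI values print as 57.4% and 65.6%, which correspond to the 58% and 66% shown in the solution, where rounded margin and turnover figures were multiplied.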
u.s._history
Summary

27.1 The Origins of War: Europe, Asia, and the United States

America sought, at the end of the First World War, to create new international relationships that would make such wars impossible in the future. But as the Great Depression hit Europe, several new leaders rose to power under the new political ideologies of Fascism and Nazism. Mussolini in Italy and Hitler in Germany were both proponents of Fascism, using dictatorial rule to achieve national unity. Still, the United States remained focused on the economic challenges of its own Great Depression. Hence, there was little interest in getting involved in Europe’s problems or even the China-Japan conflict. It soon became clear, however, that Germany and Italy’s alliance was putting democratic countries at risk. Roosevelt first sought to support Great Britain and China by providing economic support without intervening directly. However, when Japan, an ally of Germany and Italy, attacked Pearl Harbor, catching the military base unaware and claiming thousands of lives, America’s feelings toward war shifted, and the country was quickly pulled into the global conflict.

27.2 The Home Front

The brunt of the war’s damage occurred far from United States soil, but Americans at home were still greatly affected by the war. Women struggled to care for children with scarce resources at their disposal and sometimes while working full time. Economically, the country surged forward, but strict rationing for the war effort meant that Americans still went without. New employment opportunities opened up for women and ethnic minorities, as White men enlisted or were drafted. These new opportunities were positive for those who benefited from them, but they also created new anxieties among White men about racial and gender equality. Race riots took place across the country, and Americans of Japanese ancestry were relocated to internment camps. Still, there was an overwhelming sense of patriotism in the country, which was reflected in the culture of the day.

27.3 Victory in the European Theater

Upon entering the war, President Roosevelt believed that the greatest threat to the long-term survival of democracy and freedom would be a German victory. Hence, he entered into an alliance with British prime minister Winston Churchill and Soviet premier Joseph Stalin to defeat the common enemy while also seeking to lay the foundation for a peaceful postwar world in which the United States would play a major and permanent role. Appeasement and nonintervention had been proven to be shortsighted and tragic policies that failed to provide security and peace either for the United States or for the world. With the aid of the British, the United States invaded North Africa and from there invaded Europe by way of Italy. However, the cross-channel invasion of Europe through France that Stalin had long called for did not come until 1944, by which time the Soviets had turned the tide of battle in eastern Europe. The liberation of Hitler’s concentration camps forced Allied nations to confront the grisly horrors that had been taking place as the war unfolded. The Big Three met for one last time in February 1945, at Yalta, where Churchill and Roosevelt agreed to several conditions that strengthened Stalin’s position. They planned to finalize their plans at a later conference, but Roosevelt died two months later.
27.4 The Pacific Theater and the Atomic Bomb

The way in which the United States fought the war in the Pacific was fueled by fear of Japanese imperialistic aggression, as well as anger over Japan’s attack on Pearl Harbor and its mistreatment of its enemies. It was also influenced by a long history of American racism towards Asians that dated back to the nineteenth century. From hostile anti-Japanese propaganda to the use of two atomic bombs on Japanese cities, America’s actions during the Pacific campaign were far more aggressive than they were in the European theater. Using the strategy of island hopping, the United States was able to get within striking distance of Japan. Only once they adopted this strategy were the Allied troops able to turn the tide against what had been a series of challenging Japanese victories. The war ended with Japan’s surrender.

The combined Allied forces had successfully waged a crusade against Nazi Germany, Italy, and Japan. The United States, forced to abandon a policy of nonintervention outside the Western Hemisphere, had been able to mobilize itself and produce the weapons and the warriors necessary to defeat its enemies. Following World War II, America would never again retreat from the global stage, and its early mastery of nuclear weapons would make it the dominant force in the postwar world.
Chapter Outline
27.1 The Origins of War: Europe, Asia, and the United States
27.2 The Home Front
27.3 Victory in the European Theater
27.4 The Pacific Theater and the Atomic Bomb

Introduction

World War II awakened the sleeping giant of the United States from the lingering effects of the Great Depression. Although the country had not entirely disengaged itself from foreign affairs following World War I, it had remained largely divorced from events occurring in Europe until the late 1930s. World War II forced the United States to involve itself once again in European affairs. It also helped to relieve the unemployment of the 1930s and stir industrial growth. The propaganda poster above ( Figure 27.1 ) was part of a concerted effort to get Americans to see themselves as citizens of a strong, unified country, dedicated to the protection of freedom and democracy. However, the war that unified many Americans also brought to the fore many of the nation’s racial and ethnic divisions, both on the frontlines—where military units, such as the one depicted in this poster, were segregated by race—and on the home front. Yet, the war also created new opportunities for ethnic minorities and women, which, in postwar America, would contribute to their demand for greater rights.
[ { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> To ensure that the United States did not get drawn into another war , Congress passed a series of Neutrality Acts in the second half of the 1930s . <hl> <hl> The Neutrality Act of 1935 banned the sale of armaments to warring nations . <hl> The following year , another Neutrality Act prohibited loaning money to belligerent countries . The last piece of legislation , the Neutrality Act of 1937 , forbade the transportation of weapons or passengers to belligerent nations on board American ships and also prohibited American citizens from traveling on board the ships of nations at war . President Franklin Roosevelt was aware of the challenges facing the targets of Nazi aggression in Europe and Japanese aggression in Asia . Although he hoped to offer U . S . support , Congress ’ s commitment to nonintervention was difficult to overcome . Such a policy in regards to Europe was strongly encouraged by Senator Gerald P . Nye of North Dakota . Nye claimed that the United States had been tricked into participating in World War I by a group of industrialists and bankers who sought to gain from the country ’ s participation in the war . <hl> The United States , Nye urged , should not be drawn again into an international dispute over matters that did not concern it . <hl> <hl> His sentiments were shared by other noninterventionists in Congress ( Figure 27.5 ) . <hl>", "hl_sentences": "To ensure that the United States did not get drawn into another war , Congress passed a series of Neutrality Acts in the second half of the 1930s . The Neutrality Act of 1935 banned the sale of armaments to warring nations . The United States , Nye urged , should not be drawn again into an international dispute over matters that did not concern it . His sentiments were shared by other noninterventionists in Congress ( Figure 27.5 ) .", "question": { "cloze_format": "The United States Senator who led the noninterventionists in Congress and called for neutrality legislation in the 1930s was ________.", "normal_format": "Who was the United States Senator who led the noninterventionists in Congress and called for neutrality legislation in the 1930s?", "question_choices": [ "Gerald P. Nye", "Robert Wagner", "George C. Marshall", "Neville Chamberlain" ], "question_id": "fs-idm14244880", "question_text": "The United States Senator who led the noninterventionists in Congress and called for neutrality legislation in the 1930s was ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "Roosevelt and his administration already had experience in establishing government controls and taking the initiative in economic matters during the Depression . In April 1941 , Roosevelt created the Office of Price Administration ( OPA ) , and , once the United States entered the war , the OPA regulated prices and attempted to combat inflation . The OPA ultimately had the power to set ceiling prices for all goods , except agricultural commodities , and to ration a long list of items . <hl> During the war , major labor unions pledged not to strike in order to prevent disruptions in production ; in return , the government encouraged businesses to recognize unions and promised to help workers bargain for better wages . 
<hl>", "hl_sentences": "During the war , major labor unions pledged not to strike in order to prevent disruptions in production ; in return , the government encouraged businesses to recognize unions and promised to help workers bargain for better wages .", "question": { "cloze_format": "During World War II, unionized workers agreed ________.", "normal_format": "During World War II, unionized workers agreed to what?", "question_choices": [ "to work without pay", "to go without vacations or days off", "to live near the factories to save time commuting", "to keep production going by not striking" ], "question_id": "fs-idm285922736", "question_text": "During World War II, unionized workers agreed ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "bracero program" }, "bloom": null, "hl_context": "Mexican Americans also encountered racial prejudice . The Mexican American population in Southern California grew during World War II due to the increased use of Mexican agricultural workers in the fields to replace the White workers who had left for better paying jobs in the defense industries . <hl> The United States and Mexican governments instituted the “ bracero ” program on August 4 , 1942 , which sought to address the needs of California growers for manual labor to increase food production during wartime . <hl> The result was the immigration of thousands of impoverished Mexicans into the United States to work as braceros , or manual laborers .", "hl_sentences": "The United States and Mexican governments instituted the “ bracero ” program on August 4 , 1942 , which sought to address the needs of California growers for manual labor to increase food production during wartime .", "question": { "cloze_format": "The program to recruit Mexican agricultural workers during World War II was the ________.", "normal_format": "What was the program to recruit Mexican agricultural workers during World War II?", "question_choices": [ "bracero program", "maquiladora program", "brazzos program", "campesino program" ], "question_id": "fs-idm247819440", "question_text": "The program to recruit Mexican agricultural workers during World War II was the ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "the invasion of western Europe to draw German forces away from the Soviet Union" }, "bloom": null, "hl_context": "Through a series of wartime conferences , Roosevelt and the other global leaders sought to come up with a strategy to both defeat the Germans and bolster relationships among allies . In January 1943 , at Casablanca , Morocco , Churchill convinced Roosevelt to delay an invasion of France in favor of an invasion of Sicily ( Figure 27.15 ) . It was also at this conference that Roosevelt enunciated the doctrine of “ unconditional surrender . ” Roosevelt agreed to demand an unconditional surrender from Germany and Japan to assure the Soviet Union that the United States would not negotiate a separate peace between the two belligerent states . He wanted a permanent transformation of Germany and Japan after the war . Roosevelt thought that announcing this as a specific war aim would discourage any nation or leader from seeking any negotiated armistice that would hinder efforts to reform and transform the defeated nations . Stalin , who was not at the conference , affirmed the concept of unconditional surrender when asked to do so . 
<hl> However , he was dismayed over the delay in establishing a “ second front ” along which the Americans and British would directly engage German forces in western Europe . <hl> <hl> A western front , brought about through an invasion across the English Channel , which Stalin had been demanding since 1941 , offered the best means of drawing Germany away from the east . <hl> At a meeting in Tehran , Iran , also in November 1943 , Churchill , Roosevelt , and Stalin met to finalize plans for a cross-channel invasion .", "hl_sentences": "However , he was dismayed over the delay in establishing a “ second front ” along which the Americans and British would directly engage German forces in western Europe . A western front , brought about through an invasion across the English Channel , which Stalin had been demanding since 1941 , offered the best means of drawing Germany away from the east .", "question": { "cloze_format": "The Soviet Union made a demand of Britain and the United States that ____.", "normal_format": "Which of the following demands did the Soviet Union make of Britain and the United States?", "question_choices": [ "the right to try all Nazi war criminals in the Soviet Union", "the invasion of North Africa to help the Soviet Union’s ally Iraq", "the invasion of western Europe to draw German forces away from the Soviet Union", "the right to place Communist Party leaders in charge of the German government" ], "question_id": "fs-idp6864688", "question_text": "Which of the following demands did the Soviet Union make of Britain and the United States?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "In the Pacific , MacArthur and the Allied forces pursued an island hopping strategy that bypassed certain island strongholds held by the Japanese that were of little or no strategic value . By seizing locations from which Japanese communications and transportation routes could be disrupted or destroyed , the Allies advanced towards Japan without engaging the thousands of Japanese stationed on garrisoned islands . The goal was to advance American air strength close enough to Japan proper to achieve air superiority over the home islands ; the nation could then be bombed into submission or at least weakened in preparation for an amphibious assault . <hl> By February 1945 , American forces had reached the island of Iwo Jima ( Figure 27.20 ) . <hl> <hl> Iwo Jima was originally meant to serve as a forward air base for fighter planes , providing cover for long-distance bombing raids on Japan . <hl> Two months later , an even larger engagement , the hardest fought and bloodiest battle of the Pacific theater , took place as American forces invaded Okinawa . The battle raged from April 1945 well into July 1945 ; the island was finally secured at the cost of seventeen thousand American soldiers killed and thirty-six thousand wounded . Japanese forces lost over 100,000 troops . Perhaps as many as 150,000 civilians perished as well .", "hl_sentences": "By February 1945 , American forces had reached the island of Iwo Jima ( Figure 27.20 ) . Iwo Jima was originally meant to serve as a forward air base for fighter planes , providing cover for long-distance bombing raids on Japan .", "question": { "cloze_format": "___ had to be captured in order to provide a staging area for U.S. bombing raids against Japan.", "normal_format": "Which of the following islands had to be captured in order to provide a staging area for U.S. 
bombing raids against Japan?", "question_choices": [ "Sakhalin", "Iwo Jima", "Molokai", "Reunion" ], "question_id": "fs-idm60399888", "question_text": "Which of the following islands had to be captured in order to provide a staging area for U.S. bombing raids against Japan?" }, "references_are_paraphrase": null } ]
27.1 The Origins of War: Europe, Asia, and the United States

Learning Objectives
By the end of this section, you will be able to:
Explain the factors in Europe that gave rise to Fascism and Nazism
Discuss the events in Europe and Asia that led to the start of the war
Identify the early steps taken by President Franklin D. Roosevelt to increase American aid to nations fighting totalitarianism while maintaining neutrality

The years between the First and Second World Wars were politically and economically tumultuous for the United States and especially for the world. The Russian Revolution of 1917, Germany’s defeat in World War I, and the subsequent Treaty of Versailles had broken up the Austro-Hungarian, German, and Russian empires and significantly redrew the map of Europe. President Woodrow Wilson had wished to make World War I the “war to end all wars” and hoped that his new paradigm of “collective security” in international relations, as actualized through the League of Nations, would limit power struggles among the nations of the world. However, during the next two decades, America’s attention turned away from global politics and toward its own needs. At the same time, much of the world was dealing with economic and political crises, and different types of totalitarian regimes began to take hold in Europe. In Asia, an ascendant Japan began to expand its borders. Although the United States remained focused on the economic challenges of the Great Depression as World War II approached, ultimately it became clear that American involvement in the fight against Nazi Germany and Japan was in the nation’s interest.

ISOLATION

While during the 1920s and 1930s there were Americans who favored active engagement in Europe, most Americans, including many prominent politicians, were leery of getting too involved in European affairs or accepting commitments to other nations that might restrict America’s ability to act independently, in keeping with the isolationist tradition. Although the United States continued to intervene in the affairs of countries in the Western Hemisphere during this period, the general mood in America was to avoid becoming involved in any crises that might lead the nation into another global conflict.

Despite its largely noninterventionist foreign policy, the United States did nevertheless take steps to try to lessen the chances of war and cut its defense spending at the same time. President Warren G. Harding’s administration participated in the Washington Naval Conference of 1921–1922, which reduced the size of the navies of the nine signatory nations. In addition, the Four Power Treaty, signed by the United States, Great Britain, France, and Japan in 1921, committed the signatories to eschewing any territorial expansion in Asia. In 1928, the United States and fourteen other nations signed the Kellogg-Briand Pact, declaring war an international crime. Despite hopes that such agreements would lead to a more peaceful world—far more nations signed on to the agreement in later years—they failed because none of them committed any of the nations to take action in the event of treaty violations.

THE MARCH TOWARD WAR

While the United States focused on domestic issues, economic depression and political instability were growing in Europe. During the 1920s, the international financial system was propped up largely by American loans to foreign countries. The crash of 1929, when the U.S.
stock market plummeted and American capital dried up, set in motion a series of financial chain reactions that contributed significantly to a global downward economic spiral. Around the world, industrialized economies faced significant problems of economic depression and worker unemployment.

Totalitarianism in Europe

Many European countries had been suffering even before the Great Depression began. A postwar recession and the continuation of wartime inflation had hurt many economies, as did a decrease in agricultural prices, which made it harder for farmers to buy manufactured goods or pay off loans to banks. In such an unstable environment, Benito Mussolini capitalized on the frustrations of the Italian people who felt betrayed by the Versailles Treaty. In 1919, Mussolini created the Fasci Italiani di Combattimento (Italian Combat Squadron). The organization’s main tenets of Fascism called for a heightened focus on national unity, militarism, social Darwinism, and loyalty to the state. Mussolini wanted a state organized to be what he called totalitario (totalitarian), which he insisted would mean “all within the state, none outside the state, none against the state.” With the support of major Italian industrialists and the king, who saw Fascism as a bulwark against growing Socialist and Communist movements, Mussolini became prime minister in 1922. Between 1925 and 1927, Mussolini transformed the nation into a single-party state and removed all restraints on his power.

In Germany, a similar pattern led to the rise of the totalitarian National Socialist Party. Political fragmentation through the 1920s accentuated the severe economic problems facing the country. As a result, the German Communist Party began to grow in strength, frightening many wealthy and middle-class Germans. In addition, the terms of the Treaty of Versailles had given rise to a deep-seated resentment of the victorious Allies. It was in such an environment that Adolf Hitler’s anti-Communist National Socialist Party—the Nazis—was born.

The Nazis gained numerous followers during the Great Depression, which hurt Germany tremendously, plunging it further into economic crisis. By 1932, nearly 30 percent of the German labor force was unemployed. Not surprisingly, the political mood was angry and sullen. Hitler, a World War I veteran, promised to return Germany to greatness. By the beginning of 1933, the Nazis had become the largest party in the German legislature. Germany’s president, Paul von Hindenburg, at the urging of large industrialists who feared a Communist uprising, appointed Hitler to the position of chancellor in January 1933. In the elections that took place in early March 1933, the Nazis gained the political power to pass the Enabling Act later that same month, which gave Hitler the power to make all laws for the next four years. Hitler thus effectively became the dictator of Germany and remained so long after the four-year term passed. Like Italy, Germany had become a one-party totalitarian state ( Figure 27.3 ). Nazi Germany was an anti-Semitic nation, and in 1935, the Nuremberg Laws deprived Jews, whom Hitler blamed for Germany’s downfall, of German citizenship and the rights thereof.

Once in power, Hitler began to rebuild German military might. He commenced his program by withdrawing Germany from the League of Nations in October 1933.
In 1936, in accordance with his promise to restore German greatness, Hitler dispatched military units into the Rhineland, on the border with France, which was an act contrary to the provisions of the Versailles Treaty. In March 1938, claiming that he sought only to reunite ethnic Germans within the borders of one country, Hitler invaded Austria. At a conference in Munich later that year, Great Britain’s prime minister, Neville Chamberlain, and France’s prime minister, Édouard Daladier, agreed to the partial dismemberment of Czechoslovakia and the occupation of the Sudetenland (a region with a sizable German population) by German troops ( Figure 27.4 ). This Munich Pact offered a policy of appeasement, in the hope that German expansionist appetites could be satisfied without war. But not long after the agreement, Germany occupied the rest of Czechoslovakia as well.

Leaders in the Soviet Union, which developed its own form of brutal totalitarianism through communism, paid close attention to Hitler’s actions and public pronouncements. Soviet leader Joseph Stalin realized that Poland, part of which had belonged to Germany before the First World War, was most likely next. Although fiercely opposed to Hitler, Stalin, sobered by the French and British betrayal of Czechoslovakia and unprepared for a major war, decided the best way to protect the Soviet Union, and gain additional territory, was to come to some accommodation with the German dictator. In August 1939, Germany and the Soviet Union essentially agreed to divide Poland between them and not make war upon one another.

Japan

Militaristic politicians also took control of Japan in the 1930s. The Japanese had worked assiduously for decades to modernize, build their strength, and become a prosperous, respected nation. The sentiment in Japan was decidedly pro-capitalist, and the Japanese militarists were fiercely supportive of a capitalist economy. They viewed with great concern the rise of Communism in the Soviet Union and in particular China, where the issue was fueling a civil war, and feared that the Soviet Union would make inroads in Asia by assisting China’s Communists. The Japanese militarists thus found a common ideological enemy with Fascism and National Socialism, which had based their rise to power on anti-Communist sentiments. In 1936, Japan and Germany signed the Anti-Comintern Pact, pledging mutual assistance in defending themselves against the Comintern, the international agency created by the Soviet Union to promote worldwide Communist revolution. In 1937, Italy joined the pact, essentially creating the foundation of what became the military alliance of the Axis powers.

Like its European allies, Japan was intent upon creating an empire for itself. In 1931, it created a new nation, a puppet state called Manchukuo, which had been cobbled together from the three northernmost provinces of China. Although the League of Nations formally protested Japan’s seizure of Chinese territory in 1931 and 1932, it did nothing else. In 1937, a clash between Japanese and Chinese troops, known as the Marco Polo Bridge Incident, led to a full-scale invasion of China by the Japanese. By the end of the year, the Chinese had suffered some serious defeats. In Nanjing, then called Nanking by Westerners, Japanese soldiers systematically raped Chinese women and massacred hundreds of thousands of civilians, leading to international outcry. Public sentiment against Japan in the United States reached new heights.
Members of Protestant churches that were involved in missionary work in China were particularly outraged, as were Chinese Americans. A troop of Chinese American Boy Scouts in New York City’s Chinatown defied Boy Scout policy and marched in protest against Japanese aggression.

FROM NEUTRALITY TO ENGAGEMENT

President Franklin Roosevelt was aware of the challenges facing the targets of Nazi aggression in Europe and Japanese aggression in Asia. Although he hoped to offer U.S. support, Congress’s commitment to nonintervention was difficult to overcome. Such a policy in regards to Europe was strongly encouraged by Senator Gerald P. Nye of North Dakota. Nye claimed that the United States had been tricked into participating in World War I by a group of industrialists and bankers who sought to gain from the country’s participation in the war. The United States, Nye urged, should not be drawn again into an international dispute over matters that did not concern it. His sentiments were shared by other noninterventionists in Congress ( Figure 27.5 ).

Roosevelt’s willingness to accede to the demands of the noninterventionists led him even to refuse assistance to those fleeing Nazi Germany. Although Roosevelt was aware of Nazi persecution of the Jews, he did little to aid them. In a symbolic act of support, he withdrew the American ambassador to Germany in 1938. He did not press for a relaxation of immigration quotas that would have allowed more refugees to enter the country, however. In 1939, he refused to support a bill that would have admitted twenty thousand Jewish refugee children to the United States. Again in 1939, when German refugees aboard the SS St. Louis, most of them Jews, were refused permission to land in Cuba and turned to the United States for help, the U.S. State Department informed them that immigration quotas for Germany had already been filled. Once again, Roosevelt did not intervene, because he feared that nativists in Congress might smear him as a friend of Jews.

To ensure that the United States did not get drawn into another war, Congress passed a series of Neutrality Acts in the second half of the 1930s. The Neutrality Act of 1935 banned the sale of armaments to warring nations. The following year, another Neutrality Act prohibited loaning money to belligerent countries. The last piece of legislation, the Neutrality Act of 1937, forbade the transportation of weapons or passengers to belligerent nations on board American ships and also prohibited American citizens from traveling on board the ships of nations at war.

Once all-out war began between Japan and China in 1937, Roosevelt sought ways to help the Chinese that did not violate U.S. law. Since Japan did not formally declare war on China, a state of belligerency did not technically exist. Therefore, under the terms of the Neutrality Acts, America was not prevented from transporting goods to China. In 1940, the president of China, Chiang Kai-shek, was able to prevail upon Roosevelt to ship to China one hundred P-40 fighter planes and to allow American volunteers, who technically became members of the Chinese Air Force, to fly them.

War Begins in Europe

In 1938, the agreement reached at the Munich Conference failed to satisfy Hitler—in fact, the refusal of Britain and France to go to war over the issue infuriated the German dictator.
In May of the next year, Germany and Italy formalized their military alliance with the “Pact of Steel.” On September 1, 1939, Hitler unleashed his Blitzkrieg, or “lightning war,” against Poland, using swift, surprise attacks combining infantry, tanks, and aircraft to quickly overwhelm the enemy. Britain and France had already learned from Munich that Hitler could not be trusted and that his territorial demands were insatiable. On September 3, 1939, they declared war on Germany, and the European phase of World War II began.

Responding to the German invasion of Poland, Roosevelt worked with Congress to alter the Neutrality Laws to permit a policy of “Cash and Carry” in munitions for Britain and France. The legislation, passed and signed by Roosevelt in November 1939, permitted belligerents to purchase war materiel if they could pay cash for it and arrange for its transportation on board their own ships.

When the Germans commenced their spring offensive in 1940, they defeated France in six weeks with a highly mobile and quick invasion of France, Belgium, Luxembourg, and the Netherlands. In the Far East, Japan took advantage of France’s surrender to Germany to occupy French Indochina. In response, beginning with the Export Control Act in July 1940, the United States began to embargo the shipment of various materials to Japan, starting first with aviation gasoline and machine tools, and proceeding to scrap iron and steel.

The Atlantic Charter

Following the surrender of France, the Battle of Britain began, as Germany proceeded to try to bomb England into submission. As the battle raged in the skies over Great Britain throughout the summer and autumn of 1940 ( Figure 27.6 ), Roosevelt became increasingly concerned over England’s ability to hold out against the German juggernaut. In June 1941, Hitler broke the nonaggression pact with the Soviet Union that had given him the backing to ravage Poland and marched his armies deep into Soviet territory, where they would kill Red Army regulars and civilians by the millions until their advances were stalled and ultimately reversed by the devastating battle of Stalingrad, which took place from August 23, 1942, until February 2, 1943, when, surrounded and out of ammunition, the German 6th Army surrendered.

In August 1941, Roosevelt met with the British prime minister, Winston Churchill, off the coast of Newfoundland, Canada. At this meeting, the two leaders drafted the Atlantic Charter, the blueprint of Anglo-American cooperation during World War II. The charter stated that the United States and Britain sought no territory from the conflict. It proclaimed that citizens of all countries should be given the right of self-determination, self-government should be restored in places where it had been eliminated, and trade barriers should be lowered. Further, the charter mandated freedom of the seas, renounced the use of force to settle international disputes, and called for postwar disarmament.

In March 1941, concerns over Britain’s ability to defend itself also influenced Congress to authorize a policy of Lend Lease, a practice by which the United States could sell, lease, or transfer armaments to any nation deemed important to the defense of the United States. Lend Lease effectively ended the policy of nonintervention and dissolved America’s pretense of being a neutral nation. The program ran from 1941 to 1945, and distributed some $45 billion worth of weaponry and supplies to Britain, the Soviet Union, China, and other allies.
A Date Which Will Live in Infamy

By the second half of 1941, Japan was feeling the pressure of the American embargo. As it could no longer buy strategic material from the United States, the Japanese were determined to obtain a sufficient supply of oil by taking control of the Dutch East Indies. However, they realized that such an action might increase the possibility of American intervention, since the Philippines, a U.S. territory, lay on the direct route that oil tankers would have to take to reach Japan from Indonesia. Japanese leaders thus attempted to secure a diplomatic solution by negotiating with the United States while also authorizing the navy to plan for war. The Japanese government also decided that if no peaceful resolution could be reached by the end of November 1941, then the nation would have to go to war against the United States.

The American final counterproposal to various offers by Japan was for the Japanese to completely withdraw, without any conditions, from China and enter into nonaggression pacts with all the Pacific powers. Japan found that proposal unacceptable but delayed its rejection for as long as possible. Then, at 7:48 a.m. on Sunday, December 7, the Japanese attacked the U.S. Pacific fleet at anchor in Pearl Harbor, Hawaii ( Figure 27.7 ). They launched two waves of attacks from six aircraft carriers that had snuck into the central Pacific without being detected. The attacks brought some 353 fighters, bombers, and torpedo bombers down on the unprepared fleet. The Japanese hit all eight battleships in the harbor and sank four of them. They also damaged several cruisers and destroyers. On the ground, nearly two hundred aircraft were destroyed, and twenty-four hundred servicemen were killed. Another eleven hundred were wounded. Japanese losses were minimal.

The strike was part of a more concerted campaign by the Japanese to gain territory. They subsequently attacked Hong Kong, Malaysia, Singapore, Guam, Wake Island, and the Philippines.

Whatever reluctance to engage in conflict the American people had had before December 7, 1941, quickly evaporated. Americans’ incredulity that Japan would take such a radical step quickly turned to a fiery anger, especially as the attack took place while Japanese diplomats in Washington were still negotiating a possible settlement. President Roosevelt, referring to the day of the attack as “a date which will live in infamy,” asked Congress for a declaration of war, which it delivered to Japan on December 8. On December 11, Germany and Italy declared war on the United States in accordance with their alliance with Japan. Against its wishes, the United States had become part of the European conflict.

27.2 The Home Front

Learning Objectives
By the end of this section, you will be able to:
Describe the steps taken by the United States to prepare for war
Describe how the war changed employment patterns in the United States
Discuss the contributions of civilians on the home front, especially women, to the war effort
Analyze how the war affected race relations in the United States

The impact of the war on the United States was nowhere near as devastating as it was in Europe and the Pacific, where the battles were waged, but it still profoundly changed everyday life for all Americans. On the positive side, the war effort finally and definitively ended the economic depression that had been plaguing the country since 1929.
It also called upon Americans to unite behind the war effort and give of their money, their time, and their effort, as they sacrificed at home to assure success abroad. The upheaval caused by White men leaving for war meant that for many disenfranchised groups, such as women and African Americans, there were new opportunities in employment and wage earning. Still, fear and racism drove cracks in the nation’s unified facade.

MOBILIZING A NATION

Although the United States had sought to avoid armed conflict, the country was not entirely unprepared for war. Production of armaments had increased since 1939, when, as a result of Congress’s authorization of the Cash and Carry policy, contracts for weapons had begun to trickle into American factories. War production increased further following the passage of Lend Lease in 1941. However, when the United States entered the war, the majority of American factories were still engaged in civilian production, and many doubted that American businesses would be sufficiently motivated to convert their factories to wartime production.

Just a few years earlier, Roosevelt had been frustrated and impatient with business leaders when they failed to fully support the New Deal, but enlisting industrialists in the nation’s crusade was necessary if the United States was to produce enough armaments to win the war. To encourage cooperation, the government agreed to assume all costs of development and production, and also guarantee a profit on the sale of what was produced. This arrangement resulted in 233 to 350 percent increases in profits over what the same businesses had been able to achieve from 1937 to 1940. In terms of dollars earned, corporate profits rose from $6.4 billion in 1940 to nearly $11 billion in 1944. As the country switched to wartime production, the top one hundred U.S. corporations received approximately 70 percent of government contracts; big businesses prospered.

In addition to gearing up industry to fight the war, the country also needed to build an army. A peacetime draft, the first in American history, had been established in September 1940, but the initial draftees were to serve for only one year, a length of time that was later extended. Furthermore, Congress had specified that no more than 900,000 men could receive military training at any one time. By December 1941, the United States had only one division completely ready to be deployed. Military planners estimated that it might take nine million men to secure victory. A massive draft program was required to expand the nation’s military forces. Over the course of the war, approximately fifty million men registered for the draft; ten million were subsequently inducted into the service.

Approximately 2.5 million African Americans registered for the draft, and 1 million of them subsequently served. Initially, African American soldiers, who served in segregated units, had been used as support troops and not been sent into combat. By the end of the war, however, manpower needs resulted in African American recruits serving in the infantry and flying planes. The Tuskegee Institute in Alabama had instituted a civilian pilot training program for aspiring African American pilots. When the war began, the Department of War absorbed the program and adapted it to train combat pilots. First Lady Eleanor Roosevelt demonstrated both her commitment to African Americans and the war effort by visiting Tuskegee in 1941, shortly after the unit had been organized.
To encourage the military to give the airmen a chance to serve in actual combat, she insisted on taking a ride in a plane flown by an African American pilot to demonstrate the Tuskegee Airmen’s skill ( Figure 27.8 ). When the Tuskegee Airmen did get their opportunity to serve in combat, they did so with distinction.

In addition, forty-four thousand Native Americans served in all theaters of the war. In some of the Pacific campaigns, Native Americans made distinct and unique contributions to Allied victories. Navajo marines served in communications units, exchanging information over radios using codes based on their native language, which the Japanese were unable to comprehend or to crack. They became known as code talkers and participated in the battles of Guadalcanal, Iwo Jima, Peleliu, and Tarawa. A smaller number of Comanche code talkers performed a similar function in the European theater.

While millions of Americans heeded the rallying cry for patriotism and service, there were those who, for various reasons, did not accept the call. Before the war began, American Peace Mobilization had campaigned against American involvement in the European conflict as had the noninterventionist America First organization. Both groups ended their opposition, however, at the time of the German invasion of the Soviet Union and the Japanese attack on Pearl Harbor, respectively. Nevertheless, during the war, some seventy-two thousand men registered as conscientious objectors (COs), and fifty-two thousand were granted that status. Of that fifty-two thousand, some accepted noncombat roles in the military, whereas others accepted unpaid work in civilian work camps. Many belonged to pacifist religious sects such as the Quakers or Mennonites. They were willing to serve their country, but they refused to kill. COs suffered public condemnation for disloyalty, and family members often turned against them. Strangers assaulted them. A portion of the town of Plymouth, NH, was destroyed by fire because the residents did not want to call upon the services of the COs trained as firemen at a nearby camp. Only a very small number of men evaded the draft completely.

Most Americans, however, were willing to serve, and they required a competent officer corps. The very same day that Germany invaded Poland in 1939, President Roosevelt promoted George C. Marshall, a veteran of World War I and an expert at training officers, from a one-star general to a four-star general, and gave him the responsibility of serving as Army Chief of Staff. The desire to create a command staff that could win the army’s confidence no doubt contributed to the rather meteoric rise of Dwight D. Eisenhower ( Figure 27.9 ). During World War I, Eisenhower had been assigned to organize America’s new tank corps, and, although he never saw combat during the war, he demonstrated excellent organizational skills. When the United States entered World War II, Eisenhower was appointed commander of the General European Theater of Operations in June 1942.

My Story
General Eisenhower on Winning a War

Promoted to the level of one-star general just before the attack on Pearl Harbor, Dwight D. Eisenhower had never held an active command position above the level of a battalion and was not considered a potential commander of major military operations. However, after he was assigned to the General Staff in Washington, DC, he quickly rose through the ranks and, by late 1942, was appointed commander of the North African campaign.
Excerpts from General Eisenhower’s diary reveal his dedication to the war effort. He continued to work despite suffering a great personal loss. March 9, 1942 General McNaughton (commanding Canadians in Britain) came to see me. He believes in attacking in Europe (thank God). He’s over here in an effort to speed up landing craft production and cargo ships. Has some d___ good ideas. Sent him to see Somervell and Admiral Land. How I hope he can do something on landing craft. March 10, 1942 Father dies this morning. Nothing I can do but send a wire. One thing that might help win this war is to get someone to shoot [Admiral] King. He’s the antithesis of cooperation, a deliberately rude person, which means he’s a mental bully. He became Commander in Chief of the fleet some time ago. Today he takes over, also Stark’s job as chief of naval operations. It’s a good thing to get rid of the double head in the navy, and of course Stark was just a nice old lady, but this fellow is going to cause a blow-up sooner or later, I’ll bet a cookie. Gradually some of the people with whom I have to deal are coming to agree with me that there are just three “musts” for the Allies this year: hold open the line to England and support her as necessary, keep Russia in the war as an active participant; hold the India-Middle East buttress between Japs and Germans. All this assumes the safety from major attack of North America, Hawaii, and Caribbean area. We lost eight cargo ships yesterday. That we must stop, because any effort we make depends upon sea communication. March 11, 1942 I have felt terribly. I should like so much to be with my Mother these few days. But we’re at war. And war is not soft, it has no time to indulge even the deepest and most sacred emotions. I loved my Dad. I think my Mother the finest person I’ve ever known. She has been the inspiration for Dad’s life and a true helpmeet in every sense of the word. I’m quitting work now, 7:30 p.m. I haven’t the heart to go on tonight. —Dwight D. Eisenhower, The Eisenhower Diaries What does Eisenhower identify as the most important steps to take to win the war? EMPLOYMENT AND MIGRATION PATTERNS IN THE UNITED STATES Even before the official beginning of the war, the country started to prepare. In August 1940, Congress created the Defense Plant Corporation, which had built 344 plants in the West by 1945, and had funneled over $1.8 billion into the economies of western states. After Pearl Harbor, as American military strategists began to plan counterattacks and campaigns against the Axis powers, California became a training ground. Troops trained there for tank warfare and amphibious assaults as well as desert campaigns—since the first assault against the Axis powers was planned for North Africa. As thousands of Americans swarmed to the West Coast to take jobs in defense plants and shipyards, cities like Richmond, California, and nearby Oakland, expanded quickly. Richmond grew from a city of 20,000 people to 100,000 in only three years. Almost overnight, the population of California skyrocketed. African Americans moved out of the rural South into northern or West Coast cities to provide the muscle and skill to build the machines of war. Building on earlier waves of African American migration after the Civil War and during World War I, the demographics of the nation changed with the growing urbanization of the African American population. 
Women also relocated to either follow their husbands to military bases or take jobs in the defense industry, as the total mobilization of the national economy began to tap into previously underemployed populations. Roosevelt and his administration already had experience in establishing government controls and taking the initiative in economic matters during the Depression. In April 1941, Roosevelt created the Office of Price Administration (OPA), and, once the United States entered the war, the OPA regulated prices and attempted to combat inflation. The OPA ultimately had the power to set ceiling prices for all goods, except agricultural commodities, and to ration a long list of items. During the war, major labor unions pledged not to strike in order to prevent disruptions in production; in return, the government encouraged businesses to recognize unions and promised to help workers bargain for better wages. As in World War I, the government turned to bond drives to finance the war. Millions of Americans purchased more than $185 billion worth of war bonds. Children purchased Victory Stamps and exchanged full stamp booklets for bonds. The federal government also instituted the current tax-withholding system to ensure collection of taxes. Finally, the government once again urged Americans to plant victory gardens, using marketing campaigns and celebrities to promote the idea ( Figure 27.10 ). Americans responded eagerly, planting gardens in their backyards and vacant lots. The federal government also instituted rationing to ensure that America’s fighting men were well fed. Civilians were issued ration booklets, books of coupons that enabled them to buy limited amounts of meat, coffee, butter, sugar, and other foods. Wartime cookbooks were produced, such as the Betty Crocker cookbook Your Share, telling housewives how to prepare tasty meals without scarce food items. Other items were rationed as well, including shoes, liquor, cigarettes, and gasoline. With a few exceptions, such as doctors, Americans were allowed to drive their automobiles only on certain days of the week. Most Americans complied with these regulations, but some illegally bought and sold rationed goods on the black market. Civilians on the home front also recycled, conserved, and participated in scrap drives to collect items needed for the production of war materiel. Housewives saved cooking fats, needed to produce explosives. Children collected scrap metal, paper, rubber, silk, nylon, and old rags. Some children sacrificed beloved metal toys in order to “win the war.” Civilian volunteers, trained to recognize enemy aircraft, watched the skies along the coasts and on the borders. WOMEN IN THE WAR: ROSIE THE RIVETER AND BEYOND As in the previous war, the gap in the labor force created by departing soldiers meant opportunities for women. In particular, World War II led many to take jobs in defense plants and factories around the country. For many women, these jobs provided unprecedented opportunities to move into occupations previously thought of as exclusive to men, especially the aircraft industry, where a majority of the workforce was composed of women by 1943. Most women in the labor force did not work in the defense industry, however. The majority took over other factory jobs that had been held by men. Many took positions in offices as well. 
As White women, many of whom had been in the workforce before the war, moved into these more highly paid positions, African American women, most of whom had previously been limited to domestic service, took over White women’s lower-paying positions in factories, though some were also hired by defense plants. Although women often earned more money than ever before, it was still far less than men received for doing the same jobs. Nevertheless, many achieved a degree of financial self-reliance that was enticing. By 1944, as many as 33 percent of the women working in the defense industries were mothers and worked “double-day” shifts—one at the plant and one at home. Still, there was some resistance to women going to work in such a male-dominated environment. In order to recruit women for factory jobs, the government created a propaganda campaign centered on a now-iconic figure known as Rosie the Riveter ( Figure 27.11 ). Rosie, who was a composite based on several real women, was most famously depicted by American illustrator Norman Rockwell. Rosie was tough yet feminine. To reassure men that the demands of war would not make women too masculine, some factories gave female employees lessons in how to apply makeup, and cosmetics were never rationed during the war. Elizabeth Arden even created a special red lipstick for use by women reservists in the Marine Corps. Although many saw the entry of women into the workforce as a positive thing, they also acknowledged that working women, especially mothers, faced great challenges. To try to address the dual role of women as workers and mothers, Eleanor Roosevelt urged her husband to approve the first U.S. government childcare facilities under the Community Facilities Act of 1942. Eventually, seven centers, serving 105,000 children, were built. The First Lady also urged industry leaders like Henry Kaiser to build model childcare facilities for their workers. Still, these efforts did not meet the full need for childcare for working mothers. The lack of childcare facilities meant that many children had to fend for themselves after school, and some had to assume responsibility for housework and the care of younger siblings. Some mothers took younger children to work with them and left them locked in their cars during the workday. Police and social workers also reported an increase in juvenile delinquency during the war. New York City saw its average number of juvenile cases balloon from 9,500 in the prewar years to 11,200 during the war. In San Diego, delinquency rates for girls, including sexual misbehavior, shot up by 355 percent. It is unclear whether more juveniles were actually engaging in delinquent behavior; the police may simply have become more vigilant during wartime and arrested youngsters for activities that would have gone overlooked before the war. In any event, law enforcement and juvenile courts attributed the perceived increase to a lack of supervision by working mothers. Tens of thousands of women served in the war effort more directly. Approximately 350,000 joined the military. They worked as nurses, drove trucks, repaired airplanes, and performed clerical work to free up men for combat. Over sixteen hundred of the women nurses received various decorations for courage under fire, but many also died or were captured in the war zones. Those who joined the Women’s Airforce Service Pilots (WASPs) flew planes from the factories to military bases. Many women also flocked to work in a variety of civil service jobs. 
Others worked as chemists and engineers, developing weapons for the war. This included thousands of women who were recruited to work on the Manhattan Project, developing the atomic bomb. THE CULTURE OF WAR: ENTERTAINERS AND THE WAR EFFORT During the Great Depression, movies had served as a welcome diversion from the difficulties of everyday life, and during the war, this held still truer. By 1941, there were more movie theaters than banks in the United States. In the 1930s, newsreels, which were shown in movie theaters before feature films, had informed the American public of what was happening elsewhere in the world. This interest grew once American armies began to engage the enemy. Many informational documentaries about the war were also shown in movie theaters. The most famous were those in the Why We Fight series, filmed by Hollywood director Frank Capra. During the war, Americans flocked to the movies not only to learn what was happening to the troops overseas but also to be distracted from the fears and hardships of wartime by cartoons, dramas, and comedies. By 1945, movie attendance had reached an all-time high. Many feature films were patriotic stories that showed the day’s biggest stars as soldiers fighting the nefarious German and Japanese enemy. During the war years, there was a consistent supply of patriotic movies, with actors glorifying and inspiring America’s fighting men. John Wayne, who had become a star in the 1930s, appeared in many war-themed movies, including The Fighting Seabees and Back to Bataan . Besides appearing in patriotic movies, many male entertainers temporarily gave up their careers to serve in the armed forces ( Figure 27.12 ). Jimmy Stewart served in the Army Air Force and appeared in a short film entitled Winning Your Wings that encouraged young men to enlist. Tyrone Power joined the U.S. Marines. Female entertainers did their part as well. Rita Hayworth and Marlene Dietrich entertained the troops. African American singer and dancer Josephine Baker entertained Allied troops in North Africa and also carried secret messages for the French Resistance. Actress Carole Lombard was killed in a plane crash while returning home from a rally where she had sold war bonds. Defining American The Meaning of Democracy E. B. White was one of the most famous writers of the twentieth century. During the 1940s, he was known for the articles that he contributed to The New Yorker and the column that he wrote for Harper’s Magazine . Today, he is remembered for his children’s books Stuart Little and Charlotte’s Web , and for his collaboration with William Strunk, Jr., The Elements of Style , a guide to writing. In 1943, he wrote a definition of democracy as an example of what Americans hoped that they were fighting for. We received a letter from the Writer’s War Board the other day asking for a statement on ‘The Meaning of Democracy.’ It presumably is our duty to comply with such a request, and it is certainly our pleasure. Surely the Board knows what democracy is. It is the line that forms on the right. It is the ‘don’t’ in don’t shove. It is the hole in the stuffed shirt through which the sawdust slowly trickles; it is the dent in the high hat. Democracy is the recurrent suspicion that more than half of the people are right more than half of the time. It is the feeling of privacy in the voting booths, the feeling of communion in the libraries, the feeling of vitality everywhere. Democracy is a letter to the editor. Democracy is the score at the beginning of the ninth. 
It is an idea that hasn’t been disproved yet, a song the words of which have not gone bad. It is the mustard on the hot dog and the cream in the rationed coffee. Democracy is a request from a War Board, in the middle of the morning in the middle of a war, wanting to know what democracy is. Do you agree with this definition of democracy? Would you change anything to make it more contemporary? SOCIAL TENSIONS ON THE HOME FRONT The need for Americans to come together, whether in Hollywood, the defense industries, or the military, to support the war effort encouraged feelings of unity among the American population. However, the desire for unity did not always mean that Americans of color were treated as equals or even tolerated, despite their proclamations of patriotism and their willingness to join in the effort to defeat America’s enemies in Europe and Asia. For African Americans, Mexican Americans, and especially for Japanese Americans, feelings of patriotism and willingness to serve one’s country both at home and abroad were not enough to guarantee equal treatment by White Americans or to prevent the U.S. government from regarding them as the enemy. African Americans and Double V The African American community had, at the outset of the war, forged some promising relationships with the Roosevelt administration through civil rights activist Mary McLeod Bethune and Roosevelt’s “Black Cabinet” of African American advisors. Through the intervention of Eleanor Roosevelt, Bethune was appointed to the advisory council set up by the War Department Women’s Interest Section. In this position, Bethune was able to organize the first officer candidate school for women and enable African American women to become officers in the Women’s Army Auxiliary Corps (WAAC), which was renamed Women's Army Corps (WAC) a year later when it was authorized as a branch of the U.S. Army. As the U.S. economy revived as a result of government defense contracts, African Americans wanted to ensure that their service to the country earned them better opportunities and more equal treatment. Accordingly, in 1941, African American labor leader A. Philip Randolph pressured Roosevelt with a threatened “March on Washington.” In response, the president signed Executive Order 8802, which created the Fair Employment Practices Committee to bar racial discrimination in the defense industry. While the committee was effective in forcing defense contractors, such as the DuPont Corporation, to hire African Americans, it was not able to force corporations to place African Americans in well-paid positions. For example, at DuPont’s plutonium production plant in Hanford, Washington, African Americans were hired as low-paid construction workers but not as laboratory technicians. During the war, the Congress of Racial Equality (CORE), founded by James Farmer in 1942, used peaceful civil disobedience in the form of sit-ins to desegregate certain public spaces in Washington, DC, and elsewhere, as its contribution to the war effort. Members of CORE sought support for their movement by stating that one of their goals was to deprive the enemy of the ability to generate anti-American propaganda by accusing the United States of racism. After all, they argued, if the United States were going to denounce Germany and Japan for abusing human rights, the country should itself be as exemplary as possible. 
Indeed, CORE’s actions were in keeping with the goals of the Double V campaign that was begun in 1942 by the Pittsburgh Courier , the largest African American newspaper at the time ( Figure 27.13 ). The campaign called upon African Americans to accomplish the two “Vs”: victory over America’s foreign enemies and victory over racism in the United States. Despite the willingness of African Americans to fight for the United States, racial tensions often erupted in violence, as the geographic relocation necessitated by the war brought African Americans into closer contact with Whites. There were race riots in Detroit, Harlem, and Beaumont, Texas, in which White residents responded with sometimes deadly violence to their new Black coworkers or neighbors. There were also racial incidents at or near several military bases in the South. Incidents of African American soldiers being harassed or assaulted occurred at Fort Benning, Georgia; Fort Jackson, South Carolina; Alexandria, Louisiana; Fayetteville, Arkansas; and Tampa, Florida. African American leaders such as James Farmer and Walter White, the executive secretary of the NAACP since 1931, were asked by General Eisenhower to investigate complaints of the mistreatment of African American servicemen while on active duty. They prepared a fourteen-point memorandum on how to improve conditions for African Americans in the service, sowing some of the seeds of the postwar civil rights movement during the war years. The Zoot Suit Riots Mexican Americans also encountered racial prejudice. The Mexican American population in Southern California grew during World War II due to the increased use of Mexican agricultural workers in the fields to replace the White workers who had left for better paying jobs in the defense industries. The United States and Mexican governments instituted the “bracero” program on August 4, 1942, which sought to address the needs of California growers for manual labor to increase food production during wartime. The result was the immigration of thousands of impoverished Mexicans into the United States to work as braceros , or manual laborers. Forced by racial discrimination to live in the barrios of East Los Angeles, many Mexican American youths sought to create their own identity and began to adopt a distinctive style of dress known as zoot suits , which were also popular among many young African American men. The zoot suits, which required large amounts of cloth to produce, violated wartime regulations that restricted the amount of cloth that could be used in civilian garments. Among the charges leveled at young Mexican Americans was that they were un-American and unpatriotic; wearing zoot suits was seen as evidence of this. Many native-born Americans also denounced Mexican American men for being unwilling to serve in the military, even though some 350,000 Mexican Americans either volunteered to serve or were drafted into the armed services. In the summer of 1943, “zoot-suit riots” occurred in Los Angeles when carloads of White sailors, encouraged by other White civilians, stripped and beat a group of young men wearing the distinctive form of dress. In retaliation, young Mexican American men attacked and beat up sailors. The response was swift and severe, as sailors and civilians went on a spree attacking young Mexican Americans on the streets, in bars, and in movie theaters. More than one hundred people were injured. Internment Japanese Americans also suffered from discrimination. 
The Japanese attack on Pearl Harbor unleashed a cascade of racist assumptions about Japanese immigrants and Japanese Americans in the United States that culminated in the relocation and internment of 120,000 people of Japanese ancestry, 66 percent of whom had been born in the United States. Executive Order 9066 , signed by Roosevelt on February 19, 1942, gave the army power to remove people from “military areas” to prevent sabotage or espionage. The army then used this authority to relocate people of Japanese ancestry living along the Pacific coast of Washington, Oregon, and California, as well as in parts of Arizona, to internment camps in the American interior. Although a study commissioned earlier by Roosevelt indicated that there was little danger of disloyalty on the part of West Coast Japanese, fears of sabotage, perhaps spurred by the attempted rescue of a Japanese airman shot down at Pearl Harbor by Japanese living in Hawaii, and racist sentiments led Roosevelt to act. Ironically, Japanese in Hawaii were not interned. Although characterized afterwards as America’s worst wartime mistake by Eugene V. Rostow in the September 1945 edition of Harper’s Magazine , the government’s actions were in keeping with decades of anti-Asian sentiment on the West Coast. After the order went into effect, Lt. General John L. DeWitt, in charge of the Western Defense command, ordered approximately 127,000 Japanese and Japanese Americans—roughly 90 percent of those of Japanese ethnicity living in the United States—to assembly centers where they were transferred to hastily prepared camps in the interior of California, Arizona, Colorado, Utah, Idaho, Wyoming, and Arkansas ( Figure 27.14 ). Those who were sent to the camps reported that the experience was deeply traumatic. Families were sometimes separated. People could only bring a few of their belongings and had to abandon the rest of their possessions. The camps themselves were dismal and overcrowded. Despite the hardships, the Japanese attempted to build communities in the camps and resume “normal” life. Adults participated in camp government and worked at a variety of jobs. Children attended school, played basketball against local teams, and organized Boy Scout units. Nevertheless, they were imprisoned, and minor infractions, such as wandering too near the camp gate or barbed wire fences while on an evening stroll, could meet with severe consequences. Some sixteen thousand Germans, including some from Latin America, and German Americans were also placed in internment camps, as were 2,373 persons of Italian ancestry. However, unlike the case with Japanese Americans, they represented only a tiny percentage of the members of these ethnic groups living in the country. Most of these people were innocent of any wrongdoing, but some Germans were members of the Nazi party. No interned Japanese Americans were found guilty of sabotage or espionage. Despite being singled out for special treatment, many Japanese Americans sought to enlist, but draft boards commonly classified them as 4-C: undesirable aliens. However, as the war ground on, some were reclassified as eligible for service. In total, nearly thirty-three thousand Japanese Americans served in the military during the war. Of particular note was the 442nd Regimental Combat Team, which finished the war as the most decorated unit in U.S. military history given its size and length of service. 
While their successes, and the successes of the African American pilots, were lauded, the country and the military still struggled to contend with its own racial tensions, even as the soldiers in Europe faced the brutality of Nazi Germany. 27.3 Victory in the European Theater Learning Objectives By the end of this section, you will be able to: Identify the major battles of the European theater Analyze the goals and results of the major wartime summit meetings Despite the fact that a Japanese attack in the Pacific was the tripwire for America’s entrance into the war, Roosevelt had been concerned about Great Britain since the beginning of the Battle of Britain. Roosevelt viewed Germany as the greater threat to freedom. Hence, he leaned towards a “Europe First” strategy, even before the United States became an active belligerent. That meant that the United States would concentrate the majority of its resources and energies in achieving a victory over Germany first and then focus on defeating Japan. Within Europe, Churchill and Roosevelt were committed to saving Britain and acted with this goal in mind, often ignoring the needs of the Soviet Union. As Roosevelt imagined an “empire-free” postwar world, in keeping with the goals of the Atlantic Charter, he could also envision the United States becoming the preeminent world power economically, politically, and militarily. WARTIME DIPLOMACY Franklin Roosevelt entered World War II with an eye toward a new postwar world, one where the United States would succeed Britain as the leader of Western capitalist democracies, replacing the old British imperial system with one based on free trade and decolonization. The goals of the Atlantic Charter had explicitly included self-determination, self-government, and free trade. In 1941, although Roosevelt had yet to meet Soviet premier Joseph Stalin, he had confidence that he could forge a positive relationship with him, a confidence that Churchill believed was born of naiveté. These allied leaders, known as the Big Three , thrown together by the necessity to defeat common enemies, took steps towards working in concert despite their differences. Through a series of wartime conferences, Roosevelt and the other global leaders sought to come up with a strategy to both defeat the Germans and bolster relationships among allies. In January 1943, at Casablanca, Morocco, Churchill convinced Roosevelt to delay an invasion of France in favor of an invasion of Sicily ( Figure 27.15 ). It was also at this conference that Roosevelt enunciated the doctrine of “unconditional surrender.” Roosevelt agreed to demand an unconditional surrender from Germany and Japan to assure the Soviet Union that the United States would not negotiate a separate peace between the two belligerent states. He wanted a permanent transformation of Germany and Japan after the war. Roosevelt thought that announcing this as a specific war aim would discourage any nation or leader from seeking any negotiated armistice that would hinder efforts to reform and transform the defeated nations. Stalin, who was not at the conference, affirmed the concept of unconditional surrender when asked to do so. However, he was dismayed over the delay in establishing a “second front” along which the Americans and British would directly engage German forces in western Europe. A western front, brought about through an invasion across the English Channel, which Stalin had been demanding since 1941, offered the best means of drawing Germany away from the east. 
At a meeting in Tehran, Iran, in November 1943, Churchill, Roosevelt, and Stalin met to finalize plans for a cross-channel invasion. THE INVASION OF EUROPE Preparing to engage the Nazis in Europe, the United States landed in North Africa in 1942. The Axis campaigns in North Africa had begun when Italy declared war on England in June 1940, and British forces had invaded the Italian colony of Libya. The Italians had responded with a counteroffensive that penetrated into Egypt, only to be defeated by the British again. In response, Hitler dispatched the Afrika Korps under General Erwin Rommel, and the outcome of the situation was in doubt until shortly before American forces joined the British. Although the Allied campaign secured control of the southern Mediterranean and preserved Egypt and the Suez Canal for the British, Stalin and the Soviets were still engaging hundreds of German divisions in bitter struggles at Stalingrad and Leningrad. The invasion of North Africa did nothing to draw German troops away from the Soviet Union. An invasion of Europe by way of Italy, which is what the British and American campaign in North Africa laid the ground for, pulled a few German divisions away from their Russian targets. But while Stalin urged his allies to invade France, British and American troops pursued the defeat of Mussolini’s Italy. This choice greatly frustrated Stalin, who felt that British interests were taking precedence over the agony that the Soviet Union was enduring at the hands of the invading German army. However, Churchill saw Italy as the vulnerable underbelly of Europe and believed that Italian support for Mussolini was waning, suggesting that victory there might be relatively easy. Moreover, Churchill pointed out that if Italy were taken out of the war, then the Allies would control the Mediterranean, offering the Allies easier shipping access to both the Soviet Union and the British Far Eastern colonies. D-Day A direct assault on Nazi Germany’s “Fortress Europe” was still necessary for final victory. On June 6, 1944, the second front became a reality when Allied forces stormed the beaches of northern France on D-day. Beginning at 6:30 a.m., some twenty-four thousand British, Canadian, and American troops waded ashore along a fifty-mile piece of the Normandy coast ( Figure 27.16 ). Well over a million troops would follow their lead. German forces on the hills and cliffs above shot at them, and once they reached the beach, they encountered barbed wire and land mines. More than ten thousand Allied soldiers were wounded or killed during the assault. Following the establishment of beachheads at Normandy, it took months of difficult fighting before Paris was liberated on August 25, 1944. The invasion did succeed in diverting German forces from the eastern front to the western front, relieving some of the pressure on Stalin’s troops. By that time, however, Russian forces had already defeated the German army at Stalingrad, an event that many consider the turning point of the war in Europe, and begun to push the Germans out of the Soviet Union. Nazi Germany was not ready to surrender, however. On December 16, in a surprise move, the Germans threw nearly a quarter-million men at the Western Allies in an attempt to divide their armies and encircle major elements of the American forces. The struggle, known as the Battle of the Bulge, raged until the end of January. Some ninety thousand Americans were killed, wounded, or lost in action. 
Nevertheless, the Germans were turned back, and Hitler’s forces were so spent that they could never again mount offensive operations. Confronting the Holocaust The Holocaust, Hitler’s plan to kill the Jews of Europe, had begun as early as 1933, with the construction of Dachau, the first of more than forty thousand camps for incarcerating Jews, subjecting them to forced labor, or exterminating them. Eventually, six extermination camps were established between 1941 and 1945 in Polish territory. Jewish men, women, and children from throughout Europe were transported to these camps in Germany and other areas under Nazi control. Although the majority of the people in the camps were Jews, the Nazis sent Roma (gypsies), gays and lesbians, Jehovah’s Witnesses, and political opponents to the camps as well. Some prisoners were put to work at hard labor; many of them subsequently died of disease or starvation. Most of those sent to the extermination camps were killed upon arrival with poisoned gas. Ultimately, some eleven million people died in the camps. As Soviet troops began to advance from the east and U.S. forces from the west, camp guards attempted to hide the evidence of their crimes by destroying records and camp buildings, and marching surviving prisoners away from the sites ( Figure 27.17 ). My Story Felix L. Sparks on the Liberation of Dachau The horrors of the concentration camps remained with the soldiers who liberated them long after the war had ended. Below is an excerpt of the recollection of one soldier. Our first experience with the camp came as a traumatic shock. The first evidence of the horrors to come was a string of forty railway cars on a railway spur leading into the camp. Each car was filled with emaciated human corpses, both men and women. A hasty search by the stunned infantry soldiers revealed no signs of life among the hundreds of still bodies, over two thousand in all. It was in this atmosphere of human depravity, degradation and death that the soldiers of my battalion then entered the camp itself. Almost all of the SS command guarding the camp had fled before our arrival, leaving behind about two hundred lower ranking members of the command. There was some sporadic firing of weapons. As we approached the confinement area, the scene numbed my senses. Dante’s Inferno seemed pale compared to the real hell of Dachau. A row of small cement structures near the prison entrance contained a coal-fired crematorium, a gas chamber, and rooms piled high with naked and emaciated corpses. As I turned to look over the prison yard with un-believing eyes, I saw a large number of dead inmates lying where they had fallen in the last few hours or days before our arrival. Since all of the bodies were in various stages of decomposition, the stench of death was overpowering. The men of the 45th Infantry Division were hardened combat veterans. We had been in combat almost two years at that point. While we were accustomed to death, we were not able to comprehend the type of death that we encountered at Dachau. —Felix L. Sparks, remarks at the U.S. Holocaust Museum, May 8, 1995 YALTA AND PREPARING FOR VICTORY The last time the Big Three met was in early February 1945 at Yalta in the Soviet Union. Roosevelt was sick, and Stalin’s armies were pushing the German army back towards Berlin from the east. Churchill and Roosevelt thus had to accept a number of compromises that strengthened Stalin’s position in eastern Europe. 
In particular, they agreed to allow the Communist government installed by the Soviet Union in Poland to remain in power until free elections took place. For his part, Stalin reaffirmed his commitment, first voiced at Tehran, to enter the war against Japan following the surrender of Germany ( Figure 27.18 ). He also agreed that the Soviet Union would participate in the United Nations, a new peacekeeping body intended to replace the League of Nations. The Big Three left Yalta with many details remaining unclear, planning to finalize plans for the treatment of Germany and the shape of postwar Europe at a later conference. However, Roosevelt did not live to attend the next meeting. He died on April 12, 1945, and Harry S. Truman became president. By April 1945, Soviet forces had reached Berlin, and both the U.S. and British Allies were pushing up against Germany’s last defenses in the western part of the nation. Hitler committed suicide on April 30, 1945. On May 8, 1945, Germany surrendered. The war in Europe was over, and the Allies and liberated regions celebrated the end of the long ordeal. Germany was thoroughly defeated; its industries and cities were badly damaged. The victorious Allies set about determining what to do to rebuild Europe at the Potsdam Summit Conference in July 1945. Attending the conference were Stalin, Truman, and Churchill, now the outgoing prime minister, as well as the new British prime minister, Clement Attlee. Plans to divide Germany and Austria, and their capital cities, into four zones—to be occupied by the British, French, Americans, and Soviets—a subject discussed at Yalta, were finalized. In addition, the Allies agreed to dismantle Germany’s heavy industry in order to make it impossible for the country to produce more armaments. 27.4 The Pacific Theater and the Atomic Bomb Learning Objectives By the end of this section, you will be able to: Discuss the strategy employed against the Japanese and some of the significant battles of the Pacific campaign Describe the effects of the atomic bombs on Hiroshima and Nagasaki Analyze the decision to drop atomic bombs on Japan Japanese forces won a series of early victories against Allied forces from December 1941 to May 1942. They seized Guam and Wake Island from the United States, and streamed through Malaysia and Thailand into the Philippines and through the Dutch East Indies. By February 1942, they were threatening Australia. The Allies turned the tide in May and June 1942, at the Battle of the Coral Sea and the Battle of Midway. The Battle of Midway witnessed the first Japanese naval defeat since the nineteenth century. Shortly after the American victory, U.S. forces invaded Guadalcanal and New Guinea. Slowly, throughout 1943, the United States engaged in a campaign of “island hopping,” gradually moving across the Pacific to Japan. In 1944, the United States seized Saipan and won the Battle of the Philippine Sea. Progressively, American forces drew closer to the strategically important targets of Iwo Jima and Okinawa. THE PACIFIC CAMPAIGN During the 1930s, Americans had caught glimpses of Japanese armies in action and grew increasingly sympathetic towards war-torn China. Stories of Japanese atrocities bordering on genocide and the shock of the attack on Pearl Harbor intensified racial animosity toward the Japanese. Wartime propaganda portrayed Japanese soldiers as uncivilized and barbaric, sometimes even inhuman ( Figure 27.19 ), unlike America’s German foes. 
Admiral William Halsey spoke for many Americans when he urged them to “Kill Japs! Kill Japs! Kill more Japs!” Stories of the dispiriting defeats at Bataan and the Japanese capture of the Philippines at Corregidor in 1942 revealed the Japanese cruelty and mistreatment of Americans. The “Bataan Death March,” during which as many as 650 American and 10,000 Filipino prisoners of war died, intensified anti-Japanese feelings. Kamikaze attacks that took place towards the end of the war were regarded as proof of the irrationality of Japanese martial values and mindless loyalty to Emperor Hirohito. Despite the Allies’ Europe First strategy, American forces took the resources that they could assemble and swung into action as quickly as they could to blunt the Japanese advance. Infuriated by stories of defeat at the hands of the allegedly racially inferior Japanese, many high-ranking American military leaders demanded that greater attention be paid to the Pacific campaign. Rather than simply wait for the invasion of France to begin, naval and army officers such as General Douglas MacArthur argued that American resources should be deployed in the Pacific to reclaim territory seized by Japan. In the Pacific, MacArthur and the Allied forces pursued an island hopping strategy that bypassed certain island strongholds held by the Japanese that were of little or no strategic value. By seizing locations from which Japanese communications and transportation routes could be disrupted or destroyed, the Allies advanced towards Japan without engaging the thousands of Japanese stationed on garrisoned islands. The goal was to advance American air strength close enough to Japan proper to achieve air superiority over the home islands; the nation could then be bombed into submission or at least weakened in preparation for an amphibious assault. By February 1945, American forces had reached the island of Iwo Jima ( Figure 27.20 ). Iwo Jima was originally meant to serve as a forward air base for fighter planes, providing cover for long-distance bombing raids on Japan. Two months later, an even larger engagement, the hardest fought and bloodiest battle of the Pacific theater, took place as American forces invaded Okinawa. The battle raged from April 1945 well into July 1945; the island was finally secured at the cost of seventeen thousand American soldiers killed and thirty-six thousand wounded. Japanese forces lost over 100,000 troops. Perhaps as many as 150,000 civilians perished as well. DROPPING THE ATOMIC BOMB All belligerents in World War II sought to develop powerful and devastating weaponry. As early as 1939, German scientists had discovered how to split uranium atoms, the technology that would ultimately allow for the creation of the atomic bomb. Albert Einstein, who had emigrated to the United States in 1933 to escape the Nazis, urged President Roosevelt to launch an American atomic research project, and Roosevelt agreed to do so, with reservations. In late 1941, the program received its code name: the Manhattan Project . Located at Los Alamos, New Mexico, the Manhattan Project ultimately employed 150,000 people and cost some $2 billion. In July 1945, the project’s scientists successfully tested the first atomic bomb. In the spring of 1945, the military began to prepare for the possible use of an atomic bomb by choosing appropriate targets. 
Suspecting that the immediate bomb blast would extend over one mile and secondary effects would include fire damage, a compact city of significant military value with densely built frame buildings seemed to be the best target. Eventually, the city of Hiroshima, the headquarters of the Japanese Second Army, and the communications and supply hub for all of southern Japan, was chosen. The city of Kokura was chosen as the primary target of the second bomb, and Nagasaki, an industrial center producing war materiel and the largest seaport in southern Japan, was selected as a secondary target. The Enola Gay , a B-29 bomber named after its pilot’s mother, dropped an atomic bomb known as “Little Boy” on Hiroshima at 8:15 a.m. Monday morning, August 6, 1945. A huge mushroom cloud rose above the city. Survivors sitting down for breakfast or preparing to go to school recalled seeing a bright light and then being blown across the room. The immense heat of the blast melted stone and metal, and ignited fires throughout the city. One man later recalled watching his mother and brother burn to death as fire consumed their home. A female survivor, a child at the time of the attack, remembered finding the body of her mother, which had been reduced to ashes and fell apart as she touched it. Two-thirds of the buildings in Hiroshima were destroyed. Within an hour after the bombing, radioactive “black rain” began to fall. Approximately seventy thousand people died in the original blast. The same number would later die of radiation poisoning. When Japan refused to surrender, a second atomic bomb, named Fat Man, was dropped on Nagasaki on August 9, 1945. At least sixty thousand people were killed at Nagasaki. Kokura, the primary target, had been shrouded in clouds on that morning and thus had escaped destruction. It is impossible to say with certainty how many died in the two attacks; the heat of the bomb blasts incinerated or vaporized many of the victims ( Figure 27.21 ). The decision to use nuclear weapons is widely debated. Why exactly did the United States deploy an atomic bomb? The fierce resistance that the Japanese forces mounted during their early campaigns led American planners to believe that any invasion of the Japanese home islands would be exceedingly bloody. According to some estimates, as many as 250,000 Americans might die in securing a final victory. Such considerations undoubtedly influenced President Truman’s decision. Truman, who had not known about the Manhattan Project until Roosevelt’s death, also may not have realized how truly destructive it was. Indeed, some of the scientists who had built the bomb were surprised by its power. One question that has not been fully answered is why the United States dropped the second bomb on Nagasaki. As some scholars have noted, if Truman’s intention was to eliminate the need for a home island invasion, he could have given Japan more time to respond after bombing Hiroshima. He did not, however. The second bombing may have been intended to send a message to Stalin, who was becoming intransigent regarding postwar Europe. If it is indeed true that Truman had political motivations for using the bombs, then the destruction of Nagasaki might have been the first salvo of the Cold War with the Soviet Union. 
And yet, other historians have pointed out that the war had unleashed such massive atrocities against civilians by all belligerents—the United States included—that by the summer of 1945, the president no longer needed any particular reason to use his entire nuclear arsenal. THE WAR ENDS Whatever the true reasons for their use, the bombs had the desired effect of getting Japan to surrender. Even before the atomic attacks, the conventional bombings of Japan, the defeat of its forces in the field, and the entry of the Soviet Union into the war had convinced the Imperial Council that they had to end the war. They had hoped to negotiate the terms of the peace, but Emperor Hirohito intervened after the destruction of Nagasaki and accepted unconditional surrender. Although many Japanese shuddered at the humiliation of defeat, most were relieved that the war was over. Japan’s industries and cities had been thoroughly destroyed, and the immediate future looked bleak as they awaited their fate at the hands of the American occupation forces. The victors had yet another nation to rebuild and reform, but the war was finally over. Following the surrender, the Japanese colony of Korea was divided along the thirty-eighth parallel; the Soviet Union was given control of the northern half and the United States was given control of the southern portion. In Europe, as had been agreed upon at a meeting of the Allies in Potsdam in the summer of 1945, Germany was divided into four occupation zones that would be controlled by Britain, France, the Soviet Union, and the United States, respectively. The city of Berlin was similarly split into four. Plans were made to prosecute war criminals in both Japan and Germany. In October 1945, the United Nations was created. People around the world celebrated the end of the conflict, but America’s use of atomic bombs and disagreements between the United States and the Soviet Union at Yalta and Potsdam would contribute to ongoing instability in the postwar world.
psychology
Summary 8.1 How Memory Functions Memory is a system or process that stores what we learn for future use. Our memory has three basic functions: encoding, storing, and retrieving information. Encoding is the act of getting information into our memory system through automatic or effortful processing. Storage is retention of the information, and retrieval is the act of getting information out of storage and into conscious awareness through recall, recognition, and relearning. The idea that information is processed through three memory systems is called the Atkinson-Shiffrin (A-S) model of memory. First, environmental stimuli enter our sensory memory for a period of less than a second to a few seconds. Those stimuli that we notice and pay attention to then move into short-term memory (also called working memory). According to the A-S model, if we rehearse this information, then it moves into long-term memory for permanent storage. Other models like that of Baddeley and Hitch suggest there is more of a feedback loop between short-term memory and long-term memory. Long-term memory has a practically limitless storage capacity and is divided into implicit and explicit memory. Finally, retrieval is the act of getting memories out of storage and back into conscious awareness. This is done through recall, recognition, and relearning. 8.2 Parts of the Brain Involved with Memory Beginning with Karl Lashley, researchers and psychologists have been searching for the engram, which is the physical trace of memory. Lashley did not find the engram, but he did suggest that memories are distributed throughout the entire brain rather than stored in one specific area. Now we know that three brain areas do play significant roles in the processing and storage of different types of memories: cerebellum, hippocampus, and amygdala. The cerebellum’s job is to process procedural memories; the hippocampus is where new memories are encoded; the amygdala helps determine what memories to store, and it plays a part in determining where the memories are stored based on whether we have a strong or weak emotional response to the event. Strong emotional experiences can trigger the release of neurotransmitters, as well as hormones, which strengthen memory, so that memory for an emotional event is usually stronger than memory for a non-emotional event. This is shown by what is known as the flashbulb memory phenomenon: our ability to remember significant life events. However, our memory for life events (autobiographical memory) is not always accurate. 8.3 Problems with Memory All of us at times have felt dismayed, frustrated, and even embarrassed when our memories have failed us. Our memory is flexible and prone to many errors, which is why eyewitness testimony has been found to be largely unreliable. There are several reasons why forgetting occurs. In cases of brain trauma or disease, forgetting may be due to amnesia. Another reason we forget is due to encoding failure. We can’t remember something if we never stored it in our memory in the first place. Schacter presents seven memory errors that also contribute to forgetting. Sometimes, information is actually stored in our memory, but we cannot access it due to interference. Proactive interference happens when old information hinders the recall of newly learned information. Retroactive interference happens when information learned more recently hinders the recall of older information. 
8.4 Ways to Enhance Memory There are many ways to combat the inevitable failures of our memory system. Some common strategies that can be used in everyday situations include mnemonic devices, rehearsal, self-referencing, and adequate sleep. These same strategies also can help you to study more effectively.
Chapter Outline 8.1 How Memory Functions 8.2 Parts of the Brain Involved with Memory 8.3 Problems with Memory 8.4 Ways to Enhance Memory Introduction We may be top-notch learners, but if we don’t have a way to store what we’ve learned, what good is the knowledge we’ve gained? Take a few minutes to imagine what your day might be like if you could not remember anything you had learned. You would have to figure out how to get dressed. What clothing should you wear, and how do buttons and zippers work? You would need someone to teach you how to brush your teeth and tie your shoes. Who would you ask for help with these tasks, since you wouldn’t recognize the faces of these people in your house? Wait . . . is this even your house? Uh oh, your stomach begins to rumble and you feel hungry. You’d like something to eat, but you don’t know where the food is kept or even how to prepare it. Oh dear, this is getting confusing. Maybe it would be best just to go back to bed. A bed . . . what is a bed? We have an amazing capacity for memory, but how, exactly, do we process and store information? Are there different kinds of memory, and if so, what characterizes the different types? How, exactly, do we retrieve our memories? And why do we forget? This chapter will explore these questions as we learn about memory.
[ { "answer": { "ans_choice": 2, "ans_text": "working memory" }, "bloom": null, "hl_context": "<hl> Short-term memory ( STM ) is a temporary storage system that processes incoming sensory memory ; sometimes it is called working memory . <hl> Short-term memory takes information from sensory memory and sometimes connects that memory to something already in long-term memory . Short-term memory storage lasts about 20 seconds . George Miller ( 1956 ) , in his research on the capacity of memory , found that most people can retain about 7 items in STM . Some remember 5 , some 9 , so he called the capacity of STM 7 plus or minus 2 .", "hl_sentences": "Short-term memory ( STM ) is a temporary storage system that processes incoming sensory memory ; sometimes it is called working memory .", "question": { "cloze_format": "________ is another name for short-term memory.", "normal_format": "Which is another name for short-term memory?", "question_choices": [ "sensory memory", "episodic memory", "working memory", "implicit memory" ], "question_id": "fs-idm127896928", "question_text": "________ is another name for short-term memory." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "essentially limitless" }, "bloom": null, "hl_context": "<hl> Long-term memory ( LTM ) is the continuous storage of information . <hl> <hl> Unlike short-term memory , the storage capacity of LTM has no limits . <hl> It encompasses all the things you can remember that happened more than just a few minutes ago to all of the things that you can remember that happened days , weeks , and years ago . In keeping with the computer analogy , the information in your LTM would be like the information you have saved on the hard drive . It isn ’ t there on your desktop ( your short-term memory ) , but you can pull up this information when you want it , at least most of the time . Not all long-term memories are strong memories . Some memories can only be recalled through prompts . For example , you might easily recall a fact — “ What is the capital of the United States ? ” — or a procedure — “ How do you ride a bike ? ” — but you might struggle to recall the name of the restaurant you had dinner when you were on vacation in France last summer . A prompt , such as that the restaurant was named after its owner , who spoke to you about your shared interest in soccer , may help you recall the name of the restaurant .", "hl_sentences": "Long-term memory ( LTM ) is the continuous storage of information . Unlike short-term memory , the storage capacity of LTM has no limits .", "question": { "cloze_format": "The storage capacity of long-term memory is ________.", "normal_format": "What is the storage capacity of long-term memory?", "question_choices": [ "one or two bits of information", "seven bits, plus or minus two", "limited", "essentially limitless" ], "question_id": "fs-idm87630592", "question_text": "The storage capacity of long-term memory is ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "encoding, storage, and retrieval" }, "bloom": null, "hl_context": "Memory is an information processing system ; therefore , we often compare it to a computer . <hl> Memory is the set of processes used to encode , store , and retrieve information over different periods of time ( Figure 8.2 ) . 
<hl> Link to Learning", "hl_sentences": "Memory is the set of processes used to encode , store , and retrieve information over different periods of time ( Figure 8.2 ) .", "question": { "cloze_format": "The three functions of memory are ________.", "normal_format": "What are the three functions of memory?", "question_choices": [ "automatic processing, effortful processing, and storage", "encoding, processing, and storage", "automatic processing, effortful processing, and retrieval", "encoding, storage, and retrieval" ], "question_id": "fs-idm127493296", "question_text": "The three functions of memory are ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "engram" }, "bloom": null, "hl_context": "Are memories stored in just one part of the brain , or are they stored in many different parts of the brain ? Karl Lashley began exploring this problem , about 100 years ago , by making lesions in the brains of animals such as rats and monkeys . He was searching for evidence of the engram : the group of neurons that serve as the “ physical representation of memory ” ( Josselyn , 2010 ) . First , Lashley ( 1950 ) trained rats to find their way through a maze . Then , he used the tools available at the time — in this case a soldering iron — to create lesions in the rats ’ brains , specifically in the cerebral cortex . <hl> He did this because he was trying to erase the engram , or the original memory trace that the rats had of the maze . <hl>", "hl_sentences": "He did this because he was trying to erase the engram , or the original memory trace that the rats had of the maze .", "question": { "cloze_format": "This physical trace of memory is known as the ________.", "normal_format": "What is this physical trace of memory known as?", "question_choices": [ "engram", "Lashley effect", "Deese-Roediger-McDermott Paradigm", "flashbulb memory effect" ], "question_id": "fs-idp103162160", "question_text": "This physical trace of memory is known as the ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "flashbulb memory" }, "bloom": null, "hl_context": "<hl> A flashbulb memory is an exceptionally clear recollection of an important event ( Figure 8.10 ) . <hl> Where were you when you first heard about the 9/11 terrorist attacks ? Most likely you can remember where you were and what you were doing . In fact , a Pew Research Center ( 2011 ) survey found that for those Americans who were age 8 or older at the time of the event , 97 % can recall the moment they learned of this event , even a decade after it happened . It is also believed that strong emotions trigger the formation of strong memories , and weaker emotional experiences form weaker memories ; this is called arousal theory ( Christianson , 1992 ) . <hl> For example , strong emotional experiences can trigger the release of neurotransmitters , as well as hormones , which strengthen memory ; therefore , our memory for an emotional event is usually better than our memory for a non-emotional event . <hl> When humans and animals are stressed , the brain secretes more of the neurotransmitter glutamate , which helps them remember the stressful event ( McGaugh , 2003 ) . <hl> This is clearly evidenced by what is known as the flashbulb memory phenomenon . <hl>", "hl_sentences": "A flashbulb memory is an exceptionally clear recollection of an important event ( Figure 8.10 ) . 
For example , strong emotional experiences can trigger the release of neurotransmitters , as well as hormones , which strengthen memory ; therefore , our memory for an emotional event is usually better than our memory for a non-emotional event . This is clearly evidenced by what is known as the flashbulb memory phenomenon .", "question": { "cloze_format": "An exceptionally clear recollection of an important event is a (an) ________.", "normal_format": "What is an exceptionally clear recollection of an important event?", "question_choices": [ "engram", "arousal theory", "flashbulb memory", "equipotentiality hypothesis" ], "question_id": "fs-idp52737968", "question_text": "An exceptionally clear recollection of an important event is a (an) ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "egocentric bias" }, "bloom": null, "hl_context": "<hl> Egocentric bias involves enhancing our memories of the past ( Payne et al . , 2004 ) . <hl> <hl> Did you really score the winning goal in that big soccer match , or did you just assist ? <hl>", "hl_sentences": "Egocentric bias involves enhancing our memories of the past ( Payne et al . , 2004 ) . Did you really score the winning goal in that big soccer match , or did you just assist ?", "question": { "cloze_format": "________ is when our recollections of the past are done in a self-enhancing manner.", "normal_format": "When our recollections of the past are done in a self-enhancing manner, it is called what?", "question_choices": [ "stereotypical bias", "egocentric bias", "hindsight bias", "enhancement bias" ], "question_id": "fs-idm52141824", "question_text": "________ is when our recollections of the past are done in a self-enhancing manner." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "blocking" }, "bloom": null, "hl_context": "<hl> Blocking <hl> <hl> Forgetting <hl> <hl> Memory Errors Psychologist Daniel Schacter ( 2001 ) , a well-known memory researcher , offers seven ways our memories fail us . <hl> <hl> He calls them the seven sins of memory and categorizes them into three groups : forgetting , distortion , and intrusion ( Table 8.1 ) . <hl>", "hl_sentences": "Blocking Forgetting Memory Errors Psychologist Daniel Schacter ( 2001 ) , a well-known memory researcher , offers seven ways our memories fail us . He calls them the seven sins of memory and categorizes them into three groups : forgetting , distortion , and intrusion ( Table 8.1 ) .", "question": { "cloze_format": "Tip-of-the-tongue phenomenon is also known as ________.", "normal_format": "What is tip-of-the-tongue phenomenon also known as?", "question_choices": [ "persistence", "misattribution", "transience", "blocking" ], "question_id": "fs-idm161208496", "question_text": "Tip-of-the-tongue phenomenon is also known as ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "construction; reconstruction" }, "bloom": null, "hl_context": "<hl> The formulation of new memories is sometimes called construction , and the process of bringing up old memories is called reconstruction . <hl> Yet as we retrieve our memories , we also tend to alter and modify them . A memory pulled from long-term storage into short-term memory is flexible . New events can be added and we can change what we think we remember about past events , resulting in inaccuracies and distortions . 
People may not intend to distort facts , but it can happen in the process of retrieving old memories and combining them with new memories ( Roediger and DeSoto , in press ) .", "hl_sentences": "The formulation of new memories is sometimes called construction , and the process of bringing up old memories is called reconstruction .", "question": { "cloze_format": "The formulation of new memories is sometimes called ________, and the process of bringing up old memories is called ________.", "normal_format": "What is the formulation of new memories called, and what is the process of bringing up old memories called?", "question_choices": [ "construction; reconstruction", "reconstruction; construction", "production; reproduction", "reproduction; production" ], "question_id": "fs-idm45358064", "question_text": "The formulation of new memories is sometimes called ________, and the process of bringing up old memories is called ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "a traumatic life experience" }, "bloom": null, "hl_context": "Some other strategies that are used to improve memory include expressive writing and saying words aloud . <hl> Expressive writing helps boost your short-term memory , particularly if you write about a traumatic experience in your life . <hl> Masao Yogo and Shuji Fujihara ( 2008 ) had participants write for 20 - minute intervals several times per month . The participants were instructed to write about a traumatic experience , their best possible future selves , or a trivial topic . The researchers found that this simple writing task increased short-term memory capacity after five weeks , but only for the participants who wrote about traumatic experiences . Psychologists can ’ t explain why this writing task works , but it does .", "hl_sentences": "Expressive writing helps boost your short-term memory , particularly if you write about a traumatic experience in your life .", "question": { "cloze_format": "According to a study by Yogo and Fujihara (2008), if you want to improve your short-term memory, you should spend time writing about ________.", "normal_format": "According to a study by Yogo and Fujihara (2008), if you want to improve your short-term memory, you should spend time writing about which of the following?", "question_choices": [ "your best possible future self", "a traumatic life experience", "a trivial topic", "your grocery list" ], "question_id": "fs-idp2921264", "question_text": "According to a study by Yogo and Fujihara (2008), if you want to improve your short-term memory, you should spend time writing about ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "making the material you are trying to memorize personally meaningful to you" }, "bloom": null, "hl_context": "<hl> Apply the self-reference effect : As you go through the process of elaborative rehearsal , it would be even more beneficial to make the material you are trying to memorize personally meaningful to you . <hl> In other words , make use of the self-reference effect . Write notes in your own words . Write definitions from the text , and then rewrite them in your own words . Relate the material to something you have already learned for another class , or think how you can apply the concepts to your own life . When you do this , you are building a web of retrieval cues that will help you access the material when you want to remember it . 
Words that had been encoded semantically were better remembered than those encoded visually or acoustically . Semantic encoding involves a deeper level of processing than the shallower visual or acoustic encoding . Craik and Tulving concluded that we process verbal information best through semantic encoding , especially if we apply what is called the self-reference effect . <hl> The self-reference effect is the tendency for an individual to have better memory for information that relates to oneself in comparison to material that has less personal relevance ( Rogers , Kuiper & Kirker , 1977 ) . <hl> Could semantic encoding be beneficial to you as you attempt to memorize the concepts in this chapter ?", "hl_sentences": "Apply the self-reference effect : As you go through the process of elaborative rehearsal , it would be even more beneficial to make the material you are trying to memorize personally meaningful to you . The self-reference effect is the tendency for an individual to have better memory for information that relates to oneself in comparison to material that has less personal relevance ( Rogers , Kuiper & Kirker , 1977 ) .", "question": { "cloze_format": "The self-referencing effect refers to ________.", "normal_format": "What does the self-referencing effect refer to?", "question_choices": [ "making the material you are trying to memorize personally meaningful to you", "making a phrase of all the first letters of the words you are trying to memorize", "making a word formed by the first letter of each of the words you are trying to memorize", "saying words you want to remember out loud to yourself" ], "question_id": "fs-idp2973584", "question_text": "The self-referencing effect refers to ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "mnemonic devices" }, "bloom": null, "hl_context": "<hl> Make use of mnemonic devices : As you learned earlier in this chapter , mnemonic devices often help us to remember and recall information . <hl> There are different types of mnemonic devices , such as the acronym . An acronym is a word formed by the first letter of each of the words you want to remember . For example , even if you live near one , you might have difficulty recalling the names of all five Great Lakes . What if I told you to think of the word Homes ? HOMES is an acronym that represents Huron , Ontario , Michigan , Erie , and Superior : the five Great Lakes . Another type of mnemonic device is an acrostic : you make a phrase of all the first letters of the words . For example , if you are taking a math test and you are having difficulty remembering the order of operations , recalling the following sentence will help you : “ Please Excuse My Dear Aunt Sally , ” because the order of mathematical operations is Parentheses , Exponents , Multiplication , Division , Addition , Subtraction . There also are jingles , which are rhyming tunes that contain key words related to the concept , such as i before e , except after c . <hl> Mnemonic devices are memory aids that help us organize information for encoding ( Figure 8.19 ) . <hl> They are especially useful when we want to recall larger bits of information such as steps , stages , phases , and parts of a system ( Bellezza , 1981 ) . Brian needs to learn the order of the planets in the solar system , but he ’ s having a hard time remembering the correct order . His friend Kelly suggests a mnemonic device that can help him remember . Kelly tells Brian to simply remember the name Mr . VEM J . 
SUN , and he can easily recall the correct order of the planets : M ercury , V enus , E arth , M ars , J upiter , S aturn , U ranus , and N eptune . You might use a mnemonic device to help you remember someone ’ s name , a mathematical formula , or the order of mathematical operations . If you have ever watched the television show Modern Family , you might have seen Phil Dunphy explain how he remembers names :", "hl_sentences": "Make use of mnemonic devices : As you learned earlier in this chapter , mnemonic devices often help us to remember and recall information . Mnemonic devices are memory aids that help us organize information for encoding ( Figure 8.19 ) .", "question": { "cloze_format": "Memory aids that help organize information for encoding are ________.", "normal_format": "What are memory aids that help organize information for encoding?", "question_choices": [ "mnemonic devices", "memory-enhancing strategies", "elaborative rehearsal", "effortful processing" ], "question_id": "fs-idp38792848", "question_text": "Memory aids that help organize information for encoding are ________." }, "references_are_paraphrase": null } ]
Chapter 8: Memory
8.1 How Memory Functions

Learning Objectives

By the end of this section, you will be able to:
- Discuss the three basic functions of memory
- Describe the three stages of memory storage
- Describe and distinguish between procedural and declarative memory and semantic and episodic memory

Memory is an information processing system; therefore, we often compare it to a computer. Memory is the set of processes used to encode, store, and retrieve information over different periods of time (Figure 8.2).

Link to Learning

Watch this video for more information on some unexpected facts about memory.

Encoding

We get information into our brains through a process called encoding, which is the input of information into the memory system. Once we receive sensory information from the environment, our brains label or code it. We organize the information with other similar information and connect new concepts to existing concepts. Encoding information occurs through automatic processing and effortful processing.

If someone asks you what you ate for lunch today, more than likely you could recall this information quite easily. This is known as automatic processing, or the encoding of details like time, space, frequency, and the meaning of words. Automatic processing is usually done without any conscious awareness. Recalling the last time you studied for a test is another example of automatic processing. But what about the actual test material you studied? It probably required a lot of work and attention on your part in order to encode that information. This is known as effortful processing (Figure 8.3).

What are the most effective ways to ensure that important memories are well encoded? Even a simple sentence is easier to recall when it is meaningful (Anderson, 1984). Read the following sentences (Bransford & McCarrell, 1974), then look away and count backwards from 30 by threes to zero, and then try to write down the sentences (no peeking back at this page!):

1. The notes were sour because the seams split.
2. The voyage wasn't delayed because the bottle shattered.
3. The haystack was important because the cloth ripped.

How well did you do? By themselves, the statements that you wrote down were most likely confusing and difficult for you to recall. Now, try writing them again, using the following prompts: bagpipe, ship christening, and parachutist. Next count backwards from 40 by fours, then check yourself to see how well you recalled the sentences this time. You can see that the sentences are now much more memorable because each of the sentences was placed in context. Material is far better encoded when you make it meaningful.

There are three types of encoding. The encoding of words and their meaning is known as semantic encoding. It was first demonstrated by William Bousfield (1935) in an experiment in which he asked people to memorize words. The 60 words were actually divided into 4 categories of meaning, although the participants did not know this because the words were randomly presented. When they were asked to remember the words, they tended to recall them in categories, showing that they paid attention to the meanings of the words as they learned them.

Visual encoding is the encoding of images, and acoustic encoding is the encoding of sounds, words in particular. To see how visual encoding works, read over this list of words: car, level, dog, truth, book, value. If you were asked later to recall the words from this list, which ones do you think you'd most likely remember?
You would probably have an easier time recalling the words car, dog, and book, and a more difficult time recalling the words level, truth, and value. Why is this? Because you can recall images (mental pictures) more easily than words alone. When you read the words car, dog, and book, you created images of these things in your mind. These are concrete, high-imagery words. On the other hand, abstract words like level, truth, and value are low-imagery words. High-imagery words are encoded both visually and semantically (Paivio, 1986), thus building a stronger memory.

Now let's turn our attention to acoustic encoding. You are driving in your car and a song comes on the radio that you haven't heard in at least 10 years, but you sing along, recalling every word. In the United States, children often learn the alphabet through song, and they learn the number of days in each month through rhyme: "Thirty days hath September, / April, June, and November; / All the rest have thirty-one, / Save February, with twenty-eight days clear, / And twenty-nine each leap year." These lessons are easy to remember because of acoustic encoding. We encode the sounds the words make. This is one of the reasons why much of what we teach young children is done through song, rhyme, and rhythm.

Which of the three types of encoding do you think would give you the best memory of verbal information? Some years ago, psychologists Fergus Craik and Endel Tulving (1975) conducted a series of experiments to find out. Participants were given words along with questions about them. The questions required the participants to process the words at one of the three levels. The visual processing questions included such things as asking the participants about the font of the letters. The acoustic processing questions asked the participants about the sound or rhyming of the words, and the semantic processing questions asked the participants about the meaning of the words. After participants were presented with the words and questions, they were given an unexpected recall or recognition task.

Words that had been encoded semantically were better remembered than those encoded visually or acoustically. Semantic encoding involves a deeper level of processing than the shallower visual or acoustic encoding. Craik and Tulving concluded that we process verbal information best through semantic encoding, especially if we apply what is called the self-reference effect. The self-reference effect is the tendency for an individual to have better memory for information that relates to oneself in comparison to material that has less personal relevance (Rogers, Kuiper & Kirker, 1977). Could semantic encoding be beneficial to you as you attempt to memorize the concepts in this chapter?

Storage

Once the information has been encoded, we have to somehow retain it. Our brains take the encoded information and place it in storage. Storage is the creation of a permanent record of information. In order for a memory to go into storage (i.e., long-term memory), it has to pass through three distinct stages: sensory memory, short-term memory, and finally long-term memory. These stages were first proposed by Richard Atkinson and Richard Shiffrin (1968). Their model of human memory (Figure 8.4), called Atkinson-Shiffrin (A-S), is based on the belief that we process memories in the same way that a computer processes information. But A-S is just one model of memory.
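Since the chapter leans on the computer analogy, it can help to see the Atkinson-Shiffrin stages written out as a toy program. The sketch below is only an illustration of the flow just described, not the model itself: the 7-item capacity and the fate of unattended sensory input come from this section, while the class name, method names, and the attended flag are assumptions made for the example.

```python
STM_CAPACITY = 7  # Miller's 7 plus or minus 2 (see the next subsection)


class AtkinsonShiffrin:
    """Toy sketch of the three-stage memory model; names are illustrative."""

    def __init__(self):
        self.short_term = []    # brief, limited storage (roughly 20 seconds)
        self.long_term = set()  # essentially limitless storage

    def sense(self, stimulus, attended=False):
        """Sensory memory: input we do not attend to is simply lost."""
        if attended:
            self.short_term.append(stimulus)
            if len(self.short_term) > STM_CAPACITY:
                self.short_term.pop(0)  # oldest item is displaced

    def rehearse(self, item):
        """Rehearsal consolidates an STM item into long-term memory."""
        if item in self.short_term:
            self.long_term.add(item)

    def retrieve(self, item):
        """Retrieval: pull stored information back into awareness."""
        return item in self.long_term


memory = AtkinsonShiffrin()
memory.sense("professor's outfit last class")           # unattended: discarded
memory.sense("date of the final exam", attended=True)   # enters STM
memory.rehearse("date of the final exam")               # consolidated into LTM
print(memory.retrieve("date of the final exam"))        # True
print(memory.retrieve("professor's outfit last class")) # False
```

Note how the sketch mirrors the computer analogy: the desktop (short_term) is small and volatile, while the hard drive (long_term) keeps whatever has been consolidated onto it.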
Others, such as Baddeley and Hitch (1974), have proposed a model where short-term memory itself has different forms. In this model, storing memories in short-term memory is like opening different files on a computer and adding information. The type of short-term memory (or computer file) depends on the type of information received. There are memories in visual-spatial form, as well as memories of spoken or written material, and they are stored in three short-term systems: a visuospatial sketchpad, an episodic buffer, and a phonological loop. According to Baddeley and Hitch, a central executive part of memory supervises or controls the flow of information to and from the three short-term systems.

Sensory Memory

In the Atkinson-Shiffrin model, stimuli from the environment are processed first in sensory memory: storage of brief sensory events, such as sights, sounds, and tastes. It is very brief storage—up to a couple of seconds. We are constantly bombarded with sensory information. We cannot absorb all of it, or even most of it. And most of it has no impact on our lives. For example, what was your professor wearing the last class period? As long as the professor was dressed appropriately, it does not really matter what she was wearing. Sensory information about sights, sounds, smells, and even textures, which we do not view as valuable information, we discard. If we view something as valuable, the information will move into our short-term memory system.

One study of sensory memory researched the significance of valuable information on short-term memory storage. J. R. Stroop discovered a memory phenomenon in the 1930s: you will name a color more easily if it appears printed in that color, which is called the Stroop effect. In other words, the word "red" will be named more quickly, regardless of the color the word appears in, than any word that is colored red. Try an experiment: name the colors of the words you are given in Figure 8.5. Do not read the words, but say the color the word is printed in. For example, upon seeing the word "yellow" in green print, you should say "green," not "yellow." This experiment is fun, but it's not as easy as it seems.

Short-Term Memory

Short-term memory (STM) is a temporary storage system that processes incoming sensory memory; sometimes it is called working memory. Short-term memory takes information from sensory memory and sometimes connects that memory to something already in long-term memory. Short-term memory storage lasts about 20 seconds. George Miller (1956), in his research on the capacity of memory, found that most people can retain about 7 items in STM. Some remember 5, some 9, so he called the capacity of STM 7 plus or minus 2.

Think of short-term memory as the information you have displayed on your computer screen—a document, a spreadsheet, or a web page. Then, information in short-term memory goes to long-term memory (you save it to your hard drive), or it is discarded (you delete a document or close a web browser). In this step, rehearsal, the conscious repetition of information to be remembered, moves information from STM into long-term memory; this process is called memory consolidation.

You may find yourself asking, "How much information can our memory handle at once?" To explore the capacity and duration of your short-term memory, have a partner read the strings of random numbers (Figure 8.6) out loud to you, beginning each string by saying, "Ready?" and ending each by saying, "Recall," at which point you should try to write down the string of numbers from memory.
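Figure 8.6 is not reproduced here, but you can generate comparable test strings yourself. The following is a minimal sketch; starting at four digits and stopping at ten is an assumption chosen to bracket Miller's 7 plus or minus 2, not something the chapter specifies.

```python
import random

def digit_span_strings(shortest=4, longest=10):
    """Yield one random digit string per length, shortest to longest."""
    for length in range(shortest, longest + 1):
        yield "".join(str(random.randint(0, 9)) for _ in range(length))

# A partner reads each line aloud, then says "Recall" while you write
# the digits down from memory.
for digits in digit_span_strings():
    print("Ready?", " ".join(digits))
```

Each run produces fresh random strings, so you can retest yourself without having memorized a fixed figure.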
Note the longest string at which you got the series correct. For most people, this will be close to 7, Miller's famous 7 plus or minus 2. Recall is somewhat better for random numbers than for random letters (Jacobs, 1887), and also often slightly better for information we hear (acoustic encoding) rather than see (visual encoding) (Anderson, 1969).

Long-term Memory

Long-term memory (LTM) is the continuous storage of information. Unlike short-term memory, the storage capacity of LTM has no limits. It encompasses all the things you can remember that happened more than just a few minutes ago to all of the things that you can remember that happened days, weeks, and years ago. In keeping with the computer analogy, the information in your LTM would be like the information you have saved on the hard drive. It isn't there on your desktop (your short-term memory), but you can pull up this information when you want it, at least most of the time. Not all long-term memories are strong memories. Some memories can only be recalled through prompts. For example, you might easily recall a fact—"What is the capital of the United States?"—or a procedure—"How do you ride a bike?"—but you might struggle to recall the name of the restaurant where you had dinner when you were on vacation in France last summer. A prompt, such as that the restaurant was named after its owner, who spoke to you about your shared interest in soccer, may help you recall the name of the restaurant.

Long-term memory is divided into two types: explicit and implicit (Figure 8.7). Understanding the different types is important because a person's age or particular types of brain trauma or disorders can leave certain types of LTM intact while having disastrous consequences for other types. Explicit memories are those we consciously try to remember and recall. For example, if you are studying for your chemistry exam, the material you are learning will be part of your explicit memory. (Note: Sometimes, but not always, the terms explicit memory and declarative memory are used interchangeably.)

Implicit memories are memories that are not part of our consciousness. They are memories formed from behaviors. Implicit memory is also called non-declarative memory. Procedural memory is a type of implicit memory: it stores information about how to do things. It is the memory for skilled actions, such as how to brush your teeth, how to drive a car, or how to swim the crawl (freestyle) stroke. If you are learning how to swim freestyle, you practice the stroke: how to move your arms, how to turn your head to alternate breathing from side to side, and how to kick your legs. You would practice this many times until you become good at it. Once you learn how to swim freestyle and your body knows how to move through the water, you will never forget how to swim freestyle, even if you do not swim for a couple of decades. Similarly, if you present an accomplished guitarist with a guitar, even if he has not played in a long time, he will still be able to play quite well.

Declarative memory has to do with the storage of facts and events we personally experienced. Explicit (declarative) memory has two parts: semantic memory and episodic memory. Semantic means having to do with language and knowledge about language. An example would be the question "what does argumentative mean?" Stored in our semantic memory is knowledge about words, concepts, and language-based knowledge and facts.
For example, answers to the following questions are stored in your semantic memory: Who was the first President of the United States? What is democracy? What is the longest river in the world?

Episodic memory is information about events we have personally experienced. The concept of episodic memory was first proposed about 40 years ago (Tulving, 1972). Since then, Tulving and others have looked at scientific evidence and reformulated the theory. Currently, scientists believe that episodic memory is memory about happenings in particular places at particular times, the what, where, and when of an event (Tulving, 2002). It involves recollection of visual imagery as well as the feeling of familiarity (Hassabis & Maguire, 2007).

Everyday Connection

Can You Remember Everything You Ever Did or Said?

Episodic memories are also called autobiographical memories. Let's quickly test your autobiographical memory. What were you wearing exactly five years ago today? What did you eat for lunch on April 10, 2009? You probably find it difficult, if not impossible, to answer these questions. Can you remember every event you have experienced over the course of your life—meals, conversations, clothing choices, weather conditions, and so on? Most likely none of us could even come close to answering these questions; however, American actress Marilu Henner, best known for the television show Taxi, can remember. She has an amazing and highly superior autobiographical memory (Figure 8.8). Very few people can recall events in this way; right now, only 12 known individuals have this ability, and only a few have been studied (Parker, Cahill & McGaugh 2006). And although hyperthymesia, as this condition is called, normally appears in adolescence, two children in the United States appear to have memories from well before their tenth birthdays.

Link to Learning

Watch these Part 1 and Part 2 video clips on superior autobiographical memory from the television news show 60 Minutes.

Retrieval

So you have worked hard to encode (via effortful processing) and store some important information for your upcoming final exam. How do you get that information back out of storage when you need it? The act of getting information out of memory storage and back into conscious awareness is known as retrieval. This would be similar to finding and opening a paper you had previously saved on your computer's hard drive. Now it's back on your desktop, and you can work with it again. Our ability to retrieve information from long-term memory is vital to our everyday functioning. You must be able to retrieve information from memory in order to do everything from knowing how to brush your hair and teeth, to driving to work, to knowing how to perform your job once you get there.

There are three ways you can retrieve information out of your long-term memory storage system: recall, recognition, and relearning. Recall is what we most often think about when we talk about memory retrieval: it means you can access information without cues. For example, you would use recall for an essay test. Recognition happens when you identify information that you have previously learned after encountering it again. It involves a process of comparison. When you take a multiple-choice test, you are relying on recognition to help you choose the correct answer. Here is another example. Let's say you graduated from high school 10 years ago, and you have returned to your hometown for your 10-year reunion.
You may not be able to recall all of your classmates, but you recognize many of them based on their yearbook photos. The third form of retrieval is relearning, and it's just what it sounds like. It involves learning information that you previously learned. Whitney took Spanish in high school, but after high school she did not have the opportunity to speak Spanish. Whitney is now 31, and her company has offered her an opportunity to work in their Mexico City office. In order to prepare herself, she enrolls in a Spanish course at the local community center. She's surprised at how quickly she's able to pick up the language after not speaking it for 13 years; this is an example of relearning.

8.2 Parts of the Brain Involved with Memory

Learning Objectives

By the end of this section, you will be able to:
- Explain the brain functions involved in memory
- Recognize the roles of the hippocampus, amygdala, and cerebellum

Are memories stored in just one part of the brain, or are they stored in many different parts of the brain? Karl Lashley began exploring this problem, about 100 years ago, by making lesions in the brains of animals such as rats and monkeys. He was searching for evidence of the engram: the group of neurons that serve as the "physical representation of memory" (Josselyn, 2010). First, Lashley (1950) trained rats to find their way through a maze. Then, he used the tools available at the time—in this case a soldering iron—to create lesions in the rats' brains, specifically in the cerebral cortex. He did this because he was trying to erase the engram, or the original memory trace that the rats had of the maze.

Lashley did not find evidence of the engram, and the rats were still able to find their way through the maze, regardless of the size or location of the lesion. Based on his creation of lesions and the animals' reaction, he formulated the equipotentiality hypothesis: if part of one area of the brain involved in memory is damaged, another part of the same area can take over that memory function (Lashley, 1950). Although Lashley's early work did not confirm the existence of the engram, modern psychologists are making progress locating it. Eric Kandel, for example, spent decades working on the synapse, the basic structure of the brain, and its role in controlling the flow of information through neural circuits needed to store memories (Mayford, Siegelbaum, & Kandel, 2012).

Many scientists believe that the entire brain is involved with memory. However, since Lashley's research, other scientists have been able to look more closely at the brain and memory. They have argued that memory is located in specific parts of the brain, and specific neurons can be recognized for their involvement in forming memories. The main parts of the brain involved with memory are the amygdala, the hippocampus, the cerebellum, and the prefrontal cortex (Figure 8.9).

The Amygdala

First, let's look at the role of the amygdala in memory formation. The main job of the amygdala is to regulate emotions, such as fear and aggression (Figure 8.9). The amygdala plays a part in how memories are stored because storage is influenced by stress hormones. For example, one researcher experimented with rats and the fear response (Josselyn, 2010). Using Pavlovian conditioning, a neutral tone was paired with a foot shock to the rats. This produced a fear memory in the rats. After being conditioned, each time they heard the tone, they would freeze (a defense response in rats), indicating a memory for the impending shock.
Then the researchers induced cell death in neurons in the lateral amygdala, which is the specific area of the brain responsible for fear memories. They found the fear memory faded (became extinct). Because of its role in processing emotional information, the amygdala is also involved in memory consolidation: the process of transferring new learning into long-term memory. The amygdala seems to facilitate encoding memories at a deeper level when the event is emotionally arousing.

Link to Learning

In this TED Talk called "A Mouse. A Laser Beam. A Manipulated Memory," Steve Ramirez and Xu Liu from MIT talk about using laser beams to manipulate fear memory in rats. Find out why their work caused a media frenzy once it was published in Science.

The Hippocampus

Another group of researchers also experimented with rats to learn how the hippocampus functions in memory processing (Figure 8.9). They created lesions in the hippocampi of the rats, and found that the rats demonstrated memory impairment on various tasks, such as object recognition and maze running. They concluded that the hippocampus is involved in memory, specifically normal recognition memory as well as spatial memory (when the memory tasks are like recall tests) (Clark, Zola, & Squire, 2000). Another job of the hippocampus is to project information to cortical regions that give memories meaning and connect them with other connected memories. It also plays a part in memory consolidation: the process of transferring new learning into long-term memory.

Injury to this area leaves us unable to process new declarative memories. One famous patient, known for years only as H. M., had both his left and right temporal lobes (hippocampi) removed in an attempt to help control the seizures he had been suffering from for years (Corkin, Amaral, González, Johnson, & Hyman, 1997). As a result, his declarative memory was significantly affected, and he could not form new semantic knowledge. He lost the ability to form new memories, yet he could still remember information and events that had occurred prior to the surgery.

Link to Learning

For a closer look at how memory works, view this video on quirks of memory, and read more in this article about patient HM.

The Cerebellum and Prefrontal Cortex

Although the hippocampus seems to be more of a processing area for explicit memories, you could still lose it and be able to create implicit memories (procedural memory, motor learning, and classical conditioning), thanks to your cerebellum (Figure 8.9). For example, one classical conditioning experiment is to accustom subjects to blink when they are given a puff of air. When researchers damaged the cerebellums of rabbits, they discovered that the rabbits were not able to learn the conditioned eye-blink response (Steinmetz, 1999; Green & Woodruff-Pak, 2000).

Other researchers have used brain scans, including positron emission tomography (PET) scans, to learn how people process and retain information. From these studies, it seems the prefrontal cortex is involved. In one study, participants had to complete two different tasks: either looking for the letter a in words (considered a perceptual task) or categorizing a noun as either living or non-living (considered a semantic task) (Kapur et al., 1994). Participants were then asked which words they had previously seen. Recall was much better for the semantic task than for the perceptual task. According to PET scans, there was much more activation in the left inferior prefrontal cortex in the semantic task.
In another study, encoding was associated with left frontal activity, while retrieval of information was associated with the right frontal region (Craik et al., 1999).

Neurotransmitters

There also appear to be specific neurotransmitters involved with the process of memory, such as epinephrine, dopamine, serotonin, glutamate, and acetylcholine (Myhrer, 2003). There continues to be discussion and debate among researchers as to which neurotransmitter plays which specific role (Blockland, 1996). Although we don't yet know which role each neurotransmitter plays in memory, we do know that communication among neurons via neurotransmitters is critical for developing new memories. Repeated activity by neurons leads to increased neurotransmitters in the synapses and to more, and more efficient, synaptic connections. This is how memory consolidation occurs.

It is also believed that strong emotions trigger the formation of strong memories, and weaker emotional experiences form weaker memories; this is called arousal theory (Christianson, 1992). For example, strong emotional experiences can trigger the release of neurotransmitters, as well as hormones, which strengthen memory; therefore, our memory for an emotional event is usually better than our memory for a non-emotional event. When humans and animals are stressed, the brain secretes more of the neurotransmitter glutamate, which helps them remember the stressful event (McGaugh, 2003). This is clearly evidenced by what is known as the flashbulb memory phenomenon. A flashbulb memory is an exceptionally clear recollection of an important event (Figure 8.10). Where were you when you first heard about the 9/11 terrorist attacks? Most likely you can remember where you were and what you were doing. In fact, a Pew Research Center (2011) survey found that for those Americans who were age 8 or older at the time of the event, 97% can recall the moment they learned of this event, even a decade after it happened.

Dig Deeper

Inaccurate and False Memories

Even flashbulb memories can have decreased accuracy with the passage of time, even with very important events. For example, on at least three occasions, when asked how he heard about the terrorist attacks of 9/11, President George W. Bush responded inaccurately. In January 2002, less than 4 months after the attacks, the then sitting President Bush was asked how he heard about the attacks. He responded:

I was sitting there, and my Chief of Staff—well, first of all, when we walked into the classroom, I had seen this plane fly into the first building. There was a TV set on. And you know, I thought it was pilot error and I was amazed that anybody could make such a terrible mistake. (Greenberg, 2004, p. 2)

Contrary to what President Bush recalled, no one saw the first plane hit, except people on the ground near the twin towers. The first plane was not videotaped because it was a normal Tuesday morning in New York City, until the first plane hit. Some people attributed Bush's wrong recall of the event to conspiracy theories. However, there is a much more benign explanation: human memory, even flashbulb memories, can be frail. In fact, memory can be so frail that we can convince a person an event happened to them, even when it did not. In studies, research participants will recall hearing a word, even though they never heard the word. For example, participants were given a list of 15 sleep-related words, but the word "sleep" was not on the list.
Participants recalled hearing the word "sleep" even though they did not actually hear it (Roediger & McDermott, 2000). The researchers who discovered this named the theory after themselves and a fellow researcher, calling it the Deese-Roediger-McDermott paradigm.

8.3 Problems with Memory

Learning Objectives

By the end of this section, you will be able to:
- Compare and contrast the two types of amnesia
- Discuss the unreliability of eyewitness testimony
- Discuss encoding failure
- Discuss the various memory errors
- Compare and contrast the two types of interference

You may pride yourself on your amazing ability to remember the birthdates and ages of all of your friends and family members, or you may be able to recall vivid details of your 5th birthday party at Chuck E. Cheese's. However, all of us have at times felt frustrated, and even embarrassed, when our memories have failed us. There are several reasons why this happens.

Amnesia

Amnesia is the loss of long-term memory that occurs as the result of disease, physical trauma, or psychological trauma. Psychologist Tulving (2002) and his colleagues at the University of Toronto studied K. C. for years. K. C. suffered a traumatic head injury in a motorcycle accident and then had severe amnesia. Tulving writes,

the outstanding fact about K.C.'s mental make-up is his utter inability to remember any events, circumstances, or situations from his own life. His episodic amnesia covers his whole life, from birth to the present. The only exception is the experiences that, at any time, he has had in the last minute or two. (Tulving, 2002, p. 14)

Anterograde Amnesia

There are two common types of amnesia: anterograde amnesia and retrograde amnesia (Figure 8.11). Anterograde amnesia is commonly caused by brain trauma, such as a blow to the head. With anterograde amnesia, you cannot remember new information, although you can remember information and events that happened prior to your injury. The hippocampus is usually affected (McLeod, 2011). This suggests that damage to the brain has resulted in the inability to transfer information from short-term to long-term memory; that is, the inability to consolidate memories. Many people with this form of amnesia are unable to form new episodic or semantic memories, but are still able to form new procedural memories (Bayley & Squire, 2002). This was true of H. M., who was discussed earlier. The brain damage caused by his surgery resulted in anterograde amnesia. H. M. would read the same magazine over and over, having no memory of ever reading it—it was always new to him. He also could not remember people he had met after his surgery. If you were introduced to H. M. and then you left the room for a few minutes, he would not know you upon your return and would introduce himself to you again. However, when presented the same puzzle several days in a row, although he did not remember having seen the puzzle before, his speed at solving it became faster each day (because of relearning) (Corkin, 1965, 1968).

Retrograde Amnesia

Retrograde amnesia is loss of memory for events that occurred prior to the trauma. People with retrograde amnesia cannot remember some or even all of their past. They have difficulty remembering episodic memories. What if you woke up in the hospital one day and there were people surrounding your bed claiming to be your spouse, your children, and your parents? The trouble is you don't recognize any of them. You were in a car accident, suffered a head injury, and now have retrograde amnesia.
You don't remember anything about your life prior to waking up in the hospital. This may sound like the stuff of Hollywood movies, and Hollywood has been fascinated with the amnesia plot for nearly a century, going all the way back to the film Garden of Lies from 1915 to more recent movies such as the Jason Bourne spy thrillers. However, for real-life sufferers of retrograde amnesia, like former NFL football player Scott Bolzan, the story is not a Hollywood movie. Bolzan fell, hit his head, and deleted 46 years of his life in an instant. He is now living with one of the most extreme cases of retrograde amnesia on record.

Link to Learning

View the video story profiling Scott Bolzan's amnesia and his attempts to get his life back.

Memory Construction and Reconstruction

The formulation of new memories is sometimes called construction, and the process of bringing up old memories is called reconstruction. Yet as we retrieve our memories, we also tend to alter and modify them. A memory pulled from long-term storage into short-term memory is flexible. New events can be added and we can change what we think we remember about past events, resulting in inaccuracies and distortions. People may not intend to distort facts, but it can happen in the process of retrieving old memories and combining them with new memories (Roediger and DeSoto, in press).

Suggestibility

When someone witnesses a crime, that person's memory of the details of the crime is very important in catching the suspect. Because memory is so fragile, witnesses can be easily (and often accidentally) misled due to the problem of suggestibility. Suggestibility describes the effects of misinformation from external sources that leads to the creation of false memories.

In the fall of 2002, a sniper in the DC area shot people as they stood at gas stations, left a Home Depot, or walked down the street. These attacks went on in a variety of places for over three weeks and resulted in the deaths of ten people. During this time, as you can imagine, people were terrified to leave their homes, go shopping, or even walk through their neighborhoods. Police officers and the FBI worked frantically to solve the crimes, and a tip hotline was set up. Law enforcement received over 140,000 tips, which resulted in approximately 35,000 possible suspects (Newseum, n.d.).

Most of the tips were dead ends, until a white van was spotted at the site of one of the shootings. The police chief went on national television with a picture of the white van. After the news conference, several other eyewitnesses called to say that they too had seen a white van fleeing from the scene of the shooting. At the time, there were more than 70,000 white vans in the area. Police officers, as well as the general public, focused almost exclusively on white vans because they believed the eyewitnesses. Other tips were ignored. When the suspects were finally caught, they were driving a blue sedan. As illustrated by this example, we are vulnerable to the power of suggestion, simply based on something we see on the news. Or we can claim to remember something that in fact is only a suggestion someone made. It is the suggestion that is the cause of the false memory.

Eyewitness Misidentification

Even though memory and the process of reconstruction can be fragile, police officers, prosecutors, and the courts often rely on eyewitness identification and testimony in the prosecution of criminals. However, faulty eyewitness identification and testimony can lead to wrongful convictions (Figure 8.12).
How does this happen? In 1984, Jennifer Thompson, then a 22-year-old college student in North Carolina, was brutally raped at knifepoint. As she was being raped, she tried to memorize every detail of her rapist's face and physical characteristics, vowing that if she survived, she would help get him convicted. After the police were contacted, a composite sketch was made of the suspect, and Jennifer was shown six photos. She chose two, one of which was of Ronald Cotton. After looking at the photos for 4–5 minutes, she said, "Yeah. This is the one," and then she added, "I think this is the guy." When the detective pressed her, asking, "You're sure? Positive?" she said that it was him. Then she asked the detective if she did OK, and he reinforced her choice by telling her she did great. These kinds of unintended cues and suggestions by police officers can lead witnesses to identify the wrong suspect. The district attorney was concerned about her lack of certainty the first time, so she viewed a lineup of seven men. She said she was trying to decide between numbers 4 and 5, finally deciding that Cotton, number 5, "Looks most like him." He was 22 years old.

By the time the trial began, Jennifer Thompson had absolutely no doubt that she was raped by Ronald Cotton. She testified at the court hearing, and her testimony was compelling enough that it helped convict him. How did she go from, "I think it's the guy" and it "Looks most like him," to such certainty? Gary Wells and Deah Quinlivan (2009) assert it's suggestive police identification procedures, such as stacking lineups to make the defendant stand out, telling the witness which person to identify, and confirming witnesses' choices by telling them "Good choice," or "You picked the guy."

After Cotton was convicted of the rape, he was sent to prison for life plus 50 years. After 4 years in prison, he was able to get a new trial. Jennifer Thompson once again testified against him. This time Ronald Cotton was given two life sentences. After serving 11 years in prison, DNA evidence finally demonstrated that Ronald Cotton was innocent and had served over a decade in prison for a crime he did not commit.

Link to Learning

To learn more about Ronald Cotton and the fallibility of memory, watch these excellent Part 1 and Part 2 videos by 60 Minutes.

Ronald Cotton's story, unfortunately, is not unique. There are also people who were convicted and placed on death row, who were later exonerated. The Innocence Project is a non-profit group that works to exonerate falsely convicted people, including those convicted by eyewitness testimony. To learn more, you can visit http://www.innocenceproject.org.

Dig Deeper

Preserving Eyewitness Memory: The Elizabeth Smart Case

Contrast the Cotton case with what happened in the Elizabeth Smart case. When Elizabeth was 14 years old and fast asleep in her bed at home, she was abducted at knifepoint. Her nine-year-old sister, Mary Katherine, was sleeping in the same bed and watched, terrified, as her beloved older sister was abducted. Mary Katherine was the sole eyewitness to this crime and was very fearful. In the coming weeks, the Salt Lake City police and the FBI proceeded with caution with Mary Katherine. They did not want to implant any false memories or mislead her in any way. They did not show her police line-ups or push her to do a composite sketch of the abductor. They knew if they corrupted her memory, Elizabeth might never be found.
For several months, there was little or no progress on the case. Then, about 4 months after the kidnapping, Mary Katherine first recalled that she had heard the abductor's voice prior to that night (he had worked one time as a handyman at the family's home) and then she was able to name the person whose voice it was. The family contacted the press and others recognized him—after a total of nine months, the suspect was caught and Elizabeth Smart was returned to her family.

The Misinformation Effect

Cognitive psychologist Elizabeth Loftus has conducted extensive research on memory. She has studied false memories as well as recovered memories of childhood sexual abuse. Loftus also developed the misinformation effect paradigm, which holds that after exposure to incorrect information, a person may misremember the original event.

According to Loftus, an eyewitness's memory of an event is very flexible due to the misinformation effect. To test this theory, Loftus and John Palmer (1974) asked 45 U.S. college students to estimate the speed of cars using different forms of questions (Figure 8.13). The participants were shown films of car accidents and were asked to play the role of the eyewitness and describe what happened. They were asked, "About how fast were the cars going when they (smashed, collided, bumped, hit, contacted) each other?" The participants estimated the speed of the cars based on the verb used. Participants who heard the word "smashed" estimated that the cars were traveling at a much higher speed than participants who heard the word "contacted." The implied information about speed, based on the verb they heard, had an effect on the participants' memory of the accident. In a follow-up one week later, participants were asked if they saw any broken glass (none was shown in the accident pictures). Participants who had been in the "smashed" group were more than twice as likely to indicate that they did remember seeing glass. Loftus and Palmer demonstrated that a leading question encouraged participants not only to remember the cars were going faster, but also to falsely remember that they saw broken glass.

Controversies over Repressed and Recovered Memories

Other researchers have described how whole events, not just words, can be falsely recalled, even when they did not happen. The idea that memories of traumatic events could be repressed has been a theme in the field of psychology, beginning with Sigmund Freud, and the controversy surrounding the idea continues today. Recall of false autobiographical memories is called false memory syndrome. This syndrome has received a lot of publicity, particularly as it relates to memories of events that do not have independent witnesses—often the only witnesses to the abuse are the perpetrator and the victim (e.g., sexual abuse).

On one side of the debate are those who have recovered memories of childhood abuse years after it occurred. Researchers on this side argue that some children's experiences have been so traumatizing and distressing that they must lock those memories away in order to lead some semblance of a normal life. They believe that repressed memories can be locked away for decades and later recalled intact through hypnosis and guided imagery techniques (Devilly, 2007). Research suggests that having no memory of childhood sexual abuse is quite common in adults.
For instance, one large-scale study conducted by John Briere and Jon Conte (1993) revealed that 59% of 450 men and women who were receiving treatment for sexual abuse that had occurred before age 18 had forgotten their experiences. Ross Cheit (2007) suggested that repressing these memories created psychological distress in adulthood. The Recovered Memory Project was created so that victims of childhood sexual abuse can recall these memories and allow the healing process to begin (Cheit, 2007; Devilly, 2007).

On the other side, Loftus has challenged the idea that individuals can repress memories of traumatic events from childhood, including sexual abuse, and then recover those memories years later through therapeutic techniques such as hypnosis, guided visualization, and age regression. Loftus is not saying that childhood sexual abuse doesn't happen, but she does question whether or not those memories are accurate, and she is skeptical of the questioning process used to access these memories, given that even the slightest suggestion from the therapist can lead to misinformation effects. For example, researchers Stephen Ceci and Maggie Bruck (1993, 1995) asked three-year-old children to use an anatomically correct doll to show where their pediatricians had touched them during an exam. Fifty-five percent of the children pointed to the genital/anal area on the dolls, even when they had not received any form of genital exam.

Ever since Loftus published her first studies on the suggestibility of eyewitness testimony in the 1970s, social scientists, police officers, therapists, and legal practitioners have been aware of the flaws in interview practices. Consequently, steps have been taken to decrease suggestibility of witnesses. One way is to modify how witnesses are questioned. When interviewers use neutral and less leading language, children more accurately recall what happened and who was involved (Goodman, 2006; Pipe, 1996; Pipe, Lamb, Orbach, & Esplin, 2004). Another change is in how police lineups are conducted. It's recommended that a blind photo lineup be used. This way the person administering the lineup doesn't know which photo belongs to the suspect, minimizing the possibility of giving leading cues. Additionally, judges in some states now inform jurors about the possibility of misidentification. Judges can also suppress eyewitness testimony if they deem it unreliable.

Forgetting

"I've a grand memory for forgetting," quipped Robert Louis Stevenson. Forgetting refers to loss of information from long-term memory. We all forget things, like a loved one's birthday, someone's name, or where we put our car keys. As you've come to see, memory is fragile, and forgetting can be frustrating and even embarrassing. But why do we forget? To answer this question, we will look at several perspectives on forgetting.

Encoding Failure

Sometimes memory loss happens before the actual memory process begins, which is encoding failure. We can't remember something if we never stored it in our memory in the first place. This would be like trying to find a book on your e-reader that you never actually purchased and downloaded. Often, in order to remember something, we must pay attention to the details and actively work to process the information (effortful encoding). Lots of times we don't do this. For instance, think of how many times in your life you've seen a penny. Can you accurately recall what the front of a U.S. penny looks like?
When researchers Raymond Nickerson and Marilyn Adams (1979) asked this question, they found that most Americans could not accurately recall the penny’s details. The reason is most likely encoding failure. Most of us never encode the details of the penny. We only encode enough information to be able to distinguish it from other coins. If we don’t encode the information, then it’s not in our long-term memory, so we will not be able to remember it. Memory Errors Psychologist Daniel Schacter (2001), a well-known memory researcher, offers seven ways our memories fail us. He calls them the seven sins of memory and categorizes them into three groups: forgetting, distortion, and intrusion (Table 8.1).

Sin | Type | Description | Example
Transience | Forgetting | Accessibility of memory decreases over time | Forgetting events that occurred long ago
Absentmindedness | Forgetting | Forgetting caused by lapses in attention | Forgetting where your phone is
Blocking | Forgetting | Accessibility of information is temporarily blocked | Tip of the tongue
Misattribution | Distortion | Source of memory is confused | Recalling a dream memory as a waking memory
Suggestibility | Distortion | False memories | Result from leading questions
Bias | Distortion | Memories distorted by current belief system | Aligning memories to current beliefs
Persistence | Intrusion | Inability to forget undesirable memories | Traumatic events

Table 8.1 Schacter’s Seven Sins of Memory

Let’s look at the first of the forgetting errors: transience, which means that memories can fade over time. Here’s an example of how this happens. Nathan’s English teacher has assigned his students to read the novel To Kill a Mockingbird. Nathan comes home from school and tells his mom he has to read this book for class. “Oh, I loved that book!” she says. Nathan asks her what the book is about, and after some hesitation she says, “Well . . . I know I read the book in high school, and I remember that one of the main characters is named Scout, and her father is an attorney, but I honestly don’t remember anything else.” Nathan wonders if his mother actually read the book, and his mother is surprised she can’t recall the plot. What is going on here is storage decay: unused information tends to fade with the passage of time. In 1885, German psychologist Hermann Ebbinghaus analyzed the process of memorization. First, he memorized lists of nonsense syllables. Then he measured how much he learned (retained) when he attempted to relearn each list. He tested himself over different periods of time, from 20 minutes later to 30 days later. The result is his famous forgetting curve (Figure 8.15). Due to storage decay, an average person will lose 50% of the memorized information after 20 minutes and 70% of the information after 24 hours (Ebbinghaus, 1885/1964). Your memory for new information decays quickly and then eventually levels out. Are you constantly losing your cell phone? Have you ever driven back home to make sure you turned off the stove? Have you ever walked into a room for something, but forgotten what it was? You probably answered yes to at least one, if not all, of these examples—but don’t worry, you are not alone. We are all prone to committing the memory error known as absentmindedness. These lapses in memory are caused by breaks in attention or our focus being somewhere else. Cynthia, a psychologist, recalls a time when she committed the memory error of absentmindedness.
When I was completing court-ordered psychological evaluations, each time I went to the court, I was issued a temporary identification card with a magnetic strip which would open otherwise locked doors. As you can imagine, in a courtroom, this identification is valuable and important and no one wanted it to be lost or be picked up by a criminal. At the end of the day, I would hand in my temporary identification. One day, when I was almost done with an evaluation, my daughter’s day care called and said she was sick and needed to be picked up. It was flu season, I didn’t know how sick she was, and I was concerned. I finished up the evaluation in the next ten minutes, packed up my tools, and rushed to drive to my daughter’s day care. After I picked up my daughter, I could not remember if I had handed back my identification or if I had left it sitting out on a table. I immediately called the court to check. It turned out that I had handed back my identification. Why could I not remember that? (personal communication, September 5, 2013) When have you experienced absentmindedness? “I just went and saw this movie called Oblivion, and it had that famous actor in it. Oh, what’s his name? He’s been in all of those movies, like The Shawshank Redemption and The Dark Knight trilogy. I think he’s even won an Oscar. Oh gosh, I can picture his face in my mind, and hear his distinctive voice, but I just can’t think of his name! This is going to bug me until I can remember it!” This particular error can be so frustrating because you have the information right on the tip of your tongue. Have you ever experienced this? If so, you’ve committed the error known as blocking: you can’t access stored information (Figure 8.16). Now let’s take a look at the three errors of distortion: misattribution, suggestibility, and bias. Misattribution happens when you confuse the source of your information. Let’s say Alejandro was dating Lucia and they saw the first Hobbit movie together. Then they broke up and Alejandro saw the second Hobbit movie with someone else. Later that year, Alejandro and Lucia get back together. One day, they are discussing how the Hobbit books and movies are different and Alejandro says to Lucia, “I loved watching the second movie with you and seeing you jump out of your seat during that super scary part.” When Lucia responded with a puzzled and then angry look, Alejandro realized he’d committed the error of misattribution. What if someone is a victim of rape shortly after watching a television program? Is it possible that the victim could actually blame the rape on the person she saw on television because of misattribution? This is exactly what happened to Donald Thomson. Australian eyewitness expert Donald Thomson appeared on a live TV discussion about the unreliability of eyewitness memory. He was later arrested, placed in a lineup and identified by a victim as the man who had raped her. The police charged Thomson although the rape had occurred at the time he was on TV. They dismissed his alibi that he was in plain view of a TV audience and in the company of the other discussants, including an assistant commissioner of police. . . . Eventually, the investigators discovered that the rapist had attacked the woman as she was watching TV—the very program on which Thomson had appeared. Authorities eventually cleared Thomson. The woman had confused the rapist’s face with the face that she had seen on TV. (Baddeley, 2004, p. 133) The second distortion error is suggestibility.
Suggestibility is similar to misattribution, since it also involves false memories, but the two differ in where the false memory originates. With misattribution you create the false memory entirely on your own, which is what the victim did in the Donald Thomson case above. With suggestibility, the false information comes from someone else, such as a therapist or police interviewer asking leading questions of a witness during an interview. Memories can also be affected by bias, which is the final distortion error. Schacter (2001) says that your feelings and view of the world can actually distort your memory of past events. There are several types of bias: Stereotypical bias involves racial and gender biases. For example, when Asian American and European American research participants were presented with a list of names, they more frequently incorrectly remembered typical African American names such as Jamal and Tyrone to be associated with the occupation of basketball player, and they more frequently incorrectly remembered typical White names such as Greg and Howard to be associated with the occupation of politician (Payne, Jacoby, & Lambert, 2004). Egocentric bias involves enhancing our memories of the past (Payne et al., 2004). Did you really score the winning goal in that big soccer match, or did you just assist? Hindsight bias happens when we think an outcome was inevitable after the fact. This is the “I knew it all along” phenomenon. The reconstructive nature of memory contributes to hindsight bias (Carli, 1999). We remember untrue events that seem to confirm that we knew the outcome all along. Have you ever had a song play over and over in your head? How about a memory of a traumatic event, something you really do not want to think about? When you keep remembering something, to the point where you can’t “get it out of your head” and it interferes with your ability to concentrate on other things, it is called persistence. It’s Schacter’s seventh and last memory error. It’s actually a failure of our memory system because we involuntarily recall unwanted memories, particularly unpleasant ones (Figure 8.17). For instance, you witness a horrific car accident on the way to work one morning, and you can’t concentrate on work because you keep remembering the scene. Interference Sometimes information is stored in our memory, but for some reason it is inaccessible. This is known as interference, and there are two types: proactive interference and retroactive interference (Figure 8.18). Have you ever gotten a new phone number or moved to a new address, but then kept giving people the old (and now wrong) phone number or address? When the new year starts, do you find you accidentally write the previous year? These are examples of proactive interference: when old information hinders the recall of newly learned information. Retroactive interference happens when information learned more recently hinders the recall of older information. For example, this week you are studying Freud’s Psychoanalytic Theory. Next week you study the humanistic perspective of Maslow and Rogers. Thereafter, you have trouble remembering Freud’s Psychosexual Stages of Development because you can only remember Maslow’s Hierarchy of Needs.
8.4 Ways to Enhance Memory Learning Objectives By the end of this section, you will be able to: Recognize and apply memory-enhancing strategies Recognize and apply effective study techniques Most of us suffer from memory failures of one kind or another, and most of us would like to improve our memories so that we don’t forget where we put the car keys or, more importantly, the material we need to know for an exam. In this section, we’ll look at some ways to help you remember better, and at some strategies for more effective studying. Memory-Enhancing Strategies What are some everyday ways we can improve our memory, including recall? To help make sure information goes from short-term memory to long-term memory, you can use memory-enhancing strategies. One strategy is rehearsal, or the conscious repetition of information to be remembered (Craik & Watkins, 1973). Think about how you learned your multiplication tables as a child. You may recall that 6 x 6 = 36, 6 x 7 = 42, and 6 x 8 = 48. Memorizing these facts is rehearsal. Another strategy is chunking: you organize information into manageable bits or chunks (Bodie, Powers, & Fitch-Hauser, 2006). Chunking is useful when trying to remember information like dates and phone numbers. Instead of trying to remember 5205550467, you remember the number as 520-555-0467. So, if you met an interesting person at a party and you wanted to remember his phone number, you would naturally chunk it, and you could repeat the number over and over, which is the rehearsal strategy. Link to Learning Try this fun activity that employs a memory-enhancing strategy. You could also enhance memory by using elaborative rehearsal: a technique in which you think about the meaning of the new information and its relation to knowledge already stored in your memory (Tigner, 1999). For example, in this case, you could remember that 520 is an area code for Arizona and the person you met is from Arizona. This would help you better remember the 520 prefix. If the information is retained, it goes into long-term memory. Mnemonic devices are memory aids that help us organize information for encoding (Figure 8.19). They are especially useful when we want to recall larger bits of information such as steps, stages, phases, and parts of a system (Bellezza, 1981). Brian needs to learn the order of the planets in the solar system, but he’s having a hard time remembering the correct order. His friend Kelly suggests a mnemonic device that can help him remember. Kelly tells Brian to simply remember the name Mr. VEM J. SUN, and he can easily recall the correct order of the planets: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune. You might use a mnemonic device to help you remember someone’s name, a mathematical formula, or the order of mathematical operations. If you have ever watched the television show Modern Family, you might have seen Phil Dunphy explain how he remembers names: The other day I met this guy named Carl. Now, I might forget that name, but he was wearing a Grateful Dead t-shirt. What’s a band like the Grateful Dead? Phish. Where do fish live? The ocean. What else lives in the ocean? Coral. Hello, Co-arl. (Wrubel & Spiller, 2010) It seems the more vivid or unusual the mnemonic, the easier it is to remember. The key to using any mnemonic successfully is to find a strategy that works for you.
Link to Learning Watch this fascinating TED Talks lecture titled “Feats of Memory Anyone Can Do.” The lecture is given by Joshua Foer, a science writer who “accidentally” won the U.S. Memory Championships. He explains a mnemonic device called the memory palace. Some other strategies that are used to improve memory include expressive writing and saying words aloud. Expressive writing helps boost your short-term memory, particularly if you write about a traumatic experience in your life. Masao Yogo and Shuji Fujihara (2008) had participants write for 20-minute intervals several times per month. The participants were instructed to write about a traumatic experience, their best possible future selves, or a trivial topic. The researchers found that this simple writing task increased short-term memory capacity after five weeks, but only for the participants who wrote about traumatic experiences. Psychologists can’t explain why this writing task works, but it does. What if you want to remember items you need to pick up at the store? Simply say them out loud to yourself. A series of studies (MacLeod, Gopie, Hourihan, Neary, & Ozubko, 2010) found that saying a word out loud improves your memory for the word because it increases the word’s distinctiveness. Feel silly saying random grocery items aloud? This technique works equally well if you just mouth the words. Using these techniques increased participants’ memory for the words by more than 10%. These techniques can also be used to help you study. How to Study Effectively Based on the information presented in this chapter, here are some strategies and suggestions to help you hone your study techniques (Figure 8.20). The key with any of these strategies is to figure out what works best for you. Use elaborative rehearsal: In a famous article, Craik and Lockhart (1972) discussed their belief that information we process more deeply goes into long-term memory. Their theory is called levels of processing. If we want to remember a piece of information, we should think about it more deeply and link it to other information and memories to make it more meaningful. For example, if we are trying to remember that the hippocampus is involved with memory processing, we might envision a hippopotamus with excellent memory and then we could better remember the hippocampus. Apply the self-reference effect: As you go through the process of elaborative rehearsal, it would be even more beneficial to make the material you are trying to memorize personally meaningful to you. In other words, make use of the self-reference effect. Write notes in your own words. Write definitions from the text, and then rewrite them in your own words. Relate the material to something you have already learned for another class, or think how you can apply the concepts to your own life. When you do this, you are building a web of retrieval cues that will help you access the material when you want to remember it. Don’t forget the forgetting curve: As you know, the information you learn drops off rapidly with time. Even if you think you know the material, study it again right before test time to increase the likelihood the information will remain in your memory. Overlearning can help prevent storage decay. Rehearse, rehearse, rehearse: Review the material over time, in spaced and organized study sessions. Organize and study your notes, and take practice quizzes/exams. Link the new information to other information you already know well.
Be aware of interference: To reduce the likelihood of interference, study during a quiet time without interruptions or distractions (like television or music). Keep moving: Of course you already know that exercise is good for your body, but did you know it’s also good for your mind? Research suggests that regular aerobic exercise (anything that gets your heart rate elevated) is beneficial for memory (van Praag, 2008). Aerobic exercise promotes neurogenesis: the growth of new brain cells in the hippocampus, an area of the brain known to play a role in memory and learning. Get enough sleep: While you are sleeping, your brain is still at work. During sleep the brain organizes and consolidates information to be stored in long-term memory (Abel & Bäuml, 2013). Make use of mnemonic devices: As you learned earlier in this chapter, mnemonic devices often help us to remember and recall information. There are different types of mnemonic devices, such as the acronym. An acronym is a word formed by the first letter of each of the words you want to remember. For example, even if you live near one, you might have difficulty recalling the names of all five Great Lakes. What if I told you to think of the word HOMES? HOMES is an acronym that represents Huron, Ontario, Michigan, Erie, and Superior: the five Great Lakes. Another type of mnemonic device is an acrostic: you make a phrase of all the first letters of the words. For example, if you are taking a math test and you are having difficulty remembering the order of operations, recalling the following sentence will help you: “Please Excuse My Dear Aunt Sally,” because the order of mathematical operations is Parentheses, Exponents, Multiplication, Division, Addition, Subtraction. There also are jingles, which are rhyming tunes that contain key words related to the concept, such as i before e, except after c.
American Government
Summary 15.1 Bureaucracy and the Evolution of Public Administration During the post-Jacksonian era of the nineteenth century, the common charge against the bureaucracy was that it was overly political and corrupt. This changed in the 1880s as the United States began to create a modern civil service. The civil service grew once again in Franklin D. Roosevelt’s administration as he expanded government programs to combat the effects of the Great Depression. The most recent criticisms of the federal bureaucracy, notably under Ronald Reagan, emerged following the second great expansion of the federal government under Lyndon B. Johnson in the 1960s. 15.2 Toward a Merit-Based Civil Service The merit-based system of filling jobs in the government bureaucracy elevates ability and accountability over political loyalties. Unfortunately, this system also has its downsides. The most common complaint is that the bureaucrats are no longer as responsive to elected public officials as they once had been. This, however, may be a necessary tradeoff for the level of efficiency and specialization required in the modern world. 15.3 Understanding Bureaucracies and their Types To understand why some bureaucracies act the way they do, sociologists have developed a handful of models. With the exception of the ideal bureaucracy described by Max Weber, these models see bureaucracies as self-serving. Harnessing self-serving instincts to make the bureaucracy work the way it was intended is a constant task for elected officials. One of the ways elected officials have tried to grapple with this problem is by designing different types of bureaucracies with different functions. These types include cabinet departments, independent regulatory agencies, independent executive agencies, and government corporations. 15.4 Controlling the Bureaucracy To reduce the intra-institutional disagreements the traditional rulemaking process seemed to bring, the negotiated rulemaking process was designed to encourage consensus. Both Congress and the president exercise direct oversight over the bureaucracy by holding hearings, making appointments, and setting budget allowances. Citizens exercise their oversight powers through their use of the Freedom of Information Act (FOIA) and by voting. Finally, bureaucrats also exercise oversight over their own institutions by using the channels carved out for whistleblowers to call attention to bureaucratic abuses.
Chapter Outline 15.1 Bureaucracy and the Evolution of Public Administration 15.2 Toward a Merit-Based Civil Service 15.3 Understanding Bureaucracies and their Types 15.4 Controlling the Bureaucracy Introduction What does the word “bureaucracy” conjure in your mind? For many, it evokes inefficiency, corruption, red tape, and government overreach (Figure 15.1). For others, it triggers very different images—of professionalism, helpful and responsive service, and government management. Your experience with bureaucrats and the administration of government probably informs your response to the term. The ability of bureaucracy to inspire both revulsion and admiration is one of several features that make it a fascinating object of study. More than that, the many arms of the federal bureaucracy, often considered the fourth branch of government, are valuable components of the federal system. Without this administrative structure, staffed by nonelected workers who possess particular expertise to carry out their jobs, government could not function the way citizens need it to. That does not mean, however, that bureaucracies are perfect. What roles do professional government employees carry out? Who are they, and how and why do they acquire their jobs? How do they run the programs of government enacted by elected leaders? Who makes the rules of a bureaucracy? This chapter uncovers the answers to these questions and many more.
Review Questions

1. The “spoils system” allocated political appointments on the basis of ________.
a. merit
b. background
c. party loyalty
d. specialized education
Answer: c

2. Two recent periods of large-scale bureaucratic expansion were ________.
a. the 1930s and the 1960s
b. the 1920s and the 1980s
c. the 1910s and the 1990s
d. the 1930s and the 1950s
Answer: a

3. The Civil Service Commission was created by the ________.
a. Pendleton Act of 1883
b. Lloyd–La Follette Act of 1912
c. Hatch Act of 1939
d. Political Activities Act of 1939
Answer: a

4. The Civil Service Reform Act of 1978 created the Office of Personnel Management and the ________.
a. Civil Service Commission
b. Merit Systems Protection Board
c. “spoils system”
d. General Schedule
Answer: b

5. Which describes the ideal bureaucracy according to Max Weber?
a. an apolitical, hierarchically organized agency
b. an organization that competes with other bureaucracies for funding
c. a wasteful, poorly organized agency
d. an agency that shows clear electoral responsiveness
Answer: a

6. Which of the following models of bureaucracy best accounts for the way bureaucracies tend to push Congress for more funding each year?
a. the Weberian model
b. the acquisitive model
c. the monopolistic model
d. the ideal model
Answer: b

7. An example of a government corporation is ________.
a. NASA
b. the State Department
c. Amtrak
d. the CIA
Answer: c

8. The Freedom of Information Act of 1966 helps citizens exercise oversight over the bureaucracy by ________.
a. empowering Congress
b. opening government records to citizen scrutiny
c. requiring annual evaluations by the president
d. forcing agencies to hold public meetings
Answer: b

9. When reformers speak of bureaucratic privatization, they mean all the following processes except ________.
a. divestiture
b. government grants
c. whistleblowing
d. third-party financing
Answer: c
15.1 Bureaucracy and the Evolution of Public Administration Learning Objectives By the end of this section, you will be able to: Define bureaucracy and bureaucrat Describe the evolution and growth of public administration in the United States Identify the reasons people undertake civil service Throughout history, both small and large nations have elevated certain types of nonelected workers to positions of relative power within the governmental structure. Collectively, these essential workers are called the bureaucracy. A bureaucracy is an administrative group of nonelected officials charged with carrying out functions connected to a series of policies and programs. In the United States, the bureaucracy began as a very small collection of individuals. Over time, however, it grew to be a major force in political affairs. Indeed, it grew so large that politicians in modern times have ridiculed it to great political advantage. However, the country’s many bureaucrats or civil servants, the individuals who work in the bureaucracy, fill necessary and even instrumental roles in every area of government: from high-level positions in foreign affairs and intelligence collection agencies to clerks and staff in the smallest regulatory agencies. They are hired, or sometimes appointed, for their expertise in carrying out the functions and programs of the government. WHAT DOES A BUREAUCRACY DO? Modern society relies on the effective functioning of government to provide public goods, enhance quality of life, and stimulate economic growth. The activities by which government achieves these functions include—but are not limited to—taxation, homeland security, immigration, foreign affairs, and education. The more society grows and the need for government services expands, the more challenging bureaucratic management and public administration become. Public administration is both the implementation of public policy in government bureaucracies and the academic study that prepares civil servants for work in those organizations. The classic version of a bureaucracy is hierarchical and can be described by an organizational chart that outlines the separation of tasks and worker specialization while also establishing a clear unity of command by assigning each employee to only one boss. Moreover, the classic bureaucracy employs a division of labor under which work is separated into smaller tasks assigned to different people or groups. Given this definition, bureaucracy is not unique to government but is also found in the private and nonprofit sectors. That is, almost all organizations are bureaucratic regardless of their scope and size, although public and private organizations differ in some important ways. For example, while private organizations are responsible to a superior authority such as an owner, board of directors, or shareholders, federal governmental organizations answer equally to the president, Congress, the courts, and ultimately the public. The underlying goals of private and public organizations also differ. While private organizations seek to survive by controlling costs, increasing market share, and realizing a profit, public organizations find it more difficult to measure the elusive goal of operating with efficiency and effectiveness. Link to Learning To learn more about the practice of public administration and opportunities to get involved in your local community, explore the American Society for Public Administration website.
Bureaucracy may seem like a modern invention, but bureaucrats have served in governments for nearly as long as governments have existed. Archaeologists and historians point to the sometimes elaborate bureaucratic systems of the ancient world, from the Egyptian scribes who recorded inventories to the biblical tax collectors who kept the wheels of government well greased. 1 In Europe, government bureaucracy and its study emerged before democracies did. In contrast, in the United States, a democracy and the Constitution came first, followed by the development of national governmental organizations as needed, and then finally the study of U.S. government bureaucracies and public administration emerged. 2 In fact, the long pedigree of bureaucracy is an enduring testament to the necessity of administrative organization. More recently, modern bureaucratic management emerged in the eighteenth century from Scottish economist Adam Smith’s support for the efficiency of the division of labor and from Welsh reformer Robert Owen’s belief that employees are vital instruments in the functioning of an organization. However, it was not until the mid-1800s that the German scholar Lorenz von Stein argued for public administration as both a theory and a practice, since its knowledge is generated and evaluated through the process of gathering evidence. For example, a public administration scholar might gather data to see whether the timing of tax collection during a particular season might lead to higher compliance or returns. Credited with being the father of the science of public administration, von Stein opened the path of administrative enlightenment for other scholars in industrialized nations. THE ORIGINS OF THE U.S. BUREAUCRACY In the early U.S. republic, the bureaucracy was quite small. This is understandable since the American Revolution was largely a revolt against executive power and the British imperial administrative order. Nevertheless, while neither the word “bureaucracy” nor its synonyms appear in the text of the Constitution, the document does establish a few broad channels through which the emerging government could develop the necessary bureaucratic administration. For example, Article II, Section 2, provides the president the power to appoint officers and department heads. In the following section, the president is further empowered to see that the laws are “faithfully executed.” More specifically, Article I, Section 8, empowers Congress to establish a post office, build roads, regulate commerce, coin money, and regulate the value of money. Granting the president and Congress such responsibilities appears to anticipate a bureaucracy of some size. Yet the design of the bureaucracy is not described, and it does not occupy its own section of the Constitution as bureaucracy often does in other countries’ governing documents; the design and form were left to be established in practice. Under President George Washington, the bureaucracy remained small enough to accomplish only the necessary tasks at hand. 3 Washington’s tenure saw the creation of the Department of State to oversee international issues, the Department of the Treasury to control coinage, and the Department of War to administer the armed forces. The employees within these three departments, in addition to the growing postal service, constituted the major portion of the federal bureaucracy for the first three decades of the republic (Figure 15.2).
Two developments, however, contributed to the growth of the bureaucracy well beyond these humble beginnings. The first development was the rise of centralized party politics in the 1820s. Under President Andrew Jackson, many thousands of party loyalists filled the ranks of the bureaucratic offices around the country. This was the beginning of the spoils system, in which political appointments were transformed into political patronage doled out by the president on the basis of party loyalty. 4 Political patronage is the use of state resources to reward individuals for their political support. The term “spoils” here refers to paid positions in the U.S. government. As the saying goes, “to the victor,” in this case the incoming president, “go the spoils.” It was assumed that government would work far more efficiently if the key federal posts were occupied by those already supportive of the president and his policies. This system served to enforce party loyalty by tying the livelihoods of the party faithful to the success or failure of the party. The number of federal posts the president sought to use as appropriate rewards for supporters swelled over the following decades. The second development was industrialization, which in the late nineteenth century significantly increased both the population and economic size of the United States. These changes in turn brought about urban growth in a number of places across the East and Midwest. Railroads and telegraph lines drew the country together and increased the potential for federal centralization. The government and its bureaucracy were closely involved in creating concessions for and providing land to the western railways stretching across the plains and beyond the Rocky Mountains. These changes set the groundwork for the regulatory framework that emerged in the early twentieth century. THE FALL OF POLITICAL PATRONAGE Patronage had the advantage of putting political loyalty to work by making the government quite responsive to the electorate and keeping election turnout robust because so much was at stake. However, the spoils system also had a number of obvious disadvantages. It was a reciprocal system. Clients who wanted positions in the civil service pledged their political loyalty to a particular patron who then provided them with their desired positions. These arrangements directed the power and resources of government toward perpetuating the reward system. They replaced the system that early presidents like Thomas Jefferson had fostered, in which the country’s intellectual and economic elite rose to the highest levels of the federal bureaucracy based on their relative merit. 5 Criticism of the spoils system grew, especially in the mid-1870s, after numerous scandals rocked the administration of President Ulysses S. Grant (Figure 15.3). As the negative aspects of political patronage continued to infect bureaucracy in the late nineteenth century, calls for civil service reform grew louder. Those supporting the patronage system held that their positions were well earned; those who condemned it argued that federal legislation was needed to ensure jobs were awarded on the basis of merit. Eventually, after President James Garfield had been assassinated by a disappointed office seeker (Figure 15.4), Congress responded to cries for reform with the Pendleton Act, also called the Civil Service Reform Act of 1883.
The act established the Civil Service Commission, a centralized agency charged with ensuring that the federal government’s selection, retention, and promotion practices were based on open, competitive examinations in a merit system. 6 The passage of this law sparked a period of social activism and political reform that continued well into the twentieth century. As an active member and leader of the Progressive movement, President Woodrow Wilson is often considered the father of U.S. public administration. Born in Virginia and educated in history and political science at Johns Hopkins University, Wilson became a respected intellectual in his fields with an interest in public service and a profound sense of moralism. He was named president of Princeton University, became president of the American Political Science Association, was elected governor of New Jersey, and finally was elected the twenty-eighth president of the United States in 1912. It was through his educational training and vocational experiences that Wilson began to identify the need for a public administration discipline. He felt it was getting harder to run a constitutional government than to actually frame one. His stance was that “It is the object of administrative study to discover, first, what government can properly and successfully do, and, secondly, how it can do these proper things with the utmost efficiency. . .” 7 Wilson declared that while politics does set tasks for administration, public administration should be built on a science of management, and political science should be concerned with the way governments are administered. Therefore, administrative activities should be devoid of political manipulations. 8 Wilson advocated separating politics from administration by three key means: making comparative analyses of public and private organizations, improving efficiency with business-like practices, and increasing effectiveness through management and training. Wilson’s point was that while politics should be kept separate from administration, administration should not be insensitive to public opinion. Rather, the bureaucracy should act with a sense of vigor to understand and appreciate public opinion. Still, Wilson acknowledged that the separation of politics from administration was an ideal and not necessarily an achievable reality. THE BUREAUCRACY COMES OF AGE The late nineteenth and early twentieth centuries were a time of great bureaucratic growth in the United States: The Interstate Commerce Commission was established in 1887, the Federal Reserve Board in 1913, the Federal Trade Commission in 1914, and the Federal Power Commission in 1920. With the onset of the Great Depression in 1929, the United States faced record levels of unemployment and the associated fall into poverty, food shortages, and general desperation. When the Republican president and Congress were not seen as moving aggressively enough to fix the situation, the Democrats won the 1932 election in overwhelming fashion. President Franklin D. Roosevelt and the U.S. Congress rapidly reorganized the government’s problem-solving efforts into a series of programs designed to revive the economy, stimulate economic development, and generate employment opportunities. In the 1930s, the federal bureaucracy grew with the addition of the Federal Deposit Insurance Corporation to protect and regulate U.S.
banking, the National Labor Relations Board to regulate the way companies could treat their workers, the Securities and Exchange Commission to regulate the stock market, and the Civil Aeronautics Board to regulate air travel. Additional programs and institutions emerged with the Social Security Administration in 1935 and then, during World War II, various wartime boards and agencies. By 1940, approximately 700,000 U.S. workers were employed in the federal bureaucracy. 9 Under President Lyndon B. Johnson in the 1960s, that number reached 2.2 million, and the federal budget increased to $332 billion. 10 This growth came as a result of what Johnson called his Great Society program, intended to use the power of government to relieve suffering and accomplish social good. The Economic Opportunity Act of 1964 was designed to help end poverty by creating a Job Corps and a Neighborhood Youth Corps. Volunteers in Service to America was a type of domestic Peace Corps intended to relieve the effects of poverty. Johnson also directed more funding to public education, created Medicare as a national insurance program for the elderly, and raised standards for consumer products. All of these new programs required bureaucrats to run them, and the national bureaucracy naturally ballooned. Its size became a rallying cry for conservatives, who eventually elected Ronald Reagan president for the express purpose of reducing the bureaucracy. While Reagan was able to work with Congress to reduce some aspects of the federal bureaucracy, he contributed to its expansion in other ways, particularly in his efforts to fight the Cold War. 11 For example, Reagan and Congress increased the defense budget dramatically over the course of the 1980s. 12 Milestone “The Nine Most Terrifying Words in the English Language” The two periods of increased bureaucratic growth in the United States, the 1930s and the 1960s, accomplished far more than expanding the size of government. They transformed politics in ways that continue to shape political debate today. While the bureaucracies created in these two periods served important purposes, many at that time and even now argue that the expansion came with unacceptable costs, particularly economic costs. The common argument that bureaucratic regulation smothers capitalist innovation was especially powerful in the Cold War environment of the 1960s, 70s, and 80s. But as long as voters felt they were benefiting from the bureaucratic expansion, as they typically did, the political winds supported continued growth. In the 1970s, however, Germany and Japan were thriving economies in positions to compete with U.S. industry. This competition, combined with technological advances and the beginnings of computerization, began to eat away at American prosperity. Factories began to close, wages began to stagnate, inflation climbed, and the future seemed a little less bright. In this environment, tax-paying workers were less likely to support generous welfare programs designed to end poverty. They felt these bureaucratic programs were adding to their misery in order to support unknown others. In his first and unsuccessful presidential bid in 1976, Ronald Reagan, a skilled politician and governor of California, stoked working-class anxieties by directing voters’ discontent at the bureaucratic dragon he proposed to slay. When he ran again four years later, his criticism of bureaucratic waste in Washington carried him to a landslide victory. 
While it is debatable whether Reagan actually reduced the size of government, he continued to wield rhetoric about bureaucratic waste to great political advantage. Even as late as 1986, he was still railing against the Washington bureaucracy ( Figure 15.5 ), once declaring famously that “the nine most terrifying words in the English language are: I’m from the government, and I’m here to help.” Why might people be more sympathetic to bureaucratic growth during periods of prosperity? In what way do modern politicians continue to stir up popular animosity against bureaucracy to political advantage? Is it effective? Why or why not? 15.2 Toward a Merit-Based Civil Service Learning Objectives By the end of this section, you will be able to: Explain how the creation of the Civil Service Commission transformed the spoils system of the nineteenth century into a merit-based system of civil service Understand how carefully regulated hiring and pay practices help to maintain a merit-based civil service While the federal bureaucracy grew by leaps and bounds during the twentieth century, it also underwent a very different evolution. Beginning with the Pendleton Act in the 1880s, the bureaucracy shifted away from the spoils system toward a merit system. The distinction between these two forms of bureaucracy is crucial. The evolution toward a civil service in the United States had important functional consequences. Today the United States has a civil service that carefully regulates hiring practices and pay to create an environment in which, it is hoped, the best people to fulfill each civil service responsibility are the same people hired to fill those positions. THE CIVIL SERVICE COMMISSION The Pendleton Act of 1883 was not merely an important piece of reform legislation; it also established the foundations for the merit-based system that emerged in the decades that followed. It accomplished this through a number of important changes, although three elements stand out as especially significant. First, the law attempted to reduce the impact of politics on the civil service sector by making it illegal to fire or otherwise punish government workers for strictly political reasons. Second, the law raised the qualifications for employment in civil service positions by requiring applicants to pass exams designed to test their competence in a number of important skill and knowledge areas. Third, it allowed for the creation of the United States Civil Service Commission (CSC), which was charged with enforcing the elements of the law. 13 The CSC, as created by the Pendleton Act, was to be made up of three commissioners, only two of whom could be from the same political party. These commissioners were given the responsibility of developing and applying the competitive examinations for civil service positions, ensuring that civil service appointments were apportioned among the several states based on population, and seeing to it that no person in the public service was obligated to contribute to any political cause. The CSC was also charged with ensuring that all civil servants serve a probationary period before being permanently appointed and that no appointee use his or her official authority to affect political changes either through coercion or influence. Both Congress and the president oversaw the CSC by requiring the commission to supply an annual report on its activities first to the president and then to Congress.
In 1883, civil servants under the control of the commission amounted to about 10 percent of the entire government workforce. However, over the next few decades, this percentage increased dramatically. The effects on the government itself of both the law and the increase in the size of the civil service were huge. Presidents and representatives were no longer spending their days doling out or terminating appointments. Consequently, members of the civil service could no longer count on their political patrons for job security. Of course, job security was never guaranteed before the Pendleton Act because all positions were subject to the rise and fall of political parties. However, with civil service appointments no longer tied to partisan success, bureaucrats began to look to each other in order to create the job security the previous system had lacked. One of the most important ways they did this was by creating civil service organizations such as the National Association of All Civil Service Employees, formed in 1896. This organization worked to further civil service reform, especially in the area most important to civil service professionals: ensuring greater job security and maintaining the distance between themselves and the political parties that once controlled them. 14 Over the next few decades, civil servants gravitated to labor unions in much the same way that employees in the private sector did. Through the power of their collective voices amplified by their union representatives, they were able to achieve political influence. The growth of federal labor unions accelerated after the Lloyd–La Follette Act of 1912, which removed many of the penalties civil servants faced when joining a union. As the size of the federal government and its bureaucracy grew following the Great Depression and the Roosevelt reforms, many became increasingly concerned that the Pendleton Act prohibitions on political activities by civil servants were no longer strong enough. As a result of these mounting concerns, Congress passed the Hatch Act of 1939—or the Political Activities Act. The main provision of this legislation prohibits bureaucrats from actively engaging in political campaigns and from using their federal authority via bureaucratic rank to influence the outcomes of nominations and elections. Despite the efforts throughout the 1930s to build stronger walls of separation between the civil service bureaucrats and the political system that surrounds them, many citizens continued to grow skeptical of the growing bureaucracy. These concerns reached a high point in the late 1970s as the Vietnam War and the Watergate scandal drove public skepticism about government itself to a fever pitch. Congress and the president responded with the Civil Service Reform Act of 1978, which abolished the Civil Service Commission. In its place, the law created two new federal agencies: the Office of Personnel Management (OPM) and the Merit Systems Protection Board (MSPB). The OPM has responsibility for recruiting, interviewing, and testing potential government employees in order to choose those who should be hired. The MSPB is responsible for investigating charges of agency wrongdoing and hearing appeals when corrective actions are ordered. Together these new federal agencies were intended to correct perceived and real problems with the merit system, protect employees from managerial abuse, and generally make the bureaucracy more efficient. 15
MERIT-BASED SELECTION The general trend from the 1880s to today has been toward a civil service system that is increasingly based on merit ( Figure 15.6 ). In this system, the large majority of jobs in individual bureaucracies are tied to the needs of the organization rather than to the political needs of the party bosses or political leaders. This purpose is reflected in the way civil service positions are advertised. A general civil service position announcement will describe the government agency or office seeking an employee, an explanation of what the agency or office does, an explanation of what the position requires, and a list of the knowledge, skills, and abilities, commonly referred to as KSAs, deemed especially important for fulfilling the role. A budget analyst position, for example, would include KSAs such as experience with automated financial systems, knowledge of budgetary regulations and policies, the ability to communicate orally, and demonstrated skills in budget administration, planning, and formulation. The merit system requires that a person be evaluated based on his or her ability to demonstrate KSAs that match those described or better. The individual who is hired should have better KSAs than the other applicants. Many years ago, the merit system would have required all applicants to also test well on a civil service exam, as was stipulated by the Pendleton Act. This mandatory testing has since been abandoned, and now approximately eighty-five percent of all federal government jobs are filled through an examination of the applicant’s education, background, knowledge, skills, and abilities. 16 That would suggest that the remaining 15 percent or so are filled through appointment and patronage. Among the first group, those hired based on merit, a small percentage of positions still require that applicants take one of the several civil service exams. These are sometimes positions that require applicants to demonstrate broad critical thinking skills, such as foreign service jobs. More often these exams are required for positions demanding specific or technical knowledge, such as customs officials, air traffic controllers, and federal law enforcement officers. Additionally, new online tests are increasingly being used to screen the ever-growing pool of applicants. 17 Civil service exams currently test for skills applicable to clerical workers, postal service workers, military personnel, health and social workers, and accounting and engineering employees among others. Applicants with the highest scores on these tests are most likely to be hired for the desired position.
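The KSA-based selection just described amounts to a simple screen-and-rank procedure: disqualify anyone who falls short of a required KSA, then hire the strongest remaining candidate. The Python sketch below is purely illustrative—the KSA names are borrowed from the budget-analyst example above, but the 1–5 rating scale, minimum levels, and every score are hypothetical, not drawn from any actual OPM system:

```python
# Illustrative screen-and-rank model of merit-based selection.
# All rating levels and scores below are hypothetical examples.

REQUIRED_KSAS = {                              # a hypothetical budget-analyst posting
    "automated financial systems": 3,          # minimum demonstrated level (1-5)
    "budgetary regulations and policies": 3,
    "oral communication": 2,
    "budget administration and planning": 4,
}

applicants = {
    "Applicant A": {"automated financial systems": 4,
                    "budgetary regulations and policies": 3,
                    "oral communication": 4,
                    "budget administration and planning": 4},
    "Applicant B": {"automated financial systems": 2,   # below the minimum
                    "budgetary regulations and policies": 5,
                    "oral communication": 3,
                    "budget administration and planning": 5},
}

def meets_minimums(scores):
    """A candidate qualifies only by matching or exceeding every required KSA."""
    return all(scores.get(ksa, 0) >= level for ksa, level in REQUIRED_KSAS.items())

qualified = {name: s for name, s in applicants.items() if meets_minimums(s)}

# Among qualified candidates, the strongest overall KSA profile is selected.
hired = max(qualified, key=lambda name: sum(qualified[name].values()))
print(hired)  # Applicant A: B shows more total skill but fails one minimum
```

Real federal hiring involves far more than this, of course, but the sketch captures the core merit principle: screen on minimum qualifications first, then rank on demonstrated KSAs.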
Like all organizations, bureaucracies must make thoughtful investments in human capital. And even after hiring people, they must continue to train and develop them to reap the investment they make during the hiring process. Get Connected! A Career in Government: Competitive Service, Excepted Service, Senior Executive Service One of the significant advantages of the enormous modern U.S. bureaucracy is that many citizens find employment there to be an important source of income and meaning in their lives. Job opportunities exist in a number of different fields, from foreign service with the State Department to information and record clerking at all levels. Each position requires specific background, education, experience, and skills. There are three general categories of work in the federal government: competitive service, excepted service, and senior executive service. Competitive service positions are closely regulated by Congress through the Office of Personnel Management to ensure they are filled in a fair way and the best applicant gets the job ( Figure 15.7 ). Qualifications for these jobs include work history, education, and grades on civil service exams. Federal jobs in the excepted service category are exempt from these hiring restrictions. Either these jobs require a far more rigorous hiring process, such as is the case at the Central Intelligence Agency, or they call for very specific skills, such as in the Nuclear Regulatory Commission. Excepted service jobs allow employers to set their own pay rates and requirements. Finally, senior executive service positions are filled by men and women who are able to demonstrate their experience in executive positions. These are leadership positions, and applicants must demonstrate certain executive core qualifications (ECQs). These qualifications are leading change, being results-driven, demonstrating business acumen, and building better coalitions. What might be the practical consequences of having these different job categories? Can you think of some specific positions you are familiar with and the categories they might be in? Link to Learning Where once federal jobs would have been posted in post offices and newspapers, they are now posted online. The most common place aspiring civil servants look for jobs is on USAjobs.gov, a web-based platform offered by the Office of Personnel Management for agencies to find the right employees. Visit their website to see the types of jobs currently available in the U.S. bureaucracy. Civil servants receive pay based on the U.S. Federal General Schedule. A pay schedule is a chart that shows salary ranges for different levels (grades) of positions vertically and for different ranks (steps) of seniority horizontally. The Pendleton Act of 1883 allowed for this type of pay schedule, but the modern version of the schedule emerged in the 1940s and was refined in the 1990s. The modern General Schedule includes fifteen grades, each with ten steps ( Figure 15.8 ). The grades reflect the different required competencies, education standards, skills, and experiences for the various civil service positions. Grades GS-1 and GS-2 require very little education, experience, and skills and pay little. Grades GS-3 through GS-7 and GS-8 through GS-12 require ascending levels of education and pay increasingly more. Grades GS-13 through GS-15 require specific, specialized experience and education, and these job levels pay the most. When hired into a position at a specific grade, employees are typically paid at the first step of that grade, the lowest allowable pay. Over time, assuming they receive satisfactory assessment ratings, they will progress through the various steps. Many careers also allow civil servants to ascend through the grades of the specific career. 18
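The General Schedule just described is, in effect, a two-dimensional lookup table: grade (required competency) down one axis, step (seniority within the grade) across the other. The sketch below models that structure; every dollar figure and raise factor is invented for illustration, since the actual GS tables are published by OPM and adjusted annually and by locality:

```python
# Toy model of a General Schedule-style pay chart: fifteen grades x ten steps.
# BASE and the raise factors are hypothetical; real figures come from OPM.

GRADES, STEPS = 15, 10
BASE = 20_000        # hypothetical grade-1, step-1 salary
GRADE_FACTOR = 1.08  # assumed raise for each grade above the first
STEP_FACTOR = 1.03   # assumed raise for each step within a grade

pay_table = [
    [round(BASE * GRADE_FACTOR ** g * STEP_FACTOR ** s) for s in range(STEPS)]
    for g in range(GRADES)
]

def salary(grade: int, step: int) -> int:
    """Look up pay for a 1-indexed grade (1-15) and step (1-10)."""
    return pay_table[grade - 1][step - 1]

# A new hire typically enters at step 1 of the grade, the lowest allowable pay,
# then advances step by step with satisfactory assessment ratings over time.
print(salary(7, 1))   # starting pay at the hypothetical grade 7
print(salary(7, 10))  # pay after progressing to the top step of that grade
```

Note the design the chart encodes: movement across steps rewards seniority within a job, while movement up grades requires greater education, skills, and experience.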
The intention behind these hiring practices and structured pay systems is to create an environment in which those most likely to succeed are in fact those who are ultimately appointed. The systems almost naturally result in organizations composed of experts who dedicate their lives to their work and their agency. Equally important, however, are the drawbacks. The primary one is that permanent employees can become too independent of the elected leaders. While a degree of separation is intentional and desired, too much can result in bureaucracies that are insufficiently responsive to political change. Another downside is that the accepted expertise of individual bureaucrats can sometimes hide their own chauvinistic impulses. The merit system encouraged bureaucrats to turn to each other and their bureaucracies for support and stability. Severing the political ties common in the spoils system creates the potential for bureaucrats to steer actions toward their own preferences even if these contradict the designs of elected leaders. 15.3 Understanding Bureaucracies and Their Types Learning Objectives By the end of this section, you will be able to: Explain the three different models sociologists and others use to understand bureaucracies Identify the different types of federal bureaucracies and their functional differences Turning a spoils system bureaucracy into a merit-based civil service, while desirable, comes with a number of different consequences. The patronage system tied the livelihoods of civil service workers to their party loyalty and discipline. Severing these ties, as has occurred in the United States over the last century and a half, has transformed the way bureaucracies operate. Without the patronage network, bureaucracies form their own motivations. These motivations, sociologists have discovered, are designed to benefit and perpetuate the bureaucracies themselves. MODELS OF BUREAUCRACY Bureaucracies are complex institutions designed to accomplish specific tasks. This complexity, and the fact that they are organizations composed of human beings, can make it challenging for us to understand how bureaucracies work. Sociologists, however, have developed a number of models for understanding the process. Each model highlights specific traits that help explain the organizational behavior of governing bodies and associated functions. The Weberian Model The classic model of bureaucracy is typically called the ideal Weberian model, and it was developed by Max Weber, an early German sociologist. Weber argued that the increasing complexity of life would simultaneously increase the demands of citizens for government services. Therefore, the ideal type of bureaucracy, the Weberian model, was one in which agencies are apolitical, hierarchically organized, and governed by formal procedures. Furthermore, specialized bureaucrats would be better able to solve problems through logical reasoning. Such efforts would eliminate entrenched patronage, stop problematic decision-making by those in charge, provide a system for managing and performing repetitive tasks that required little or no discretion, impose order and efficiency, create a clear understanding of the service provided, reduce arbitrariness, ensure accountability, and limit discretion. 19 The Acquisitive Model For Weber, as his ideal type suggests, the bureaucracy was not only necessary but also a positive human development. Later sociologists have not always looked so favorably upon bureaucracies, and they have developed alternate models to explain how and why bureaucracies function. One such model is called the acquisitive model of bureaucracy. The acquisitive model proposes that bureaucracies are naturally competitive and power-hungry. This means bureaucrats, especially at the highest levels, recognize that limited resources are available to feed bureaucracies, so they will work to enhance the status of their own bureaucracy to the detriment of others.
This effort can sometimes take the form of merely emphasizing to Congress the value of its bureaucratic task, but it also means the bureaucracy will attempt to maximize its budget by depleting all its allotted resources each year. This ploy makes it more difficult for legislators to cut the bureaucracy’s future budget, a strategy that succeeds at the expense of thrift. In this way, the bureaucracy will eventually grow far beyond what is necessary and create bureaucratic waste that would otherwise be spent more efficiently among the other bureaucracies. The Monopolistic Model Other theorists have come to the conclusion that the extent to which bureaucracies compete for scarce resources is not what provides the greatest insight into how a bureaucracy functions. Rather, it is the absence of competition. The model that emerged from this observation is the monopolistic model. Proponents of the monopolistic model recognize the similarities between a bureaucracy like the Internal Revenue Service (IRS) and a private monopoly like a regional power company or internet service provider that has no competitors. Such organizations are frequently criticized for waste, poor service, and a low level of client responsiveness. Consider, for example, the Bureau of Consular Affairs (BCA), the federal bureaucracy charged with issuing passports to citizens. There is no other organization from which a U.S. citizen can legitimately request and receive a passport, a process that normally takes several weeks. Thus there is no reason for the BCA to become more efficient or more responsive or to issue passports any faster. There are rare bureaucratic exceptions that typically compete for presidential favor, most notably organizations such as the Central Intelligence Agency, the National Security Agency, and the intelligence agencies in the Department of Defense. Apart from these, bureaucracies have little reason to become more efficient or responsive, nor are they often penalized for chronic inefficiency or ineffectiveness. Therefore, there is little reason for them to adopt cost-saving or performance measurement systems. While some economists argue that the problems of government could be easily solved if certain functions were privatized to reduce this prevailing incompetence, bureaucrats are not as easily swayed. TYPES OF BUREAUCRATIC ORGANIZATIONS A bureaucracy is a particular government unit established to accomplish a specific set of goals and objectives as authorized by a legislative body. In the United States, the federal bureaucracy enjoys a great degree of autonomy compared to those of other countries. This is in part due to the sheer size of the federal budget, approximately $3.5 trillion as of 2015. 20 And because many of its agencies do not have clearly defined lines of authority—roles and responsibilities established by means of a chain of command—they also are able to operate with a high degree of autonomy. However, many agency actions are subject to judicial review. In Schechter Poultry Corp. v. United States (1935), for example, the Supreme Court struck down New Deal legislation on the grounds that it had delegated seemingly limitless authority to the executive. 21 Yet, not all bureaucracies are alike. In the U.S. government, there are four general types: cabinet departments, independent executive agencies, regulatory agencies, and government corporations. Cabinet Departments There are currently fifteen cabinet departments in the federal government. Cabinet departments are major executive offices that are directly accountable to the president.
They include the Departments of State, Defense, Education, Treasury, and several others. Occasionally, a department will be eliminated when government officials decide its tasks no longer need direct presidential and congressional oversight, such as happened to the Post Office Department in 1970. Each cabinet department has a head called a secretary, appointed by the president and confirmed by the Senate. These secretaries report directly to the president, and they oversee a huge network of offices and agencies that make up the department. They also work in different capacities to achieve each department’s mission-oriented functions. Within these large bureaucratic networks are a number of undersecretaries, assistant secretaries, deputy secretaries, and many others. The Department of Justice is the one department that is structured somewhat differently. Rather than a secretary and undersecretaries, it has an attorney general, an associate attorney general, and a host of different bureau and division heads ( Table 15.1 ).

Members of the Cabinet
Department | Year Created | Secretary as of 2016 | Purpose
State | 1789 | John Kerry | Oversees matters related to foreign policy and international issues relevant to the country
Treasury | 1789 | Jack Lew | Oversees the printing of U.S. currency, collects taxes, and manages government debt
Justice | 1870 | Loretta Lynch (attorney general) | Oversees the enforcement of U.S. laws, matters related to public safety, and crime prevention
Interior | 1849 | Sally Jewell | Oversees the conservation and management of U.S. lands, water, wildlife, and energy resources
Agriculture | 1862 | Tom Vilsack | Oversees the U.S. farming industry, provides agricultural subsidies, and conducts food inspections
Commerce | 1903 | Penny Pritzker | Oversees the promotion of economic growth, job creation, and the issuing of patents
Labor | 1913 | Thomas Perez | Oversees issues related to wages, unemployment insurance, and occupational safety
Defense | 1947 | Ashton Carter | Oversees the many elements of the U.S. armed forces, including the Army, Navy, Marine Corps, and Air Force
Health and Human Services | 1953 | Sylvia Mathews Burwell | Oversees the promotion of public health by providing essential human services and enforcing food and drug laws
Housing and Urban Development | 1965 | Julián Castro | Oversees matters related to U.S. housing needs, works to increase homeownership, and increases access to affordable housing
Transportation | 1966 | Anthony Foxx | Oversees the country’s many networks of national transportation
Energy | 1977 | Ernest Moniz | Oversees matters related to the country’s energy needs, including energy security and technological innovation
Education | 1980 | John King | Oversees public education, education policy, and relevant education research
Veterans Affairs | 1989 | Robert McDonald | Oversees the services provided to U.S. veterans, including health care services and benefits programs
Homeland Security | 2002 | Jeh Johnson | Oversees agencies charged with protecting the territory of the United States from natural and human threats
Table 15.1 This table outlines all the current cabinet departments, along with the year they were created, their current top administrator, and other special details related to their purpose and functions.

Individual cabinet departments are composed of numerous levels of bureaucracy. These levels descend from the department head in a mostly hierarchical pattern and consist of essential staff, smaller offices, and bureaus.
Their tiered, hierarchical structure allows large bureaucracies to address many different issues by deploying dedicated and specialized officers. For example, below the secretary of state are a number of undersecretaries. These include undersecretaries for political affairs, for management, for economic growth, energy, and the environment, and many others. Each controls a number of bureaus and offices. Each bureau and office in turn oversees a more focused aspect of the undersecretary’s field of specialization ( Figure 15.9 ). For example, below the undersecretary for public diplomacy and public affairs are three bureaus: educational and cultural affairs, public affairs, and international information programs. Frequently, these bureaus have even more specialized departments under them. Under the bureau of public affairs are the spokesperson for the Department of State and his or her staff, the Office of the Historian, and the United States Diplomacy Center. 22
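One way to visualize this tiered structure is as a tree in which each office is nested under the one that oversees it. The sketch below encodes only the State Department offices named in this section; the real department has many more undersecretaries, bureaus, and offices at every level:

```python
# Nested-dictionary sketch of the tiered structure described above,
# limited to the offices named in the text.

state_department = {
    "Secretary of State": {
        "Undersecretary for Political Affairs": {},
        "Undersecretary for Management": {},
        "Undersecretary for Public Diplomacy and Public Affairs": {
            "Bureau of Educational and Cultural Affairs": {},
            "Bureau of Public Affairs": {
                "Department Spokesperson and Staff": {},
                "Office of the Historian": {},
                "United States Diplomacy Center": {},
            },
            "Bureau of International Information Programs": {},
        },
    },
}

def print_org(unit, depth=0):
    """Walk the hierarchy, indenting each office under the one that oversees it."""
    for office, sub_offices in unit.items():
        print("    " * depth + office)
        print_org(sub_offices, depth + 1)

print_org(state_department)
```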
Link to Learning Created in 1939 by President Franklin D. Roosevelt to help manage the growing responsibilities of the White House, the Executive Office of the President still works today to “provide the President with the support that he or she needs to govern effectively.” Independent Executive Agencies and Regulatory Agencies Like cabinet departments, independent executive agencies report directly to the president, with heads appointed by the president. Unlike the larger cabinet departments, however, independent agencies are assigned far more focused tasks. These agencies are considered independent because they are not subject to the regulatory authority of any specific department. They perform vital functions and are a major part of the bureaucratic landscape of the United States. Some prominent independent agencies are the Central Intelligence Agency (CIA), which collects and manages intelligence vital to national interests, the National Aeronautics and Space Administration (NASA), charged with developing technological innovation for the purposes of space exploration ( Figure 15.10 ), and the Environmental Protection Agency (EPA), which enforces laws aimed at protecting environmental sustainability. An important subset of the independent agency category is the regulatory agency. Regulatory agencies emerged in the late nineteenth century as a product of the progressive push to control the benefits and costs of industrialization. The first regulatory agency was the Interstate Commerce Commission (ICC), charged with regulating that most identifiable and prominent symbol of nineteenth-century industrialism, the railroad. Other regulatory agencies, such as the Commodity Futures Trading Commission, which regulates U.S. financial markets, and the Federal Communications Commission, which regulates radio and television, have largely been created in the image of the ICC. These independent regulatory agencies cannot be influenced as readily by partisan politics as typical agencies and can therefore develop a good deal of power and authority. The Securities and Exchange Commission (SEC) illustrates well the potential power of such agencies. The SEC’s mission has expanded significantly in the digital era beyond mere regulation of stock floor trading. Government Corporations Agencies formed by the federal government to administer a quasi-business enterprise are called government corporations. They exist because the services they provide are partly subject to market forces and tend to generate enough profit to be self-sustaining, but they also fulfill a vital service the government has an interest in maintaining. Unlike a private corporation, a government corporation does not have stockholders. Instead, it has a board of directors and managers. This distinction is important because whereas a private corporation’s profits are distributed as dividends, a government corporation’s profits are dedicated to perpetuating the enterprise. Unlike private businesses, which pay taxes to the federal government on their profits, government corporations are exempt from taxes. The most widely used government corporation is the U.S. Postal Service. Once a cabinet department, it was transformed into a government corporation in the early 1970s. Another widely used government corporation is the National Railroad Passenger Corporation, which uses the trade name Amtrak ( Figure 15.11 ). Amtrak was the government’s response to the decline in passenger rail travel in the 1950s and 1960s as the automobile came to dominate. Recognizing the need to maintain a passenger rail service despite dwindling profits, the government consolidated the remaining lines and created Amtrak. 23 THE FACE OF DEMOCRACY Those who work for the public bureaucracy are nearly always citizens, much like those they serve. As such they typically seek similar long-term goals from their employment, namely to be able to pay their bills and save for retirement. However, unlike those who seek employment in the private sector, public bureaucrats tend to have an additional motivator, the desire to accomplish something worthwhile on behalf of their country. In general, individuals attracted to public service display higher levels of public service motivation (PSM). This is a desire most people possess in varying degrees that drives us to seek fulfillment through doing good and contributing in an altruistic manner. 24 Insider Perspective Dogs and Fireplugs In Caught between the Dog and the Fireplug, or How to Survive Public Service (2001), author Kenneth Ashworth provides practical advice for individuals pursuing a career in civil service. 25 Through a series of letters, Ashworth shares his personal experience and professional expertise on a variety of issues with a relative named Kim who is about to embark upon an occupation in the public sector. By discussing what life is like in the civil service, Ashworth provides an “in the trenches” vantage point on public affairs. He goes on to discuss hot topics centering on bureaucratic behaviors, such as (1) having sound etiquette, ethics, and risk aversion when working with press, politicians, and unpleasant people; (2) being a subordinate while also delegating; (3) managing relationships, pressures, and influence; (4) becoming a functional leader; and (5) taking a multidimensional approach to addressing or solving complex problems. Ashworth says that politicians and civil servants differ in their missions, needs, and motivations, which will eventually reveal differences in their respective characters and, consequently, present a variety of challenges. He maintains that a good civil servant must realize he or she will need to be in the thick of things to provide preeminent service without actually being seen as merely a bureaucrat.
Put differently, a bureaucrat walks a fine line between standing up for elected officials and their respective policies—the dog—and at the same time acting in the best interest of the public—the fireplug. In what ways is the problem identified by author Kenneth Ashworth a consequence of the merit-based civil service? Bureaucrats must implement and administer a wide range of policies and programs as established by congressional acts or presidential orders. Depending upon the agency’s mission, a bureaucrat’s roles and responsibilities vary greatly, from regulating corporate business and protecting the environment to printing money and purchasing office supplies. Bureaucrats are government officials subject to legislative regulations and procedural guidelines. Because they play a vital role in modern society, they hold managerial and functional positions in government; they form the core of most administrative agencies. Although many top administrators are far removed from the masses, many interact with citizens on a regular basis. Given the power bureaucrats have to adopt and enforce public policy, they must follow several legislative regulations and procedural guidelines. A regulation is a rule that permits government to restrict or prohibit certain behaviors among individuals and corporations. Bureaucratic rulemaking is a complex process that will be covered in more detail in the following section, but the rulemaking process typically creates procedural guidelines, or more formally, standard operating procedures. These are the rules that lower-level bureaucrats must abide by regardless of the situations they face. Elected officials are regularly frustrated when bureaucrats seem not to follow the path they intended. As a result, the bureaucratic process becomes inundated with red tape. This is the name for the procedures and rules that must be followed to get something done. Citizens frequently criticize the seemingly endless networks of red tape they must navigate in order to effectively utilize bureaucratic services, although these devices are really meant to ensure the bureaucracies function as intended. 15.4 Controlling the Bureaucracy Learning Objectives By the end of this section, you will be able to: Explain the way Congress, the president, bureaucrats, and citizens provide meaningful oversight over the bureaucracies Identify the ways in which privatization has made bureaucracies both more and less efficient As our earlier description of the State Department demonstrates, bureaucracies are incredibly complicated. Understandably, then, the processes of rulemaking and bureaucratic oversight are equally complex. Historically, at least since the end of the spoils system, elected leaders have struggled to maintain control over their bureaucracies. This challenge arises partly due to the fact that elected leaders tend to have partisan motivations, while bureaucracies are designed to avoid partisanship. While that is not the only explanation, elected leaders and citizens have developed laws and institutions to help rein in bureaucracies that become either too independent, corrupt, or both. BUREAUCRATIC RULEMAKING Once the particulars of implementation have been spelled out in the legislation authorizing a new program, bureaucracies move to enact it. When they encounter grey areas, many follow the federal negotiated rulemaking process to propose a solution, that is, detailing how particular new federal policies, regulations, and/or programs will be implemented in the agencies.
Congress cannot possibly legislate on that level of detail, so the experts in the bureaucracy do so. Negotiated rulemaking is a relatively recently developed bureaucratic device that emerged from the criticisms of bureaucratic inefficiencies in the 1970s, 1980s, and 1990s. 26 Before it was adopted, bureaucracies used a procedure called notice-and-comment rulemaking. This practice required that agencies attempting to adopt rules publish their proposal in the Federal Register, the official publication for all federal rules and proposed rules. By publishing the proposal, the bureaucracy was fulfilling its obligation to allow the public time to comment. But rather than encouraging the productive interchange of ideas, the comment period had the effect of creating an adversarial environment in which different groups tended to make extreme arguments for rules that would support their interests. As a result, administrative rulemaking became too lengthy, too contentious, and too likely to provoke litigation in the courts. Link to Learning The Federal Register was once available only in print. Now, however, it is available online and is far easier to navigate and use. Have a look at all the important information the government’s journal posts online. Reformers argued that these inefficiencies needed to be corrected. They proposed the negotiated rulemaking process, often referred to as regulatory negotiation, or “reg-neg” for short. This process was codified in the Negotiated Rulemaking Acts of 1990 and 1996, which encouraged agencies to employ negotiated rulemaking procedures. While negotiated rulemaking is required in only a handful of agencies and plenty still use the traditional process, others have recognized the potential of the new process and have adopted it. In negotiated rulemaking, neutral advisors known as convenors put together a committee of those who have vested interests in the proposed rules. The convenors then set about devising procedures for reaching a consensus on the proposed rules. The committee uses these procedures to govern the process through which the committee members discuss the various merits and demerits of the proposals. With the help of neutral mediators, the committee eventually reaches a general consensus on the rules. GOVERNMENT BUREAUCRATIC OVERSIGHT The ability for bureaucracies to develop their own rules and in many ways control their own budgets has often been a matter of great concern for elected leaders. As a result, elected leaders have employed a number of strategies and devices to control public administrators in the bureaucracy. Congress is particularly empowered to apply oversight of the federal bureaucracy because of its power to control funding and approve presidential appointments. The various bureaucratic agencies submit annual summaries of their activities and budgets for the following year, and committees and subcommittees in both chambers regularly hold hearings to question the leaders of the various bureaucracies. These hearings are often tame, practical, fact-finding missions. Occasionally, however, when a particular bureaucracy has committed or contributed to a blunder of some magnitude, the hearings can become quite animated and testy. This occurred in 2013 following the realization by Congress that the IRS had selected for extra scrutiny certain groups that had applied for tax-exempt status.
While the error could have been a mere mistake or the result of any number of causes, many in Congress became enraged at the thought that the IRS might purposely use its power to inconvenience citizens and their groups. 27 The House directed its Committee on Oversight and Government Reform to launch an investigation into the IRS, during which time it interviewed and publicly scrutinized a number of high-ranking civil servants ( Figure 15.12 ). Link to Learning The mission of the U.S. House Oversight Committee is to “ensure the efficiency, effectiveness, and accountability of the federal government and all its agencies.” The committee is an important congressional check on the power of the bureaucracy. Visit the website for more information about the U.S. House Oversight Committee. Perhaps Congress’s most powerful oversight tool is the Government Accountability Office (GAO). 28 The GAO is an agency that provides Congress, its committees, and the heads of the executive agencies with auditing, evaluation, and investigative services. It is designed to operate in a fact-based and nonpartisan manner to deliver important oversight information where and when it is needed. The GAO’s role is to produce reports, mostly at the insistence of Congress. In the approximately nine hundred reports it completes per year, the GAO sends Congress information about budgetary issues for everything from education, health care, and housing to defense, homeland security, and natural resource management. 29 Since it is an office within the federal bureaucracy, the GAO also supplies Congress with its own annual performance and accountability report. This report details the achievements and remaining weaknesses in the actions of the GAO for any given year. Apart from Congress, the president also exercises oversight over the extensive federal bureaucracy through a number of different avenues. Most directly, the president controls the bureaucracies by appointing the heads of the fifteen cabinet departments and of many independent executive agencies, such as the CIA, the EPA, and the Federal Bureau of Investigation. These cabinet and agency appointments go through the Senate for confirmation. The other important channel through which the office of the president conducts oversight over the federal bureaucracy is the Office of Management and Budget (OMB). 30 The primary responsibility of the OMB is to produce the president’s annual budget for the country. With this huge responsibility, however, comes a number of other responsibilities. These include reporting to the president on the actions of the various executive departments and agencies in the federal government, overseeing the performance levels of the bureaucracies, coordinating and reviewing federal regulations for the president, and delivering executive orders and presidential directives to the various agency heads. Finding a Middle Ground Controversy and the CFPB: Overseeing a Bureau Whose Job Is Oversight During the 1990s, the two political parties in the United States had largely come together over the issue of the federal bureaucracy. While differences remained, a great number of bipartisan attempts to roll back the size of government took place during the Clinton administration. This shared effort began to fall apart during the presidency of Republican George W. Bush, who made repeated attempts to use contracting and privatization to reduce the size of the federal bureaucracy more than Democrats were willing to accept.
This growing division was further compounded by the Great Recession that began in 2007. For many on the left side of the political spectrum, the onset of the recession reflected a failure of weakened federal bureaucracies to properly regulate the financial markets. To those on the right, it merely reinforced the belief that government bureaucracies are inherently inefficient. Over the next few years, as the government attempted to grapple with the consequences of the recession, these divisions only grew. The debate over one particular bureaucratic response to the recession provides important insight into these divisions. The bureau in question is the Consumer Financial Protection Bureau (CFPB), an agency created in 2011 specifically to oversee certain financial industries that had proven themselves to be especially prone to abusive practices, such as sub-prime mortgage lenders and payday lenders. To many in the Republican Party, this new bureau was merely another instance of growing the federal bureaucracy to take care of problems caused by an inefficient government. To many in the Democratic Party, the new agency was an important cop on a notably chaotic street. Divisions over this agency were so bitter that Republicans refused for a time to allow the Senate to consider confirming anyone to head the new bureau ( Figure 15.13 ). Many wanted the bureau either scrapped or headed by a committee that would have to generate consensus in order to act. They attempted to cut the bureau’s budget and erected mountains of red tape designed to slow the CFPB’s achievement of its goals. During the height of the recession, many Democrats saw these tactics as a particularly destructive form of obstruction while the country reeled from the financial collapse. As the recession recedes into the past, however, the political heat the CFPB once generated has steadily declined. Republicans still push to reduce the power of the bureau and Democrats in general still support it, but lack of urgency has pushed these differences into the background. Indeed, there may be a growing consensus between the two parties that the bureau should be more tightly controlled. In the spring of 2016, as the agency was announcing new rules to help further restrict the predatory practices of payday lenders, a handful of Democratic members of Congress, including the party chair, joined Republicans to draft legislation to prevent the CFPB from further regulating lenders. This joint effort may be an anomaly. But it may also indicate the start of a return to more bipartisan interpretation of bureaucratic institutions. What do these divisions suggest about the way Congress exercises oversight over the federal bureaucracy? Do you think this oversight is an effective way to control a bureaucracy as large and complex as the U.S. federal bureaucracy? Why or why not? CITIZEN BUREAUCRATIC OVERSIGHT A number of laws passed in the decades between the end of the Second World War and the late 1970s have created a framework through which citizens can exercise their own bureaucratic oversight. The two most important laws are the Freedom of Information Act of 1966 and the Government in Sunshine Act of 1976. 31 Like many of the modern bureaucratic reforms in the United States, both emerged during a period of heightened skepticism about government activities. The first, the Freedom of Information Act of 1966 (FOIA), emerged in the early years of the Johnson presidency as the United States was conducting secret Cold War missions around the world, the U.S.
military was becoming increasingly mired in the conflict in Vietnam, and questions were still swirling around the Kennedy assassination. FOIA provides journalists and the general public the right to request records from various federal agencies. These agencies are required by law to release that information unless it qualifies for one of nine exemptions. These exemptions cover sensitive issues related to national security or foreign policy, internal personnel rules, trade secrets, violations of personnel privacy rights, law enforcement information, and oil well data ( Figure 15.14 ). FOIA also compels agencies to post some types of information for the public regularly without being requested. In fiscal year 2015, the government received 713,168 FOIA requests, with just three departments—Defense, Homeland Security, and Justice—accounting for more than half those queries. 32 The Center for Effective Government analyzed the fifteen federal agencies that receive the most FOIA requests and concluded that they generally struggle to implement public disclosure rules. In its latest report, published in 2015 and using 2012 and 2013 data (the most recent available), ten of the fifteen did not earn satisfactory overall grades, scoring less than seventy of a possible one hundred points. 33 The Government in Sunshine Act of 1976 is different from FOIA in that it requires all multi-headed federal agencies to hold their meetings in a public forum on a regular basis. The name “Sunshine Act” is derived from the old adage that “sunlight is the best disinfectant”—the implication being that governmental and bureaucratic corruption thrive in secrecy but shrink when exposed to the light of public scrutiny. The act defines a meeting as any gathering of agency members in person or by phone, whether in a formal or informal manner. Like FOIA, the Sunshine Act allows for exceptions. These include meetings where classified information is discussed, proprietary data has been submitted for review, employee privacy matters are discussed, criminal matters are brought up, and information would prove financially harmful to companies were it released. Citizens and citizen groups can also follow rulemaking and testify at hearings held around the country on proposed rules. The rulemaking process and the efforts by federal agencies to keep open records and solicit public input on important changes are examples of responsive bureaucracy. GOVERNMENT PRIVATIZATION A more extreme, and in many instances, more controversial solution to the perceived and real inefficiencies in the bureaucracy is privatization. In the United States, largely because it was born during the Enlightenment and has a long history of championing free-market principles, the urge to privatize government services has never been as strong as it is in many other countries. There are simply far fewer government-run services. Nevertheless, the federal government has used forms of privatization and contracting throughout its history. But following the growth of bureaucracy and government services during President Johnson’s Great Society in the mid-1960s, a particularly vocal movement began calling for a rollback of government services. This movement grew stronger in the 1970s and 1980s as politicians, particularly on the right, declared that air needed to be let out of the bloated federal government.
In the 1990s, as President Bill Clinton and especially his vice president, Al Gore, worked to aggressively shrink the federal bureaucracy, privatization came to be embraced across the political spectrum. 34 The rhetoric of privatization—that market competition would stimulate innovation and efficiency—sounded like the proper remedy to many people and still does. But to many others, talk of privatization is worrying. They contend that certain government functions are simply not possible to replicate in a private context. When those in government speak of privatization, they are often referring to one of a host of different models that incorporate the market forces of the private sector into the function of government to varying degrees. 35 These include using contractors to supply goods and/or services, distributing government vouchers with which citizens can purchase formerly government-controlled services on the private market, supplying government grants to private organizations to administer government programs, collaborating with a private entity to finance a government program, and even fully divesting the government of a function and directly giving it to the private sector ( Figure 15.15 ). We will look at three of these types of privatization shortly. Divestiture, or full privatization, occurs when government services are transferred, usually through sale, from government bureaucratic control into an entirely market-based, private environment. At the federal level this form of privatization is very rare, although it does occur. Consider the Student Loan Marketing Association, often referred to by its nickname, Sallie Mae. When it was created in 1973, it was designed to be a government entity for processing federal student education loans. Over time, however, it gradually moved further from its original purpose and became increasingly private. Sallie Mae reached full privatization in 2004. 36 Another example is the U.S. Investigations Services, Inc., which was once the investigative branch of the Office of Personnel Management (OPM) until it was privatized in the 1990s. At the state level, however, the privatization of roads, public utilities, bridges, schools, and even prisons has become increasingly common as state and municipal authorities look for ways to reduce the cost of government. Possibly the best-known form of privatization is the process of issuing government contracts to private companies in order for them to provide necessary services. This process grew to prominence during President Bill Clinton’s National Partnership for Reinventing Government initiative, intended to streamline the government bureaucracy. Under President George W. Bush, the use of contracting out federal services reached new heights. During the Iraq War, for example, large corporations like Kellogg Brown & Root, owned by Halliburton at the time, signed government contracts to perform a number of services once done by the military, such as military base construction, food preparation, and even laundry services. By 2006, reliance on contracting to run the war was so great that contractors outnumbered soldiers. Such contracting has faced quite a bit of criticism for both its high cost and its potential for corruption and inefficiencies. 37 However, it has become so routine that it is unlikely to slow any time soon. Third-party financing is a far more complex form of privatization than divestiture or contracting.
Here the federal government signs an agreement with a private entity so the two can form a special-purpose vehicle to take ownership of the object being financed. The special-purpose vehicle is empowered to reach out to private financial markets to borrow money. This type of privatization is typically used to finance government office space, military base housing, and other large infrastructure projects. Offices like the Congressional Budget Office have frequently criticized this form of privatization as particularly inefficient and costly for the government. One of the most important forms of bureaucratic oversight comes from inside the bureaucracy itself. Those within are in the best position to recognize and report on misconduct. But bureaucracies tend to jealously guard their reputations and are generally resistant to criticism from without and from within. This can create quite a problem for insiders who recognize and want to report mismanagement and even criminal behavior. The personal cost of doing the right thing can be prohibitive. 38 For a typical bureaucrat faced with the option of reporting corruption and risking possible termination or turning the other way and continuing to earn a living, the choice is sometimes easy. Amid heightened skepticism about government inefficiency and outright corruption in the 1970s, government officials began looking for solutions. When Congress drafted the Civil Service Reform Act of 1978, it specifically included rights for federal whistleblowers, those who publicize misdeeds committed within a bureaucracy or other organization, and set up protection from reprisals. The act’s Merit Systems Protection Board is a quasi-judicial institutional board headed by three members appointed by the president and confirmed by the Senate that hears complaints, conducts investigations into possible abuses, and institutes protections for bureaucrats who speak out. 39 Over time, Congress and the president have strengthened these protections with additional acts. These include the Whistleblower Protection Act of 1989 and the Whistleblower Protection Enhancement Act of 2012, which further compelled federal agencies to protect whistleblowers who reasonably perceive that an institution or the people in the institution are acting inappropriately ( Figure 15.16 ).
[ { "answer": { "ans_choice": 0, "ans_text": "tunica intima" }, "bloom": "1", "hl_context": "The tunica intima ( also called the tunica interna ) is composed of epithelial and connective tissue layers . <hl> Lining the tunica intima is the specialized simple squamous epithelium called the endothelium , which is continuous throughout the entire vascular system , including the lining of the chambers of the heart . <hl> Damage to this endothelial lining and exposure of blood to the collagenous fibers beneath is one of the primary causes of clot formation . Until recently , the endothelium was viewed simply as the boundary between the blood in the lumen and the walls of the vessels . Recent studies , however , have shown that it is physiologically critical to such activities as helping to regulate capillary exchange and altering blood flow . The endothelium releases local chemicals called endothelins that can constrict the smooth muscle within the walls of the vessel to increase blood pressure . Uncompensated overproduction of endothelins may contribute to hypertension ( high blood pressure ) and cardiovascular disease .", "hl_sentences": "Lining the tunica intima is the specialized simple squamous epithelium called the endothelium , which is continuous throughout the entire vascular system , including the lining of the chambers of the heart .", "question": { "cloze_format": "The endothelium is found in the ________.", "normal_format": "The endothelium is found in what?", "question_choices": [ "tunica intima", "tunica media", "tunica externa", "lumen" ], "question_id": "fs-id2185152", "question_text": "The endothelium is found in the ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "both vasoconstriction and vasodilation" }, "bloom": "1", "hl_context": "The tunica media is the substantial middle layer of the vessel wall ( see Figure 20.3 ) . It is generally the thickest layer in arteries , and it is much thicker in arteries than it is in veins . The tunica media consists of layers of smooth muscle supported by connective tissue that is primarily made up of elastic fibers , most of which are arranged in circular sheets . Toward the outer portion of the tunic , there are also layers of longitudinal muscle . Contraction and relaxation of the circular muscles decrease and increase the diameter of the vessel lumen , respectively . Specifically in arteries , vasoconstriction decreases blood flow as the smooth muscle in the walls of the tunica media contracts , making the lumen narrower and increasing blood pressure . Similarly , vasodilation increases blood flow as the smooth muscle relaxes , allowing the lumen to widen and blood pressure to drop . <hl> Both vasoconstriction and vasodilation are regulated in part by small vascular nerves , known as nervi vasorum , or “ nerves of the vessel , ” that run within the walls of blood vessels . <hl> These are generally all sympathetic fibers , although some trigger vasodilation and others induce vasoconstriction , depending upon the nature of the neurotransmitter and receptors located on the target cell . Parasympathetic stimulation does trigger vasodilation as well as erection during sexual arousal in the external genitalia of both sexes . Nervous control over vessels tends to be more generalized than the specific targeting of individual blood vessels . Local controls , discussed later , account for this phenomenon . 
( Seek additional content for more information on these dynamic aspects of the autonomic nervous system . ) Hormones and local chemicals also control blood vessels . Together , these neural and chemical mechanisms reduce or increase blood flow in response to changing body conditions , from exercise to hydration . Regulation of both blood flow and blood pressure is discussed in detail later in this chapter . The smooth muscle layers of the tunica media are supported by a framework of collagenous fibers that also binds the tunica media to the inner and outer tunics . Along with the collagenous fibers are large numbers of elastic fibers that appear as wavy lines in prepared slides . Separating the tunica media from the outer tunica externa in larger arteries is the external elastic membrane ( also called the external elastic lamina ) , which also appears wavy in slides . This structure is not usually seen in smaller arteries , nor is it seen in veins .", "hl_sentences": "Both vasoconstriction and vasodilation are regulated in part by small vascular nerves , known as nervi vasorum , or “ nerves of the vessel , ” that run within the walls of blood vessels .", "question": { "cloze_format": "Nervi vasorum control ________.", "normal_format": "Which does nervi vasorum control?", "question_choices": [ "vasoconstriction", "vasodilation", "capillary permeability", "both vasoconstriction and vasodilation" ], "question_id": "fs-id2008487", "question_text": "Nervi vasorum control ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "elastic fibers" }, "bloom": null, "hl_context": "An artery is a blood vessel that conducts blood away from the heart . All arteries have relatively thick walls that can withstand the high pressure of blood ejected from the heart . <hl> However , those close to the heart have the thickest walls , containing a high percentage of elastic fibers in all three of their tunics . <hl> This type of artery is known as an elastic artery ( Figure 20.4 ) . Vessels larger than 10 mm in diameter are typically elastic . Their abundant elastic fibers allow them to expand , as blood pumped from the ventricles passes through them , and then to recoil after the surge has passed . If artery walls were rigid and unable to expand and recoil , their resistance to blood flow would greatly increase and blood pressure would rise to even higher levels , which would in turn require the heart to pump harder to increase the volume of blood expelled by each pump ( the stroke volume ) and maintain adequate pressure and flow . Artery walls would have to become even thicker in response to this increased pressure . The elastic recoil of the vascular wall helps to maintain the pressure gradient that drives the blood through the arterial system . 
An elastic artery is also known as a conducting artery , because the large diameter of the lumen enables it to accept a large volume of blood from the heart and conduct it to smaller branches .", "hl_sentences": "However , those close to the heart have the thickest walls , containing a high percentage of elastic fibers in all three of their tunics .", "question": { "cloze_format": "Closer to the heart, arteries would be expected to have a higher percentage of ________.", "normal_format": "Closer to the heart, what would arteries be expected to have a higher percentage of?", "question_choices": [ "endothelium", "smooth muscle fibers", "elastic fibers", "collagenous fibers" ], "question_id": "fs-id1949070", "question_text": "Closer to the heart, arteries would be expected to have a higher percentage of ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "thin walled, large lumens, low pressure, have valves" }, "bloom": "1", "hl_context": "Veins A vein is a blood vessel that conducts blood toward the heart . <hl> Compared to arteries , veins are thin-walled vessels with large and irregular lumens ( see Figure 20.7 ) . <hl> <hl> Because they are low-pressure vessels , larger veins are commonly equipped with valves that promote the unidirectional flow of blood toward the heart and prevent backflow toward the capillaries caused by the inherent low blood pressure in veins as well as the pull of gravity . <hl> Table 20.2 compares the features of arteries and veins .", "hl_sentences": "Compared to arteries , veins are thin-walled vessels with large and irregular lumens ( see Figure 20.7 ) . Because they are low-pressure vessels , larger veins are commonly equipped with valves that promote the unidirectional flow of blood toward the heart and prevent backflow toward the capillaries caused by the inherent low blood pressure in veins as well as the pull of gravity .", "question": { "cloze_format": "___ best describes veins.", "normal_format": "Which of the following best describes veins?", "question_choices": [ "thick walled, small lumens, low pressure, lack valves", "thin walled, large lumens, low pressure, have valves", "thin walled, small lumens, high pressure, have valves", "thick walled, large lumens, high pressure, lack valves" ], "question_id": "fs-id1531202", "question_text": "Which of the following best describes veins?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "sinusoid capillary" }, "bloom": "1", "hl_context": "A sinusoid capillary ( or sinusoid ) is the least common type of capillary . Sinusoid capillaries are flattened , and they have extensive intercellular gaps and incomplete basement membranes , in addition to intercellular clefts and fenestrations . This gives them an appearance not unlike Swiss cheese . These very large openings allow for the passage of the largest molecules , including plasma proteins and even cells . Blood flow through sinusoids is very slow , allowing more time for exchange of gases , nutrients , and wastes . <hl> Sinusoids are found in the liver and spleen , bone marrow , lymph nodes ( where they carry lymph , not blood ) , and many endocrine glands including the pituitary and adrenal glands . <hl> Without these specialized capillaries , these organs would not be able to provide their myriad of functions . 
For example , when bone marrow forms new blood cells , the cells must enter the blood supply and can only do so through the large openings of a sinusoid capillary ; they cannot pass through the small openings of continuous or fenestrated capillaries . <hl> The liver also requires extensive specialized sinusoid capillaries in order to process the materials brought to it by the hepatic portal vein from both the digestive tract and spleen , and to release plasma proteins into circulation . <hl> For capillaries to function , their walls must be leaky , allowing substances to pass through . <hl> There are three major types of capillaries , which differ according to their degree of “ leakiness : ” continuous , fenestrated , and sinusoid capillaries ( Figure 20.5 ) . <hl>", "hl_sentences": "Sinusoids are found in the liver and spleen , bone marrow , lymph nodes ( where they carry lymph , not blood ) , and many endocrine glands including the pituitary and adrenal glands . The liver also requires extensive specialized sinusoid capillaries in order to process the materials brought to it by the hepatic portal vein from both the digestive tract and spleen , and to release plasma proteins into circulation . There are three major types of capillaries , which differ according to their degree of “ leakiness : ” continuous , fenestrated , and sinusoid capillaries ( Figure 20.5 ) .", "question": { "cloze_format": "An especially leaky type of capillary found in the liver and certain other tissues is called a ________.", "normal_format": "What is an especially leaky type of capillary found in the liver and certain other tissues called?", "question_choices": [ "capillary bed", "fenestrated capillary", "sinusoid capillary", "metarteriole" ], "question_id": "fs-id1985209", "question_text": "An especially leaky type of capillary found in the liver and certain other tissues is called a ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "As blood volume decreases, blood pressure and blood flow also decrease." }, "bloom": "2", "hl_context": "The relationship between blood volume , blood pressure , and blood flow is intuitively obvious . Water may merely trickle along a creek bed in a dry season , but rush quickly and under great pressure after a heavy rain . <hl> Similarly , as blood volume decreases , pressure and flow decrease . <hl> As blood volume increases , pressure and flow increase .", "hl_sentences": "Similarly , as blood volume decreases , pressure and flow decrease .", "question": { "cloze_format": "It is true that ___.", "normal_format": "Which of the following statements is true?", "question_choices": [ "The longer the vessel, the lower the resistance and the greater the flow.", "As blood volume decreases, blood pressure and blood flow also decrease.", "Increased viscosity increases blood flow.", "All of the above are true." ], "question_id": "fs-id1612130", "question_text": "Which of the following statements is true?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "all of the above" }, "bloom": "1", "hl_context": "As previously discussed , vasoconstriction of an artery or arteriole decreases the radius , increasing resistance and pressure , but decreasing flow . Venoconstriction , on the other hand , has a very different outcome . The walls of veins are thin but irregular ; thus , when the smooth muscle in those walls constricts , the lumen becomes more rounded . 
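The resistance and flow relationships tested in questions 6 and 7 can be summarized with the standard hemodynamic relations. This is a minimal sketch in conventional physiology notation; the symbols below are standard conventions rather than quantities defined in this passage:

\[
F = \frac{\Delta P}{R}, \qquad R \propto \frac{\eta L}{r^{4}}
\]

Here \(F\) is blood flow, \(\Delta P\) is the pressure gradient driving the blood, \(R\) is resistance, \(\eta\) is blood viscosity, \(L\) is vessel length, and \(r\) is vessel radius. A longer vessel or more viscous blood raises resistance and therefore lowers flow, and because radius enters as a fourth power, even a small change in vessel diameter changes resistance dramatically.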
8. Hydrostatic pressure is ________.
a. greater than colloid osmotic pressure at the venous end of the capillary bed
b. the pressure exerted by fluid in an enclosed space
c. about zero at the midpoint of a capillary bed
d. all of the above
Answer: b. Hydrostatic pressure is the pressure of any fluid enclosed in a space; capillary hydrostatic pressure (CHP) is the force that drives fluid out of capillaries and into the tissues.

9. Net filtration pressure is calculated by ________.
a. adding the capillary hydrostatic pressure to the interstitial fluid hydrostatic pressure
b. subtracting the fluid drained by the lymphatic vessels from the total fluid in the interstitial fluid
c. adding the blood colloid osmotic pressure to the capillary hydrostatic pressure
d. subtracting the blood colloid osmotic pressure from the capillary hydrostatic pressure
Answer: d. The net filtration pressure (NFP) equals the difference between the CHP and the blood colloidal osmotic pressure (BCOP); when reabsorption is occurring, the NFP is a negative number. (A worked calculation follows.)
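As a worked illustration of the calculation in question 9, the figures below are typical textbook values for a systemic capillary, used here only as an example rather than quoted from this passage:

\[
\mathrm{NFP} = \mathrm{CHP} - \mathrm{BCOP}
\]
\[
\text{Arteriolar end: } 35~\mathrm{mm~Hg} - 25~\mathrm{mm~Hg} = +10~\mathrm{mm~Hg} \quad \text{(net filtration out of the capillary)}
\]
\[
\text{Venous end: } 18~\mathrm{mm~Hg} - 25~\mathrm{mm~Hg} = -7~\mathrm{mm~Hg} \quad \text{(net reabsorption into the capillary)}
\]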
10. Which of the following statements is true?
a. In one day, more fluid exits the capillary through filtration than enters through reabsorption.
b. In one day, approximately 35 mm of blood are filtered and 7 mm are reabsorbed.
c. In one day, the capillaries of the lymphatic system absorb about 20.4 liters of fluid.
d. None of the above are true.
Answer: a. Because overall CHP is higher than BCOP, more fluid exits the capillaries through filtration at the arterial end than reenters through reabsorption at the venous end; the excess is picked up by lymphatic capillaries and returned to the blood. (See the arithmetic below.)
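The daily fluid balance behind question 10 follows from simple arithmetic on the chapter's figures of approximately 24 liters filtered and 20.4 liters reabsorbed per day:

\[
24~\mathrm{L} - 20.4~\mathrm{L} \approx 3.6~\mathrm{L~per~day}
\]

This excess fluid is carried back to the venous blood by the lymphatic system.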
)", "hl_sentences": "Since overall CHP is higher than BCOP , it is inevitable that more net fluid will exit the capillary through filtration at the arterial end than enters through reabsorption at the venous end .", "question": { "cloze_format": "It is a true statement that ___ .", "normal_format": "Which of the following statements is true?", "question_choices": [ "In one day, more fluid exits the capillary through filtration than enters through reabsorption.", "In one day, approximately 35 mm of blood are filtered and 7 mm are reabsorbed.", "In one day, the capillaries of the lymphatic system absorb about 20.4 liters of fluid.", "None of the above are true." ], "question_id": "fs-id1304270", "question_text": "Which of the following statements is true?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "the cardiovascular center" }, "bloom": "1", "hl_context": "<hl> Neurological regulation of blood pressure and flow depends on the cardiovascular centers located in the medulla oblongata . <hl> <hl> This cluster of neurons responds to changes in blood pressure as well as blood concentrations of oxygen , carbon dioxide , and hydrogen ions . <hl> The cardiovascular center contains three distinct paired components :", "hl_sentences": "Neurological regulation of blood pressure and flow depends on the cardiovascular centers located in the medulla oblongata . This cluster of neurons responds to changes in blood pressure as well as blood concentrations of oxygen , carbon dioxide , and hydrogen ions .", "question": { "cloze_format": "Clusters of neurons in the medulla oblongata that regulate blood pressure are known collectively as ________.", "normal_format": "What are known clusters of neurons in the medulla oblongata that regulate blood pressure?", "question_choices": [ "baroreceptors", "angioreceptors", "the cardiomotor mechanism", "the cardiovascular center" ], "question_id": "fs-id1276786", "question_text": "Clusters of neurons in the medulla oblongata that regulate blood pressure are known collectively as ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "aldosterone prompts the kidneys to reabsorb sodium" }, "bloom": "1", "hl_context": "Angiotensin II is a powerful vasoconstrictor , greatly increasing blood pressure . It also stimulates the release of ADH and aldosterone , a hormone produced by the adrenal cortex . <hl> Aldosterone increases the reabsorption of sodium into the blood by the kidneys . <hl> Since water follows sodium , this increases the reabsorption of water . This in turn increases blood volume , raising blood pressure . Angiotensin II also stimulates the thirst center in the hypothalamus , so an individual will likely consume more fluids , again increasing blood volume and pressure .", "hl_sentences": "Aldosterone increases the reabsorption of sodium into the blood by the kidneys .", "question": { "cloze_format": "In the renin-angiotensin-aldosterone mechanism, ________.", "normal_format": "Which of the following is correct about the renin-angiotensin-aldosterone mechanism?", "question_choices": [ "decreased blood pressure prompts the release of renin from the liver", "aldosterone prompts increased urine output", "aldosterone prompts the kidneys to reabsorb sodium", "all of the above" ], "question_id": "fs-id2676342", "question_text": "In the renin-angiotensin-aldosterone mechanism, ________." 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "vascular smooth muscle responds to stretch" }, "bloom": "1", "hl_context": "When blood flow is low , the vessel ’ s smooth muscle will be only minimally stretched . In response , it relaxes , allowing the vessel to dilate and thereby increase the movement of blood into the tissue . <hl> When blood flow is too high , the smooth muscle will contract in response to the increased stretch , prompting vasoconstriction that reduces blood flow . <hl> <hl> The myogenic response is a reaction to the stretching of the smooth muscle in the walls of arterioles as changes in blood flow occur through the vessel . <hl> This may be viewed as a largely protective function against dramatic fluctuations in blood pressure and blood flow to maintain homeostasis . If perfusion of an organ is too low ( ischemia ) , the tissue will experience low levels of oxygen ( hypoxia ) . In contrast , excessive perfusion could damage the organ ’ s smaller and more fragile vessels . The myogenic response is a localized process that serves to stabilize blood flow in the capillary network that follows that arteriole .", "hl_sentences": "When blood flow is too high , the smooth muscle will contract in response to the increased stretch , prompting vasoconstriction that reduces blood flow . The myogenic response is a reaction to the stretching of the smooth muscle in the walls of arterioles as changes in blood flow occur through the vessel .", "question": { "cloze_format": "In the myogenic response, ________.", "normal_format": "Which of the following is correct abuout the myogenic response?", "question_choices": [ "muscle contraction promotes venous return to the heart", "ventricular contraction strength is decreased", "vascular smooth muscle responds to stretch", "endothelins dilate muscular arteries" ], "question_id": "fs-id2043767", "question_text": "In the myogenic response, ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "hypovolemic shock" }, "bloom": "1", "hl_context": "<hl> Hypovolemic shock in adults is typically caused by hemorrhage , although in children it may be caused by fluid losses related to severe vomiting or diarrhea . <hl> Other causes for hypovolemic shock include extensive burns , exposure to some toxins , and excessive urine loss related to diabetes insipidus or ketoacidosis . Typically , patients present with a rapid , almost tachycardic heart rate ; a weak pulse often described as “ thready ; ” cool , clammy skin , particularly in the extremities , due to restricted peripheral blood flow ; rapid , shallow breathing ; hypothermia ; thirst ; and dry mouth . 
Treatments generally involve providing intravenous fluids to restore the patient to normal function and various drugs such as dopamine , epinephrine , and norepinephrine to raise blood pressure .", "hl_sentences": "Hypovolemic shock in adults is typically caused by hemorrhage , although in children it may be caused by fluid losses related to severe vomiting or diarrhea .", "question": { "cloze_format": "A form of circulatory shock common in young children with severe diarrhea or vomiting is ________.", "normal_format": "Which circulatory shock is common in young children with severe diarrhea or vomiting?", "question_choices": [ "hypovolemic shock", "anaphylactic shock", "obstructive shock", "hemorrhagic shock" ], "question_id": "fs-id3086392", "question_text": "A form of circulatory shock common in young children with severe diarrhea or vomiting is ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "ascending aorta" }, "bloom": "1", "hl_context": "<hl> The first vessels that branch from the ascending aorta are the paired coronary arteries ( see Figure 20.25 ) , which arise from two of the three sinuses in the ascending aorta just superior to the aortic semilunar valve . <hl> These sinuses contain the aortic baroreceptors and chemoreceptors critical to maintain cardiac function . The left coronary artery arises from the left posterior aortic sinus . The right coronary artery arises from the anterior aortic sinus . Normally , the right posterior aortic sinus does not give rise to a vessel .", "hl_sentences": "The first vessels that branch from the ascending aorta are the paired coronary arteries ( see Figure 20.25 ) , which arise from two of the three sinuses in the ascending aorta just superior to the aortic semilunar valve .", "question": { "cloze_format": "The coronary arteries branch off of the ________.", "normal_format": "Which coronary arteries branch off?", "question_choices": [ "aortic valve", "ascending aorta", "aortic arch", "thoracic aorta" ], "question_id": "fs-id2902805", "question_text": "The coronary arteries branch off of the ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "The radial and ulnar arteries join to form the palmar arch." }, "bloom": "2", "hl_context": "<hl> Formed from anastomosis of the radial and ulnar arteries ; supply blood to the hand and digital arteries <hl> <hl> Palmar arches ( superficial and deep ) <hl>", "hl_sentences": "Formed from anastomosis of the radial and ulnar arteries ; supply blood to the hand and digital arteries Palmar arches ( superficial and deep )", "question": { "cloze_format": "A true statement is that ___ .", "normal_format": "Which of the following statements is true?", "question_choices": [ "The left and right common carotid arteries both branch off of the brachiocephalic trunk.", "The brachial artery is the distal branch of the axillary artery.", "The radial and ulnar arteries join to form the palmar arch.", "All of the above are true." ], "question_id": "fs-id2107147", "question_text": "Which of the following statements is true?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "celiac trunk" }, "bloom": "1", "hl_context": "<hl> Branch of the celiac trunk ; supplies blood to the stomach <hl> <hl> Also called the celiac artery ; a major branch of the abdominal aorta ; gives rise to the left gastric artery , the splenic artery , and the common hepatic artery that forms the hepatic artery to the liver , the right gastric artery to the stomach , and the cystic artery to the gall bladder <hl> Abdominal Aorta and Major Branches After crossing through the diaphragm at the aortic hiatus , the thoracic aorta is called the abdominal aorta ( see Figure 20.28 ) . This vessel remains to the left of the vertebral column and is embedded in adipose tissue behind the peritoneal cavity . It formally ends at approximately the level of vertebra L4 , where it bifurcates to form the common iliac arteries . Before this division , the abdominal aorta gives rise to several important branches . <hl> A single celiac trunk ( artery ) emerges and divides into the left gastric artery to supply blood to the stomach and esophagus , the splenic artery to supply blood to the spleen , and the common hepatic artery , which in turn gives rise to the hepatic artery proper to supply blood to the liver , the right gastric artery to supply blood to the stomach , the cystic artery to supply blood to the gall bladder , and several branches , one to supply blood to the duodenum and another to supply blood to the pancreas . <hl> Two additional single vessels arise from the abdominal aorta . These are the superior and inferior mesenteric arteries . The superior mesenteric artery arises approximately 2.5 cm after the celiac trunk and branches into several major vessels that supply blood to the small intestine ( duodenum , jejunum , and ileum ) , the pancreas , and a majority of the large intestine . The inferior mesenteric artery supplies blood to the distal segment of the large intestine , including the rectum . It arises approximately 5 cm superior to the common iliac arteries .", "hl_sentences": "Branch of the celiac trunk ; supplies blood to the stomach Also called the celiac artery ; a major branch of the abdominal aorta ; gives rise to the left gastric artery , the splenic artery , and the common hepatic artery that forms the hepatic artery to the liver , the right gastric artery to the stomach , and the cystic artery to the gall bladder A single celiac trunk ( artery ) emerges and divides into the left gastric artery to supply blood to the stomach and esophagus , the splenic artery to supply blood to the spleen , and the common hepatic artery , which in turn gives rise to the hepatic artery proper to supply blood to the liver , the right gastric artery to supply blood to the stomach , the cystic artery to supply blood to the gall bladder , and several branches , one to supply blood to the duodenum and another to supply blood to the pancreas .", "question": { "cloze_format": "Arteries serving the stomach, pancreas, and liver all branch from the ________.", "normal_format": "Where do arteries serving the stomach, pancreas, and liver all branch from?", "question_choices": [ "superior mesenteric artery", "inferior mesenteric artery", "celiac trunk", "splenic artery" ], "question_id": "fs-id2548674", "question_text": "Arteries serving the stomach, pancreas, and liver all branch from the ________." 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "all of the above are true" }, "bloom": "3", "hl_context": "<hl> Pair of veins that form from a fusion of the external and internal jugular veins and the subclavian vein ; subclavian , external and internal jugulars , vertebral , and internal thoracic veins flow into it ; drain the upper thoracic region and lead to the superior vena cava <hl> <hl> Brachiocephalic veins <hl>", "hl_sentences": "Pair of veins that form from a fusion of the external and internal jugular veins and the subclavian vein ; subclavian , external and internal jugulars , vertebral , and internal thoracic veins flow into it ; drain the upper thoracic region and lead to the superior vena cava Brachiocephalic veins", "question": { "cloze_format": "The right and left brachiocephalic veins ________.", "normal_format": "What do the right and left brachiocephalic veins do?", "question_choices": [ "drain blood from the right and left internal jugular veins", "drain blood from the right and left subclavian veins", "drain into the superior vena cava", "all of the above are true" ], "question_id": "fs-id3306950", "question_text": "The right and left brachiocephalic veins ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "liver" }, "bloom": "1", "hl_context": "<hl> Because of the hepatic portal system , the liver receives its blood supply from two different sources : from normal systemic circulation via the hepatic artery and from the hepatic portal vein . <hl> <hl> The liver processes the blood from the portal system to remove certain wastes and excess nutrients , which are stored for later use . <hl> This processed blood , as well as the systemic blood that came from the hepatic artery , exits the liver via the right , left , and middle hepatic veins , and flows into the inferior vena cava . Overall systemic blood composition remains relatively stable , since the liver is able to metabolize the absorbed digestive components . 20.6 Development of Blood Vessels and Fetal Circulation Learning Objectives By the end of this section , you will be able to : The hepatic portal system consists of the hepatic portal vein and the veins that drain into it . The hepatic portal vein itself is relatively short , beginning at the level of L2 with the confluence of the superior mesenteric and splenic veins . It also receives branches from the inferior mesenteric vein , plus the splenic veins and all their tributaries . The superior mesenteric vein receives blood from the small intestine , two-thirds of the large intestine , and the stomach . The inferior mesenteric vein drains the distal third of the large intestine , including the descending colon , the sigmoid colon , and the rectum . The splenic vein is formed from branches from the spleen , pancreas , and portions of the stomach , and the inferior mesenteric vein . After its formation , the hepatic portal vein also receives branches from the gastric veins of the stomach and cystic veins from the gall bladder . <hl> The hepatic portal vein delivers materials from these digestive and circulatory organs directly to the liver for processing . <hl>", "hl_sentences": "Because of the hepatic portal system , the liver receives its blood supply from two different sources : from normal systemic circulation via the hepatic artery and from the hepatic portal vein . 
The liver processes the blood from the portal system to remove certain wastes and excess nutrients , which are stored for later use . The hepatic portal vein delivers materials from these digestive and circulatory organs directly to the liver for processing .", "question": { "cloze_format": "The hepatic portal system delivers blood from the digestive organs to the ________.", "normal_format": "What does the hepatic portal system deliver blood from the digestive organs to?", "question_choices": [ "liver", "hypothalamus", "spleen", "left atrium" ], "question_id": "fs-id1929827", "question_text": "The hepatic portal system delivers blood from the digestive organs to the ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "masses of developing blood vessels and formed elements scattered throughout the embryonic disc" }, "bloom": "1", "hl_context": "In a developing embryo , the heart has developed enough by day 21 post-fertilization to begin beating . Circulation patterns are clearly established by the fourth week of embryonic life . It is critical to the survival of the developing human that the circulatory system forms early to supply the growing tissue with nutrients and gases , and to remove waste products . Blood cells and vessel production in structures outside the embryo proper called the yolk sac , chorion , and connecting stalk begin about 15 to 16 days following fertilization . Development of these circulatory elements within the embryo itself begins approximately 2 days later . You will learn more about the formation and function of these early structures when you study the chapter on development . During those first few weeks , blood vessels begin to form from the embryonic mesoderm . The precursor cells are known as hemangioblasts . These in turn differentiate into angioblasts , which give rise to the blood vessels and pluripotent stem cells , which differentiate into the formed elements of blood . ( Seek additional content for more detail on fetal development and circulation . ) <hl> Together , these cells form masses known as blood islands scattered throughout the embryonic disc . <hl> Spaces appear on the blood islands that develop into vessel lumens . The endothelial lining of the vessels arise from the angioblasts within these islands . Surrounding mesenchymal cells give rise to the smooth muscle and connective tissue layers of the vessels . While the vessels are developing , the pluripotent stem cells begin to form the blood . Vascular tubes also develop on the blood islands , and they eventually connect to one another as well as to the developing , tubular heart . Thus , the developmental pattern , rather than beginning from the formation of one central vessel and spreading outward , occurs in many regions simultaneously with vessels later joining together . 
This angiogenesis — the creation of new blood vessels from existing ones — continues as needed throughout life as we grow and develop .", "hl_sentences": "Together , these cells form masses known as blood islands scattered throughout the embryonic disc .", "question": { "cloze_format": "Blood islands are ________.", "normal_format": "What are blood islands?", "question_choices": [ "clusters of blood-filtering cells in the placenta", "masses of pluripotent stem cells scattered throughout the fetal bone marrow", "vascular tubes that give rise to the embryonic tubular heart", "masses of developing blood vessels and formed elements scattered throughout the embryonic disc" ], "question_id": "fs-id1440150", "question_text": "Blood islands are ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "One umbilical vein carries oxygen-rich blood from the placenta to the fetal heart." }, "bloom": "2", "hl_context": "As the embryo grows within the mother ’ s uterus , its requirements for nutrients and gas exchange also grow . The placenta — a circulatory organ unique to pregnancy — develops jointly from the embryo and uterine wall structures to fill this need . <hl> Emerging from the placenta is the umbilical vein , which carries oxygen-rich blood from the mother to the fetal inferior vena cava via the ductus venosus to the heart that pumps it into fetal circulation . <hl> Two umbilical arteries carry oxygen-depleted fetal blood , including wastes and carbon dioxide , to the placenta . Remnants of the umbilical arteries remain in the adult . ( Seek additional content for more information on the role of the placenta in fetal circulation . )", "hl_sentences": "Emerging from the placenta is the umbilical vein , which carries oxygen-rich blood from the mother to the fetal inferior vena cava via the ductus venosus to the heart that pumps it into fetal circulation .", "question": { "cloze_format": "It is true that ___", "normal_format": "Which of the following statements is true?", "question_choices": [ "Two umbilical veins carry oxygen-depleted blood from the fetal circulation to the placenta.", "One umbilical vein carries oxygen-rich blood from the placenta to the fetal heart.", "Two umbilical arteries carry oxygen-depleted blood to the fetal lungs.", "None of the above are true." ], "question_id": "fs-id2177683", "question_text": "Which of the following statements is true?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "most freshly oxygenated blood to flow into the fetal heart" }, "bloom": "1", "hl_context": "<hl> The ductus venosus is a temporary blood vessel that branches from the umbilical vein , allowing much of the freshly oxygenated blood from the placenta — the organ of gas exchange between the mother and fetus — to bypass the fetal liver and go directly to the fetal heart . 
<hl> The ductus venosus closes slowly during the first weeks of infancy and degenerates to become the ligamentum venosum .", "hl_sentences": "The ductus venosus is a temporary blood vessel that branches from the umbilical vein , allowing much of the freshly oxygenated blood from the placenta — the organ of gas exchange between the mother and fetus — to bypass the fetal liver and go directly to the fetal heart .", "question": { "cloze_format": "The ductus venosus is a shunt that allows ________.", "normal_format": "The ductus venosus is a shunt that allows what?", "question_choices": [ "fetal blood to flow from the right atrium to the left atrium", "fetal blood to flow from the right ventricle to the left ventricle", "most freshly oxygenated blood to flow into the fetal heart", "most oxygen-depleted fetal blood to flow directly into the fetal pulmonary trunk" ], "question_id": "fs-id1584918", "question_text": "The ductus venosus is a shunt that allows ________." }, "references_are_paraphrase": null } ]
20.1 Structure and Function of Blood Vessels

Learning Objectives

By the end of this section, you will be able to:
Compare and contrast the three tunics that make up the walls of most blood vessels
Distinguish between elastic arteries, muscular arteries, and arterioles on the basis of structure, location, and function
Describe the basic structure of a capillary bed, from the supplying metarteriole to the venule into which it drains
Explain the structure and function of venous valves in the large veins of the extremities

Blood is carried through the body via blood vessels. An artery is a blood vessel that carries blood away from the heart, where it branches into ever-smaller vessels. Eventually, the smallest arteries, vessels called arterioles, further branch into tiny capillaries, where nutrients and wastes are exchanged, and then combine with other vessels that exit capillaries to form venules, small blood vessels that carry blood to a vein, a larger blood vessel that returns blood to the heart.

Arteries and veins transport blood in two distinct circuits: the systemic circuit and the pulmonary circuit (Figure 20.2). Systemic arteries provide blood rich in oxygen to the body's tissues. The blood returned to the heart through systemic veins has less oxygen, since much of the oxygen carried by the arteries has been delivered to the cells. In contrast, in the pulmonary circuit, arteries carry blood low in oxygen exclusively to the lungs for gas exchange. Pulmonary veins then return freshly oxygenated blood from the lungs to the heart to be pumped back out into systemic circulation. Although arteries and veins differ structurally and functionally, they share certain features.

Shared Structures

Different types of blood vessels vary slightly in their structures, but they share the same general features. Arteries and arterioles have thicker walls than veins and venules because they are closer to the heart and receive blood that is surging at a far greater pressure (Figure 20.3). Each type of vessel has a lumen—a hollow passageway through which blood flows. Arteries have smaller lumens than veins, a characteristic that helps to maintain the pressure of blood moving through the system. Together, their thicker walls and smaller diameters give arterial lumens a more rounded appearance in cross section than the lumens of veins.

By the time blood has passed through capillaries and entered venules, the pressure initially exerted upon it by heart contractions has diminished. In other words, in comparison to arteries, venules and veins withstand a much lower pressure from the blood that flows through them. Their walls are considerably thinner and their lumens are correspondingly larger in diameter, allowing more blood to flow with less vessel resistance. In addition, many veins of the body, particularly those of the limbs, contain valves that assist the unidirectional flow of blood toward the heart. This is critical because blood flow becomes sluggish in the extremities, as a result of the lower pressure and the effects of gravity.

The walls of arteries and veins are largely composed of living cells and their products (including collagenous and elastic fibers); the cells require nourishment and produce waste. Since blood passes through the larger vessels relatively quickly, there is limited opportunity for blood in the lumen of the vessel to provide nourishment to or remove waste from the vessel's cells.
Further, the walls of the larger vessels are too thick for nutrients to diffuse through to all of the cells. Larger arteries and veins contain small blood vessels within their walls known as the vasa vasorum—literally "vessels of the vessel"—to provide them with this critical exchange. Since the pressure within arteries is relatively high, the vasa vasorum must function in the outer layers of the vessel (see Figure 20.3) or the pressure exerted by the blood passing through the vessel would collapse it, preventing any exchange from occurring. The lower pressure within veins allows the vasa vasorum to be located closer to the lumen. The restriction of the vasa vasorum to the outer layers of arteries is thought to be one reason that arterial diseases are more common than venous diseases, since its location makes it more difficult to nourish the cells of the arteries and remove waste products. There are also minute nerves within the walls of both types of vessels that control the contraction and dilation of smooth muscle. These minute nerves are known as the nervi vasorum.

Both arteries and veins have the same three distinct tissue layers, called tunics (from the Latin term tunica, for the garments first worn by ancient Romans; the term tunic is also used for some modern garments). From the most interior layer to the outer, these tunics are the tunica intima, the tunica media, and the tunica externa (see Figure 20.3). Table 20.1 compares and contrasts the tunics of the arteries and veins.

Table 20.1 Comparison of Tunics in Arteries and Veins

General appearance
  Arteries: Thick walls with small lumens; generally appear rounded
  Veins: Thin walls with large lumens; generally appear flattened

Tunica intima
  Arteries: Endothelium usually appears wavy due to constriction of smooth muscle; internal elastic membrane present in larger vessels
  Veins: Endothelium appears smooth; internal elastic membrane absent

Tunica media
  Arteries: Normally the thickest layer in arteries; smooth muscle cells and elastic fibers predominate (the proportions of these vary with distance from the heart); external elastic membrane present in larger vessels
  Veins: Normally thinner than the tunica externa; smooth muscle cells and collagenous fibers predominate; nervi vasorum and vasa vasorum present; external elastic membrane absent

Tunica externa
  Arteries: Normally thinner than the tunica media in all but the largest arteries; collagenous and elastic fibers; nervi vasorum and vasa vasorum present
  Veins: Normally the thickest layer in veins; collagenous and smooth fibers predominate; some smooth muscle fibers; nervi vasorum and vasa vasorum present

Tunica Intima

The tunica intima (also called the tunica interna) is composed of epithelial and connective tissue layers. Lining the tunica intima is the specialized simple squamous epithelium called the endothelium, which is continuous throughout the entire vascular system, including the lining of the chambers of the heart. Damage to this endothelial lining and exposure of blood to the collagenous fibers beneath is one of the primary causes of clot formation. Until recently, the endothelium was viewed simply as the boundary between the blood in the lumen and the walls of the vessels. Recent studies, however, have shown that it is physiologically critical to such activities as helping to regulate capillary exchange and altering blood flow. The endothelium releases local chemicals called endothelins that can constrict the smooth muscle within the walls of the vessel to increase blood pressure.
Uncompensated overproduction of endothelins may contribute to hypertension (high blood pressure) and cardiovascular disease.

Next to the endothelium is the basement membrane, or basal lamina, that effectively binds the endothelium to the connective tissue. The basement membrane provides strength while maintaining flexibility, and it is permeable, allowing materials to pass through it. The thin outer layer of the tunica intima contains a small amount of areolar connective tissue that consists primarily of elastic fibers to provide the vessel with additional flexibility; it also contains some collagenous fibers to provide additional strength.

In larger arteries, there is also a thick, distinct layer of elastic fibers known as the internal elastic membrane (also called the internal elastic lamina) at the boundary with the tunica media. Like the other components of the tunica intima, the internal elastic membrane provides structure while allowing the vessel to stretch. It is permeated with small openings that allow exchange of materials between the tunics. The internal elastic membrane is not apparent in veins. In addition, many veins, particularly in the lower limbs, contain valves formed by sections of thickened endothelium that are reinforced with connective tissue, extending into the lumen.

Under the microscope, the lumen and the entire tunica intima of a vein will appear smooth, whereas those of an artery will normally appear wavy because of the partial constriction of the smooth muscle in the tunica media, the next layer of blood vessel walls.

Tunica Media

The tunica media is the substantial middle layer of the vessel wall (see Figure 20.3). It is generally the thickest layer in arteries, and it is much thicker in arteries than it is in veins. The tunica media consists of layers of smooth muscle supported by connective tissue that is primarily made up of elastic fibers, most of which are arranged in circular sheets. Toward the outer portion of the tunic, there are also layers of longitudinal muscle. Contraction and relaxation of the circular muscles decrease and increase the diameter of the vessel lumen, respectively. Specifically in arteries, vasoconstriction decreases blood flow as the smooth muscle in the walls of the tunica media contracts, making the lumen narrower and increasing blood pressure. Similarly, vasodilation increases blood flow as the smooth muscle relaxes, allowing the lumen to widen and blood pressure to drop.

Both vasoconstriction and vasodilation are regulated in part by small vascular nerves, known as nervi vasorum, or "nerves of the vessel," that run within the walls of blood vessels. These are generally all sympathetic fibers, although some trigger vasodilation and others induce vasoconstriction, depending upon the nature of the neurotransmitter and receptors located on the target cell. Parasympathetic stimulation does trigger vasodilation as well as erection during sexual arousal in the external genitalia of both sexes. Nervous control over vessels tends to be more generalized than the specific targeting of individual blood vessels. Local controls, discussed later, account for this phenomenon. (Seek additional content for more information on these dynamic aspects of the autonomic nervous system.) Hormones and local chemicals also control blood vessels. Together, these neural and chemical mechanisms reduce or increase blood flow in response to changing body conditions, from exercise to hydration.
Regulation of both blood flow and blood pressure is discussed in detail later in this chapter.

The smooth muscle layers of the tunica media are supported by a framework of collagenous fibers that also binds the tunica media to the inner and outer tunics. Along with the collagenous fibers are large numbers of elastic fibers that appear as wavy lines in prepared slides. Separating the tunica media from the outer tunica externa in larger arteries is the external elastic membrane (also called the external elastic lamina), which also appears wavy in slides. This structure is not usually seen in smaller arteries, nor is it seen in veins.

Tunica Externa

The outer tunic, the tunica externa (also called the tunica adventitia), is a substantial sheath of connective tissue composed primarily of collagenous fibers. Some bands of elastic fibers are found here as well. The tunica externa in veins also contains groups of smooth muscle fibers. This is normally the thickest tunic in veins and may be thicker than the tunica media in some larger arteries. The outer layers of the tunica externa are not distinct but rather blend with the surrounding connective tissue outside the vessel, helping to hold the vessel in relative position. If you are able to palpate some of the superficial veins on your upper limbs and try to move them, you will find that the tunica externa prevents this. If the tunica externa did not hold the vessel in place, any movement would likely result in disruption of blood flow.

Arteries

An artery is a blood vessel that conducts blood away from the heart. All arteries have relatively thick walls that can withstand the high pressure of blood ejected from the heart. However, those close to the heart have the thickest walls, containing a high percentage of elastic fibers in all three of their tunics. This type of artery is known as an elastic artery (Figure 20.4). Vessels larger than 10 mm in diameter are typically elastic. Their abundant elastic fibers allow them to expand, as blood pumped from the ventricles passes through them, and then to recoil after the surge has passed. If artery walls were rigid and unable to expand and recoil, their resistance to blood flow would greatly increase and blood pressure would rise to even higher levels, which would in turn require the heart to pump harder to increase the volume of blood expelled by each pump (the stroke volume) and maintain adequate pressure and flow. Artery walls would have to become even thicker in response to this increased pressure. The elastic recoil of the vascular wall helps to maintain the pressure gradient that drives the blood through the arterial system.

An elastic artery is also known as a conducting artery, because the large diameter of the lumen enables it to accept a large volume of blood from the heart and conduct it to smaller branches. Farther from the heart, where the surge of blood has dampened, the percentage of elastic fibers in an artery's tunica intima decreases and the amount of smooth muscle in its tunica media increases. The artery at this point is described as a muscular artery. The diameter of muscular arteries typically ranges from 0.1 mm to 10 mm. Their thick tunica media allows muscular arteries to play a leading role in vasoconstriction. In contrast, their decreased quantity of elastic fibers limits their ability to expand. Fortunately, because the blood pressure has eased by the time it reaches these more distant vessels, elasticity has become less important.
Notice that although the distinctions between elastic and muscular arteries are important, there is no "line of demarcation" where an elastic artery suddenly becomes muscular. Rather, there is a gradual transition as the vascular tree repeatedly branches. In turn, muscular arteries branch to distribute blood to the vast network of arterioles. For this reason, a muscular artery is also known as a distributing artery.

Arterioles

An arteriole is a very small artery that leads to a capillary. Arterioles have the same three tunics as the larger vessels, but the thickness of each is greatly diminished. The critical endothelial lining of the tunica intima is intact. The tunica media is restricted to one or two smooth muscle cell layers in thickness. The tunica externa remains but is very thin (see Figure 20.4).

With a lumen averaging 30 micrometers or less in diameter, arterioles are critical in slowing down—or resisting—blood flow and, thus, causing a substantial drop in blood pressure. Because of this, you may see them referred to as resistance vessels. The muscle fibers in arterioles are normally slightly contracted, causing arterioles to maintain a consistent muscle tone—in this case referred to as vascular tone—in a similar manner to the muscular tone of skeletal muscle. In reality, all blood vessels exhibit vascular tone due to the partial contraction of smooth muscle. The importance of the arterioles is that they will be the primary site of both resistance and regulation of blood pressure. The precise diameter of the lumen of an arteriole at any given moment is determined by neural and chemical controls, and vasoconstriction and vasodilation in the arterioles are the primary mechanisms for distribution of blood flow.

Capillaries

A capillary is a microscopic channel that supplies blood to the tissues themselves, a process called perfusion. Exchange of gases and other substances occurs in the capillaries between the blood and the surrounding cells and their tissue fluid (interstitial fluid). The diameter of a capillary lumen ranges from 5–10 micrometers; the smallest are just barely wide enough for an erythrocyte to squeeze through. Flow through capillaries is often described as microcirculation.

The wall of a capillary consists of the endothelial layer surrounded by a basement membrane with occasional smooth muscle fibers. There is some variation in wall structure: In a large capillary, several endothelial cells bordering each other may line the lumen; in a small capillary, there may be only a single cell layer that wraps around to contact itself. For capillaries to function, their walls must be leaky, allowing substances to pass through. There are three major types of capillaries, which differ according to their degree of "leakiness": continuous, fenestrated, and sinusoid capillaries (Figure 20.5).

Continuous Capillaries

The most common type of capillary, the continuous capillary, is found in almost all vascularized tissues. Continuous capillaries are characterized by a complete endothelial lining with tight junctions between endothelial cells. Although a tight junction is usually impermeable and only allows for the passage of water and ions, they are often incomplete in capillaries, leaving intercellular clefts that allow for exchange of water and other very small molecules between the blood plasma and the interstitial fluid.
Substances that can pass between cells include metabolic products, such as glucose, water, and small hydrophobic molecules like gases and hormones, as well as various leukocytes. Continuous capillaries not associated with the brain are rich in transport vesicles, contributing to either endocytosis or exocytosis. Those in the brain are part of the blood-brain barrier. Here, there are tight junctions and no intercellular clefts, plus a thick basement membrane and astrocyte extensions called end feet; these structures combine to prevent the movement of nearly all substances.

Fenestrated Capillaries

A fenestrated capillary is one that has pores (or fenestrations) in addition to tight junctions in the endothelial lining. These make the capillary permeable to larger molecules. The number of fenestrations and their degree of permeability vary, however, according to their location. Fenestrated capillaries are common in the small intestine, which is the primary site of nutrient absorption, as well as in the kidneys, which filter the blood. They are also found in the choroid plexus of the brain and many endocrine structures, including the hypothalamus, pituitary, pineal, and thyroid glands.

Sinusoid Capillaries

A sinusoid capillary (or sinusoid) is the least common type of capillary. Sinusoid capillaries are flattened, and they have extensive intercellular gaps and incomplete basement membranes, in addition to intercellular clefts and fenestrations. This gives them an appearance not unlike Swiss cheese. These very large openings allow for the passage of the largest molecules, including plasma proteins and even cells. Blood flow through sinusoids is very slow, allowing more time for exchange of gases, nutrients, and wastes. Sinusoids are found in the liver and spleen, bone marrow, lymph nodes (where they carry lymph, not blood), and many endocrine glands including the pituitary and adrenal glands. Without these specialized capillaries, these organs would not be able to provide their myriad of functions. For example, when bone marrow forms new blood cells, the cells must enter the blood supply and can only do so through the large openings of a sinusoid capillary; they cannot pass through the small openings of continuous or fenestrated capillaries. The liver also requires extensive specialized sinusoid capillaries in order to process the materials brought to it by the hepatic portal vein from both the digestive tract and spleen, and to release plasma proteins into circulation.

Metarterioles and Capillary Beds

A metarteriole is a type of vessel that has structural characteristics of both an arteriole and a capillary. Slightly larger than the typical capillary, the smooth muscle of the tunica media of the metarteriole is not continuous but forms rings of smooth muscle (sphincters) prior to the entrance to the capillaries. Each metarteriole arises from a terminal arteriole and branches to supply blood to a capillary bed that may consist of 10–100 capillaries.

The precapillary sphincters, circular smooth muscle cells that surround the capillary at its origin with the metarteriole, tightly regulate the flow of blood from a metarteriole to the capillaries it supplies. Their function is critical: If all of the capillary beds in the body were to open simultaneously, they would collectively hold every drop of blood in the body and there would be none in the arteries, arterioles, venules, veins, or the heart itself. Normally, the precapillary sphincters are closed.
When the surrounding tissues need oxygen and have excess waste products, the precapillary sphincters open, allowing blood to flow through and exchange to occur before closing once more (Figure 20.6). If all of the precapillary sphincters in a capillary bed are closed, blood will flow from the metarteriole directly into a thoroughfare channel and then into the venous circulation, bypassing the capillary bed entirely. This creates what is known as a vascular shunt. In addition, an arteriovenous anastomosis may bypass the capillary bed and lead directly to the venous system. Although you might expect blood flow through a capillary bed to be smooth, in reality, it moves with an irregular, pulsating flow. This pattern is called vasomotion and is regulated by chemical signals that are triggered in response to changes in internal conditions, such as oxygen, carbon dioxide, hydrogen ion, and lactic acid levels. For example, during strenuous exercise when oxygen levels decrease and carbon dioxide, hydrogen ion, and lactic acid levels all increase, the capillary beds in skeletal muscle are open, as they would be in the digestive system when nutrients are present in the digestive tract. During sleep or rest periods, vessels in both areas are largely closed; they open only occasionally to allow oxygen and nutrient supplies to travel to the tissues to maintain basic life processes.

Venules

A venule is an extremely small vein, generally 8–100 micrometers in diameter. Postcapillary venules join multiple capillaries exiting from a capillary bed. Multiple venules join to form veins. The walls of venules consist of endothelium, a thin middle layer with a few muscle cells and elastic fibers, plus an outer layer of connective tissue fibers that constitute a very thin tunica externa (Figure 20.7). Venules as well as capillaries are the primary sites of emigration or diapedesis, in which the white blood cells adhere to the endothelial lining of the vessels and then squeeze through adjacent cells to enter the tissue fluid.

Veins

A vein is a blood vessel that conducts blood toward the heart. Compared to arteries, veins are thin-walled vessels with large and irregular lumens (see Figure 20.7). Because they are low-pressure vessels, larger veins are commonly equipped with valves that promote the unidirectional flow of blood toward the heart and prevent backflow toward the capillaries caused by the inherent low blood pressure in veins as well as the pull of gravity. Table 20.2 compares the features of arteries and veins.

Table 20.2 Comparison of Arteries and Veins

Direction of blood flow: Arteries conduct blood away from the heart; veins conduct blood toward the heart.
General appearance: Arteries are rounded; veins are irregular and often collapsed.
Pressure: High in arteries; low in veins.
Wall thickness: Thick in arteries; thin in veins.
Relative oxygen concentration: Higher in systemic arteries, lower in pulmonary arteries; lower in systemic veins, higher in pulmonary veins.
Valves: Not present in arteries; present in veins, most commonly in the limbs and in veins inferior to the heart.

Disorders of the... Cardiovascular System: Edema and Varicose Veins

Despite the presence of valves and the contributions of other anatomical and physiological adaptations we will cover shortly, over the course of a day, some blood will inevitably pool, especially in the lower limbs, due to the pull of gravity. Any blood that accumulates in a vein will increase the pressure within it, which can then be reflected back into the smaller veins, venules, and eventually even the capillaries.
Increased pressure will promote the flow of fluids out of the capillaries and into the interstitial fluid. The presence of excess tissue fluid around the cells leads to a condition called edema. Most people experience a daily accumulation of tissue fluid, especially if they spend much of their work life on their feet (like most health professionals). However, clinical edema goes beyond normal swelling and requires medical treatment. Edema has many potential causes, including hypertension and heart failure, severe protein deficiency, renal failure, and many others. In order to treat edema, which is a sign rather than a discrete disorder, the underlying cause must be diagnosed and alleviated.

Edema may be accompanied by varicose veins, especially in the superficial veins of the legs (Figure 20.8). This disorder arises when defective valves allow blood to accumulate within the veins, causing them to distend, twist, and become visible on the surface of the integument. Varicose veins may occur in both sexes, but are more common in women and are often related to pregnancy. More than simple cosmetic blemishes, varicose veins are often painful and sometimes itchy or throbbing. Without treatment, they tend to grow worse over time. The use of support hose, as well as elevating the feet and legs whenever possible, may be helpful in alleviating this condition. Laser surgery and interventional radiologic procedures can reduce the size and severity of varicose veins. Severe cases may require conventional surgery to remove the damaged vessels. As there are typically redundant circulation patterns, that is, anastomoses, for the smaller and more superficial veins, removal does not typically impair the circulation. There is evidence that patients with varicose veins suffer a greater risk of developing a thrombus or clot.

Veins as Blood Reservoirs

In addition to their primary function of returning blood to the heart, veins may be considered blood reservoirs, since systemic veins contain approximately 64 percent of the blood volume at any given time (Figure 20.9). Their ability to hold this much blood is due to their high capacitance, that is, their capacity to distend (expand) readily to store a high volume of blood, even at a low pressure. The large lumens and relatively thin walls of veins make them far more distensible than arteries; thus, they are said to be capacitance vessels. When blood flow needs to be redistributed to other portions of the body, the vasomotor center located in the medulla oblongata sends sympathetic stimulation to the smooth muscles in the walls of the veins, causing constriction—or in this case, venoconstriction. Less dramatic than the vasoconstriction seen in smaller arteries and arterioles, venoconstriction may be likened to a "stiffening" of the vessel wall. This increases pressure on the blood within the veins, speeding its return to the heart. As you will note in Figure 20.9, approximately 21 percent of the venous blood is located in venous networks within the liver, bone marrow, and integument. This volume of blood is referred to as the venous reserve. Through venoconstriction, this "reserve" volume of blood can return to the heart more quickly for redistribution to other parts of the circulation.

Career Connection

Vascular Surgeons and Technicians

Vascular surgery is a specialty in which the physician deals primarily with diseases of the vascular portion of the cardiovascular system.
This includes repair and replacement of diseased or damaged vessels, removal of plaque from vessels, minimally invasive procedures including the insertion of venous catheters, and traditional surgery. Following completion of medical school, the physician generally completes a 5-year surgical residency followed by an additional 1 to 2 years of vascular specialty training. In the United States, most vascular surgeons are members of the Society for Vascular Surgery.

Vascular technicians are specialists in imaging technologies that provide information on the health of the vascular system. They may also assist physicians in treating disorders involving the arteries and veins. This profession often overlaps with cardiovascular technology, which would also include treatments involving the heart. Although the profession is recognized by the American Medical Association, there are currently no licensing requirements for vascular technicians; licensing is voluntary. Vascular technicians typically have an Associate's degree or certificate, involving 18 months to 2 years of training. The United States Bureau of Labor Statistics projects this profession to grow by 29 percent from 2010 to 2020.

Interactive Link

Visit this site to learn more about vascular surgery.

Interactive Link

Visit this site to learn more about vascular technicians.

20.2 Blood Flow, Blood Pressure, and Resistance

Learning Objectives

By the end of this section, you will be able to:
- Distinguish between systolic pressure, diastolic pressure, pulse pressure, and mean arterial pressure
- Describe the clinical measurement of pulse and blood pressure
- Identify and discuss five variables affecting arterial blood flow and blood pressure
- Discuss several factors affecting blood flow in the venous system

Blood flow refers to the movement of blood through a vessel, tissue, or organ, and is usually expressed in terms of volume of blood per unit of time. It is initiated by the contraction of the ventricles of the heart. Ventricular contraction ejects blood into the major arteries, resulting in flow from regions of higher pressure to regions of lower pressure, as blood encounters smaller arteries and arterioles, then capillaries, then the venules and veins of the venous system. This section discusses a number of critical variables that contribute to blood flow throughout the body. It also discusses the factors that impede or slow blood flow, a phenomenon known as resistance.

As noted earlier, hydrostatic pressure is the force exerted by a fluid due to gravitational pull, usually against the wall of the container in which it is located. One form of hydrostatic pressure is blood pressure, the force exerted by blood upon the walls of the blood vessels or the chambers of the heart. Blood pressure may be measured in capillaries and veins, as well as the vessels of the pulmonary circulation; however, the term blood pressure without any specific descriptors typically refers to systemic arterial blood pressure—that is, the pressure of blood flowing in the arteries of the systemic circulation. In clinical practice, this pressure is measured in mm Hg and is usually obtained using the brachial artery of the arm.

Components of Arterial Blood Pressure

Arterial blood pressure in the larger vessels consists of several distinct components (Figure 20.10): systolic and diastolic pressures, pulse pressure, and mean arterial pressure.
Systolic and Diastolic Pressures

When systemic arterial blood pressure is measured, it is recorded as a ratio of two numbers (e.g., 120/80 is a normal adult blood pressure), expressed as systolic pressure over diastolic pressure. The systolic pressure is the higher value (typically around 120 mm Hg) and reflects the arterial pressure resulting from the ejection of blood during ventricular contraction, or systole. The diastolic pressure is the lower value (usually about 80 mm Hg) and represents the arterial pressure of blood during ventricular relaxation, or diastole.

Pulse Pressure

As shown in Figure 20.10, the difference between the systolic pressure and the diastolic pressure is the pulse pressure. For example, an individual with a systolic pressure of 120 mm Hg and a diastolic pressure of 80 mm Hg would have a pulse pressure of 40 mm Hg. Generally, a pulse pressure should be at least 25 percent of the systolic pressure. A pulse pressure below this level is described as low or narrow. This may occur, for example, in patients with a low stroke volume, which may be seen in congestive heart failure, stenosis of the aortic valve, or significant blood loss following trauma. In contrast, a high or wide pulse pressure is common in healthy people following strenuous exercise, when their resting pulse pressure of 30–40 mm Hg may increase temporarily to 100 mm Hg as stroke volume increases. A persistently high pulse pressure at or above 100 mm Hg may indicate excessive resistance in the arteries and can be caused by a variety of disorders. Chronic high resting pulse pressures can degrade the heart, brain, and kidneys, and warrant medical treatment.

Mean Arterial Pressure

Mean arterial pressure (MAP) represents the "average" pressure of blood in the arteries, that is, the average force driving blood into vessels that serve the tissues. Mean is a statistical concept and is calculated by taking the sum of the values divided by the number of values. Although complicated to measure directly and to calculate, MAP can be approximated by adding one-third of the pulse pressure (systolic pressure minus diastolic pressure) to the diastolic pressure:

MAP = diastolic BP + (systolic BP − diastolic BP) / 3

In Figure 20.10, this value is approximately 80 + (120 − 80) / 3, or 93.33. Normally, the MAP falls within the range of 70–110 mm Hg. If the value falls below 60 mm Hg for an extended time, blood pressure will not be high enough to ensure circulation to and through the tissues, which results in ischemia, or insufficient blood flow. A condition called hypoxia, inadequate oxygenation of tissues, commonly accompanies ischemia. The term hypoxemia refers to low levels of oxygen in systemic arterial blood. Neurons are especially sensitive to hypoxia and may die or be damaged if blood flow and oxygen supplies are not quickly restored.
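The pulse pressure and MAP arithmetic above is easy to check programmatically. The following is a minimal sketch in Python; the function names are illustrative, not taken from any clinical library, and it simply reproduces the worked example of 120/80 mm Hg:

```python
def pulse_pressure(systolic: float, diastolic: float) -> float:
    """Pulse pressure (mm Hg) is systolic minus diastolic pressure."""
    return systolic - diastolic

def mean_arterial_pressure(systolic: float, diastolic: float) -> float:
    """Approximate MAP (mm Hg) as diastolic plus one-third of the pulse pressure."""
    return diastolic + pulse_pressure(systolic, diastolic) / 3

print(pulse_pressure(120, 80))                    # 40 mm Hg
print(round(mean_arterial_pressure(120, 80), 2))  # 93.33 mm Hg, matching the text
```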
Pulse

After blood is ejected from the heart, elastic fibers in the arteries help maintain a high-pressure gradient as they expand to accommodate the blood, then recoil. This expansion and recoiling effect, known as the pulse, can be palpated manually or measured electronically. Although the effect diminishes over distance from the heart, elements of the systolic and diastolic components of the pulse are still evident down to the level of the arterioles.

Because pulse indicates heart rate, it is measured clinically to provide clues to a patient's state of health. It is recorded as beats per minute. Both the rate and the strength of the pulse are important clinically. A high or irregular pulse rate can be caused by physical activity or other temporary factors, but it may also indicate a heart condition. The pulse strength indicates the strength of ventricular contraction and cardiac output. If the pulse is strong, then systolic pressure is high. If it is weak, systolic pressure has fallen, and medical intervention may be warranted.

Pulse can be palpated manually by placing the tips of the fingers across an artery that runs close to the body surface and pressing lightly. While this procedure is normally performed using the radial artery in the wrist or the common carotid artery in the neck, any superficial artery that can be palpated may be used (Figure 20.11). Common sites to find a pulse include temporal and facial arteries in the head, brachial arteries in the upper arm, femoral arteries in the thigh, popliteal arteries behind the knees, posterior tibial arteries near the medial tarsal regions, and dorsalis pedis arteries in the feet. A variety of commercial electronic devices are also available to measure pulse.

Measurement of Blood Pressure

Blood pressure is one of the critical parameters measured on virtually every patient in every healthcare setting. The technique used today was developed more than 100 years ago by a pioneering Russian physician, Dr. Nikolai Korotkoff. Turbulent blood flow through the vessels can be heard as a soft ticking while measuring blood pressure; these sounds are known as Korotkoff sounds. The technique of measuring blood pressure requires the use of a sphygmomanometer (a blood pressure cuff attached to a measuring device) and a stethoscope. The technique is as follows:
- The clinician wraps an inflatable cuff tightly around the patient's arm at about the level of the heart.
- The clinician squeezes a rubber pump to inject air into the cuff, raising pressure around the artery and temporarily cutting off blood flow into the patient's arm.
- The clinician places the stethoscope on the patient's antecubital region and, while gradually allowing air within the cuff to escape, listens for the Korotkoff sounds.

Although there are five recognized Korotkoff sounds, only two are normally recorded. Initially, no sounds are heard since there is no blood flow through the vessels, but as air pressure drops, the cuff relaxes, and blood flow returns to the arm. As shown in Figure 20.12, the first sound heard through the stethoscope—the first Korotkoff sound—indicates systolic pressure. As more air is released from the cuff, blood is able to flow freely through the brachial artery and all sounds disappear. The point at which the last sound is heard is recorded as the patient's diastolic pressure. The majority of hospitals and clinics have automated equipment for measuring blood pressure that works on the same principles.
An even more recent innovation is a small instrument that wraps around a patient's wrist. The patient then holds the wrist over the heart while the device measures blood flow and records pressure.

Variables Affecting Blood Flow and Blood Pressure

Five variables influence blood flow and blood pressure:
- Cardiac output
- Compliance
- Volume of the blood
- Viscosity of the blood
- Blood vessel length and diameter

Recall that blood moves from higher pressure to lower pressure. It is pumped from the heart into the arteries at high pressure. If you increase pressure in the arteries (afterload), and cardiac function does not compensate, blood flow will actually decrease. In the venous system, the opposite relationship is true. Increased pressure in the veins does not decrease flow as it does in arteries, but actually increases flow. Since pressure in the veins is normally relatively low, for blood to flow back into the heart, the pressure in the atria during atrial diastole must be even lower. It normally approaches zero, except when the atria contract (see Figure 20.10).

Cardiac Output

Cardiac output is the measurement of blood flow from the heart through the ventricles, and is usually measured in liters per minute. Any factor that causes cardiac output to increase, by elevating heart rate or stroke volume or both, will elevate blood pressure and promote blood flow. These factors include sympathetic stimulation, the catecholamines epinephrine and norepinephrine, thyroid hormones, and increased calcium ion levels. Conversely, any factor that decreases cardiac output, by decreasing heart rate or stroke volume or both, will decrease arterial pressure and blood flow. These factors include parasympathetic stimulation, elevated or decreased potassium ion levels, decreased calcium levels, anoxia, and acidosis.

Compliance

Compliance is the ability of any compartment to expand to accommodate increased content. A metal pipe, for example, is not compliant, whereas a balloon is. The greater the compliance of an artery, the more effectively it is able to expand to accommodate surges in blood flow without increased resistance or blood pressure. Veins are more compliant than arteries and can expand to hold more blood. When vascular disease causes stiffening of arteries, compliance is reduced and resistance to blood flow is increased. The result is more turbulence, higher pressure within the vessel, and reduced blood flow. This increases the work of the heart.

A Mathematical Approach to Factors Affecting Blood Flow

Jean Léonard Marie Poiseuille was a French physician and physiologist who devised a mathematical equation describing blood flow and its relationship to known parameters. The same equation also applies to engineering studies of the flow of fluids. Although understanding the math behind the relationships among the factors affecting blood flow is not necessary to understand blood flow, it can help solidify an understanding of their relationships. Please note that even if the equation looks intimidating, breaking it down into its components and following the relationships will make them clearer, even if you are weak in math. Focus on the three critical variables: radius (r), vessel length (λ), and viscosity (η).
Poiseuille's equation:

Blood flow = (π ΔP r⁴) / (8ηλ)

- π is the Greek letter pi, used to represent the mathematical constant that is the ratio of a circle's circumference to its diameter. It is commonly approximated as 3.14, although its decimal expansion continues without end.
- ΔP represents the difference in pressure.
- r⁴ is the radius (one-half of the diameter) of the vessel, raised to the fourth power.
- η is the Greek letter eta and represents the viscosity of the blood.
- λ is the Greek letter lambda and represents the length of a blood vessel.

One of several things this equation allows us to do is calculate the resistance in the vascular system. Normally this value is extremely difficult to measure, but it can be calculated from this known relationship:

Blood flow = ΔP / Resistance

If we rearrange this slightly,

Resistance = ΔP / Blood flow

Then by substituting Poiseuille's equation for blood flow:

Resistance = (8ηλ) / (πr⁴)

By examining this equation, you can see that there are only three variables: viscosity, vessel length, and radius, since 8 and π are both constants. The important thing to remember is this: Two of these variables, viscosity and vessel length, will change slowly in the body. Only one of these factors, the radius, can be changed rapidly by vasoconstriction and vasodilation, thus dramatically impacting resistance and flow. Further, small changes in the radius will greatly affect flow, since it is raised to the fourth power in the equation.
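To make the fourth-power sensitivity concrete, here is a minimal Python sketch of Poiseuille's equation. The specific numbers (a pressure difference of 100 Pa, a blood viscosity of about 3.5 × 10⁻³ Pa·s, a 1 mm vessel segment, an arteriole-scale radius) are illustrative assumptions chosen only to show relative behavior, not clinical values:

```python
import math

def poiseuille_flow(delta_p, radius, viscosity, length):
    """Laminar flow through a rigid tube: (pi * dP * r^4) / (8 * eta * lambda)."""
    return (math.pi * delta_p * radius**4) / (8 * viscosity * length)

# Illustrative (assumed) values in SI units
delta_p = 100.0     # pressure difference, Pa
viscosity = 3.5e-3  # blood viscosity, Pa*s
length = 1e-3       # vessel segment length, m
r = 15e-6           # arteriole-scale radius, m

baseline = poiseuille_flow(delta_p, r, viscosity, length)
halved = poiseuille_flow(delta_p, r / 2, viscosity, length)
doubled = poiseuille_flow(delta_p, 2 * r, viscosity, length)

print(baseline / halved)   # 16.0: halving the radius cuts flow 16-fold
print(doubled / baseline)  # 16.0: doubling the radius increases flow 16-fold
```

Because resistance varies as the reciprocal of r⁴, the same factor of 16 reappears when the text below describes an arteriole constricting to one-half or dilating to twice its original radius.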
We have briefly considered how cardiac output and blood volume impact blood flow and pressure; the next step is to see how the other variables (blood volume, blood viscosity, and vessel length and diameter) relate to Poiseuille's equation and what they can teach us about the impact on blood flow.

Blood Volume

The relationship between blood volume, blood pressure, and blood flow is intuitively obvious. Water may merely trickle along a creek bed in a dry season, but rush quickly and under great pressure after a heavy rain. Similarly, as blood volume decreases, pressure and flow decrease. As blood volume increases, pressure and flow increase. Under normal circumstances, blood volume varies little. Low blood volume, called hypovolemia, may be caused by bleeding, dehydration, vomiting, severe burns, or some medications used to treat hypertension. It is important to recognize that other regulatory mechanisms in the body are so effective at maintaining blood pressure that an individual may be asymptomatic until 10–20 percent of the blood volume has been lost. Treatment typically includes intravenous fluid replacement. Hypervolemia, excessive fluid volume, may be caused by retention of water and sodium, as seen in patients with heart failure, liver cirrhosis, some forms of kidney disease, hyperaldosteronism, and some glucocorticoid steroid treatments. Restoring homeostasis in these patients depends upon reversing the condition that triggered the hypervolemia.

Blood Viscosity

Viscosity is the thickness of fluids that affects their ability to flow. Clean water, for example, is less viscous than mud. The viscosity of blood is directly proportional to resistance and inversely proportional to flow; therefore, any condition that causes viscosity to increase will also increase resistance and decrease flow. For example, imagine sipping milk, then a milkshake, through the same size straw. You experience more resistance and therefore less flow from the milkshake. Conversely, any condition that causes viscosity to decrease (such as when the milkshake melts) will decrease resistance and increase flow. Normally the viscosity of blood does not change over short periods of time. The two primary determinants of blood viscosity are the formed elements and plasma proteins. Since the vast majority of formed elements are erythrocytes, any condition affecting erythropoiesis, such as polycythemia or anemia, can alter viscosity. Since most plasma proteins are produced by the liver, any condition affecting liver function can also change the viscosity slightly and therefore alter blood flow. Liver abnormalities such as hepatitis, cirrhosis, alcohol damage, and drug toxicities result in decreased levels of plasma proteins, which decrease blood viscosity. While leukocytes and platelets are normally a small component of the formed elements, there are some rare conditions in which severe overproduction can impact viscosity as well.

Vessel Length and Diameter

The length of a vessel is directly proportional to its resistance: the longer the vessel, the greater the resistance and the lower the flow. As with blood volume, this makes intuitive sense, since the increased surface area of the vessel will impede the flow of blood. Likewise, if the vessel is shortened, the resistance will decrease and flow will increase. The length of our blood vessels increases throughout childhood as we grow, of course, but is unchanging in adults under normal physiological circumstances. Further, the distribution of vessels is not the same in all tissues.
Adipose tissue does not have an extensive vascular supply. One pound of adipose tissue contains approximately 200 miles of vessels, whereas skeletal muscle contains more than twice that. Overall, vessels decrease in length only during loss of mass or amputation. An individual weighing 150 pounds has approximately 60,000 miles of vessels in the body. Gaining about 10 pounds adds from 2000 to 4000 miles of vessels, depending upon the nature of the gained tissue. One of the great benefits of weight reduction is the reduced stress on the heart, which does not have to overcome the resistance of as many miles of vessels.

In contrast to length, the diameter of blood vessels changes throughout the body, according to the type of vessel, as we discussed earlier. The diameter of any given vessel may also change frequently throughout the day in response to neural and chemical signals that trigger vasodilation and vasoconstriction. The vascular tone of the vessel is the contractile state of the smooth muscle and the primary determinant of diameter, and thus of resistance and flow. The effect of vessel diameter on resistance is inverse: Given the same volume of blood, an increased diameter means there is less blood contacting the vessel wall, thus lower friction and lower resistance, subsequently increasing flow. A decreased diameter means more of the blood contacts the vessel wall, and resistance increases, subsequently decreasing flow.

The influence of lumen diameter on resistance is dramatic: A slight increase or decrease in diameter causes a huge decrease or increase in resistance. This is because resistance is inversely proportional to the radius of the blood vessel (one-half of the vessel's diameter) raised to the fourth power (R ∝ 1/r⁴). This means, for example, that if an artery or arteriole constricts to one-half of its original radius, the resistance to flow will increase 16 times. And if an artery or arteriole dilates to twice its initial radius, then resistance in the vessel will decrease to 1/16 of its original value and flow will increase 16 times.

The Roles of Vessel Diameter and Total Area in Blood Flow and Blood Pressure

Recall that we classified arterioles as resistance vessels, because given their small lumen, they dramatically slow the flow of blood from arteries. In fact, arterioles are the site of greatest resistance in the entire vascular network. This may seem surprising, given that capillaries are smaller. How can this phenomenon be explained? Figure 20.13 compares vessel diameter, total cross-sectional area, average blood pressure, and blood velocity through the systemic vessels. Notice in parts (a) and (b) that the total cross-sectional area of the body's capillary beds is far greater than that of any other type of vessel. Although the diameter of an individual capillary is significantly smaller than the diameter of an arteriole, there are vastly more capillaries in the body than there are other types of blood vessels. Part (c) shows that blood pressure drops unevenly as blood travels from arteries to arterioles, capillaries, venules, and veins, and encounters greater resistance. However, the site of the most precipitous drop, and the site of greatest resistance, is the arterioles. This explains why vasodilation and vasoconstriction of arterioles play more significant roles in regulating blood pressure than do the vasodilation and vasoconstriction of other vessels.
Part (d) shows that the velocity (speed) of blood flow decreases dramatically as the blood moves from arteries to arterioles to capillaries. This slow flow rate allows more time for exchange processes to occur. As blood flows through the veins, its velocity increases as blood is returned to the heart.

Disorders of the... Cardiovascular System: Arteriosclerosis

Compliance allows an artery to expand when blood is pumped through it from the heart, and then to recoil after the surge has passed. This helps promote blood flow. In arteriosclerosis, compliance is reduced, and pressure and resistance within the vessel increase. This is a leading cause of hypertension and coronary heart disease, as it causes the heart to work harder to generate a pressure great enough to overcome the resistance.

Arteriosclerosis begins with injury to the endothelium of an artery, which may be caused by irritation from high blood glucose, infection, tobacco use, excessive blood lipids, and other factors. Artery walls that are constantly stressed by blood flowing at high pressure are also more likely to be injured—which means that hypertension can promote arteriosclerosis, as well as result from it. Recall that tissue injury causes inflammation. As inflammation spreads into the artery wall, it weakens and scars it, leaving it stiff (sclerotic). As a result, compliance is reduced. Moreover, circulating triglycerides and cholesterol can seep between the damaged lining cells and become trapped within the artery wall, where they are frequently joined by leukocytes, calcium, and cellular debris. Eventually, this buildup, called plaque, can narrow arteries enough to impair blood flow. The term for this condition, atherosclerosis (athero- = "porridge"), describes the mealy deposits (Figure 20.14).

Sometimes a plaque can rupture, causing microscopic tears in the artery wall that allow blood to leak into the tissue on the other side. When this happens, platelets rush to the site to clot the blood. This clot can further obstruct the artery and—if it occurs in a coronary or cerebral artery—cause a sudden heart attack or stroke. Alternatively, plaque can break off and travel through the bloodstream as an embolus until it blocks a more distant, smaller artery. Even without total blockage, vessel narrowing leads to ischemia—reduced blood flow—to the tissue region "downstream" of the narrowed vessel. Ischemia in turn leads to hypoxia—decreased supply of oxygen to the tissues. Hypoxia involving cardiac muscle or brain tissue can lead to cell death and severe impairment of brain or heart function.

A major risk factor for both arteriosclerosis and atherosclerosis is advanced age, as the conditions tend to progress over time. Arteriosclerosis is normally defined as the more generalized loss of compliance, "hardening of the arteries," whereas atherosclerosis is a more specific term for the build-up of plaque in the walls of the vessel and is a specific type of arteriosclerosis. There is also a distinct genetic component, and pre-existing hypertension and/or diabetes also greatly increase the risk. However, obesity, poor nutrition, lack of physical activity, and tobacco use are all major risk factors.

Treatment includes lifestyle changes, such as weight loss, smoking cessation, regular exercise, and adoption of a diet low in sodium and saturated fats. Medications to reduce cholesterol and blood pressure may be prescribed. For blocked coronary arteries, surgery is warranted.
In angioplasty, a catheter is inserted into the vessel at the point of narrowing, and a second catheter with a balloon-like tip is inserted and its balloon inflated to widen the opening. To prevent subsequent collapse of the vessel, a small mesh tube called a stent is often inserted. In an endarterectomy, plaque is surgically removed from the walls of a vessel. This operation is typically performed on the carotid arteries of the neck, which are a prime source of oxygenated blood for the brain. In a coronary bypass procedure, a non-vital superficial vessel from another part of the body (often the great saphenous vein) or a synthetic vessel is inserted to create a path around the blocked area of a coronary artery.

Venous System

The pumping action of the heart propels the blood into the arteries, from an area of higher pressure toward an area of lower pressure. If blood is to flow from the veins back into the heart, the pressure in the veins must be greater than the pressure in the atria of the heart. Two factors help maintain this pressure gradient between the veins and the heart. First, the pressure in the atria during diastole is very low, often approaching zero when the atria are relaxed (atrial diastole). Second, two physiologic "pumps" increase pressure in the venous system. The use of the term "pump" implies a physical device that speeds flow; these physiological pumps are less obvious.

Skeletal Muscle Pump

In many body regions, the pressure within the veins can be increased by the contraction of the surrounding skeletal muscle. This mechanism, known as the skeletal muscle pump (Figure 20.15), helps the lower-pressure veins counteract the force of gravity, increasing pressure to move blood back to the heart. As leg muscles contract, for example during walking or running, they exert pressure on nearby veins with their numerous one-way valves. This increased pressure causes blood to flow upward, opening valves superior to the contracting muscles so blood flows through. Simultaneously, valves inferior to the contracting muscles close; thus, blood should not seep back downward toward the feet. Military recruits are trained to flex their legs slightly while standing at attention for prolonged periods. Failure to do so may allow blood to pool in the lower limbs rather than returning to the heart. Consequently, the brain will not receive enough oxygenated blood, and the individual may lose consciousness.

Respiratory Pump

The respiratory pump aids blood flow through the veins of the thorax and abdomen. During inhalation, the volume of the thorax increases, largely through the contraction of the diaphragm, which moves downward and compresses the abdominal cavity. The elevation of the chest caused by the contraction of the external intercostal muscles also contributes to the increased volume of the thorax. The volume increase causes air pressure within the thorax to decrease, allowing us to inhale. Additionally, as air pressure within the thorax drops, blood pressure in the thoracic veins also decreases, falling below the pressure in the abdominal veins. This causes blood to flow along its pressure gradient from veins outside the thorax, where pressure is higher, into the thoracic region, where pressure is now lower. This in turn promotes the return of blood from the thoracic veins to the atria.
During exhalation, when air pressure increases within the thoracic cavity, pressure in the thoracic veins increases, speeding blood flow into the heart while valves in the veins prevent blood from flowing backward from the thoracic and abdominal veins.

Pressure Relationships in the Venous System

Although vessel diameter increases from the smaller venules to the larger veins and eventually to the venae cavae (singular = vena cava), the total cross-sectional area actually decreases (see Figure 20.13 a and b). The individual veins are larger in diameter than the venules, but their total number is much lower, so their total cross-sectional area is also lower. Also notice that, as blood moves from venules to veins, the average blood pressure drops (see Figure 20.13 c), but the blood velocity actually increases (see Figure 20.13 d). This pressure gradient drives blood back toward the heart. Again, the presence of one-way valves and the skeletal muscle and respiratory pumps contribute to this increased flow. Since approximately 64 percent of the total blood volume resides in systemic veins, any action that increases the flow of blood through the veins will increase venous return to the heart. Maintaining vascular tone within the veins prevents the veins from merely distending and dampening the flow of blood; as you will see, venoconstriction actually enhances the flow.

The Role of Venoconstriction in Resistance, Blood Pressure, and Flow

As previously discussed, vasoconstriction of an artery or arteriole decreases the radius, increasing resistance and pressure, but decreasing flow. Venoconstriction, on the other hand, has a very different outcome. The walls of veins are thin but irregular; thus, when the smooth muscle in those walls constricts, the lumen becomes more rounded. The more rounded the lumen, the less surface area the blood encounters, and the less resistance the vessel offers. Venoconstriction increases pressure within a vein as vasoconstriction does in an artery, but in veins, the increased pressure increases flow. Recall that the pressure in the atria, into which the venous blood will flow, is very low, approaching zero for at least part of the relaxation phase of the cardiac cycle. Thus, venoconstriction increases the return of blood to the heart. Another way of stating this is that venoconstriction increases the preload or stretch of the cardiac muscle and increases contraction.

20.3 Capillary Exchange

Learning Objectives

By the end of this section, you will be able to:
- Identify the primary mechanisms of capillary exchange
- Distinguish between capillary hydrostatic pressure and blood colloid osmotic pressure, explaining the contribution of each to net filtration pressure
- Compare filtration and reabsorption
- Explain the fate of fluid that is not reabsorbed from the tissues into the vascular capillaries

The primary purpose of the cardiovascular system is to circulate gases, nutrients, wastes, and other substances to and from the cells of the body. Small molecules, such as gases, lipids, and lipid-soluble molecules, can diffuse directly through the membranes of the endothelial cells of the capillary wall. Glucose, amino acids, and ions—including sodium, potassium, calcium, and chloride—use transporters to move through specific channels in the membrane by facilitated diffusion. Glucose, ions, and larger molecules may also leave the blood through intercellular clefts.
Larger molecules can pass through the pores of fenestrated capillaries, and even large plasma proteins can pass through the great gaps in the sinusoids. Some large proteins in blood plasma can move into and out of the endothelial cells packaged within vesicles by endocytosis and exocytosis. Water moves by osmosis.

Bulk Flow

The mass movement of fluids into and out of capillary beds requires a transport mechanism far more efficient than mere diffusion. This movement, often referred to as bulk flow, involves two pressure-driven mechanisms: Volumes of fluid move from an area of higher pressure in a capillary bed to an area of lower pressure in the tissues via filtration. In contrast, the movement of fluid from an area of higher pressure in the tissues into an area of lower pressure in the capillaries is reabsorption. Two types of pressure interact to drive each of these movements: hydrostatic pressure and osmotic pressure.

Hydrostatic Pressure

The primary force driving fluid transport between the capillaries and tissues is hydrostatic pressure, which can be defined as the pressure of any fluid enclosed in a space. Blood hydrostatic pressure is the force exerted by the blood confined within blood vessels or heart chambers. Even more specifically, the pressure exerted by blood against the wall of a capillary is called capillary hydrostatic pressure (CHP), and is the same as capillary blood pressure. CHP is the force that drives fluid out of capillaries and into the tissues. As fluid exits a capillary and moves into tissues, the hydrostatic pressure in the interstitial fluid correspondingly rises. This opposing hydrostatic pressure is called the interstitial fluid hydrostatic pressure (IFHP). Generally, the CHP originating from the arterial pathways is considerably higher than the IFHP, because lymphatic vessels are continually absorbing excess fluid from the tissues. Thus, fluid generally moves out of the capillary and into the interstitial fluid. This process is called filtration.

Osmotic Pressure

The net pressure that drives reabsorption—the movement of fluid from the interstitial fluid back into the capillaries—is called osmotic pressure (sometimes referred to as oncotic pressure). Whereas hydrostatic pressure forces fluid out of the capillary, osmotic pressure draws fluid back in. Osmotic pressure is determined by osmotic concentration gradients, that is, the difference in the solute-to-water concentrations in the blood and tissue fluid. A region higher in solute concentration (and lower in water concentration) draws water across a semipermeable membrane from a region higher in water concentration (and lower in solute concentration). As we discuss osmotic pressure in blood and tissue fluid, it is important to recognize that the formed elements of blood do not contribute to osmotic concentration gradients. Rather, it is the plasma proteins that play the key role. Solutes also move across the capillary wall according to their concentration gradient, but overall, the concentrations should be similar and not have a significant impact on osmosis. Because of their large size and chemical structure, plasma proteins are not truly solutes, that is, they do not dissolve but are dispersed or suspended in their fluid medium, forming a colloid rather than a solution. The pressure created by the concentration of colloidal proteins in the blood is called the blood colloidal osmotic pressure (BCOP). Its effect on capillary exchange accounts for the reabsorption of water.
The plasma proteins suspended in blood cannot move across the semipermeable capillary cell membrane, and so they remain in the plasma. As a result, blood has a higher colloidal concentration and lower water concentration than tissue fluid. It therefore attracts water. We can also say that the BCOP is higher than the interstitial fluid colloidal osmotic pressure (IFCOP), which is always very low because interstitial fluid contains few proteins. Thus, water is drawn from the tissue fluid back into the capillary, carrying dissolved molecules with it. This difference in colloidal osmotic pressure accounts for reabsorption.

Interaction of Hydrostatic and Osmotic Pressures

The normal unit used to express pressures within the cardiovascular system is millimeters of mercury (mm Hg). When blood leaving an arteriole first enters a capillary bed, the CHP is quite high—about 35 mm Hg. Gradually, this initial CHP declines as the blood moves through the capillary so that by the time the blood has reached the venous end, the CHP has dropped to approximately 18 mm Hg. In comparison, the plasma proteins remain suspended in the blood, so the BCOP remains fairly constant at about 25 mm Hg throughout the length of the capillary and considerably above the osmotic pressure in the interstitial fluid.

The net filtration pressure (NFP) represents the interaction of the hydrostatic and osmotic pressures, driving fluid out of the capillary. It is equal to the difference between the CHP and the BCOP. Since filtration is, by definition, the movement of fluid out of the capillary, when reabsorption is occurring, the NFP is a negative number.

NFP changes at different points in a capillary bed (Figure 20.16). Close to the arterial end of the capillary, it is approximately 10 mm Hg, because the CHP of 35 mm Hg minus the BCOP of 25 mm Hg equals 10 mm Hg. Recall that the hydrostatic and osmotic pressures of the interstitial fluid are essentially negligible. Thus, the NFP of 10 mm Hg drives a net movement of fluid out of the capillary at the arterial end. At approximately the middle of the capillary, the CHP is about the same as the BCOP of 25 mm Hg, so the NFP drops to zero. At this point, there is no net change of volume: Fluid moves out of the capillary at the same rate as it moves into the capillary. Near the venous end of the capillary, the CHP has dwindled to about 18 mm Hg due to loss of fluid. Because the BCOP remains steady at 25 mm Hg, water is drawn into the capillary, that is, reabsorption occurs. Another way of expressing this is to say that at the venous end of the capillary, there is an NFP of −7 mm Hg.

The Role of Lymphatic Capillaries

Since overall CHP is higher than BCOP, it is inevitable that more net fluid will exit the capillary through filtration at the arterial end than enters through reabsorption at the venous end. Considering all capillaries over the course of a day, this can be quite a substantial amount of fluid: Approximately 24 liters per day are filtered, whereas 20.4 liters are reabsorbed. This excess fluid is picked up by capillaries of the lymphatic system. These extremely thin-walled vessels have copious numbers of valves that ensure unidirectional flow through ever-larger lymphatic vessels that eventually drain into the subclavian veins in the neck. An important function of the lymphatic system is to return the fluid (lymph) to the blood. Lymph may be thought of as recycled blood plasma. (Seek additional content for more detail on the lymphatic system.)
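The NFP values quoted above follow directly from NFP = CHP − BCOP. A minimal Python sketch (the function and site names are our own, for illustration only) reproduces the three points along the capillary, treating the interstitial pressures as negligible, as the text does:

```python
def net_filtration_pressure(chp, bcop):
    """NFP (mm Hg): positive values drive filtration, negative values reabsorption."""
    return chp - bcop

BCOP = 25  # mm Hg, roughly constant along the capillary

for site, chp in [("arterial end", 35), ("midpoint", 25), ("venous end", 18)]:
    print(f"{site}: NFP = {net_filtration_pressure(chp, BCOP)} mm Hg")
# Prints 10, 0, and -7 mm Hg: filtration at the arterial end,
# no net movement at the midpoint, reabsorption at the venous end.
```

The same asymmetry explains the daily lymphatic load: 24 liters filtered minus 20.4 liters reabsorbed leaves roughly 3.6 liters per day for the lymphatic capillaries to return to the blood.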
Interactive Link

Watch this video to explore capillaries and how they function in the body. Capillaries are never more than 100 micrometers away. What is the main component of interstitial fluid?

20.4 Homeostatic Regulation of the Vascular System

Learning Objectives

By the end of this section, you will be able to:
- Discuss the mechanisms involved in the neural regulation of vascular homeostasis
- Describe the contribution of a variety of hormones to the renal regulation of blood pressure
- Identify the effects of exercise on vascular homeostasis
- Discuss how hypertension, hemorrhage, and circulatory shock affect vascular health

In order to maintain homeostasis in the cardiovascular system and provide adequate blood to the tissues, blood flow must be redirected continually to the tissues as they become more active. In a very real sense, the cardiovascular system engages in resource allocation, because there is not enough blood flow to distribute blood equally to all tissues simultaneously. For example, when an individual is exercising, more blood will be directed to skeletal muscles, the heart, and the lungs. Following a meal, more blood is directed to the digestive system. Only the brain receives a more or less constant supply of blood whether you are active, resting, thinking, or engaged in any other activity. Table 20.3 provides the distribution of systemic blood at rest and during exercise. Although most of the data appears logical, the values for the distribution of blood to the integument may seem surprising. During exercise, the body distributes more blood to the body surface where it can dissipate the excess heat generated by increased activity into the environment.

Table 20.3 Systemic Blood Flow During Rest, Mild Exercise, and Maximal Exercise in a Healthy Young Individual

Organ                          Resting (mL/min)   Mild exercise (mL/min)   Maximal exercise (mL/min)
Skeletal muscle                1200               4500                     12,500
Heart                          250                350                      750
Brain                          750                750                      750
Integument                     500                1500                     1900
Kidney                         1100               900                      600
Gastrointestinal               1400               1100                     600
Others (i.e., liver, spleen)   600                400                      400
Total                          5800               9500                     17,500

Three homeostatic mechanisms ensure adequate blood flow, blood pressure, distribution, and ultimately perfusion: neural, endocrine, and autoregulatory mechanisms. They are summarized in Figure 20.17.

Neural Regulation

The nervous system plays a critical role in the regulation of vascular homeostasis. The primary regulatory sites include the cardiovascular centers in the brain that control both cardiac and vascular functions. In addition, more generalized neural responses from the limbic system and the autonomic nervous system are factors.

The Cardiovascular Centers in the Brain

Neurological regulation of blood pressure and flow depends on the cardiovascular centers located in the medulla oblongata. This cluster of neurons responds to changes in blood pressure as well as blood concentrations of oxygen, carbon dioxide, and hydrogen ions. The cardiovascular center contains three distinct paired components:
- The cardioaccelerator centers stimulate cardiac function by regulating heart rate and stroke volume via sympathetic stimulation from the cardiac accelerator nerve.
- The cardioinhibitor centers slow cardiac function by decreasing heart rate and stroke volume via parasympathetic stimulation from the vagus nerve.
- The vasomotor centers control vessel tone or contraction of the smooth muscle in the tunica media. Changes in diameter affect peripheral resistance, pressure, and flow, which affect cardiac output.
The majority of these neurons act via the release of the neurotransmitter norepinephrine from sympathetic neurons. Although each center functions independently, they are not anatomically distinct.

There is also a small population of neurons that control vasodilation in the vessels of the brain and skeletal muscles by relaxing the smooth muscle fibers in the vessel tunics. Many of these are cholinergic neurons, that is, they release acetylcholine, which in turn stimulates the vessels' endothelial cells to release nitric oxide (NO), which causes vasodilation. Others release norepinephrine that binds to β2 receptors. A few neurons release NO directly as a neurotransmitter. Recall that mild stimulation of the skeletal muscles maintains muscle tone. A similar phenomenon occurs with vascular tone in vessels. As noted earlier, arterioles are normally partially constricted: With maximal stimulation, their radius may be reduced to one-half of the resting state. Full dilation of most arterioles requires that this sympathetic stimulation be suppressed. When it is, an arteriole can expand by as much as 150 percent. Such a significant increase can dramatically affect resistance, pressure, and flow.

Baroreceptor Reflexes

Baroreceptors are specialized stretch receptors located within thin areas of blood vessels and heart chambers that respond to the degree of stretch caused by the presence of blood. They send impulses to the cardiovascular center to regulate blood pressure. Vascular baroreceptors are found primarily in sinuses (small cavities) within the aorta and carotid arteries: The aortic sinuses are found in the walls of the ascending aorta just superior to the aortic valve, whereas the carotid sinuses are in the base of the internal carotid arteries. There are also low-pressure baroreceptors located in the walls of the venae cavae and right atrium.

When blood pressure increases, the baroreceptors are stretched more tightly and initiate action potentials at a higher rate. At lower blood pressures, the degree of stretch is lower and the rate of firing is slower. When the cardiovascular center in the medulla oblongata receives this input, it triggers a reflex that maintains homeostasis (Figure 20.18):
- When blood pressure rises too high, the baroreceptors fire at a higher rate and trigger parasympathetic stimulation of the heart. As a result, cardiac output falls. Sympathetic stimulation of the peripheral arterioles will also decrease, resulting in vasodilation. Combined, these activities cause blood pressure to fall.
- When blood pressure drops too low, the rate of baroreceptor firing decreases. This will trigger an increase in sympathetic stimulation of the heart, causing cardiac output to increase. It will also trigger sympathetic stimulation of the peripheral vessels, resulting in vasoconstriction. Combined, these activities cause blood pressure to rise.

The baroreceptors in the venae cavae and right atrium monitor blood pressure as the blood returns to the heart from the systemic circulation. Normally, blood flow into the aorta is the same as blood flow back into the right atrium. If blood is returning to the right atrium more rapidly than it is being ejected from the left ventricle, the atrial receptors will stimulate the cardiovascular centers to increase sympathetic firing and increase cardiac output until homeostasis is achieved. The opposite is also true. This mechanism is referred to as the atrial reflex.
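Because the baroreflex is a negative feedback loop, its logic can be summarized in a few lines of code. The sketch below is a deliberately simplified, qualitative model: the set point of 93 mm Hg is an assumed resting MAP, the function name is our own, and it captures only the direction of the response, not its magnitude or timing:

```python
def baroreflex(map_mm_hg: float, set_point: float = 93.0) -> str:
    """Qualitative direction of the baroreceptor reflex for a given MAP (toy model)."""
    if map_mm_hg > set_point:
        # Greater stretch -> faster baroreceptor firing -> parasympathetic dominance
        return "lower heart rate and stroke volume; dilate peripheral arterioles"
    if map_mm_hg < set_point:
        # Less stretch -> slower firing -> sympathetic dominance
        return "raise heart rate and stroke volume; constrict peripheral vessels"
    return "at set point: no net change"

print(baroreflex(120))  # pressure too high -> responses that lower it
print(baroreflex(70))   # pressure too low  -> responses that raise it
```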
Chemoreceptor Reflexes

In addition to the baroreceptors are chemoreceptors that monitor levels of oxygen, carbon dioxide, and hydrogen ions (pH), and thereby contribute to vascular homeostasis. Chemoreceptors monitoring the blood are located in close proximity to the baroreceptors in the aortic and carotid sinuses. They signal the cardiovascular center as well as the respiratory centers in the medulla oblongata.

Since tissues consume oxygen and produce carbon dioxide and acids as waste products, when the body is more active, oxygen levels fall and carbon dioxide levels rise as cells undergo cellular respiration to meet the energy needs of activities. This causes more hydrogen ions to be produced, causing the blood pH to drop. When the body is resting, oxygen levels are higher, carbon dioxide levels are lower, more hydrogen ions are bound, and pH rises. (Seek additional content for more detail about pH.)

The chemoreceptors respond to increasing carbon dioxide and hydrogen ion levels (falling pH) by stimulating the cardioaccelerator and vasomotor centers, increasing cardiac output and constricting peripheral vessels. The cardioinhibitor centers are suppressed. With falling carbon dioxide and hydrogen ion levels (increasing pH), the cardioinhibitor centers are stimulated, and the cardioaccelerator and vasomotor centers are suppressed, decreasing cardiac output and causing peripheral vasodilation. In order to maintain adequate supplies of oxygen to the cells and remove waste products such as carbon dioxide, it is essential that the respiratory system respond to changing metabolic demands. In turn, the cardiovascular system will transport these gases to the lungs for exchange, again in accordance with metabolic demands. This interrelationship of cardiovascular and respiratory control cannot be overemphasized.

Other neural mechanisms can also have a significant impact on cardiovascular function. These include the limbic system that links physiological responses to psychological stimuli, as well as generalized sympathetic and parasympathetic stimulation.

Endocrine Regulation

Endocrine control over the cardiovascular system involves the catecholamines, epinephrine and norepinephrine, as well as several hormones that interact with the kidneys in the regulation of blood volume.

Epinephrine and Norepinephrine

The catecholamines epinephrine and norepinephrine are released by the adrenal medulla, and enhance and extend the body's sympathetic or "fight-or-flight" response (see Figure 20.17). They increase heart rate and force of contraction, while temporarily constricting blood vessels to organs not essential for fight-or-flight responses and redirecting blood flow to the liver, muscles, and heart.

Antidiuretic Hormone

Antidiuretic hormone (ADH), also known as vasopressin, is secreted by the cells in the hypothalamus and transported via the hypothalamic-hypophyseal tracts to the posterior pituitary where it is stored until released upon nervous stimulation. The primary trigger prompting the hypothalamus to release ADH is increasing osmolarity of tissue fluid, usually in response to significant loss of blood volume. ADH signals its target cells in the kidneys to reabsorb more water, thus preventing the loss of additional fluid in the urine. This will increase overall fluid levels and help restore blood volume and pressure. In addition, ADH constricts peripheral vessels.
Renin-Angiotensin-Aldosterone Mechanism
The renin-angiotensin-aldosterone mechanism has a major effect upon the cardiovascular system (Figure 20.19). Renin is an enzyme, although because of its importance in the renin-angiotensin-aldosterone pathway, some sources identify it as a hormone. Specialized cells in the juxtaglomerular apparatus of the kidneys respond to decreased blood flow by secreting renin into the blood. Renin converts the plasma protein angiotensinogen, which is produced by the liver, into its active form, angiotensin I. Angiotensin I circulates in the blood and is then converted into angiotensin II in the lungs. This reaction is catalyzed by the enzyme angiotensin-converting enzyme (ACE). Angiotensin II is a powerful vasoconstrictor, greatly increasing blood pressure. It also stimulates the release of ADH and aldosterone, a hormone produced by the adrenal cortex. Aldosterone increases the reabsorption of sodium into the blood by the kidneys. Since water follows sodium, this increases the reabsorption of water. This in turn increases blood volume, raising blood pressure. Angiotensin II also stimulates the thirst center in the hypothalamus, so an individual will likely consume more fluids, again increasing blood volume and pressure.

Erythropoietin
Erythropoietin (EPO) is released by the kidneys when blood flow and/or oxygen levels decrease. EPO stimulates the production of erythrocytes within the bone marrow. Erythrocytes are the major formed element of the blood and may contribute 40 percent or more to blood volume, a significant factor in viscosity, resistance, pressure, and flow. In addition, EPO is a vasoconstrictor. Overproduction of EPO or excessive intake of synthetic EPO, often to enhance athletic performance, will increase viscosity, resistance, and pressure, and decrease flow, in addition to its contribution as a vasoconstrictor.

Atrial Natriuretic Hormone
Atrial natriuretic hormone (ANH), also known as atrial natriuretic peptide, is secreted by cells in the atria of the heart when blood volume is high enough to cause extreme stretching of the cardiac cells. Cells in the ventricle produce a hormone with similar effects, called B-type natriuretic hormone. Natriuretic hormones are antagonists to angiotensin II. They promote loss of sodium and water from the kidneys, and suppress renin, aldosterone, and ADH production and release. All of these actions promote loss of fluid from the body, so blood volume and blood pressure drop.

Autoregulation of Perfusion
As the name would suggest, autoregulation mechanisms require neither specialized nervous stimulation nor endocrine control. Rather, these are local, self-regulatory mechanisms that allow each region of tissue to adjust its blood flow, and thus its perfusion. These local mechanisms include chemical signals and myogenic controls.

Chemical Signals Involved in Autoregulation
Chemical signals work at the level of the precapillary sphincters to trigger either constriction or relaxation. As you know, opening a precapillary sphincter allows blood to flow into that particular capillary, whereas constricting a precapillary sphincter temporarily shuts off blood flow to that region.
The factors involved in regulating the precapillary sphincters include the following: Opening of the sphincter is triggered in response to decreased oxygen concentrations; increased carbon dioxide concentrations; increasing levels of lactic acid or other byproducts of cellular metabolism; increasing concentrations of potassium ions or hydrogen ions (falling pH); inflammatory chemicals such as histamines; and increased body temperature. These conditions in turn stimulate the release of NO, a powerful vasodilator, from endothelial cells (see Figure 20.17 ). Contraction of the precapillary sphincter is triggered by the opposite levels of the regulators, which prompt the release of endothelins, powerful vasoconstricting peptides secreted by endothelial cells. Platelet secretions and certain prostaglandins may also trigger constriction. Again, these factors alter tissue perfusion via their effects on the precapillary sphincter mechanism, which regulates blood flow to capillaries. Since the amount of blood is limited, not all capillaries can fill at once, so blood flow is allocated based upon the needs and metabolic state of the tissues as reflected in these parameters. Bear in mind, however, that dilation and constriction of the arterioles feeding the capillary beds is the primary control mechanism. The Myogenic Response The myogenic response is a reaction to the stretching of the smooth muscle in the walls of arterioles as changes in blood flow occur through the vessel. This may be viewed as a largely protective function against dramatic fluctuations in blood pressure and blood flow to maintain homeostasis. If perfusion of an organ is too low (ischemia), the tissue will experience low levels of oxygen (hypoxia). In contrast, excessive perfusion could damage the organ’s smaller and more fragile vessels. The myogenic response is a localized process that serves to stabilize blood flow in the capillary network that follows that arteriole. When blood flow is low, the vessel’s smooth muscle will be only minimally stretched. In response, it relaxes, allowing the vessel to dilate and thereby increase the movement of blood into the tissue. When blood flow is too high, the smooth muscle will contract in response to the increased stretch, prompting vasoconstriction that reduces blood flow. Figure 20.20 summarizes the effects of nervous, endocrine, and local controls on arterioles. Effect of Exercise on Vascular Homeostasis The heart is a muscle and, like any muscle, it responds dramatically to exercise. For a healthy young adult, cardiac output (heart rate × stroke volume) increases in the nonathlete from approximately 5.0 liters (5.25 quarts) per minute to a maximum of about 20 liters (21 quarts) per minute. Accompanying this will be an increase in blood pressure from about 120/80 to 185/75. However, well-trained aerobic athletes can increase these values substantially. For these individuals, cardiac output soars from approximately 5.3 liters (5.57 quarts) per minute resting to more than 30 liters (31.5 quarts) per minute during maximal exercise. Along with this increase in cardiac output, blood pressure increases from 120/80 at rest to 200/90 at maximum values. In addition to improved cardiac function, exercise increases the size and mass of the heart. The average weight of the heart for the nonathlete is about 300 g, whereas in an athlete it will increase to 500 g. 
This increase in size generally makes the heart stronger and more efficient at pumping blood, increasing both stroke volume and cardiac output. Tissue perfusion also increases as the body transitions from a resting state to light exercise and eventually to heavy exercise (see Figure 20.20). These changes result in selective vasodilation in the skeletal muscles, heart, lungs, liver, and integument. Simultaneously, vasoconstriction occurs in the vessels leading to the kidneys and most of the digestive and reproductive organs. The flow of blood to the brain remains largely unchanged whether at rest or exercising, since in most cases the vessels in the brain do not respond to regulatory stimuli because they lack the appropriate receptors. As vasodilation occurs in selected vessels, resistance drops and more blood rushes into the organs they supply. This blood eventually returns to the venous system. Venous return is further enhanced by both the skeletal muscle and respiratory pumps. As blood returns to the heart more quickly, preload rises, and the Frank-Starling principle tells us that contraction of the cardiac muscle in the atria and ventricles will be more forceful. Eventually, even the best-trained athletes will fatigue and must undergo a period of rest following exercise. Cardiac output and distribution of blood then return to normal. Regular exercise promotes cardiovascular health in a variety of ways. Because an athlete’s heart is larger than a nonathlete’s, stroke volume increases, so the athletic heart can deliver the same amount of blood as the nonathletic heart but with a lower heart rate. This increased efficiency allows the athlete to exercise for longer periods of time before muscles fatigue, and places less stress on the heart. Exercise also lowers overall cholesterol levels by removing from the circulation a complex of cholesterol, triglycerides, and proteins known as low-density lipoproteins (LDLs), which are widely associated with increased risk of cardiovascular disease. Although there is no way to remove deposits of plaque from the walls of arteries other than specialized surgery, exercise does promote the health of vessels by decreasing the rate of plaque formation and reducing blood pressure, so the heart does not have to generate as much force to overcome resistance. Generally, as little as 30 minutes of noncontinuous exercise over the course of each day has beneficial effects and has been shown to lower the rate of heart attack by nearly 50 percent. While it is always advisable to follow a healthy diet, stop smoking, and lose weight, studies have clearly shown that fit, overweight people may actually be healthier overall than sedentary slender people. Thus, the benefits of moderate exercise are undeniable.

Clinical Considerations in Vascular Homeostasis
Any disorder that affects blood volume, vascular tone, or any other aspect of vascular functioning is likely to affect vascular homeostasis as well. That includes hypertension, hemorrhage, and shock.

Hypertension and Hypotension
New guidelines in 2017 from the American College of Cardiology and American Heart Association list normal blood pressure (BP) as less than 120/80 mm Hg, and elevated BP as systolic pressure between 120 and 129 mm Hg with diastolic pressure less than 80 mm Hg. Chronically elevated blood pressure is known clinically as hypertension. The new guidelines define hypertension that should be treated as blood pressure at or above 130/80 mm Hg. Tens of millions of Americans currently suffer from hypertension.
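Those cutoffs translate directly into a small classifier. The Python sketch below encodes only the categories named above; the full guideline subdivides hypertension into further stages, and nothing here should be read as clinical software.

```python
# Encodes only the 2017 ACC/AHA categories quoted in the text; the full
# guideline defines additional stages of hypertension. For study only.

def bp_category(systolic, diastolic):
    if systolic < 120 and diastolic < 80:
        return "normal"
    if 120 <= systolic <= 129 and diastolic < 80:
        return "elevated"
    return "hypertension"   # at or above 130/80 mm Hg per the new guidelines

for reading in [(118, 76), (124, 78), (132, 84)]:
    print(reading, "->", bp_category(*reading))
```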
Unfortunately, hypertension is typically a silent disorder; therefore, hypertensive patients may fail to recognize the seriousness of their condition and fail to follow their treatment plan. The result is often a heart attack or stroke. Hypertension may also lead to an aneurysm (ballooning of a blood vessel caused by a weakening of the wall), peripheral arterial disease (obstruction of vessels in peripheral regions of the body), chronic kidney disease, or heart failure.

Interactive Link
Listen to this CDC podcast to learn about hypertension, often described as a “silent killer.” What steps can you take to reduce your risk of a heart attack or stroke?

Hemorrhage
Minor blood loss is managed by hemostasis and repair. Hemorrhage is a loss of blood that cannot be controlled by hemostatic mechanisms. Initially, the body responds to hemorrhage by initiating mechanisms aimed at increasing blood pressure and maintaining blood flow. Ultimately, however, blood volume will need to be restored, either through physiological processes or through medical intervention. In response to blood loss, stimuli from the baroreceptors trigger the cardiovascular centers to stimulate sympathetic responses that increase cardiac output and promote vasoconstriction. This typically prompts the heart rate to increase to about 180–200 contractions per minute, restoring cardiac output to normal levels. Vasoconstriction of the arterioles increases vascular resistance, whereas constriction of the veins increases venous return to the heart. Both of these steps will help increase blood pressure. Sympathetic stimulation also triggers the release of epinephrine and norepinephrine, which enhance both cardiac output and vasoconstriction. If blood loss is less than 20 percent of total blood volume, these responses together can usually return blood pressure to normal and redirect the remaining blood to the tissues. Additional endocrine involvement is necessary, however, to restore the lost blood volume. The renin-angiotensin-aldosterone mechanism stimulates the thirst center in the hypothalamus, which increases fluid consumption to help restore the lost blood. More importantly, it increases renal reabsorption of sodium and water, reducing water loss in the urine. The kidneys also increase the production of EPO, stimulating the formation of erythrocytes that not only deliver oxygen to the tissues but also increase overall blood volume. Figure 20.21 summarizes the responses to loss of blood volume.

Circulatory Shock
The loss of too much blood may lead to circulatory shock, a life-threatening condition in which the circulatory system is unable to maintain blood flow adequate to supply sufficient oxygen and other nutrients to the tissues to maintain cellular metabolism. It should not be confused with emotional or psychological shock. Typically, the patient in circulatory shock will demonstrate an increased heart rate but decreased blood pressure, but there are cases in which blood pressure will remain normal. Urine output will fall dramatically, and the patient may appear confused or lose consciousness. Urine output less than 1 mL/kg body weight/hour is cause for concern. Unfortunately, shock is an example of a positive-feedback loop that, if uncorrected, may lead to the death of the patient. There are several recognized forms of shock: Hypovolemic shock in adults is typically caused by hemorrhage, although in children it may be caused by fluid losses related to severe vomiting or diarrhea.
Other causes for hypovolemic shock include extensive burns, exposure to some toxins, and excessive urine loss related to diabetes insipidus or ketoacidosis. Typically, patients present with a rapid, almost tachycardic heart rate; a weak pulse often described as “thready;” cool, clammy skin, particularly in the extremities, due to restricted peripheral blood flow; rapid, shallow breathing; hypothermia; thirst; and dry mouth. Treatments generally involve providing intravenous fluids to restore the patient to normal function and various drugs such as dopamine, epinephrine, and norepinephrine to raise blood pressure. Cardiogenic shock results from the inability of the heart to maintain cardiac output. Most often, it results from a myocardial infarction (heart attack), but it may also be caused by arrhythmias, valve disorders, cardiomyopathies, cardiac failure, or simply insufficient flow of blood through the cardiac vessels. Treatment involves repairing the damage to the heart or its vessels to resolve the underlying cause, rather than treating cardiogenic shock directly. Vascular shock occurs when arterioles lose their normal muscular tone and dilate dramatically. It may arise from a variety of causes, and treatments almost always involve fluid replacement and medications, called inotropic or pressor agents, which restore tone to the muscles of the vessels. In addition, eliminating or at least alleviating the underlying cause of the condition is required. This might include antibiotics and antihistamines, or select steroids, which may aid in the repair of nerve damage. A common cause is sepsis (or septicemia), also called “blood poisoning,” which is a widespread bacterial infection that results in an organismal-level inflammatory response known as septic shock . Neurogenic shock is a form of vascular shock that occurs with cranial or spinal injuries that damage the cardiovascular centers in the medulla oblongata or the nervous fibers originating from this region. Anaphylactic shock is a severe allergic response that causes the widespread release of histamines, triggering vasodilation throughout the body. Obstructive shock , as the name would suggest, occurs when a significant portion of the vascular system is blocked. It is not always recognized as a distinct condition and may be grouped with cardiogenic shock, including pulmonary embolism and cardiac tamponade. Treatments depend upon the underlying cause and, in addition to administering fluids intravenously, often include the administration of anticoagulants, removal of fluid from the pericardial cavity, or air from the thoracic cavity, and surgery as required. The most common cause is a pulmonary embolism, a clot that lodges in the pulmonary vessels and interrupts blood flow. Other causes include stenosis of the aortic valve; cardiac tamponade, in which excess fluid in the pericardial cavity interferes with the ability of the heart to fully relax and fill with blood (resulting in decreased preload); and a pneumothorax, in which an excessive amount of air is present in the thoracic cavity, outside of the lungs, which interferes with venous return, pulmonary function, and delivery of oxygen to the tissues. 
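Two numeric warning signs from the hemorrhage and shock discussion above, blood loss approaching 20 percent of total volume and urine output below 1 mL/kg of body weight per hour, lend themselves to a worked example. The Python sketch below assumes a textbook blood volume of roughly 70 mL per kilogram of body mass; it is a study aid, not a triage tool.

```python
# Study sketch, not clinical software. The 70 mL/kg blood-volume figure is an
# assumed textbook average for adults.

def hemorrhage_flags(blood_loss_ml, body_mass_kg, urine_ml_per_hr):
    blood_volume_ml = 70 * body_mass_kg
    loss_fraction = blood_loss_ml / blood_volume_ml
    flags = []
    if loss_fraction >= 0.20:
        flags.append("loss of 20 percent or more: endocrine restoration of volume needed")
    if urine_ml_per_hr < body_mass_kg:        # i.e., below 1 mL/kg/hr
        flags.append("urine output under 1 mL/kg/hr: cause for concern")
    return round(loss_fraction, 3), flags

# A 70 kg adult losing 1200 mL (about a quarter of blood volume), urine 50 mL/hr:
print(hemorrhage_flags(blood_loss_ml=1200, body_mass_kg=70, urine_ml_per_hr=50))
```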
20.5 Circulatory Pathways

Learning Objectives
By the end of this section, you will be able to:
Identify the vessels through which blood travels within the pulmonary circuit, beginning from the right ventricle of the heart and ending at the left atrium
Create a flow chart showing the major systemic arteries through which blood travels from the aorta and its major branches, to the most significant arteries feeding into the right and left upper and lower limbs
Create a flow chart showing the major systemic veins through which blood travels from the feet to the right atrium of the heart

Virtually every cell, tissue, organ, and system in the body is impacted by the circulatory system. This includes the generalized and more specialized functions of transport of materials, capillary exchange, maintaining health by transporting white blood cells and various immunoglobulins (antibodies), hemostasis, regulation of body temperature, and helping to maintain acid-base balance. In addition to these shared functions, many systems enjoy a unique relationship with the circulatory system. Figure 20.22 summarizes these relationships. As you learn about the vessels of the systemic and pulmonary circuits, notice that many arteries and veins share the same names, parallel one another throughout the body, and are very similar on the right and left sides of the body. These pairs of vessels will be traced through only one side of the body. Where differences occur in branching patterns or when vessels are singular, this will be indicated. For example, you will find a pair of femoral arteries and a pair of femoral veins, with one vessel on each side of the body. In contrast, some vessels closer to the midline of the body, such as the aorta, are unique. Moreover, some superficial veins, such as the great saphenous vein in the femoral region, have no arterial counterpart. Another phenomenon that can make the study of vessels challenging is that names of vessels can change with location. Like a street that changes name as it passes through an intersection, an artery or vein can change names as it passes an anatomical landmark. For example, the left subclavian artery becomes the axillary artery as it passes through the body wall and into the axillary region, and then becomes the brachial artery as it flows from the axillary region into the upper arm (or brachium). You will also find examples of anastomoses, where two blood vessels that previously branched reconnect. Anastomoses are especially common in veins, where they help maintain blood flow even when one vessel is blocked or narrowed, although there are some important ones in the arteries supplying the brain. As you read about circulatory pathways, notice that there is an occasional, very large artery referred to as a trunk, a term indicating that the vessel gives rise to several smaller arteries. For example, the celiac trunk gives rise to the left gastric, common hepatic, and splenic arteries. As you study this section, imagine you are on a “Voyage of Discovery” similar to Lewis and Clark’s expedition in 1804–1806, which followed rivers and streams through unfamiliar territory, seeking a water route from the Atlantic to the Pacific Ocean. You might envision being inside a miniature boat, exploring the various branches of the circulatory system. This simple approach has proven effective for many students in mastering these major circulatory patterns.
Another approach that works well for many students is to create simple line drawings similar to the ones provided, labeling each of the major vessels. It is beyond the scope of this text to name every vessel in the body. However, we will attempt to discuss the major pathways for blood and acquaint you with the major named arteries and veins in the body. Also, please keep in mind that individual variations in circulation patterns are not uncommon.

Interactive Link
Visit this site for a brief summary of the arteries.

Pulmonary Circulation
Recall that blood returning from the systemic circuit enters the right atrium (Figure 20.23) via the superior and inferior venae cavae and the coronary sinus, which drains the blood supply of the heart muscle. These vessels will be described more fully later in this section. This blood is relatively low in oxygen and relatively high in carbon dioxide, since much of the oxygen has been extracted for use by the tissues and the waste gas carbon dioxide was picked up to be transported to the lungs for elimination. From the right atrium, blood moves into the right ventricle, which pumps it to the lungs for gas exchange. This system of vessels is referred to as the pulmonary circuit. The single vessel exiting the right ventricle is the pulmonary trunk. At the base of the pulmonary trunk is the pulmonary semilunar valve, which prevents backflow of blood into the right ventricle during ventricular diastole. As the pulmonary trunk reaches the superior surface of the heart, it curves posteriorly and rapidly bifurcates (divides) into two branches, a left and a right pulmonary artery. To prevent confusion between these vessels, it is important to refer to the vessel exiting the heart as the pulmonary trunk, rather than also calling it a pulmonary artery. The pulmonary arteries in turn branch many times within the lung, forming a series of smaller arteries and arterioles that eventually lead to the pulmonary capillaries. The pulmonary capillaries surround lung structures known as alveoli that are the sites of oxygen and carbon dioxide exchange. Once gas exchange is completed, oxygenated blood flows from the pulmonary capillaries into a series of pulmonary venules that eventually lead to a series of larger pulmonary veins. Four pulmonary veins, two on the left and two on the right, return blood to the left atrium. At this point, the pulmonary circuit is complete. Table 20.4 defines the major arteries and veins of the pulmonary circuit discussed in the text.

Table 20.4: Pulmonary Arteries and Veins
Pulmonary trunk: Single large vessel exiting the right ventricle that divides to form the right and left pulmonary arteries
Pulmonary arteries: Left and right vessels that form from the pulmonary trunk and lead to smaller arterioles and eventually to the pulmonary capillaries
Pulmonary veins: Two sets of paired vessels, one pair on each side, that are formed from the small venules, leading away from the pulmonary capillaries to flow into the left atrium

Overview of Systemic Arteries
Blood relatively high in oxygen concentration is returned from the pulmonary circuit to the left atrium via the four pulmonary veins. From the left atrium, blood moves into the left ventricle, which pumps blood into the aorta. The aorta and its branches, the systemic arteries, send blood to virtually every organ of the body (Figure 20.24).

The Aorta
The aorta is the largest artery in the body (Figure 20.25).
It arises from the left ventricle and eventually descends to the abdominal region, where it bifurcates at the level of the fourth lumbar vertebra into the two common iliac arteries. The aorta consists of the ascending aorta, the aortic arch, and the descending aorta, which passes through the diaphragm, a landmark that divides it into its superior thoracic and inferior abdominal components. Arteries originating from the aorta ultimately distribute blood to virtually all tissues of the body. At the base of the aorta is the aortic semilunar valve that prevents backflow of blood into the left ventricle while the heart is relaxing. After exiting the heart, the ascending aorta moves in a superior direction for approximately 5 cm and ends at the sternal angle. Following this ascent, it reverses direction, forming a graceful arc to the left, called the aortic arch. The aortic arch descends toward the inferior portions of the body and ends at the level of the intervertebral disk between the fourth and fifth thoracic vertebrae. Beyond this point, the descending aorta continues close to the bodies of the vertebrae and passes through an opening in the diaphragm known as the aortic hiatus. Superior to the diaphragm, the aorta is called the thoracic aorta, and inferior to the diaphragm, it is called the abdominal aorta. The abdominal aorta terminates when it bifurcates into the two common iliac arteries at the level of the fourth lumbar vertebra. See Figure 20.25 for an illustration of the ascending aorta, the aortic arch, and the initial segment of the descending aorta plus major branches; Table 20.5 summarizes the structures of the aorta.

Table 20.5: Components of the Aorta
Aorta: Largest artery in the body, originating from the left ventricle and descending to the abdominal region, where it bifurcates into the common iliac arteries at the level of the fourth lumbar vertebra; arteries originating from the aorta distribute blood to virtually all tissues of the body
Ascending aorta: Initial portion of the aorta, rising superiorly from the left ventricle for a distance of approximately 5 cm
Aortic arch: Graceful arc to the left that connects the ascending aorta to the descending aorta; ends at the intervertebral disk between the fourth and fifth thoracic vertebrae
Descending aorta: Portion of the aorta that continues inferiorly past the end of the aortic arch; subdivided into the thoracic aorta and the abdominal aorta
Thoracic aorta: Portion of the descending aorta superior to the aortic hiatus
Abdominal aorta: Portion of the aorta inferior to the aortic hiatus and superior to the common iliac arteries

Coronary Circulation
The first vessels that branch from the ascending aorta are the paired coronary arteries (see Figure 20.25), which arise from two of the three sinuses in the ascending aorta just superior to the aortic semilunar valve. These sinuses contain the aortic baroreceptors and chemoreceptors critical to maintaining cardiac function. The left coronary artery arises from the left posterior aortic sinus. The right coronary artery arises from the anterior aortic sinus. Normally, the right posterior aortic sinus does not give rise to a vessel. The coronary arteries encircle the heart, forming a ring-like structure that divides into the next level of branches that supplies blood to the heart tissues. (Seek additional content for more detail on cardiac circulation.)
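The aorta's course lends itself to the flow-chart study method this section recommends. The short Python sketch below stores the segments and landmarks just described as an ordered list and prints them as a route; it is a study aid and deliberately omits the branch vessels introduced in the rest of this section.

```python
# The aorta's segments and landmarks as described above, as an ordered route.
# A study aid only; branch vessels are introduced later in the section.

AORTA_COURSE = [
    ("ascending aorta", "rises about 5 cm from the left ventricle to the sternal angle"),
    ("aortic arch", "arcs to the left, ending at the T4/T5 intervertebral disk"),
    ("thoracic aorta", "descends to the aortic hiatus of the diaphragm"),
    ("abdominal aorta", "continues inferiorly to the bifurcation at vertebra L4"),
    ("common iliac arteries", "paired terminal branches serving the pelvis and lower limbs"),
]

for segment, extent in AORTA_COURSE:
    print(f"{segment}: {extent}")
```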
Aortic Arch Branches There are three major branches of the aortic arch: the brachiocephalic artery, the left common carotid artery, and the left subclavian (literally “under the clavicle”) artery. As you would expect based upon proximity to the heart, each of these vessels is classified as an elastic artery. The brachiocephalic artery is located only on the right side of the body; there is no corresponding artery on the left. The brachiocephalic artery branches into the right subclavian artery and the right common carotid artery. The left subclavian and left common carotid arteries arise independently from the aortic arch but otherwise follow a similar pattern and distribution to the corresponding arteries on the right side (see Figure 20.23 ). Each subclavian artery supplies blood to the arms, chest, shoulders, back, and central nervous system. It then gives rise to three major branches: the internal thoracic artery, the vertebral artery, and the thyrocervical artery. The internal thoracic artery , or mammary artery, supplies blood to the thymus, the pericardium of the heart, and the anterior chest wall. The vertebral artery passes through the vertebral foramen in the cervical vertebrae and then through the foramen magnum into the cranial cavity to supply blood to the brain and spinal cord. The paired vertebral arteries join together to form the large basilar artery at the base of the medulla oblongata. This is an example of an anastomosis. The subclavian artery also gives rise to the thyrocervical artery that provides blood to the thyroid, the cervical region of the neck, and the upper back and shoulder. The common carotid artery divides into internal and external carotid arteries. The right common carotid artery arises from the brachiocephalic artery and the left common carotid artery arises directly from the aortic arch. The external carotid artery supplies blood to numerous structures within the face, lower jaw, neck, esophagus, and larynx. These branches include the lingual, facial, occipital, maxillary, and superficial temporal arteries. The internal carotid artery initially forms an expansion known as the carotid sinus, containing the carotid baroreceptors and chemoreceptors. Like their counterparts in the aortic sinuses, the information provided by these receptors is critical to maintaining cardiovascular homeostasis (see Figure 20.23 ). The internal carotid arteries along with the vertebral arteries are the two primary suppliers of blood to the human brain. Given the central role and vital importance of the brain to life, it is critical that blood supply to this organ remains uninterrupted. Recall that blood flow to the brain is remarkably constant, with approximately 20 percent of blood flow directed to this organ at any given time. When blood flow is interrupted, even for just a few seconds, a transient ischemic attack (TIA) , or mini-stroke, may occur, resulting in loss of consciousness or temporary loss of neurological function. In some cases, the damage may be permanent. Loss of blood flow for longer periods, typically between 3 and 4 minutes, will likely produce irreversible brain damage or a stroke, also called a cerebrovascular accident (CVA) . The locations of the arteries in the brain not only provide blood flow to the brain tissue but also prevent interruption in the flow of blood. 
Both the carotid and vertebral arteries branch once they enter the cranial cavity, and some of these branches form a structure known as the arterial circle (or circle of Willis ), an anastomosis that is remarkably like a traffic circle that sends off branches (in this case, arterial branches to the brain). As a rule, branches to the anterior portion of the cerebrum are normally fed by the internal carotid arteries; the remainder of the brain receives blood flow from branches associated with the vertebral arteries. The internal carotid artery continues through the carotid canal of the temporal bone and enters the base of the brain through the carotid foramen where it gives rise to several branches ( Figure 20.26 and Figure 20.27 ). One of these branches is the anterior cerebral artery that supplies blood to the frontal lobe of the cerebrum. Another branch, the middle cerebral artery , supplies blood to the temporal and parietal lobes, which are the most common sites of CVAs. The ophthalmic artery , the third major branch, provides blood to the eyes. The right and left anterior cerebral arteries join together to form an anastomosis called the anterior communicating artery . The initial segments of the anterior cerebral arteries and the anterior communicating artery form the anterior portion of the arterial circle. The posterior portion of the arterial circle is formed by a left and a right posterior communicating artery that branches from the posterior cerebral artery , which arises from the basilar artery. It provides blood to the posterior portion of the cerebrum and brain stem. The basilar artery is an anastomosis that begins at the junction of the two vertebral arteries and sends branches to the cerebellum and brain stem. It flows into the posterior cerebral arteries. Table 20.6 summarizes the aortic arch branches, including the major branches supplying the brain. 
Table 20.6: Aortic Arch Branches and Brain Circulation
Brachiocephalic artery: Single vessel located on the right side of the body; the first vessel branching from the aortic arch; gives rise to the right subclavian artery and the right common carotid artery; supplies blood to the head, neck, upper limb, and wall of the thoracic region
Subclavian artery: The right subclavian artery arises from the brachiocephalic artery while the left subclavian artery arises from the aortic arch; gives rise to the internal thoracic, vertebral, and thyrocervical arteries; supplies blood to the arms, chest, shoulders, back, and central nervous system
Internal thoracic artery: Also called the mammary artery; arises from the subclavian artery; supplies blood to the thymus, pericardium of the heart, and anterior chest wall
Vertebral artery: Arises from the subclavian artery and passes through the vertebral foramina and the foramen magnum to the brain; joins with the internal carotid artery to form the arterial circle; supplies blood to the brain and spinal cord
Thyrocervical artery: Arises from the subclavian artery; supplies blood to the thyroid, the cervical region, the upper back, and shoulder
Common carotid artery: The right common carotid artery arises from the brachiocephalic artery and the left common carotid artery arises from the aortic arch; each gives rise to the external and internal carotid arteries; supplies the respective sides of the head and neck
External carotid artery: Arises from the common carotid artery; supplies blood to numerous structures within the face, lower jaw, neck, esophagus, and larynx
Internal carotid artery: Arises from the common carotid artery and begins with the carotid sinus; goes through the carotid canal of the temporal bone to the base of the brain; combines with the branches of the vertebral artery, forming the arterial circle; supplies blood to the brain
Arterial circle or circle of Willis: An anastomosis located at the base of the brain that ensures continual blood supply; formed from the branches of the internal carotid and vertebral arteries; supplies blood to the brain
Anterior cerebral artery: Arises from the internal carotid artery; supplies blood to the frontal lobe of the cerebrum
Middle cerebral artery: Another branch of the internal carotid artery; supplies blood to the temporal and parietal lobes of the cerebrum
Ophthalmic artery: Branch of the internal carotid artery; supplies blood to the eyes
Anterior communicating artery: An anastomosis of the right and left anterior cerebral arteries; supplies blood to the brain
Posterior communicating artery: Branches of the posterior cerebral artery that form part of the posterior portion of the arterial circle; supply blood to the brain
Posterior cerebral artery: Branch of the basilar artery that forms a portion of the posterior segment of the arterial circle of Willis; supplies blood to the posterior portion of the cerebrum and brain stem
Basilar artery: Formed from the fusion of the two vertebral arteries; sends branches to the cerebellum, brain stem, and the posterior cerebral arteries; the main blood supply to the brain stem

Thoracic Aorta and Major Branches
The thoracic aorta begins at the level of vertebra T5 and continues through to the diaphragm at the level of T12, initially traveling within the mediastinum to the left of the vertebral column.
As it passes through the thoracic region, the thoracic aorta gives rise to several branches, which are collectively referred to as visceral branches and parietal branches (Figure 20.28). Those branches that supply blood primarily to visceral organs are known as the visceral branches and include the bronchial arteries, pericardial arteries, esophageal arteries, and the mediastinal arteries, each named after the tissues it supplies. Each bronchial artery (typically two on the left and one on the right) supplies systemic blood to the lungs and visceral pleura, in addition to the blood pumped to the lungs for oxygenation via the pulmonary circuit. The bronchial arteries follow the same path as the respiratory branches, beginning with the bronchi and ending with the bronchioles. There is considerable, but not total, intermingling of the systemic and pulmonary blood at anastomoses in the smaller branches of the lungs. This may sound incongruous (that is, the mixing of systemic arterial blood high in oxygen with the pulmonary arterial blood lower in oxygen), but the systemic vessels also deliver nutrients to the lung tissue just as they do elsewhere in the body. The mixed blood drains into typical pulmonary veins, whereas the bronchial artery branches remain separate and drain into bronchial veins described later. Each pericardial artery supplies blood to the pericardium, the esophageal artery provides blood to the esophagus, and the mediastinal artery provides blood to the mediastinum. The remaining thoracic aorta branches are collectively referred to as parietal branches or somatic branches, and include the intercostal and superior phrenic arteries. Each intercostal artery provides blood to the muscles of the thoracic cavity and vertebral column. The superior phrenic artery provides blood to the superior surface of the diaphragm. Table 20.7 lists the arteries of the thoracic region.

Table 20.7: Arteries of the Thoracic Region
Visceral branches: A group of arterial branches of the thoracic aorta; supply blood to the viscera (i.e., organs) of the thorax
Bronchial artery: Systemic branch from the aorta that provides oxygenated blood to the lungs; this blood supply is in addition to the pulmonary circuit that brings blood for oxygenation
Pericardial artery: Branch of the thoracic aorta; supplies blood to the pericardium
Esophageal artery: Branch of the thoracic aorta; supplies blood to the esophagus
Mediastinal artery: Branch of the thoracic aorta; supplies blood to the mediastinum
Parietal branches: Also called somatic branches, a group of arterial branches of the thoracic aorta; include those that supply blood to the thoracic wall, vertebral column, and the superior surface of the diaphragm
Intercostal artery: Branch of the thoracic aorta; supplies blood to the muscles of the thoracic cavity and vertebral column
Superior phrenic artery: Branch of the thoracic aorta; supplies blood to the superior surface of the diaphragm

Abdominal Aorta and Major Branches
After crossing through the diaphragm at the aortic hiatus, the thoracic aorta is called the abdominal aorta (see Figure 20.28). This vessel remains to the left of the vertebral column and is embedded in adipose tissue behind the peritoneal cavity. It formally ends at approximately the level of vertebra L4, where it bifurcates to form the common iliac arteries. Before this division, the abdominal aorta gives rise to several important branches.
A single celiac trunk (artery) emerges and divides into the left gastric artery to supply blood to the stomach and esophagus, the splenic artery to supply blood to the spleen, and the common hepatic artery, which in turn gives rise to the hepatic artery proper to supply blood to the liver, the right gastric artery to supply blood to the stomach, the cystic artery to supply blood to the gall bladder, and several branches, one to supply blood to the duodenum and another to supply blood to the pancreas. Two additional single vessels arise from the abdominal aorta. These are the superior and inferior mesenteric arteries. The superior mesenteric artery arises approximately 2.5 cm inferior to the celiac trunk and branches into several major vessels that supply blood to the small intestine (duodenum, jejunum, and ileum), the pancreas, and a majority of the large intestine. The inferior mesenteric artery supplies blood to the distal segment of the large intestine, including the rectum. It arises approximately 5 cm superior to the common iliac arteries. In addition to these single branches, the abdominal aorta gives rise to several significant paired arteries along the way. These include the inferior phrenic arteries, the adrenal arteries, the renal arteries, the gonadal arteries, and the lumbar arteries. Each inferior phrenic artery is a counterpart of a superior phrenic artery and supplies blood to the inferior surface of the diaphragm. The adrenal artery supplies blood to the adrenal (suprarenal) glands and arises near the superior mesenteric artery. Each renal artery branches approximately 2.5 cm inferior to the superior mesenteric artery and supplies a kidney. The right renal artery is longer than the left since the aorta lies to the left of the vertebral column and the vessel must travel a greater distance to reach its target. Renal arteries branch repeatedly to supply blood to the kidneys. Each gonadal artery supplies blood to the gonads, or reproductive organs, and is also described as either an ovarian artery or a testicular artery (internal spermatic), depending upon the sex of the individual. An ovarian artery supplies blood to an ovary, uterine (Fallopian) tube, and the uterus, and is located within the suspensory ligament of the uterus. It is considerably shorter than a testicular artery, which ultimately travels outside the body cavity to the testes, forming one component of the spermatic cord. The gonadal arteries arise inferior to the renal arteries and are generally retroperitoneal. The ovarian artery continues to the uterus, where it forms an anastomosis with the uterine artery that supplies blood to the uterus. Both the uterine arteries and vaginal arteries, which distribute blood to the vagina, are branches of the internal iliac artery. The four paired lumbar arteries are the counterparts of the intercostal arteries and supply blood to the lumbar region, the abdominal wall, and the spinal cord. In some instances, a fifth pair of lumbar arteries emerges from the median sacral artery. The aorta divides at approximately the level of vertebra L4 into a left and a right common iliac artery but continues as a small vessel, the median sacral artery, into the sacrum. The common iliac arteries provide blood to the pelvic region and ultimately to the lower limbs. They split into external and internal iliac arteries approximately at the level of the lumbar-sacral articulation.
Each internal iliac artery sends branches to the urinary bladder, the walls of the pelvis, the external genitalia, and the medial portion of the femoral region. In females, they also provide blood to the uterus and vagina. The much larger external iliac artery supplies blood to each of the lower limbs. Figure 20.29 shows the distribution of the major branches of the aorta into the thoracic and abdominal regions. Figure 20.30 shows the distribution of the major branches of the common iliac arteries. Table 20.8 summarizes the major branches of the abdominal aorta.

Table 20.8: Vessels of the Abdominal Aorta
Celiac trunk: Also called the celiac artery; a major branch of the abdominal aorta; gives rise to the left gastric artery, the splenic artery, and the common hepatic artery, which forms the hepatic artery proper to the liver, the right gastric artery to the stomach, and the cystic artery to the gall bladder
Left gastric artery: Branch of the celiac trunk; supplies blood to the stomach
Splenic artery: Branch of the celiac trunk; supplies blood to the spleen
Common hepatic artery: Branch of the celiac trunk that forms the hepatic artery proper, the right gastric artery, and the cystic artery
Hepatic artery proper: Branch of the common hepatic artery; supplies systemic blood to the liver
Right gastric artery: Branch of the common hepatic artery; supplies blood to the stomach
Cystic artery: Branch of the common hepatic artery; supplies blood to the gall bladder
Superior mesenteric artery: Branch of the abdominal aorta; supplies blood to the small intestine (duodenum, jejunum, and ileum), the pancreas, and a majority of the large intestine
Inferior mesenteric artery: Branch of the abdominal aorta; supplies blood to the distal segment of the large intestine and rectum
Inferior phrenic arteries: Branches of the abdominal aorta; supply blood to the inferior surface of the diaphragm
Adrenal artery: Branch of the abdominal aorta; supplies blood to the adrenal (suprarenal) glands
Renal artery: Branch of the abdominal aorta; supplies each kidney
Gonadal artery: Branch of the abdominal aorta; supplies blood to the gonads, or reproductive organs; also described as ovarian arteries or testicular arteries, depending upon the sex of the individual
Ovarian artery: Branch of the abdominal aorta; supplies blood to the ovary, uterine (Fallopian) tube, and uterus
Testicular artery: Branch of the abdominal aorta; ultimately travels outside the body cavity to the testes and forms one component of the spermatic cord
Lumbar arteries: Branches of the abdominal aorta; supply blood to the lumbar region, the abdominal wall, and spinal cord
Common iliac artery: Branch of the aorta that leads to the internal and external iliac arteries
Median sacral artery: Continuation of the aorta into the sacrum
Internal iliac artery: Branch from the common iliac arteries; supplies blood to the urinary bladder, walls of the pelvis, external genitalia, and the medial portion of the femoral region; in females, also provides blood to the uterus and vagina
External iliac artery: Branch of the common iliac artery that leaves the body cavity and becomes a femoral artery; supplies blood to the lower limbs

Arteries Serving the Upper Limbs
As the subclavian artery exits the thorax into the axillary region, it is renamed the axillary artery.
Although it does branch and supply blood to the region near the head of the humerus (via the humeral circumflex arteries), the majority of the vessel continues into the upper arm, or brachium, and becomes the brachial artery (Figure 20.31). The brachial artery supplies blood to much of the brachial region and divides at the elbow into several smaller branches, including the deep brachial arteries, which provide blood to the posterior surface of the arm, and the ulnar collateral arteries, which supply blood to the region of the elbow. As the brachial artery approaches the coronoid fossa, it bifurcates into the radial and ulnar arteries, which continue into the forearm, or antebrachium. The radial artery and ulnar artery parallel their namesake bones, giving off smaller branches until they reach the wrist, or carpal region. At this level, they fuse to form the superficial and deep palmar arches that supply blood to the hand, as well as the digital arteries that supply blood to the digits. Figure 20.32 shows the distribution of systemic arteries from the heart into the upper limb. Table 20.9 summarizes the arteries serving the upper limbs.

Table 20.9: Arteries Serving the Upper Limbs
Axillary artery: Continuation of the subclavian artery as it penetrates the body wall and enters the axillary region; supplies blood to the region near the head of the humerus (humeral circumflex arteries); the majority of the vessel continues into the brachium and becomes the brachial artery
Brachial artery: Continuation of the axillary artery in the brachium; supplies blood to much of the brachial region; gives off several smaller branches that provide blood to the posterior surface of the arm in the region of the elbow; bifurcates into the radial and ulnar arteries at the coronoid fossa
Radial artery: Formed at the bifurcation of the brachial artery; parallels the radius; gives off smaller branches until it reaches the carpal region, where it fuses with the ulnar artery to form the superficial and deep palmar arches; supplies blood to the lower arm and carpal region
Ulnar artery: Formed at the bifurcation of the brachial artery; parallels the ulna; gives off smaller branches until it reaches the carpal region, where it fuses with the radial artery to form the superficial and deep palmar arches; supplies blood to the lower arm and carpal region
Palmar arches (superficial and deep): Formed from anastomosis of the radial and ulnar arteries; supply blood to the hand and digital arteries
Digital arteries: Formed from the superficial and deep palmar arches; supply blood to the digits

Arteries Serving the Lower Limbs
The external iliac artery exits the body cavity and enters the femoral region of the lower limb (Figure 20.33). As it passes through the body wall, it is renamed the femoral artery. It gives off several smaller branches as well as the deep femoral artery, which in turn gives rise to a lateral circumflex artery. These arteries supply blood to the deep muscles of the thigh as well as ventral and lateral regions of the integument. The femoral artery also gives rise to the genicular artery, which provides blood to the region of the knee. As the femoral artery passes posterior to the knee near the popliteal fossa, it is called the popliteal artery. The popliteal artery branches into the anterior and posterior tibial arteries. The anterior tibial artery is located between the tibia and fibula, and supplies blood to the muscles and integument of the anterior tibial region.
Upon reaching the tarsal region, it becomes the dorsalis pedis artery, which branches repeatedly and provides blood to the tarsal and dorsal regions of the foot. The posterior tibial artery provides blood to the muscles and integument on the posterior surface of the tibial region. The fibular, or peroneal, artery branches from the posterior tibial artery. The posterior tibial artery then bifurcates into the medial plantar artery and the lateral plantar artery, providing blood to the plantar surfaces. There is an anastomosis with the dorsalis pedis artery, and the medial and lateral plantar arteries form two arches called the dorsal arch (also called the arcuate arch) and the plantar arch, which provide blood to the remainder of the foot and toes. Figure 20.34 shows the distribution of the major systemic arteries in the lower limb. Table 20.10 summarizes the major systemic arteries discussed in the text.

Table 20.10: Arteries Serving the Lower Limbs
Femoral artery: Continuation of the external iliac artery after it passes through the body cavity; divides into several smaller branches, the deep femoral artery, and the genicular artery; becomes the popliteal artery as it passes posterior to the knee
Deep femoral artery: Branch of the femoral artery; gives rise to the lateral circumflex arteries
Lateral circumflex artery: Branch of the deep femoral artery; supplies blood to the deep muscles of the thigh and the ventral and lateral regions of the integument
Genicular artery: Branch of the femoral artery; supplies blood to the region of the knee
Popliteal artery: Continuation of the femoral artery posterior to the knee; branches into the anterior and posterior tibial arteries
Anterior tibial artery: Branches from the popliteal artery; supplies blood to the anterior tibial region; becomes the dorsalis pedis artery
Dorsalis pedis artery: Forms from the anterior tibial artery; branches repeatedly to supply blood to the tarsal and dorsal regions of the foot
Posterior tibial artery: Branches from the popliteal artery and gives rise to the fibular or peroneal artery; supplies blood to the posterior tibial region
Medial plantar artery: Arises from the bifurcation of the posterior tibial artery; supplies blood to the medial plantar surfaces of the foot
Lateral plantar artery: Arises from the bifurcation of the posterior tibial artery; supplies blood to the lateral plantar surfaces of the foot
Dorsal or arcuate arch: Formed from the anastomosis of the dorsalis pedis artery and the medial and lateral plantar arteries; branches supply the distal portions of the foot and digits
Plantar arch: Formed from the anastomosis of the dorsalis pedis artery and the medial and lateral plantar arteries; branches supply the distal portions of the foot and digits

Overview of Systemic Veins
Systemic veins return blood to the right atrium. Since the blood has already passed through the systemic capillaries, it will be relatively low in oxygen concentration. In many cases, there will be veins draining organs and regions of the body with the same name as the arteries that supplied these regions, and the two often parallel one another. This is often described as a “complementary” pattern. However, there is a great deal more variability in the venous circulation than normally occurs in the arteries. For the sake of brevity and clarity, this text will discuss only the most commonly encountered patterns. However, keep this variation in mind when you move from the classroom to clinical practice.
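The flow-chart habit works for entire limbs as well. As one worked example before the venous patterns are traced, the Python sketch below stores the lower-limb arterial branching just summarized in Table 20.10 as a nested dictionary and prints it as an indented line drawing; it follows the description above and omits the arch anastomoses for simplicity.

```python
# Lower-limb arterial flow chart from this section as a nested dictionary.
# Study sketch; the dorsal and plantar arch anastomoses are omitted.

LOWER_LIMB_ARTERIES = {
    "external iliac artery": {
        "femoral artery": {
            "deep femoral artery": {"lateral circumflex artery": {}},
            "genicular artery": {},
            "popliteal artery": {
                "anterior tibial artery": {"dorsalis pedis artery": {}},
                "posterior tibial artery": {
                    "fibular (peroneal) artery": {},
                    "medial plantar artery": {},
                    "lateral plantar artery": {},
                },
            },
        },
    },
}

def print_tree(node, depth=0):
    """Print each vessel indented beneath the vessel it arises from."""
    for vessel, branches in node.items():
        print("  " * depth + vessel)
        print_tree(branches, depth + 1)

print_tree(LOWER_LIMB_ARTERIES)
```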
In both the neck and limb regions, there are often both superficial and deeper levels of veins. The deeper veins generally correspond to the complementary arteries. The superficial veins do not normally have direct arterial counterparts, but in addition to returning blood, they also make contributions to the maintenance of body temperature. When the ambient temperature is warm, more blood is diverted to the superficial veins where heat can be more easily dissipated to the environment. In colder weather, there is more constriction of the superficial veins and blood is diverted deeper where the body can retain more of the heat. The “Voyage of Discovery” analogy and stick drawings mentioned earlier remain valid techniques for the study of systemic veins, but veins present a more difficult challenge because there are numerous anastomoses and multiple branches. It is like following a river with many tributaries and channels, several of which interconnect. Tracing blood flow through arteries follows the current in the direction of blood flow, so that we move from the heart through the large arteries and into the smaller arteries to the capillaries. From the capillaries, we move into the smallest veins and follow the direction of blood flow into larger veins and back to the heart. Figure 20.35 outlines the path of the major systemic veins. Interactive Link Visit this site for a brief online summary of the veins. The right atrium receives all of the systemic venous return. Most of the blood flows into either the superior vena cava or inferior vena cava. If you draw an imaginary line at the level of the diaphragm, systemic venous circulation from above that line will generally flow into the superior vena cava; this includes blood from the head, neck, chest, shoulders, and upper limbs. The exception to this is that most venous blood flow from the coronary veins flows directly into the coronary sinus and from there directly into the right atrium. Beneath the diaphragm, systemic venous flow enters the inferior vena cava, that is, blood from the abdominal and pelvic regions and the lower limbs. The Superior Vena Cava The superior vena cava drains most of the body superior to the diaphragm ( Figure 20.36 ). On both the left and right sides, the subclavian vein forms when the axillary vein passes through the body wall from the axillary region. It fuses with the external and internal jugular veins from the head and neck to form the brachiocephalic vein . Each vertebral vein also flows into the brachiocephalic vein close to this fusion. These veins arise from the base of the brain and the cervical region of the spinal cord, and flow largely through the intervertebral foramina in the cervical vertebrae. They are the counterparts of the vertebral arteries. Each internal thoracic vein , also known as an internal mammary vein, drains the anterior surface of the chest wall and flows into the brachiocephalic vein. The remainder of the blood supply from the thorax drains into the azygos vein. Each intercostal vein drains muscles of the thoracic wall, each esophageal vein delivers blood from the inferior portions of the esophagus, each bronchial vein drains the systemic circulation from the lungs, and several smaller veins drain the mediastinal region. Bronchial veins carry approximately 13 percent of the blood that flows into the bronchial arteries; the remainder intermingles with the pulmonary circulation and returns to the heart via the pulmonary veins. 
These veins flow into the azygos vein, which, together with the smaller hemiazygos vein (hemi- = “half”) on the left of the vertebral column, drains blood from the thoracic region. The hemiazygos vein does not drain directly into the superior vena cava but enters the brachiocephalic vein via the superior intercostal vein. The azygos vein originates in the lumbar region and passes through the diaphragm into the thoracic cavity on the right side of the vertebral column. It flows into the superior vena cava at approximately the level of T2, making a significant contribution to the flow of blood. The superior vena cava itself is formed by the union of the two large left and right brachiocephalic veins. Table 20.11 summarizes the veins of the thoracic region that flow into the superior vena cava.

Table 20.11: Veins of the Thoracic Region
Superior vena cava: Large systemic vein; drains blood from most areas superior to the diaphragm; empties into the right atrium
Subclavian vein: Located deep in the thoracic cavity; formed by the axillary vein as it enters the thoracic cavity from the axillary region; drains the axillary and smaller local veins near the scapular region and leads to the brachiocephalic vein
Brachiocephalic veins: Pair of veins that form from a fusion of the external and internal jugular veins and the subclavian vein; the subclavian, external and internal jugular, vertebral, and internal thoracic veins flow into them; drain the upper thoracic region and lead to the superior vena cava
Vertebral vein: Arises from the base of the brain and the cervical region of the spinal cord; passes through the intervertebral foramina in the cervical vertebrae; drains smaller veins from the cranium, spinal cord, and vertebrae, and leads to the brachiocephalic vein; counterpart of the vertebral artery
Internal thoracic veins: Also called internal mammary veins; drain the anterior surface of the chest wall and lead to the brachiocephalic vein
Intercostal vein: Drains the muscles of the thoracic wall and leads to the azygos vein
Esophageal vein: Drains the inferior portions of the esophagus and leads to the azygos vein
Bronchial vein: Drains the systemic circulation from the lungs and leads to the azygos vein
Azygos vein: Originates in the lumbar region and passes through the diaphragm into the thoracic cavity on the right side of the vertebral column; drains blood from the intercostal veins, esophageal veins, bronchial veins, and other veins draining the mediastinal region, and leads to the superior vena cava
Hemiazygos vein: Smaller vein complementary to the azygos vein; drains the esophageal veins from the esophagus and the left intercostal veins, and leads to the brachiocephalic vein via the superior intercostal vein

Veins of the Head and Neck
Blood from the brain and the superficial facial vein flows into each internal jugular vein (Figure 20.37). Blood from the more superficial portions of the head, scalp, and cranial regions, including the temporal vein and maxillary vein, flows into each external jugular vein. Although the external and internal jugular veins are separate vessels, there are anastomoses between them close to the thoracic region. Blood from the external jugular vein empties into the subclavian vein. Table 20.12 summarizes the major veins of the head and neck.
Major Veins of the Head and Neck
Internal jugular vein: Parallel to the common carotid artery, which is more or less its counterpart; passes through the jugular foramen and canal; primarily drains blood from the brain, receives the superficial facial vein, and empties into the subclavian vein
Temporal vein: Drains blood from the temporal region and flows into the external jugular vein
Maxillary vein: Drains blood from the maxillary region and flows into the external jugular vein
External jugular vein: Drains blood from the more superficial portions of the head, scalp, and cranial regions, and leads to the subclavian vein
Table 20.12

Venous Drainage of the Brain
Circulation to the brain is both critical and complex (see Figure 20.37). Many smaller veins of the brain stem and the superficial veins of the cerebrum lead to larger vessels referred to as intracranial sinuses. These include the superior and inferior sagittal sinuses, the straight sinus, the cavernous sinuses, the left and right transverse sinuses, the petrosal sinuses, and the occipital sinuses. Ultimately, the sinuses lead back to either the internal jugular vein or the vertebral vein.

Most of the veins on the superior surface of the cerebrum flow into the largest of the sinuses, the superior sagittal sinus. It is located midsagittally between the meningeal and periosteal layers of the dura mater within the falx cerebri and, at first glance in images or models, can be mistaken for the subarachnoid space. Most reabsorption of cerebrospinal fluid occurs via the arachnoid villi (arachnoid granulations) into the superior sagittal sinus. Blood from most of the smaller vessels originating from the inferior cerebral veins flows into the great cerebral vein and into the straight sinus. Other cerebral veins and those from the eye socket flow into the cavernous sinus, which flows into the petrosal sinus and then into the internal jugular vein. The occipital sinus, sagittal sinus, and straight sinus all flow into the left and right transverse sinuses near the lambdoid suture. The transverse sinuses in turn flow into the sigmoid sinuses that pass through the jugular foramen and into the internal jugular vein. The internal jugular vein flows parallel to the common carotid artery and is more or less its counterpart. It empties into the brachiocephalic vein.

The veins draining the cervical vertebrae and the posterior surface of the skull, including some blood from the occipital sinus, flow into the vertebral veins. These parallel the vertebral arteries and travel through the transverse foramina of the cervical vertebrae. The vertebral veins also flow into the brachiocephalic veins. Table 20.13 summarizes the major veins of the brain.
Major Veins of the Brain
Superior sagittal sinus: Enlarged vein located midsagittally between the meningeal and periosteal layers of the dura mater within the falx cerebri; receives most of the blood drained from the superior surface of the cerebrum and leads ultimately to the internal jugular vein and the vertebral vein
Great cerebral vein: Receives most of the smaller vessels from the inferior cerebral veins and leads to the straight sinus
Straight sinus: Enlarged vein that drains blood from the brain; receives most of the blood from the great cerebral vein and leads to the left or right transverse sinus
Cavernous sinus: Enlarged vein that receives blood from most of the other cerebral veins and the eye socket, and leads to the petrosal sinus
Petrosal sinus: Enlarged vein that receives blood from the cavernous sinus and leads into the internal jugular vein
Occipital sinus: Enlarged vein that drains the occipital region near the falx cerebelli and leads to the left and right transverse sinuses, and also the vertebral veins
Transverse sinuses: Pair of enlarged veins near the lambdoid suture that drain the occipital, sagittal, and straight sinuses, and lead to the sigmoid sinuses
Sigmoid sinuses: Enlarged veins that receive blood from the transverse sinuses and lead through the jugular foramen to the internal jugular vein
Table 20.13

Veins Draining the Upper Limbs
The digital veins in the fingers come together in the hand to form the palmar venous arches (Figure 20.38). From here, the veins come together to form the radial vein, the ulnar vein, and the median antebrachial vein. The radial vein and the ulnar vein parallel the bones of the forearm and join together at the antebrachium to form the brachial vein, a deep vein that flows into the axillary vein in the brachium. The median antebrachial vein parallels the ulnar vein, is more medial in location, and joins the basilic vein in the forearm. As the basilic vein reaches the antecubital region, it gives off a branch called the median cubital vein that crosses at an angle to join the cephalic vein. The median cubital vein is the most common site for drawing venous blood in humans. The basilic vein continues through the arm medially and superficially to the axillary vein.

The cephalic vein begins in the antebrachium and drains blood from the superficial surface of the arm into the axillary vein. It is extremely superficial and easily seen along the surface of the biceps brachii muscle in individuals with good muscle tone and in those without excessive subcutaneous adipose tissue in the arms. The subscapular vein drains blood from the subscapular region and joins the cephalic vein to form the axillary vein. As it passes through the body wall and enters the thorax, the axillary vein becomes the subclavian vein. Many of the larger veins of the thoracic and abdominal regions and upper limb are further represented in the flow chart in Figure 20.39. Table 20.14 summarizes the veins of the upper limbs.
Veins of the Upper Limbs
Digital veins: Drain the digits and lead to the palmar venous arches of the hand
Palmar venous arches: Drain the hand and digits, and lead to the radial vein, the ulnar vein, and the median antebrachial vein
Radial vein: Parallels the radius and radial artery; arises from the palmar venous arches and leads to the brachial vein
Ulnar vein: Parallels the ulna and ulnar artery; arises from the palmar venous arches and leads to the brachial vein
Brachial vein: Deeper vein of the arm that forms from the radial and ulnar veins in the lower arm; leads to the axillary vein
Median antebrachial vein: Parallels the ulnar vein but is more medial in location; intertwines with the palmar venous arches; leads to the basilic vein
Basilic vein: Superficial vein of the arm that arises from the median antebrachial vein, intersects with the median cubital vein, parallels the ulnar vein, and continues into the upper arm; along with the brachial vein, it leads to the axillary vein
Median cubital vein: Superficial vessel located in the antecubital region that links the cephalic vein to the basilic vein in the form of a V; a frequent site from which to draw blood
Cephalic vein: Superficial vessel in the upper arm; leads to the axillary vein
Subscapular vein: Drains blood from the subscapular region and leads to the axillary vein
Axillary vein: The major vein in the axillary region; drains the upper limb and becomes the subclavian vein
Table 20.14

The Inferior Vena Cava
Other than the small amount of blood drained by the azygos and hemiazygos veins, most of the blood inferior to the diaphragm drains into the inferior vena cava before it is returned to the heart (see Figure 20.36). Lying just beneath the parietal peritoneum in the abdominal cavity, the inferior vena cava parallels the abdominal aorta and receives blood from the abdominal veins. The lumbar portions of the abdominal wall and spinal cord are drained by a series of lumbar veins, usually four on each side. The ascending lumbar veins drain into either the azygos vein on the right or the hemiazygos vein on the left, and return to the superior vena cava. The remaining lumbar veins drain directly into the inferior vena cava.

Blood from the kidneys flows into each renal vein, normally the largest of the veins entering the inferior vena cava. A number of other, smaller veins empty into the left renal vein. Each adrenal vein drains the adrenal or suprarenal glands located immediately superior to the kidneys. The right adrenal vein enters the inferior vena cava directly, whereas the left adrenal vein enters the left renal vein. From the male reproductive organs, each testicular vein flows from the scrotum, forming a portion of the spermatic cord. Each ovarian vein drains an ovary in females. Each of these veins is generically called a gonadal vein. The right gonadal vein empties directly into the inferior vena cava, and the left gonadal vein empties into the left renal vein. Each side of the diaphragm drains into a phrenic vein; the right phrenic vein empties directly into the inferior vena cava, whereas the left phrenic vein empties into the left renal vein. Blood from the liver drains into each hepatic vein and directly into the inferior vena cava. Because the inferior vena cava lies primarily to the right of the vertebral column and aorta, the left renal vein is longer, as are the left phrenic, adrenal, and gonadal veins.
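This left/right asymmetry reduces to a small lookup rule. The following Python sketch hard-codes the rule from the description above; it is a simplification for illustration, not an exhaustive model of the abdominal tributaries.

```python
# Left/right drainage asymmetry: the inferior vena cava lies to the
# right of the midline, so the left adrenal, gonadal, and phrenic veins
# reach it indirectly via the longer left renal vein.
VIA_LEFT_RENAL = {"adrenal vein", "gonadal vein", "phrenic vein"}

def drainage_target(vein, side):
    if side == "left" and vein in VIA_LEFT_RENAL:
        return "left renal vein"
    return "inferior vena cava"

for vein in ("renal vein", "adrenal vein", "gonadal vein", "phrenic vein"):
    for side in ("right", "left"):
        print(f"{side} {vein} empties into the {drainage_target(vein, side)}")
```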
The greater length of the left renal vein is one reason the left kidney is the preferred organ for surgeons to remove from living donors. Figure 20.40 provides a flow chart of the veins flowing into the inferior vena cava. Table 20.15 summarizes the major veins of the abdominal region.

Major Veins of the Abdominal Region
Inferior vena cava: Large systemic vein that drains blood from areas largely inferior to the diaphragm; empties into the right atrium
Lumbar veins: Series of veins that drain the lumbar portion of the abdominal wall and spinal cord; the ascending lumbar veins drain into the azygos vein on the right or the hemiazygos vein on the left; the remaining lumbar veins drain directly into the inferior vena cava
Renal vein: Largest vein entering the inferior vena cava; drains the kidneys and flows into the inferior vena cava
Adrenal vein: Drains the adrenal or suprarenal glands; the right adrenal vein enters the inferior vena cava directly and the left adrenal vein enters the left renal vein
Testicular vein: Drains the testes and forms part of the spermatic cord; the right testicular vein empties directly into the inferior vena cava and the left testicular vein empties into the left renal vein
Ovarian vein: Drains the ovary; the right ovarian vein empties directly into the inferior vena cava and the left ovarian vein empties into the left renal vein
Gonadal vein: Generic term for a vein draining a reproductive organ; may be either an ovarian vein or a testicular vein, depending on the sex of the individual
Phrenic vein: Drains the diaphragm; the right phrenic vein flows into the inferior vena cava and the left phrenic vein empties into the left renal vein
Hepatic vein: Drains systemic blood from the liver and flows into the inferior vena cava
Table 20.15

Veins Draining the Lower Limbs
The superior surface of the foot drains into the digital veins, and the inferior surface drains into the plantar veins, which flow into a complex series of anastomoses in the feet and ankles, including the dorsal venous arch and the plantar venous arch (Figure 20.41). From the dorsal venous arch, blood drains into the anterior and posterior tibial veins. The anterior tibial vein drains the area near the tibialis anterior muscle and combines with the posterior tibial vein and the fibular vein to form the popliteal vein. The posterior tibial vein drains the posterior surface of the tibia and joins the popliteal vein. The fibular vein drains the muscles and integument in proximity to the fibula and also joins the popliteal vein. The small saphenous vein, located on the lateral surface of the leg, drains blood from the superficial regions of the lower leg and foot and flows into the popliteal vein. As the popliteal vein passes behind the knee in the popliteal region, it becomes the femoral vein. It is palpable in patients without excessive adipose tissue.

Close to the body wall, the great saphenous vein, the deep femoral vein, and the femoral circumflex vein drain into the femoral vein. The great saphenous vein is a prominent surface vessel located on the medial surface of the leg and thigh that collects blood from the superficial portions of these areas. The deep femoral vein, as the name suggests, drains blood from the deeper portions of the thigh. The femoral circumflex vein forms a loop around the femur just inferior to the trochanters and drains blood from the areas in proximity to the head and neck of the femur.
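The graph-walking sketch used earlier for the thoracic veins applies equally to the deep drainage just described. As before, this is illustrative: each vessel is collapsed to one representative downstream target, even though the dorsal and plantar arches actually feed both tibial veins through anastomoses.

```python
# Simplified drainage map for the lower limb, as described above.
LOWER_LIMB_DRAINS_INTO = {
    "dorsal venous arch": "anterior tibial vein",
    "plantar venous arch": "posterior tibial vein",
    "anterior tibial vein": "popliteal vein",
    "posterior tibial vein": "popliteal vein",
    "fibular vein": "popliteal vein",
    "small saphenous vein": "popliteal vein",
    "great saphenous vein": "femoral vein",
    "deep femoral vein": "femoral vein",
    "femoral circumflex vein": "femoral vein",
    "popliteal vein": "femoral vein",
}

def trace(vessel, graph):
    """Walk downstream until no further target is listed."""
    path = [vessel]
    while path[-1] in graph:
        path.append(graph[path[-1]])
    return " -> ".join(path)

print(trace("dorsal venous arch", LOWER_LIMB_DRAINS_INTO))
# dorsal venous arch -> anterior tibial vein -> popliteal vein -> femoral vein
```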
As the femoral vein penetrates the body wall from the femoral portion of the lower limb, it becomes the external iliac vein, a large vein that drains blood from the leg to the common iliac vein. The pelvic organs and integument drain into the internal iliac vein, which forms from several smaller veins in the region, including the umbilical veins that run on either side of the bladder. The external and internal iliac veins combine near the inferior portion of the sacroiliac joint to form the common iliac vein. In addition to receiving blood from the external and internal iliac veins, the left common iliac vein receives the middle sacral vein, which drains the sacral region. Similar to the common iliac arteries, the common iliac veins come together at the level of L5 to form the inferior vena cava. Figure 20.42 is a flow chart of the veins draining the lower limb. Table 20.16 summarizes the major veins of the lower limbs.

Veins of the Lower Limbs
Plantar veins: Drain the foot and flow into the plantar venous arch
Dorsal venous arch: Drains blood from the digital veins and vessels on the superior surface of the foot
Plantar venous arch: Formed from the plantar veins; flows into the anterior and posterior tibial veins through anastomoses
Anterior tibial vein: Formed from the dorsal venous arch; drains the area near the tibialis anterior muscle and flows into the popliteal vein
Posterior tibial vein: Formed from the dorsal venous arch; drains the area near the posterior surface of the tibia and flows into the popliteal vein
Fibular vein: Drains the muscles and integument near the fibula and flows into the popliteal vein
Small saphenous vein: Located on the lateral surface of the leg; drains blood from the superficial regions of the lower leg and foot, and flows into the popliteal vein
Popliteal vein: Drains the region behind the knee and forms from the fusion of the fibular, anterior tibial, and posterior tibial veins; flows into the femoral vein
Great saphenous vein: Prominent surface vessel located on the medial surface of the leg and thigh; drains the superficial portions of these areas and flows into the femoral vein
Deep femoral vein: Drains blood from the deeper portions of the thigh and flows into the femoral vein
Femoral circumflex vein: Forms a loop around the femur just inferior to the trochanters; drains blood from the areas around the head and neck of the femur; flows into the femoral vein
Femoral vein: Drains the upper leg; receives blood from the great saphenous vein, the deep femoral vein, and the femoral circumflex vein; becomes the external iliac vein when it crosses the body wall
External iliac vein: Formed when the femoral vein passes into the body cavity; drains the leg and flows into the common iliac vein
Internal iliac vein: Drains the pelvic organs and integument; formed from several smaller veins in the region; flows into the common iliac vein
Middle sacral vein: Drains the sacral region and flows into the left common iliac vein
Common iliac vein: Flows into the inferior vena cava at the level of L5; the left common iliac vein also receives the middle sacral vein; formed from the union of the external and internal iliac veins near the inferior portion of the sacroiliac joint
Table 20.16

Hepatic Portal System
The liver is a complex biochemical processing plant. It packages nutrients absorbed by the digestive system; produces plasma proteins, clotting factors, and bile; and disposes of worn-out cell components and waste products.
Instead of entering the circulation directly, absorbed nutrients and certain wastes (for example, materials produced by the spleen) travel to the liver for processing. They do so via the hepatic portal system (Figure 20.43). Portal systems begin and end in capillaries. In this case, the initial capillaries from the stomach, small intestine, large intestine, and spleen lead to the hepatic portal vein and end in specialized capillaries within the liver, the hepatic sinusoids. You saw the only other portal system, the hypothalamic-hypophyseal portal system, in the endocrine chapter.

The hepatic portal system consists of the hepatic portal vein and the veins that drain into it. The hepatic portal vein itself is relatively short, beginning at the level of L2 with the confluence of the superior mesenteric and splenic veins. The superior mesenteric vein receives blood from the small intestine, two-thirds of the large intestine, and the stomach. The inferior mesenteric vein drains the distal third of the large intestine, including the descending colon, the sigmoid colon, and the rectum; it joins the splenic vein. The splenic vein itself is formed from branches from the spleen, pancreas, and portions of the stomach, together with the inferior mesenteric vein. After its formation, the hepatic portal vein also receives branches from the gastric veins of the stomach and the cystic veins from the gallbladder. The hepatic portal vein delivers materials from these digestive and circulatory organs directly to the liver for processing.

Because of the hepatic portal system, the liver receives its blood supply from two different sources: from normal systemic circulation via the hepatic artery and from the hepatic portal vein. The liver processes the blood from the portal system to remove certain wastes and excess nutrients, which are stored for later use. This processed blood, as well as the systemic blood that came from the hepatic artery, exits the liver via the right, left, and middle hepatic veins and flows into the inferior vena cava. Overall systemic blood composition remains relatively stable, since the liver is able to metabolize the absorbed digestive components.

20.6 Development of Blood Vessels and Fetal Circulation
Learning Objectives
By the end of this section, you will be able to:
Describe the development of blood vessels
Describe the fetal circulation

In a developing embryo, the heart has developed enough by day 21 post-fertilization to begin beating. Circulation patterns are clearly established by the fourth week of embryonic life. It is critical to the survival of the developing human that the circulatory system forms early to supply the growing tissue with nutrients and gases, and to remove waste products. Production of blood cells and vessels begins in structures outside the embryo proper (the yolk sac, chorion, and connecting stalk) about 15 to 16 days following fertilization. Development of these circulatory elements within the embryo itself begins approximately 2 days later. You will learn more about the formation and function of these early structures when you study the chapter on development.

During those first few weeks, blood vessels begin to form from the embryonic mesoderm. The precursor cells are known as hemangioblasts. These in turn differentiate into angioblasts, which give rise to the blood vessels, and into pluripotent stem cells, which differentiate into the formed elements of blood.
(Seek additional content for more detail on fetal development and circulation.) Together, the angioblasts and pluripotent stem cells form masses known as blood islands scattered throughout the embryonic disc. Spaces appear within the blood islands and develop into vessel lumens. The endothelial lining of the vessels arises from the angioblasts within these islands. Surrounding mesenchymal cells give rise to the smooth muscle and connective tissue layers of the vessels. While the vessels are developing, the pluripotent stem cells begin to form the blood. Vascular tubes also develop on the blood islands, and they eventually connect to one another as well as to the developing, tubular heart. Thus, rather than beginning with the formation of one central vessel and spreading outward, development occurs in many regions simultaneously, with vessels later joining together. This angiogenesis—the creation of new blood vessels from existing ones—continues as needed throughout life as we grow and develop.

Blood vessel development often follows the same pattern as nerve development and travels to the same target tissues and organs. This occurs because the many factors directing growth of nerves also stimulate blood vessels to follow a similar pattern. Whether a given vessel develops into an artery or a vein is dependent upon local concentrations of signaling proteins.

As the embryo grows within the mother’s uterus, its requirements for nutrients and gas exchange also grow. The placenta—a circulatory organ unique to pregnancy—develops jointly from the embryo and uterine wall structures to fill this need. Emerging from the placenta is the umbilical vein, which carries oxygen-rich blood from the mother to the fetal inferior vena cava via the ductus venosus; from there it travels to the heart, which pumps it into the fetal circulation. Two umbilical arteries carry oxygen-depleted fetal blood, including wastes and carbon dioxide, to the placenta. Remnants of the umbilical arteries remain in the adult. (Seek additional content for more information on the role of the placenta in fetal circulation.)

There are three major shunts—alternate paths for blood flow—found in the circulatory system of the fetus. Two of these shunts divert blood from the pulmonary to the systemic circuit, whereas the third connects the umbilical vein to the inferior vena cava. The first two shunts are critical during fetal life, when the lungs are compressed, filled with amniotic fluid, and nonfunctional, and gas exchange is provided by the placenta. These shunts close shortly after birth, however, when the newborn begins to breathe. The third shunt persists a bit longer but becomes nonfunctional once the umbilical cord is severed. The three shunts are as follows (Figure 20.44):

The foramen ovale is an opening in the interatrial septum that allows blood to flow from the right atrium to the left atrium. A valve associated with this opening prevents backflow of blood during the fetal period. As the newborn begins to breathe and blood pressure in the atria increases, this shunt closes. The fossa ovalis remains in the interatrial septum after birth, marking the location of the former foramen ovale.

The ductus arteriosus is a short, muscular vessel that connects the pulmonary trunk to the aorta. Most of the blood pumped from the right ventricle into the pulmonary trunk is thereby diverted into the aorta. Only enough blood reaches the fetal lungs to maintain the developing lung tissue.
When the newborn takes the first breath, pressure within the lungs drops dramatically, and both the lungs and the pulmonary vessels expand. As the amount of oxygen increases, the smooth muscles in the wall of the ductus arteriosus constrict, sealing off the passage. Eventually, the muscular and endothelial components of the ductus arteriosus degenerate, leaving only the connective tissue component of the ligamentum arteriosum.

The ductus venosus is a temporary blood vessel that branches from the umbilical vein, allowing much of the freshly oxygenated blood from the placenta—the organ of gas exchange between the mother and fetus—to bypass the fetal liver and go directly to the fetal heart. The ductus venosus closes slowly during the first weeks of infancy and degenerates to become the ligamentum venosum.
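The three shunts and their postnatal fates can be summarized as a small routing table. The following Python sketch is a minimal recap of the description above, not a physiological model; field values are taken directly from this section.

```python
# The three fetal shunts: route taken before birth, what each bypasses,
# and the adult remnant left after closure.
FETAL_SHUNTS = {
    "foramen ovale": {
        "route": "right atrium -> left atrium",
        "bypasses": "pulmonary circuit",
        "remnant": "fossa ovalis",
    },
    "ductus arteriosus": {
        "route": "pulmonary trunk -> aorta",
        "bypasses": "pulmonary circuit",
        "remnant": "ligamentum arteriosum",
    },
    "ductus venosus": {
        "route": "umbilical vein -> inferior vena cava",
        "bypasses": "liver",
        "remnant": "ligamentum venosum",
    },
}

for name, shunt in FETAL_SHUNTS.items():
    print(f"{name}: {shunt['route']} (bypasses the {shunt['bypasses']}); "
          f"after closure, remnant: {shunt['remnant']}")
```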
Chapter Objectives
After studying the chapter, you will be able to:
Describe the integumentary system and the role it plays in homeostasis
Describe the layers of the skin and the functions of each layer
Describe the accessory structures of the skin and the functions of each
Describe the changes that occur in the integumentary system during the aging process
Discuss several common diseases, disorders, and injuries that affect the integumentary system
Explain treatments for some common diseases, disorders, and injuries of the integumentary system

Introduction
What do you think when you look at your skin in the mirror? Do you think about covering it with makeup, adding a tattoo, or maybe a body piercing? Or do you think about the fact that the skin belongs to one of the body’s most essential and dynamic systems: the integumentary system? The integumentary system refers to the skin and its accessory structures, and it is responsible for much more than simply lending to your outward appearance. In the adult human body, the skin makes up about 16 percent of body weight and covers an area of 1.5 to 2 m². In fact, the skin and accessory structures are the largest organ system in the human body. As such, the skin protects your inner organs and it is in need of daily care and protection to maintain its health. This chapter will introduce the structure and functions of the integumentary system, as well as some of the diseases, disorders, and injuries that can affect this system.
Review Questions
1. The papillary layer of the dermis is most closely associated with which layer of the epidermis? (a) stratum spinosum (b) stratum corneum (c) stratum granulosum (d) stratum basale. Answer: (d)
2. Langerhans cells are commonly found in the ________. (a) stratum spinosum (b) stratum corneum (c) stratum granulosum (d) stratum basale. Answer: (a)
3. The papillary and reticular layers of the dermis are composed mainly of ________. (a) melanocytes (b) keratinocytes (c) connective tissue (d) adipose tissue. Answer: (c)
4. Collagen lends ________ to the skin. (a) elasticity (b) structure (c) color (d) UV protection. Answer: (b)
5. Which of the following is not a function of the hypodermis? (a) protects underlying organs (b) helps maintain body temperature (c) source of blood vessels in the epidermis (d) a site for long-term energy storage. Answer: (c)
6. In response to stimuli from the sympathetic nervous system, the arrector pili ________. (a) are glands on the skin surface (b) can lead to excessive sweating (c) are responsible for goose bumps (d) secrete sebum. Answer: (c)
7. The hair matrix contains ________. (a) the hair follicle (b) the hair shaft (c) the glassy membrane (d) a layer of basal cells. Answer: (d)
8. Eccrine sweat glands ________. (a) are present on hair (b) are present in the skin throughout the body and produce watery sweat (c) produce sebum (d) act as a moisturizer. Answer: (b)
9. Sebaceous glands ________. (a) are a type of sweat gland (b) are associated with hair follicles (c) may function in response to touch (d) release a watery solution of salt and metabolic waste. Answer: (b)
10. Similar to the hair, nails grow continuously throughout our lives. Which of the following is furthest from the nail growth center? (a) nail bed (b) hyponychium (c) nail root (d) eponychium. Answer: (b)
11. In humans, exposure of the skin to sunlight is required for ________. (a) vitamin D synthesis (b) arteriole constriction (c) folate production (d) thermoregulation. Answer: (a)
12. In general, skin cancers ________. (a) are easily treatable and not a major health concern (b) occur due to poor hygiene (c) can be reduced by limiting exposure to the sun (d) affect only the epidermis. Answer: (c)
13. Bedsores ________. (a) can be treated with topical moisturizers (b) can result from deep massages (c) are preventable by eliminating pressure points (d) are caused by dry skin. Answer: (c)
14. Squamous cell carcinomas are the second most common of the skin cancers and are capable of metastasizing if not treated. This cancer affects which cells? (a) basal cells of the stratum basale (b) melanocytes of the stratum basale (c) keratinocytes of the stratum spinosum (d) Langerhans cells of the stratum lucidum. Answer: (c)
5.1 Layers of the Skin
Learning Objectives
By the end of this section, you will be able to:
Identify the components of the integumentary system
Describe the layers of the skin and the functions of each layer
Identify and describe the hypodermis and deep fascia
Describe the role of keratinocytes and their life cycle
Describe the role of melanocytes in skin pigmentation

Although you may not typically think of the skin as an organ, it is in fact made of tissues that work together as a single structure to perform unique and critical functions. The skin and its accessory structures make up the integumentary system, which provides the body with overall protection. The skin is made of multiple layers of cells and tissues, which are held to underlying structures by connective tissue (Figure 5.2). The deeper layer of skin is well vascularized (has numerous blood vessels). It also has numerous sensory, autonomic, and sympathetic nerve fibers ensuring communication to and from the brain.

Interactive Link
The skin consists of two main layers and a closely associated layer. View this animation to learn more about layers of the skin. What are the basic functions of each of these layers?

The Epidermis
The epidermis is composed of keratinized, stratified squamous epithelium. It is made of four or five layers of epithelial cells, depending on its location in the body. It does not have any blood vessels within it (i.e., it is avascular). Skin that has four layers of cells is referred to as “thin skin.” From deep to superficial, these layers are the stratum basale, stratum spinosum, stratum granulosum, and stratum corneum. Most of the skin can be classified as thin skin. “Thick skin” is found only on the palms of the hands and the soles of the feet. It has a fifth layer, called the stratum lucidum, located between the stratum corneum and the stratum granulosum (Figure 5.3).

The cells in all of the layers except the stratum basale are called keratinocytes. A keratinocyte is a cell that manufactures and stores the protein keratin. Keratin is an intracellular fibrous protein that gives hair, nails, and skin their hardness and water-resistant properties. The keratinocytes in the stratum corneum are dead and regularly slough away, being replaced by cells from the deeper layers (Figure 5.4).

Interactive Link
View the University of Michigan WebScope to explore the tissue sample in greater detail. If you zoom in on the cells at the outermost layer of this section of skin, what do you notice about the cells?

Stratum Basale
The stratum basale (also called the stratum germinativum) is the deepest epidermal layer and attaches the epidermis to the basal lamina, below which lie the layers of the dermis. The cells in the stratum basale bond to the dermis via intertwining collagen fibers, referred to as the basement membrane. A finger-like projection, or fold, known as the dermal papilla (plural = dermal papillae) is found in the superficial portion of the dermis. Dermal papillae increase the strength of the connection between the epidermis and dermis; the greater the folding, the stronger the connections made (Figure 5.5).

The stratum basale is a single layer of cells primarily made of basal cells. A basal cell is a cuboidal-shaped stem cell that is a precursor of the keratinocytes of the epidermis. All of the keratinocytes are produced from this single layer of cells, which are constantly going through mitosis to produce new cells.
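The deep-to-superficial ordering just described, and the thick/thin distinction, can be captured in a few lines of code. This Python sketch simply encodes the layer names from this section; reading the list from left to right also previews the outward migration of keratinocytes described next.

```python
# Epidermal layers from deep to superficial, as described above.
THIN_SKIN = ["stratum basale", "stratum spinosum",
             "stratum granulosum", "stratum corneum"]

# Thick skin (palms and soles) adds the stratum lucidum between the
# stratum granulosum and the stratum corneum.
THICK_SKIN = THIN_SKIN[:3] + ["stratum lucidum"] + THIN_SKIN[3:]

def skin_type(layers):
    return "thick skin" if "stratum lucidum" in layers else "thin skin"

print(skin_type(THICK_SKIN))    # thick skin
print(" -> ".join(THICK_SKIN))  # order in which a keratinocyte migrates outward
```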
As new cells are formed, the existing cells are pushed superficially away from the stratum basale. Two other cell types are found dispersed among the basal cells in the stratum basale. The first is a Merkel cell , which functions as a receptor and is responsible for stimulating sensory nerves that the brain perceives as touch. These cells are especially abundant on the surfaces of the hands and feet. The second is a melanocyte , a cell that produces the pigment melanin. Melanin gives hair and skin its color, and also helps protect the living cells of the epidermis from ultraviolet (UV) radiation damage. In a growing fetus, fingerprints form where the cells of the stratum basale meet the papillae of the underlying dermal layer (papillary layer), resulting in the formation of the ridges on your fingers that you recognize as fingerprints. Fingerprints are unique to each individual and are used for forensic analyses because the patterns do not change with the growth and aging processes. Stratum Spinosum As the name suggests, the stratum spinosum is spiny in appearance due to the protruding cell processes that join the cells via a structure called a desmosome . The desmosomes interlock with each other and strengthen the bond between the cells. It is interesting to note that the “spiny” nature of this layer is an artifact of the staining process. Unstained epidermis samples do not exhibit this characteristic appearance. The stratum spinosum is composed of eight to 10 layers of keratinocytes, formed as a result of cell division in the stratum basale ( Figure 5.6 ). Interspersed among the keratinocytes of this layer is a type of dendritic cell called the Langerhans cell , which functions as a macrophage by engulfing bacteria, foreign particles, and damaged cells that occur in this layer. Interactive Link View the University of Michigan WebScope to explore the tissue sample in greater detail. If you zoom in on the cells at the outermost layer of this section of skin, what do you notice about the cells? The keratinocytes in the stratum spinosum begin the synthesis of keratin and release a water-repelling glycolipid that helps prevent water loss from the body, making the skin relatively waterproof. As new keratinocytes are produced atop the stratum basale, the keratinocytes of the stratum spinosum are pushed into the stratum granulosum. Stratum Granulosum The stratum granulosum has a grainy appearance due to further changes to the keratinocytes as they are pushed from the stratum spinosum. The cells (three to five layers deep) become flatter, their cell membranes thicken, and they generate large amounts of the proteins keratin, which is fibrous, and keratohyalin , which accumulates as lamellar granules within the cells (see Figure 5.5 ). These two proteins make up the bulk of the keratinocyte mass in the stratum granulosum and give the layer its grainy appearance. The nuclei and other cell organelles disintegrate as the cells die, leaving behind the keratin, keratohyalin, and cell membranes that will form the stratum lucidum, the stratum corneum, and the accessory structures of hair and nails. Stratum Lucidum The stratum lucidum is a smooth, seemingly translucent layer of the epidermis located just above the stratum granulosum and below the stratum corneum. This thin layer of cells is found only in the thick skin of the palms, soles, and digits. The keratinocytes that compose the stratum lucidum are dead and flattened (see Figure 5.5 ). 
These cells are densely packed with eleiden , a clear protein rich in lipids, derived from keratohyalin, which gives these cells their transparent (i.e., lucid) appearance and provides a barrier to water. Stratum Corneum The stratum corneum is the most superficial layer of the epidermis and is the layer exposed to the outside environment (see Figure 5.5 ). The increased keratinization (also called cornification) of the cells in this layer gives it its name. There are usually 15 to 30 layers of cells in the stratum corneum. This dry, dead layer helps prevent the penetration of microbes and the dehydration of underlying tissues, and provides a mechanical protection against abrasion for the more delicate, underlying layers. Cells in this layer are shed periodically and are replaced by cells pushed up from the stratum granulosum (or stratum lucidum in the case of the palms and soles of feet). The entire layer is replaced during a period of about 4 weeks. Cosmetic procedures, such as microdermabrasion, help remove some of the dry, upper layer and aim to keep the skin looking “fresh” and healthy. Dermis The dermis might be considered the “core” of the integumentary system (derma- = “skin”), as distinct from the epidermis (epi- = “upon” or “over”) and hypodermis (hypo- = “below”). It contains blood and lymph vessels, nerves, and other structures, such as hair follicles and sweat glands. The dermis is made of two layers of connective tissue that compose an interconnected mesh of elastin and collagenous fibers, produced by fibroblasts ( Figure 5.7 ). Papillary Layer The papillary layer is made of loose, areolar connective tissue, which means the collagen and elastin fibers of this layer form a loose mesh. This superficial layer of the dermis projects into the stratum basale of the epidermis to form finger-like dermal papillae (see Figure 5.7 ). Within the papillary layer are fibroblasts, a small number of fat cells (adipocytes), and an abundance of small blood vessels. In addition, the papillary layer contains phagocytes, defensive cells that help fight bacteria or other infections that have breached the skin. This layer also contains lymphatic capillaries, nerve fibers, and touch receptors called the Meissner corpuscles. Reticular Layer Underlying the papillary layer is the much thicker reticular layer , composed of dense, irregular connective tissue. This layer is well vascularized and has a rich sensory and sympathetic nerve supply. The reticular layer appears reticulated (net-like) due to a tight meshwork of fibers. Elastin fibers provide some elasticity to the skin, enabling movement. Collagen fibers provide structure and tensile strength, with strands of collagen extending into both the papillary layer and the hypodermis. In addition, collagen binds water to keep the skin hydrated. Collagen injections and Retin-A creams help restore skin turgor by either introducing collagen externally or stimulating blood flow and repair of the dermis, respectively. Hypodermis The hypodermis (also called the subcutaneous layer or superficial fascia) is a layer directly below the dermis and serves to connect the skin to the underlying fascia (fibrous tissue) of the bones and muscles. It is not strictly a part of the skin, although the border between the hypodermis and dermis can be difficult to distinguish. The hypodermis consists of well-vascularized, loose, areolar connective tissue and adipose tissue, which functions as a mode of fat storage and provides insulation and cushioning for the integument. 
Everyday Connection Lipid Storage The hypodermis is home to most of the fat that concerns people when they are trying to keep their weight under control. Adipose tissue present in the hypodermis consists of fat-storing cells called adipocytes. This stored fat can serve as an energy reserve, insulate the body to prevent heat loss, and act as a cushion to protect underlying structures from trauma. Where the fat is deposited and accumulates within the hypodermis depends on hormones (testosterone, estrogen, insulin, glucagon, leptin, and others), as well as genetic factors. Fat distribution changes as our bodies mature and age. Men tend to accumulate fat in different areas (neck, arms, lower back, and abdomen) than do women (breasts, hips, thighs, and buttocks). The body mass index (BMI) is often used as a measure of fat, although this measure is, in fact, derived from a mathematical formula that compares body weight (mass) to height; a short worked sketch of this calculation appears below. Therefore, its accuracy as a health indicator can be called into question in individuals who are extremely physically fit. In many animals, there is a pattern of storing excess calories as fat to be used in times when food is not readily available. In much of the developed world, insufficient exercise coupled with the ready availability and consumption of high-calorie foods has resulted in unwanted accumulations of adipose tissue in many people. Although periodic accumulation of excess fat may have provided an evolutionary advantage to our ancestors, who experienced unpredictable bouts of famine, it is now becoming chronic and considered a major health threat. Recent studies indicate that a distressing percentage of our population is overweight and/or clinically obese. Not only is this a problem for the individuals affected, but it also has a severe impact on our healthcare system. Changes in lifestyle, specifically in diet and exercise, are the best ways to control body fat accumulation, especially when it reaches levels that increase the risk of heart disease and diabetes. Pigmentation The color of skin is influenced by a number of pigments, including melanin, carotene, and hemoglobin. Recall that melanin is produced by cells called melanocytes, which are found scattered throughout the stratum basale of the epidermis. The melanin is transferred into the keratinocytes via a cellular vesicle called a melanosome ( Figure 5.8 ). Melanin occurs in two primary forms. Eumelanin exists as black and brown, whereas pheomelanin provides a red color. Dark-skinned individuals produce more melanin than those with pale skin. Exposure to the UV rays of the sun or a tanning salon causes melanin to be manufactured and built up in keratinocytes, as sun exposure stimulates keratinocytes to secrete chemicals that stimulate melanocytes. The accumulation of melanin in keratinocytes results in the darkening of the skin, or a tan. This increased melanin accumulation protects the DNA of epidermal cells from UV ray damage and the breakdown of folic acid, a nutrient necessary for our health and well-being. In contrast, too much melanin can interfere with the production of vitamin D, an important nutrient involved in calcium absorption. Thus, the amount of melanin present in our skin reflects a trade-off, given available sunlight, between protecting folic acid from UV destruction and allowing enough UV exposure for vitamin D production. 
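To make the BMI point from the “Lipid Storage” feature above concrete, here is a minimal worked sketch of the standard formula: weight in kilograms divided by the square of height in meters. The formula itself is standard, but the function name and the sample values are illustrative assumptions, not taken from the text.

def body_mass_index(mass_kg, height_m):
    # Standard BMI formula: mass in kilograms divided by height in meters, squared.
    # Function name and arguments are illustrative, not from the text.
    return mass_kg / height_m ** 2

# Example with illustrative values: a 70 kg person who is 1.75 m tall.
# 70 / (1.75 ** 2) = 70 / 3.0625 ≈ 22.9
print(round(body_mass_index(70, 1.75), 1))  # prints 22.9

Because the formula uses only total mass, a muscular, physically fit person can register a high BMI without carrying excess fat, which is exactly the limitation noted in the feature above.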
It requires about 10 days after initial sun exposure for melanin synthesis to peak, which is why pale-skinned individuals tend to suffer sunburns of the epidermis initially. Dark-skinned individuals can also get sunburns, but are more protected than are pale-skinned individuals. Melanosomes are temporary structures that are eventually destroyed by fusion with lysosomes; this fact, along with melanin-filled keratinocytes in the stratum corneum sloughing off, makes tanning impermanent. Too much sun exposure can eventually lead to wrinkling due to the destruction of the cellular structure of the skin, and in severe cases, can cause sufficient DNA damage to result in skin cancer. When there is an irregular accumulation of melanocytes in the skin, freckles appear. Moles are larger masses of melanocytes, and although most are benign, they should be monitored for changes that might indicate the presence of cancer ( Figure 5.9 ). Disorders of the... Integumentary System The first thing a clinician sees is the skin, and so the examination of the skin should be part of any thorough physical examination. Most skin disorders are relatively benign, but a few, including melanomas, can be fatal if untreated. A couple of the more noticeable disorders, albinism and vitiligo, affect the appearance of the skin and its accessory organs. Although neither is fatal, it would be hard to claim that they are benign, at least to the individuals so afflicted. Albinism is a genetic disorder that affects (completely or partially) the coloring of skin, hair, and eyes. The defect is primarily due to the inability of melanocytes to produce melanin. Individuals with albinism tend to appear white or very pale due to the lack of melanin in their skin and hair. Recall that melanin helps protect the skin from the harmful effects of UV radiation. Individuals with albinism tend to need more protection from UV radiation, as they are more prone to sunburns and skin cancer. They also tend to be more sensitive to light and have vision problems due to the lack of pigmentation on the retinal wall. Treatment of this disorder usually involves addressing the symptoms, such as limiting UV light exposure to the skin and eyes. In vitiligo , the melanocytes in certain areas lose their ability to produce melanin, possibly due to an autoimmune reaction. This leads to a loss of color in patches ( Figure 5.10 ). Neither albinism nor vitiligo directly affects the lifespan of an individual. Other changes in the appearance of skin coloration can be indicative of diseases associated with other body systems. Liver disease or liver cancer can cause the accumulation of bile and the yellow pigment bilirubin, leading to the skin appearing yellow or jaundiced ( jaune is the French word for “yellow”). Tumors of the pituitary gland can result in the secretion of large amounts of melanocyte-stimulating hormone (MSH), which results in a darkening of the skin. Similarly, Addison’s disease can stimulate the release of excess amounts of adrenocorticotropic hormone (ACTH), which can give the skin a deep bronze color. A sudden drop in oxygenation can affect skin color, causing the skin to initially turn ashen (white). With a prolonged reduction in oxygen levels, dark red deoxyhemoglobin becomes dominant in the blood, making the skin appear blue, a condition referred to as cyanosis ( kyanos is the Greek word for “blue”). This happens when the oxygen supply is restricted, as when someone is experiencing difficulty in breathing because of asthma or a heart attack. 
However, in these cases the effect on skin color has nothing to do with the skin’s pigmentation. Interactive Link This ABC video follows the story of a pair of fraternal African-American twins, one of whom is albino. Watch this video to learn about the challenges these children and their family face. Which ethnicities do you think are exempt from the possibility of albinism? 5.2 Accessory Structures of the Skin Learning Objectives By the end of this section, you will be able to: Identify the accessory structures of the skin Describe the structure and function of hair and nails Describe the structure and function of sweat glands and sebaceous glands Accessory structures of the skin include hair, nails, sweat glands, and sebaceous glands. These structures embryologically originate from the epidermis and can extend down through the dermis into the hypodermis. Hair Hair is a keratinous filament growing out of the epidermis. It is primarily made of dead, keratinized cells. Strands of hair originate in an epidermal penetration of the dermis called the hair follicle . The hair shaft is the part of the hair not anchored to the follicle, and much of this is exposed at the skin’s surface. The rest of the hair, which is anchored in the follicle, lies below the surface of the skin and is referred to as the hair root . The hair root ends deep in the dermis at the hair bulb , and includes a layer of mitotically active basal cells called the hair matrix . The hair bulb surrounds the hair papilla , which is made of connective tissue and contains blood capillaries and nerve endings from the dermis ( Figure 5.11 ). Just as the basal layer of the epidermis forms the layers of epidermis that get pushed to the surface as the dead skin on the surface sheds, the basal cells of the hair bulb divide and push cells outward in the hair root and shaft as the hair grows. The medulla forms the central core of the hair, which is surrounded by the cortex , a layer of compressed, keratinized cells that is covered by an outer layer of very hard, keratinized cells known as the cuticle . These layers are depicted in a longitudinal cross-section of the hair follicle ( Figure 5.12 ), although not all hair has a medullary layer. Hair texture (straight, curly) is determined by the shape and structure of the cortex, and to the extent that it is present, the medulla. The shape and structure of these layers are, in turn, determined by the shape of the hair follicle. Hair growth begins with the production of keratinocytes by the basal cells of the hair bulb. As new cells are deposited at the hair bulb, the hair shaft is pushed through the follicle toward the surface. Keratinization is completed as the cells are pushed to the skin surface to form the shaft of hair that is externally visible. The external hair is completely dead and composed entirely of keratin. For this reason, our hair does not have sensation. Furthermore, you can cut your hair or shave without damaging the hair structure because the cut is superficial. Most chemical hair removers also act superficially; however, electrolysis and yanking both attempt to destroy the hair bulb so hair cannot grow. The wall of the hair follicle is made of three concentric layers of cells. The cells of the internal root sheath surround the root of the growing hair and extend just up to the hair shaft. They are derived from the basal cells of the hair matrix. The external root sheath , which is an extension of the epidermis, encloses the hair root. 
It is made of basal cells at the base of the hair root and tends to be more keratinous in the upper regions. The glassy membrane is a thick, clear connective tissue sheath covering the hair root, connecting it to the tissue of the dermis. Interactive Link The hair follicle is made of multiple layers of cells that form from basal cells in the hair matrix and the hair root. Cells of the hair matrix divide and differentiate to form the layers of the hair. Watch this video to learn more about hair follicles. Hair serves a variety of functions, including protection, sensory input, thermoregulation, and communication. For example, hair on the head protects the skull from the sun. The hair in the nose and ears, and around the eyes (eyelashes) defends the body by trapping and excluding dust particles that may contain allergens and microbes. Hair of the eyebrows prevents sweat and other particles from dripping into and bothering the eyes. Hair also has a sensory function due to sensory innervation by a hair root plexus surrounding the base of each hair follicle. Hair is extremely sensitive to air movement or other disturbances in the environment, much more so than the skin surface. This feature is also useful for the detection of the presence of insects or other potentially damaging substances on the skin surface. Each hair root is connected to a smooth muscle called the arrector pili that contracts in response to nerve signals from the sympathetic nervous system, making the external hair shaft “stand up.” The primary purpose for this is to trap a layer of air to add insulation. This is visible in humans as goose bumps and even more obvious in animals, such as when a frightened cat raises its fur. Of course, this is much more obvious in organisms with a heavier coat than most humans, such as dogs and cats. Hair Growth Hair grows and is eventually shed and replaced by new hair. This occurs in three phases. The first is the anagen phase, during which cells divide rapidly at the root of the hair, pushing the hair shaft up and out. The length of this phase is measured in years, typically from 2 to 7 years. The catagen phase lasts only 2 to 3 weeks, and marks a transition from the hair follicle’s active growth. Finally, during the telogen phase, the hair follicle is at rest and no new growth occurs. At the end of this phase, which lasts about 2 to 4 months, another anagen phase begins. The basal cells in the hair matrix then produce a new hair follicle, which pushes the old hair out as the growth cycle repeats itself. Hair typically grows at the rate of 0.3 mm per day during the anagen phase. On average, 50 hairs are lost and replaced per day. Hair loss occurs if there is more hair shed than what is replaced and can happen due to hormonal or dietary changes. Hair loss can also result from the aging process, or the influence of hormones. Hair Color Similar to the skin, hair gets its color from the pigment melanin, produced by melanocytes in the hair papilla. Different hair color results from differences in the type of melanin, which is genetically determined. As a person ages, the melanin production decreases, and hair tends to lose its color and becomes gray and/or white. Nails The nail bed is a specialized structure of the epidermis that is found at the tips of our fingers and toes. The nail body is formed on the nail bed , and protects the tips of our fingers and toes as they are the farthest extremities and the parts of the body that experience the maximum mechanical stress ( Figure 5.13 ). 
In addition, the nail body forms a back-support for picking up small objects with the fingers. The nail body is composed of densely packed dead keratinocytes. The epidermis in this part of the body has evolved a specialized structure upon which nails can form. The nail body forms at the nail root , which has a matrix of proliferating cells from the stratum basale that enables the nail to grow continuously. The lateral nail fold overlaps the nail on the sides, helping to anchor the nail body. The nail fold that meets the proximal end of the nail body forms the nail cuticle , also called the eponychium . The nail bed is rich in blood vessels, making it appear pink, except at the base, where a thick layer of epithelium over the nail matrix forms a crescent-shaped region called the lunula (the “little moon”). The area beneath the free edge of the nail, furthest from the cuticle, is called the hyponychium . It consists of a thickened layer of stratum corneum. Interactive Link Nails are accessory structures of the integumentary system. Visit this link to learn more about the origin and growth of fingernails. Sweat Glands When the body becomes warm, sudoriferous glands produce sweat to cool the body. Sweat glands develop from epidermal projections into the dermis and are classified as merocrine glands; that is, the secretions are excreted by exocytosis through a duct without affecting the cells of the gland. There are two types of sweat glands, each secreting slightly different products. An eccrine sweat gland is a type of gland that produces a hypotonic sweat for thermoregulation. These glands are found all over the skin’s surface, but are especially abundant on the palms of the hands, the soles of the feet, and the forehead ( Figure 5.14 ). They are coiled glands lying deep in the dermis, with the duct rising up to a pore on the skin surface, where the sweat is released. This type of sweat, released by exocytosis, is hypotonic and composed mostly of water, with some salt, antibodies, traces of metabolic waste, and dermicidin, an antimicrobial peptide. Eccrine glands are a primary component of thermoregulation in humans and thus help to maintain homeostasis. An apocrine sweat gland is usually associated with hair follicles in densely hairy areas, such as armpits and genital regions. Apocrine sweat glands are larger than eccrine sweat glands and lie deeper in the dermis, sometimes even reaching the hypodermis, with the duct normally emptying into the hair follicle. In addition to water and salts, apocrine sweat includes organic compounds that make the sweat thicker and subject to bacterial decomposition and subsequent smell. The release of this sweat is under both nervous and hormonal control, and plays a role in the poorly understood human pheromone response. Most commercial antiperspirants use an aluminum-based compound as their primary active ingredient to stop sweat. When the antiperspirant enters the sweat gland duct, the aluminum-based compounds precipitate due to a change in pH and form a physical block in the duct, which prevents sweat from coming out of the pore. Interactive Link Sweating regulates body temperature. The composition of the sweat determines whether body odor is a byproduct of sweating. Visit this link to learn more about sweating and body odor. Sebaceous Glands A sebaceous gland is a type of oil gland that is found all over the body and helps to lubricate and waterproof the skin and hair. Most sebaceous glands are associated with hair follicles. 
They generate and excrete sebum , a mixture of lipids, onto the skin surface, thereby naturally lubricating the dry and dead layer of keratinized cells of the stratum corneum, keeping it pliable. The fatty acids of sebum also have antibacterial properties, and prevent water loss from the skin in low-humidity environments. The secretion of sebum is stimulated by hormones, many of which do not become active until puberty. Thus, sebaceous glands are relatively inactive during childhood. 5.3 Functions of the Integumentary System Learning Objectives By the end of this section, you will be able to: Describe the different functions of the skin and the structures that enable them Explain how the skin helps maintain body temperature The skin and accessory structures perform a variety of essential functions, such as protecting the body from invasion by microorganisms, chemicals, and other environmental factors; preventing dehydration; acting as a sensory organ; modulating body temperature and electrolyte balance; and synthesizing vitamin D. The underlying hypodermis has important roles in storing fats, forming a “cushion” over underlying structures, and providing insulation from cold temperatures. Protection The skin protects the rest of the body from the basic elements of nature such as wind, water, and UV sunlight. It acts as a protective barrier against water loss, due to the presence of layers of keratin and glycolipids in the stratum corneum. It also is the first line of defense against abrasive activity due to contact with grit, microbes, or harmful chemicals. Sweat excreted from sweat glands deters microbes from over-colonizing the skin surface by generating dermicidin, which has antibiotic properties. Everyday Connection Tattoos and Piercings The word “armor” evokes several images. You might think of a Roman centurion or a medieval knight in a suit of armor. The skin, in its own way, functions as a form of armor—body armor. It provides a barrier between your vital, life-sustaining organs and the influence of outside elements that could potentially damage them. For any form of armor, a breach in the protective barrier poses a danger. The skin can be breached when a child skins a knee or an adult has blood drawn—one is accidental and the other medically necessary. However, you also breach this barrier when you choose to “accessorize” your skin with a tattoo or body piercing. Because the needles involved in producing body art and piercings must penetrate the skin, there are dangers associated with the practice. These include allergic reactions; skin infections; blood-borne diseases, such as tetanus, hepatitis C, and hepatitis D; and the growth of scar tissue. Despite the risk, the practice of piercing the skin for decorative purposes has become increasingly popular. According to the American Academy of Dermatology, 24 percent of people from ages 18 to 50 have a tattoo. Interactive Link Tattooing has a long history, dating back thousands of years. The dyes used in tattooing typically derive from metals. A person with tattoos should be cautious when having a magnetic resonance imaging (MRI) scan because an MRI machine uses powerful magnets to create images of the soft tissues of the body, which could react with the metals contained in the tattoo dyes. Watch this video to learn more about tattooing. 
Sensory Function The fact that you can feel an ant crawling on your skin, allowing you to flick it off before it bites, is because the skin, and especially the hairs projecting from hair follicles in the skin, can sense changes in the environment. The hair root plexus surrounding the base of the hair follicle senses a disturbance, and then transmits the information to the central nervous system (brain and spinal cord), which can then respond by activating the skeletal muscles of your eyes to see the ant and the skeletal muscles of the body to act against the ant. The skin acts as a sense organ because the epidermis, dermis, and hypodermis contain specialized sensory nerve structures that detect touch, surface temperature, and pain. These receptors are more concentrated on the tips of the fingers, which are most sensitive to touch, especially the Meissner corpuscle (tactile corpuscle) ( Figure 5.15 ), which responds to light touch, and the Pacinian corpuscle (lamellated corpuscle), which responds to vibration. Merkel cells, seen scattered in the stratum basale, are also touch receptors. In addition to these specialized receptors, there are sensory nerves connected to each hair follicle, pain and temperature receptors scattered throughout the skin, and motor nerves that innervate the arrector pili muscles and glands. This rich innervation helps us sense our environment and react accordingly. Thermoregulation The integumentary system helps regulate body temperature through its tight association with the sympathetic nervous system, the division of the nervous system involved in our fight-or-flight responses. The sympathetic nervous system is continuously monitoring body temperature and initiating appropriate motor responses. Recall that sweat glands, accessory structures to the skin, secrete water, salt, and other substances to cool the body when it becomes warm. Even when the body does not appear to be noticeably sweating, approximately 500 mL of sweat (insensible perspiration) are secreted a day. If the body becomes excessively warm due to high temperatures, vigorous activity ( Figure 5.16 a, c ), or a combination of the two, sweat glands will be stimulated by the sympathetic nervous system to produce large amounts of sweat, as much as 0.7 to 1.5 L per hour for an active person. When the sweat evaporates from the skin surface, the body is cooled as body heat is dissipated. In addition to sweating, arterioles in the dermis dilate so that excess heat carried by the blood can dissipate through the skin and into the surrounding environment ( Figure 5.16 b ). This accounts for the skin redness that many people experience when exercising. When body temperatures drop, the arterioles constrict to minimize heat loss, particularly in the ends of the digits and tip of the nose. This reduced circulation can result in the skin taking on a whitish hue. Although the temperature of the skin drops as a result, passive heat loss is prevented, and internal organs and structures remain warm. If the temperature of the skin drops too much (such as environmental temperatures below freezing), the conservation of body core heat can result in the skin actually freezing, a condition called frostbite. Aging and the... Integumentary System All systems in the body accumulate subtle and some not-so-subtle changes as a person ages. Among these changes are reductions in cell division, metabolic activity, blood circulation, hormonal levels, and muscle strength ( Figure 5.17 ). 
In the skin, these changes are reflected in decreased mitosis in the stratum basale, leading to a thinner epidermis. The dermis, which is responsible for the elasticity and resilience of the skin, exhibits a reduced ability to regenerate, which leads to slower wound healing. The hypodermis, with its fat stores, loses structure due to the reduction and redistribution of fat, which in turn contributes to the thinning and sagging of skin. The accessory structures also have lowered activity, generating thinner hair and nails, and reduced amounts of sebum and sweat. A reduced sweating ability can cause some elderly people to be intolerant to extreme heat. Other cells in the skin, such as melanocytes and dendritic cells, also become less active, leading to a paler skin tone and lowered immunity. Wrinkling of the skin occurs due to breakdown of its structure, which results from decreased collagen and elastin production in the dermis, weakening of muscles lying under the skin, and the inability of the skin to retain adequate moisture. Many anti-aging products can be found in stores today. In general, these products try to rehydrate the skin and thereby fill out the wrinkles, and some stimulate skin growth using hormones and growth factors. Additionally, invasive techniques include collagen injections to plump the tissue and injections of BOTOX ® (the name brand of the botulinum neurotoxin) that paralyze the muscles that crease the skin and cause wrinkling. Vitamin D Synthesis The epidermal layer of human skin synthesizes vitamin D when exposed to UV radiation. In the presence of sunlight, a form of vitamin D3 called cholecalciferol is synthesized from a derivative of the steroid cholesterol in the skin. The liver converts cholecalciferol to calcidiol, which is then converted to calcitriol (the active chemical form of the vitamin) in the kidneys. Vitamin D is essential for normal absorption of calcium and phosphorus, which are required for healthy bones. The absence of sun exposure can lead to a lack of vitamin D in the body, which can result in rickets , a painful condition in children where the bones are misshapen due to a lack of calcium, causing bowleggedness. Elderly individuals who suffer from vitamin D deficiency can develop a condition called osteomalacia, a softening of the bones. In present-day society, vitamin D is added as a supplement to many foods, including milk and orange juice, compensating for the need for sun exposure. In addition to its essential role in bone health, vitamin D is essential for general immunity against bacterial, viral, and fungal infections. Recent studies are also finding a link between insufficient vitamin D and cancer. 5.4 Diseases, Disorders, and Injuries of the Integumentary System Learning Objectives By the end of this section, you will be able to: Describe several different diseases and disorders of the skin Describe the effect of injury to the skin and the process of healing The integumentary system is susceptible to a variety of diseases, disorders, and injuries. These range from annoying but relatively benign bacterial or fungal infections that are categorized as disorders, to skin cancer and severe burns, which can be fatal. In this section, you will learn several of the most common skin conditions. Diseases One of the most talked about diseases is skin cancer. Cancer is a broad term that describes diseases caused by abnormal cells in the body dividing uncontrollably. 
Most cancers are identified by the organ or tissue in which the cancer originates. One common form of cancer is skin cancer. The Skin Cancer Foundation reports that one in five Americans will experience some type of skin cancer in their lifetime. The degradation of the ozone layer in the atmosphere and the resulting increase in exposure to UV radiation have contributed to its rise. Overexposure to UV radiation damages DNA, which can lead to the formation of cancerous lesions. Although melanin offers some protection against DNA damage from the sun, often it is not enough. The fact that cancers can also occur on areas of the body that are normally not exposed to UV radiation suggests that there are additional factors that can lead to cancerous lesions. In general, cancers result from an accumulation of DNA mutations. These mutations can result in cell populations that do not die when they should and uncontrolled cell proliferation that leads to tumors. Although many tumors are benign (harmless), some produce cells that can mobilize and establish tumors in other organs of the body; this process is referred to as metastasis . Cancers are characterized by their ability to metastasize. Basal Cell Carcinoma Basal cell carcinoma is a form of cancer that affects the mitotically active stem cells in the stratum basale of the epidermis. It is the most common of all cancers that occur in the United States and is frequently found on the head, neck, arms, and back, which are areas that are most susceptible to long-term sun exposure. Although UV rays are the main culprit, exposure to other agents, such as radiation and arsenic, can also lead to this type of cancer. Wounds on the skin due to open sores, tattoos, burns, etc. may be predisposing factors as well. Basal cell carcinomas start in the stratum basale and usually spread along this boundary. At some point, they begin to grow toward the surface and become an uneven patch, bump, growth, or scar on the skin surface ( Figure 5.18 ). Like most cancers, basal cell carcinomas respond best to treatment when caught early. Treatment options include surgery, freezing (cryosurgery), and topical ointments (Mayo Clinic 2012). Squamous Cell Carcinoma Squamous cell carcinoma is a cancer that affects the keratinocytes of the stratum spinosum and presents as lesions commonly found on the scalp, ears, and hands ( Figure 5.19 ). It is the second most common skin cancer. The American Cancer Society reports that two of 10 skin cancers are squamous cell carcinomas, and it is more aggressive than basal cell carcinoma. If not removed, these carcinomas can metastasize. Surgery and radiation are used to cure squamous cell carcinoma. Melanoma A melanoma is a cancer characterized by the uncontrolled growth of melanocytes, the pigment-producing cells in the epidermis. Typically, a melanoma develops from a mole. It is the most fatal of all skin cancers, as it is highly metastatic and can be difficult to detect before it has spread to other organs. Melanomas usually appear as asymmetrical brown and black patches with uneven borders and a raised surface ( Figure 5.20 ). Treatment typically involves surgical excision and immunotherapy. Doctors often give their patients the following ABCDE mnemonic to help with the diagnosis of early-stage melanoma. If you observe a mole on your body displaying these signs, consult a doctor. 
Asymmetry – the two sides are not symmetrical
Borders – the edges are irregular in shape
Color – the color is varied shades of brown or black
Diameter – it is larger than 6 mm (0.24 in)
Evolving – its shape has changed
Some specialists cite the following additional signs for the most serious form, nodular melanoma:
Elevated – it is raised on the skin surface
Firm – it feels hard to the touch
Growing – it is getting larger
Skin Disorders Two common skin disorders are eczema and acne. Eczema is an inflammatory condition and occurs in individuals of all ages. Acne involves the clogging of pores, which can lead to infection and inflammation, and is often seen in adolescents. Other disorders, not discussed here, include seborrheic dermatitis (on the scalp), psoriasis, cold sores, impetigo, scabies, hives, and warts. Eczema Eczema is an allergic reaction that manifests as dry, itchy patches of skin that resemble rashes ( Figure 5.21 ). It may be accompanied by swelling of the skin, flaking, and in severe cases, bleeding. Many who suffer from eczema have antibodies against dust mites in their blood, but the link between eczema and allergy to dust mites has not been proven. Symptoms are usually managed with moisturizers, corticosteroid creams, and immunosuppressants. Acne Acne is a skin disturbance that typically occurs on areas of the skin that are rich in sebaceous glands (face and back). It is most common along with the onset of puberty due to associated hormonal changes, but can also occur in infants and continue into adulthood. Hormones, such as androgens, stimulate the release of sebum. An overproduction and accumulation of sebum along with keratin can block hair follicles. This plug is initially white. The sebum, when oxidized by exposure to air, turns black. Acne results from infection by acne-causing bacteria ( Propionibacterium and Staphylococcus ), which can lead to redness and potential scarring due to the natural wound healing process ( Figure 5.22 ). Career Connection Dermatologist Have you ever had a skin rash that did not respond to over-the-counter creams, or a mole that you were concerned about? Dermatologists help patients with these types of problems and more, on a daily basis. Dermatologists are medical doctors who specialize in diagnosing and treating skin disorders. Like all medical doctors, dermatologists earn a medical degree and then complete several years of residency training. In addition, dermatologists may then participate in a dermatology fellowship or complete additional, specialized training in a dermatology practice. If practicing in the United States, dermatologists must pass the United States Medical Licensing Exam (USMLE), become licensed in their state of practice, and be certified by the American Board of Dermatology. Most dermatologists work in a medical office or private-practice setting. They diagnose skin conditions and rashes, prescribe oral and topical medications to treat skin conditions, and may perform simple procedures, such as mole or wart removal. In addition, they may refer patients to an oncologist if skin cancer that has metastasized is suspected. Recently, cosmetic procedures have also become a prominent part of dermatology. Botox injections, laser treatments, and collagen and dermal filler injections are popular among patients, hoping to reduce the appearance of skin aging. Dermatology is a competitive specialty in medicine. 
Limited openings in dermatology residency programs mean that many medical students compete for a few select spots. Dermatology is an appealing specialty to many prospective doctors, because unlike emergency room physicians or surgeons, dermatologists generally do not have to work excessive hours or be “on-call” weekends and holidays. Moreover, the popularity of cosmetic dermatology has made it a growing field with many lucrative opportunities. It is not unusual for dermatology clinics to market themselves exclusively as cosmetic dermatology centers, and for dermatologists to specialize exclusively in these procedures. Consider visiting a dermatologist to talk about why he or she entered the field and what the field of dermatology is like. Visit this site for additional information. Injuries Because the skin is the part of our bodies that meets the world most directly, it is especially vulnerable to injury. Injuries include burns and wounds, as well as scars and calluses. They can be caused by sharp objects, heat, or excessive pressure or friction to the skin. Skin injuries set off a healing process that occurs in several overlapping stages. The first step to repairing damaged skin is the formation of a blood clot that helps stop the flow of blood and scabs over with time. Many different types of cells are involved in wound repair, especially if the surface area that needs repair is extensive. Before the basal stem cells of the stratum basale can recreate the epidermis, fibroblasts mobilize and divide rapidly to repair the damaged tissue by collagen deposition, forming granulation tissue. Blood capillaries follow the fibroblasts and help increase blood circulation and oxygen supply to the area. Immune cells, such as macrophages, roam the area and engulf any foreign matter to reduce the chance of infection. Burns A burn results when the skin is damaged by intense heat, radiation, electricity, or chemicals. The damage results in the death of skin cells, which can lead to a massive loss of fluid. Dehydration, electrolyte imbalance, and renal and circulatory failure follow, which can be fatal. Burn patients are treated with intravenous fluids to offset dehydration, as well as intravenous nutrients that enable the body to repair tissues and replace lost proteins. Another serious threat to the lives of burn patients is infection. Burned skin is extremely susceptible to bacteria and other pathogens, due to the loss of protection by intact layers of skin. Burns are sometimes measured in terms of the size of the total surface area affected. This is referred to as the “rule of nines,” which associates specific anatomical areas with a percentage that is a factor of nine ( Figure 5.23 ). Burns are also classified by the degree of their severity. A first-degree burn is a superficial burn that affects only the epidermis. Although the skin may be painful and swollen, these burns typically heal on their own within a few days. Mild sunburn fits into the category of a first-degree burn. A second-degree burn goes deeper and affects both the epidermis and a portion of the dermis. These burns result in swelling and a painful blistering of the skin. It is important to keep the burn site clean and sterile to prevent infection. If this is done, the burn will heal within several weeks. A third-degree burn fully extends into the epidermis and dermis, destroying the tissue and affecting the nerve endings and sensory function. 
These are serious burns that may appear white, red, or black; they require medical attention and will heal slowly without it. A fourth-degree burn is even more severe, affecting the underlying muscle and bone. Oddly, third- and fourth-degree burns are usually not as painful because the nerve endings themselves are damaged. Full-thickness burns cannot be repaired by the body, because the local tissues used for repair are damaged and require excision (debridement), or amputation in severe cases, followed by grafting of the skin from an unaffected part of the body, or from skin grown in tissue culture for grafting purposes. Interactive Link Skin grafts are required when the damage from trauma or infection cannot be closed with sutures or staples. Watch this video to learn more about skin grafting procedures. Scars and Keloids Most cuts or wounds, with the exception of ones that only scratch the surface (the epidermis), lead to scar formation. A scar is collagen-rich skin formed after the process of wound healing that differs from normal skin. Scarring occurs in cases in which there is repair of skin damage, but the skin fails to regenerate the original skin structure. Fibroblasts generate scar tissue in the form of collagen, and the bulk of repair is due to the basket-weave pattern generated by collagen fibers and does not result in regeneration of the typical cellular structure of skin. Instead, the tissue is fibrous in nature and does not allow for the regeneration of accessory structures, such as hair follicles, sweat glands, or sebaceous glands. Sometimes, there is an overproduction of scar tissue, because the process of collagen formation does not stop when the wound is healed; this results in the formation of a raised or hypertrophic scar called a keloid . In contrast, scars that result from acne and chickenpox have a sunken appearance and are called atrophic scars. Scarring of skin after wound healing is a natural process and does not need to be treated further. Application of mineral oil and lotions may reduce the formation of scar tissue. However, modern cosmetic procedures, such as dermabrasion, laser treatments, and filler injections, have been invented as remedies for severe scarring. All of these procedures try to reorganize the structure of the epidermis and underlying collagen tissue to make it look more natural. Bedsores and Stretch Marks Skin and its underlying tissue can be affected by excessive pressure. One example of this is called a bedsore . Bedsores, also called decubitus ulcers, are caused by constant, long-term, unrelieved pressure on certain body parts that are bony, reducing blood flow to the area and leading to necrosis (tissue death). Bedsores are most common in elderly patients who have debilitating conditions that cause them to be immobile. Most hospitals and long-term care facilities have the practice of turning the patients every few hours to prevent the incidence of bedsores. If left untreated by removal of necrotized tissue, bedsores can be fatal if they become infected. The skin can also be affected by pressure associated with rapid growth. A stretch mark results when the dermis is stretched beyond its limits of elasticity, as the skin stretches to accommodate the excess pressure. Stretch marks usually accompany rapid weight gain during puberty and pregnancy. They initially have a reddish hue, but lighten over time. Other than for cosmetic reasons, treatment of stretch marks is not required. They occur most commonly over the hips and abdomen. 
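Returning to the “rule of nines” introduced in the discussion of burns above, the estimate is simply a sum of fixed regional percentages of body surface area. The minimal sketch below uses the common adult convention (head and neck 9 percent, each arm 9 percent, anterior and posterior trunk 18 percent each, each leg 18 percent, perineum 1 percent); the region names and the function are illustrative assumptions rather than details taken from the text.

# Adult rule-of-nines percentages (common convention; values sum to 100).
RULE_OF_NINES = {
    "head_and_neck": 9,
    "left_arm": 9,
    "right_arm": 9,
    "anterior_trunk": 18,
    "posterior_trunk": 18,
    "left_leg": 18,
    "right_leg": 18,
    "perineum": 1,
}

def burned_surface_area(burned_regions):
    # Estimate the percent of total body surface area burned by summing
    # the fixed percentage assigned to each affected region.
    return sum(RULE_OF_NINES[region] for region in burned_regions)

# Example: burns covering the head/neck and one arm ≈ 18% of body surface area.
print(burned_surface_area(["head_and_neck", "left_arm"]))  # prints 18

In clinical practice, fractions of regions (for example, half an arm) and age-adjusted charts for children are also used, so this whole-region sum is only a first approximation.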
Calluses When you wear shoes that do not fit well and are a constant source of abrasion on your toes, you tend to form a callus at the point of contact. This occurs because the basal stem cells in the stratum basale are triggered to divide more often to increase the thickness of the skin at the point of abrasion to protect the rest of the body from further damage. This is an example of a minor or local injury, and the skin manages to react and treat the problem independent of the rest of the body. Calluses can also form on your fingers if they are subject to constant mechanical stress, such as long periods of writing, playing string instruments, or video games. A corn is a specialized form of callus. Corns form from abrasions on the skin that result from an elliptical-type motion.
u.s._history
Summary 1.1 The Americas Great civilizations had risen and fallen in the Americas before the arrival of the Europeans. In North America, the complex Pueblo societies including the Mogollon, Hohokam, and Anasazi as well as the city at Cahokia had peaked and were largely memories. The Eastern Woodland peoples were thriving, but they were soon overwhelmed as the number of English, French, and Dutch settlers increased. Mesoamerica and South America had also witnessed the rise and fall of cultures. The once-mighty Mayan population centers were largely empty. In 1492, however, the Aztecs in Mexico City were at their peak. Subjugating surrounding tribes and requiring tribute of both humans for sacrifice and goods for consumption, the island city of Tenochtitlán was the hub of an ever-widening commercial center and the equal of any large European city until Cortés destroyed it. Further south in Peru, the Inca linked one of the largest empires in history through the use of roads and disciplined armies. Without the use of the wheel, they cut and fashioned stone to build Machu Picchu high in the Andes before abandoning the city for unknown reasons. Thus, depending on what part of the New World they explored, the Europeans encountered peoples that diverged widely in their cultures, traditions, and numbers. 1.2 Europe on the Brink of Change One effect of the Crusades was that a larger portion of western Europe became familiar with the goods of the East. A lively trade subsequently developed along a variety of routes known collectively as the Silk Road to supply the demand for these products. Brigands and greedy middlemen made the trip along this route expensive and dangerous. By 1492, Europe—recovered from the Black Death and in search of new products and new wealth—was anxious to improve trade and communications with the rest of the world. Venice and Genoa led the way in trading with the East. The lure of profit pushed explorers to seek new trade routes to the Spice Islands and eliminate Muslim middlemen. Portugal, under the leadership of Prince Henry the Navigator, attempted to send ships around the continent of Africa. Ferdinand of Aragon and Isabella of Castile hired Columbus to find a route to the East by going west. As strong supporters of the Catholic Church, they sought to bring Christianity to the East and any newly found lands, as well as hoping to find sources of wealth. 1.3 West Africa and the Role of Slavery Before 1492, Africa, like the Americas, had experienced the rise and fall of many cultures, but the continent did not develop a centralized authority structure. African peoples practiced various forms of slavery, all of which differed significantly from the racial slavery that ultimately developed in the New World. After the arrival of Islam and before the Portuguese came to the coast of West Africa in 1444, Arabs and Berbers controlled the slave trade out of Africa, which expanded as European powers began to colonize the New World. Driven by a demand for labor, slavery in the Americas developed a new form: It was based on race, and the status of slave was both permanent and inherited.
Chapter Outline 1.1 The Americas 1.2 Europe on the Brink of Change 1.3 West Africa and the Role of Slavery Introduction Globalization, the ever-increasing interconnectedness of the world, is not a new phenomenon, but it accelerated when western Europeans discovered the riches of the East. During the Crusades (1095–1291), Europeans developed an appetite for spices, silk, porcelain, sugar, and other luxury items from the East, for which they traded fur, timber, and Slavic people they captured and sold (hence the word slave ). But when the Silk Road, the long overland trading route from China to the Mediterranean, became costlier and more dangerous to travel, Europeans searched for a more efficient and inexpensive trade route over water, initiating the development of what we now call the Atlantic World. In pursuit of commerce in Asia, fifteenth-century traders unexpectedly encountered a “New World” populated by millions and home to sophisticated and numerous peoples. Mistakenly believing they had reached the East Indies, these early explorers called its inhabitants “Indians.” West Africa, a diverse and culturally rich area, soon entered the stage as other nations exploited its slave trade and brought its peoples to the New World in chains. Although Europeans would come to dominate the New World, they could not have done so without Africans and native peoples ( Figure 1.1 ).
1.1 The Americas

Learning Objectives

By the end of this section, you will be able to:
Locate on a map the major American civilizations before the arrival of the Spanish
Discuss the cultural achievements of these civilizations
Discuss the differences and similarities between lifestyles, religious practices, and customs among the native peoples

Some scholars believe that between nine and fifteen thousand years ago, a land bridge existed between Asia and North America that we now call Beringia. The first inhabitants of what would be named the Americas migrated across this bridge in search of food. When the glaciers melted, water engulfed Beringia, and the Bering Strait was formed. Later settlers came by boat across the narrow strait. (The fact that Asians and Native Americans share genetic markers on a Y chromosome lends credibility to this migration theory.) Continually moving southward, the settlers eventually populated both North and South America, creating unique cultures that ranged from the highly complex and urban Aztec civilization in what is now Mexico City to the woodland tribes of eastern North America. Recent research along the west coast of South America suggests that migrant populations may have traveled down this coast by water as well as by land.

Researchers believe that about ten thousand years ago, humans also began the domestication of plants and animals, adding agriculture as a means of sustenance to hunting and gathering techniques. With this agricultural revolution, and the more abundant and reliable food supplies it brought, populations grew and people were able to develop a more settled way of life, building permanent settlements. Nowhere in the Americas was this more obvious than in Mesoamerica (Figure 1.3).

THE FIRST AMERICANS: THE OLMEC

Mesoamerica is the geographic area stretching from north of Panama up to the desert of central Mexico. Although marked by great topographic, linguistic, and cultural diversity, this region cradled a number of civilizations with similar characteristics. Mesoamericans were polytheistic; their gods possessed both male and female traits and demanded blood sacrifices of enemies taken in battle or ritual bloodletting. Corn, or maize, domesticated by 5000 BCE, formed the basis of their diet. They developed a mathematical system, built huge edifices, and devised a calendar that accurately predicted eclipses and solstices and that priest-astronomers used to direct the planting and harvesting of crops. Most important for our knowledge of these peoples, they created the only known written language in the Western Hemisphere; researchers have made much progress in interpreting the inscriptions on their temples and pyramids. Though the area had no overarching political structure, trade over long distances helped diffuse culture. Weapons made of obsidian, jewelry crafted from jade, feathers woven into clothing and ornaments, and cacao beans that were whipped into a chocolate drink formed the basis of commerce.

The mother of Mesoamerican cultures was the Olmec civilization. Flourishing along the hot Gulf Coast of Mexico from about 1200 to about 400 BCE, the Olmec produced a number of major works of art, architecture, pottery, and sculpture. Most recognizable are their giant head sculptures (Figure 1.4) and the pyramid in La Venta. The Olmec built aqueducts to transport water into their cities and irrigate their fields. They grew maize, squash, beans, and tomatoes. They also bred small domesticated dogs which, along with fish, provided their protein. Although no one knows what happened to the Olmec after about 400 BCE, in part because the jungle reclaimed many of their cities, their culture was the base upon which the Maya and the Aztec built. It was the Olmec who worshipped a rain god, a maize god, and the feathered serpent so important in the future pantheons of the Aztecs (who called him Quetzalcoatl) and the Maya (to whom he was Kukulkan). The Olmec also developed a system of trade throughout Mesoamerica, giving rise to an elite class.

THE MAYA

After the decline of the Olmec, a city rose in the fertile central highlands of Mesoamerica. One of the largest population centers in pre-Columbian America and home to more than 100,000 people at its height in about 500 CE, Teotihuacan was located about thirty miles northeast of modern Mexico City. The ethnicity of this settlement’s inhabitants is debated; some scholars believe it was a multiethnic city. Large-scale agriculture and the resultant abundance of food allowed time for people to develop special trades and skills other than farming. Builders constructed over twenty-two hundred apartment compounds for multiple families, as well as more than a hundred temples. Among these were the Pyramid of the Sun (which is two hundred feet high) and the Pyramid of the Moon (one hundred and fifty feet high). Near the Temple of the Feathered Serpent, graves have been uncovered that suggest humans were sacrificed for religious purposes. The city was also the center for trade, which extended to settlements on Mesoamerica’s Gulf Coast.

The Maya were one Mesoamerican culture that had strong ties to Teotihuacan. The Maya’s architectural and mathematical contributions were significant. Flourishing from roughly 2000 BCE to 900 CE in what is now Mexico, Belize, Honduras, and Guatemala, the Maya perfected the calendar and written language the Olmec had begun. They devised a written mathematical system to record crop yields and the size of the population, and to assist in trade. Surrounded by farms relying on primitive agriculture, they built the city-states of Copan, Tikal, and Chichen Itza along their major trade routes, as well as temples, statues of gods, pyramids, and astronomical observatories (Figure 1.5). However, because of poor soil and a drought that lasted nearly two centuries, their civilization declined by about 900 CE and they abandoned their large population centers.

The Spanish found little organized resistance among the weakened Maya upon their arrival in the 1520s. However, they did find Mayan history, in the form of glyphs, or pictures representing words, recorded in folding books called codices (the singular is codex). In 1562, Bishop Diego de Landa, who feared the converted natives had reverted to their traditional religious practices, collected and burned every codex he could find. Today only a few survive.

THE AZTEC

When the Spaniard Hernán Cortés arrived on the coast of Mexico in the sixteenth century, at the site of present-day Veracruz, he soon heard of a great city ruled by an emperor named Moctezuma. This city was tremendously wealthy—filled with gold—and took in tribute from surrounding tribes. The riches and complexity Cortés found when he arrived at that city, known as Tenochtitlán, were far beyond anything he or his men had ever seen.
According to legend, a warlike people called the Aztec (also known as the Mexica) had left a city called Aztlán and traveled south to the site of present-day Mexico City. In 1325, they began construction of Tenochtitlán on an island in Lake Texcoco. By 1519, when Cortés arrived, this settlement contained upwards of 200,000 inhabitants and was certainly the largest city in the Western Hemisphere at that time and probably larger than any European city (Figure 1.6). One of Cortés’s soldiers, Bernal Díaz del Castillo, recorded his impressions upon first seeing it: “When we saw so many cities and villages built in the water and other great towns on dry land we were amazed and said it was like the enchantments . . . on account of the great towers and cues and buildings rising from the water, and all built of masonry. And some of our soldiers even asked whether the things that we saw were not a dream? . . . I do not know how to describe it, seeing things as we did that had never been heard of or seen before, not even dreamed about.”

Unlike the dirty, fetid cities of Europe at the time, Tenochtitlán was well planned, clean, and orderly. The city had neighborhoods for specific occupations, a trash collection system, markets, two aqueducts bringing in fresh water, and public buildings and temples. Unlike the Spanish, Aztecs bathed daily, and wealthy homes might even contain a steam bath. A labor force of enslaved people from subjugated neighboring tribes had built the fabulous city and the three causeways that connected it to the mainland. To farm, the Aztec constructed barges made of reeds and filled them with fertile soil. Lake water constantly irrigated these chinampas, or “floating gardens,” which are still in use and can be seen today in Xochimilco, a district of Mexico City.

Each god in the Aztec pantheon represented and ruled an aspect of the natural world, such as the heavens, farming, rain, fertility, sacrifice, and combat. A ruling class of warrior nobles and priests performed ritual human sacrifice daily to sustain the sun on its long journey across the sky, to appease or feed the gods, and to stimulate agricultural production. The sacrificial ceremony included cutting open the chest of a criminal or captured warrior with an obsidian knife and removing the still-beating heart (Figure 1.7).

My Story
The Aztec Predict the Coming of the Spanish

The following is an excerpt from the sixteenth-century Florentine Codex of the writings of Fray Bernardino de Sahagun, a priest and early chronicler of Aztec history. When an old man from Xochimilco first saw the Spanish in Veracruz, he recounted an earlier dream to Moctezuma, the ruler of the Aztecs.

Said Quzatli to the sovereign, “Oh mighty lord, if because I tell you the truth I am to die, nevertheless I am here in your presence and you may do what you wish to me!” He narrated that mounted men would come to this land in a great wooden house [ships] this structure was to lodge many men, serving them as a home; within they would eat and sleep. On the surface of this house they would cook their food, walk and play as if they were on firm land. They were to be White, bearded men, dressed in different colors and on their heads they would wear round coverings.

Ten years before the arrival of the Spanish, Moctezuma received several omens which at the time he could not interpret. A fiery object appeared in the night sky, a spontaneous fire broke out in a religious temple and could not be extinguished with water, a water spout appeared in Lake Texcoco, and a woman could be heard wailing, “O my children we are about to go forever.” Moctezuma also had dreams and premonitions of impending disaster. These foretellings were recorded after the Aztecs’ destruction. They do, however, give us insight into the importance placed upon signs and omens in the pre-Columbian world.

THE INCA

In South America, the most highly developed and complex society was that of the Inca, whose name means “lord” or “ruler” in the Andean language called Quechua. At its height in the fifteenth and sixteenth centuries, the Inca Empire, located on the Pacific coast and straddling the Andes Mountains, extended some twenty-five hundred miles. It stretched from modern-day Colombia in the north to Chile in the south and included cities built at an altitude of 14,000 feet above sea level. Its road system, kept free of debris and repaired by workers stationed at varying intervals, rivaled that of the Romans and efficiently connected the sprawling empire. The Inca, like all other pre-Columbian societies, did not use axle-mounted wheels for transportation. They built stepped roads to ascend and descend the steep slopes of the Andes; these would have been impractical for wheeled vehicles but worked well for pedestrians. These roads enabled the rapid movement of the highly trained Incan army. Also like the Romans, the Inca were effective administrators. Runners called chasquis traversed the roads in a continuous relay system, ensuring quick communication over long distances. The Inca had no system of writing, however. They communicated and kept records using a system of colored strings and knots called the quipu (Figure 1.8).

The Inca people worshipped their lord who, as a member of an elite ruling class, had absolute authority over every aspect of life. Much like feudal lords in Europe at the time, the ruling class lived off the labor of the peasants, collecting vast wealth that accompanied them as they went, mummified, into the next life. The Inca farmed corn, beans, squash, quinoa (a grain cultivated for its seeds), and the indigenous potato on terraced land they hacked from the steep mountains. Peasants received only one-third of their crops for themselves. The Inca ruler required a third, and a third was set aside in a kind of welfare system for those unable to work. Huge storehouses were filled with food for times of need. Each peasant also worked for the Inca ruler a number of days per month on public works projects, a requirement known as the mita. For example, peasants constructed rope bridges made of grass to span the mountains above fast-flowing icy rivers. In return, the lord provided laws, protection, and relief in times of famine.

The Inca worshipped the sun god Inti and called gold the “sweat” of the sun. Unlike the Maya and the Aztecs, they rarely practiced human sacrifice and usually offered the gods food, clothing, and coca leaves. In times of dire emergency, however, such as in the aftermath of earthquakes, volcanoes, or crop failure, they resorted to sacrificing prisoners. The ultimate sacrifice was children, who were specially selected and well fed. The Inca believed these children would immediately go to a much better afterlife.

In 1911, the American historian Hiram Bingham uncovered the lost Incan city of Machu Picchu (Figure 1.9). Located about fifty miles northwest of Cusco, Peru, at an altitude of about 8,000 feet, the city had been built in 1450 and inexplicably abandoned roughly a hundred years later. Scholars believe the city was used for religious ceremonial purposes and housed the priesthood. The architectural beauty of this city is unrivaled. Using only the strength of human labor and no machines, the Inca constructed walls and buildings of polished stones, some weighing over fifty tons, that were fitted together perfectly without the use of mortar. In 1983, UNESCO designated the ruined city a World Heritage Site.

NORTH AMERICAN NATIVES

With few exceptions, the North American Native cultures were much more widely dispersed than the Mayan, Aztec, and Incan societies, and did not have their population size or organized social structures. Although the cultivation of corn had made its way north, many Native people still practiced hunting and gathering. Horses, first introduced by the Spanish, allowed the Plains Natives to more easily follow and hunt the huge herds of bison. A few societies had evolved into relatively complex forms, but they were already in decline at the time of Christopher Columbus’s arrival.

In the southwestern part of today’s United States dwelled several groups we collectively call the Pueblo. The Spanish first gave them this name, which means “town” or “village,” because they lived in towns or villages of permanent stone-and-mud buildings with thatched roofs. Like present-day apartment houses, these buildings had multiple stories, each with multiple rooms. The three main groups of the Pueblo people were the Mogollon, Hohokam, and Anasazi.

The Mogollon thrived in the Mimbres Valley (New Mexico) from about 150 BCE to 1450 CE. They developed a distinctive artistic style for painting bowls with finely drawn geometric figures and wildlife, especially birds, in black on a white background. Beginning about 600 CE, the Hohokam built an extensive irrigation system of canals to irrigate the desert and grow fields of corn, beans, and squash. By 1300, their crop yields were supporting the most highly populated settlements in the southwest. The Hohokam decorated pottery with a red-on-buff design and made jewelry of turquoise. In the high desert of New Mexico, the Anasazi, whose name means “ancient enemy” or “ancient ones,” carved homes from steep cliffs accessed by ladders or ropes that could be pulled in at night or in case of enemy attack (Figure 1.10). Roads extending some 180 miles connected the Pueblos’ smaller urban centers to each other and to Chaco Canyon, which by 1050 CE had become the administrative, religious, and cultural center of their civilization. A century later, however, probably because of drought, the Pueblo peoples abandoned their cities. Their present-day descendants include the Hopi and Zuni tribes.

The Indigenous groups who lived in the present-day Ohio River Valley and achieved their cultural apex from the first century CE to 400 CE are collectively known as the Hopewell culture. Their settlements, unlike those of the southwest, were small hamlets. They lived in wattle-and-daub houses (made from woven lattice branches “daubed” with wet mud, clay, or sand and straw) and practiced agriculture, which they supplemented by hunting and fishing. Utilizing waterways, they developed trade routes stretching from Canada to Louisiana, where they exchanged goods with other tribes and negotiated in many different languages. From the coast they received shells; from Canada, copper; and from the Rocky Mountains, obsidian. With these materials they created necklaces, woven mats, and exquisite carvings. What remains of their culture today are huge burial mounds and earthworks. Many of the mounds that were opened by archaeologists contained artworks and other goods that indicate their society was socially stratified.

Perhaps the largest indigenous cultural and population center in North America was located along the Mississippi River near present-day St. Louis. At its height in about 1100 CE, this five-square-mile city, now called Cahokia, was home to more than ten thousand residents; tens of thousands more lived on farms surrounding the urban center. The city also contained one hundred and twenty earthen mounds or pyramids, each dominating a particular neighborhood and on each of which lived a leader who exercised authority over the surrounding area. The largest mound covered fifteen acres. Cahokia was the hub of political and trading activities along the Mississippi River. After 1300 CE, however, this civilization declined—possibly because the area became unable to support the large population.

NATIVE PEOPLES OF THE EASTERN WOODLAND

Encouraged by the wealth found by the Spanish in the settled civilizations to the south, fifteenth- and sixteenth-century English, Dutch, and French explorers expected to discover the same in North America. What they found instead were small, disparate communities, many already ravaged by European diseases brought by the Spanish and transmitted among the natives. Rather than gold and silver, there was an abundance of land, and the timber and fur that land could produce.

The Native peoples living east of the Mississippi did not construct the large and complex societies of those to the west. Because they lived in small autonomous clans or tribal units, each group adapted to the specific environment in which it lived (Figure 1.11). These groups were by no means unified, and warfare among tribes was common as they sought to increase their hunting and fishing areas. Still, these tribes shared some common traits. A chief or group of tribal elders made decisions, and although the chief was male, usually the women selected and counseled him. Gender roles were not as fixed as they were in the patriarchal societies of Europe, Mesoamerica, and South America.

Women typically cultivated corn, beans, and squash and harvested nuts and berries, while men hunted, fished, and provided protection. But both took responsibility for raising children, and most major Native societies in the east were matriarchal. In tribes such as the Iroquois, Lenape, Muscogee, and Cherokee, women had both power and influence. They counseled the chief and passed on the traditions of the tribe. This matriarchy changed dramatically with the coming of the Europeans, who introduced, sometimes forcibly, their own customs and traditions to the natives.

Clashing beliefs about land ownership and use of the environment would be the greatest area of conflict with Europeans. Although tribes often claimed the right to certain hunting grounds—usually identified by some geographical landmark—Native peoples did not practice, or in general even have the concept of, private ownership of land. A person’s possessions included only what he or she had made, such as tools or weapons. The European Christian worldview, on the other hand, viewed land as the source of wealth. According to the Christian Bible, God created humanity in his own image with the command to use and subdue the rest of creation, which included not only land, but also all animal life. Upon their arrival in North America, Europeans found no fences, no signs designating ownership. Land, and the game that populated it, they believed, were there for the taking.

1.2 Europe on the Brink of Change

Learning Objectives

By the end of this section, you will be able to:
Describe the European societies that engaged in conversion, conquest, and commerce
Discuss the motives for and mechanisms of early European exploration

The fall of the Roman Empire (476 CE) and the beginning of the European Renaissance in the late fourteenth century roughly bookend the period we call the Middle Ages. Without a dominant centralized power or overarching cultural hub, Europe experienced political and military discord during this time. Its inhabitants retreated into walled cities, fearing marauding pillagers including Vikings, Mongols, Arabs, and Magyars. In return for protection, they submitted to powerful lords and their armies of knights. In their brief, hard lives, few people traveled more than ten miles from the place they were born. The Christian Church remained intact, however, and emerged from the period as a unified and powerful institution. Priests, tucked away in monasteries, kept knowledge alive by collecting and copying religious and secular manuscripts, often adding beautiful drawings or artwork.

Social and economic devastation arrived in the 1340s, however, when Genoese merchants returning from the Black Sea unwittingly brought with them a rat-borne and highly contagious disease, known as the bubonic plague. In a few short years, it had killed many millions, about one-third of Europe’s population. A different strain, spread by airborne germs, also killed many. These two strains are collectively called the Black Death (Figure 1.12). Entire villages disappeared. A high birth rate, however, coupled with bountiful harvests, meant that the population grew during the next century. By 1450, a newly rejuvenated European society was on the brink of tremendous change.

LIFE IN FEUDAL EUROPE

During the Middle Ages, most Europeans lived in small villages that consisted of a manorial house or castle for the lord, a church, and simple homes for the peasants or serfs, who made up about 60 percent of western Europe’s population. Hundreds of these castles and walled cities remain all over Europe (Figure 1.13).

Europe’s feudal society was a mutually supportive system. The lords owned the land; knights gave military service to a lord and carried out his justice; serfs worked the land in return for the protection offered by the lord’s castle or the walls of his city, into which they fled in times of danger from invaders. Much land was communally farmed at first, but as lords became more powerful they extended their ownership and rented land to their subjects. Thus, although they were technically free, serfs were effectively bound to the land they worked, which supported them and their families as well as the lord and all who depended on him. The Catholic Church, the only church in Europe at the time, also owned vast tracts of land and became very wealthy by collecting not only tithes (taxes consisting of 10 percent of annual earnings) but also rents on its lands.

A serf’s life was difficult. Women often died in childbirth, and perhaps one-third of children died before the age of five. Without sanitation or medicine, many people perished from diseases we consider inconsequential today; few lived to be older than forty-five. Entire families, usually including grandparents, lived in one- or two-room hovels that were cold, dark, and dirty. A fire was kept lit and was always a danger to the thatched roofs, while its constant smoke affected the inhabitants’ health and eyesight. Most individuals owned no more than two sets of clothing, consisting of a woolen jacket or tunic and linen undergarments, and bathed only when the waters melted in spring.

In an agrarian society, the seasons dictate the rhythm of life. Everyone in Europe’s feudal society had a job to do and worked hard. The father was the unquestioned head of the family. Idleness meant hunger. When the land began to thaw in early spring, peasants started tilling the soil with primitive wooden plows and crude rakes and hoes. Then they planted crops of wheat, rye, barley, and oats, reaping small yields that barely sustained the population. Bad weather, crop disease, or insect infestation could cause an entire village to starve or force the survivors to move to another location.

Early summer saw the first harvesting of hay, which was stored until needed to feed the animals in winter. Men and boys sheared the sheep, now heavy with wool from the cold weather, while women and children washed the wool and spun it into yarn. The coming of fall meant crops needed to be harvested and prepared for winter. Livestock was butchered and the meat smoked or salted to preserve it. With the harvest in and the provisions stored, fall was also the time for celebrating and giving thanks to God. Winter brought the people indoors to weave yarn into fabric, sew clothing, thresh grain, and keep the fires going. Everyone celebrated the birth of Christ in conjunction with the winter solstice.

THE CHURCH AND SOCIETY

After the fall of Rome, the Christian Church—united in dogma but unofficially divided into western and eastern branches—was the only organized institution in medieval Europe. In 1054, the eastern branch of Christianity, led by the Patriarch of Constantinople (a title that became roughly equivalent to the western Church’s pope), established its center in Constantinople and adopted the Greek language for its services. The western branch, under the pope, remained in Rome, becoming known as the Roman Catholic Church and continuing to use Latin. Following this split, known as the Great Schism, each branch of Christianity maintained a strict organizational hierarchy. The pope in Rome, for example, oversaw a huge bureaucracy led by cardinals, known as “princes of the church,” who were followed by archbishops, bishops, and then priests. During this period, the Roman Church became the most powerful international organization in western Europe.

Just as agrarian life depended on the seasons, village and family life revolved around the Church. The sacraments, or special ceremonies of the Church, marked every stage of life, from birth to maturation, marriage, and burial, and brought people into the church on a regular basis. As Christianity spread throughout Europe, it replaced pagan and animistic views, explaining supernatural events and forces of nature in its own terms. A benevolent God in heaven, creator of the universe and beyond the realm of nature and the known, controlled all events, warring against the force of darkness, known as the Devil or Satan, here on earth. Although ultimately defeated, Satan still had the power to trick humans and cause them to commit evil or sin.

All events had a spiritual connotation. Sickness, for example, might be a sign that a person had sinned, while crop failure could result from the villagers’ not saying their prayers. Penitents confessed their sins to the priest, who absolved them and assigned them penance to atone for their acts and save themselves from eternal damnation. Thus the parish priest held enormous power over the lives of his parishioners.

Ultimately, the pope decided all matters of theology, interpreting the will of God to the people, but he also had authority over temporal matters. Because the Church had the ability to excommunicate people, or send a soul to hell forever, even monarchs feared to challenge its power. It was also the seat of all knowledge. Latin, the language of the Church, served as a unifying factor for a continent of isolated regions, each with its own dialect; in the early Middle Ages, nations as we know them today did not yet exist. The mostly illiterate serfs were thus dependent on those literate priests to read and interpret the Bible, the word of God, for them.

CHRISTIANITY ENCOUNTERS ISLAM

The year 622 brought a new challenge to Christendom. Near Mecca, Saudi Arabia, a prophet named Muhammad received a revelation that became a cornerstone of the Islamic faith. The Koran contained his message, affirming monotheism but identifying Christ not as God but as a prophet like Moses, Abraham, David, and Muhammad. Following Muhammad’s death in 632, Islam spread by both conversion and military conquest across the Middle East and Asia Minor to India and northern Africa, crossing the Straits of Gibraltar into Spain in the year 711 (Figure 1.14).

The Islamic conquest of Europe continued until 732. Then, at the Battle of Tours (in modern France), Charles Martel, nicknamed the Hammer, led a Christian force in defeating the army of Abdul Rahman al-Ghafiqi. Muslims, however, retained control of much of Spain, where Córdoba, known for leather and wool production, became a major center of learning and trade. By the eleventh century, a major Christian holy war called the Reconquista, or reconquest, had begun to slowly push Muslims from Spain. With the start of the Crusades, the wars between Christians and Muslims for domination of the Holy Land (the Biblical region of Palestine), Christians in Spain and around Europe began to see the Reconquista as part of a larger religious struggle with Islam.

JERUSALEM AND THE CRUSADES

The city of Jerusalem is a holy site for Jews, Christians, and Muslims. It was here King Solomon built the Temple in the tenth century BCE. It was here the Romans crucified Jesus in 33 CE, and from here, Christians maintain, he ascended into heaven, promising to return. From here, Muslims believe, Muhammad traveled to heaven in 621 to receive instructions about prayer. Thus claims on the area go deep, and emotions about it run high, among followers of all three faiths. Evidence exists that the three religions lived in harmony for centuries. In 1095, however, European Christians decided not only to retake the holy city from the Muslim rulers but also to conquer what they called the Holy Lands, an area that extended from modern-day Turkey in the north along the Mediterranean coast to the Sinai Peninsula and that was also held by Muslims. The Crusades had begun.

Religious zeal motivated the knights who participated in the four Crusades. Adventure, the chance to win land and a title, and the Church’s promise of wholesale forgiveness of sins also motivated many. The Crusaders, mostly French knights, retook Jerusalem in June 1099 amid horrific slaughter. A French writer who accompanied them recorded this eyewitness account: “On the top of Solomon’s Temple, to which they had climbed in fleeing, many were shot to death with arrows and cast down headlong from the roof. Within this Temple, about ten thousand were beheaded. If you had been there, your feet would have been stained up to the ankles with the blood of the slain. What more shall I tell? Not one of them was allowed to live. They did not spare the women and children.” A Muslim eyewitness also described how the conquerors stripped the temple of its wealth and looted private homes.

In 1187, under the legendary leader Saladin, Muslim forces took back the city. Reaction from Europe was swift as King Richard I of England, the Lionheart, joined others to mount yet another action. The battle for the Holy Lands did not conclude until the Crusaders lost their Mediterranean stronghold at Acre (in present-day Israel) in 1291 and the last of the Christians left the area a few years later.

The Crusades had lasting effects, both positive and negative. On the negative side, the wide-scale persecution of Jews began. Christians classed them with the infidel Muslims and labeled them “the killers of Christ.” In the coming centuries, kings either expelled Jews from their kingdoms or forced them to pay heavy tributes for the privilege of remaining. Muslim-Christian hatred also festered, and intolerance grew.

On the positive side, maritime trade between East and West expanded. As Crusaders experienced the feel of silk, the taste of spices, and the utility of porcelain, desire for these products created new markets for merchants. In particular, the Adriatic port city of Venice prospered enormously from trade with Islamic merchants. Merchants’ ships brought Europeans valuable goods, traveling between the port cities of western Europe and the East from the tenth century on, along routes collectively labeled the Silk Road. From the days of the early adventurer Marco Polo, Venetian sailors had traveled to ports on the Black Sea and established their own colonies along the Mediterranean Coast. However, transporting goods along the old Silk Road was costly, slow, and unprofitable. Muslim middlemen collected taxes as the goods changed hands. Robbers waited to ambush the treasure-laden caravans. A direct water route to the East, cutting out the land portion of the trip, had to be found. As well as seeking a water passage to the wealthy cities in the East, sailors wanted to find a route to the exotic and wealthy Spice Islands in modern-day Indonesia, whose location was kept secret by Muslim rulers. Longtime rivals of Venice, the merchants of Genoa and Florence also looked west.

THE IBERIAN PENINSULA

Although Norse explorers such as Leif Ericson, the son of Eric the Red who first settled Greenland, had reached Canada roughly five hundred years prior to Christopher Columbus’s voyage, it was explorers sailing for Portugal and Spain who traversed the Atlantic throughout the fifteenth century and ushered in an unprecedented age of exploration and permanent contact with North America.

Located on the extreme western edge of Europe, Portugal, with its port city of Lisbon, soon became the center for merchants desiring to undercut the Venetians’ hold on trade. With a population of about one million and supported by its ruler Prince Henry, whom historians call “the Navigator,” this independent kingdom fostered exploration of and trade with western Africa. Skilled shipbuilders and navigators, Portuguese sailors took advantage of maps from all over Europe, used triangular sails, and built lighter vessels called caravels that could sail down the African coast.

Just to the east of Portugal, King Ferdinand of Aragon married Queen Isabella of Castile in 1469, uniting two of the most powerful independent kingdoms on the Iberian peninsula and laying the foundation for the modern nation of Spain. Isabella, motivated by strong religious zeal, was instrumental in beginning the Inquisition in 1480, a brutal campaign to root out Jews and Muslims who had seemingly converted to Christianity but secretly continued to practice their faith, as well as other heretics. This powerful couple ruled for the next twenty-five years, centralizing authority and funding exploration and trade with the East. One of their daughters, Catherine of Aragon, became the first wife of King Henry VIII of England.

Americana
Motives for European Exploration

Historians generally recognize three motives for European exploration—God, glory, and gold. Particularly in the strongly Catholic nations of Spain and Portugal, religious zeal motivated the rulers to make converts and retake land from the Muslims. Prince Henry the Navigator of Portugal described his “great desire to make increase in the faith of our Lord Jesus Christ and to bring him all the souls that should be saved.”

Sailors’ tales about fabulous monsters and fantasy literature about exotic worlds filled with gold, silver, and jewels captured the minds of men who desired to explore these lands and return with untold wealth and the glory of adventure and discovery. They sparked the imagination of merchants like Marco Polo, who made the long and dangerous trip to the realm of the great Mongol ruler Kublai Khan in 1271. The story of his trip, printed in a book entitled Travels, inspired Columbus, who had a copy in his possession during his voyage more than two hundred years later. Passages such as the following, which describes China’s imperial palace, are typical of the Travels:

You must know that it is the greatest Palace that ever was. . . . The roof is very lofty, and the walls of the Palace are all covered with gold and silver. They are also adorned with representations of dragons [sculptured and gilt], beasts and birds, knights and idols, and sundry other subjects. And on the ceiling too you see nothing but gold and silver and painting. [On each of the four sides there is a great marble staircase leading to the top of the marble wall, and forming the approach to the Palace.] The hall of the Palace is so large that it could easily dine 6,000 people; and it is quite a marvel to see how many rooms there are besides. The building is altogether so vast, so rich, and so beautiful, that no man on earth could design anything superior to it. The outside of the roof also is all colored with vermilion and yellow and green and blue and other hues, which are fixed with a varnish so fine and exquisite that they shine like crystal, and lend a resplendent lustre to the Palace as seen for a great way round. This roof is made too with such strength and solidity that it is fit to last forever.

Why might a travel account like this one have influenced an explorer like Columbus? What does this tell us about European explorers’ motivations and goals?

The year 1492 witnessed some of the most significant events of Ferdinand and Isabella’s reign. The couple oversaw the final expulsion of North African Muslims (Moors) from the Kingdom of Granada, bringing the nearly eight-hundred-year Reconquista to an end. In this same year, they also ordered all unconverted Jews to leave Spain.

Also in 1492, after six years of lobbying, a Genoese sailor named Christopher Columbus persuaded the monarchs to fund his expedition to the Far East. Columbus had already pitched his plan to the rulers of Genoa and Venice without success, so the Spanish monarchy was his last hope. Christian zeal was the prime motivating factor for Isabella, as she imagined her faith spreading to the East. Ferdinand, the more practical of the two, hoped to acquire wealth from trade.

Most educated individuals at the time knew the earth was round, so Columbus’s plan to reach the East by sailing west was plausible. Though the calculations of Earth’s circumference made by the Greek geographer Eratosthenes in the third century BCE were known (and, as we now know, nearly accurate), most scholars did not believe they were dependable. Thus Columbus would have no way of knowing when he had traveled far enough around the Earth to reach his goal—and in fact, Columbus greatly underestimated the Earth’s circumference.

In August 1492, Columbus set sail with his three small caravels (Figure 1.15). After a voyage of about three thousand miles lasting six weeks, he landed on an island in the Bahamas named Guanahani by the native Lucayans. He promptly christened it San Salvador, the name it bears today.

1.3 West Africa and the Role of Slavery

Learning Objectives

By the end of this section, you will be able to:
Locate the major West African empires on a map
Discuss the roles of Islam and Europe in the slave trade

It is difficult to generalize about West Africa, which was linked to the rise and diffusion of Islam. This geographical unit, central to the rise of the Atlantic World, stretches from modern-day Mauritania to the Democratic Republic of the Congo and encompasses lush rainforests along the equator, savannas on either side of the forest, and much drier land to the north. Until about 600 CE, most Africans were hunter-gatherers. Where water was too scarce for farming, herders maintained sheep, goats, cattle, or camels. In the more heavily wooded area near the equator, farmers raised yams, palm products, or plantains. The savanna areas yielded rice, millet, and sorghum. Sub-Saharan Africans had little experience in maritime matters. Most of the population lived away from the coast, which is connected to the interior by five main rivers—the Senegal, Gambia, Niger, Volta, and Congo.

Although there were large trading centers along these rivers, most West Africans lived in small villages and identified with their extended family or their clan. Wives, children, and dependents (including enslaved people) were a sign of wealth among men, and polygyny, the practice of having more than one wife at a time, was widespread. In time of need, relatives, however far away, were counted upon to assist in supplying food or security. Because of the clannish nature of African society, “we” was associated with the village and family members, while “they” included everyone else. Hundreds of separate dialects emerged; in modern Nigeria, nearly five hundred are still spoken.
THE MAJOR AFRICAN EMPIRES Following the death of the prophet Muhammad in 632 CE, Islam continued to spread quickly across North Africa, bringing not only a unifying faith but a political and legal structure as well. As lands fell under the control of Muslim armies, they instituted Islamic rule and legal structures as local chieftains converted, usually under penalty of death. Only those who had converted to Islam could rule or be engaged in trade. The first major empire to emerge in West Africa was the Ghana Empire ( Figure 1.16 ). By 750, the Soninke farmers of the sub-Sahara had become wealthy by taxing the trade that passed through their area. For instance, the Niger River basin supplied gold to the Berber and Arab traders from west of the Nile Valley, who brought cloth, weapons, and manufactured goods into the interior. Huge Saharan salt mines supplied the life-sustaining mineral to the Mediterranean coast of Africa and inland areas. By 900, the monotheistic Muslims controlled most of this trade and had converted many of the African ruling elite. The majority of the population, however, maintained their tribal animistic practices, which gave living attributes to nonliving objects such as mountains, rivers, and wind. Because Ghana’s king controlled the gold supply, he was able to maintain price controls and afford a strong military. Soon, however, a new kingdom emerged. By 1200 CE, under the leadership of Sundiata Keita, Mali had replaced Ghana as the leading state in West Africa. After Sundiata’s rule, the court converted to Islam, and Muslim scribes played a large part in administration and government. Miners then discovered huge new deposits of gold east of the Niger River. By the fourteenth century, the empire was so wealthy that while on a hajj , or pilgrimage to the holy city of Mecca, Mali’s ruler Mansu Musa gave away enough gold to create serious price inflation in the cities along his route. Timbuktu, the capital city, became a leading Islamic center for education, commerce and the slave trade. Meanwhile, in the east, the city of Gao became increasingly strong under the leadership of Sonni Ali and soon eclipsed Mali’s power. Timbuktu sought Ali’s assistance in repelling the Tuaregs from the north. By 1500, however, the Tuareg empire of Songhay had eclipsed Mali, where weak and ineffective leadership prevailed. THE ROLE OF SLAVERY The institution of slavery is not a recent phenomenon. Most civilizations have practiced some form of human bondage and servitude, and African empires were no different ( Figure 1.17 ). Famine or fear of stronger enemies might force one tribe to ask another for help and give themselves in a type of bondage in exchange. Similar to the European serf system, those seeking protection, or relief from starvation, would become the servants of those who provided relief. Debt might also be worked off through a form of servitude. Typically, these servants became a part of the extended tribal family. There is some evidence of chattel slavery , in which people are treated as personal property to be bought and sold, in the Nile Valley. It appears there was a slave-trade route through the Sahara that brought sub-Saharan Africans to Rome, which had enslaved people from all over the world. Arab slave trading, which exchanged enslaved people for goods from the Mediterranean, existed long before Islam’s spread across North Africa. Muslims later expanded this trade and enslaved not only Africans but also Europeans, especially from Spain, Sicily, and Italy. 
Male captives were forced to build coastal fortifications and serve as enslaved rowers on galleys. Women were forced into harems. The major European slave trade began with Portugal’s exploration of the west coast of Africa in search of a trade route to the East. By 1444, enslaved people were being brought from Africa to work on the sugar plantations of the Madeira Islands, off the coast of modern Morocco. The slave trade then expanded greatly as European colonies in the New World demanded an ever-increasing number of workers for the extensive plantations growing tobacco, sugar, and eventually rice and cotton ( Figure 1.18 ). In the New World, the institution of slavery assumed a new aspect when the mercantilist system demanded a permanent, identifiable, and plentiful labor supply. Enslaved Africans were both easily identified (by their skin color) and plentiful, because of the thriving slave trade. This led to a race-based slavery system in the New World unlike any bondage system that had come before. Initially, the Spanish tried to force Native people to farm their crops. Most Spanish and Portuguese settlers coming to the New World were gentlemen and did not perform physical labor. They came to “serve God, but also to get rich,” as noted by Bernal Díaz del Castillo. However, enslaved natives tended to sicken or die from disease or from the overwork and cruel treatment they were subjected to, and so the indigenous peoples proved not to be a dependable source of labor. Seeing the near extinction of the native population, Bartolomé de Las Casas, the great defender of the Native peoples, suggested that the Spanish send Black (and White) laborers to the Indies instead, an idea he later repented of. These workers proved hardier, and within fifty years, a change took place: The profitability of the African slave trade, coupled with the seemingly limitless number of potential enslaved people and the Catholic Church’s denunciation of the enslavement of Christians, led race to become a dominant factor in the institution of slavery. In the English colonies along the Atlantic coast, indentured servants initially filled the need for labor in the North, where family farms were the norm. In the South, however, labor-intensive crops such as tobacco, rice, and indigo prevailed, and eventually the supply of indentured servants was insufficient to meet the demand. These workers served only for periods of three to seven years before being freed; a more permanent labor supply was needed. In Africa, permanent, inherited slavery was unknown; children of those bound in slavery to the tribe usually were free and intermarried with their captors. In the Americas, by contrast, slavery became permanent, and children born to enslaved people were themselves enslaved. This development, along with slavery’s identification with race, forever changed the institution and shaped its unique character in the New World. Americana The Beginnings of Racial Slavery Slavery has a long history. The ancient Greek philosopher Aristotle posited that some peoples were homunculi, or humanlike but not really people—for instance, if they did not speak Greek. Both the Bible and the Koran have passages that address the treatment of enslaved people. Vikings who raided from Ireland to Russia brought back enslaved people of all nationalities. During the Middle Ages, traders from the interior of Africa brought enslaved people along well-established routes to sell them along the Mediterranean coast.
Initially, slavers also brought enslaved Europeans to the Caribbean. Many of these were orphaned or homeless children captured in the cities of Ireland. The question is, when did slavery become based on race? This appears to have developed in the New World, with the introduction of grueling, labor-intensive crops such as sugar and coffee. Unable to fill their growing need from the ranks of prisoners or indentured servants, the European colonists turned to African laborers. The Portuguese, although seeking a trade route to India, also set up forts along the West African coast for the purpose of exporting people to Europe. Historians believe that by the year 1500, 10 percent of the population of Lisbon and Seville consisted of Black enslaved people. Because of the influence of the Catholic Church, which frowned on the enslavement of Christians, European slave traders expanded their reach down the coast of Africa. When Europeans settled Brazil, the Caribbean, and North America, they thus established a system of racially based slavery. Here, the need for a massive labor force was greater than in western Europe. The land was ripe for growing sugar, coffee, rice, and ultimately cotton. To fulfill the ever-growing demand for these crops, large plantations were created. The success of these plantations depended upon the availability of a permanent, plentiful, identifiable, and skilled labor supply. As Africans were already familiar with animal husbandry as well as farming, had an identifying skin color, and could be readily supplied by the existing African slave trade, they proved the answer to this need. This process set the stage for the expansion of New World slavery into North America.
Summary 3.1 Spanish Exploration and Colonial Society In their outposts at St. Augustine and Santa Fe, the Spanish never found the fabled mountains of gold they sought. They did find many native people to convert to Catholicism, but their zeal nearly cost them the colony of Santa Fe, which they lost for twelve years after the Pueblo Revolt. In truth, the grand dreams of wealth, conversion, and a social order based on Spanish control never came to pass as Spain envisioned them. 3.2 Colonial Rivalries: Dutch and French Colonial Ambitions The French and Dutch established colonies in the northeastern part of North America: the Dutch in present-day New York, and the French in present-day Canada. Both colonies were primarily trading posts for furs. While they failed to attract many colonists from their respective home countries, these outposts nonetheless intensified imperial rivalries in North America. Both the Dutch and the French relied on native peoples to harvest the pelts that proved profitable in Europe. 3.3 English Settlements in America The English came late to colonization of the Americas, establishing stable settlements in the 1600s after several unsuccessful attempts in the 1500s. After Roanoke Colony failed in 1587, the English found more success with the founding of Jamestown in 1607 and Plymouth in 1620. The two colonies were very different in origin. The Virginia Company of London founded Jamestown with the express purpose of making money for its investors, while Puritans founded Plymouth to practice their own brand of Protestantism without interference. Both colonies battled difficult circumstances, including poor relationships with neighboring Native American tribes. Conflicts flared repeatedly in the Chesapeake Bay tobacco colonies and in New England, where a massive uprising against the English from 1675 to 1676—King Philip’s War—nearly succeeded in driving the intruders back to the sea. 3.4 The Impact of Colonization The development of the Atlantic slave trade forever changed the course of European settlement in the Americas. Other transatlantic travelers, including diseases, goods, plants, animals, and even ideas like the concept of private land ownership, further influenced life in America during the sixteenth and seventeenth centuries. The exchange of pelts for European goods including copper kettles, knives, and guns played a significant role in changing the material cultures of native peoples. During the seventeenth century, native peoples grew increasingly dependent on European trade items. At the same time, many native inhabitants died of European diseases, while survivors adopted new ways of living with their new neighbors.
Chapter Outline 3.1 Spanish Exploration and Colonial Society 3.2 Colonial Rivalries: Dutch and French Colonial Ambitions 3.3 English Settlements in America 3.4 The Impact of Colonization Introduction By the mid-seventeenth century, the geopolitical map of North America had become a patchwork of imperial designs and ambitions as the Spanish, Dutch, French, and English reinforced their claims to parts of the land. Uneasiness, punctuated by violent clashes, prevailed in the border zones between the Europeans’ territorial claims. Meanwhile, still-powerful native peoples waged war to drive the invaders from the continent. In the Chesapeake Bay and New England colonies, conflicts erupted as the English pushed against their native neighbors ( Figure 3.1 ). The rise of colonial societies in the Americas brought Native Americans, Africans, and Europeans together for the first time, highlighting the radical social, cultural, and religious differences that hampered their ability to understand each other. European settlement affected every aspect of the land and its people, bringing goods, ideas, and diseases that transformed the Americas. Reciprocally, Native American practices, such as the use of tobacco, profoundly altered European habits and tastes.
[ { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Spain gained a foothold in present-day Florida , viewing that area and the lands to the north as a logical extension of their Caribbean empire . In 1513 , Juan Ponce de León had claimed the area around today ’ s St . Augustine for the Spanish crown , naming the land Pascua Florida ( Feast of Flowers , or Easter ) for the nearest feast day . Ponce de León was unable to establish a permanent settlement there , but by 1565 , Spain was in need of an outpost to confront the French and English privateers using Florida as a base from which to attack treasure-laden Spanish ships heading from Cuba to Spain . The threat to Spanish interests took a new turn in 1562 when a group of French Protestants ( Huguenots ) established a small settlement they called Fort Caroline , north of St . Augustine . With the authorization of King Philip II , Spanish nobleman Pedro Menéndez led an attack on Fort Caroline , killing most of the colonists and destroying the fort . <hl> Eliminating Fort Caroline served dual purposes for the Spanish — it helped reduce the danger from French privateers and eradicated the French threat to Spain ’ s claim to the area . <hl> The contest over Florida illustrates how European rivalries spilled over into the Americas , especially religious conflict between Catholics and Protestants .", "hl_sentences": "Eliminating Fort Caroline served dual purposes for the Spanish — it helped reduce the danger from French privateers and eradicated the French threat to Spain ’ s claim to the area .", "question": { "cloze_format": "___ was a goal of the Spanish in their destruction of Fort Caroline.", "normal_format": "Which of the following was a goal of the Spanish in their destruction of Fort Caroline?", "question_choices": [ "establishing a foothold from which to battle the Timucua", "claiming a safe place to house the New World treasures that would be shipped back to Spain", "reducing the threat of French privateers", "locating a site for the establishment of Santa Fe" ], "question_id": "fs-idm5684544", "question_text": "Which of the following was a goal of the Spanish in their destruction of Fort Caroline?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "to defend against imperial challengers" }, "bloom": null, "hl_context": "Spanish Florida made an inviting target for Spain ’ s imperial rivals , especially the English , who wanted to gain access to the Caribbean . In 1586 , Spanish settlers in St . Augustine discovered their vulnerability to attack when the English pirate Sir Francis Drake destroyed the town with a fleet of twenty ships and one hundred men . Over the next several decades , the Spanish built more wooden forts , all of which were burnt by raiding European rivals . <hl> Between 1672 and 1695 , the Spanish constructed a stone fort , Castillo de San Marcos ( Figure 3.4 ) , to better defend St . Augustine against challengers . <hl>", "hl_sentences": "Between 1672 and 1695 , the Spanish constructed a stone fort , Castillo de San Marcos ( Figure 3.4 ) , to better defend St . 
Augustine against challengers .", "question": { "cloze_format": "The Spanish build Castillo de San Marcos ___.", "normal_format": "Why did the Spanish build Castillo de San Marcos?", "question_choices": [ "to protect the local Timucua", "to defend against imperial challengers", "as a seat for visiting Spanish royalty", "to house visiting delegates from rival imperial powers" ], "question_id": "fs-idp19123920", "question_text": "Why did the Spanish build Castillo de San Marcos?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "a Dutch system of granting tracts of land in New Netherland to encourage colonization" }, "bloom": null, "hl_context": "The Dutch West India Company found the business of colonization in New Netherland to be expensive . <hl> To share some of the costs , it granted Dutch merchants who invested heavily in it patroonships , or large tracts of land and the right to govern the tenants there . <hl> <hl> In return , the shareholder who gained the patroonship promised to pay for the passage of at least thirty Dutch farmers to populate the colony . <hl> One of the largest patroonships was granted to Kiliaen van Rensselaer , one of the directors of the Dutch West India Company ; it covered most of present-day Albany and Rensselaer Counties . This pattern of settlement created a yawning gap in wealth and status between the tenants , who paid rent , and the wealthy patroons .", "hl_sentences": "To share some of the costs , it granted Dutch merchants who invested heavily in it patroonships , or large tracts of land and the right to govern the tenants there . In return , the shareholder who gained the patroonship promised to pay for the passage of at least thirty Dutch farmers to populate the colony .", "question": { "cloze_format": "Patroonship was ___.", "normal_format": "What was patroonship?", "question_choices": [ "a Dutch ship used for transporting beaver furs", "a Dutch system of patronage that encouraged the arts", "a Dutch system of granting tracts of land in New Netherland to encourage colonization", "a Dutch style of hat trimmed with beaver fur from New Netherland" ], "question_id": "fs-idm102769264", "question_text": "What was patroonship?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> A handful of French Jesuit priests also made their way to Canada , intent on converting the native inhabitants to Catholicism . <hl> The Jesuits were members of the Society of Jesus , an elite religious order founded in the 1540s to spread Catholicism and combat the spread of Protestantism . The first Jesuits arrived in Quebec in the 1620s , and for the next century , their numbers did not exceed forty priests . Like the Spanish Franciscan missionaries , the Jesuits in the colony called New France labored to convert the native peoples to Catholicism . They wrote detailed annual reports about their progress in bringing the faith to the Algonquian and , beginning in the 1660s , to the Iroquois . 
These documents are known as the Jesuit Relations ( Figure 3.7 ) , and they provide a rich source for understanding both the Jesuit view of the Native Americans and the Native response to the colonizers .", "hl_sentences": "A handful of French Jesuit priests also made their way to Canada , intent on converting the native inhabitants to Catholicism .", "question": { "cloze_format": "___ joined the French settlement in Canada and tried to convert the natives to Christianity.", "normal_format": "Which religious order joined the French settlement in Canada and tried to convert the natives to Christianity?", "question_choices": [ "Franciscans", "Calvinists", "Anglicans", "Jesuits" ], "question_id": "fs-idm37120", "question_text": "Which religious order joined the French settlement in Canada and tried to convert the natives to Christianity?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "The transition from indentured servitude to slavery as the main labor source for some English colonies happened first in the West Indies . On the small island of Barbados , colonized in the 1620s , English planters first grew tobacco as their main export crop , but in the 1640s , they converted to sugarcane and began increasingly to rely on African enslaved people . In 1655 , England wrestled control of Jamaica from the Spanish and quickly turned it into a lucrative sugar island , run on forced labor , for its expanding empire . <hl> While slavery was slower to take hold in the Chesapeake colonies , by the end of the seventeenth century , both Virginia and Maryland had also adopted chattel slavery — which legally defined Africans as property and not people — as the dominant form of labor to grow tobacco . <hl> Chesapeake colonists also enslaved native people . By the 1620s , Virginia had weathered the worst and gained a degree of permanence . Political stability came slowly , but by 1619 , the fledgling colony was operating under the leadership of a governor , a council , and a House of Burgesses . <hl> Economic stability came from the lucrative cultivation of tobacco . <hl> Smoking tobacco was a long-standing practice among native peoples , and English and other European consumers soon adopted it . In 1614 , the Virginia colony began exporting tobacco back to England , which earned it a sizable profit and saved the colony from ruin . A second tobacco colony , Maryland , was formed in 1634 , when King Charles I granted its charter to the Calvert family for their loyal service to England . Cecilius Calvert , the second Lord Baltimore , conceived of Maryland as a refuge for English Catholics .", "hl_sentences": "While slavery was slower to take hold in the Chesapeake colonies , by the end of the seventeenth century , both Virginia and Maryland had also adopted chattel slavery — which legally defined Africans as property and not people — as the dominant form of labor to grow tobacco . Economic stability came from the lucrative cultivation of tobacco .", "question": { "cloze_format": "___ was the most lucrative product of the Cheasapeake colonies.", "normal_format": "What was the most lucrative product of the Chesapeake colonies?", "question_choices": [ "corn", "tobacco", "gold and silver", "enslaved people" ], "question_id": "fs-idm30740512", "question_text": "What was the most lucrative product of the Chesapeake colonies?" 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "former indentured servants wanted more opportunities to expand their territory" }, "bloom": null, "hl_context": "<hl> Bacon ’ s Rebellion , an uprising of both White people and Black people who believed that the Virginia government was impeding their access to land and wealth and seemed to do little to clear the land of Native Americans , hastened the transition to African slavery in the Chesapeake colonies . <hl> The rebellion takes its name from Nathaniel Bacon , a wealthy young Englishman who arrived in Virginia in 1674 . Despite an early friendship with Virginia ’ s royal governor , William Berkeley , Bacon found himself excluded from the governor ’ s circle of influential friends and councilors . He wanted land on the Virginia frontier , but the governor , fearing war with neighboring tribes , forbade further expansion . <hl> Bacon marshaled others , especially former indentured servants who believed the governor was limiting their economic opportunities and denying them the right to own tobacco farms . <hl> Bacon ’ s followers believed Berkeley ’ s frontier policy didn ’ t protect English settlers enough . Worse still in their eyes , Governor Berkeley tried to keep peace in Virginia by signing treaties with various local native peoples . Bacon and his followers , who saw all Native peoples as an obstacle to their access to land , pursued a policy of extermination .", "hl_sentences": "Bacon ’ s Rebellion , an uprising of both White people and Black people who believed that the Virginia government was impeding their access to land and wealth and seemed to do little to clear the land of Native Americans , hastened the transition to African slavery in the Chesapeake colonies . Bacon marshaled others , especially former indentured servants who believed the governor was limiting their economic opportunities and denying them the right to own tobacco farms .", "question": { "cloze_format": "___ was the primary cause of Bacon's Rebellion.", "normal_format": "What was the primary cause of Bacon’s Rebellion?", "question_choices": [ "former indentured servants wanted more opportunities to expand their territory", "Enslaved Africans wanted better treatment", "Susquahannock Natives wanted the Jamestown settlers to pay a fair price for their land", "Jamestown politicians were jockeying for power" ], "question_id": "fs-idm87682768", "question_text": "What was the primary cause of Bacon’s Rebellion?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "The first group of Puritans to make their way across the Atlantic was a small contingent known as the Pilgrims . Unlike other Puritans , they insisted on a complete separation from the Church of England and had first migrated to the Dutch Republic seeking religious freedom . Although they found they could worship without hindrance there , they grew concerned that they were losing their Englishness as they saw their children begin to learn the Dutch language and adopt Dutch ways . In addition , the English Pilgrims ( and others in Europe ) feared another attack on the Dutch Republic by Catholic Spain . <hl> Therefore , in 1620 , they moved on to found the Plymouth Colony in present-day Massachusetts . <hl> The governor of Plymouth , William Bradford , was a Separatist , a proponent of complete separation from the English state church . 
Bradford and the other Pilgrim Separatists represented a major challenge to the prevailing vision of a unified English national church and empire . On board the Mayflower , which was bound for Virginia but landed on the tip of Cape Cod , Bradford and forty other adult men signed the Mayflower Compact ( Figure 3.11 ) , which presented a religious ( rather than an economic ) rationale for colonization . The compact expressed a community ideal of working together . When a larger exodus of Puritans established the Massachusetts Bay Colony in the 1630s , the Pilgrims at Plymouth welcomed them and the two colonies cooperated with each other . <hl> Plymouth : The First Puritan Colony <hl>", "hl_sentences": "Therefore , in 1620 , they moved on to found the Plymouth Colony in present-day Massachusetts . Plymouth : The First Puritan Colony", "question": { "cloze_format": "___ were the founders of the Plymouth colony.", "normal_format": "Who were the founders of the Plymouth colony?", "question_choices": [ "Puritans", "Catholics", "Anglicans", "Jesuits" ], "question_id": "fs-idp20925664", "question_text": "The founders of the Plymouth colony were:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "Only men could participate." }, "bloom": null, "hl_context": "Puritan New England differed in many ways from both England and the rest of Europe . Protestants emphasized literacy so that everyone could read the Bible . This attitude was in stark contrast to that of Catholics , who refused to tolerate private ownership of Bibles in the vernacular . <hl> The Puritans , for their part , placed a special emphasis on reading scripture , and their commitment to literacy led to the establishment of the first printing press in English America in 1636 . <hl> Four years later , in 1640 , they published the first book in North America , the Bay Psalm Book . As Calvinists , Puritans adhered to the doctrine of predestination , whereby a few “ elect ” would be saved and all others damned . No one could be sure whether they were predestined for salvation , but through introspection , guided by scripture , Puritans hoped to find a glimmer of redemptive grace . <hl> Church membership was restricted to those Puritans who were willing to provide a conversion narrative telling how they came to understand their spiritual estate by hearing sermons and studying the Bible . <hl>", "hl_sentences": "The Puritans , for their part , placed a special emphasis on reading scripture , and their commitment to literacy led to the establishment of the first printing press in English America in 1636 . Church membership was restricted to those Puritans who were willing to provide a conversion narrative telling how they came to understand their spiritual estate by hearing sermons and studying the Bible .", "question": { "cloze_format": "The Puritan religion does not say that ___.", "normal_format": "Which of the following is not true of the Puritan religion?", "question_choices": [ "It required close reading of scripture.", "Church membership required a conversion narrative.", "Literacy was crucial.", "Only men could participate." ], "question_id": "fs-idm34342816", "question_text": "Which of the following is not true of the Puritan religion?" 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "the transatlantic journey that enslaved Africans made to America" }, "bloom": null, "hl_context": "<hl> Once sold to traders , all captured people sent to America endured the hellish Middle Passage , the transatlantic crossing , which took one to two months . <hl> By 1625 , more than 325,800 Africans had been shipped to the New World , though many thousands perished during the voyage . An astonishing number , some four million , were transported to the Caribbean between 1501 and 1830 . When they reached their destination in America , Africans found themselves trapped in shockingly brutal slave societies . In the Chesapeake colonies , they faced a lifetime of harvesting and processing tobacco .", "hl_sentences": "Once sold to traders , all captured people sent to America endured the hellish Middle Passage , the transatlantic crossing , which took one to two months .", "question": { "cloze_format": "The Middle Passage was ___.", "normal_format": "What was the Middle Passage?", "question_choices": [ "the fabled sea route from Europe to the Far East", "the land route from Europe to Africa", "the transatlantic journey that enslaved Africans made to America", "the line between the northern and southern colonies" ], "question_id": "fs-idm2144", "question_text": "What was the Middle Passage?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "At the same time , European goods had begun to change Native life radically . <hl> In the 1500s , some of the earliest objects Europeans introduced to Native Americans were glass beads , copper kettles , and metal utensils . <hl> Native people often adapted these items for their own use . For example , some cut up copper kettles and refashioned the metal for other uses , including jewelry that conferred status on the wearer , who was seen as connected to the new European source of raw materials .", "hl_sentences": "In the 1500s , some of the earliest objects Europeans introduced to Native Americans were glass beads , copper kettles , and metal utensils .", "question": { "cloze_format": "___ is not an item Europeans introduced to Native Americans. ", "normal_format": "Which of the following is not an item Europeans introduced to Native Americans?", "question_choices": [ "wampum", "glass beads", "copper kettles", "metal tools" ], "question_id": "fs-idp1016528", "question_text": "Which of the following is not an item Europeans introduced to Native Americans?" }, "references_are_paraphrase": null } ]
3.1 Spanish Exploration and Colonial Society Learning Objectives By the end of this section, you will be able to: Identify the main Spanish American colonial settlements of the 1500s and 1600s Discuss economic, political, and demographic similarities and differences between the Spanish colonies During the 1500s, Spain expanded its colonial empire to the Philippines in the Far East and to areas in the Americas that later became the United States. The Spanish dreamed of mountains of gold and silver and imagined converting thousands of eager Native Americans to Catholicism. In their vision of colonial society, everyone would know his or her place. Patriarchy (the rule of men over family, society, and government) shaped the Spanish colonial world. Women occupied a lower status. In all matters, the Spanish held themselves to be atop the social pyramid, with Native peoples and Africans beneath them. Both Africans and native peoples, however, contested Spanish claims to dominance. Everywhere the Spanish settled, they brought devastating diseases, such as smallpox, that led to a horrific loss of life among native peoples. European diseases killed far more native inhabitants than did Spanish swords. The world Native peoples had known before the coming of the Spanish was further upset by Spanish colonial practices. The Spanish imposed the encomienda system in the areas they controlled. Under this system, authorities assigned Native workers to mine and plantation owners with the understanding that the recipients would defend the colony and teach the workers the tenets of Christianity. In reality, the encomienda system exploited native workers. It was eventually replaced by another colonial labor system, the repartimiento , which required Native towns to supply a pool of labor for Spanish overlords. ST. AUGUSTINE, FLORIDA Spain gained a foothold in present-day Florida, viewing that area and the lands to the north as a logical extension of their Caribbean empire. In 1513, Juan Ponce de León had claimed the area around today’s St. Augustine for the Spanish crown, naming the land Pascua Florida (Feast of Flowers, or Easter) for the nearest feast day. Ponce de León was unable to establish a permanent settlement there, but by 1565, Spain was in need of an outpost to confront the French and English privateers using Florida as a base from which to attack treasure-laden Spanish ships heading from Cuba to Spain. The threat to Spanish interests took a new turn in 1562 when a group of French Protestants (Huguenots) established a small settlement they called Fort Caroline, north of St. Augustine. With the authorization of King Philip II, Spanish nobleman Pedro Menéndez led an attack on Fort Caroline, killing most of the colonists and destroying the fort. Eliminating Fort Caroline served dual purposes for the Spanish—it helped reduce the danger from French privateers and eradicated the French threat to Spain’s claim to the area. The contest over Florida illustrates how European rivalries spilled over into the Americas, especially religious conflict between Catholics and Protestants. In 1565, the victorious Menéndez founded St. Augustine, now the oldest European settlement in the Americas. In the process, the Spanish displaced the local Timucua Natives from their ancient town of Seloy, which had stood for thousands of years ( Figure 3.3 ). The Timucua suffered greatly from diseases introduced by the Spanish, shrinking from a population of around 200,000 pre-contact to fifty thousand in 1590. 
By 1700, only one thousand Timucua remained. As in other areas of Spanish conquest, Catholic priests worked to bring about a spiritual conquest by forcing the surviving Timucua, demoralized and reeling from catastrophic losses of family and community, to convert to Catholicism. Spanish Florida made an inviting target for Spain’s imperial rivals, especially the English, who wanted to gain access to the Caribbean. In 1586, Spanish settlers in St. Augustine discovered their vulnerability to attack when the English pirate Sir Francis Drake destroyed the town with a fleet of twenty ships and one hundred men. Over the next several decades, the Spanish built more wooden forts, all of which were burnt by raiding European rivals. Between 1672 and 1695, the Spanish constructed a stone fort, Castillo de San Marcos ( Figure 3.4 ), to better defend St. Augustine against challengers. SANTA FE, NEW MEXICO Farther west, the Spanish in Mexico, intent on expanding their empire, looked north to the land of the Pueblo Natives. Under orders from King Philip II, Juan de Oñate explored the American southwest for Spain in the late 1590s. The Spanish hoped that what we know as New Mexico would yield gold and silver, but the land produced little of value to them. In 1610, Spanish settlers established themselves at Santa Fe—originally named La Villa Real de la Santa Fe de San Francisco de Asís, or “Royal City of the Holy Faith of St. Francis of Assisi”—where many Pueblo villages were located. Santa Fe became the capital of the Kingdom of New Mexico, an outpost of the larger Spanish Viceroyalty of New Spain, which had its headquarters in Mexico City. As they had in other Spanish colonies, Franciscan missionaries labored to bring about a spiritual conquest by converting the Pueblo to Catholicism. At first, the Pueblo adopted the parts of Catholicism that dovetailed with their own long-standing view of the world. However, Spanish priests insisted that natives discard their old ways entirely and angered the Pueblo by focusing on the young, drawing them away from their parents. This deep insult, combined with an extended period of drought and increased attacks by local Apache and Navajo in the 1670s—troubles that the Pueblo came to believe were linked to the Spanish presence—moved the Pueblo to push the Spanish and their religion from the area. Pueblo leader Popé demanded a return to native ways so the hardships his people faced would end. To him and to thousands of others, it seemed obvious that “when Jesus came, the Corn Mothers went away.” The expulsion of the Spanish would bring a return to prosperity and a pure, native way of life. In 1680, the Pueblo launched a coordinated rebellion against the Spanish. The Pueblo Revolt killed over four hundred Spaniards and drove the rest of the settlers, perhaps as many as two thousand, south toward Mexico. However, as droughts and attacks by rival tribes continued, the Spanish sensed an opportunity to regain their foothold. In 1692, they returned and reasserted their control of the area. Some of the Spanish explained the Pueblo success in 1680 as the work of the Devil. Satan, they believed, had stirred up the Pueblo to take arms against God’s chosen people—the Spanish—but the Spanish, and their God, had prevailed in the end. 
3.2 Colonial Rivalries: Dutch and French Colonial Ambitions Learning Objectives By the end of this section, you will be able to: Compare and contrast the development and character of the French and Dutch colonies in North America Discuss the economies of the French and Dutch colonies in North America Seventeenth-century French and Dutch colonies in North America were modest in comparison to Spain’s colossal global empire. New France and New Netherland remained small commercial operations focused on the fur trade and did not attract an influx of migrants. The Dutch in New Netherland confined their operations to Manhattan Island, Long Island, the Hudson River Valley, and what later became New Jersey. Dutch trade goods circulated widely among the native peoples in these areas and also traveled well into the interior of the continent along preexisting native trade routes. French habitants , or farmer-settlers, eked out an existence along the St. Lawrence River. French fur traders and missionaries, however, ranged far into the interior of North America, exploring the Great Lakes region and the Mississippi River. These pioneers gave France somewhat inflated imperial claims to lands that nonetheless remained firmly under the dominion of native peoples. FUR TRADING IN NEW NETHERLAND The Dutch Republic emerged as a major commercial center in the 1600s. Its fleets plied the waters of the Atlantic, while other Dutch ships sailed to the Far East, returning with prized spices like pepper to be sold in the bustling ports at home, especially Amsterdam. In North America, Dutch traders established themselves first on Manhattan Island. One of the Dutch directors-general of the North American settlement, Peter Stuyvesant, served from 1647 to 1664. He expanded the fledgling outpost of New Netherland east to present-day Long Island, and for many miles north along the Hudson River. The resulting elongated colony served primarily as a fur-trading post, with the powerful Dutch West India Company controlling all commerce. Fort Amsterdam, on the southern tip of Manhattan Island, defended the growing city of New Amsterdam. In 1655, Stuyvesant took over the small outpost of New Sweden along the banks of the Delaware River in present-day New Jersey, Pennsylvania, and Delaware. He also defended New Amsterdam from Native American attacks by ordering enslaved Africans to build a protective wall on the city’s northeastern border, giving present-day Wall Street its name ( Figure 3.5 ). New Netherland failed to attract many Dutch colonists; by 1664, only nine thousand people were living there. Conflict with Native peoples, as well as dissatisfaction with the Dutch West India Company’s trading practices, made the Dutch outpost an undesirable place for many migrants. The small size of the population meant a severe labor shortage, and to complete the arduous tasks of early settlement, the Dutch West India Company imported some 450 enslaved Africans between 1626 and 1664. (The company had involved itself heavily in the slave trade and in 1637 captured Elmina, the slave-trading post on the west coast of Africa, from the Portuguese.) The shortage of labor also meant that New Netherland welcomed non-Dutch immigrants, including Protestants from Germany, Sweden, Denmark, and England, and embraced a degree of religious tolerance, allowing Jewish immigrants to become residents beginning in the 1650s. Thus, a wide variety of people lived in New Netherland from the start. 
Indeed, one observer claimed eighteen different languages could be heard on the streets of New Amsterdam. As new settlers arrived, the colony of New Netherland stretched farther to the north and the west ( Figure 3.6 ). The Dutch West India Company found the business of colonization in New Netherland to be expensive. To share some of the costs, it granted Dutch merchants who invested heavily in it patroonships , or large tracts of land and the right to govern the tenants there. In return, the shareholder who gained the patroonship promised to pay for the passage of at least thirty Dutch farmers to populate the colony. One of the largest patroonships was granted to Kiliaen van Rensselaer, one of the directors of the Dutch West India Company; it covered most of present-day Albany and Rensselaer Counties. This pattern of settlement created a yawning gap in wealth and status between the tenants, who paid rent, and the wealthy patroons. During the summer trading season, Native Americans gathered at trading posts such as the Dutch site at Beverwijck (present-day Albany), where they exchanged furs for guns, blankets, and alcohol. The furs, especially beaver pelts destined for the lucrative European millinery market, would be sent down the Hudson River to New Amsterdam. There, enslaved laborers or workers would load them aboard ships bound for Amsterdam. COMMERCE AND CONVERSION IN NEW FRANCE After Jacques Cartier’s voyages of discovery in the 1530s, France showed little interest in creating permanent colonies in North America until the early 1600s, when Samuel de Champlain established Quebec as a French fur-trading outpost. Although the fur trade was lucrative, the French saw Canada as an inhospitable frozen wasteland, and by 1640, fewer than four hundred settlers had made their home there. The sparse French presence meant that colonists depended on the local native Algonquian people; without them, the French would have perished. French fishermen, explorers, and fur traders made extensive contact with the Algonquian. The Algonquian, in turn, tolerated the French because the colonists supplied them with firearms for their ongoing war with the Iroquois. Thus, the French found themselves escalating native wars and supporting the Algonquian against the Iroquois, who received weapons from their Dutch trading partners. These seventeenth-century conflicts centered on the lucrative trade in beaver pelts, earning them the name of the Beaver Wars. In these wars, fighting between rival native peoples spread throughout the Great Lakes region. A handful of French Jesuit priests also made their way to Canada, intent on converting the native inhabitants to Catholicism. The Jesuits were members of the Society of Jesus, an elite religious order founded in the 1540s to spread Catholicism and combat the spread of Protestantism. The first Jesuits arrived in Quebec in the 1620s, and for the next century, their numbers did not exceed forty priests. Like the Spanish Franciscan missionaries, the Jesuits in the colony called New France labored to convert the native peoples to Catholicism. They wrote detailed annual reports about their progress in bringing the faith to the Algonquian and, beginning in the 1660s, to the Iroquois. These documents are known as the Jesuit Relations ( Figure 3.7 ), and they provide a rich source for understanding both the Jesuit view of the Native Americans and the Native response to the colonizers. 
One Native convert to Catholicism, a Mohawk woman named Kateri Tekakwitha, so impressed the priests with her piety that a Jesuit named Claude Chauchetière attempted to make her a saint in the Church. However, the effort to canonize Tekakwitha faltered when leaders of the Church balked at elevating a “savage” to such a high status; she was eventually canonized in 2012. French colonizers pressured the native inhabitants of New France to convert, but they virtually never saw Native peoples as their equals. Defining American A Jesuit Priest on Native Healing Traditions The Jesuit Relations ( Figure 3.7 ) provide incredible detail about Native life. For example, the 1636 edition, written by the Catholic priest Jean de Brébeuf, addresses the devastating effects of disease on Native peoples and the efforts made to combat it. Let us return to the feasts. The Aoutaerohi is a remedy which is only for one particular kind of disease, which they call also Aoutaerohi , from the name of a little Demon as large as the fist, which they say is in the body of the sick man, especially in the part which pains him. They find out that they are sick of this disease, by means of a dream, or by the intervention of some Sorcerer. . . . Of three kinds of games especially in use among these Peoples,—namely, the games of crosse [lacrosse], dish, and straw,—the first two are, they say, most healing. Is not this worthy of compassion? There is a poor sick man, fevered of body and almost dying, and a miserable Sorcerer will order for him, as a cooling remedy, a game of crosse. Or the sick man himself, sometimes, will have dreamed that he must die unless the whole country shall play crosse for his health; and, no matter how little may be his credit, you will see then in a beautiful field, Village contending against Village, as to who will play crosse the better, and betting against one another Beaver robes and Porcelain collars, so as to excite greater interest. According to this account, how did Native Americans attempt to cure disease? Why did they prescribe a game of lacrosse? What benefits might these games have for the sick? 3.3 English Settlements in America Learning Objectives By the end of this section, you will be able to: Identify the first English settlements in America Describe the differences between the Chesapeake Bay colonies and the New England colonies Compare and contrast the wars between Native inhabitants and English colonists in both the Chesapeake Bay and New England colonies Explain the role of Bacon’s Rebellion in the rise of chattel slavery in Virginia At the start of the seventeenth century, the English had not established a permanent settlement in the Americas. Over the next century, however, they outpaced their rivals. The English encouraged emigration far more than the Spanish, French, or Dutch. They established nearly a dozen colonies, sending swarms of immigrants to populate the land. England had experienced a dramatic rise in population in the sixteenth century, and the colonies appeared a welcoming place for those who faced overcrowding and grinding poverty at home. Thousands of English migrants arrived in the Chesapeake Bay colonies of Virginia and Maryland to work in the tobacco fields. Another stream, this one of pious Puritan families, sought to live as they believed scripture demanded and established the Plymouth, Massachusetts Bay, New Haven, Connecticut, and Rhode Island colonies of New England ( Figure 3.8 ). 
THE DIVERGING CULTURES OF THE NEW ENGLAND AND CHESAPEAKE COLONIES Promoters of English colonization in North America, many of whom never ventured across the Atlantic, wrote about the bounty the English would find there. These boosters of colonization hoped to turn a profit—whether by importing raw resources or providing new markets for English goods—and spread Protestantism. The English migrants who actually made the journey, however, had different goals. In Chesapeake Bay, English migrants established Virginia and Maryland with a decidedly commercial orientation. Though the early Virginians at Jamestown hoped to find gold, they and the settlers in Maryland quickly discovered that growing tobacco was the only sure means of making money. Thousands of unmarried, unemployed, and impatient young Englishmen, along with a few Englishwomen, pinned their hopes for a better life on the tobacco fields of these two colonies. A very different group of English men and women flocked to the cold climate and rocky soil of New England, spurred by religious motives. Many of the Puritans crossing the Atlantic were people who brought families and children. Often they were following their ministers in a migration “beyond the seas,” envisioning a new English Israel where reformed Protestantism would grow and thrive, providing a model for the rest of the Christian world and a counter to what they saw as the Catholic menace. While the English in Virginia and Maryland worked on expanding their profitable tobacco fields, the English in New England built towns focused on the church, where each congregation decided what was best for itself. The Congregational Church is the result of the Puritan enterprise in America. Many historians believe the fault lines separating what later became the North and South in the United States originated in the profound differences between the Chesapeake and New England colonies. The source of those differences lay in England’s domestic problems. Increasingly in the early 1600s, the English state church—the Church of England, established in the 1530s—demanded conformity, or compliance with its practices, but Puritans pushed for greater reforms. By the 1620s, the Church of England began to see leading Puritan ministers and their followers as outlaws, a national security threat because of their opposition to its power. As the noose of conformity tightened around them, many Puritans decided to remove to New England. By 1640, New England had a population of twenty-five thousand. Meanwhile, many loyal members of the Church of England, who ridiculed and mocked Puritans both at home and in New England, flocked to Virginia for economic opportunity. The troubles in England escalated in the 1640s when civil war broke out, pitting Royalist supporters of King Charles I and the Church of England against Parliamentarians, the Puritan reformers and their supporters in Parliament. In 1649, the Parliamentarians gained the upper hand and, in an unprecedented move, executed Charles I. In the 1650s, therefore, England became a republic, a state without a king. English colonists in America closely followed these events. Indeed, many Puritans left New England and returned home to take part in the struggle against the king and the national church. Other English men and women in the Chesapeake colonies and elsewhere in the English Atlantic World looked on in horror at the mayhem the Parliamentarians, led by the Puritan insurgents, appeared to unleash in England. 
The turmoil in England made the administration and imperial oversight of the Chesapeake and New England colonies difficult, and the two regions developed divergent cultures. THE CHESAPEAKE COLONIES: VIRGINIA AND MARYLAND The Chesapeake colonies of Virginia and Maryland served a vital purpose in the developing seventeenth-century English empire by providing tobacco, a cash crop. However, the early history of Jamestown did not suggest the English outpost would survive. From the outset, its settlers struggled both with each other and with the native inhabitants, the powerful Powhatan, who controlled the area. Jealousies and infighting among the English destabilized the colony. One member, John Smith, whose famous map begins this chapter, took control and exercised near-dictatorial powers, which further aggravated the squabbling. The settlers’ inability to grow their own food compounded this unstable situation. They were essentially employees of the Virginia Company of London, an English joint-stock company, in which investors provided the capital and assumed the risk in order to reap the profit, and they had to make a profit for their shareholders as well as for themselves. Most initially devoted themselves to finding gold and silver instead of finding ways to grow their own food. Early Struggles and the Development of the Tobacco Economy Poor health, lack of food, and fighting with native peoples took the lives of many of the original Jamestown settlers. The winter of 1609–1610, which became known as “the starving time,” came close to annihilating the colony. By June 1610, the few remaining settlers had decided to abandon the area; only the last-minute arrival of a supply ship from England prevented another failed colonization effort. The supply ship brought new settlers, but only twelve hundred of the seventy-five hundred who came to Virginia between 1607 and 1624 survived. My Story George Percy on “The Starving Time” George Percy, the youngest son of an English nobleman, was in the first group of settlers at the Jamestown Colony. He kept a journal describing their experiences; in the excerpt below, he reports on the privations of the colonists’ third winter. Now all of us at James Town, beginning to feel that sharp prick of hunger which no man truly describe but he which has tasted the bitterness thereof, a world of miseries ensued as the sequel will express unto you, in so much that some to satisfy their hunger have robbed the store for the which I caused them to be executed. Then having fed upon horses and other beasts as long as they lasted, we were glad to make shift with vermin as dogs, cats, rats, and mice. All was fish that came to net to satisfy cruel hunger as to eat boots, shoes, or any other leather some could come by, and, those being spent and devoured, some were enforced to search the woods and to feed upon serpents and snakes and to dig the earth for wild and unknown roots, where many of our men were cut off of and slain by the savages. And now famine beginning to look ghastly and pale in every face that nothing was spared to maintain life and to do those things which seem incredible as to dig up dead corpses out of graves and to eat them, and some have licked up the blood which has fallen from their weak fellows.
—George Percy, “A True Relation of the Proceedings and Occurances of Moment which have happened in Virginia from the Time Sir Thomas Gates shipwrecked upon the Bermudes anno 1609 until my departure out of the Country which was in anno Domini 1612,” London 1624 What is your reaction to George Percy’s story? How do you think Jamestown managed to survive after such an experience? What do you think the Jamestown colonists learned? By the 1620s, Virginia had weathered the worst and gained a degree of permanence. Political stability came slowly, but by 1619, the fledgling colony was operating under the leadership of a governor, a council, and a House of Burgesses. Economic stability came from the lucrative cultivation of tobacco. Smoking tobacco was a long-standing practice among native peoples, and English and other European consumers soon adopted it. In 1614, the Virginia colony began exporting tobacco back to England, which earned it a sizable profit and saved the colony from ruin. A second tobacco colony, Maryland, was formed in 1634, when King Charles I granted its charter to the Calvert family for their loyal service to England. Cecilius Calvert, the second Lord Baltimore, conceived of Maryland as a refuge for English Catholics. Growing tobacco proved very labor-intensive ( Figure 3.9 ), and the Chesapeake colonists needed a steady workforce to do the hard work of clearing the land and caring for the tender young plants. The mature leaf of the plant then had to be cured (dried), which necessitated the construction of drying barns. Once cured, the tobacco had to be packaged in hogsheads (large wooden barrels) and loaded aboard ship, which also required considerable labor. To meet these labor demands, early Virginians relied on indentured servants. An indenture is a labor contract that young, impoverished, and often illiterate Englishmen and occasionally Englishwomen signed in England, pledging to work for a number of years (usually between five and seven) growing tobacco in the Chesapeake colonies. In return, indentured servants received paid passage to America and food, clothing, and lodging. At the end of their indenture, servants received “freedom dues,” usually food and other provisions, including, in some cases, land provided by the colony. The promise of a new life in America was a strong attraction for members of England’s underclass, who had few if any options at home. In the 1600s, some 100,000 indentured servants traveled to the Chesapeake Bay. Most were poor young men in their early twenties. Life in the colonies proved harsh, however. Indentured servants could not marry, and they were subject to the will of the tobacco planters who bought their labor contracts. If they committed a crime or disobeyed their masters, they found their terms of service lengthened, often by several years. Female indentured servants faced special dangers in what was essentially a bachelor colony. Many were exploited by unscrupulous tobacco planters who seduced them with promises of marriage. These planters would then sell their pregnant servants to other tobacco planters to avoid the costs of raising a child. Nonetheless, those indentured servants who completed their term of service often began new lives as tobacco planters. To entice even more migrants to the New World, the Virginia Company also implemented the headright system, in which those who paid their own passage to Virginia received fifty acres plus an additional fifty for each servant or family member they brought with them.
The headright system and the promise of a new life for servants acted as powerful incentives for English migrants to hazard the journey to the New World. The Anglo-Powhatan Wars By choosing to settle along the rivers on the banks of the Chesapeake, the English unknowingly placed themselves at the center of the Powhatan Empire, a powerful Algonquian confederacy of thirty native groups with perhaps as many as twenty-two thousand people. The territory of the equally impressive Susquehannock people also bordered English settlements at the north end of the Chesapeake Bay. Tensions ran high between the English and the Powhatan, and near-constant war prevailed. The First Anglo-Powhatan War (1609–1614) resulted not only from the English colonists’ intrusion onto Powhatan land, but also from their refusal to follow native protocol by giving gifts. English actions infuriated and insulted the Powhatan. In 1613, the settlers captured Pocahontas (also called Matoaka), the daughter of a Powhatan headman named Wahunsonacook, and gave her in marriage to Englishman John Rolfe. Their union, and her choice to remain with the English, helped quell the war in 1614. Pocahontas converted to Christianity, changing her name to Rebecca, and sailed with her husband and several other Powhatan to England where she was introduced to King James I ( Figure 3.10 ). Promoters of colonization publicized Pocahontas as an example of the good work of converting the Powhatan to Christianity. Peace in Virginia did not last long. The Second Anglo-Powhatan War (1620s) broke out because of the expansion of the English settlement nearly one hundred miles into the interior, and because of the continued insults and friction caused by English activities. The Powhatan attacked in 1622 and succeeded in killing almost 350 English, about a third of the settlers. The English responded by annihilating every Powhatan village around Jamestown and from then on became even more intolerant. The Third Anglo-Powhatan War (1644–1646) began with a surprise attack in which the Powhatan killed around five hundred English colonists. However, their ultimate defeat in this conflict forced the Powhatan to acknowledge King Charles I as their sovereign. The Anglo-Powhatan Wars, spanning nearly forty years, illustrate the degree of native resistance that resulted from English intrusion into the Powhatan confederacy. The Rise of Slavery in the Chesapeake Bay Colonies The transition from indentured servitude to slavery as the main labor source for some English colonies happened first in the West Indies. On the small island of Barbados, colonized in the 1620s, English planters first grew tobacco as their main export crop, but in the 1640s, they converted to sugarcane and began increasingly to rely on African enslaved people. In 1655, England wrested control of Jamaica from the Spanish and quickly turned it into a lucrative sugar island, run on forced labor, for its expanding empire. While slavery was slower to take hold in the Chesapeake colonies, by the end of the seventeenth century, both Virginia and Maryland had also adopted chattel slavery—which legally defined Africans as property and not people—as the dominant form of labor to grow tobacco. Chesapeake colonists also enslaved native people. When the first Africans arrived in Virginia in 1619, slavery—which did not exist in England—had not yet become an institution in colonial America. Many Africans worked as servants and, like their White counterparts, could acquire land of their own.
Some Africans who converted to Christianity became free landowners with White servants. The change in the status of Africans in the Chesapeake to that of slaves occurred in the last decades of the seventeenth century. Bacon’s Rebellion, an uprising of both White people and Black people who believed that the Virginia government was impeding their access to land and wealth and seemed to do little to clear the land of Native Americans, hastened the transition to African slavery in the Chesapeake colonies. The rebellion takes its name from Nathaniel Bacon, a wealthy young Englishman who arrived in Virginia in 1674. Despite an early friendship with Virginia’s royal governor, William Berkeley, Bacon found himself excluded from the governor’s circle of influential friends and councilors. He wanted land on the Virginia frontier, but the governor, fearing war with neighboring tribes, forbade further expansion. Bacon marshaled others, especially former indentured servants who believed the governor was limiting their economic opportunities and denying them the right to own tobacco farms. Bacon’s followers believed Berkeley’s frontier policy did not do enough to protect English settlers. Worse still in their eyes, Governor Berkeley tried to keep peace in Virginia by signing treaties with various local native peoples. Bacon and his followers, who saw all Native peoples as an obstacle to their access to land, pursued a policy of extermination. Tensions between the English and the native peoples in the Chesapeake colonies led to open conflict. In 1675, war broke out when Susquehannock warriors attacked settlements on Virginia’s frontier, killing English planters and destroying English plantations, including one owned by Bacon. In 1676, Bacon and other Virginians attacked the Susquehannock without the governor’s approval. When Berkeley ordered Bacon’s arrest, Bacon led his followers to Jamestown, forced the governor to flee to the safety of Virginia’s eastern shore, and then burned the city. The civil war known as Bacon’s Rebellion, a vicious struggle between supporters of the governor and those who supported Bacon, ensued. Reports of the rebellion traveled back to England, leading Charles II to dispatch both royal troops and English commissioners to restore order in the tobacco colonies. By the end of 1676, Virginians loyal to the governor gained the upper hand, executing several leaders of the rebellion. Bacon escaped the hangman’s noose, instead dying of dysentery. The rebellion fizzled in 1676, but Virginians remained divided as supporters of Bacon continued to harbor grievances over access to Native land. Bacon’s Rebellion helped to catalyze the creation of a system of racial slavery in the Chesapeake colonies. At the time of the rebellion, indentured servants made up the majority of laborers in the region. Wealthy Whites worried over the presence of this large class of laborers and the relative freedom they enjoyed, as well as the alliance that Black and White servants had forged in the course of the rebellion. Replacing indentured servitude with Black slavery diminished these risks, alleviating the reliance on White indentured servants, who were often dissatisfied and troublesome, and creating a caste of racially defined laborers whose movements were strictly controlled. It also lessened the possibility of further alliances between Black and White workers. Racial slavery even served to heal some of the divisions between wealthy and poor Whites, who could now unite as members of a “superior” racial group.
While colonial laws in the tobacco colonies had made slavery a legal institution before Bacon’s Rebellion, new laws passed in the wake of the rebellion severely curtailed Black freedom and laid the foundation for racial slavery. Virginia passed a law in 1680 prohibiting free Black people and enslaved people from bearing arms, banning Black people from congregating in large numbers, and establishing harsh punishments for enslaved people who assaulted Christians or attempted escape. Two years later, another Virginia law stipulated that all Africans brought to the colony would be enslaved for life. Thus, the increasing reliance on enslaved people in the tobacco colonies—and the draconian laws instituted to control them—not only helped planters meet labor demands, but also served to assuage English fears of further uprisings and alleviate class tensions between rich and poor White people. Defining American Robert Beverley on Servants and Enslaved People Robert Beverley was a wealthy Jamestown planter and enslaver. This excerpt from his History and Present State of Virginia, published in 1705, clearly illustrates the contrast between White servants and enslaved Black people. Their Servants, they distinguish by the Names of Slaves for Life, and Servants for a time. Slaves are the Negroes, and their Posterity, following the condition of the Mother, according to the Maxim, partus sequitur ventrem [status follows the womb]. They are call’d Slaves, in respect of the time of their Servitude, because it is for Life. Servants, are those which serve only for a few years, according to the time of their Indenture, or the Custom of the Country. The Custom of the Country takes place upon such as have no Indentures. The Law in this case is, that if such Servants be under Nineteen years of Age, they must be brought into Court, to have their Age adjudged; and from the Age they are judg’d to be of, they must serve until they reach four and twenty: But if they be adjudged upwards of Nineteen, they are then only to be Servants for the term of five Years. The Male-Servants, and Slaves of both Sexes, are employed together in Tilling and Manuring the Ground, in Sowing and Planting Tobacco, Corn, &c. Some Distinction indeed is made between them in their Cloaths, and Food; but the Work of both, is no other than what the Overseers, the Freemen, and the Planters themselves do. Sufficient Distinction is also made between the Female-Servants, and Slaves; for a White Woman is rarely or never put to work in the Ground, if she be good for any thing else: And to Discourage all Planters from using any Women so, their Law imposes the heaviest Taxes upon Female Servants working in the Ground, while it suffers all other White Women to be absolutely exempted: Whereas on the other hand, it is a common thing to work a Woman Slave out of Doors; nor does the Law make any Distinction in her Taxes, whether her Work be Abroad, or at Home. According to Robert Beverley, what are the differences between the servants and the enslaved? What protections did servants have that enslaved people did not? PURITAN NEW ENGLAND The second major area to be colonized by the English in the first half of the seventeenth century, New England, differed markedly in its founding principles from the commercially oriented Chesapeake tobacco colonies. Settled largely by waves of Puritan families in the 1630s, New England had a religious orientation from the start.
In England, reform-minded men and women had been calling for greater changes to the English national church since the 1580s. These reformers, who followed the teachings of John Calvin and other Protestant reformers, were called Puritans because of their insistence on “purifying” the Church of England of what they believed to be un-scriptural, especially Catholic elements that lingered in its institutions and practices. Many who provided leadership in early New England were learned ministers who had studied at Cambridge or Oxford but who, because they had questioned the practices of the Church of England, had been deprived of careers by the king and his officials in an effort to silence all dissenting voices. Other Puritan leaders, such as the first governor of the Massachusetts Bay Colony, John Winthrop, came from the privileged class of English gentry. These well-to-do Puritans and many thousands more left their English homes not to establish a land of religious freedom, but to practice their own religion without persecution. Puritan New England offered them the opportunity to live as they believed the Bible demanded. In their “New” England, they set out to create a model of reformed Protestantism, a new English Israel. The conflict generated by Puritanism had divided English society, because the Puritans demanded reforms that undermined the traditional festive culture. For example, they denounced popular pastimes like bear-baiting—letting dogs attack a chained bear—which were often conducted on Sundays when people had a few leisure hours. In the culture where William Shakespeare had produced his masterpieces, Puritans called for an end to the theater, censuring playhouses as places of decadence. Indeed, the Bible itself became part of the struggle between Puritans and James I, who headed the Church of England. Soon after ascending the throne, James commissioned a new version of the Bible in an effort to stifle Puritan reliance on the Geneva Bible, which followed the teachings of John Calvin and placed God’s authority above the monarch’s. The King James Version, published in 1611, instead emphasized the majesty of kings. During the 1620s and 1630s, the conflict escalated to the point where the state church prohibited Puritan ministers from preaching. In the Church’s view, Puritans represented a national security threat, because their demands for cultural, social, and religious reforms undermined the king’s authority. Unwilling to conform to the Church of England, many Puritans found refuge in the New World. Yet those who emigrated to the Americas were not united. Some called for a complete break with the Church of England, while others remained committed to reforming the national church. Plymouth: The First Puritan Colony The first group of Puritans to make their way across the Atlantic was a small contingent known as the Pilgrims. Unlike other Puritans, they insisted on a complete separation from the Church of England and had first migrated to the Dutch Republic seeking religious freedom. Although they found they could worship without hindrance there, they grew concerned that they were losing their Englishness as they saw their children begin to learn the Dutch language and adopt Dutch ways. In addition, the English Pilgrims (and others in Europe) feared another attack on the Dutch Republic by Catholic Spain. Therefore, in 1620, they moved on to found the Plymouth Colony in present-day Massachusetts. 
The governor of Plymouth, William Bradford, was a Separatist, a proponent of complete separation from the English state church. Bradford and the other Pilgrim Separatists represented a major challenge to the prevailing vision of a unified English national church and empire. On board the Mayflower, which was bound for Virginia but landed on the tip of Cape Cod, Bradford and forty other adult men signed the Mayflower Compact ( Figure 3.11 ), which presented a religious (rather than an economic) rationale for colonization. The compact expressed a community ideal of working together. When a larger exodus of Puritans established the Massachusetts Bay Colony in the 1630s, the Pilgrims at Plymouth welcomed them and the two colonies cooperated with each other. Americana The Mayflower Compact and Its Religious Rationale The Mayflower Compact, which forty-one Pilgrim men signed on board the Mayflower in Plymouth Harbor, has been called the first American governing document, predating the U.S. Constitution by over 150 years. But was the Mayflower Compact a constitution? How much authority did it convey, and to whom? In the name of God, Amen. We, whose names are underwritten, the loyal subjects of our dread Sovereign Lord King James, by the Grace of God, of Great Britain, France, and Ireland, King, defender of the Faith, etc. Having undertaken, for the Glory of God, and advancements of the Christian faith and honor of our King and Country, a voyage to plant the first colony in the Northern parts of Virginia, do by these presents, solemnly and mutually, in the presence of God, and one another, covenant and combine ourselves together into a civil body politic; for our better ordering, and preservation and furtherance of the ends aforesaid; and by virtue hereof to enact, constitute, and frame, such just and equal laws, ordinances, acts, constitutions, and offices, from time to time, as shall be thought most meet and convenient for the general good of the colony; unto which we promise all due submission and obedience. In witness whereof we have hereunto subscribed our names at Cape Cod the 11th of November, in the year of the reign of our Sovereign Lord King James, of England, France, and Ireland, the eighteenth, and of Scotland the fifty-fourth, 1620 Different labor systems also distinguished early Puritan New England from the Chesapeake colonies. Puritans expected young people to work diligently at their calling, and all members of their large families, including children, did the bulk of the work necessary to run homes, farms, and businesses. Very few migrants came to New England as laborers; in fact, New England towns protected their disciplined homegrown workforce by refusing to allow outsiders in, assuring their sons and daughters of steady employment. New England’s labor system produced remarkable results, notably a powerful maritime-based economy with scores of oceangoing ships and the crews necessary to sail them. New England mariners sailing New England–made ships transported Virginian tobacco and West Indian sugar throughout the Atlantic World. “A City upon a Hill” A much larger group of English Puritans left England in the 1630s, establishing the Massachusetts Bay Colony, the New Haven Colony, the Connecticut Colony, and Rhode Island. Unlike the exodus of young males to the Chesapeake colonies, these migrants were families with young children and their university-trained ministers.
Their aim, according to John Winthrop ( Figure 3.12 ), the first governor of Massachusetts Bay, was to create a model of reformed Protestantism—a “city upon a hill,” a new English Israel. The idea of a “city upon a hill” made clear the religious orientation of the New England settlement, and the charter of the Massachusetts Bay Colony stated as a goal that the colony’s people “may be soe religiously, peaceablie, and civilly governed, as their good Life and orderlie Conversacon, maie wynn and incite the Natives of Country, to the Knowledg and Obedience of the onlie true God and Saulor of Mankinde, and the Christian Fayth.” To illustrate this, the seal of the Massachusetts Bay Company ( Figure 3.12 ) shows a half-naked Native American who entreats more of the English to “come over and help us.” Puritan New England differed in many ways from both England and the rest of Europe. Protestants emphasized literacy so that everyone could read the Bible. This attitude was in stark contrast to that of Catholics, who refused to tolerate private ownership of Bibles in the vernacular. The Puritans, for their part, placed a special emphasis on reading scripture, and their commitment to literacy led to the establishment of the first printing press in English America in 1636. Four years later, in 1640, they published the first book in North America, the Bay Psalm Book. As Calvinists, Puritans adhered to the doctrine of predestination, whereby a few “elect” would be saved and all others damned. No one could be sure whether they were predestined for salvation, but through introspection, guided by scripture, Puritans hoped to find a glimmer of redemptive grace. Church membership was restricted to those Puritans who were willing to provide a conversion narrative telling how they came to understand their spiritual estate by hearing sermons and studying the Bible. Although many people assume Puritans escaped England to establish religious freedom, they proved to be just as intolerant as the English state church. When dissenters, including Puritan minister Roger Williams and Anne Hutchinson, challenged Governor Winthrop in Massachusetts Bay in the 1630s, they were banished. Roger Williams questioned the Puritans’ taking of Native land. Williams also argued for a complete separation from the Church of England, a position other Puritans in Massachusetts rejected, as well as the idea that the state could not punish individuals for their beliefs. Although he did accept that nonbelievers were destined for eternal damnation, Williams did not think the state could compel true orthodoxy. Puritan authorities found him guilty of spreading dangerous ideas, but he went on to found Rhode Island as a colony that sheltered dissenting Puritans from their brethren in Massachusetts. In Rhode Island, Williams wrote favorably about native peoples, contrasting their virtues with Puritan New England’s intolerance. Anne Hutchinson also ran afoul of Puritan authorities for her criticism of the evolving religious practices in the Massachusetts Bay Colony. In particular, she held that Puritan ministers in New England taught a shallow version of Protestantism emphasizing hierarchy and actions—a “covenant of works” rather than a “covenant of grace.” Literate Puritan women like Hutchinson presented a challenge to the male ministers’ authority. Indeed, her major offense was her claim of direct religious revelation, a type of spiritual experience that negated the role of ministers. 
Because of Hutchinson’s beliefs and her defiance of authority in the colony, especially that of Governor Winthrop, Puritan authorities tried and convicted her of holding false beliefs. In 1638, she was excommunicated and banished from the colony. She went to Rhode Island and later, in 1642, sought safety among the Dutch in New Netherland. The following year, Algonquian warriors killed Hutchinson and her family. In Massachusetts, Governor Winthrop noted her death as the righteous judgment of God against a heretic. Like many other Europeans, the Puritans believed in the supernatural. Every event appeared to be a sign of God’s mercy or judgment, and people believed that witches allied themselves with the Devil to carry out evil deeds and deliberate harm such as the sickness or death of children, the loss of cattle, and other catastrophes. Hundreds were accused of witchcraft in Puritan New England, including townspeople whose habits or appearance bothered their neighbors or who appeared threatening for any reason. Women, seen as more susceptible to the Devil because of their supposedly weaker constitutions, made up the vast majority of suspects and those who were executed. The most notorious cases occurred in Salem Village in 1692. Many of the accusers who prosecuted the suspected witches had been traumatized by the Native wars on the frontier and by unprecedented political and cultural changes in New England. Relying on their belief in witchcraft to help make sense of their changing world, Puritan authorities executed nineteen people and caused the deaths of several others. Puritan Relationships with Native Peoples Like their Spanish and French Catholic rivals, English Puritans in America took steps to convert native peoples to their version of Christianity. John Eliot, the leading Puritan missionary in New England, urged natives in Massachusetts to live in “praying towns” established by English authorities for converted Native Americans, and to adopt the Puritan emphasis on the centrality of the Bible. In keeping with the Protestant emphasis on reading scripture, he translated the Bible into the local Algonquian language and published his work in 1663. Eliot hoped that as a result of his efforts, some of New England’s native inhabitants would become preachers. Tensions had existed from the beginning between the Puritans and the native people who controlled southern New England ( Figure 3.13 ). Relationships deteriorated as the Puritans continued to expand their settlements aggressively and as European ways increasingly disrupted native life. These strains led to King Philip’s War (1675–1676), a massive regional conflict that was nearly successful in pushing the English out of New England. When the Puritans began to arrive in the 1620s and 1630s, local Algonquian peoples had viewed them as potential allies in the conflicts already simmering between rival native groups. In 1621, the Wampanoag, led by Massasoit, concluded a peace treaty with the Pilgrims at Plymouth. In the 1630s, the Puritans in Massachusetts and Plymouth allied themselves with the Narragansett and Mohegan people against the Pequot, who had recently expanded their claims into southern New England. In May 1637, the Puritans attacked a large group of several hundred Pequot along the Mystic River in Connecticut. To the horror of their native allies, the Puritans massacred all but a handful of the men, women, and children they found. 
By the mid-seventeenth century, the Puritans had pushed their way further into the interior of New England, establishing outposts along the Connecticut River Valley. There seemed no end to their expansion. Wampanoag leader Metacom or Metacomet, also known as King Philip among the English, was determined to stop the encroachment. The Wampanoag, along with the Nipmuck, Pocumtuck, and Narragansett, took up the hatchet to drive the English from the land. In the ensuing conflict, called King Philip’s War, native forces succeeded in destroying half of the frontier Puritan towns; however, in the end, the English (aided by Mohegans and Christian Native Americans) prevailed and sold many captives into slavery in the West Indies. (The severed head of King Philip was publicly displayed in Plymouth.) The war also forever changed the English perception of native peoples; from then on, Puritan writers took great pains to vilify the natives as bloodthirsty savages. A new type of racial hatred became a defining feature of Native-English relationships in the Northeast. My Story Mary Rowlandson’s Captivity Narrative Mary Rowlandson was a Puritan woman whom Native tribes captured and imprisoned for several weeks during King Philip’s War. After her release, she wrote The Narrative of the Captivity and the Restoration of Mrs. Mary Rowlandson, which was published in 1682 ( Figure 3.14 ). The book was an immediate sensation that was reissued in multiple editions for over a century. But now, the next morning, I must turn my back upon the town, and travel with them into the vast and desolate wilderness, I knew not whither. It is not my tongue, or pen, can express the sorrows of my heart, and bitterness of my spirit that I had at this departure: but God was with me in a wonderful manner, carrying me along, and bearing up my spirit, that it did not quite fail. One of the Indians carried my poor wounded babe upon a horse; it went moaning all along, “I shall die, I shall die.” I went on foot after it, with sorrow that cannot be expressed. At length I took it off the horse, and carried it in my arms till my strength failed, and I fell down with it. Then they set me upon a horse with my wounded child in my lap, and there being no furniture upon the horse’s back, as we were going down a steep hill we both fell over the horse’s head, at which they, like inhumane creatures, laughed, and rejoiced to see it, though I thought we should there have ended our days, as overcome with so many difficulties. But the Lord renewed my strength still, and carried me along, that I might see more of His power; yea, so much that I could never have thought of, had I not experienced it. What sustains Rowlandson during her ordeal? How does she characterize her captors? What do you think made her narrative so compelling to readers? 3.4 The Impact of Colonization Learning Objectives By the end of this section, you will be able to: Explain the reasons for the rise of slavery in the American colonies Describe changes to Native life, including warfare and hunting Contrast European and Native American views on property Assess the impact of European settlement on the environment As Europeans moved beyond exploration and into colonization of the Americas, they brought changes to virtually every aspect of the land and its people, from trade and hunting to warfare and personal property. European goods, ideas, and diseases shaped the changing continent.
As Europeans established their colonies, their societies also became segmented and divided along religious and racial lines. Most people in these societies were not free; they labored as servants or enslaved people, doing the work required to produce wealth for others. By 1700, the American continent had become a place of stark contrasts between slavery and freedom, between the haves and the have-nots. THE INSTITUTION OF SLAVERY Everywhere in the American colonies, a crushing demand for labor existed to grow New World cash crops, especially sugar and tobacco. This need led Europeans to rely increasingly on Africans, and after 1600, the movement of Africans across the Atlantic accelerated. The English crown chartered the Royal African Company in 1672, giving the company a monopoly over the transport of enslaved African people to the English colonies. Over the next four decades, the company transported around 350,000 Africans from their homelands. By 1700, the tiny English sugar island of Barbados had a population of fifty thousand enslaved people, and the English had encoded the institution of chattel slavery into colonial law. This new system of African slavery came slowly to the English colonists, who did not have slavery at home and preferred to use servant labor. Nevertheless, by the end of the seventeenth century, the English everywhere in America—and particularly in the Chesapeake Bay colonies—had come to rely on enslaved Africans. While Africans had long practiced slavery among their own people, it had not been based on race. Africans enslaved other Africans as war captives, for crimes, and to settle debts; they generally used enslaved people for domestic and small-scale agricultural work, not for growing cash crops on large plantations. Additionally, African slavery was often a temporary condition rather than a lifelong sentence, and, unlike New World slavery, it was typically not heritable (passed from an enslaved mother to her children). The growing slave trade with Europeans had a profound impact on the people of West Africa, giving prominence to local chieftains and merchants who traded enslaved people for European textiles, alcohol, guns, tobacco, and food. Africans also charged Europeans for the right to trade in enslaved people and imposed taxes on purchases of enslaved people. Different African groups and kingdoms even staged large-scale raids on each other to meet the demand for enslaved people. Once sold to traders, all captured people sent to America endured the hellish Middle Passage, the transatlantic crossing, which took one to two months. By 1625, more than 325,800 Africans had been shipped to the New World, though many thousands perished during the voyage. An astonishing number, some four million, were transported to the Caribbean between 1501 and 1830. When they reached their destination in America, Africans found themselves trapped in shockingly brutal slave societies. In the Chesapeake colonies, they faced a lifetime of harvesting and processing tobacco. Everywhere, Africans resisted slavery, and running away was common. In Jamaica and elsewhere, escaped enslaved people created maroon communities, groups that resisted recapture and eked out a living from the land, rebuilding their communities as best they could. When possible, they adhered to traditional ways, following spiritual leaders such as Vodun priests.
CHANGES TO NATIVE LIFE While the Americas remained firmly under the control of native peoples in the first decades of European settlement, conflict increased as colonization spread and Europeans placed greater demands upon the native populations, including expecting them to convert to Christianity (either Catholicism or Protestantism). Throughout the seventeenth century, the still-powerful native peoples and confederacies that retained control of the land waged war against the invading Europeans, achieving a degree of success in their effort to drive the newcomers from the continent. At the same time, European goods had begun to change Native life radically. In the 1500s, some of the earliest objects Europeans introduced to Native Americans were glass beads, copper kettles, and metal utensils. Native people often adapted these items for their own use. For example, some cut up copper kettles and refashioned the metal for other uses, including jewelry that conferred status on the wearer, who was seen as connected to the new European source of raw materials. As European settlements grew throughout the 1600s, European goods flooded Native communities. Soon Native people were using these items for the same purposes as the Europeans. For example, many Native inhabitants abandoned their animal-skin clothing in favor of European textiles. Similarly, clay cookware gave way to metal cooking implements, and Native Americans found that European flint and steel made starting fires much easier ( Figure 3.15 ). The abundance of European goods gave rise to new artistic objects. For example, iron awls made the creation of shell beads among the native people of the Eastern Woodlands much easier, and the result was an astonishing increase in the production of wampum, shell beads used in ceremonies and as jewelry and currency. Native peoples had always placed goods in the graves of their departed, and this practice escalated with the arrival of European goods. Archaeologists have found enormous caches of European trade goods in the graves of Native Americans on the East Coast. Native weapons changed dramatically as well, creating an arms race among the peoples living in European colonization zones. Native Americans refashioned European brassware into arrow points and turned axes used for chopping wood into weapons. The most prized piece of European weaponry to obtain was a musket, or light, long-barreled European gun. In order to trade with Europeans for these, Native peoples intensified their harvesting of beaver, commercializing their traditional practice. The influx of European materials made warfare more lethal and changed traditional patterns of authority among tribes. Formerly weaker groups, if they had access to European metal and weapons, suddenly gained the upper hand against once-dominant groups. The Algonquian, for instance, traded with the French for muskets and gained power against their enemies, the Iroquois. Eventually, native peoples also used their new weapons against the European colonizers who had provided them. ENVIRONMENTAL CHANGES The European presence in America spurred countless changes in the environment, setting into motion chains of events that affected native animals as well as people. The popularity of beaver-trimmed hats in Europe, coupled with Native Americans’ desire for European weapons, led to the overhunting of beaver in the Northeast. Soon, beavers were extinct in New England, New York, and other areas.
With their loss came the loss of beaver ponds, which had served as habitats for fish as well as water sources for deer, moose, and other animals. Furthermore, Europeans introduced pigs, which they allowed to forage in forests and other wildlands. Pigs consumed the foods on which deer and other indigenous species depended, resulting in scarcity of the game native peoples had traditionally hunted. European ideas about owning land as private property clashed with natives’ understanding of land use. Native peoples did not believe in private ownership of land; instead, they viewed land as a resource to be held in common for the benefit of the group. The European idea of usufruct—the right to common land use and enjoyment—comes close to the native understanding, but colonists did not practice usufruct widely in America. Colonizers established fields, fences, and other means of demarcating private property. Native peoples who moved seasonally to take advantage of natural resources now found areas off limits, claimed by colonizers because of their insistence on private-property rights. The Introduction of Disease Perhaps European colonization’s single greatest impact on the North American environment was the introduction of disease. Microbes to which native inhabitants had no immunity led to death everywhere Europeans settled. Along the New England coast between 1616 and 1618, epidemics claimed the lives of 75 percent of the native people. In the 1630s, half the Huron and Iroquois around the Great Lakes died of smallpox. As is often the case with disease, the very young and the very old were the most vulnerable and had the highest mortality rates. The loss of the older generation meant the loss of knowledge and tradition, while the death of children only compounded the trauma, creating devastating implications for future generations. Some native peoples perceived disease as a weapon used by hostile spiritual forces, and they went to war to exorcise the disease from their midst. These “mourning wars” in eastern North America were designed to gain captives who would either be adopted (“requickened” as a replacement for a deceased loved one) or ritually tortured and executed to assuage the anger and grief caused by loss. The Cultivation of Plants European expansion in the Americas led to an unprecedented movement of plants across the Atlantic. A prime example is tobacco, which became a valuable export as the habit of smoking, previously unknown in Europe, took hold ( Figure 3.16 ). Another example is sugar. Columbus brought sugarcane to the Caribbean on his second voyage in 1494, and thereafter a wide variety of other herbs, flowers, seeds, and roots made the transatlantic voyage. Just as pharmaceutical companies today scour the natural world for new drugs, Europeans traveled to America to discover new medicines. The task of cataloging the new plants found there helped give birth to the science of botany. Early botanists included the English naturalist Sir Hans Sloane, who traveled to Jamaica in 1687 and there recorded hundreds of new plants ( Figure 3.17 ). Sloane also helped popularize the drinking of chocolate, made from the cacao bean, in England. Native Americans, who possessed a vast understanding of local New World plants and their properties, would have been a rich source of information for those European botanists seeking to find and catalog potentially useful plants. 
Enslaved Africans, who had a tradition of the use of medicinal plants in their native land, adapted to their new surroundings by learning the use of New World plants through experimentation or from the native inhabitants. Native peoples and Africans employed their knowledge effectively within their own communities. One notable example was the use of the peacock flower to induce abortions: Native American and enslaved African women living in oppressive colonial regimes are said to have used this herb to prevent the birth of children into slavery. Europeans distrusted medical knowledge that came from African or native sources, however, and thus lost the benefit of this source of information.
biology
Chapter Outline
46.1 Ecology of Ecosystems
46.2 Energy Flow through Ecosystems
46.3 Biogeochemical Cycles
Introduction In 1993, an interesting example of ecosystem dynamics occurred when a rare lung disease struck inhabitants of the southwestern United States. This disease had an alarming rate of fatalities, killing more than half of early patients, many of whom were Native Americans. These formerly healthy young adults died from complete respiratory failure. The disease was unknown, and the Centers for Disease Control (CDC), the United States government agency responsible for managing potential epidemics, was brought in to investigate. The scientists could have learned about the disease had they known to talk with the Navajo healers who lived in the area and who had observed the connection between rainfall and mice populations, thereby predicting the 1993 outbreak. The cause of the disease, determined within a few weeks by the CDC investigators, was the hantavirus known as Sin Nombre, the virus with “no name.” With insights from traditional Navajo medicine, scientists were able to characterize the disease rapidly and institute effective health measures to prevent its spread. This example illustrates the importance of understanding the complexities of ecosystems and how they respond to changes in the environment.
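Several of the study questions that follow turn on the energy-flow arithmetic developed in this chapter, particularly the Silver Springs case study. As a quick orientation, here is a minimal worked sketch of the two quantities involved, net primary productivity (NPP) and trophic level transfer efficiency (TLTE). The formulas are the standard ecological definitions; the kcal figures are the Silver Springs values quoted in the question contexts below, so the computed efficiency of roughly 14.5 percent is an illustration worked from those values rather than a number stated in the text.

% Net primary productivity: gross primary productivity minus the
% energy the producers themselves spend on respiration and lose as heat.
\[
\text{NPP} = \text{GPP} - \text{(respiration and heat loss)}
\]
% Trophic level transfer efficiency between two successive levels.
\[
\text{TLTE} = \frac{\text{production at present trophic level}}{\text{production at previous trophic level}} \times 100
\]
% Worked example with the Silver Springs figures for primary consumers.
\[
\text{TLTE}_{\text{primary consumers}} \approx \frac{1103\ \text{kcal/m}^2\text{/yr}}{7618\ \text{kcal/m}^2\text{/yr}} \times 100 \approx 14.5\%
\]

The small size of this percentage is the point of the second-law question below: most of the energy at each trophic level is dissipated as metabolic heat rather than passed upward.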
[ { "answer": { "ans_choice": 3, "ans_text": "resilience" }, "bloom": null, "hl_context": "Equilibrium is the steady state of an ecosystem where all organisms are in balance with their environment and with each other . In ecology , two parameters are used to measure changes in ecosystems : resistance and resilience . The ability of an ecosystem to remain at equilibrium in spite of disturbances is called resistance . <hl> The speed at which an ecosystem recovers equilibrium after being disturbed , called its resilience . <hl> Ecosystem resistance and resilience are especially important when considering human impact . The nature of an ecosystem may change to such a degree that it can lose its resilience entirely . This process can lead to the complete destruction or irreversible altering of the ecosystem .", "hl_sentences": "The speed at which an ecosystem recovers equilibrium after being disturbed , called its resilience .", "question": { "cloze_format": "The ability of an ecosystem to return to its equilibrium state after an environmental disturbance is called ________.", "normal_format": "What is called the ability of an ecosystem to return to its equilibrium state after an environmental disturbance?", "question_choices": [ "resistance", "restoration", "reformation", "resilience" ], "question_id": "fs-idp193965264", "question_text": "The ability of an ecosystem to return to its equilibrium state after an environmental disturbance is called ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "microcosm" }, "bloom": null, "hl_context": "For these reasons , scientists study ecosystems under more controlled conditions . <hl> Experimental systems usually involve either partitioning a part of a natural ecosystem that can be used for experiments , termed a mesocosm , or by re-creating an ecosystem entirely in an indoor or outdoor laboratory environment , which is referred to as a microcosm . <hl> A major limitation to these approaches is that removing individual organisms from their natural ecosystem or altering a natural ecosystem through partitioning may change the dynamics of the ecosystem . These changes are often due to differences in species numbers and diversity and also to environment alterations caused by partitioning ( mesocosm ) or re-creating ( microcosm ) the natural habitat . Thus , these types of experiments are not totally predictive of changes that would occur in the ecosystem from which they were gathered .", "hl_sentences": "Experimental systems usually involve either partitioning a part of a natural ecosystem that can be used for experiments , termed a mesocosm , or by re-creating an ecosystem entirely in an indoor or outdoor laboratory environment , which is referred to as a microcosm .", "question": { "cloze_format": "A re-created ecosystem in a laboratory environment is known as a ________.", "normal_format": "What is a re-created ecosystem in a laboratory environment is known as?", "question_choices": [ "mesocosm", "simulation", "microcosm", "reproduction" ], "question_id": "fs-idp115382560", "question_text": "A re-created ecosystem in a laboratory environment is known as a ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "detrital" }, "bloom": null, "hl_context": "Head to this online interactive simulator to investigate food web function . In the Interactive Labs box , under Food Web , click Step 1 . Read the instructions first , and then click Step 2 for additional instructions . 
When you are ready to create a simulation , in the upper-right corner of the Interactive Labs box , click OPEN SIMULATOR . Two general types of food webs are often shown interacting within a single ecosystem . A grazing food web ( such as the Lake Ontario food web in Figure 46.6 ) has plants or other photosynthetic organisms at its base , followed by herbivores and various carnivores . <hl> A detrital food web consists of a base of organisms that feed on decaying organic matter ( dead organisms ) , called decomposers or detritivores . <hl> These organisms are usually bacteria or fungi that recycle organic material back into the biotic part of the ecosystem as they themselves are consumed by other organisms . As all ecosystems require a method to recycle material from dead organisms , most grazing food webs have an associated detrital food web . For example , in a meadow ecosystem , plants may support a grazing food web of different organisms , primary and other levels of consumers , while at the same time supporting a detrital food web of bacteria , fungi , and detrivorous invertebrates feeding off dead plants and animals .", "hl_sentences": "A detrital food web consists of a base of organisms that feed on decaying organic matter ( dead organisms ) , called decomposers or detritivores .", "question": { "cloze_format": "The class of food web that decomposers are associated with is ___.", "normal_format": "Decomposers are associated with which class of food web?", "question_choices": [ "grazing", "detrital", "inverted", "aquatic" ], "question_id": "fs-idp322525632", "question_text": "Decomposers are associated with which class of food web?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "phytoplankton" }, "bloom": null, "hl_context": "<hl> In many ecosystems , the bottom of the food chain consists of photosynthetic organisms ( plants and / or phytoplankton ) , which are called primary producers . <hl> <hl> The organisms that consume the primary producers are herbivores : the primary consumers . <hl> Secondary consumers are usually carnivores that eat the primary consumers . Tertiary consumers are carnivores that eat other carnivores . Higher-level consumers feed on the next lower tropic levels , and so on , up to the organisms at the top of the food chain : the apex consumers . In the Lake Ontario food chain shown in Figure 46.4 , the Chinook salmon is the apex consumer at the top of this food chain .", "hl_sentences": "In many ecosystems , the bottom of the food chain consists of photosynthetic organisms ( plants and / or phytoplankton ) , which are called primary producers . The organisms that consume the primary producers are herbivores : the primary consumers .", "question": { "cloze_format": "The primary producers in an ocean grazing food web are usually ________.", "normal_format": "What are usually the primary producers in an ocean grazing food web?", "question_choices": [ "plants", "animals", "fungi", "phytoplankton" ], "question_id": "fs-idp162905184", "question_text": "The primary producers in an ocean grazing food web are usually ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "analytical modeling" }, "bloom": "1", "hl_context": "<hl> Analytical models often use simple , linear components of ecosystems , such as food chains , and are known to be complex mathematically ; therefore , they require a significant amount of mathematical knowledge and expertise . 
<hl> Although analytical models have great potential , their simplification of complex ecosystems is thought to limit their accuracy . Simulation models that use computer programs are better able to deal with the complexities of ecosystem structure . Scientists use the data generated by these experimental studies to develop ecosystem models that demonstrate the structure and dynamics of ecosystems . Three basic types of ecosystem modeling are routinely used in research and ecosystem management : a conceptual model , an analytical model , and a simulation model . A conceptual model is an ecosystem model that consists of flow charts to show interactions of different compartments of the living and nonliving components of the ecosystem . A conceptual model describes ecosystem structure and dynamics and shows how environmental disturbances affect the ecosystem ; however , its ability to predict the effects of these disturbances is limited . Analytical and simulation models , in contrast , are mathematical methods of describing ecosystems that are indeed capable of predicting the effects of potential environmental changes without direct experimentation , although with some limitations as to accuracy . <hl> An analytical model is an ecosystem model that is created using simple mathematical formulas to predict the effects of environmental disturbances on ecosystem structure and dynamics . <hl> A simulation model is an ecosystem model that is created using complex computer algorithms to holistically model ecosystems and to predict the effects of environmental disturbances on ecosystem structure and dynamics . Ideally , these models are accurate enough to determine which components of the ecosystem are particularly sensitive to disturbances , and they can serve as a guide to ecosystem managers ( such as conservation ecologists or fisheries biologists ) in the practical maintenance of ecosystem health .", "hl_sentences": "Analytical models often use simple , linear components of ecosystems , such as food chains , and are known to be complex mathematically ; therefore , they require a significant amount of mathematical knowledge and expertise . An analytical model is an ecosystem model that is created using simple mathematical formulas to predict the effects of environmental disturbances on ecosystem structure and dynamics .", "question": { "cloze_format": "The term that describes the use of mathematical equations in the modeling of linear aspects of ecosystems is ___.", "normal_format": "What term describes the use of mathematical equations in the modeling of linear aspects of ecosystems?", "question_choices": [ "analytical modeling", "simulation modeling", "conceptual modeling", "individual-based modeling" ], "question_id": "fs-idp154317872", "question_text": "What term describes the use of mathematical equations in the modeling of linear aspects of ecosystems?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "trophic level" }, "bloom": null, "hl_context": "The scientific understanding of a food chain is more precise than in its everyday usage . In ecology , a food chain is a linear sequence of organisms through which nutrients and energy pass : primary producers , primary consumers , and higher-level consumers are used to describe ecosystem structure and dynamics . There is a single path through the chain . <hl> Each organism in a food chain occupies what is called a trophic level . 
<hl> Depending on their role as producers or consumers , species or groups of species can be assigned to various trophic levels .", "hl_sentences": "Each organism in a food chain occupies what is called a trophic level .", "question": { "cloze_format": "The position of an organism along a food chain is known as its ________.", "normal_format": "What is the position of an organism along a food chain?", "question_choices": [ "locus", "location", "trophic level", "microcosm" ], "question_id": "fs-idp156578576", "question_text": "The position of an organism along a food chain is known as its ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "biomass" }, "bloom": null, "hl_context": "Productivity within Trophic Levels Productivity within an ecosystem can be defined as the percentage of energy entering the ecosystem incorporated into biomass in a particular trophic level . <hl> Biomass is the total mass , in a unit area at the time of measurement , of living or previously living organisms within a trophic level . <hl> Ecosystems have characteristic amounts of biomass at each trophic level . For example , in the English Channel ecosystem the primary producers account for a biomass of 4 g / m 2 ( grams per meter squared ) , while the primary consumers exhibit a biomass of 21 g / m 2 . The productivity of the primary producers is especially important in any ecosystem because these organisms bring energy to other living organisms by photoautotrophy or chemoautotrophy . The rate at which photosynthetic primary producers incorporate energy from the sun is called gross primary productivity . An example of gross primary productivity is shown in the compartment diagram of energy flow within the Silver Springs aquatic ecosystem as shown ( Figure 46.8 ) . In this ecosystem , the total energy accumulated by the primary producers ( gross primary productivity ) was shown to be 20,810 kcal / m 2 / yr . Because all organisms need to use some of this energy for their own functions ( like respiration and resulting metabolic heat loss ) scientists often refer to the net primary productivity of an ecosystem . Net primary productivity is the energy that remains in the primary producers after accounting for the organisms ’ respiration and heat loss . The net productivity is then available to the primary consumers at the next trophic level . In our Silver Spring example , 13,187 of the 20,810 kcal / m 2 / yr were used for respiration or were lost as heat , leaving 7,632 kcal / m 2 / yr of energy for use by the primary consumers .", "hl_sentences": "Biomass is the total mass , in a unit area at the time of measurement , of living or previously living organisms within a trophic level .", "question": { "cloze_format": "The weight of living organisms in an ecosystem at a particular point in time is called ___ .", "normal_format": "What is called the weight of living organisms in an ecosystem at a particular point in time?", "question_choices": [ "energy", "production", "entropy", "biomass" ], "question_id": "fs-idp11895824", "question_text": "The weight of living organisms in an ecosystem at a particular point in time is called:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "biomagnification" }, "bloom": null, "hl_context": "One of the most important environmental consequences of ecosystem dynamics is biomagnification . 
Review Questions

1. Which term describes the process whereby toxic substances increase along trophic levels of an ecosystem?
Choices: biomassification, biomagnification, bioentropy, heterotrophy. Answer: biomagnification.

2. Organisms that can make their own food using inorganic molecules are called ________.
Choices: autotrophs, heterotrophs, photoautotrophs, chemoautotrophs. Answer: chemoautotrophs.

3. In the English Channel ecosystem, the number of primary producers is smaller than the number of primary consumers because ________.
Choices: the apex consumers have a low turnover rate; the primary producers have a low turnover rate; the primary producers have a high turnover rate; the primary consumers have a high turnover rate. Answer: the primary producers have a high turnover rate.

4. What law of chemistry determines how much energy can be transferred when it is converted from one form to another?
Choices: the first law of thermodynamics, the second law of thermodynamics, the conservation of matter, the conservation of energy. Answer: the second law of thermodynamics.

5. The movement of mineral nutrients through organisms and their environment is called a ________ cycle.
Choices: biological, bioaccumulation, biogeochemical, biochemical. Answer: biogeochemical.

6. Carbon is present in the atmosphere as ________.
Choices: carbon dioxide, carbonate ion, carbon dust, carbon monoxide. Answer: carbon dioxide.

7. The majority of water found on Earth is ________.
Choices: ice, water vapor, fresh water, salt water. Answer: salt water.

8. The average time a molecule spends in its reservoir is known as ________.
Choices: residence time, restriction time, resilience time, storage time. Answer: residence time.

9. The process whereby oxygen is depleted by the growth of microorganisms due to excess nutrients in aquatic systems is called ________.
Choices: dead zoning, eutrophication, retrofication, depletion. Answer: eutrophication.

10. The process whereby nitrogen is brought into organic molecules is called ________.
Choices: nitrification, denitrification, nitrogen fixation, nitrogen cycling. Answer: nitrogen fixation.
46
46.1 Ecology of Ecosystems

Learning Objectives
By the end of this section, you will be able to:
Describe the basic types of ecosystems on Earth
Explain the methods that ecologists use to study ecosystem structure and dynamics
Identify the different methods of ecosystem modeling
Differentiate between food chains and food webs and recognize the importance of each

Life in an ecosystem is often about competition for limited resources, a characteristic of the theory of natural selection. Competition in communities (all living things within specific habitats) is observed both within species and among different species. The resources for which organisms compete include organic material from living or previously living organisms, sunlight, and mineral nutrients, which provide the energy for living processes and the matter to make up organisms' physical structures. Other critical factors influencing community dynamics are the components of its physical and geographic environment: a habitat's latitude, amount of rainfall, topography (elevation), and available species. These are all important environmental variables that determine which organisms can exist within a particular area.

An ecosystem is a community of living organisms and their interactions with their abiotic (non-living) environment. Ecosystems can be small, such as the tide pools found near the rocky shores of many oceans, or large, such as the Amazon Rainforest in Brazil (Figure 46.2).

There are three broad categories of ecosystems based on their general environment: freshwater, ocean water, and terrestrial. Within these broad categories are individual ecosystem types based on the organisms present and the type of environmental habitat.

Ocean ecosystems are the most common, comprising 75 percent of the Earth's surface and consisting of three basic types: shallow ocean, deep ocean water, and the deep ocean surface (the low-depth areas of the deep oceans). The shallow ocean ecosystems include extremely biodiverse coral reef ecosystems, and the deep ocean surface is known for its large numbers of plankton and krill (small crustaceans) that support it. These two environments are especially important to aerobic respirators worldwide, as the phytoplankton perform 40 percent of all photosynthesis on Earth. Although not as diverse as the other two, deep ocean ecosystems contain a wide variety of marine organisms. Such ecosystems exist even at the bottom of the ocean, where light is unable to penetrate through the water.

Freshwater ecosystems are the rarest, occurring on only 1.8 percent of the Earth's surface. Lakes, rivers, streams, and springs comprise these systems; they are quite diverse, and they support a variety of fish, amphibians, reptiles, insects, phytoplankton, fungi, and bacteria.

Terrestrial ecosystems, also known for their diversity, are grouped into large categories called biomes, such as tropical rain forests, savannas, deserts, coniferous forests, deciduous forests, and tundra. Grouping these ecosystems into just a few biome categories obscures the great diversity of the individual ecosystems within them. For example, there is great variation in desert vegetation: the saguaro cacti and other plant life in the Sonoran Desert, in the United States, are relatively abundant compared to the desolate rocky desert of Boa Vista, an island off the coast of Western Africa (Figure 46.3).

Ecosystems are complex with many interacting parts.
They are routinely exposed to various disturbances, or changes in the environment that affect their compositions: yearly variations in rainfall and temperature and the slower processes of plant growth, which may take several years. Many of these disturbances are a result of natural processes. For example, when lightning causes a forest fire and destroys part of a forest ecosystem, the ground is eventually populated by grasses, then by bushes and shrubs, and later by mature trees, restoring the forest to its former state. The impact of environmental disturbances caused by human activities is as important as the changes wrought by natural processes. Human agricultural practices, air pollution, acid rain, global deforestation, overfishing, eutrophication, oil spills, and illegal dumping on land and into the ocean are all issues of concern to conservationists.

Equilibrium is the steady state of an ecosystem where all organisms are in balance with their environment and with each other. In ecology, two parameters are used to measure changes in ecosystems: resistance and resilience. The ability of an ecosystem to remain at equilibrium in spite of disturbances is called resistance. The speed at which an ecosystem recovers equilibrium after being disturbed is called its resilience. Ecosystem resistance and resilience are especially important when considering human impact. The nature of an ecosystem may change to such a degree that it can lose its resilience entirely. This process can lead to the complete destruction or irreversible altering of the ecosystem.

Food Chains and Food Webs

The term "food chain" is sometimes used metaphorically to describe human social situations. In this sense, food chains are thought of as a competition for survival, such as "who eats whom?" Someone eats and someone is eaten. Therefore, it is not surprising that in our competitive "dog-eat-dog" society, individuals who are considered successful are seen as being at the top of the food chain, consuming all others for their benefit, whereas the less successful are seen as being at the bottom.

The scientific understanding of a food chain is more precise than in its everyday usage. In ecology, a food chain is a linear sequence of organisms through which nutrients and energy pass: primary producers, primary consumers, and higher-level consumers are used to describe ecosystem structure and dynamics. There is a single path through the chain. Each organism in a food chain occupies what is called a trophic level. Depending on their role as producers or consumers, species or groups of species can be assigned to various trophic levels.

In many ecosystems, the bottom of the food chain consists of photosynthetic organisms (plants and/or phytoplankton), which are called primary producers. The organisms that consume the primary producers are herbivores: the primary consumers. Secondary consumers are usually carnivores that eat the primary consumers. Tertiary consumers are carnivores that eat other carnivores. Higher-level consumers feed on the next lower trophic levels, and so on, up to the organisms at the top of the food chain: the apex consumers. In the Lake Ontario food chain shown in Figure 46.4, the Chinook salmon is the apex consumer at the top of this food chain.

One major factor that limits the length of food chains is energy. Energy is lost as heat between each trophic level due to the second law of thermodynamics.
Thus, after a limited number of trophic energy transfers, the amount of energy remaining in the food chain may not be great enough to support viable populations at yet a higher trophic level. The loss of energy between trophic levels is illustrated by the pioneering studies of Howard T. Odum in the Silver Springs, Florida, ecosystem in the 1940s ( Figure 46.5 ). The primary producers generated 20,819 kcal/m 2 /yr (kilocalories per square meter per year), the primary consumers generated 3368 kcal/m 2 /yr, the secondary consumers generated 383 kcal/m 2 /yr, and the tertiary consumers only generated 21 kcal/m 2 /yr. Thus, there is little energy remaining for another level of consumers in this ecosystem. There is a one problem when using food chains to accurately describe most ecosystems. Even when all organisms are grouped into appropriate trophic levels, some of these organisms can feed on species from more than one trophic level; likewise, some of these organisms can be eaten by species from multiple trophic levels. In other words, the linear model of ecosystems, the food chain, is not completely descriptive of ecosystem structure. A holistic model—which accounts for all the interactions between different species and their complex interconnected relationships with each other and with the environment—is a more accurate and descriptive model for ecosystems. A food web is a graphic representation of a holistic, non-linear web of primary producers, primary consumers, and higher-level consumers used to describe ecosystem structure and dynamics ( Figure 46.6 ). A comparison of the two types of structural ecosystem models shows strength in both. Food chains are more flexible for analytical modeling, are easier to follow, and are easier to experiment with, whereas food web models more accurately represent ecosystem structure and dynamics, and data can be directly used as input for simulation modeling. Link to Learning Head to this online interactive simulator to investigate food web function. In the Interactive Labs box, under Food Web , click Step 1 . Read the instructions first, and then click Step 2 for additional instructions. When you are ready to create a simulation, in the upper-right corner of the Interactive Labs box, click OPEN SIMULATOR . Two general types of food webs are often shown interacting within a single ecosystem. A grazing food web (such as the Lake Ontario food web in Figure 46.6 ) has plants or other photosynthetic organisms at its base, followed by herbivores and various carnivores. A detrital food web consists of a base of organisms that feed on decaying organic matter (dead organisms), called decomposers or detritivores. These organisms are usually bacteria or fungi that recycle organic material back into the biotic part of the ecosystem as they themselves are consumed by other organisms. As all ecosystems require a method to recycle material from dead organisms, most grazing food webs have an associated detrital food web. For example, in a meadow ecosystem, plants may support a grazing food web of different organisms, primary and other levels of consumers, while at the same time supporting a detrital food web of bacteria, fungi, and detrivorous invertebrates feeding off dead plants and animals. Evolution Connection Three-spined Stickleback It is well established by the theory of natural selection that changes in the environment play a major role in the evolution of species within an ecosystem. 
However, little is known about how the evolution of species within an ecosystem can alter the ecosystem environment. In 2009, Dr. Luke Harmon, from the University of Idaho in Moscow, published a paper that for the first time showed that the evolution of organisms into subspecies can have direct effects on their ecosystem environment. 1
1 Nature (Vol. 458, April 1, 2009)

The three-spined stickleback (Gasterosteus aculeatus) is a freshwater fish that evolved from a saltwater fish to live in freshwater lakes about 10,000 years ago, which is considered a recent development in evolutionary time (Figure 46.7). Over the last 10,000 years, these freshwater fish then became isolated from each other in different lakes. Depending on which lake population was studied, findings showed that these sticklebacks then either remained as one species or evolved into two species. The divergence of species was made possible by their use of different areas of the pond for feeding, called micro niches.

Dr. Harmon and his team created artificial pond microcosms in 250-gallon tanks and added muck from freshwater ponds as a source of zooplankton and other invertebrates to sustain the fish. In different experimental tanks they introduced one species of stickleback from either a single-species or double-species lake. Over time, the team observed that some of the tanks bloomed with algae while others did not. This puzzled the scientists, and they decided to measure the water's dissolved organic carbon (DOC), which consists mostly of large molecules of decaying organic matter that give pond water its slightly brownish color. It turned out that the water from the tanks with two-species fish contained larger particles of DOC (and hence darker water) than water with single-species fish. This increase in DOC blocked the sunlight and prevented algal blooming. Conversely, the water from the single-species tank contained smaller DOC particles, allowing more sunlight penetration to fuel the algal blooms. This change in the environment, which is due to the different feeding habits of the stickleback species in each lake type, probably has a great impact on the survival of other species in these ecosystems, especially other photosynthetic organisms. Thus, the study shows that, at least in these ecosystems, the environment and the evolution of populations have reciprocal effects that may now be factored into simulation models.

Research into Ecosystem Dynamics: Ecosystem Experimentation and Modeling

The study of the changes in ecosystem structure caused by changes in the environment (disturbances) or by internal forces is called ecosystem dynamics. Ecosystems are characterized using a variety of research methodologies. Some ecologists study ecosystems using controlled experimental systems, while some study entire ecosystems in their natural state, and others use both approaches.

A holistic ecosystem model attempts to quantify the composition, interaction, and dynamics of entire ecosystems; it is the most representative of the ecosystem in its natural state. A food web is an example of a holistic ecosystem model. However, this type of study is limited by time and expense, as well as the fact that it is neither feasible nor ethical to do experiments on large natural ecosystems. To quantify all different species in an ecosystem and the dynamics in their habitat is difficult, especially when studying large habitats such as the Amazon Rainforest, which covers 1.4 billion acres (5.5 million km²) of the Earth's surface.
For these reasons, scientists study ecosystems under more controlled conditions. Experimental systems usually involve either partitioning a part of a natural ecosystem that can be used for experiments, termed a mesocosm, or re-creating an ecosystem entirely in an indoor or outdoor laboratory environment, which is referred to as a microcosm. A major limitation to these approaches is that removing individual organisms from their natural ecosystem or altering a natural ecosystem through partitioning may change the dynamics of the ecosystem. These changes are often due to differences in species numbers and diversity and also to environmental alterations caused by partitioning (mesocosm) or re-creating (microcosm) the natural habitat. Thus, these types of experiments are not totally predictive of changes that would occur in the ecosystem from which they were gathered. As both of these approaches have their limitations, some ecologists suggest that results from these experimental systems should be used only in conjunction with holistic ecosystem studies to obtain the most representative data about ecosystem structure, function, and dynamics.

Scientists use the data generated by these experimental studies to develop ecosystem models that demonstrate the structure and dynamics of ecosystems. Three basic types of ecosystem modeling are routinely used in research and ecosystem management: a conceptual model, an analytical model, and a simulation model. A conceptual model is an ecosystem model that consists of flow charts to show interactions of different compartments of the living and nonliving components of the ecosystem. A conceptual model describes ecosystem structure and dynamics and shows how environmental disturbances affect the ecosystem; however, its ability to predict the effects of these disturbances is limited. Analytical and simulation models, in contrast, are mathematical methods of describing ecosystems that are indeed capable of predicting the effects of potential environmental changes without direct experimentation, although with some limitations as to accuracy. An analytical model is an ecosystem model that is created using simple mathematical formulas to predict the effects of environmental disturbances on ecosystem structure and dynamics. A simulation model is an ecosystem model that is created using complex computer algorithms to holistically model ecosystems and to predict the effects of environmental disturbances on ecosystem structure and dynamics. Ideally, these models are accurate enough to determine which components of the ecosystem are particularly sensitive to disturbances, and they can serve as a guide to ecosystem managers (such as conservation ecologists or fisheries biologists) in the practical maintenance of ecosystem health.

Conceptual Models

Conceptual models are useful for describing ecosystem structure and dynamics and for demonstrating the relationships between different organisms in a community and their environment. Conceptual models are usually depicted graphically as flow charts. The organisms and their resources are grouped into specific compartments with arrows showing the relationship and transfer of energy or nutrients between them. Thus, these diagrams are sometimes called compartment models.

To model the cycling of mineral nutrients, organic and inorganic nutrients are subdivided into those that are bioavailable (ready to be incorporated into biological macromolecules) and those that are not.
For example, in a terrestrial ecosystem near a deposit of coal, carbon will be available to the plants of this ecosystem as carbon dioxide gas in a short-term period, not from the carbon-rich coal itself. However, over a longer period, microorganisms capable of digesting coal will incorporate its carbon or release it as natural gas (methane, CH₄), changing this unavailable organic source into an available one. This conversion is greatly accelerated by the combustion of fossil fuels by humans, which releases large amounts of carbon dioxide into the atmosphere. This is thought to be a major factor in the rise of the atmospheric carbon dioxide levels in the industrial age. The carbon dioxide released from burning fossil fuels is produced faster than photosynthetic organisms can use it. This process is intensified by the reduction of photosynthetic trees because of worldwide deforestation. Most scientists agree that high atmospheric carbon dioxide is a major cause of global climate change.

Conceptual models are also used to show the flow of energy through particular ecosystems. Figure 46.8 is based on Howard T. Odum's classical study of the Silver Springs, Florida, holistic ecosystem in the mid-twentieth century. 2 This study shows the energy content and transfer between various ecosystem compartments.
2 Howard T. Odum, "Trophic Structure and Productivity of Silver Springs, Florida," Ecological Monographs 27, no. 1 (1957): 47–112.

Visual Connection
Why do you think the value for gross productivity of the primary producers is the same as the value for total heat and respiration (20,810 kcal/m²/yr)?

Analytical and Simulation Models

The major limitation of conceptual models is their inability to predict the consequences of changes in ecosystem species and/or environment. Ecosystems are dynamic entities and subject to a variety of abiotic and biotic disturbances caused by natural forces and/or human activity. Ecosystems altered from their initial equilibrium state can often recover from such disturbances and return to a state of equilibrium. As most ecosystems are subject to periodic disturbances and are often in a state of change, they are usually either moving toward or away from their equilibrium state. There are many of these equilibrium states among the various components of an ecosystem, which affects the ecosystem overall. Furthermore, as humans have the ability to greatly and rapidly alter the species content and habitat of an ecosystem, the need for predictive models that enable understanding of how ecosystems respond to these changes becomes more crucial.

Analytical models often use simple, linear components of ecosystems, such as food chains, and are known to be complex mathematically; therefore, they require a significant amount of mathematical knowledge and expertise. Although analytical models have great potential, their simplification of complex ecosystems is thought to limit their accuracy. Simulation models that use computer programs are better able to deal with the complexities of ecosystem structure.

A recent development in simulation modeling uses supercomputers to create and run individual-based simulations, which accounts for the behavior of individual organisms and their effects on the ecosystem as a whole. These simulations are considered to be the most accurate and predictive of the complex responses of ecosystems to disturbances.

Link to Learning
Visit The Darwin Project to view a variety of ecosystem models.
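The contrast between a conceptual flow chart and a computational model can be made concrete with a few lines of code. The sketch below is a minimal, illustrative compartment model in Python, not a reproduction of any model discussed in this chapter: it propagates an energy budget up a food chain using fixed per-level transfer efficiencies. The starting value is the Silver Springs gross primary productivity quoted in this chapter; the efficiencies themselves are hypothetical placeholders.

```python
# A minimal compartment-style sketch of energy flow through trophic levels.
# The per-level transfer efficiencies are illustrative assumptions, not
# measured values from the Silver Springs study.

def simulate_energy_flow(gross_primary_productivity, efficiencies):
    """Propagate an energy budget up a food chain, level by level."""
    energy = gross_primary_productivity
    levels = [energy]
    for efficiency in efficiencies:
        energy *= efficiency  # fraction passed to the next trophic level
        levels.append(energy)
    return levels

# Gross primary productivity for Silver Springs quoted in this chapter,
# in kcal/m^2/yr; the three efficiencies below are made-up placeholders.
trophic_levels = simulate_energy_flow(20810, [0.15, 0.11, 0.05])
for level, kcal in enumerate(trophic_levels, start=1):
    print(f"trophic level {level}: {kcal:,.0f} kcal/m^2/yr")
```

A true simulation model would replace the fixed efficiencies with equations for respiration, decomposition, and export, but the compartment-and-transfer structure is the same.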
46.2 Energy Flow through Ecosystems

Learning Objectives
By the end of this section, you will be able to:
Describe how organisms acquire energy in a food web and in associated food chains
Explain how the efficiency of energy transfers between trophic levels affects ecosystem structure and dynamics
Discuss trophic levels and how ecological pyramids are used to model them

All living things require energy in one form or another. Energy is required by most complex metabolic pathways (often in the form of adenosine triphosphate, ATP), especially those responsible for building large molecules from smaller compounds, and life itself is an energy-driven process. Living organisms would not be able to assemble macromolecules (proteins, lipids, nucleic acids, and complex carbohydrates) from their monomeric subunits without a constant energy input.

It is important to understand how organisms acquire energy and how that energy is passed from one organism to another through food webs and their constituent food chains. Food webs illustrate how energy flows directionally through ecosystems, including how efficiently organisms acquire it, use it, and how much remains for use by other organisms of the food web.

How Organisms Acquire Energy in a Food Web

Energy is acquired by living things in three ways: photosynthesis, chemosynthesis, and the consumption and digestion of other living or previously living organisms by heterotrophs. Photosynthetic and chemosynthetic organisms are both grouped into a category known as autotrophs: organisms capable of synthesizing their own food (more specifically, capable of using inorganic carbon as a carbon source). Photosynthetic autotrophs (photoautotrophs) use sunlight as an energy source, whereas chemosynthetic autotrophs (chemoautotrophs) use inorganic molecules as an energy source. Autotrophs are critical for all ecosystems. Without these organisms, energy would not be available to other living organisms and life itself would not be possible.

Photoautotrophs, such as plants, algae, and photosynthetic bacteria, serve as the energy source for a majority of the world's ecosystems. These ecosystems are often described by grazing food webs. Photoautotrophs harness solar energy by converting it to chemical energy in the form of ATP (and NADPH). The energy stored in ATP is used to synthesize complex organic molecules, such as glucose.

Chemoautotrophs are primarily bacteria that are found in rare ecosystems where sunlight is not available, such as in those associated with dark caves or hydrothermal vents at the bottom of the ocean (Figure 46.9). Many chemoautotrophs in hydrothermal vents use hydrogen sulfide (H₂S), which is released from the vents as a source of chemical energy. This allows chemoautotrophs to synthesize complex organic molecules, such as glucose, for their own energy and in turn supplies energy to the rest of the ecosystem.

Productivity within Trophic Levels

Productivity within an ecosystem can be defined as the percentage of energy entering the ecosystem incorporated into biomass in a particular trophic level. Biomass is the total mass, in a unit area at the time of measurement, of living or previously living organisms within a trophic level. Ecosystems have characteristic amounts of biomass at each trophic level. For example, in the English Channel ecosystem the primary producers account for a biomass of 4 g/m² (grams per square meter), while the primary consumers exhibit a biomass of 21 g/m².
The productivity of the primary producers is especially important in any ecosystem because these organisms bring energy to other living organisms by photoautotrophy or chemoautotrophy. The rate at which photosynthetic primary producers incorporate energy from the sun is called gross primary productivity. An example of gross primary productivity is shown in the compartment diagram of energy flow within the Silver Springs aquatic ecosystem (Figure 46.8). In this ecosystem, the total energy accumulated by the primary producers (gross primary productivity) was shown to be 20,810 kcal/m²/yr. Because all organisms need to use some of this energy for their own functions (like respiration and resulting metabolic heat loss), scientists often refer to the net primary productivity of an ecosystem. Net primary productivity is the energy that remains in the primary producers after accounting for the organisms' respiration and heat loss. The net productivity is then available to the primary consumers at the next trophic level. In our Silver Springs example, 13,187 of the 20,810 kcal/m²/yr were used for respiration or were lost as heat, leaving 7,632 kcal/m²/yr of energy for use by the primary consumers.

Ecological Efficiency: The Transfer of Energy between Trophic Levels

As illustrated in Figure 46.8, large amounts of energy are lost from the ecosystem from one trophic level to the next level as energy flows from the primary producers through the various trophic levels of consumers and decomposers. The main reason for this loss is the second law of thermodynamics, which states that whenever energy is converted from one form to another, there is a tendency toward disorder (entropy) in the system. In biologic systems, this means a great deal of energy is lost as metabolic heat when the organisms from one trophic level consume the next level. In the Silver Springs ecosystem example (Figure 46.8), we see that the primary consumers produced 1103 kcal/m²/yr from the 7618 kcal/m²/yr of energy available to them from the primary producers. The measurement of energy transfer efficiency between two successive trophic levels is termed the trophic level transfer efficiency (TLTE) and is defined by the formula:

TLTE = (production at present trophic level / production at previous trophic level) × 100

In Silver Springs, the TLTE between the first two trophic levels was approximately 14.5 percent (1103 ÷ 7618 × 100). The low efficiency of energy transfer between trophic levels is usually the major factor that limits the length of food chains observed in a food web. The fact is, after four to six energy transfers, there is not enough energy left to support another trophic level. In the Lake Ontario example shown in Figure 46.6, only three energy transfers occurred between the primary producer (green algae) and the apex consumer (Chinook salmon).

Ecologists have many different methods of measuring energy transfers within ecosystems. Some transfers are easier or more difficult to measure depending on the complexity of the ecosystem and how much access scientists have to observe the ecosystem. In other words, some ecosystems are more difficult to study than others, and sometimes the quantification of energy transfers has to be estimated. Another main parameter that is important in characterizing energy flow within an ecosystem is the net production efficiency, defined in the next paragraph. First, the short sketch below works through the Silver Springs TLTE arithmetic.
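This is only a worked example of the TLTE formula using the values quoted above; the helper function is illustrative, not from any library.

```python
# Worked TLTE arithmetic using the Silver Springs values quoted in the text.

def tlte(present_production, previous_production):
    """Trophic level transfer efficiency, as a percentage."""
    return present_production / previous_production * 100

# Primary consumers produced 1103 kcal/m^2/yr from the 7618 kcal/m^2/yr
# made available to them by the primary producers.
print(f"TLTE = {tlte(1103, 7618):.1f} percent")  # prints: TLTE = 14.5 percent
```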
Net production efficiency (NPE) allows ecologists to quantify how efficiently organisms of a particular trophic level incorporate the energy they receive into biomass; it is calculated using the following formula:

NPE = (net consumer productivity / assimilation) × 100

Net consumer productivity is the energy content available to the organisms of the next trophic level. Assimilation is the biomass (energy content generated per unit area) of the present trophic level after accounting for the energy lost due to incomplete ingestion of food, energy used for respiration, and energy lost as waste. Incomplete ingestion refers to the fact that some consumers eat only a part of their food. For example, when a lion kills an antelope, it will eat everything except the hide and bones. The lion is missing the energy-rich bone marrow inside the bone, so the lion does not make use of all the calories its prey could provide. Thus, NPE measures how efficiently each trophic level uses and incorporates the energy from its food into biomass to fuel the next trophic level.

In general, cold-blooded animals (ectotherms), such as invertebrates, fish, amphibians, and reptiles, use less of the energy they obtain for respiration and heat than warm-blooded animals (endotherms), such as birds and mammals. The extra heat generated in endotherms, although an advantage in terms of the activity of these organisms in colder environments, is a major disadvantage in terms of NPE. Therefore, many endotherms have to eat more often than ectotherms to get the energy they need for survival. In general, NPE for ectotherms is an order of magnitude (10x) higher than for endotherms. For example, the NPE for a caterpillar eating leaves has been measured at 18 percent, whereas the NPE for a squirrel eating acorns may be as low as 1.6 percent.

The inefficiency of energy use by warm-blooded animals has broad implications for the world's food supply. It is widely accepted that the meat industry uses large amounts of crops to feed livestock, and because the NPE is low, much of the energy from animal feed is lost. For example, it costs about 1¢ to produce 1000 dietary calories (kcal) of corn or soybeans, but approximately $0.19 to produce a similar number of calories growing cattle for beef consumption. The same energy content of milk from cattle is also costly, at approximately $0.16 per 1000 kcal. Much of this difference is due to the low NPE of cattle. Thus, there has been a growing movement worldwide to promote the consumption of non-meat and non-dairy foods so that less energy is wasted feeding animals for the meat industry.

Modeling Ecosystems Energy Flow: Ecological Pyramids

The structure of ecosystems can be visualized with ecological pyramids, which were first described by the pioneering studies of Charles Elton in the 1920s. Ecological pyramids show the relative amounts of various parameters (such as number of organisms, energy, and biomass) across trophic levels.

Pyramids of numbers can be either upright or inverted, depending on the ecosystem. As shown in Figure 46.10, typical grassland during the summer has a base of many plants, and the numbers of organisms decrease at each trophic level. However, during the summer in a temperate forest, the base of the pyramid consists of few trees compared with the number of primary consumers, mostly insects.
Because trees are large, they have great photosynthetic capability and dominate other plants in this ecosystem to obtain sunlight. Even in smaller numbers, primary producers in forests are still capable of supporting other trophic levels.

Another way to visualize ecosystem structure is with pyramids of biomass. This pyramid measures the amount of energy converted into living tissue at the different trophic levels. Using the Silver Springs ecosystem example, this data exhibits an upright biomass pyramid (Figure 46.10), whereas the pyramid from the English Channel example is inverted. The plants (primary producers) of the Silver Springs ecosystem make up a large percentage of the biomass found there. However, the phytoplankton in the English Channel example make up less biomass than the primary consumers, the zooplankton. As with inverted pyramids of numbers, this inverted pyramid is not due to a lack of productivity from the primary producers, but results from the high turnover rate of the phytoplankton. The phytoplankton are consumed rapidly by the primary consumers, thus minimizing their biomass at any particular point in time. However, phytoplankton reproduce quickly, thus they are able to support the rest of the ecosystem.

Pyramid ecosystem modeling can also be used to show energy flow through the trophic levels. Notice that these numbers are the same as those used in the energy flow compartment diagram in Figure 46.8. Pyramids of energy are always upright, and an ecosystem without sufficient primary productivity cannot be supported. All types of ecological pyramids are useful for characterizing ecosystem structure. However, in the study of energy flow through the ecosystem, pyramids of energy are the most consistent and representative models of ecosystem structure (Figure 46.10).

Visual Connection
Pyramids depicting the number of organisms or biomass may be inverted, upright, or even diamond-shaped. Energy pyramids, however, are always upright. Why?

Consequences of Food Webs: Biological Magnification

One of the most important environmental consequences of ecosystem dynamics is biomagnification. Biomagnification is the increasing concentration of persistent, toxic substances in organisms at each trophic level, from the primary producers to the apex consumers. Many substances have been shown to bioaccumulate, including classical studies with the pesticide dichlorodiphenyltrichloroethane (DDT), which were described in the 1960s bestseller Silent Spring by Rachel Carson. DDT was a commonly used pesticide before its dangers became known. In some aquatic ecosystems, organisms from each trophic level consumed many organisms of the lower level, which caused DDT to increase in birds (apex consumers) that ate fish. Thus, the birds accumulated sufficient amounts of DDT to cause fragility in their eggshells. This effect increased egg breakage during nesting and was shown to have adverse effects on these bird populations. The use of DDT was banned in the United States in the 1970s.

Other substances that biomagnify are polychlorinated biphenyls (PCBs), which were used in coolant liquids in the United States until their use was banned in 1979, and heavy metals, such as mercury, lead, and cadmium. These substances were best studied in aquatic ecosystems, where fish species at different trophic levels accumulate toxic substances brought through the ecosystem by the primary producers.
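Before turning to field data, a toy calculation helps show how quickly a per-level magnification factor compounds. In the Python sketch below, both the starting concentration and the factor are made-up illustrative numbers, not values from any study in this chapter.

```python
# Toy biomagnification model: a persistent substance is concentrated by a
# fixed factor at each step up the food chain. The base concentration and
# the 1.6x per-level factor are illustrative assumptions only.

def biomagnify(base_concentration, factor, n_levels):
    """Relative concentration at each trophic level under constant magnification."""
    return [base_concentration * factor ** level for level in range(n_levels)]

for level, concentration in enumerate(biomagnify(1.0, 1.6, 5), start=1):
    print(f"trophic level {level}: {concentration:.2f}")
```

Even a modest factor of 1.6 per level leaves a fifth-level consumer with more than six times the concentration found in the primary producers.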
As illustrated in a study performed by the National Oceanic and Atmospheric Administration (NOAA) in the Saginaw Bay of Lake Huron (Figure 46.11), PCB concentrations increased from the ecosystem's primary producers (phytoplankton) through the different trophic levels of fish species. The apex consumer (walleye) has more than four times the amount of PCBs compared to phytoplankton. Also, based on results from other studies, birds that eat these fish may have PCB levels at least one order of magnitude higher than those found in the lake fish.

Other concerns have been raised by the accumulation of heavy metals, such as mercury and cadmium, in certain types of seafood. The United States Environmental Protection Agency (EPA) recommends that pregnant women and young children should not consume any swordfish, shark, king mackerel, or tilefish because of their high mercury content. These individuals are advised to eat fish low in mercury: salmon, tilapia, shrimp, pollock, and catfish. Biomagnification is a good example of how ecosystem dynamics can affect our everyday lives, even influencing the food we eat.

46.3 Biogeochemical Cycles

Learning Objectives
By the end of this section, you will be able to:
Discuss the biogeochemical cycles of water, carbon, nitrogen, phosphorus, and sulfur
Explain how human activities have impacted these cycles and the potential consequences for Earth

Energy flows directionally through ecosystems, entering as sunlight (or inorganic molecules for chemoautotrophs) and leaving as heat during the many transfers between trophic levels. However, the matter that makes up living organisms is conserved and recycled. The six most common elements associated with organic molecules—carbon, nitrogen, hydrogen, oxygen, phosphorus, and sulfur—take a variety of chemical forms and may exist for long periods in the atmosphere, on land, in water, or beneath the Earth's surface. Geologic processes, such as weathering, erosion, water drainage, and the subduction of the continental plates, all play a role in this recycling of materials. Because geology and chemistry have major roles in the study of this process, the recycling of inorganic matter between living organisms and their environment is called a biogeochemical cycle.

Water contains hydrogen and oxygen, which is essential to all living processes. The hydrosphere is the area of the Earth where water movement and storage occurs: as liquid water on the surface and beneath the surface or frozen (rivers, lakes, oceans, groundwater, polar ice caps, and glaciers), and as water vapor in the atmosphere. Carbon is found in all organic macromolecules and is an important constituent of fossil fuels. Nitrogen is a major component of our nucleic acids and proteins and is critical to human agriculture. Phosphorus, a major component of nucleic acid (along with nitrogen), is one of the main ingredients in artificial fertilizers used in agriculture and their associated environmental impacts on our surface water. Sulfur, critical to the 3-D folding of proteins (as in disulfide bonds), is released into the atmosphere by the burning of fossil fuels, such as coal.

The cycling of these elements is interconnected. For example, the movement of water is critical for the leaching of nitrogen and phosphate into rivers, lakes, and oceans. Furthermore, the ocean itself is a major reservoir for carbon.
Thus, mineral nutrients are cycled, either rapidly or slowly, through the entire biosphere, from one living organism to another, and between the biotic and abiotic world.

Link to Learning
Head to this website to learn more about biogeochemical cycles.

The Water (Hydrologic) Cycle

Water is the basis of all living processes. The human body is more than one-half water, and human cells are more than 70 percent water. Thus, most land animals need a supply of fresh water to survive. However, when examining the stores of water on Earth, 97.5 percent of it is non-potable salt water (Figure 46.12). Of the remaining water, 99 percent is locked underground as water or as ice. Thus, less than 1 percent of fresh water is easily accessible from lakes and rivers. Many living things, such as plants, animals, and fungi, are dependent on the small amount of fresh surface water supply, a lack of which can have massive effects on ecosystem dynamics. Humans, of course, have developed technologies to increase water availability, such as digging wells to harvest groundwater, storing rainwater, and using desalination to obtain drinkable water from the ocean. Although this pursuit of drinkable water has been ongoing throughout human history, the supply of fresh water is still a major issue in modern times.

Water cycling is extremely important to ecosystem dynamics. Water has a major influence on climate and, thus, on the environments of ecosystems, some located on distant parts of the Earth. Most of the water on Earth is stored for long periods in the oceans, underground, and as ice. Figure 46.13 illustrates the average time that an individual water molecule may spend in the Earth's major water reservoirs. Residence time is a measure of the average time an individual water molecule stays in a particular reservoir. A large amount of the Earth's water is locked in place in these reservoirs as ice, beneath the ground, and in the ocean, and, thus, is unavailable for short-term cycling (only surface water can evaporate).

There are various processes that occur during the cycling of water, shown in Figure 46.14. These processes include the following:
evaporation/sublimation
condensation/precipitation
subsurface water flow
surface runoff/snowmelt
streamflow

The water cycle is driven by the sun's energy as it warms the oceans and other surface waters. This leads to the evaporation (water to water vapor) of liquid surface water and the sublimation (ice to water vapor) of frozen water, which deposits large amounts of water vapor into the atmosphere. Over time, this water vapor condenses into clouds as liquid or frozen droplets and is eventually followed by precipitation (rain or snow), which returns water to the Earth's surface. Rain eventually permeates into the ground, where it may evaporate again if it is near the surface, flow beneath the surface, or be stored for long periods. More easily observed is surface runoff: the flow of fresh water either from rain or melting ice. Runoff can then make its way through streams and lakes to the oceans or flow directly to the oceans themselves.

Link to Learning
Head to this website to learn more about the world's fresh water supply.

Rain and surface runoff are major ways in which minerals, including carbon, nitrogen, phosphorus, and sulfur, are cycled from land to water. The environmental effects of runoff will be discussed later as these cycles are described.

The Carbon Cycle

Carbon is the second most abundant element in living organisms.
Carbon is present in all organic molecules, and its role in the structure of macromolecules is of primary importance to living organisms. Carbon compounds contain especially high energy, particularly those derived from fossilized organisms, mainly plants, which humans use as fuel. Since the 1800s, the number of countries using massive amounts of fossil fuels has increased. Since the beginning of the Industrial Revolution, global demand for the Earth's limited fossil fuel supplies has risen; therefore, the amount of carbon dioxide in our atmosphere has increased. This increase in carbon dioxide has been associated with climate change and other disturbances of the Earth's ecosystems and is a major environmental concern worldwide. Thus, the "carbon footprint" is based on how much carbon dioxide is produced and how much fossil fuel countries consume.

The carbon cycle is most easily studied as two interconnected sub-cycles: one dealing with rapid carbon exchange among living organisms and the other dealing with the long-term cycling of carbon through geologic processes. The entire carbon cycle is shown in Figure 46.15.

Link to Learning
Click this link to read information about the United States Carbon Cycle Science Program.

The Biological Carbon Cycle

Living organisms are connected in many ways, even between ecosystems. A good example of this connection is the exchange of carbon between autotrophs and heterotrophs within and between ecosystems by way of atmospheric carbon dioxide. Carbon dioxide is the basic building block that most autotrophs use to build multi-carbon, high-energy compounds, such as glucose. The energy harnessed from the sun is used by these organisms to form the covalent bonds that link carbon atoms together. These chemical bonds thereby store this energy for later use in the process of respiration. Most terrestrial autotrophs obtain their carbon dioxide directly from the atmosphere, while marine autotrophs acquire it in the dissolved form (carbonic acid, H₂CO₃). However carbon dioxide is acquired, a by-product of the process is oxygen. The photosynthetic organisms are responsible for depositing the approximately 21 percent oxygen content of the atmosphere that we observe today.

Heterotrophs and autotrophs are partners in biological carbon exchange (especially the primary consumers, largely herbivores). Heterotrophs acquire the high-energy carbon compounds from the autotrophs by consuming them and breaking them down by respiration to obtain cellular energy, such as ATP. The most efficient type of respiration, aerobic respiration, requires oxygen obtained from the atmosphere or dissolved in water. Thus, there is a constant exchange of oxygen and carbon dioxide between the autotrophs (which need the carbon) and the heterotrophs (which need the oxygen). Gas exchange through the atmosphere and water is one way that the carbon cycle connects all living organisms on Earth.

The Biogeochemical Carbon Cycle

The movement of carbon through the land, water, and air is complex, and in many cases, it occurs much more slowly geologically than as seen between living organisms. Carbon is stored for long periods in what are known as carbon reservoirs, which include the atmosphere, bodies of liquid water (mostly oceans), ocean sediment, soil, land sediments (including fossil fuels), and the Earth's interior. As stated, the atmosphere is a major reservoir of carbon in the form of carbon dioxide and is essential to the process of photosynthesis.
The level of carbon dioxide in the atmosphere is greatly influenced by the reservoir of carbon in the oceans. The exchange of carbon between the atmosphere and water reservoirs influences how much carbon is found in each location, and each one affects the other reciprocally. Carbon dioxide (CO 2 ) from the atmosphere dissolves in water and combines with water molecules to form carbonic acid (CO 2 + H 2 O ⇌ H 2 CO 3 ), which then ionizes to bicarbonate and carbonate ions (H 2 CO 3 ⇌ H + + HCO 3 − ⇌ 2 H + + CO 3 2− ) ( Figure 46.16 ). The equilibrium constants are such that more than 90 percent of the carbon in the ocean is found as bicarbonate ions. Some of these ions combine with seawater calcium to form calcium carbonate (CaCO 3 ), a major component of marine organism shells. These organisms eventually form sediments on the ocean floor. Over geologic time, the calcium carbonate forms limestone, which comprises the largest carbon reservoir on Earth. On land, carbon is stored in soil as a result of the decomposition of living organisms (by decomposers) or from the weathering of terrestrial rock and minerals. This carbon can be leached into the water reservoirs by surface runoff. Deeper underground, on land and at sea, are fossil fuels: the anaerobically decomposed remains of plants that take millions of years to form. Fossil fuels are considered a non-renewable resource because their use far exceeds their rate of formation. A non-renewable resource , such as fossil fuel, is either regenerated very slowly or not at all. Another way for carbon to enter the atmosphere is from land (including land beneath the surface of the ocean) by the eruption of volcanoes and other geothermal systems. Carbon sediments from the ocean floor are taken deep within the Earth by the process of subduction : the movement of one tectonic plate beneath another. Carbon is released as carbon dioxide when a volcano erupts or from volcanic hydrothermal vents. Carbon dioxide is also added to the atmosphere by the animal husbandry practices of humans. The large numbers of land animals raised to feed the Earth's growing population result in increased carbon dioxide levels in the atmosphere due to farming practices and to the animals' respiration and methane production. This is another example of how human activity indirectly affects biogeochemical cycles in a significant way. Although much of the debate about the future effects of increasing atmospheric carbon on climate change focuses on fossil fuels, scientists take natural processes, such as volcanoes and respiration, into account as they model and predict the future impact of this increase. The Nitrogen Cycle Getting nitrogen into the living world is difficult. Plants and phytoplankton are not equipped to incorporate nitrogen from the atmosphere (where it exists as tightly bonded, triple covalent N 2 ), even though this molecule comprises approximately 78 percent of the atmosphere. Nitrogen enters the living world via free-living and symbiotic bacteria, which incorporate nitrogen into their macromolecules through nitrogen fixation (conversion of N 2 ). Cyanobacteria live in most aquatic ecosystems where sunlight is present; they play a key role in nitrogen fixation. Cyanobacteria are able to use inorganic sources of nitrogen to "fix" nitrogen. Rhizobium bacteria live symbiotically in the root nodules of legumes (such as peas, beans, and peanuts) and provide them with the organic nitrogen they need. Free-living bacteria, such as Azotobacter , are also important nitrogen fixers.
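To make the energetic cost of nitrogen fixation concrete, the overall reaction catalyzed by the nitrogenase enzyme complex is commonly summarized as follows (a textbook idealization; the measured ATP cost varies with the organism and conditions):

\[
\mathrm{N_2} + 8\,\mathrm{H^+} + 8\,e^- + 16\,\mathrm{ATP} \rightarrow 2\,\mathrm{NH_3} + \mathrm{H_2} + 16\,\mathrm{ADP} + 16\,\mathrm{P_i}
\]

The sixteen ATP consumed per N 2 reduced helps explain why biologically available nitrogen so often limits growth, as discussed next.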
Organic nitrogen is especially important to the study of ecosystem dynamics since many ecosystem processes, such as primary production and decomposition, are limited by the available supply of nitrogen. As shown in Figure 46.17 , the nitrogen that enters living systems by nitrogen fixation is successively converted from organic nitrogen back into nitrogen gas by bacteria. This process occurs in three steps in terrestrial systems: ammonification, nitrification, and denitrification. First, the ammonification process converts nitrogenous waste from living animals or from the remains of dead animals into ammonium (NH 4 + ) by certain bacteria and fungi. Second, the ammonium is converted to nitrites (NO 2 − ) by nitrifying bacteria, such as Nitrosomonas , through nitrification. Subsequently, nitrites are converted to nitrates (NO 3 − ) by similar organisms. Third, the process of denitrification occurs, whereby bacteria, such as Pseudomonas and Clostridium , convert the nitrates into nitrogen gas, allowing it to re-enter the atmosphere. Visual Connection Which of the following statements about the nitrogen cycle is false? Ammonification converts organic nitrogenous matter from living organisms into ammonium (NH 4 + ). Denitrification by bacteria converts nitrates (NO 3 − ) to nitrogen gas (N 2 ). Nitrification by bacteria converts nitrates (NO 3 − ) to nitrites (NO 2 − ). Nitrogen-fixing bacteria convert nitrogen gas (N 2 ) into organic compounds. Human activity can release nitrogen into the environment by two primary means: the combustion of fossil fuels, which releases different nitrogen oxides, and the use of artificial fertilizers in agriculture, which are then washed into lakes, streams, and rivers by surface runoff. Atmospheric nitrogen is associated with several effects on Earth's ecosystems, including the production of acid rain (as nitric acid, HNO 3 ) and of greenhouse gases (such as nitrous oxide, N 2 O) that can contribute to climate change. A major effect of fertilizer runoff is saltwater and freshwater eutrophication , a process whereby nutrient runoff causes the excess growth of microorganisms, depleting dissolved oxygen levels and killing ecosystem fauna. A similar process occurs in the marine nitrogen cycle, where the ammonification, nitrification, and denitrification processes are performed by marine bacteria. Some of this nitrogen falls to the ocean floor as sediment, which can then be moved to land in geologic time by uplift of the Earth's surface and thereby incorporated into terrestrial rock. Although the movement of nitrogen from rock directly into living systems has traditionally been seen as insignificant compared with nitrogen fixed from the atmosphere, a recent study showed that this process may indeed be significant and should be included in any study of the global nitrogen cycle. 3 Scott L. Morford, Benjamin Z. Houlton, and Randy A. Dahlgren, "Increased Forest Ecosystem Carbon and Nitrogen Storage from Nitrogen Rich Bedrock," Nature 477, no. 7362 (2011): 78–81. The Phosphorus Cycle Phosphorus is an essential nutrient for living processes; it is a major component of nucleic acids and phospholipids, and, as calcium phosphate, makes up the supportive components of our bones. Phosphorus is often the limiting nutrient (necessary for growth) in aquatic ecosystems ( Figure 46.18 ). Phosphorus occurs in nature as the phosphate ion (PO 4 3− ).
In addition to phosphate runoff as a result of human activity, natural surface runoff occurs when it is leached from phosphate-containing rock by weathering, thus sending phosphates into rivers, lakes, and the ocean. This rock has its origins in the ocean. Phosphate-containing ocean sediments form primarily from the bodies of ocean organisms and from their excretions. However, in remote regions, volcanic ash, aerosols, and mineral dust may also be significant phosphate sources. This sediment then is moved to land over geologic time by the uplifting of areas of the Earth's surface. Phosphorus is also reciprocally exchanged between phosphate dissolved in the ocean and marine ecosystems. The movement of phosphate from the ocean to the land and through the soil is extremely slow, with the average phosphate ion having an oceanic residence time between 20,000 and 100,000 years. Excess phosphorus and nitrogen that enters these ecosystems from fertilizer runoff and from sewage causes excessive growth of microorganisms and depletes the dissolved oxygen, which leads to the death of many ecosystem fauna, such as shellfish and finfish. This process is responsible for dead zones in lakes and at the mouths of many major rivers ( Figure 46.18 ). A dead zone is an area within a freshwater or marine ecosystem where large areas are depleted of their normal flora and fauna; these zones can be caused by eutrophication, oil spills, dumping of toxic chemicals, and other human activities. The number of dead zones has been increasing for several years, and more than 400 of these zones were present as of 2008. One of the worst dead zones is off the coast of the United States in the Gulf of Mexico, where fertilizer runoff from the Mississippi River basin has created a dead zone of over 8,463 square miles. Phosphate and nitrate runoff from fertilizers also negatively affects several lake and bay ecosystems, including the Chesapeake Bay in the eastern United States. Everyday Connection Chesapeake Bay The Chesapeake Bay has long been valued as one of the most scenic areas on Earth; it is now in distress and is recognized as a declining ecosystem. In the 1970s, the Chesapeake Bay was one of the first ecosystems to have identified dead zones, which continue to kill many fish and bottom-dwelling species, such as clams, oysters, and worms. Several species have declined in the Chesapeake Bay due to surface water runoff containing excess nutrients from artificial fertilizer used on land. The source of the fertilizers (with high nitrogen and phosphate content) is not limited to agricultural practices. There are many nearby urban areas, and the more than 150 rivers and streams that empty into the bay carry fertilizer runoff from lawns and gardens. Thus, the decline of the Chesapeake Bay is a complex issue and requires the cooperation of industry, agriculture, and everyday homeowners. Of particular interest to conservationists is the oyster population; it is estimated that more than 200,000 acres of oyster reefs existed in the bay in the 1700s, but that number has now declined to only 36,000 acres. Oyster harvesting was once a major industry for Chesapeake Bay, but it declined 88 percent between 1982 and 2007. This decline was due not only to fertilizer runoff and dead zones but also to overharvesting. Oysters require a certain minimum population density because they must be in close proximity to reproduce. Human activity has altered the oyster population and locations, greatly disrupting the ecosystem.
The restoration of the oyster population in the Chesapeake Bay has been ongoing for several years with mixed success. Not only do many people find oysters good to eat, but the oysters also clean up the bay. Oysters are filter feeders, and as they eat, they clean the water around them. In the 1700s, it was estimated that it took only a few days for the oyster population to filter the entire volume of the bay. Today, with changed water conditions, it is estimated that the present population would take nearly a year to do the same job. Restoration efforts have been ongoing for several years by non-profit organizations, such as the Chesapeake Bay Foundation. The restoration goal is to find a way to increase population density so the oysters can reproduce more efficiently. Many disease-resistant varieties (developed at the Virginia Institute of Marine Science of the College of William and Mary) are now available and have been used in the construction of experimental oyster reefs. Efforts by Virginia and Maryland to clean and restore the bay have been hampered because much of the pollution entering the bay comes from other states, which stresses the need for interstate cooperation to achieve successful restoration. The new, hardy oyster strains have also spawned a new and economically viable industry, oyster aquaculture, which not only supplies oysters for food and profit, but also has the added benefit of cleaning the bay. The Sulfur Cycle Sulfur is an essential element for the macromolecules of living things. As a part of the amino acid cysteine, it is involved in the formation of disulfide bonds within proteins, which help to determine their 3-D folding patterns, and hence their functions. As shown in Figure 46.20 , sulfur cycles between the oceans, land, and atmosphere. Atmospheric sulfur is found in the form of sulfur dioxide (SO 2 ) and enters the atmosphere in three ways: from the decomposition of organic molecules, from volcanic activity and geothermal vents, and from the burning of fossil fuels by humans. On land, sulfur is deposited in four major ways: precipitation, direct fallout from the atmosphere, rock weathering, and geothermal vents ( Figure 46.21 ). As rain falls through the atmosphere, sulfur dioxide dissolves in the droplets in the form of weak sulfuric acid (H 2 SO 4 ). Sulfur can also fall directly from the atmosphere in a process called fallout . In addition, the weathering of sulfur-containing rocks releases sulfur into the soil. These rocks originate from ocean sediments that are moved to land by geologic uplift. Terrestrial organisms can then make use of these soil sulfates (SO 4 2− ), and upon their death and decomposition, the sulfur is released back into the atmosphere as hydrogen sulfide (H 2 S) gas. Sulfur enters the ocean via runoff from land, from atmospheric fallout, and from underwater geothermal vents. Some ecosystems ( Figure 46.9 ) rely on chemoautotrophs using sulfur as a biological energy source. This sulfur then supports marine ecosystems in the form of sulfates. Human activities have played a major role in altering the balance of the global sulfur cycle. The burning of large quantities of fossil fuels, especially coal, releases large amounts of sulfur dioxide gas into the atmosphere. As rain falls through this gas, it creates the phenomenon known as acid rain.
Acid rain is corrosive rain caused by rainwater falling to the ground through sulfur dioxide gas, turning it into weak sulfuric acid, which causes damage to aquatic ecosystems. Acid rain damages the natural environment by lowering the pH of lakes, which kills many of the resident fauna; it also affects the man-made environment through the chemical degradation of buildings. For example, many marble monuments, such as the Lincoln Memorial in Washington, DC, have suffered significant damage from acid rain over the years. These examples show the wide-ranging effects of human activities on our environment and the challenges that remain for our future. Link to Learning Click this link to learn more about global climate change.
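For reference, the net chemistry behind the acid rain formation described above can be summarized in a standard simplification (the actual atmospheric oxidation of SO 2 proceeds through several radical-mediated intermediate steps):

\[
2\,\mathrm{SO_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{SO_3}, \qquad \mathrm{SO_3} + \mathrm{H_2O} \rightarrow \mathrm{H_2SO_4}
\]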
Chapter Outline 15.1 The Genetic Code 15.2 Prokaryotic Transcription 15.3 Eukaryotic Transcription 15.4 RNA Processing in Eukaryotes 15.5 Ribosomes and Protein Synthesis Introduction Since the rediscovery of Mendel’s work in 1900, the definition of the gene has progressed from an abstract unit of heredity to a tangible molecular entity capable of replication, expression, and mutation ( Figure 15.1 ). Genes are composed of DNA and are linearly arranged on chromosomes. Genes specify the sequences of amino acids, which are the building blocks of proteins. In turn, proteins are responsible for orchestrating nearly every function of the cell. Both genes and the proteins they encode are absolutely essential to life as we know it.
[ { "answer": { "ans_choice": 3, "ans_text": "degeneracy" }, "bloom": "2", "hl_context": "Transcribe a gene and translate it to protein using complementary pairing and the genetic code at this site . <hl> Degeneracy is believed to be a cellular mechanism to reduce the negative impact of random mutations . <hl> <hl> Codons that specify the same amino acid typically only differ by one nucleotide . <hl> In addition , amino acids with chemically similar side chains are encoded by similar codons . This nuance of the genetic code ensures that a single-nucleotide substitution mutation might either specify the same amino acid but have no effect or specify a similar amino acid , preventing the protein from being rendered completely nonfunctional .", "hl_sentences": "Degeneracy is believed to be a cellular mechanism to reduce the negative impact of random mutations . Codons that specify the same amino acid typically only differ by one nucleotide .", "question": { "cloze_format": "The AUC and AUA codons in mRNA both specify isoleucine. The feature of the genetic code that explains this is ___.", "normal_format": "The AUC and AUA codons in mRNA both specify isoleucine. What feature of the genetic code explains this?", "question_choices": [ "complementarity", "nonsense codons", "universality", "degeneracy" ], "question_id": "fs-id2024650", "question_text": "The AUC and AUA codons in mRNA both specify isoleucine. What feature of the genetic code explains this?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "36" }, "bloom": "3", "hl_context": "<hl> Of the 64 possible mRNA codons — or triplet combinations of A , U , G , and C — three specify the termination of protein synthesis and 61 specify the addition of amino acids to the polypeptide chain . <hl> Of these 61 , one codon ( AUG ) also encodes the initiation of translation . Each tRNA anticodon can base pair with one of the mRNA codons and add an amino acid or terminate translation , according to the genetic code . For instance , if the sequence CUA occurred on an mRNA template in the proper reading frame , it would bind a tRNA expressing the complementary sequence , GAU , which would be linked to the amino acid leucine .", "hl_sentences": "Of the 64 possible mRNA codons — or triplet combinations of A , U , G , and C — three specify the termination of protein synthesis and 61 specify the addition of amino acids to the polypeptide chain .", "question": { "cloze_format": "___ nucleotides are in 12 mRNA codons.", "normal_format": "How many nucleotides are in 12 mRNA codons?", "question_choices": [ "12", "24", "36", "48" ], "question_id": "fs-id1428699", "question_text": "How many nucleotides are in 12 mRNA codons?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "σ" }, "bloom": null, "hl_context": "Prokaryotes use the same RNA polymerase to transcribe all of their genes . <hl> In E . coli , the polymerase is composed of five polypeptide subunits , two of which are identical . <hl> <hl> Four of these subunits , denoted α , α , β , and β ' comprise the polymerase core enzyme . <hl> These subunits assemble every time a gene is transcribed , and they disassemble once transcription is complete . Each subunit has a unique role ; the two α - subunits are necessary to assemble the polymerase on the DNA ; the β - subunit binds to the ribonucleoside triphosphate that will become part of the nascent “ recently born ” mRNA molecule ; and the β ' binds the DNA template strand . 
<hl> The fifth subunit , σ , is involved only in transcription initiation . <hl> It confers transcriptional specificity such that the polymerase begins to synthesize mRNA from an appropriate initiation site . Without σ , the core enzyme would transcribe from random sites and would produce mRNA molecules that specified protein gibberish . The polymerase comprised of all five subunits is called the holoenzyme .", "hl_sentences": "In E . coli , the polymerase is composed of five polypeptide subunits , two of which are identical . Four of these subunits , denoted α , α , β , and β ' comprise the polymerase core enzyme . The fifth subunit , σ , is involved only in transcription initiation .", "question": { "cloze_format": "The ___ subunit of the E. coli polymerase confers specificity to transcription.", "normal_format": "Which subunit of the E. coli polymerase confers specificity to transcription?", "question_choices": [ "α", "β", "β'", "σ" ], "question_id": "fs-id2914875", "question_text": "Which subunit of the E. coli polymerase confers specificity to transcription?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "they are similar in all bacterial species" }, "bloom": null, "hl_context": "A promoter is a DNA sequence onto which the transcription machinery binds and initiates transcription . In most cases , promoters exist upstream of the genes they regulate . The specific sequence of a promoter is very important because it determines whether the corresponding gene is transcribed all the time , some of the time , or infrequently . Although promoters vary among prokaryotic genomes , a few elements are conserved . <hl> At the - 10 and - 35 regions upstream of the initiation site , there are two promoter consensus sequences , or regions that are similar across all promoters and across various bacterial species ( Figure 15.7 ) . <hl> <hl> The - 10 consensus sequence , called the - 10 region , is TATAAT . <hl> <hl> The - 35 sequence , TTGACA , is recognized and bound by σ . <hl> Once this interaction is made , the subunits of the core enzyme bind to the site . The A – T-rich - 10 region facilitates unwinding of the DNA template , and several phosphodiester bonds are made . The transcription initiation phase ends with the production of abortive transcripts , which are polymers of approximately 10 nucleotides that are made and released . Link to Learning", "hl_sentences": "At the - 10 and - 35 regions upstream of the initiation site , there are two promoter consensus sequences , or regions that are similar across all promoters and across various bacterial species ( Figure 15.7 ) . The - 10 consensus sequence , called the - 10 region , is TATAAT . The - 35 sequence , TTGACA , is recognized and bound by σ .", "question": { "cloze_format": "The -10 and -35 regions of prokaryotic promoters are called consensus sequences because ________.", "normal_format": "The -10 and -35 regions of prokaryotic promoters are called consensus sequences because of what?", "question_choices": [ "they are identical in all bacterial species", "they are similar in all bacterial species", "they exist in all organisms", "they have the same function in all organisms" ], "question_id": "fs-id1313546", "question_text": "The -10 and -35 regions of prokaryotic promoters are called consensus sequences because ________." 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "TATA box" }, "bloom": "2", "hl_context": "<hl> Eukaryotic promoters are much larger and more complex than prokaryotic promoters , but both have a TATA box . <hl> For example , in the mouse thymidine kinase gene , the TATA box is located at approximately - 30 relative to the initiation ( + 1 ) site ( Figure 15.10 ) . For this gene , the exact TATA box sequence is TATAAAA , as read in the 5 ' to 3 ' direction on the nontemplate strand . This sequence is not identical to the E . coli TATA box , but it conserves the A – T rich element . The thermostability of A – T bonds is low and this helps the DNA template to locally unwind in preparation for transcription .", "hl_sentences": "Eukaryotic promoters are much larger and more complex than prokaryotic promoters , but both have a TATA box .", "question": { "cloze_format": "The feature of promoter that can be found in both prokaryotes and eukaryotes is the ___ .", "normal_format": "Which feature of promoters can be found in both prokaryotes and eukaryotes?", "question_choices": [ "GC box", "TATA box", "octamer box", "-10 and -35 sequences" ], "question_id": "fs-id2062496", "question_text": "Which feature of promoters can be found in both prokaryotes and eukaryotes?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "pre-mRNAs" }, "bloom": "2", "hl_context": "A scientist characterizing a new gene can determine which polymerase transcribes it by testing whether the gene is expressed in the presence of a particular mushroom poison , α-amanitin ( Table 15.1 ) . <hl> Interestingly , α-amanitin produced by Amanita phalloides , the Death Cap mushroom , affects the three polymerases very differently . <hl> <hl> RNA polymerase I is completely insensitive to α-amanitin , meaning that the polymerase can transcribe DNA in vitro in the presence of this poison . <hl> <hl> In contrast , RNA polymerase II is extremely sensitive to α-amanitin , and RNA polymerase III is moderately sensitive . <hl> Knowing the transcribing polymerase can clue a researcher into the general function of the gene being studied . Because RNA polymerase II transcribes the vast majority of genes , we will focus on this polymerase in our subsequent discussions about eukaryotic transcription factors and promoters . <hl> RNA polymerase III is also located in the nucleus . <hl> <hl> This polymerase transcribes a variety of structural RNAs that includes the 5S pre-rRNA , transfer pre-RNAs ( pre-tRNAs ) , and small nuclear pre - RNAs . <hl> <hl> The tRNAs have a critical role in translation ; they serve as the adaptor molecules between the mRNA template and the growing polypeptide chain . <hl> <hl> Small nuclear RNAs have a variety of functions , including “ splicing ” pre-mRNAs and regulating transcription factors . <hl>", "hl_sentences": "Interestingly , α-amanitin produced by Amanita phalloides , the Death Cap mushroom , affects the three polymerases very differently . RNA polymerase I is completely insensitive to α-amanitin , meaning that the polymerase can transcribe DNA in vitro in the presence of this poison . In contrast , RNA polymerase II is extremely sensitive to α-amanitin , and RNA polymerase III is moderately sensitive . RNA polymerase III is also located in the nucleus . This polymerase transcribes a variety of structural RNAs that includes the 5S pre-rRNA , transfer pre-RNAs ( pre-tRNAs ) , and small nuclear pre - RNAs . 
The tRNAs have a critical role in translation ; they serve as the adaptor molecules between the mRNA template and the growing polypeptide chain . Small nuclear RNAs have a variety of functions , including “ splicing ” pre-mRNAs and regulating transcription factors .", "question": { "cloze_format": "The transcript that will be most affected by low levels of α-amanitin is ___.", "normal_format": "What transcripts will be most affected by low levels of α-amanitin?", "question_choices": [ "18S and 28S rRNAs", "pre-mRNAs", "5S rRNAs and tRNAs", "other small nuclear RNAs" ], "question_id": "fs-id1419233", "question_text": "What transcripts will be most affected by low levels of α-amanitin?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "7-methylguanosine cap" }, "bloom": "2", "hl_context": "<hl> While the pre-mRNA is still being synthesized , a 7 - methylguanosine cap is added to the 5 ' end of the growing transcript by a phosphate linkage . <hl> <hl> This moiety ( functional group ) protects the nascent mRNA from degradation . <hl> <hl> In addition , factors involved in protein synthesis recognize the cap to help initiate translation by ribosomes . <hl>", "hl_sentences": "While the pre-mRNA is still being synthesized , a 7 - methylguanosine cap is added to the 5 ' end of the growing transcript by a phosphate linkage . This moiety ( functional group ) protects the nascent mRNA from degradation . In addition , factors involved in protein synthesis recognize the cap to help initiate translation by ribosomes .", "question": { "cloze_format": "The pre-mRNA processing step that is important for initiating translation is (the) ___ .", "normal_format": "Which pre-mRNA processing step is important for initiating translation?", "question_choices": [ "poly-A tail", "RNA editing", "splicing", "7-methylguanosine cap" ], "question_id": "fs-id1466731", "question_text": "Which pre-mRNA processing step is important for initiating translation?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "methylation" }, "bloom": "2", "hl_context": "<hl> Most of the tRNAs and rRNAs in eukaryotes and prokaryotes are first transcribed as a long precursor molecule that spans multiple rRNAs or tRNAs . <hl> <hl> Enzymes then cleave the precursors into subunits corresponding to each structural RNA . <hl> <hl> Some of the bases of pre-rRNAs are methylated ; that is , a – CH 3 moiety ( methyl functional group ) is added for stability . <hl> <hl> Pre-tRNA molecules also undergo methylation . <hl> <hl> As with pre-mRNAs , subunit excision occurs in eukaryotic pre-RNAs destined to become tRNAs or rRNAs . <hl>", "hl_sentences": "Most of the tRNAs and rRNAs in eukaryotes and prokaryotes are first transcribed as a long precursor molecule that spans multiple rRNAs or tRNAs . Enzymes then cleave the precursors into subunits corresponding to each structural RNA . Some of the bases of pre-rRNAs are methylated ; that is , a – CH 3 moiety ( methyl functional group ) is added for stability . Pre-tRNA molecules also undergo methylation . 
As with pre-mRNAs , subunit excision occurs in eukaryotic pre-RNAs destined to become tRNAs or rRNAs .", "question": { "cloze_format": "The processing step that enhances the stability of pre-tRNAs and pre-rRNAs is ___.", "normal_format": "What processing step enhances the stability of pre-tRNAs and pre-rRNAs?", "question_choices": [ "methylation", "nucleotide modification", "cleavage", "splicing" ], "question_id": "fs-id1428536", "question_text": "What processing step enhances the stability of pre-tRNAs and pre-rRNAs?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "nucleolus" }, "bloom": null, "hl_context": "<hl> RNA polymerase I is located in the nucleolus , a specialized nuclear substructure in which ribosomal RNA ( rRNA ) is transcribed , processed , and assembled into ribosomes ( Table 15.1 ) . <hl> The rRNA molecules are considered structural RNAs because they have a cellular role but are not translated into protein . The rRNAs are components of the ribosome and are essential to the process of translation . RNA polymerase I synthesizes all of the rRNAs except for the 5S rRNA molecule . The “ S ” designation applies to “ Svedberg ” units , a nonadditive value that characterizes the speed at which a particle sediments during centrifugation .", "hl_sentences": "RNA polymerase I is located in the nucleolus , a specialized nuclear substructure in which ribosomal RNA ( rRNA ) is transcribed , processed , and assembled into ribosomes ( Table 15.1 ) .", "question": { "cloze_format": "The RNA components of ribosomes are synthesized in the ________.", "normal_format": "Where are the RNA components of ribosomes synthesized in?", "question_choices": [ "cytoplasm", "nucleus", "nucleolus", "endoplasmic reticulum" ], "question_id": "fs-id2904762", "question_text": "The RNA components of ribosomes are synthesized in the ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "20" }, "bloom": "3", "hl_context": "The process of pre-tRNA synthesis by RNA polymerase III only creates the RNA portion of the adaptor molecule . The corresponding amino acid must be added later , once the tRNA is processed and exported to the cytoplasm . <hl> Through the process of tRNA “ charging , ” each tRNA molecule is linked to its correct amino acid by a group of enzymes called aminoacyl tRNA synthetases . <hl> <hl> At least one type of aminoacyl tRNA synthetase exists for each of the 20 amino acids ; the exact number of aminoacyl tRNA synthetases varies by species . <hl> These enzymes first bind and hydrolyze ATP to catalyze a high-energy bond between an amino acid and adenosine monophosphate ( AMP ); a pyrophosphate molecule is expelled in this reaction . The activated amino acid is then transferred to the tRNA , and AMP is released .", "hl_sentences": "Through the process of tRNA “ charging , ” each tRNA molecule is linked to its correct amino acid by a group of enzymes called aminoacyl tRNA synthetases . 
At least one type of aminoacyl tRNA synthetase exists for each of the 20 amino acids ; the exact number of aminoacyl tRNA synthetases varies by species .", "question": { "cloze_format": "In any given species, there are at least ___ types of aminoacyl tRNA synthetases.", "normal_format": "In any given species, there are at least how many types of aminoacyl tRNA synthetases?", "question_choices": [ "20", "40", "100", "200" ], "question_id": "fs-id2991759", "question_text": "In any given species, there are at least how many types of aminoacyl tRNA synthetases?" }, "references_are_paraphrase": null } ]
15.1 The Genetic Code Learning Objectives By the end of this section, you will be able to: Explain the "central dogma" of protein synthesis Describe the genetic code and how the nucleotide sequence prescribes the amino acid and the protein sequence The cellular process of transcription generates messenger RNA (mRNA), a mobile molecular copy of one or more genes with an alphabet of A, C, G, and uracil (U). Translation of the mRNA template converts nucleotide-based genetic information into a protein product. Protein sequences consist of 20 commonly occurring amino acids; therefore, it can be said that the protein alphabet consists of 20 letters ( Figure 15.2 ). Each amino acid is defined by a three-nucleotide sequence called the triplet codon. Different amino acids have different chemistries (such as acidic versus basic, or polar and nonpolar) and different structural constraints. Variation in amino acid sequence gives rise to enormous variation in protein structure and function. The Central Dogma: DNA Encodes RNA; RNA Encodes Protein The flow of genetic information in cells from DNA to mRNA to protein is described by the Central Dogma ( Figure 15.3 ), which states that genes specify the sequence of mRNAs, which in turn specify the sequence of proteins. The decoding of one molecule to another is performed by specific proteins and RNAs. Because the information stored in DNA is so central to cellular function, it makes intuitive sense that the cell would make mRNA copies of this information for protein synthesis, while keeping the DNA itself intact and protected. The copying of DNA to RNA is relatively straightforward, with one nucleotide being added to the mRNA strand for every nucleotide read in the DNA strand. The translation to protein is a bit more complex because three mRNA nucleotides correspond to one amino acid in the polypeptide sequence. However, the translation to protein is still systematic and colinear , such that nucleotides 1 to 3 correspond to amino acid 1, nucleotides 4 to 6 correspond to amino acid 2, and so on. The Genetic Code Is Degenerate and Universal Given the different numbers of "letters" in the mRNA and protein "alphabets," scientists theorized that combinations of nucleotides corresponded to single amino acids. Nucleotide doublets would not be sufficient to specify every amino acid because there are only 16 possible two-nucleotide combinations (4²). In contrast, there are 64 possible nucleotide triplets (4³), which is far more than the number of amino acids. Scientists theorized that amino acids were encoded by nucleotide triplets and that the genetic code was degenerate . In other words, a given amino acid could be encoded by more than one nucleotide triplet. This was later confirmed experimentally; Francis Crick and Sydney Brenner used the chemical mutagen proflavin to insert one, two, or three nucleotides into the gene of a virus. When one or two nucleotides were inserted, protein synthesis was completely abolished. When three nucleotides were inserted, the protein was synthesized and functional. This demonstrated that three nucleotides specify each amino acid. These nucleotide triplets are called codons . The insertion of one or two nucleotides completely changed the triplet reading frame , thereby altering the message for every subsequent amino acid ( Figure 15.5 ). Though insertion of three nucleotides caused an extra amino acid to be inserted during translation, the integrity of the rest of the protein was maintained.
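To make the codon arithmetic above concrete, here is a minimal Python sketch (the doublet and triplet counts are computed from first principles; the codon table below is a tiny excerpt chosen for illustration, not the full 64-entry genetic code):

```python
from itertools import product

BASES = "AUGC"

# Count the possible doublets and triplets of the four RNA bases.
doublets = len(list(product(BASES, repeat=2)))  # 4**2 = 16: too few for 20 amino acids
triplets = len(list(product(BASES, repeat=3)))  # 4**3 = 64: enough, with redundancy

# A small excerpt of the genetic code illustrating degeneracy:
# AUC and AUA both specify isoleucine and differ by only one nucleotide.
CODON_EXCERPT = {
    "AUC": "Ile",
    "AUA": "Ile",
    "AUG": "Met",   # also the start codon
    "UAA": "STOP",  # one of the three nonsense codons
}

print(doublets, triplets)                            # 16 64
print(CODON_EXCERPT["AUC"] == CODON_EXCERPT["AUA"])  # True: degenerate code
```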
Scientists painstakingly solved the genetic code by translating synthetic mRNAs in vitro and sequencing the proteins they specified ( Figure 15.4 ). In addition to instructing the addition of a specific amino acid to a polypeptide chain, three of the 64 codons terminate protein synthesis and release the polypeptide from the translation machinery. These triplets are called nonsense codons , or stop codons. Another codon, AUG, also has a special function. In addition to specifying the amino acid methionine, it also serves as the start codon to initiate translation. The reading frame for translation is set by the AUG start codon near the 5' end of the mRNA. The genetic code is universal. With a few exceptions, virtually all species use the same genetic code for protein synthesis. Conservation of codons means that a purified mRNA encoding the globin protein in horses could be transferred to a tulip cell, and the tulip would synthesize horse globin. That there is only one genetic code is powerful evidence that all of life on Earth shares a common origin, especially considering that there are about 10⁸⁴ possible combinations of 20 amino acids and 64 triplet codons. Link to Learning Transcribe a gene and translate it to protein using complementary pairing and the genetic code at this site . Degeneracy is believed to be a cellular mechanism to reduce the negative impact of random mutations. Codons that specify the same amino acid typically only differ by one nucleotide. In addition, amino acids with chemically similar side chains are encoded by similar codons. This nuance of the genetic code ensures that a single-nucleotide substitution mutation might either specify the same amino acid but have no effect or specify a similar amino acid, preventing the protein from being rendered completely nonfunctional. Scientific Method Connection Which Has More DNA: A Kiwi or a Strawberry? Question : Would a kiwifruit and strawberry that are approximately the same size ( Figure 15.6 ) also have approximately the same amount of DNA? Background : Genes are carried on chromosomes and are made of DNA. All mammals are diploid, meaning they have two copies of each chromosome. However, not all plants are diploid. The common strawberry is octoploid (8n) and the cultivated kiwi is hexaploid (6n). Research the total number of chromosomes in the cells of each of these fruits and think about how this might correspond to the amount of DNA in these fruits' cell nuclei. Read about the technique of DNA isolation to understand how each step in the isolation protocol helps liberate and precipitate DNA. Hypothesis : Hypothesize whether you would be able to detect a difference in DNA quantity from similarly sized strawberries and kiwis. Which fruit do you think would yield more DNA? Test your hypothesis : Isolate the DNA from a strawberry and a kiwi that are similarly sized. Perform the experiment in at least triplicate for each fruit. Prepare a bottle of DNA extraction buffer from 900 mL water, 50 mL dish detergent, and two teaspoons of table salt. Mix by inversion (cap it and turn it upside down a few times). Grind a strawberry and a kiwifruit by hand in a plastic bag, or using a mortar and pestle, or with a metal bowl and the end of a blunt instrument. Grind for at least two minutes per fruit. Add 10 mL of the DNA extraction buffer to each fruit, and mix well for at least one minute.
Remove cellular debris by filtering each fruit mixture through cheesecloth or porous cloth and into a funnel placed in a test tube or an appropriate container. Pour ice-cold ethanol or isopropanol (rubbing alcohol) into the test tube. You should observe white, precipitated DNA. Gather the DNA from each fruit by winding it around separate glass rods. Record your observations : Because you are not quantitatively measuring DNA volume, you can record for each trial whether the two fruits produced the same or different amounts of DNA as observed by eye. If one or the other fruit produced noticeably more DNA, record this as well. Determine whether your observations are consistent with several pieces of each fruit. Analyze your data : Did you notice an obvious difference in the amount of DNA produced by each fruit? Were your results reproducible? Draw a conclusion : Given what you know about the number of chromosomes in each fruit, can you conclude that chromosome number necessarily correlates to DNA amount? Can you identify any drawbacks to this procedure? If you had access to a laboratory, how could you standardize your comparison and make it more quantitative? 15.2 Prokaryotic Transcription Learning Objectives By the end of this section, you will be able to: List the different steps in prokaryotic transcription Discuss the role of promoters in prokaryotic transcription Describe how and when transcription is terminated The prokaryotes, which include bacteria and archaea, are mostly single-celled organisms that, by definition, lack membrane-bound nuclei and other organelles. A bacterial chromosome is a covalently closed circle that, unlike eukaryotic chromosomes, is not organized around histone proteins. The central region of the cell in which prokaryotic DNA resides is called the nucleoid. In addition, prokaryotes often have abundant plasmids , which are shorter circular DNA molecules that may only contain one or a few genes. Plasmids can be transferred independently of the bacterial chromosome during cell division and often carry traits such as antibiotic resistance. Transcription in prokaryotes (and in eukaryotes) requires the DNA double helix to partially unwind in the region of mRNA synthesis. The region of unwinding is called a transcription bubble. Transcription always proceeds from the same DNA strand for each gene, which is called the template strand . The mRNA product is complementary to the template strand and is almost identical to the other DNA strand, called the nontemplate strand . The only difference is that in mRNA, all of the T nucleotides are replaced with U nucleotides. In an RNA double helix, A can bind U via two hydrogen bonds, just as in A–T pairing in a DNA double helix. The nucleotide pair in the DNA double helix that corresponds to the site from which the first 5' mRNA nucleotide is transcribed is called the +1 site, or the initiation site . Nucleotides preceding the initiation site are given negative numbers and are designated upstream . Conversely, nucleotides following the initiation site are denoted with “+” numbering and are called downstream nucleotides. Initiation of Transcription in Prokaryotes Prokaryotes do not have membrane-enclosed nuclei. Therefore, the processes of transcription, translation, and mRNA degradation can all occur simultaneously. The intracellular level of a bacterial protein can quickly be amplified by multiple transcription and translation events occurring concurrently on the same DNA template. 
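As a minimal sketch of the template-strand logic just described (the sequence is invented for illustration):

```python
# Pairing rules: each DNA template base specifies one RNA base.
DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3_to_5: str) -> str:
    """Return the mRNA (5' to 3') read off a DNA template strand written 3' to 5'."""
    return "".join(DNA_TO_RNA[base] for base in template_3_to_5)

template = "TACGGATTC"       # hypothetical template strand, 3' to 5'
mrna = transcribe(template)  # "AUGCCUAAG", 5' to 3'
# The mRNA matches the nontemplate strand, except every T is replaced by U.
print(mrna)
```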
Prokaryotic transcription often covers more than one gene and produces polycistronic mRNAs that specify more than one protein. Our discussion here will exemplify transcription by describing this process in Escherichia coli , a well-studied bacterial species. Although some differences exist between transcription in E. coli and transcription in archaea, an understanding of E. coli transcription can be applied to virtually all bacterial species. Prokaryotic RNA Polymerase Prokaryotes use the same RNA polymerase to transcribe all of their genes. In E. coli , the polymerase is composed of five polypeptide subunits, two of which are identical. Four of these subunits, denoted α , α , β , and β ' comprise the polymerase core enzyme . These subunits assemble every time a gene is transcribed, and they disassemble once transcription is complete. Each subunit has a unique role; the two α -subunits are necessary to assemble the polymerase on the DNA; the β -subunit binds to the ribonucleoside triphosphate that will become part of the nascent “recently born” mRNA molecule; and the β ' binds the DNA template strand. The fifth subunit, σ , is involved only in transcription initiation. It confers transcriptional specificity such that the polymerase begins to synthesize mRNA from an appropriate initiation site. Without σ , the core enzyme would transcribe from random sites and would produce mRNA molecules that specified protein gibberish. The polymerase comprised of all five subunits is called the holoenzyme . Prokaryotic Promoters A promoter is a DNA sequence onto which the transcription machinery binds and initiates transcription. In most cases, promoters exist upstream of the genes they regulate. The specific sequence of a promoter is very important because it determines whether the corresponding gene is transcribed all the time, some of the time, or infrequently. Although promoters vary among prokaryotic genomes, a few elements are conserved. At the -10 and -35 regions upstream of the initiation site, there are two promoter consensus sequences, or regions that are similar across all promoters and across various bacterial species ( Figure 15.7 ). The -10 consensus sequence, called the -10 region, is TATAAT. The -35 sequence, TTGACA, is recognized and bound by σ . Once this interaction is made, the subunits of the core enzyme bind to the site. The A–T-rich -10 region facilitates unwinding of the DNA template, and several phosphodiester bonds are made. The transcription initiation phase ends with the production of abortive transcripts, which are polymers of approximately 10 nucleotides that are made and released. Link to Learning View this MolecularMovies animation to see the first part of transcription and the base sequence repetition of the TATA box. Elongation and Termination in Prokaryotes The transcription elongation phase begins with the release of the σ subunit from the polymerase. The dissociation of σ allows the core enzyme to proceed along the DNA template, synthesizing mRNA in the 5' to 3' direction at a rate of approximately 40 nucleotides per second. As elongation proceeds, the DNA is continuously unwound ahead of the core enzyme and rewound behind it ( Figure 15.8 ). The base pairing between DNA and RNA is not stable enough to maintain the stability of the mRNA synthesis components. Instead, the RNA polymerase acts as a stable linker between the DNA template and the nascent RNA strands to ensure that elongation is not interrupted prematurely. 
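The quoted elongation rate makes transcription times easy to estimate; for a hypothetical 1,200-nucleotide gene:

\[
t = \frac{1200\ \text{nucleotides}}{40\ \text{nucleotides per second}} = 30\ \text{seconds}
\]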
Prokaryotic Termination Signals Once a gene is transcribed, the prokaryotic polymerase needs to be instructed to dissociate from the DNA template and liberate the newly made mRNA. Depending on the gene being transcribed, there are two kinds of termination signals. One is protein-based and the other is RNA-based. Rho-dependent termination is controlled by the rho protein, which tracks along behind the polymerase on the growing mRNA chain. Near the end of the gene, the polymerase encounters a run of G nucleotides on the DNA template and it stalls. As a result, the rho protein collides with the polymerase. The interaction with rho releases the mRNA from the transcription bubble. Rho-independent termination is controlled by specific sequences in the DNA template strand. As the polymerase nears the end of the gene being transcribed, it encounters a region rich in C–G nucleotides. The mRNA folds back on itself, and the complementary C–G nucleotides bind together. The result is a stable hairpin that causes the polymerase to stall as soon as it begins to transcribe a region rich in A–T nucleotides. The complementary U–A region of the mRNA transcript forms only a weak interaction with the template DNA. This, coupled with the stalled polymerase, induces enough instability for the core enzyme to break away and liberate the new mRNA transcript. Upon termination, the process of transcription is complete. By the time termination occurs, the prokaryotic transcript would already have been used to begin synthesis of numerous copies of the encoded protein because these processes can occur concurrently. The unification of transcription, translation, and even mRNA degradation is possible because all of these processes occur in the same 5' to 3' direction, and because there is no membranous compartmentalization in the prokaryotic cell ( Figure 15.9 ). In contrast, the presence of a nucleus in eukaryotic cells precludes simultaneous transcription and translation. Link to Learning Visit this BioStudio animation to see the process of prokaryotic transcription. 15.3 Eukaryotic Transcription Learning Objectives By the end of this section, you will be able to: List the steps in eukaryotic transcription Discuss the role of RNA polymerases in transcription Compare and contrast the three RNA polymerases Explain the significance of transcription factors Prokaryotes and eukaryotes perform fundamentally the same process of transcription, with a few key differences. The most important difference between prokaryotes and eukaryotes is the latter's membrane-bound nucleus and organelles. With the genes bound in a nucleus, the eukaryotic cell must be able to transport its mRNA to the cytoplasm and must protect its mRNA from degrading before it is translated. Eukaryotes also employ three different polymerases that each transcribe a different subset of genes. Eukaryotic mRNAs are usually monogenic, meaning that they specify a single protein. Initiation of Transcription in Eukaryotes Unlike the prokaryotic polymerase that can bind to a DNA template on its own, eukaryotes require several other proteins, called transcription factors, to first bind to the promoter region and then help recruit the appropriate polymerase. The Three Eukaryotic RNA Polymerases The features of eukaryotic mRNA synthesis are markedly more complex than those of prokaryotes. Instead of a single polymerase comprising five subunits, the eukaryotes have three polymerases that are each made up of 10 subunits or more.
Each eukaryotic polymerase also requires a distinct set of transcription factors to bring it to the DNA template. RNA polymerase I is located in the nucleolus, a specialized nuclear substructure in which ribosomal RNA (rRNA) is transcribed, processed, and assembled into ribosomes ( Table 15.1 ). The rRNA molecules are considered structural RNAs because they have a cellular role but are not translated into protein. The rRNAs are components of the ribosome and are essential to the process of translation. RNA polymerase I synthesizes all of the rRNAs except for the 5S rRNA molecule. The "S" designation applies to "Svedberg" units, a nonadditive value that characterizes the speed at which a particle sediments during centrifugation.

Table 15.1 Locations, Products, and Sensitivities of the Three Eukaryotic RNA Polymerases
RNA polymerase I: located in the nucleolus; transcribes all rRNAs except 5S rRNA; insensitive to α-amanitin.
RNA polymerase II: located in the nucleus; transcribes all protein-coding nuclear pre-mRNAs; extremely sensitive to α-amanitin.
RNA polymerase III: located in the nucleus; transcribes 5S rRNA, tRNAs, and small nuclear RNAs; moderately sensitive to α-amanitin.

RNA polymerase II is located in the nucleus and synthesizes all protein-coding nuclear pre-mRNAs. Eukaryotic pre-mRNAs undergo extensive processing after transcription but before translation. For clarity, this module's discussion of transcription and translation in eukaryotes will use the term "mRNAs" to describe only the mature, processed molecules that are ready to be translated. RNA polymerase II is responsible for transcribing the overwhelming majority of eukaryotic genes. RNA polymerase III is also located in the nucleus. This polymerase transcribes a variety of structural RNAs that includes the 5S pre-rRNA, transfer pre-RNAs (pre-tRNAs), and small nuclear pre-RNAs. The tRNAs have a critical role in translation; they serve as the adaptor molecules between the mRNA template and the growing polypeptide chain. Small nuclear RNAs have a variety of functions, including "splicing" pre-mRNAs and regulating transcription factors. A scientist characterizing a new gene can determine which polymerase transcribes it by testing whether the gene is expressed in the presence of a particular mushroom poison, α-amanitin ( Table 15.1 ). Interestingly, α-amanitin, produced by Amanita phalloides , the Death Cap mushroom, affects the three polymerases very differently. RNA polymerase I is completely insensitive to α-amanitin, meaning that the polymerase can transcribe DNA in vitro in the presence of this poison. In contrast, RNA polymerase II is extremely sensitive to α-amanitin, and RNA polymerase III is moderately sensitive. Knowing the transcribing polymerase can clue a researcher into the general function of the gene being studied. Because RNA polymerase II transcribes the vast majority of genes, we will focus on this polymerase in our subsequent discussions about eukaryotic transcription factors and promoters. Structure of an RNA Polymerase II Promoter Eukaryotic promoters are much larger and more complex than prokaryotic promoters, but both have a TATA box. For example, in the mouse thymidine kinase gene, the TATA box is located at approximately -30 relative to the initiation (+1) site ( Figure 15.10 ). For this gene, the exact TATA box sequence is TATAAAA, as read in the 5' to 3' direction on the nontemplate strand. This sequence is not identical to the E. coli TATA box, but it conserves the A–T rich element.
The thermostability of A–T bonds is low and this helps the DNA template to locally unwind in preparation for transcription. Visual Connection A scientist splices a eukaryotic promoter in front of a bacterial gene and inserts the gene in a bacterial chromosome. Would you expect the bacteria to transcribe the gene? The mouse genome includes one gene and two pseudogenes for cytoplasmic thymidine kinase. Pseudogenes are genes that have lost their protein-coding ability or are no longer expressed by the cell. These pseudogenes are copied from mRNA and incorporated into the chromosome. In addition to the TATA box, the mouse thymidine kinase promoter also has a conserved CAAT box (GGCCAATCT) at approximately -80. This sequence is essential and is involved in binding transcription factors. Further upstream of the TATA box, eukaryotic promoters may also contain one or more GC-rich boxes (GGCG) or octamer boxes (ATTTGCAT). These elements bind cellular factors that increase the efficiency of transcription initiation and are often identified in more "active" genes that are constantly being expressed by the cell. Transcription Factors for RNA Polymerase II The complexity of eukaryotic transcription does not end with the polymerases and promoters. An army of basal transcription factors, enhancers, and silencers also help to regulate the frequency with which pre-mRNA is synthesized from a gene. Enhancers and silencers affect the efficiency of transcription but are not necessary for transcription to proceed. Basal transcription factors are crucial in the formation of a preinitiation complex on the DNA template that subsequently recruits RNA polymerase II for transcription initiation. The names of the basal transcription factors begin with "TFII" (this is the transcription factor for RNA polymerase II) and are specified with the letters A–J. The transcription factors systematically fall into place on the DNA template, with each one further stabilizing the preinitiation complex and contributing to the recruitment of RNA polymerase II. The processes of bringing RNA polymerases I and III to the DNA template involve slightly less complex collections of transcription factors, but the general theme is the same. Eukaryotic transcription is a tightly regulated process that requires a variety of proteins to interact with each other and with the DNA strand. Although the process of transcription in eukaryotes involves a greater metabolic investment than in prokaryotes, it ensures that the cell transcribes precisely the pre-mRNAs that it needs for protein synthesis. Evolution Connection The Evolution of Promoters The evolution of genes may be a familiar concept. Mutations can occur in genes during DNA replication, and the result may or may not be beneficial to the cell. By altering an enzyme, structural protein, or some other factor, the process of mutation can transform functions or physical features. However, eukaryotic promoters and other gene regulatory sequences may evolve as well. For instance, consider a gene that, over many generations, becomes more valuable to the cell. Maybe the gene encodes a structural protein that the cell needs to synthesize in abundance for a certain function. If this is the case, it would be beneficial to the cell for that gene's promoter to recruit transcription factors more efficiently and increase gene expression. Scientists examining the evolution of promoter sequences have reported varying results.
In part, this is because it is difficult to infer exactly where a eukaryotic promoter begins and ends. Some promoters occur within genes; others are located very far upstream, or even downstream, of the genes they are regulating. However, when researchers limited their examination to human core promoter sequences that were defined experimentally as sequences that bind the preinitiation complex, they found that promoters evolve even faster than protein-coding genes. It is still unclear how promoter evolution might correspond to the evolution of humans or other higher organisms. However, the evolution of a promoter to effectively make more or less of a given gene product is an intriguing alternative to the evolution of the genes themselves. 1 1 H Liang et al., “Fast evolution of core promoters in primate genomes,” Molecular Biology and Evolution 25 (2008): 1239–44. Promoter Structures for RNA Polymerases I and III In eukaryotes, the conserved promoter elements differ for genes transcribed by RNA polymerases I, II, and III. RNA polymerase I transcribes genes that have two GC-rich promoter sequences in the -45 to +20 region. These sequences alone are sufficient for transcription initiation to occur, but promoters with additional sequences in the region from -180 to -105 upstream of the initiation site will further enhance initiation. Genes that are transcribed by RNA polymerase III have upstream promoters or promoters that occur within the genes themselves. Eukaryotic Elongation and Termination Following the formation of the preinitiation complex, the polymerase is released from the other transcription factors, and elongation is allowed to proceed as it does in prokaryotes with the polymerase synthesizing pre-mRNA in the 5' to 3' direction. As discussed previously, RNA polymerase II transcribes the major share of eukaryotic genes, so this section will focus on how this polymerase accomplishes elongation and termination. Although the enzymatic process of elongation is essentially the same in eukaryotes and prokaryotes, the DNA template is more complex. When eukaryotic cells are not dividing, their genes exist as a diffuse mass of DNA and proteins called chromatin. The DNA is tightly packaged around charged histone proteins at repeated intervals. These DNA–histone complexes, collectively called nucleosomes, are regularly spaced and include 146 nucleotides of DNA wound around eight histones like thread around a spool. For polynucleotide synthesis to occur, the transcription machinery needs to move histones out of the way every time it encounters a nucleosome. This is accomplished by a special protein complex called FACT , which stands for “facilitates chromatin transcription.” This complex pulls histones away from the DNA template as the polymerase moves along it. Once the pre-mRNA is synthesized, the FACT complex replaces the histones to recreate the nucleosomes. The termination of transcription is different for the different polymerases. Unlike in prokaryotes, elongation by RNA polymerase II in eukaryotes takes place 1,000–2,000 nucleotides beyond the end of the gene being transcribed. This pre-mRNA tail is subsequently removed by cleavage during mRNA processing. On the other hand, RNA polymerases I and III require termination signals. Genes transcribed by RNA polymerase I contain a specific 18-nucleotide sequence that is recognized by a termination protein. 
The process of termination in RNA polymerase III involves an mRNA hairpin similar to rho-independent termination of transcription in prokaryotes. 15.4 RNA Processing in Eukaryotes Learning Objectives By the end of this section, you will be able to: Describe the different steps in RNA processing Understand the significance of exons, introns, and splicing Explain how tRNAs and rRNAs are processed After transcription, eukaryotic pre-mRNAs must undergo several processing steps before they can be translated. Eukaryotic (and prokaryotic) tRNAs and rRNAs also undergo processing before they can function as components in the protein synthesis machinery. mRNA Processing The eukaryotic pre-mRNA undergoes extensive processing before it is ready to be translated. The additional steps involved in eukaryotic mRNA maturation create a molecule with a much longer half-life than a prokaryotic mRNA. Eukaryotic mRNAs last for several hours, whereas the typical E. coli mRNA lasts no more than five seconds. Pre-mRNAs are first coated in RNA-stabilizing proteins; these protect the pre-mRNA from degradation while it is processed and exported out of the nucleus. The three most important steps of pre-mRNA processing are the addition of stabilizing and signaling factors at the 5' and 3' ends of the molecule, and the removal of intervening sequences that do not specify the appropriate amino acids. In rare cases, the mRNA transcript can be “edited” after it is transcribed. Evolution Connection RNA Editing in Trypanosomes The trypanosomes are a group of protozoa that include the pathogen Trypanosoma brucei, which causes sleeping sickness in humans (Figure 15.12). Trypanosomes, and virtually all other eukaryotes, have organelles called mitochondria that supply the cell with chemical energy. Mitochondria are organelles that contain their own DNA and are believed to be the remnants of a symbiotic relationship between a eukaryote and an engulfed prokaryote. The mitochondrial DNA of trypanosomes exhibits an interesting exception to the central dogma: their pre-mRNAs do not have the correct information to specify a functional protein. Usually, this is because the mRNA is missing several U nucleotides. The cell performs an additional RNA processing step called RNA editing to remedy this. Other genes in the mitochondrial genome encode 40- to 80-nucleotide guide RNAs. One or more of these molecules interacts by complementary base pairing with some of the nucleotides in the pre-mRNA transcript. However, the guide RNA has more A nucleotides than the pre-mRNA has U nucleotides to bind with. In these regions, the guide RNA loops out. The 3' ends of guide RNAs have a long poly-U tail, and these U bases are inserted in regions of the pre-mRNA transcript at which the guide RNAs are looped. This process is entirely mediated by RNA molecules. That is, guide RNAs, rather than proteins, serve as the catalysts in RNA editing. RNA editing is not just a phenomenon of trypanosomes. In the mitochondria of some plants, almost all pre-mRNAs are edited. RNA editing has also been identified in mammals such as rats, rabbits, and even humans. What could be the evolutionary reason for this additional step in pre-mRNA processing? One possibility is that the mitochondria, being remnants of ancient prokaryotes, have an equally ancient RNA-based method for regulating gene expression. In support of this hypothesis, edits made to pre-mRNAs differ depending on cellular conditions.
Although speculative, the process of RNA editing may be a holdover from a primordial time when RNA molecules, instead of proteins, were responsible for catalyzing reactions. 5' Capping While the pre-mRNA is still being synthesized, a 7-methylguanosine cap is added to the 5' end of the growing transcript by an unusual 5'-to-5' triphosphate linkage. This moiety (functional group) protects the nascent mRNA from degradation. In addition, factors involved in protein synthesis recognize the cap to help initiate translation by ribosomes. 3' Poly-A Tail Once elongation is complete, the pre-mRNA is cleaved by an endonuclease between an AAUAAA consensus sequence and a GU-rich sequence, leaving the AAUAAA sequence on the pre-mRNA. An enzyme called poly-A polymerase then adds a string of approximately 200 A residues, called the poly-A tail. This modification further protects the pre-mRNA from degradation and signals to cellular factors that the transcript needs to be exported to the cytoplasm. Pre-mRNA Splicing Eukaryotic genes are composed of exons, which correspond to protein-coding sequences (ex-on signifies that they are expressed), and intervening sequences called introns (int-ron denotes their intervening role), which may be involved in gene regulation but are removed from the pre-mRNA during processing. Intron sequences in mRNA do not encode functional proteins. The discovery of introns came as a surprise to researchers in the 1970s who expected that pre-mRNAs would specify protein sequences without further processing, as they had observed in prokaryotes. The genes of higher eukaryotes very often contain one or more introns. These regions may correspond to regulatory sequences; however, the biological significance of having many introns or having very long introns in a gene is unclear. It is possible that introns slow down gene expression because it takes longer to transcribe pre-mRNAs with lots of introns. Alternatively, introns may be nonfunctional sequence remnants left over from the fusion of ancient genes throughout evolution. This is supported by the fact that separate exons often encode separate protein subunits or domains. For the most part, the sequences of introns can be mutated without ultimately affecting the protein product. All of a pre-mRNA’s introns must be completely and precisely removed before protein synthesis. If the process were to err by even a single nucleotide, the reading frame of the rejoined exons would shift, and the resulting protein would be dysfunctional. The process of removing introns and reconnecting exons is called splicing (Figure 15.13). Introns are removed and degraded while the pre-mRNA is still in the nucleus. Splicing occurs by a sequence-specific mechanism that ensures introns will be removed and exons rejoined with the accuracy and precision of a single nucleotide. The splicing of pre-mRNAs is conducted by complexes of proteins and RNA molecules called spliceosomes. Visual Connection Errors in splicing are implicated in cancers and other human diseases. What kinds of mutations might lead to splicing errors? Think of different possible outcomes if splicing errors occur. Note that more than 70 individual introns can be present, and each has to undergo the process of splicing, in addition to 5' capping and the addition of a poly-A tail, just to generate a single, translatable mRNA molecule. Link to Learning See how introns are removed during RNA splicing at this website.
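The single-nucleotide precision that splicing demands can be illustrated with a short computational sketch. The Python example below is illustrative only: the pre-mRNA sequence and intron coordinates are invented, though the intron follows the typical pattern of beginning with GU and ending with AG. It performs one precise splice and one splice that errs by a single nucleotide, then prints the resulting codons so the frameshift is visible.

# Minimal sketch of why splicing must be precise to a single nucleotide.
# The pre-mRNA sequence and intron coordinates below are invented.
pre_mrna = "AUGGCUGUAAGUCAGGGCUGA"
intron_start, intron_end = 6, 15  # pre_mrna[6:15] == "GUAAGUCAG" (GU...AG)

def splice(seq, start, end):
    """Remove the intron spanning [start, end) and rejoin the flanking exons."""
    return seq[:start] + seq[end:]

def codons(seq):
    """Break a sequence into triplets so the reading frame is visible."""
    return [seq[i:i + 3] for i in range(0, len(seq), 3)]

precise = splice(pre_mrna, intron_start, intron_end)
off_by_one = splice(pre_mrna, intron_start, intron_end + 1)  # removes one exon base too many

print(codons(precise))     # ['AUG', 'GCU', 'GGC', 'UGA'] -- ends cleanly in a stop codon
print(codons(off_by_one))  # ['AUG', 'GCU', 'GCU', 'GA'] -- frame shifts and the stop is lost

In the second call, every codon after the splice junction changes, which is exactly why a splicing error of even one nucleotide typically yields a dysfunctional protein.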
Processing of tRNAs and rRNAs The tRNAs and rRNAs are structural molecules that have roles in protein synthesis; however, these RNAs are not themselves translated. Pre-rRNAs are transcribed, processed, and assembled into ribosomes in the nucleolus. Pre-tRNAs are transcribed and processed in the nucleus and then released into the cytoplasm where they are linked to free amino acids for protein synthesis. Most of the tRNAs and rRNAs in eukaryotes and prokaryotes are first transcribed as a long precursor molecule that spans multiple rRNAs or tRNAs. Enzymes then cleave the precursors into subunits corresponding to each structural RNA. Some of the bases of pre-rRNAs are methylated; that is, a –CH3 moiety (methyl functional group) is added for stability. Pre-tRNA molecules also undergo methylation. As with pre-mRNAs, subunit excision occurs in eukaryotic pre-RNAs destined to become tRNAs or rRNAs. Mature rRNAs make up approximately 50 percent of each ribosome. Some of a ribosome’s RNA molecules are purely structural, whereas others have catalytic or binding activities. Mature tRNAs take on a three-dimensional structure through intramolecular hydrogen bonding to position the amino acid binding site at one end and the anticodon at the other end (Figure 15.14). The anticodon is a three-nucleotide sequence in a tRNA that interacts with an mRNA codon through complementary base pairing. 15.5 Ribosomes and Protein Synthesis Learning Objectives By the end of this section, you will be able to: Describe the different steps in protein synthesis Discuss the role of ribosomes in protein synthesis The synthesis of proteins consumes more of a cell’s energy than any other metabolic process. In turn, proteins account for more mass than any other component of living organisms (with the exception of water), and proteins perform virtually every function of a cell. The process of translation, or protein synthesis, involves the decoding of an mRNA message into a polypeptide product. Amino acids are covalently strung together by interlinking peptide bonds in lengths ranging from approximately 50 amino acid residues to more than 1,000. Each individual amino acid has an amino group (NH2) and a carboxyl (COOH) group. Polypeptides are formed when the amino group of one amino acid forms an amide (i.e., peptide) bond with the carboxyl group of another amino acid (Figure 15.15). This reaction is catalyzed by ribosomes and generates one water molecule. The Protein Synthesis Machinery In addition to the mRNA template, many molecules and macromolecules contribute to the process of translation. The composition of each component may vary across species; for instance, ribosomes may consist of different numbers of rRNAs and polypeptides depending on the organism. However, the general structures and functions of the protein synthesis machinery are comparable from bacteria to human cells. Translation requires the input of an mRNA template, ribosomes, tRNAs, and various enzymatic factors. Link to Learning Click through the steps of this PBS interactive to see protein synthesis in action. Ribosomes Even before an mRNA is translated, a cell must invest energy to build each of its ribosomes. In E. coli, there are between 10,000 and 70,000 ribosomes present in each cell at any given time. A ribosome is a complex macromolecule composed of structural and catalytic rRNAs, and many distinct polypeptides. In eukaryotes, the nucleolus is completely specialized for the synthesis and assembly of rRNAs.
Ribosomes exist in the cytoplasm in prokaryotes and in the cytoplasm and rough endoplasmic reticulum in eukaryotes. Mitochondria and chloroplasts also have their own ribosomes in the matrix and stroma, which look more similar to prokaryotic ribosomes (and have similar drug sensitivities) than the ribosomes just outside their outer membranes in the cytoplasm. Ribosomes dissociate into large and small subunits when they are not synthesizing proteins and reassociate during the initiation of translation. In E. coli, the small subunit is described as 30S, and the large subunit is 50S, for a total of 70S (recall that Svedberg units are not additive). Mammalian ribosomes have a small 40S subunit and a large 60S subunit, for a total of 80S. The small subunit is responsible for binding the mRNA template, whereas the large subunit sequentially binds tRNAs. Each mRNA molecule is simultaneously translated by many ribosomes, all synthesizing protein in the same direction: reading the mRNA from 5' to 3' and synthesizing the polypeptide from the N terminus to the C terminus. The complete mRNA/poly-ribosome structure is called a polysome. tRNAs The tRNAs are structural RNA molecules that are transcribed from genes by RNA polymerase III. Depending on the species, 40 to 60 types of tRNAs exist in the cytoplasm. Serving as adaptors, specific tRNAs bind to sequences on the mRNA template and add the corresponding amino acid to the polypeptide chain. Therefore, tRNAs are the molecules that actually “translate” the language of RNA into the language of proteins. Of the 64 possible mRNA codons (triplet combinations of A, U, G, and C), three specify the termination of protein synthesis and 61 specify the addition of amino acids to the polypeptide chain. Of these 61, one codon (AUG) also encodes the initiation of translation. Each tRNA anticodon can base pair with one of the mRNA codons and add an amino acid or terminate translation, according to the genetic code. For instance, if the sequence CUA occurred on an mRNA template in the proper reading frame, it would bind a tRNA expressing the complementary sequence, GAU, which would be linked to the amino acid leucine. Given their role as the adaptor molecules of translation, it is surprising that tRNAs can fit so much specificity into such a small package. Consider that tRNAs need to interact with three factors: 1) they must be recognized by the correct aminoacyl tRNA synthetase (see below); 2) they must be recognized by ribosomes; and 3) they must bind to the correct sequence in mRNA. Aminoacyl tRNA Synthetases The process of pre-tRNA synthesis by RNA polymerase III only creates the RNA portion of the adaptor molecule. The corresponding amino acid must be added later, once the tRNA is processed and exported to the cytoplasm. Through the process of tRNA “charging,” each tRNA molecule is linked to its correct amino acid by a group of enzymes called aminoacyl tRNA synthetases. At least one type of aminoacyl tRNA synthetase exists for each of the 20 amino acids; the exact number of aminoacyl tRNA synthetases varies by species. These enzymes first bind and hydrolyze ATP to catalyze a high-energy bond between an amino acid and adenosine monophosphate (AMP); a pyrophosphate molecule is expelled in this reaction. The activated amino acid is then transferred to the tRNA, and AMP is released. The Mechanism of Protein Synthesis As with mRNA synthesis, protein synthesis can be divided into three phases: initiation, elongation, and termination.
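Before walking through those phases, the codon-anticodon pairing described above can be sketched in a few lines of Python. This is a minimal illustration, and the helper name is invented: it complements each base of a 5'-to-3' codon, yielding the anticodon written 3' to 5' so that it lines up base for base with the codon, just as in the CUA/GAU example.

# Sketch of codon-anticodon complementarity (RNA base pairing: A-U, G-C).
# Writing the anticodon 3'->5' keeps it aligned base for base with the codon.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def anticodon_3_to_5(codon):
    """Return the anticodon, written 3'->5', that pairs with a 5'->3' codon."""
    return "".join(COMPLEMENT[base] for base in codon)

print(anticodon_3_to_5("CUA"))  # GAU -- carried by a leucine tRNA
print(anticodon_3_to_5("AUG"))  # UAC -- the methionine/initiator anticodon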
The process of translation is similar in prokaryotes and eukaryotes. Here we’ll explore how translation occurs in E. coli, a representative prokaryote, and specify any differences between prokaryotic and eukaryotic translation. Initiation of Translation Protein synthesis begins with the formation of an initiation complex. In E. coli, this complex involves the small 30S ribosome, the mRNA template, three initiation factors (IFs; IF-1, IF-2, and IF-3), and a special initiator tRNA, called tRNA_fMet. The initiator tRNA interacts with the start codon AUG (or rarely, GUG), links to a formylated methionine called fMet, and can also bind IF-2. Formylated methionine is inserted by fMet-tRNA_fMet at the beginning of every polypeptide chain synthesized by E. coli, but it is usually clipped off after translation is complete. When an in-frame AUG is encountered during translation elongation, a non-formylated methionine is inserted by a regular Met-tRNA_Met. In E. coli mRNA, a sequence upstream of the first AUG codon, called the Shine-Dalgarno sequence (AGGAGG), interacts with the rRNA molecules that compose the ribosome. This interaction anchors the 30S ribosomal subunit at the correct location on the mRNA template. Guanosine triphosphate (GTP), which is a purine nucleotide triphosphate, acts as an energy source during translation, both at the start of elongation and during the ribosome’s translocation. In eukaryotes, a similar initiation complex forms, comprising mRNA, the 40S small ribosomal subunit, IFs, and nucleoside triphosphates (GTP and ATP). The charged initiator tRNA, called Met-tRNA_i, does not bind fMet in eukaryotes, but is distinct from other Met-tRNAs in that it can bind IFs. Instead of depositing at the Shine-Dalgarno sequence, the eukaryotic initiation complex recognizes the 7-methylguanosine cap at the 5' end of the mRNA. A cap-binding protein (CBP) and several other IFs assist the movement of the ribosome to the 5' cap. Once at the cap, the initiation complex tracks along the mRNA in the 5' to 3' direction, searching for the AUG start codon. Many eukaryotic mRNAs are translated from the first AUG, but this is not always the case. According to Kozak’s rules, the nucleotides around the AUG indicate whether it is the correct start codon. Kozak’s rules state that the following consensus sequence must appear around the AUG of vertebrate genes: 5'-gccRccAUGG-3'. The R (for purine) indicates a site that can be either A or G, but cannot be C or U. Essentially, the closer the sequence is to this consensus, the higher the efficiency of translation. Once the appropriate AUG is identified, the other proteins and CBP dissociate, and the 60S subunit binds to the complex of Met-tRNA_i, mRNA, and the 40S subunit. This step completes the initiation of translation in eukaryotes. Translation, Elongation, and Termination In prokaryotes and eukaryotes, the basics of elongation are the same, so we will review elongation from the perspective of E. coli. The 50S ribosomal subunit of E. coli consists of three compartments: the A (aminoacyl) site binds incoming charged aminoacyl tRNAs. The P (peptidyl) site binds charged tRNAs carrying amino acids that have formed peptide bonds with the growing polypeptide chain but have not yet dissociated from their corresponding tRNA. The E (exit) site releases dissociated tRNAs so that they can be recharged with free amino acids. There is one exception to this assembly line of tRNAs: in E. coli, fMet-tRNA_fMet is capable of entering the P site directly without first entering the A site.
Similarly, the eukaryotic Met-tRNA_i, with help from other proteins of the initiation complex, binds directly to the P site. In both cases, this creates an initiation complex with a free A site ready to accept the tRNA corresponding to the first codon after the AUG. During translation elongation, the mRNA template provides specificity. As the ribosome moves along the mRNA, each mRNA codon comes into register, and specific binding with the corresponding charged tRNA anticodon is ensured. If mRNA were not present in the elongation complex, the ribosome would bind tRNAs nonspecifically. Elongation proceeds with charged tRNAs entering the A site and then shifting to the P site followed by the E site with each single-codon “step” of the ribosome. Ribosomal steps are induced by conformational changes that advance the ribosome by three bases in the 3' direction. The energy for each step of the ribosome is donated by an elongation factor that hydrolyzes GTP. Peptide bonds form between the amino group of the amino acid attached to the A-site tRNA and the carboxyl group of the amino acid attached to the P-site tRNA. The formation of each peptide bond is catalyzed by peptidyl transferase, an RNA-based enzyme that is integrated into the 50S ribosomal subunit. The energy for each peptide bond formation is derived from GTP hydrolysis, which is catalyzed by a separate elongation factor. The amino acid bound to the P-site tRNA is also linked to the growing polypeptide chain. As the ribosome steps across the mRNA, the former P-site tRNA enters the E site, detaches from the amino acid, and is expelled (Figure 15.16). Amazingly, the E. coli translation apparatus takes only 0.05 seconds to add each amino acid, meaning that a 200-amino acid protein can be translated in just 10 seconds. Visual Connection Many antibiotics inhibit bacterial protein synthesis. For example, tetracycline blocks the A site on the bacterial ribosome, and chloramphenicol blocks peptidyl transfer. What specific effect would you expect each of these antibiotics to have on protein synthesis?
Tetracycline would directly affect:
a. tRNA binding to the ribosome
b. ribosome assembly
c. growth of the protein chain
Chloramphenicol would directly affect:
a. tRNA binding to the ribosome
b. ribosome assembly
c. growth of the protein chain
Termination of translation occurs when a nonsense codon (UAA, UAG, or UGA) is encountered. Upon aligning with the A site, these nonsense codons are recognized by release factors in prokaryotes and eukaryotes that instruct peptidyl transferase to add a water molecule to the carboxyl end of the P-site amino acid. This reaction forces the P-site amino acid to detach from its tRNA, and the newly made protein is released. The small and large ribosomal subunits dissociate from the mRNA and from each other; they are recruited almost immediately into another translation initiation complex. After many ribosomes have completed translation, the mRNA is degraded so the nucleotides can be reused in another transcription reaction. Protein Folding, Modification, and Targeting During and after translation, individual amino acids may be chemically modified, signal sequences may be appended, and the new protein “folds” into a distinct three-dimensional structure as a result of intramolecular interactions. A signal sequence is a short tail of amino acids that directs a protein to a specific cellular compartment.
These sequences at the amino end or the carboxyl end of the protein can be thought of as the protein’s “train ticket” to its ultimate destination. Other cellular factors recognize each signal sequence and help transport the protein from the cytoplasm to its correct compartment. For instance, a specific sequence at the amino terminus will direct a protein to the mitochondria or chloroplasts (in plants). Once the protein reaches its cellular destination, the signal sequence is usually clipped off. Many proteins fold spontaneously, but some proteins require helper molecules, called chaperones, to prevent them from aggregating during the complicated process of folding. Even if a protein is properly specified by its corresponding mRNA, it could take on a completely dysfunctional shape if abnormal temperature or pH conditions prevent it from folding correctly.
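To tie the decoding steps of this section together, here is a minimal computational sketch of translation. It assumes a deliberately tiny codon table (only a handful of the 64 entries in the standard genetic code) and an invented mRNA sequence, and it mimics the scanning behavior described above: locate the first AUG, then read triplets 5' to 3' until a stop codon is reached.

# Sketch of ribosomal decoding: find the first AUG, then read codons 5'->3'
# until a stop codon (UAA, UAG, UGA). A real codon table has all 64 entries;
# only the codons used by the toy mRNA below are included here.
CODON_TABLE = {
    "AUG": "Met", "GCU": "Ala", "UUU": "Phe", "CUA": "Leu",
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",
}

def translate(mrna):
    """Return the polypeptide (N terminus first) encoded from the first AUG."""
    start = mrna.find("AUG")
    if start == -1:
        return []  # no start codon, no translation
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")  # "???" marks codons missing from the toy table
        if residue == "Stop":
            break
        peptide.append(residue)
    return peptide

mrna = "GGCCAUGGCUUUUCUAUGAGGC"  # invented sequence with an AUG...UGA open reading frame
print(translate(mrna))  # ['Met', 'Ala', 'Phe', 'Leu']

Incidentally, the AUG in this toy sequence has a purine at position -3 and a G at position +4 (GCCAUGG), loosely matching the Kozak consensus discussed earlier; eukaryotic start-site selection depends on exactly this kind of sequence context.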
microbiology
Summary 12.1 Microbes and the Tools of Genetic Engineering Biotechnology is the science of utilizing living systems to benefit humankind. In recent years, the ability to directly alter an organism’s genome through genetic engineering has been made possible due to advances in recombinant DNA technology, which allows researchers to create recombinant DNA molecules with new combinations of genetic material. Molecular cloning involves methods used to construct recombinant DNA molecules and facilitate their replication in host organisms. These methods include the use of restriction enzymes (to cut both foreign DNA and plasmid vectors), ligation (to paste fragments of DNA together), and the introduction of recombinant DNA into a host organism (often bacteria). Blue-white screening allows selection of bacterial transformants that contain recombinant plasmids using the phenotype of a reporter gene that is disabled by insertion of the DNA fragment. Genomic libraries can be made by cloning genomic fragments from one organism into plasmid vectors or into bacteriophage. cDNA libraries can be generated to represent the mRNA molecules expressed in a cell at a given point in time. Transfection of eukaryotic hosts can be achieved through various methods using electroporation, gene guns, microinjection, shuttle vectors, and viral vectors. 12.2 Visualizing and Characterizing DNA, RNA, and Protein Finding a gene of interest within a sample requires the use of a single-stranded DNA probe labeled with a molecular beacon (typically radioactivity or fluorescence) that can hybridize with a complementary single-stranded nucleic acid in the sample. Agarose gel electrophoresis allows for the separation of DNA molecules based on size. Restriction fragment length polymorphism (RFLP) analysis allows for the visualization by agarose gel electrophoresis of distinct variants of a DNA sequence caused by differences in restriction sites. Southern blot analysis allows researchers to find a particular DNA sequence within a sample, whereas northern blot analysis allows researchers to detect a particular mRNA sequence expressed in a sample. Microarray technology is a nucleic acid hybridization technique that allows for the examination of many thousands of genes at once to find differences in genes or gene expression patterns between two samples of genomic DNA or cDNA. Polyacrylamide gel electrophoresis (PAGE) allows for the separation of proteins by size, especially if native protein charges are masked through pretreatment with SDS. Polymerase chain reaction allows for the rapid amplification of a specific DNA sequence. Variations of PCR can be used to detect mRNA expression (reverse transcriptase PCR) or to quantify a particular sequence in the original sample (real-time PCR). Although the development of Sanger DNA sequencing was revolutionary, advances in next generation sequencing allow for the rapid and inexpensive sequencing of the genomes of many organisms, accelerating the volume of new sequence data. 12.3 Whole Genome Methods and Pharmaceutical Applications of Genetic Engineering The science of genomics allows researchers to study organisms on a holistic level and has many applications of medical relevance. Transcriptomics and proteomics allow researchers to compare gene expression patterns between different cells and show great promise in better understanding global responses to various conditions.
The various –omics technologies complement each other and together provide a more complete picture of an organism’s or microbial community’s (metagenomics) state. The analysis required for large data sets produced through genomics, transcriptomics, and proteomics has led to the emergence of bioinformatics. Reporter genes encoding easily observable characteristics are commonly used to track gene expression patterns of genes of unknown function. The use of recombinant DNA technology has revolutionized the pharmaceutical industry, allowing for the rapid production of high-quality recombinant DNA pharmaceuticals used to treat a wide variety of human conditions. RNA interference technology has great promise as a method of treating viral infections by silencing the expression of specific genes. 12.4 Gene Therapy While gene therapy shows great promise for the treatment of genetic diseases, there are also significant risks involved. There is considerable federal and local regulation of the development of gene therapies by pharmaceutical companies for use in humans. Before gene therapy use can increase dramatically, there are many ethical issues that need to be addressed by the medical and research communities, politicians, and society at large.
Chapter Outline 12.1 Microbes and the Tools of Genetic Engineering 12.2 Visualizing and Characterizing DNA, RNA, and Protein 12.3 Whole Genome Methods and Pharmaceutical Applications of Genetic Engineering 12.4 Gene Therapy Introduction Watson and Crick’s identification of the structure of DNA in 1953 was the seminal event in the field of genetic engineering. Since the 1970s, there has been a veritable explosion in scientists’ ability to manipulate DNA in ways that have revolutionized the fields of biology, medicine, diagnostics, forensics, and industrial manufacturing. Many of the molecular tools discovered in recent decades have been produced using prokaryotic microbes. In this chapter, we will explore some of those tools, especially as they relate to applications in medicine and health care. As an example, the thermal cycler in Figure 12.1 is used to perform a diagnostic technique called the polymerase chain reaction (PCR), which relies on DNA polymerase enzymes from thermophilic bacteria. Other molecular tools, such as restriction enzymes and plasmids obtained from microorganisms, allow scientists to insert genes from humans or other organisms into microorganisms. The microorganisms are then grown on an industrial scale to synthesize products such as insulin, vaccines, and biodegradable polymers. These are just a few of the numerous applications of microbial genetics that we will explore in this chapter.
[ { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "Molecules with complementary sticky ends can easily anneal , or form hydrogen bonds between complementary bases , at their sticky ends . The annealing step allows hybridization of the single-stranded overhangs . Hybridization refers to the joining together of two complementary single strands of DNA . Blunt ends can also attach together , but less efficiently than sticky ends due to the lack of complementary overhangs facilitating the process . <hl> In either case , ligation by DNA ligase can then rejoin the two sugar-phosphate backbones of the DNA through covalent bonding , making the molecule a continuous double strand . <hl> In 1972 , Paul Berg , a Stanford biochemist , was the first to produce a recombinant DNA molecule using this technique , combining the SV40 monkey virus with E . coli bacteriophage lambda to create a hybrid .", "hl_sentences": "In either case , ligation by DNA ligase can then rejoin the two sugar-phosphate backbones of the DNA through covalent bonding , making the molecule a continuous double strand .", "question": { "cloze_format": "___ is required for repairing the phosphodiester backbone of DNA during molecular cloning.", "normal_format": "Which of the following is required for repairing the phosphodiester backbone of DNA during molecular cloning?", "question_choices": [ "cDNA", "reverse transcriptase", "restriction enzymes", "DNA ligase" ], "question_id": "fs-id1167741301615", "question_text": "Which of the following is required for repairing the phosphodiester backbone of DNA during molecular cloning?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Alternatively , bacteriophages can be used to introduce recombinant DNA into host bacterial cells through a manipulation of the transduction process ( see How Asexual Prokaryotes Achieve Genetic Diversity ) . <hl> In the laboratory , DNA fragments of interest can be engineered into phagemids , which are plasmids that have phage sequences that allow them to be packaged into bacteriophages . Bacterial cells can then be infected with these bacteriophages so that the recombinant phagemids can be introduced into the bacterial cells . Depending on the type of phage , the recombinant DNA may be integrated into the host bacterial genome ( lysogeny ) , or it may exist as a plasmid in the host ’ s cytoplasm . <hl> The bacterial process of conjugation ( see How Asexual Prokaryotes Achieve Genetic Diversity ) can also be manipulated for molecular cloning . <hl> F plasmids , or fertility plasmids , are transferred between bacterial cells through the process of conjugation . <hl> Recombinant DNA can be transferred by conjugation when bacterial cells containing a recombinant F plasmid are mixed with compatible bacterial cells lacking the plasmid . <hl> F plasmids encode a surface structure called an F pilus that facilitates contact between a cell containing an F plasmid and one without an F plasmid . On contact , a cytoplasmic bridge forms between the two cells and the F-plasmid-containing cell replicates its plasmid , transferring a copy of the recombinant F plasmid to the recipient cell . Once it has received the recombinant F plasmid , the recipient cell can produce its own F pilus and facilitate transfer of the recombinant F plasmid to an additional cell . 
The use of conjugation to transfer recombinant F plasmids to recipient cells is another effective way to introduce recombinant DNA molecules into host cells .", "hl_sentences": "Alternatively , bacteriophages can be used to introduce recombinant DNA into host bacterial cells through a manipulation of the transduction process ( see How Asexual Prokaryotes Achieve Genetic Diversity ) . The bacterial process of conjugation ( see How Asexual Prokaryotes Achieve Genetic Diversity ) can also be manipulated for molecular cloning . Recombinant DNA can be transferred by conjugation when bacterial cells containing a recombinant F plasmid are mixed with compatible bacterial cells lacking the plasmid .", "question": { "cloze_format": "___ is not a process used to introduce DNA molecules into bacterial cells.", "normal_format": "All of the following are processes used to introduce DNA molecules into bacterial cells, except which one?", "question_choices": [ "transformation", "transduction", "transcription", "conjugation" ], "question_id": "fs-id1167740398524", "question_text": "All of the following are processes used to introduce DNA molecules into bacterial cells except:" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Several later modifications to PCR further increase the utility of this technique . <hl> Reverse transcriptase PCR ( RT-PCR ) is used for obtaining DNA copies of a specific mRNA molecule . <hl> <hl> RT-PCR begins with the use of the reverse transcriptase enzyme to convert mRNA molecules into cDNA . <hl> That cDNA is then used as a template for traditional PCR amplification . RT-PCR can detect whether a specific gene has been expressed in a sample . Another recent application of PCR is real-time PCR , also known as quantitative PCR ( qPCR ) . Standard PCR and RT-PCR protocols are not quantitative because any one of the reagents may become limiting before all of the cycles within the protocol are complete , and samples are only analyzed at the end . Because it is not possible to determine when in the PCR or RT-PCR protocol a given reagent has become limiting , it is not possible to know how many cycles were completed prior to this point , and thus it is not possible to determine how many original template molecules were present in the sample at the start of PCR . In qPCR , however , the use of fluorescence allows one to monitor the increase in a double-stranded template during a PCR reaction as it occurs . These kinetics data can then be used to quantify the amount of the original target sequence . The use of qPCR in recent years has further expanded the capabilities of PCR , allowing researchers to determine the number of DNA copies , and sometimes organisms , present in a sample . In clinical settings , qRT-PCR is used to determine viral load in HIV-positive patients to evaluate the effectiveness of their therapy .", "hl_sentences": "Reverse transcriptase PCR ( RT-PCR ) is used for obtaining DNA copies of a specific mRNA molecule . 
RT-PCR begins with the use of the reverse transcriptase enzyme to convert mRNA molecules into cDNA .", "question": { "cloze_format": "The enzyme that uses RNA as a template to produce a DNA copy is called ___.", "normal_format": "What is called the enzyme that uses RNA as a template to produce a DNA copy?", "question_choices": [ "a restriction enzyme", "DNA ligase", "reverse transcriptase", "DNA polymerase" ], "question_id": "fs-id1167740392810", "question_text": "The enzyme that uses RNA as a template to produce a DNA copy is called:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Following the transformation protocol , bacterial cells are plated onto an antibiotic-containing medium to inhibit the growth of the many host cells that were not transformed by the plasmid conferring antibiotic resistance . <hl> A technique called blue-white screening is then used for lacZ - encoding plasmid vectors such as pUC 19 . <hl> <hl> Blue colonies have a functional beta-galactosidase enzyme because the lacZ gene is uninterrupted , with no foreign DNA inserted into the polylinker site . <hl> <hl> These colonies typically result from the digested , linearized plasmid religating to itself . <hl> However , white colonies lack a functional beta-galactosidase enzyme , indicating the insertion of foreign DNA within the polylinker site of the plasmid vector , thus disrupting the lacZ gene . Thus , white colonies resulting from this blue-white screening contain plasmids with an insert and can be further screened to characterize the foreign DNA . To be sure the correct DNA was incorporated into the plasmid , the DNA insert can then be sequenced .", "hl_sentences": "A technique called blue-white screening is then used for lacZ - encoding plasmid vectors such as pUC 19 . Blue colonies have a functional beta-galactosidase enzyme because the lacZ gene is uninterrupted , with no foreign DNA inserted into the polylinker site . These colonies typically result from the digested , linearized plasmid religating to itself .", "question": { "cloze_format": "In blue-white screening, blue colonies represent ___ .", "normal_format": "In blue-white screening, what do blue colonies represent?", "question_choices": [ "cells that have not taken up the plasmid vector", "cells with recombinant plasmids containing a new insert", "cells containing empty plasmid vectors", "cells with a non-functional lacZ gene" ], "question_id": "fs-id1167741539100", "question_text": "In blue-white screening, what do blue colonies represent?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "Another method of transfecting plants involves shuttle vectors , plasmids that can move between bacterial and eukaryotic cells . <hl> The tumor-inducing ( T i ) plasmids originating from the bacterium Agrobacterium tumefaciens are commonly used as shuttle vectors for incorporating genes into plants ( Figure 12.12 ) . <hl> In nature , the T i plasmids of A . tumefaciens cause plants to develop tumors when they are transferred from bacterial cells to plant cells . Researchers have been able to manipulate these naturally occurring plasmids to remove their tumor-causing genes and insert desirable DNA fragments . The resulting recombinant T i plasmids can be transferred into the plant genome through the natural transfer of T i plasmids from the bacterium to the plant host . 
Once inside the plant host cell , the gene of interest recombines into the plant cell ’ s genome .", "hl_sentences": "The tumor-inducing ( T i ) plasmids originating from the bacterium Agrobacterium tumefaciens are commonly used as shuttle vectors for incorporating genes into plants ( Figure 12.12 ) .", "question": { "cloze_format": "The Ti plasmid is used for introducing genes into ___ .", "normal_format": "The Ti plasmid is used for introducing genes into what?", "question_choices": [ "animal cells", "plant cells", "bacteriophages", "E. coli cells" ], "question_id": "fs-id1167740227356", "question_text": "The Ti plasmid is used for introducing genes into:" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> A variation of gel electrophoresis , called polyacrylamide gel electrophoresis ( PAGE ) , is commonly used for separating proteins . <hl> In PAGE , the gel matrix is finer and composed of polyacrylamide instead of agarose . Additionally , PAGE is typically performed using a vertical gel apparatus ( Figure 12.18 ) . Because of the varying charges associated with amino acid side chains , PAGE can be used to separate intact proteins based on their net charges . Alternatively , proteins can be denatured and coated with a negatively charged detergent called sodium dodecyl sulfate ( SDS ) , masking the native charges and allowing separation based on size only . PAGE can be further modified to separate proteins based on two characteristics , such as their charges at various pHs as well as their size , through the use of two-dimensional PAGE . In any of these cases , following electrophoresis , proteins are visualized through staining , commonly with either Coomassie blue or a silver stain .", "hl_sentences": "A variation of gel electrophoresis , called polyacrylamide gel electrophoresis ( PAGE ) , is commonly used for separating proteins .", "question": { "cloze_format": "The technique that is used to separate protein fragments based on size is called the ___ .", "normal_format": "Which technique is used to separate protein fragments based on size?", "question_choices": [ "polyacrylamide gel electrophoresis", "Southern blot", "agarose gel electrophoresis", "polymerase chain reaction" ], "question_id": "fs-id1167742637172", "question_text": "Which technique is used to separate protein fragments based on size?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Forensic scientists use RFLP analysis as a form of DNA fingerprinting , which is useful for analyzing DNA obtained from crime scenes , suspects , and victims . <hl> <hl> DNA samples are collected , the numbers of copies of the sample DNA molecules are increased using PCR , and then subjected to restriction enzyme digestion and agarose gel electrophoresis to generate specific banding patterns . <hl> By comparing the banding patterns of samples collected from the crime scene against those collected from suspects or victims , investigators can definitively determine whether DNA evidence collected at the scene was left behind by suspects or victims .", "hl_sentences": "Forensic scientists use RFLP analysis as a form of DNA fingerprinting , which is useful for analyzing DNA obtained from crime scenes , suspects , and victims . 
DNA samples are collected , the numbers of copies of the sample DNA molecules are increased using PCR , and then subjected to restriction enzyme digestion and agarose gel electrophoresis to generate specific banding patterns .", "question": { "cloze_format": "The ___ technique uses restriction enzyme digestion followed by agarose gel electrophoresis to generate a banding pattern for comparison to another sample processed in the same way.", "normal_format": "Which technique uses restriction enzyme digestion followed by agarose gel electrophoresis to generate a banding pattern for comparison to another sample processed in the same way?", "question_choices": [ "qPCR", "RT-PCR", "RFLP", "454 sequencing" ], "question_id": "fs-id1167742480598", "question_text": "Which technique uses restriction enzyme digestion followed by agarose gel electrophoresis to generate a banding pattern for comparison to another sample processed in the same way?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> Another technique that capitalizes on the hybridization between complementary nucleic acid sequences is called microarray analysis . <hl> Microarray analysis is useful for the comparison of gene-expression patterns between different cell types — for example , cells infected with a virus versus uninfected cells , or cancerous cells versus healthy cells ( Figure 12.17 ) . <hl> In the northern blot , another variation of the Southern blot , RNA ( not DNA ) is immobilized on the membrane and probed . <hl> Northern blots are typically used to detect the amount of mRNA made through gene expression within a tissue or organism sample . Several molecular techniques capitalize on sequence complementarity and hybridization between nucleic acids of a sample and DNA probes . Typically , probing nucleic-acid samples within a gel is unsuccessful because as the DNA probe soaks into a gel , the sample nucleic acids within the gel diffuse out . Thus , blotting techniques are commonly used to transfer nucleic acids to a thin , positively charged membrane made of nitrocellulose or nylon . <hl> In the Southern blot technique , developed by Sir Edwin Southern in 1975 , DNA fragments within a sample are first separated by agarose gel electrophoresis and then transferred to a membrane through capillary action ( Figure 12.16 ) . <hl> The DNA fragments that bind to the surface of the membrane are then exposed to a specific single-stranded DNA probe labeled with a radioactive or fluorescent molecular beacon to aid in detection . Southern blots may be used to detect the presence of certain DNA sequences in a given DNA sample . Once the target DNA within the membrane is visualized , researchers can cut out the portion of the membrane containing the fragment to recover the DNA fragment of interest .", "hl_sentences": "Another technique that capitalizes on the hybridization between complementary nucleic acid sequences is called microarray analysis . In the northern blot , another variation of the Southern blot , RNA ( not DNA ) is immobilized on the membrane and probed . 
In the Southern blot technique , developed by Sir Edwin Southern in 1975 , DNA fragments within a sample are first separated by agarose gel electrophoresis and then transferred to a membrane through capillary action ( Figure 12.16 ) .", "question": { "cloze_format": "All of the following techniques involve hybridization between single-stranded nucleic acid molecules except ___", "normal_format": "All of the following techniques involve hybridization between single-stranded nucleic acid molecules except which one?", "question_choices": [ "Southern blot analysis", "RFLP analysis", "northern blot analysis", "microarray analysis" ], "question_id": "fs-id1167742469532", "question_text": "All of the following techniques involve hybridization between single-stranded nucleic acid molecules except:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> The field of transcriptomics is the science of the entire collection of mRNA molecules produced by cells . <hl> <hl> Scientists compare gene expression patterns between infected and uninfected host cells , gaining important information about the cellular responses to infectious disease . <hl> Additionally , transcriptomics can be used to monitor the gene expression of virulence factors in microorganisms , aiding scientists in better understanding pathogenic processes from this viewpoint .", "hl_sentences": "The field of transcriptomics is the science of the entire collection of mRNA molecules produced by cells . Scientists compare gene expression patterns between infected and uninfected host cells , gaining important information about the cellular responses to infectious disease .", "question": { "cloze_format": "The science of studying the entire collection of mRNA molecules produced by cells, allowing scientists to monitor differences in gene expression patterns between cells, is called ___.", "normal_format": "What is called the science of studying the entire collection of mRNA molecules produced by cells, allowing scientists to monitor differences in gene expression patterns between cells?", "question_choices": [ "genomics", "transcriptomics", "proteomics", "pharmacogenomics" ], "question_id": "fs-id1167742824998", "question_text": "The science of studying the entire collection of mRNA molecules produced by cells, allowing scientists to monitor differences in gene expression patterns between cells, is called:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "When genomics and transcriptomics are applied to entire microbial communities , we use the terms metagenomics and metatranscriptomics , respectively . <hl> Metagenomics and metatranscriptomics allow researchers to study genes and gene expression from a collection of multiple species , many of which may not be easily cultured or cultured at all in the laboratory . 
<hl> A DNA microarray ( discussed in the previous section ) can be used in metagenomics studies .", "hl_sentences": "Metagenomics and metatranscriptomics allow researchers to study genes and gene expression from a collection of multiple species , many of which may not be easily cultured or cultured at all in the laboratory .", "question": { "cloze_format": "The science of studying genomic fragments from microbial communities, allowing researchers to study genes from a collection of multiple species, is called ___.", "normal_format": "What is the science of studying genomic fragments from microbial communities, allowing researchers to study genes from a collection of multiple species?", "question_choices": [ "pharmacogenomics", "transcriptomics", "metagenomics", "proteomics" ], "question_id": "fs-id1167742394023", "question_text": "The science of studying genomic fragments from microbial communities, allowing researchers to study genes from a collection of multiple species, is called:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "Genetic engineering has also been important in the production of other therapeutic proteins , such as insulin , interferons , and human growth hormone , to treat a variety of human medical conditions . <hl> For example , at one time , it was possible to treat diabetes only by giving patients pig insulin , which caused allergic reactions due to small differences between the proteins expressed in human and pig insulin . <hl> <hl> However , since 1978 , recombinant DNA technology has been used to produce large-scale quantities of human insulin using E . coli in a relatively inexpensive process that yields a more consistently effective pharmaceutical product . <hl> Scientists have also genetically engineered E . coli capable of producing human growth hormone ( HGH ) , which is used to treat growth disorders in children and certain other disorders in adults . The HGH gene was cloned from a cDNA library and inserted into E . coli cells by cloning it into a bacterial vector . Eventually , genetic engineering will be used to produce DNA vaccines and various gene therapies , as well as customized medicines for fighting cancer and other diseases .", "hl_sentences": "For example , at one time , it was possible to treat diabetes only by giving patients pig insulin , which caused allergic reactions due to small differences between the proteins expressed in human and pig insulin . However , since 1978 , recombinant DNA technology has been used to produce large-scale quantities of human insulin using E . coli in a relatively inexpensive process that yields a more consistently effective pharmaceutical product .", "question": { "cloze_format": "The insulin produced by recombinant DNA technology is ___ .", "normal_format": "What is the characteristic of the insulin produced by recombinant DNA technology?", "question_choices": [ "a combination of E. coli and human insulin.", "identical to human insulin produced in the pancreas.", "cheaper but less effective than pig insulin for treating diabetes.", "engineered to be more effective than human insulin." 
], "question_id": "fs-id1167742346689", "question_text": "The insulin produced by recombinant DNA technology is" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> Gelsinger ’ s death led to increased scrutiny of gene therapy , and subsequent negative outcomes of gene therapy have resulted in the temporary halting of clinical trials pending further investigation . <hl> For example , when children in France treated with gene therapy for SCID began to develop leukemia several years after treatment , the FDA temporarily stopped clinical trials of similar types of gene therapy occurring in the United States . 13 Cases like these highlight the need for researchers and health professionals not only to value human well-being and patients ’ rights over profitability , but also to maintain scientific objectivity when evaluating the risks and benefits of new therapies . 13 Erika Check . “ Gene Therapy : A Tragic Setback . ” Nature 420 no . 6912 ( 2002 ): 116 – 118 . <hl> To receive FDA approval for a new therapy , researchers must collect significant laboratory data from animal trials and submit an Investigational New Drug ( IND ) application to the FDA ’ s Center for Drug Evaluation and Research ( CDER ) . <hl> Following a 30 - day waiting period during which the FDA reviews the IND , clinical trials involving human subjects may begin . If the FDA perceives a problem prior to or during the clinical trial , the FDA can order a “ clinical hold ” until any problems are addressed . During clinical trials , researchers collect and analyze data on the therapy ’ s effectiveness and safety , including any side effects observed . <hl> Once the therapy meets FDA standards for effectiveness and safety , the developers can submit a New Drug Application ( NDA ) that details how the therapy will be manufactured , packaged , monitored , and administered . <hl>", "hl_sentences": "Gelsinger ’ s death led to increased scrutiny of gene therapy , and subsequent negative outcomes of gene therapy have resulted in the temporary halting of clinical trials pending further investigation . To receive FDA approval for a new therapy , researchers must collect significant laboratory data from animal trials and submit an Investigational New Drug ( IND ) application to the FDA ’ s Center for Drug Evaluation and Research ( CDER ) . Once the therapy meets FDA standards for effectiveness and safety , the developers can submit a New Drug Application ( NDA ) that details how the therapy will be manufactured , packaged , monitored , and administered .", "question": { "cloze_format": "The point at which the FDA can halt the development or use of gene therapy is ___ .", "normal_format": "At what point can the FDA halt the development or use of gene therapy?", "question_choices": [ "on submission of an IND application", "during clinical trials", "after manufacturing and marketing of the approved therapy", "all of the answers are correct" ], "question_id": "fs-id1167740225217", "question_text": "At what point can the FDA halt the development or use of gene therapy?" }, "references_are_paraphrase": 0 } ]
12.1 Microbes and the Tools of Genetic Engineering Learning Objectives Identify tools of molecular genetics that are derived from microorganisms Describe the methods used to create recombinant DNA molecules Describe methods used to introduce DNA into prokaryotic cells List the types of genomic libraries and describe their uses Describe the methods used to introduce DNA into eukaryotic cells Clinical Focus Part 1 Kayla, a 24-year-old electrical engineer and running enthusiast, just moved from Arizona to New Hampshire to take a new job. On her weekends off, she loves to explore her new surroundings, going for long runs in the pine forests. In July she spent a week hiking through the mountains. In early August, Kayla developed a low fever, headache, and mild muscle aches, and she felt a bit fatigued. Not thinking much of it, she took some ibuprofen to combat her symptoms and vowed to get more rest. What types of medical conditions might be responsible for Kayla’s symptoms? Jump to the next Clinical Focus box. The science of using living systems to benefit humankind is called biotechnology. Technically speaking, the domestication of plants and animals through farming and breeding practices is a type of biotechnology. However, in a contemporary sense, we associate biotechnology with the direct alteration of an organism’s genetics to achieve desirable traits through the process of genetic engineering. Genetic engineering involves the use of recombinant DNA technology, the process by which a DNA sequence is manipulated in vitro, thus creating recombinant DNA molecules that have new combinations of genetic material. The recombinant DNA is then introduced into a host organism. If the DNA that is introduced comes from a different species, the host organism is now considered to be transgenic. One example of a transgenic microorganism is the bacterial strain that produces human insulin (Figure 12.2). The insulin gene from humans was inserted into a plasmid. This recombinant DNA plasmid was then inserted into bacteria. As a result, these transgenic microbes are able to produce and secrete human insulin. Many prokaryotes are able to acquire foreign DNA and incorporate functional genes into their own genome through “mating” with other cells (conjugation), viral infection (transduction), and taking up DNA from the environment (transformation). Recall that these mechanisms are examples of horizontal gene transfer, the transfer of genetic material between cells of the same generation. Molecular Cloning Herbert Boyer and Stanley Cohen first demonstrated the complete molecular cloning process in 1973 when they successfully cloned genes from the African clawed frog (Xenopus laevis) into a bacterial plasmid that was then introduced into the bacterial host Escherichia coli. Molecular cloning is a set of methods used to construct recombinant DNA and incorporate it into a host organism; it makes use of a number of molecular tools that are derived from microorganisms. Restriction Enzymes and Ligases In recombinant DNA technology, DNA molecules are manipulated using naturally occurring enzymes derived mainly from bacteria and viruses. The creation of recombinant DNA molecules is possible due to the use of naturally occurring restriction endonucleases (restriction enzymes), bacterial enzymes produced as a protection mechanism to cut and destroy foreign cytoplasmic DNA that is most commonly a result of bacteriophage infection.
Stuart Linn and Werner Arber discovered restriction enzymes in their 1960s studies of how E. coli limits bacteriophage replication on infection. Today, we use restriction enzymes extensively for cutting DNA fragments that can then be spliced into another DNA molecule to form recombinant molecules. Each restriction enzyme cuts DNA at a characteristic recognition site, a specific, usually palindromic, DNA sequence typically four to six base pairs in length. A palindrome is a sequence of letters that reads the same forward as backward. (The word "level" is an example of a palindrome.) Palindromic DNA sequences contain the same base sequences in the 5ʹ to 3ʹ direction on one strand as in the 5ʹ to 3ʹ direction on the complementary strand. A restriction enzyme recognizes the DNA palindrome and cuts each backbone at identical positions in the palindrome. Some restriction enzymes cut to produce molecules that have complementary overhangs (sticky ends) while others cut without generating such overhangs, instead producing blunt ends (Figure 12.3).

Molecules with complementary sticky ends can easily anneal, or form hydrogen bonds between complementary bases, at their sticky ends. The annealing step allows hybridization of the single-stranded overhangs. Hybridization refers to the joining together of two complementary single strands of DNA. Blunt ends can also attach together, but less efficiently than sticky ends due to the lack of complementary overhangs facilitating the process. In either case, ligation by DNA ligase can then rejoin the two sugar-phosphate backbones of the DNA through covalent bonding, making the molecule a continuous double strand. In 1972, Paul Berg, a Stanford biochemist, was the first to produce a recombinant DNA molecule using this technique, combining the SV40 monkey virus with E. coli bacteriophage lambda to create a hybrid.

Plasmids
After restriction digestion, genes of interest are commonly inserted into plasmids, small pieces of typically circular, double-stranded DNA that replicate independently of the bacterial chromosome (see Unique Characteristics of Prokaryotic Cells). In recombinant DNA technology, plasmids are often used as vectors, DNA molecules that carry DNA fragments from one organism to another. Plasmids used as vectors can be genetically engineered by researchers and scientific supply companies to have specialized properties, as illustrated by the commonly used plasmid vector pUC19 (Figure 12.4). Some plasmid vectors contain genes that confer antibiotic resistance; these resistance genes allow researchers to easily find plasmid-containing colonies by plating them on media containing the corresponding antibiotic. The antibiotic kills all host cells that do not harbor the desired plasmid vector, but those that contain the vector are able to survive and grow.

Plasmid vectors used for cloning typically have a polylinker site, or multiple cloning site (MCS). A polylinker site is a short sequence containing multiple unique restriction enzyme recognition sites that are used for inserting DNA into the plasmid after restriction digestion of both the DNA and the plasmid. Having these multiple restriction enzyme recognition sites within the polylinker site makes the plasmid vector versatile, so it can be used for many different cloning experiments involving different restriction enzymes.
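Before turning to how the polylinker is paired with a reporter gene, the recognition-site idea lends itself to a short illustration. The Python sketch below is not part of the original text: the sequences are invented, only one strand is modeled as a string, and the cut is placed one base into the EcoRI site (G^AATTC), which is enough to show why the products carry complementary AATT overhangs.

```python
# Illustrative only: one strand of DNA modeled as a string; the "enzyme"
# is EcoRI, whose palindromic site GAATTC is cut between G and A,
# leaving single-stranded AATT overhangs (sticky ends).

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def is_palindromic(site: str) -> bool:
    # A recognition site is palindromic if it equals its own reverse complement.
    return site == reverse_complement(site)

def digest(seq: str, site: str = "GAATTC", cut_offset: int = 1) -> list[str]:
    """Cut seq one base into each occurrence of site (G^AATTC for EcoRI)."""
    fragments, start = [], 0
    i = seq.find(site)
    while i != -1:
        fragments.append(seq[start:i + cut_offset])
        start = i + cut_offset
        i = seq.find(site, i + 1)
    fragments.append(seq[start:])
    return fragments

print(is_palindromic("GAATTC"))        # True
print(digest("TTGAATTCGGCCGAATTCAA"))  # ['TTG', 'AATTCGGCCG', 'AATTCAA']
# Note the AATT... fragment starts: on the real double helix these are the
# complementary overhangs that let fragments anneal before ligation.
```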
This polylinker site is often found within a reporter gene , another gene sequence artificially engineered into the plasmid that encodes a protein that allows for visualization of DNA insertion. The reporter gene allows a researcher to distinguish host cells that contain recombinant plasmids with cloned DNA fragments from host cells that only contain the non-recombinant plasmid vector. The most common reporter gene used in plasmid vectors is the bacterial lacZ gene encoding beta-galactosidase, an enzyme that naturally degrades lactose but can also degrade a colorless synthetic analog X-gal , thereby producing blue colonies on X-gal–containing media. The lacZ reporter gene is disabled when the recombinant DNA is spliced into the plasmid. Because the LacZ protein is not produced when the gene is disabled, X-gal is not degraded and white colonies are produced, which can then be isolated. This blue-white screening method is described later and shown in Figure 12.5 . In addition to these features, some plasmids come pre-digested and with an enzyme linked to the linearized plasmid to aid in ligation after the insertion of foreign DNA fragments. Molecular Cloning using Transformation The most commonly used mechanism for introducing engineered plasmids into a bacterial cell is transformation , a process in which bacteria take up free DNA from their surroundings. In nature, free DNA typically comes from other lysed bacterial cells; in the laboratory, free DNA in the form of recombinant plasmids is introduced to the cell’s surroundings. Some bacteria, such as Bacillus spp., are naturally competent, meaning they are able to take up foreign DNA. However, not all bacteria are naturally competent. In most cases, bacteria must be made artificially competent in the laboratory by increasing the permeability of the cell membrane. This can be achieved through chemical treatments that neutralize charges on the cell membrane or by exposing the bacteria to an electric field that creates microscopic pores in the cell membrane. These methods yield chemically competent or electrocompetent bacteria, respectively. Following the transformation protocol, bacterial cells are plated onto an antibiotic-containing medium to inhibit the growth of the many host cells that were not transformed by the plasmid conferring antibiotic resistance. A technique called blue-white screening is then used for lacZ -encoding plasmid vectors such as pUC19. Blue colonies have a functional beta-galactosidase enzyme because the lacZ gene is uninterrupted, with no foreign DNA inserted into the polylinker site. These colonies typically result from the digested, linearized plasmid religating to itself. However, white colonies lack a functional beta-galactosidase enzyme, indicating the insertion of foreign DNA within the polylinker site of the plasmid vector, thus disrupting the lacZ gene. Thus, white colonies resulting from this blue-white screening contain plasmids with an insert and can be further screened to characterize the foreign DNA. To be sure the correct DNA was incorporated into the plasmid, the DNA insert can then be sequenced. Link to Learning View an animation of molecular cloning from the DNA Learning Center. Check Your Understanding In blue-white screening, what does a blue colony mean and why is it blue? Molecular Cloning Using Conjugation or Transduction The bacterial process of conjugation (see How Asexual Prokaryotes Achieve Genetic Diversity ) can also be manipulated for molecular cloning. 
F plasmids , or fertility plasmids, are transferred between bacterial cells through the process of conjugation. Recombinant DNA can be transferred by conjugation when bacterial cells containing a recombinant F plasmid are mixed with compatible bacterial cells lacking the plasmid. F plasmids encode a surface structure called an F pilus that facilitates contact between a cell containing an F plasmid and one without an F plasmid. On contact, a cytoplasmic bridge forms between the two cells and the F-plasmid-containing cell replicates its plasmid, transferring a copy of the recombinant F plasmid to the recipient cell. Once it has received the recombinant F plasmid, the recipient cell can produce its own F pilus and facilitate transfer of the recombinant F plasmid to an additional cell. The use of conjugation to transfer recombinant F plasmids to recipient cells is another effective way to introduce recombinant DNA molecules into host cells. Alternatively, bacteriophages can be used to introduce recombinant DNA into host bacterial cells through a manipulation of the transduction process (see How Asexual Prokaryotes Achieve Genetic Diversity ). In the laboratory, DNA fragments of interest can be engineered into phagemids , which are plasmids that have phage sequences that allow them to be packaged into bacteriophages. Bacterial cells can then be infected with these bacteriophages so that the recombinant phagemids can be introduced into the bacterial cells. Depending on the type of phage, the recombinant DNA may be integrated into the host bacterial genome (lysogeny), or it may exist as a plasmid in the host’s cytoplasm. Check Your Understanding What is the original function of a restriction enzyme? What two processes are exploited to get recombinant DNA into a bacterial host cell? Distinguish the uses of an antibiotic resistance gene and a reporter gene in a plasmid vector. Creating a Genomic Library Molecular cloning may also be used to generate a genomic library . The library is a complete (or nearly complete) copy of an organism’s genome contained as recombinant DNA plasmids engineered into unique clones of bacteria. Having such a library allows a researcher to create large quantities of each fragment by growing the bacterial host for that fragment. These fragments can be used to determine the sequence of the DNA and the function of any genes present. One method for generating a genomic library is to ligate individual restriction enzyme-digested genomic fragments into plasmid vectors cut with the same restriction enzyme ( Figure 12.6 ). After transformation into a bacterial host, each transformed bacterial cell takes up a single recombinant plasmid and grows into a colony of cells. All of the cells in this colony are identical clones and carry the same recombinant plasmid. The resulting library is a collection of colonies, each of which contains a fragment of the original organism’s genome, that are each separate and distinct and can each be used for further study. This makes it possible for researchers to screen these different clones to discover the one containing a gene of interest from the original organism’s genome. To construct a genomic library using larger fragments of genomic DNA, an E. coli bacteriophage, such as lambda , can be used as a host ( Figure 12.7 ). Genomic DNA can be sheared or enzymatically digested and ligated into a pre-digested bacteriophage lambda DNA vector. Then, these recombinant phage DNA molecules can be packaged into phage particles and used to infect E. 
coli host cells on a plate. During infection within each cell, each recombinant phage will make many copies of itself and lyse the E. coli lawn, forming a plaque. Thus, each plaque from a phage library represents a unique recombinant phage containing a distinct genomic DNA fragment. Plaques can then be screened further to look for genes of interest. One advantage to producing a library using phages instead of plasmids is that a phage particle holds a much larger insert of foreign DNA compared with a plasmid vector, thus requiring a much smaller number of cultures to fully represent the entire genome of the original organism.

To focus on the expressed genes in an organism or even a tissue, researchers construct libraries using the organism's messenger RNA (mRNA) rather than its genomic DNA. Whereas all cells in a single organism will have the same genomic DNA, different tissues express different genes, producing different complements of mRNA. For example, all human cells' genomic DNA contains the gene for insulin, but only cells in the pancreas express mRNA directing the production of insulin. Because mRNA cannot be cloned directly, in the laboratory mRNA must be used as a template by the retroviral enzyme reverse transcriptase to make complementary DNA (cDNA). A cell's full complement of mRNA can be reverse-transcribed into cDNA molecules, which can be used as a template for DNA polymerase to make double-stranded DNA copies; these fragments can subsequently be ligated into either plasmid vectors or bacteriophage to produce a cDNA library. The benefit of a cDNA library is that it contains DNA from only the expressed genes in the cell. This means that the introns, control sequences such as promoters, and DNA not destined to be translated into proteins are not represented in the library. The focus on translated sequences means that the library cannot be used to study the sequence and structure of the genome in its entirety. The construction of a cDNA library is shown in Figure 12.8.

Check Your Understanding
What are the hosts for the genomic libraries described?
What is cDNA?
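The logic of first-strand cDNA synthesis, complementing the mRNA and reversing its orientation, can be written out in a few lines. In this sketch (not from the text), the mRNA fragment is invented and real-world details such as priming from the poly-A tail are ignored.

```python
# Illustrative only: reverse transcriptase reads an mRNA template and
# builds the complementary DNA strand; base pairing (A-T/U, G-C) is the
# only chemistry modeled here. The mRNA fragment is invented.

RT_PAIRING = {"A": "T", "U": "A", "G": "C", "C": "G"}

def reverse_transcribe(mrna: str) -> str:
    """Return first-strand cDNA written 5'->3' for an mRNA given 5'->3'."""
    # Complement each base, then reverse so the cDNA also reads 5'->3'.
    return "".join(RT_PAIRING[base] for base in reversed(mrna))

mrna = "AUGGCCAAUUGA"
print(reverse_transcribe(mrna))  # TCAATTGGCCAT
```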
Introducing Recombinant Molecules into Eukaryotic Hosts
The use of bacterial hosts for genetic engineering laid the foundation for recombinant DNA technology; however, researchers have also had great interest in genetically engineering eukaryotic cells, particularly those of plants and animals. The introduction of recombinant DNA molecules into eukaryotic hosts is called transfection. Genetically engineered plants, called transgenic plants, are of significant interest for agricultural and pharmaceutical purposes. The first transgenic plant sold commercially was the Flavr Savr delayed-ripening tomato, which came to market in 1994. Genetically engineered livestock have also been successfully produced, resulting, for example, in pigs with increased nutritional value 1 and goats that secrete pharmaceutical products in their milk. 2
1 Liangxue Lai, Jing X. Kang, Rongfeng Li, Jingdong Wang, William T. Witt, Hwan Yul Yong, Yanhong Hao et al. "Generation of Cloned Transgenic Pigs Rich in Omega-3 Fatty Acids." Nature Biotechnology 24 no. 4 (2006): 435–436.
2 Raylene Ramos Moura, Luciana Magalhães Melo, and Vicente José de Figueirêdo Freitas. "Production of Recombinant Proteins in Milk of Transgenic and Non-Transgenic Goats." Brazilian Archives of Biology and Technology 54 no. 5 (2011): 927–938.

Electroporation
Compared to bacterial cells, eukaryotic cells tend to be less amenable as hosts for recombinant DNA molecules. Because eukaryotes are typically neither competent to take up foreign DNA nor able to maintain plasmids, transfection of eukaryotic hosts is far more challenging and requires more intrusive techniques for success. One method used for transfecting cells in cell culture is called electroporation. A brief electric pulse induces the formation of transient pores in the phospholipid bilayers of cells through which the gene can be introduced. At the same time, the electric pulse generates a short-lived positive charge on one side of the cell's interior and a negative charge on the opposite side; the charge difference draws negatively charged DNA molecules into the cell (Figure 12.9).

Microinjection
An alternative method of transfection is called microinjection. Because eukaryotic cells are typically larger than those of prokaryotes, DNA fragments can sometimes be directly injected into the cytoplasm using a glass micropipette, as shown in Figure 12.10.

Gene Guns
Transfecting plant cells can be even more difficult than animal cells because of their thick cell walls. One approach involves treating plant cells with enzymes to remove their cell walls, producing protoplasts. Then, a gene gun is used to shoot gold or tungsten particles coated with recombinant DNA molecules into the plant protoplasts at high speeds. Recipient protoplast cells can then recover and be used to generate new transgenic plants (Figure 12.11).

Shuttle Vectors
Another method of transfecting plants involves shuttle vectors, plasmids that can move between bacterial and eukaryotic cells. The tumor-inducing (Ti) plasmids originating from the bacterium Agrobacterium tumefaciens are commonly used as shuttle vectors for incorporating genes into plants (Figure 12.12). In nature, the Ti plasmids of A. tumefaciens cause plants to develop tumors when they are transferred from bacterial cells to plant cells. Researchers have been able to manipulate these naturally occurring plasmids to remove their tumor-causing genes and insert desirable DNA fragments. The resulting recombinant Ti plasmids can be transferred into the plant genome through the natural transfer of Ti plasmids from the bacterium to the plant host. Once inside the plant host cell, the gene of interest recombines into the plant cell's genome.

Viral Vectors
Viral vectors can also be used to transfect eukaryotic cells. In fact, this method is often used in gene therapy (see Gene Therapy) to introduce healthy genes into human patients suffering from diseases that result from genetic mutations. Viral genes can be deleted and replaced with the gene to be delivered to the patient; 3 the virus then infects the host cell and delivers the foreign DNA into the genome of the targeted cell. Adenoviruses are often used for this purpose because they can be grown to high titer and can infect both nondividing and dividing host cells. However, use of viral vectors for gene therapy can pose some risks for patients, as discussed in Gene Therapy.
3 William S.M. Wold and Karoly Toth. "Adenovirus Vectors for Gene Therapy, Vaccination and Cancer Gene Therapy." Current Gene Therapy 13 no. 6 (2013): 421.

Check Your Understanding
What are the methods used to introduce recombinant DNA vectors into animal cells?
Compare and contrast shuttle vectors and viral vectors.
12.2 Visualizing and Characterizing DNA, RNA, and Protein

Learning Objectives
Explain the use of nucleic acid probes to visualize specific DNA sequences
Explain the use of gel electrophoresis to separate DNA fragments
Explain the principle of restriction fragment length polymorphism analysis and its uses
Compare and contrast Southern and northern blots
Explain the principles and uses of microarray analysis
Describe the methods used to separate and visualize protein variants
Explain the method and uses of polymerase chain reaction and DNA sequencing

The sequence of a DNA molecule can help us identify an organism when compared to known sequences housed in a database. The sequence can also tell us something about the function of a particular part of the DNA, such as whether it encodes a particular protein. Comparing protein signatures, the expression levels of specific arrays of proteins, between samples is an important method for evaluating cellular responses to a multitude of environmental factors and stresses. Analysis of protein signatures can reveal the identity of an organism or how a cell is responding during disease.

The DNA and proteins of interest are microscopic and typically mixed in with many other molecules including DNA or proteins irrelevant to our interests. Many techniques have been developed to isolate and characterize molecules of interest. These methods were originally developed for research purposes, but in many cases they have been simplified to the point that routine clinical use is possible. For example, many pathogens, such as the bacterium Helicobacter pylori, which causes stomach ulcers, can be detected using protein-based tests. In addition, an increasing number of highly specific and accurate DNA amplification-based identification assays can now detect pathogens such as antibiotic-resistant enteric bacteria, herpes simplex virus, varicella-zoster virus, and many others.

Molecular Analysis of DNA
In this subsection, we will outline some of the basic methods used for separating and visualizing specific fragments of DNA that are of interest to a scientist. Some of these methods do not require knowledge of the complete sequence of the DNA molecule. Before the advent of rapid DNA sequencing, these methods were the only ones available to work with DNA, but they still form the basic arsenal of tools used by molecular geneticists to study the body's responses to microbial and other diseases.

Nucleic Acid Probing
DNA molecules are small, and the information contained in their sequence is invisible. How does a researcher isolate a particular stretch of DNA, or having isolated it, determine what organism it is from, what its sequence is, or what its function is? One method to identify the presence of a certain DNA sequence uses artificially constructed pieces of DNA called probes. Probes can be used to identify different bacterial species in the environment, and many DNA probes are now available to detect pathogens clinically. For example, DNA probes are used to detect the vaginal pathogens Candida albicans, Gardnerella vaginalis, and Trichomonas vaginalis.

To screen a genomic library for a particular gene or sequence of interest, researchers must know something about that gene. If researchers have a portion of the sequence of DNA for the gene of interest, they can design a DNA probe, a single-stranded DNA fragment that is complementary to part of the gene of interest and different from other DNA sequences in the sample.
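Conceptually, probe hybridization reduces to a complementarity search. The short sketch below (not from the text) finds where a probe would anneal on a denatured single strand; both sequences are invented, and real probe design would also weigh melting temperature and uniqueness against the whole sample.

```python
# Illustrative only: a probe anneals wherever the denatured sample strand
# is complementary (and antiparallel) to it. Both sequences are invented.

COMP = str.maketrans("ACGT", "TGCA")

def binding_sites(sample_strand: str, probe: str) -> list[int]:
    """Positions on the sample strand where the probe would anneal."""
    target = probe.translate(COMP)[::-1]  # the sequence the probe pairs with
    return [i for i in range(len(sample_strand) - len(target) + 1)
            if sample_strand[i:i + len(target)] == target]

sample = "GGGTACCGTTAGCAAATACCGTT"
probe = "AACGGTA"  # written 5'->3'
print(binding_sites(sample, probe))  # [3, 16]
```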
The DNA probe may be synthesized chemically by commercial laboratories, or it may be created by cloning, isolating, and denaturing a DNA fragment from a living organism. In either case, the DNA probe must be labeled with a molecular tag or beacon, such as a radioactive phosphorus atom (as is used for autoradiography ) or a fluorescent dye (as is used in fluorescent in situ hybridization, or FISH), so that the probe and the DNA it binds to can be seen ( Figure 12.13 ). The DNA sample being probed must also be denatured to make it single-stranded so that the single-stranded DNA probe can anneal to the single-stranded DNA sample at locations where their sequences are complementary. While these techniques are valuable for diagnosis, their direct use on sputum and other bodily samples may be problematic due to the complex nature of these samples. DNA often must first be isolated from bodily samples through chemical extraction methods before a DNA probe can be used to identify pathogens. Clinical Focus Part 2 The mild, flu-like symptoms that Kayla is experiencing could be caused by any number of infectious agents. In addition, several non-infectious autoimmune conditions, such as multiple sclerosis, systemic lupus erythematosus (SLE), and amyotrophic lateral sclerosis (ALS), also have symptoms that are consistent with Kayla’s early symptoms. However, over the course of several weeks, Kayla’s symptoms worsened. She began to experience joint pain in her knees, heart palpitations, and a strange limpness in her facial muscles. In addition, she suffered from a stiff neck and painful headaches. Reluctantly, she decided it was time to seek medical attention. Do Kayla’s new symptoms provide any clues as to what type of infection or other medical condition she may have? What tests or tools might a health-care provider use to pinpoint the pathogen causing Kayla’s symptoms? Jump to the next Clinical Focus box. Go back to the previous Clinical Focus box. Agarose Gel Electrophoresis There are a number of situations in which a researcher might want to physically separate a collection of DNA fragments of different sizes. A researcher may also digest a DNA sample with a restriction enzyme to form fragments. The resulting size and fragment distribution pattern can often yield useful information about the sequence of DNA bases that can be used, much like a bar-code scan, to identify the individual or species to which the DNA belongs. Gel electrophoresis is a technique commonly used to separate biological molecules based on size and biochemical characteristics, such as charge and polarity. Agarose gel electrophoresis is widely used to separate DNA (or RNA) of varying sizes that may be generated by restriction enzyme digestion or by other means, such as the PCR ( Figure 12.14 ). Due to its negatively charged backbone, DNA is strongly attracted to a positive electrode. In agarose gel electrophoresis, the gel is oriented horizontally in a buffer solution. Samples are loaded into sample wells on the side of the gel closest to the negative electrode, then drawn through the molecular sieve of the agarose matrix toward the positive electrode. The agarose matrix impedes the movement of larger molecules through the gel, whereas smaller molecules pass through more readily. Thus, the distance of migration is inversely correlated to the size of the DNA fragment, with smaller fragments traveling a longer distance through the gel. 
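As the next passage describes, fragment sizes are read off a gel by comparison with a ladder of known fragments, and since migration distance is roughly linear in the logarithm of fragment size, that comparison can be written as a simple interpolation. In this sketch (not from the text), all ladder sizes and distances are invented for illustration.

```python
import math

# Illustrative only: migration distance is roughly linear in log10(size),
# so an unknown band is sized by interpolating against a ladder of known
# fragments run on the same gel. All distances here are invented.

ladder = [(10000, 1.0), (5000, 1.9), (2000, 3.1),
          (1000, 4.0), (500, 4.9), (250, 5.8)]  # (bp, distance in cm)

def estimate_size(distance_cm: float) -> float:
    """Interpolate log10(size) linearly between the flanking ladder bands."""
    for (s1, d1), (s2, d2) in zip(ladder, ladder[1:]):
        if d1 <= distance_cm <= d2:
            frac = (distance_cm - d1) / (d2 - d1)
            log_size = math.log10(s1) + frac * (math.log10(s2) - math.log10(s1))
            return 10 ** log_size
    raise ValueError("band ran outside the ladder range")

print(round(estimate_size(3.5)))  # ~1470 bp, between the 2000 and 1000 bp bands
```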
Sizes of DNA fragments within a sample can be estimated by comparison to fragments of known size in a DNA ladder also run on the same gel. To separate very large DNA fragments, such as chromosomes or viral genomes, agarose gel electrophoresis can be modified by periodically alternating the orientation of the electric field during pulsed-field gel electrophoresis (PFGE) . In PFGE , smaller fragments can reorient themselves and migrate slightly faster than larger fragments and this technique can thus serve to separate very large fragments that would otherwise travel together during standard agarose gel electrophoresis. In any of these electrophoresis techniques, the locations of the DNA or RNA fragments in the gel can be detected by various methods. One common method is adding ethidium bromide , a stain that inserts into the nucleic acids at non-specific locations and can be visualized when exposed to ultraviolet light. Other stains that are safer than ethidium bromide, a potential carcinogen, are now available. Restriction Fragment Length Polymorphism (RFLP) Analysis Restriction enzyme recognition sites are short (only a few nucleotides long), sequence-specific palindromes, and may be found throughout the genome. Thus, differences in DNA sequences in the genomes of individuals will lead to differences in distribution of restriction-enzyme recognition sites that can be visualized as distinct banding patterns on a gel after agarose gel electrophoresis. Restriction fragment length polymorphism (RFLP) analysis compares DNA banding patterns of different DNA samples after restriction digestion ( Figure 12.15 ). RFLP analysis has many practical applications in both medicine and forensic science . For example, epidemiologists use RFLP analysis to track and identify the source of specific microorganisms implicated in outbreaks of food poisoning or certain infectious diseases. RFLP analysis can also be used on human DNA to determine inheritance patterns of chromosomes with variant genes, including those associated with heritable diseases or to establish paternity . Forensic scientists use RFLP analysis as a form of DNA fingerprinting , which is useful for analyzing DNA obtained from crime scenes, suspects, and victims. DNA samples are collected, the numbers of copies of the sample DNA molecules are increased using PCR , and then subjected to restriction enzyme digestion and agarose gel electrophoresis to generate specific banding patterns. By comparing the banding patterns of samples collected from the crime scene against those collected from suspects or victims, investigators can definitively determine whether DNA evidence collected at the scene was left behind by suspects or victims. Southern Blots and Modifications Several molecular techniques capitalize on sequence complementarity and hybridization between nucleic acids of a sample and DNA probes. Typically, probing nucleic-acid samples within a gel is unsuccessful because as the DNA probe soaks into a gel, the sample nucleic acids within the gel diffuse out. Thus, blotting techniques are commonly used to transfer nucleic acids to a thin, positively charged membrane made of nitrocellulose or nylon. In the Southern blot technique, developed by Sir Edwin Southern in 1975, DNA fragments within a sample are first separated by agarose gel electrophoresis and then transferred to a membrane through capillary action ( Figure 12.16 ). 
The DNA fragments that bind to the surface of the membrane are then exposed to a specific single-stranded DNA probe labeled with a radioactive or fluorescent molecular beacon to aid in detection. Southern blots may be used to detect the presence of certain DNA sequences in a given DNA sample. Once the target DNA within the membrane is visualized, researchers can cut out the portion of the membrane containing the fragment to recover the DNA fragment of interest. Variations of the Southern blot—the dot blot, slot blot, and the spot blot—do not involve electrophoresis, but instead concentrate DNA from a sample into a small location on a membrane. After hybridization with a DNA probe, the signal intensity detected is measured, allowing the researcher to estimate the amount of target DNA present within the sample. A colony blot is another variation of the Southern blot in which colonies representing different clones in a genomic library are transferred to a membrane by pressing the membrane onto the culture plate. The cells on the membrane are lysed and the membrane can then be probed to determine which colonies within a genomic library harbor the target gene. Because the colonies on the plate are still growing, the cells of interest can be isolated from the plate. In the northern blot , another variation of the Southern blot, RNA (not DNA) is immobilized on the membrane and probed. Northern blots are typically used to detect the amount of mRNA made through gene expression within a tissue or organism sample. Microarray Analysis Another technique that capitalizes on the hybridization between complementary nucleic acid sequences is called microarray analysis . Microarray analysis is useful for the comparison of gene-expression patterns between different cell types—for example, cells infected with a virus versus uninfected cells, or cancerous cells versus healthy cells ( Figure 12.17 ). Typically, DNA or cDNA from an experimental sample is deposited on a glass slide alongside known DNA sequences. Each slide can hold more than 30,000 different DNA fragment types. Distinct DNA fragments (encompassing an organism’s entire genomic library) or cDNA fragments (corresponding to an organism’s full complement of expressed genes) can be individually spotted on a glass slide. Once deposited on the slide, genomic DNA or mRNA can be isolated from the two samples for comparison. If mRNA is isolated, it is reverse-transcribed to cDNA using reverse transcriptase. Then the two samples of genomic DNA or cDNA are labeled with different fluorescent dyes (typically red and green). The labeled genomic DNA samples are then combined in equal amounts, added to the microarray chip, and allowed to hybridize to complementary spots on the microarray. Hybridization of sample genomic DNA molecules can be monitored by measuring the intensity of fluorescence at particular spots on the microarray. Differences in the amount of hybridization between the samples can be readily observed. If only one sample’s nucleic acids hybridize to a particular spot on the microarray, then that spot will appear either green or red. However, if both samples’ nucleic acids hybridize, then the spot will appear yellow due to the combination of the red and green dyes. Although microarray technology allows for a holistic comparison between two samples in a short time, it requires sophisticated (and expensive) detection equipment and analysis software. Because of the expense, this technology is typically limited to research settings. 
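The red/green/yellow readout just described amounts to a per-spot ratio test. The toy sketch below (not from the text) makes that explicit; the intensity values and the two-fold threshold are invented.

```python
# Illustrative only: each microarray spot reports red and green intensities;
# their ratio says which sample's nucleic acids hybridized there. The
# intensity values and the two-fold threshold are invented.

spots = {"geneA": (850, 40), "geneB": (30, 900), "geneC": (500, 520)}

def call_spot(red: float, green: float, fold: float = 2.0) -> str:
    if red >= fold * green:
        return "red: expressed mainly in sample 1"
    if green >= fold * red:
        return "green: expressed mainly in sample 2"
    return "yellow: expressed in both samples"

for gene, (red, green) in spots.items():
    print(gene, "->", call_spot(red, green))
```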
Researchers have used microarray analysis to study how gene expression is affected in organisms that are infected by bacteria or viruses or subjected to certain chemical treatments.

Link to Learning
Explore microchip technology at this interactive website.

Check Your Understanding
What does a DNA probe consist of?
Why is a Southern blot used after gel electrophoresis of a DNA digest?

Molecular Analysis of Proteins
In many cases it may not be desirable or possible to study DNA or RNA directly. Proteins can provide species-specific information for identification as well as important information about how and whether a cell or tissue is responding to the presence of a pathogenic microorganism. Various proteins require different methods for isolation and characterization.

Polyacrylamide Gel Electrophoresis
A variation of gel electrophoresis, called polyacrylamide gel electrophoresis (PAGE), is commonly used for separating proteins. In PAGE, the gel matrix is finer and composed of polyacrylamide instead of agarose. Additionally, PAGE is typically performed using a vertical gel apparatus (Figure 12.18). Because of the varying charges associated with amino acid side chains, PAGE can be used to separate intact proteins based on their net charges. Alternatively, proteins can be denatured and coated with a negatively charged detergent called sodium dodecyl sulfate (SDS), masking the native charges and allowing separation based on size only. PAGE can be further modified to separate proteins based on two characteristics, such as their charges at various pHs as well as their size, through the use of two-dimensional PAGE. In any of these cases, following electrophoresis, proteins are visualized through staining, commonly with either Coomassie blue or a silver stain.

Check Your Understanding
On what basis are proteins separated in SDS-PAGE?
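Because SDS masks native charge, relative migration in SDS-PAGE can be predicted from size alone. The sketch below (not from the text) simply sorts a few proteins by approximate molecular weight to show the expected band order; the weights are approximate textbook values, and no real gel physics is modeled.

```python
# Illustrative only: once SDS masks native charge, migration order in
# SDS-PAGE depends on size alone, with the smallest proteins running
# farthest. Molecular weights are approximate textbook values (kDa).

proteins = {"lysozyme": 14.3, "ovalbumin": 44.3,
            "serum albumin": 66.5, "beta-galactosidase": 116.3}

# Sorting ascending by mass gives the expected band order, farthest first.
for name, kda in sorted(proteins.items(), key=lambda kv: kv[1]):
    print(f"{name:>18}: {kda:6.1f} kDa")
```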
Clinical Focus Part 3
When Kayla described her symptoms, her physician at first suspected bacterial meningitis, which is consistent with her headaches and stiff neck. However, she soon ruled this out as a possibility because meningitis typically progresses more quickly than what Kayla was experiencing. Many of her symptoms still paralleled those of amyotrophic lateral sclerosis (ALS) and systemic lupus erythematosus (SLE), and the physician also considered Lyme disease a possibility given how much time Kayla spends in the woods. Kayla did not recall any recent tick bites (the typical means by which Lyme disease is transmitted) and she did not have the typical bull's-eye rash associated with Lyme disease (Figure 12.19). However, 20–30% of patients with Lyme disease never develop this rash, so the physician did not want to rule it out. Kayla's doctor ordered an MRI of her brain, a complete blood count to test for anemia, blood tests assessing liver and kidney function, and additional tests to confirm or rule out SLE or Lyme disease. Her test results were inconsistent with both SLE and ALS, and the result of the test looking for Lyme disease antibodies was "equivocal," meaning inconclusive. Having ruled out ALS and SLE, Kayla's doctor decided to run additional tests for Lyme disease.
Why would Kayla's doctor still suspect Lyme disease even if the test results did not detect Lyme antibodies in the blood?
What type of molecular test might be used for the detection of blood antibodies to Lyme disease?
Jump to the next Clinical Focus box. Go back to the previous Clinical Focus box.

Amplification-Based DNA Analysis Methods
Various methods can be used for obtaining sequences of DNA, which are useful for studying disease-causing organisms. With the advent of rapid sequencing technology, our knowledge base of the entire genomes of pathogenic organisms has grown phenomenally. We start with a description of the polymerase chain reaction, which is not a sequencing method but has allowed researchers and clinicians to obtain the large quantities of DNA needed for sequencing and other studies. The polymerase chain reaction eliminates the dependence we once had on cells to make multiple copies of DNA, achieving the same result through relatively simple reactions outside the cell.

Polymerase Chain Reaction (PCR)
Most methods of DNA analysis, such as restriction enzyme digestion and agarose gel electrophoresis, or DNA sequencing require large amounts of a specific DNA fragment. In the past, large amounts of DNA were produced by growing the host cells of a genomic library. However, libraries take time and effort to prepare and DNA samples of interest often come in minute quantities. The polymerase chain reaction (PCR) permits rapid amplification in the number of copies of specific DNA sequences for further analysis (Figure 12.20). One of the most powerful techniques in molecular biology, PCR was developed in 1983 by Kary Mullis while at Cetus Corporation. PCR has specific applications in research, forensic, and clinical laboratories, including:
determining the sequence of nucleotides in a specific region of DNA
amplifying a target region of DNA for cloning into a plasmid vector
identifying the source of a DNA sample left at a crime scene
analyzing samples to determine paternity
comparing samples of ancient DNA with modern organisms
determining the presence of difficult to culture, or unculturable, microorganisms in humans or environmental samples

PCR is an in vitro laboratory technique that takes advantage of the natural process of DNA replication. The heat-stable DNA polymerase enzymes used in PCR are derived from hyperthermophilic prokaryotes. Taq DNA polymerase, commonly used in PCR, is derived from the Thermus aquaticus bacterium isolated from a hot spring in Yellowstone National Park. DNA replication requires the use of primers for the initiation of replication to have free 3ʹ-hydroxyl groups available for the addition of nucleotides by DNA polymerase. However, while primers composed of RNA are normally used in cells, DNA primers are used for PCR. DNA primers are preferable due to their stability, and DNA primers with known sequences targeting a specific DNA region can be chemically synthesized commercially. These DNA primers are functionally similar to the DNA probes used for the various hybridization techniques described earlier, binding to specific targets due to complementarity between the target DNA sequence and the primer.

PCR occurs over multiple cycles, each containing three steps: denaturation, annealing, and extension. Machines called thermal cyclers are used for PCR; these machines can be programmed to automatically cycle through the temperatures required at each step (Figure 12.1). First, double-stranded template DNA containing the target sequence is denatured at approximately 95 °C. The high temperature required to physically (rather than enzymatically) separate the DNA strands is the reason the heat-stable DNA polymerase is required. Next, the temperature is lowered to approximately 50 °C.
This allows the DNA primers complementary to the ends of the target sequence to anneal (stick) to the template strands, with one primer annealing to each strand. Finally, the temperature is raised to 72 °C, the optimal temperature for the activity of the heat-stable DNA polymerase, allowing for the addition of nucleotides to the primer using the single-stranded target as a template. Each cycle doubles the number of double-stranded target DNA copies. Typically, PCR protocols include 25–40 cycles, allowing for the amplification of a single target sequence by a factor of tens of millions to over a trillion.

In contrast, natural DNA replication copies the entire genome and initiates at one or more origin sites; its primers are made during replication rather than supplied beforehand, and they are not limited to a few specific sequences. PCR targets specific regions of a DNA sample using sequence-specific primers. In recent years, a variety of isothermal PCR amplification methods that circumvent the need for thermal cycling have been developed, taking advantage of accessory proteins that aid in the DNA replication process. As the development of these methods continues and their use becomes more widespread in research, forensic, and clinical labs, thermal cyclers may become obsolete.

Link to Learning
Deepen your understanding of the polymerase chain reaction by viewing this animation and working through an interactive exercise.

PCR Variations
Several later modifications to PCR further increase the utility of this technique. Reverse transcriptase PCR (RT-PCR) is used for obtaining DNA copies of a specific mRNA molecule. RT-PCR begins with the use of the reverse transcriptase enzyme to convert mRNA molecules into cDNA. That cDNA is then used as a template for traditional PCR amplification. RT-PCR can detect whether a specific gene has been expressed in a sample.

Another recent application of PCR is real-time PCR, also known as quantitative PCR (qPCR). Standard PCR and RT-PCR protocols are not quantitative because any one of the reagents may become limiting before all of the cycles within the protocol are complete, and samples are only analyzed at the end. Because it is not possible to determine when in the PCR or RT-PCR protocol a given reagent has become limiting, it is not possible to know how many cycles were completed prior to this point, and thus it is not possible to determine how many original template molecules were present in the sample at the start of PCR. In qPCR, however, the use of fluorescence allows one to monitor the increase in a double-stranded template during a PCR reaction as it occurs. These kinetics data can then be used to quantify the amount of the original target sequence. The use of qPCR in recent years has further expanded the capabilities of PCR, allowing researchers to determine the number of DNA copies, and sometimes organisms, present in a sample. In clinical settings, qRT-PCR is used to determine viral load in HIV-positive patients to evaluate the effectiveness of their therapy.
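The doubling arithmetic behind both of these points fits in a few lines. The sketch below (not from the text) assumes an idealized reaction with 100% efficiency per cycle; real reactions run below this, which is one reason quantification in practice relies on calibration.

```python
# Illustrative only: ideal PCR doubles the target every cycle, so n cycles
# give 2**n copies per starting template; qPCR inverts this, reading the
# threshold cycle (Ct) as a measure of starting template. Real reactions
# run below 100% efficiency, so these numbers are upper bounds.

def copies_after(cycles: int, start_copies: int = 1) -> int:
    return start_copies * 2 ** cycles

for n in (25, 30, 40):
    print(f"{n} cycles -> {copies_after(n):.2e} copies per template")
# 25 cycles -> 3.36e+07; 40 cycles -> 1.10e+12

def fold_difference(ct_sample: float, ct_reference: float) -> float:
    """Relative starting amount, assuming perfect doubling each cycle."""
    return 2 ** (ct_reference - ct_sample)

# A sample crossing the threshold 3 cycles earlier began with ~8x more template:
print(fold_difference(ct_sample=21.0, ct_reference=24.0))  # 8.0
```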
DNA Sequencing
A basic sequencing technique is the chain termination method, also known as the dideoxy method or the Sanger DNA sequencing method, developed by Frederick Sanger in 1977. The chain termination method involves DNA replication of a single-stranded template with the use of a DNA primer to initiate synthesis of a complementary strand, DNA polymerase, a mix of the four regular deoxynucleotide (dNTP) monomers, and a small proportion of dideoxynucleotides (ddNTPs), each labeled with a molecular beacon. The ddNTPs are monomers missing a hydroxyl group (–OH) at the site at which another nucleotide usually attaches to form a chain (Figure 12.21). Every time a ddNTP is randomly incorporated into the growing complementary strand, it terminates the process of DNA replication for that particular strand. This results in multiple short strands of replicated DNA that are each terminated at a different point during replication. When the reaction mixture is subjected to gel electrophoresis, the multiple newly replicated DNA strands form a ladder of differing sizes. Because the ddNTPs are labeled, each band on the gel reflects the size of the DNA strand when the ddNTP terminated the reaction.

In Sanger's day, four reactions were set up for each DNA molecule being sequenced, each reaction containing only one of the four possible ddNTPs. Each ddNTP was labeled with a radioactive phosphorus molecule. The products of the four reactions were then run in separate lanes side by side on long, narrow PAGE gels, and the bands of varying lengths were detected by autoradiography. Today, this process has been simplified with the use of ddNTPs, each labeled with a different colored fluorescent dye or fluorochrome (Figure 12.22), in one sequencing reaction containing all four possible ddNTPs for each DNA molecule being sequenced (Figure 12.23). These fluorochromes are detected by fluorescence spectroscopy. Determining the fluorescence color of each band as it passes by the detector produces the nucleotide sequence of the template strand.

Since 2005, automated sequencing techniques used by laboratories fall under the umbrella of next generation sequencing, a group of automated techniques used for rapid DNA sequencing. These methods have revolutionized the field of molecular genetics because the low-cost sequencers can generate sequences of hundreds of thousands or millions of short fragments (25 to 600 base pairs) in just one day. Although several variants of next generation sequencing technologies are made by different companies (for example, 454 Life Sciences' pyrosequencing and Illumina's Solexa technology), they all allow millions of bases to be sequenced quickly, making the sequencing of entire genomes relatively easy, inexpensive, and commonplace.

In 454 sequencing (pyrosequencing), for example, a DNA sample is fragmented into 400–600-bp single-strand fragments, modified with the addition of DNA adapters to both ends of each fragment. Each DNA fragment is then immobilized on a bead and amplified by PCR, using primers designed to anneal to the adapters, creating a bead containing many copies of that DNA fragment. Each bead is then put into a separate well containing sequencing enzymes. To the well, each of the four nucleotides is added one after the other; when each one is incorporated, pyrophosphate is released as a byproduct of polymerization, emitting a small flash of light that is recorded by a detector. This provides the order of nucleotides incorporated as a new strand of DNA is made and is an example of sequencing by synthesis. Next generation sequencers use sophisticated software to get through the cumbersome process of putting all the fragments in order. Overall, these technologies continue to advance rapidly, decreasing the cost of sequencing and increasing the availability of sequence data from a wide variety of organisms quickly.
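Returning to the chain-termination idea above, the reason a ladder of terminated fragments yields a readable sequence can be shown with a toy simulation. In this sketch (not from the text), the template is invented and the chemistry is idealized so that each position is terminated exactly once.

```python
# Illustrative only: every chain-termination product ends at a labeled
# ddNTP, so sorting products by length and reading each terminal label
# reconstructs the new strand base by base. The template is invented and
# the chemistry is idealized (each position terminated exactly once).

template = "TACGGTCA"  # bases in the order the polymerase encounters them
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

# Each product is (length, label of the terminal ddNTP).
products = [(length, PAIR[base]) for length, base in enumerate(template, start=1)]

# Electrophoresis orders products by size; reading labels from the
# shortest band to the longest spells out the complementary strand.
read = "".join(label for _, label in sorted(products))
print(read)  # ATGCCAGT
```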
The National Center for Biotechnology Information houses a widely used genetic sequence database called GenBank where researchers deposit genetic information for public use. Upon publication of sequence data, researchers upload it to GenBank, giving other researchers access to the information. The collaboration allows researchers to compare newly discovered or unknown sample sequence information with the vast array of sequence data that already exists.

Link to Learning
View an animation about 454 sequencing to deepen your understanding of this method.

Case in Point
Using a NAAT to Diagnose a C. difficile Infection
Javier, an 80-year-old patient with a history of heart disease, recently returned home from the hospital after undergoing an angioplasty procedure to insert a stent into a cardiac artery. To minimize the possibility of infection, Javier was administered intravenous broad-spectrum antibiotics during and shortly after his procedure. He was released four days after the procedure, but a week later, he began to experience mild abdominal cramping and watery diarrhea several times a day. He lost his appetite, became severely dehydrated, and developed a fever. He also noticed blood in his stool. Javier's wife called the physician, who instructed her to take him to the emergency room immediately.

The hospital staff ran several tests and found that Javier's blood creatinine levels were elevated, indicating that his kidneys were not functioning well. Javier's symptoms suggested a possible infection with Clostridium difficile, a bacterium that is resistant to many antibiotics. The hospital collected and cultured a stool sample to look for the production of toxins A and B by C. difficile, but the results came back negative. However, the negative results were not enough to rule out a C. difficile infection because culturing of C. difficile and detection of its characteristic toxins can be difficult, particularly in some types of samples. To be safe, they proceeded with a diagnostic nucleic acid amplification test (NAAT). Currently NAATs are the clinical diagnostician's gold standard for detecting the genetic material of a pathogen.

In Javier's case, qPCR was used to look for the gene encoding C. difficile toxin B (tcdB). When the qPCR analysis came back positive, the attending physician concluded that Javier was indeed suffering from a C. difficile infection and immediately prescribed the antibiotic vancomycin, to be administered intravenously. The antibiotic cleared the infection and Javier made a full recovery.

Because infections with C. difficile were becoming widespread in Javier's community, his sample was further analyzed to see whether the specific strain of C. difficile could be identified. Javier's stool sample was subjected to ribotyping and repetitive sequence-based PCR (rep-PCR) analysis. In ribotyping, a short sequence of DNA between the 16S rRNA and 23S rRNA genes is amplified and subjected to restriction digestion (Figure 12.24). This sequence varies between strains of C. difficile, so restriction enzymes will cut in different places. In rep-PCR, DNA primers designed to bind to short sequences commonly found repeated within the C. difficile genome were used for PCR. Following restriction digestion, agarose gel electrophoresis was performed in both types of analysis to examine the banding patterns that resulted from each procedure (Figure 12.25).
Rep-PCR can be used to further subtype various ribotypes, increasing resolution for detecting differences between strains. The ribotype of the strain infecting Javier was found to be ribotype 027, a strain known for its increased virulence, resistance to antibiotics, and increased prevalence in the United States, Canada, Japan, and Europe. 4
4 Patrizia Spigaglia, Fabrizio Barbanti, Anna Maria Dionisi, and Paola Mastrantonio. "Clostridium difficile Isolates Resistant to Fluoroquinolones in Italy: Emergence of PCR Ribotype 018." Journal of Clinical Microbiology 48 no. 8 (2010): 2892–2896.
How do banding patterns differ between strains of C. difficile?
Why do you think laboratory tests were unable to detect toxin production directly?

Check Your Understanding
How is PCR similar to the natural DNA replication process in cells? How is it different?
Compare RT-PCR and qPCR in terms of their respective purposes.
In chain-termination sequencing, how is the identity of each nucleotide in a sequence determined?

12.3 Whole Genome Methods and Pharmaceutical Applications of Genetic Engineering

Learning Objectives
Explain the uses of genome-wide comparative analyses
Summarize the advantages of genetically engineered pharmaceutical products

Advances in molecular biology have led to the creation of entirely new fields of science. Among these are fields that study aspects of whole genomes, collectively referred to as whole-genome methods. In this section, we'll provide a brief overview of the whole-genome fields of genomics, transcriptomics, and proteomics.

Genomics, Transcriptomics, and Proteomics
The study and comparison of entire genomes, including the complete set of genes and their nucleotide sequence and organization, is called genomics. This field has great potential for future medical advances through the study of the human genome as well as the genomes of infectious organisms. Analysis of microbial genomes has contributed to the development of new antibiotics, diagnostic tools, vaccines, medical treatments, and environmental cleanup techniques.

The field of transcriptomics is the science of the entire collection of mRNA molecules produced by cells. Scientists compare gene expression patterns between infected and uninfected host cells, gaining important information about the cellular responses to infectious disease. Additionally, transcriptomics can be used to monitor the gene expression of virulence factors in microorganisms, aiding scientists in better understanding pathogenic processes from this viewpoint.

When genomics and transcriptomics are applied to entire microbial communities, we use the terms metagenomics and metatranscriptomics, respectively. Metagenomics and metatranscriptomics allow researchers to study genes and gene expression from a collection of multiple species, many of which may not be easily cultured or cultured at all in the laboratory. A DNA microarray (discussed in the previous section) can be used in metagenomics studies.

Another up-and-coming clinical application of genomics and transcriptomics is pharmacogenomics, also called toxicogenomics, which involves evaluating the effectiveness and safety of drugs on the basis of information from an individual's genomic sequence. Genomic responses to drugs can be studied using experimental animals (such as laboratory rats or mice) or live cells in the laboratory before embarking on studies with humans. Changes in gene expression in the presence of a drug can sometimes be an early indicator of the potential for toxic effects.
Personal genome sequence information may someday be used to prescribe medications that will be most effective and least toxic on the basis of the individual patient’s genotype. The study of proteomics is an extension of genomics that allows scientists to study the entire complement of proteins in an organism, called the proteome . Even though all cells of a multicellular organism have the same set of genes, cells in various tissues produce different sets of proteins. Thus, the genome is constant, but the proteome varies and is dynamic within an organism. Proteomics may be used to study which proteins are expressed under various conditions within a single cell type or to compare protein expression patterns between different organisms. The most prominent disease being studied with proteomic approaches is cancer, but this area of study is also being applied to infectious diseases. Research is currently underway to examine the feasibility of using proteomic approaches to diagnose various types of hepatitis, tuberculosis, and HIV infection, which are rather difficult to diagnose using currently available techniques. 5 5 E.O. List, D.E. Berryman, B. Bower, L. Sackmann-Sala, E. Gosney, J. Ding, S. Okada, and J.J. Kopchick. “The Use of Proteomics to Study Infectious Diseases.” Infectious Disorders-Drug Targets (Formerly Current Drug Targets-Infectious Disorders ) 8 no. 1 (2008): 31–45. A recent and developing proteomic analysis relies on identifying proteins called biomarkers , whose expression is affected by the disease process. Biomarkers are currently being used to detect various forms of cancer as well as infections caused by pathogens such as Yersinia pestis and Vaccinia virus . 6 6 Mohan Natesan, and Robert G. Ulrich. “Protein Microarrays and Biomarkers of Infectious Disease.” International Journal of Molecular Sciences 11 no. 12 (2010): 5165–5183. Other “-omic” sciences related to genomics and proteomics include metabolomics, glycomics, and lipidomics, which focus on the complete set of small-molecule metabolites, sugars, and lipids, respectively, found within a cell. Through these various global approaches, scientists continue to collect, compile, and analyze large amounts of genetic information. This emerging field of bioinformatics can be used, among many other applications, for clues to treating diseases and understanding the workings of cells. Additionally, researchers can use reverse genetics , a technique related to classic mutational analysis , to determine the function of specific genes. Classic methods of studying gene function involved searching for the genes responsible for a given phenotype. Reverse genetics uses the opposite approach, starting with a specific DNA sequence and attempting to determine what phenotype it produces. Alternatively, scientists can attach known genes (called reporter genes) that encode easily observable characteristics to genes of interest, and the location of expression of such genes of interest can be easily monitored. This gives the researcher important information about what the gene product might be doing or where it is located in the organism. Common reporter genes include bacterial lacZ , which encodes beta-galactosidase and whose activity can be monitored by changes in colony color in the presence of X-gal as previously described, and the gene encoding the jellyfish protein green fluorescent protein (GFP) whose activity can be visualized in colonies under ultraviolet light exposure ( Figure 12.26 ). 
Check Your Understanding How is genomics different from traditional genetics? If you wanted to study how two different cells in the body respond to an infection, what –omics field would you apply? What are the biomarkers uncovered in proteomics used for? Clinical Focus Resolution Because Kayla’s symptoms were persistent and serious enough to interfere with daily activities, Kayla’s physician decided to order some laboratory tests. The physician collected samples of Kayla’s blood, cerebrospinal fluid (CSF), and synovial fluid (from one of her swollen knees) and requested PCR analysis on all three samples. The PCR tests on the CSF and synovial fluid came back positive for the presence of Borrelia burgdorferi , the bacterium that causes Lyme disease . Kayla’s physician immediately prescribed a full course of the antibiotic doxycycline . Fortunately, Kayla recovered fully within a few weeks and did not suffer from the long-term symptoms of post-treatment Lyme disease syndrome (PTLDS), which affects 10–20% of Lyme disease patients. To prevent future infections, Kayla’s physician advised her to use insect repellant and wear protective clothing during her outdoor adventures. These measures can limit exposure to Lyme-bearing ticks, which are common in many regions of the United States during the warmer months of the year. Kayla was also advised to make a habit of examining herself for ticks after returning from outdoor activities, as prompt removal of a tick greatly reduces the chances of infection. Lyme disease is often difficult to diagnose. B. burgdorferi is not easily cultured in the laboratory, and the initial symptoms can be very mild and resemble those of many other diseases. But left untreated, the symptoms can become quite severe and debilitating. In addition to two antibody tests, which were inconclusive in Kayla’s case, and the PCR test, a Southern blot could be used with B. burgdorferi -specific DNA probes to identify DNA from the pathogen. Sequencing of surface protein genes of Borrelia species is also being used to identify strains within the species that may be more readily transmitted to humans or cause more severe disease. Go back to the previous Clinical Focus box. Recombinant DNA Technology and Pharmaceutical Production Genetic engineering has provided a way to create new pharmaceutical products called recombinant DNA pharmaceuticals . Such products include antibiotic drugs, vaccines, and hormones used to treat various diseases. Table 12.1 lists examples of recombinant DNA products and their uses. For example, the naturally occurring antibiotic synthesis pathways of various Streptomyces spp., long known for their antibiotic production capabilities, can be modified to improve yields or to create new antibiotics through the introduction of genes encoding additional enzymes. More than 200 new antibiotics have been generated through the targeted inactivation of genes and the novel combination of antibiotic synthesis genes in antibiotic-producing Streptomyces hosts. 7 7 Jose-Luis Adrio and Arnold L. Demain. “Recombinant Organisms for Production of Industrial Products.” Bioengineered Bugs 1 no. 2 (2010): 116–131. Genetic engineering is also used to manufacture subunit vaccines , which are safer than other vaccines because they contain only a single antigenic molecule and lack any part of the genome of the pathogen (see Vaccines ). 
Recombinant DNA Technology and Pharmaceutical Production
Genetic engineering has provided a way to create new pharmaceutical products called recombinant DNA pharmaceuticals. Such products include antibiotic drugs, vaccines, and hormones used to treat various diseases. Table 12.1 lists examples of recombinant DNA products and their uses. For example, the naturally occurring antibiotic synthesis pathways of various Streptomyces spp., long known for their antibiotic production capabilities, can be modified to improve yields or to create new antibiotics through the introduction of genes encoding additional enzymes. More than 200 new antibiotics have been generated through the targeted inactivation of genes and the novel combination of antibiotic synthesis genes in antibiotic-producing Streptomyces hosts.[7]
[7] Jose-Luis Adrio and Arnold L. Demain. "Recombinant Organisms for Production of Industrial Products." Bioengineered Bugs 1, no. 2 (2010): 116–131.
Genetic engineering is also used to manufacture subunit vaccines, which are safer than other vaccines because they contain only a single antigenic molecule and lack any part of the genome of the pathogen (see Vaccines). For example, a vaccine for hepatitis B is created by inserting a gene encoding a hepatitis B surface protein into yeast; the yeast then produces this protein, which the human immune system recognizes as an antigen. The hepatitis B antigen is purified from yeast cultures and administered to patients as a vaccine. Even though the vaccine does not contain the hepatitis B virus, the presence of the antigenic protein stimulates the immune system to produce antibodies that will protect the patient against the virus in the event of exposure.[8][9]
[8] U.S. Department of Health and Human Services. "Types of Vaccines." 2013. http://www.vaccines.gov/more_info/types/#subunit. Accessed May 27, 2016.
[9] The Internet Drug List. Recombivax. 2015. http://www.rxlist.com/recombivax-drug.htm. Accessed May 27, 2016.
Genetic engineering has also been important in the production of other therapeutic proteins, such as insulin, interferons, and human growth hormone, to treat a variety of human medical conditions. For example, at one time, it was possible to treat diabetes only by giving patients pig insulin, which caused allergic reactions due to small differences between the proteins expressed in human and pig insulin. However, since 1978, recombinant DNA technology has been used to produce large-scale quantities of human insulin using E. coli in a relatively inexpensive process that yields a more consistently effective pharmaceutical product. Scientists have also genetically engineered E. coli capable of producing human growth hormone (HGH), which is used to treat growth disorders in children and certain other disorders in adults. The HGH gene was cloned from a cDNA library and inserted into E. coli cells by cloning it into a bacterial vector. Genetic engineering may eventually be used to produce DNA vaccines and various gene therapies, as well as customized medicines for fighting cancer and other diseases.

Table 12.1 Some Genetically Engineered Pharmaceutical Products and Applications
Atrial natriuretic peptide: treatment of heart disease (e.g., congestive heart failure), kidney disease, high blood pressure
DNase: treatment of viscous lung secretions in cystic fibrosis
Erythropoietin: treatment of severe anemia with kidney damage
Factor VIII: treatment of hemophilia
Hepatitis B vaccine: prevention of hepatitis B infection
Human growth hormone: treatment of growth hormone deficiency, Turner's syndrome, burns
Human insulin: treatment of diabetes
Interferons: treatment of multiple sclerosis, various cancers (e.g., melanoma), viral infections (e.g., hepatitis B and C)
Tetracenomycins: used as antibiotics
Tissue plasminogen activator: treatment of pulmonary embolism in ischemic stroke, myocardial infarction

Check Your Understanding
What bacterium has been genetically engineered to produce human insulin for the treatment of diabetes?
Explain how microorganisms can be engineered to produce vaccines.

RNA Interference Technology
In Structure and Function of RNA, we described the function of mRNA, rRNA, and tRNA. In addition to these types of RNA, cells also produce several types of small noncoding RNA molecules that are involved in the regulation of gene expression. These include antisense RNA molecules, which are complementary to regions of specific mRNA molecules and are found in both prokaryotes and eukaryotic cells. Noncoding RNA molecules play a major role in RNA interference (RNAi), a natural regulatory mechanism by which mRNA molecules are prevented from guiding the synthesis of proteins. RNA interference of specific genes results from the base pairing of short, single-stranded antisense RNA molecules to regions within complementary mRNA molecules, preventing protein synthesis. Cells use RNA interference to protect themselves from viral invasion, which may introduce double-stranded RNA molecules as part of the viral replication process (Figure 12.27). Researchers are currently developing techniques to mimic the natural process of RNA interference as a way to treat viral infections in eukaryotic cells. RNA interference technology involves using small interfering RNAs (siRNAs) or microRNAs (miRNAs) (Figure 12.28). siRNAs are completely complementary to the mRNA transcript of a specific gene of interest, while miRNAs are mostly complementary. These double-stranded RNAs are bound by DICER, an endonuclease that cleaves the RNA into short molecules (approximately 20 nucleotides long). The RNAs are then bound by the RNA-induced silencing complex (RISC), a ribonucleoprotein. The siRNA-RISC complex binds to mRNA and cleaves it. For miRNA, only one of the two strands binds to RISC. The miRNA-RISC complex then binds to mRNA, inhibiting translation. If the miRNA is completely complementary to the target gene, then the mRNA can be cleaved. Taken together, these mechanisms are known as gene silencing.
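The core of this mechanism, complementary base pairing marking a message for destruction, can be imitated in a few lines. In this sketch (plain Python; all sequences are invented), a dsRNA trigger is "diced" into 20-nucleotide fragments, and a message is reported as silenced if a fragment's antisense strand pairs perfectly with it, loosely mimicking the siRNA-RISC pathway:

# Toy simulation of RNA interference (all sequences invented for illustration).

RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def dice(long_rna, size=20):
    """Mimic DICER: chop a long RNA into fragments of about `size` nucleotides."""
    return [long_rna[i:i + size] for i in range(0, len(long_rna) - size + 1, size)]

def silenced(mrna, sirna):
    """Mimic RISC targeting: perfect antisense pairing marks the mRNA for cleavage."""
    antisense = "".join(RNA_COMPLEMENT[base] for base in reversed(sirna))
    return antisense in mrna

mrna = "AUGGCUUACGGAUCCUUAGCAAUGCCCGGGAAAUUU"
trigger = "AAAUUUCCCGGGCAUUGCUA"  # one strand of an invading dsRNA

for fragment in dice(trigger):
    print(fragment, "->", "silenced" if silenced(mrna, fragment) else "no match")

A miRNA-style rule would relax the exact-match test to allow partial pairing, which is why miRNAs usually block translation rather than cleave the message.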
12.4 Gene Therapy
Learning Objectives
Summarize the mechanisms, risks, and potential benefits of gene therapy
Identify ethical issues involving gene therapy and the regulatory agencies that provide oversight for clinical trials
Compare somatic-cell and germ-line gene therapy

Many types of genetic engineering have yielded clear benefits with few apparent risks. Few would question, for example, the value of our now abundant supply of human insulin produced by genetically engineered bacteria. However, many emerging applications of genetic engineering are much more controversial, often because their potential benefits are pitted against significant risks, real or perceived. This is certainly the case for gene therapy, a clinical application of genetic engineering that may one day provide a cure for many diseases but is still largely an experimental approach to treatment.

Mechanisms and Risks of Gene Therapy
Human diseases that result from genetic mutations are often difficult to treat with drugs or other traditional forms of therapy because the signs and symptoms of disease result from abnormalities in a patient's genome. For example, a patient may have a genetic mutation that prevents the expression of a specific protein required for the normal function of a particular cell type. This is the case in patients with severe combined immunodeficiency (SCID), a genetic disease that impairs the function of certain white blood cells essential to the immune system. Gene therapy attempts to correct genetic abnormalities by introducing a nonmutated, functional gene into the patient's genome. The nonmutated gene encodes a functional protein that the patient would otherwise be unable to produce. Viral vectors such as adenovirus are sometimes used to introduce the functional gene; part of the viral genome is removed and replaced with the desired gene (Figure 12.29). More advanced forms of gene therapy attempt to correct the mutation at the original site in the genome, as is the case with treatment of SCID. So far, gene therapies have proven relatively ineffective, with the possible exceptions of treatments for cystic fibrosis and adenosine deaminase deficiency, a type of SCID.
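The vector-construction step just described, removing part of the viral genome and swapping in the therapeutic gene, amounts to simple sequence surgery. This sketch (plain Python; the sequences are invented placeholders, not a real adenovirus genome) shows the idea:

# Toy model of building a gene-therapy vector (all values invented for illustration).

def build_vector(viral_genome, dispensable_region, therapeutic_gene):
    """Delete a dispensable viral region and splice the therapeutic gene in its place."""
    if dispensable_region not in viral_genome:
        raise ValueError("region to remove not found in this genome")
    return viral_genome.replace(dispensable_region, therapeutic_gene, 1)

viral_genome = "ATGCCGTTAGGCAAATTTCCCGGGTAGCTAGCTA"
dispensable = "AAATTTCCCGGG"        # stand-in for a removable viral gene
functional_gene = "ATGGAAGATTGA"    # stand-in for the nonmutated human gene

recombinant_vector = build_vector(viral_genome, dispensable, functional_gene)
print(recombinant_vector)

Real vector design is far more involved (packaging limits, promoters, rendering the virus replication-deficient), but this replace-and-insert logic is the essence of Figure 12.29.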
Other trials have shown the clear hazards of attempting genetic manipulation in complex multicellular organisms like humans. In some patients, the use of an adenovirus vector can trigger an unanticipated inflammatory response from the immune system, which may lead to organ failure. Moreover, because viruses can often target multiple cell types, the vector may infect cells that were not targeted for the therapy, damaging those cells and possibly leading to illnesses such as cancer. Another potential risk is that the modified virus could revert to being infectious and cause disease in the patient. Lastly, there is a risk that the inserted gene could unintentionally inactivate another important gene in the patient's genome, disrupting normal cell cycling and possibly leading to tumor formation and cancer. Because gene therapy involves so many risks, candidates for gene therapy need to be fully informed of these risks before providing informed consent to undergo the therapy.

Case in Point: Gene Therapy Gone Wrong
The risks of gene therapy were realized in the 1999 case of Jesse Gelsinger, an 18-year-old patient who received gene therapy as part of a clinical trial at the University of Pennsylvania. Jesse received gene therapy for a condition called ornithine transcarbamylase (OTC) deficiency, which leads to ammonia accumulation in the blood due to deficient ammonia processing. Four days after the treatment, Jesse died after a massive immune response to the adenovirus vector.[10]
[10] Barbara Sibbald. "Death but One Unintended Consequence of Gene-Therapy Trial." Canadian Medical Association Journal 164, no. 11 (2001): 1612.
Until that point, researchers had not seriously considered an immune response to the vector to be a legitimate risk, but on investigation, it appears that they had some evidence suggesting that this was a possible outcome. Prior to Jesse's treatment, several other human patients had suffered side effects of the treatment, and three monkeys used in a trial had died as a result of inflammation and clotting disorders. Despite this information, it appears that neither Jesse nor his family were made aware of these outcomes when they consented to the therapy. Jesse's death was the first patient death due to a gene-therapy treatment, and it resulted in the immediate halting of the clinical trial in which he was involved, the subsequent halting of all other gene therapy trials at the University of Pennsylvania, and the investigation of all other gene therapy trials in the United States. As a result, the regulation and oversight of gene therapy overall was reexamined, resulting in new regulatory protocols that are still in place today.

Check Your Understanding
Explain how gene therapy works in theory.
Identify some risks of gene therapy.

Oversight of Gene Therapy
Presently, there is significant oversight of gene therapy clinical trials. At the federal level, three agencies regulate gene therapy in parallel: the Food and Drug Administration (FDA), the Office for Human Research Protections (OHRP), and the Recombinant DNA Advisory Committee (RAC) at the National Institutes of Health (NIH). Along with several local agencies, these federal agencies interact with the institutional review board to ensure that protocols are in place to protect patient safety during clinical trials. Compliance with these protocols is enforced mostly on the local level in cooperation with the federal agencies.
Gene therapies currently receive more extensive federal and local review than other types of therapies, which are typically reviewed only by the FDA. Some researchers believe that these extensive regulations actually inhibit progress in gene therapy research. In 2013, the Institute of Medicine (now the National Academy of Medicine) called upon the NIH to relax its review of gene therapy trials in most cases.[11] However, ensuring patient safety continues to be of utmost concern.
[11] Kerry Grens. "Report: Ease Gene Therapy Reviews." The Scientist, December 9, 2013. http://www.the-scientist.com/?articles.view/articleNo/38577/title/Report--Ease-Gene-Therapy-Reviews/. Accessed May 27, 2016.

Ethical Concerns
Beyond the health risks of gene therapy, the ability to genetically modify humans poses a number of ethical issues related to the limits of such "therapy." While current research is focused on gene therapy for genetic diseases, scientists might one day apply these methods to manipulate other genetic traits not perceived as desirable. This raises questions such as:
Which genetic traits are worthy of being "corrected"?
Should gene therapy be used for cosmetic reasons or to enhance human abilities?
Should genetic manipulation be used to impart desirable traits to the unborn?
Is everyone entitled to gene therapy, or could the cost of gene therapy create new forms of social inequality?
Who should be responsible for regulating and policing inappropriate use of gene therapies?
The ability to alter reproductive cells using gene therapy could also generate new ethical dilemmas. To date, the various types of gene therapies have been targeted to somatic cells, the non-reproductive cells within the body. Because somatic cell traits are not inherited, any genetic changes accomplished by somatic-cell gene therapy would not be passed on to offspring. However, should scientists successfully introduce new genes into germ cells (eggs or sperm), the resulting traits could be passed on to offspring. This approach, called germ-line gene therapy, could potentially be used to combat heritable diseases, but it could also lead to unintended consequences for future generations. Moreover, there is the question of informed consent, because those affected by germ-line gene therapy are unborn and therefore unable to choose whether to receive the therapy. For these reasons, the U.S. government does not currently fund research projects investigating germ-line gene therapies in humans.

Eye on Ethics: Risky Gene Therapies
While there are currently no gene therapies on the market in the United States, many are in the pipeline and it is likely that some will eventually be approved. With recent advances in gene therapies targeting p53, a gene whose somatic-cell mutations have been implicated in over 50% of human cancers,[12] cancer treatments through gene therapies could become much more widespread once they reach the commercial market.
[12] Zhen Wang and Yi Sun. "Targeting p53 for Novel Anticancer Therapy." Translational Oncology 3, no. 1 (2010): 1–12.
Bringing any new therapy to market poses ethical questions that pit the expected benefits against the risks. How quickly should new therapies be brought to the market? How can we ensure that new therapies have been sufficiently tested for safety and effectiveness before they are marketed to the public?
The process by which new therapies are developed and approved complicates such questions, as those involved in the approval process are often under significant pressure to get a new therapy approved even in the face of significant risks. To receive FDA approval for a new therapy, researchers must collect significant laboratory data from animal trials and submit an Investigational New Drug (IND) application to the FDA's Center for Drug Evaluation and Research (CDER). Following a 30-day waiting period during which the FDA reviews the IND, clinical trials involving human subjects may begin. If the FDA perceives a problem prior to or during the clinical trial, it can order a "clinical hold" until any problems are addressed. During clinical trials, researchers collect and analyze data on the therapy's effectiveness and safety, including any side effects observed. Once the therapy meets FDA standards for effectiveness and safety, the developers can submit a New Drug Application (NDA) that details how the therapy will be manufactured, packaged, monitored, and administered.
Because new gene therapies are frequently the result of many years (even decades) of laboratory and clinical research, they require a significant financial investment. By the time a therapy has reached the clinical trials stage, the financial stakes are high for pharmaceutical companies and their shareholders. This creates potential conflicts of interest that can sometimes affect the objective judgment of researchers, their funders, and even trial participants. The Jesse Gelsinger case (see Case in Point: Gene Therapy Gone Wrong) is a classic example. Faced with a life-threatening disease and no reasonable treatments available, it is easy to see why a patient might be eager to participate in a clinical trial, no matter the risks. It is also easy to see how a researcher might view the short-term risks for a small group of study participants as a small price to pay for the potential benefits of a game-changing new treatment.
Gelsinger's death led to increased scrutiny of gene therapy, and subsequent negative outcomes of gene therapy have resulted in the temporary halting of clinical trials pending further investigation. For example, when children in France treated with gene therapy for SCID began to develop leukemia several years after treatment, the FDA temporarily stopped clinical trials of similar types of gene therapy occurring in the United States.[13] Cases like these highlight the need for researchers and health professionals not only to value human well-being and patients' rights over profitability, but also to maintain scientific objectivity when evaluating the risks and benefits of new therapies.
[13] Erika Check. "Gene Therapy: A Tragic Setback." Nature 420, no. 6912 (2002): 116–118.

Check Your Understanding
Why is gene therapy research so tightly regulated?
What is the main ethical concern associated with germ-line gene therapy?
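As a recap of the approval pathway described above, here is a highly simplified sketch in Python. The stage names paraphrase the text, and the single clinical_hold flag is an invented simplification, not a model of actual FDA procedure:

# Highly simplified model of the IND-to-NDA pathway described above
# (illustrative only; the real process has many more steps and decision points).

APPROVAL_STAGES = [
    "animal trials and laboratory data",
    "IND application submitted to CDER",
    "30-day FDA review of the IND",
    "clinical trials in human subjects",
    "NDA: manufacturing, packaging, monitoring, administration",
    "FDA approval",
]

def run_pipeline(clinical_hold=False):
    """Walk the ordered stages; a clinical hold pauses everything at the trial stage."""
    for stage in APPROVAL_STAGES:
        if clinical_hold and stage.startswith("clinical trials"):
            print("CLINICAL HOLD: trials paused until the problem is addressed")
            return
        print("completed:", stage)

run_pipeline()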
psychology
Summary
7.1 What Is Cognition?
In this section, you were introduced to cognitive psychology, which is the study of cognition, or the brain's ability to think, perceive, plan, analyze, and remember. Concepts and their corresponding prototypes help us quickly organize our thinking by creating categories into which we can sort new information. We also develop schemata, which are clusters of related concepts. Some schemata involve routines of thought and behavior, and these help us function properly in various situations without having to "think twice" about them. Schemata show up in social situations and in routines of daily behavior.
7.2 Language
Language is a communication system that has both a lexicon and a system of grammar. Language acquisition occurs naturally and effortlessly during the early stages of life, and this acquisition occurs in a predictable sequence for individuals around the world. Language has a strong influence on thought, and how language may influence cognition remains an area of study and debate in psychology.
7.3 Problem Solving
Many different strategies exist for solving problems. Typical strategies include trial and error, applying algorithms, and using heuristics (the two are contrasted in the short sketch after this summary). To solve a large, complicated problem, it often helps to break the problem into smaller steps that can be accomplished individually, leading to an overall solution. Roadblocks to problem solving include a mental set, functional fixedness, and various biases that can cloud decision-making skills.
7.4 What Are Intelligence and Creativity?
Intelligence is a complex characteristic of cognition. Many theories have been developed to explain what intelligence is and how it works. Sternberg proposed his triarchic theory of intelligence, whereas Gardner posits that intelligence comprises many factors. Still others focus on the importance of emotional intelligence. Finally, creativity seems to be a facet of intelligence, but it is extremely difficult to measure objectively.
7.5 Measures of Intelligence
In this section, we learned about the history of intelligence testing and some of the challenges regarding intelligence testing. Intelligence testing began in earnest with Binet; Wechsler later developed intelligence tests that are still in use today: the WAIS-IV and WISC-V. The bell curve shows the range of scores that encompass average intelligence as well as standard deviations.
7.6 The Source of Intelligence
Both genetics and environment affect intelligence, as well as the challenges posed by certain learning disabilities. The intelligence levels of all individuals seem to benefit from rich stimulation in their early environments. Highly intelligent individuals, however, may have a built-in resiliency that allows them to overcome difficult obstacles in their upbringing. Learning disabilities can cause major challenges for children who are learning to read and write. Unlike developmental disabilities, learning disabilities are strictly neurological in nature and are not related to intelligence levels. Students with dyslexia, for example, may have extreme difficulty learning to read, but their intelligence levels are typically average or above average.
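To make the algorithm-versus-heuristic distinction from 7.3 concrete, here is a small sketch in Python. The coin system {1, 3, 4} is an invented teaching example, chosen because the mental shortcut visibly misfires on it:

# Algorithm vs. heuristic on a toy problem: make an amount from coins {1, 3, 4}.

def greedy_change(amount, coins=(4, 3, 1)):
    """Heuristic: always grab the largest coin that fits. Fast, but not guaranteed optimal."""
    used = []
    for coin in coins:
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used

def optimal_change(amount, coins=(1, 3, 4)):
    """Algorithm: dynamic programming checks every option, guaranteeing the fewest coins."""
    best = {0: []}  # best[t] holds the shortest list of coins summing to t
    for total in range(1, amount + 1):
        candidates = [best[total - c] + [c] for c in coins if total - c in best]
        best[total] = min(candidates, key=len)
    return best[amount]

print("heuristic:", greedy_change(6))   # [4, 1, 1]  (three coins: good enough, not best)
print("algorithm:", optimal_change(6))  # [3, 3]     (two coins: guaranteed best)

The heuristic answers instantly but settles for three coins, while the exhaustive algorithm is guaranteed to find the two-coin solution, mirroring the trade-off the section describes.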
Chapter Outline
7.1 What Is Cognition?
7.2 Language
7.3 Problem Solving
7.4 What Are Intelligence and Creativity?
7.5 Measures of Intelligence
7.6 The Source of Intelligence

Introduction
Why is it so difficult to break habits, like reaching for your ringing phone even when you shouldn't, such as when you're driving? How does a person who has never seen or touched snow in real life develop an understanding of the concept of snow? How do young children acquire the ability to learn language with no formal instruction? Psychologists who study thinking explore questions like these. Cognitive psychologists also study intelligence. What is intelligence, and how does it vary from person to person? Are "street smarts" a kind of intelligence, and if so, how do they relate to other types of intelligence? What does an IQ test really measure? These questions and more will be explored in this chapter as you study thinking and intelligence. In other chapters, we discussed the cognitive processes of perception, learning, and memory. In this chapter, we will focus on high-level cognitive processes. As a part of this discussion, we will consider thinking and briefly explore the development and use of language. We will also discuss problem solving and creativity before ending with a discussion of how intelligence is measured and how our biology and environments interact to affect intelligence. After finishing this chapter, you will have a greater appreciation of the higher-level cognitive processes that contribute to our distinctiveness as a species.
[ { "answer": { "ans_choice": 1, "ans_text": "human thinking" }, "bloom": null, "hl_context": "<hl> Cognitive psychology is the field of psychology dedicated to examining how people think . <hl> <hl> It attempts to explain how and why we think the way we do by studying the interactions among human thinking , emotion , creativity , language , and problem solving , in addition to other cognitive processes . <hl> Cognitive psychologists strive to determine and measure different types of intelligence , why some people are better at problem solving than others , and how emotional intelligence affects success in the workplace , among countless other topics . They also sometimes focus on how we organize thoughts and information gathered from our environments into meaningful categories of thought , which will be discussed later .", "hl_sentences": "Cognitive psychology is the field of psychology dedicated to examining how people think . It attempts to explain how and why we think the way we do by studying the interactions among human thinking , emotion , creativity , language , and problem solving , in addition to other cognitive processes .", "question": { "cloze_format": "Cognitive psychology is the branch of psychology that focuses on the study of ________.", "normal_format": "Cognitive psychology is the branch of psychology that focuses on which study?", "question_choices": [ "human development", "human thinking", "human behavior", "human society" ], "question_id": "fs-idm128913904", "question_text": "Cognitive psychology is the branch of psychology that focuses on the study of ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "a triangle’s area" }, "bloom": null, "hl_context": "<hl> An artificial concept , on the other hand , is a concept that is defined by a specific set of characteristics . <hl> <hl> Various properties of geometric shapes , like squares and triangles , serve as useful examples of artificial concepts . <hl> A triangle always has three angles and three sides . A square always has four equal sides and four right angles . <hl> Mathematical formulas , like the equation for area ( length × width ) are artificial concepts defined by specific sets of characteristics that are always the same . <hl> Artificial concepts can enhance the understanding of a topic by building on one another . For example , before learning the concept of “ area of a square ” ( and the formula to find it ) , you must understand what a square is . Once the concept of “ area of a square ” is understood , an understanding of area for other geometric shapes can be built upon the original understanding of area . The use of artificial concepts to define an idea is crucial to communicating with others and engaging in complex thought . According to Goldstone and Kersten ( 2003 ) , concepts act as building blocks and can be connected in countless combinations to create complex thoughts .", "hl_sentences": "An artificial concept , on the other hand , is a concept that is defined by a specific set of characteristics . Various properties of geometric shapes , like squares and triangles , serve as useful examples of artificial concepts . 
Mathematical formulas , like the equation for area ( length × width ) are artificial concepts defined by specific sets of characteristics that are always the same .", "question": { "cloze_format": "___ is/are an example of an artificial concept.", "normal_format": "Which of the following is an example of an artificial concept?", "question_choices": [ "mammals", "a triangle’s area", "gemstones", "teachers" ], "question_id": "fs-idm69533312", "question_text": "Which of the following is an example of an artificial concept?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "script" }, "bloom": null, "hl_context": "<hl> An event schema , also known as a cognitive script , is a set of behaviors that can feel like a routine . <hl> Think about what you do when you walk into an elevator ( Figure 7.5 ) . First , the doors open and you wait to let exiting passengers leave the elevator car . Then , you step into the elevator and turn around to face the doors , looking for the correct button to push . You never face the back of the elevator , do you ? And when you ’ re riding in a crowded elevator and you can ’ t face the front , it feels uncomfortable , doesn ’ t it ? Interestingly , event schemata can vary widely among different cultures and countries . For example , while it is quite common for people to greet one another with a handshake in the United States , in Tibet , you greet someone by sticking your tongue out at them , and in Belize , you bump fists ( Cairns Regional Council , n . d . )", "hl_sentences": "An event schema , also known as a cognitive script , is a set of behaviors that can feel like a routine .", "question": { "cloze_format": "An event schema is also known as a cognitive ________.", "normal_format": "What cognitive is an event schema also known as?", "question_choices": [ "stereotype", "concept", "script", "prototype" ], "question_id": "fs-idm113398576", "question_text": "An event schema is also known as a cognitive ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "Syntax" }, "bloom": null, "hl_context": "Words are formed by combining the various phonemes that make up the language . A phoneme ( e . g . , the sounds “ ah ” vs . “ eh ” ) is a basic sound unit of a given language , and different languages have different sets of phonemes . Phonemes are combined to form morphemes , which are the smallest units of language that convey some type of meaning ( e . g . , “ I ” is both a phoneme and a morpheme ) . We use semantics and syntax to construct language . Semantics and syntax are part of a language ’ s grammar . Semantics refers to the process by which we derive meaning from morphemes and words . <hl> Syntax refers to the way words are organized into sentences ( Chomsky , 1965 ; Fernández & Cairns , 2011 ) . <hl>", "hl_sentences": "Syntax refers to the way words are organized into sentences ( Chomsky , 1965 ; Fernández & Cairns , 2011 ) .", "question": { "cloze_format": "________ provides general principles for organizing words into meaningful sentences.", "normal_format": "What provides general principles for organizing words into meaningful sentences?", "question_choices": [ "Linguistic determinism", "Lexicon", "Semantics", "Syntax" ], "question_id": "fs-idm175029136", "question_text": "________ provides general principles for organizing words into meaningful sentences." 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "Morphemes" }, "bloom": null, "hl_context": "Words are formed by combining the various phonemes that make up the language . A phoneme ( e . g . , the sounds “ ah ” vs . “ eh ” ) is a basic sound unit of a given language , and different languages have different sets of phonemes . <hl> Phonemes are combined to form morphemes , which are the smallest units of language that convey some type of meaning ( e . g . , “ I ” is both a phoneme and a morpheme ) . <hl> We use semantics and syntax to construct language . Semantics and syntax are part of a language ’ s grammar . Semantics refers to the process by which we derive meaning from morphemes and words . Syntax refers to the way words are organized into sentences ( Chomsky , 1965 ; Fernández & Cairns , 2011 ) .", "hl_sentences": "Phonemes are combined to form morphemes , which are the smallest units of language that convey some type of meaning ( e . g . , “ I ” is both a phoneme and a morpheme ) .", "question": { "cloze_format": "________ are the smallest units of language that carry meaning.", "normal_format": "What are the smallest units of language that carry meaning?", "question_choices": [ "Lexicon", "Phonemes", "Morphemes", "Syntax" ], "question_id": "fs-idm92509680", "question_text": "________ are the smallest units of language that carry meaning." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "semantics" }, "bloom": null, "hl_context": "Words are formed by combining the various phonemes that make up the language . A phoneme ( e . g . , the sounds “ ah ” vs . “ eh ” ) is a basic sound unit of a given language , and different languages have different sets of phonemes . Phonemes are combined to form morphemes , which are the smallest units of language that convey some type of meaning ( e . g . , “ I ” is both a phoneme and a morpheme ) . <hl> We use semantics and syntax to construct language . <hl> <hl> Semantics and syntax are part of a language ’ s grammar . <hl> <hl> Semantics refers to the process by which we derive meaning from morphemes and words . <hl> Syntax refers to the way words are organized into sentences ( Chomsky , 1965 ; Fernández & Cairns , 2011 ) .", "hl_sentences": "We use semantics and syntax to construct language . Semantics and syntax are part of a language ’ s grammar . Semantics refers to the process by which we derive meaning from morphemes and words .", "question": { "cloze_format": "The meaning of words and phrases is determined by applying the rules of ________.", "normal_format": "The meaning of words and phrases is determined by applying the rules of which system?", "question_choices": [ "lexicon", "phonemes", "overgeneralization", "semantics" ], "question_id": "fs-idm58124784", "question_text": "The meaning of words and phrases is determined by applying the rules of ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "Phonemes" }, "bloom": null, "hl_context": "Words are formed by combining the various phonemes that make up the language . <hl> A phoneme ( e . g . , the sounds “ ah ” vs . “ eh ” ) is a basic sound unit of a given language , and different languages have different sets of phonemes . <hl> Phonemes are combined to form morphemes , which are the smallest units of language that convey some type of meaning ( e . g . , “ I ” is both a phoneme and a morpheme ) . We use semantics and syntax to construct language .
Semantics and syntax are part of a language ’ s grammar . Semantics refers to the process by which we derive meaning from morphemes and words . Syntax refers to the way words are organized into sentences ( Chomsky , 1965 ; Fernández & Cairns , 2011 ) .", "hl_sentences": "A phoneme ( e . g . , the sounds “ ah ” vs . “ eh ” ) is a basic sound unit of a given language , and different languages have different sets of phonemes .", "question": { "cloze_format": "________ is (are) the basic sound units of a spoken language.", "normal_format": "What is (are) the basic sound units of a spoken language?", "question_choices": [ "Syntax", "Phonemes", "Morphemes", "Grammar" ], "question_id": "fs-idm143260304", "question_text": "________ is (are) the basic sound units of a spoken language." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "an algorithm" }, "bloom": null, "hl_context": "Table 7.2 Problem-Solving Strategies Another type of strategy is an algorithm . <hl> An algorithm is a problem-solving formula that provides you with step-by-step instructions used to achieve a desired outcome ( Kahneman , 2011 ) . <hl> You can think of an algorithm as a recipe with highly detailed instructions that produce the same result every time they are performed . Algorithms are used frequently in our everyday lives , especially in computer science . When you run a search on the Internet , search engines like Google use algorithms to decide which entries will appear first in your list of results . Facebook also uses algorithms to decide which posts to display on your newsfeed . Can you identify other situations in which algorithms are used ?", "hl_sentences": "An algorithm is a problem-solving formula that provides you with step-by-step instructions used to achieve a desired outcome ( Kahneman , 2011 ) .", "question": { "cloze_format": "A specific formula for solving a problem is called ________.", "normal_format": "What is a specific formula for solving a problem called?", "question_choices": [ "an algorithm", "a heuristic", "a mental set", "trial and error" ], "question_id": "fs-idm138779696", "question_text": "A specific formula for solving a problem is called ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "a heuristic" }, "bloom": null, "hl_context": "A heuristic is another type of problem solving strategy . <hl> While an algorithm must be followed exactly to produce a correct result , a heuristic is a general problem-solving framework ( Tversky & Kahneman , 1974 ) . <hl> <hl> You can think of these as mental shortcuts that are used to solve problems . <hl> A “ rule of thumb ” is an example of a heuristic . Such a rule saves the person time and energy when making a decision , but despite its time-saving characteristics , it is not always the best method for making a rational decision . Different types of heuristics are used in different types of situations , but the impulse to use a heuristic occurs when one of five conditions is met ( Pratkanis , 1989 ):", "hl_sentences": "While an algorithm must be followed exactly to produce a correct result , a heuristic is a general problem-solving framework ( Tversky & Kahneman , 1974 ) . 
You can think of these as mental shortcuts that are used to solve problems .", "question": { "cloze_format": "A mental shortcut in the form of a general problem-solving framework is called ________.", "normal_format": "What is a mental shortcut in the form of a general problem-solving framework called?", "question_choices": [ "an algorithm", "a heuristic", "a mental set", "trial and error" ], "question_id": "fs-idm172007920", "question_text": "A mental shortcut in the form of a general problem-solving framework is called ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "anchoring bias" }, "bloom": null, "hl_context": "Researchers have investigated whether functional fixedness is affected by culture . In one experiment , individuals from the Shuar group in Ecuador were asked to use an object for a purpose other than that for which the object was originally intended . For example , the participants were told a story about a bear and a rabbit that were separated by a river and asked to select among various objects , including a spoon , a cup , erasers , and so on , to help the animals . The spoon was the only object long enough to span the imaginary river , but if the spoon was presented in a way that reflected its normal usage , it took participants longer to choose the spoon to solve the problem . ( German & Barrett , 2005 ) . The researchers wanted to know if exposure to highly specialized tools , as occurs with individuals in industrialized nations , affects their ability to transcend functional fixedness . It was determined that functional fixedness is experienced in both industrialized and nonindustrialized cultures ( German & Barrett , 2005 ) . In order to make good decisions , we use our knowledge and our reasoning . Often , this knowledge and reasoning is sound and solid . Sometimes , however , we are swayed by biases or by others manipulating a situation . For example , let ’ s say you and three friends wanted to rent a house and had a combined target budget of $ 1,600 . The realtor shows you only very run-down houses for $ 1,600 and then shows you a very nice house for $ 2,000 . Might you ask each person to pay more in rent to get the $ 2,000 home ? Why would the realtor show you the run-down houses and the nice house ? The realtor may be challenging your anchoring bias . <hl> An anchoring bias occurs when you focus on one piece of information when making a decision or solving a problem . <hl> In this case , you ’ re so focused on the amount of money you are willing to spend that you may not recognize what kinds of houses are available at that price point . The confirmation bias is the tendency to focus on information that confirms your existing beliefs . For example , if you think that your professor is not very nice , you notice all of the instances of rude behavior exhibited by the professor while ignoring the countless pleasant interactions he is involved in on a daily basis . Hindsight bias leads you to believe that the event you just experienced was predictable , even though it really wasn ’ t . In other words , you knew all along that things would turn out the way they did . 
Representative bias describes a faulty way of thinking , in which you unintentionally stereotype someone or something ; for example , you may assume that your professors spend their free time reading books and engaging in intellectual conversation , because the idea of them spending their time playing volleyball or visiting an amusement park does not fit in with your stereotypes of professors .", "hl_sentences": "An anchoring bias occurs when you focus on one piece of information when making a decision or solving a problem .", "question": { "cloze_format": "___ involves becoming fixated on a single trait of a problem.", "normal_format": "Which type of bias involves becoming fixated on a single trait of a problem?", "question_choices": [ "anchoring bias", "confirmation bias", "representative bias", "availability bias" ], "question_id": "fs-idm104965904", "question_text": "Which type of bias involves becoming fixated on a single trait of a problem?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "representative bias" }, "bloom": null, "hl_context": "Researchers have investigated whether functional fixedness is affected by culture . In one experiment , individuals from the Shuar group in Ecuador were asked to use an object for a purpose other than that for which the object was originally intended . For example , the participants were told a story about a bear and a rabbit that were separated by a river and asked to select among various objects , including a spoon , a cup , erasers , and so on , to help the animals . The spoon was the only object long enough to span the imaginary river , but if the spoon was presented in a way that reflected its normal usage , it took participants longer to choose the spoon to solve the problem . ( German & Barrett , 2005 ) . The researchers wanted to know if exposure to highly specialized tools , as occurs with individuals in industrialized nations , affects their ability to transcend functional fixedness . It was determined that functional fixedness is experienced in both industrialized and nonindustrialized cultures ( German & Barrett , 2005 ) . In order to make good decisions , we use our knowledge and our reasoning . Often , this knowledge and reasoning is sound and solid . Sometimes , however , we are swayed by biases or by others manipulating a situation . For example , let ’ s say you and three friends wanted to rent a house and had a combined target budget of $ 1,600 . The realtor shows you only very run-down houses for $ 1,600 and then shows you a very nice house for $ 2,000 . Might you ask each person to pay more in rent to get the $ 2,000 home ? Why would the realtor show you the run-down houses and the nice house ? The realtor may be challenging your anchoring bias . An anchoring bias occurs when you focus on one piece of information when making a decision or solving a problem . In this case , you ’ re so focused on the amount of money you are willing to spend that you may not recognize what kinds of houses are available at that price point . The confirmation bias is the tendency to focus on information that confirms your existing beliefs . For example , if you think that your professor is not very nice , you notice all of the instances of rude behavior exhibited by the professor while ignoring the countless pleasant interactions he is involved in on a daily basis . Hindsight bias leads you to believe that the event you just experienced was predictable , even though it really wasn ’ t . 
In other words , you knew all along that things would turn out the way they did . <hl> Representative bias describes a faulty way of thinking , in which you unintentionally stereotype someone or something ; for example , you may assume that your professors spend their free time reading books and engaging in intellectual conversation , because the idea of them spending their time playing volleyball or visiting an amusement park does not fit in with your stereotypes of professors . <hl>", "hl_sentences": "Representative bias describes a faulty way of thinking , in which you unintentionally stereotype someone or something ; for example , you may assume that your professors spend their free time reading books and engaging in intellectual conversation , because the idea of them spending their time playing volleyball or visiting an amusement park does not fit in with your stereotypes of professors .", "question": { "cloze_format": "___ involves relying on a false stereotype to make a decision.", "normal_format": "Which type of bias involves relying on a false stereotype to make a decision?", "question_choices": [ "anchoring bias", "confirmation bias", "representative bias", "availability bias" ], "question_id": "fs-idm83509360", "question_text": "Which type of bias involves relying on a false stereotype to make a decision?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "being able to see complex relationships and solve problems" }, "bloom": null, "hl_context": "Others psychologists believe that instead of a single factor , intelligence is a collection of distinct abilities . In the 1940s , Raymond Cattell proposed a theory of intelligence that divided general intelligence into two components : crystallized intelligence and fluid intelligence ( Cattell , 1963 ) . Crystallized intelligence is characterized as acquired knowledge and the ability to retrieve it . When you learn , remember , and recall information , you are using crystallized intelligence . You use crystallized intelligence all the time in your coursework by demonstrating that you have mastered the information covered in the course . <hl> Fluid intelligence encompasses the ability to see complex relationships and solve problems . <hl> Navigating your way home after being detoured onto an unfamiliar route because of road construction would draw upon your fluid intelligence . Fluid intelligence helps you tackle complex , abstract challenges in your daily life , whereas crystallized intelligence helps you overcome concrete , straightforward problems ( Cattell , 1963 ) . Other theorists and psychologists believe that intelligence should be defined in more practical terms . For example , what types of behaviors help you get ahead in life ? Which skills promote success ? Think about this for a moment . 
Being able to recite all 44 presidents of the United States in order is an excellent party trick , but will knowing this make you a better person ?", "hl_sentences": "Fluid intelligence encompasses the ability to see complex relationships and solve problems .", "question": { "cloze_format": "Fluid intelligence is characterized by ________.", "normal_format": "What is fluid intelligence characterized by?", "question_choices": [ "being able to recall information", "being able to create new products", "being able to understand and communicate with different cultures", "being able to see complex relationships and solve problems" ], "question_id": "fs-idp4884704", "question_text": "Fluid intelligence is characterized by ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "creative" }, "bloom": null, "hl_context": "<hl> Spatial intelligence <hl> <hl> Musical intelligence <hl> <hl> Linguistic intelligence <hl> Multiple Intelligences Theory was developed by Howard Gardner , a Harvard psychologist and former student of Erik Erikson . Gardner ’ s theory , which has been refined for more than 30 years , is a more recent development among theories of intelligence . <hl> In Gardner ’ s theory , each person possesses at least eight intelligences . <hl> <hl> Among these eight intelligences , a person typically excels in some and falters in others ( Gardner , 1983 ) . <hl> <hl> Table 7.4 describes each type of intelligence . <hl>", "hl_sentences": "Spatial intelligence Musical intelligence Linguistic intelligence In Gardner ’ s theory , each person possesses at least eight intelligences . Among these eight intelligences , a person typically excels in some and falters in others ( Gardner , 1983 ) . Table 7.4 describes each type of intelligence .", "question": { "cloze_format": "___ is not one of Gardner’s Multiple Intelligences.", "normal_format": "Which of the following is not one of Gardner’s Multiple Intelligences?", "question_choices": [ "creative", "spatial", "linguistic", "musical" ], "question_id": "fs-idm81834432", "question_text": "Which of the following is not one of Gardner’s Multiple Intelligences?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "Sternberg" }, "bloom": null, "hl_context": "<hl> Robert Sternberg developed another theory of intelligence , which he titled the triarchic theory of intelligence because it sees intelligence as comprised of three parts ( Sternberg , 1988 ): practical , creative , and analytical intelligence ( Figure 7.12 ) . <hl>", "hl_sentences": "Robert Sternberg developed another theory of intelligence , which he titled the triarchic theory of intelligence because it sees intelligence as comprised of three parts ( Sternberg , 1988 ): practical , creative , and analytical intelligence ( Figure 7.12 ) .", "question": { "cloze_format": "___ put forth the triarchic theory of intelligence.", "normal_format": "Which theorist put forth the triarchic theory of intelligence?", "question_choices": [ "Goleman", "Gardner", "Sternberg", "Steitz" ], "question_id": "fs-idm141968384", "question_text": "Which theorist put forth the triarchic theory of intelligence?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "analytical" }, "bloom": null, "hl_context": "<hl> Analytical intelligence is closely aligned with academic problem solving and computations . 
<hl> <hl> Sternberg says that analytical intelligence is demonstrated by an ability to analyze , evaluate , judge , compare , and contrast . <hl> When reading a classic novel for literature class , for example , it is usually necessary to compare the motives of the main characters of the book or analyze the historical context of the story . In a science course such as anatomy , you must study the processes by which the body uses various minerals in different human systems . In developing an understanding of this topic , you are using analytical intelligence . When solving a challenging math problem , you would apply analytical intelligence to analyze different aspects of the problem and then solve it section by section .", "hl_sentences": "Analytical intelligence is closely aligned with academic problem solving and computations . Sternberg says that analytical intelligence is demonstrated by an ability to analyze , evaluate , judge , compare , and contrast .", "question": { "cloze_format": "When you are examining data to look for trends, you are mostly using ___ intelligence.", "normal_format": "When you are examining data to look for trends, which type of intelligence are you using most?", "question_choices": [ "practical", "analytical", "emotional", "creative" ], "question_id": "fs-idm142080448", "question_text": "When you are examining data to look for trends, which type of intelligence are you using most?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "a representative sample" }, "bloom": null, "hl_context": "The IQ test has been synonymous with intelligence for over a century . In the late 1800s , Sir Francis Galton developed the first broad test of intelligence ( Flanagan & Kaufman , 2004 ) . Although he was not a psychologist , his contributions to the concepts of intelligence testing are still felt today ( Gordon , 1995 ) . Reliable intelligence testing ( you may recall from earlier chapters that reliability refers to a test ’ s ability to produce consistent results ) began in earnest during the early 1900s with a researcher named Alfred Binet ( Figure 7.13 ) . Binet was asked by the French government to develop an intelligence test to use on children to determine which ones might have difficulty in school ; it included many verbally based tasks . American researchers soon realized the value of such testing . Louis Terman , a Stanford professor , modified Binet ’ s work by standardizing the administration of the test and tested thousands of different-aged children to establish an average score for each age . <hl> As a result , the test was normed and standardized , which means that the test was administered consistently to a large enough representative sample of the population that the range of scores resulted in a bell curve ( bell curves will be discussed later ) . <hl> Standardization means that the manner of administration , scoring , and interpretation of results is consistent . Norming involves giving a test to a large population so data can be collected comparing groups , such as age groups . The resulting data provide norms , or referential scores , by which to interpret future scores . Norms are not expectations of what a given group should know but a demonstration of what that group does know . Norming and standardizing the test ensures that new scores are reliable . This new version of the test was called the Stanford-Binet Intelligence Scale ( Terman , 1916 ) .
7 Thinking and Intelligence
7.1 What Is Cognition? Learning Objectives By the end of this section, you will be able to: Describe cognition Distinguish concepts and prototypes Explain the difference between natural and artificial concepts Imagine all of your thoughts as if they were physical entities, swirling rapidly inside your mind. How is it possible that the brain is able to move from one thought to the next in an organized, orderly fashion? The brain is endlessly perceiving, processing, planning, organizing, and remembering—it is always active. Yet, you don’t notice most of your brain’s activity as you move throughout your daily routine. This is only one facet of the complex processes involved in cognition. Simply put, cognition is thinking, and it encompasses the processes associated with perception, knowledge, problem solving, judgment, language, and memory. Scientists who study cognition are searching for ways to understand how we integrate, organize, and utilize our conscious cognitive experiences without being aware of all of the unconscious work that our brains are doing (for example, Kahneman, 2011). Cognition Upon waking each morning, you begin thinking—contemplating the tasks that you must complete that day. In what order should you run your errands? Should you go to the bank, the cleaners, or the grocery store first? Can you get these things done before you head to class or will they need to wait until school is done? These thoughts are one example of cognition at work. Exceptionally complex, cognition is an essential feature of human consciousness, yet not all aspects of cognition are consciously experienced. Cognitive psychology is the field of psychology dedicated to examining how people think. It attempts to explain how and why we think the way we do by studying the interactions among human thinking, emotion, creativity, language, and problem solving, in addition to other cognitive processes. Cognitive psychologists strive to determine and measure different types of intelligence, why some people are better at problem solving than others, and how emotional intelligence affects success in the workplace, among countless other topics. They also sometimes focus on how we organize thoughts and information gathered from our environments into meaningful categories of thought, which will be discussed later. Concepts and Prototypes The human nervous system is capable of handling endless streams of information. The senses serve as the interface between the mind and the external environment, receiving stimuli and translating them into nervous impulses that are transmitted to the brain. The brain then processes this information and uses the relevant pieces to create thoughts, which can then be expressed through language or stored in memory for future use. To make this process more complex, the brain does not gather information only from the external environment. When thoughts are formed, the brain also pulls information from emotions and memories ( Figure 7.2 ). Emotion and memory are powerful influences on both our thoughts and behaviors. In order to organize this staggering amount of information, the brain has developed a file cabinet of sorts in the mind. The different files stored in the file cabinet are called concepts. Concepts are categories or groupings of linguistic information, images, ideas, or memories, such as life experiences. Concepts are, in many ways, big ideas that are generated by observing details, and categorizing and combining these details into cognitive structures.
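The file cabinet metaphor maps naturally onto a simple data structure. The short Python sketch below is purely illustrative (the category names and items are invented for the example, not drawn from any study); it treats each concept as a labeled grouping into which new details can be filed:

    # A toy model of concepts as labeled groupings: each "file" in the
    # cabinet is a category, and each category collects related details.
    concepts = {
        "winter weather": {"snow", "ice", "sleet"},
        "errands": {"bank", "cleaners", "grocery store"},
    }

    def file_away(category, detail):
        """Store a newly observed detail under the matching concept."""
        concepts.setdefault(category, set()).add(detail)

    file_away("winter weather", "frost")
    print(concepts["winter weather"])  # e.g., {'snow', 'ice', 'sleet', 'frost'}

Categorizing a new observation then amounts to deciding which grouping it belongs to.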
You use concepts to see the relationships among the different elements of your experiences and to keep the information in your mind organized and accessible. Concepts are informed by our semantic memory (you will learn more about semantic memory in a later chapter) and are present in every aspect of our lives; however, one of the easiest places to notice concepts is inside a classroom, where they are discussed explicitly. When you study United States history, for example, you learn about more than just individual events that have happened in America’s past. You absorb a large quantity of information by listening to and participating in discussions, examining maps, and reading first-hand accounts of people’s lives. Your brain analyzes these details and develops an overall understanding of American history. In the process, your brain gathers details that inform and refine your understanding of related concepts like democracy, power, and freedom. Concepts can be complex and abstract, like justice, or more concrete, like types of birds. In psychology, for example, Piaget’s stages of development are abstract concepts. Some concepts, like tolerance, are agreed upon by many people, because they have been used in various ways over many years. Other concepts, like the characteristics of your ideal friend or your family’s birthday traditions, are personal and individualized. In this way, concepts touch every aspect of our lives, from our many daily routines to the guiding principles behind the way governments function. Another technique used by your brain to organize information is the identification of prototypes for the concepts you have developed. A prototype is the best example or representation of a concept. For example, for the category of civil disobedience, your prototype could be Rosa Parks. Her peaceful resistance to segregation on a city bus in Montgomery, Alabama, is a recognizable example of civil disobedience. Or your prototype could be Mohandas Gandhi, sometimes called Mahatma Gandhi (“Mahatma” is an honorific title) ( Figure 7.3 ). Mohandas Gandhi served as a nonviolent force for independence for India while simultaneously demanding that Buddhist, Hindu, Muslim, and Christian leaders—both Indian and British—collaborate peacefully. Although he was not always successful in preventing violence around him, his life provides a steadfast example of the civil disobedience prototype (Constitutional Rights Foundation, 2013). Just as concepts can be abstract or concrete, we can make a distinction between concepts that are functions of our direct experience with the world and those that are more artificial in nature. Natural and Artificial Concepts In psychology, concepts can be divided into two categories, natural and artificial. Natural concepts are created “naturally” through your experiences and can be developed from either direct or indirect experiences. For example, if you live in Essex Junction, Vermont, you have probably had a lot of direct experience with snow. You’ve watched it fall from the sky, you’ve seen lightly falling snow that barely covers the windshield of your car, and you’ve shoveled out 18 inches of fluffy white snow as you’ve thought, “This is perfect for skiing.” You’ve thrown snowballs at your best friend and gone sledding down the steepest hill in town. In short, you know snow. You know what it looks like, smells like, tastes like, and feels like. 
If, however, you’ve lived your whole life on the island of Saint Vincent in the Caribbean, you may never have actually seen snow, much less tasted, smelled, or touched it. You know snow from the indirect experience of seeing pictures of falling snow—or from watching films that feature snow as part of the setting. Either way, snow is a natural concept because you can construct an understanding of it through direct observations or experiences of snow ( Figure 7.4 ). An artificial concept, on the other hand, is a concept that is defined by a specific set of characteristics. Various properties of geometric shapes, like squares and triangles, serve as useful examples of artificial concepts. A triangle always has three angles and three sides. A square always has four equal sides and four right angles. Mathematical formulas, like the equation for area (length × width), are artificial concepts defined by specific sets of characteristics that are always the same. Artificial concepts can enhance the understanding of a topic by building on one another. For example, before learning the concept of “area of a square” (and the formula to find it), you must understand what a square is. Once the concept of “area of a square” is understood, an understanding of area for other geometric shapes can be built upon the original understanding of area. The use of artificial concepts to define an idea is crucial to communicating with others and engaging in complex thought. According to Goldstone and Kersten (2003), concepts act as building blocks and can be connected in countless combinations to create complex thoughts.
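Because an artificial concept is fixed by a specific set of characteristics, it can be written down as an exact definition. Here is a minimal, illustrative Python sketch (not part of the text) of the square concept and the area formula discussed above:

    # An artificial concept defined by characteristics that are always the
    # same: four equal sides and four right angles.
    def is_square(side_lengths, angles_deg):
        return (len(side_lengths) == 4 and len(angles_deg) == 4
                and len(set(side_lengths)) == 1
                and all(angle == 90 for angle in angles_deg))

    # The area formula (length x width); for a square, length equals width.
    def area(length, width):
        return length * width

    print(is_square([3, 3, 3, 3], [90, 90, 90, 90]))  # True
    print(area(3, 3))                                  # 9

Unlike a natural concept built up from experience, nothing about this definition depends on who is applying it.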
Schemata A schema is a mental construct consisting of a cluster or collection of related concepts (Bartlett, 1932). There are many different types of schemata, and they all have one thing in common: schemata are a method of organizing information that allows the brain to work more efficiently. When a schema is activated, the brain makes immediate assumptions about the person or object being observed. There are several types of schemata. A role schema makes assumptions about how individuals in certain roles will behave (Callero, 1994). For example, imagine you meet someone who introduces himself as a firefighter. When this happens, your brain automatically activates the “firefighter schema” and begins making assumptions that this person is brave, selfless, and community-oriented. Despite not knowing this person, you have already unknowingly made judgments about him. Schemata also help you fill in gaps in the information you receive from the world around you. While schemata allow for more efficient information processing, there can be problems with schemata, regardless of whether they are accurate: Perhaps this particular firefighter is not brave; he just works as a firefighter to pay the bills while studying to become a children’s librarian. An event schema, also known as a cognitive script, is a set of behaviors that can feel like a routine. Think about what you do when you walk into an elevator ( Figure 7.5 ). First, the doors open and you wait to let exiting passengers leave the elevator car. Then, you step into the elevator and turn around to face the doors, looking for the correct button to push. You never face the back of the elevator, do you? And when you’re riding in a crowded elevator and you can’t face the front, it feels uncomfortable, doesn’t it? Interestingly, event schemata can vary widely among different cultures and countries. For example, while it is quite common for people to greet one another with a handshake in the United States, in Tibet, you greet someone by sticking your tongue out at them, and in Belize, you bump fists (Cairns Regional Council, n.d.). Because event schemata are automatic, they can be difficult to change. Imagine that you are driving home from work or school. This event schema involves getting in the car, shutting the door, and buckling your seatbelt before putting the key in the ignition. You might perform this script two or three times each day. As you drive home, you hear your phone’s ring tone. Typically, the event schema that occurs when you hear your phone ringing involves locating the phone and answering it or responding to your latest text message. So without thinking, you reach for your phone, which could be in your pocket, in your bag, or on the passenger seat of the car. This powerful event schema is informed by your pattern of behavior and the pleasurable stimulation that a phone call or text message gives your brain. Because it is a schema, it is extremely challenging for us to stop reaching for the phone, even though we know that we endanger our own lives and the lives of others while we do it (Neyfakh, 2013) ( Figure 7.6 ). Remember the elevator? It feels almost impossible to walk in and not face the door. Our powerful event schema dictates our behavior in the elevator, and it is no different with our phones. Current research suggests that it is the habit, or event schema, of checking our phones in many different situations that makes refraining from checking them while driving especially difficult (Bayer & Campbell, 2012). Because texting and driving has become a dangerous epidemic in recent years, psychologists are looking at ways to help people interrupt the “phone schema” while driving. Event schemata like these are the reason why many habits are difficult to break once they have been acquired. As we continue to examine thinking, keep in mind how powerful the forces of concepts and schemata are to our understanding of the world.
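An event schema can be pictured as an ordered list of steps that runs off more or less automatically. The following sketch is illustrative only; the steps paraphrase the elevator example above:

    # A cognitive script as an ordered sequence of routine actions.
    ELEVATOR_SCRIPT = [
        "wait for exiting passengers",
        "step into the elevator",
        "turn around to face the doors",
        "press the button for your floor",
    ]

    def run_script(script):
        """Event schemata unfold step by step, in a fixed order."""
        for step in script:
            print("->", step)

    run_script(ELEVATOR_SCRIPT)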
7.2 Language Learning Objectives By the end of this section, you will be able to: Define language and demonstrate familiarity with the components of language Understand how the use of language develops Explain the relationship between language and thinking Language is a communication system that involves using words and systematic rules to organize those words to transmit information from one individual to another. While language is a form of communication, not all communication is language. Many species communicate with one another through their postures, movements, odors, or vocalizations. This communication is crucial for species that need to interact and develop social relationships with their conspecifics. However, many people have asserted that it is language that makes humans unique among all of the animal species (Corballis & Suddendorf, 2007; Tomasello & Rakoczy, 2003). This section will focus on what distinguishes language as a special form of communication, how the use of language develops, and how language affects the way we think. Components of Language Language, be it spoken, signed, or written, has specific components: a lexicon and grammar. Lexicon refers to the words of a given language. Thus, lexicon is a language’s vocabulary. Grammar refers to the set of rules that are used to convey meaning through the use of the lexicon (Fernández & Cairns, 2011). For instance, English grammar dictates that most verbs receive an “-ed” at the end to indicate past tense. Words are formed by combining the various phonemes that make up the language. A phoneme (e.g., the sounds “ah” vs. “eh”) is a basic sound unit of a given language, and different languages have different sets of phonemes. Phonemes are combined to form morphemes, which are the smallest units of language that convey some type of meaning (e.g., “I” is both a phoneme and a morpheme). We use semantics and syntax to construct language. Semantics and syntax are part of a language’s grammar. Semantics refers to the process by which we derive meaning from morphemes and words. Syntax refers to the way words are organized into sentences (Chomsky, 1965; Fernández & Cairns, 2011). We apply the rules of grammar to organize the lexicon in novel and creative ways, which allow us to communicate information about both concrete and abstract concepts. We can talk about our immediate and observable surroundings as well as the surface of unseen planets. We can share our innermost thoughts and our plans for the future, and we can debate the value of a college education. We can provide detailed instructions for cooking a meal, fixing a car, or building a fire. The flexibility that language provides to relay vastly different types of information is a property that makes language so distinct as a mode of communication among humans. Language Development Given the remarkable complexity of a language, one might expect that mastering a language would be an especially arduous task; indeed, for those of us trying to learn a second language as adults, this might seem to be true. However, young children master language very quickly with relative ease. B. F. Skinner (1957) proposed that language is learned through reinforcement. Noam Chomsky (1965) criticized this behaviorist approach, asserting instead that the mechanisms underlying language acquisition are biologically determined. The use of language develops in the absence of formal instruction and appears to follow a very similar pattern in children from vastly different cultures and backgrounds. It would seem, therefore, that we are born with a biological predisposition to acquire a language (Chomsky, 1965; Fernández & Cairns, 2011). Moreover, it appears that there is a critical period for language acquisition, such that this proficiency at acquiring language is maximal early in life; generally, as people age, the ease with which they acquire and master new languages diminishes (Johnson & Newport, 1989; Lenneberg, 1967; Singleton, 1995). Children begin to learn about language from a very early age ( Table 7.1 ). In fact, it appears that this is occurring even before we are born. Newborns show preference for their mother’s voice and appear to be able to discriminate between the language spoken by their mother and other languages. Babies are also attuned to the languages being used around them and show preferences for videos of faces that are moving in synchrony with the audio of spoken language versus videos that do not synchronize with the audio (Blossom & Morgan, 2006; Pickens, 1994; Spelke & Cortelyou, 1981).
Stage 1 (0–3 months): Reflexive communication
Stage 2 (3–8 months): Reflexive communication; interest in others
Stage 3 (8–13 months): Intentional communication; sociability
Stage 4 (12–18 months): First words
Stage 5 (18–24 months): Simple sentences of two words
Stage 6 (2–3 years): Sentences of three or more words
Stage 7 (3–5 years): Complex sentences; has conversations
Table 7.1 Stages of Language and Communication Development
Dig Deeper The Case of Genie In the fall of 1970, a social worker in the Los Angeles area found a 13-year-old girl who was being raised in extremely neglectful and abusive conditions. The girl, who came to be known as Genie, had lived most of her life tied to a potty chair or confined to a crib in a small room that was kept closed with the curtains drawn. For a little over a decade, Genie had virtually no social interaction and no access to the outside world. As a result of these conditions, Genie was unable to stand up, chew solid food, or speak (Fromkin, Krashen, Curtiss, Rigler, & Rigler, 1974; Rymer, 1993). The police took Genie into protective custody. Genie’s abilities improved dramatically following her removal from her abusive environment, and early on, it appeared she was acquiring language—much later than would be predicted by critical period hypotheses that had been posited at the time (Fromkin et al., 1974). Genie managed to amass an impressive vocabulary in a relatively short amount of time. However, she never developed a mastery of the grammatical aspects of language (Curtiss, 1981). Perhaps being deprived of the opportunity to learn language during a critical period impeded Genie’s ability to fully acquire and use language. You may recall that each language has its own set of phonemes that are used to generate morphemes, words, and so on. Babies can discriminate among the sounds that make up a language (for example, they can tell the difference between the “s” in vision and the “ss” in fission); early on, they can differentiate between the sounds of all human languages, even those that do not occur in the languages that are used in their environments. However, by the time that they are about 1 year old, they can only discriminate among those phonemes that are used in the language or languages in their environments (Jensen, 2011; Werker & Lalonde, 1988; Werker & Tees, 1984). Link to Learning Visit this website to learn more about how babies lose the ability to discriminate among all possible human phonemes as they age. After the first few months of life, babies enter what is known as the babbling stage, during which time they tend to produce single syllables that are repeated over and over. As time passes, more variations appear in the syllables that they produce. During this time, it is unlikely that the babies are trying to communicate; they are just as likely to babble when they are alone as when they are with their caregivers (Fernández & Cairns, 2011). Interestingly, babies who are raised in environments in which sign language is used will also begin to show babbling in the gestures of their hands during this stage (Petitto, Holowka, Sergio, Levy, & Ostry, 2004). Generally, a child’s first word is uttered sometime between the ages of 1 year and 18 months, and for the next few months, the child will remain in the “one word” stage of language development. During this time, children know a number of words, but they only produce one-word utterances. The child’s early vocabulary is limited to familiar objects or events, often nouns.
Although children in this stage only make one-word utterances, these words often carry larger meaning (Fernández & Cairns, 2011). So, for example, a child saying “cookie” could be identifying a cookie or asking for a cookie. As a child’s lexicon grows, she begins to utter simple sentences and to acquire new vocabulary at a very rapid pace. In addition, children begin to demonstrate a clear understanding of the specific rules that apply to their language(s). Even the mistakes that children sometimes make provide evidence of just how much they understand about those rules. This is sometimes seen in the form of overgeneralization. In this context, overgeneralization refers to an extension of a language rule to an exception to the rule. For example, in English, it is usually the case that an “s” is added to the end of a word to indicate plurality. For example, we speak of one dog versus two dogs. Young children will overgeneralize this rule to cases that are exceptions to the “add an s to the end of the word” rule and say things like “those two gooses” or “three mouses.” Clearly, the rules of the language are understood, even if the exceptions to the rules are still being learned (Moskowitz, 1978).
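Overgeneralization behaves like a rule applied before its exceptions have been learned, which makes it easy to sketch in code. The following Python fragment is illustrative only (the word list is invented for the example):

    # The regular English plural rule, applied everywhere the way a young
    # child might, versus the same rule with learned exceptions.
    IRREGULAR_PLURALS = {"goose": "geese", "mouse": "mice", "child": "children"}

    def child_plural(noun):
        """Overgeneralize: the 'add an s' rule, even for exceptions."""
        return noun + "s"  # yields "gooses" and "mouses"

    def adult_plural(noun):
        """Check the learned exceptions first, then fall back to the rule."""
        return IRREGULAR_PLURALS.get(noun, noun + "s")

    for noun in ["dog", "goose", "mouse"]:
        print(noun, "->", child_plural(noun), "vs.", adult_plural(noun))

The child's grammar is not random; it is the regular rule working exactly as learned, minus the exception table.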
Language and Thought When we speak one language, we agree that words are representations of ideas, people, places, and events. The given language that children learn is connected to their culture and surroundings. But can words themselves shape the way we think about things? Psychologists have long investigated the question of whether language shapes thoughts and actions, or whether our thoughts and beliefs shape our language. Two researchers, Edward Sapir and Benjamin Lee Whorf, began this investigation in the 1940s. They wanted to understand how the language habits of a community encourage members of that community to interpret language in a particular manner (Sapir, 1941/1964). Sapir and Whorf proposed that language determines thought, suggesting, for example, that a person whose community language did not have past-tense verbs would be challenged to think about the past (Whorf, 1956). Researchers have since identified this view as too absolute, pointing out a lack of empiricism behind what Sapir and Whorf proposed (Abler, 2013; Boroditsky, 2011; van Troyer, 1994). Today, psychologists continue to study and debate the relationship between language and thought. What Do You Think? The Meaning of Language Think about what you know of other languages; perhaps you even speak multiple languages. Imagine for a moment that your closest friend fluently speaks more than one language. Do you think that friend thinks differently, depending on which language is being spoken? You may know a few words that are not translatable from their original language into English. For example, the Portuguese word saudade originated during the 15th century, when Portuguese sailors left home to explore the seas and travel to Africa or Asia. Those left behind described the emptiness and fondness they felt as saudade ( Figure 7.7 ). The word came to express many meanings, including loss, nostalgia, yearning, warm memories, and hope. There is no single word in English that includes all of those emotions in a single description. Do words such as saudade indicate that different languages produce different patterns of thought in people? What do you think? Language may indeed influence the way that we think, an idea known as linguistic determinism. One recent demonstration of this phenomenon involved differences in the way that English and Mandarin Chinese speakers talk and think about time. English speakers tend to talk about time using terms that describe changes along a horizontal dimension, for example, saying something like “I’m running behind schedule” or “Don’t get ahead of yourself.” While Mandarin Chinese speakers also describe time in horizontal terms, it is not uncommon to also use terms associated with a vertical arrangement. For example, the past might be described as being “up” and the future as being “down.” It turns out that these differences in language translate into differences in performance on cognitive tests designed to measure how quickly an individual can recognize temporal relationships. Specifically, when given a series of tasks with vertical priming, Mandarin Chinese speakers were faster at recognizing temporal relationships between months. Indeed, Boroditsky (2001) sees these results as suggesting that “habits in language encourage habits in thought” (p. 12). One group of researchers who wanted to investigate how language influences thought compared how English speakers and the Dani people of New Guinea think and speak about color. The Dani have two words for color: one word for light and one word for dark . In contrast, the English language has 11 color words. Researchers hypothesized that the number of color terms could limit the ways that the Dani people conceptualized color. However, the Dani were able to distinguish colors with the same ability as English speakers, despite having fewer words at their disposal (Berlin & Kay, 1969). A recent review of research aimed at determining how language might affect something like color perception suggests that language can influence perceptual phenomena, especially in the left hemisphere of the brain. You may recall from earlier chapters that the left hemisphere is associated with language for most people. However, the right (less linguistic) hemisphere of the brain is less affected by linguistic influences on perception (Regier & Kay, 2009). 7.3 Problem Solving Learning Objectives By the end of this section, you will be able to: Describe problem solving strategies Define algorithm and heuristic Explain some common roadblocks to effective problem solving People face problems every day—usually, multiple problems throughout the day. Sometimes these problems are straightforward: To double a recipe for pizza dough, for example, all that is required is that each ingredient in the recipe be doubled. Sometimes, however, the problems we encounter are more complex. For example, say you have a work deadline, and you must mail a printed copy of a report to your supervisor by the end of the business day. The report is time-sensitive and must be sent overnight. You finished the report last night, but your printer will not work today. What should you do? First, you need to identify the problem and then apply a strategy for solving the problem. Problem-Solving Strategies When you are presented with a problem—whether it is a complex mathematical problem or a broken printer, how do you solve it? Before finding a solution to the problem, the problem must first be clearly identified. After that, one of many problem solving strategies can be applied, hopefully resulting in a solution. A problem-solving strategy is a plan of action used to find a solution. Different strategies have different action plans associated with them ( Table 7.2 ).
For example, a well-known strategy is trial and error. The old adage, “If at first you don’t succeed, try, try again” describes trial and error. In terms of your broken printer, you could try checking the ink levels, and if that doesn’t work, you could check to make sure the paper tray isn’t jammed. Or maybe the printer isn’t actually connected to your laptop. When using trial and error, you would continue to try different solutions until you solved your problem. Although trial and error is not typically one of the most time-efficient strategies, it is a commonly used one.
Trial and error: Continue trying different solutions until problem is solved (example: restarting phone, turning off WiFi, turning off Bluetooth in order to determine why your phone is malfunctioning).
Algorithm: Step-by-step problem-solving formula (example: instruction manual for installing new software on your computer).
Heuristic: General problem-solving framework (example: working backwards; breaking a task into steps).
Table 7.2 Problem-Solving Strategies
Another type of strategy is an algorithm. An algorithm is a problem-solving formula that provides you with step-by-step instructions used to achieve a desired outcome (Kahneman, 2011). You can think of an algorithm as a recipe with highly detailed instructions that produce the same result every time they are performed. Algorithms are used frequently in our everyday lives, especially in computer science. When you run a search on the Internet, search engines like Google use algorithms to decide which entries will appear first in your list of results. Facebook also uses algorithms to decide which posts to display on your newsfeed. Can you identify other situations in which algorithms are used? A heuristic is another type of problem solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman, 1974). You can think of these as mental shortcuts that are used to solve problems. A “rule of thumb” is an example of a heuristic. Such a rule saves the person time and energy when making a decision, but despite its time-saving characteristics, it is not always the best method for making a rational decision. Different types of heuristics are used in different types of situations, but the impulse to use a heuristic occurs when one of five conditions is met (Pratkanis, 1989):
- When one is faced with too much information
- When the time to make a decision is limited
- When the decision to be made is unimportant
- When there is access to very little information to use in making the decision
- When an appropriate heuristic happens to come to mind in the same moment
Working backwards is a useful heuristic in which you begin solving the problem by focusing on the end result. Consider this example: You live in Washington, D.C. and have been invited to a wedding at 4 PM on Saturday in Philadelphia. Knowing that Interstate 95 tends to back up any day of the week, you need to plan your route and time your departure accordingly. If you want to be at the wedding service by 3:30 PM, and it takes 2.5 hours to get to Philadelphia without traffic, what time should you leave your house? You use the working backwards heuristic to plan the events of your day on a regular basis, probably without even thinking about it.
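The wedding example can be worked backwards explicitly. In the Python sketch below, the arrival time and drive time come from the example above, while the 30-minute traffic buffer (and the placeholder date) are assumptions added for illustration:

    from datetime import datetime, timedelta

    arrival = datetime(2024, 6, 1, 15, 30)  # goal: 3:30 PM arrival (the date is arbitrary)
    drive = timedelta(hours=2, minutes=30)  # D.C. to Philadelphia without traffic
    buffer = timedelta(minutes=30)          # assumed cushion for I-95 backups

    # Work backwards from the goal state to the required departure time.
    departure = arrival - drive - buffer
    print(departure.strftime("%I:%M %p"))   # 12:30 PM

Without the buffer, the answer works out to 1:00 PM; either way, the heuristic is the same: start from the end state and subtract.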
Another useful heuristic is the practice of accomplishing a large goal or task by breaking it into a series of smaller steps. Students often use this common method to complete a large research project or long essay for school. For example, students typically brainstorm, develop a thesis or main topic, research the chosen topic, organize their information into an outline, write a rough draft, revise and edit the rough draft, develop a final draft, organize the references list, and proofread their work before turning in the project. The large task becomes less overwhelming when it is broken down into a series of small steps. Everyday Connection Solving Puzzles Problem-solving abilities can improve with practice. Many people challenge themselves every day with puzzles and other mental exercises to sharpen their problem-solving skills. Sudoku puzzles appear daily in most newspapers. Typically, a sudoku puzzle is a 9×9 grid. The simple sudoku below ( Figure 7.8 ) is a 4×4 grid. To solve the puzzle, fill in the empty boxes with a single digit: 1, 2, 3, or 4. Here are the rules: The numbers must total 10 in each bolded box, each row, and each column; however, each digit can only appear once in a bolded box, row, and column. Time yourself as you solve this puzzle and compare your time with a classmate. Here is another popular type of puzzle ( Figure 7.9 ) that challenges your spatial reasoning skills. Connect all nine dots with four connecting straight lines without lifting your pencil from the paper: Take a look at the “Puzzling Scales” logic puzzle below ( Figure 7.10 ). Sam Loyd, a well-known puzzle master, created and refined countless puzzles throughout his lifetime (Cyclopedia of Puzzles, n.d.).
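The 4×4 sudoku also illustrates the distinction drawn in Table 7.2: a backtracking program is an algorithm in the strict sense, because following it exactly always produces the answer. The sketch below is illustrative; the sample grid is invented rather than the one in Figure 7.8 (0 marks an empty box):

    # A minimal backtracking solver for a 4x4 sudoku with digits 1-4.
    def solve(grid):
        for r in range(4):
            for c in range(4):
                if grid[r][c] == 0:
                    for d in (1, 2, 3, 4):
                        if valid(grid, r, c, d):
                            grid[r][c] = d
                            if solve(grid):
                                return True
                            grid[r][c] = 0  # undo and try the next digit
                    return False            # dead end: backtrack
        return True                         # no empty boxes remain

    def valid(grid, r, c, d):
        """Each digit may appear only once per row, column, and 2x2 box."""
        if d in grid[r]:
            return False
        if any(grid[i][c] == d for i in range(4)):
            return False
        br, bc = 2 * (r // 2), 2 * (c // 2)
        return all(grid[br + i][bc + j] != d for i in range(2) for j in range(2))

    puzzle = [[1, 0, 0, 4],
              [0, 0, 1, 0],
              [0, 3, 0, 0],
              [2, 0, 0, 3]]
    solve(puzzle)
    print(puzzle)  # a completed grid satisfying all three constraints

A human solver, by contrast, usually leans on heuristics, such as filling the most constrained box first.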
Pitfalls to Problem Solving Not all problems are successfully solved, however. What challenges stop us from successfully solving a problem? A saying often attributed to Albert Einstein holds that “insanity is doing the same thing over and over again and expecting a different result.” Imagine a person in a room that has four doorways. One doorway that has always been open in the past is now locked. The person, accustomed to exiting the room by that particular doorway, keeps trying to get out through the same doorway even though the other three doorways are open. The person is stuck—but she just needs to go to another doorway, instead of trying to get out through the locked doorway. A mental set is the tendency to persist in approaching a problem in a way that has worked in the past but is clearly not working now. Functional fixedness is a type of mental set in which you cannot perceive an object as able to be used for something other than what it was designed for. During the Apollo 13 mission to the moon, NASA engineers at Mission Control had to overcome functional fixedness to save the lives of the astronauts aboard the spacecraft. An explosion in a module of the spacecraft damaged multiple systems. The astronauts were in danger of being poisoned by rising levels of carbon dioxide because of problems with the carbon dioxide filters. The engineers found a way for the astronauts to use spare plastic bags, tape, and air hoses to create a makeshift air filter, which saved the lives of the astronauts. Link to Learning Check out this Apollo 13 scene where the group of NASA engineers is given the task of overcoming functional fixedness. Researchers have investigated whether functional fixedness is affected by culture. In one experiment, individuals from the Shuar group in Ecuador were asked to use an object for a purpose other than that for which the object was originally intended. For example, the participants were told a story about a bear and a rabbit that were separated by a river and asked to select among various objects, including a spoon, a cup, erasers, and so on, to help the animals. The spoon was the only object long enough to span the imaginary river, but if the spoon was presented in a way that reflected its normal usage, it took participants longer to choose the spoon to solve the problem (German & Barrett, 2005). The researchers wanted to know if exposure to highly specialized tools, as occurs with individuals in industrialized nations, affects their ability to transcend functional fixedness. It was determined that functional fixedness is experienced in both industrialized and nonindustrialized cultures (German & Barrett, 2005). In order to make good decisions, we use our knowledge and our reasoning. Often, this knowledge and reasoning is sound and solid. Sometimes, however, we are swayed by biases or by others manipulating a situation. For example, let’s say you and three friends wanted to rent a house and had a combined target budget of $1,600. The realtor shows you only very run-down houses for $1,600 and then shows you a very nice house for $2,000. Might you ask each person to pay more in rent to get the $2,000 home? Why would the realtor show you the run-down houses and the nice house? The realtor may be challenging your anchoring bias. An anchoring bias occurs when you focus on one piece of information when making a decision or solving a problem. In this case, you’re so focused on the amount of money you are willing to spend that you may not recognize what kinds of houses are available at that price point. The confirmation bias is the tendency to focus on information that confirms your existing beliefs. For example, if you think that your professor is not very nice, you notice all of the instances of rude behavior exhibited by the professor while ignoring the countless pleasant interactions he is involved in on a daily basis. Hindsight bias leads you to believe that the event you just experienced was predictable, even though it really wasn’t. In other words, you knew all along that things would turn out the way they did. Representative bias describes a faulty way of thinking, in which you unintentionally stereotype someone or something; for example, you may assume that your professors spend their free time reading books and engaging in intellectual conversation, because the idea of them spending their time playing volleyball or visiting an amusement park does not fit in with your stereotypes of professors. Finally, the availability heuristic is a heuristic in which you make a decision based on an example, information, or recent experience that is readily available to you, even though it may not be the best example to inform your decision. Biases tend to “preserve that which is already established—to maintain our preexisting knowledge, beliefs, attitudes, and hypotheses” (Aronson, 1995; Kahneman, 2011). These biases are summarized in Table 7.3.
Anchoring: Tendency to focus on one particular piece of information when making decisions or problem-solving.
Confirmation: Focuses on information that confirms existing beliefs.
Hindsight: Belief that the event just experienced was predictable.
Representative: Unintentional stereotyping of someone or something.
Availability: Decision is based upon either an available precedent or an example that may be faulty.
Table 7.3 Summary of Decision Biases
Link to Learning Please visit this site to see a clever music video that a high school teacher made to explain these and other cognitive biases to his AP psychology students. Were you able to determine how many marbles are needed to balance the scales in Figure 7.10? You need nine. Were you able to solve the problems in Figure 7.8 and Figure 7.9 ? Here are the answers ( Figure 7.11 ). 7.4 What Are Intelligence and Creativity? Learning Objectives By the end of this section, you will be able to: Define intelligence Explain the triarchic theory of intelligence Identify the difference between intelligence theories Explain emotional intelligence A four-and-a-half-year-old boy sits at the kitchen table with his father, who is reading a new story aloud to him. He turns the page to continue reading, but before he can begin, the boy says, “Wait, Daddy!” He points to the words on the new page and reads aloud, “Go, Pig! Go!” The father stops and looks at his son. “Can you read that?” he asks. “Yes, Daddy!” And he points to the words and reads again, “Go, Pig! Go!” This father was not actively teaching his son to read, even though the child constantly asked questions about letters, words, and symbols that they saw everywhere: in the car, in the store, on the television. The dad wondered about what else his son might understand and decided to try an experiment. Grabbing a sheet of blank paper, he wrote several simple words in a list: mom, dad, dog, bird, bed, truck, car, tree. He put the list down in front of the boy and asked him to read the words. “Mom, dad, dog, bird, bed, truck, car, tree,” he read, slowing down to carefully pronounce bird and truck. Then, “Did I do it, Daddy?” “You sure did! That is very good.” The father gave his little boy a warm hug and continued reading the story about the pig, all the while wondering if his son’s abilities were an indication of exceptional intelligence or simply a normal pattern of linguistic development. Like the father in this example, psychologists have wondered what constitutes intelligence and how it can be measured. Classifying Intelligence What exactly is intelligence? The way that researchers have defined the concept of intelligence has been modified many times since the birth of psychology. British psychologist Charles Spearman believed intelligence consisted of one general factor, called g, which could be measured and compared among individuals. Spearman focused on the commonalities among various intellectual abilities and deemphasized what made each unique. Long before modern psychology developed, however, ancient philosophers, such as Aristotle, held a similar view (Cianciolo & Sternberg, 2004). Other psychologists believe that instead of a single factor, intelligence is a collection of distinct abilities. In the 1940s, Raymond Cattell proposed a theory of intelligence that divided general intelligence into two components: crystallized intelligence and fluid intelligence (Cattell, 1963). Crystallized intelligence is characterized as acquired knowledge and the ability to retrieve it.
When you learn, remember, and recall information, you are using crystallized intelligence. You use crystallized intelligence all the time in your coursework by demonstrating that you have mastered the information covered in the course. Fluid intelligence encompasses the ability to see complex relationships and solve problems. Navigating your way home after being detoured onto an unfamiliar route because of road construction would draw upon your fluid intelligence. Fluid intelligence helps you tackle complex, abstract challenges in your daily life, whereas crystallized intelligence helps you overcome concrete, straightforward problems (Cattell, 1963). Other theorists and psychologists believe that intelligence should be defined in more practical terms. For example, what types of behaviors help you get ahead in life? Which skills promote success? Think about this for a moment. Being able to recite all 44 presidents of the United States in order is an excellent party trick, but will knowing this make you a better person? Robert Sternberg developed another theory of intelligence, which he titled the triarchic theory of intelligence because it sees intelligence as composed of three parts (Sternberg, 1988): practical, creative, and analytical intelligence ( Figure 7.12 ). Practical intelligence, as proposed by Sternberg, is sometimes compared to “street smarts.” Being practical means you find solutions that work in your everyday life by applying knowledge based on your experiences. This type of intelligence appears to be separate from the traditional understanding of IQ; individuals who score high in practical intelligence may or may not have comparable scores in creative and analytical intelligence (Sternberg, 1988). This story about the 2007 Virginia Tech shootings illustrates both high and low practical intelligence. During the incident, one student left her class to go get a soda in an adjacent building. She planned to return to class, but when she returned to her building after getting her soda, she saw that the door she used to leave was now chained shut from the inside. Instead of thinking about why there was a chain around the door handles, she went to her class’s window and crawled back into the room. She thus potentially exposed herself to the gunman. Thankfully, she was not shot. On the other hand, a pair of students was walking on campus when they heard gunshots nearby. One friend said, “Let’s go check it out and see what is going on.” The other student said, “No way, we need to run away from the gunshots.” They did just that. As a result, both avoided harm. The student who crawled through the window demonstrated some creative intelligence but did not use common sense. She would have low practical intelligence. The student who encouraged his friend to run away from the sound of gunshots would have much higher practical intelligence. Analytical intelligence is closely aligned with academic problem solving and computations. Sternberg says that analytical intelligence is demonstrated by an ability to analyze, evaluate, judge, compare, and contrast. When reading a classic novel for literature class, for example, it is usually necessary to compare the motives of the main characters of the book or analyze the historical context of the story. In a science course such as anatomy, you must study the processes by which the body uses various minerals in different human systems. In developing an understanding of this topic, you are using analytical intelligence.
When solving a challenging math problem, you would apply analytical intelligence to analyze different aspects of the problem and then solve it section by section.

Creative intelligence is marked by inventing or imagining a solution to a problem or situation. Creativity in this realm can include finding a novel solution to an unexpected problem or producing a beautiful work of art or a well-developed short story. Imagine for a moment that you are camping in the woods with some friends and realize that you’ve forgotten your camp coffee pot. The person in your group who figures out a way to successfully brew coffee for everyone would be credited as having higher creative intelligence.

Multiple intelligences theory was developed by Howard Gardner, a Harvard psychologist and former student of Erik Erikson. Gardner’s theory, which has been refined for more than 30 years, is a more recent development among theories of intelligence. In Gardner’s theory, each person possesses at least eight intelligences. Among these eight intelligences, a person typically excels in some and falters in others (Gardner, 1983). Table 7.4 describes each type of intelligence.

Table 7.4 Multiple Intelligences

Linguistic intelligence: Perceives different functions of language, different sounds and meanings of words; may easily learn multiple languages. Representative careers: journalist, novelist, poet, teacher.
Logical-mathematical intelligence: Capable of seeing numerical patterns; strong ability to use reason and logic. Representative careers: scientist, mathematician.
Musical intelligence: Understands and appreciates rhythm, pitch, and tone; may play multiple instruments or perform as a vocalist. Representative careers: composer, performer.
Bodily-kinesthetic intelligence: High ability to control the movements of the body and use the body to perform various physical tasks. Representative careers: dancer, athlete, athletic coach, yoga instructor.
Spatial intelligence: Ability to perceive the relationship between objects and how they move in space. Representative careers: choreographer, sculptor, architect, aviator, sailor.
Interpersonal intelligence: Ability to understand and be sensitive to the various emotional states of others. Representative careers: counselor, social worker, salesperson.
Intrapersonal intelligence: Ability to access personal feelings and motivations, and use them to direct behavior and reach personal goals. A key component of personal success over time.
Naturalist intelligence: High capacity to appreciate the natural world and interact with the species within it. Representative careers: biologist, ecologist, environmentalist.

Gardner’s theory is relatively new and needs additional research to better establish empirical support. At the same time, his ideas challenge the traditional idea of intelligence to include a wider variety of abilities, although it has been suggested that Gardner simply relabeled what other theorists called “cognitive styles” as “intelligences” (Morgan, 1996). Furthermore, developing traditional measures of Gardner’s intelligences is extremely difficult (Furnham, 2009; Gardner & Moran, 2006; Klein, 1997).

Gardner’s inter- and intrapersonal intelligences are often combined into a single type: emotional intelligence. Emotional intelligence encompasses the ability to understand the emotions of yourself and others, show empathy, understand social relationships and cues, and regulate your own emotions and respond in culturally appropriate ways (Parker, Saklofske, & Stough, 2009). People with high emotional intelligence typically have well-developed social skills.
Some researchers, including Daniel Goleman, the author of Emotional Intelligence: Why It Can Matter More than IQ, argue that emotional intelligence is a better predictor of success than traditional intelligence (Goleman, 1995). However, emotional intelligence has been widely debated, with researchers pointing out inconsistencies in how it is defined and described, as well as questioning the results of studies on a subject that is difficult to measure and study empirically (Locke, 2005; Mayer, Salovey, & Caruso, 2004).

Intelligence can also have different meanings and values in different cultures. If you live on a small island, where most people get their food by fishing from boats, it would be important to know how to fish and how to repair a boat. If you were an exceptional angler, your peers would probably consider you intelligent. If you were also skilled at repairing boats, your intelligence might be known across the whole island. Think about your own family’s culture. What values are important for Latino families? Italian families? In Irish families, hospitality and telling an entertaining story are marks of the culture. If you are a skilled storyteller, other members of Irish culture are likely to consider you intelligent. Some cultures place a high value on working together as a collective. In these cultures, the importance of the group supersedes the importance of individual achievement. When you visit such a culture, how well you relate to the values of that culture exemplifies your cultural intelligence, sometimes referred to as cultural competence.

Creativity

Creativity is the ability to generate, create, or discover new ideas, solutions, and possibilities. Very creative people often have intense knowledge about something, work on it for years, look at novel solutions, seek out the advice and help of other experts, and take risks. Although creativity is often associated with the arts, it is actually a vital form of intelligence that drives people in many disciplines to discover something new. Creativity can be found in every area of life, from the way you decorate your residence to a new way of understanding how a cell works.

Creativity is often assessed as a function of one’s ability to engage in divergent thinking. Divergent thinking can be described as thinking “outside the box”; it allows an individual to arrive at unique, multiple solutions to a given problem. In contrast, convergent thinking describes the ability to provide a correct or well-established answer or solution to a problem (Cropley, 2006; Guilford, 1967).

Everyday Connection

Creativity

Dr. Tom Steitz, the Sterling Professor of Biochemistry and Biophysics at Yale University, has spent his career looking at the structure and specific aspects of RNA molecules and how their interactions could help produce antibiotics and ward off diseases. As a result of his lifetime of work, he won the Nobel Prize in Chemistry in 2009. He wrote, “Looking back over the development and progress of my career in science, I am reminded how vitally important good mentorship is in the early stages of one's career development and constant face-to-face conversations, debate and discussions with colleagues at all stages of research. Outstanding discoveries, insights and developments do not happen in a vacuum” (Steitz, 2010, para. 39). Based on Steitz’s comment, it becomes clear that someone’s creativity, although an individual strength, benefits from interactions with others.
Think of a time when your creativity was sparked by a conversation with a friend or classmate. How did that person influence you, and what problem did you solve using creativity?

7.5 Measures of Intelligence

Learning Objectives

By the end of this section, you will be able to:
- Explain how intelligence tests are developed
- Describe the history of the use of IQ tests
- Describe the purposes and benefits of intelligence testing

While you’re likely familiar with the term “IQ” and associate it with the idea of intelligence, what does IQ really mean? IQ stands for intelligence quotient and describes a score earned on a test designed to measure intelligence. You’ve already learned that there are many ways psychologists describe intelligence (or, more aptly, intelligences). Similarly, IQ tests, the tools designed to measure intelligence, have been the subject of debate throughout their development and use.

When might an IQ test be used? What do we learn from the results, and how might people use this information? IQ tests are expensive to administer and must be given by a licensed psychologist. Intelligence testing has been considered both a bane and a boon for education and social policy. In this section, we will explore what intelligence tests measure, how they are scored, and how they were developed.

Measuring Intelligence

It seems that the human understanding of intelligence is somewhat limited when we focus on traditional or academic-type intelligence. How, then, can intelligence be measured? And when we measure intelligence, how do we ensure that we capture what we’re really trying to measure (in other words, that IQ tests function as valid measures of intelligence)? In the following paragraphs, we will explore how intelligence tests were developed and the history of their use.

The IQ test has been synonymous with intelligence for over a century. In the late 1800s, Sir Francis Galton developed the first broad test of intelligence (Flanagan & Kaufman, 2004). Although he was not a psychologist, his contributions to the concepts of intelligence testing are still felt today (Gordon, 1995). Reliable intelligence testing (you may recall from earlier chapters that reliability refers to a test’s ability to produce consistent results) began in earnest during the early 1900s with a researcher named Alfred Binet (Figure 7.13). Binet was asked by the French government to develop an intelligence test to use on children to determine which ones might have difficulty in school; it included many verbally based tasks. American researchers soon realized the value of such testing.

Lewis Terman, a Stanford professor, modified Binet’s work by standardizing the administration of the test and tested thousands of different-aged children to establish an average score for each age. As a result, the test was normed and standardized, which means that the test was administered consistently to a large enough representative sample of the population that the range of scores resulted in a bell curve (bell curves will be discussed later). Standardization means that the manner of administration, scoring, and interpretation of results is consistent. Norming involves giving a test to a large population so data can be collected comparing groups, such as age groups. The resulting data provide norms, or referential scores, by which to interpret future scores. Norms are not expectations of what a given group should know but a demonstration of what that group does know.
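To make the idea of norming concrete, here is a minimal sketch in Python, assuming an invented reference sample of raw scores for a single age group. The scores, the cutoff, and the percentile convention are all fabricated for illustration; they are not norms from any published test.

```python
# A minimal sketch of norming: collect raw scores from a reference
# sample, then express any new score relative to that group.
# All numbers here are invented for illustration only.
from statistics import mean, stdev

# Hypothetical raw scores from a reference sample of same-age children.
norm_sample = [38, 41, 44, 45, 45, 47, 48, 50, 51, 53, 55, 58, 60]

sample_mean = mean(norm_sample)
sample_sd = stdev(norm_sample)

def percentile_rank(raw_score, sample):
    """Percent of the reference sample scoring at or below raw_score."""
    at_or_below = sum(1 for s in sample if s <= raw_score)
    return 100 * at_or_below / len(sample)

new_score = 52
print(f"Norm group mean: {sample_mean:.1f}, SD: {sample_sd:.1f}")
print(f"A raw score of {new_score} falls at the "
      f"{percentile_rank(new_score, norm_sample):.0f}th percentile of the norm group.")
```

The key point the sketch captures is that a norm-referenced score says nothing absolute about what a test taker knows; it only locates the score within the distribution of the reference group.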
Norming and standardizing the test ensures that new scores are reliable. This new version of the test was called the Stanford-Binet Intelligence Scale (Terman, 1916). Remarkably, an updated version of this test is still widely used today.

In 1939, David Wechsler, a psychologist who spent part of his career working with World War I veterans, developed a new IQ test in the United States. Wechsler combined several subtests from other intelligence tests used between 1880 and World War I. These subtests tapped into a variety of verbal and nonverbal skills, because Wechsler believed that intelligence encompassed “the global capacity of a person to act purposefully, to think rationally, and to deal effectively with his environment” (Wechsler, 1958, p. 7). He named the test the Wechsler-Bellevue Intelligence Scale (Wechsler, 1981). This combination of subtests became one of the most extensively used intelligence tests in the history of psychology. Although its name was later changed to the Wechsler Adult Intelligence Scale (WAIS) and it has been revised several times, the aims of the test remain virtually unchanged since its inception (Boake, 2002). Today, there are three intelligence tests credited to Wechsler: the Wechsler Adult Intelligence Scale, fourth edition (WAIS-IV); the Wechsler Intelligence Scale for Children (WISC-V); and the Wechsler Preschool and Primary Scale of Intelligence, fourth edition (WPPSI-IV) (Wechsler, 2012). These tests are used widely in schools and communities throughout the United States, and they are periodically normed and standardized as a means of recalibration.

The periodic recalibrations have led to an intriguing observation known as the Flynn effect. Named after James Flynn, who was among the first to describe this trend, the Flynn effect refers to the observation that each generation has a significantly higher IQ than the last. Flynn himself argues, however, that increased IQ scores do not necessarily mean that younger generations are more intelligent per se (Flynn, Shaughnessy, & Fulgham, 2012).

As a part of the recalibration process, the WISC-V was given to thousands of children across the country, and children taking the test today are compared with their same-age peers (Figure 7.13). The WISC-V is composed of 14 subtests, which comprise five indices, which then render an IQ score. The five indices are Verbal Comprehension, Visual Spatial, Fluid Reasoning, Working Memory, and Processing Speed. When the test is complete, individuals receive a score for each of the five indices and a Full Scale IQ score. The method of scoring reflects the understanding that intelligence is composed of multiple abilities in several cognitive realms and focuses on the mental processes that the child used to arrive at his or her answers to each test item.

Ultimately, we are still left with the question of how valid intelligence tests are. Certainly, the most modern versions of these tests tap into more than verbal competencies, yet the specific skills that should be assessed in IQ testing, the degree to which any test can truly measure an individual’s intelligence, and the use of the results of IQ tests are still issues of debate (Gresham & Witt, 1997; Flynn, Shaughnessy, & Fulgham, 2012; Richardson, 2002; Schlinger, 2003).

What Do You Think?

Intellectually Disabled Criminals and Capital Punishment

Atkins v. Virginia was a landmark case in the United States Supreme Court.
On August 16, 1996, two men, Daryl Atkins and William Jones, robbed, kidnapped, and then shot and killed Eric Nesbitt, a local airman from the U.S. Air Force. A clinical psychologist evaluated Atkins and testified at the trial that Atkins had an IQ of 59 (the mean IQ score is 100). The psychologist concluded that Atkins was mildly mentally retarded. The jury found Atkins guilty, and he was sentenced to death. Atkins and his attorneys appealed to the Supreme Court.

In June 2002, the Supreme Court reversed a previous decision and ruled that executions of mentally retarded criminals are ‘cruel and unusual punishments’ prohibited by the Eighth Amendment. The court wrote in its decision:

Clinical definitions of mental retardation require not only subaverage intellectual functioning, but also significant limitations in adaptive skills. Mentally retarded persons frequently know the difference between right and wrong and are competent to stand trial. Because of their impairments, however, by definition they have diminished capacities to understand and process information, to communicate, to abstract from mistakes and learn from experience, to engage in logical reasoning, to control impulses, and to understand others’ reactions. Their deficiencies do not warrant an exemption from criminal sanctions, but diminish their personal culpability (Atkins v. Virginia, 2002, par. 5).

The court also decided that there was a state legislature consensus against the execution of the mentally retarded and that this consensus should stand for all of the states. The Supreme Court ruling left it up to the states to determine their own definitions of mental retardation and intellectual disability, and the definitions vary among states as to who can be executed. In the Atkins case, a jury decided that because he had had many contacts with his lawyers and thus was provided with intellectual stimulation, his IQ had reportedly increased, and he was now smart enough to be executed. He was given an execution date and then received a stay of execution after it was revealed that lawyers for his co-defendant, William Jones, coached Jones to “produce a testimony against Mr. Atkins that did match the evidence” (Liptak, 2008). After the revelation of this misconduct, Atkins was re-sentenced to life imprisonment.

Atkins v. Virginia (2002) highlights several issues regarding society’s beliefs around intelligence. In the Atkins case, the Supreme Court decided that intellectual disability does affect decision making and therefore should affect the nature of the punishment such criminals receive. Where, however, should the lines of intellectual disability be drawn? In May 2014, the Supreme Court ruled in a related case (Hall v. Florida) that IQ scores cannot be used as a final determination of a prisoner’s eligibility for the death penalty (Roberts, 2014).

The Bell Curve

The results of intelligence tests follow the bell curve, a graph in the general shape of a bell. When the bell curve is used in psychological testing, the graph demonstrates a normal distribution of a trait, in this case intelligence, in the human population. Many human traits naturally follow the bell curve. For example, if you lined up all your female schoolmates according to height, it is likely that a large cluster of them would be the average height for an American woman: 5’4”–5’6”. This cluster would fall in the center of the bell curve, representing the average height for American women (Figure 7.14). There would be fewer women who stand closer to 4’11”.
The same would be true for women of above-average height: those who stand closer to 5’11”. The trick to finding a bell curve in nature is to use a large sample size. Without a large sample size, it is less likely that the bell curve will represent the wider population. A representative sample is a subset of the population that accurately represents the general population. If, for example, you measured the height of the women in your classroom only, you might not actually have a representative sample. Perhaps the women’s basketball team wanted to take this course together, and they are all in your class. Because basketball players tend to be taller than average, the women in your class may not be a good representative sample of the population of American women. But if your sample included all the women at your school, it is likely that their heights would form a natural bell curve.

The same principles apply to intelligence test scores. Individuals earn a score called an intelligence quotient (IQ). Over the years, different types of IQ tests have evolved, but the way scores are interpreted remains the same. The average IQ score on an IQ test is 100. Standard deviations describe how data are dispersed in a population and give context to large data sets. The bell curve uses the standard deviation to show how all scores are dispersed from the average score (Figure 7.15). In modern IQ testing, one standard deviation is 15 points. So a score of 85 would be described as “one standard deviation below the mean.” How would you describe a score of 115 and a score of 70? (A short sketch at the end of this discussion works through this arithmetic.)

Any IQ score that falls within one standard deviation above and below the mean (between 85 and 115) is considered average, and 68% of the population has IQ scores in this range. An IQ score of 130 or above is considered a superior level. Only 2.2% of the population has an IQ score below 70 (American Psychological Association [APA], 2013). A score of 70 or below indicates significant cognitive delays. When these are combined with major deficits in adaptive functioning, a person is diagnosed as having an intellectual disability (American Association on Intellectual and Developmental Disabilities, 2013). Formerly known as mental retardation, the accepted term now is intellectual disability, and it has four subtypes: mild, moderate, severe, and profound (Table 7.5). The Diagnostic and Statistical Manual of Mental Disorders lists criteria for each subgroup (APA, 2013).

Table 7.5 Characteristics of Cognitive Disorders

Mild (85% of the intellectually disabled population): 3rd- to 6th-grade skill level in reading, writing, and math; may be employed and live independently
Moderate (10%): Basic reading and writing skills; functional self-care skills; requires some oversight
Severe (5%): Functional self-care skills; requires oversight of daily environment and activities
Profound (<1%): May be able to communicate verbally or nonverbally; requires intensive oversight
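Here is a minimal sketch in Python of the standard-deviation arithmetic described above, using the conventional mean of 100 and standard deviation of 15. The descriptive labels simply restate the ranges given in the text; the function is illustrative, not a clinical instrument.

```python
# A minimal sketch of how deviation IQ scores relate to the bell curve:
# a score is expressed as a distance from the mean (100) in standard
# deviations (15 points each). Labels restate the ranges from the text.
MEAN_IQ = 100
SD_IQ = 15

def describe_iq(score):
    z = (score - MEAN_IQ) / SD_IQ  # distance from the mean in SDs
    if score >= 130:
        label = "superior (about 2% of the population)"
    elif score > 115:
        label = "above average"
    elif score >= 85:
        label = "average (about 68% of the population)"
    elif score > 70:
        label = "below average"
    else:
        label = "range indicating significant cognitive delays"
    return z, label

for s in (70, 85, 100, 115, 130):
    z, label = describe_iq(s)
    print(f"IQ {s}: {z:+.1f} SD from the mean -> {label}")
```

Running the loop answers the question posed above: a score of 115 is one standard deviation above the mean, and a score of 70 is two standard deviations below it.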
On the other end of the intelligence spectrum are those individuals whose IQs fall into the highest ranges. Consistent with the bell curve, about 2% of the population falls into this category. People are considered gifted if they have an IQ score of 130 or higher, or superior intelligence in a particular area. Long ago, popular belief suggested that people of high intelligence were maladjusted. This idea was disproven through a groundbreaking study of gifted children. In 1921, Lewis Terman began a longitudinal study of over 1,500 children with IQs over 135 (Terman, 1925). His findings showed that these children became well-educated, successful adults who were, in fact, well-adjusted (Terman & Oden, 1947). Additionally, Terman’s study showed that the subjects were above average in physical build and attractiveness, dispelling an earlier popular notion that highly intelligent people were “weaklings.” Some people with very high IQs elect to join Mensa, an organization dedicated to identifying, researching, and fostering intelligence. Members must have an IQ score in the top 2% of the population, and they may be required to pass other exams in their application to join the group.

Dig Deeper

What’s in a Name? Mental Retardation

In the past, individuals with IQ scores below 70 and significant adaptive and social functioning delays were diagnosed with mental retardation. When this diagnosis was first named, the title held no social stigma. In time, however, the degrading word “retard” sprang from this diagnostic term. “Retard” was frequently used as a taunt, especially among young people, until the words “mentally retarded” and “retard” became an insult. As such, the DSM-5 now labels this diagnosis as “intellectual disability.” Many states once had a Department of Mental Retardation to serve those diagnosed with such cognitive delays, but most have changed their names to Department of Developmental Disabilities or something similar. The Social Security Administration still uses the term “mental retardation” but is considering eliminating it from its programming (Goad, 2013). Earlier in the chapter, we discussed how language affects how we think. Do you think changing the title of this department has any impact on how people regard those with developmental disabilities? Does a different name give people more dignity, and if so, how? Does it change the expectations for those with developmental or cognitive disabilities? Why or why not?

Why Measure Intelligence?

The value of IQ testing is most evident in educational or clinical settings. Children who seem to be experiencing learning difficulties or severe behavioral problems can be tested to ascertain whether the child’s difficulties can be partly attributed to an IQ score that is significantly different from the mean for her age group. Without IQ testing, or another measure of intelligence, children and adults needing extra support might not be identified effectively. In addition, IQ testing is used in courts to determine whether a defendant has special or extenuating circumstances that preclude him from participating in some way in a trial. People also use IQ testing results to seek disability benefits from the Social Security Administration. While IQ tests have sometimes been used as arguments in support of insidious purposes, such as the eugenics movement (Severson, 2011), the following case study demonstrates the usefulness and benefits of IQ testing.

Candace, a 14-year-old girl experiencing problems at school, was referred for a court-ordered psychological evaluation. She was in regular education classes in ninth grade and was failing every subject. Candace had never been a stellar student but had always been passed to the next grade. Frequently, she would curse at any of her teachers who called on her in class. She also got into fights with other students and occasionally shoplifted.
When she arrived for the evaluation, Candace immediately said that she hated everything about school, including the teachers, the rest of the staff, the building, and the homework. Her parents stated that they felt their daughter was picked on because she was of a different race than the teachers and most of the other students. When asked why she cursed at her teachers, Candace replied, “They only call on me when I don’t know the answer. I don’t want to say, ‘I don’t know’ all of the time and look like an idiot in front of my friends. The teachers embarrass me.” She was given a battery of tests, including an IQ test. Her score on the IQ test was 68. What does Candace’s score say about her ability to excel or even succeed in regular education classes without assistance?

7.6 The Source of Intelligence

Learning Objectives

By the end of this section, you will be able to:
- Describe how genetics and environment affect intelligence
- Explain the relationship between IQ scores and socioeconomic status
- Describe the difference between a learning disability and a developmental disorder

A young girl, born of teenage parents, lives with her grandmother in rural Mississippi. They are poor, living in serious poverty, but they do their best to get by with what they have. She learns to read when she is just 3 years old. As she grows older, she longs to live with her mother, who now resides in Wisconsin. She moves there at the age of 6 years. At 9 years of age, she is raped. During the next several years, several different male relatives repeatedly molest her. Her life unravels. She turns to drugs and sex to fill the deep, lonely void inside her. Her mother then sends her to Nashville to live with her father, who imposes strict behavioral expectations upon her, and over time, her wild life settles once again. She begins to experience success in school, and at 19 years old, becomes the youngest and first African-American female news anchor (“Dates and Events,” n.d.). The woman, Oprah Winfrey, goes on to become a media giant known for both her intelligence and her empathy.

High Intelligence: Nature or Nurture?

Where does high intelligence come from? Some researchers believe that intelligence is a trait inherited from a person’s parents. Scientists who research this topic typically use twin studies to determine the heritability of intelligence. The Minnesota Study of Twins Reared Apart is one of the most well-known twin studies. In this investigation, researchers found that identical twins raised together and identical twins raised apart exhibit a higher correlation between their IQ scores than siblings or fraternal twins raised together (Bouchard, Lykken, McGue, Segal, & Tellegen, 1990). The findings from this study reveal a genetic component to intelligence (Figure 7.16); a brief sketch of this kind of within-pair correlation comparison appears at the end of this discussion.

At the same time, other psychologists believe that intelligence is shaped by a child’s developmental environment. If parents were to provide their children with intellectual stimulation from before birth, the children would likely absorb the benefits of that stimulation, and it would be reflected in their intelligence levels.

The reality is that aspects of each idea are probably correct. In fact, one study suggests that although genetics seem to control the level of intelligence, environmental influences provide both stability and change to trigger the manifestation of cognitive abilities (Bartels, Rietveld, Van Baal, & Boomsma, 2002).
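To illustrate the kind of comparison twin studies rest on, here is a minimal Python sketch that computes the Pearson correlation of IQ scores within twin pairs. All of the scores below are fabricated for illustration; they are not data from the Minnesota study, and a real analysis would use far more pairs.

```python
# A minimal sketch of the comparison behind twin studies: correlate
# IQ scores within pairs, then compare identical vs. fraternal twins.
# All scores are fabricated for illustration only.
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical IQ scores: twin A vs. twin B for several pairs.
identical_a = [102, 96, 118, 88, 110]
identical_b = [100, 99, 115, 90, 108]
fraternal_a = [102, 96, 118, 88, 110]
fraternal_b = [94, 107, 105, 99, 101]

print(f"Identical twins: r = {pearson_r(identical_a, identical_b):.2f}")
print(f"Fraternal twins: r = {pearson_r(fraternal_a, fraternal_b):.2f}")
```

With these invented numbers, the identical pairs correlate far more strongly than the fraternal pairs, which is the pattern that researchers interpret as evidence of a genetic component.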
Certainly, there are behaviors that support the development of intelligence, but the genetic component of high intelligence should not be ignored. As with all heritable traits, however, it is not always possible to isolate how and when high intelligence is passed on to the next generation.

Range of reaction is the theory that each person responds to the environment in a unique way based on his or her genetic makeup. According to this idea, your genetic potential is a fixed quantity, but whether you reach your full intellectual potential is dependent upon the environmental stimulation you experience, especially in childhood. Think about this scenario: A couple adopts a child who has average genetic intellectual potential. They raise her in an extremely stimulating environment. What will happen to the couple’s new daughter? It is likely that the stimulating environment will improve her intellectual outcomes over the course of her life. But what happens if this experiment is reversed? If a child with an extremely strong genetic background is placed in an environment that does not stimulate him, what happens? Interestingly, according to a longitudinal study of highly gifted individuals, it was found that “the two extremes of optimal and pathological experience are both represented disproportionately in the backgrounds of creative individuals”; however, those who experienced supportive family environments were more likely to report being happy (Csikszentmihalyi & Csikszentmihalyi, 1993, p. 187).

Another challenge to determining the origins of high intelligence is the confounding nature of our human social structures. It is troubling to note that some ethnic groups perform better on IQ tests than others, and it is likely that the results do not have much to do with the quality of each ethnic group’s intellect. The same is true for socioeconomic status. Children who live in poverty experience more pervasive, daily stress than children who do not worry about the basic needs of safety, shelter, and food. These worries can negatively affect how the brain functions and develops, causing a dip in IQ scores. Mark Kishiyama and his colleagues determined that children living in poverty demonstrated reduced prefrontal brain functioning comparable to that of children with damage to the lateral prefrontal cortex (Kishiyama, Boyce, Jimenez, Perry, & Knight, 2009).

The debate around the foundations of and influences on intelligence exploded in 1969, when an educational psychologist named Arthur Jensen published the article “How Much Can We Boost IQ and Scholastic Achievement?” in the Harvard Educational Review. Jensen had administered IQ tests to diverse groups of students, and his results led him to the conclusion that IQ is determined by genetics. He also posited that intelligence was made up of two types of abilities: Level I and Level II. In his theory, Level I is responsible for rote memorization, whereas Level II is responsible for conceptual and analytical abilities. According to his findings, Level I remained consistent among the human race. Level II, however, exhibited differences among ethnic groups (Modgil & Routledge, 1987). Jensen’s most controversial conclusion was that Level II intelligence is prevalent among Asians, then Caucasians, then African Americans. Robert Williams was among those who called out racial bias in Jensen’s results (Williams, 1970). Obviously, Jensen’s interpretation of his own data caused an intense response in a nation that continued to grapple with the effects of racism (Fox, 2012).
However, Jensen’s ideas were not solitary or unique; rather, they represented one of many examples of psychologists asserting racial differences in IQ and cognitive ability. In fact, Rushton and Jensen (2005) reviewed three decades’ worth of research on the relationship between race and cognitive ability. Jensen’s belief in the inherited nature of intelligence and the validity of the IQ test as the truest measure of intelligence are at the core of his conclusions. If, however, you believe that intelligence is more than Levels I and II, or that IQ tests do not control for socioeconomic and cultural differences among people, then perhaps you can dismiss Jensen’s conclusions as a single window that looks out on the complicated and varied landscape of human intelligence.

In a related story, parents of African American students filed a case against the State of California in 1979 because they believed that the testing method used to identify students with learning disabilities was culturally unfair, as the tests were normed and standardized using white children (Larry P. v. Riles). The testing method used by the state disproportionately identified African American children as mentally retarded, resulting in many students being incorrectly classified as such. According to a summary of the case, Larry P. v. Riles:

In violation of Title VI of the Civil Rights Act of 1964, the Rehabilitation Act of 1973, and the Education for All Handicapped Children Act of 1975, defendants have utilized standardized intelligence tests that are racially and culturally biased, have a discriminatory impact against black children, and have not been validated for the purpose of essentially permanent placements of black children into educationally dead-end, isolated, and stigmatizing classes for the so-called educable mentally retarded. Further, these federal laws have been violated by defendants' general use of placement mechanisms that, taken together, have not been validated and result in a large over-representation of black children in the special E.M.R. classes. (Larry P. v. Riles, par. 6)

Once again, the limitations of intelligence testing were revealed.

What Are Learning Disabilities?

Learning disabilities are cognitive disorders that affect different areas of cognition, particularly language or reading. It should be pointed out that learning disabilities are not the same thing as intellectual disabilities. Learning disabilities are considered specific neurological impairments rather than global intellectual or developmental disabilities. A person with a language disability has difficulty understanding or using spoken language, whereas someone with a reading disability, such as dyslexia, has difficulty processing what he or she is reading.

Often, learning disabilities are not recognized until a child reaches school age. One confounding aspect of learning disabilities is that they often affect children with average to above-average intelligence. At the same time, learning disabilities tend to exhibit comorbidity with other disorders, like attention-deficit hyperactivity disorder (ADHD). Between 30% and 70% of individuals with diagnosed cases of ADHD also have some sort of learning disability (Riccio, Gonzales, & Hynd, 1994). Let’s take a look at two examples of common learning disabilities: dysgraphia and dyslexia.

Dysgraphia

Children with dysgraphia have a learning disability that results in a struggle to write legibly.
The physical task of writing with a pen and paper is extremely challenging for them. These children often have extreme difficulty putting their thoughts down on paper (Smits-Engelsman & Van Galen, 1997). This difficulty is inconsistent with the person’s IQ: that is, based on the child’s IQ and/or abilities in other areas, a child with dysgraphia should be able to write but can’t. Children with dysgraphia may also have problems with spatial abilities.

Students with dysgraphia need academic accommodations to help them succeed in school. These accommodations can provide students with alternative assessment opportunities to demonstrate what they know (Barton, 2003). For example, a student with dysgraphia might be permitted to take an oral exam rather than a traditional paper-and-pencil test. Treatment is usually provided by an occupational therapist, although there is some question as to how effective such treatment is (Zwicker, 2005).

Dyslexia

Dyslexia is the most common learning disability in children. An individual with dyslexia exhibits an inability to correctly process letters. The neurological mechanism for sound processing does not work properly in someone with dyslexia. As a result, dyslexic children may not understand sound-letter correspondence. A child with dyslexia may mix up letters within words and sentences (letter reversals, such as those shown in Figure 7.17, are a hallmark of this learning disability) or skip whole words while reading. A dyslexic child may have difficulty spelling words correctly while writing. Because of the disordered way that the brain processes letters and sound, learning to read is a frustrating experience. Some dyslexic individuals cope by memorizing the shapes of most words, but they never actually learn to read (Berninger, 2008).
Summary

6.1 What Is Learning?
Instincts and reflexes are innate behaviors; they occur naturally and do not involve learning. In contrast, learning is a change in behavior or knowledge that results from experience. There are three main types of learning: classical conditioning, operant conditioning, and observational learning. Both classical and operant conditioning are forms of associative learning, in which associations are made between events that occur together. Observational learning is just as it sounds: learning by observing others.

6.2 Classical Conditioning
Pavlov’s pioneering work with dogs contributed greatly to what we know about learning. His experiments explored the type of associative learning we now call classical conditioning. In classical conditioning, organisms learn to associate events that repeatedly happen together, and researchers study how a reflexive response to a stimulus can be mapped to a different stimulus by training an association between the two stimuli. Pavlov’s experiments show how stimulus-response bonds are formed. Watson, the founder of behaviorism, was greatly influenced by Pavlov’s work. He tested humans by conditioning fear in an infant known as Little Albert. His findings suggest that classical conditioning can explain how some fears develop.

6.3 Operant Conditioning
Operant conditioning is based on the work of B. F. Skinner. Operant conditioning is a form of learning in which the motivation for a behavior happens after the behavior is demonstrated. An animal or a human receives a consequence after performing a specific behavior. The consequence is either a reinforcer or a punisher. All reinforcement (positive or negative) increases the likelihood of a behavioral response. All punishment (positive or negative) decreases the likelihood of a behavioral response. Several types of reinforcement schedules are used to reward behavior, depending on either a set or variable period of time.

6.4 Observational Learning (Modeling)
According to Bandura, learning can occur by watching others and then modeling what they do or say. This is known as observational learning. There are specific steps in the process of modeling that must be followed if learning is to be successful: attention, retention, reproduction, and motivation. Through modeling, Bandura has shown that children learn many things, both good and bad, simply by watching their parents, siblings, and others.
Chapter Outline

6.1 What Is Learning?
6.2 Classical Conditioning
6.3 Operant Conditioning
6.4 Observational Learning (Modeling)

Introduction

The summer sun shines brightly on a deserted stretch of beach. Suddenly, a tiny grey head emerges from the sand, then another and another. Soon the beach is teeming with loggerhead sea turtle hatchlings (Figure 6.1). Although only minutes old, the hatchlings know exactly what to do. Their flippers are not very efficient for moving across the hot sand, yet they continue onward, instinctively. Some are quickly snapped up by gulls circling overhead, and others become lunch for hungry ghost crabs that dart out of their holes. Despite these dangers, the hatchlings are driven to leave the safety of their nest and find the ocean.

Not far down this same beach, Ben and his son, Julian, paddle out into the ocean on surfboards. A wave approaches. Julian crouches on his board, then jumps up and rides the wave for a few seconds before losing his balance. He emerges from the water in time to watch his father ride the face of the wave.

Unlike baby sea turtles, which know how to find the ocean and swim with no help from their parents, we are not born knowing how to swim (or surf). Yet we humans pride ourselves on our ability to learn. In fact, over thousands of years and across cultures, we have created institutions devoted entirely to learning. But have you ever asked yourself how exactly it is that we learn? What processes are at work as we come to know what we know? This chapter focuses on the primary ways in which learning occurs.
[ { "answer": { "ans_choice": 2, "ans_text": "infant sucking on a nipple" }, "bloom": null, "hl_context": "Both reflexes and instincts help an organism adapt to its environment and do not have to be learned . <hl> For example , every healthy human baby has a sucking reflex , present at birth . <hl> <hl> Babies are born knowing how to suck on a nipple , whether artificial ( from a bottle ) or human . <hl> Nobody teaches the baby to suck , just as no one teaches a sea turtle hatchling to move toward the ocean .", "hl_sentences": "For example , every healthy human baby has a sucking reflex , present at birth . Babies are born knowing how to suck on a nipple , whether artificial ( from a bottle ) or human .", "question": { "cloze_format": "____ is an example of a reflex that occurs at some point in the development of a human being.", "normal_format": "Which of the following is an example of a reflex that occurs at some point in the development of a human being?", "question_choices": [ "child riding a bike", "teen socializing", "infant sucking on a nipple", "toddler walking" ], "question_id": "fs-idp80862112", "question_text": "Which of the following is an example of a reflex that occurs at some point in the development of a human being?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "occurs as a result of experience" }, "bloom": null, "hl_context": "Learning , like reflexes and instincts , allows an organism to adapt to its environment . <hl> But unlike instincts and reflexes , learned behaviors involve change and experience : learning is a relatively permanent change in behavior or knowledge that results from experience . <hl> In contrast to the innate behaviors discussed above , learning involves acquiring knowledge and skills through experience . Looking back at our surfing scenario , Julian will have to spend much more time training with his surfboard before he learns how to ride the waves like his father .", "hl_sentences": "But unlike instincts and reflexes , learned behaviors involve change and experience : learning is a relatively permanent change in behavior or knowledge that results from experience .", "question": { "cloze_format": "Learning is best defined as a relatively permanent change in behavior that ________.", "normal_format": "Learning is best defined as a relatively permanent change in behavior that what?", "question_choices": [ "is innate", "occurs as a result of experience", "is found only in humans", "occurs by observing others" ], "question_id": "fs-idp103566368", "question_text": "Learning is best defined as a relatively permanent change in behavior that ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "classical conditioning; operant conditioning" }, "bloom": null, "hl_context": "<hl> Previous sections of this chapter focused on classical and operant conditioning , which are forms of associative learning . <hl> In observational learning , we learn by watching others and then imitating , or modeling , what they do or say . The individuals performing the imitated behavior are called models . Research suggests that this imitative learning involves a specific type of neuron , called a mirror neuron ( Hickock , 2010 ; Rizzolatti , Fadiga , Fogassi , & Gallese , 2002 ; Rizzolatti , Fogassi , & Gallese , 2006 ) . <hl> The previous section of this chapter focused on the type of associative learning known as classical conditioning . 
<hl> Remember that in classical conditioning , something in the environment triggers a reflex automatically , and researchers train the organism to react to a different stimulus . <hl> Now we turn to the second type of associative learning , operant conditioning . <hl> In operant conditioning , organisms learn to associate a behavior and its consequence ( Table 6.1 ) . A pleasant consequence makes that behavior more likely to be repeated in the future . For example , Spirit , a dolphin at the National Aquarium in Baltimore , does a flip in the air when her trainer blows a whistle . The consequence is that she gets a fish . Learning to surf , as well as any complex learning process ( e . g . , learning about the discipline of psychology ) , involves a complex interaction of conscious and unconscious processes . Learning has traditionally been studied in terms of its simplest components — the associations our minds automatically make between events . Our minds have a natural tendency to connect events that occur closely together or in sequence . Associative learning occurs when an organism makes connections between stimuli or events that occur together in the environment . <hl> You will see that associative learning is central to all three basic learning processes discussed in this chapter ; classical conditioning tends to involve unconscious processes , operant conditioning tends to involve conscious processes , and observational learning adds social and cognitive layers to all the basic associative processes , both conscious and unconscious . <hl> These learning processes will be discussed in detail later in the chapter , but it is helpful to have a brief overview of each as you begin to explore how learning is understood from a psychological perspective .", "hl_sentences": "Previous sections of this chapter focused on classical and operant conditioning , which are forms of associative learning . The previous section of this chapter focused on the type of associative learning known as classical conditioning . Now we turn to the second type of associative learning , operant conditioning . You will see that associative learning is central to all three basic learning processes discussed in this chapter ; classical conditioning tends to involve unconscious processes , operant conditioning tends to involve conscious processes , and observational learning adds social and cognitive layers to all the basic associative processes , both conscious and unconscious .", "question": { "cloze_format": "Two forms of associative learning are ________ and ________.", "normal_format": "What are the two forms of associative learning?", "question_choices": [ "classical conditioning; operant conditioning", "classical conditioning; Pavlovian conditioning", "operant conditioning; observational learning", "operant conditioning; learning conditioning" ], "question_id": "fs-idp17687200", "question_text": "Two forms of associative learning are ________ and ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "classical conditioning" }, "bloom": null, "hl_context": "Once we have established the connection between the unconditioned stimulus and the conditioned stimulus , how do we break that connection and get the dog , cat , or child to stop responding ? In Tiger ’ s case , imagine what would happen if you stopped using the electric can opener for her food and began to use it only for human food . Now , Tiger would hear the can opener , but she would not get food . 
<hl> In classical conditioning terms , you would be giving the conditioned stimulus , but not the unconditioned stimulus . <hl> <hl> Pavlov explored this scenario in his experiments with dogs : sounding the tone without giving the dogs the meat powder . <hl> Soon the dogs stopped responding to the tone . Extinction is the decrease in the conditioned response when the unconditioned stimulus is no longer presented with the conditioned stimulus . When presented with the conditioned stimulus alone , the dog , cat , or other organism would show a weaker and weaker response , and finally no response . In classical conditioning terms , there is a gradual weakening and disappearance of the conditioned response . Now that you know how classical conditioning works and have seen several examples , let ’ s take a look at some of the general processes involved . <hl> In classical conditioning , the initial period of learning is known as acquisition , when an organism learns to connect a neutral stimulus and an unconditioned stimulus . <hl> <hl> During acquisition , the neutral stimulus begins to elicit the conditioned response , and eventually the neutral stimulus becomes a conditioned stimulus capable of eliciting the conditioned response by itself . <hl> Timing is important for conditioning to occur . Typically , there should only be a brief interval between presentation of the conditioned stimulus and the unconditioned stimulus . Depending on what is being conditioned , sometimes this interval is as little as five seconds ( Chance , 2009 ) . However , with other types of conditioning , the interval can be up to several hours .", "hl_sentences": "In classical conditioning terms , you would be giving the conditioned stimulus , but not the unconditioned stimulus . Pavlov explored this scenario in his experiments with dogs : sounding the tone without giving the dogs the meat powder . In classical conditioning , the initial period of learning is known as acquisition , when an organism learns to connect a neutral stimulus and an unconditioned stimulus . During acquisition , the neutral stimulus begins to elicit the conditioned response , and eventually the neutral stimulus becomes a conditioned stimulus capable of eliciting the conditioned response by itself .", "question": { "cloze_format": "In ________ the stimulus or experience occurs before the behavior and then gets paired with the behavior.", "normal_format": "Where does the stimulus or experience occurs before the behavior and then gets paired with the behavior?", "question_choices": [ "associative learning", "observational learning", "operant conditioning", "classical conditioning" ], "question_id": "fs-idm9560544", "question_text": "In ________ the stimulus or experience occurs before the behavior and then gets paired with the behavior." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "neutral stimulus" }, "bloom": null, "hl_context": "In classical conditioning , a neutral stimulus is presented immediately before an unconditioned stimulus . Pavlov would sound a tone ( like ringing a bell ) and then give the dogs the meat powder ( Figure 6.4 ) . <hl> The tone was the neutral stimulus ( NS ) , which is a stimulus that does not naturally elicit a response . <hl> Prior to conditioning , the dogs did not salivate when they just heard the tone because the tone had no association for the dogs . 
Quite simply this pairing means : Tone ( NS ) + Meat Powder ( UCS ) → Salivation ( UCR )", "hl_sentences": "The tone was the neutral stimulus ( NS ) , which is a stimulus that does not naturally elicit a response .", "question": { "cloze_format": "A stimulus that does not initially elicit a response in an organism is a(n) ________.", "normal_format": "What is a stimulus that does not initially elicit a response in an organism?", "question_choices": [ "unconditioned stimulus", "neutral stimulus", "conditioned stimulus", "unconditioned response" ], "question_id": "fs-idm99663600", "question_text": "A stimulus that does not initially elicit a response in an organism is a(n) ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "stimulus generalization" }, "bloom": null, "hl_context": "<hl> In 1920 , Watson was the chair of the psychology department at Johns Hopkins University . <hl> Through his position at the university he came to meet Little Albert ’ s mother , Arvilla Merritte , who worked at a campus hospital ( DeAngelis , 2010 ) . Watson offered her a dollar to allow her son to be the subject of his experiments in classical conditioning . Through these experiments , Little Albert was exposed to and conditioned to fear certain things . Initially he was presented with various neutral stimuli , including a rabbit , a dog , a monkey , masks , cotton wool , and a white rat . He was not afraid of any of these things . <hl> Then Watson , with the help of Rayner , conditioned Little Albert to associate these stimuli with an emotion — fear . <hl> For example , Watson handed Little Albert the white rat , and Little Albert enjoyed playing with it . Then Watson made a loud sound , by striking a hammer against a metal bar hanging behind Little Albert ’ s head , each time Little Albert touched the rat . Little Albert was frightened by the sound — demonstrating a reflexive fear of sudden loud noises — and began to cry . Watson repeatedly paired the loud sound with the white rat . <hl> Soon Little Albert became frightened by the white rat alone . <hl> In this case , what are the UCS , CS , UCR , and CR ? <hl> Days later , Little Albert demonstrated stimulus generalization — he became afraid of other furry things : a rabbit , a furry coat , and even a Santa Claus mask ( Figure 6.9 ) . <hl> Watson had succeeded in conditioning a fear response in Little Albert , thus demonstrating that emotions could become conditioned responses . It had been Watson ’ s intention to produce a phobia — a persistent , excessive fear of a specific object or situation — through conditioning alone , thus countering Freud ’ s view that phobias are caused by deep , hidden conflicts in the mind . However , there is no evidence that Little Albert experienced phobias in later years . Little Albert ’ s mother moved away , ending the experiment . While Watson ’ s research provided new insight into conditioning , it would be considered unethical by today ’ s standards . Link to Learning", "hl_sentences": "In 1920 , Watson was the chair of the psychology department at Johns Hopkins University . Then Watson , with the help of Rayner , conditioned Little Albert to associate these stimuli with an emotion — fear . Soon Little Albert became frightened by the white rat alone . 
Days later , Little Albert demonstrated stimulus generalization — he became afraid of other furry things : a rabbit , a furry coat , and even a Santa Claus mask ( Figure 6.9 ) .", "question": { "cloze_format": "In Watson and Rayner’s experiments, Little Albert was conditioned to fear a white rat, and then he began to be afraid of other furry white objects. This demonstrates ________.", "normal_format": "In Watson and Rayner’s experiments, Little Albert was conditioned to fear a white rat, and then he began to be afraid of other furry white objects. This demonstrates what?", "question_choices": [ "higher order conditioning", "acquisition", "stimulus discrimination", "stimulus generalization" ], "question_id": "fs-idm16955552", "question_text": "In Watson and Rayner’s experiments, Little Albert was conditioned to fear a white rat, and then he began to be afraid of other furry white objects. This demonstrates ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "the conditioned stimulus is presented repeatedly without being paired with an unconditioned stimulus" }, "bloom": null, "hl_context": "Once we have established the connection between the unconditioned stimulus and the conditioned stimulus , how do we break that connection and get the dog , cat , or child to stop responding ? In Tiger ’ s case , imagine what would happen if you stopped using the electric can opener for her food and began to use it only for human food . Now , Tiger would hear the can opener , but she would not get food . <hl> In classical conditioning terms , you would be giving the conditioned stimulus , but not the unconditioned stimulus . <hl> Pavlov explored this scenario in his experiments with dogs : sounding the tone without giving the dogs the meat powder . Soon the dogs stopped responding to the tone . <hl> Extinction is the decrease in the conditioned response when the unconditioned stimulus is no longer presented with the conditioned stimulus . <hl> When presented with the conditioned stimulus alone , the dog , cat , or other organism would show a weaker and weaker response , and finally no response . <hl> In classical conditioning terms , there is a gradual weakening and disappearance of the conditioned response . <hl>", "hl_sentences": "In classical conditioning terms , you would be giving the conditioned stimulus , but not the unconditioned stimulus . Extinction is the decrease in the conditioned response when the unconditioned stimulus is no longer presented with the conditioned stimulus . In classical conditioning terms , there is a gradual weakening and disappearance of the conditioned response .", "question": { "cloze_format": "Extinction occurs when ________.", "normal_format": "When does extinction occur?", "question_choices": [ "the conditioned stimulus is presented repeatedly without being paired with an unconditioned stimulus", "the unconditioned stimulus is presented repeatedly without being paired with a conditioned stimulus", "the neutral stimulus is presented repeatedly without being paired with an unconditioned stimulus", "the neutral stimulus is presented repeatedly without being paired with a conditioned stimulus" ], "question_id": "fs-idm27236064", "question_text": "Extinction occurs when ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "conditioned responses" }, "bloom": null, "hl_context": "<hl> Watson ’ s ideas were influenced by Pavlov ’ s work . 
<hl> <hl> According to Watson , human behavior , just like animal behavior , is primarily the result of conditioned responses . <hl> <hl> Whereas Pavlov ’ s work with dogs involved the conditioning of reflexes , Watson believed the same principles could be extended to the conditioning of human emotions ( Watson , 1919 ) . <hl> Thus began Watson ’ s work with his graduate student Rosalie Rayner and a baby called Little Albert . Through their experiments with Little Albert , Watson and Rayner ( 1920 ) demonstrated how fears can be conditioned .", "hl_sentences": "Watson ’ s ideas were influenced by Pavlov ’ s work . According to Watson , human behavior , just like animal behavior , is primarily the result of conditioned responses . Whereas Pavlov ’ s work with dogs involved the conditioning of reflexes , Watson believed the same principles could be extended to the conditioning of human emotions ( Watson , 1919 ) .", "question": { "cloze_format": "In Pavlov’s work with dogs, the psychic secretions were ________.", "normal_format": "In Pavlov’s work with dogs, what were the psychic secretions?", "question_choices": [ "unconditioned responses", "conditioned responses", "unconditioned stimuli", "conditioned stimuli" ], "question_id": "fs-idp89969056", "question_text": "In Pavlov’s work with dogs, the psychic secretions were ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "negative punishment" }, "bloom": null, "hl_context": "<hl> Many people confuse negative reinforcement with punishment in operant conditioning , but they are two very different mechanisms . <hl> Remember that reinforcement , even when it is negative , always increases a behavior . <hl> In contrast , punishment always decreases a behavior . <hl> In positive punishment , you add an undesirable stimulus to decrease a behavior . An example of positive punishment is scolding a student to get the student to stop texting in class . In this case , a stimulus ( the reprimand ) is added in order to decrease the behavior ( texting in class ) . <hl> In negative punishment , you remove a pleasant stimulus to decrease behavior . <hl> For example , when a child misbehaves , a parent can take away a favorite toy . In this case , a stimulus ( the toy ) is removed in order to decrease the behavior . Punishment , especially when it is immediate , is one way to decrease undesirable behavior . For example , imagine your four-year-old son , Brandon , hit his younger brother . You have Brandon write 100 times “ I will not hit my brother \" ( positive punishment ) . Chances are he won ’ t repeat this behavior . While strategies like this are common today , in the past children were often subject to physical punishment , such as spanking . It ’ s important to be aware of some of the drawbacks in using physical punishment on children . First , punishment may teach fear . Brandon may become fearful of the street , but he also may become fearful of the person who delivered the punishment — you , his parent . Similarly , children who are punished by teachers may come to fear the teacher and try to avoid school ( Gershoff et al . , 2010 ) . Consequently , most schools in the United States have banned corporal punishment . Second , punishment may cause children to become more aggressive and prone to antisocial behavior and delinquency ( Gershoff , 2002 ) . 
They see their parents resort to spanking when they become angry and frustrated , so , in turn , they may act out this same behavior when they become angry and frustrated . For example , because you spank Brenda when you are angry with her for her misbehavior , she might start hitting her friends when they won ’ t share their toys . While positive punishment can be effective in some cases , Skinner suggested that the use of punishment should be weighed against the possible negative effects . Today ’ s psychologists and parenting experts favor reinforcement over punishment — they recommend that you catch your child doing something good and reward her for it . In discussing operant conditioning , we use several everyday words — positive , negative , reinforcement , and punishment — in a specialized manner . In operant conditioning , positive and negative do not mean good and bad . Instead , positive means you are adding something , and negative means you are taking something away . Reinforcement means you are increasing a behavior , and punishment means you are decreasing a behavior . Reinforcement can be positive or negative , and punishment can also be positive or negative . All reinforcers ( positive or negative ) increase the likelihood of a behavioral response . <hl> All punishers ( positive or negative ) decrease the likelihood of a behavioral response . <hl> Now let ’ s combine these four terms : positive reinforcement , negative reinforcement , positive punishment , and negative punishment ( Table 6.2 ) .", "hl_sentences": "Many people confuse negative reinforcement with punishment in operant conditioning , but they are two very different mechanisms . In contrast , punishment always decreases a behavior . In negative punishment , you remove a pleasant stimulus to decrease behavior . All punishers ( positive or negative ) decrease the likelihood of a behavioral response .", "question": { "cloze_format": "________ is when you take away a pleasant stimulus to stop a behavior.", "normal_format": "What do you call it when you take away a pleasant stimulus to stop a behavior?", "question_choices": [ "positive reinforcement", "negative reinforcement", "positive punishment", "negative punishment" ], "question_id": "fs-idm72936720", "question_text": "________ is when you take away a pleasant stimulus to stop a behavior." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "money" }, "bloom": null, "hl_context": "<hl> A secondary reinforcer has no inherent value and only has reinforcing qualities when linked with a primary reinforcer . <hl> Praise , linked to affection , is one example of a secondary reinforcer , as when you called out “ Great shot ! ” every time Joaquin made a goal . <hl> Another example , money , is only worth something when you can use it to buy other things — either things that satisfy basic needs ( food , water , shelter — all primary reinforcers ) or other secondary reinforcers . <hl> If you were on a remote island in the middle of the Pacific Ocean and you had stacks of money , the money would not be useful if you could not spend it . What about the stickers on the behavior chart ? They also are secondary reinforcers . What would be a good reinforce for humans ? For your daughter Sydney , it was the promise of a toy if she cleaned her room . How about Joaquin , the soccer player ? If you gave Joaquin a piece of candy every time he made a goal , you would be using a primary reinforcer . 
Primary reinforcers are reinforcers that have innate reinforcing qualities . These kinds of reinforcers are not learned . <hl> Water , food , sleep , shelter , sex , and touch , among others , are primary reinforcers . <hl> Pleasure is also a primary reinforcer . Organisms do not lose their drive for these things . For most people , jumping in a cool lake on a very hot day would be reinforcing and the cool lake would be innately reinforcing — the water would cool the person off ( a physical need ) , as well as provide pleasure .", "hl_sentences": "A secondary reinforcer has no inherent value and only has reinforcing qualities when linked with a primary reinforcer . Another example , money , is only worth something when you can use it to buy other things — either things that satisfy basic needs ( food , water , shelter — all primary reinforcers ) or other secondary reinforcers . Water , food , sleep , shelter , sex , and touch , among others , are primary reinforcers .", "question": { "cloze_format": "___ is not an example of a primary reinforcer.", "normal_format": "Which of the following is not an example of a primary reinforcer?", "question_choices": [ "food", "money", "water", "sex" ], "question_id": "fs-idm43885376", "question_text": "Which of the following is not an example of a primary reinforcer?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "shaping" }, "bloom": null, "hl_context": "<hl> Shaping In his operant conditioning experiments , Skinner often used an approach called shaping . <hl> <hl> Instead of rewarding only the target behavior , in shaping , we reward successive approximations of a target behavior . <hl> Why is shaping needed ? Remember that in order for reinforcement to work , the organism must first display the behavior . Shaping is needed because it is extremely unlikely that an organism will display anything but the simplest of behaviors spontaneously . In shaping , behaviors are broken down into many small , achievable steps . The specific steps used in the process are the following :", "hl_sentences": "Shaping In his operant conditioning experiments , Skinner often used an approach called shaping . Instead of rewarding only the target behavior , in shaping , we reward successive approximations of a target behavior .", "question": { "cloze_format": "Rewarding successive approximations toward a target behavior is ________.", "normal_format": "What are rewarding successive approximations toward a target behavior called?", "question_choices": [ "shaping", "extinction", "positive reinforcement", "negative reinforcement" ], "question_id": "fs-idp79982784", "question_text": "Rewarding successive approximations toward a target behavior is ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "variable ratio" }, "bloom": null, "hl_context": "<hl> Skinner uses gambling as an example of the power and effectiveness of conditioning behavior based on a variable ratio reinforcement schedule . <hl> In fact , Skinner was so confident in his knowledge of gambling addiction that he even claimed he could turn a pigeon into a pathological gambler ( “ Skinner ’ s Utopia , ” 1971 ) . Beyond the power of variable ratio reinforcement , gambling seems to work on the brain in the same way as some addictive drugs . The Illinois Institute for Addiction Recovery ( n . d . ) reports evidence suggesting that pathological gambling is an addiction similar to a chemical addiction ( Figure 6.14 ) . 
Specifically , gambling may activate the reward centers of the brain , much like cocaine does . Research has shown that some pathological gamblers have lower levels of the neurotransmitter ( brain chemical ) known as norepinephrine than do normal gamblers ( Roy , et al . , 1988 ) . According to a study conducted by Alec Roy and colleagues , norepinephrine is secreted when a person feels stress , arousal , or thrill ; pathological gamblers use gambling to increase their levels of this neurotransmitter . Another researcher , neuroscientist Hans Breiter , has done extensive research on gambling and its effects on the brain . Breiter ( as cited in Franzen , 2001 ) reports that “ Monetary reward in a gambling-like experiment produces brain activation very similar to that observed in a cocaine addict receiving an infusion of cocaine ” ( para . 1 ) . Deficiencies in serotonin ( another neurotransmitter ) might also contribute to compulsive behavior , including a gambling addiction . <hl> In a variable ratio reinforcement schedule , the number of responses needed for a reward varies . <hl> <hl> This is the most powerful partial reinforcement schedule . <hl> <hl> An example of the variable ratio reinforcement schedule is gambling . <hl> Imagine that Sarah — generally a smart , thrifty woman — visits Las Vegas for the first time . She is not a gambler , but out of curiosity she puts a quarter into the slot machine , and then another , and another . Nothing happens . Two dollars in quarters later , her curiosity is fading , and she is just about to quit . But then , the machine lights up , bells go off , and Sarah gets 50 quarters back . That ’ s more like it ! Sarah gets back to inserting quarters with renewed interest , and a few minutes later she has used up all her gains and is $ 10 in the hole . Now might be a sensible time to quit . And yet , she keeps putting money into the slot machine because she never knows when the next reinforcement is coming . She keeps thinking that with the next quarter she could win $ 50 , or $ 100 , or even more . Because the reinforcement schedule in most types of gambling has a variable ratio schedule , people keep trying and hoping that the next time they will win big . This is one of the reasons that gambling is so addictive — and so resistant to extinction .", "hl_sentences": "Skinner uses gambling as an example of the power and effectiveness of conditioning behavior based on a variable ratio reinforcement schedule . In a variable ratio reinforcement schedule , the number of responses needed for a reward varies . This is the most powerful partial reinforcement schedule . An example of the variable ratio reinforcement schedule is gambling .", "question": { "cloze_format": "Slot machines reward gamblers with money according to ___.", "normal_format": "Slot machines reward gamblers with money according to which reinforcement schedule?", "question_choices": [ "fixed ratio", "variable ratio", "fixed interval", "variable interval" ], "question_id": "fs-idm71327520", "question_text": "Slot machines reward gamblers with money according to which reinforcement schedule?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "model" }, "bloom": null, "hl_context": "Of course , we don ’ t learn a behavior simply by observing a model . Bandura described specific steps in the process of modeling that must be followed if learning is to be successful : attention , retention , reproduction , and motivation . 
First , you must be focused on what the model is doing — you have to pay attention . Next , you must be able to retain , or remember , what you observed ; this is retention . Then , you must be able to perform the behavior that you observed and committed to memory ; this is reproduction . Finally , you must have motivation . <hl> You need to want to copy the behavior , and whether or not you are motivated depends on what happened to the model . <hl> <hl> If you saw that the model was reinforced for her behavior , you will be more motivated to copy her . <hl> This is known as vicarious reinforcement . <hl> On the other hand , if you observed the model being punished , you would be less motivated to copy her . <hl> This is called vicarious punishment . For example , imagine that four-year-old Allison watched her older sister Kaitlyn playing in their mother ’ s makeup , and then saw Kaitlyn get a time out when their mother came in . After their mother left the room , Allison was tempted to play in the make-up , but she did not want to get a time-out from her mother . What do you think she did ? Once you actually demonstrate the new behavior , the reinforcement you receive plays a part in whether or not you will repeat the behavior . Previous sections of this chapter focused on classical and operant conditioning , which are forms of associative learning . In observational learning , we learn by watching others and then imitating , or modeling , what they do or say . <hl> The individuals performing the imitated behavior are called models . <hl> Research suggests that this imitative learning involves a specific type of neuron , called a mirror neuron ( Hickock , 2010 ; Rizzolatti , Fadiga , Fogassi , & Gallese , 2002 ; Rizzolatti , Fogassi , & Gallese , 2006 ) .", "hl_sentences": "You need to want to copy the behavior , and whether or not you are motivated depends on what happened to the model . If you saw that the model was reinforced for her behavior , you will be more motivated to copy her . On the other hand , if you observed the model being punished , you would be less motivated to copy her . The individuals performing the imitated behavior are called models .", "question": { "cloze_format": "The person who performs a behavior that serves as an example is called a ________.", "normal_format": "What is called a person who performs a behavior that serves as an example called? ", "question_choices": [ "teacher", "model", "instructor", "coach" ], "question_id": "fs-idm74922384", "question_text": "The person who performs a behavior that serves as an example is called a ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "kicked and threw the doll" }, "bloom": null, "hl_context": "Bandura researched modeling behavior , particularly children ’ s modeling of adults ’ aggressive and violent behaviors ( Bandura , Ross , & Ross , 1961 ) . He conducted an experiment with a five-foot inflatable doll that he called a Bobo doll . <hl> In the experiment , children ’ s aggressive behavior was influenced by whether the teacher was punished for her behavior . <hl> <hl> In one scenario , a teacher acted aggressively with the doll , hitting , throwing , and even punching the doll , while a child watched . <hl> <hl> There were two types of responses by the children to the teacher ’ s behavior . <hl> <hl> When the teacher was punished for her bad behavior , the children decreased their tendency to act as she had . 
<hl> <hl> When the teacher was praised or ignored ( and not punished for her behavior ) , the children imitated what she did , and even what she said . <hl> They punched , kicked , and yelled at the doll .", "hl_sentences": "In the experiment , children ’ s aggressive behavior was influenced by whether the teacher was punished for her behavior . In one scenario , a teacher acted aggressively with the doll , hitting , throwing , and even punching the doll , while a child watched . There were two types of responses by the children to the teacher ’ s behavior . When the teacher was punished for her bad behavior , the children decreased their tendency to act as she had . When the teacher was praised or ignored ( and not punished for her behavior ) , the children imitated what she did , and even what she said .", "question": { "cloze_format": "In Bandura’s Bobo doll study, when the children who watched the aggressive model were placed in a room with the doll and other toys, they ________.", "normal_format": "In Bandura’s Bobo doll study, when the children who watched the aggressive model were placed in a room with the doll and other toys, what did they do?", "question_choices": [ "ignored the doll", "played nicely with the doll", "played with tinker toys", "kicked and threw the doll" ], "question_id": "fs-idm117080128", "question_text": "In Bandura’s Bobo doll study, when the children who watched the aggressive model were placed in a room with the doll and other toys, they ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "attention, retention, reproduction, motivation" }, "bloom": null, "hl_context": "Of course , we don ’ t learn a behavior simply by observing a model . <hl> Bandura described specific steps in the process of modeling that must be followed if learning is to be successful : attention , retention , reproduction , and motivation . <hl> First , you must be focused on what the model is doing — you have to pay attention . Next , you must be able to retain , or remember , what you observed ; this is retention . Then , you must be able to perform the behavior that you observed and committed to memory ; this is reproduction . Finally , you must have motivation . You need to want to copy the behavior , and whether or not you are motivated depends on what happened to the model . If you saw that the model was reinforced for her behavior , you will be more motivated to copy her . This is known as vicarious reinforcement . On the other hand , if you observed the model being punished , you would be less motivated to copy her . This is called vicarious punishment . For example , imagine that four-year-old Allison watched her older sister Kaitlyn playing in their mother ’ s makeup , and then saw Kaitlyn get a time out when their mother came in . After their mother left the room , Allison was tempted to play in the make-up , but she did not want to get a time-out from her mother . What do you think she did ? 
Once you actually demonstrate the new behavior , the reinforcement you receive plays a part in whether or not you will repeat the behavior .", "hl_sentences": "Bandura described specific steps in the process of modeling that must be followed if learning is to be successful : attention , retention , reproduction , and motivation .", "question": { "cloze_format": "The correct order of steps in the modeling process is ____.", "normal_format": "Which is the correct order of steps in the modeling process?", "question_choices": [ "attention, retention, reproduction, motivation", "motivation, attention, reproduction, retention", "attention, motivation, retention, reproduction", "motivation, attention, retention, reproduction" ], "question_id": "fs-idm38860288", "question_text": "Which is the correct order of steps in the modeling process?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "Albert Bandura" }, "bloom": null, "hl_context": "<hl> Like Tolman , whose experiments with rats suggested a cognitive component to learning , psychologist Albert Bandura ’ s ideas about learning were different from those of strict behaviorists . <hl> Bandura and other researchers proposed a brand of behaviorism called social learning theory , which took cognitive processes into account . According to Bandura , pure behaviorism could not explain why learning can take place in the absence of external reinforcement . <hl> He felt that internal mental states must also have a role in learning and that observational learning involves much more than imitation . <hl> In imitation , a person simply copies what the model does . Observational learning is much more complex . According to Lefrançois ( 2012 ) there are several ways that observational learning can occur :", "hl_sentences": "Like Tolman , whose experiments with rats suggested a cognitive component to learning , psychologist Albert Bandura ’ s ideas about learning were different from those of strict behaviorists . He felt that internal mental states must also have a role in learning and that observational learning involves much more than imitation .", "question": { "cloze_format": "___ proposed observational learning.", "normal_format": "Who proposed observational learning?", "question_choices": [ "Ivan Pavlov", "John Watson", "Albert Bandura", "B. F. Skinner" ], "question_id": "fs-idm109101680", "question_text": "Who proposed observational learning?" }, "references_are_paraphrase": 0 } ]
6
6.1 What Is Learning?

Learning Objectives

By the end of this section, you will be able to:
Explain how learned behaviors are different from instincts and reflexes
Define learning
Recognize and define three basic forms of learning—classical conditioning, operant conditioning, and observational learning

Birds build nests and migrate as winter approaches. Infants suckle at their mother’s breast. Dogs shake water off wet fur. Salmon swim upstream to spawn, and spiders spin intricate webs. What do these seemingly unrelated behaviors have in common? They are all unlearned behaviors. Both instincts and reflexes are innate behaviors that organisms are born with. Reflexes are motor or neural reactions to a specific stimulus in the environment. They tend to be simpler than instincts, involve the activity of specific body parts and systems (e.g., the knee-jerk reflex and the contraction of the pupil in bright light), and involve more primitive centers of the central nervous system (e.g., the spinal cord and the medulla). In contrast, instincts are innate behaviors that are triggered by a broader range of events, such as aging and the change of seasons. They are more complex patterns of behavior, involve movement of the organism as a whole (e.g., sexual activity and migration), and involve higher brain centers.

Both reflexes and instincts help an organism adapt to its environment and do not have to be learned. For example, every healthy human baby has a sucking reflex, present at birth. Babies are born knowing how to suck on a nipple, whether artificial (from a bottle) or human. Nobody teaches the baby to suck, just as no one teaches a sea turtle hatchling to move toward the ocean. Learning, like reflexes and instincts, allows an organism to adapt to its environment. But unlike instincts and reflexes, learned behaviors involve change and experience: learning is a relatively permanent change in behavior or knowledge that results from experience. In contrast to the innate behaviors discussed above, learning involves acquiring knowledge and skills through experience. Looking back at our surfing scenario, Julian will have to spend much more time training with his surfboard before he learns how to ride the waves like his father.

Learning to surf, as well as any complex learning process (e.g., learning about the discipline of psychology), involves a complex interaction of conscious and unconscious processes. Learning has traditionally been studied in terms of its simplest components—the associations our minds automatically make between events. Our minds have a natural tendency to connect events that occur closely together or in sequence. Associative learning occurs when an organism makes connections between stimuli or events that occur together in the environment. You will see that associative learning is central to all three basic learning processes discussed in this chapter: classical conditioning tends to involve unconscious processes, operant conditioning tends to involve conscious processes, and observational learning adds social and cognitive layers to all the basic associative processes, both conscious and unconscious. These learning processes will be discussed in detail later in the chapter, but it is helpful to have a brief overview of each as you begin to explore how learning is understood from a psychological perspective.

In classical conditioning, also known as Pavlovian conditioning, organisms learn to associate events—or stimuli—that repeatedly happen together.
We experience this process throughout our daily lives. For example, you might see a flash of lightning in the sky during a storm and then hear a loud boom of thunder. The sound of the thunder naturally makes you jump (loud noises have that effect by reflex). Because lightning reliably predicts the impending boom of thunder, you may associate the two and jump when you see lightning. Psychological researchers study this associative process by focusing on what can be seen and measured—behaviors. Researchers ask: if one stimulus triggers a reflex, can we train a different stimulus to trigger that same reflex?

In operant conditioning, organisms learn, again, to associate events—a behavior and its consequence (reinforcement or punishment). A pleasant consequence encourages more of that behavior in the future, whereas a punishment deters the behavior. Imagine you are teaching your dog, Hodor, to sit. You tell Hodor to sit, and give him a treat when he does. After repeated experiences, Hodor begins to associate the act of sitting with receiving a treat. He learns that the consequence of sitting is that he gets a doggie biscuit (Figure 6.2). Conversely, if the dog is punished when exhibiting a behavior, it becomes conditioned to avoid that behavior (e.g., receiving a small shock when crossing the boundary of an invisible electric fence).

Observational learning extends the effective range of both classical and operant conditioning. In contrast to classical and operant conditioning, in which learning occurs only through direct experience, observational learning is the process of watching others and then imitating what they do. A lot of learning among humans and other animals comes from observational learning. To get an idea of the extra effective range that observational learning brings, consider Ben and his son Julian from the introduction. How might observation help Julian learn to surf, as opposed to learning by trial and error alone? By watching his father, he can imitate the moves that bring success and avoid the moves that lead to failure. Can you think of something you have learned how to do after watching someone else?

All of the approaches covered in this chapter are part of a particular tradition in psychology, called behaviorism, which we discuss in the next section. However, these approaches do not represent the entire study of learning. Separate traditions of learning have taken shape within different fields of psychology, such as memory and cognition, so you will find that other chapters will round out your understanding of the topic. Over time these traditions tend to converge. For example, in this chapter you will see how cognition has come to play a larger role in behaviorism, whose more extreme adherents once insisted that behaviors are triggered by the environment with no intervening thought.

6.2 Classical Conditioning

Learning Objectives

By the end of this section, you will be able to:
Explain how classical conditioning occurs
Summarize the processes of acquisition, extinction, spontaneous recovery, generalization, and discrimination

Does the name Ivan Pavlov ring a bell? Even if you are new to the study of psychology, chances are that you have heard of Pavlov and his famous dogs. Pavlov (1849–1936), a Russian scientist, performed extensive research on dogs and is best known for his experiments in classical conditioning (Figure 6.3).
As we discussed briefly in the previous section, classical conditioning is a process by which we learn to associate stimuli and, consequently, to anticipate events. Pavlov came to his conclusions about how learning occurs completely by accident. Pavlov was a physiologist, not a psychologist. Physiologists study the life processes of organisms, from the molecular level to the level of cells, organ systems, and entire organisms. Pavlov’s area of interest was the digestive system (Hunt, 2007). In his studies with dogs, Pavlov surgically implanted tubes inside dogs’ cheeks to collect saliva. He then measured the amount of saliva produced in response to various foods. Over time, Pavlov (1927) observed that the dogs began to salivate not only at the taste of food, but also at the sight of food, at the sight of an empty food bowl, and even at the sound of the laboratory assistants’ footsteps. Salivating to food in the mouth is reflexive, so no learning is involved. However, dogs don’t naturally salivate at the sight of an empty bowl or the sound of footsteps. These unusual responses intrigued Pavlov, and he wondered what accounted for what he called the dogs’ “psychic secretions” (Pavlov, 1927).

To explore this phenomenon in an objective manner, Pavlov designed a series of carefully controlled experiments to see which stimuli would cause the dogs to salivate. He was able to train the dogs to salivate in response to stimuli that clearly had nothing to do with food, such as the sound of a bell, a light, and a touch on the leg. Through his experiments, Pavlov realized that an organism has two types of responses to its environment: (1) unconditioned (unlearned) responses, or reflexes, and (2) conditioned (learned) responses.

In Pavlov’s experiments, the dogs salivated each time meat powder was presented to them. The meat powder in this situation was an unconditioned stimulus (UCS): a stimulus that elicits a reflexive response in an organism. The dogs’ salivation was an unconditioned response (UCR): a natural (unlearned) reaction to a given stimulus. Before conditioning, think of the dogs’ stimulus and response like this:

Meat powder (UCS) → Salivation (UCR)

In classical conditioning, a neutral stimulus is presented immediately before an unconditioned stimulus. Pavlov would sound a tone (like ringing a bell) and then give the dogs the meat powder (Figure 6.4). The tone was the neutral stimulus (NS), which is a stimulus that does not naturally elicit a response. Prior to conditioning, the dogs did not salivate when they just heard the tone because the tone had no association for the dogs. Quite simply, this pairing means:

Tone (NS) + Meat Powder (UCS) → Salivation (UCR)

When Pavlov paired the tone with the meat powder over and over again, the previously neutral stimulus (the tone) also began to elicit salivation from the dogs. Thus, the neutral stimulus became the conditioned stimulus (CS), which is a stimulus that elicits a response after repeatedly being paired with an unconditioned stimulus. Eventually, the dogs began to salivate to the tone alone, just as they previously had salivated at the sound of the assistants’ footsteps. The behavior caused by the conditioned stimulus is called the conditioned response (CR). In the case of Pavlov’s dogs, they had learned to associate the tone (CS) with being fed, and they began to salivate (CR) in anticipation of food:

Tone (CS) → Salivation (CR)
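The before-and-after contingencies above can be spelled out in a few lines of code. The sketch below is an illustration only and is not from the textbook: real conditioning is graded rather than all-or-nothing, and the pairing threshold used here is an arbitrary assumption.

```python
# A minimal sketch of the pairing logic described above. The all-or-nothing
# response and the PAIRINGS_NEEDED threshold are illustrative assumptions.

PAIRINGS_NEEDED = 10  # hypothetical number of tone + food pairings

class Dog:
    def __init__(self):
        self.tone_food_pairings = 0

    def present(self, tone=False, food=False):
        """Return True if the dog salivates on this trial."""
        if tone and food:
            self.tone_food_pairings += 1   # acquisition: NS paired with UCS
        if food:
            return True                    # UCS -> UCR: reflexive, unlearned
        # CS -> CR: the tone alone elicits salivation only after enough pairings
        return tone and self.tone_food_pairings >= PAIRINGS_NEEDED

dog = Dog()
print(dog.present(tone=True))              # False: the tone is still neutral (NS)
for _ in range(PAIRINGS_NEEDED):
    dog.present(tone=True, food=True)      # conditioning trials
print(dog.present(tone=True))              # True: the tone has become a CS eliciting a CR
```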
Link to Learning

Now that you have learned about the process of classical conditioning, do you think you can condition Pavlov’s dog? Visit this website to play the game.

Link to Learning

View this video to learn more about Pavlov and his dogs.

Real World Application of Classical Conditioning

How does classical conditioning work in the real world? Let’s say you have a cat named Tiger, who is quite spoiled. You keep her food in a separate cabinet, and you also have a special electric can opener that you use only to open cans of cat food. For every meal, Tiger hears the distinctive sound of the electric can opener (“zzhzhz”) and then gets her food. Tiger quickly learns that when she hears “zzhzhz” she is about to get fed. What do you think Tiger does when she hears the electric can opener? She will likely get excited and run to where you are preparing her food. This is an example of classical conditioning. In this case, what are the UCS, CS, UCR, and CR?

What if the cabinet holding Tiger’s food becomes squeaky? In that case, Tiger hears “squeak” (the cabinet), “zzhzhz” (the electric can opener), and then she gets her food. Tiger will learn to get excited when she hears the “squeak” of the cabinet. Pairing a new neutral stimulus (“squeak”) with the conditioned stimulus (“zzhzhz”) is called higher-order conditioning, or second-order conditioning. This means you are using the conditioned stimulus of the can opener to condition another stimulus: the squeaky cabinet (Figure 6.5). It is hard to achieve anything above second-order conditioning. For example, if you ring a bell, open the cabinet (“squeak”), use the can opener (“zzhzhz”), and then feed Tiger, Tiger will likely never get excited when hearing the bell alone.

Everyday Connection

Classical Conditioning at Stingray City

Kate and her husband Scott recently vacationed in the Cayman Islands, and booked a boat tour to Stingray City, where they could feed and swim with the southern stingrays. The boat captain explained how the normally solitary stingrays have become accustomed to interacting with humans. About 40 years ago, fishermen began to clean fish and conch (unconditioned stimulus) at a particular sandbar near a barrier reef, and large numbers of stingrays would swim in to eat (unconditioned response) what the fishermen threw into the water; this continued for years. By the late 1980s, word of the large group of stingrays spread among scuba divers, who then started feeding them by hand. Over time, the southern stingrays in the area were classically conditioned much like Pavlov’s dogs. When they hear the sound of a boat engine (neutral stimulus that becomes a conditioned stimulus), they know that they will get to eat (conditioned response).

As soon as Kate and Scott reached Stingray City, over two dozen stingrays surrounded their tour boat. The couple slipped into the water with bags of squid, the stingrays’ favorite treat. The swarm of stingrays bumped and rubbed up against their legs like hungry cats (Figure 6.6). Kate and Scott were able to feed, pet, and even kiss (for luck) these amazing creatures. Then all the squid was gone, and so were the stingrays.

Classical conditioning also applies to humans, even babies. For example, Sara buys formula in blue canisters for her six-month-old daughter, Angelina. Whenever Sara takes out a formula container, Angelina gets excited, tries to reach toward the food, and most likely salivates.
Why does Angelina get excited when she sees the formula canister? What are the UCS, CS, UCR, and CR here?

So far, all of the examples have involved food, but classical conditioning extends beyond the basic need to be fed. Consider our earlier example of a dog whose owners install an invisible electric dog fence. A small electrical shock (unconditioned stimulus) elicits discomfort (unconditioned response). When the unconditioned stimulus (shock) is paired with a neutral stimulus (the edge of a yard), the dog associates the discomfort (unconditioned response) with the edge of the yard (conditioned stimulus) and stays within the set boundaries. In this example, the edge of the yard elicits fear and anxiety in the dog. Fear and anxiety are the conditioned response.

Link to Learning

For a humorous look at conditioning, watch this video clip from the television show The Office, where Jim conditions Dwight to expect a breath mint every time Jim’s computer makes a specific sound.

General Processes in Classical Conditioning

Now that you know how classical conditioning works and have seen several examples, let’s take a look at some of the general processes involved. In classical conditioning, the initial period of learning is known as acquisition, when an organism learns to connect a neutral stimulus and an unconditioned stimulus. During acquisition, the neutral stimulus begins to elicit the conditioned response, and eventually the neutral stimulus becomes a conditioned stimulus capable of eliciting the conditioned response by itself. Timing is important for conditioning to occur. Typically, there should only be a brief interval between presentation of the conditioned stimulus and the unconditioned stimulus. Depending on what is being conditioned, sometimes this interval is as little as five seconds (Chance, 2009). However, with other types of conditioning, the interval can be up to several hours.

Taste aversion is a type of conditioning in which an interval of several hours may pass between the conditioned stimulus (something ingested) and the unconditioned stimulus (nausea or illness). Here’s how it works. Between classes, you and a friend grab a quick lunch from a food cart on campus. You share a dish of chicken curry and head off to your next class. A few hours later, you feel nauseous and become ill. Although your friend is fine and you determine that you have intestinal flu (the food is not the culprit), you’ve developed a taste aversion; the next time you are at a restaurant and someone orders curry, you immediately feel ill. While the chicken dish is not what made you sick, you are experiencing taste aversion: you’ve been conditioned to be averse to a food after a single, negative experience.

How does this occur—conditioning based on a single instance and involving an extended time lapse between the event and the negative stimulus? Research into taste aversion suggests that this response may be an evolutionary adaptation designed to help organisms quickly learn to avoid harmful foods (Garcia & Rusiniak, 1980; Garcia & Koelling, 1966). Not only may this contribute to species survival via natural selection, but it may also help us develop strategies for challenges such as helping cancer patients through the nausea induced by certain treatments (Holmes, 1993; Jacobsen et al., 1993; Hutton, Baracos, & Wismer, 2007; Skolin et al., 2006).
Once we have established the connection between the unconditioned stimulus and the conditioned stimulus, how do we break that connection and get the dog, cat, or child to stop responding? In Tiger’s case, imagine what would happen if you stopped using the electric can opener for her food and began to use it only for human food. Now, Tiger would hear the can opener, but she would not get food. In classical conditioning terms, you would be giving the conditioned stimulus, but not the unconditioned stimulus. Pavlov explored this scenario in his experiments with dogs: sounding the tone without giving the dogs the meat powder. Soon the dogs stopped responding to the tone. Extinction is the decrease in the conditioned response when the unconditioned stimulus is no longer presented with the conditioned stimulus. When presented with the conditioned stimulus alone, the dog, cat, or other organism would show a weaker and weaker response, and finally no response. In classical conditioning terms, there is a gradual weakening and disappearance of the conditioned response.

What happens when learning is not used for a while—when what was learned lies dormant? As we just discussed, Pavlov found that when he repeatedly presented the bell (conditioned stimulus) without the meat powder (unconditioned stimulus), extinction occurred; the dogs stopped salivating to the bell. However, after a couple of hours of resting from this extinction training, the dogs again began to salivate when Pavlov rang the bell. What do you think would happen with Tiger’s behavior if your electric can opener broke, and you did not use it for several months? When you finally got it fixed and started using it to open Tiger’s food again, Tiger would remember the association between the can opener and her food—she would get excited and run to the kitchen when she heard the sound. The behavior of Pavlov’s dogs and Tiger illustrates a concept Pavlov called spontaneous recovery: the return of a previously extinguished conditioned response following a rest period (Figure 6.7).

Of course, these processes also apply in humans. For example, let’s say that every day when you walk to campus, an ice cream truck passes your route. Day after day, you hear the truck’s music (neutral stimulus), so you finally stop and purchase a chocolate ice cream bar. You take a bite (unconditioned stimulus) and then your mouth waters (unconditioned response). This initial period of learning is known as acquisition, when you begin to connect the neutral stimulus (the sound of the truck) and the unconditioned stimulus (the taste of the chocolate ice cream in your mouth). During acquisition, the conditioned response gets stronger and stronger through repeated pairings of the conditioned stimulus and unconditioned stimulus. Several days (and ice cream bars) later, you notice that your mouth begins to water (conditioned response) as soon as you hear the truck’s musical jingle—even before you bite into the ice cream bar.

Then one day you head down the street. You hear the truck’s music (conditioned stimulus), and your mouth waters (conditioned response). However, when you get to the truck, you discover that they are all out of ice cream. You leave disappointed. The next few days you pass by the truck and hear the music, but don’t stop to get an ice cream bar because you’re running late for class. You begin to salivate less and less when you hear the music, until by the end of the week, your mouth no longer waters when you hear the tune. This illustrates extinction.
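The rise and fall of the conditioned response in this story can also be pictured numerically. The sketch below uses a simple error-correction update in the spirit of the Rescorla-Wagner learning rule, which this chapter does not cover; treat it as an outside illustration, with the learning rate and the recovery value chosen arbitrarily.

```python
# A hedged sketch of acquisition, extinction, and spontaneous recovery using a
# simple error-correction rule in the spirit of the Rescorla-Wagner model.
# The rule and all parameter values are illustrative assumptions, not
# anything specified in this chapter.

def update(strength, ucs_follows, rate=0.3):
    """Nudge associative strength toward 1.0 when the UCS follows the CS,
    and toward 0.0 when the CS occurs alone (extinction)."""
    target = 1.0 if ucs_follows else 0.0
    return strength + rate * (target - strength)

v = 0.0  # associative strength of the CS (the truck's music)

for _ in range(10):                      # acquisition: music followed by ice cream
    v = update(v, ucs_follows=True)
print(f"after acquisition: {v:.2f}")     # near 1.0: strong conditioned response

for _ in range(10):                      # extinction: music with no ice cream
    v = update(v, ucs_follows=False)
print(f"after extinction: {v:.2f}")      # near 0.0: the response has faded

v = 0.4  # spontaneous recovery: after a rest period the response partially
         # returns on its own; this particular value is purely illustrative
print(f"after a rest period: {v:.2f}")
```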
The conditioned response weakens when only the conditioned stimulus (the sound of the truck) is presented, without being followed by the unconditioned stimulus (chocolate ice cream in the mouth). Then the weekend comes. You don’t have to go to class, so you don’t pass the truck. Monday morning arrives and you take your usual route to campus. You round the corner and hear the truck again. What do you think happens? Your mouth begins to water again. Why? After a break from conditioning, the conditioned response reappears, which indicates spontaneous recovery.

Acquisition and extinction involve the strengthening and weakening, respectively, of a learned association. Two other learning processes—stimulus discrimination and stimulus generalization—are involved in distinguishing which stimuli will trigger the learned association. Animals (including humans) need to distinguish between stimuli—for example, between sounds that predict a threatening event and sounds that do not—so that they can respond appropriately (such as running away if the sound is threatening). When an organism learns to respond differently to various stimuli that are similar, it is called stimulus discrimination. In classical conditioning terms, the organism demonstrates the conditioned response only to the conditioned stimulus. Pavlov’s dogs discriminated between the basic tone that sounded before they were fed and other tones (e.g., the doorbell), because the other sounds did not predict the arrival of food. Similarly, Tiger, the cat, discriminated between the sound of the can opener and the sound of the electric mixer. When the electric mixer is going, Tiger is not about to be fed, so she does not come running to the kitchen looking for food.

On the other hand, when an organism demonstrates the conditioned response to stimuli that are similar to the conditioned stimulus, it is called stimulus generalization, the opposite of stimulus discrimination. The more similar a stimulus is to the conditioned stimulus, the more likely the organism is to give the conditioned response. For instance, if the electric mixer sounds very similar to the electric can opener, Tiger may come running after hearing its sound. But if you do not feed her following the electric mixer sound, and you continue to feed her consistently after the electric can opener sound, she will quickly learn to discriminate between the two sounds (provided they are sufficiently dissimilar that she can tell them apart).

Sometimes, classical conditioning can lead to habituation. Habituation occurs when we learn not to respond to a stimulus that is presented repeatedly without change. As the stimulus occurs over and over, we learn not to focus our attention on it. For example, imagine that your neighbor or roommate constantly has the television blaring. This background noise is distracting and makes it difficult for you to focus when you’re studying. However, over time, you become accustomed to the stimulus of the television noise, and eventually you hardly notice it any longer.

Behaviorism

John B. Watson, shown in Figure 6.8, is considered the founder of behaviorism. Behaviorism is a school of thought that arose during the first part of the 20th century, which incorporates elements of Pavlov’s classical conditioning (Hunt, 2007). In stark contrast with Freud, who considered the reasons for behavior to be hidden in the unconscious, Watson championed the idea that all behavior can be studied as a simple stimulus-response reaction, without regard for internal processes.
Watson argued that in order for psychology to become a legitimate science, it must shift its concern away from internal mental processes, because mental processes cannot be seen or measured. Instead, he asserted that psychology must focus on outward observable behavior that can be measured.

Watson’s ideas were influenced by Pavlov’s work. According to Watson, human behavior, just like animal behavior, is primarily the result of conditioned responses. Whereas Pavlov’s work with dogs involved the conditioning of reflexes, Watson believed the same principles could be extended to the conditioning of human emotions (Watson, 1919). Thus began Watson’s work with his graduate student Rosalie Rayner and a baby called Little Albert. Through their experiments with Little Albert, Watson and Rayner (1920) demonstrated how fears can be conditioned.

In 1920, Watson was the chair of the psychology department at Johns Hopkins University. Through his position at the university he came to meet Little Albert’s mother, Arvilla Merritte, who worked at a campus hospital (DeAngelis, 2010). Watson offered her a dollar to allow her son to be the subject of his experiments in classical conditioning. Through these experiments, Little Albert was exposed to and conditioned to fear certain things. Initially he was presented with various neutral stimuli, including a rabbit, a dog, a monkey, masks, cotton wool, and a white rat. He was not afraid of any of these things. Then Watson, with the help of Rayner, conditioned Little Albert to associate these stimuli with an emotion—fear. For example, Watson handed Little Albert the white rat, and Little Albert enjoyed playing with it. Then Watson made a loud sound, by striking a hammer against a metal bar hanging behind Little Albert’s head, each time Little Albert touched the rat. Little Albert was frightened by the sound—demonstrating a reflexive fear of sudden loud noises—and began to cry. Watson repeatedly paired the loud sound with the white rat. Soon Little Albert became frightened by the white rat alone. In this case, what are the UCS, CS, UCR, and CR? Days later, Little Albert demonstrated stimulus generalization—he became afraid of other furry things: a rabbit, a furry coat, and even a Santa Claus mask (Figure 6.9). Watson had succeeded in conditioning a fear response in Little Albert, thus demonstrating that emotions could become conditioned responses. It had been Watson’s intention to produce a phobia—a persistent, excessive fear of a specific object or situation—through conditioning alone, thus countering Freud’s view that phobias are caused by deep, hidden conflicts in the mind. However, there is no evidence that Little Albert experienced phobias in later years. Little Albert’s mother moved away, ending the experiment. While Watson’s research provided new insight into conditioning, it would be considered unethical by today’s standards.

Link to Learning

View scenes from John Watson’s experiment in which Little Albert was conditioned to respond in fear to furry objects. As you watch the video, look closely at Little Albert’s reactions and the manner in which Watson and Rayner present the stimuli before and after conditioning. Based on what you see, would you come to the same conclusions as the researchers?

Everyday Connection

Advertising and Associative Learning

Advertising executives are pros at applying the principles of associative learning. Think about the car commercials you have seen on television. Many of them feature an attractive model.
By associating the model with the car being advertised, you come to see the car as being desirable (Cialdini, 2008). You may be asking yourself, does this advertising technique actually work? According to Cialdini (2008), men who viewed a car commercial that included an attractive model later rated the car as being faster, more appealing, and better designed than did men who viewed an advertisement for the same car minus the model.

Have you ever noticed how quickly advertisers cancel contracts with a famous athlete following a scandal? As far as the advertiser is concerned, that athlete is no longer associated with positive feelings; therefore, the athlete cannot be used as an unconditioned stimulus to condition the public to associate positive feelings (the unconditioned response) with their product (the conditioned stimulus). Now that you are aware of how associative learning works, see if you can find examples of these types of advertisements on television, in magazines, or on the Internet.

6.3 Operant Conditioning

Learning Objectives

By the end of this section, you will be able to:
Define operant conditioning
Explain the difference between reinforcement and punishment
Distinguish between reinforcement schedules

The previous section of this chapter focused on the type of associative learning known as classical conditioning. Remember that in classical conditioning, something in the environment triggers a reflex automatically, and researchers train the organism to react to a different stimulus. Now we turn to the second type of associative learning, operant conditioning. In operant conditioning, organisms learn to associate a behavior and its consequence (Table 6.1). A pleasant consequence makes that behavior more likely to be repeated in the future. For example, Spirit, a dolphin at the National Aquarium in Baltimore, does a flip in the air when her trainer blows a whistle. The consequence is that she gets a fish.

Conditioning approach
Classical conditioning: An unconditioned stimulus (such as food) is paired with a neutral stimulus (such as a bell). The neutral stimulus eventually becomes the conditioned stimulus, which brings about the conditioned response (salivation).
Operant conditioning: The target behavior is followed by reinforcement or punishment to either strengthen or weaken it, so that the learner is more likely to exhibit the desired behavior in the future.

Stimulus timing
Classical conditioning: The stimulus occurs immediately before the response.
Operant conditioning: The stimulus (either reinforcement or punishment) occurs soon after the response.

Table 6.1 Classical and Operant Conditioning Compared

Psychologist B. F. Skinner saw that classical conditioning is limited to existing behaviors that are reflexively elicited, and it doesn’t account for new behaviors such as riding a bike. He proposed a theory about how such behaviors come about. Skinner believed that behavior is motivated by the consequences we receive for the behavior: the reinforcements and punishments. His idea that learning is the result of consequences is based on the law of effect, which was first proposed by psychologist Edward Thorndike. According to the law of effect, behaviors that are followed by consequences that are satisfying to the organism are more likely to be repeated, and behaviors that are followed by unpleasant consequences are less likely to be repeated (Thorndike, 1911). Essentially, if an organism does something that brings about a desired result, the organism is more likely to do it again.
If an organism does something that does not bring about a desired result, the organism is less likely to do it again. An example of the law of effect is in employment. One of the reasons (and often the main reason) we show up for work is because we get paid to do so. If we stop getting paid, we will likely stop showing up—even if we love our job.

Working with Thorndike’s law of effect as his foundation, Skinner began conducting scientific experiments on animals (mainly rats and pigeons) to determine how organisms learn through operant conditioning (Skinner, 1938). He placed these animals inside an operant conditioning chamber, which has come to be known as a “Skinner box” (Figure 6.10). A Skinner box contains a lever (for rats) or disk (for pigeons) that the animal can press or peck for a food reward via the dispenser. Speakers and lights can be associated with certain behaviors. A recorder counts the number of responses made by the animal.

Link to Learning

Watch this brief video clip to learn more about operant conditioning: Skinner is interviewed, and operant conditioning of pigeons is demonstrated.

In discussing operant conditioning, we use several everyday words—positive, negative, reinforcement, and punishment—in a specialized manner. In operant conditioning, positive and negative do not mean good and bad. Instead, positive means you are adding something, and negative means you are taking something away. Reinforcement means you are increasing a behavior, and punishment means you are decreasing a behavior. Reinforcement can be positive or negative, and punishment can also be positive or negative. All reinforcers (positive or negative) increase the likelihood of a behavioral response. All punishers (positive or negative) decrease the likelihood of a behavioral response. Now let’s combine these four terms: positive reinforcement, negative reinforcement, positive punishment, and negative punishment (Table 6.2).

Positive reinforcement: Something is added to increase the likelihood of a behavior.
Positive punishment: Something is added to decrease the likelihood of a behavior.
Negative reinforcement: Something is removed to increase the likelihood of a behavior.
Negative punishment: Something is removed to decrease the likelihood of a behavior.

Table 6.2 Positive and Negative Reinforcement and Punishment

Reinforcement

The most effective way to teach a person or animal a new behavior is with positive reinforcement. In positive reinforcement, a desirable stimulus is added to increase a behavior. For example, you tell your five-year-old son, Jerome, that if he cleans his room, he will get a toy. Jerome quickly cleans his room because he wants a new art set.

Let’s pause for a moment. Some people might say, “Why should I reward my child for doing what is expected?” But in fact we are constantly and consistently rewarded in our lives. Our paychecks are rewards, as are high grades and acceptance into our preferred school. Being praised for doing a good job and for passing a driver’s test is also a reward.

Positive reinforcement as a learning tool is extremely effective. It has been found that one of the most effective ways to increase achievement in school districts with below-average reading scores was to pay the children to read. Specifically, second-grade students in Dallas were paid $2 each time they read a book and passed a short quiz about the book. The result was a significant increase in reading comprehension (Fryer, 2010). What do you think about this program?
If Skinner were alive today, he would probably think this was a great idea. He was a strong proponent of using operant conditioning principles to influence students’ behavior at school. In fact, in addition to the Skinner box, he also invented what he called a teaching machine that was designed to reward small steps in learning (Skinner, 1961)—an early forerunner of computer-assisted learning. His teaching machine tested students’ knowledge as they worked through various school subjects. If students answered questions correctly, they received immediate positive reinforcement and could continue; if they answered incorrectly, they did not receive any reinforcement. The idea was that students would spend additional time studying the material to increase their chance of being reinforced the next time (Skinner, 1961).

In negative reinforcement, an undesirable stimulus is removed to increase a behavior. For example, car manufacturers use the principles of negative reinforcement in their seatbelt systems, which go “beep, beep, beep” until you fasten your seatbelt. The annoying sound stops when you exhibit the desired behavior, increasing the likelihood that you will buckle up in the future. Negative reinforcement is also used frequently in horse training. Riders apply pressure—by pulling the reins or squeezing their legs—and then remove the pressure when the horse performs the desired behavior, such as turning or speeding up. The pressure is the negative stimulus that the horse wants to remove.

Punishment

Many people confuse negative reinforcement with punishment in operant conditioning, but they are two very different mechanisms. Remember that reinforcement, even when it is negative, always increases a behavior. In contrast, punishment always decreases a behavior. In positive punishment, you add an undesirable stimulus to decrease a behavior. An example of positive punishment is scolding a student to get the student to stop texting in class. In this case, a stimulus (the reprimand) is added in order to decrease the behavior (texting in class). In negative punishment, you remove a pleasant stimulus to decrease behavior. For example, when a child misbehaves, a parent can take away a favorite toy. In this case, a stimulus (the toy) is removed in order to decrease the behavior.

Punishment, especially when it is immediate, is one way to decrease undesirable behavior. For example, imagine your four-year-old son, Brandon, hits his younger brother. You have Brandon write 100 times “I will not hit my brother” (positive punishment). Chances are he won’t repeat this behavior. While strategies like this are common today, in the past children were often subject to physical punishment, such as spanking. It’s important to be aware of some of the drawbacks in using physical punishment on children. First, punishment may teach fear. Brandon may become fearful of the punishment itself, but he also may become fearful of the person who delivered it—you, his parent. Similarly, children who are punished by teachers may come to fear the teacher and try to avoid school (Gershoff et al., 2010). Consequently, most schools in the United States have banned corporal punishment. Second, punishment may cause children to become more aggressive and prone to antisocial behavior and delinquency (Gershoff, 2002). They see their parents resort to spanking when they become angry and frustrated, so, in turn, they may act out this same behavior when they become angry and frustrated.
For example, because you spank Brenda when you are angry with her for her misbehavior, she might start hitting her friends when they won’t share their toys. While positive punishment can be effective in some cases, Skinner suggested that the use of punishment should be weighed against the possible negative effects. Today’s psychologists and parenting experts favor reinforcement over punishment—they recommend that you catch your child doing something good and reward her for it.

Shaping

In his operant conditioning experiments, Skinner often used an approach called shaping. Instead of rewarding only the target behavior, in shaping, we reward successive approximations of a target behavior. Why is shaping needed? Remember that in order for reinforcement to work, the organism must first display the behavior. Shaping is needed because it is extremely unlikely that an organism will display anything but the simplest of behaviors spontaneously. In shaping, behaviors are broken down into many small, achievable steps. The specific steps used in the process are the following:
1. Reinforce any response that resembles the desired behavior.
2. Then reinforce the response that more closely resembles the desired behavior. You will no longer reinforce the previously reinforced response.
3. Next, begin to reinforce the response that even more closely resembles the desired behavior.
4. Continue to reinforce closer and closer approximations of the desired behavior.
5. Finally, only reinforce the desired behavior.

Shaping is often used in teaching a complex behavior or chain of behaviors. Skinner used shaping to teach pigeons not only such relatively simple behaviors as pecking a disk in a Skinner box, but also many unusual and entertaining behaviors, such as turning in circles, walking in figure eights, and even playing ping pong; the technique is commonly used by animal trainers today. An important part of shaping is stimulus discrimination. Recall Pavlov’s dogs—he trained them to respond to the tone of a bell, and not to similar tones or sounds. This discrimination is also important in operant conditioning and in shaping behavior.

Link to Learning

Here is a brief video of Skinner’s pigeons playing ping pong.

It’s easy to see how shaping is effective in teaching behaviors to animals, but how does shaping work with humans? Let’s consider parents whose goal is to have their child learn to clean his room. They use shaping to help him master steps toward the goal. Instead of performing the entire task, they set up these steps and reinforce each step. First, he cleans up one toy. Second, he cleans up five toys. Third, he chooses whether to pick up ten toys or put his books and clothes away. Fourth, he cleans up everything except two toys. Finally, he cleans his entire room.

Primary and Secondary Reinforcers

Rewards such as stickers, praise, money, toys, and more can be used to reinforce learning. Let’s go back to Skinner’s rats again. How did the rats learn to press the lever in the Skinner box? They were rewarded with food each time they pressed the lever. For animals, food would be an obvious reinforcer. What would be a good reinforcer for humans? For your son Jerome, it was the promise of a toy if he cleaned his room. How about Joaquin, the soccer player? If you gave Joaquin a piece of candy every time he made a goal, you would be using a primary reinforcer. Primary reinforcers are reinforcers that have innate reinforcing qualities. These kinds of reinforcers are not learned.
Water, food, sleep, shelter, sex, and touch, among others, are primary reinforcers. Pleasure is also a primary reinforcer. Organisms do not lose their drive for these things. For most people, jumping in a cool lake on a very hot day would be reinforcing and the cool lake would be innately reinforcing—the water would cool the person off (a physical need), as well as provide pleasure. A secondary reinforcer has no inherent value and only has reinforcing qualities when linked with a primary reinforcer. Praise, linked to affection, is one example of a secondary reinforcer, as when you called out “Great shot!” every time Joaquin made a goal. Another example, money, is only worth something when you can use it to buy other things—either things that satisfy basic needs (food, water, shelter—all primary reinforcers) or other secondary reinforcers. If you were on a remote island in the middle of the Pacific Ocean and you had stacks of money, the money would not be useful if you could not spend it. What about the stickers on the behavior chart? They also are secondary reinforcers. Sometimes, instead of stickers on a sticker chart, a token is used. Tokens, which are also secondary reinforcers, can then be traded in for rewards and prizes. Entire behavior management systems, known as token economies, are built around the use of these kinds of token reinforcers. Token economies have been found to be very effective at modifying behavior in a variety of settings such as schools, prisons, and mental hospitals. For example, a study by Cangi and Daly (2013) found that use of a token economy increased appropriate social behaviors and reduced inappropriate behaviors in a group of autistic school children. Autistic children tend to exhibit disruptive behaviors such as pinching and hitting. When the children in the study exhibited appropriate behavior (not hitting or pinching), they received a “quiet hands” token. When they hit or pinched, they lost a token. The children could then exchange specified amounts of tokens for minutes of playtime. Everyday Connection Behavior Modification in Children Parents and teachers often use behavior modification to change a child’s behavior. Behavior modification uses the principles of operant conditioning to accomplish behavior change so that undesirable behaviors are switched for more socially acceptable ones. Some teachers and parents create a sticker chart, in which several behaviors are listed ( Figure 6.11 ). Sticker charts are a form of token economies, as described in the text. Each time children perform the behavior, they get a sticker, and after a certain number of stickers, they get a prize, or reinforcer. The goal is to increase acceptable behaviors and decrease misbehavior. Remember, it is best to reinforce desired behaviors, rather than to use punishment. In the classroom, the teacher can reinforce a wide range of behaviors, from students raising their hands, to walking quietly in the hall, to turning in their homework. At home, parents might create a behavior chart that rewards children for things such as putting away toys, brushing their teeth, and helping with dinner. In order for behavior modification to be effective, the reinforcement needs to be connected with the behavior; the reinforcement must matter to the child and be done consistently. Time-out is another popular technique used in behavior modification with children. It operates on the principle of negative punishment. 
When a child demonstrates an undesirable behavior, she is removed from the desirable activity at hand ( Figure 6.12 ). For example, say that Sophia and her brother Mario are playing with building blocks. Sophia throws some blocks at her brother, so you give her a warning that she will go to time-out if she does it again. A few minutes later, she throws more blocks at Mario. You remove Sophia from the room for a few minutes. When she comes back, she doesn’t throw blocks. There are several important points that you should know if you plan to implement time-out as a behavior modification technique. First, make sure the child is being removed from a desirable activity and placed in a less desirable location. If the activity is something undesirable for the child, this technique will backfire because it is more enjoyable for the child to be removed from the activity. Second, the length of the time-out is important. The general rule of thumb is one minute for each year of the child’s age. Sophia is five; therefore, she sits in a time-out for five minutes. Setting a timer helps children know how long they have to sit in time-out. Finally, as a caregiver, keep several guidelines in mind over the course of a time-out: remain calm when directing your child to time-out; ignore your child during time-out (because caregiver attention may reinforce misbehavior); and give the child a hug or a kind word when time-out is over. Reinforcement Schedules Remember, the best way to teach a person or animal a behavior is to use positive reinforcement. For example, Skinner used positive reinforcement to teach rats to press a lever in a Skinner box. At first, the rat might randomly hit the lever while exploring the box, and out would come a pellet of food. After eating the pellet, what do you think the hungry rat did next? It hit the lever again, and received another pellet of food. Each time the rat hit the lever, a pellet of food came out. When an organism receives a reinforcer each time it displays a behavior, it is called continuous reinforcement . This reinforcement schedule is the quickest way to teach someone a behavior, and it is especially effective in training a new behavior. Let’s look back at the dog that was learning to sit earlier in the chapter. Now, each time he sits, you give him a treat. Timing is important here: you will be most successful if you present the reinforcer immediately after he sits, so that he can make an association between the target behavior (sitting) and the consequence (getting a treat). Link to Learning Watch this video clip where veterinarian Dr. Sophia Yin shapes a dog’s behavior using the steps outlined above. Once a behavior is trained, researchers and trainers often turn to another type of reinforcement schedule—partial reinforcement. In partial reinforcement , also referred to as intermittent reinforcement, the person or animal does not get reinforced every time they perform the desired behavior. There are several different types of partial reinforcement schedules ( Table 6.3 ). These schedules are described as either fixed or variable, and as either interval or ratio. Fixed refers to the number of responses between reinforcements, or the amount of time between reinforcements, which is set and unchanging. Variable refers to the number of responses or amount of time between reinforcements, which varies or changes. Interval means the schedule is based on the time between reinforcements, and ratio means the schedule is based on the number of responses between reinforcements. 
Table 6.3 Reinforcement Schedules

- Fixed interval: Reinforcement is delivered at predictable time intervals (e.g., after 5, 10, 15, and 20 minutes). Result: moderate response rate with significant pauses after reinforcement. Example: a hospital patient using patient-controlled, doctor-timed pain relief.
- Variable interval: Reinforcement is delivered at unpredictable time intervals (e.g., after 5, 7, 10, and 20 minutes). Result: moderate yet steady response rate. Example: checking Facebook.
- Fixed ratio: Reinforcement is delivered after a predictable number of responses (e.g., after 2, 4, 6, and 8 responses). Result: high response rate with pauses after reinforcement. Example: piecework—a factory worker getting paid for every x number of items manufactured.
- Variable ratio: Reinforcement is delivered after an unpredictable number of responses (e.g., after 1, 4, 5, and 9 responses). Result: high and steady response rate. Example: gambling.

Now let’s combine these four terms. A fixed interval reinforcement schedule is when behavior is rewarded after a set amount of time. For example, June undergoes major surgery in a hospital. During recovery, she is expected to experience pain and will require prescription medications for pain relief. June is given an IV drip with a patient-controlled painkiller. Her doctor sets a limit: one dose per hour. June pushes a button when pain becomes difficult, and she receives a dose of medication. Since the reward (pain relief) only occurs on a fixed interval, there is no point in exhibiting the behavior when it will not be rewarded.

With a variable interval reinforcement schedule, the person or animal gets the reinforcement based on varying amounts of time, which are unpredictable. Say that Manuel is the manager at a fast-food restaurant. Every once in a while someone from the quality control division comes to Manuel’s restaurant. If the restaurant is clean and the service is fast, everyone on that shift earns a $20 bonus. Manuel never knows when the quality control person will show up, so he always tries to keep the restaurant clean and ensures that his employees provide prompt and courteous service. His productivity regarding prompt service and keeping a clean restaurant is steady because he wants his crew to earn the bonus.

With a fixed ratio reinforcement schedule, there are a set number of responses that must occur before the behavior is rewarded. Carla sells glasses at an eyeglass store, and she earns a commission every time she sells a pair of glasses. She always tries to sell people more pairs of glasses, including prescription sunglasses or a backup pair, so she can increase her commission. She does not care whether the person really needs the prescription sunglasses; she just wants her commission. The quality of what Carla sells does not matter because her commission is not based on quality; it’s only based on the number of pairs sold. This distinction in the quality of performance can help determine which reinforcement method is most appropriate for a particular situation. Fixed ratios are better suited to optimize the quantity of output, whereas a fixed interval, in which the reward is not quantity based, can lead to a higher quality of output.

In a variable ratio reinforcement schedule, the number of responses needed for a reward varies. This is the most powerful partial reinforcement schedule. An example of the variable ratio reinforcement schedule is gambling. Imagine that Sarah—generally a smart, thrifty woman—visits Las Vegas for the first time.
She is not a gambler, but out of curiosity she puts a quarter into the slot machine, and then another, and another. Nothing happens. Two dollars in quarters later, her curiosity is fading, and she is just about to quit. But then, the machine lights up, bells go off, and Sarah gets 50 quarters back. That’s more like it! Sarah gets back to inserting quarters with renewed interest, and a few minutes later she has used up all her gains and is $10 in the hole. Now might be a sensible time to quit. And yet, she keeps putting money into the slot machine because she never knows when the next reinforcement is coming. She keeps thinking that with the next quarter she could win $50, or $100, or even more. Because the reinforcement schedule in most types of gambling has a variable ratio schedule, people keep trying and hoping that the next time they will win big. This is one of the reasons that gambling is so addictive—and so resistant to extinction. In operant conditioning, extinction of a reinforced behavior occurs at some point after reinforcement stops, and the speed at which this happens depends on the reinforcement schedule. In a variable ratio schedule, the point of extinction comes very slowly, as described above. But in the other reinforcement schedules, extinction may come quickly. For example, if June presses the button for the pain relief medication before the allotted time her doctor has approved, no medication is administered. She is on a fixed interval reinforcement schedule (dosed hourly), so extinction occurs quickly when reinforcement doesn’t come at the expected time. Among the reinforcement schedules, variable ratio is the most productive and the most resistant to extinction. Fixed interval is the least productive and the easiest to extinguish ( Figure 6.13 ). Connect the Concepts Gambling and the Brain Skinner (1953) stated, “If the gambling establishment cannot persuade a patron to turn over money with no return, it may achieve the same effect by returning part of the patron's money on a variable-ratio schedule” (p. 397). Skinner uses gambling as an example of the power and effectiveness of conditioning behavior based on a variable ratio reinforcement schedule. In fact, Skinner was so confident in his knowledge of gambling addiction that he even claimed he could turn a pigeon into a pathological gambler (“Skinner’s Utopia,” 1971). Beyond the power of variable ratio reinforcement, gambling seems to work on the brain in the same way as some addictive drugs. The Illinois Institute for Addiction Recovery (n.d.) reports evidence suggesting that pathological gambling is an addiction similar to a chemical addiction ( Figure 6.14 ). Specifically, gambling may activate the reward centers of the brain, much like cocaine does. Research has shown that some pathological gamblers have lower levels of the neurotransmitter (brain chemical) known as norepinephrine than do normal gamblers (Roy, et al., 1988). According to a study conducted by Alec Roy and colleagues, norepinephrine is secreted when a person feels stress, arousal, or thrill; pathological gamblers use gambling to increase their levels of this neurotransmitter. Another researcher, neuroscientist Hans Breiter, has done extensive research on gambling and its effects on the brain. Breiter (as cited in Franzen, 2001) reports that “Monetary reward in a gambling-like experiment produces brain activation very similar to that observed in a cocaine addict receiving an infusion of cocaine” (para. 1). 
Deficiencies in serotonin (another neurotransmitter) might also contribute to compulsive behavior, including a gambling addiction. It may be that pathological gamblers’ brains are different than those of other people, and perhaps this difference may somehow have led to their gambling addiction, as these studies seem to suggest. However, it is very difficult to ascertain the cause because it is impossible to conduct a true experiment (it would be unethical to try to turn randomly assigned participants into problem gamblers). Therefore, it may be that causation actually moves in the opposite direction—perhaps the act of gambling somehow changes neurotransmitter levels in some gamblers’ brains. It also is possible that some overlooked factor, or confounding variable, played a role in both the gambling addiction and the differences in brain chemistry. Cognition and Latent Learning Although strict behaviorists such as Skinner and Watson refused to believe that cognition (such as thoughts and expectations) plays a role in learning, another behaviorist, Edward C. Tolman , had a different opinion. Tolman’s experiments with rats demonstrated that organisms can learn even if they do not receive immediate reinforcement (Tolman & Honzik, 1930; Tolman, Ritchie, & Kalish, 1946). This finding was in conflict with the prevailing idea at the time that reinforcement must be immediate in order for learning to occur, thus suggesting a cognitive aspect to learning. In the experiments, Tolman placed hungry rats in a maze with no reward for finding their way through it. He also studied a comparison group that was rewarded with food at the end of the maze. As the unreinforced rats explored the maze, they developed a cognitive map : a mental picture of the layout of the maze ( Figure 6.15 ). After 10 sessions in the maze without reinforcement, food was placed in a goal box at the end of the maze. As soon as the rats became aware of the food, they were able to find their way through the maze quickly, just as quickly as the comparison group, which had been rewarded with food all along. This is known as latent learning : learning that occurs but is not observable in behavior until there is a reason to demonstrate it. Latent learning also occurs in humans. Children may learn by watching the actions of their parents but only demonstrate it at a later date, when the learned material is needed. For example, suppose that Ravi’s dad drives him to school every day. In this way, Ravi learns the route from his house to his school, but he’s never driven there himself, so he has not had a chance to demonstrate that he’s learned the way. One morning Ravi’s dad has to leave early for a meeting, so he can’t drive Ravi to school. Instead, Ravi follows the same route on his bike that his dad would have taken in the car. This demonstrates latent learning. Ravi had learned the route to school, but had no need to demonstrate this knowledge earlier. Everyday Connection This Place Is Like a Maze Have you ever gotten lost in a building and couldn’t find your way back out? While that can be frustrating, you’re not alone. At one time or another we’ve all gotten lost in places like a museum, hospital, or university library. Whenever we go someplace new, we build a mental representation—or cognitive map—of the location, as Tolman’s rats built a cognitive map of their maze. However, some buildings are confusing because they include many areas that look alike or have short lines of sight. 
Because of this, it’s often difficult to predict what’s around a corner or decide whether to turn left or right to get out of a building. Psychologist Laura Carlson (2010) suggests that what we place in our cognitive map can impact our success in navigating through the environment. She suggests that paying attention to specific features upon entering a building, such as a picture on the wall, a fountain, a statue, or an escalator, adds information to our cognitive map that can be used later to help find our way out of the building.

Link to Learning

Watch this video to learn more about Carlson’s studies on cognitive maps and navigation in buildings.

6.4 Observational Learning (Modeling)

Learning Objectives

By the end of this section, you will be able to:
- Define observational learning
- Discuss the steps in the modeling process
- Explain the prosocial and antisocial effects of observational learning

Previous sections of this chapter focused on classical and operant conditioning, which are forms of associative learning. In observational learning, we learn by watching others and then imitating, or modeling, what they do or say. The individuals performing the imitated behavior are called models. Research suggests that this imitative learning involves a specific type of neuron, called a mirror neuron (Hickock, 2010; Rizzolatti, Fadiga, Fogassi, & Gallese, 2002; Rizzolatti, Fogassi, & Gallese, 2006).

Humans and other animals are capable of observational learning. As you will see, the phrase “monkey see, monkey do” really is accurate (Figure 6.16). The same could be said about other animals. For example, in a study of social learning in chimpanzees, researchers gave juice boxes with straws to two groups of captive chimpanzees. The first group dipped the straw into the juice box, and then sucked on the small amount of juice at the end of the straw. The second group sucked through the straw directly, getting much more juice. When the first group, the “dippers,” observed the second group, “the suckers,” what do you think happened? All of the “dippers” in the first group switched to sucking through the straws directly. By simply observing the other chimps and modeling their behavior, they learned that this was a more efficient method of getting juice (Yamamoto, Humle, & Tanaka, 2013).

Imitation is much more obvious in humans, but is imitation really the sincerest form of flattery? Consider Claire’s experience with observational learning. Claire’s nine-year-old son, Jay, was getting into trouble at school and was defiant at home. Claire feared that Jay would end up like her brothers, two of whom were in prison. One day, after yet another bad day at school and another negative note from the teacher, Claire, at her wit’s end, beat her son with a belt to get him to behave. Later that night, as she put her children to bed, Claire witnessed her four-year-old daughter, Anna, take a belt to her teddy bear and whip it. Claire was horrified, realizing that Anna was imitating her mother. It was then that Claire knew she wanted to discipline her children in a different manner.

Like Tolman, whose experiments with rats suggested a cognitive component to learning, psychologist Albert Bandura’s ideas about learning were different from those of strict behaviorists. Bandura and other researchers proposed a brand of behaviorism called social learning theory, which took cognitive processes into account.
According to Bandura, pure behaviorism could not explain why learning can take place in the absence of external reinforcement. He felt that internal mental states must also have a role in learning and that observational learning involves much more than imitation. In imitation, a person simply copies what the model does. Observational learning is much more complex. According to Lefrançois (2012), there are several ways that observational learning can occur:
- You learn a new response. After watching your coworker get chewed out by your boss for coming in late, you start leaving home 10 minutes earlier so that you won’t be late.
- You choose whether or not to imitate the model depending on what you saw happen to the model. Remember Julian and his father? When learning to surf, Julian might watch how his father pops up successfully on his surfboard and then attempt to do the same thing. On the other hand, Julian might learn not to touch a hot stove after watching his father get burned on a stove.
- You learn a general rule that you can apply to other situations.

Bandura identified three kinds of models: live, verbal, and symbolic. A live model demonstrates a behavior in person, as when Ben stood up on his surfboard so that Julian could see how he did it. A verbal instructional model does not perform the behavior, but instead explains or describes the behavior, as when a soccer coach tells his young players to kick the ball with the side of the foot, not with the toe. A symbolic model can be fictional characters or real people who demonstrate behaviors in books, movies, television shows, video games, or Internet sources (Figure 6.17).

Link to Learning

Latent learning and modeling are used all the time in the world of marketing and advertising. In one commercial that played for months across the New York, New Jersey, and Connecticut areas, Derek Jeter, an award-winning baseball player for the New York Yankees, advertises a Ford. The commercial aired in a part of the country where Jeter is an incredibly well-known athlete. He is wealthy and considered very loyal and good-looking. What message are the advertisers sending by having him featured in the ad? How effective do you think it is?

Steps in the Modeling Process

Of course, we don’t learn a behavior simply by observing a model. Bandura described specific steps in the process of modeling that must be followed if learning is to be successful: attention, retention, reproduction, and motivation. First, you must be focused on what the model is doing—you have to pay attention. Next, you must be able to retain, or remember, what you observed; this is retention. Then, you must be able to perform the behavior that you observed and committed to memory; this is reproduction. Finally, you must have motivation. You need to want to copy the behavior, and whether or not you are motivated depends on what happened to the model. If you saw that the model was reinforced for her behavior, you will be more motivated to copy her. This is known as vicarious reinforcement. On the other hand, if you observed the model being punished, you would be less motivated to copy her. This is called vicarious punishment. For example, imagine that four-year-old Allison watched her older sister Kaitlyn playing in their mother’s makeup, and then saw Kaitlyn get a time-out when their mother came in. After their mother left the room, Allison was tempted to play in the makeup, but she did not want to get a time-out from her mother. What do you think she did?
Once you actually demonstrate the new behavior, the reinforcement you receive plays a part in whether or not you will repeat the behavior. Bandura researched modeling behavior, particularly children’s modeling of adults’ aggressive and violent behaviors (Bandura, Ross, & Ross, 1961). He conducted an experiment with a five-foot inflatable doll that he called a Bobo doll. In the experiment, children’s aggressive behavior was influenced by whether the teacher was punished for her behavior. In one scenario, a teacher acted aggressively with the doll, hitting, throwing, and even punching the doll, while a child watched. There were two types of responses by the children to the teacher’s behavior. When the teacher was punished for her bad behavior, the children decreased their tendency to act as she had. When the teacher was praised or ignored (and not punished for her behavior), the children imitated what she did, and even what she said. They punched, kicked, and yelled at the doll.

Link to Learning

Watch this video clip to see a portion of the famous Bobo doll experiment, including an interview with Albert Bandura.

What are the implications of this study? Bandura concluded that we watch and learn, and that this learning can have both prosocial and antisocial effects. Prosocial (positive) models can be used to encourage socially acceptable behavior. Parents in particular should take note of this finding. If you want your children to read, then read to them. Let them see you reading. Keep books in your home. Talk about your favorite books. If you want your children to be healthy, then let them see you eat right and exercise, and spend time engaging in physical fitness activities together. The same holds true for qualities like kindness, courtesy, and honesty. The main idea is that children observe and learn from their parents, even their parents’ morals, so be consistent and toss out the old adage “Do as I say, not as I do,” because children tend to copy what you do instead of what you say. Besides parents, many public figures, such as Martin Luther King, Jr. and Mahatma Gandhi, are viewed as prosocial models who are able to inspire global social change. Can you think of someone who has been a prosocial model in your life?

The antisocial effects of observational learning are also worth mentioning. As you saw from the example of Claire at the beginning of this section, her daughter viewed Claire’s aggressive behavior and copied it. Research suggests that this may help to explain why abused children often grow up to be abusers themselves (Murrell, Christoff, & Henning, 2007). In fact, about 30% of abused children become abusive parents (U.S. Department of Health & Human Services, 2013). We tend to do what we know. Abused children, who grow up witnessing their parents deal with anger and frustration through violent and aggressive acts, often learn to behave in that manner themselves. Sadly, it’s a vicious cycle that’s difficult to break.

Some studies suggest that violent television shows, movies, and video games may also have antisocial effects (Figure 6.18), although further research needs to be done to understand the correlational and causal aspects of media violence and behavior. Some studies have found a link between viewing violence and aggression in children (Anderson & Gentile, 2008; Kirsch, 2010; Miller, Grabell, Thomas, Bermann, & Graham-Bermann, 2012).
These findings may not be surprising, given that a child graduating from high school has been exposed to around 200,000 violent acts—including murder, robbery, torture, bombings, beatings, and rape—through various forms of media (Huston et al., 1992). Not only might viewing media violence affect aggressive behavior by teaching people to act that way in real-life situations, but it has also been suggested that repeated exposure to violent acts desensitizes people to violence. Psychologists are working to understand this dynamic.

Link to Learning

View this video to hear Brad Bushman, a psychologist who has published extensively on human aggression and violence, discuss his research.
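A brief illustrative aside before leaving this chapter: the claim above that variable ratio schedules are the most resistant to extinction can be made concrete with a small simulation. The sketch below is not from the chapter and is not a validated model of learning; the toy "learner," its give-up rule, and all parameter values are hypothetical choices made only for illustration. The learner records the longest streak of unrewarded responses it ever experienced during training and, once reinforcement stops entirely, keeps responding until the current dry streak becomes clearly abnormal by that standard.

    import random

    rng = random.Random(42)

    def fixed_ratio(n):
        # FR-10: exactly every 10th response is reinforced.
        return n % 10 == 0

    def variable_ratio(n):
        # VR-10: each response is reinforced with probability 1/10,
        # so reinforcement arrives after 10 responses on average.
        return rng.random() < 0.1

    def responses_in_extinction(schedule, training_responses=2000, patience=3):
        """Return how many unrewarded responses the toy learner emits in
        extinction before quitting. The (hypothetical) give-up rule: quit
        once the current dry streak exceeds `patience` times the longest
        dry streak ever experienced during training."""
        streak = longest = 0
        for n in range(1, training_responses + 1):
            if schedule(n):
                streak = 0
            else:
                streak += 1
                longest = max(longest, streak)
        return patience * longest

    print("FR-10:", responses_in_extinction(fixed_ratio))     # always 27 here
    print("VR-10:", responses_in_extinction(variable_ratio))  # varies with seed; typically well over 100

Both schedules pay off once per 10 responses on average, yet the outcomes differ sharply: under FR-10 the longest dry streak during training is always 9 responses, so extinction is detected almost immediately, while under VR-10 long dry streaks occur naturally during training, so the learner cannot distinguish extinction from ordinary bad luck and persists far longer—mirroring the pattern the chapter attributes to gambling and to Figure 6.13.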
anatomy_and_physiology
Chapter Objectives

After studying this chapter, you will be able to:
- Name the major divisions of the nervous system, both anatomical and functional
- Describe the functional and structural differences between gray matter and white matter structures
- Name the parts of the multipolar neuron in order of polarity
- List the types of glial cells and assign each to the proper division of the nervous system, along with their function(s)
- Distinguish the major functions of the nervous system: sensation, integration, and response
- Describe the components of the membrane that establish the resting membrane potential
- Describe the changes that occur to the membrane that result in the action potential
- Explain the differences between types of graded potentials
- Categorize the major neurotransmitters by chemical type and effect

Introduction

The nervous system is a very complex organ system. In Peter D. Kramer’s book Listening to Prozac, a pharmaceutical researcher is quoted as saying, “If the human brain were simple enough for us to understand, we would be too simple to understand it” (1994). That quote is from the early 1990s; in the two decades since, progress has continued at an amazing rate within the scientific disciplines of neuroscience. It is an interesting conundrum to consider that the complexity of the nervous system may be too complex for it (that is, for us) to completely unravel. But our current level of understanding is probably nowhere close to that limit.

One easy way to begin to understand the structure of the nervous system is to start with the large divisions and work through to a more in-depth understanding. In other chapters, the finer details of the nervous system will be explained, but first looking at an overview of the system will allow you to begin to understand how its parts work together. The focus of this chapter is on nervous (neural) tissue, both its structure and its function. But before you learn about that, you will see a big picture of the system—actually, a few big pictures.
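As supplementary background for the membrane potential objectives above (not part of the original chapter text): the resting membrane potential can be related to ion concentration gradients through the Nernst equation. A minimal worked example follows; the concentration values used are typical textbook figures for a mammalian neuron, given here only for illustration.

    % Nernst equation for the equilibrium potential of a single ion
    E_{\mathrm{ion}} = \frac{RT}{zF}\,\ln\frac{[\mathrm{ion}]_{\mathrm{out}}}{[\mathrm{ion}]_{\mathrm{in}}}

    % For K+ at body temperature (T = 310 K), RT/F is about 26.7 mV and z = +1.
    % With typical concentrations [K+]_out = 5 mM and [K+]_in = 140 mM:
    E_{\mathrm{K}} \approx 26.7\ \mathrm{mV}\times\ln\frac{5}{140} \approx -89\ \mathrm{mV}

The measured resting potential of about -70 mV (the value cited in the question context below) is less negative than this potassium equilibrium potential because the resting membrane, while dominated by its permeability to K+, is also slightly permeable to Na+, whose equilibrium potential is strongly positive.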
[ { "answer": { "ans_choice": 2, "ans_text": "cranial" }, "bloom": null, "hl_context": "The nervous system can be divided into two major regions : the central and peripheral nervous systems . <hl> The central nervous system ( CNS ) is the brain and spinal cord , and the peripheral nervous system ( PNS ) is everything else ( Figure 12.2 ) . <hl> <hl> The brain is contained within the cranial cavity of the skull , and the spinal cord is contained within the vertebral cavity of the vertebral column . <hl> <hl> It is a bit of an oversimplification to say that the CNS is what is inside these two cavities and the peripheral nervous system is outside of them , but that is one way to start to think about it . <hl> In actuality , there are some elements of the peripheral nervous system that are within the cranial or vertebral cavities . The peripheral nervous system is so named because it is on the periphery — meaning beyond the brain and spinal cord . Depending on different aspects of the nervous system , the dividing line between central and peripheral is not necessarily universal .", "hl_sentences": "The central nervous system ( CNS ) is the brain and spinal cord , and the peripheral nervous system ( PNS ) is everything else ( Figure 12.2 ) . The brain is contained within the cranial cavity of the skull , and the spinal cord is contained within the vertebral cavity of the vertebral column . It is a bit of an oversimplification to say that the CNS is what is inside these two cavities and the peripheral nervous system is outside of them , but that is one way to start to think about it .", "question": { "cloze_format": "The ___ cavitiy contains a component of the central nervous system.", "normal_format": "Which of the following cavities contains a component of the central nervous system?", "question_choices": [ "abdominal", "pelvic", "cranial", "thoracic" ], "question_id": "fs-id2480516", "question_text": "Which of the following cavities contains a component of the central nervous system?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "myelinated axons" }, "bloom": "1", "hl_context": "Nervous tissue , present in both the CNS and PNS , contains two basic types of cells : neurons and glial cells . A glial cell is one of a variety of cells that provide a framework of tissue that supports the neurons and their activities . The neuron is the more functionally important of the two , in terms of the communicative function of the nervous system . To describe the functional divisions of the nervous system , it is important to understand the structure of a neuron . Neurons are cells and therefore have a soma , or cell body , but they also have extensions of the cell ; each extension is generally referred to as a process . There is one important process that every neuron has called an axon , which is the fiber that connects a neuron with its target . Another type of process that branches off from the soma is the dendrite . Dendrites are responsible for receiving most of the input from other neurons . Looking at nervous tissue , there are regions that predominantly contain cell bodies and regions that are largely composed of just axons . <hl> These two regions within nervous system structures are often referred to as gray matter ( the regions with many cell bodies and dendrites ) or white matter ( the regions with many axons ) . <hl> <hl> Figure 12.3 demonstrates the appearance of these regions in the brain and spinal cord . 
<hl> The colors ascribed to these regions are what would be seen in “ fresh , ” or unstained , nervous tissue . Gray matter is not necessarily gray . It can be pinkish because of blood content , or even slightly tan , depending on how long the tissue has been preserved . <hl> But white matter is white because axons are insulated by a lipid-rich substance called myelin . <hl> Lipids can appear as white ( “ fatty ” ) material , much like the fat on a raw piece of chicken or beef . Actually , gray matter may have that color ascribed to it because next to the white matter , it is just darker — hence , gray .", "hl_sentences": "These two regions within nervous system structures are often referred to as gray matter ( the regions with many cell bodies and dendrites ) or white matter ( the regions with many axons ) . Figure 12.3 demonstrates the appearance of these regions in the brain and spinal cord . But white matter is white because axons are insulated by a lipid-rich substance called myelin .", "question": { "cloze_format": "The structure that predominates in the white matter of the brain is ___.", "normal_format": "Which structure predominates in the white matter of the brain?", "question_choices": [ "myelinated axons", "neuronal cell bodies", "ganglia of the parasympathetic nerves", "bundles of dendrites from the enteric nervous system" ], "question_id": "fs-id1989047", "question_text": "Which structure predominates in the white matter of the brain?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "axon" }, "bloom": "1", "hl_context": "Watch this video to learn about the release of a neurotransmitter . <hl> The action potential reaches the end of the axon , called the axon terminal , and a chemical signal is released to tell the target cell to do something — either to initiate a new action potential , or to suppress that activity . <hl> <hl> In a very short space , the electrical signal of the action potential is changed into the chemical signal of a neurotransmitter and then back to electrical changes in the target cell membrane . <hl> What is the importance of voltage-gated calcium channels in the release of neurotransmitters ?", "hl_sentences": "The action potential reaches the end of the axon , called the axon terminal , and a chemical signal is released to tell the target cell to do something — either to initiate a new action potential , or to suppress that activity . In a very short space , the electrical signal of the action potential is changed into the chemical signal of a neurotransmitter and then back to electrical changes in the target cell membrane .", "question": { "cloze_format": "(A(n)) ___ is/are a part of a neuron that transmits an electrical signal to a target cell.", "normal_format": "Which part of a neuron transmits an electrical signal to a target cell?", "question_choices": [ "dendrites", "soma", "cell body", "axon" ], "question_id": "fs-id2002542", "question_text": "Which part of a neuron transmits an electrical signal to a target cell?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "nerve" }, "bloom": "1", "hl_context": "Terminology applied to bundles of axons also differs depending on location . <hl> A bundle of axons , or fibers , found in the CNS is called a tract whereas the same thing in the PNS would be called a nerve . <hl> There is an important point to make about these terms , which is that they can both be used to refer to the same bundle of axons . 
When those axons are in the PNS , the term is nerve , but if they are CNS , the term is tract . The most obvious example of this is the axons that project from the retina into the brain . Those axons are called the optic nerve as they leave the eye , but when they are inside the cranium , they are referred to as the optic tract . There is a specific place where the name changes , which is the optic chiasm , but they are still the same axons ( Figure 12.5 ) . A similar situation outside of science can be described for some roads . Imagine a road called “ Broad Street ” in a town called “ Anyville . ” The road leaves Anyville and goes to the next town over , called “ Hometown . ” When the road crosses the line between the two towns and is in Hometown , its name changes to “ Main Street . ” That is the idea behind the naming of the retinal axons . In the PNS , they are called the optic nerve , and in the CNS , they are the optic tract . Table 12.1 helps to clarify which of these terms apply to the central or peripheral nervous systems . Interactive Link", "hl_sentences": "A bundle of axons , or fibers , found in the CNS is called a tract whereas the same thing in the PNS would be called a nerve .", "question": { "cloze_format": "The ___ is the term that describes a bundle of axons in the peripheral nervous system.", "normal_format": "Which term describes a bundle of axons in the peripheral nervous system?", "question_choices": [ "nucleus", "ganglion", "tract", "nerve" ], "question_id": "fs-id2070100", "question_text": "Which term describes a bundle of axons in the peripheral nervous system?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "autonomic" }, "bloom": null, "hl_context": "<hl> The autonomic nervous system ( ANS ) is responsible for involuntary control of the body , usually for the sake of homeostasis ( regulation of the internal environment ) . <hl> Sensory input for autonomic functions can be from sensory structures tuned to external or internal environmental stimuli . The motor output extends to smooth and cardiac muscle as well as glandular tissue . <hl> The role of the autonomic system is to regulate the organ systems of the body , which usually means to control homeostasis . <hl> <hl> Sweat glands , for example , are controlled by the autonomic system . <hl> When you are hot , sweating helps cool your body down . That is a homeostatic mechanism . But when you are nervous , you might start sweating also . That is not homeostatic , it is the physiological response to an emotional state .", "hl_sentences": "The autonomic nervous system ( ANS ) is responsible for involuntary control of the body , usually for the sake of homeostasis ( regulation of the internal environment ) . The role of the autonomic system is to regulate the organ systems of the body , which usually means to control homeostasis . 
Sweat glands , for example , are controlled by the autonomic system .", "question": { "cloze_format": "The ___ functional division of the nervous system would be responsible for the physiological changes seen during exercise (e.g., increased heart rate and sweating).", "normal_format": "Which functional division of the nervous system would be responsible for the physiological changes seen during exercise (e.g., increased heart rate and sweating)?", "question_choices": [ "somatic", "autonomic", "enteric", "central" ], "question_id": "fs-id1433059", "question_text": "Which functional division of the nervous system would be responsible for the physiological changes seen during exercise (e.g., increased heart rate and sweating)?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "oligodendrocyte" }, "bloom": null, "hl_context": "<hl> Also found in CNS tissue is the oligodendrocyte , sometimes called just “ oligo , ” which is the glial cell type that insulates axons in the CNS . <hl> The name means “ cell of a few branches ” ( oligo - = “ few ” ; dendro - = “ branches ” ; - cyte = “ cell ” ) . There are a few processes that extend from the cell body . Each one reaches out and surrounds an axon to insulate it in myelin . One oligodendrocyte will provide the myelin for multiple axon segments , either for the same axon or for separate axons . The function of myelin will be discussed below .", "hl_sentences": "Also found in CNS tissue is the oligodendrocyte , sometimes called just “ oligo , ” which is the glial cell type that insulates axons in the CNS .", "question": { "cloze_format": "The type of glial cell that provides myelin for the axons in a tract is the ___.", "normal_format": "What type of glial cell provides myelin for the axons in a tract?", "question_choices": [ "oligodendrocyte", "astrocyte", "Schwann cell", "satellite cell" ], "question_id": "fs-id1142742", "question_text": "What type of glial cell provides myelin for the axons in a tract?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "soma" }, "bloom": "1", "hl_context": "<hl> As you learned in the first section , the main part of a neuron is the cell body , which is also known as the soma ( soma = “ body ” ) . <hl> <hl> The cell body contains the nucleus and most of the major organelles . <hl> But what makes neurons special is that they have many extensions of their cell membranes , which are generally referred to as processes . Neurons are usually described as having one , and only one , axon — a fiber that emerges from the cell body and projects to target cells . That single axon can branch repeatedly to communicate with many target cells . It is the axon that propagates the nerve impulse , which is communicated to one or more cells . The other processes of the neuron are dendrites , which receive information from other neurons at specialized areas of contact called synapses . The dendrites are usually highly branched processes , providing locations for other neurons to communicate with the cell body . Information flows through a neuron from the dendrites , across the cell body , and down the axon . This gives the neuron a polarity — meaning that information flows in this one direction . Figure 12.8 shows the relationship of these parts to one another .", "hl_sentences": "As you learned in the first section , the main part of a neuron is the cell body , which is also known as the soma ( soma = “ body ” ) . 
The cell body contains the nucleus and most of the major organelles .", "question": { "cloze_format": "The part of a neuron that contains the nucleus is the ___.", "normal_format": "Which part of a neuron contains the nucleus?", "question_choices": [ "dendrite", "soma", "axon", "synaptic end bulb" ], "question_id": "fs-id721912", "question_text": "Which part of a neuron contains the nucleus?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "white blood cells" }, "bloom": "1", "hl_context": "Like a few other parts of the body , the brain has a privileged blood supply . Very little can pass through by diffusion . <hl> Most substances that cross the wall of a blood vessel into the CNS must do so through an active transport process . <hl> <hl> Because of this , only specific types of molecules can enter the CNS . <hl> <hl> Glucose — the primary energy source — is allowed , as are amino acids . <hl> <hl> Water and some other small particles , like gases and ions , can enter . <hl> <hl> But most everything else cannot , including white blood cells , which are one of the body ’ s main lines of defense . <hl> While this barrier protects the CNS from exposure to toxic or pathogenic substances , it also keeps out the cells that could protect the brain and spinal cord from disease and damage . The BBB also makes it harder for pharmaceuticals to be developed that can affect the nervous system . Aside from finding efficacious substances , the means of delivery is also crucial .", "hl_sentences": "Most substances that cross the wall of a blood vessel into the CNS must do so through an active transport process . Because of this , only specific types of molecules can enter the CNS . Glucose — the primary energy source — is allowed , as are amino acids . Water and some other small particles , like gases and ions , can enter . But most everything else cannot , including white blood cells , which are one of the body ’ s main lines of defense .", "question": { "cloze_format": "The substance that is least able to cross the blood-brain barrier is ___.", "normal_format": "Which of the following substances is least able to cross the blood-brain barrier?", "question_choices": [ "water", "sodium ions", "glucose", "white blood cells" ], "question_id": "fs-id696587", "question_text": "Which of the following substances is least able to cross the blood-brain barrier?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "microglia" }, "bloom": "1", "hl_context": "<hl> Microglia are , as the name implies , smaller than most of the other glial cells . <hl> <hl> Ongoing research into these cells , although not entirely conclusive , suggests that they may originate as white blood cells , called macrophages , that become part of the CNS during early development . <hl> <hl> While their origin is not conclusively determined , their function is related to what macrophages do in the rest of the body . <hl> When macrophages encounter diseased or damaged cells in the rest of the body , they ingest and digest those cells or the pathogens that cause disease . Microglia are the cells in the CNS that can do this in normal , healthy tissue , and they are therefore also referred to as CNS-resident macrophages .", "hl_sentences": "Microglia are , as the name implies , smaller than most of the other glial cells . 
Ongoing research into these cells , although not entirely conclusive , suggests that they may originate as white blood cells , called macrophages , that become part of the CNS during early development . While their origin is not conclusively determined , their function is related to what macrophages do in the rest of the body .", "question": { "cloze_format": "A(n) ___ is a type of glial cell that is the resident macrophage behind the blood-brain barrier.", "normal_format": "What type of glial cell is the resident macrophage behind the blood-brain barrier?", "question_choices": [ "microglia", "astrocyte", "Schwann cell", "satellite cell" ], "question_id": "fs-id1435918", "question_text": "What type of glial cell is the resident macrophage behind the blood-brain barrier?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "lipids and proteins" }, "bloom": "1", "hl_context": "Myelin The insulation for axons in the nervous system is provided by glial cells , oligodendrocytes in the CNS , and Schwann cells in the PNS . Whereas the manner in which either cell is associated with the axon segment , or segments , that it insulates is different , the means of myelinating an axon segment is mostly the same in the two situations . <hl> Myelin is a lipid-rich sheath that surrounds the axon and by doing so creates a myelin sheath that facilitates the transmission of electrical signals along the axon . <hl> <hl> The lipids are essentially the phospholipids of the glial cell membrane . <hl> <hl> Myelin , however , is more than just the membrane of the glial cell . <hl> <hl> It also includes important proteins that are integral to that membrane . <hl> Some of the proteins help to hold the layers of the glial cell membrane closely together .", "hl_sentences": "Myelin is a lipid-rich sheath that surrounds the axon and by doing so creates a myelin sheath that facilitates the transmission of electrical signals along the axon . The lipids are essentially the phospholipids of the glial cell membrane . Myelin , however , is more than just the membrane of the glial cell . It also includes important proteins that are integral to that membrane .", "question": { "cloze_format": "The two types of macromolecules that are the main components of myelin are ___.", "normal_format": "What two types of macromolecules are the main components of myelin?", "question_choices": [ "carbohydrates and lipids", "proteins and nucleic acids", "lipids and proteins", "carbohydrates and nucleic acids" ], "question_id": "fs-id757155", "question_text": "What two types of macromolecules are the main components of myelin?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "cerebral cortex" }, "bloom": "1", "hl_context": "<hl> Within the cerebral cortex , information is processed among many neurons , integrating the stimulus of the water temperature with other sensory stimuli , with your emotional state ( you just aren't ready to wake up ; the bed is calling to you ) , memories ( perhaps of the lab notes you have to study before a quiz ) . <hl> <hl> Finally , a plan is developed about what to do , whether that is to turn the temperature up , turn the whole shower off and go back to bed , or step into the shower . <hl> <hl> To do any of these things , the cerebral cortex has to send a command out to your body to move muscles ( Figure 12.16 ) . 
<hl>", "hl_sentences": "Within the cerebral cortex , information is processed among many neurons , integrating the stimulus of the water temperature with other sensory stimuli , with your emotional state ( you just aren't ready to wake up ; the bed is calling to you ) , memories ( perhaps of the lab notes you have to study before a quiz ) . Finally , a plan is developed about what to do , whether that is to turn the temperature up , turn the whole shower off and go back to bed , or step into the shower . To do any of these things , the cerebral cortex has to send a command out to your body to move muscles ( Figure 12.16 ) .", "question": { "cloze_format": "The ___ is the location where the greatest level of integration is taking place in the example of testing the temperature of the shower.", "normal_format": "Which of these locations is where the greatest level of integration is taking place in the example of testing the temperature of the shower?", "question_choices": [ "skeletal muscle", "spinal cord", "thalamus", "cerebral cortex" ], "question_id": "fs-id1862585", "question_text": "Which of these locations is where the greatest level of integration is taking place in the example of testing the temperature of the shower?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "fraction of a second" }, "bloom": "1", "hl_context": "<hl> A region of the cortex is specialized for sending signals down to the spinal cord for movement . <hl> The upper motor neuron is in this region , called the precentral gyrus of the frontal cortex , which has an axon that extends all the way down the spinal cord . At the level of the spinal cord at which this axon makes a synapse , a graded potential occurs in the cell membrane of a lower motor neuron . <hl> This second motor neuron is responsible for causing muscle fibers to contract . <hl> In the manner described in the chapter on muscle tissue , an action potential travels along the motor neuron axon into the periphery . The axon terminates on muscle fibers at the neuromuscular junction . Acetylcholine is released at this specialized synapse , which causes the muscle action potential to begin , following a large potential known as an end plate potential . When the lower motor neuron excites the muscle fiber , it contracts . <hl> All of this occurs in a fraction of a second , but this story is the basis of how the nervous system functions . <hl> Career Connection Neurophysiologist", "hl_sentences": "A region of the cortex is specialized for sending signals down to the spinal cord for movement . This second motor neuron is responsible for causing muscle fibers to contract . All of this occurs in a fraction of a second , but this story is the basis of how the nervous system functions .", "question": { "cloze_format": "The signaling through the sensory pathway, within the central nervous system, and through the motor command pathway (takes (a)) ___.", "normal_format": "How long does all the signaling through the sensory pathway, within the central nervous system, and through the motor command pathway take?", "question_choices": [ "1 to 2 minutes", "1 to 2 seconds", "fraction of a second", "varies with graded potential" ], "question_id": "fs-id1533571", "question_text": "How long does all the signaling through the sensory pathway, within the central nervous system, and through the motor command pathway take?" 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "lower motor neuron" }, "bloom": "1", "hl_context": "<hl> A region of the cortex is specialized for sending signals down to the spinal cord for movement . <hl> <hl> The upper motor neuron is in this region , called the precentral gyrus of the frontal cortex , which has an axon that extends all the way down the spinal cord . <hl> <hl> At the level of the spinal cord at which this axon makes a synapse , a graded potential occurs in the cell membrane of a lower motor neuron . <hl> This second motor neuron is responsible for causing muscle fibers to contract . In the manner described in the chapter on muscle tissue , an action potential travels along the motor neuron axon into the periphery . The axon terminates on muscle fibers at the neuromuscular junction . Acetylcholine is released at this specialized synapse , which causes the muscle action potential to begin , following a large potential known as an end plate potential . When the lower motor neuron excites the muscle fiber , it contracts . All of this occurs in a fraction of a second , but this story is the basis of how the nervous system functions . Career Connection Neurophysiologist", "hl_sentences": "A region of the cortex is specialized for sending signals down to the spinal cord for movement . The upper motor neuron is in this region , called the precentral gyrus of the frontal cortex , which has an axon that extends all the way down the spinal cord . At the level of the spinal cord at which this axon makes a synapse , a graded potential occurs in the cell membrane of a lower motor neuron .", "question": { "cloze_format": "The target of an upper motor neuron is the ___.", "normal_format": "What is the target of an upper motor neuron?", "question_choices": [ "cerebral cortex", "lower motor neuron", "skeletal muscle", "thalamus" ], "question_id": "fs-id1805294", "question_text": "What is the target of an upper motor neuron?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "sodium" }, "bloom": "1", "hl_context": "This starts with a channel opening for Na + in the membrane . Because the concentration of Na + is higher outside the cell than inside the cell by a factor of 10 , ions will rush into the cell that are driven largely by the concentration gradient . <hl> Because sodium is a positively charged ion , it will change the relative voltage immediately inside the cell relative to immediately outside . <hl> <hl> The resting potential is the state of the membrane at a voltage of - 70 mV , so the sodium cation entering the cell will cause it to become less negative . <hl> <hl> This is known as depolarization , meaning the membrane potential moves toward zero . <hl>", "hl_sentences": "Because sodium is a positively charged ion , it will change the relative voltage immediately inside the cell relative to immediately outside . The resting potential is the state of the membrane at a voltage of - 70 mV , so the sodium cation entering the cell will cause it to become less negative . 
This is known as depolarization , meaning the membrane potential moves toward zero .", "question": { "cloze_format": "The ion ___ enters a neuron causing depolarization of the cell membrane.", "normal_format": "What ion enters a neuron causing depolarization of the cell membrane?", "question_choices": [ "sodium", "chloride", "potassium", "phosphate" ], "question_id": "fs-id1696554", "question_text": "What ion enters a neuron causing depolarization of the cell membrane?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "threshold" }, "bloom": null, "hl_context": "A third type of channel that is an important part of depolarization in the action potential is the voltage-gated Na + channel . The channels that start depolarizing the membrane because of a stimulus help the cell to depolarize from - 70 mV to - 55 mV . <hl> Once the membrane reaches that voltage , the voltage-gated Na + channels open . <hl> <hl> This is what is known as the threshold . <hl> Any depolarization that does not change the membrane potential to - 55 mV or higher will not reach threshold and thus will not result in an action potential . Also , any stimulus that depolarizes the membrane to - 55 mV or beyond will cause a large number of channels to open and an action potential will be initiated .", "hl_sentences": "Once the membrane reaches that voltage , the voltage-gated Na + channels open . This is what is known as the threshold .", "question": { "cloze_format": "Voltage-gated Na+ channels open upon reaching the state of ___.", "normal_format": "Voltage-gated Na+ channels open upon reaching what state?", "question_choices": [ "resting potential", "threshold", "repolarization", "overshoot" ], "question_id": "fs-id1942313", "question_text": "Voltage-gated Na+ channels open upon reaching what state?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "binding of a neurotransmitter" }, "bloom": "1", "hl_context": "The question is , now , what initiates the action potential ? The description above conveniently glosses over that point . But it is vital to understanding what is happening . The membrane potential will stay at the resting voltage until something changes . The description above just says that a Na + channel opens . Now , to say “ a channel opens ” does not mean that one individual transmembrane protein changes . Instead , it means that one kind of channel opens . There are a few different types of channels that allow Na + to cross the membrane . <hl> A ligand-gated Na + channel will open when a neurotransmitter binds to it and a mechanically gated Na + channel will open when a physical stimulus affects a sensory receptor ( like pressure applied to the skin compresses a touch receptor ) . <hl> Whether it is a neurotransmitter binding to its receptor protein or a sensory stimulus activating a sensory receptor cell , some stimulus gets the process started . 
Sodium starts to enter the cell and the membrane becomes less negative .", "hl_sentences": "A ligand-gated Na + channel will open when a neurotransmitter binds to it and a mechanically gated Na + channel will open when a physical stimulus affects a sensory receptor ( like pressure applied to the skin compresses a touch receptor ) .", "question": { "cloze_format": "A ligand-gated channel requires ___ in order to open.", "normal_format": "What does a ligand-gated channel require in order to open?", "question_choices": [ "increase in concentration of Na+ ions", "binding of a neurotransmitter", "increase in concentration of K+ ions", "depolarization of the membrane" ], "question_id": "fs-id1689714", "question_text": "What does a ligand-gated channel require in order to open?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "physical stimulus" }, "bloom": null, "hl_context": "The question is , now , what initiates the action potential ? The description above conveniently glosses over that point . But it is vital to understanding what is happening . The membrane potential will stay at the resting voltage until something changes . The description above just says that a Na + channel opens . Now , to say “ a channel opens ” does not mean that one individual transmembrane protein changes . Instead , it means that one kind of channel opens . There are a few different types of channels that allow Na + to cross the membrane . <hl> A ligand-gated Na + channel will open when a neurotransmitter binds to it and a mechanically gated Na + channel will open when a physical stimulus affects a sensory receptor ( like pressure applied to the skin compresses a touch receptor ) . <hl> Whether it is a neurotransmitter binding to its receptor protein or a sensory stimulus activating a sensory receptor cell , some stimulus gets the process started . Sodium starts to enter the cell and the membrane becomes less negative .", "hl_sentences": "A ligand-gated Na + channel will open when a neurotransmitter binds to it and a mechanically gated Na + channel will open when a physical stimulus affects a sensory receptor ( like pressure applied to the skin compresses a touch receptor ) .", "question": { "cloze_format": "A mechanically gated channel responds to the ___ .", "normal_format": "What does a mechanically gated channel respond to?", "question_choices": [ "physical stimulus", "chemical stimulus", "increase in resistance", "decrease in resistance" ], "question_id": "fs-id1125844", "question_text": "What does a mechanically gated channel respond to?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "-80 mv" }, "bloom": "1", "hl_context": "All of this takes place within approximately 2 milliseconds ( Figure 12.24 ) . While an action potential is in progress , another one cannot be initiated . That effect is referred to as the refractory period . <hl> There are two phases of the refractory period : the absolute refractory period and the relative refractory period . <hl> <hl> During the absolute phase , another action potential will not start . <hl> This is because of the inactivation gate of the voltage-gated Na + channel . <hl> Once that channel is back to its resting conformation ( less than - 55 mV ) , a new action potential could be started , but only by a stronger stimulus than the one that initiated the current action potential . <hl> This is because of the flow of K + out of the cell . 
Because that ion is rushing out , any Na + that tries to enter will not depolarize the cell , but will only keep the cell from hyperpolarizing .", "hl_sentences": "There are two phases of the refractory period : the absolute refractory period and the relative refractory period . During the absolute phase , another action potential will not start . Once that channel is back to its resting conformation ( less than - 55 mV ) , a new action potential could be started , but only by a stronger stimulus than the one that initiated the current action potential .", "question": { "cloze_format": "The voltage that would most likely be measured during the relative refractory period is ___.", "normal_format": "Which of the following voltages would most likely be measured during the relative refractory period?", "question_choices": [ "+30 mV", "0 mV", "-45 mV", "-80 mv" ], "question_id": "fs-id1661528", "question_text": "Which of the following voltages would most likely be measured during the relative refractory period?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "a thick, myelinated axon" }, "bloom": "1", "hl_context": "Propagation along an unmyelinated axon is referred to as continuous conduction ; along the length of a myelinated axon , it is saltatory conduction . Continuous conduction is slow because there are always voltage-gated Na + channels opening , and more and more Na + is rushing into the cell . Saltatory conduction is faster because the action potential basically jumps from one node to the next ( saltare = “ to leap ” ) , and the new influx of Na + renews the depolarized membrane . <hl> Along with the myelination of the axon , the diameter of the axon can influence the speed of conduction . <hl> Much as water runs faster in a wide river than in a narrow creek , Na + - based depolarization spreads faster down a wide axon than down a narrow one . This concept is known as resistance and is generally true for electrical wires or plumbing , just as it is true for axons , although the specific conditions are different at the scales of electrons or ions versus water in a river .", "hl_sentences": "Along with the myelination of the axon , the diameter of the axon can influence the speed of conduction .", "question": { "cloze_format": "An action potential is probably going to be propagated the fastest by ___ .", "normal_format": "Which of the following is probably going to propagate an action potential fastest?", "question_choices": [ "a thin, unmyelinated axon", "a thin, myelinated axon", "a thick, unmyelinated axon", "a thick, myelinated axon" ], "question_id": "fs-id1490421", "question_text": "Which of the following is probably going to propagate an action potential fastest?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "+15 mV" }, "bloom": "1", "hl_context": "A third type of channel that is an important part of depolarization in the action potential is the voltage-gated Na + channel . <hl> The channels that start depolarizing the membrane because of a stimulus help the cell to depolarize from - 70 mV to - 55 mV . <hl> <hl> Once the membrane reaches that voltage , the voltage-gated Na + channels open . <hl> This is what is known as the threshold . <hl> Any depolarization that does not change the membrane potential to - 55 mV or higher will not reach threshold and thus will not result in an action potential . 
<hl> <hl> Also , any stimulus that depolarizes the membrane to - 55 mV or beyond will cause a large number of channels to open and an action potential will be initiated . <hl>", "hl_sentences": "The channels that start depolarizing the membrane because of a stimulus help the cell to depolarize from - 70 mV to - 55 mV . Once the membrane reaches that voltage , the voltage-gated Na + channels open . Any depolarization that does not change the membrane potential to - 55 mV or higher will not reach threshold and thus will not result in an action potential . Also , any stimulus that depolarizes the membrane to - 55 mV or beyond will cause a large number of channels to open and an action potential will be initiated .", "question": { "cloze_format": "In the membrane potential a change of ___ is necessary for the summation of postsynaptic potentials to result in an action potential being generated.", "normal_format": "How much of a change in the membrane potential is necessary for the summation of postsynaptic potentials to result in an action potential being generated?", "question_choices": [ "+30 mV", "+15 mV", "+10 mV", "-15 mV" ], "question_id": "fs-id1859771", "question_text": "How much of a change in the membrane potential is necessary for the summation of postsynaptic potentials to result in an action potential being generated?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "hyperpolarizing" }, "bloom": "3", "hl_context": "A postsynaptic potential ( PSP ) is the graded potential in the dendrites of a neuron that is receiving synapses from other cells . Postsynaptic potentials can be depolarizing or hyperpolarizing . Depolarization in a postsynaptic potential is called an excitatory postsynaptic potential ( EPSP ) because it causes the membrane potential to move toward threshold . <hl> Hyperpolarization in a postsynaptic potential is an inhibitory postsynaptic potential ( IPSP ) because it causes the membrane potential to move away from threshold . <hl> Graded potentials can be of two sorts , either they are depolarizing or hyperpolarizing ( Figure 12.25 ) . For a membrane at the resting potential , a graded potential represents a change in that voltage either above - 70 mV or below - 70 mV . Depolarizing graded potentials are often the result of Na + or Ca 2 + entering the cell . Both of these ions have higher concentrations outside the cell than inside ; because they have a positive charge , they will move into the cell causing it to become less negative relative to the outside . <hl> Hyperpolarizing graded potentials can be caused by K + leaving the cell or Cl - entering the cell . <hl> <hl> If a positive charge moves out of a cell , the cell becomes more negative ; if a negative charge enters the cell , the same thing happens . <hl>", "hl_sentences": "Hyperpolarization in a postsynaptic potential is an inhibitory postsynaptic potential ( IPSP ) because it causes the membrane potential to move away from threshold . Hyperpolarizing graded potentials can be caused by K + leaving the cell or Cl - entering the cell . If a positive charge moves out of a cell , the cell becomes more negative ; if a negative charge enters the cell , the same thing happens .", "question": { "cloze_format": "A channel opens on a postsynaptic membrane that causes a negative ion to enter the cell. The type of graded potential is called ___ .", "normal_format": "A channel opens on a postsynaptic membrane that causes a negative ion to enter the cell. 
What type of graded potential is this?", "question_choices": [ "depolarizing", "repolarizing", "hyperpolarizing", "non-polarizing" ], "question_id": "fs-id890519", "question_text": "A channel opens on a postsynaptic membrane that causes a negative ion to enter the cell. What type of graded potential is this?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "acetylcholine" }, "bloom": "1", "hl_context": "A region of the cortex is specialized for sending signals down to the spinal cord for movement . The upper motor neuron is in this region , called the precentral gyrus of the frontal cortex , which has an axon that extends all the way down the spinal cord . At the level of the spinal cord at which this axon makes a synapse , a graded potential occurs in the cell membrane of a lower motor neuron . This second motor neuron is responsible for causing muscle fibers to contract . In the manner described in the chapter on muscle tissue , an action potential travels along the motor neuron axon into the periphery . <hl> The axon terminates on muscle fibers at the neuromuscular junction . <hl> <hl> Acetylcholine is released at this specialized synapse , which causes the muscle action potential to begin , following a large potential known as an end plate potential . <hl> When the lower motor neuron excites the muscle fiber , it contracts . All of this occurs in a fraction of a second , but this story is the basis of how the nervous system functions . Career Connection Neurophysiologist", "hl_sentences": "The axon terminates on muscle fibers at the neuromuscular junction . Acetylcholine is released at this specialized synapse , which causes the muscle action potential to begin , following a large potential known as an end plate potential .", "question": { "cloze_format": "The neurotransmitter that is released at the neuromuscular junction is ___.", "normal_format": "What neurotransmitter is released at the neuromuscular junction?", "question_choices": [ "norepinephrine", "serotonin", "dopamine", "acetylcholine" ], "question_id": "fs-id1828568", "question_text": "What neurotransmitter is released at the neuromuscular junction?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "metabotropic receptor" }, "bloom": null, "hl_context": "The important thing to remember about neurotransmitters , and signaling chemicals in general , is that the effect is entirely dependent on the receptor . Neurotransmitters bind to one of two classes of receptors at the cell surface , ionotropic or metabotropic ( Figure 12.28 ) . Ionotropic receptors are ligand-gated ion channels , such as the nicotinic receptor for acetylcholine or the glycine receptor . <hl> A metabotropic receptor involves a complex of proteins that result in metabolic changes within the cell . <hl> The receptor complex includes the transmembrane receptor protein , a G protein , and an effector protein . The neurotransmitter , referred to as the first messenger , binds to the receptor protein on the extracellular surface of the cell , and the intracellular side of the protein initiates activity of the G protein . The G protein is a guanosine triphosphate ( GTP ) hydrolase that physically moves from the receptor protein to the effector protein to activate the latter . <hl> An effector protein is an enzyme that catalyzes the generation of a new molecule , which acts as the intracellular mediator of the signal that binds to the receptor . 
<hl> This intracellular mediator is called the second messenger . Different receptors use different second messengers . Two common examples of second messengers are cyclic adenosine monophosphate ( cAMP ) and inositol triphosphate ( IP 3 ) . The enzyme adenylate cyclase ( an example of an effector protein ) makes cAMP , and phospholipase C is the enzyme that makes IP 3 . Second messengers , after they are produced by the effector protein , cause metabolic changes within the cell . These changes are most likely the activation of other enzymes in the cell . In neurons , they often modify ion channels , either opening or closing them . These enzymes can also cause changes in the cell , such as the activation of genes in the nucleus , and therefore the increased synthesis of proteins . In neurons , these kinds of changes are often the basis of stronger connections between cells at the synapse and may be the basis of learning and memory .", "hl_sentences": "A metabotropic receptor involves a complex of proteins that result in metabolic changes within the cell . An effector protein is an enzyme that catalyzes the generation of a new molecule , which acts as the intracellular mediator of the signal that binds to the receptor .", "question": { "cloze_format": "The type of receptor that requires an effector protein to initiate a signal is ___.", "normal_format": "What type of receptor requires an effector protein to initiate a signal?", "question_choices": [ "biogenic amine", "ionotropic receptor", "cholinergic system", "metabotropic receptor" ], "question_id": "fs-id1532169", "question_text": "What type of receptor requires an effector protein to initiate a signal?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "GABA" }, "bloom": null, "hl_context": "<hl> The amino acid neurotransmitters , glutamate , glycine , and GABA , are almost exclusively associated with just one effect . <hl> Glutamate is considered an excitatory amino acid , but only because Glu receptors in the adult cause depolarization of the postsynaptic cell . <hl> Glycine and GABA are considered inhibitory amino acids , again because their receptors cause hyperpolarization . <hl>", "hl_sentences": "The amino acid neurotransmitters , glutamate , glycine , and GABA , are almost exclusively associated with just one effect . Glycine and GABA are considered inhibitory amino acids , again because their receptors cause hyperpolarization .", "question": { "cloze_format": "___ is a neurotransmitter that is associated with inhibition exclusively.", "normal_format": "Which of the following neurotransmitters is associated with inhibition exclusively?", "question_choices": [ "GABA", "acetylcholine", "glutamate", "norepinephrine" ], "question_id": "fs-id2317736", "question_text": "Which of the following neurotransmitters is associated with inhibition exclusively?" }, "references_are_paraphrase": 0 } ]
12
12.1 Basic Structure and Function of the Nervous System

Learning Objectives

By the end of this section, you will be able to:
Identify the anatomical and functional divisions of the nervous system
Relate the functional and structural differences between gray matter and white matter structures of the nervous system to the structure of neurons
List the basic functions of the nervous system

The picture you have in your mind of the nervous system probably includes the brain, the nervous tissue contained within the cranium, and the spinal cord, the extension of nervous tissue within the vertebral column. That suggests it is made of two organs—and you may not even think of the spinal cord as an organ—but the nervous system is a very complex structure. Within the brain, many different and separate regions are responsible for many different and separate functions. It is as if the nervous system is composed of many organs that all look similar and can only be differentiated using tools such as the microscope or electrophysiology. In comparison, it is easy to see that the stomach is different from the esophagus or the liver, so you can imagine the digestive system as a collection of specific organs.

The Central and Peripheral Nervous Systems

The nervous system can be divided into two major regions: the central and peripheral nervous systems. The central nervous system (CNS) is the brain and spinal cord, and the peripheral nervous system (PNS) is everything else (Figure 12.2). The brain is contained within the cranial cavity of the skull, and the spinal cord is contained within the vertebral cavity of the vertebral column. It is a bit of an oversimplification to say that the CNS is what is inside these two cavities and the peripheral nervous system is outside of them, but that is one way to start to think about it. In actuality, there are some elements of the peripheral nervous system that are within the cranial or vertebral cavities. The peripheral nervous system is so named because it is on the periphery—meaning beyond the brain and spinal cord. Depending on which aspect of the nervous system is being considered, the dividing line between central and peripheral is not always distinct.

Nervous tissue, present in both the CNS and PNS, contains two basic types of cells: neurons and glial cells. A glial cell is one of a variety of cells that provide a framework of tissue that supports the neurons and their activities. The neuron is the more functionally important of the two, in terms of the communicative function of the nervous system. To describe the functional divisions of the nervous system, it is important to understand the structure of a neuron. Neurons are cells and therefore have a soma, or cell body, but they also have extensions of the cell; each extension is generally referred to as a process. There is one important process that every neuron has, called an axon, which is the fiber that connects a neuron with its target. Another type of process that branches off from the soma is the dendrite. Dendrites are responsible for receiving most of the input from other neurons. Looking at nervous tissue, there are regions that predominantly contain cell bodies and regions that are largely composed of just axons. These two regions within nervous system structures are often referred to as gray matter (the regions with many cell bodies and dendrites) or white matter (the regions with many axons). Figure 12.3 demonstrates the appearance of these regions in the brain and spinal cord.
The colors ascribed to these regions are what would be seen in “fresh,” or unstained, nervous tissue. Gray matter is not necessarily gray. It can be pinkish because of blood content, or even slightly tan, depending on how long the tissue has been preserved. But white matter is white because axons are insulated by a lipid-rich substance called myelin. Lipids can appear as white (“fatty”) material, much like the fat on a raw piece of chicken or beef. Actually, gray matter may have that color ascribed to it because next to the white matter, it is just darker—hence, gray.

The distinction between gray matter and white matter is most often applied to central nervous tissue, which has large regions that can be seen with the unaided eye. When looking at peripheral structures, often a microscope is used and the tissue is stained with artificial colors. That is not to say that central nervous tissue cannot be stained and viewed under a microscope, but unstained tissue is most likely from the CNS—for example, a frontal section of the brain or cross section of the spinal cord.

Regardless of the appearance of stained or unstained tissue, the cell bodies of neurons or axons can be located in discrete anatomical structures that need to be named. Those names are specific to whether the structure is central or peripheral. A localized collection of neuron cell bodies in the CNS is referred to as a nucleus. In the PNS, a cluster of neuron cell bodies is referred to as a ganglion. Figure 12.4 indicates how the term nucleus has a few different meanings within anatomy and physiology. It is the center of an atom, where protons and neutrons are found; it is the center of a cell, where the DNA is found; and it is a center of some function in the CNS. There is also a potentially confusing use of the word ganglion (plural = ganglia) that has a historical explanation. In the central nervous system, there is a group of nuclei that are connected together and were once called the basal ganglia before “ganglion” became accepted as a description for a peripheral structure. Some sources refer to this group of nuclei as the “basal nuclei” to avoid confusion.

Terminology applied to bundles of axons also differs depending on location. A bundle of axons, or fibers, found in the CNS is called a tract, whereas the same thing in the PNS would be called a nerve. There is an important point to make about these terms, which is that they can both be used to refer to the same bundle of axons. When those axons are in the PNS, the term is nerve, but if they are in the CNS, the term is tract. The most obvious example of this is the axons that project from the retina into the brain. Those axons are called the optic nerve as they leave the eye, but when they are inside the cranium, they are referred to as the optic tract. There is a specific place where the name changes, which is the optic chiasm, but they are still the same axons (Figure 12.5). A similar situation outside of science can be described for some roads. Imagine a road called “Broad Street” in a town called “Anyville.” The road leaves Anyville and goes to the next town over, called “Hometown.” When the road crosses the line between the two towns and is in Hometown, its name changes to “Main Street.” That is the idea behind the naming of the retinal axons. In the PNS, they are called the optic nerve, and in the CNS, they are the optic tract. Table 12.1 helps to clarify which of these terms apply to the central or peripheral nervous systems.
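Because the same kind of structure takes a different name depending on whether it sits in the CNS or the PNS, the convention amounts to a simple two-key lookup. The sketch below is purely a study aid (the terms come from the text; the code and its names are hypothetical):

```python
# Anatomical naming convention for the same structure in the CNS versus PNS,
# expressed as a lookup table (a study aid, not part of the source text).

ANATOMY_TERMS = {
    ("group of neuron cell bodies", "CNS"): "nucleus",
    ("group of neuron cell bodies", "PNS"): "ganglion",
    ("bundle of axons", "CNS"): "tract",
    ("bundle of axons", "PNS"): "nerve",
}

# The retinal axons illustrate the rule: the same bundle is the optic nerve
# in the PNS and becomes the optic tract once it is inside the CNS.
print(ANATOMY_TERMS[("bundle of axons", "PNS")])  # nerve  (optic nerve)
print(ANATOMY_TERMS[("bundle of axons", "CNS")])  # tract  (optic tract)
```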
Interactive Link

In 2003, the Nobel Prize in Physiology or Medicine was awarded to Paul C. Lauterbur and Sir Peter Mansfield for discoveries related to magnetic resonance imaging (MRI). This is a tool to see the structures of the body (not just the nervous system) that depends on magnetic fields associated with certain atomic nuclei. The utility of this technique in the nervous system is that fat tissue and water appear as different shades between black and white. Because white matter is fatty (from myelin) and gray matter is not, they can be easily distinguished in MRI images. Try this PhET simulation that demonstrates the use of this technology and compares it with other types of imaging technologies. Also, the results from an MRI session are compared with images obtained from X-ray or computed tomography. How do the imaging techniques shown in this game indicate the separation of white and gray matter compared with the freshly dissected tissue shown earlier?

Table 12.1 Structures of the CNS and PNS
Group of neuron cell bodies (i.e., gray matter): nucleus (CNS), ganglion (PNS)
Bundle of axons (i.e., white matter): tract (CNS), nerve (PNS)

Functional Divisions of the Nervous System

The nervous system can also be divided on the basis of its functions, but anatomical divisions and functional divisions are different. The CNS and the PNS both contribute to the same functions, but those functions can be attributed to different regions of the brain (such as the cerebral cortex or the hypothalamus) or to different ganglia in the periphery. The problem with trying to fit functional differences into anatomical divisions is that sometimes the same structure can be part of several functions. For example, the optic nerve carries signals from the retina that are either used for the conscious perception of visual stimuli, which takes place in the cerebral cortex, or for the reflexive responses of smooth muscle tissue that are processed through the hypothalamus.

There are two ways to consider how the nervous system is divided functionally. First, the basic functions of the nervous system are sensation, integration, and response. Second, control of the body can be somatic or autonomic—divisions that are largely defined by the structures that are involved in the response. There is also a region of the peripheral nervous system called the enteric nervous system, which is responsible for a specific set of functions within the realm of autonomic control related to gastrointestinal function.

Basic Functions

The nervous system is involved in receiving information about the environment around us (sensation) and generating responses to that information (motor responses). The nervous system can be divided into regions that are responsible for sensation (sensory functions) and for the response (motor functions). But there is a third function that needs to be included. Sensory input needs to be integrated with other sensations, as well as with memories, emotional state, or learning (cognition). Some regions of the nervous system are termed integration or association areas. The process of integration combines sensory perceptions and higher cognitive functions such as memories, learning, and emotion to produce a response.

Sensation. The first major function of the nervous system is sensation—receiving information about the environment to gain input about what is happening outside the body (or, sometimes, within the body).
The sensory functions of the nervous system register the presence of a change from homeostasis or a particular event in the environment, known as a stimulus. The senses we think of most are the “big five”: taste, smell, touch, sight, and hearing. The stimuli for taste and smell are both chemical substances (molecules, compounds, ions, etc.), touch is physical or mechanical stimuli that interact with the skin, sight is light stimuli, and hearing is the perception of sound, which is a physical stimulus similar to some aspects of touch. There are actually more senses than just those, but that list represents the major senses. Those five are all senses that receive stimuli from the outside world, and of which there is conscious perception. Additional sensory stimuli might be from the internal environment (inside the body), such as the stretch of an organ wall or the concentration of certain ions in the blood.

Response. The nervous system produces a response on the basis of the stimuli perceived by sensory structures. An obvious response would be the movement of muscles, such as withdrawing a hand from a hot stove, but there are broader uses of the term. The nervous system can cause the contraction of all three types of muscle tissue. For example, skeletal muscle contracts to move the skeleton, cardiac muscle is influenced as heart rate increases during exercise, and smooth muscle contracts as the digestive system moves food along the digestive tract. Responses also include the neural control of glands in the body, such as the production and secretion of sweat by the eccrine and merocrine sweat glands found in the skin to lower body temperature. Responses can be divided into those that are voluntary or conscious (contraction of skeletal muscle) and those that are involuntary (contraction of smooth muscles, regulation of cardiac muscle, activation of glands). Voluntary responses are governed by the somatic nervous system and involuntary responses are governed by the autonomic nervous system, which are discussed in the next section.

Integration. Stimuli that are received by sensory structures are communicated to the nervous system where that information is processed. This is called integration. Stimuli are compared with, or integrated with, other stimuli, memories of previous stimuli, or the state of a person at a particular time. This leads to the specific response that will be generated. Seeing a baseball pitched to a batter will not automatically cause the batter to swing. The trajectory of the ball and its speed will need to be considered. Maybe the count is three balls and one strike, and the batter wants to let this pitch go by in the hope of getting a walk to first base. Or maybe the batter’s team is so far ahead, it would be fun to just swing away.

Controlling the Body

The nervous system can be divided into two parts mostly on the basis of a functional difference in responses. The somatic nervous system (SNS) is responsible for conscious perception and voluntary motor responses. Voluntary motor response means the contraction of skeletal muscle, but those contractions are not always voluntary in the sense that you have to want to perform them. Some somatic motor responses are reflexes, and often happen without a conscious decision to perform them. If your friend jumps out from behind a corner and yells “Boo!” you will be startled and you might scream or leap back.
You didn’t decide to do that, and you may not have wanted to give your friend a reason to laugh at your expense, but it is a reflex involving skeletal muscle contractions. Other motor responses become automatic (in other words, unconscious) as a person learns motor skills (referred to as “habit learning” or “procedural memory”).

The autonomic nervous system (ANS) is responsible for involuntary control of the body, usually for the sake of homeostasis (regulation of the internal environment). Sensory input for autonomic functions can be from sensory structures tuned to external or internal environmental stimuli. The motor output extends to smooth and cardiac muscle as well as glandular tissue. The role of the autonomic system is to regulate the organ systems of the body, which usually means to control homeostasis. Sweat glands, for example, are controlled by the autonomic system. When you are hot, sweating helps cool your body down. That is a homeostatic mechanism. But when you are nervous, you might also start sweating. That is not homeostatic; it is the physiological response to an emotional state.

There is another division of the nervous system that describes functional responses. The enteric nervous system (ENS) is responsible for controlling the smooth muscle and glandular tissue in your digestive system. It is a large part of the PNS, and is not dependent on the CNS. It is sometimes valid, however, to consider the enteric system to be a part of the autonomic system because the neural structures that make up the enteric system are a component of the autonomic output that regulates digestion. There are some differences between the two, but for our purposes here there will be a good bit of overlap. See Figure 12.6 for examples of where these divisions of the nervous system can be found.

Interactive Link

Visit this site to read about a woman who notices that her daughter is having trouble walking up the stairs. This leads to the discovery of a hereditary condition that affects the brain and spinal cord. The electromyography and MRI tests indicate deficiencies in the spinal cord and cerebellum, both of which are responsible for controlling coordinated movements. To what functional division of the nervous system would these structures belong?

Everyday Connection

How Much of Your Brain Do You Use?

Have you ever heard the claim that humans use only 10 percent of their brains? Maybe you have seen an advertisement on a website saying that there is a secret to unlocking the full potential of your mind—as if there were 90 percent of your brain sitting idle, just waiting for you to use it. If you see an ad like that, don’t click. It isn’t true.

An easy way to see how much of the brain a person uses is to take measurements of brain activity while the person performs a task. An example of this kind of measurement is functional magnetic resonance imaging (fMRI), which generates a map of the most active areas and can be generated and presented in three dimensions (Figure 12.7). This procedure is different from the standard MRI technique because it is measuring changes in the tissue in time with an experimental condition or event. The underlying assumption is that active nervous tissue will have greater blood flow.

By having the subject perform a visual task, activity all over the brain can be measured. Consider this possible experiment: the subject is told to look at a screen with a black dot in the middle (a fixation point). A photograph of a face is projected on the screen away from the center.
The subject has to look at the photograph and decipher what it is. The subject has been instructed to push a button if the photograph is of someone they recognize. The photograph might be of a celebrity, so the subject would press the button, or it might be of a random person unknown to the subject, so the subject would not press the button. In this task, visual sensory areas would be active, integrating areas would be active, motor areas responsible for moving the eyes would be active, and motor areas for pressing the button with a finger would be active. Those areas are distributed all around the brain and the fMRI images would show activity in more than just 10 percent of the brain (some evidence suggests that about 80 percent of the brain is using energy—based on blood flow to the tissue—during well-defined tasks similar to the one suggested above). This task does not even include all of the functions the brain performs. There is no language response, the body is mostly lying still in the MRI machine, and the task does not consider the autonomic functions that would be ongoing in the background.

12.2 Nervous Tissue

Learning Objectives

By the end of this section, you will be able to:
Describe the basic structure of a neuron
Identify the different types of neurons on the basis of polarity
List the glial cells of the CNS and describe their function
List the glial cells of the PNS and describe their function

Nervous tissue is composed of two types of cells: neurons and glial cells. Neurons are the primary type of cell that most people associate with the nervous system. They are responsible for the computation and communication that the nervous system provides. They are electrically active and release chemical signals to target cells. Glial cells, or glia, are known to play a supporting role for nervous tissue. Ongoing research pursues an expanded role that glial cells might play in signaling, but neurons are still considered the basis of this function. Neurons are important, but without glial support they would not be able to perform their function.

Neurons

Neurons are the cells considered to be the basis of nervous tissue. They are responsible for the electrical signals that communicate information about sensations, and that produce movements in response to those stimuli, along with inducing thought processes within the brain. An important part of the function of neurons is in their structure, or shape. The three-dimensional shape of these cells makes the immense numbers of connections within the nervous system possible.

Parts of a Neuron

As you learned in the first section, the main part of a neuron is the cell body, which is also known as the soma (soma = “body”). The cell body contains the nucleus and most of the major organelles. But what makes neurons special is that they have many extensions of their cell membranes, which are generally referred to as processes. Neurons are usually described as having one, and only one, axon—a fiber that emerges from the cell body and projects to target cells. That single axon can branch repeatedly to communicate with many target cells. It is the axon that propagates the nerve impulse, which is communicated to one or more cells. The other processes of the neuron are dendrites, which receive information from other neurons at specialized areas of contact called synapses. The dendrites are usually highly branched processes, providing locations for other neurons to communicate with the cell body.
Information flows through a neuron from the dendrites, across the cell body, and down the axon. This gives the neuron a polarity—meaning that information flows in this one direction. Figure 12.8 shows the relationship of these parts to one another.

Where the axon emerges from the cell body, there is a special region referred to as the axon hillock. This is a tapering of the cell body toward the axon fiber. Within the axon hillock, the cytoplasm changes to a solution of limited components called axoplasm. Because the axon hillock represents the beginning of the axon, it is also referred to as the initial segment.

Many axons are wrapped by an insulating substance called myelin, which is actually made from glial cells. Myelin acts as insulation much like the plastic or rubber that is used to insulate electrical wires. A key difference between myelin and the insulation on a wire is that there are gaps in the myelin covering of an axon. Each gap is called a node of Ranvier and is important to the way that electrical signals travel down the axon. The length of the axon between each gap, which is wrapped in myelin, is referred to as an axon segment. At the end of the axon is the axon terminal, where there are usually several branches extending toward the target cell, each of which ends in an enlargement called a synaptic end bulb. These bulbs are what make the connection with the target cell at the synapse.

Types of Neurons

There are many neurons in the nervous system—a number estimated in the tens of billions, with trillions of connections between them. And there are many different types of neurons. They can be classified by many different criteria. The first way to classify them is by the number of processes attached to the cell body. Using the standard model of neurons, one of these processes is the axon, and the rest are dendrites. Because information flows through the neuron from dendrites or cell bodies toward the axon, these names are based on the neuron's polarity (Figure 12.9).

Unipolar cells have only one process emerging from the cell. True unipolar cells are only found in invertebrate animals, so the unipolar cells in humans are more appropriately called “pseudo-unipolar” cells. Invertebrate unipolar cells do not have dendrites. Human unipolar cells have an axon that emerges from the cell body, but it splits so that the axon can extend along a very long distance. At one end of the axon are dendrites, and at the other end, the axon forms synaptic connections with a target. Unipolar cells are exclusively sensory neurons and have two unique characteristics. First, their dendrites receive sensory information, sometimes directly from the stimulus itself. Second, the cell bodies of unipolar neurons are always found in ganglia. Sensory reception is a peripheral function (those dendrites are in the periphery, perhaps in the skin) so the cell body is in the periphery, though closer to the CNS in a ganglion. The axon projects from the dendrite endings, past the cell body in a ganglion, and into the central nervous system.

Bipolar cells have two processes, which extend from each end of the cell body, opposite to each other. One is the axon and one is the dendrite. Bipolar cells are not very common. They are found mainly in the olfactory epithelium (where smell stimuli are sensed), and as part of the retina.

Multipolar neurons are all of the neurons that are not unipolar or bipolar. They have one axon and two or more dendrites (usually many more).
With the exception of the unipolar sensory ganglion cells, and the two specific bipolar cells mentioned above, all other neurons are multipolar. Some cutting-edge research suggests that certain neurons in the CNS do not conform to the standard model of “one, and only one” axon. Some sources describe a fourth type of neuron, called an anaxonic neuron. The name suggests that it has no axon (an- = “without”), but this is not accurate. Anaxonic neurons are very small, and if you look through a microscope at the standard resolution used in histology (approximately 400X to 1000X total magnification), you will not be able to distinguish any process specifically as an axon or a dendrite. Any of those processes can function as an axon depending on the conditions at any given time. Nevertheless, even though the processes cannot be easily distinguished, and no one process can definitively be identified as the axon, these neurons have multiple processes and are therefore multipolar.

Neurons can also be classified on the basis of where they are found, who found them, what they do, or even what chemicals they use to communicate with each other. Some neurons referred to in this section on the nervous system are named on the basis of those sorts of classifications (Figure 12.10). For example, a multipolar neuron that has a very important role to play in a part of the brain called the cerebellum is known as a Purkinje (commonly pronounced per-KIN-gee) cell. It is named after the anatomist who discovered it (Jan Evangelista Purkinje, 1787–1869).

Glial Cells

Glial cells, or neuroglia or simply glia, are the other type of cell found in nervous tissue. They are considered to be supporting cells, and many of their functions are directed at helping neurons complete their communicative function. The name glia comes from the Greek word that means “glue,” and was coined by the German pathologist Rudolph Virchow, who wrote in 1856: “This connective substance, which is in the brain, the spinal cord, and the special sense nerves, is a kind of glue (neuroglia) in which the nervous elements are planted.” Today, research into nervous tissue has shown that there are many deeper roles that these cells play, and research may find much more about them in the future.

There are six types of glial cells. Four of them are found in the CNS and two are found in the PNS. Table 12.2 outlines some common characteristics and functions.

Table 12.2 Glial Cell Types by Location and Basic Function
Support: astrocyte (CNS), satellite cell (PNS)
Insulation, myelination: oligodendrocyte (CNS), Schwann cell (PNS)
Immune surveillance and phagocytosis: microglia (CNS only)
Creating CSF: ependymal cell (CNS only)

Glial Cells of the CNS

One cell providing support to neurons of the CNS is the astrocyte, so named because it appears to be star-shaped under the microscope (astro- = “star”). Astrocytes have many processes extending from their main cell body (not axons or dendrites like neurons, just cell extensions). Those processes extend to interact with neurons, blood vessels, or the connective tissue covering the CNS that is called the pia mater (Figure 12.11). Generally, they are supporting cells for the neurons in the central nervous system. Some ways in which they support neurons in the central nervous system are by maintaining the concentration of chemicals in the extracellular space, removing excess signaling molecules, reacting to tissue damage, and contributing to the blood-brain barrier (BBB).
The blood-brain barrier is a physiological barrier that keeps many substances that circulate in the rest of the body from getting into the central nervous system, restricting what can cross from circulating blood into the CNS. Nutrient molecules, such as glucose or amino acids, can pass through the BBB, but other molecules cannot. This actually causes problems with drug delivery to the CNS. Pharmaceutical companies are challenged to design drugs that can cross the BBB as well as have an effect on the nervous system.

Like a few other parts of the body, the brain has a privileged blood supply. Very little can pass through by diffusion. Most substances that cross the wall of a blood vessel into the CNS must do so through an active transport process. Because of this, only specific types of molecules can enter the CNS. Glucose—the primary energy source—is allowed, as are amino acids. Water and some other small particles, like gases and ions, can enter. But almost everything else cannot, including white blood cells, which are one of the body’s main lines of defense. While this barrier protects the CNS from exposure to toxic or pathogenic substances, it also keeps out the cells that could protect the brain and spinal cord from disease and damage. The BBB also makes it harder to develop pharmaceuticals that can affect the nervous system. Aside from finding efficacious substances, the means of delivery is also crucial.

Also found in CNS tissue is the oligodendrocyte, sometimes called just “oligo,” which is the glial cell type that insulates axons in the CNS. The name means “cell of a few branches” (oligo- = “few”; dendro- = “branches”; -cyte = “cell”). There are a few processes that extend from the cell body. Each one reaches out and surrounds an axon to insulate it in myelin. One oligodendrocyte will provide the myelin for multiple axon segments, either for the same axon or for separate axons. The function of myelin will be discussed below.

Microglia are, as the name implies, smaller than most of the other glial cells. Ongoing research into these cells, although not entirely conclusive, suggests that they may originate as white blood cells, called macrophages, that become part of the CNS during early development. While their origin is not conclusively determined, their function is related to what macrophages do in the rest of the body. When macrophages encounter diseased or damaged cells in the rest of the body, they ingest and digest those cells or the pathogens that cause disease. Microglia are the cells in the CNS that can do this in normal, healthy tissue, and they are therefore also referred to as CNS-resident macrophages.

The ependymal cell is a glial cell that filters blood to make cerebrospinal fluid (CSF), the fluid that circulates through the CNS. Because of the privileged blood supply inherent in the BBB, the extracellular space in nervous tissue does not easily exchange components with the blood. Ependymal cells line each ventricle, one of four central cavities that are remnants of the hollow center of the neural tube formed during the embryonic development of the brain. The choroid plexus is a specialized structure in the ventricles where ependymal cells come in contact with blood vessels and filter and absorb components of the blood to produce cerebrospinal fluid. Because of this, ependymal cells can be considered a component of the BBB, or a place where the BBB breaks down.
These glial cells appear similar to epithelial cells, making a single layer of cells with little intracellular space and tight connections between adjacent cells. They also have cilia on their apical surface to help move the CSF through the ventricular space. The relationship of these glial cells to the structure of the CNS is seen in Figure 12.11. Glial Cells of the PNS One of the two types of glial cells found in the PNS is the satellite cell. Satellite cells are found in sensory and autonomic ganglia, where they surround the cell bodies of neurons. This accounts for the name, based on their appearance under the microscope. They provide support, performing similar functions in the periphery as astrocytes do in the CNS—except, of course, for establishing the BBB. The second type of glial cell is the Schwann cell, which insulates axons with myelin in the periphery. Schwann cells are different from oligodendrocytes in that a Schwann cell wraps around a portion of only one axon segment and no others. Oligodendrocytes have processes that reach out to multiple axon segments, whereas the entire Schwann cell surrounds just one axon segment. The nucleus and cytoplasm of the Schwann cell are on the edge of the myelin sheath. The relationship of these two types of glial cells to ganglia and nerves in the PNS is seen in Figure 12.12. Myelin The insulation for axons in the nervous system is provided by glial cells: oligodendrocytes in the CNS, and Schwann cells in the PNS. Although the manner in which either cell is associated with the axon segment, or segments, that it insulates differs, the means of myelinating an axon segment is mostly the same in the two situations. Myelin is a lipid-rich insulating layer that surrounds the axon, creating a myelin sheath that facilitates the transmission of electrical signals along the axon. The lipids are essentially the phospholipids of the glial cell membrane. Myelin, however, is more than just the membrane of the glial cell. It also includes important proteins that are integral to that membrane. Some of the proteins help to hold the layers of the glial cell membrane closely together. The appearance of the myelin sheath can be thought of as similar to the pastry wrapped around a hot dog for “pigs in a blanket” or a similar food. The glial cell is wrapped around the axon several times with little to no cytoplasm between the glial cell layers. For oligodendrocytes, the rest of the cell is separate from the myelin sheath as a cell process extends back toward the cell body. A few other processes provide the same insulation for other axon segments in the area. For Schwann cells, the outermost layer of the cell membrane contains cytoplasm and the nucleus of the cell as a bulge on one side of the myelin sheath. During development, the glial cell is loosely or incompletely wrapped around the axon (Figure 12.13a). The edges of this loose enclosure extend toward each other, and one end tucks under the other. The inner edge wraps around the axon, creating several layers, and the other edge closes around the outside so that the axon is completely enclosed. Myelin sheaths can extend for one or two millimeters, depending on the diameter of the axon. Axon diameters range from about 1 to 20 micrometers. Because a micrometer is 1/1000 of a millimeter, the length of a myelin sheath can be 100–1000 times the diameter of the axon.
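To make that ratio concrete, here is the arithmetic as a worked example; the specific pairings of sheath length and axon diameter are representative values chosen from the ranges just given, corresponding to the two endpoints of the stated ratio:

```latex
% One endpoint: a 1 mm sheath on a 10-micrometer axon
\frac{1\ \text{mm}}{10\ \mu\text{m}} = \frac{1000\ \mu\text{m}}{10\ \mu\text{m}} = 100

% The other endpoint: a 2 mm sheath on a 2-micrometer axon
\frac{2\ \text{mm}}{2\ \mu\text{m}} = \frac{2000\ \mu\text{m}}{2\ \mu\text{m}} = 1000
```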
Figure 12.8, Figure 12.11, and Figure 12.12 show the myelin sheath surrounding an axon segment, but are not to scale. If the myelin sheath were drawn to scale, the neuron would have to be immense—possibly covering an entire wall of the room in which you are sitting. Disorders of the... Nervous Tissue Several diseases can result from the demyelination of axons. The causes of these diseases are not the same; some have genetic causes, some are caused by pathogens, and others are the result of autoimmune disorders. Though the causes are varied, the results are largely similar: the myelin insulation of axons is compromised, making electrical signaling slower. Multiple sclerosis (MS) is one such disease. It is an example of an autoimmune disease. The antibodies produced by lymphocytes (a type of white blood cell) mark myelin as something that should not be in the body. This causes inflammation and the destruction of the myelin in the central nervous system. As the insulation around the axons is destroyed by the disease, scarring becomes obvious. This is where the name of the disease comes from; sclerosis means hardening of tissue, which is what a scar is. Multiple scars are found in the white matter of the brain and spinal cord. The symptoms of MS include both somatic and autonomic deficits. Control of the musculature is compromised, as is control of organs such as the bladder. Guillain-Barré (pronounced gee-YAN bah-RAY) syndrome is an example of a demyelinating disease of the peripheral nervous system. It is also the result of an autoimmune reaction, but the inflammation is in peripheral nerves. Sensory symptoms or motor deficits are common, and autonomic failures can lead to changes in the heart rhythm or a drop in blood pressure, especially when standing, which causes dizziness. 12.3 The Function of Nervous Tissue Learning Objectives By the end of this section, you will be able to:
- Distinguish the major functions of the nervous system: sensation, integration, and response
- List the sequence of events in a simple sensory receptor–motor response pathway
Having looked at the components of nervous tissue, and the basic anatomy of the nervous system, the next step is to understand how nervous tissue is capable of communicating within the nervous system. Before getting to the nuts and bolts of how this works, an illustration of how the components come together will be helpful. An example is summarized in Figure 12.14. Imagine you are about to take a shower in the morning before going to school. You have turned on the faucet to start the water as you prepare to get in the shower. After a few minutes, you expect the water to be a temperature that will be comfortable to enter. So you put your hand out into the spray of water. What happens next depends on how your nervous system interacts with the stimulus of the water temperature and what you do in response to that stimulus. Found in the skin of your fingers or toes is a type of sensory receptor that is sensitive to temperature, called a thermoreceptor. When you place your hand under the shower (Figure 12.15), the cell membrane of the thermoreceptors changes its electrical state (voltage). The amount of change is dependent on the strength of the stimulus (how hot the water is). This is called a graded potential. If the stimulus is strong, the voltage of the cell membrane will change enough to generate an electrical signal that will travel down the axon.
You have learned about this type of signaling before, with respect to the interaction of nerves and muscles at the neuromuscular junction. The voltage at which such a signal is generated is called the threshold, and the resulting electrical signal is called an action potential. In this example, the action potential travels—a process known as propagation—along the axon from the axon hillock to the axon terminals and into the synaptic end bulbs. When this signal reaches the end bulbs, it causes the release of a signaling molecule called a neurotransmitter. The neurotransmitter diffuses across the short distance of the synapse and binds to a receptor protein of the target neuron. When the molecular signal binds to the receptor, the cell membrane of the target neuron changes its electrical state and a new graded potential begins. If that graded potential is strong enough to reach threshold, the second neuron generates an action potential at its axon hillock. The target of this neuron is another neuron in the thalamus of the brain, the part of the CNS that acts as a relay for sensory information. At another synapse, neurotransmitter is released and binds to its receptor. The thalamus then sends the sensory information to the cerebral cortex, the outermost layer of gray matter in the brain, where conscious perception of that water temperature begins. Within the cerebral cortex, information is processed among many neurons, integrating the stimulus of the water temperature with other sensory stimuli, with your emotional state (you just aren't ready to wake up; the bed is calling to you), and with memories (perhaps of the lab notes you have to study before a quiz). Finally, a plan is developed about what to do, whether that is to turn the temperature up, turn the whole shower off and go back to bed, or step into the shower. To do any of these things, the cerebral cortex has to send a command out to your body to move muscles (Figure 12.16). A region of the cortex is specialized for sending signals down to the spinal cord for movement. The upper motor neuron is in this region, called the precentral gyrus of the frontal cortex, and its axon extends all the way down the spinal cord. At the level of the spinal cord at which this axon makes a synapse, a graded potential occurs in the cell membrane of a lower motor neuron. This second motor neuron is responsible for causing muscle fibers to contract. In the manner described in the chapter on muscle tissue, an action potential travels along the motor neuron axon into the periphery. The axon terminates on muscle fibers at the neuromuscular junction. Acetylcholine is released at this specialized synapse, which causes the muscle action potential to begin, following a large potential known as an end plate potential. When the lower motor neuron excites the muscle fiber, the fiber contracts. All of this occurs in a fraction of a second, but this story is the basis of how the nervous system functions. Career Connection Neurophysiologist Understanding how the nervous system works could be a driving force in your career. Studying neurophysiology is a very rewarding path to follow. It means that there is a lot of work to do, but the rewards are worth the effort. The career path of a research scientist can be straightforward: college, graduate school, postdoctoral research, academic research position at a university.
A Bachelor’s degree in science will get you started, and for neurophysiology that might be in biology, psychology, computer science, engineering, or neuroscience. But the real specialization comes in graduate school. There are many different programs out there to study the nervous system, not just neuroscience itself. Most graduate programs are doctoral, meaning that a Master’s degree is not part of the work. These are usually considered five-year programs, with the first two years dedicated to course work and finding a research mentor, and the last three years dedicated to finding a research topic and pursuing that with a near single-mindedness. The research will usually result in a few publications in scientific journals, which will make up the bulk of a doctoral dissertation. After graduating with a Ph.D., researchers will go on to find specialized work called a postdoctoral fellowship within established labs. In this position, a researcher starts to establish their own research career with the hopes of finding an academic position at a research university. Other options are available if you are interested in how the nervous system works. Especially for neurophysiology, a medical degree might be more suitable so you can learn about the clinical applications of neurophysiology and possibly work with human subjects. An academic career is not a necessity. Biotechnology firms are eager to find motivated scientists ready to tackle the tough questions about how the nervous system works so that therapeutic chemicals can be tested on some of the most challenging disorders, such as Alzheimer’s disease, Parkinson’s disease, or spinal cord injury. Others with a medical degree and a specialization in neuroscience go on to work directly with patients, diagnosing and treating mental disorders. You can do this as a psychiatrist, a neuropsychologist, a neuroscience nurse, or a neurodiagnostic technician, among other possible career paths. 12.4 The Action Potential Learning Objectives By the end of this section, you will be able to:
- Describe the components of the membrane that establish the resting membrane potential
- Describe the changes that occur to the membrane that result in the action potential
The functions of the nervous system—sensation, integration, and response—depend on the functions of the neurons underlying these pathways. To understand how neurons are able to communicate, it is necessary to describe the role of an excitable membrane in generating these signals. The basis of this communication is the action potential, which demonstrates how changes in the membrane can constitute a signal. Looking at the way these signals work in more variable circumstances involves a look at graded potentials, which will be covered in the next section. Electrically Active Cell Membranes Most cells in the body make use of charged particles, ions, to build up a charge across the cell membrane. Previously, this was shown to be a part of how muscle cells work. Skeletal muscle contraction, based on excitation–contraction coupling, requires input from a neuron. Both types of cells make use of the cell membrane to regulate ion movement between the extracellular fluid and cytosol. As you learned in the chapter on cells, the cell membrane is primarily responsible for regulating what can cross the membrane and what stays on only one side. The cell membrane is a phospholipid bilayer, so only substances that can pass directly through the hydrophobic core can diffuse through unaided.
Charged particles, which are hydrophilic by definition, cannot pass through the cell membrane without assistance (Figure 12.17). Transmembrane proteins, specifically channel proteins, make this possible. Several passive transport channels, as well as active transport pumps, are necessary to generate a transmembrane potential and an action potential. Of special interest is the carrier protein referred to as the sodium/potassium pump, which moves sodium ions (Na+) out of a cell and potassium ions (K+) into a cell, thus regulating ion concentration on both sides of the cell membrane. The sodium/potassium pump requires energy in the form of adenosine triphosphate (ATP), so it is also referred to as an ATPase. As was explained in the cell chapter, the concentration of Na+ is higher outside the cell than inside, and the concentration of K+ is higher inside the cell than outside. That means that this pump is moving the ions against the concentration gradients for sodium and potassium, which is why it requires energy. In fact, the pump basically maintains those concentration gradients. Ion channels are pores that allow specific charged particles to cross the membrane in response to an existing concentration gradient. Proteins are capable of spanning the cell membrane, including its hydrophobic core, and can interact with the charge of ions because of the varied properties of amino acids found within specific domains or regions of the protein channel. Hydrophobic amino acids are found in the domains that are apposed to the hydrocarbon tails of the phospholipids. Hydrophilic amino acids are exposed to the fluid environments of the extracellular fluid and cytosol. Additionally, the ions will interact with the hydrophilic amino acids, which will be selective for the charge of the ion. Channels for cations (positive ions) will have negatively charged side chains in the pore. Channels for anions (negative ions) will have positively charged side chains in the pore. This is called electrochemical exclusion, meaning that the channel pore is charge-specific. Ion channels can also be specified by the diameter of the pore. The distance between the amino acids will be specific for the diameter of the ion when it dissociates from the water molecules surrounding it. Because of the surrounding water molecules, larger pores are not ideal for smaller ions because the water molecules will interact, by hydrogen bonds, more readily than the amino acid side chains. This is called size exclusion. Some ion channels are selective for charge but not necessarily for size, and are thus called nonspecific channels. These nonspecific channels allow cations—particularly Na+, K+, and Ca2+—to cross the membrane, but exclude anions. Ion channels do not always freely allow ions to diffuse across the membrane. Some are opened by certain events, meaning the channels are gated. So another way that channels can be categorized is on the basis of how they are gated. Although these classes of ion channels are found primarily in the cells of nervous or muscular tissue, they also can be found in the cells of epithelial and connective tissues. A ligand-gated channel opens because a signaling molecule, a ligand, binds to the extracellular region of the channel. This type of channel is also known as an ionotropic receptor because when the ligand, known as a neurotransmitter in the nervous system, binds to the protein, ions cross the membrane, changing its charge (Figure 12.18).
A mechanically gated channel opens because of a physical distortion of the cell membrane. Many channels associated with the sense of touch (somatosensation) are mechanically gated. For example, as pressure is applied to the skin, these channels open and allow ions to enter the cell. A similar type of channel opens on the basis of temperature changes, as in testing the water in the shower (Figure 12.19). A voltage-gated channel is a channel that responds to changes in the electrical properties of the membrane in which it is embedded. Normally, the inner portion of the membrane is at a negative voltage. When that voltage becomes less negative, the channel begins to allow ions to cross the membrane (Figure 12.20). A leakage channel is randomly gated, meaning that it opens and closes at random, hence the reference to leaking. There is no actual event that opens the channel; instead, it has an intrinsic rate of switching between the open and closed states. Leakage channels contribute to the resting transmembrane voltage of the excitable membrane (Figure 12.21). The Membrane Potential The electrical state of the cell membrane can have several variations. These are all variations in the membrane potential. A potential is a distribution of charge across the cell membrane, measured in millivolts (mV). The standard is to compare the inside of the cell relative to the outside, so the membrane potential is a value representing the charge on the intracellular side of the membrane based on the outside being zero, relatively speaking (Figure 12.22). The concentration of ions in extracellular and intracellular fluids is largely balanced, with a net neutral charge. However, a slight difference in charge occurs right at the membrane surface, both internally and externally. It is the difference in this very limited region that has all the power in neurons (and muscle cells) to generate electrical signals, including action potentials. Before these electrical signals can be described, the resting state of the membrane must be explained. When the cell is at rest, and the ion channels are closed (except for leakage channels, which randomly open), ions are distributed across the membrane in a very predictable way. The concentration of Na+ outside the cell is 10 times greater than the concentration inside. Also, the concentration of K+ inside the cell is greater than outside. The cytosol contains a high concentration of anions, in the form of phosphate ions and negatively charged proteins. Large anions are a component of the inner cell membrane, including specialized phospholipids and proteins associated with the inner leaflet of the membrane (leaflet is a term used for one side of the lipid bilayer membrane). The negative charge is localized in the large anions. With the ions distributed across the membrane at these concentrations, the difference in charge is measured at -70 mV, the value described as the resting membrane potential. The exact value measured for the resting membrane potential varies between cells, but -70 mV is most commonly used as this value. This voltage would actually be much lower except for the contributions of some important proteins in the membrane. Leakage channels allow Na+ to slowly move into the cell or K+ to slowly move out, and the Na+/K+ pump restores them. This may appear to be a waste of energy, but each has a role in maintaining the membrane potential.
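The contribution of a single ion's gradient to the membrane voltage can be estimated with the Nernst equation, a standard electrophysiology relation that the passage itself does not state; the concentration values below are typical textbook figures rather than numbers taken from this chapter:

```latex
% Nernst equation: equilibrium potential for one ion species
E_{\text{ion}} = \frac{RT}{zF}\,\ln\frac{[\text{ion}]_{\text{out}}}{[\text{ion}]_{\text{in}}}

% At body temperature this works out to about 61.5 mV per tenfold gradient.
% With typical values [K^+]_{out} = 5\ \text{mM} and [K^+]_{in} = 140\ \text{mM}:
E_{K^{+}} \approx 61.5\ \text{mV} \times \log_{10}\frac{5}{140} \approx -89\ \text{mV}
```

The resting value of about -70 mV sits between this potassium equilibrium potential and the positive sodium equilibrium potential because the resting membrane is mostly, but not exclusively, permeable to K+ through its leakage channels.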
The Action Potential Resting membrane potential describes the steady state of the cell, which is a dynamic process balanced by ion leakage and ion pumping. Without any outside influence, it will not change. To get an electrical signal started, the membrane potential has to change. This starts with a channel opening for Na+ in the membrane. Because the concentration of Na+ is higher outside the cell than inside by a factor of 10, ions will rush into the cell, driven largely by the concentration gradient. Because sodium is a positively charged ion, it will change the relative voltage immediately inside the cell relative to immediately outside. The resting potential is the state of the membrane at a voltage of -70 mV, so the sodium cation entering the cell will cause it to become less negative. This is known as depolarization, meaning the membrane potential moves toward zero. The concentration gradient for Na+ is so strong that it will continue to enter the cell even after the membrane potential has become zero, so that the voltage immediately around the pore begins to become positive. The electrical gradient also plays a role, as negative proteins below the membrane attract the sodium ion. The membrane potential will reach +30 mV by the time sodium entry has ended. As the membrane potential reaches +30 mV, other voltage-gated channels are opening in the membrane. These channels are specific for the potassium ion. A concentration gradient acts on K+, as well. As K+ starts to leave the cell, taking a positive charge with it, the membrane potential begins to move back toward its resting voltage. This is called repolarization, meaning that the membrane voltage moves back toward the -70 mV value of the resting membrane potential. Repolarization returns the membrane potential to the -70 mV value that indicates the resting potential, but it actually overshoots that value. Potassium ions reach equilibrium when the membrane voltage is below -70 mV, so a period of hyperpolarization occurs while the K+ channels are open. Those K+ channels are slightly delayed in closing, accounting for this short overshoot. What has been described here is the action potential, which is presented as a graph of voltage over time in Figure 12.23. It is the electrical signal that nervous tissue generates for communication. The change in the membrane voltage from -70 mV at rest to +30 mV at the end of depolarization is a 100 mV change. That can also be written as a 0.1 V change. To put that value in perspective, think about a battery. An AA battery that you might find in a television remote has a voltage of 1.5 V, and a 9 V battery (the rectangular battery with two posts on one end) is, obviously, 9 V. The change seen in the action potential is one or two orders of magnitude less than the voltage of these batteries. In fact, the membrane potential can be described as a battery. A charge is stored across the membrane that can be released under the correct conditions. A battery in your remote has stored a charge that is “released” when you push a button. Interactive Link What happens across the membrane of an electrically active cell is a dynamic process that is hard to visualize with static images or through text descriptions. View this animation to learn more about this process. What is the difference between the driving force for Na+ and K+? And what is similar about the movement of these two ions? The question is, now, what initiates the action potential?
The description above conveniently glosses over that point. But it is vital to understanding what is happening. The membrane potential will stay at the resting voltage until something changes. The description above just says that a Na+ channel opens. Now, to say “a channel opens” does not mean that one individual transmembrane protein changes. Instead, it means that one kind of channel opens. There are a few different types of channels that allow Na+ to cross the membrane. A ligand-gated Na+ channel will open when a neurotransmitter binds to it, and a mechanically gated Na+ channel will open when a physical stimulus affects a sensory receptor (like pressure applied to the skin compresses a touch receptor). Whether it is a neurotransmitter binding to its receptor protein or a sensory stimulus activating a sensory receptor cell, some stimulus gets the process started. Sodium starts to enter the cell and the membrane becomes less negative. A third type of channel that is an important part of depolarization in the action potential is the voltage-gated Na+ channel. The channels that start depolarizing the membrane because of a stimulus help the cell to depolarize from -70 mV to -55 mV. Once the membrane reaches that voltage, the voltage-gated Na+ channels open. This is what is known as the threshold. Any depolarization that does not change the membrane potential to -55 mV or higher will not reach threshold and thus will not result in an action potential. Conversely, any stimulus that depolarizes the membrane to -55 mV or beyond will cause a large number of channels to open, and an action potential will be initiated. Because of the threshold, the action potential can be likened to a digital event—it either happens or it does not. If the threshold is not reached, then no action potential occurs. If depolarization reaches -55 mV, then the action potential continues and runs all the way to +30 mV, at which point K+ efflux causes repolarization, including the hyperpolarizing overshoot. Also, those changes are the same for every action potential, which means that once the threshold is reached, the exact same thing happens. A stronger stimulus, which might depolarize the membrane well past threshold, will not make a “bigger” action potential. Action potentials are “all or none.” Either the membrane reaches the threshold and everything occurs as described above, or the membrane does not reach the threshold and nothing else happens. All action potentials peak at the same voltage (+30 mV), so one action potential is not bigger than another. Stronger stimuli will initiate multiple action potentials more quickly, but the individual signals are not bigger. Thus, for example, you will not feel a greater sensation of pain, or have a stronger muscle contraction, because of the size of an action potential—action potentials do not come in different sizes. As we have seen, the depolarization and repolarization of an action potential are dependent on two types of channels (the voltage-gated Na+ channel and the voltage-gated K+ channel). The voltage-gated Na+ channel actually has two gates. One is the activation gate, which opens when the membrane potential crosses -55 mV. The other gate is the inactivation gate, which closes after a specific period of time—on the order of a fraction of a millisecond. When a cell is at rest, the activation gate is closed and the inactivation gate is open. However, when the threshold is reached, the activation gate opens, allowing Na+ to rush into the cell.
Timed with the peak of depolarization, the inactivation gate closes. During repolarization, no more sodium can enter the cell. When the membrane potential passes -55 mV again, the activation gate closes. After that, the inactivation gate re-opens, making the channel ready to start the whole process over again. The voltage-gated K+ channel has only one gate, which is sensitive to a membrane voltage of -50 mV. However, it does not open as quickly as the voltage-gated Na+ channel does. It might take a fraction of a millisecond for the channel to open once that voltage has been reached. The timing of this coincides exactly with when the Na+ flow peaks, so voltage-gated K+ channels open just as the voltage-gated Na+ channels are being inactivated. As the membrane potential repolarizes and the voltage passes -50 mV again, the channel closes—again, with a little delay. Potassium continues to leave the cell for a short while and the membrane potential becomes more negative, resulting in the hyperpolarizing overshoot. Then the channel closes again and the membrane can return to the resting potential because of the ongoing activity of the non-gated channels and the Na+/K+ pump. All of this takes place within approximately 2 milliseconds (Figure 12.24). While an action potential is in progress, another one cannot be initiated. That effect is referred to as the refractory period. There are two phases of the refractory period: the absolute refractory period and the relative refractory period. During the absolute phase, another action potential will not start, no matter how strong the stimulus. This is because of the inactivation gate of the voltage-gated Na+ channel. Once that channel is back to its resting conformation (less than -55 mV), a new action potential could be started, but only by a stronger stimulus than the one that initiated the current action potential; this is the relative refractory period, and it results from the continuing flow of K+ out of the cell. Because that ion is rushing out, any Na+ that tries to enter will not depolarize the cell, but will only keep the cell from hyperpolarizing. Propagation of the Action Potential The action potential is initiated at the beginning of the axon, at what is called the initial segment. There is a high density of voltage-gated Na+ channels here, so that rapid depolarization can take place. Going down the length of the axon, the action potential is propagated because more voltage-gated Na+ channels are opened as the depolarization spreads. This spreading occurs because Na+ enters through the channel and moves along the inside of the cell membrane. As the Na+ moves, or flows, a short distance along the cell membrane, its positive charge depolarizes a little more of the cell membrane. As that depolarization spreads, new voltage-gated Na+ channels open and more ions rush into the cell, spreading the depolarization a little farther. Because voltage-gated Na+ channels are inactivated at the peak of the depolarization, they cannot be opened again for a brief time—the absolute refractory period. Because of this, depolarization spreading back toward previously opened channels has no effect. The action potential must propagate toward the axon terminals; as a result, the polarity of the neuron is maintained, as mentioned above. Propagation, as described above, applies to unmyelinated axons. When myelination is present, the action potential propagates differently.
Sodium ions that enter the cell at the initial segment start to spread along the length of the axon segment, but there are no voltage-gated Na + channels until the first node of Ranvier. Because there is not constant opening of these channels along the axon segment, the depolarization spreads at an optimal speed. The distance between nodes is the optimal distance to keep the membrane still depolarized above threshold at the next node. As Na + spreads along the inside of the membrane of the axon segment, the charge starts to dissipate. If the node were any farther down the axon, that depolarization would have fallen off too much for voltage-gated Na + channels to be activated at the next node of Ranvier. If the nodes were any closer together, the speed of propagation would be slower. Propagation along an unmyelinated axon is referred to as continuous conduction ; along the length of a myelinated axon, it is saltatory conduction . Continuous conduction is slow because there are always voltage-gated Na + channels opening, and more and more Na + is rushing into the cell. Saltatory conduction is faster because the action potential basically jumps from one node to the next (saltare = “to leap”), and the new influx of Na + renews the depolarized membrane. Along with the myelination of the axon, the diameter of the axon can influence the speed of conduction. Much as water runs faster in a wide river than in a narrow creek, Na + -based depolarization spreads faster down a wide axon than down a narrow one. This concept is known as resistance and is generally true for electrical wires or plumbing, just as it is true for axons, although the specific conditions are different at the scales of electrons or ions versus water in a river. Homeostatic Imbalances Potassium Concentration Glial cells, especially astrocytes, are responsible for maintaining the chemical environment of the CNS tissue. The concentrations of ions in the extracellular fluid are the basis for how the membrane potential is established and changes in electrochemical signaling. If the balance of ions is upset, drastic outcomes are possible. Normally the concentration of K + is higher inside the neuron than outside. After the repolarizing phase of the action potential, K + leakage channels and the Na + /K + pump ensure that the ions return to their original locations. Following a stroke or other ischemic event, extracellular K + levels are elevated. The astrocytes in the area are equipped to clear excess K + to aid the pump. But when the level is far out of balance, the effects can be irreversible. Astrocytes can become reactive in cases such as these, which impairs their ability to maintain the local chemical environment. The glial cells enlarge and their processes swell. They lose their K + buffering ability and the function of the pump is affected, or even reversed. One of the early signs of cell disease is this "leaking" of sodium ions into the body cells. This sodium/potassium imbalance negatively affects the internal chemistry of cells, preventing them from functioning normally. Interactive Link Visit this site to see a virtual neurophysiology lab, and to observe electrophysiological processes in the nervous system, where scientists directly measure the electrical signals produced by neurons. Often, the action potentials occur so rapidly that watching a screen to see them occur is not helpful. A speaker is powered by the signals recorded from a neuron and it “pops” each time the neuron fires an action potential. 
These action potentials are firing so fast that it sounds like static on the radio. Electrophysiologists can recognize the patterns within that static to understand what is happening. Why is the leech model used for measuring the electrical activity of neurons instead of using humans? 12.5 Communication Between Neurons Learning Objectives By the end of this section, you will be able to:
- Explain the differences between the types of graded potentials
- Categorize the major neurotransmitters by chemical type and effect
The electrical changes taking place within a neuron, as described in the previous section, are similar to a light switch being turned on. A stimulus starts the depolarization, but the action potential runs on its own once a threshold has been reached. The question is now, “What flips the light switch on?” Temporary changes to the cell membrane voltage can result from neurons receiving information from the environment, or from the action of one neuron on another. These special types of potentials influence a neuron and determine whether an action potential will occur or not. Many of these transient signals originate at the synapse. Graded Potentials Local changes in the membrane potential are called graded potentials and are usually associated with the dendrites of a neuron. The amount of change in the membrane potential is determined by the size of the stimulus that causes it. In the example of testing the temperature of the shower, slightly warm water would only initiate a small change in a thermoreceptor, whereas hot water would cause a large amount of change in the membrane potential. Graded potentials can be of two sorts: they are either depolarizing or hyperpolarizing (Figure 12.25). For a membrane at the resting potential, a graded potential represents a change in that voltage either above -70 mV or below -70 mV. Depolarizing graded potentials are often the result of Na+ or Ca2+ entering the cell. Both of these ions have higher concentrations outside the cell than inside; because they have a positive charge, they will move into the cell, causing it to become less negative relative to the outside. Hyperpolarizing graded potentials can be caused by K+ leaving the cell or Cl- entering the cell. If a positive charge moves out of a cell, the cell becomes more negative; if a negative charge enters the cell, the same thing happens. Types of Graded Potentials For the unipolar cells of sensory neurons—both those with free nerve endings and those within encapsulations—graded potentials develop in the dendrites that influence the generation of an action potential in the axon of the same cell. This is called a generator potential. For other sensory receptor cells, such as taste cells or photoreceptors of the retina, graded potentials in their membranes result in the release of neurotransmitters at synapses with sensory neurons. This is called a receptor potential. A postsynaptic potential (PSP) is the graded potential in the dendrites of a neuron that is receiving synapses from other cells. Postsynaptic potentials can be depolarizing or hyperpolarizing. Depolarization in a postsynaptic potential is called an excitatory postsynaptic potential (EPSP) because it causes the membrane potential to move toward threshold. Hyperpolarization in a postsynaptic potential is an inhibitory postsynaptic potential (IPSP) because it causes the membrane potential to move away from threshold.
Summation All types of graded potentials will result in small changes of either depolarization or hyperpolarization in the voltage of a membrane. These changes can lead to the neuron reaching threshold if the changes add together, or summate. The combined effects of different types of graded potentials are illustrated in Figure 12.26. If the total change in voltage in the membrane is +15 mV, meaning that the membrane depolarizes from -70 mV to -55 mV, then the graded potentials will result in the membrane reaching threshold. For receptor potentials, threshold is not a factor because the change in membrane potential for receptor cells directly causes neurotransmitter release. However, generator potentials can initiate action potentials in the sensory neuron axon, and postsynaptic potentials can initiate an action potential in the axon of other neurons. Graded potentials summate at a specific location at the beginning of the axon to initiate the action potential, namely the initial segment. For sensory neurons, which do not have a cell body between the dendrites and the axon, the initial segment is directly adjacent to the dendritic endings. For all other neurons, the axon hillock is essentially the initial segment of the axon, and it is where summation takes place. These locations have a high density of voltage-gated Na+ channels that initiate the depolarizing phase of the action potential. Summation can be spatial or temporal. In spatial summation, graded potentials arriving from multiple inputs at different locations on the neuron add together; in temporal summation, successive graded potentials produced by the same input, arriving close together in time, add together to produce a significant change in the membrane potential. Spatial and temporal summation can act together, as well (a brief numeric sketch of summation appears at the end of this passage). Interactive Link Watch this video to learn about summation. The process of converting electrical signals to chemical signals and back requires subtle changes that can result in transient increases or decreases in membrane voltage. To cause a lasting change in the target cell, multiple signals are usually added together, or summated. Does spatial summation have to happen all at once, or can the separate signals arrive on the postsynaptic neuron at slightly different times? Explain your answer. Synapses There are two types of connections between electrically active cells, chemical synapses and electrical synapses. In a chemical synapse, a chemical signal—namely, a neurotransmitter—is released from one cell and it affects the other cell. In an electrical synapse, there is a direct connection between the two cells so that ions can pass directly from one cell to the next. If one cell is depolarized in an electrical synapse, the joined cell also depolarizes because the ions pass between the cells. Chemical synapses involve the transmission of chemical information from one cell to the next. This section will concentrate on the chemical type of synapse. An example of a chemical synapse is the neuromuscular junction (NMJ) described in the chapter on muscle tissue. In the nervous system, there are many more synapses that are essentially the same as the NMJ.
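Here is the numeric sketch of summation referenced above, written as a minimal Python illustration. The resting (-70 mV) and threshold (-55 mV) values come from the chapter; the individual PSP amplitudes and the function name are hypothetical choices made for the example:

```python
# Minimal sketch of graded-potential summation at the initial segment.
# Resting and threshold voltages come from the chapter; the PSP
# amplitudes below are hypothetical illustrative values.

RESTING_MV = -70.0
THRESHOLD_MV = -55.0

def summate(psps_mv):
    """Add graded potentials (EPSPs positive, IPSPs negative) onto the
    resting potential and report whether threshold is reached."""
    membrane_mv = RESTING_MV + sum(psps_mv)
    fires_action_potential = membrane_mv >= THRESHOLD_MV
    return membrane_mv, fires_action_potential

# Three +6 mV EPSPs (e.g., from different synapses) reach threshold:
print(summate([6.0, 6.0, 6.0]))    # (-52.0, True)  -> action potential

# An IPSP subtracts from the total; the neuron stays below threshold:
print(summate([6.0, 6.0, -5.0]))   # (-63.0, False) -> no action potential
```

Note that once threshold is reached, the action potential is the same regardless of how far past -55 mV the summed potentials go; this is the all-or-none behavior described in the previous section.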
All synapses have common characteristics, which can be summarized in this list:
- presynaptic element
- neurotransmitter (packaged in vesicles)
- synaptic cleft
- receptor proteins
- postsynaptic element
- neurotransmitter elimination or re-uptake
For the NMJ, these characteristics are as follows: the presynaptic element is the motor neuron's axon terminals, the neurotransmitter is acetylcholine, the synaptic cleft is the space between the cells where the neurotransmitter diffuses, the receptor protein is the nicotinic acetylcholine receptor, the postsynaptic element is the sarcolemma of the muscle cell, and the neurotransmitter is eliminated by acetylcholinesterase. Other synapses are similar to this; although the specifics differ, they all contain the same characteristics. Neurotransmitter Release When an action potential reaches the axon terminals, voltage-gated Ca2+ channels in the membrane of the synaptic end bulb open. The concentration of Ca2+ increases inside the end bulb, and the Ca2+ ion associates with proteins in the outer surface of neurotransmitter vesicles. The Ca2+ facilitates the merging of the vesicle with the presynaptic membrane so that the neurotransmitter is released through exocytosis into the small gap between the cells, known as the synaptic cleft. Once in the synaptic cleft, the neurotransmitter diffuses the short distance to the postsynaptic membrane and can interact with neurotransmitter receptors. Receptors are specific for the neurotransmitter, and the two fit together like a key and lock. One neurotransmitter binds to its receptor and will not bind to receptors for other neurotransmitters, making the binding a specific chemical event (Figure 12.27). Neurotransmitter Systems There are several systems of neurotransmitters that are found at various synapses in the nervous system. These groups refer to the chemicals that are the neurotransmitters, and within the groups are specific systems. The first group, which is a neurotransmitter system of its own, is the cholinergic system. It is the system based on acetylcholine. This includes the NMJ as an example of a cholinergic synapse, but cholinergic synapses are found in other parts of the nervous system. They are in the autonomic nervous system, as well as distributed throughout the brain. The cholinergic system has two types of receptors: the nicotinic receptor, which is found at the NMJ as well as other synapses, and the muscarinic receptor. Both of these receptors are named for drugs that interact with the receptor in addition to acetylcholine. Nicotine will bind to the nicotinic receptor and activate it similarly to acetylcholine. Muscarine, a product of certain mushrooms, will bind to the muscarinic receptor. However, nicotine will not bind to the muscarinic receptor, and muscarine will not bind to the nicotinic receptor. Another group of neurotransmitters comprises the amino acids: glutamate (Glu), GABA (gamma-aminobutyric acid, a derivative of glutamate), and glycine (Gly). These amino acids have an amino group and a carboxyl group in their chemical structures. Glutamate is one of the 20 amino acids that are used to make proteins. Each amino acid neurotransmitter is part of its own system, namely the glutamatergic, GABAergic, and glycinergic systems. They each have their own receptors and do not interact with each other. Amino acid neurotransmitters are eliminated from the synapse by reuptake.
A pump in the cell membrane of the presynaptic element, or sometimes a neighboring glial cell, will clear the amino acid from the synaptic cleft so that it can be recycled, repackaged in vesicles, and released again. Another class of neurotransmitter is the biogenic amine , a group of neurotransmitters that are enzymatically made from amino acids. They have amino groups in them, but no longer have carboxyl groups and are therefore no longer classified as amino acids. Serotonin is made from tryptophan. It is the basis of the serotonergic system, which has its own specific receptors. Serotonin is transported back into the presynaptic cell for repackaging. Other biogenic amines are made from tyrosine, and include dopamine, norepinephrine, and epinephrine. Dopamine is part of its own system, the dopaminergic system, which has dopamine receptors. Dopamine is removed from the synapse by transport proteins in the presynaptic cell membrane. Norepinephrine and epinephrine belong to the adrenergic neurotransmitter system. The two molecules are very similar and bind to the same receptors, which are referred to as alpha and beta receptors. Norepinephrine and epinephrine are also transported back into the presynaptic cell. The chemical epinephrine (epi- = “on”; “-nephrine” = kidney) is also known as adrenaline (renal = “kidney”), and norepinephrine is sometimes referred to as noradrenaline. The adrenal gland produces epinephrine and norepinephrine to be released into the blood stream as hormones. A neuropeptide is a neurotransmitter molecule made up of chains of amino acids connected by peptide bonds. This is what a protein is, but the term protein implies a certain length to the molecule. Some neuropeptides are quite short, such as met-enkephalin, which is five amino acids long. Others are long, such as beta-endorphin, which is 31 amino acids long. Neuropeptides are often released at synapses in combination with another neurotransmitter, and they often act as hormones in other systems of the body, such as vasoactive intestinal peptide (VIP) or substance P. The effect of a neurotransmitter on the postsynaptic element is entirely dependent on the receptor protein. First, if there is no receptor protein in the membrane of the postsynaptic element, then the neurotransmitter has no effect. The depolarizing or hyperpolarizing effect is also dependent on the receptor. When acetylcholine binds to the nicotinic receptor, the postsynaptic cell is depolarized. This is because the receptor is a cation channel and positively charged Na + will rush into the cell. However, when acetylcholine binds to the muscarinic receptor, of which there are several variants, it might cause depolarization or hyperpolarization of the target cell. The amino acid neurotransmitters, glutamate, glycine, and GABA, are almost exclusively associated with just one effect. Glutamate is considered an excitatory amino acid, but only because Glu receptors in the adult cause depolarization of the postsynaptic cell. Glycine and GABA are considered inhibitory amino acids, again because their receptors cause hyperpolarization. The biogenic amines have mixed effects. For example, the dopamine receptors that are classified as D1 receptors are excitatory whereas D2-type receptors are inhibitory. Biogenic amine receptors and neuropeptide receptors can have even more complex effects because some may not directly affect the membrane potential, but rather have an effect on gene transcription or other metabolic processes in the neuron. 
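Because the effect is determined by the receptor rather than by the transmitter, the relationship can be sketched as a simple lookup. The pairings in this short Python illustration restate the passage above; the dictionary and function name are illustrative devices, not a complete catalog of receptor types:

```python
# The postsynaptic effect depends on the receptor, not the transmitter.
# These pairings restate the chapter text; the mapping is illustrative only.
RECEPTOR_EFFECT = {
    "nicotinic ACh receptor": "depolarizing",        # cation channel, Na+ enters
    "muscarinic ACh receptor": "depends on subtype", # either direction
    "Glu receptor": "depolarizing",                  # excitatory amino acid
    "Gly receptor": "hyperpolarizing",               # inhibitory amino acid
    "GABA receptor": "hyperpolarizing",              # inhibitory amino acid
    "D1 dopamine receptor": "depolarizing",          # excitatory
    "D2 dopamine receptor": "hyperpolarizing",       # inhibitory
}

def postsynaptic_effect(receptor: str) -> str:
    """Look up the effect of transmitter binding at a given receptor.
    No receptor in the postsynaptic membrane means no effect at all."""
    return RECEPTOR_EFFECT.get(receptor, "no receptor -> no effect")

print(postsynaptic_effect("GABA receptor"))        # hyperpolarizing
print(postsynaptic_effect("unfamiliar receptor"))  # no receptor -> no effect
```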
The characteristics of the various neurotransmitter systems presented in this section are organized in Table 12.3. The important thing to remember about neurotransmitters, and signaling chemicals in general, is that the effect is entirely dependent on the receptor. Neurotransmitters bind to one of two classes of receptors at the cell surface, ionotropic or metabotropic (Figure 12.28). Ionotropic receptors are ligand-gated ion channels, such as the nicotinic receptor for acetylcholine or the glycine receptor. A metabotropic receptor involves a complex of proteins that result in metabolic changes within the cell. The receptor complex includes the transmembrane receptor protein, a G protein, and an effector protein. The neurotransmitter, referred to as the first messenger, binds to the receptor protein on the extracellular surface of the cell, and the intracellular side of the protein initiates activity of the G protein. The G protein is a guanosine triphosphate (GTP) hydrolase that physically moves from the receptor protein to the effector protein to activate the latter. An effector protein is an enzyme that catalyzes the generation of a new molecule, which acts as the intracellular mediator of the signal that binds to the receptor. This intracellular mediator is called the second messenger. Different receptors use different second messengers. Two common examples of second messengers are cyclic adenosine monophosphate (cAMP) and inositol triphosphate (IP3). The enzyme adenylate cyclase (an example of an effector protein) makes cAMP, and phospholipase C is the enzyme that makes IP3. Second messengers, after they are produced by the effector protein, cause metabolic changes within the cell. These changes are most likely the activation of other enzymes in the cell. In neurons, they often modify ion channels, either opening or closing them. These enzymes can also cause changes in the cell, such as the activation of genes in the nucleus, and therefore the increased synthesis of proteins. In neurons, these kinds of changes are often the basis of stronger connections between cells at the synapse and may be the basis of learning and memory. Interactive Link Watch this video to learn about the release of a neurotransmitter. The action potential reaches the end of the axon, called the axon terminal, and a chemical signal is released to tell the target cell to do something—either to initiate a new action potential, or to suppress that activity. In a very short space, the electrical signal of the action potential is changed into the chemical signal of a neurotransmitter and then back to electrical changes in the target cell membrane. What is the importance of voltage-gated calcium channels in the release of neurotransmitters?

Characteristics of Neurotransmitter Systems (Table 12.3)

Cholinergic system
- Neurotransmitters: acetylcholine
- Receptors: nicotinic and muscarinic receptors
- Elimination: degradation by acetylcholinesterase
- Postsynaptic effect: the nicotinic receptor causes depolarization; muscarinic receptors can cause either depolarization or hyperpolarization, depending on the subtype

Amino acid systems
- Neurotransmitters: glutamate, glycine, GABA
- Receptors: Glu receptors, Gly receptors, GABA receptors
- Elimination: reuptake by neurons or glia
- Postsynaptic effect: Glu receptors cause depolarization; Gly and GABA receptors cause hyperpolarization

Biogenic amine systems
- Neurotransmitters: serotonin (5-HT), dopamine, norepinephrine, (epinephrine)
- Receptors: 5-HT receptors, D1 and D2 receptors, α-adrenergic and β-adrenergic receptors
- Elimination: reuptake by neurons
- Postsynaptic effect: depolarization or hyperpolarization depends on the specific receptor; for example, D1 receptors cause depolarization and D2 receptors cause hyperpolarization

Neuropeptide systems
- Neurotransmitters: met-enkephalin, beta-endorphin, VIP, substance P, etc.
- Receptors: too numerous to list, but specific to the peptides
- Elimination: degradation by enzymes called peptidases
- Postsynaptic effect: depolarization or hyperpolarization depends on the specific receptor

Disorders of the... Nervous System The underlying cause of some neurodegenerative diseases, such as Alzheimer's and Parkinson's, appears to be related to proteins—specifically, to proteins behaving badly. One of the strongest theories of what causes Alzheimer's disease is based on the accumulation of beta-amyloid plaques, dense conglomerations of a protein that is not functioning correctly. Parkinson's disease is linked to an increase in a protein known as alpha-synuclein that is toxic to the cells of the substantia nigra nucleus in the midbrain. For proteins to function correctly, they are dependent on their three-dimensional shape. The linear sequence of amino acids folds into a three-dimensional shape that is based on the interactions between and among those amino acids. When the folding is disturbed, and proteins take on a different shape, they stop functioning correctly. But the disease is not necessarily the result of functional loss of these proteins; rather, these altered proteins start to accumulate and may become toxic. For example, in Alzheimer's, the hallmark of the disease is the accumulation of these amyloid plaques in the cerebral cortex. The term coined to describe this sort of disease is "proteopathy," and it includes other diseases. Creutzfeldt-Jakob disease, the human variant of the prion disease known as mad cow disease in cattle, also involves the accumulation of amyloid plaques, similar to Alzheimer's. Diseases of other organ systems can fall into this group as well, such as cystic fibrosis or type 2 diabetes. Recognizing the relationship between these diseases has suggested new therapeutic possibilities. Interfering with the accumulation of the proteins, and possibly as early as their original production within the cell, may unlock new ways to alleviate these devastating diseases.
Microbiology
Summary 2.1 The Properties of Light Light waves interacting with materials may be reflected, absorbed, or transmitted, depending on the properties of the material. Light waves can interact with each other (interference) or be distorted by interactions with small objects or openings (diffraction). Refraction occurs when light waves change speed and direction as they pass from one medium to another. Differences in the refractive indices of two materials determine the magnitude of directional changes when light passes from one to the other. A lens is a medium with a curved surface that refracts and focuses light to produce an image. Visible light is part of the electromagnetic spectrum; light waves of different frequencies and wavelengths are distinguished as colors by the human eye. A prism can separate the colors of white light (dispersion) because different frequencies of light have different refractive indices for a given material. Fluorescent dyes and phosphorescent materials can effectively transform nonvisible electromagnetic radiation into visible light. The power of a microscope can be described in terms of its magnification and resolution. Resolution can be increased by shortening wavelength, increasing the numerical aperture of the lens, or using stains that enhance contrast. 2.2 Peering Into the Invisible World Antonie van Leeuwenhoek is credited with the first observation of microbes, including protists and bacteria, with simple microscopes that he made. Robert Hooke was the first to describe what we now call cells. Simple microscopes have a single lens, while compound microscopes have multiple lenses. 2.3 Instruments of Microscopy Numerous types of microscopes use various technologies to generate micrographs. Most are useful for a particular type of specimen or application. Light microscopy uses lenses to focus light on a specimen to produce an image. Commonly used light microscopes include brightfield, darkfield, phase-contrast, differential interference contrast, fluorescence, confocal, and two-photon microscopes. Electron microscopy focuses electrons on the specimen using magnets, producing much greater magnification than light microscopy. The transmission electron microscope (TEM) and scanning electron microscope (SEM) are two common forms. Scanning probe microscopy produces images of even greater magnification by measuring feedback from sharp probes that interact with the specimen. Probe microscopes include the scanning tunneling microscope (STM) and the atomic force microscope (AFM). 2.4 Staining Microscopic Specimens Samples must be properly prepared for microscopy. This may involve staining, fixation, and/or cutting thin sections. A variety of staining techniques can be used with light microscopy, including Gram staining, acid-fast staining, capsule staining, endospore staining, and flagella staining. Samples for TEM require very thin sections, whereas samples for SEM require sputter-coating. Preparation for fluorescence microscopy is similar to that for light microscopy, except that fluorochromes are used.
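Two standard optics relations make these summary points quantitative. Neither formula appears in the summary itself, so they are supplied here as conventional reference equations:

```latex
% Snell's law: bending of light at a boundary between media with
% refractive indices n_1 and n_2
n_1 \sin\theta_1 = n_2 \sin\theta_2

% Abbe diffraction limit: smallest resolvable separation d for light of
% wavelength \lambda and a lens of numerical aperture NA
d = \frac{\lambda}{2\,\mathrm{NA}}
```

The second relation shows why shortening the wavelength or increasing the numerical aperture improves resolution: both make the minimum resolvable distance d smaller.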
Chapter Outline 2.1 The Properties of Light 2.2 Peering Into the Invisible World 2.3 Instruments of Microscopy 2.4 Staining Microscopic Specimens Introduction When we look at a rainbow, its colors span the full spectrum of light that the human eye can detect and differentiate. Each hue represents a different frequency of visible light, processed by our eyes and brains and rendered as red, orange, yellow, green, or one of the many other familiar colors that have always been a part of the human experience. But only recently have humans developed an understanding of the properties of light that allow us to see images in color. Over the past several centuries, we have learned to manipulate light to peer into previously invisible worlds—those too small or too far away to be seen by the naked eye. Through a microscope, we can examine microbial cells, using various techniques to manipulate color, size, and contrast in ways that help us identify species and diagnose disease. Figure 2.1 illustrates how we can apply the properties of light to visualize and magnify images; but these stunning micrographs are just two examples of the numerous types of images we are now able to produce with different microscopic technologies. This chapter explores how various types of microscopes manipulate light in order to provide a window into the world of microorganisms. By understanding how various kinds of microscopes work, we can produce highly detailed images of microbes that can be useful for both research and clinical applications.
[ { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Photons with different energies interact differently with the retina . <hl> In the spectrum of visible light , each color corresponds to a particular frequency and wavelength ( Figure 2.7 ) . The lowest frequency of visible light appears as the color red , whereas the highest appears as the color violet . <hl> When the retina receives visible light of many different frequencies , we perceive this as white light . However , white light can be separated into its component colors using refraction . If we pass white light through a prism , different colors will be refracted in different directions , creating a rainbow-like spectrum on a screen behind the prism . This separation of colors is called dispersion , and it occurs because , for a given material , the refractive index is different for different frequencies of light . Whereas wavelength represents the distance between adjacent peaks of a light wave , frequency , in a simplified definition , represents the rate of oscillation . <hl> Waves with higher frequencies have shorter wavelengths and , therefore , have more oscillations per unit time than lower-frequency waves . <hl> <hl> Higher-frequency waves also contain more energy than lower-frequency waves . <hl> This energy is delivered as elementary particles called photons . Higher-frequency waves deliver more energetic photons than lower-frequency waves .", "hl_sentences": "In the spectrum of visible light , each color corresponds to a particular frequency and wavelength ( Figure 2.7 ) . The lowest frequency of visible light appears as the color red , whereas the highest appears as the color violet . Waves with higher frequencies have shorter wavelengths and , therefore , have more oscillations per unit time than lower-frequency waves . Higher-frequency waves also contain more energy than lower-frequency waves .", "question": { "cloze_format": "___ has the highest energy.", "normal_format": "Which of the following has the highest energy?", "question_choices": [ "light with a long wavelength", "light with an intermediate wavelength", "light with a short wavelength", "It is impossible to tell from the information given." ], "question_id": "fs-id1167793420863", "question_text": "Which of the following has the highest energy?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> Certain materials can refract nonvisible forms of EMR and , in effect , transform them into visible light . <hl> <hl> Certain fluorescent dyes , for instance , absorb ultraviolet or blue light and then use the energy to emit photons of a different color , giving off light rather than simply vibrating . <hl> This occurs because the energy absorption causes electrons to jump to higher energy states , after which they then almost immediately fall back down to their ground states , emitting specific amounts of energy as photons . Not all of the energy is emitted in a given photon , so the emitted photons will be of lower energy and , thus , of lower frequency than the absorbed ones . Thus , a dye such as Texas red may be excited by blue light , but emit red light ; or a dye such as fluorescein isothiocyanate ( FITC ) may absorb ( invisible ) high-energy ultraviolet light and emit green light ( Figure 2.8 ) . In some materials , the photons may be emitted following a delay after absorption ; in this case , the process is called phosphorescence . 
Glow-in-the-dark plastic works by using phosphorescent material .", "hl_sentences": "Certain materials can refract nonvisible forms of EMR and , in effect , transform them into visible light . Certain fluorescent dyes , for instance , absorb ultraviolet or blue light and then use the energy to emit photons of a different color , giving off light rather than simply vibrating .", "question": { "cloze_format": "You place a specimen under the microscope and notice that parts of the specimen begin to emit light immediately. These materials can be described as _____________.", "normal_format": "You place a specimen under the microscope and notice that parts of the specimen begin to emit light immediately. What are these materials can be described as?", "question_choices": [ "fluorescent", "phosphorescent", "transparent", "opaque" ], "question_id": "fs-id1167793775970", "question_text": "You place a specimen under the microscope and notice that parts of the specimen begin to emit light immediately. These materials can be described as _____________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> Van Leeuwenhoek ’ s contemporary , the Englishman Robert Hooke ( 1635 – 1703 ) , also made important contributions to microscopy , publishing in his book Micrographia ( 1665 ) many observations using compound microscopes . <hl> <hl> Viewing a thin sample of cork through his microscope , he was the first to observe the structures that we now know as cells ( Figure 2.10 ) . <hl> Hooke described these structures as resembling “ Honey-comb , ” and as “ small Boxes or Bladders of Air , ” noting that each “ Cavern , Bubble , or Cell ” is distinct from the others ( in Latin , “ cell ” literally means “ small room ” ) . They likely appeared to Hooke to be filled with air because the cork cells were dead , with only the rigid cell walls providing the structure .", "hl_sentences": "Van Leeuwenhoek ’ s contemporary , the Englishman Robert Hooke ( 1635 – 1703 ) , also made important contributions to microscopy , publishing in his book Micrographia ( 1665 ) many observations using compound microscopes . Viewing a thin sample of cork through his microscope , he was the first to observe the structures that we now know as cells ( Figure 2.10 ) .", "question": { "cloze_format": "___ was the first to describe “cells” in dead cork tissue.", "normal_format": "Who was the first to describe “cells” in dead cork tissue?", "question_choices": [ "Hans Janssen", "Zaccharias Janssen", "Antonie van Leeuwenhoek", "Robert Hooke" ], "question_id": "fs-id1167793478986", "question_text": "Who was the first to describe “cells” in dead cork tissue?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "Micro Connections Who Invented the Microscope ? <hl> While Antonie van Leeuwenhoek and Robert Hooke generally receive much of the credit for early advances in microscopy , neither can claim to be the inventor of the microscope . <hl> <hl> Some argue that this designation should belong to Hans and Zaccharias Janssen , Dutch spectacle-makers who may have invented the telescope , the simple microscope , and the compound microscope during the late 1500s or early 1600s ( Figure 2.11 ) . <hl> Unfortunately , little is known for sure about the Janssens , not even the exact dates of their births and deaths . The Janssens were secretive about their work and never published . 
It is also possible that the Janssens did not invent anything at all ; their neighbor , Hans Lippershey , also developed microscopes and telescopes during the same time frame , and he is often credited with inventing the telescope . The historical records from the time are as fuzzy and imprecise as the images viewed through those early lenses , and any archived records have been lost over the centuries .", "hl_sentences": "While Antonie van Leeuwenhoek and Robert Hooke generally receive much of the credit for early advances in microscopy , neither can claim to be the inventor of the microscope . Some argue that this designation should belong to Hans and Zaccharias Janssen , Dutch spectacle-makers who may have invented the telescope , the simple microscope , and the compound microscope during the late 1500s or early 1600s ( Figure 2.11 ) .", "question": { "cloze_format": "___ is the probable inventor of the compound microscope.", "normal_format": "Who is the probable inventor of the compound microscope?", "question_choices": [ "Girolamo Fracastoro", "Zaccharias Janssen", "Antonie van Leeuwenhoek", "Robert Hooke" ], "question_id": "fs-id1167793953497", "question_text": "Who is the probable inventor of the compound microscope?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Darkfield microscopy can often create high-contrast , high-resolution images of specimens without the use of stains , which is particularly useful for viewing live specimens that might be killed or otherwise compromised by the stains . <hl> <hl> For example , thin spirochetes like Treponema pallidum , the causative agent of syphilis , can be best viewed using a darkfield microscope ( Figure 2.15 ) . <hl>", "hl_sentences": "Darkfield microscopy can often create high-contrast , high-resolution images of specimens without the use of stains , which is particularly useful for viewing live specimens that might be killed or otherwise compromised by the stains . For example , thin spirochetes like Treponema pallidum , the causative agent of syphilis , can be best viewed using a darkfield microscope ( Figure 2.15 ) .", "question": { "cloze_format": "___ would be the best choice for viewing internal structures of a living protist such as a Paramecium.", "normal_format": "Which would be the best choice for viewing internal structures of a living protist such as a Paramecium?", "question_choices": [ "a brightfield microscope with a stain", "a brightfield microscope without a stain", "a darkfield microscope", "a transmission electron microscope" ], "question_id": "fs-id1167793362004", "question_text": "Which would be the best choice for viewing internal structures of a living protist such as a Paramecium?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> While the original fluorescent and confocal microscopes allowed better visualization of unique features in specimens , there were still problems that prevented optimum visualization . <hl> <hl> The effective sensitivity of fluorescence microscopy when viewing thick specimens was generally limited by out-of-focus flare , which resulted in poor resolution . <hl> This limitation was greatly reduced in the confocal microscope through the use of a confocal pinhole to reject out-of-focus background fluorescence with thin ( < 1 μm ) , unblurred optical sections . 
However , even the confocal microscopes lacked the resolution needed for viewing thick tissue samples . These problems were resolved with the development of the two-photon microscope , which uses a scanning technique , fluorochromes , and long-wavelength light ( such as infrared ) to visualize specimens . The low energy associated with the long-wavelength light means that two photons must strike a location at the same time to excite the fluorochrome . The low energy of the excitation light is less damaging to cells , and the long wavelength of the excitation light more easily penetrates deep into thick specimens . This makes the two-photon microscope useful for examining living cells within intact tissues — brain slices , embryos , whole organs , and even entire animals . Whereas other forms of light microscopy create an image that is maximally focused at a single distance from the observer ( the depth , or z-plane ) , a confocal microscope uses a laser to scan multiple z-planes successively . This produces numerous two-dimensional , high-resolution images at various depths , which can be constructed into a three-dimensional image by a computer . As with fluorescence microscopes , fluorescent stains are generally used to increase contrast and resolution . Image clarity is further enhanced by a narrow aperture that eliminates any light that is not from the z-plane . <hl> Confocal microscopes are thus very useful for examining thick specimens such as biofilms , which can be examined alive and unfixed ( Figure 2.20 ) . <hl>", "hl_sentences": "While the original fluorescent and confocal microscopes allowed better visualization of unique features in specimens , there were still problems that prevented optimum visualization . The effective sensitivity of fluorescence microscopy when viewing thick specimens was generally limited by out-of-focus flare , which resulted in poor resolution . Confocal microscopes are thus very useful for examining thick specimens such as biofilms , which can be examined alive and unfixed ( Figure 2.20 ) .", "question": { "cloze_format": "The type of microscope that is especially useful for viewing thick structures such as biofilms is ___.", "normal_format": "Which type of microscope is especially useful for viewing thick structures such as biofilms?", "question_choices": [ "a transmission electron microscope", "a scanning electron microscopes", "a phase-contrast microscope", "a confocal scanning laser microscope", "an atomic force microscope" ], "question_id": "fs-id1167793371737", "question_text": "Which type of microscope is especially useful for viewing thick structures such as biofilms?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "electrons that are knocked off of specimens by a beam of electrons . This can create highly detailed images with a three-dimensional appearance that are displayed on a monitor ( Figure 2.23 ) . Typically , specimens are dried and prepared with fixatives that reduce artifacts , such as shriveling , that can be produced by drying , before being sputter-coated with a thin layer of metal such as gold . <hl> Whereas transmission electron microscopy requires very thin sections and allows one to see internal structures such as organelles and the interior of membranes , scanning electron microscopy can be used to view the surfaces of larger objects ( such as a pollen grain ) as well as the surfaces of very small samples ( Figure 2.24 ) . 
<hl> Some EMs can magnify an image up to 2,000 , 000 ⨯ . 1 1 “ JEM-ARM 200F Transmission Electron Microscope , ” JEOL USA Inc , http://www.jeolusa.com/PRODUCTS/TransmissionElectronMicroscopes%28TEM%29/200kV/JEM-ARM200F/tabid/663/Default.aspx#195028-specifications . Accessed 8/ 28/2015 .", "hl_sentences": "Whereas transmission electron microscopy requires very thin sections and allows one to see internal structures such as organelles and the interior of membranes , scanning electron microscopy can be used to view the surfaces of larger objects ( such as a pollen grain ) as well as the surfaces of very small samples ( Figure 2.24 ) .", "question": { "cloze_format": "The type of microscope that would be the best choice for viewing very small surface structures of a cell is ___ .", "normal_format": "Which type of microscope would be the best choice for viewing very small surface structures of a cell?", "question_choices": [ "a transmission electron microscope", "a scanning electron microscope", "a brightfield microscope", "a darkfield microscope", "a phase-contrast microscope" ], "question_id": "fs-id1167793985254", "question_text": "Which type of microscope would be the best choice for viewing very small surface structures of a cell?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 4, "ans_text": "E" }, "bloom": null, "hl_context": "<hl> Phase-contrast microscopes use refraction and interference caused by structures in a specimen to create high-contrast , high-resolution images without staining . <hl> It is the oldest and simplest type of microscope that creates an image by altering the wavelengths of light rays passing through the specimen . <hl> To create altered wavelength paths , an annular stop is used in the condenser . <hl> The annular stop produces a hollow cone of light that is focused on the specimen before reaching the objective lens . The objective contains a phase plate containing a phase ring . As a result , light traveling directly from the illuminator passes through the phase ring while light refracted or reflected by the specimen passes through the plate . This causes waves traveling through the ring to be about one-half of a wavelength out of phase with those passing through the plate . Because waves have peaks and troughs , they can add together ( if in phase together ) or cancel each other out ( if out of phase ) . When the wavelengths are out of phase , wave troughs will cancel out wave peaks , which is called destructive interference . Structures that refract light then appear dark against a bright background of only unrefracted light . More generally , structures that differ in features such as refractive index will differ in levels of darkness ( Figure 2.16 ) .", "hl_sentences": "Phase-contrast microscopes use refraction and interference caused by structures in a specimen to create high-contrast , high-resolution images without staining . To create altered wavelength paths , an annular stop is used in the condenser .", "question": { "cloze_format": "The type of microscope that uses an annular stop is ___.", "normal_format": "What type of microscope uses an annular stop?", "question_choices": [ "a transmission electron microscope", "a scanning electron microscope", "a brightfield microscope", "a darkfield microscope", "a phase-contrast microscope" ], "question_id": "fs-id1167793270776", "question_text": "What type of microscope uses an annular stop?" 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> Phase-contrast microscopes use refraction and interference caused by structures in a specimen to create high-contrast , high-resolution images without staining . <hl> It is the oldest and simplest type of microscope that creates an image by altering the wavelengths of light rays passing through the specimen . To create altered wavelength paths , an annular stop is used in the condenser . The annular stop produces a hollow cone of light that is focused on the specimen before reaching the objective lens . The objective contains a phase plate containing a phase ring . As a result , light traveling directly from the illuminator passes through the phase ring while light refracted or reflected by the specimen passes through the plate . This causes waves traveling through the ring to be about one-half of a wavelength out of phase with those passing through the plate . Because waves have peaks and troughs , they can add together ( if in phase together ) or cancel each other out ( if out of phase ) . When the wavelengths are out of phase , wave troughs will cancel out wave peaks , which is called destructive interference . <hl> Structures that refract light then appear dark against a bright background of only unrefracted light . <hl> More generally , structures that differ in features such as refractive index will differ in levels of darkness ( Figure 2.16 ) . <hl> A darkfield microscope is a brightfield microscope that has a small but significant modification to the condenser . <hl> A small , opaque disk ( about 1 cm in diameter ) is placed between the illuminator and the condenser lens . This opaque light stop , as the disk is called , blocks most of the light from the illuminator as it passes through the condenser on its way to the objective lens , producing a hollow cone of light that is focused on the specimen . The only light that reaches the objective is light that has been refracted or reflected by structures in the specimen . <hl> The resulting image typically shows bright objects on a dark background ( Figure 2.14 ) . <hl>", "hl_sentences": "Phase-contrast microscopes use refraction and interference caused by structures in a specimen to create high-contrast , high-resolution images without staining . Structures that refract light then appear dark against a bright background of only unrefracted light . A darkfield microscope is a brightfield microscope that has a small but significant modification to the condenser . The resulting image typically shows bright objects on a dark background ( Figure 2.14 ) .", "question": { "cloze_format": "___ is a type of microscope that uses a cone of light so that light only hits the specimen indirectly, producing a darker image on a brighter background.", "normal_format": "What type of microscope uses a cone of light so that light only hits the specimen indirectly, producing a darker image on a brighter background?", "question_choices": [ "a transmission electron microscope", "a scanning electron microscope", "a brightfield microscope", "a darkfield microscope", "a phase-contrast microscope" ], "question_id": "fs-id1167794036522", "question_text": "What type of microscope uses a cone of light so that light only hits the specimen indirectly, producing a darker image on a brighter background?" 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> Explain the role of Gram ’ s iodine in the Gram stain procedure . <hl> <hl> Next , Gram ’ s iodine , a mordant , is added . <hl> A mordant is a substance used to set or stabilize stains or dyes ; in this case , Gram ’ s iodine acts like a trapping agent that complexes with the crystal violet , making the crystal violet – iodine complex clump and stay contained in thick layers of peptidoglycan in the cell walls .", "hl_sentences": "Explain the role of Gram ’ s iodine in the Gram stain procedure . Next , Gram ’ s iodine , a mordant , is added .", "question": { "cloze_format": "The ___ mordant is used in Gram staining.", "normal_format": "What mordant is used in Gram staining?", "question_choices": [ "crystal violet", "safranin", "acid-alcohol", "iodine" ], "question_id": "fs-id1167793372951", "question_text": "What mordant is used in Gram staining?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> When samples are prepared for viewing using an SEM , they must also be dehydrated using an ethanol series . <hl> However , they must be even drier than is necessary for a TEM . Critical point drying with inert liquid carbon dioxide under pressure is used to displace the water from the specimen . <hl> After drying , the specimens are sputter-coated with metal by knocking atoms off of a palladium target , with energetic particles . <hl> Sputter-coating prevents specimens from becoming charged by the SEM ’ s electron beam .", "hl_sentences": "When samples are prepared for viewing using an SEM , they must also be dehydrated using an ethanol series . After drying , the specimens are sputter-coated with metal by knocking atoms off of a palladium target , with energetic particles .", "question": { "cloze_format": "One difference between specimen preparation for a transmission electron microscope (TEM) and preparation for a scanning electron microscope (SEM) is that ___.", "normal_format": "What is one difference between specimen preparation for a transmission electron microscope (TEM) and preparation for a scanning electron microscope (SEM)?", "question_choices": [ "Only the TEM specimen requires sputter coating.", "Only the SEM specimen requires sputter-coating.", "Only the TEM specimen must be dehydrated.", "Only the SEM specimen must be dehydrated." ], "question_id": "fs-id1167793964407", "question_text": "What is one difference between specimen preparation for a transmission electron microscope (TEM) and preparation for a scanning electron microscope (SEM)?" }, "references_are_paraphrase": null } ]
2
2.1 The Properties of Light Learning Objectives Identify and define the characteristics of electromagnetic radiation (EMR) used in microscopy Explain how lenses are used in microscopy to manipulate visible and ultraviolet (UV) light Clinical Focus Part 1 Cindy, a 17-year-old counselor at a summer sports camp, scraped her knee playing basketball 2 weeks ago. At the time, she thought it was only a minor abrasion that would heal, like many others before it. Instead, the wound began to look like an insect bite and has continued to become increasingly painful and swollen. The camp nurse examines the lesion and observes a large amount of pus oozing from the surface. Concerned that Cindy may have developed a potentially aggressive infection, she swabs the wound to collect a sample from the infection site. Then she cleans out the pus and dresses the wound, instructing Cindy to keep the area clean and to come back the next day. When Cindy leaves, the nurse sends the sample to the closest medical lab to be analyzed under a microscope. What are some things we can learn about these bacteria by looking at them under a microscope? Jump to the next Clinical Focus box. Visible light consists of electromagnetic waves that behave like other waves. Hence, many of the properties of light that are relevant to microscopy can be understood in terms of light's behavior as a wave. An important property of light waves is the wavelength, or the distance between one peak of a wave and the next peak. The height of each peak (or depth of each trough) is called the amplitude. In contrast, the frequency of the wave is the rate of vibration of the wave, or the number of wavelengths within a specified time period (Figure 2.2). Interactions of Light Light waves interact with materials by being reflected, absorbed, or transmitted. Reflection occurs when a wave bounces off of a material. For example, a red piece of cloth may reflect red light to our eyes while absorbing other colors of light. Absorbance occurs when a material captures the energy of a light wave. In the case of glow-in-the-dark plastics, the energy from light can be absorbed and then re-emitted later as visible light, a process called phosphorescence. Transmission occurs when a wave travels through a material, like light through glass (the process of transmission is called transmittance). When a material allows a large proportion of light to be transmitted, it may do so because it is thinner, or more transparent (having more transparency and less opacity). Figure 2.3 illustrates the difference between transparency and opacity. Light waves can also interact with each other by interference, creating complex patterns of motion. Dropping two pebbles into a puddle causes the waves on the puddle's surface to interact, creating complex interference patterns. Light waves can interact in the same way. In addition to interfering with each other, light waves can also interact with small objects or openings by bending or scattering. This is called diffraction. Diffraction is larger when the object is smaller relative to the wavelength of the light (the distance between two consecutive peaks of a light wave). Often, when waves diffract in different directions around an obstacle or opening, they will interfere with each other. Check Your Understanding If a light wave has a long wavelength, is it likely to have a low or high frequency? If an object is transparent, does it reflect, absorb, or transmit light?
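Because all colors of light travel at the same speed in a vacuum, wavelength and frequency are tied together by the relation c = λν. The relation and the constant below are standard physics values rather than figures from this chapter; a minimal sketch in Python makes the inverse relationship between wavelength and frequency concrete:

# Wavelength and frequency of light are linked by c = wavelength * frequency.
C = 3.0e8  # approximate speed of light in a vacuum, m/s

def frequency_hz(wavelength_nm):
    """Return the frequency (Hz) for light of the given wavelength (nm)."""
    return C / (wavelength_nm * 1e-9)

print(frequency_hz(700))  # long-wavelength red light: ~4.3e14 Hz (lower frequency)
print(frequency_hz(400))  # short-wavelength violet light: ~7.5e14 Hz (higher frequency)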
Lenses and Refraction In the context of microscopy, refraction is perhaps the most important behavior exhibited by light waves. Refraction occurs when light waves change direction as they enter a new medium (Figure 2.4). Different transparent materials transmit light at different speeds; thus, light can change speed when passing from one material to another. This change in speed usually also causes a change in direction (refraction), with the degree of change dependent on the angle of the incoming light. The extent to which a material slows transmission speed relative to empty space is called the refractive index of that material. Large differences between the refractive indices of two materials will result in a large amount of refraction when light passes from one material to the other. For example, light moves much more slowly through water than through air, so light entering water from air can change direction greatly. We say that the water has a higher refractive index than air (Figure 2.5). When light crosses a boundary into a material with a higher refractive index, its direction turns to be closer to perpendicular to the boundary (i.e., more toward a normal to that boundary; see Figure 2.5). This is the principle behind lenses. We can think of a lens as an object with a curved boundary (or a collection of prisms) that collects all of the light that strikes it and refracts it so that it all meets at a single point called the image point (focus). A convex lens can be used to magnify because it can focus at closer range than the human eye, producing a larger image. Concave lenses and mirrors can also be used in microscopes to redirect the light path. Figure 2.6 shows the focal point (the image point when light entering the lens is parallel) and the focal length (the distance to the focal point) for convex and concave lenses. The human eye contains a lens that enables us to see images. This lens focuses the light reflecting off of objects in front of the eye onto the surface of the retina, which is like a screen in the back of the eye. Artificial lenses placed in front of the eye (contact lenses, glasses, or microscopic lenses) focus light before it is focused (again) by the lens of the eye, manipulating the image that ends up on the retina (e.g., by making it appear larger). Images are commonly manipulated by controlling the distances between the object, the lens, and the screen, as well as the curvature of the lens. For example, for a given amount of curvature, when an object is closer to the lens, the image point is farther from the lens. As a result, it is often necessary to manipulate these distances to create a focused image on a screen. Similarly, more curvature creates image points closer to the lens and a larger image when the image is in focus. This property is often described in terms of the focal distance, or distance to the focal point. Check Your Understanding Explain how a lens focuses light at the image point. Name some factors that affect the focal length of a lens.
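The amount of bending at a boundary can be made quantitative with Snell's law, n1·sin(θ1) = n2·sin(θ2), where the n values are the refractive indices of the two media and the angles are measured from the normal to the boundary. Snell's law and the approximate indices used below (about 1.00 for air and 1.33 for water) are standard optics values, not figures from this chapter; a minimal sketch:

import math

def refraction_angle_deg(n1, n2, incident_deg):
    """Apply Snell's law n1*sin(t1) = n2*sin(t2); angles are measured from the normal."""
    sin_t2 = n1 * math.sin(math.radians(incident_deg)) / n2
    if abs(sin_t2) > 1:
        return None  # total internal reflection: no refracted ray exists
    return math.degrees(math.asin(sin_t2))

# Light entering water (higher refractive index) from air bends toward the normal:
print(refraction_angle_deg(1.00, 1.33, 45))  # ~32 degrees, closer to the normal than 45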
Whereas wavelength represents the distance between adjacent peaks of a light wave, frequency, in a simplified definition, represents the rate of oscillation. Waves with higher frequencies have shorter wavelengths and, therefore, have more oscillations per unit time than lower-frequency waves. Higher-frequency waves also contain more energy than lower-frequency waves. This energy is delivered as elementary particles called photons. Higher-frequency waves deliver more energetic photons than lower-frequency waves. Photons with different energies interact differently with the retina. In the spectrum of visible light, each color corresponds to a particular frequency and wavelength (Figure 2.7). The lowest frequency of visible light appears as the color red, whereas the highest appears as the color violet. When the retina receives visible light of many different frequencies, we perceive this as white light. However, white light can be separated into its component colors using refraction. If we pass white light through a prism, different colors will be refracted in different directions, creating a rainbow-like spectrum on a screen behind the prism. This separation of colors is called dispersion, and it occurs because, for a given material, the refractive index is different for different frequencies of light. Certain materials can refract nonvisible forms of EMR and, in effect, transform them into visible light. Certain fluorescent dyes, for instance, absorb ultraviolet or blue light and then use the energy to emit photons of a different color, giving off light rather than simply vibrating. This occurs because the energy absorption causes electrons to jump to higher energy states, after which they then almost immediately fall back down to their ground states, emitting specific amounts of energy as photons. Not all of the energy is emitted in a given photon, so the emitted photons will be of lower energy and, thus, of lower frequency than the absorbed ones. Thus, a dye such as Texas red may be excited by blue light, but emit red light; or a dye such as fluorescein isothiocyanate (FITC) may absorb (invisible) high-energy ultraviolet light and emit green light (Figure 2.8). In some materials, the photons may be emitted following a delay after absorption; in this case, the process is called phosphorescence. Glow-in-the-dark plastic works by using phosphorescent material. Check Your Understanding Which has a higher frequency: red light or green light? Explain why dispersion occurs when white light passes through a prism. Why do fluorescent dyes emit a different color of light than they absorb?
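The statement that higher-frequency waves deliver more energetic photons can be checked with the Planck relation E = hc/λ, a standard physics formula that is not given in the chapter itself; a minimal sketch comparing the two ends of the visible spectrum:

H = 6.626e-34  # Planck's constant, J*s
C = 3.0e8      # approximate speed of light, m/s

def photon_energy_joules(wavelength_nm):
    """Energy of a single photon, E = h*c / wavelength."""
    return H * C / (wavelength_nm * 1e-9)

print(photon_energy_joules(700))  # red photon:    ~2.8e-19 J
print(photon_energy_joules(400))  # violet photon: ~5.0e-19 J (higher frequency, more energy)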
Magnification, Resolution, and Contrast Microscopes magnify images and use the properties of light to create useful images of small objects. Magnification is defined as the ability of a lens to enlarge the image of an object when compared to the real object. For example, a magnification of 10⨯ means that the image appears 10 times the size of the object as viewed with the naked eye. Greater magnification typically improves our ability to see details of small objects, but magnification alone is not sufficient to make the most useful images. It is often useful to enhance the resolution of objects: the ability to tell that two separate points or objects are separate. A low-resolution image appears fuzzy, whereas a high-resolution image appears sharp. Two factors affect resolution. The first is wavelength. Shorter wavelengths are able to resolve smaller objects; thus, an electron microscope has a much higher resolution than a light microscope, since it uses an electron beam with a very short wavelength, as opposed to the long-wavelength visible light used by a light microscope. The second factor that affects resolution is numerical aperture, which is a measure of a lens's ability to gather light. The higher the numerical aperture, the better the resolution. Link to Learning Read this article to learn more about factors that can increase or decrease the numerical aperture of a lens. Even when a microscope has high resolution, it can be difficult to distinguish small structures in many specimens because microorganisms are relatively transparent. It is often necessary to increase contrast to detect different structures in a specimen. Various types of microscopes use different features of light or electrons to increase contrast—visible differences between the parts of a specimen (see Instruments of Microscopy). Additionally, dyes that bind to some structures but not others can be used to improve the contrast between images of relatively transparent objects (see Staining Microscopic Specimens). Check Your Understanding Explain the difference between magnification and resolution. Explain the difference between resolution and contrast. Name two factors that affect resolution.
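Both factors named above, wavelength and numerical aperture, appear together in the Abbe diffraction limit, d ≈ λ / (2·NA), a standard estimate of the smallest separation a lens can resolve. The formula and the example numbers below are illustrative additions, not values from the chapter:

def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Approximate smallest resolvable separation (nm): d = wavelength / (2 * NA)."""
    return wavelength_nm / (2 * numerical_aperture)

# Shorter wavelengths or higher numerical apertures resolve smaller details:
print(abbe_limit_nm(550, 0.95))  # green light, dry objective: ~290 nm
print(abbe_limit_nm(550, 1.40))  # green light, oil-immersion objective: ~200 nm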
2.2 Peering Into the Invisible World Learning Objectives Describe historical developments and individual contributions that led to the invention and development of the microscope Compare and contrast the features of simple and compound microscopes Some of the fundamental characteristics and functions of microscopes can be understood in the context of the history of their use. Italian scholar Girolamo Fracastoro is regarded as the first person to formally postulate that disease was spread by tiny invisible seminaria, or "seeds of the contagion." In his book De Contagione (1546), he proposed that these seeds could attach themselves to certain objects (which he called fomes [cloth]) that supported their transfer from person to person. However, since the technology for seeing such tiny objects did not yet exist, the existence of the seminaria remained hypothetical for a little over a century—an invisible world waiting to be revealed. Early Microscopes Antonie van Leeuwenhoek, sometimes hailed as "the Father of Microbiology," is typically credited as the first person to have created microscopes powerful enough to view microbes (Figure 2.9). Born in the city of Delft in the Dutch Republic, van Leeuwenhoek began his career selling fabrics. However, he later became interested in lens making (perhaps to look at threads) and his innovative techniques produced microscopes that allowed him to observe microorganisms as no one had before. In 1674, he described his observations of single-celled organisms, whose existence was previously unknown, in a series of letters to the Royal Society of London. His report was initially met with skepticism, but his claims were soon verified and he became something of a celebrity in the scientific community. While van Leeuwenhoek is credited with the discovery of microorganisms, others before him had contributed to the development of the microscope. These included eyeglass makers in the Netherlands in the late 1500s, as well as the Italian astronomer Galileo Galilei, who used a compound microscope to examine insect parts (Figure 2.9). Whereas van Leeuwenhoek used a simple microscope, in which light is passed through just one lens, Galileo's compound microscope was more sophisticated, passing light through two sets of lenses. Van Leeuwenhoek's contemporary, the Englishman Robert Hooke (1635–1703), also made important contributions to microscopy, publishing in his book Micrographia (1665) many observations using compound microscopes. Viewing a thin sample of cork through his microscope, he was the first to observe the structures that we now know as cells (Figure 2.10). Hooke described these structures as resembling "Honey-comb," and as "small Boxes or Bladders of Air," noting that each "Cavern, Bubble, or Cell" is distinct from the others (in Latin, "cell" literally means "small room"). They likely appeared to Hooke to be filled with air because the cork cells were dead, with only the rigid cell walls providing the structure. Check Your Understanding Explain the difference between simple and compound microscopes. Compare and contrast the contributions of van Leeuwenhoek, Hooke, and Galileo to early microscopy. Micro Connections Who Invented the Microscope? While Antonie van Leeuwenhoek and Robert Hooke generally receive much of the credit for early advances in microscopy, neither can claim to be the inventor of the microscope. Some argue that this designation should belong to Hans and Zaccharias Janssen, Dutch spectacle-makers who may have invented the telescope, the simple microscope, and the compound microscope during the late 1500s or early 1600s (Figure 2.11). Unfortunately, little is known for sure about the Janssens, not even the exact dates of their births and deaths. The Janssens were secretive about their work and never published. It is also possible that the Janssens did not invent anything at all; their neighbor, Hans Lippershey, also developed microscopes and telescopes during the same time frame, and he is often credited with inventing the telescope. The historical records from the time are as fuzzy and imprecise as the images viewed through those early lenses, and any archived records have been lost over the centuries. By contrast, van Leeuwenhoek and Hooke can thank ample documentation of their work for their respective legacies. Like Janssen, van Leeuwenhoek began his work in obscurity, leaving behind few records. However, his friend, the prominent physician Reinier de Graaf, wrote a letter to the editor of the Philosophical Transactions of the Royal Society of London calling attention to van Leeuwenhoek's powerful microscopes. From 1673 onward, van Leeuwenhoek began regularly submitting letters to the Royal Society detailing his observations. In 1674, his report describing single-celled organisms produced controversy in the scientific community, but his observations were soon confirmed when the society sent a delegation to investigate his findings. He subsequently enjoyed considerable celebrity, at one point even entertaining a visit by the czar of Russia. Similarly, Robert Hooke had his observations using microscopes published by the Royal Society in a book called Micrographia in 1665. The book became a bestseller and greatly increased interest in microscopy throughout much of Europe.
2.3 Instruments of Microscopy Learning Objectives Identify and describe the parts of a brightfield microscope Calculate total magnification for a compound microscope Describe the distinguishing features and typical uses for various types of light microscopes, electron microscopes, and scanning probe microscopes The early pioneers of microscopy opened a window into the invisible world of microorganisms. But microscopy continued to advance in the centuries that followed. In 1830, Joseph Jackson Lister created an essentially modern light microscope. The 20th century saw the development of microscopes that leveraged nonvisible light, such as fluorescence microscopy, which uses an ultraviolet light source, and electron microscopy, which uses short-wavelength electron beams. These advances led to major improvements in magnification, resolution, and contrast. By comparison, the relatively rudimentary microscopes of van Leeuwenhoek and his contemporaries were far less powerful than even the most basic microscopes in use today. In this section, we will survey the broad range of modern microscopic technology and common applications for each type of microscope. Light Microscopy Many types of microscopes fall under the category of light microscopes, which use light to visualize images. Examples of light microscopes include brightfield microscopes, darkfield microscopes, phase-contrast microscopes, differential interference contrast microscopes, fluorescence microscopes, confocal scanning laser microscopes, and two-photon microscopes. These various types of light microscopes can be used to complement each other in diagnostics and research. Brightfield Microscopes The brightfield microscope, perhaps the most commonly used type of microscope, is a compound microscope with two or more lenses that produce a dark image on a bright background. Some brightfield microscopes are monocular (having a single eyepiece), though most newer brightfield microscopes are binocular (having two eyepieces), like the one shown in Figure 2.12; in either case, each eyepiece contains a lens called an ocular lens. The ocular lenses typically magnify images 10 times (10⨯). At the other end of the body tube are a set of objective lenses on a rotating nosepiece. The magnification of these objective lenses typically ranges from 4⨯ to 100⨯, with the magnification for each lens designated on the metal casing of the lens. The ocular and objective lenses work together to create a magnified image. The total magnification is the product of the ocular magnification times the objective magnification: ocular magnification × objective magnification. For example, if a 40⨯ objective lens is selected and the ocular lens is 10⨯, the total magnification would be (40×)(10×) = 400×. The item being viewed is called a specimen. The specimen is placed on a glass slide, which is then clipped into place on the stage (a platform) of the microscope. Once the slide is secured, the specimen on the slide is positioned over the light using the x-y mechanical stage knobs. These knobs move the slide on the surface of the stage, but do not raise or lower the stage. Once the specimen is centered over the light, the stage position can be raised or lowered to focus the image. The coarse focusing knob is used for large-scale movements with 4⨯ and 10⨯ objective lenses; the fine focusing knob is used for small-scale movements, especially with 40⨯ or 100⨯ objective lenses.
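The total-magnification rule is simple enough to state directly in code; this sketch just restates the chapter's own formula and worked example:

def total_magnification(ocular, objective):
    """Total magnification of a compound microscope = ocular x objective magnification."""
    return ocular * objective

print(total_magnification(10, 4))    # 40x with the 4x scanning objective
print(total_magnification(10, 40))   # 400x, matching the worked example above
print(total_magnification(10, 100))  # 1000x with the 100x (oil-immersion) objective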
When images are magnified, they become dimmer because there is less light per unit area of image. Highly magnified images produced by microscopes, therefore, require intense lighting. In a brightfield microscope, this light is provided by an illuminator, which is typically a high-intensity bulb below the stage. Light from the illuminator passes up through the condenser lens (located below the stage), which focuses all of the light rays on the specimen to maximize illumination. The position of the condenser can be optimized using the attached condenser focus knob; once the optimal distance is established, the condenser should not be moved to adjust the brightness. If less-than-maximal light levels are needed, the amount of light striking the specimen can be easily adjusted by opening or closing a diaphragm between the condenser and the specimen. In some cases, brightness can also be adjusted using the rheostat, a dimmer switch that controls the intensity of the illuminator. A brightfield microscope creates an image by directing light from the illuminator at the specimen; this light is differentially transmitted, absorbed, reflected, or refracted by different structures. Different colors can behave differently as they interact with chromophores (pigments that absorb and reflect particular wavelengths of light) in parts of the specimen. Often, chromophores are artificially added to the specimen using stains, which serve to increase contrast and resolution. In general, structures in the specimen will appear darker, to various extents, than the bright background, creating maximally sharp images at magnifications up to about 1000⨯. Further magnification would create a larger image, but without increased resolution. This allows us to see objects as small as bacteria, which are visible at about 400⨯ or so, but not smaller objects such as viruses. At very high magnifications, resolution may be compromised when light passes through the small amount of air between the specimen and the lens. This is due to the large difference between the refractive indices of air and glass; the air scatters the light rays before they can be focused by the lens. To solve this problem, a drop of oil can be used to fill the space between the specimen and an oil immersion lens, a special lens designed to be used with immersion oils. Since the oil has a refractive index very similar to that of glass, it increases the maximum angle at which light leaving the specimen can strike the lens. This increases the light collected and, thus, the resolution of the image (Figure 2.13). A variety of oils can be used for different types of light.
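The improvement from immersion oil can be expressed through the numerical aperture, NA = n·sin(θ), where n is the refractive index of the medium between the specimen and the lens and θ is the half-angle of the cone of light the objective collects. The formula and the index of roughly 1.515 for immersion oil are standard optics values added here for illustration:

import math

def numerical_aperture(n_medium, half_angle_deg):
    """NA = n * sin(theta): a measure of how much light the objective gathers."""
    return n_medium * math.sin(math.radians(half_angle_deg))

# Same collection angle, but a different medium between specimen and lens:
print(numerical_aperture(1.00, 70))   # air:           NA ~ 0.94
print(numerical_aperture(1.515, 70))  # immersion oil: NA ~ 1.42 (better resolution)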
Micro Connections Microscope Maintenance: Best Practices Even a very powerful microscope cannot deliver high-resolution images if it is not properly cleaned and maintained. Since lenses are carefully designed and manufactured to refract light with a high degree of precision, even a slightly dirty or scratched lens will refract light in unintended ways, degrading the image of the specimen. In addition, microscopes are rather delicate instruments, and great care must be taken to avoid damaging parts and surfaces. Among other things, proper care of a microscope includes the following:
cleaning the lenses with lens paper
not allowing lenses to contact the slide (e.g., by rapidly changing the focus)
protecting the bulb (if there is one) from breakage
not pushing an objective into a slide
not using the coarse focusing knob when using the 40⨯ or greater objective lenses
only using immersion oil with a specialized oil objective, usually the 100⨯ objective
cleaning oil from immersion lenses after using the microscope
cleaning any oil accidentally transferred from other lenses
covering the microscope or placing it in a cabinet when not in use
Link to Learning Visit the online resource linked below for simulations and demonstrations involving the use of microscopes. Keep in mind that execution of specific techniques and procedures can vary depending on the specific instrument you are using. Thus, it is important to learn and practice with an actual microscope in a laboratory setting under expert supervision. University of Delaware's Virtual Microscope Darkfield Microscopy A darkfield microscope is a brightfield microscope that has a small but significant modification to the condenser. A small, opaque disk (about 1 cm in diameter) is placed between the illuminator and the condenser lens. This opaque light stop, as the disk is called, blocks most of the light from the illuminator as it passes through the condenser on its way to the objective lens, producing a hollow cone of light that is focused on the specimen. The only light that reaches the objective is light that has been refracted or reflected by structures in the specimen. The resulting image typically shows bright objects on a dark background (Figure 2.14). Darkfield microscopy can often create high-contrast, high-resolution images of specimens without the use of stains, which is particularly useful for viewing live specimens that might be killed or otherwise compromised by the stains. For example, thin spirochetes like Treponema pallidum, the causative agent of syphilis, can be best viewed using a darkfield microscope (Figure 2.15). Check Your Understanding Identify the key differences between brightfield and darkfield microscopy. Clinical Focus Part 2 Wound infections like Cindy's can be caused by many different types of bacteria, some of which can spread rapidly with serious complications. Identifying the specific cause is very important to select a medication that can kill or stop the growth of the bacteria. After calling a local doctor about Cindy's case, the camp nurse sends the sample from the wound to the closest medical laboratory. Unfortunately, since the camp is in a remote area, the nearest lab is small and poorly equipped. A more modern lab would likely use other methods to culture, grow, and identify the bacteria, but in this case, the technician decides to make a wet mount from the specimen and view it under a brightfield microscope. In a wet mount, a small drop of water is added to the slide, and a cover slip is placed over the specimen to keep it in place before it is positioned under the objective lens. Under the brightfield microscope, the technician can barely see the bacteria cells because they are nearly transparent against the bright background. To increase contrast, the technician inserts an opaque light stop above the illuminator. The resulting darkfield image clearly shows that the bacteria cells are spherical and grouped in clusters, like grapes.
Why is it important to identify the shape and growth patterns of cells in a specimen? What other types of microscopy could be used effectively to view this specimen? Jump to the next Clinical Focus box. Go back to the previous Clinical Focus box. Phase-Contrast Microscopes Phase-contrast microscopes use refraction and interference caused by structures in a specimen to create high-contrast, high-resolution images without staining. It is the oldest and simplest type of microscope that creates an image by altering the wavelengths of light rays passing through the specimen. To create altered wavelength paths, an annular stop is used in the condenser. The annular stop produces a hollow cone of light that is focused on the specimen before reaching the objective lens. The objective contains a phase plate containing a phase ring. As a result, light traveling directly from the illuminator passes through the phase ring while light refracted or reflected by the specimen passes through the plate. This causes waves traveling through the ring to be about one-half of a wavelength out of phase with those passing through the plate. Because waves have peaks and troughs, they can add together (if in phase together) or cancel each other out (if out of phase). When the wavelengths are out of phase, wave troughs will cancel out wave peaks, which is called destructive interference. Structures that refract light then appear dark against a bright background of only unrefracted light. More generally, structures that differ in features such as refractive index will differ in levels of darkness (Figure 2.16). Because it increases contrast without requiring stains, phase-contrast microscopy is often used to observe live specimens. Certain structures, such as organelles in eukaryotic cells and endospores in prokaryotic cells, are especially well visualized with phase-contrast microscopy (Figure 2.17). Differential Interference Contrast Microscopes Differential interference contrast (DIC) microscopes (also known as Nomarski optics) are similar to phase-contrast microscopes in that they use interference patterns to enhance contrast between different features of a specimen. In a DIC microscope, two beams of light are created in which the direction of wave movement (polarization) differs. Once the beams pass through either the specimen or specimen-free space, they are recombined and effects of the specimens cause differences in the interference patterns generated by the combining of the beams. This results in high-contrast images of living organisms with a three-dimensional appearance. These microscopes are especially useful in distinguishing structures within live, unstained specimens (Figure 2.18). Check Your Understanding What are some advantages of phase-contrast and DIC microscopy?
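The interference idea behind phase-contrast (and DIC) imaging can be sketched numerically: two identical waves reinforce when aligned and cancel when shifted by half a wavelength. This is a toy demonstration of superposition, not a model of the actual microscope optics:

import math

def superpose(phase_shift_fraction, samples=8):
    """Sum two unit sine waves, the second shifted by a fraction of a wavelength."""
    shift = 2 * math.pi * phase_shift_fraction
    return [math.sin(2 * math.pi * t / samples) +
            math.sin(2 * math.pi * t / samples + shift)
            for t in range(samples)]

print(superpose(0.0))  # in phase: the amplitudes add (constructive interference)
print(superpose(0.5))  # half a wavelength out of phase: sums are ~0 (destructive interference)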
Fluorescence Microscopes A fluorescence microscope uses fluorescent chromophores called fluorochromes, which are capable of absorbing energy from a light source and then emitting this energy as visible light. Fluorochromes include naturally fluorescent substances (such as chlorophylls) as well as fluorescent stains that are added to the specimen to create contrast. Dyes such as Texas red and FITC are examples of fluorochromes. Other examples include the nucleic acid dyes 4',6'-diamidino-2-phenylindole (DAPI) and acridine orange. The microscope transmits an excitation light, generally a form of EMR with a short wavelength, such as ultraviolet or blue light, toward the specimen; the chromophores absorb the excitation light and emit visible light with longer wavelengths. The excitation light is then filtered out (in part because ultraviolet light is harmful to the eyes) so that only visible light passes through the ocular lens. This produces an image of the specimen in bright colors against a dark background. Fluorescence microscopes are especially useful in clinical microbiology. They can be used to identify pathogens, to find particular species within an environment, or to find the locations of particular molecules and structures within a cell. Approaches have also been developed to distinguish living from dead cells using fluorescence microscopy based upon whether they take up particular fluorochromes. Sometimes, multiple fluorochromes are used on the same specimen to show different structures or features. One of the most important applications of fluorescence microscopy is a technique called immunofluorescence, which is used to identify certain disease-causing microbes by observing whether antibodies bind to them. (Antibodies are protein molecules produced by the immune system that attach to specific pathogens to kill or inhibit them.) There are two approaches to this technique: direct immunofluorescence assay (DFA) and indirect immunofluorescence assay (IFA). In DFA, specific antibodies (e.g., those that target the rabies virus) are stained with a fluorochrome. If the specimen contains the targeted pathogen, one can observe the antibodies binding to the pathogen under the fluorescent microscope. This is called a primary antibody stain because the stained antibodies attach directly to the pathogen. In IFA, secondary antibodies are stained with a fluorochrome rather than primary antibodies. Secondary antibodies do not attach directly to the pathogen, but they do bind to primary antibodies. When the unstained primary antibodies bind to the pathogen, the fluorescent secondary antibodies can be observed binding to the primary antibodies. Thus, the secondary antibodies are attached indirectly to the pathogen. Since multiple secondary antibodies can often attach to a primary antibody, IFA increases the number of fluorescent antibodies attached to the specimen, making it easier to visualize features in the specimen (Figure 2.19). Check Your Understanding Why must fluorochromes be used to examine a specimen under a fluorescence microscope? Confocal Microscopes Whereas other forms of light microscopy create an image that is maximally focused at a single distance from the observer (the depth, or z-plane), a confocal microscope uses a laser to scan multiple z-planes successively. This produces numerous two-dimensional, high-resolution images at various depths, which can be constructed into a three-dimensional image by a computer. As with fluorescence microscopes, fluorescent stains are generally used to increase contrast and resolution. Image clarity is further enhanced by a narrow aperture that eliminates any light that is not from the z-plane. Confocal microscopes are thus very useful for examining thick specimens such as biofilms, which can be examined alive and unfixed (Figure 2.20). Link to Learning Explore a rotating three-dimensional view of a biofilm as observed under a confocal microscope. After navigating to the webpage, click the "play" button to launch the video.
Two-Photon Microscopes While the original fluorescent and confocal microscopes allowed better visualization of unique features in specimens, there were still problems that prevented optimum visualization. The effective sensitivity of fluorescence microscopy when viewing thick specimens was generally limited by out-of-focus flare, which resulted in poor resolution. This limitation was greatly reduced in the confocal microscope through the use of a confocal pinhole that rejects out-of-focus background fluorescence, producing thin (<1 μm), unblurred optical sections. However, even confocal microscopes lacked the resolution needed for viewing thick tissue samples. These problems were resolved with the development of the two-photon microscope, which uses a scanning technique, fluorochromes, and long-wavelength light (such as infrared) to visualize specimens. The low energy associated with the long-wavelength light means that two photons must strike a location at the same time to excite the fluorochrome. The low energy of the excitation light is less damaging to cells, and the long wavelength of the excitation light more easily penetrates deep into thick specimens. This makes the two-photon microscope useful for examining living cells within intact tissues—brain slices, embryos, whole organs, and even entire animals. Currently, use of two-photon microscopes is limited to advanced clinical and research laboratories because of the high costs of the instruments. A single two-photon microscope typically costs between $300,000 and $500,000, and the lasers used to excite the dyes used on specimens are also very expensive. However, as technology improves, two-photon microscopes may become more readily available in clinical settings. Check Your Understanding What types of specimens are best examined using confocal or two-photon microscopy? Electron Microscopy The maximum theoretical resolution of images created by light microscopes is ultimately limited by the wavelengths of visible light. Most light microscopes can only magnify 1000⨯, and a few can magnify up to 1500⨯, but this does not begin to approach the magnifying power of an electron microscope (EM), which uses short-wavelength electron beams rather than light to increase magnification and resolution. Electrons, like electromagnetic radiation, can behave as waves, but with wavelengths of 0.005 nm, they can produce much better resolution than visible light. An EM can produce a sharp image that is magnified up to 100,000⨯. Thus, EMs can resolve subcellular structures as well as some molecular structures (e.g., single strands of DNA); however, electron microscopy cannot be used on living material because of the methods needed to prepare the specimens. There are two basic types of EM: the transmission electron microscope (TEM) and the scanning electron microscope (SEM) ( Figure 2.21 ). The TEM is somewhat analogous to the brightfield light microscope in terms of the way it functions. However, it uses an electron beam from above the specimen that is focused using a magnetic lens (rather than a glass lens) and projected through the specimen onto a detector. Electrons pass through the specimen, and then the detector captures the image ( Figure 2.22 ). For electrons to pass through the specimen in a TEM, the specimen must be extremely thin (20–100 nm thick). The image is produced because of varying opacity in various parts of the specimen. This opacity can be enhanced by staining the specimen with materials such as heavy metals, which are electron dense.
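The 0.005 nm electron wavelength quoted above can be checked with the non-relativistic de Broglie relation for an electron accelerated through a potential difference V. The 60 kV value below is an assumed, illustrative accelerating voltage (real instruments vary), and the few-percent relativistic correction is ignored.

$$
\lambda=\frac{h}{\sqrt{2m_{e}eV}}\approx\frac{1.23\,\mathrm{nm}}{\sqrt{V\,[\mathrm{volts}]}},\qquad
\lambda_{60\,\mathrm{kV}}\approx\frac{1.23\,\mathrm{nm}}{\sqrt{6.0\times 10^{4}}}\approx 0.005\,\mathrm{nm}.
$$

Since the smallest resolvable distance scales with wavelength (for light optics, the Abbe limit is roughly d ≈ λ/(2NA)), a wavelength about 100,000 times shorter than that of visible light is what makes the resolution gain possible; in practice, the resolution of real electron microscopes is limited by lens aberrations rather than by wavelength.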
TEM requires that the beam and specimen be in a vacuum and that the specimen be very thin and dehydrated. The specific steps needed to prepare a specimen for observation under an EM are discussed in detail in the next section. SEMs form images of surfaces of specimens, usually from electrons that are knocked off of specimens by a beam of electrons. This can create highly detailed images with a three-dimensional appearance that are displayed on a monitor ( Figure 2.23 ). Typically, specimens are dried and prepared with fixatives that reduce artifacts (such as shriveling) that can be produced by drying, and then sputter-coated with a thin layer of metal such as gold. Whereas transmission electron microscopy requires very thin sections and allows one to see internal structures such as organelles and the interior of membranes, scanning electron microscopy can be used to view the surfaces of larger objects (such as a pollen grain) as well as the surfaces of very small samples ( Figure 2.24 ). Some EMs can magnify an image up to 2,000,000⨯. 1 “JEM-ARM200F Transmission Electron Microscope,” JEOL USA Inc., http://www.jeolusa.com/PRODUCTS/TransmissionElectronMicroscopes%28TEM%29/200kV/JEM-ARM200F/tabid/663/Default.aspx#195028-specifications. Accessed 8/28/2015. Check Your Understanding What are some advantages and disadvantages of electron microscopy, as opposed to light microscopy, for examining microbiological specimens? What kinds of specimens are best examined using TEM? SEM? Micro Connections Using Microscopy to Study Biofilms A biofilm is a complex community of one or more microorganism species, typically forming as a slimy coating attached to a surface because of the production of an extrapolymeric substance (EPS) that attaches to a surface or at the interface between surfaces (e.g., between air and water). In nature, biofilms are abundant and frequently occupy complex niches within ecosystems ( Figure 2.25 ). In medicine, biofilms can coat medical devices and exist within the body. Because they possess unique characteristics, such as increased resistance to both the immune system and antimicrobial drugs, biofilms are of particular interest to microbiologists and clinicians alike. Because biofilms are thick, they cannot be observed very well using light microscopy; slicing a biofilm to create a thinner specimen might kill or disturb the microbial community. Confocal microscopy provides clearer images of biofilms because it can focus on one z-plane at a time and produce a three-dimensional image of a thick specimen. Fluorescent dyes can be helpful in identifying cells within the matrix. Additionally, techniques such as immunofluorescence and fluorescence in situ hybridization (FISH), in which fluorescent probes are used to bind to DNA, can be used. Electron microscopy can be used to observe biofilms, but only after dehydrating the specimen, which produces undesirable artifacts and distorts the specimen. In addition to these approaches, it is possible to follow water currents through the shapes (such as cones and mushrooms) of biofilms, using video of the movement of fluorescently coated beads ( Figure 2.26 ). Scanning Probe Microscopy A scanning probe microscope does not use light or electrons, but rather very sharp probes that are passed over the surface of the specimen and interact with it directly. This produces information that can be assembled into images with magnifications up to 100,000,000⨯. Such large magnifications can be used to observe individual atoms on surfaces.
To date, these techniques have been used primarily for research rather than for diagnostics. There are two types of scanning probe microscope: the scanning tunneling microscope (STM) and the atomic force microscope (AFM). An STM uses a probe that is passed just above the specimen as a constant voltage bias creates the potential for an electric current between the probe and the specimen. This current occurs via quantum tunneling of electrons between the probe and the specimen, and the intensity of the current is dependent upon the distance between the probe and the specimen. The probe is moved horizontally above the surface and the intensity of the current is measured. Scanning tunneling microscopy can effectively map the structure of surfaces at a resolution at which individual atoms can be detected. Similar to an STM, AFMs have a thin probe that is passed just above the specimen. However, rather than measuring variations in the current at a constant height above the specimen, an AFM establishes a constant force and measures variations in the height of the probe tip as it passes over the specimen. As the probe tip is passed over the specimen, forces between the atoms (van der Waals forces, capillary forces, chemical bonding, electrostatic forces, and others) cause it to move up and down. Deflection of the probe tip is determined and measured using Hooke’s law of elasticity, and this information is used to construct images of the surface of the specimen with resolution at the atomic level ( Figure 2.27 ). Figure 2.28, Figure 2.29, and Figure 2.30 summarize the microscopy techniques for light microscopes, electron microscopes, and scanning probe microscopes, respectively. Check Your Understanding Which has higher magnification, a light microscope or a scanning probe microscope? Name one advantage and one limitation of scanning probe microscopy. 2.4 Staining Microscopic Specimens Learning Objectives Differentiate between simple and differential stains Describe the unique features of commonly used stains Explain the procedures and name clinical applications for Gram, endospore, acid-fast, negative capsule, and flagella staining In their natural state, most of the cells and microorganisms that we observe under the microscope lack color and contrast. This makes it difficult, if not impossible, to detect important cellular structures and their distinguishing characteristics without artificially treating specimens. We have already alluded to certain techniques involving stains and fluorescent dyes, and in this section we will discuss specific techniques for sample preparation in greater detail. Indeed, numerous methods have been developed to identify specific microbes, cellular structures, DNA sequences, or indicators of infection in tissue samples under the microscope. Here, we will focus on the most clinically relevant techniques. Preparing Specimens for Light Microscopy In clinical settings, light microscopes are the most commonly used microscopes. There are two basic types of preparation used to view specimens with a light microscope: wet mounts and fixed specimens. The simplest type of preparation is the wet mount, in which the specimen is placed on the slide in a drop of liquid. Some specimens, such as a drop of urine, are already in a liquid form and can be deposited on the slide using a dropper. Solid specimens, such as a skin scraping, can be placed on the slide before adding a drop of liquid to prepare the wet mount.
Sometimes the liquid used is simply water, but often stains are added to enhance contrast. Once the liquid has been added to the slide, a coverslip is placed on top and the specimen is ready for examination under the microscope. The second method of preparing specimens for light microscopy is fixation. The “fixing” of a sample refers to the process of attaching cells to a slide. Fixation is often achieved either by heating (heat fixing) or chemically treating the specimen. In addition to attaching the specimen to the slide, fixation also kills microorganisms in the specimen, stopping their movement and metabolism while preserving the integrity of their cellular components for observation. To heat-fix a sample, a thin layer of the specimen is spread on the slide (called a smear), and the slide is then briefly heated over a heat source ( Figure 2.31 ). Chemical fixatives are often preferable to heat for tissue specimens. Chemical agents such as acetic acid, ethanol, methanol, formaldehyde (formalin), and glutaraldehyde can denature proteins, stop biochemical reactions, and stabilize cell structures in tissue samples ( Figure 2.31 ). In addition to fixation, staining is almost always applied to color certain features of a specimen before examining it under a light microscope. Stains, or dyes, contain salts made up of a positive ion and a negative ion. Depending on the type of dye, the positive or the negative ion may be the chromophore (the colored ion); the other, uncolored ion is called the counterion. If the chromophore is the positively charged ion, the stain is classified as a basic dye; if the negative ion is the chromophore, the stain is considered an acidic dye. Dyes are selected for staining based on the chemical properties of the dye and the specimen being observed, which determine how the dye will interact with the specimen. In most cases, it is preferable to use a positive stain, a dye that will be absorbed by the cells or organisms being observed, adding color to objects of interest to make them stand out against the background. However, there are scenarios in which it is advantageous to use a negative stain, which is absorbed by the background but not by the cells or organisms in the specimen. Negative staining produces an outline or silhouette of the organisms against a colorful background ( Figure 2.32 ). Because cells typically have negatively charged cell walls, the positive chromophores in basic dyes tend to stick to the cell walls, making them positive stains. Thus, commonly used basic dyes such as basic fuchsin, crystal violet, malachite green, methylene blue, and safranin typically serve as positive stains. On the other hand, the negatively charged chromophores in acidic dyes are repelled by negatively charged cell walls, making them negative stains. Commonly used acidic dyes include acid fuchsin, eosin, and rose bengal. Figure 2.40 provides more detail. Some staining techniques involve the application of only one dye to the sample; others require more than one dye. In simple staining, a single dye is used to emphasize particular structures in the specimen. A simple stain will generally make all of the organisms in a sample appear to be the same color, even if the sample contains more than one type of organism. In contrast, differential staining distinguishes organisms based on their interactions with multiple stains. In other words, two organisms in a differentially stained sample may appear to be different colors.
Differential staining techniques commonly used in clinical settings include Gram staining, acid-fast staining, endospore staining, flagella staining, and capsule staining. Figure 2.41 provides more detail on these differential staining techniques. Check Your Understanding Explain why it is important to fix a specimen before viewing it under a light microscope. What types of specimens should be chemically fixed as opposed to heat-fixed? Why might an acidic dye react differently with a given specimen than a basic dye? Explain the difference between a positive stain and a negative stain. Explain the difference between simple and differential staining. Gram Staining The Gram stain procedure is a differential staining procedure that involves multiple steps. It was developed by Danish microbiologist Hans Christian Gram in 1884 as an effective method to distinguish between bacteria with different types of cell walls, and even today it remains one of the most frequently used staining techniques. The steps of the Gram stain procedure are listed below and illustrated in Figure 2.33. First, crystal violet, a primary stain, is applied to a heat-fixed smear, giving all of the cells a purple color. Next, Gram’s iodine, a mordant, is added. A mordant is a substance used to set or stabilize stains or dyes; in this case, Gram’s iodine acts like a trapping agent that complexes with the crystal violet, making the crystal violet–iodine complex clump and stay contained in thick layers of peptidoglycan in the cell walls. Next, a decolorizing agent is added, usually ethanol or an acetone/ethanol solution. Cells that have thick peptidoglycan layers in their cell walls are much less affected by the decolorizing agent; they generally retain the crystal violet dye and remain purple. However, the decolorizing agent more easily washes the dye out of cells with thinner peptidoglycan layers, making them again colorless. Finally, a secondary counterstain, usually safranin, is added. This stains the decolorized cells pink and is less noticeable in the cells that still contain the crystal violet dye. The purple, crystal-violet stained cells are referred to as gram-positive cells, while the red, safranin-dyed cells are gram-negative ( Figure 2.34 ). However, there are several important considerations in interpreting the results of a Gram stain. First, older bacterial cells may have damage to their cell walls that causes them to appear gram-negative even if the species is gram-positive. Thus, it is best to use fresh bacterial cultures for Gram staining. Second, errors such as leaving on decolorizer too long can affect the results. In some cases, most cells will appear gram-positive while a few appear gram-negative (as in Figure 2.34 ). This suggests damage to the individual cells or that decolorizer was left on for too long; the cells should still be classified as gram-positive if they are all the same species rather than a mixed culture. Besides their differing interactions with dyes and decolorizing agents, the chemical differences between gram-positive and gram-negative cells have other implications with clinical relevance. For example, Gram staining can help clinicians classify bacterial pathogens in a sample into categories associated with specific properties. Gram-negative bacteria tend to be more resistant to certain antibiotics than gram-positive bacteria. We will discuss this and other applications of Gram staining in more detail in later chapters.
Check Your Understanding Explain the role of Gram’s iodine in the Gram stain procedure. Explain the role of alcohol in the Gram stain procedure. What color are gram-positive and gram-negative cells, respectively, after the Gram stain procedure? Clinical Focus Part 3 Viewing Cindy’s specimen under the darkfield microscope has provided the technician with some important clues about the identity of the microbe causing her infection. However, more information is needed to make a conclusive diagnosis. The technician decides to make a Gram stain of the specimen. This technique is commonly used as an early step in identifying pathogenic bacteria. After completing the Gram stain procedure, the technician views the slide under the brightfield microscope and sees purple, grape-like clusters of spherical cells ( Figure 2.35 ). Are these bacteria gram-positive or gram-negative? What does this reveal about their cell walls? Acid-Fast Stains Acid-fast staining is another commonly used differential staining technique that can be an important diagnostic tool. An acid-fast stain is able to differentiate two types of gram-positive cells: those that have waxy mycolic acids in their cell walls, and those that do not. Two different methods for acid-fast staining are the Ziehl-Neelsen technique and the Kinyoun technique. Both use carbolfuchsin as the primary stain. The waxy, acid-fast cells retain the carbolfuchsin even after a decolorizing agent (an acid-alcohol solution) is applied. A secondary counterstain, methylene blue, is then applied, which renders non–acid-fast cells blue. The fundamental difference between the two carbolfuchsin-based methods is whether heat is used during the primary staining process. The Ziehl-Neelsen method uses heat to infuse the carbolfuchsin into the acid-fast cells, whereas the Kinyoun method does not use heat. Both techniques are important diagnostic tools because a number of specific diseases are caused by acid-fast bacteria (AFB). If AFB are present in a tissue sample, their red or pink color can be seen clearly against the blue background of the surrounding tissue cells ( Figure 2.36 ). Check Your Understanding Why are acid-fast stains useful? Micro Connections Using Microscopy to Diagnose Tuberculosis Mycobacterium tuberculosis, the bacterium that causes tuberculosis, can be detected in specimens based on the presence of acid-fast bacilli. Often, a smear is prepared from a sample of the patient’s sputum and then stained using the Ziehl-Neelsen technique ( Figure 2.36 ). If acid-fast bacteria are confirmed, they are generally cultured to make a positive identification. Variations of this approach can be used as a first step in determining whether M. tuberculosis or other acid-fast bacteria are present, though samples from elsewhere in the body (such as urine) may contain other Mycobacterium species. An alternative approach for determining the presence of M. tuberculosis is immunofluorescence. In this technique, fluorochrome-labeled antibodies bind to M. tuberculosis, if present. Antibody-specific fluorescent dyes can be used to view the mycobacteria with a fluorescence microscope. Capsule Staining Certain bacteria and yeasts have a protective outer structure called a capsule. Since the presence of a capsule is directly related to a microbe’s virulence (its ability to cause disease), the ability to determine whether cells in a sample have capsules is an important diagnostic tool.
Capsules do not absorb most basic dyes; therefore, a negative staining technique (staining around the cells) is typically used for capsule staining. The dye stains the background but does not penetrate the capsules, which appear like halos around the borders of the cell. The specimen does not need to be heat-fixed prior to negative staining. One common negative staining technique for identifying encapsulated yeast and bacteria is to add a few drops of India ink or nigrosin to a specimen. Other capsular stains can also be used to negatively stain encapsulated cells ( Figure 2.37 ). Alternatively, positive and negative staining techniques can be combined to visualize capsules: The positive stain colors the body of the cell, and the negative stain colors the background but not the capsule, leaving a halo around each cell. Check Your Understanding How does negative staining help us visualize capsules? Endospore Staining Endospores are structures produced within certain bacterial cells that allow them to survive harsh conditions. Gram staining alone cannot be used to visualize endospores, which appear clear when Gram-stained cells are viewed. Endospore staining uses two stains to differentiate endospores from the rest of the cell. The Schaeffer-Fulton method (the most commonly used endospore-staining technique) uses heat to push the primary stain (malachite green) into the endospore. Washing with water decolorizes the cell, but the endospore retains the green stain. The cell is then counterstained pink with safranin. The resulting image reveals the shape and location of endospores, if they are present. The green endospores will appear either within the pink vegetative cells or as separate from the pink cells altogether. If no endospores are present, then only the pink vegetative cells will be visible ( Figure 2.38 ). Endospore-staining techniques are important for identifying Bacillus and Clostridium, two genera of endospore-producing bacteria that contain clinically significant species. Among others, B. anthracis (which causes anthrax) has been of particular interest because of concern that its spores could be used as a bioterrorism agent. C. difficile is a particularly important species responsible for the typically hospital-acquired infection known as “C. diff.” Check Your Understanding Is endospore staining an example of positive, negative, or differential staining? Flagella Staining Flagella (singular: flagellum) are tail-like cellular structures used for locomotion by some bacteria, archaea, and eukaryotes. Because they are so thin, flagella typically cannot be seen under a light microscope without a specialized flagella staining technique. Flagella staining thickens the flagella by first applying mordant (generally tannic acid, but sometimes potassium alum), which coats the flagella; then the specimen is stained with pararosaniline (most commonly) or basic fuchsin ( Figure 2.39 ). Though flagella staining is uncommon in clinical settings, the technique is commonly used by microbiologists, since the location and number of flagella can be useful in classifying and identifying bacteria in a sample. When using this technique, it is important to handle the specimen with great care; flagella are delicate structures that can easily be damaged or pulled off, compromising attempts to accurately locate and count the number of flagella. Preparing Specimens for Electron Microscopy Samples to be analyzed using a TEM must have very thin sections.
But cells are too soft to cut thinly, even with diamond knives. To cut cells without damage, the cells must first be dehydrated through a series of soaks in ethanol solutions (50%, 60%, 70%, and so on) and then embedded in plastic resin. The ethanol replaces the water in the cells, and the resin, which dissolves in ethanol, enters the cell, where it solidifies. Next, thin sections are cut using a specialized device called an ultramicrotome ( Figure 2.42 ). Finally, samples are fixed to fine copper wire or carbon-fiber grids and stained—not with colored dyes, but with substances like uranyl acetate or osmium tetroxide, which contain electron-dense heavy metal atoms. When samples are prepared for viewing using an SEM, they must also be dehydrated using an ethanol series. However, they must be even drier than is necessary for a TEM. Critical point drying with inert liquid carbon dioxide under pressure is used to displace the water from the specimen. After drying, the specimens are sputter-coated with metal by using energetic particles to knock atoms off of a palladium target. Sputter-coating prevents specimens from becoming charged by the SEM’s electron beam. Check Your Understanding Why is it important to dehydrate cells before examining them under an electron microscope? Name the device that is used to create thin sections of specimens for electron microscopy. Micro Connections Using Microscopy to Diagnose Syphilis The causative agent of syphilis is Treponema pallidum, a flexible, spiral cell (spirochete) that can be very thin (<0.15 μm) and match the refractive index of the medium, making it difficult to view using brightfield microscopy. Additionally, this species has not been successfully cultured in the laboratory on an artificial medium; therefore, diagnosis depends upon successful identification using microscopic techniques and serology (analysis of body fluids, often looking for antibodies to a pathogen). Since fixation and staining would kill the cells, darkfield microscopy is typically used for observing live specimens and viewing their movements. However, other approaches can also be used. For example, the cells can be thickened with silver particles (in tissue sections) and observed using a light microscope. It is also possible to use fluorescence or electron microscopy to view Treponema ( Figure 2.43 ). In clinical settings, indirect immunofluorescence is often used to identify Treponema. A primary, unstained antibody attaches directly to the pathogen surface, and secondary antibodies “tagged” with a fluorescent stain attach to the primary antibody. Multiple secondary antibodies can attach to each primary antibody, amplifying the amount of stain attached to each Treponema cell, making them easier to spot ( Figure 2.44 ). Preparation and Staining for Other Microscopes Samples for fluorescence and confocal microscopy are prepared similarly to samples for light microscopy, except that the dyes are fluorochromes. Stains are often diluted in liquid before applying to the slide. Some dyes attach to an antibody to stain specific proteins on specific types of cells (immunofluorescence); others may attach to DNA molecules in a process called fluorescence in situ hybridization (FISH), causing cells to be stained based on whether they have a specific DNA sequence. Sample preparation for two-photon microscopy is similar to fluorescence microscopy, except for the use of infrared dyes. Specimens for STM need to be on a very clean and atomically smooth surface.
They are often mica coated with Au(111). Toluene vapor is a common fixative. Check Your Understanding What is the main difference between preparing a sample for fluorescence microscopy versus light microscopy? Link to Learning Cornell University’s Case Studies in Microscopy offers a series of clinical problems based on real-life events. Each case study walks you through a clinical problem using appropriate techniques in microscopy at each step. Clinical Focus Resolution From the results of the Gram stain, the technician now knows that Cindy’s infection is caused by spherical, gram-positive bacteria that form grape-like clusters, which is typical of staphylococcal bacteria. After some additional testing, the technician determines that these bacteria are the medically important species known as Staphylococcus aureus, a common culprit in wound infections. Because some strains of S. aureus are resistant to many antibiotics, skin infections may spread to other areas of the body and become serious, sometimes even resulting in amputations or death if the correct antibiotics are not used. After testing several antibiotics, the lab is able to identify one that is effective against this particular strain of S. aureus. Cindy’s doctor quickly prescribes the medication and emphasizes the importance of taking the entire course of antibiotics, even if the infection appears to clear up before the last scheduled dose. This reduces the risk that any especially resistant bacteria could survive, causing a second infection or spreading to another person. Eye on Ethics Microscopy and Antibiotic Resistance As the use of antibiotics has proliferated in medicine, as well as agriculture, microbes have evolved to become more resistant. Strains of bacteria such as methicillin-resistant S. aureus (MRSA), which has developed a high level of resistance to many antibiotics, are an increasingly worrying problem, so much so that research is underway to develop new and more diversified antibiotics. Fluorescence microscopy can be useful in testing the effectiveness of new antibiotics against resistant strains like MRSA. In a test of one new antibiotic derived from a marine bacterium, MC21-A (bromophene), researchers used the fluorescent dye SYTOX Green to stain samples of MRSA. SYTOX Green is often used with fluorescence microscopy to distinguish dead cells from living cells. Live cells will not absorb the dye, but cells killed by an antibiotic will absorb the dye, since the antibiotic has damaged the bacterial cell membrane. In this particular case, MRSA bacteria that had been exposed to MC21-A did, indeed, appear green under the fluorescence microscope, leading researchers to conclude that it is an effective antibiotic against MRSA. Of course, some argue that developing new antibiotics will only lead to even more antibiotic-resistant microbes, so-called superbugs that could spawn epidemics before new treatments can be developed. For this reason, many health professionals are beginning to exercise more discretion in prescribing antibiotics. Whereas antibiotics were once routinely prescribed for common illnesses without a definite diagnosis, doctors and hospitals are much more likely to conduct additional testing to determine whether an antibiotic is necessary and appropriate before prescribing. A sick patient might reasonably object to this stingy approach to prescribing antibiotics.
To the patient who simply wants to feel better as quickly as possible, the potential benefits of taking an antibiotic may seem to outweigh any immediate health risks that might occur if the antibiotic is ineffective. But at what point do the risks of widespread antibiotic use supersede the desire to use them in individual cases?
Business Ethics
Summary 7.1 Loyalty to the Company Although employees’ and employers’ concepts of loyalty have changed, it is reasonable to expect workers to have a basic sense of responsibility to their company and willingness to protect a variety of important assets such as intellectual property and trade secrets. Current employees should not compete with their employer in a way that would violate conflict-of-interest rules, and former employees should not solicit previous customers or employees upon leaving employment. 7.2 Loyalty to the Brand and to Customers Employees have a duty to be loyal to the brand and treat customers well. Internal marketing is one process by which a company instills employee commitment to the brand and builds loyalty in its workforce. This loyalty should be a two-way street, however. If the company wants its employees to treat customers with respect, it must treat them with respect as well. 7.3 Contributing to a Positive Work Atmosphere Ethical employees accept their role in creating a workplace that is respectful, safe, and welcoming by getting along with coworkers and doing what is best for the company. They also comply with corporate codes of conduct, which cover a wide range of behaviors, from financial dealings and bribery to sexual harassment. In addition, they are alert to any situation in the workplace that could escalate into violence. In short, the employee has a duty to be a responsible person in the job. 7.4 Financial Integrity Legal and cultural differences may allow bribes in other countries, but bribery and insider trading (which allows someone with private information about securities to profit from that knowledge at the public’s expense) are illegal in the United States, as well as unethical. A clear gift policy should be in place to help employees understand when it is acceptable to accept a gift from another employee or an outsider (such as a vendor), and to distinguish gifts from bribes. 7.5 Criticism of the Company and Whistleblowing Employees should understand that there are limits to what can be posted about their employer online, just as there are limits to what they can say in the workplace, and that the First Amendment generally does not protect such speech. Whistleblowers are protected, and sometimes rewarded, for their willingness to come forward, but they can still face a hostile environment in some situations. Employees should not use whistleblowing as an attempt to get back at a boss or employer they do not like; rather, they should use it as a means to stop serious wrongdoing.
Chapter Outline 7.1 Loyalty to the Company 7.2 Loyalty to the Brand and to Customers 7.3 Contributing to a Positive Work Atmosphere 7.4 Financial Integrity 7.5 Criticism of the Company and Whistleblowing Introduction What Employers Owe Employees discussed the duties, obligations, and responsibilities managers and companies owe their employees. This chapter looks at the other side of that relationship to weigh the ethical dimensions of being a worthy employee and responsible coworker ( Figure 7.1 ). Coworkers may express their opinions differently, for instance, agreeing or disagreeing, perhaps in very animated ways. Although we and our peers at work may not see eye to eye on every issue, we work best when we understand the need to get along and to show a degree of loyalty to our employer and each other, as well as to ourselves, our values, and our own best interests. Balancing these factors requires a concerted effort. What would you do, for example, if one of your coworkers were being bullied or harassed by another employee or a manager? Suppose a former colleague tried to recruit you to her new firm. What is the ethical action for you to take? How would you react if you learned your company’s managers were behaving unethically or breaking the law? Who could you tell, and what could you expect as a result? What is the right response if a client or customer behaves badly toward you as an employee representing your firm? How do you provide good customer service and support the company brand in the face of difficult working conditions?
[ { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> In general terms , the duty of loyalty means an employee is obligated to render “ loyal and faithful ” service to the employer , to act with “ good faith , ” and not to compete with but rather to advance the employer ’ s interests . <hl> 1 The employee must not act in a way that benefits him - or herself ( or any other third party ) , especially when doing so would create a conflict of interest with the employer . 2 The common law of most states holds as a general rule that , without asking for and receiving the employer ’ s consent , an employee cannot hold a second job if it would compete or conflict with the first job . Thus , although the precise boundaries of this aspect of the duty of loyalty are unclear , an employee who works in the graphic design department of a large advertising agency in all likelihood cannot moonlight on the weekend for a friend ’ s small web design business . However , employers often grant permission for employees to work in positions that do not compete or interfere with their principal jobs . The graphic designer might work for a friend ’ s catering business , for example , or perhaps as a wedding photographer or editor of a blog for a public interest community group .", "hl_sentences": "In general terms , the duty of loyalty means an employee is obligated to render “ loyal and faithful ” service to the employer , to act with “ good faith , ” and not to compete with but rather to advance the employer ’ s interests .", "question": { "cloze_format": "The common law concept that requires an employee to render loyal and faithful service to the employer is ________.", "normal_format": "What is the common law concept that requires an employee to render loyal and faithful service to the employer?", "question_choices": [ "the duty of confidentiality", "a non-compete agreement", "the duty of loyalty", "trade secret protection" ], "question_id": "fs-idm248054176", "question_text": "The common law concept that requires an employee to render loyal and faithful service to the employer is ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "In general terms , the duty of loyalty means an employee is obligated to render “ loyal and faithful ” service to the employer , to act with “ good faith , ” and not to compete with but rather to advance the employer ’ s interests . 1 The employee must not act in a way that benefits him - or herself ( or any other third party ) , especially when doing so would create a conflict of interest with the employer . 2 The common law of most states holds as a general rule that , without asking for and receiving the employer ’ s consent , an employee cannot hold a second job if it would compete or conflict with the first job . <hl> Thus , although the precise boundaries of this aspect of the duty of loyalty are unclear , an employee who works in the graphic design department of a large advertising agency in all likelihood cannot moonlight on the weekend for a friend ’ s small web design business . <hl> However , employers often grant permission for employees to work in positions that do not compete or interfere with their principal jobs . 
The graphic designer might work for a friend ’ s catering business , for example , or perhaps as a wedding photographer or editor of a blog for a public interest community group .", "hl_sentences": "Thus , although the precise boundaries of this aspect of the duty of loyalty are unclear , an employee who works in the graphic design department of a large advertising agency in all likelihood cannot moonlight on the weekend for a friend ’ s small web design business .", "question": { "cloze_format": "An employee who works in the graphic design department of a large advertising agency most likely cannot moonlight after business hours for a friend’s ________.", "normal_format": "For which of the following businesses, an employee who works in the graphic design department of a large advertising agency most likely cannot moonlight after business hours?", "question_choices": [ "bakery business", "web design business", "construction business", "landscaping design business" ], "question_id": "fs-idm266201472", "question_text": "An employee who works in the graphic design department of a large advertising agency most likely cannot moonlight after business hours for a friend’s ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A. Employers can encourage positive behavior toward customers by empowering employees to use their best judgment when working with them." }, "bloom": null, "hl_context": "As the public ’ s first point of contact with a company , employees are obliged to assist the firm in forming a positive relationship with customers . How well or poorly they do so contributes a great deal to customers ’ impression of the company . And customers ’ perceptions affect not only the company but all the employees who depend on its success for their livelihood . Thus , the ethical obligations of an employee also extend to interactions with customers , whom they should treat with respect . <hl> Employers can encourage positive behavior toward customers by empowering employees to use their best judgment when working with them . <hl>", "hl_sentences": "Employers can encourage positive behavior toward customers by empowering employees to use their best judgment when working with them .", "question": { "cloze_format": "___ is especially important for developing and maintaining employee loyalty to the brand.", "normal_format": "Which of the following is especially important for developing and maintaining employee loyalty to the brand?", "question_choices": [ "empowerment", "engagement", "commitment", "dedication" ], "question_id": "fs-idm247048128", "question_text": "Which of the following is especially important for developing and maintaining employee loyalty to the brand?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D. Violence by a customer occurs when the violent person has a legitimate relationship with the business, perhaps as a customer or patient." }, "bloom": null, "hl_context": "When the violence arises from problems in a personal relationship , the perpetrator often has a direct relationship not with the business but with the victim , who is an employee . This category of violence accounts for slightly less than 10 percent of all workplace homicides . Women are at higher risk of being victims of this type of violence than men . <hl> In the fourth scenario , the violent person has a legitimate relationship with the business , perhaps as a customer or patient , and becomes violent while on the premises . 
<hl> A large portion of customer incidents occur in the nightclub , restaurant , and health care industries . In 2014 , about one-fifth of all workplace homicides resulted from this type of violence . 23 As recent incidents have shown — for example , the April 2018 shooting at YouTube headquarters in San Bruno , California 21 — workplace violence is a reality , and all employees play a role in helping make work a safe , as well as harmonious place . Employees , in fact , have a legal and ethical duty not to be violent at work , and managers have a duty to prevent or stop violence . <hl> The National Institute for Occupational Safety and Health reports that violence at work usually fits into one of four categories : traditional criminal intent , violence by one worker against another , violence stemming from a personal relationship , and violence by a customer . <hl> 22", "hl_sentences": "In the fourth scenario , the violent person has a legitimate relationship with the business , perhaps as a customer or patient , and becomes violent while on the premises . The National Institute for Occupational Safety and Health reports that violence at work usually fits into one of four categories : traditional criminal intent , violence by one worker against another , violence stemming from a personal relationship , and violence by a customer .", "question": { "cloze_format": "A patient becomes violent on hospital premises after being turned down for the clinical trial of a new drug therapy. This scenario fits with ___ in the workplace violence categories.", "normal_format": "A patient becomes violent on hospital premises after being turned down for the clinical trial of a new drug therapy. This scenario fits which of the following workplace violence categories?", "question_choices": [ "traditional criminal intent", "violence by one worker against another", "violence stemming from a personal relationship", "violence by a customer" ], "question_id": "fs-idm255564704", "question_text": "A patient becomes violent on hospital premises after being turned down for the clinical trial of a new drug therapy. This scenario fits which of the following workplace violence categories?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> Understanding the various personalities at work can be a complex task , but it is a vital one for developing a sense of collegiality . <hl> One technique that may be helpful is to develop your own emotional intelligence , which is the capacity to recognize other people ’ s emotions and also to know and manage your own . One aspect of using emotional intelligence is showing empathy , the willingness to step into someone else ’ s shoes .", "hl_sentences": "Understanding the various personalities at work can be a complex task , but it is a vital one for developing a sense of collegiality .", "question": { "cloze_format": "Understanding the various personalities at work can be a complex task, but it is an important one for developing ___.", "normal_format": "Understanding the various personalities at work can be a complex task, but it is an important one for developing which of the following?", "question_choices": [ "collegiality", "emotional intelligence", "empathy", "personality harmony" ], "question_id": "fs-idm257294048", "question_text": "Understanding the various personalities at work can be a complex task, but it is an important one for developing which of the following?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> The buying or selling of stocks , bonds , or other investments based on nonpublic information that is likely to affect the price of the security being traded is called insider trading . <hl> For example , someone who is privy to information that a company is about to be taken over , which will cause its stock price to rise when the information becomes public , may buy the stock before it goes up in order to sell it later for an enhanced profit . Likewise , someone with inside information about a coming drop in share price may sell all his or her holdings at the current price before the information is announced , avoiding the loss other shareholders will suffer when the price falls . Although insider trading can be difficult to prove , it is essentially cheating . It is illegal , unethical , and unfair , and it often injures other investors , as well as undermining public confidence in the stock market .", "hl_sentences": "The buying or selling of stocks , bonds , or other investments based on nonpublic information that is likely to affect the price of the security being traded is called insider trading .", "question": { "cloze_format": "The buying or selling of stocks, bonds, or other investments based on nonpublic information that is likely to favorably affect the price of the security being traded is ___ .", "normal_format": "The buying or selling of stocks, bonds, or other investments based on nonpublic information that is likely to favorably affect the price of the security being traded is which of the following?", "question_choices": [ "insider trading", "bribery", "illegal transaction", "manipulation" ], "question_id": "fs-idm344203968", "question_text": "The buying or selling of stocks, bonds, or other investments based on nonpublic information that is likely to favorably affect the price of the security being traded is which of the following?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "Another temptation that may present itself to employees is the offer of a bribe . <hl> A bribe is a payment in some material form ( cash or noncash ) for an act that runs counter to the legal or ethical culture of the work environment . <hl> Bribery constitutes a violation of the law in all fifty U . S . states , as well as of a federal law that prohibits bribery in international transactions , the Foreign Corrupt Practices Act . Bribery generally injures not only individuals but also competitors , the government , and the free-market system as a whole . Of course , often the bribe is somewhat less obvious than an envelope full of money . 
It is important , therefore , to understand what constitutes a bribe .", "hl_sentences": "A bribe is a payment in some material form ( cash or noncash ) for an act that runs counter to the legal or ethical culture of the work environment .", "question": { "cloze_format": "A payment in some form (cash or noncash) for an act that runs counter to the legal or ethical culture of the work environment is called ________.", "normal_format": "What is a payment in some form (cash or noncash) for an act that runs counter to the legal or ethical culture of the work environment called?", "question_choices": [ "insider trading", "bribery", "illegal transaction", "manipulation" ], "question_id": "fs-idm352529840", "question_text": "A payment in some form (cash or noncash) for an act that runs counter to the legal or ethical culture of the work environment is called ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> The act of whistleblowing — going to an official government agency and disclosing an employer ’ s violation of the law — is different from everyday criticism . <hl> In fact , whistleblowing is largely viewed as a public service because it helps society reduce bad workplace behavior . Being a whistleblower is not easy , however , and someone inclined to act as one should expect many hurdles . If a whistleblower ’ s identity becomes known , his or her revelations may amount to career suicide . Even if they keep their job , whistleblowers often are not promoted , and they may face resentment not only from management but also from rank-and-file workers who fear the loss of their own jobs . Whistleblowers may also be blacklisted , making it difficult for them to get a job at a different firm , and all as a result of doing what is ethical .", "hl_sentences": "The act of whistleblowing — going to an official government agency and disclosing an employer ’ s violation of the law — is different from everyday criticism .", "question": { "cloze_format": "Going to an official government agency and disclosing an employer’s violation of the law is ________.", "normal_format": "What is going to an official government agency and disclosing an employer’s violation of the law?", "question_choices": [ "insider trading", "whistleblowing", "free speech expression", "tattle telling" ], "question_id": "fs-idm333590032", "question_text": "Going to an official government agency and disclosing an employer’s violation of the law is ________." }, "references_are_paraphrase": null } ]
7.1 Loyalty to the Company Learning Objectives By the end of this section, you will be able to: Define employees’ responsibilities to the company for which they work Describe a non-compete agreement Explain how confidentiality applies to trade secrets, intellectual property, and customer data The relationship between employee and employer is changing, especially our understanding of commitment and loyalty. An ethical employee owes the company a good day’s work and his or her best effort, whether the work is stimulating or dull. A duty of loyalty and our best effort are our primary obligations as employees, but what they mean can change. A manager who expects a twentieth-century concept of loyalty in the twenty-first century may be surprised when workers express a sense of entitlement, ask for a raise after six months, or leave for a new job after twelve months. This chapter will explore a wide range of issues from the perspective of what and how employees contribute to the overall success of a business enterprise. A Duty of Loyalty Hard work and our best effort likely make sense as obligations we owe an employer. However, loyalty is more abstract and less easily defined. Most workers do not have employment contracts, so there may not be a specific agreement between the two parties detailing their mutual responsibilities. Instead, the common law (case law) of agency in each state is often the source of the rules governing an employment relationship. The usual depiction of duty in common law is the duty of loyalty, which, in all fifty states, requires that an employee refrain from acting in a manner contrary to the employer’s interest. This duty creates some basic rules employees must follow on the job and provides employers with enforceable rights against employees who violate them. In general terms, the duty of loyalty means an employee is obligated to render “loyal and faithful” service to the employer, to act with “good faith,” and not to compete with but rather to advance the employer’s interests. 1 The employee must not act in a way that benefits him- or herself (or any other third party), especially when doing so would create a conflict of interest with the employer. 2 The common law of most states holds as a general rule that, without asking for and receiving the employer’s consent, an employee cannot hold a second job if it would compete or conflict with the first job. Thus, although the precise boundaries of this aspect of the duty of loyalty are unclear, an employee who works in the graphic design department of a large advertising agency in all likelihood cannot moonlight on the weekend for a friend’s small web design business. However, employers often grant permission for employees to work in positions that do not compete or interfere with their principal jobs. The graphic designer might work for a friend’s catering business, for example, or perhaps as a wedding photographer or editor of a blog for a public interest community group. Link to Learning Moonlighting has become such a common phenomenon that the website Glassdoor now has a section reserved for such jobs. The Glassdoor website has a number of postings for different moonlighting opportunities to explore. What is clear is that it is wrong for employees to make work decisions primarily for their own personal gain, rather than doing what is in the employer’s best interest.
An employee might have the authority to decide which other companies the employer will do business with, for example, such as service vendors that maintain the copiers or clean the offices. What if the employee owned stock in one of those companies or had a relative who worked there? That gives him or her an incentive to encourage doing business with that particular company, whether it would be best for the employer or not. The degree to which the duty of loyalty exists is usually related to the degree of responsibility or trust an employer places in an employee. More trust equals a stronger duty. For example, when an employee has very extensive authority or access to confidential information, the duty can rise to its highest level, called a fiduciary duty, which is discussed in an earlier chapter. Differing Concepts of Loyalty There is no generally agreed-upon definition of an employee’s duty of loyalty to his or her employer. One indicator that our understanding of the term is changing is that millennials are three times more likely than older generations to change jobs, according to a Forbes Human Resources Council survey ( Figure 7.2 ). 3 About nine in ten millennials (91 percent) say they do not expect to stay with their current job longer than three years, compared with older workers who often anticipated spending ten years or even an entire career with one employer, relying on an implicit social contract between employer and employee that rewarded lifetime employment. The Loyalty Research Center, a consulting firm, defines loyal employees as “being committed to the success of the organization. They believe that working for this organization is their best option . . . and loyal employees do not actively search for alternative employment and are not responsive to offers.” 4 Likewise, Wharton School, University of Pennsylvania, professor Matthew Bidwell says there are two halves to the term: “One piece is having the employer’s best interests at heart. The other piece is remaining with the same employer rather than moving on.” Bidwell goes on to acknowledge, “There is less a sense that your organization is going to look after you in the way that it used to, which would lead [us] to expect a reduction in loyalty.” 5 Why are employees less likely to feel a duty of loyalty to their companies? One reason is that loyalty is a two-way street, a feeling developed through the enactment of mutual obligations and responsibilities. However, most employers do not want to be obligated to their workers in a legal sense; they usually require that almost all workers are employees “at will,” that is, without any long-term employment contract. Neither state nor federal law mandates an employment contract, so when a company says an employee is employed at will, it is sending a message that management is not making a long-term commitment to the employee. Employees may naturally feel less loyalty to an organization from which they believe they can be let go at any time and for any legal reason (which is essentially what at-will employment means). Of course, at-will employment also means the employee can also quit at any time. However, freedom to move is a benefit only if the employee has mobility and a skill set he or she can sell to the highest bidder. Otherwise, for most workers, at-will employment usually works to the employer’s advantage, not the employee’s. Another reason the concept of loyalty to an organization seems to be changing at all levels is the important role money plays in career decisions. 
When they see chief executive officers (CEOs) and other managers leaving to work for the highest bidder, subordinates quickly conclude that they, too, ought to look out for themselves, just as their bosses do, rather than trying to build up seniority with the company. Switching jobs can often be a way for employees to improve their salaries. Consider professional sports. For decades, professional athletes were tied to one team and could not sell their services to the highest bidder, meaning that their salaries were effectively capped. Finally, after several court decisions (including the Curt Flood reserve clause case involving the St. Louis Cardinals and Major League Baseball), 6 players achieved some degree of freedom and can now switch employers frequently in an effort to maximize their earning potential.

The same evolution occurred in the entertainment industry. In the early years of the movie business, actors were tied to studios by contracts that prevented them from making movies for any other studio, effectively limiting their earning power. Then the entertainment industry changed as actors gained the freedom to sell their services to the highest bidder, becoming much more highly compensated in the process. Employees in any industry, not just sports and entertainment, benefit from being able to change jobs if their salary at their current job stagnates or falls below the market rate.

Another economic phenomenon affecting loyalty in the private sector was the switch from defined-benefit to defined-contribution retirement plans. In the former, often called a pension, employee benefits are usually sponsored (paid) fully by the employer and calculated using a formula based on length of employment, salary history, and other factors. The employer administers the plan and manages the investment risk, promising the employee a set payout upon retirement. In a defined-contribution plan, however, the employee invests a certain percentage of his or her salary in a retirement fund, often a 401(k) or 403(b) plan, where it is sometimes matched (partially or wholly) by the employer. (These savings plans with their seemingly strange designations are part of the U.S. Internal Revenue Code; the letter/number combinations indicate subsections of the Code. 401(k) plans typically are featured in for-profit employment settings and 403(b) plans in nonprofit environments.) Defined-benefit plans reward longevity in the firm, whereas defined-contribution plans reward high earnings over seniority; the sketch that follows this discussion makes the contrast concrete. Thus, with the growth of defined-contribution plans, some reasons for staying with the same employer over time are no longer applicable.

According to PayScale’s Compensation Best Practices Report, the two leading motivators people give for leaving their job are, first, higher pay and, second, personal reasons (e.g., family, health, marriage, spousal relocation). 7 Of course, beyond money, workers seek meaning in their work, and it is largely true that money alone does not motivate employees to higher performance. However, it is a mistake for managers to think money is not a central factor influencing employees’ job satisfaction. Money matters because employees who are not making enough money to meet their financial obligations or goals will likely be looking for a higher-paying job.
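To make the retirement-plan contrast concrete, here is a minimal sketch in Python. Every plan term in it (a 1.5 percent benefit multiplier, a 6 percent employee contribution with a 3 percent employer match, a 5 percent annual return) is a hypothetical figure chosen for illustration, not a term of any actual plan:

```python
# Minimal sketch contrasting defined-benefit and defined-contribution
# economics. All plan terms below are hypothetical illustrations.

def defined_benefit_annual_payout(years_of_service, final_avg_salary,
                                  multiplier=0.015):
    """Classic pension formula: the benefit grows with length of service."""
    return years_of_service * multiplier * final_avg_salary

def defined_contribution_balance(annual_salaries, employee_rate=0.06,
                                 employer_match=0.03, annual_return=0.05):
    """401(k)-style account: the balance grows with earnings, not tenure."""
    balance = 0.0
    for salary in annual_salaries:
        balance = balance * (1 + annual_return)        # grow prior savings
        balance += salary * (employee_rate + employer_match)  # add this year
    return balance

# A 30-year employee at one firm vs. thirty years of salary earned anywhere:
payout = defined_benefit_annual_payout(30, 80_000)     # rewards tenure
balance = defined_contribution_balance([100_000] * 30)  # rewards earnings
print(f"Defined-benefit annual payout: ${payout:,.0f}")
print(f"Defined-contribution balance:  ${balance:,.0f}")
```

Under assumptions like these, the defined-benefit payout depends directly on years of service with one employer, while the defined-contribution balance is indifferent to whether those thirty years of salary were earned at one company or ten, which is precisely why such plans weaken the economic case for staying put.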
And, of course, increasing salary or other benefits can be a way of demonstrating both the company’s loyalty to its employees and the role it believes employees’ best interests play in its mission—navigating the aforementioned two-way street. For some employees, simply being acknowledged and thanked for their service and good work can go a long way toward sparking their loyalty; for others, more concrete rewards may be necessary.

Finally, many people work for themselves as freelance or contract workers in the new “gig” economy. They may take assignments from one or more companies at a time and are not employees in the traditional sense of the word. Therefore, it seems more reasonable that they would approach work the way a certified public accountant or attorney would—as completing a professional job for a client, after which they move on to the next client, always keeping their independent status. We would not expect gig workers to demonstrate employer loyalty when they are not employees.

What Would You Do?
The Ties That Bind

If building employee loyalty is a challenge for managers and they see their workers leaving for better opportunities, what can they do to change the situation? Some companies focus on team-building activities, company picnics, rock-climbing walls, or zip lines, but do these actually make workers decide to stay with their company for less salary? The answer is usually no. The reality is that salary plays an important role in an employee’s decision to move to a new job. Therefore, retention bonuses are a popular and perhaps more successful technique for instilling loyalty: The company provides a payment to an employee contingent on his or her committing to remain at the company for a specific period.

According to a Glassdoor study, 8 when changing jobs, employees earn an average increase of more than 5 percent in salary alone, not including benefits. Thus, the offer of a salary increase and/or a retention or performance bonus can help turn many would-be former employees into newly loyal ones. The same study found that a 10 percent increase in pay upped the odds that an employee would stay at the company. According to Dr. Andrew Chamberlain, chief economist of Glassdoor, “While it is important to provide upward career paths for workers, a simple job title promotion may not be enough. Maintaining competitive pay is an important part of reducing turnover.” 9 Of course, a retention bonus may not be enough to keep someone at a job he or she hates, but it might help someone who likes the job to decide to stay. The Society for Human Resource Management believes retention plans should be part of an overall pay strategy, not merely giveaways for tenure. 10

Imagine that your colleague is considering leaving your firm for another company. Your manager has offered him a retention bonus to stay, and your colleague is seeking your advice about what to do. What would you advise?

Critical Thinking
What questions would you ask your colleague to better determine the advice you should give him or her?
Consider your summer jobs, part-time employment, work-study hours on campus, and internships. What meant more to you—the salary you made or the extent to which you were treated as a real contributor and not just a line on a payroll ledger? Or a combination of both? What lessons do you now draw about reciprocal loyalty between companies and their workers?
Confidentiality

In the competitive world of business, many employees encounter information in their day-to-day work that their employers reasonably expect they will keep confidential. Proprietary (private) information, the details of patents and copyrights, employee records and salary histories, and customer-related data are valued company assets that must remain in-house, not in the hands of competitors, trade publications, or the news media. Employers are well within their rights to expect employees to honor their duty of confidentiality and maintain the secrecy of such proprietary material. Sometimes the duty of confidentiality originates specifically from an employment contract, if there is one; if not, the duty still exists in most situations under the common law of agency.

Most companies do not consider U.S. common law on confidentiality sufficient protection, so they often adopt employment agreements or contracts with employees that set forth the conditions of confidentiality. (Note that such contracts define a one-way obligation, from the employee to the employer, so they do not protect the at-will employee from being terminated without cause.) Typically, an employment agreement will list a variety of requirements. For example, although in most situations the law would already hold that the employer owns copyrightable works created by employees within the scope of their employment (known as works for hire), a contract usually also contains a specific clause stating that the company owns any and all such works and assigning ownership of them to the company. The agreement will also contain a patent assignment provision, stating that all inventions created within the scope of employment are owned by or assigned to the company.

Link to Learning
If one day you might be a freelancer, gig worker, or contractor, watch this video showing how a nondisclosure agreement can help you protect your ideas.

Employers also want to protect their trade secrets, that is, information that has economic value because it is not generally known to the public and is kept secret by reasonable means. Trade secrets might include technical or design information, advertising and marketing plans, and research and development data that would be useful to competitors. Often nondisclosure agreements are used to protect against the theft of all such information, most of which is normally protected only by the company’s requirement of secrecy, not by federal intellectual property law. Federal law generally protects registered trademarks (commercial identifications such as words, designs, logos, slogans, symbols, and trade dress, which is product appearance or packaging) and grants creators copyrights (to protect original literary and artistic expressions such as books, paintings, music, records, plays, movies, and software) and patents (to protect new and useful inventions and configurations of useful articles) (Figure 7.3).

U.S. companies have long used non-compete agreements as a way to provide another layer of confidentiality, ensuring that employees with access to sensitive information will not compete with the company during or for some period after their employment there. The stated purpose of such agreements is to protect the company’s intellectual property, which is the manifestation of original ideas protected by legal means such as patent, copyright, or trademark.
To be enforceable, non-compete agreements are usually limited by time and distance (i.e., they are in effect for a certain number of months or years and within a certain radius of the employer’s operations). However, some companies have begun requiring these agreements even from mid- and lower-level workers, including those who have no access to any confidential intellectual property, in an attempt to prevent them from changing jobs. About 20 percent of the U.S. private-sector workforce, and about one in six people in jobs earning less than $40,000 a year, are now covered by non-compete agreements. 11 The increased use of such agreements has left many employees feeling trapped by their limited mobility.

Link to Learning
A template for a typical non-compete agreement can be found at PandaDoc.

An ethical question arises regarding whether this practice is in the best interests of society and its workers, and some states are responding. California enacted a law in 2017 saying that most non-compete agreements are void, holding that although an employee may owe the employer a responsibility not to compete while employed, that duty ceases upon termination of employment. 12 In other words, an employee does not “belong” to a company forever. In California, therefore, a non-compete arrangement that limits employment after leaving the employer is now unenforceable. Does this law reflect the approach most states will now take? A California company may still legally prohibit its employees from moonlighting during the term of their employment, particularly for a competitor.

Cases from the Real World
Non-Compete Agreements

After an investigation by then–New York attorney general Eric Schneiderman, fast-food franchisor Jimmy John’s announced in 2016 that it would not enforce non-compete agreements signed by low-wage employees that prohibited them from working at other sandwich shops, and it agreed to stop using the agreements in the future. Jimmy John’s non-compete agreement had prohibited all workers, regardless of position, from working during their employment and for two years after at any other business that sold “submarine, hero-type, deli-style, pita, and/or wrapped or rolled sandwiches” within two miles of any Jimmy John’s shop anywhere in the United States. 13 Schneiderman said of the agreements, “They limit mobility and opportunity for vulnerable workers and bully them into staying with the threat of being sued.” Illinois Attorney General Lisa Madigan had also initiated action, filing a lawsuit that asked the court to strike down such clauses. “Preventing employees from seeking employment with a competitor is unfair to Illinois workers and bad for Illinois businesses,” Madigan said. “By locking low-wage workers into their jobs and prohibiting them from seeking better paying jobs elsewhere, the companies have no reason to increase their wages or benefits.” 14

Jimmy John’s has more than 2,500 franchises in forty-six states, so its agreement meant it would be difficult for a former worker to get a job in a sandwich shop in almost any big city in the United States.

Critical Thinking
Other than being punitive, what purpose do non-compete agreements serve when low-level employees are required to sign them?
Suppose an executive chef or vice president of marketing or operations at Jimmy John’s or any large sandwich franchise leaves the firm with knowledge of trade secrets and competitive strategies.
Should he or she be compelled to wait a negotiated period of time before working for a competitor? Why or why not? What is fair to all parties when high-level managers possess unique, sensitive information about their former employer?

Employers may also insert a nonsolicitation clause, which protects a business from an employee who leaves for another job and then attempts to lure customers or former colleagues into following. Though these clauses have limitations, they can be effective tools to protect an employer’s interest in retaining its employees and customers. However, they are particularly difficult for employees to comply with in relatively closed markets. Sample language for all the clauses we have discussed is found in Figure 7.4. A final clause an employee might be required to sign is a nondisparagement clause, which prohibits defaming or deliberately running down the reputation of the former employer.

7.2 Loyalty to the Brand and to Customers

Learning Objectives
By the end of this section, you will be able to:
• Describe how employees help build and sustain a brand
• Discuss how employees’ customer service can help or hurt a business

A good employment relationship is beneficial to both management and employees. When a company’s products or services are legitimate and safe and its employment policies are fair and compassionate, managers should be able to rely on their employees’ dedication to those products or services and to their customers. Although no employee should be called upon to lie or cover up a misstep on the part of the firm, every employee should be willing to make a sincere commitment to an ethical employer.

Respecting the Brand

Every company puts time, effort, and money into developing a brand, that is, a product or service marketed by a particular company under a particular name. As Apple, Coca-Cola, Amazon, BMW, McDonald’s, and creators of other coveted brands know, branding—creating, differentiating, and maintaining a brand’s image or reputation—is an important way to build company value, sell products and services, and expand corporate goodwill. In the sense discussed here, the term “brand” encompasses an image, reputation, logo, tagline, or specific color scheme that is trademarked, meaning the company owns it and must give permission to others who would legally use it (such as Tiffany’s unique shade of blue).

Companies want and expect employees to help in their branding endeavors. For example, according to the head of training at American Express, the company’s brand is its product, and its mantra has always been, “Happy employees make happy customers.” 15 American Express places significant emphasis on employee satisfaction because it is convinced this strategy helps protect and advance its brand. One company that uses positive employee involvement in branding is the technology conglomerate Cisco, which started a branding program on social media that reaches out to employees (Figure 7.5). The program encourages employees to be creative in their brand-boosting posts. The benefit is that prospective job candidates get a peek into Cisco life, and current employees feel the company trusts and values their ideas. 16

Link to Learning
Watch this video explaining the concept of brand loyalty.

However, protecting the brand can be a special challenge today, thanks to the ease with which customers and even employees can post negative information about the brand on the Internet and social media. Consider these examples in the fast-food industry.
A photo posted on Taco Bell’s Facebook page showed an employee licking a row of tacos. A Domino’s Pizza employee can be seen in a YouTube video spitting on food, putting cheese into his nose and then putting that cheese into a sandwich, and rubbing a sponge used for dishwashing on his groin area. 17 On Twitter, a Burger King employee in Japan posted a photo of himself lying on hamburger buns while on duty.

The companies all responded swiftly. A Taco Bell spokesperson said the food was not served to customers, and the employee in the photo was fired. The two Domino’s employees behind the videos were fired and faced felony charges and a civil lawsuit; Domino’s said the tainted food was never delivered. According to a Burger King news release, the buns in the photo were waste material because of an ordering mistake and were promptly discarded after the photo was taken; the employee in the photo was fired.

These examples demonstrate how much damage disloyal or disgruntled employees can create, especially on social media. All three companies experienced financial and goodwill losses after the incidents and struggled to restore public trust in their products. The immediate and long-term costs of such incidents are the reason companies invest in developing brand loyalty among their employees. According to a Harvard Business Review interview with Colin Mitchell, global vice president, McDonald’s Brand, good branding requires that a business think of marketing not just to its customers but also to its employees, because they are the “very people who can make the brand come alive for your customers.” 18 The process of getting employees to believe in the product, to commit to the idea that the company is selling something worth buying, and even to think about buying it, is called internal marketing.

Of course, some employees may not want to be the equivalent of a company spokesperson. Is it reasonable to expect an employee to be a kind of roving ambassador for the company, even when off the clock and interacting with friends and neighbors? Suppose employers offer employees substantial discounts on their products or services. Is this an equitable way to sustain reciprocal loyalty between managers and workers? Why or why not?

Internal marketing is an important part of the solution to the problem of employees who act as if they do not care about the company. It helps employees make a personal connection to the products and services the business sells, without which they might be more likely to undermine the company’s expectations, as in the three fast-food examples cited in this section. In those cases, it is clear the employees did not believe in the brand and felt hostile toward the company. The most common problem is usually not as extreme; more often it is a lack of effort or “slacking” on the job. Employees are more likely to develop some degree of brand loyalty when they share a common sense of purpose and identity with the company.

Link to Learning
The Working Advantage website offers examples of the corporate discounts companies sometimes extend to employees to encourage them to buy, and support, their products.

Obligations to Customers

As the public’s first point of contact with a company, employees are obliged to assist the firm in forming a positive relationship with customers. How well or poorly they do so contributes a great deal to customers’ impression of the company.
And customers’ perceptions affect not only the company but all the employees who depend on its success for their livelihood. Thus, the ethical obligations of an employee also extend to interactions with customers, whom they should treat with respect. Employers can encourage positive behavior toward customers by empowering employees to use their best judgment when working with them.

Link to Learning
Watch this video giving a light-hearted take on bad customer service.

It may take only one bad customer interaction with a less-than-engaged or committed employee to sour brand loyalty, no matter how hard a company has worked to build it. In the same way, just one good experience can build up goodwill.

Cases from the Real World
Redefining Customers

Sometimes engaged employees go above and beyond in the interest of customer service, even if they have no “customers” to speak of. Kathy Fryman is one such employee. Fryman was a custodian for three decades at a one-hundred-year-old school in the Augusta (KY) Independent School District. She was not just taking care of the school building; she was also taking care of the people inside. 19

Fryman fixed doors that would not close, phones that would not ring, and alarms that did not sound when they should. She kept track of keys and swept up dirty floors before Parents’ Night. That was all part of the job of custodian, but she did much more. Fryman would often ask the nurse how an ill student was doing. She would check with a teacher about a kid who was going through tough times at home. If a teacher mentioned needing something, the next day it would show up on his or her desk. A student who needed something for class would suddenly find it in his or her backpack. Speaking of Fryman, district superintendent Lisa McCrane said, “She just has a unique way of making others feel nurtured, comforted, and cared for.” According to Fryman, “I need to be doing something for somebody.”

Fryman’s customers were not there to buy a product on which she would make a commission. Her customers were students and teachers, parents and taxpayers. Yet she provided the kind of service that all employers would be proud of, the kind that makes a difference to people every day.

Critical Thinking
Is there a way for a manager to find, develop, and encourage the next Fryman, or is the desire to “do something good for somebody” an inherent trait in some employees that is missing in others?
What is the appropriate means to reward a worker with Fryman’s level of commitment? Her salary was fixed by school district pay schedules. Should she have been given an extra stipend for service above and beyond the expected? Additional time off with pay? Some other reward?
Employees who display Fryman’s zeal often do so for their own internal rewards. Others may simply want to be recognized and appreciated for their effort. If you were the superintendent in her district, how would you recognize Fryman? Could she, for example, be invited to speak to new hires about opportunities to render exceptional service?

Employees who treat customers well are assets to the company and deserve to be treated as such. Sometimes, however, customers are rude or disrespectful, creating a challenge for an employee who wants to do a good job. This problem is best addressed by management and the employee working together. In the Pizza Hut case that follows, an employee was placed in a bad situation by customers.

Cases from the Real World
Is the Customer Always Right?
At an independently owned Pizza Hut franchise in Oklahoma, 20 two regular customers made sexually offensive remarks to a female employee named Lockard, who then told her boss she did not like waiting on them. One evening, these customers again entered the restaurant, and her boss instructed Lockard to wait on them. She did, but this time the customers became physically abusive. Although it is the employee’s duty to provide good customer service, that does not mean accepting harassment. Lockard sued her employer, the owner of the franchise, for failing to take her complaints seriously and for making her continue to suffer sexual harassment and assault by customers. The jury ruled in her favor, awarding her $360,000, and an appeals court upheld the judgment.

Critical Thinking
Clearly, no employee should expect to be physically assaulted, but how far should an employee be expected to go in the name of customer service? Is taking verbal taunts expected? Why or why not?
Just as every employee should treat customers and clients with respect, so every employer is ethically—and often legally—obligated to safeguard employees on the job. This includes establishing a workplace atmosphere that is safe and secure for workers. If you were the owner of this Pizza Hut franchise, what protections might you put in place for your employees?

7.3 Contributing to a Positive Work Atmosphere

Learning Objectives
By the end of this section, you will be able to:
• Explain employees’ responsibility to treat their peers with respect
• Describe employees’ duty to follow company policy and the code of conduct
• Discuss types of workplace violence

You may spend more time with your coworkers than you spend with anyone else, including your family and friends. Thus, your ability to get along with work colleagues can have a significant impact on your life, as well as your attitude toward your job and your employer. All sorts of personalities populate our workplaces, but regardless of their working style, preferences, or quirks, employees owe one another courtesy and respect. That does not mean always agreeing with them, because evaluating a diversity of perspectives on business problems and opportunities is often essential for finding solutions. At the same time, however, we are responsible for limiting our arguments to principles, not personalities. This is what we owe to one another as human beings, as well as to the firm, so worksite arguments do not inflict lasting harm on the people who work there or on the company itself.

Getting Along with Coworkers

An employee who gets along with coworkers can help the company perform better. What can employees do to help create a more harmonious workplace with a positive atmosphere? One thing you can do is keep an open mind. You may be wondering as you start a new job whether you will get along with your colleagues as well as you did at your old job. Or, if you did not get along with the people there and were looking for a change, you might fear things will be the same at the new job. Do not make any prejudgments. Get to know a bit about your new coworkers. Accept, or extend, lunch invitations, join weekend activities and office social events, and perhaps join those office traditions that bind long-serving employees and newcomers together in a collaborative spirit.

Another thing you can do is remember to be kind. Everyone has a bad day every now and then, and if you spot a coworker having one, performing a random act of kindness may make that person’s day better.
You do not need to be extravagant. Offer to stay late to help the person meet a tight deadline, or bring coffee or a healthy snack to someone working on particularly difficult tasks. Remember the adage, “It’s nice to be important, but it’s more important to be nice.”

For any relationship to succeed, including the relationship between coworkers, the parties must respect each other—and show it. Avoid doing things that might offend others. For example, do not take credit for someone else’s work. Do not be narrow-minded; when someone brings up a topic such as politics or religion, be willing to listen and tolerate differing points of view. A related directive is to avoid sexual jokes, stories, anecdotes, and innuendos. You might think it is okay to talk about anything and everything at work, but it is not. Others may not find the topic funny and may feel offended, and you may make yourself vulnerable to action by management if such behavior is reported. Your coworkers might be a captive audience, but you should never place them in an awkward position.

Make an effort to get along with everyone, even difficult people. You did not choose your coworkers, and some may be hard to get along with. But professionalism requires that we attempt to establish the best working relationships we can on the job, no matter the opinions we might have about our colleagues. Normally, we might like some of them very much, be neutral about some others, and genuinely dislike still others. Yet our responsibility in the workplace is to respect and act at least civilly toward all of them. We likely will feel better about ourselves as professionals and also live up to our commitments to our companies.

Finally, do not use social media to gossip. Gossiping at work can cause problems anywhere, perhaps especially on social media, so resist the urge to vent online about your coworkers. It makes you appear petty, small, and untrustworthy, and colleagues may stop communicating with you. You may also run afoul of your employer’s social media policy and risk disciplinary action or dismissal.

Understanding Personalities

Understanding the various personalities at work can be a complex task, but it is a vital one for developing a sense of collegiality. One technique that may be helpful is to develop your own emotional intelligence, which is the capacity to recognize other people’s emotions and also to know and manage your own. One aspect of using emotional intelligence is showing empathy, the willingness to step into someone else’s shoes.

Link to Learning
Do you think you know yourself? Take this free online personality test from IDR Labs; it may tell you something you did not know that you can use to your benefit at work.

All of us have different workplace personalities, which express the way we think and act on the job. There are many such personalities, and none is superior or inferior to another, but they are a way in which we exhibit our uniqueness on the job (Figure 7.6). Some of us lead with our brains and emphasize logic and reason. Others lead with our hearts, always emphasizing mercy over justice in our relationships with others.

Employees can also have very different work styles, the way in which we are most comfortable accomplishing our tasks at work. Some of us gravitate toward independence and jobs or tasks we can accomplish alone. Others prefer team or project work, bringing us into touch with different personalities. Still others seek a mix of these environments.
Some prioritize getting the job done as efficiently as possible, whereas others value the journey of working on the project with others and the shared experiences it brings. There is no right or wrong style, but it benefits any worker to know his or her preferences and something about the work personalities of colleagues. When in the office, the point for any of us individually is to appreciate what motivates our greatest success and happiness on the job.

What Would You Do?
Personality Test

Imagine you are a department director with twenty-five employees reporting directly to you. Two of them are experts in their fields: You like and respect them individually, as do the others in your department, but they simply cannot get along with each other and so never work together. How do you resolve this personality clash? You cannot simply insist that the two colleagues cooperate, because personalities do not change. Still, you have to do your best to establish an atmosphere in which they can at least collaborate civilly. Even though managers have no power to change human nature or the personality conflicts that inevitably occur, part of their responsibility is to establish a harmonious working environment, and others will judge you on the harmony you cultivate in your department.

Critical Thinking
Working relationships are extremely important to an employee’s job satisfaction. What options would you use to foster a cooperative working relationship in your department?

Reducing Workplace Violence

As recent incidents have shown—for example, the April 2018 shooting at YouTube headquarters in San Bruno, California 21—workplace violence is a reality, and all employees play a role in helping make work a safe as well as harmonious place. Employees, in fact, have a legal and ethical duty not to be violent at work, and managers have a duty to prevent or stop violence. The National Institute for Occupational Safety and Health reports that violence at work usually fits into one of four categories: traditional criminal intent, violence by one worker against another, violence stemming from a personal relationship, and violence by a customer. 22

In violence based on traditional criminal intent, the perpetrator has no legitimate relationship to the business or its employees, and often the violence is part of a crime such as robbery or shoplifting. Violence between coworkers occurs when a current or former employee attacks another employee in the workplace. Worker-on-worker deaths account for approximately 15 percent of all workplace homicides. All companies are at risk for this type of violence, and contributing factors include failure to conduct a criminal background check as part of the hiring process.

When the violence arises from problems in a personal relationship, the perpetrator often has a direct relationship not with the business but with the victim, who is an employee. This category of violence accounts for slightly less than 10 percent of all workplace homicides. Women are at higher risk than men of being victims of this type of violence.

In the fourth scenario, the violent person has a legitimate relationship with the business, perhaps as a customer or patient, and becomes violent while on the premises. A large portion of customer incidents occur in the nightclub, restaurant, and health care industries. In 2014, about one-fifth of all workplace homicides resulted from this type of violence. 23
Codes of Conduct

Companies have a right to insist that their employees, including managers, engage in ethical decision-making. To help achieve this goal, most businesses provide a written code of ethics or code of conduct for all employees to follow. These codes cover a wide variety of topics, from workplace romance and sexual harassment to hiring and termination policies, client and customer entertainment, bribery and gifts, personal trading of company shares in any way that hints at acting on insider knowledge of the company’s fortunes, outside employment, and dozens of others. A typical code of conduct, regardless of the company or the industry, will also contain a variety of standard clauses, often blending legal compliance and ethical considerations (Table 7.1).

Sample Code of Conduct
Compliance with all laws: Employees must comply with all laws, including bribery, fraud, securities, environmental, safety, and employment laws.
Corruption and fraud: Employees must not accept certain types of gifts and hospitality from clients, vendors, or partners. Bribery is prohibited in all circumstances.
Conflict of interest: Employees must disclose and/or avoid any personal, financial, or other interests that might influence their ability to perform their job duties.
Company property: Employees must treat the company’s property with respect and care, not misuse it, and protect company facilities and other material property.
Cybersecurity and digital devices policy: Employees must not use company computer equipment to transfer illegal, offensive, or pirated material, or to visit potentially dangerous websites that might compromise the safety of the company network or servers; employees must respect their duty of confidentiality in all Internet interactions.
Social media policy: Employees may [or may not] access personal social media accounts at work but are expected to act responsibly, follow company policies, and maintain productivity.
Sexual harassment: Employees must not engage in unwelcome or unwanted sexual advances, requests for sexual favors, and other verbal or physical conduct of a sexual nature. Behaviors such as conditioning promotions, awards, training, or other job benefits upon acceptance of unwelcome actions of a sexual nature are always wrong. 24
Workplace respect: Employees must show respect for their colleagues at every level. Neither inappropriate nor illegal behavior will be tolerated.
Table 7.1

Link to Learning
Exxon Mobil’s Code of Conduct is typical of that of most large companies. Read Exxon Mobil’s code of conduct on its website, and note that it demands ethical conduct at every level of the organization. Exxon expects its leadership team to model appropriate behavior for all employees. Decide whether, if you were an Exxon employee, you would find the code understandable and clear regarding what is allowed and what is not. Still thinking as an employee, identify the section of the code you think is most important for you, and explain why.

Two areas that deserve special mention are cybersecurity and harassment. Recent news stories have highlighted the hacking of electronic tools such as computers and databases, and employees and managers can indirectly contribute to such data breaches through unauthorized web surfing, sloppy e-mail usage, and other careless actions.
Large companies such as Equifax, LinkedIn, Sony, Facebook, and JP Morgan Chase have suffered the theft of customer information, leading to loss of consumer confidence and, sometimes, to large fines. Employees play a part in preventing such breaches by strictly following company guidelines about data privacy and confidentiality, the use and storage of passwords, and other safeguards that limit access to only authorized users.

Link to Learning
For more on recent data breaches, watch this CBS Evening News video about how J.P. Morgan Chase’s $13 billion fine was the largest in history, and this CBS Early video about how the Sony PlayStation was hacked and data was stolen from 77 million users.

We are also witnessing an increased level of public awareness about harassment in the workplace, particularly because of the #MeToo movement that followed revelations in 2017 and 2018 of years of sexual predation by powerful men in Hollywood and Washington, DC, as well as across workplaces of all kinds, including in sports and the arts. A victim of sexual harassment can be a man or a woman, and/or the same sex as the harasser. The harasser can be a supervisor, coworker, other employee, officer/director, intern, consultant, or nonemployee. Whatever the situation, harassing and threatening behavior is wrong (and sometimes criminal) and should always be reported.

7.4 Financial Integrity

Learning Objectives
By the end of this section, you will be able to:
• Describe an employee’s responsibilities to the employer in financial matters
• Define insider trading
• Discuss bribery and its legal and ethical consequences

Employees may face ethical dilemmas in the area of finance, especially in situations such as bribery and insider trading in securities. Such dubious “profit opportunities” can offer the chance of realizing thousands or millions of dollars, creating serious temptation for an employee. However, insider trading and bribery are serious violations of the law that can result in incarceration and large fines.

Insider Trading

The buying or selling of stocks, bonds, or other investments based on nonpublic information that is likely to affect the price of the security being traded is called insider trading. For example, someone who is privy to information that a company is about to be taken over, which will cause its stock price to rise when the information becomes public, may buy the stock before it goes up in order to sell it later for an enhanced profit. Likewise, someone with inside information about a coming drop in share price may sell all his or her holdings at the current price before the information is announced, avoiding the loss other shareholders will suffer when the price falls. Although insider trading can be difficult to prove, it is essentially cheating. It is illegal, unethical, and unfair, and it often injures other investors, as well as undermining public confidence in the stock market.

Insider trading laws are somewhat complex. They have developed through federal court interpretations of Section 10(b) of the Securities Exchange Act of 1934 and Rule 10b-5 thereunder, as well as through actions by the U.S. Securities and Exchange Commission (SEC). The laws identify several kinds of violations.
These include trading by an insider (generally someone who performs work for the company) who possesses significant confidential information relevant to the valuation of the company’s stock, and trading by someone outside the company who is given this sort of information by an insider or who obtains it inappropriately. Even being the messenger (the one communicating material nonpublic information to others on behalf of someone else) can be a legal violation. The concept of an “insider” is broad and includes officers, directors, and employees of a company issuing securities. A person can even constitute what is called a “temporary insider” if he or she temporarily assumes a unique confidential relationship with a firm and, in doing so, acquires confidential information centered on the firm’s financial and operational affairs. Temporary insiders can be investment bankers, brokers, attorneys, accountants, or other professionals typically thought of as outsiders, such as newspaper and television reporters.

A famous case of insider trading, Securities and Exchange Commission v. Texas Gulf Sulphur Co. (1968), began with the discovery of the Kidd Mine and implicated the employees of a Texas mining company. 25 When first notified of the discovery of a large and very valuable copper deposit, mine employees bought stock in the company while keeping the information secret. When the information was released to the public, the price of the stock went up, and the employees sold their stock, making a significant amount of money. The SEC and the Department of Justice prosecuted the employees for insider trading and won a conviction; the employees had to give back all the money they had made on their trades. Insider trading cases are often highly publicized, especially when charges are brought against high-profile figures.

Ethics Across Time and Cultures
Insider Trading and Fiduciary Duty

One of the most famous cases of insider trading implicated Michael Milken, Dennis Levine, and Martin Siegel, all executives of Drexel Burnham Lambert (DBL), and the company itself. 26 Ivan Boesky, also accused, was an arbitrageur, an outside investor who bet on corporate takeovers and appeared able to anticipate takeover targets uncannily, buy their stock ahead of time, and earn huge profits. Everyone wondered how; the answer was that he cheated. Boesky went to the source—the major investment banks—to get insider information. He paid Levine and Siegel to give him pretakeover details, an illegal action, and he profited enormously from nearly every major deal in the merger-crazy 1980s, including huge deals involving oil companies such as Texaco, Getty, Gulf, and Chevron.

The SEC started to become suspicious after receiving a tip that someone was leaking information. Investigators discovered Levine’s secret Swiss bank account, with all the money Boesky had paid him. Levine then gave up Boesky in a plea deal; the SEC started watching Boesky and subsequently caught Siegel and Milken. The penalties were the most severe ever given at the time. Milken, the biggest catch of all, agreed to pay $200 million in government fines, $400 million to investors who had been hurt by his actions, and $500 million to DBL clients—for a grand total of $1.1 billion. He was sentenced to ten years in prison and banned for life from any involvement in the securities industry. Boesky received a prison sentence of 3.5 years, was fined $100 million, and was permanently barred from working with securities.
Levine agreed to pay $11.5 million and $2 million more in back taxes; he too was given a lifetime ban and was sentenced to two years in prison. Milken and Levine violated their financial duties to their employer and the company’s clients. Not only does insider trading create a public relations nightmare, it also subjects the company to legal liability. DBL ended up being held liable in civil lawsuits due to the actions of its employees; it was also charged with violations of the Racketeer Influenced and Corrupt Organizations (RICO) Act and ultimately failed, going bankrupt in 1990. (As a note of interest regarding the aftermath of all of this for Milken, he has tried to redeem his image since his incarceration. He resolutely advises others to avoid his criminal acts and has endowed some worthy causes in Los Angeles.)

Critical Thinking
Employers in financial services must have stringent codes of professional behavior for their employees to observe. Even given such a code, how should employees honor their fiduciary duty to safeguard the firm’s assets and treat clients equitably? What mechanisms would you suggest for keeping employees in banking, equities trading, and financial advising within the limits of the law and ethical behavior?
This case dominated the headlines in the 1980s, and the accused were all severely fined and received prison sentences. How do you think such a case might be treated today?
Should employees in these industries be encouraged or even required to receive ethical certification from the state or from professional associations? Why or why not?

Bribery and the Foreign Corrupt Practices Act

Another temptation that may present itself to employees is the offer of a bribe. A bribe is a payment in some material form (cash or noncash) for an act that runs counter to the legal or ethical culture of the work environment. Bribery constitutes a violation of the law in all fifty U.S. states, as well as of a federal law that prohibits bribery in international transactions, the Foreign Corrupt Practices Act. Bribery generally injures not only individuals but also competitors, the government, and the free-market system as a whole. Of course, the bribe is often somewhat less obvious than an envelope full of money. It is important, therefore, to understand what constitutes a bribe.

Numerous factors help establish the ethics (and legality) of gift giving and receiving: the value of the gift, its purpose, the circumstances under which it is given, the position of the person receiving it, company policy, and the law. Assuming an employee has decision-making authority, the company wants and has the right to expect him or her to make choices in its best interest, not the employee’s own self-interest. For example, assume an employee has the authority to buy a copy machine for the company. The employer wants to get the best copy machine for the best price, taking into account quality, service, warranties, and other factors. But what if the employee accepts a valuable gift card from a vendor who sells a copy machine with higher operating and maintenance charges, and then places the order with that vendor? This is clearly not in the best interests of the employer. It constitutes a failure on the part of the employee to follow ethical and legal rules, and, in all likelihood, company policy as well.
If a company wants its employees always to do the right thing, it must have policies and procedures that ensure the employees know what the rules are and the consequences for breaking them. A gift may be only a well-intentioned token of appreciation, but the potential for violating company rules (and the law) is still present. A well-written and effectively communicated gift policy provides guidance to company employees about what is and is not appropriate to accept from a customer or vendor, and when. This policy should clearly state whether employees are allowed to accept gifts on or outside the work premises and who may give or accept them. If gifts are allowed, the gift policy should define the acceptable value and type, and the circumstances under which an employee may accept a gift. When in doubt about whether the size or value of a gift makes it impossible to accept, workers should be advised to check with the appropriate officer or department within their company. Be it an “ethics hotline” or simply the human resources department, wise firms provide an easy protocol for employees to follow in determining what falls within and outside the rules for accepting gifts.

As an example of a gift policy, consider the federal government’s strict rules. 27 A federal employee may not give or solicit a contribution for a gift to an official superior and may not accept a gift from an employee receiving less pay if that employee is a subordinate. On annual occasions when gifts are traditionally given, such as birthdays and holidays, an employee may give a superior a gift valued at less than $10. An employee may not solicit or accept a gift given because of his or her official position, or from a prohibited source, including anyone who has or seeks official action or business with the agency. In special circumstances such as holidays, and unless the frequency of the gifts would appear to be improper, an employee generally may accept gifts of less than $20. Gifts of entertainment, such as expensive restaurant meals, are also restricted. Finally, gifts must be reported when their total value from one source exceeds $390 in a calendar year. Some companies in the private sector follow similar rules; the sketch at the end of this discussion shows how threshold-based rules like these can be stated unambiguously.

Bribery presents a particular ethical challenge for employees in the international business arena. Although every company wants to land lucrative contracts around the world, most expect their employees to follow both the law and company policy when attempting to consummate such deals. The U.S. law prohibiting bribery in international business dealings is the Foreign Corrupt Practices Act (FCPA), an amendment to the Securities Exchange Act of 1934, one of the most important laws promoting transparency in corporate governance. The FCPA dates to 1977 and was amended in 1988 and 1998. Its main purpose is to make it illegal for companies and their managers to influence or bribe foreign officials with monetary payments or rewards of any kind in an attempt to get or keep business opportunities outside the United States. The FCPA is enforced through the joint efforts of the SEC and the Department of Justice. 28 It applies to any act by U.S. businesses, their representatives, foreign corporations whose stock is traded in U.S. markets, and all U.S. citizens, nationals, or residents acting in furtherance of a foreign corrupt practice, whether they are physically present in the United States or not (this is called the nationality principle).
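Before turning to the consequences of violating antibribery law, here is the promised sketch, in Python, of how threshold-based gift rules might be written down. The $20 per-gift and $390 annual reporting figures mirror the federal rules just described, but the function and its names are a hypothetical illustration, not part of any actual compliance system:

```python
# Hypothetical illustration of a threshold-based gift policy check.
# The dollar thresholds mirror the federal rules described above;
# the logic is a simplified sketch, not an actual compliance system.

PER_GIFT_LIMIT = 20        # gifts under $20 are generally acceptable
ANNUAL_REPORT_LIMIT = 390  # report when yearly total from one source exceeds this

def review_gift(value, prior_total_from_source, from_prohibited_source=False):
    """Return (may_accept, must_report) for a proposed gift."""
    # Gifts from prohibited sources may never be accepted.
    if from_prohibited_source:
        return False, False
    # A single gift at or above the per-gift limit is refused.
    if value >= PER_GIFT_LIMIT:
        return False, False
    # Acceptable, but check whether the running total triggers reporting.
    must_report = prior_total_from_source + value > ANNUAL_REPORT_LIMIT
    return True, must_report

# Example: a $15 gift from a vendor who has already given $380 this year
# is acceptable but pushes the annual total past the reporting threshold.
print(review_gift(15, 380))  # (True, True)
```

The point of such a sketch is only that unambiguous rules are easier to follow and to audit than vague exhortations; real gift policies also address entertainment, frequency, and the appearance of impropriety, considerations that resist such simple encoding.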
Antibribery law is a serious issue for companies with overseas business and cross-border sales. Any companies or individuals convicted of these activities may pay significant fines, and individuals can face prison time. The FCPA prohibits an agent of any company incorporated in the United States from extending a bribe to a foreign government official to achieve a business advantage in that country, but it does not specifically prohibit the extension of a bribe to a private officer of a nongovernmental company in a foreign country. The definition of a foreign government official can be expansive; it includes not only those working directly for the government but also company officials if the company is owned or operated by the government. An exception is made for “facilitating or grease payments,” small amounts of money paid to low-level government workers in an effort to speed routine tasks like processing paperwork or turning on electricity, but not to influence the granting of a contract.

Illegal payments need not be cash; they can include anything of value, such as gifts and trips. For example, BHP Billiton, a multinational mining and energy company, and GlaxoSmithKline, a U.K. pharmaceutical company, were each fined $25 million for buying foreign officials tickets to the 2008 Olympic Games in Beijing, China. 29 Fines for violations like these can be large and can include civil penalties as well as forfeited profits. For example, Telia, a Swedish telecommunications provider whose shares are traded on Nasdaq, recently agreed to pay nearly a billion dollars ($965 million) in a settlement to resolve FCPA violations that consisted of using bribery to win business in Uzbekistan. 30

Link to Learning
The SEC website provides an interactive list of the SEC’s FCPA enforcement actions by calendar year and company name. Click on Telia to read more details on the case cited in the preceding paragraph. Do you think the penalty was too harsh, or not harsh enough? Why?

The potential effect of laws such as the FCPA, which impose ethical duties on employees and the companies they work for, is often debated. Although some believe the FCPA disadvantages U.S. firms competing in foreign markets, others say it is the backbone of an ethical free enterprise system. The argument against strong enforcement of the FCPA has some merit according to managers in the field, among whom there is a general sense that illegal or unethical conduct is sometimes necessary for success. An attorney for energy-related company Cinergy summed up the feelings of many executives: “Shame on the Justice Department’s myopic view and inability to understand the realities of the world.” 31 Some nations consider business bribery to be culturally acceptable and turn a blind eye to such activities.

The argument in favor of FCPA enforcement has its supporters as well, who assert that the law not only covers the activities of U.S. companies but also levels the playing field because of its broad jurisdiction over foreign enterprises and their officials. The fact is that since the United States passed the FCPA, other nations have followed suit. The 1997 Organization for Economic Cooperation and Development (OECD) Anti-Bribery Convention has been instrumental in getting its signatories (the United Kingdom and most European Union nations) to enact stricter antibribery laws. The United Kingdom adopted the Bribery Act in 2010, Canada adopted the Corruption of Foreign Public Officials Act in 1999, and European Union nations have done the same.
That convention, formally the OECD Convention on Combating Bribery of Foreign Public Officials in International Business Transactions, now has forty-three signatories, including all thirty-five OECD countries and eight others. Companies and employees engaging in transactions in foreign markets face an increased level of regulatory scrutiny and are well served if they put ethics policies in place and enforce them. Companies must train employees at all levels to follow compliance guidelines and rules, rather than engaging in illegal conduct such as “under the table” and “off the books” payments (Figure 7.7).

Ethical Leadership

Of course, bribery is just one of many ethical dilemmas an employee might face in the workplace. Not all such dilemmas are governed by the clear-cut rules generally laid out for illegal acts such as bribery. Employees may find themselves being asked to do something that is legal but not considered ethical. For example, an employee might receive confidential proprietary knowledge about another firm that would give his or her firm an unfair competitive advantage. Should the employee act on this information?

What Would You Do?
Should You Act on Information If You Have Doubts?

Assume you are a partner in a successful computer consulting firm bidding for a contract with a large insurance company. Your chief rival is a firm that has usually offered services and prices similar to yours. However, from a new employee who used to work for that firm, you learn that it is unveiling a new competitive price structure and accelerated delivery dates, which will undercut the terms you had been prepared to offer the insurance company. Assume you have verified that the new employee is not in violation of any non-compete or nondisclosure agreement and that the information therefore was not given to you illegally.

Critical Thinking
Would you change prices and delivery dates to beat your rival? Or would you inform both your rival and the potential customer of what you have learned? Why?

Most companies say they want all employees to obey the law and make ethical decisions. But employees typically should not be expected to make ethical decisions based just on gut instinct; they need guidance, training, and leadership to help them navigate the maze of gray areas that present themselves daily in business. This guidance can be provided by the company through standard setting and the development of ethical codes of conduct and policies. Senior managers who model ethical behavior and lead by direct example also provide significant direction.

7.5 Criticism of the Company and Whistleblowing

Learning Objectives
By the end of this section, you will be able to:
• Outline the rules and laws that govern employees’ criticism of the employer
• Identify situations in which an employee becomes a whistleblower

This chapter has explained the many responsibilities employees owe their employers. But workers are not robots. They have minds of their own and the freedom to criticize their bosses and firms, even if managers and companies do not always welcome such criticism. What kind of criticism is fair and ethical, what is legal, and how should a whistleblowing employee be treated?

Limiting Pay Secrecy

For decades, most U.S. companies enforced pay secrecy, a policy that prohibits employees from disclosing or discussing salaries among themselves. The reason was obvious: Companies did not want to be scrutinized for their salary decisions.
They knew that if workers were aware of what each was paid, they would question the inequities that pay secrecy kept hidden from them. Recently, the situation has begun to change. Ten states have enacted new laws banning employers from imposing pay secrecy rules: California, Colorado, Illinois, Louisiana, Maine, Michigan, Minnesota, New Hampshire, New Jersey, and Vermont. 32

The real game changer came in 2012, when multiple decisions by the National Labor Relations Board (NLRB) and various federal courts made it clear that most pay secrecy policies are unenforceable and violate federal labor law (National Labor Relations Act, 29 U.S.C. §§ 157–158). 33 Generally speaking, labor law grants employees the right to engage in collective activities, including discussing with each other the specifics of their individual employment arrangements, which includes how much they are paid. Moreover, the applicable sections of the 1935 National Labor Relations Act (NLRA) apply to union and nonunion employees alike, so there is no exception for companies whose employees are not unionized; the law protects all workers. In 2014, President Barack Obama issued an executive order banning companies engaged in federal contracting from prohibiting such salary discussions. 34

Opening up the discussion of pay acknowledges the growing desire of employees to be well informed and to have the freedom to question or criticize their company. If employees cannot talk about something at work because they think it will make their boss angry, where do they go instead? Social media is a likely answer. Protections generally extend to salary discussions on Facebook, Twitter, or Instagram; Section 7 of the NLRA protects two or more employees who act together or discuss improving their terms and conditions of employment in person or online, just as it does in other settings.

Speaking Out on Social Media

Does the First Amendment protect employees at work who criticize their boss or their company? Generally, no. That answer may surprise those who believe that the First Amendment protects all speech. It does not. The Bill of Rights was created to protect citizens from an overreaching government, not from their employer. The First Amendment reads as follows: “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.” The key words are “Congress shall make no law,” meaning the content of speech is something the government and politicians cannot control with laws or policies. However, this right of free speech generally does not apply to the private sector workplace and does not cover criticism of your employer.

Does that mean an employee can be fired for criticizing the company or boss? Yes, under most circumstances. Therefore, if someone posts a message on social media that says, “My boss is a jerk” or “My company is a terrible place to work,” the likelihood is that the person can be fired without any recourse, assuming he or she is an employee at will (see the discussion of at-will employment earlier in this chapter). Unless the act of firing constitutes a violation under federal law, such as Title VII of the Civil Rights Act of 1964, the speech is not protected speech, and thus the speaker (the employee) is not protected.
At some point, all of us may get angry with our companies or supervisors, but we still have a duty to keep our disputes in-house and not make public any situations we are attempting to resolve internally. Employers typically are prohibited from discussing human resource matters relating to any specific employees. Employees, too, should keep complaints confidential unless and until crimes are charged or civil suits are filed.

Cases from the Real World
Adrian Duane and IXL Learning

Adrian Duane had worked for IXL, a Silicon Valley educational technology company, 35 for about a year when he got into a dispute with his supervisor over Duane’s ability to work flexible hours after he returned from medical leave following transgender surgery. Duane posted a critical comment on Glassdoor.com after he said his supervisor refused to accommodate a scheduling request. Duane’s critique said, in part: “If you’re not a family-oriented White or Asian straight or mainstream gay person with 1.7 kids who really likes softball—then you’re likely to find yourself on the outside. . . . Most management do not know what the word ‘discrimination’ means, nor do they seem to think it matters.” 36

According to court documents, Paul Mishkin, IXL’s CEO, confronted Duane with a printout of the Glassdoor review during a meeting about his complaints, at which time IXL terminated Duane. IXL claimed the derogatory post showed “poor judgment and ethical values.” Security had already cleared out Duane’s desk and boxed his personal effects, and he was escorted from the premises. According to IXL, the company had granted Duane’s requests for time off or modified work schedules and welcomes all individuals equally regardless of gender identity.

The NLRB heard Duane’s case. Judge Gerald M. Etchingham said he did not believe the post was part of a concerted or group action among Duane’s fellow employees at the company, and therefore it was not protected under the NLRA, because it was not an attempt to improve collective terms and conditions of employment. Furthermore, Etchingham said Duane’s post was more like “a tantrum” and “childish ridicule” of his employer rather than speech protected under Section 7 of the NLRA. In other words, this was not an attempt to stimulate discussion but rather an anonymous one-way (and one-time) post. “Here, Duane’s posting on Glassdoor.com was not a social media posting like Facebook or Twitter. Instead, Glassdoor.com is a website used by respondent and prospective employees as a recruiting tool to recruit prospective employees.” 37

The NLRB decision is an interesting step in the development of the law as the NLRB tries to apply the NLRA’s protections to employee use of social media. Duane has a pending Equal Employment Opportunity Commission lawsuit alleging employment discrimination under Title VII of the Civil Rights Act of 1964.

Critical Thinking
What ethical and legal obligations do employees have to refrain from badmouthing their employers in a fit of pique, especially on the firm’s own website? Should management allow employees to criticize the company without fear of retaliation? Could management benefit from allowing such criticism? Why or why not?

The rules related to social media are evolving, but applicable laws do not generally distinguish between sites or locations in which someone might criticize an employer, so criticism of the boss remains largely unprotected speech.
As discussed earlier, employees can go online and post information about wages, hours, and working conditions, and that speech is protected by federal statute. So, although some general complaints against employers are not protected under the First Amendment, they may be protected under the NLRA (because arguably they relate to terms and conditions of employment). However, most courts agree that statements personally critical of the boss or the company on a basis other than wages and working conditions are not protected. Obviously, there is no protection when employees post false or misleading information on social media in an attempt to harm the company’s reputation or that of management.

Whistleblowing: Risks and Rewards

The act of whistleblowing —going to an official government agency and disclosing an employer’s violation of the law—is different from everyday criticism. In fact, whistleblowing is largely viewed as a public service because it helps society reduce bad workplace behavior. Being a whistleblower is not easy, however, and someone inclined to act as one should expect many hurdles. If a whistleblower’s identity becomes known, his or her revelations may amount to career suicide. Even if they keep their job, whistleblowers often are not promoted, and they may face resentment not only from management but also from rank-and-file workers who fear the loss of their own jobs. Whistleblowers may also be blacklisted, making it difficult for them to get a job at a different firm, all as a result of doing what is ethical.

Blowing the whistle on your employer is thus a big decision with significant ramifications. However, most employees do not want to cover up unethical or illegal conduct, nor should they. When should employees decide to blow the whistle on their boss or company? Ethicists say it should be done with an appropriate motive—to get the company to comply with the law or to protect potential victims—and not to get revenge on a boss at whom you are angry. Of course, even if an employee has a personal revenge motive, if the company is actively breaking the law, it is still important that the wrongdoing be reported. In any case, knowing when and how to blow the whistle is a challenge for an employee wanting to do the right thing.

The employee should usually try internal reporting channels first, disclosing the problem to management before going public. Sometimes workers mistakenly identify something as wrongdoing that turns out not to be wrongdoing after all. Internal reporting gives management a chance to start an investigation and attempt to rectify the situation. The employee who goes to the government should also have some kind of hard evidence that wrongful actions have occurred; the violation should be serious, and blowing the whistle should have some likelihood of stopping the wrongful act.

Under many federal laws, an employer cannot retaliate by firing, demoting, or taking any other adverse action against workers who report injuries, concerns, or other protected activity. One of the first laws with a specific whistleblower protection provision was the Occupational Safety and Health Act of 1970. Since passage of that law, Congress has expanded whistleblower authority to protect workers who report violations of more than twenty different federal laws across various topics. (There is no all-purpose whistleblower protection; it must be granted by individual statutes.)
A sample of the specific laws under which whistleblowing employees are protected can be found in the environmental area, where it is in the public interest for employees to report violations of the law to the authorities, which, in turn, helps the average citizen concerned about clean air and water. The Clean Air Act protects from retaliation any employee reporting air emission violations from area, stationary, and mobile sources. The Water Pollution Control Act similarly protects from retaliation any employee who reports alleged violations relating to discharge of pollutants into water. Without the help of employees who are “on the ground” and see the violations occur, it could be difficult for government regulators to find the source of pollution. Even when whistleblowers are not acting completely altruistically, their revelations may still be true and worthy of being brought to the public’s attention. Thus, in such situations, the responsible employee becomes a steward of the public interest, and we all should want whistleblowers to come forward. Yet not all whistleblowers are white knights, and not all their firms are evil dragons worthy of being slain.

Link to Learning
Go to this U.S. Department of Labor website that lists all the laws under which whistleblowers have protection to learn more.

Blowing the whistle may bring the employee more than just intrinsic ethical rewards; it may also result in cash. The most lucrative law under which employees can blow the whistle is the False Claims Act (FCA), 31 U.S.C. §§ 3729–3733. This legislation was enacted in 1863, during the American Civil War, because Congress was worried that suppliers of goods to the Union Army might cheat the government. The FCA has been amended many times since then, and today it serves as a leading example of a statutory law that remains important after more than 150 years.

The FCA provides that any person who knowingly submits false claims to the government must pay a civil penalty for each false claim, plus triple the amount of the government’s damages. The amount of this basic civil penalty is regularly adjusted for the cost of living; the current penalty range is from $5,500 to $11,000 per claim. More importantly for our discussion, the qui tam provision of the law allows private persons (called relators) to file lawsuits for violations of the FCA on behalf of the government and to receive part of any penalty imposed. The person bringing the action is a type of whistleblower, but one who initiates legal action on his or her own rather than simply reporting it to a government agency. If the government believes it is a worthwhile case and intervenes in the lawsuit, then the relator (whistleblower) is entitled to receive between 15 and 25 percent of the amount the government recovers. If the government thinks winning is a long shot and declines to intervene in the lawsuit, the relator’s share increases to 25 to 30 percent. The sketch that follows illustrates this arithmetic.
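The following is a minimal sketch, in Python, that simply applies the figures cited above: a per-claim civil penalty of $5,500 to $11,000, treble damages, and a relator share of 15 to 25 percent when the government intervenes (25 to 30 percent when it does not). The function names and the sample case are hypothetical, invented for this illustration only; they are not drawn from the statute or from any real lawsuit.

```python
# Illustrative FCA award arithmetic, using the figures cited in the text.
# All names and the sample case below are hypothetical.

def fca_recovery(num_claims, damages, per_claim_penalty=5_500):
    """Government recovery: a civil penalty per false claim plus treble (3x) damages.

    per_claim_penalty defaults to the low end ($5,500) of the $5,500-$11,000
    range mentioned in the text.
    """
    return num_claims * per_claim_penalty + 3 * damages

def relator_share_range(recovery, government_intervenes):
    """Whistleblower (relator) share under the qui tam provision:
    15-25% of the recovery if the government intervenes, 25-30% if not."""
    low, high = (0.15, 0.25) if government_intervenes else (0.25, 0.30)
    return recovery * low, recovery * high

if __name__ == "__main__":
    # Hypothetical case: 200 false claims causing $10 million in damages.
    recovery = fca_recovery(num_claims=200, damages=10_000_000)
    low, high = relator_share_range(recovery, government_intervenes=True)
    print(f"Government recovery: ${recovery:,.0f}")       # $31,100,000
    print(f"Relator share: ${low:,.0f} to ${high:,.0f}")  # $4,665,000 to $7,775,000
```

Note that even at the low end of the penalty range, the treble-damages term dominates the recovery, which helps explain how qui tam awards in large fraud cases can reach the figures described next.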
A few whistleblowers have become rich (and famous, thanks to an ABC News story), with awards in the neighborhood of $100 million. 38 In 2012, a single whistleblower, Bradley Birkenfeld, a former UBS employee, was awarded $104 million by the Internal Revenue Service (IRS), making him the most highly rewarded whistleblower in history. Birkenfeld also spent time in prison for participating in the tax fraud he reported. In 2009, ten former Pfizer employees were awarded $102 million for exposing an illegal promotion of prescription medications. John Kopchinski, the original whistleblower and one of the ten, received $50 million. In another case involving the health care company HCA, two employees who blew the whistle on Medicare fraud ended up receiving a combined total of $100 million.

It is not just the size of the reward that should get your attention but also the amount of money these employees saved taxpayers and/or shareholders. They turned in companies that were cheating the Centers for Medicare and Medicaid Services (affecting taxpayers), the IRS (affecting government revenues), and private health insurance (affecting premiums). The public saved far more than the reward paid to the whistleblowers. Incredibly high rewards such as these are somewhat unusual, but according to National Whistleblower Center director Stephen Kohn, “Birkenfeld’s and Eckard’s rewards act like advertisements for the U.S. government’s whistleblower programs, which make hundreds of rewards every year.” 39 (The Eckard reference is to Cheryl Eckard, who received a $96 million award in 2010 for exposing manufacturing violations at GlaxoSmithKline.) The FCA is one of four laws under which whistleblowers can receive a reward; the others are administered by the IRS, the SEC, and the Commodity Futures Trading Commission.

Most whistleblowers do not get paid until the lawsuit and all appeals have concluded and the full amount of any monetary penalty has been paid to the government. Many complex cases of business fraud can go on for several years before a verdict is rendered and appealed (or a settlement is reached). An employee whose identity has been disclosed and who has been unofficially blacklisted may not see any reward money for several years.

Cases from the Real World
Sherron Watkins and Enron

Enron is one of the most infamous examples of corporate fraud in U.S. history. The scandal that destroyed the company resulted in approximately $60 billion in lost shareholder value. Sherron Watkins, an officer of the company, discovered the fraud and first went to her boss and mentor, founder and chairperson Ken Lay, to report the suspected accounting and financial irregularities. She was ignored more than once and eventually went to the press with her story. Because she did not go directly to the SEC, Watkins received no whistleblower protection. (The Sarbanes-Oxley Act was not passed until after the Enron scandal. In fact, it was Watkins’s circumstance and Enron’s misdeeds that helped convince Congress to pass the law. 40 )

Now a respected national speaker on the topic of ethics and employees’ responsibility, Watkins talks about how an employee should handle such situations. “When you’re faced with something that really matters, if you’re silent, you’re starting on the wrong path . . . go against the crowd if need be,” she said in a speech to the National Character and Leadership Symposium, a seminar to instill leadership and moral qualities in young men and women. Watkins talks openly about the risk of being an honest employee, something employees should consider when evaluating what they owe their company, the public, and themselves. “I will never have a job in corporate America again. The minute you speak truth to power and you’re not heard, your career is never the same again.”

Enron’s corporate leaders dealt with the looming crisis by a combination of blaming others and leaving their employees to fend for themselves. According to Watkins, “Within two weeks of me finding this fraud, [Enron president] Jeff Skilling quit. We did feel like we were on a battleship, and things were not going well, and the captain had just taken a helicopter home.
The fall of 2001 was just the bleakest time in my life, because everything I thought was secure was no longer secure.”

Critical Thinking
Did Watkins owe an ethical duty to Enron, to its shareholders, or to the investing public to go public with her suspicions? Explain your answer. How big a price is it fair to ask a whistleblowing employee to pay?

Link to Learning
Visit the National Whistleblower Center website and learn more about some of the individuals discussed in this chapter who became whistleblowers. Watch this video about one of the most famous whistleblowers, Sherron Watkins, former vice president of Enron, to learn more.

Sometimes employees, including managers, face an ethical dilemma that they seek to address from within rather than becoming a whistleblower. The risk is that they may be ignored or that their speaking up will be held against them. However, companies should want and expect employees to step forward and report wrongdoing to their superiors, and they should support that decision, not punish it. Sallie Krawcheck, a financial industry executive, was not a whistleblower in either the classical or the legal sense. She went to her boss with her discovery of wrongdoing at work, which means she had no legal protection under whistleblower statutes. Read her story in the following box.

Cases from the Real World
Sallie Krawcheck and Merrill Lynch

Shortly after Sallie Krawcheck took over as chief of Merrill Lynch’s wealth management division at Bank of America, she discovered that a mutual fund called the Stable Value Fund, a financial product Merrill had sold to customers as an investment for their 401(k) plans, was not as stable as its name implied. The team at Merrill had made a mistake by managing the fund in a way that assumed a higher risk than was acceptable to its investors, and the fund ended up losing much of its value. Unfortunately, because it was supposed to be a low-risk fund, the people who had invested in it, and who would suffer most from Merrill’s mistakes, were earners of relatively modest incomes, including Walmart employees, who made up the largest group.

According to Krawcheck, she had two options. Option one was to say tough luck to the Stable Value Fund’s investors, including the Walmart employees, explaining that all investments carry some degree of risk. Option two was to bail out the investors by pouring money into the fund to increase its value. Krawcheck had already been burned once by trying to be ethical. She had been head of Citigroup’s wealth management division (Smith Barney); in that capacity, she had made a decision to reimburse clients for some of their losses that she felt were due to company mistakes. Rather than supporting her decision, however, Citigroup terminated her, in large part for making the ethical decision rather than the profitable one.

Now she was in the same predicament with a new company. Should Krawcheck risk her job again by choosing the ethical act, or should she make a purely financial decision and tell the 401(k) investors they would have to take the loss? Krawcheck began talking to people inside and outside the company to see what they thought. Most told her to just keep her head down and do nothing. One “industry titan” told her there was nothing to be done, that everyone knows stable-value funds are not really stable. Unconvinced, Krawcheck took the problem to Bank of America’s CEO. He agreed to back her up and put company money into the depleted stable-value funds to prop them up.
Krawcheck opted to be honest and ethical by helping the small investors and felt good about it. “I thought, ethical business was good business,” she says. “It came down to my sense of purpose as well as my sense of my industry’s purpose; it wasn’t about some abstract ethical theorem . . . the answer wasn’t that I got into the business simply to make a lot of money. It was because it was a business that I knew could have a positive impact on clients’ lives.” 41

But the story does not really have a happy ending. Krawcheck writes that she thought at the time she had done the right thing and still had her job, a win/win outcome of a very tough ethical dilemma. However, speaking out did come at a cost. Krawcheck lost some important and powerful allies within the company, and although she did not lose her job at that time, she writes “the political damage was done; when that CEO retired, the clock began ticking down on my time at Bank of America, and before long I was ‘reorganized out’ of that role.” 42

Critical Thinking
Could you do what Sallie Krawcheck did and risk being fired a second time? Why or why not? Krawcheck went on to start her own firm, Ellevest, specializing in investments for female clients. Why do you think she chose this route rather than moving to another large Wall Street firm?

What Would You Do?
Underestimating and Overcharging

Suppose you are a supervising engineer at a small defense contractor of about one hundred employees. Your firm had barely been breaking even, but the recent award of a federal contract has dramatically turned the situation around. Midway through the new project, though, you realize that the principal partners in your firm have been overcharging the Department of Defense for services provided and components purchased. (You discovered this accidentally, and it would be difficult for anyone else to find it out.) You take this information to one of the principals, whom you know well and respect. He tells you apologetically that the overcharges became necessary when the firm seriously underestimated total project costs in its bid on the contract. If the overcharges do not continue, the firm will again be perilously close to bankruptcy.

You know the firm has long struggled to remain financially viable. Furthermore, you have great confidence in the quality of the work your team is providing the government. Finally, you feel a special kinship with nearly all the employees and particularly with the founding partners, so you are loath to take your evidence to the government.

Critical Thinking
What are you going to do? Will you swallow your discomfort because making the overcharges public may very well put your job and those of one hundred friends and colleagues at risk? Would the overall quality of the firm’s work on the contract persuade you it is worth what it is charging? Or would you decide that fraud is never permissible, even if its disclosure comes at the cost of the survivability of the firm and the friendships you have within it? Explain your reasoning.
Anatomy and Physiology
Chapter Objectives

After studying this chapter, you will be able to:
Discuss the bones of the pectoral and pelvic girdles, and describe how these unite the limbs with the axial skeleton
Describe the bones of the upper limb, including the bones of the arm, forearm, wrist, and hand
Identify the features of the pelvis and explain how these differ between the adult male and female pelvis
Describe the bones of the lower limb, including the bones of the thigh, leg, ankle, and foot
Describe the embryonic formation and growth of the limb bones

Introduction

Your skeleton provides the internal supporting structure of the body. The adult axial skeleton consists of 80 bones that form the head and body trunk. Attached to this are the limbs, whose 126 bones constitute the appendicular skeleton. These bones are divided into two groups: the bones that are located within the limbs themselves, and the girdle bones that attach the limbs to the axial skeleton. The bones of the shoulder region form the pectoral girdle, which anchors the upper limb to the thoracic cage of the axial skeleton. The lower limb is attached to the vertebral column by the pelvic girdle. Because of our upright stance, different functional demands are placed upon the upper and lower limbs. Thus, the bones of the lower limbs are adapted for weight-bearing support and stability, as well as for body locomotion via walking or running. In contrast, our upper limbs are not required for these functions. Instead, our upper limbs are highly mobile and can be utilized for a wide variety of activities. The large range of upper limb movements, coupled with the ability to easily manipulate objects with our hands and opposable thumbs, has allowed humans to construct the modern world in which we live.
[ { "answer": { "ans_choice": 1, "ans_text": "sternal end" }, "bloom": "2", "hl_context": "The clavicle has three regions : the medial end , the lateral end , and the shaft . <hl> The medial end , known as the sternal end of the clavicle , has a triangular shape and articulates with the manubrium portion of the sternum . <hl> This forms the sternoclavicular joint , which is the only bony articulation between the pectoral girdle of the upper limb and the axial skeleton . This joint allows considerable mobility , enabling the clavicle and scapula to move in upward / downward and anterior / posterior directions during shoulder movements . The sternoclavicular joint is indirectly supported by the costoclavicular ligament ( costo - = “ rib ” ) , which spans the sternal end of the clavicle and the underlying first rib . The lateral or acromial end of the clavicle articulates with the acromion of the scapula , the portion of the scapula that forms the bony tip of the shoulder . There are some sex differences in the morphology of the clavicle . In women , the clavicle tends to be shorter , thinner , and less curved . In men , the clavicle is heavier and longer , and has a greater curvature and rougher surfaces where muscles attach , features that are more pronounced in manual workers .", "hl_sentences": "The medial end , known as the sternal end of the clavicle , has a triangular shape and articulates with the manubrium portion of the sternum .", "question": { "cloze_format": "The part of the clavicle that articulates with the manubrium is the ___ .", "normal_format": "Which part of the clavicle articulates with the manubrium?", "question_choices": [ "shaft", "sternal end", "acromial end", "coracoid process" ], "question_id": "fs-id1470998", "question_text": "Which part of the clavicle articulates with the manubrium?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "acromioclavicular joint" }, "bloom": null, "hl_context": "<hl> The acromioclavicular joint transmits forces from the upper limb to the clavicle . <hl> The ligaments around this joint are relatively weak . A hard fall onto the elbow or outstretched hand can stretch or tear the acromioclavicular ligaments , resulting in a moderate injury to the joint . However , the primary support for the acromioclavicular joint comes from a very strong ligament called the coracoclavicular ligament ( see Figure 8.3 ) . This connective tissue band anchors the coracoid process of the scapula to the inferior surface of the acromial end of the clavicle and thus provides important indirect support for the acromioclavicular joint . Following a strong blow to the lateral shoulder , such as when a hockey player is driven into the boards , a complete dislocation of the acromioclavicular joint can result . In this case , the acromion is thrust under the acromial end of the clavicle , resulting in ruptures of both the acromioclavicular and coracoclavicular ligaments . The scapula then separates from the clavicle , with the weight of the upper limb pulling the shoulder downward . <hl> This dislocation injury of the acromioclavicular joint is known as a “ shoulder separation ” and is common in contact sports such as hockey , football , or martial arts . <hl> 8.2 Bones of the Upper Limb Learning Objectives By the end of this section , you will be able to :", "hl_sentences": "The acromioclavicular joint transmits forces from the upper limb to the clavicle . 
This dislocation injury of the acromioclavicular joint is known as a “ shoulder separation ” and is common in contact sports such as hockey , football , or martial arts .", "question": { "cloze_format": "A shoulder separation results from injury to the ________.", "normal_format": "Injury to what results in a shoulder separation? ", "question_choices": [ "glenohumeral joint", "costoclavicular joint", "acromioclavicular joint", "sternoclavicular joint" ], "question_id": "fs-id1326592", "question_text": "A shoulder separation results from injury to the ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "supraspinous fossa" }, "bloom": "1", "hl_context": "<hl> The scapula has three depressions , each of which is called a fossa ( plural = fossae ) . <hl> <hl> Two of these are found on the posterior scapula , above and below the scapular spine . <hl> <hl> Superior to the spine is the narrow supraspinous fossa , and inferior to the spine is the broad infraspinous fossa . <hl> The anterior ( deep ) surface of the scapula forms the broad subscapular fossa . All of these fossae provide large surface areas for the attachment of muscles that cross the shoulder joint to act on the humerus . The scapula has several important landmarks ( Figure 8.4 ) . <hl> The three margins or borders of the scapula , named for their positions within the body , are the superior border of the scapula , the medial border of the scapula , and the lateral border of the scapula . <hl> The suprascapular notch is located lateral to the midpoint of the superior border . The corners of the triangular scapula , at either end of the medial border , are the superior angle of the scapula , located between the medial and superior borders , and the inferior angle of the scapula , located between the medial and lateral borders . The inferior angle is the most inferior portion of the scapula , and is particularly important because it serves as the attachment point for several powerful muscles involved in shoulder and upper limb movements . The remaining corner of the scapula , between the superior and lateral borders , is the location of the glenoid cavity ( glenoid fossa ) . This shallow depression articulates with the humerus bone of the arm to form the glenohumeral joint ( shoulder joint ) . The small bony bumps located immediately above and below the glenoid cavity are the supraglenoid tubercle and the infraglenoid tubercle , respectively . These provide attachments for muscles of the arm .", "hl_sentences": "The scapula has three depressions , each of which is called a fossa ( plural = fossae ) . Two of these are found on the posterior scapula , above and below the scapular spine . Superior to the spine is the narrow supraspinous fossa , and inferior to the spine is the broad infraspinous fossa . The three margins or borders of the scapula , named for their positions within the body , are the superior border of the scapula , the medial border of the scapula , and the lateral border of the scapula .", "question": { "cloze_format": "The feature that lies between the spine and superior border of the scapula is the ___.", "normal_format": "Which feature lies between the spine and superior border of the scapula?", "question_choices": [ "suprascapular notch", "glenoid cavity", "superior angle", "supraspinous fossa" ], "question_id": "fs-id1383481", "question_text": "Which feature lies between the spine and superior border of the scapula?" 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "acromion" }, "bloom": "1", "hl_context": "The scapula also has two prominent projections . Toward the lateral end of the superior border , between the suprascapular notch and glenoid cavity , is the hook-like coracoid process ( coracoid = “ shaped like a crow ’ s beak ” ) . This process projects anteriorly and curves laterally . At the shoulder , the coracoid process is located inferior to the lateral end of the clavicle . It is anchored to the clavicle by a strong ligament , and serves as the attachment site for muscles of the anterior chest and arm . <hl> On the posterior aspect , the spine of the scapula is a long and prominent ridge that runs across its upper portion . <hl> <hl> Extending laterally from the spine is a flattened and expanded region called the acromion or acromial process . <hl> The acromion forms the bony tip of the superior shoulder region and articulates with the lateral end of the clavicle , forming the acromioclavicular joint ( see Figure 8.3 ) . Together , the clavicle , acromion , and spine of the scapula form a V-shaped bony line that provides for the attachment of neck and back muscles that act on the shoulder , as well as muscles that pass across the shoulder joint to act on the arm .", "hl_sentences": "On the posterior aspect , the spine of the scapula is a long and prominent ridge that runs across its upper portion . Extending laterally from the spine is a flattened and expanded region called the acromion or acromial process .", "question": { "cloze_format": "The structure that is an extension of the spine of the scapula is the ___ .", "normal_format": "What structure is an extension of the spine of the scapula?", "question_choices": [ "acromion", "coracoid process", "supraglenoid tubercle", "glenoid cavity" ], "question_id": "fs-id1861683", "question_text": "What structure is an extension of the spine of the scapula?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "coracoid process" }, "bloom": null, "hl_context": "<hl> The scapula also has two prominent projections . <hl> <hl> Toward the lateral end of the superior border , between the suprascapular notch and glenoid cavity , is the hook-like coracoid process ( coracoid = “ shaped like a crow ’ s beak ” ) . <hl> This process projects anteriorly and curves laterally . At the shoulder , the coracoid process is located inferior to the lateral end of the clavicle . It is anchored to the clavicle by a strong ligament , and serves as the attachment site for muscles of the anterior chest and arm . On the posterior aspect , the spine of the scapula is a long and prominent ridge that runs across its upper portion . Extending laterally from the spine is a flattened and expanded region called the acromion or acromial process . The acromion forms the bony tip of the superior shoulder region and articulates with the lateral end of the clavicle , forming the acromioclavicular joint ( see Figure 8.3 ) . Together , the clavicle , acromion , and spine of the scapula form a V-shaped bony line that provides for the attachment of neck and back muscles that act on the shoulder , as well as muscles that pass across the shoulder joint to act on the arm .", "hl_sentences": "The scapula also has two prominent projections . 
Toward the lateral end of the superior border , between the suprascapular notch and glenoid cavity , is the hook-like coracoid process ( coracoid = “ shaped like a crow ’ s beak ” ) .", "question": { "cloze_format": "The ___ is the short, hook-like bony process of the scapula that projects anteriorly.", "normal_format": "What is called the short, hook-like bony process of the scapula that projects anteriorly?", "question_choices": [ "acromial process", "clavicle", "coracoid process", "glenoid fossa" ], "question_id": "fs-id1211722", "question_text": "Name the short, hook-like bony process of the scapula that projects anteriorly." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "60" }, "bloom": "1", "hl_context": "The upper limb is divided into three regions . These consist of the arm , located between the shoulder and elbow joints ; the forearm , which is between the elbow and wrist joints ; and the hand , which is located distal to the wrist . <hl> There are 30 bones in each upper limb ( see Figure 8.2 ) . <hl> The humerus is the single bone of the upper arm , and the ulna ( medially ) and the radius ( laterally ) are the paired bones of the forearm . The base of the hand contains eight bones , each called a carpal bone , and the palm of the hand is formed by five bones , each called a metacarpal bone . The fingers and thumb contain a total of 14 bones , each of which is a phalanx bone of the hand . Humerus", "hl_sentences": "There are 30 bones in each upper limb ( see Figure 8.2 ) .", "question": { "cloze_format": "The number of bones that are in the upper limbs combined is ___ .", "normal_format": "How many bones are there in the upper limbs combined?", "question_choices": [ "20", "30", "40", "60" ], "question_id": "fs-id2203573", "question_text": "How many bones are there in the upper limbs combined?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "greater tubercle" }, "bloom": "1", "hl_context": "The humerus is the single bone of the upper arm region ( Figure 8.5 ) . At its proximal end is the head of the humerus . This is the large , round , smooth region that faces medially . The head articulates with the glenoid cavity of the scapula to form the glenohumeral ( shoulder ) joint . The margin of the smooth area of the head is the anatomical neck of the humerus . <hl> Located on the lateral side of the proximal humerus is an expanded bony area called the greater tubercle . <hl> The smaller lesser tubercle of the humerus is found on the anterior aspect of the humerus . Both the greater and lesser tubercles serve as attachment sites for muscles that act across the shoulder joint . Passing between the greater and lesser tubercles is the narrow intertubercular groove ( sulcus ) , which is also known as the bicipital groove because it provides passage for a tendon of the biceps brachii muscle . The surgical neck is located at the base of the expanded , proximal end of the humerus , where it joins the narrow shaft of the humerus . The surgical neck is a common site of arm fractures . The deltoid tuberosity is a roughened , V-shaped region located on the lateral side in the middle of the humerus shaft . 
As its name indicates , it is the site of attachment for the deltoid muscle .", "hl_sentences": "Located on the lateral side of the proximal humerus is an expanded bony area called the greater tubercle .", "question": { "cloze_format": "The bony landmark that is located on the lateral side of the proximal humerus is ___.", "normal_format": "Which bony landmark is located on the lateral side of the proximal humerus?", "question_choices": [ "greater tubercle", "trochlea", "lateral epicondyle", "lesser tubercle" ], "question_id": "fs-id2051628", "question_text": "Which bony landmark is located on the lateral side of the proximal humerus?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "capitulum" }, "bloom": "1", "hl_context": "<hl> The distal end of the humerus has two articulation areas , which join the ulna and radius bones of the forearm to form the elbow joint . <hl> The more medial of these areas is the trochlea , a spindle - or pulley-shaped region ( trochlea = “ pulley ” ) , which articulates with the ulna bone . Immediately lateral to the trochlea is the capitulum ( “ small head ” ) , a knob-like structure located on the anterior surface of the distal humerus . <hl> The capitulum articulates with the radius bone of the forearm . <hl> Just above these bony areas are two small depressions . These spaces accommodate the forearm bones when the elbow is fully bent ( flexed ) . Superior to the trochlea is the coronoid fossa , which receives the coronoid process of the ulna , and above the capitulum is the radial fossa , which receives the head of the radius when the elbow is flexed . Similarly , the posterior humerus has the olecranon fossa , a larger depression that receives the olecranon process of the ulna when the forearm is fully extended .", "hl_sentences": "The distal end of the humerus has two articulation areas , which join the ulna and radius bones of the forearm to form the elbow joint . The capitulum articulates with the radius bone of the forearm .", "question": { "cloze_format": "The region of the humerus that articulates with the radius as part of the elbow joint is called the ___ .", "normal_format": "Which region of the humerus articulates with the radius as part of the elbow joint?", "question_choices": [ "trochlea", "styloid process", "capitulum", "olecranon process" ], "question_id": "fs-id1701110", "question_text": "Which region of the humerus articulates with the radius as part of the elbow joint?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "scaphoid" }, "bloom": "1", "hl_context": "The wrist and base of the hand are formed by a series of eight small carpal bones ( see Figure 8.7 ) . The carpal bones are arranged in two rows , forming a proximal row of four carpal bones and a distal row of four carpal bones . <hl> The bones in the proximal row , running from the lateral ( thumb ) side to the medial side , are the scaphoid ( “ boat-shaped ” ) , lunate ( “ moon-shaped ” ) , triquetrum ( “ three-cornered ” ) , and pisiform ( “ pea-shaped ” ) bones . <hl> The small , rounded pisiform bone articulates with the anterior surface of the triquetrum bone . The pisiform thus projects anteriorly , where it forms the bony bump that can be felt at the medial base of your hand . The distal bones ( lateral to medial ) are the trapezium ( “ table ” ) , trapezoid ( “ resembles a table ” ) , capitate ( “ head-shaped ” ) , and hamate ( “ hooked bone ” ) bones . 
The hamate bone is characterized by a prominent bony extension on its anterior side called the hook of the hamate bone .", "hl_sentences": "The bones in the proximal row , running from the lateral ( thumb ) side to the medial side , are the scaphoid ( “ boat-shaped ” ) , lunate ( “ moon-shaped ” ) , triquetrum ( “ three-cornered ” ) , and pisiform ( “ pea-shaped ” ) bones .", "question": { "cloze_format": "___ is the lateral-most carpal bone of the proximal row.", "normal_format": "Which is the lateral-most carpal bone of the proximal row?", "question_choices": [ "trapezium", "hamate", "pisiform", "scaphoid" ], "question_id": "fs-id2309520", "question_text": "Which is the lateral-most carpal bone of the proximal row?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "has a head that articulates with the radial notch of the ulna" }, "bloom": "1", "hl_context": "The radius runs parallel to the ulna , on the lateral ( thumb ) side of the forearm ( see Figure 8.6 ) . <hl> The head of the radius is a disc-shaped structure that forms the proximal end . <hl> <hl> The small depression on the surface of the head articulates with the capitulum of the humerus as part of the elbow joint , whereas the smooth , outer margin of the head articulates with the radial notch of the ulna at the proximal radioulnar joint . <hl> The neck of the radius is the narrowed region immediately below the expanded head . Inferior to this point on the medial side is the radial tuberosity , an oval-shaped , bony protuberance that serves as a muscle attachment point . The shaft of the radius is slightly curved and has a small ridge along its medial side . This ridge forms the interosseous border of the radius , which , like the similar border of the ulna , is the line of attachment for the interosseous membrane that unites the two forearm bones . The distal end of the radius has a smooth surface for articulation with two carpal bones to form the radiocarpal joint or wrist joint ( Figure 8.7 and Figure 8.8 ) . On the medial side of the distal radius is the ulnar notch of the radius . This shallow depression articulates with the head of the ulna , which together form the distal radioulnar joint . The lateral end of the radius has a pointed projection called the styloid process of the radius . This provides attachment for ligaments that support the lateral side of the wrist joint . Compared to the styloid process of the ulna , the styloid process of the radius projects more distally , thereby limiting the range of movement for lateral deviations of the hand at the wrist joint .", "hl_sentences": "The head of the radius is a disc-shaped structure that forms the proximal end . The small depression on the surface of the head articulates with the capitulum of the humerus as part of the elbow joint , whereas the smooth , outer margin of the head articulates with the radial notch of the ulna at the proximal radioulnar joint .", "question": { "cloze_format": "The radius bone ________.", "normal_format": "What is a characteristic of the radius bone?", "question_choices": [ "is found on the medial side of the forearm", "has a head that articulates with the radial notch of the ulna", "does not articulate with any of the carpal bones", "has the radial tuberosity located near its distal end" ], "question_id": "fs-id1266074", "question_text": "The radius bone ________." 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "3" }, "bloom": "1", "hl_context": "The hip bone , or coxal bone , forms the pelvic girdle portion of the pelvis . The paired hip bones are the large , curved bones that form the lateral and anterior aspects of the pelvis . <hl> Each adult hip bone is formed by three separate bones that fuse together during the late teenage years . <hl> These bony components are the ilium , ischium , and pubis ( Figure 8.13 ) . These names are retained and used to define the three regions of the adult hip bone .", "hl_sentences": "Each adult hip bone is formed by three separate bones that fuse together during the late teenage years .", "question": { "cloze_format": "The number of bones that fuse in adulthood to form the hip bone is ___ .", "normal_format": "How many bones fuse in adulthood to form the hip bone?", "question_choices": [ "2", "3", "4", "5" ], "question_id": "fs-id1378753", "question_text": "How many bones fuse in adulthood to form the hip bone?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "ilium" }, "bloom": "1", "hl_context": "<hl> The ilium is the fan-like , superior region that forms the largest part of the hip bone . <hl> It is firmly united to the sacrum at the largely immobile sacroiliac joint ( see Figure 8.12 ) . The ischium forms the posteroinferior region of each hip bone . It supports the body when sitting . The pubis forms the anterior portion of the hip bone . The pubis curves medially , where it joins to the pubis of the opposite hip bone at a specialized joint called the pubic symphysis .", "hl_sentences": "The ilium is the fan-like , superior region that forms the largest part of the hip bone .", "question": { "cloze_format": "The component that forms the superior part of the hip bone is the ___.", "normal_format": "Which component forms the superior part of the hip bone?", "question_choices": [ "ilium", "pubis", "ischium", "sacrum" ], "question_id": "fs-id2480582", "question_text": "Which component forms the superior part of the hip bone?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "ischial tuberosity" }, "bloom": "1", "hl_context": "The ischium forms the posterolateral portion of the hip bone ( see Figure 8.13 ) . <hl> The large , roughened area of the inferior ischium is the ischial tuberosity . <hl> <hl> This serves as the attachment for the posterior thigh muscles and also carries the weight of the body when sitting . <hl> You can feel the ischial tuberosity if you wiggle your pelvis against the seat of a chair . Projecting superiorly and anteriorly from the ischial tuberosity is a narrow segment of bone called the ischial ramus . The slightly curved posterior margin of the ischium above the ischial tuberosity is the lesser sciatic notch . The bony projection separating the lesser sciatic notch and greater sciatic notch is the ischial spine .", "hl_sentences": "The large , roughened area of the inferior ischium is the ischial tuberosity . This serves as the attachment for the posterior thigh muscles and also carries the weight of the body when sitting .", "question": { "cloze_format": "___ supports body weight when sitting.", "normal_format": "Which of the following supports body weight when sitting?", "question_choices": [ "iliac crest", "ischial tuberosity", "ischiopubic ramus", "pubic body" ], "question_id": "fs-id1284315", "question_text": "Which of the following supports body weight when sitting?" 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "lesser sciatic notch and greater sciatic notch" }, "bloom": "1", "hl_context": "The ischium forms the posterolateral portion of the hip bone ( see Figure 8.13 ) . The large , roughened area of the inferior ischium is the ischial tuberosity . This serves as the attachment for the posterior thigh muscles and also carries the weight of the body when sitting . You can feel the ischial tuberosity if you wiggle your pelvis against the seat of a chair . Projecting superiorly and anteriorly from the ischial tuberosity is a narrow segment of bone called the ischial ramus . The slightly curved posterior margin of the ischium above the ischial tuberosity is the lesser sciatic notch . <hl> The bony projection separating the lesser sciatic notch and greater sciatic notch is the ischial spine . <hl>", "hl_sentences": "The bony projection separating the lesser sciatic notch and greater sciatic notch is the ischial spine .", "question": { "cloze_format": "The structures the ischial spine is found between are the ___.", "normal_format": "The ischial spine is found between which of the following structures?", "question_choices": [ "inferior pubic ramus and ischial ramus", "pectineal line and arcuate line", "lesser sciatic notch and greater sciatic notch", "anterior superior iliac spine and posterior superior iliac spine" ], "question_id": "fs-id1547569", "question_text": "The ischial spine is found between which of the following structures?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "has a subpubic angle that is larger in females" }, "bloom": "2", "hl_context": "The space enclosed by the bony pelvis is divided into two regions ( Figure 8.15 ) . The broad , superior region , defined laterally by the large , fan-like portion of the upper hip bone , is called the greater pelvis ( greater pelvic cavity ; false pelvis ) . This broad area is occupied by portions of the small and large intestines , and because it is more closely associated with the abdominal cavity , it is sometimes referred to as the false pelvis . More inferiorly , the narrow , rounded space of the lesser pelvis ( lesser pelvic cavity ; true pelvis ) contains the bladder and other pelvic organs , and thus is also known as the true pelvis . The pelvic brim ( also known as the pelvic inlet ) forms the superior margin of the lesser pelvis , separating it from the greater pelvis . The pelvic brim is defined by a line formed by the upper margin of the pubic symphysis anteriorly , and the pectineal line of the pubis , the arcuate line of the ilium , and the sacral promontory ( the anterior margin of the superior sacrum ) posteriorly . The inferior limit of the lesser pelvic cavity is called the pelvic outlet . This large opening is defined by the inferior margin of the pubic symphysis anteriorly , and the ischiopubic ramus , the ischial tuberosity , the sacrotuberous ligament , and the inferior tip of the coccyx posteriorly . Because of the anterior tilt of the pelvis , the lesser pelvis is also angled , giving it an anterosuperior ( pelvic inlet ) to posteroinferior ( pelvic outlet ) orientation . Comparison of the Female and Male Pelvis The differences between the adult female and male pelvis relate to function and body size . In general , the bones of the male pelvis are thicker and heavier , adapted for support of the male ’ s heavier physical build and stronger muscles . 
The greater sciatic notch of the male hip bone is narrower and deeper than the broader notch of females . Because the female pelvis is adapted for childbirth , it is wider than the male pelvis , as evidenced by the distance between the anterior superior iliac spines ( see Figure 8.15 ) . The ischial tuberosities of females are also farther apart , which increases the size of the pelvic outlet . <hl> Because of this increased pelvic width , the subpubic angle is larger in females ( greater than 80 degrees ) than it is in males ( less than 70 degrees ) . <hl> The female sacrum is wider , shorter , and less curved , and the sacral promontory projects less into the pelvic cavity , thus giving the female pelvic inlet ( pelvic brim ) a more rounded or oval shape compared to males . The lesser pelvic cavity of females is also wider and more shallow than the narrower , deeper , and tapering lesser pelvis of males . Because of the obvious differences between female and male hip bones , this is the one bone of the body that allows for the most accurate sex determination . Table 8.1 provides an overview of the general differences between the female and male pelvis . Overview of Differences between the Female and Male Pelvis", "hl_sentences": "Because of this increased pelvic width , the subpubic angle is larger in females ( greater than 80 degrees ) than it is in males ( less than 70 degrees ) .", "question": { "cloze_format": "The pelvis ________.", "normal_format": "Which of the following is correct about the pelvis?", "question_choices": [ "has a subpubic angle that is larger in females", "consists of the two hip bones, but does not include the sacrum or coccyx", "has an obturator foramen, an opening that is defined in part by the sacrospinous and sacrotuberous ligaments", "has a space located inferior to the pelvic brim called the greater pelvis" ], "question_id": "fs-id1424133", "question_text": "The pelvis ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "lesser trochanter" }, "bloom": "2", "hl_context": "The narrowed region below the head is the neck of the femur . This is a common area for fractures of the femur . The greater trochanter is the large , upward , bony projection located above the base of the neck . Multiple muscles that act across the hip joint attach to the greater trochanter , which , because of its projection from the femur , gives additional leverage to these muscles . The greater trochanter can be felt just under the skin on the lateral side of your upper thigh . <hl> The lesser trochanter is a small , bony prominence that lies on the medial aspect of the femur , just below the neck . <hl> <hl> A single , powerful muscle attaches to the lesser trochanter . <hl> Running between the greater and lesser trochanters on the anterior side of the femur is the roughened intertrochanteric line . The trochanters are also connected on the posterior side of the femur by the larger intertrochanteric crest .", "hl_sentences": "The lesser trochanter is a small , bony prominence that lies on the medial aspect of the femur , just below the neck . 
A single , powerful muscle attaches to the lesser trochanter .", "question": { "cloze_format": "The ___ is the bony landmark of the femur that serves as a site for muscle attachments.", "normal_format": "Which bony landmark of the femur serves as a site for muscle attachments?", "question_choices": [ "fovea capitis", "lesser trochanter", "head", "medial condyle" ], "question_id": "fs-id2030135", "question_text": "Which bony landmark of the femur serves as a site for muscle attachments?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "medial condyle of the tibia" }, "bloom": "1", "hl_context": "The distal end of the femur has medial and lateral bony expansions . On the lateral side , the smooth portion that covers the distal and posterior aspects of the lateral expansion is the lateral condyle of the femur . The roughened area on the outer , lateral side of the condyle is the lateral epicondyle of the femur . Similarly , the smooth region of the distal and posterior medial femur is the medial condyle of the femur , and the irregular outer , medial side of this is the medial epicondyle of the femur . <hl> The lateral and medial condyles articulate with the tibia to form the knee joint . <hl> The epicondyles provide attachment for muscles and supporting ligaments of the knee . The adductor tubercle is a small bump located at the superior margin of the medial epicondyle . Posteriorly , the medial and lateral condyles are separated by a deep depression called the intercondylar fossa . Anteriorly , the smooth surfaces of the condyles join together to form a wide groove called the patellar surface , which provides for articulation with the patella bone . The combination of the medial and lateral condyles with the patellar surface gives the distal end of the femur a horseshoe ( U ) shape .", "hl_sentences": "The lateral and medial condyles articulate with the tibia to form the knee joint .", "question": { "cloze_format": "The structure that contributes to the knee joint is the ___ .", "normal_format": "What structure contributes to the knee joint?", "question_choices": [ "lateral malleolus of the fibula", "tibial tuberosity", "medial condyle of the tibia", "lateral epicondyle of the femur" ], "question_id": "fs-id2020647", "question_text": "What structure contributes to the knee joint?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "talus" }, "bloom": "1", "hl_context": "<hl> The posterior half of the foot is formed by seven tarsal bones ( Figure 8.19 ) . <hl> <hl> The most superior bone is the talus . <hl> <hl> This has a relatively square-shaped , upper surface that articulates with the tibia and fibula to form the ankle joint . <hl> Three areas of articulation form the ankle joint : The superomedial surface of the talus bone articulates with the medial malleolus of the tibia , the top of the talus articulates with the distal end of the tibia , and the lateral side of the talus articulates with the lateral malleolus of the fibula . Inferiorly , the talus articulates with the calcaneus ( heel bone ) , the largest bone of the foot , which forms the heel . Body weight is transferred from the tibia to the talus to the calcaneus , which rests on the ground . The medial calcaneus has a prominent bony extension called the sustentaculum tali ( “ support for the talus ” ) that supports the medial side of the talus bone .", "hl_sentences": "The posterior half of the foot is formed by seven tarsal bones ( Figure 8.19 ) . 
The most superior bone is the talus . This has a relatively square-shaped , upper surface that articulates with the tibia and fibula to form the ankle joint .", "question": { "cloze_format": "The tarsal bone that articulates with the tibia and fibula is the ___.", "normal_format": "Which tarsal bone articulates with the tibia and fibula?", "question_choices": [ "calcaneus", "cuboid", "navicular", "talus" ], "question_id": "fs-id1689617", "question_text": "Which tarsal bone articulates with the tibia and fibula?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "is firmly anchored to the fibula by an interosseous membrane" }, "bloom": "1", "hl_context": "The head of the fibula is the small , knob-like , proximal end of the fibula . It articulates with the inferior aspect of the lateral tibial condyle , forming the proximal tibiofibular joint . <hl> The thin shaft of the fibula has the interosseous border of the fibula , a narrow ridge running down its medial side for the attachment of the interosseous membrane that spans the fibula and tibia . <hl> The distal end of the fibula forms the lateral malleolus , which forms the easily palpated bony bump on the lateral side of the ankle . The deep ( medial ) side of the lateral malleolus articulates with the talus bone of the foot as part of the ankle joint . The distal fibula also articulates with the fibular notch of the tibia .", "hl_sentences": "The thin shaft of the fibula has the interosseous border of the fibula , a narrow ridge running down its medial side for the attachment of the interosseous membrane that spans the fibula and tibia .", "question": { "cloze_format": "The tibia ________.", "normal_format": "Which of the following is correct about the tibia?", "question_choices": [ "has an expanded distal end called the lateral malleolus", "is not a weight-bearing bone", "is firmly anchored to the fibula by an interosseous membrane", "can be palpated (felt) under the skin only at its proximal and distal ends" ], "question_id": "fs-id2350603", "question_text": "The tibia ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "the rotation of the limbs" }, "bloom": "2", "hl_context": "The early outgrowth of the upper and lower limb buds initially has the limbs positioned so that the regions that will become the palm of the hand or the bottom of the foot are facing medially toward the body , with the future thumb or big toe both oriented toward the head . <hl> During the seventh week of development , the upper limb rotates laterally by 90 degrees , so that the palm of the hand faces anteriorly and the thumb points laterally . <hl> In contrast , the lower limb undergoes a 90 - degree medial rotation , thus bringing the big toe to the medial side of the foot .", "hl_sentences": "During the seventh week of development , the upper limb rotates laterally by 90 degrees , so that the palm of the hand faces anteriorly and the thumb points laterally .", "question": { "cloze_format": "The event that takes place during the seventh week of development is (the) ___.", "normal_format": "Which event takes place during the seventh week of development?", "question_choices": [ "appearance of the upper and lower limb buds", "flattening of the distal limb bud into a paddle shape", "the first appearance of hyaline cartilage models of future bones", "the rotation of the limbs" ], "question_id": "fs-id1845404", "question_text": "Which event takes place during the seventh week of development?" 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "growth of the epiphyseal plate will produce bone lengthening" }, "bloom": "2", "hl_context": "<hl> Ossification of Appendicular Bones All of the girdle and limb bones , except for the clavicle , develop by the process of endochondral ossification . <hl> This process begins as the mesenchyme within the limb bud differentiates into hyaline cartilage to form cartilage models for future bones . By the twelfth week , a primary ossification center will have appeared in the diaphysis ( shaft ) region of the long bones , initiating the process that converts the cartilage model into bone . A secondary ossification center will appear in each epiphysis ( expanded end ) of these bones at a later time , usually after birth . <hl> The primary and secondary ossification centers are separated by the epiphyseal plate , a layer of growing hyaline cartilage . <hl> This plate is located between the diaphysis and each epiphysis . <hl> It continues to grow and is responsible for the lengthening of the bone . <hl> The epiphyseal plate is retained for many years , until the bone reaches its final , adult size , at which time the epiphyseal plate disappears and the epiphysis fuses to the diaphysis . ( Seek additional content on ossification in the chapter on bone tissue . )", "hl_sentences": "Ossification of Appendicular Bones All of the girdle and limb bones , except for the clavicle , develop by the process of endochondral ossification . The primary and secondary ossification centers are separated by the epiphyseal plate , a layer of growing hyaline cartilage . It continues to grow and is responsible for the lengthening of the bone .", "question": { "cloze_format": "During endochondral ossification of a long bone, ________.", "normal_format": "During endochondral ossification of a long bone, which of the following occurs?", "question_choices": [ "a primary ossification center will develop within the epiphysis", "mesenchyme will differentiate directly into bone tissue", "growth of the epiphyseal plate will produce bone lengthening", "all epiphyseal plates will disappear before birth" ], "question_id": "fs-id1909786", "question_text": "During endochondral ossification of a long bone, ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "develops via intramembranous ossification" }, "bloom": "1", "hl_context": "The clavicle is the one appendicular skeleton bone that does not develop via endochondral ossification . <hl> Instead , the clavicle develops through the process of intramembranous ossification . <hl> During this process , mesenchymal cells differentiate directly into bone-producing cells , which produce the clavicle directly , without first making a cartilage model . Because of this early production of bone , the clavicle is the first bone of the body to begin ossification , with ossification centers appearing during the fifth week of development . 
However , ossification of the clavicle is not complete until age 25 .", "hl_sentences": "Instead , the clavicle develops through the process of intramembranous ossification .", "question": { "cloze_format": "The clavicle ________.", "normal_format": "Which of the following is correct about the clavicle?", "question_choices": [ "develops via intramembranous ossification", "develops via endochondral ossification", "is the last bone of the body to begin ossification", "is fully ossified at the time of birth" ], "question_id": "fs-id2004911", "question_text": "The clavicle ________." }, "references_are_paraphrase": null } ]
8
8.1 The Pectoral Girdle Learning Objectives By the end of this section, you will be able to: Describe the bones that form the pectoral girdle List the functions of the pectoral girdle The appendicular skeleton includes all of the limb bones, plus the bones that unite each limb with the axial skeleton ( Figure 8.2 ). The bones that attach each upper limb to the axial skeleton form the pectoral girdle (shoulder girdle). This consists of two bones, the scapula and clavicle ( Figure 8.3 ). The clavicle (collarbone) is an S-shaped bone located on the anterior side of the shoulder. It is attached on its medial end to the sternum of the thoracic cage, which is part of the axial skeleton. The lateral end of the clavicle articulates (joins) with the scapula just above the shoulder joint. You can easily palpate, or feel with your fingers, the entire length of your clavicle. The scapula (shoulder blade) lies on the posterior aspect of the shoulder. It is supported by the clavicle and articulates with the humerus (arm bone) to form the shoulder joint. The scapula is a flat, triangular-shaped bone with a prominent ridge running across its posterior surface. This ridge extends out laterally, where it forms the bony tip of the shoulder and joins with the lateral end of the clavicle. By following along the clavicle, you can palpate out to the bony tip of the shoulder, and from there, you can move back across your posterior shoulder to follow the ridge of the scapula. Move your shoulder around and feel how the clavicle and scapula move together as a unit. Both of these bones serve as important attachment sites for muscles that aid with movements of the shoulder and arm. The right and left pectoral girdles are not joined to each other, allowing each to operate independently. In addition, the clavicle of each pectoral girdle is anchored to the axial skeleton by a single, highly mobile joint. This allows for the extensive mobility of the entire pectoral girdle, which in turn enhances movements of the shoulder and upper limb. Clavicle The clavicle is the only long bone that lies in a horizontal position in the body (see Figure 8.3 ). The clavicle has several important functions. First, anchored by muscles from above, it serves as a strut that extends laterally to support the scapula. This in turn holds the shoulder joint superiorly and laterally from the body trunk, allowing for maximal freedom of motion for the upper limb. The clavicle also transmits forces acting on the upper limb to the sternum and axial skeleton. Finally, it serves to protect the underlying nerves and blood vessels as they pass between the trunk of the body and the upper limb. The clavicle has three regions: the medial end, the lateral end, and the shaft. The medial end, known as the sternal end of the clavicle , has a triangular shape and articulates with the manubrium portion of the sternum. This forms the sternoclavicular joint , which is the only bony articulation between the pectoral girdle of the upper limb and the axial skeleton. This joint allows considerable mobility, enabling the clavicle and scapula to move in upward/downward and anterior/posterior directions during shoulder movements. The sternoclavicular joint is indirectly supported by the costoclavicular ligament (costo- = “rib”), which spans the sternal end of the clavicle and the underlying first rib. The lateral or acromial end of the clavicle articulates with the acromion of the scapula, the portion of the scapula that forms the bony tip of the shoulder. 
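Before turning to the clavicle's sex differences and clinical importance, the chain of articulations just described, running from the axial skeleton out to the arm, can be summarized compactly. The short sketch below is purely an illustrative study aid, not anything from the text itself; the joint and bone names are taken from the preceding paragraphs.

```python
# Illustrative study aid (not part of the original text): the chain of
# articulations that links the axial skeleton to the upper limb, as
# described in the paragraphs above.
pectoral_girdle_chain = [
    ("sternoclavicular joint", "manubrium of the sternum", "sternal end of the clavicle"),
    ("acromioclavicular joint", "acromial end of the clavicle", "acromion of the scapula"),
    ("shoulder joint", "scapula", "humerus (arm bone)"),
]

for joint, proximal, distal in pectoral_girdle_chain:
    print(f"{joint}: {proximal} <-> {distal}")
```

Reading the chain in order mirrors the palpation exercise above: sternum, clavicle, acromion, scapula, humerus.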
There are some sex differences in the morphology of the clavicle. In women, the clavicle tends to be shorter, thinner, and less curved. In men, the clavicle is heavier and longer, and has a greater curvature and rougher surfaces where muscles attach, features that are more pronounced in manual workers. The clavicle is the most commonly fractured bone in the body. Such breaks often occur because of the force exerted on the clavicle when a person falls onto his or her outstretched arms, or when the lateral shoulder receives a strong blow. Because the sternoclavicular joint is strong and rarely dislocated, excessive force results in the breaking of the clavicle, usually between the middle and lateral portions of the bone. If the fracture is complete, the shoulder and lateral clavicle fragment will drop due to the weight of the upper limb, causing the person to support the sagging limb with their other hand. Muscles acting across the shoulder will also pull the shoulder and lateral clavicle anteriorly and medially, causing the clavicle fragments to override. The clavicle overlies many important blood vessels and nerves for the upper limb, but fortunately, due to the anterior displacement of a broken clavicle, these structures are rarely affected when the clavicle is fractured. Scapula The scapula is also part of the pectoral girdle and thus plays an important role in anchoring the upper limb to the body. The scapula is located on the posterior side of the shoulder. It is surrounded by muscles on both its anterior (deep) and posterior (superficial) sides, and thus does not articulate with the ribs of the thoracic cage. The scapula has several important landmarks ( Figure 8.4 ). The three margins or borders of the scapula, named for their positions within the body, are the superior border of the scapula , the medial border of the scapula , and the lateral border of the scapula . The suprascapular notch is located lateral to the midpoint of the superior border. The corners of the triangular scapula, at either end of the medial border, are the superior angle of the scapula , located between the medial and superior borders, and the inferior angle of the scapula , located between the medial and lateral borders. The inferior angle is the most inferior portion of the scapula, and is particularly important because it serves as the attachment point for several powerful muscles involved in shoulder and upper limb movements. The remaining corner of the scapula, between the superior and lateral borders, is the location of the glenoid cavity (glenoid fossa). This shallow depression articulates with the humerus bone of the arm to form the glenohumeral joint (shoulder joint). The small bony bumps located immediately above and below the glenoid cavity are the supraglenoid tubercle and the infraglenoid tubercle , respectively. These provide attachments for muscles of the arm. The scapula also has two prominent projections. Toward the lateral end of the superior border, between the suprascapular notch and glenoid cavity, is the hook-like coracoid process (coracoid = “shaped like a crow’s beak”). This process projects anteriorly and curves laterally. At the shoulder, the coracoid process is located inferior to the lateral end of the clavicle. It is anchored to the clavicle by a strong ligament, and serves as the attachment site for muscles of the anterior chest and arm. On the posterior aspect, the spine of the scapula is a long and prominent ridge that runs across its upper portion. 
Extending laterally from the spine is a flattened and expanded region called the acromion or acromial process . The acromion forms the bony tip of the superior shoulder region and articulates with the lateral end of the clavicle, forming the acromioclavicular joint (see Figure 8.3 ). Together, the clavicle, acromion, and spine of the scapula form a V-shaped bony line that provides for the attachment of neck and back muscles that act on the shoulder, as well as muscles that pass across the shoulder joint to act on the arm. The scapula has three depressions, each of which is called a fossa (plural = fossae). Two of these are found on the posterior scapula, above and below the scapular spine. Superior to the spine is the narrow supraspinous fossa , and inferior to the spine is the broad infraspinous fossa . The anterior (deep) surface of the scapula forms the broad subscapular fossa . All of these fossae provide large surface areas for the attachment of muscles that cross the shoulder joint to act on the humerus. The acromioclavicular joint transmits forces from the upper limb to the clavicle. The ligaments around this joint are relatively weak. A hard fall onto the elbow or outstretched hand can stretch or tear the acromioclavicular ligaments, resulting in a moderate injury to the joint. However, the primary support for the acromioclavicular joint comes from a very strong ligament called the coracoclavicular ligament (see Figure 8.3 ). This connective tissue band anchors the coracoid process of the scapula to the inferior surface of the acromial end of the clavicle and thus provides important indirect support for the acromioclavicular joint. Following a strong blow to the lateral shoulder, such as when a hockey player is driven into the boards, a complete dislocation of the acromioclavicular joint can result. In this case, the acromion is thrust under the acromial end of the clavicle, resulting in ruptures of both the acromioclavicular and coracoclavicular ligaments. The scapula then separates from the clavicle, with the weight of the upper limb pulling the shoulder downward. This dislocation injury of the acromioclavicular joint is known as a “shoulder separation” and is common in contact sports such as hockey, football, or martial arts. 8.2 Bones of the Upper Limb Learning Objectives By the end of this section, you will be able to: Identify the divisions of the upper limb and describe the bones in each region List the bones and bony landmarks that articulate at each joint of the upper limb The upper limb is divided into three regions. These consist of the arm , located between the shoulder and elbow joints; the forearm , which is between the elbow and wrist joints; and the hand , which is located distal to the wrist. There are 30 bones in each upper limb (see Figure 8.2 ). The humerus is the single bone of the upper arm, and the ulna (medially) and the radius (laterally) are the paired bones of the forearm. The base of the hand contains eight bones, each called a carpal bone , and the palm of the hand is formed by five bones, each called a metacarpal bone . The fingers and thumb contain a total of 14 bones, each of which is a phalanx bone of the hand . Humerus The humerus is the single bone of the upper arm region ( Figure 8.5 ). At its proximal end is the head of the humerus . This is the large, round, smooth region that faces medially. The head articulates with the glenoid cavity of the scapula to form the glenohumeral (shoulder) joint. 
The margin of the smooth area of the head is the anatomical neck of the humerus. Located on the lateral side of the proximal humerus is an expanded bony area called the greater tubercle . The smaller lesser tubercle of the humerus is found on the anterior aspect of the humerus. Both the greater and lesser tubercles serve as attachment sites for muscles that act across the shoulder joint. Passing between the greater and lesser tubercles is the narrow intertubercular groove (sulcus) , which is also known as the bicipital groove because it provides passage for a tendon of the biceps brachii muscle. The surgical neck is located at the base of the expanded, proximal end of the humerus, where it joins the narrow shaft of the humerus . The surgical neck is a common site of arm fractures. The deltoid tuberosity is a roughened, V-shaped region located on the lateral side in the middle of the humerus shaft. As its name indicates, it is the site of attachment for the deltoid muscle. Distally, the humerus becomes flattened. The prominent bony projection on the medial side is the medial epicondyle of the humerus . The much smaller lateral epicondyle of the humerus is found on the lateral side of the distal humerus. The roughened ridge of bone above the lateral epicondyle is the lateral supracondylar ridge . All of these areas are attachment points for muscles that act on the forearm, wrist, and hand. The powerful grasping muscles of the anterior forearm arise from the medial epicondyle, which is thus larger and more robust than the lateral epicondyle that gives rise to the weaker posterior forearm muscles. The distal end of the humerus has two articulation areas, which join the ulna and radius bones of the forearm to form the elbow joint . The more medial of these areas is the trochlea , a spindle- or pulley-shaped region (trochlea = “pulley”), which articulates with the ulna bone. Immediately lateral to the trochlea is the capitulum (“small head”), a knob-like structure located on the anterior surface of the distal humerus. The capitulum articulates with the radius bone of the forearm. Just above these bony areas are two small depressions. These spaces accommodate the forearm bones when the elbow is fully bent (flexed). Superior to the trochlea is the coronoid fossa , which receives the coronoid process of the ulna, and above the capitulum is the radial fossa , which receives the head of the radius when the elbow is flexed. Similarly, the posterior humerus has the olecranon fossa , a larger depression that receives the olecranon process of the ulna when the forearm is fully extended. Ulna The ulna is the medial bone of the forearm. It runs parallel to the radius, which is the lateral bone of the forearm ( Figure 8.6 ). The proximal end of the ulna resembles a crescent wrench with its large, C-shaped trochlear notch . This region articulates with the trochlea of the humerus as part of the elbow joint. The inferior margin of the trochlear notch is formed by a prominent lip of bone called the coronoid process of the ulna . Just below this on the anterior ulna is a roughened area called the ulnar tuberosity . To the lateral side and slightly inferior to the trochlear notch is a small, smooth area called the radial notch of the ulna . This area is the site of articulation between the proximal radius and the ulna, forming the proximal radioulnar joint . The posterior and superior portions of the proximal ulna make up the olecranon process , which forms the bony tip of the elbow. 
More distal is the shaft of the ulna . The lateral side of the shaft forms a ridge called the interosseous border of the ulna . This is the line of attachment for the interosseous membrane of the forearm , a sheet of dense connective tissue that unites the ulna and radius bones. The small, rounded area that forms the distal end is the head of the ulna . Projecting from the posterior side of the ulnar head is the styloid process of the ulna , a short bony projection. This serves as an attachment point for a connective tissue structure that unites the distal ends of the ulna and radius. In the anatomical position, with the elbow fully extended and the palms facing forward, the arm and forearm do not form a straight line. Instead, the forearm deviates laterally by 5–15 degrees from the line of the arm. This deviation is called the carrying angle. It allows the forearm and hand to swing freely or to carry an object without hitting the hip. The carrying angle is larger in females to accommodate their wider pelvis. Radius The radius runs parallel to the ulna, on the lateral (thumb) side of the forearm (see Figure 8.6 ). The head of the radius is a disc-shaped structure that forms the proximal end. The small depression on the surface of the head articulates with the capitulum of the humerus as part of the elbow joint, whereas the smooth, outer margin of the head articulates with the radial notch of the ulna at the proximal radioulnar joint. The neck of the radius is the narrowed region immediately below the expanded head. Inferior to this point on the medial side is the radial tuberosity , an oval-shaped, bony protuberance that serves as a muscle attachment point. The shaft of the radius is slightly curved and has a small ridge along its medial side. This ridge forms the interosseous border of the radius , which, like the similar border of the ulna, is the line of attachment for the interosseous membrane that unites the two forearm bones. The distal end of the radius has a smooth surface for articulation with two carpal bones to form the radiocarpal joint or wrist joint ( Figure 8.7 and Figure 8.8 ). On the medial side of the distal radius is the ulnar notch of the radius . This shallow depression articulates with the head of the ulna, which together form the distal radioulnar joint . The lateral end of the radius has a pointed projection called the styloid process of the radius . This provides attachment for ligaments that support the lateral side of the wrist joint. Compared to the styloid process of the ulna, the styloid process of the radius projects more distally, thereby limiting the range of movement for lateral deviations of the hand at the wrist joint. Interactive Link Watch this video to see how fractures of the distal radius bone can affect the wrist joint. Explain the problems that may occur if a fracture of the distal radius involves the joint surface of the radiocarpal joint of the wrist. Carpal Bones The wrist and base of the hand are formed by a series of eight small carpal bones (see Figure 8.7 ). The carpal bones are arranged in two rows, forming a proximal row of four carpal bones and a distal row of four carpal bones. The bones in the proximal row, running from the lateral (thumb) side to the medial side, are the scaphoid (“boat-shaped”), lunate (“moon-shaped”), triquetrum (“three-cornered”), and pisiform (“pea-shaped”) bones. The small, rounded pisiform bone articulates with the anterior surface of the triquetrum bone. 
The pisiform thus projects anteriorly, where it forms the bony bump that can be felt at the medial base of your hand. The distal bones (lateral to medial) are the trapezium (“table”), trapezoid (“resembles a table”), capitate (“head-shaped”), and hamate (“hooked bone”) bones. The hamate bone is characterized by a prominent bony extension on its anterior side called the hook of the hamate bone . A helpful mnemonic for remembering the arrangement of the carpal bones is “So Long To Pinky, Here Comes The Thumb.” This mnemonic starts on the lateral side and names the proximal bones from lateral to medial (scaphoid, lunate, triquetrum, pisiform), then makes a U-turn to name the distal bones from medial to lateral (hamate, capitate, trapezoid, trapezium). Thus, it starts and finishes on the lateral side. The carpal bones form the base of the hand. This can be seen in the radiograph (X-ray image) of the hand that shows the relationships of the hand bones to the skin creases of the hand (see Figure 8.8 ). Within the carpal bones, the four proximal bones are united to each other by ligaments to form a unit. Only three of these bones, the scaphoid, lunate, and triquetrum, contribute to the radiocarpal joint. The scaphoid and lunate bones articulate directly with the distal end of the radius, whereas the triquetrum bone articulates with a fibrocartilaginous pad that spans the radius and styloid process of the ulna. The distal end of the ulna thus does not directly articulate with any of the carpal bones. The four distal carpal bones are also held together as a group by ligaments. The proximal and distal rows of carpal bones articulate with each other to form the midcarpal joint (see Figure 8.8 ). Together, the radiocarpal and midcarpal joints are responsible for all movements of the hand at the wrist. The distal carpal bones also articulate with the metacarpal bones of the hand. In the articulated hand, the carpal bones form a U-shaped grouping. A strong ligament called the flexor retinaculum spans the top of this U-shaped area to maintain this grouping of the carpal bones. The flexor retinaculum is attached laterally to the trapezium and scaphoid bones, and medially to the hamate and pisiform bones. Together, the carpal bones and the flexor retinaculum form a passageway called the carpal tunnel , with the carpal bones forming the walls and floor, and the flexor retinaculum forming the roof of this space ( Figure 8.9 ). The tendons of nine muscles of the anterior forearm and an important nerve pass through this narrow tunnel to enter the hand. Overuse of the muscle tendons or wrist injury can produce inflammation and swelling within this space. This produces compression of the nerve, resulting in carpal tunnel syndrome, which is characterized by pain or numbness, and muscle weakness in those areas of the hand supplied by this nerve. Metacarpal Bones The palm of the hand contains five elongated metacarpal bones. These bones lie between the carpal bones of the wrist and the bones of the fingers and thumb (see Figure 8.7 ). The proximal end of each metacarpal bone articulates with one of the distal carpal bones. Each of these articulations is a carpometacarpal joint (see Figure 8.8 ). The expanded distal end of each metacarpal bone articulates at the metacarpophalangeal joint with the proximal phalanx bone of the thumb or one of the fingers. The distal end also forms the knuckles of the hand, at the base of the fingers. The metacarpal bones are numbered 1–5, beginning at the thumb. 
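Before going further with the metacarpals, it is worth pausing on the carpal mnemonic above, whose U-turn ordering is easy to get wrong. The short sketch below is an illustrative study aid, not part of the original text; the bone names and row order come directly from the preceding paragraphs.

```python
# Illustrative sketch only: the eight carpal bones as listed above, plus
# the path traced by the "So Long To Pinky, Here Comes The Thumb" mnemonic.
proximal_row = ["scaphoid", "lunate", "triquetrum", "pisiform"]  # lateral -> medial
distal_row = ["trapezium", "trapezoid", "capitate", "hamate"]    # lateral -> medial

# The mnemonic names the proximal row lateral-to-medial, then makes a
# U-turn and names the distal row medial-to-lateral, so the path starts
# and finishes on the lateral (thumb) side.
mnemonic_path = proximal_row + list(reversed(distal_row))
print(mnemonic_path)
# ['scaphoid', 'lunate', 'triquetrum', 'pisiform',
#  'hamate', 'capitate', 'trapezoid', 'trapezium']
```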
The first metacarpal bone, at the base of the thumb, is separated from the other metacarpal bones. This allows it a freedom of motion that is independent of the other metacarpal bones, which is very important for thumb mobility. The remaining metacarpal bones are united together to form the palm of the hand. The second and third metacarpal bones are firmly anchored in place and are immobile. However, the fourth and fifth metacarpal bones have limited anterior-posterior mobility, a motion that is greater for the fifth bone. This mobility is important during power gripping with the hand ( Figure 8.10 ). The anterior movement of these bones, particularly the fifth metacarpal bone, increases the strength of contact for the medial hand during gripping actions. Phalanx Bones The fingers and thumb contain 14 bones, each of which is called a phalanx bone (plural = phalanges), named after the ancient Greek phalanx (a rectangular block of soldiers). The thumb ( pollex ) is digit number 1 and has two phalanges, a proximal phalanx, and a distal phalanx bone (see Figure 8.7 ). Digits 2 (index finger) through 5 (little finger) have three phalanges each, called the proximal, middle, and distal phalanx bones. An interphalangeal joint is one of the articulations between adjacent phalanges of the digits (see Figure 8.8 ). Interactive Link Visit this site to explore the bones and joints of the hand. What are the three arches of the hand, and what is the importance of these during the gripping of an object? Disorders of the... Appendicular System: Fractures of Upper Limb Bones Due to our constant use of the hands and the rest of our upper limbs, an injury to any of these areas will cause a significant loss of functional ability. Many fractures result from a hard fall onto an outstretched hand. The resulting transmission of force up the limb may result in a fracture of the humerus, radius, or scaphoid bones. These injuries are especially common in elderly people whose bones are weakened due to osteoporosis. Falls onto the hand or elbow, or direct blows to the arm, can result in fractures of the humerus ( Figure 8.11 ). Following a fall, fractures at the surgical neck, the region at which the expanded proximal end of the humerus joins with the shaft, can result in an impacted fracture, in which the distal portion of the humerus is driven into the proximal portion. Falls or blows to the arm can also produce transverse or spiral fractures of the humeral shaft. In children, a fall onto the tip of the elbow frequently results in a distal humerus fracture. In these, the olecranon of the ulna is driven upward, resulting in a fracture across the distal humerus, above both epicondyles (supracondylar fracture), or a fracture between the epicondyles, thus separating one or both of the epicondyles from the body of the humerus (intercondylar fracture). With these injuries, the immediate concern is possible compression of the artery to the forearm due to swelling of the surrounding tissues. If compression occurs, the resulting ischemia (lack of oxygen) due to reduced blood flow can quickly produce irreparable damage to the forearm muscles. In addition, four major nerves for shoulder and upper limb muscles are closely associated with different regions of the humerus, and thus, humeral fractures may also damage these nerves. Another frequent injury following a fall onto an outstretched hand is a Colles fracture (“col-lees”) of the distal radius (see Figure 8.11 ). 
This involves a complete transverse fracture across the distal radius that drives the separated distal fragment of the radius posteriorly and superiorly. This injury results in a characteristic “dinner fork” bend of the forearm just above the wrist due to the posterior displacement of the hand. This is the most frequent forearm fracture and is a common injury in persons over the age of 50, particularly in older women with osteoporosis. It also commonly occurs following a high-speed fall onto the hand during activities such as snowboarding or skating. The most commonly fractured carpal bone is the scaphoid, often resulting from a fall onto the hand. Deep pain at the lateral wrist may yield an initial diagnosis of a wrist sprain, but a radiograph taken several weeks after the injury, after tissue swelling has subsided, will reveal the fracture. Due to the poor blood supply to the scaphoid bone, healing will be slow and there is the danger of bone necrosis and subsequent degenerative joint disease of the wrist. Interactive Link Watch this video to learn about a Colles fracture, a break of the distal radius, usually caused by falling onto an outstretched hand. When would surgery be required and how would the fracture be repaired in this case? 8.3 The Pelvic Girdle and Pelvis Learning Objectives By the end of this section, you will be able to: Define the pelvic girdle and describe the bones and ligaments of the pelvis Explain the three regions of the hip bone and identify their bony landmarks Describe the openings of the pelvis and the boundaries of the greater and lesser pelvis The pelvic girdle (hip girdle) is formed by a single bone, the hip bone or coxal bone (coxal = “hip”), which serves as the attachment point for each lower limb. Each hip bone, in turn, is firmly joined to the axial skeleton via its attachment to the sacrum of the vertebral column. The right and left hip bones also converge anteriorly to attach to each other. The bony pelvis is the entire structure formed by the two hip bones, the sacrum, and, attached inferiorly to the sacrum, the coccyx ( Figure 8.12 ). Unlike the bones of the pectoral girdle, which are highly mobile to enhance the range of upper limb movements, the bones of the pelvis are strongly united to each other to form a largely immobile, weight-bearing structure. This is important for stability because it enables the weight of the body to be easily transferred laterally from the vertebral column, through the pelvic girdle and hip joints, and into either lower limb whenever the other limb is not bearing weight. Thus, the immobility of the pelvis provides a strong foundation for the upper body as it rests on top of the mobile lower limbs. Hip Bone The hip bone, or coxal bone, forms the pelvic girdle portion of the pelvis. The paired hip bones are the large, curved bones that form the lateral and anterior aspects of the pelvis. Each adult hip bone is formed by three separate bones that fuse together during the late teenage years. These bony components are the ilium, ischium, and pubis ( Figure 8.13 ). These names are retained and used to define the three regions of the adult hip bone. The ilium is the fan-like, superior region that forms the largest part of the hip bone. It is firmly united to the sacrum at the largely immobile sacroiliac joint (see Figure 8.12 ). The ischium forms the posteroinferior region of each hip bone. It supports the body when sitting. The pubis forms the anterior portion of the hip bone. 
The pubis curves medially, where it joins to the pubis of the opposite hip bone at a specialized joint called the pubic symphysis . Ilium When you place your hands on your waist, you can feel the arching, superior margin of the ilium along your waistline (see Figure 8.13 ). This curved, superior margin of the ilium is the iliac crest . The rounded, anterior termination of the iliac crest is the anterior superior iliac spine . This important bony landmark can be felt at your anterolateral hip. Inferior to the anterior superior iliac spine is a rounded protuberance called the anterior inferior iliac spine . Both of these iliac spines serve as attachment points for muscles of the thigh. Posteriorly, the iliac crest curves downward to terminate as the posterior superior iliac spine . Muscles and ligaments surround but do not cover this bony landmark, thus sometimes producing a depression seen as a “dimple” located on the lower back. More inferiorly is the posterior inferior iliac spine . This is located at the inferior end of a large, roughened area called the auricular surface of the ilium . The auricular surface articulates with the auricular surface of the sacrum to form the sacroiliac joint. Both the posterior superior and posterior inferior iliac spines serve as attachment points for the muscles and very strong ligaments that support the sacroiliac joint. The shallow depression located on the anteromedial (internal) surface of the upper ilium is called the iliac fossa . The inferior margin of this space is formed by the arcuate line of the ilium , the ridge formed by the pronounced change in curvature between the upper and lower portions of the ilium. The large, inverted U-shaped indentation located on the posterior margin of the lower ilium is called the greater sciatic notch . Ischium The ischium forms the posterolateral portion of the hip bone (see Figure 8.13 ). The large, roughened area of the inferior ischium is the ischial tuberosity . This serves as the attachment for the posterior thigh muscles and also carries the weight of the body when sitting. You can feel the ischial tuberosity if you wiggle your pelvis against the seat of a chair. Projecting superiorly and anteriorly from the ischial tuberosity is a narrow segment of bone called the ischial ramus . The slightly curved posterior margin of the ischium above the ischial tuberosity is the lesser sciatic notch . The bony projection separating the lesser sciatic notch and greater sciatic notch is the ischial spine . Pubis The pubis forms the anterior portion of the hip bone (see Figure 8.13 ). The enlarged medial portion of the pubis is the pubic body . Located superiorly on the pubic body is a small bump called the pubic tubercle . The superior pubic ramus is the segment of bone that passes laterally from the pubic body to join the ilium. The narrow ridge running along the superior margin of the superior pubic ramus is the pectineal line of the pubis. The pubic body is joined to the pubic body of the opposite hip bone by the pubic symphysis. Extending downward and laterally from the body is the inferior pubic ramus . The pubic arch is the bony structure formed by the pubic symphysis, and the bodies and inferior pubic rami of the adjacent pubic bones. The inferior pubic ramus extends downward to join the ischial ramus. Together, these form the single ischiopubic ramus , which extends from the pubic body to the ischial tuberosity. 
The inverted V-shape formed as the ischiopubic rami from both sides come together at the pubic symphysis is called the subpubic angle . Pelvis The pelvis consists of four bones: the right and left hip bones, the sacrum, and the coccyx (see Figure 8.12 ). The pelvis has several important functions. Its primary role is to support the weight of the upper body when sitting and to transfer this weight to the lower limbs when standing. It serves as an attachment point for trunk and lower limb muscles, and also protects the internal pelvic organs. When standing in the anatomical position, the pelvis is tilted anteriorly. In this position, the anterior superior iliac spines and the pubic tubercles lie in the same vertical plane, and the anterior (internal) surface of the sacrum faces forward and downward. The three areas of each hip bone, the ilium, pubis, and ischium, converge centrally to form a deep, cup-shaped cavity called the acetabulum . This is located on the lateral side of the hip bone and is part of the hip joint. The large opening in the anteroinferior hip bone between the ischium and pubis is the obturator foramen . This space is largely filled in by a layer of connective tissue and serves for the attachment of muscles on both its internal and external surfaces. Several ligaments unite the bones of the pelvis ( Figure 8.14 ). The largely immobile sacroiliac joint is supported by a pair of strong ligaments that are attached between the sacrum and ilium portions of the hip bone. These are the anterior sacroiliac ligament on the anterior side of the joint and the posterior sacroiliac ligament on the posterior side. Also spanning the sacrum and hip bone are two additional ligaments. The sacrospinous ligament runs from the sacrum to the ischial spine, and the sacrotuberous ligament runs from the sacrum to the ischial tuberosity. These ligaments help to support and immobilize the sacrum as it carries the weight of the body. Interactive Link Watch this video for a 3-D view of the pelvis and its associated ligaments. What is the large opening in the bony pelvis, located between the ischium and pubic regions, and what two parts of the pubis contribute to the formation of this opening? The sacrospinous and sacrotuberous ligaments also help to define two openings on the posterolateral sides of the pelvis through which muscles, nerves, and blood vessels for the lower limb exit. The superior opening is the greater sciatic foramen . This large opening is formed by the greater sciatic notch of the hip bone, the sacrum, and the sacrospinous ligament. The smaller, more inferior lesser sciatic foramen is formed by the lesser sciatic notch of the hip bone, together with the sacrospinous and sacrotuberous ligaments. The space enclosed by the bony pelvis is divided into two regions ( Figure 8.15 ). The broad, superior region, defined laterally by the large, fan-like portion of the upper hip bone, is called the greater pelvis (greater pelvic cavity; false pelvis). This broad area is occupied by portions of the small and large intestines, and because it is more closely associated with the abdominal cavity, it is sometimes referred to as the false pelvis. More inferiorly, the narrow, rounded space of the lesser pelvis (lesser pelvic cavity; true pelvis) contains the bladder and other pelvic organs, and thus is also known as the true pelvis. The pelvic brim (also known as the pelvic inlet ) forms the superior margin of the lesser pelvis, separating it from the greater pelvis. 
The pelvic brim is defined by a line formed by the upper margin of the pubic symphysis anteriorly, and the pectineal line of the pubis, the arcuate line of the ilium, and the sacral promontory (the anterior margin of the superior sacrum) posteriorly. The inferior limit of the lesser pelvic cavity is called the pelvic outlet. This large opening is defined by the inferior margin of the pubic symphysis anteriorly, and the ischiopubic ramus, the ischial tuberosity, the sacrotuberous ligament, and the inferior tip of the coccyx posteriorly. Because of the anterior tilt of the pelvis, the lesser pelvis is also angled, giving it an anterosuperior (pelvic inlet) to posteroinferior (pelvic outlet) orientation. Comparison of the Female and Male Pelvis The differences between the adult female and male pelvis relate to function and body size. In general, the bones of the male pelvis are thicker and heavier, adapted for support of the male’s heavier physical build and stronger muscles. The greater sciatic notch of the male hip bone is narrower and deeper than the broader notch of females. Because the female pelvis is adapted for childbirth, it is wider than the male pelvis, as evidenced by the distance between the anterior superior iliac spines (see Figure 8.15). The ischial tuberosities of females are also farther apart, which increases the size of the pelvic outlet. Because of this increased pelvic width, the subpubic angle is larger in females (greater than 80 degrees) than it is in males (less than 70 degrees). The female sacrum is wider, shorter, and less curved, and the sacral promontory projects less into the pelvic cavity, thus giving the female pelvic inlet (pelvic brim) a more rounded or oval shape compared to males. The lesser pelvic cavity of females is also wider and more shallow than the narrower, deeper, and tapering lesser pelvis of males. Because of the obvious differences between female and male hip bones, this is the one bone of the body that allows for the most accurate sex determination. Table 8.1 provides an overview of the general differences between the female and male pelvis.

Table 8.1 Overview of Differences between the Female and Male Pelvis
Feature | Female pelvis | Male pelvis
Pelvic weight | Bones of the pelvis are lighter and thinner | Bones of the pelvis are thicker and heavier
Pelvic inlet shape | Round or oval | Heart-shaped
Lesser pelvic cavity shape | Shorter and wider | Longer and narrower
Subpubic angle | Greater than 80 degrees | Less than 70 degrees
Pelvic outlet shape | Rounded and larger | Smaller

Career Connection Forensic Pathology and Forensic Anthropology A forensic pathologist (also known as a medical examiner) is a medically trained physician who has been specifically trained in pathology to examine the bodies of the deceased to determine the cause of death. A forensic pathologist applies his or her understanding of disease as well as toxins, blood and DNA analysis, firearms and ballistics, and other factors to assess the cause and manner of death. At times, a forensic pathologist will be called to testify under oath in situations that involve a possible crime. Forensic pathology is a field that has received much media attention on television shows or following a high-profile death.
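The subpubic angle distinction in Table 8.1 is exactly the kind of skeletal measurement a forensic examiner might record. As a toy illustration only, the two thresholds from the table can be written as a simple decision rule; this is not a validated forensic method, and the function name and the handling of the 70–80 degree gap are assumptions made here for the sketch. In practice, sex estimation weighs many skeletal indicators together, as the discussion of forensic anthropology below makes clear.

```python
# Toy decision rule based solely on the subpubic angle thresholds in
# Table 8.1 (female > 80 degrees, male < 70 degrees). Illustrative only;
# real sex estimation combines many skeletal indicators.
def estimate_sex_from_subpubic_angle(angle_degrees: float) -> str:
    if angle_degrees > 80:
        return "likely female"
    if angle_degrees < 70:
        return "likely male"
    return "indeterminate from this measurement alone"

print(estimate_sex_from_subpubic_angle(85))  # likely female
print(estimate_sex_from_subpubic_angle(65))  # likely male
print(estimate_sex_from_subpubic_angle(75))  # indeterminate from this measurement alone
```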
While forensic pathologists are responsible for determining whether the cause of someone’s death was natural, a suicide, accidental, or a homicide, there are times when uncovering the cause of death is more complex, and other skills are needed. Forensic anthropology brings the tools and knowledge of physical anthropology and human osteology (the study of the skeleton) to the task of investigating a death. A forensic anthropologist assists medical and legal professionals in identifying human remains. The science behind forensic anthropology involves the study of archaeological excavation; the examination of hair; an understanding of plants, insects, and footprints; the ability to determine how much time has elapsed since the person died; the analysis of past medical history and toxicology; the ability to determine whether there are any postmortem injuries or alterations of the skeleton; and the identification of the decedent (deceased person) using skeletal and dental evidence. Due to their extensive knowledge and understanding of excavation techniques, a forensic anthropologist is an integral and invaluable team member to have on-site when investigating a crime scene, especially when the recovery of human skeletal remains is involved. When remains are brought to a forensic anthropologist for examination, he or she must first determine whether the remains are in fact human. Once the remains have been identified as belonging to a person and not to an animal, the next step is to approximate the individual’s age, sex, race, and height. The forensic anthropologist does not determine the cause of death, but rather provides information to the forensic pathologist, who will use all of the data collected to make a final determination regarding the cause of death. 8.4 Bones of the Lower Limb Learning Objectives By the end of this section, you will be able to: Identify the divisions of the lower limb and describe the bones of each region Describe the bones and bony landmarks that articulate at each joint of the lower limb Like the upper limb, the lower limb is divided into three regions. The thigh is that portion of the lower limb located between the hip joint and knee joint. The leg is specifically the region between the knee joint and the ankle joint. Distal to the ankle is the foot. The lower limb contains 30 bones. These are the femur, patella, tibia, fibula, tarsal bones, metatarsal bones, and phalanges (see Figure 8.2). The femur is the single bone of the thigh. The patella is the kneecap and articulates with the distal femur. The tibia is the larger, weight-bearing bone located on the medial side of the leg, and the fibula is the thin bone of the lateral leg. The bones of the foot are divided into three groups. The posterior portion of the foot is formed by a group of seven bones, each of which is known as a tarsal bone, whereas the mid-foot contains five elongated bones, each of which is a metatarsal bone. The toes contain 14 small bones, each of which is a phalanx bone of the foot. Femur The femur, or thigh bone, is the single bone of the thigh region (Figure 8.16). It is the longest and strongest bone of the body, and accounts for approximately one-quarter of a person’s total height. The rounded, proximal end is the head of the femur, which articulates with the acetabulum of the hip bone to form the hip joint. The fovea capitis is a minor indentation on the medial side of the femoral head that serves as the site of attachment for the ligament of the head of the femur.
This ligament spans the femur and acetabulum, but is weak and provides little support for the hip joint. It does, however, carry an important artery that supplies the head of the femur. The narrowed region below the head is the neck of the femur. This is a common area for fractures of the femur. The greater trochanter is the large, upward, bony projection located above the base of the neck. Multiple muscles that act across the hip joint attach to the greater trochanter, which, because of its projection from the femur, gives additional leverage to these muscles. The greater trochanter can be felt just under the skin on the lateral side of your upper thigh. The lesser trochanter is a small, bony prominence that lies on the medial aspect of the femur, just below the neck. A single, powerful muscle attaches to the lesser trochanter. Running between the greater and lesser trochanters on the anterior side of the femur is the roughened intertrochanteric line. The trochanters are also connected on the posterior side of the femur by the larger intertrochanteric crest. The elongated shaft of the femur has a slight anterior bowing or curvature. At its proximal end, the posterior shaft has the gluteal tuberosity, a roughened area extending inferiorly from the greater trochanter. More inferiorly, the gluteal tuberosity becomes continuous with the linea aspera (“rough line”). This is the roughened ridge that passes distally along the posterior side of the mid-femur. Multiple muscles of the hip and thigh regions make long, thin attachments to the femur along the linea aspera. The distal end of the femur has medial and lateral bony expansions. On the lateral side, the smooth portion that covers the distal and posterior aspects of the lateral expansion is the lateral condyle of the femur. The roughened area on the outer, lateral side of the condyle is the lateral epicondyle of the femur. Similarly, the smooth region of the distal and posterior medial femur is the medial condyle of the femur, and the irregular outer, medial side of this is the medial epicondyle of the femur. The lateral and medial condyles articulate with the tibia to form the knee joint. The epicondyles provide attachment for muscles and supporting ligaments of the knee. The adductor tubercle is a small bump located at the superior margin of the medial epicondyle. Posteriorly, the medial and lateral condyles are separated by a deep depression called the intercondylar fossa. Anteriorly, the smooth surfaces of the condyles join together to form a wide groove called the patellar surface, which provides for articulation with the patella bone. The combination of the medial and lateral condyles with the patellar surface gives the distal end of the femur a horseshoe (U) shape. Interactive Link Watch this video to view how a fracture of the mid-femur is surgically repaired. How are the two portions of the broken femur stabilized during surgical repair of a fractured femur? Patella The patella (kneecap) is the largest sesamoid bone of the body (see Figure 8.16). A sesamoid bone is a bone that is incorporated into the tendon of a muscle where that tendon crosses a joint. The sesamoid bone articulates with the underlying bones to prevent damage to the muscle tendon due to rubbing against the bones during movements of the joint. The patella is found in the tendon of the quadriceps femoris muscle, the large muscle of the anterior thigh that passes across the anterior knee to attach to the tibia.
The patella articulates with the patellar surface of the femur and thus prevents rubbing of the muscle tendon against the distal femur. The patella also lifts the tendon away from the knee joint, which increases the leverage power of the quadriceps femoris muscle as it acts across the knee. The patella does not articulate with the tibia. Interactive Link Visit this site to perform a virtual knee replacement surgery. The prosthetic knee components must be properly aligned to function properly. How is this alignment ensured? Homeostatic Imbalances Runner’s Knee Runner’s knee, also known as patellofemoral syndrome, is the most common overuse injury among runners. It is most frequent in adolescents and young adults, and is more common in females. It often results from excessive running, particularly downhill, but may also occur in athletes who do a lot of knee bending, such as jumpers, skiers, cyclists, weight lifters, and soccer players. It is felt as a dull, aching pain around the front of the knee and deep to the patella. The pain may be felt when walking or running, going up or down stairs, kneeling or squatting, or after sitting with the knee bent for an extended period. Patellofemoral syndrome may be initiated by a variety of causes, including individual variations in the shape and movement of the patella, a direct blow to the patella, or flat feet or improper shoes that cause excessive turning in or out of the feet or leg. These factors may cause an imbalance in the muscle pull that acts on the patella, resulting in an abnormal tracking of the patella that allows it to deviate too far toward the lateral side of the patellar surface on the distal femur. Because the hips are wider than the knee region, the femur has a diagonal orientation within the thigh, in contrast to the vertically oriented tibia of the leg (Figure 8.17). The Q-angle is a measure of how far the femur is angled laterally away from vertical. The Q-angle is normally 10–15 degrees, with females typically having a larger Q-angle due to their wider pelvis. During extension of the knee, the quadriceps femoris muscle pulls the patella both superiorly and laterally, with the lateral pull greater in women due to their larger Q-angle. This makes women more vulnerable to developing patellofemoral syndrome than men. Normally, the large lip on the lateral side of the patellar surface of the femur compensates for the lateral pull on the patella, and thus helps to maintain its proper tracking. However, if the pull produced by the medial and lateral sides of the quadriceps femoris muscle is not properly balanced, abnormal tracking of the patella toward the lateral side may occur. With continued use, this produces pain and could result in damage to the articulating surfaces of the patella and femur, and the possible future development of arthritis. Treatment generally involves stopping the activity that produces knee pain for a period of time, followed by a gradual resumption of activity. Proper strengthening of the quadriceps femoris muscle to correct for imbalances is also important to help prevent recurrence. Tibia The tibia (shin bone) is the medial bone of the leg and is larger than the fibula, with which it is paired (Figure 8.18). The tibia is the main weight-bearing bone of the lower leg and the second longest bone of the body, after the femur. The medial side of the tibia is located immediately under the skin, allowing it to be easily palpated down the entire length of the medial leg.
The proximal end of the tibia is greatly expanded. The two sides of this expansion form the medial condyle of the tibia and the lateral condyle of the tibia . The tibia does not have epicondyles. The top surface of each condyle is smooth and flattened. These areas articulate with the medial and lateral condyles of the femur to form the knee joint . Between the articulating surfaces of the tibial condyles is the intercondylar eminence , an irregular, elevated area that serves as the inferior attachment point for two supporting ligaments of the knee. The tibial tuberosity is an elevated area on the anterior side of the tibia, near its proximal end. It is the final site of attachment for the muscle tendon associated with the patella. More inferiorly, the shaft of the tibia becomes triangular in shape. The anterior apex of this triangle forms the anterior border of the tibia , which begins at the tibial tuberosity and runs inferiorly along the length of the tibia. Both the anterior border and the medial side of the triangular shaft are located immediately under the skin and can be easily palpated along the entire length of the tibia. A small ridge running down the lateral side of the tibial shaft is the interosseous border of the tibia . This is for the attachment of the interosseous membrane of the leg , the sheet of dense connective tissue that unites the tibia and fibula bones. Located on the posterior side of the tibia is the soleal line , a diagonally running, roughened ridge that begins below the base of the lateral condyle, and runs down and medially across the proximal third of the posterior tibia. Muscles of the posterior leg attach to this line. The large expansion found on the medial side of the distal tibia is the medial malleolus (“little hammer”). This forms the large bony bump found on the medial side of the ankle region. Both the smooth surface on the inside of the medial malleolus and the smooth area at the distal end of the tibia articulate with the talus bone of the foot as part of the ankle joint. On the lateral side of the distal tibia is a wide groove called the fibular notch . This area articulates with the distal end of the fibula, forming the distal tibiofibular joint . Fibula The fibula is the slender bone located on the lateral side of the leg (see Figure 8.18 ). The fibula does not bear weight. It serves primarily for muscle attachments and thus is largely surrounded by muscles. Only the proximal and distal ends of the fibula can be palpated. The head of the fibula is the small, knob-like, proximal end of the fibula. It articulates with the inferior aspect of the lateral tibial condyle, forming the proximal tibiofibular joint . The thin shaft of the fibula has the interosseous border of the fibula , a narrow ridge running down its medial side for the attachment of the interosseous membrane that spans the fibula and tibia. The distal end of the fibula forms the lateral malleolus , which forms the easily palpated bony bump on the lateral side of the ankle. The deep (medial) side of the lateral malleolus articulates with the talus bone of the foot as part of the ankle joint. The distal fibula also articulates with the fibular notch of the tibia. Tarsal Bones The posterior half of the foot is formed by seven tarsal bones ( Figure 8.19 ). The most superior bone is the talus . This has a relatively square-shaped, upper surface that articulates with the tibia and fibula to form the ankle joint . 
Three areas of articulation form the ankle joint: The superomedial surface of the talus bone articulates with the medial malleolus of the tibia, the top of the talus articulates with the distal end of the tibia, and the lateral side of the talus articulates with the lateral malleolus of the fibula. Inferiorly, the talus articulates with the calcaneus (heel bone), the largest bone of the foot, which forms the heel. Body weight is transferred from the tibia to the talus to the calcaneus, which rests on the ground. The medial calcaneus has a prominent bony extension called the sustentaculum tali ("support for the talus") that supports the medial side of the talus bone. The cuboid bone articulates with the anterior end of the calcaneus bone. The cuboid has a deep groove running across its inferior surface, which provides passage for a muscle tendon. The talus bone articulates anteriorly with the navicular bone, which in turn articulates anteriorly with the three cuneiform ("wedge-shaped") bones. These bones are the medial cuneiform , the intermediate cuneiform , and the lateral cuneiform . Each of these bones has a broad superior surface and a narrow inferior surface, which together produce the transverse (medial-lateral) curvature of the foot. The navicular and lateral cuneiform bones also articulate with the medial side of the cuboid bone. Interactive Link Use this tutorial to review the bones of the foot. Which tarsal bones are in the proximal, intermediate, and distal groups? Metatarsal Bones The anterior half of the foot is formed by the five metatarsal bones, which are located between the tarsal bones of the posterior foot and the phalanges of the toes (see Figure 8.19 ). These elongated bones are numbered 1–5, starting with the medial side of the foot. The first metatarsal bone is shorter and thicker than the others. The second metatarsal is the longest. The base of each metatarsal bone is its proximal end. These bases articulate with the cuboid or cuneiform bones. The base of the fifth metatarsal has a large, lateral expansion that provides for muscle attachments. This expanded base of the fifth metatarsal can be felt as a bony bump at the midpoint along the lateral border of the foot. The expanded distal end of each metatarsal is the head of the metatarsal bone . Each metatarsal bone articulates with the proximal phalanx of a toe to form a metatarsophalangeal joint . The heads of the metatarsal bones also rest on the ground and form the ball (anterior end) of the foot. Phalanges The toes contain a total of 14 phalanx bones (phalanges), arranged in a manner similar to the phalanges of the fingers (see Figure 8.19 ). The toes are numbered 1–5, starting with the big toe ( hallux ). The big toe has two phalanx bones, the proximal and distal phalanges. The remaining toes all have proximal, middle, and distal phalanges. A joint between adjacent phalanx bones is called an interphalangeal joint. Interactive Link View this link to learn about a bunion, a localized swelling on the medial side of the foot, next to the first metatarsophalangeal joint, at the base of the big toe. What is a bunion and what type of shoe is most likely to cause this to develop? Arches of the Foot When the foot comes into contact with the ground during walking, running, or jumping activities, the impact of the body weight puts a tremendous amount of pressure and force on the foot. During running, the force applied to each foot as it contacts the ground can be up to 2.5 times your body weight.
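To make that last figure concrete, here is a quick worked calculation (the 80-kg body mass is an illustrative assumption, not a value from the text):

\[
F_{\text{peak}} \approx 2.5\, m g = 2.5 \times 80\ \text{kg} \times 9.8\ \text{m/s}^2 \approx 1960\ \text{N}
\]

In other words, each footstrike of a runner of this size can briefly load the foot with nearly 2,000 newtons of force, which is why the shock-absorbing structures described next are so important.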
The bones, joints, ligaments, and muscles of the foot absorb this force, thus greatly reducing the amount of shock that is passed superiorly into the lower limb and body. The arches of the foot play an important role in this shock-absorbing ability. When weight is applied to the foot, these arches will flatten somewhat, thus absorbing energy. When the weight is removed, the arch rebounds, giving "spring" to the step. The arches also serve to distribute body weight side to side and to either end of the foot. The foot has a transverse arch, a medial longitudinal arch, and a lateral longitudinal arch (see Figure 8.19 ). The transverse arch forms the medial-lateral curvature of the mid-foot. It is formed by the wedge shapes of the cuneiform bones and bases (proximal ends) of the first to fourth metatarsal bones. This arch helps to distribute body weight from side to side within the foot, thus allowing the foot to accommodate uneven terrain. The longitudinal arches run down the length of the foot. The lateral longitudinal arch is relatively flat, whereas the medial longitudinal arch is larger (taller). The longitudinal arches are formed by the tarsal bones posteriorly and the metatarsal bones anteriorly. These arches are supported at either end, where they contact the ground. Posteriorly, this support is provided by the calcaneus bone and anteriorly by the heads (distal ends) of the metatarsal bones. The talus bone, which receives the weight of the body, is located at the top of the longitudinal arches. Body weight is then conveyed from the talus to the ground by the anterior and posterior ends of these arches. Strong ligaments unite the adjacent foot bones to prevent disruption of the arches during weight bearing. On the bottom of the foot, additional ligaments tie together the anterior and posterior ends of the arches. These ligaments have elasticity, which allows them to stretch somewhat during weight bearing, thus allowing the longitudinal arches to spread. The stretching of these ligaments stores energy within the foot, rather than passing these forces into the leg. Contraction of the foot muscles also plays an important role in this energy absorption. When the weight is removed, the elastic ligaments recoil and pull the ends of the arches closer together. This recovery of the arches releases the stored energy and improves the energy efficiency of walking. Stretching of the ligaments that support the longitudinal arches can lead to pain. This can occur in overweight individuals, in people who have jobs that involve standing for long periods of time (such as a waitress), or in those who walk or run long distances. If stretching of the ligaments is prolonged, excessive, or repeated, it can result in a gradual lengthening of the supporting ligaments, with subsequent depression or collapse of the longitudinal arches, particularly on the medial side of the foot. This condition is called pes planus ("flat foot" or "fallen arches"). 8.5 Development of the Appendicular Skeleton Learning Objectives By the end of this section, you will be able to: Describe the growth and development of the embryonic limb buds Discuss the appearance of primary and secondary ossification centers Embryologically, the appendicular skeleton arises from mesenchyme, a type of embryonic tissue that can differentiate into many types of tissues, including bone or muscle tissue. Mesenchyme gives rise to the bones of the upper and lower limbs, as well as to the pectoral and pelvic girdles.
Development of the limbs begins near the end of the fourth embryonic week, with the upper limbs appearing first. Thereafter, the development of the upper and lower limbs follows similar patterns, with the lower limbs lagging behind the upper limbs by a few days. Limb Growth Each upper and lower limb initially develops as a small bulge called a limb bud , which appears on the lateral side of the early embryo. The upper limb bud appears near the end of the fourth week of development, with the lower limb bud appearing shortly after ( Figure 8.20 ). Initially, the limb buds consist of a core of mesenchyme covered by a layer of ectoderm. The ectoderm at the end of the limb bud thickens to form a narrow crest called the apical ectodermal ridge . This ridge stimulates the underlying mesenchyme to rapidly proliferate, producing the outgrowth of the developing limb. As the limb bud elongates, cells located farther from the apical ectodermal ridge slow their rates of cell division and begin to differentiate. In this way, the limb develops along a proximal-to-distal axis. During the sixth week of development, the distal ends of the upper and lower limb buds expand and flatten into a paddle shape. This region will become the hand or foot. The wrist or ankle areas then appear as a constriction that develops at the base of the paddle. Shortly after this, a second constriction on the limb bud appears at the future site of the elbow or knee. Within the paddle, areas of tissue undergo cell death, producing separations between the growing fingers and toes. Also during the sixth week of development, mesenchyme within the limb buds begins to differentiate into hyaline cartilage that will form models of the future limb bones. The early outgrowth of the upper and lower limb buds initially has the limbs positioned so that the regions that will become the palm of the hand or the bottom of the foot are facing medially toward the body, with the future thumb or big toe both oriented toward the head. During the seventh week of development, the upper limb rotates laterally by 90 degrees, so that the palm of the hand faces anteriorly and the thumb points laterally. In contrast, the lower limb undergoes a 90-degree medial rotation, thus bringing the big toe to the medial side of the foot. Interactive Link Watch this animation to follow the development and growth of the upper and lower limb buds. On what days of embryonic development do these events occur: (a) first appearance of the upper limb bud (limb ridge); (b) the flattening of the distal limb to form the handplate or footplate; and (c) the beginning of limb rotation? Ossification of Appendicular Bones All of the girdle and limb bones, except for the clavicle, develop by the process of endochondral ossification. This process begins as the mesenchyme within the limb bud differentiates into hyaline cartilage to form cartilage models for future bones. By the twelfth week, a primary ossification center will have appeared in the diaphysis (shaft) region of the long bones, initiating the process that converts the cartilage model into bone. A secondary ossification center will appear in each epiphysis (expanded end) of these bones at a later time, usually after birth. The primary and secondary ossification centers are separated by the epiphyseal plate, a layer of growing hyaline cartilage. This plate is located between the diaphysis and each epiphysis. It continues to grow and is responsible for the lengthening of the bone. 
The epiphyseal plate is retained for many years, until the bone reaches its final, adult size, at which time the epiphyseal plate disappears and the epiphysis fuses to the diaphysis. (Seek additional content on ossification in the chapter on bone tissue.) Small bones, such as the phalanges, will develop only one secondary ossification center and will thus have only a single epiphyseal plate. Large bones, such as the femur, will develop several secondary ossification centers, with an epiphyseal plate associated with each secondary center. Thus, ossification of the femur begins at the end of the seventh week with the appearance of the primary ossification center in the diaphysis, which rapidly expands to ossify the shaft of the bone prior to birth. Secondary ossification centers develop at later times. Ossification of the distal end of the femur, to form the condyles and epicondyles, begins shortly before birth. Secondary ossification centers also appear in the femoral head late in the first year after birth, in the greater trochanter during the fourth year, and in the lesser trochanter between the ages of 9 and 10 years. Once these areas have ossified, their fusion to the diaphysis and the disappearance of each epiphyseal plate follow the reverse sequence. Thus, the lesser trochanter is the first to fuse, doing so at the onset of puberty (around 11 years of age), followed by the greater trochanter approximately 1 year later. The femoral head fuses between the ages of 14 and 17 years, whereas the distal condyles of the femur are the last to fuse, between the ages of 16 and 19 years. Knowledge of the age at which different epiphyseal plates disappear is important when interpreting radiographs taken of children. Since the cartilage of an epiphyseal plate is less dense than bone, the plate will appear dark in a radiograph image. Thus, a normal epiphyseal plate may be mistaken for a bone fracture. The clavicle is the one appendicular skeleton bone that does not develop via endochondral ossification. Instead, the clavicle develops through the process of intramembranous ossification. During this process, mesenchymal cells differentiate directly into bone-producing cells, which produce the clavicle directly, without first making a cartilage model. Because of this early production of bone, the clavicle is the first bone of the body to begin ossification, with ossification centers appearing during the fifth week of development. However, ossification of the clavicle is not complete until age 25. Disorders of the... Appendicular System: Congenital Clubfoot Clubfoot, also known as talipes, is a congenital (present at birth) disorder of unknown cause and is the most common deformity of the lower limb. It affects the foot and ankle, causing the foot to be twisted inward at a sharp angle, like the head of a golf club ( Figure 8.21 ). Clubfoot has a frequency of about 1 out of every 1,000 births, and is twice as likely to occur in a male child as in a female child. In 50 percent of cases, both feet are affected. At birth, children with a clubfoot have the heel turned inward and the anterior foot twisted so that the lateral side of the foot is facing inferiorly, commonly due to ligaments or leg muscles attached to the foot that are shortened or abnormally tight. These pull the foot into an abnormal position, resulting in bone deformities. Other symptoms may include bending of the ankle that lifts the heel of the foot and an extremely high foot arch.
Due to the limited range of motion in the affected foot, it is difficult to place the foot into the correct position. Additionally, the affected foot may be shorter than normal, and the calf muscles are usually underdeveloped on the affected side. Despite the appearance, this is not a painful condition for newborns. However, it must be treated early to avoid future pain and impaired walking ability. Although the cause of clubfoot is idiopathic (unknown), evidence indicates that fetal position within the uterus is not a contributing factor. Genetic factors are involved, because clubfoot tends to run within families. Cigarette smoking during pregnancy has been linked to the development of clubfoot, particularly in families with a history of clubfoot. Previously, clubfoot required extensive surgery. Today, 90 percent of cases are successfully treated without surgery using new corrective casting techniques. The best chance for a full recovery requires that clubfoot treatment begin during the first 2 weeks after birth. Corrective casting gently stretches the foot, which is followed by the application of a holding cast to keep the foot in the proper position. This stretching and casting is repeated weekly for several weeks. In severe cases, surgery may also be required, after which the foot typically remains in a cast for 6 to 8 weeks. After the cast is removed following either surgical or nonsurgical treatment, the child will be required to wear a brace part-time (at night) for up to 4 years. In addition, special exercises will be prescribed, and the child must also wear special shoes. Close monitoring by the parents and adherence to postoperative instructions are imperative in minimizing the risk of relapse. Despite these difficulties, treatment for clubfoot is usually successful, and the child will grow up to lead a normal, active life. Numerous individuals born with a clubfoot have gone on to successful careers, including Dudley Moore (comedian and actor), Damon Wayans (comedian and actor), Troy Aikman (three-time Super Bowl-winning quarterback), Kristi Yamaguchi (Olympic gold medalist in figure skating), Mia Hamm (two-time Olympic gold medalist in soccer), and Charles Woodson (Heisman Trophy and Super Bowl winner).
american_government
Summary 16.1 What Is Public Policy? Public policy is the broad strategy government uses to do its job, the relatively stable set of purposive governmental behaviors that address matters of concern to some part of society. Most policy outcomes are the result of considerable debate, compromise, and refinement that happen over years and are finalized only after input from multiple institutions within government. Health care reform, for instance, was developed after years of analysis, reflection on existing policy, and even trial implementation at the state level. People evaluate public policies based on their outcomes, that is, who benefits and who loses. Even the best-intended policies can have unintended consequences and may even ultimately harm someone, if only those who must pay for the policy through higher taxes. 16.2 Categorizing Public Policy Goods are the commodities, services, and systems that satisfy people's wants or needs. Private goods can be owned by a particular person or group, and are excluded from use by others, typically by means of a price. Free-market economists believe that the government has no role in regulating the exchange of private goods because the market will regulate itself. Public goods, on the other hand, are goods like air, water, wildlife, and forests that no one owns, so no one has responsibility for them. Most people agree the government has some role to play in regulating public goods. We categorize policy based upon the degree to which costs and benefits are concentrated on the few or diffused across the many. Distributive policy collects from the many and benefits the few, whereas regulatory policy focuses costs on one group while benefitting larger society. Redistributive policy shares the wealth and income of some groups with others. 16.3 Policy Arenas The three major domestic policy areas are social welfare; science, technology, and education; and business stimulus and regulation. Social welfare programs like Social Security, Medicaid, and Medicare form a safety net for vulnerable populations. Science, technology, and education policies have the goal of securing the United States' competitive advantages. Business stimulus and regulation policies have to balance businesses' need for an economic edge with consumers' need for protection from unfair or unsafe practices. The United States spends billions of dollars on these programs. 16.4 Policymakers The two groups most engaged in making policy are policy advocates and policy analysts. Policy advocates are people who feel strongly enough about something to work toward changing public policy to fix it. Policy analysts, on the other hand, aim for impartiality. Their role is to assess potential policies and predict their outcomes. Although they are in theory unbiased, their findings often reflect specific political leanings. The public policy process has four major phases: identifying the problem, setting the agenda, implementing the policy, and evaluating the results. The process is a cycle, because the evaluation stage should feed back into the earlier stages, informing future decisions about the policy. 16.5 Budgeting and Tax Policy Until the Great Depression of the 1930s, the U.S. government took a laissez-faire or hands-off approach to economic policy, assuming that if left to itself, the economy would go through cycles of boom and bust, but would remain healthy overall.
Keynesian economic policies, with their emphasis on government spending to increase consumer consumption, helped raise the country out of the Depression. The goal of federal fiscal policy is to have a balanced budget, in which expenditures and revenues match up. More frequently, the budget has a deficit, a gap between expenditures and revenues. It is very difficult to reduce the budget, which consists of mandatory and discretionary spending, and no one really wants to raise revenue by raising taxes. One way monetary policies can change the economy is through the level of interest rates. The Federal Reserve Board sets these rates, thus guiding monetary policy in the United States.
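As a minimal worked illustration of how the deficit relates to the debt (using the approximate figures cited later in this chapter's review material; exact values vary by source):

\[
\text{deficit} = \text{expenditures} - \text{revenues}, \qquad \text{debt}_{\text{new}} = \text{debt}_{\text{old}} + \text{deficit}
\]

For example, a deficit of roughly \$0.4 trillion in 2016, added to the almost \$19 trillion of debt outstanding at the end of 2015, yields total debt of about \$19.4 trillion. The deficit is an annual shortfall; the debt is the accumulated stock of past deficits.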
Chapter Outline 16.1 What Is Public Policy? 16.2 Categorizing Public Policy 16.3 Policy Arenas 16.4 Policymakers 16.5 Budgeting and Tax Policy Introduction On March 25, 2010, both chambers of Congress passed the Health Care and Education Reconciliation Act (HCERA). 1 The story of the HCERA, which expanded and improved some provisions of the Patient Protection and Affordable Care Act (ACA), also known as Obamacare, is a complicated tale of insider politics in which the Democratic Party was able to enact sweeping health care and higher education reforms over fierce Republican opposition ( Figure 16.1 ). Some people laud the HCERA as an example of getting things done in the face of partisan gridlock in Congress; others see it as a case of government power run amok. Regardless of your view, the HCERA vividly demonstrates public policymaking in action. Each of the individual actors and institutions in the U.S. political system, such as the president, Congress, the courts, interest groups, and the media, gives us an idea of the component parts of the system and their functions. But in the study of public policy, we look at the larger picture and see all the parts working together to make laws, like the HCERA, that ultimately affect citizens and their communities. What is public policy? How do different areas of policy differ, and what roles do policy analysts and advocates play? What programs does the national government currently provide? And how do budgetary policy and politics operate? This chapter answers these questions and more.
[ { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": "2", "hl_context": "<hl> During the George W . Bush administration , Social Security became a highly politicized topic as the Republican Party sought to find a way of preventing what experts predicted would be the impending collapse of the Social Security system ( Figure 16.9 ) . <hl> In 1950 , the ratio of workers paying into the program to beneficiaries receiving payments was 16.5 to 1 . By 2013 , that number was 2.8 to 1 and falling . Most predictions in fact suggest that , due to continuing demographic changes including slower population growth and an aging population , by 2033 , the amount of revenue generated from payroll taxes will no longer be sufficient to cover costs . <hl> The Bush administration proposed avoiding this by privatizing the program , in effect , taking it out of the government ’ s hands and making individuals ’ benefits variable instead of defined . <hl> The effort ultimately failed , and Social Security ’ s long-term viability continues to remain uncertain . Numerous other plans for saving the program have been proposed , including raising the retirement age , increasing payroll taxes ( especially on the wealthy ) by removing the $ 118,500 income cap , and reducing payouts for wealthier retirees . None of these proposals have been able to gain traction , however . On March 25 , 2010 , both chambers of Congress passed the Health Care and Education Reconciliation Act ( HCERA ) . <hl> 1 The story of the HCERA , which expanded and improved some provisions of the Patient Protection and Affordable Care Act ( ACA ) , also known as Obamacare , is a complicated tale of insider politics in which the Democratic Party was able to enact sweeping health care and higher education reforms over fierce Republican opposition ( Figure 16.1 ) . <hl> Some people laud the HCERA as an example of getting things done in the face of partisan gridlock in Congress ; others see it a case of government power run amok . <hl> Regardless of your view , the HCERA vividly demonstrates public policymaking in action . <hl>", "hl_sentences": "During the George W . Bush administration , Social Security became a highly politicized topic as the Republican Party sought to find a way of preventing what experts predicted would be the impending collapse of the Social Security system ( Figure 16.9 ) . The Bush administration proposed avoiding this by privatizing the program , in effect , taking it out of the government ’ s hands and making individuals ’ benefits variable instead of defined . 1 The story of the HCERA , which expanded and improved some provisions of the Patient Protection and Affordable Care Act ( ACA ) , also known as Obamacare , is a complicated tale of insider politics in which the Democratic Party was able to enact sweeping health care and higher education reforms over fierce Republican opposition ( Figure 16.1 ) . Regardless of your view , the HCERA vividly demonstrates public policymaking in action .", "question": { "cloze_format": "___ is not an example of a public policy outcome.", "normal_format": "Which of the following is not an example of a public policy outcome?", "question_choices": [ "the creation of a program to combat drug trafficking", "the passage of the Affordable Care Act (Obamacare)", "the passage of tax cuts during the George W. 
Bush administration", "none of the above; all are public policy outcomes" ], "question_id": "fs-id1171473053653", "question_text": "Which of the following is not an example of a public policy outcome?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "requires multiple actors and branches to carry out" }, "bloom": null, "hl_context": "One approach to thinking about public policy is to see it as the broad strategy government uses to do its job . More formally , it is the relatively stable set of purposive governmental actions that address matters of concern to some part of society . 2 This description is useful in that it helps to explain both what public policy is and what it isn ’ t . <hl> First , public policy is a guide to legislative action that is more or less fixed for long periods of time , not just short-term fixes or single legislative acts . <hl> Policy also doesn ’ t happen by accident , and it is rarely formed simply as the result of the campaign promises of a single elected official , even the president . <hl> While elected officials are often important in shaping policy , most policy outcomes are the result of considerable debate , compromise , and refinement that happen over years and are finalized only after input from multiple institutions within government as well as from interest groups and the public . <hl>", "hl_sentences": "First , public policy is a guide to legislative action that is more or less fixed for long periods of time , not just short-term fixes or single legislative acts . While elected officials are often important in shaping policy , most policy outcomes are the result of considerable debate , compromise , and refinement that happen over years and are finalized only after input from multiple institutions within government as well as from interest groups and the public .", "question": { "cloze_format": "Public policy ________.", "normal_format": "Which of the following is correct about public policy?", "question_choices": [ "is more of a theory than a reality", "is typically made by one branch of government acting alone", "requires multiple actors and branches to carry out", "focuses on only a few special individuals" ], "question_id": "fs-id1171471133738", "question_text": "Public policy ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "they require the payment of a fee up front" }, "bloom": null, "hl_context": "Economists consider goods like cable TV , cellphone service , and private schools to be toll goods . <hl> Toll goods are similar to public goods in that they are open to all and theoretically infinite if maintained , but they are paid for or provided by some outside ( nongovernment ) entity . <hl> Many people can make use of them , but only if they can pay the price . The name “ toll goods ” comes from the fact that , early on , many toll roads were in fact privately owned commodities . Even today , states from Virginia to California have allowed private companies to build public roads in exchange for the right to profit by charging tolls . 
8", "hl_sentences": "Toll goods are similar to public goods in that they are open to all and theoretically infinite if maintained , but they are paid for or provided by some outside ( nongovernment ) entity .", "question": { "cloze_format": "Toll goods differ from public goods in that ________.", "normal_format": "What do toll goods differ from public goods in?", "question_choices": [ "they provide special access to some and not all", "they require the payment of a fee up front", "they provide a service for only the wealthy", "they are free and available to all" ], "question_id": "fs-id1171471304308", "question_text": "Toll goods differ from public goods in that ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "While distributive policy , according to Lowi , has diffuse costs and concentrated benefits , regulatory policy features the opposite arrangement , with concentrated costs and diffuse benefits . <hl> A relatively small number of groups or individuals bear the costs of regulatory policy , but its benefits are expected to be distributed broadly across society . <hl> As you might imagine , regulatory policy is most effective for controlling or protecting public or common resources . Among the best-known examples are policies designed to protect public health and safety , and the environment . These regulatory policies prevent manufacturers or businesses from maximizing their profits by excessively polluting the air or water , selling products they know to be harmful , or compromising the health of their employees during production .", "hl_sentences": "A relatively small number of groups or individuals bear the costs of regulatory policy , but its benefits are expected to be distributed broadly across society .", "question": { "cloze_format": "The type of policy that directly benefits the most citizens is the ___.", "normal_format": "Which type of policy directly benefits the most citizens?", "question_choices": [ "regulatory policy", "distributive policy", "redistributive policy", "self-regulatory policy" ], "question_id": "fs-id1171471111575", "question_text": "Which type of policy directly benefits the most citizens?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> While Social Security was designed to provide cash payments to sustain the aged and disabled , Medicare and Medicaid were intended to ensure that vulnerable populations have access to health care . <hl> Medicare , like Social Security , is an entitlement program funded through payroll taxes . Its purpose is to make sure that senior citizens and retirees have access to low-cost health care they might not otherwise have , because most U . S . citizens get their health insurance through their employers . Medicare provides three major forms of coverage : a guaranteed insurance benefit that helps cover major hospitalization , fee-based supplemental coverage that retirees can use to lower costs for doctor visits and other health expenses , and a prescription drug benefit . Medicare faces many of the same long-term challenges as Social Security , due to the same demographic shifts . Medicare also faces the problem that health care costs are rising significantly faster than inflation . In 2014 , Medicare cost the federal government almost $ 597 billion . 
16", "hl_sentences": "While Social Security was designed to provide cash payments to sustain the aged and disabled , Medicare and Medicaid were intended to ensure that vulnerable populations have access to health care .", "question": { "cloze_format": "The group that Social Security and Medicare are notable for their assistance to are ___.", "normal_format": "Social Security and Medicare are notable for their assistance to which group?", "question_choices": [ "the poor", "young families starting out", "those in urban areas", "the elderly" ], "question_id": "fs-id1171474338967", "question_text": "Social Security and Medicare are notable for their assistance to which group?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "agenda setting" }, "bloom": null, "hl_context": "The policy process contains four sequential stages : ( 1 ) agenda setting , ( 2 ) policy enactment , ( 3 ) policy implementation , and ( 4 ) evaluation . Given the sheer number of issues already processed by the government , called the continuing agenda , and the large number of new proposals being pushed at any one time , it is typically quite difficult to move a new policy all the way through the process . <hl> Agenda setting is the crucial first stage of the public policy process . <hl> <hl> Agenda setting has two subphases : problem identification and alternative specification . <hl> <hl> Problem identification identifies the issues that merit discussion . <hl> Not all issues make it onto the governmental agenda because there is only so much attention that government can pay . Thus , one of the more important tasks for a policy advocate is to frame his or her issue in a compelling way that raises a persuasive dimension or critical need . 19 For example , health care reform has been attempted on many occasions over the years . One key to making the topic salient has been to frame it in terms of health care access , highlighting the percentage of people who do not have health insurance .", "hl_sentences": "Agenda setting is the crucial first stage of the public policy process . Agenda setting has two subphases : problem identification and alternative specification . Problem identification identifies the issues that merit discussion .", "question": { "cloze_format": "The stage of the public policy process that includes identification of problems in need of fixing is ___.", "normal_format": "Which stage of the public policy process includes identification of problems in need of fixing?", "question_choices": [ "agenda setting", "enactment", "implementation", "evaluation" ], "question_id": "fs-id1171471148227", "question_text": "Which stage of the public policy process includes identification of problems in need of fixing?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "A second approach to creating public policy is a bit more objective . <hl> Rather than starting with what ought to happen and seeking ways to make it so , policy analysts try to identify all the possible choices available to a decision maker and then gauge their impacts if implemented . <hl> <hl> The goal of the analyst isn ’ t really to encourage the implementation of any of the options ; rather , it is to make sure decision makers are fully informed about the implications of the decisions they do make . 
<hl>", "hl_sentences": "Rather than starting with what ought to happen and seeking ways to make it so , policy analysts try to identify all the possible choices available to a decision maker and then gauge their impacts if implemented . The goal of the analyst isn ’ t really to encourage the implementation of any of the options ; rather , it is to make sure decision makers are fully informed about the implications of the decisions they do make .", "question": { "cloze_format": "Policy analysts seek ________.", "normal_format": "What do policy analysts seek?", "question_choices": [ "evidence", "their chosen outputs", "influence", "money" ], "question_id": "fs-id1171471009039", "question_text": "Policy analysts seek ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "In theory , the amount of revenue raised by the national government should be equal to these expenses , but with the exception of a brief period from 1998 to 2000 , that has not been the case . <hl> The economic recovery from the 2007 – 2009 recession , and budget control efforts implemented since then , have managed to cut the annual deficit — the amount by which expenditures are greater than revenues — by more than half . <hl> However , the amount of money the U . S . government needed to borrow to pay its bills in 2016 was still in excess of $ 400 billion 25 . This was in addition to the country ’ s almost $ 19 trillion of total debt — the amount of money the government owes its creditors — at the end of 2015 , according to the Department of the Treasury .", "hl_sentences": "The economic recovery from the 2007 – 2009 recession , and budget control efforts implemented since then , have managed to cut the annual deficit — the amount by which expenditures are greater than revenues — by more than half .", "question": { "cloze_format": "A deficit is ________.", "normal_format": "What is a deficit?", "question_choices": [ "the overall amount owed by government for past borrowing", "the annual budget shortfall between revenues and expenditures", "the cancellation of an entitlement program", "all the above" ], "question_id": "fs-id1171470963267", "question_text": "A deficit is ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "formula-based spending that goes to individual citizens" }, "bloom": null, "hl_context": "<hl> Congress is ultimately responsible for setting the formulas for mandatory payouts , but as we saw in the earlier discussion regarding Social Security , major reforms to entitlement formulas are difficult to enact . <hl> As a result , the size and growth of mandatory spending in future budgets are largely a function of previous legislation that set the formulas up in the first place . So long as supporters of particular programs can block changes to the formulas , funding will continue almost on autopilot . Keynesians support this mandatory spending , along with other elements of social welfare policy , because they help maintain a minimal level of consumption that should , in theory , prevent recessions from turning into depressions , which are more severe downturns . <hl> The overwhelming portion of mandatory spending is earmarked for entitlement programs guaranteed to those who meet certain qualifications , usually based on age , income , or disability . 
<hl> These programs , discussed above , include Medicare and Medicaid , Social Security , and major income security programs such as unemployment insurance and SNAP . The costs of programs tied to age are relatively easy to estimate and grow largely as a function of the aging of the population . Income and disability payments are a bit more difficult to estimate . They tend to go down during periods of economic recovery and rise when the economy begins to slow down , in precisely the way Keynes suggested . A comparatively small piece of the mandatory spending pie , about 10 percent , is devoted to benefits designated for former federal employees , including military retirement and many Veterans Administration programs . Social Security addresses these concerns with three important tools . First and best known is the retirement benefit . After completing a minimum number of years of work , American workers may claim a form of pension upon reaching retirement age . <hl> It is often called an entitlement program since it guarantees benefits to a particular group , and virtually everyone will eventually qualify for the plan given the relatively low requirements for enrollment . <hl> The amount of money a worker receives is based loosely on his or her lifetime earnings . Full retirement age was originally set at sixty-five , although changes in legislation have increased it to sixty-seven for workers born after 1959 . 15 A valuable added benefit is that , under certain circumstances , this income may also be claimed by the survivors of qualifying workers , such as spouses and minor children , even if they themselves did not have a wage income .", "hl_sentences": "Congress is ultimately responsible for setting the formulas for mandatory payouts , but as we saw in the earlier discussion regarding Social Security , major reforms to entitlement formulas are difficult to enact . The overwhelming portion of mandatory spending is earmarked for entitlement programs guaranteed to those who meet certain qualifications , usually based on age , income , or disability . It is often called an entitlement program since it guarantees benefits to a particular group , and virtually everyone will eventually qualify for the plan given the relatively low requirements for enrollment .", "question": { "cloze_format": "Entitlement (or mandatory) spending is ________.", "normal_format": "What is entitlement (or mandatory) spending?", "question_choices": [ "formula-based spending that goes to individual citizens", "a program of contracts to aerospace companies", "focused on children", "concentrated on education" ], "question_id": "fs-id1171470991841", "question_text": "Entitlement (or mandatory) spending is ________." }, "references_are_paraphrase": null } ]
16
16.1 What Is Public Policy? Learning Objectives By the end of this section, you will be able to: Explain the concept of public policy Discuss examples of public policy in action It is easy to imagine that when designers engineer a product, like a car, they do so with the intent of satisfying the consumer. But the design of any complicated product must take into account the needs of regulators, transporters, assembly line workers, parts suppliers, and myriad other participants in the manufacture and shipment process. And manufacturers must also be aware that consumer tastes are fickle: A gas-guzzling sports car may appeal to an unmarried twenty-something with no children; but what happens to product satisfaction when gas prices fluctuate, or the individual gets married and has children? In many ways, the process of designing domestic policy isn’t that much different. The government, just like auto companies, needs to ensure that its citizen-consumers have access to an array of goods and services. And just as in auto companies, a wide range of actors is engaged in figuring out how to do it. Sometimes, this process effectively provides policies that benefit citizens. But just as often, the process of policymaking is muddied by the demands of competing interests with different opinions about society’s needs or the role that government should play in meeting them. To understand why, we begin by thinking about what we mean by the term “public policy.” PUBLIC POLICY DEFINED One approach to thinking about public policy is to see it as the broad strategy government uses to do its job. More formally, it is the relatively stable set of purposive governmental actions that address matters of concern to some part of society. 2 This description is useful in that it helps to explain both what public policy is and what it isn’t. First, public policy is a guide to legislative action that is more or less fixed for long periods of time, not just short-term fixes or single legislative acts. Policy also doesn’t happen by accident, and it is rarely formed simply as the result of the campaign promises of a single elected official, even the president. While elected officials are often important in shaping policy, most policy outcomes are the result of considerable debate, compromise, and refinement that happen over years and are finalized only after input from multiple institutions within government as well as from interest groups and the public. Consider the example of health care expansion. A follower of politics in the news media may come away thinking the reforms implemented in 2010 were as sudden as they were sweeping, having been developed in the final weeks before they were enacted. The reality is that expanding health care access had actually been a priority of the Democratic Party for several decades. What may have seemed like a policy developed over a period of months was in fact formed after years of analysis, reflection upon existing policy, and even trial implementation of similar types of programs at the state level. Even before passage of the ACA (2010), which expanded health care coverage to millions, and of the HCERA (2010), more than 50 percent of all health care expenditures in the United States already came from federal government programs such as Medicare and Medicaid. 
Several House and Senate members from both parties along with First Lady Hillary Clinton had proposed significant expansions in federal health care policy during the Democratic administration of Bill Clinton, providing a number of different options for any eventual health care overhaul. 3 Much of what became the ACA was drawn from proposals originally developed at the state level, by none other than Obama’s 2012 Republican presidential opponent Mitt Romney when he was governor of Massachusetts. 4 In addition to being thoughtful and generally stable, public policy deals with issues of concern to some large segment of society, as opposed to matters of interest only to individuals or a small group of people. Governments frequently interact with individual actors like citizens, corporations, or other countries. They may even pass highly specialized pieces of legislation, known as private bills, which confer specific privileges on individual entities. But public policy covers only those issues that are of interest to larger segments of society or that directly or indirectly affect society as a whole. Paying off the loans of a specific individual would not be public policy, but creating a process for loan forgiveness available to certain types of borrowers (such as those who provide a public service by becoming teachers) would certainly rise to the level of public policy. A final important characteristic of public policy is that it is more than just the actions of government; it also includes the behaviors or outcomes that government action creates. Policy can even be made when government refuses to act in ways that would change the status quo when circumstances or public opinion begin to shift. 5 For example, much of the debate over gun safety policy in the United States has centered on the unwillingness of Congress to act, even in the face of public opinion that supports some changes to gun policy. In fact, one of the last major changes occurred in 2004, when lawmakers’ inaction resulted in the expiration of a piece of legislation known as the Federal Assault Weapons Ban (1994). 6 PUBLIC POLICY AS OUTCOMES Governments rarely want to keep their policies a secret. Elected officials want to be able to take credit for the things they have done to help their constituents, and their opponents are all too willing to cast blame when policy initiatives fail. We can therefore think of policy as the formal expression of what elected or appointed officials are trying to accomplish. In passing the HCERA (2010), Congress declared its policy through an act that directed how it would appropriate money. The president can also implement or change policy through an executive order, which offers instructions about how to implement law under his or her discretion ( Figure 16.2 ). Finally, policy changes can come as a result of court actions or opinions, such as Brown v. Board of Education of Topeka (1954), which formally ended school segregation in the United States. 7 Typically, elected and even high-ranking appointed officials lack either the specific expertise or tools needed to successfully create and implement public policy on their own. They turn instead to the vast government bureaucracy to provide policy guidance. For example, when Congress passed the Clean Water Act (1972), it dictated that steps should be taken to improve water quality throughout the country. But it ultimately left it to the bureaucracy to figure out exactly how ‘clean’ water needed to be. 
In doing so, Congress provided the Environmental Protection Agency (EPA) with discretion to determine how much pollution is allowed in U.S. waterways. There is one more way of thinking about policy outcomes: in terms of winners and losers. Almost by definition, public policy promotes certain types of behavior while punishing others. So, the individuals or corporations that a policy favors are most likely to benefit, or win, whereas those the policy ignores or punishes are likely to lose. Even the best-intended policies can have unintended consequences and may even ultimately harm someone, if only those who must pay for the policy through higher taxes. A policy designed to encourage students to go to liberal arts colleges may cause trade school enrollment to decline. Strategies to promote diversity in higher education may make it more difficult for qualified white or male applicants to get accepted into competitive programs. Efforts to clean up drinking water supplies may make companies less competitive and cost employees their livelihood. Even something that seems to help everyone, such as promoting charitable giving through tax incentives, runs the risk of lowering tax revenues from the rich (who contribute a greater share of their income to charity) and shifting tax burdens to the poor (who must spend a higher share of their income to achieve a desired standard of living). And while policy pronouncements and bureaucratic actions are certainly meant to rationalize policy, it is whether a given policy helps or hurts constituents (or is perceived to do so) that ultimately determines how voters will react toward the government in future elections. Finding a Middle Ground The Social Safety Net During the Great Depression of the 1930s, the United States created a set of policies and programs that constituted a social safety net for the millions who had lost their jobs, their homes, and their savings ( Figure 16.3 ). Under President Franklin Delano Roosevelt , the federal government began programs like the Works Progress Administration and Civilian Conservation Corps to combat unemployment and the Home Owners' Loan Corporation to refinance Depression-related mortgage debts. As the effects of the Depression eased, the government phased out many of these programs. Other programs, like Social Security or the minimum wage, remain an important part of the way the government takes care of the vulnerable members of its population. The federal government has also added further social support programs, like Medicaid, Medicare, and the Special Supplemental Nutrition Program for Women, Infants, and Children, to ensure a baseline or minimal standard of living for all, even in the direst of times. In recent decades, however, some have criticized these safety net programs for inefficiency and for incentivizing welfare dependence. They deride "government leeches" who use food stamps to buy lobster or other seemingly inappropriate items. Critics deeply resent the use of taxpayer money to relieve social problems like unemployment and poverty; workers who may themselves be struggling to put food on the table or pay the mortgage feel their hard-earned money should not support other families. "If I can get by without government support," the reasoning goes, "those welfare families can do the same. Their poverty is not my problem." So where should the government draw the line?
While there have been some instances of welfare fraud, the welfare reforms of the 1990s have made long-term dependence on the federal government less likely as the welfare safety net was pushed to the states. And with the income gap between the richest and the poorest at its highest level in history, this topic is likely to continue to receive much discussion in the coming years. Where is the middle ground in the public policy argument over the social safety net? How can the government protect its most vulnerable citizens without placing an undue burden on others? Link to Learning Explore historical data on United States budgets and spending from 1940 to the present from the Office of Management and Budget. 16.2 Categorizing Public Policy Learning Objectives By the end of this section, you will be able to: Describe the different types of goods in a society Identify key public policy domains in the United States Compare the different forms of policy and the way they transfer goods within a society The idea of public policy is by its very nature a politically contentious one. Among the differences between American liberals and conservatives are the policy preferences prevalent in each group. Modern liberals tend to feel very comfortable with the idea of the government shepherding progressive social and economic reforms, believing that these will lead to outcomes more equitable and fair for all members of society. Conservatives, on the other hand, often find government involvement onerous and overreaching. They feel society would function more efficiently if oversight of most “public” matters were returned to the private sphere. Before digging too deeply into a discussion of the nature of public policy in the United States, let us look first at why so many aspects of society come under the umbrella of public policy to begin with. DIFFERENT TYPES OF GOODS Think for a minute about what it takes to make people happy and satisfied. As we live our daily lives, we experience a range of physical, psychological, and social needs that must be met in order for us to be happy and productive. At the very least, we require food, water, and shelter. In very basic subsistence societies, people acquire these through farming crops, digging wells, and creating shelter from local materials (see Figure 16.4 ). People also need social interaction with others and the ability to secure goods they acquire, lest someone else try to take them. As their tastes become more complex, they may find it advantageous to exchange their items for others; this requires not only a mechanism for barter but also a system of transportation. The more complex these systems are, the greater the range of items people can access to keep them alive and make them happy. However, this increase in possessions also creates a stronger need to secure what they have acquired. Economists use the term goods to describe the range of commodities, services, and systems that help us satisfy our wants or needs. This term can certainly apply to the food you eat or the home you live in, but it can also describe the systems of transportation or public safety used to protect them. Most of the goods you interact with in your daily life are private goods , which means that they can be owned by a particular person or group of people, and are excluded from use by others, typically by means of a price. For example, your home or apartment is a private good reserved for your own use because you pay rent or make mortgage payments for the privilege of living there. 
Further, private goods are finite and can run out if overused, even if only in the short term. The fact that private goods are excludable and finite makes them tradable. A farmer who grows corn, for instance, owns that corn, and since only a finite amount of corn exists, others may want to trade their goods for it if their own food supplies begin to dwindle. Proponents of free-market economics believe that the market forces of supply and demand, working without any government involvement, are the most effective way for markets to operate. One of the basic principles of free-market economics is that for just about any good that can be privatized, the most efficient means for exchange is the marketplace. A well-functioning market will allow producers of goods to come together with consumers of goods to negotiate a trade. People facilitate trade by creating a currency—a common unit of exchange—so they do not need to carry around everything they may want to trade at all times. As long as there are several providers or sellers of the same good, consumers can negotiate with them to find a price they are willing to pay. As long as there are several buyers for a seller’s goods, providers can negotiate with them to find a price buyers are willing to accept. And, the logic goes, if prices begin to rise too much, other sellers will enter the marketplace, offering lower prices. A second basic principle of free-market economics is that it is largely unnecessary for the government to protect the value of private goods. Farmers who own land used for growing food have a vested interest in protecting their land to ensure its continued production. Business owners must protect the reputation of their business or no one will buy from them. And, to the degree that producers need to ensure the quality of their product or industry, they can accomplish that by creating a group or association that operates outside government control. In short, industries have an interest in self-regulating to protect their own value. According to free-market economics, as long as everything we could ever want or need is a private good, and so long as every member of society has some ability to provide for themselves and their families, public policy regulating the exchange of goods and services is really unnecessary. Some people in the United States argue that the self-monitoring and self-regulating incentives provided by the existence of private goods mean that sound public policy requires very little government action. Known as libertarians , these individuals believe government almost always operates less efficiently than the private sector (the segment of the economy run for profit and not under government control), and that government actions should therefore be kept to a minimum. Even as many in the United States recognize the benefits provided by private goods, we have increasingly come to recognize problems with the idea that all social problems can be solved by exclusively private ownership. First, not all goods can be classified as strictly private. Can you really consider the air you breathe to be private? Air is a difficult good to privatize because it is not excludable—everyone can get access to it at all times—and no matter how much of it you breathe, there is still plenty to go around. Geographic regions like forests have environmental, social, recreational, and aesthetic value that cannot easily be reserved for private ownership. 
Resources like migrating birds or schools of fish may have value if hunted or fished, but they cannot be owned due to their migratory nature. Finally, national security provided by the armed forces protects all citizens and cannot reasonably be reserved for only a few. These are all examples of what economists call public goods , sometimes referred to as collective goods. Unlike private property, they are not excludable and are essentially infinite. Forests, water, and fisheries, however, are a type of public good called common goods , which are not excludable but may be finite. The problem with both public and common goods is that since no one owns them, no one has a financial interest in protecting their long-term or future value. Without government regulation, a factory owner can feel free to pollute the air or water, since he or she will have no responsibility for the pollution once the winds or waves carry it somewhere else (see Figure 16.5 ). Without government regulation, someone can hunt all the migratory birds or deplete a fishery by taking all the fish, eliminating future breeding stocks that would maintain the population. The situation in which individuals exhaust a common resource by acting in their own immediate self-interest is called the tragedy of the commons . A second problem with strict adherence to free-market economics is that some goods are too large, or too expensive, for individuals to provide them for themselves. Consider the need for a marketplace: Where does the marketplace come from? How do we get the goods to market? Who provides the roads and bridges? Who patrols the waterways? Who provides security? Who ensures the regulation of the currency? No individual buyer or seller could accomplish this. The very nature of the exchange of private goods requires a system that has some of the openness of public or common goods, but is maintained by either groups of individuals or entire societies. Economists consider goods like cable TV, cellphone service, and private schools to be toll goods . Toll goods are similar to public goods in that they are open to all and theoretically infinite if maintained, but they are paid for or provided by some outside (nongovernment) entity. Many people can make use of them, but only if they can pay the price. The name “toll goods” comes from the fact that, early on, many toll roads were in fact privately owned commodities. Even today, states from Virginia to California have allowed private companies to build public roads in exchange for the right to profit by charging tolls. 8 So long as land was plentiful, and most people in the United States lived a largely rural subsistence lifestyle, the difference between private, public, common, and toll goods was mostly academic. But as public lands increasingly became private through sale and settlement, and as industrialization and the rise of mass production allowed monopolies and oligopolies to become more influential, support for public policies regulating private entities grew. By the beginning of the twentieth century, led by the Progressives, the United States had begun to search for ways to govern large businesses that had managed to distort market forces by monopolizing the supply of goods. And, largely as a result of the Great Depression, people wanted ways of developing and protecting public goods that were fairer and more equitable than had existed before. 
These forces and events led to the increased regulation of public and common goods, and a move for the public sector—the government—to take over the provision of many toll goods. CLASSIC TYPES OF POLICY Public policy, then, ultimately boils down to determining the distribution, allocation, and enjoyment of public, common, and toll goods within a society. While the specifics of policy often depend on the circumstances, two broad questions all policymakers must consider are a) who pays the costs of creating and maintaining the goods, and b) who receives the benefits of the goods? When private goods are bought and sold in a marketplace, the costs and benefits go to the participants in the transaction. Your landlord benefits from receipt of the rent you pay, and you benefit by having a place to live. But non-private goods like roads, waterways, and national parks are controlled and regulated by someone other than the owners, allowing policymakers to make decisions about who pays and who benefits. In 1964, Theodore Lowi argued that it was possible to categorize policy based upon the degree to which costs and benefits were concentrated on the few or diffused across the many. One policy category, known as distributive policy , tends to collect payments or resources from many but concentrates direct benefits on relatively few. Highways are often developed through distributive policy. Distributive policy is also common when society feels there is a social benefit to individuals obtaining private goods such as higher education that offer long-term benefits but whose upfront cost may be too high for the average citizen. One example of the way distributive policy works is the story of the Transcontinental Railroad. In the 1860s, the U.S. government began to recognize the value of building a robust railroad system to move passengers and freight around the country. A particular goal was connecting California and the other western territories acquired during the 1840s war with Mexico to the rest of the country. The problem was that constructing a nationwide railroad system was a costly and risky proposition. To build and support continuous rail lines, private investors would need to gain access to tens of thousands of miles of land, some of which might be owned by private citizens. The solution was to charter two private corporations—the Central Pacific and Union Pacific Railroads—and provide them with resources and land grants to facilitate the construction of the railroads (see Figure 16.6 ). 9 Through these grants, publicly owned land was distributed to private citizens, who could then use it for their own gain. However, a broader public gain was simultaneously being provided in the form of a nationwide transportation network. The same process operates in the agricultural sector, where various federal programs help farmers and food producers through price supports and crop insurance, among other forms of assistance. These programs help individual farmers and agriculture companies stay afloat and realize consistent profits. They also achieve the broader goal of providing plenty of sustenance for the people of the United States, so that few of us have to “live off the land.” Milestone The Hoover Dam: The Federal Effort to Domesticate the Colorado River As westward expansion led to development of the American Southwest, settlers increasingly realized that they needed a way to control the frequent floods and droughts that made agriculture difficult in the region.
As early as 1890, land speculators had tried diverting the Colorado River for this purpose, but it wasn’t until 1922 that the U.S. Bureau of Reclamation (then called the Reclamation Service) chose the Black Canyon as a good location for a dam to divert the river. Since it would affect seven states (as well as Mexico), the federal government took the lead on the project, which eventually cost $49 million and more than one hundred lives. The dam faced significant opposition from representatives of other states, who felt its massive price tag (almost $670 million in today’s dollars 10 ) benefitted only a small group, not the whole nation. However, in 1928, Senator Hiram Johnson and Representative Phil Swing, both Republicans from California, won the day. Congress passed the Boulder Canyon Project Act , authorizing the construction of one of the most ambitious engineering feats in U.S. history. The Hoover Dam ( Figure 16.7 ), completed in 1935, served the dual purposes of generating hydroelectric power and irrigating two million acres of land from the resulting reservoir (Lake Mead). Was the construction of the Hoover Dam an effective expression of public policy? Why or why not? Link to Learning Visit this site to see how the U.S. Bureau of Reclamation (USBR) presented the construction of the Hoover Dam. How would you describe the bureau’s perspective? American Rivers is an advocacy group whose goal is to protect and restore rivers, including the Colorado River. How does this group’s view of the Hoover Dam differ from that of the USBR? Other examples of distributive policy support citizens’ efforts to achieve “the American Dream.” American society recognizes the benefits of having citizens who are financially invested in the country’s future. Among the best ways to encourage this investment are to ensure that citizens are highly educated and have the ability to acquire high-cost private goods such as homes and businesses. However, very few people have the savings necessary to pay upfront for a college education, a first home purchase, or the start-up costs of a business. To help out, the government has created a range of incentives that everyone in the country pays for through taxes but that directly benefit only the recipients. Examples include grants (such as Pell grants), tax credits and deductions, and subsidized or federally guaranteed loans. Each of these programs aims to achieve a policy outcome. Pell grants exist to help students graduate from college, whereas Federal Housing Administration mortgage loans lead to home ownership. While distributive policy, according to Lowi, has diffuse costs and concentrated benefits, regulatory policy features the opposite arrangement, with concentrated costs and diffuse benefits. A relatively small number of groups or individuals bear the costs of regulatory policy, but its benefits are expected to be distributed broadly across society. As you might imagine, regulatory policy is most effective for controlling or protecting public or common resources. Among the best-known examples are policies designed to protect public health and safety, and the environment. These regulatory policies prevent manufacturers or businesses from maximizing their profits by excessively polluting the air or water, selling products they know to be harmful, or compromising the health of their employees during production.
In the United States, nationwide calls for a more robust regulatory policy first grew loud around the turn of the twentieth century and the dawn of the Industrial Age. Investigative journalists—called muckrakers by politicians and business leaders who were the focus of their investigations—began to expose many of the ways in which manufacturers were abusing the public trust. Although various forms of corruption topped the list of abuses, among the most famous muckraker exposés was The Jungle , a 1906 novel by Upton Sinclair that focused on unsanitary working conditions and unsavory business practices in the meat-packing industry. 11 This work and others like it helped to spur the passage of the Pure Food and Drug Act (1906) and ultimately led to the creation of government agencies such as the U.S. Food and Drug Administration (FDA). 12 The nation’s experiences during the depression of 1896 and the Great Depression of the 1930s also led to more robust regulatory policies designed to improve the transparency of financial markets and prevent monopolies from forming. A final type of policy is redistributive policy , so named because it redistributes resources in society from one group to another. That is, according to Lowi, the costs are concentrated and so are the benefits, but different groups bear the costs and enjoy the benefits. Most redistributive policies are intended to have a sort of “Robin Hood” effect; their goal is to transfer income and wealth from one group to another such that everyone enjoys at least a minimal standard of living. Typically, the wealthy and middle class pay into the federal tax base, which then funds need-based programs that support low-income individuals and families. A few examples of redistributive policies are Head Start (education), Medicaid (health care), Temporary Assistance for Needy Families (TANF, income support), and food programs like the Supplemental Nutrition Assistance Program (SNAP). The government also uses redistribution to incentivize specific behaviors or aid small groups of people. Pell grants to encourage college attendance and tax credits to encourage home ownership are other examples of redistribution.
During the Great Depression, many politicians came to fear that the high unemployment and low-income levels plaguing society could threaten the stability of democracy, as was happening in European countries like Germany and Italy. The assumption in this thinking is that democratic systems work best when poverty is minimized. In societies operating in survival mode, in contrast, people tend to focus more on short-term problem-solving than on long-term planning. Second, social welfare policy creates an automatic stimulus for a society by building a safety net that can catch members of society who are suffering economic hardship through no fault of their own. For an individual family, this safety net makes the difference between eating and starving; for an entire economy, it could prevent an economic recession from sliding into a broader and more damaging depression. One of the oldest and largest pieces of social welfare policy is Social Security , which cost the United States about $845 billion in 2014 alone. 13 These costs are offset by a 12.4 percent payroll tax on all wages up to $118,500; employers and workers who are not self-employed split the bill for each worker, whereas the self-employed pay their entire share. 14 Social Security was conceived as a solution to several problems inherent to the Industrial Era economy. First, by the 1920s and 1930s, an increasing number of workers were earning their living through manual or day-wage labor that depended on their ability to engage in physical activity ( Figure 16.8 ). As their bodies weakened with age or if they were injured, their ability to provide for themselves and their families was compromised. Second, and of particular concern, were urban widows. During their working years, most American women stayed home to raise children and maintain the household while their husbands provided income. Should their husbands die or become injured, these women had no wage-earning skills with which to support themselves or their families. Social Security addresses these concerns with three important tools. First and best known is the retirement benefit. After completing a minimum number of years of work, American workers may claim a form of pension upon reaching retirement age. It is often called an entitlement program since it guarantees benefits to a particular group, and virtually everyone will eventually qualify for the plan given the relatively low requirements for enrollment. The amount of money a worker receives is based loosely on his or her lifetime earnings. Full retirement age was originally set at sixty-five, although changes in legislation have increased it to sixty-seven for workers born after 1959. 15 A valuable added benefit is that, under certain circumstances, this income may also be claimed by the survivors of qualifying workers, such as spouses and minor children, even if they themselves did not have a wage income. A second Social Security benefit is a disability payout, which the government distributes to workers who become unable to work due to physical or mental disability. To qualify, workers must demonstrate that the injury or incapacitation will last at least twelve months. A third and final benefit is Supplemental Security Income, which provides supplemental income to adults or children with considerable disability or to the elderly who fall below an income threshold. During the George W. 
Bush administration, Social Security became a highly politicized topic as the Republican Party sought to find a way of preventing what experts predicted would be the impending collapse of the Social Security system ( Figure 16.9 ). In 1950, the ratio of workers paying into the program to beneficiaries receiving payments was 16.5 to 1. By 2013, that number was 2.8 to 1 and falling. Most predictions in fact suggest that, due to continuing demographic changes including slower population growth and an aging population, by 2033, the amount of revenue generated from payroll taxes will no longer be sufficient to cover costs. The Bush administration proposed avoiding this by privatizing the program, in effect taking it out of the government’s hands and making individuals’ benefits variable instead of defined. The effort ultimately failed, and Social Security’s long-term viability continues to remain uncertain. Numerous other plans for saving the program have been proposed, including raising the retirement age, increasing payroll taxes (especially on the wealthy) by removing the $118,500 income cap (see the sketch below), and reducing payouts for wealthier retirees. None of these proposals have been able to gain traction, however. While Social Security was designed to provide cash payments to sustain the aged and disabled, Medicare and Medicaid were intended to ensure that vulnerable populations have access to health care. Medicare , like Social Security, is an entitlement program funded through payroll taxes. Its purpose is to make sure that senior citizens and retirees have access to low-cost health care they might not otherwise have, because most U.S. citizens get their health insurance through their employers. Medicare provides three major forms of coverage: a guaranteed insurance benefit that helps cover major hospitalization, fee-based supplemental coverage that retirees can use to lower costs for doctor visits and other health expenses, and a prescription drug benefit. Medicare faces many of the same long-term challenges as Social Security, due to the same demographic shifts. Medicare also faces the problem that health care costs are rising significantly faster than inflation. In 2014, Medicare cost the federal government almost $597 billion. 16 Medicaid is a formula-based health insurance program, which means beneficiaries must demonstrate they fall within a particular income category. Individuals in the Medicaid program receive a fairly comprehensive set of health benefits, although access to health care may be limited because fewer providers accept payments from the program (it pays them less for services than does Medicare). Medicaid differs dramatically from Medicare in that it is partially funded by states, many of which have reduced access to the program by setting the income threshold so low that few people qualify. The ACA (2010) sought to change that by providing more federal money to the states if they agreed to raise minimum income requirements. Many states have refused, which has helped to keep the overall costs of Medicaid lower, even though it has also left many people without health coverage they might receive if they lived elsewhere. Total costs for Medicaid in 2014 were about $492 billion, about $305 billion of which was paid by the federal government. 17 Collectively, Social Security, Medicare, and Medicaid make up the lion’s share of total federal government spending, almost 50 percent in 2014 and more than 50 percent in 2015.
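The payroll tax arithmetic above is easy to make concrete. The following minimal sketch, in Python, applies the 12.4 percent combined rate and the $118,500 wage cap quoted earlier; it ignores Medicare taxes and every other real-world detail, and the sample wage figures are invented for illustration:

    # Illustrative sketch only: Social Security payroll tax with the
    # parameters quoted in the text (12.4 percent combined rate,
    # $118,500 wage cap). Medicare taxes and other details are ignored.
    RATE = 0.124
    WAGE_CAP = 118_500

    def social_security_tax(wages, cap=WAGE_CAP):
        # The tax applies only to wages up to the cap.
        return RATE * min(wages, cap)

    for wages in (50_000, 118_500, 500_000):
        tax = social_security_tax(wages)
        print(wages, round(tax), f"{tax / wages:.1%}")
    # -> 50000 6200 12.4%; 118500 14694 12.4%; 500000 14694 2.9%
    # With the cap removed (cap=float("inf")), the $500,000 earner
    # would instead owe 0.124 * 500,000 = $62,000.

Because wages above the cap go untaxed, the effective rate falls as income rises past $118,500, which is why removing the cap is one of the reform proposals mentioned above.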
Several other smaller programs also provide income support to families. Most of these are formula-based, or means-tested, requiring citizens to meet certain maximum income requirements in order to qualify. A few examples are TANF, SNAP (also called food stamps), the unemployment insurance program, and various housing assistance programs. Collectively, these programs add up to a little over $480 billion. SCIENCE, TECHNOLOGY, AND EDUCATION After World War II ended, the United States quickly realized that it had to address two problems to secure its fiscal and national security future. The first was that more than ten million servicemen and women needed to be reintegrated into the workforce, and many lacked appreciable work skills. The second problem was that the United States’ success in its new conflict with the Soviet Union depended on the rapid development of a new, highly technical military-industrial complex. To confront these challenges, the U.S. government passed several important pieces of legislation to provide education assistance to workers and research dollars to industry. As the needs of American workers and industry have changed, many of these programs have evolved from their original purposes, but they still remain important pieces of the public policy debate. Much of the nation’s science and technology policy benefits its military, for instance, in the form of research and development funding for a range of defense projects. The federal government still promotes research for civilian uses, mostly through the National Science Foundation, the National Institutes of Health, the National Aeronautics and Space Administration (NASA), and the National Oceanic and Atmospheric Administration. Recent debate over these agencies has focused on whether government funding is necessary or if private entities would be better suited. For example, although NASA continues to develop a replacement for the now-defunct U.S. space shuttle program ( Figure 16.10 ), much of its workload is currently being performed by private companies working to develop their own space launch, resupply, and tourism programs. The problem of trying to direct and fund the education of a modern U.S. workforce is familiar to many students of American government. Historically, education has largely been the job of the states. While they have provided a very robust K–12 public education system, the national government has never moved to create an equivalent system of national higher education academies or universities as many other countries have done. As the need to keep the nation competitive with others became more pressing, however, the U.S. government did step in to direct its education dollars toward creating greater equity and ease of access to the existing public and private systems. The overwhelming portion of the government’s education money is spent on student loans, grants, and work-study programs. Resources are set aside to cover job-retraining programs for individuals who lack private-sector skills or who need to be retrained to meet changes in the economy’s demands for the labor force. National policy toward elementary and secondary education programs has typically focused on increasing resources available to school districts for nontraditional programs (such as preschool and special needs), or helping poorer schools stay competitive with wealthier institutions. BUSINESS STIMULUS AND REGULATION A final key aspect of domestic policy is the growth and regulation of business. 
The size and strength of the economy are very important to politicians whose jobs depend on citizens’ believing in their own future prosperity. At the same time, people in the United States want to live in a world where they feel safe from unfair or environmentally damaging business practices. These desires have forced the government to perform a delicate balancing act between programs that help grow the economy by providing benefits to the business sector and those that protect consumers, often by curtailing or regulating the business sector. Two of the largest recipients of government aid to business are agriculture and energy. Both are multi-billion dollar industries concentrated in rural and/or electorally influential states. Because voters are affected by the health of these sectors every time they pay their grocery or utility bill, the U.S. government has chosen to provide significant agriculture and energy subsidies to cover the risks inherent in the unpredictability of the weather and oil exploration. Government subsidies also protect these industries’ profitability. These two purposes have even overlapped in the government’s controversial decision to subsidize the production of ethanol, a fuel source similar to gasoline but generated from corn. When it comes to regulation, the federal government has created several agencies responsible for providing for everything from worker safety (OSHA, the Occupational Safety and Health Administration), to food safety (FDA), to consumer protection, where the recently created Consumer Financial Protection Bureau ensures that businesses do not mislead consumers with deceptive or manipulative practices. Another prominent federal agency, the EPA, is charged with ensuring that businesses do not excessively pollute the nation’s air or waterways. A complex array of additional regulatory agencies governs specific industries such as banking and finance, which are detailed later in this chapter. Link to Learning The policy areas we’ve described so far fall far short of forming an exhaustive list. This site contains the major topic categories of substantive policy in U.S. government, according to the Policy Agendas Project. View subcategories by clicking on the major topic categories. 16.4 Policymakers Learning Objectives By the end of this section, you will be able to: Identify types of policymakers in different issue areas Describe the public policy process Many Americans were concerned when Congress began debating the ACA. As the program took shape, some people felt the changes it proposed were being debated too hastily, would be implemented too quickly, or would summarily give the government control over an important piece of the U.S. economy—the health care industry. Ironically, the government had been heavily engaged in providing health care for decades. More than 50 percent of all health care dollars spent were being spent by the U.S. government well before the ACA was enacted. As you have already learned, Medicare was created decades earlier. Despite protesters’ resistance to government involvement in health care, there is no keeping government out of Medicare; the government IS Medicare. What many did not realize is that few if any of the proposals that eventually became part of the ACA were original. While the country was worried about problems like terrorism, the economy, and conflicts over gay rights, armies of individuals were debating the best ways to fix the nation’s health care delivery.
Two important but overlapping groups defended their preferred policy changes: policy advocates and policy analysts. POLICY ADVOCATES Take a minute to think of a policy change you believe would improve some condition in the United States. Now ask yourself this: “Why do I want to change this policy?” Are you motivated by a desire for justice? Do you feel the policy change would improve your life or that of members of your community? Is your sense of morality motivating you to change the status quo? Would your profession be helped? Do you feel that changing the policy might raise your status? Most people have some policy position or issue they would like to see altered (see Figure 16.11 ). One of the reasons the news media are so enduring is that citizens have a range of opinions on public policy, and they are very interested in debating how a given change would improve their lives or the country’s. But despite their interests, most people do little more than vote or occasionally contribute to a political campaign. A few people, however, become policy advocates by actively working to propose or maintain public policy. One way to think about policy advocates is to recognize that they hold a normative position on an issue, that is, they have a conviction about what should or ought to be done. The best public policy, in their view, is one that accomplishes a specific goal or outcome. For this reason, advocates often begin with an objective and then try to shape or create proposals that help them accomplish that goal. Facts, evidence, and analysis are important tools for convincing policymakers or the general public of the benefits of their proposals. Private citizens often find themselves in advocacy positions, particularly if they are required to take on leadership roles in their private lives or in their organizations. The most effective advocates are usually hired professionals who form lobbying groups or think tanks to promote their agenda. A lobbying group that frequently takes on advocacy roles is AARP (formerly the American Association of Retired Persons) ( Figure 16.12 ). AARP’s primary job is to convince the government to provide more public resources and services to senior citizens, often through regulatory or redistributive politics. Chief among its goals are lower health care costs and the safety of Social Security pension payments. These aims put AARP in the Democratic Party’s electoral coalition, since Democrats have historically been stronger advocates for Medicare’s creation and expansion. In 2002, for instance, Democrats and Republicans were debating a major change to Medicare. The Democratic Party supported expanding Medicare to include free or low-cost prescription drugs, while the Republicans preferred a plan that would require seniors to purchase drug insurance through a private insurer. The government would subsidize costs, but many seniors would still have substantial out-of-pocket expenses. To the surprise of many, AARP supported the Republican proposal. While Democrats argued that their position would have provided a better deal for individuals, AARP reasoned that the Republican plan had a much better chance of passing. The Republicans controlled the House and looked likely to reclaim control of the Senate in the upcoming election. Then-president George W. Bush was a Republican and would almost certainly have vetoed the Democratic approach. 
AARP’s support for the legislation helped shore up support for Republicans in the 2002 midterm election and also helped convince a number of moderate Democrats to support the bill (with some changes), which passed despite apparent public disapproval. AARP had done its job as an advocate for seniors by creating a new benefit it hoped could later be expanded, rather than fighting for an extreme position that would have left it with nothing. 18 Not all policy advocates are as willing to compromise their positions. It is much easier for a group like AARP to compromise over the amount of money seniors will receive, for instance, than it is for an evangelical religious group to compromise over issues like abortion, or for civil rights groups to accept something less than equality. Nor are women’s rights groups likely to accept pay inequality as it currently exists. It is easier to compromise over financial issues than over our individual views of morality or social justice. POLICY ANALYSTS A second approach to creating public policy is a bit more objective. Rather than starting with what ought to happen and seeking ways to make it so, policy analysts try to identify all the possible choices available to a decision maker and then gauge their impacts if implemented. The goal of the analyst isn’t really to encourage the implementation of any of the options; rather, it is to make sure decision makers are fully informed about the implications of the decisions they do make. Understanding the financial and other costs and benefits of policy choices requires analysts to make strategic guesses about how the public and governmental actors will respond. For example, when policymakers are considering changes to health care policy, one very important question is how many people will participate. If very few people had chosen to take advantage of the new health care plans available under the ACA marketplace, the program would have been significantly cheaper than projected, but it also would have failed to accomplish the key goal of increasing the number of insured. But if people who already had insurance had dropped it to take advantage of ACA’s subsidies, the program’s costs would have skyrocketed with very little real benefit to public health. Similarly, had all states chosen to create their own marketplaces, the cost and complexity of ACA’s implementation would have been greatly reduced. Because advocates have an incentive to understate costs and overstate benefits, policy analysis tends to be a highly politicized aspect of government. It is critical for policymakers and voters that policy analysts provide the most accurate analysis possible. A number of independent or semi-independent think tanks have sprung up in Washington, DC, to provide assessments of policy options. Most businesses or trade organizations also employ their own policy-analysis wings to help them understand proposed changes or even offer some of their own. Some of these try to be as impartial as possible. Most, however, have a known bias toward policy advocacy. The Cato Institute, for example, is a well-known and highly respected policy analysis group that both liberal and conservative politicians have turned to when considering policy options. But the Cato Institute has a known libertarian bias; most of the problems it selects for analysis have the potential for private sector solutions.
This means its analysts tend to include the rosiest assumptions of economic growth when considering tax cuts and to overestimate the costs of public sector proposals. Link to Learning The RAND Corporation has conducted objective policy analysis for corporate, nonprofit, and government clients since the mid-twentieth century. What are some of the policy areas it has explored? Both the Congress and the president have tried to reduce the bias in policy analysis by creating their own theoretically nonpartisan policy branches. In Congress, the best known of these is the Congressional Budget Office , or CBO. Authorized in the 1974 Congressional Budget and Impoundment Control Act, the CBO was formally created in 1975 as a way of increasing Congress’s independence from the executive branch. The CBO is responsible for scoring the spending or revenue impact of all proposed legislation to assess its net effect on the budget (a toy sketch of this netting appears below). In recent years, it has been the CBO’s responsibility to provide Congress with guidance on how to best balance the budget (see Figure 16.13 ). The formulas that the CBO uses in scoring the budget have become an important part of the policy debate, even as the group has tried to maintain its nonpartisan nature. In the executive branch, each individual department and agency is technically responsible for its own policy analysis. The assumption is that experts in the Federal Communications Commission or the Federal Election Commission are best equipped to evaluate the impact of various proposals within their policy domain. Law requires that most regulatory changes made by the federal government also include the opportunity for public input so the government can both gauge public opinion and seek outside perspectives. Executive branch agencies are usually also charged with considering the economic impact of regulatory action, although some agencies have been better at this than others. Critics have frequently singled out the EPA and OSHA for failing to adequately consider the impact of new rules on business. Within the White House itself, the Office of Management and Budget (OMB) was created to “serve the President of the United States in implementing his [or her] vision” of policy. Policy analysis is important to the OMB’s function, but as you can imagine, its objectivity is frequently compromised during policy formulation. Link to Learning How do the OMB and the CBO compare when it comes to impartiality? Get Connected! Preparing to Be a Policymaker What is your passion? Is there an aspect of society you think should be changed? Become a public policy advocate for it! One way to begin is by petitioning the Office of the President. In years past, citizens wrote letters to express grievances or policy preferences. Today, you can visit We the People, the White House online petitions platform ( Figure 16.14 ). At this government site, you can search for petitions related to your cause or post your own. If your petition gets enough signatures, the White House will issue a response. The petitions range from serious to silly, but the process is an important way to speak out about the policies that are important to you. Follow-up activity: Choose an issue you are passionate about. Visit We the People to see if there is already a petition there concerning your chosen issue. If so, join the community promoting your cause. If not, create your own petition and try to gather enough signatures to receive an official response.
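The scoring work described above reduces, at its core, to netting a bill's projected spending changes against its projected revenue changes over a multi-year budget window. The following toy sketch illustrates only that netting; it is not CBO methodology, and every number in it is invented:

    # Toy illustration of budget "scoring." A hypothetical bill's projected
    # changes to outlays and revenues (billions of dollars, years 1-10) are
    # netted to estimate its effect on the deficit. All numbers are invented.
    added_outlays  = [20, 22, 24, 26, 28, 30, 32, 34, 36, 38]
    added_revenues = [15, 18, 21, 24, 27, 30, 33, 36, 39, 42]

    yearly = [o - r for o, r in zip(added_outlays, added_revenues)]
    print(yearly)   # positive = adds to that year's deficit
    print(f"Ten-year net effect: {sum(yearly):+} billion")
    # -> [5, 4, 3, 2, 1, 0, -1, -2, -3, -4] and "Ten-year net effect: +5 billion"

A real score must also wrestle with behavioral responses and economic feedback, which is why the formulas themselves become part of the policy debate.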
THE POLICY PROCESS The policy process contains four sequential stages: (1) agenda setting, (2) policy enactment, (3) policy implementation, and (4) evaluation. Given the sheer number of issues already processed by the government, called the continuing agenda, and the large number of new proposals being pushed at any one time, it is typically quite difficult to move a new policy all the way through the process. Agenda setting is the crucial first stage of the public policy process. Agenda setting has two subphases: problem identification and alternative specification. Problem identification identifies the issues that merit discussion. Not all issues make it onto the governmental agenda because there is only so much attention that government can pay. Thus, one of the more important tasks for a policy advocate is to frame his or her issue in a compelling way that raises a persuasive dimension or critical need. 19 For example, health care reform has been attempted on many occasions over the years. One key to making the topic salient has been to frame it in terms of health care access, highlighting the percentage of people who do not have health insurance. Alternative specification, the second subphase of agenda setting, considers solutions to fix the difficulty raised in problem identification. For example, government officials may agree in the problem subphase that the increase in childhood obesity presents a societal problem worthy of government attention. However, the solution can be complex, and people who otherwise agree might come into conflict over what the best answer is. Alternatives might range from reinvestment in school physical education programs and health education classes, to taking soda and candy machines out of the schools and requiring good nutrition in school lunches. Agenda setting ends when a given problem has been selected, a solution has been paired with that problem, and the solution goes to the decision makers for a vote. Acid rain provides another nice illustration of agenda setting and the problem identification and alternative specification subphases. Acid rain is a widely recognized problem that did not make it onto the governmental policy agenda until Congress passed the Air Quality Act of 1967, long after environmental groups started asking for laws to regulate pollution. In the second policy phase, enactment, the elected branches of government typically consider one specific solution to a problem and decide whether to pass it. This stage is the most visible one and usually garners the most press coverage. And yet it is somewhat anticlimactic. By the time a specific policy proposal (a solution) comes out of agenda setting for a yes/no vote, it can be something of a foregone conclusion that it will pass. Once the policy has been enacted—usually by the legislative and/or executive branches of the government, like Congress or the president at the national level or the legislature or governor of a state—government agencies do the work of actually implementing it. On a national level, policy implementation can be either top-down or bottom-up. In top-down implementation , the federal government dictates the specifics of the policy, and each state implements it in exactly the same way. In bottom-up implementation , the federal government allows local areas some flexibility to meet their specific challenges and needs. 20 Evaluation, the last stage of the process, should be tied directly to the policy’s desired outcomes.
Evaluation essentially asks, “How well did this policy do what we designed it to do?” The answers can sometimes be surprising. In one hotly debated case, the United States funded abstinence-only sex education for teens with the goal of reducing teen pregnancy. A 2011 study published in the journal PLoS One , however, found that abstinence-only education actually increased teen pregnancy rates. 21 The information from the evaluation stage can feed back into the other stages, informing future decisions and creating a public policy cycle. 16.5 Budgeting and Tax Policy Learning Objectives By the end of this section, you will be able to: Discuss economic theories that shape U.S. economic policy Explain how the government uses fiscal policy tools to maintain a healthy economy Analyze the taxing and spending decisions made by Congress and the president Discuss the role of the Federal Reserve Board in monetary policy A country spends, raises, and regulates money in accordance with its values. In all, the federal government’s budget for 2016 was $3.8 trillion. This chapter has provided a brief overview of some of the budget’s key areas of expenditure, and thus some insight into modern American values. But these values are only part of the budgeting story. Policymakers make considerable effort to ensure that long-term priorities are protected from the heat of the election cycle and short-term changes in public opinion. The decision to put some policymaking functions out of the reach of Congress also reflects economic philosophies about the best ways to grow, stimulate, and maintain the economy. The role of politics in drafting the annual budget is indeed large ( Figure 16.15 ), but we should not underestimate the challenges elected officials face as a result of decisions made in the past. APPROACHES TO THE ECONOMY Until the 1930s, most policy advocates argued that the best way for the government to interact with the economy was through a hands-off approach formally known as laissez-faire economics. These policymakers believed the key to economic growth and development was the government’s allowing private markets to operate efficiently. Proponents of this school of thought believed private investors were better equipped than governments to figure out which sectors of the economy were most likely to grow and which new products were most likely to be successful. They also tended to oppose government efforts to establish quality controls or health and safety standards, believing consumers themselves would punish bad behavior by not trading with poor corporate citizens. Finally, laissez-faire proponents felt that keeping government out of the business of business would create an automatic cycle of economic growth and contraction. Contraction phases in which there is no economic growth for two consecutive quarters, called recessions , would bring business failures and higher unemployment. But this condition, they believed, would correct itself on its own if the government simply allowed the system to operate. The Great Depression challenged the laissez-faire view, however. When President Franklin Roosevelt came to office in 1933, the United States had already been in the depths of the Great Depression for several years, since the stock market crash of 1929. Roosevelt sought to implement a new approach to economic regulation known as Keynesianism.
Named for its developer, the economist John Maynard Keynes , Keynesian economics argues that it is possible for a recession to become so deep, and last for so long, that the typical models of economic collapse and recovery may not work. Keynes suggested that economic growth was closely tied to the ability of individuals to consume goods. It didn’t matter how or where investors wanted to invest their money if no one could afford to buy the products they wanted to make. And in periods of extremely high unemployment, wages for newly hired labor would be so low that new workers would be unable to afford the products they produced. Keynesianism counters this problem by increasing government spending in ways that improve consumption. Some of the proposals Keynes suggested were payments or pensions for the unemployed and retired, as well as tax incentives to encourage consumption in the middle class. His reasoning was that these individuals would be most likely to spend the money they received by purchasing more goods, which in turn would encourage production and investment. Keynes argued that the wealthy class of producers and employers had sufficient capital to meet the increased demand of consumers that government incentives would stimulate. Once consumption had increased and capital was flowing again, the government would reduce or eliminate its economic stimulus, and any money it had borrowed to create it could be repaid from higher tax revenues. Keynesianism dominated U.S. fiscal or spending policy from the 1930s to the 1970s. By the 1970s, however, high inflation began to slow economic growth. There were a number of reasons, including higher oil prices and the costs of fighting the Vietnam War. However, some economists, such as Arthur Laffer, began to argue that the social welfare and high tax policies created in the name of Keynesianism were overstimulating the economy, creating a situation in which demand for products had outstripped investors’ willingness to increase production. 22 They called for an approach known as supply-side economics , which argues that economic growth is largely a function of the productive capacity of a country. Supply-siders have argued that increased regulation and higher taxes reduce the incentive to invest new money into the economy, to the point where little growth can occur. They have advocated reducing taxes and regulations to spur economic growth. MANDATORY SPENDING VS. DISCRETIONARY SPENDING The desire of Keynesians to create a minimal level of aggregate demand, coupled with a Depression-era preference to promote social welfare policy, led the president and Congress to develop a federal budget with spending divided into two broad categories: mandatory and discretionary (see Figure 16.16 ). Of these, mandatory spending is the larger, consisting of about $2.3 trillion of the projected 2015 budget, or roughly 57 percent of all federal expenditures. 23 The overwhelming portion of mandatory spending is earmarked for entitlement programs guaranteed to those who meet certain qualifications, usually based on age, income, or disability. These programs, discussed above, include Medicare and Medicaid, Social Security, and major income security programs such as unemployment insurance and SNAP. The costs of programs tied to age are relatively easy to estimate and grow largely as a function of the aging of the population. Income and disability payments are a bit more difficult to estimate.
They tend to go down during periods of economic recovery and rise when the economy begins to slow down, in precisely the way Keynes suggested. A comparatively small piece of the mandatory spending pie, about 10 percent, is devoted to benefits designated for former federal employees, including military retirement and many Veterans Administration programs. Congress is ultimately responsible for setting the formulas for mandatory payouts, but as we saw in the earlier discussion regarding Social Security, major reforms to entitlement formulas are difficult to enact. As a result, the size and growth of mandatory spending in future budgets are largely a function of previous legislation that set the formulas up in the first place. So long as supporters of particular programs can block changes to the formulas, funding will continue almost on autopilot. Keynesians support this mandatory spending, along with other elements of social welfare policy, because such programs help maintain a minimal level of consumption that should, in theory, prevent recessions from turning into depressions, which are more severe downturns. Portions of the budget not devoted to mandatory spending are categorized as discretionary spending because Congress must pass legislation to authorize money to be spent each year. About 50 percent of the approximately $1.2 trillion set aside for discretionary spending each year pays for most of the operations of government, including employee salaries and the maintenance of federal buildings. It also covers science and technology spending, foreign affairs initiatives, education spending, federally provided transportation costs, and many of the redistributive benefits most people in the United States have come to take for granted. 24 The other half of discretionary spending—and the second-largest component of the total budget—is devoted to the military. (Only Social Security is larger.) Defense spending is used to maintain the U.S. military presence at home and abroad, procure and develop new weapons, and cover the cost of any wars or other military engagements in which the United States is currently engaged ( Figure 16.17 ). In theory, the amount of revenue raised by the national government should be equal to these expenses, but with the exception of a brief period from 1998 to 2000, that has not been the case. The economic recovery from the 2007–2009 recession, and budget control efforts implemented since then, have managed to cut the annual deficit —the amount by which expenditures are greater than revenues—by more than half. However, the amount of money the U.S. government needed to borrow to pay its bills in 2016 was still in excess of $400 billion. 25 This was in addition to the country’s almost $19 trillion of total debt —the amount of money the government owes its creditors—at the end of 2015, according to the Department of the Treasury. Balancing the budget has been a major goal of both the Republican and Democratic parties for the past several decades, although the parties tend to disagree on the best way to accomplish the task. One frequently offered solution, particularly among supply-side advocates, is to simply cut spending. This has proven to be much easier said than done. If Congress were to try to balance the budget only through discretionary spending, it would need to cut about one-third of spending on programs like defense, higher education, agriculture, police enforcement, transportation, and general government operations.
Given the number and popularity of many of these programs, it is difficult to imagine this would be possible. To use spending cuts alone as a way to control the deficit, Congress will almost certainly be required to cut or control the costs of mandatory spending programs like Social Security and Medicare—a radically unpopular step. TAX POLICY The other option available for balancing the budget is to increase revenue. All governments must raise revenue in order to operate. The most common way is by applying some sort of tax on residents (or on their behaviors) in exchange for the benefits the government provides ( Figure 16.18 ). As necessary as taxes are, however, they are not without potential downfalls. First, the more money the government collects to cover its costs, the less residents are left with to spend and invest. Second, attempts to raise revenues through taxation may alter the behavior of residents in ways that are counterproductive to the state and the broader economy. Excessively taxing necessary and desirable behaviors like consumption (with a sales tax) or investment (with a capital gains tax) will discourage citizens from engaging in them, potentially slowing economic growth. The goal of tax policy, then, is to determine the most effective way of meeting the nation’s revenue obligations without harming other public policy goals. As you would expect, Keynesians and supply-siders disagree about which forms of tax policy are best. Keynesians, with their concern about whether consumers can really stimulate demand, prefer progressive tax systems that increase the effective tax rate as the taxpayer’s income increases. This policy leaves those most likely to spend their money with more money to spend. For example, in 2015, U.S. taxpayers paid a 10 percent tax rate on the first $18,450 of income, but 15 percent on the next $56,450 (some income is excluded). 26 The rate continues to rise, to up to 39.6 percent on any taxable income over $464,850. These brackets are somewhat distorted by the range of tax credits, deductions, and incentives the government offers, but the net effect is that the top income earners pay a greater portion of the overall income tax burden than do those at the lowest tax brackets. According to the Pew Research Center, based on tax returns in 2014, 2.7 percent of filers made more than $250,000. Those filers paid 52 percent of all income taxes collected. 27 Supply-siders, on the other hand, prefer regressive tax systems, which lower the overall rate as individuals make more money. This does not automatically mean the wealthy pay less than the poor, simply that the percentage of their income they pay in taxes will be lower. Consider, for example, the use of excise taxes on specific goods or services as a source of revenue. 28 Sometimes called “sin taxes” because they tend to be applied to goods like alcohol, tobacco, and gasoline, excise taxes have a regressive quality, since the amount of the good purchased by the consumer, and thus the tax paid, does not increase at the same rate as income. A person who makes $250,000 per year is likely to purchase more gasoline than a person who makes $50,000 per year ( Figure 16.19 ). But the higher earner is not likely to purchase five times more gasoline, which means the proportion of his or her income paid out in gasoline taxes is less than the proportion for a lower-earning individual. Another example of a regressive tax paid by most U.S. workers is the payroll tax that funds Social Security (discussed just after the sketch below).
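The bracket figures quoted above correspond to the 2015 schedule for married couples filing jointly. The sketch below makes the progressive arithmetic concrete; the intermediate brackets are filled in from that 2015 schedule (an assumption beyond the figures the text quotes), and credits, deductions, and exemptions are ignored:

    # Minimal sketch of progressive bracket arithmetic. The 10 and 15
    # percent brackets and the 39.6 percent top rate are quoted in the
    # text; the intermediate brackets are filled in from the 2015
    # married-filing-jointly schedule. Illustration only, not tax advice.
    BRACKETS = [            # (upper bound of bracket, marginal rate)
        (18_450, 0.10),
        (74_900, 0.15),
        (151_200, 0.25),
        (230_450, 0.28),
        (411_500, 0.33),
        (464_850, 0.35),
        (float("inf"), 0.396),
    ]

    def income_tax(taxable_income):
        tax, lower = 0.0, 0.0
        for upper, rate in BRACKETS:
            if taxable_income <= lower:
                break
            tax += rate * (min(taxable_income, upper) - lower)
            lower = upper
        return tax

    for income in (18_450, 74_900, 500_000):
        tax = income_tax(income)
        print(income, round(tax), f"{tax / income:.1%}")
    # -> effective rates of 10.0%, 13.8%, and 28.8%: the effective
    # rate rises with income, which is what makes the tax progressive.

Contrast that rising effective rate with the payroll tax discussed next, whose effective rate falls once wages pass the cap.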
While workers contribute 7.65 percent of their income to pay for Social Security and their employers pay a matching amount, in 2015, the payroll tax was applied to only the first $118,500 of income. Individuals who earned more than that, or who made money from other sources like investments, saw their overall tax rate fall as their income increased. In 2015, the United States raised about $3.2 trillion in revenue. Income taxes ($1.54 trillion), payroll taxes on Social Security and Medicare ($1.07 trillion), and excise taxes ($98 billion) make up three of the largest sources of revenue for the federal government. When combined with corporate income taxes ($344 billion), these four tax streams make up about 95 percent of total government revenue. The balance of revenue is split nearly evenly between revenues from the Federal Reserve and a mix of revenues from import tariffs, estate and gift taxes, and various fees or fines paid to the government ( Figure 16.20 ). THE FEDERAL RESERVE BOARD AND INTEREST RATES Financial panics arise when too many people, worried about the solvency of their investments, try to withdraw their money at the same time. Such panics plagued U.S. banks until 1913 ( Figure 16.21 ), when Congress enacted the Federal Reserve Act . The act established the Federal Reserve System, also known as the Fed, as the central bank of the United States. The Fed’s three original goals were to promote maximum employment, stable prices, and moderate long-term interest rates. 29 All of these goals bring stability. The Fed’s role is now broader and includes influencing monetary policy (the means by which the nation controls the size and growth of the money supply), supervising and regulating banks, and providing them with financial services like loans. The Federal Reserve System is overseen by a board of governors, known as the Federal Reserve Board. The president of the United States appoints the seven governors, each of whom serves a fourteen-year term (the terms are staggered). A chair and vice chair lead the board for terms of four years each. The most important work of the board is participating in the Federal Open Market Committee to set monetary policy, including interest rate levels and broader macroeconomic policy. The board also oversees a network of twelve regional Federal Reserve Banks, each of which serves as a “banker’s bank” for the country’s financial institutions. Insider Perspective The Role of the Federal Reserve Chair If you have read or watched the news for the past several years, perhaps you have heard the names Janet Yellen , Ben Bernanke , or Alan Greenspan . Bernanke and Greenspan are recent past chairs of the board of governors of the Federal Reserve System; Yellen is the current chair ( Figure 16.22 ). The role of the Fed chair is one of the most important in the country. By raising or lowering banks’ interest rates, the chair has the ability to reduce inflation or stimulate growth. The Fed’s dual mandate is to keep inflation low (under 2 percent) and unemployment low (below 5 percent), but efforts to meet these goals can often lead to contradictory monetary policies. The Fed, and by extension its chair, have a tremendous responsibility. Many of the economic events of the past five decades, both good and bad, are the results of Fed policies. In the 1970s, double-digit inflation brought the economy almost to a halt, but when Paul Volcker became chair in 1979, he raised interest rates and jump-started the economy.
After the stock market crash of 1987, then-chair Alan Greenspan declared, “The Federal Reserve, consistent with its responsibilities as the nation’s central bank, affirmed today its readiness to…support the economic and financial system.” 30 His lowering of interest rates led to an unprecedented decade of economic growth through the 1990s. In the 2000s, consistently low interest rates and readily available credit contributed to the sub-prime mortgage boom and subsequent bust, which led to a global economic recession beginning in 2008. Should the important tasks of the Fed continue to be pursued by unelected appointees like those profiled in this box, or should elected leaders be given the job? Why? Link to Learning Do you think you have what it takes to be chair of the Federal Reserve Board? Play this game and see how you fare!
Summary 10.1 Using Microbiology to Discover the Secrets of Life DNA was discovered and characterized long before its role in heredity was understood. Microbiologists played significant roles in demonstrating that DNA is the hereditary information found within cells. In the 1850s and 1860s, Gregor Mendel experimented with true-breeding garden peas to demonstrate the heritability of specific observable traits. In 1869, Friedrich Miescher isolated and purified a compound rich in phosphorus from the nuclei of white blood cells; he named the compound nuclein. Miescher’s student Richard Altmann discovered its acidic nature, renaming it nucleic acid. Albrecht Kossel characterized the nucleotide bases found within nucleic acids. Although Walter Sutton and Theodor Boveri proposed the Chromosomal Theory of Inheritance in 1902, it was not scientifically demonstrated until the 1915 publication of the work of Thomas Hunt Morgan and his colleagues. Using Acetabularia, a large algal cell, as his model system, Joachim Hämmerling demonstrated in the 1930s and 1940s that the nucleus was the location of hereditary information in these cells. In the 1940s, George Beadle and Edward Tatum used the mold Neurospora crassa to show that each protein’s production was under the control of a single gene, demonstrating the “one gene–one enzyme” hypothesis. In 1928, Frederick Griffith showed that dead encapsulated bacteria could pass genetic information to live nonencapsulated bacteria and transform them into harmful strains. In 1944, Oswald Avery, Colin MacLeod, and Maclyn McCarty identified that transforming compound as DNA. The nature of DNA as the molecule that stores genetic information was unequivocally demonstrated in the experiment of Alfred Hershey and Martha Chase published in 1952. Labeled DNA from bacterial viruses entered and infected bacterial cells, giving rise to more viral particles. The labeled protein coats did not participate in the transmission of genetic information. 10.2 Structure and Function of DNA Nucleic acids are composed of nucleotides, each of which contains a pentose sugar, a phosphate group, and a nitrogenous base. Deoxyribonucleotides within DNA contain deoxyribose as the pentose sugar. DNA contains the pyrimidines cytosine and thymine, and the purines adenine and guanine. Nucleotides are linked together by phosphodiester bonds between the 5ʹ phosphate group of one nucleotide and the 3ʹ hydroxyl group of another. A nucleic acid strand has a free phosphate group at the 5ʹ end and a free hydroxyl group at the 3ʹ end. Chargaff discovered that the amount of adenine is approximately equal to the amount of thymine in DNA, and that the amount of guanine is approximately equal to the amount of cytosine. These relationships were later determined to be due to complementary base pairing. Watson and Crick, building on the work of Chargaff, Franklin and Gosling, and Wilkins, proposed the double helix model and base pairing for DNA structure. DNA is composed of two complementary strands oriented antiparallel to each other with the phosphodiester backbones on the exterior of the molecule. The nitrogenous bases of each strand face each other and complementary bases hydrogen bond to each other, stabilizing the double helix. Heat or chemicals can break the hydrogen bonds between complementary bases, denaturing DNA. Cooling or removing chemicals can lead to renaturation or reannealing of DNA by allowing hydrogen bonds to reform between complementary bases. DNA stores the instructions needed to build and control the cell.
This information is transmitted from parent to offspring through vertical gene transfer. 10.3 Structure and Function of RNA Ribonucleic acid (RNA) is typically single stranded and contains ribose as its pentose sugar and the pyrimidine uracil instead of thymine. An RNA strand can undergo significant intramolecular base pairing to take on a three-dimensional structure. There are three main types of RNA, all involved in protein synthesis. Messenger RNA (mRNA) serves as the intermediary between DNA and the synthesis of protein products during translation. Ribosomal RNA (rRNA) is a type of stable RNA that is a major constituent of ribosomes. It ensures the proper alignment of the mRNA and the ribosomes during protein synthesis and catalyzes the formation of the peptide bonds between two aligned amino acids. Transfer RNA (tRNA) is a small type of stable RNA that carries an amino acid to the corresponding site of protein synthesis in the ribosome. It is the base pairing between the tRNA and mRNA that allows for the correct amino acid to be inserted in the polypeptide chain being synthesized. Although RNA is not used for long-term genetic information in cells, many viruses do use RNA as their genetic material. 10.4 Structure and Function of Cellular Genomes The entire genetic content of a cell is its genome. Genes code for proteins or stable RNA molecules, each of which carries out a specific function in the cell. Although the genotype that a cell possesses remains constant, expression of genes is dependent on environmental conditions. A phenotype is the observable characteristics of a cell (or organism) at a given point in time and results from the complement of genes currently being used. The majority of genetic material is organized into chromosomes that contain the DNA that controls cellular activities. Prokaryotes are typically haploid, usually having a single circular chromosome found in the nucleoid. Eukaryotes are diploid; DNA is organized into multiple linear chromosomes found in the nucleus. Supercoiling and DNA packaging using DNA binding proteins allow lengthy molecules to fit inside a cell. Eukaryotes and archaea use histone proteins, and bacteria use different proteins with similar function. Prokaryotic and eukaryotic genomes both contain noncoding DNA, the function of which is not well understood. Some noncoding DNA appears to participate in the formation of small noncoding RNA molecules that influence gene expression; some appears to play a role in maintaining chromosomal structure and in DNA packaging. Extrachromosomal DNA in eukaryotes includes the chromosomes found within organelles of prokaryotic origin (mitochondria and chloroplasts) that evolved by endosymbiosis. Some viruses may also maintain themselves extrachromosomally. Extrachromosomal DNA in prokaryotes is commonly maintained as plasmids that encode a few nonessential genes that may be helpful under specific conditions. Plasmids can be spread through a bacterial community by horizontal gene transfer. Viral genomes show extensive variation and may be composed of either RNA or DNA, and may be either double or single stranded.
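To make the summary’s base-pairing rules concrete: because RNA uses uracil in place of thymine, an mRNA is built by pairing each base of the DNA template strand with its RNA complement. The following is a minimal sketch (the nine-base sequence is hypothetical, chosen only for illustration):

```python
# Pairing rules for transcription: each DNA template base specifies an
# RNA base, with uracil (U) replacing thymine (T) in RNA.
DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3_to_5):
    """Return the mRNA (written 5'->3') complementary to a DNA template
    strand read 3'->5'."""
    return "".join(DNA_TO_RNA[base] for base in template_3_to_5)

template = "TACGGTATC"        # hypothetical template strand, read 3'->5'
print(transcribe(template))   # AUGCCAUAG, read 5'->3'
```

The same lookup-table idea underlies the tRNA–mRNA pairing the summary describes: each tRNA anticodon pairs with an mRNA codon by these rules, with U pairing to A.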
Chapter Outline 10.1 Using Microbiology to Discover the Secrets of Life 10.2 Structure and Function of DNA 10.3 Structure and Function of RNA 10.4 Structure and Function of Cellular Genomes Introduction Children inherit some characteristics from each parent. Siblings typically look similar to each other, but not exactly the same—except in the case of identical twins. How can we explain these phenomena? The answers lie in heredity (the transmission of traits from one generation to the next) and genetics (the science of heredity). Because humans reproduce sexually, 50% of a child’s genes come from the mother’s egg cell and the remaining 50% from the father’s sperm cell. Sperm and egg are formed through the process of meiosis, where DNA recombination occurs. Thus, there is no predictable pattern as to which 50% comes from which parent. As a result, siblings have only some genes, and their associated characteristics, in common. Identical twins are the exception, because they are genetically identical. Genetic differences among related microbes also dictate many observed biochemical and virulence differences. For example, some strains of the bacterium Escherichia coli are harmless members of the normal microbiota in the human gastrointestinal tract. Other strains of the same species have genes that give them the ability to cause disease. In bacteria, such genes are not inherited via sexual reproduction, as in humans. Often, they are transferred via plasmids, small circular pieces of double-stranded DNA that can be exchanged between prokaryotes.
Review Questions
1. Why was the alga Acetabularia a good model organism for Joachim Hämmerling to use to identify the location of genetic material? A. It lacks a nuclear membrane. B. It self-fertilizes. C. It is a large, asymmetrical, single cell easy to see with the naked eye. D. It makes a protein capsid. (Answer: C)
2. Which method did Morgan and colleagues use to show that hereditary information was carried on chromosomes? A. statistical predictions of the outcomes of crosses using true-breeding parents B. correlations between microscopic observations of chromosomal movement and the characteristics of offspring C. transformation of nonpathogenic bacteria to pathogenic bacteria D. mutations resulting in distinct defects in metabolic enzymatic pathways (Answer: B)
3. According to Beadle and Tatum’s “one gene–one enzyme” hypothesis, which of the following enzymes will eliminate the transformation of hereditary material from pathogenic bacteria to nonpathogenic bacteria? A. carbohydrate-degrading enzymes B. proteinases C. ribonucleases D. deoxyribonucleases (Answer: D)
4. Which of the following is not found within DNA? A. thymine B. phosphodiester bonds C. complementary base pairing D. amino acids (Answer: D)
5. If 30% of the bases within a DNA molecule are adenine, what is the percentage of thymine? A. 20% B. 25% C. 30% D. 35% (Answer: C)
6. Which of the following statements about base pairing in DNA is incorrect? A. Purines always base pair with pyrimidines. B. Adenine binds to guanine. C. Base pairs are stabilized by hydrogen bonds. D. Base pairing occurs at the interior of the double helix. (Answer: B)
7. If a DNA strand contains the sequence 5ʹ-ATTCCGGATCGA-3ʹ, which of the following is the sequence of the complementary strand of DNA? A. 5ʹ-TAAGGCCTAGCT-3ʹ B. 5ʹ-ATTCCGGATCGA-3ʹ C. 3ʹ-TAACCGGTACGT-5ʹ D. 5ʹ-TCGATCCGGAAT-3ʹ (Answer: D)
8. During denaturation of DNA, which of the following happens? A. Hydrogen bonds between complementary bases break. B. Phosphodiester bonds break within the sugar-phosphate backbone. C. Hydrogen bonds within the sugar-phosphate backbone break. D. Phosphodiester bonds between complementary bases break. (Answer: A)
9. Which of the following types of RNA codes for a protein? A. dsRNA B. mRNA C. rRNA D. tRNA (Answer: B)
10. Which of the following types of RNA is known for its catalytic abilities? A. dsRNA B. mRNA C. rRNA D. tRNA (Answer: C)
11. Ribosomes are composed of rRNA and what other component? A. protein B. carbohydrates C. DNA D. mRNA (Answer: A)
12. Which of the following may use RNA as its genome? A. a bacterium B. an archaeon C. a virus D. a eukaryote (Answer: C)
13. Which of the following correctly describes the structure of the typical eukaryotic genome? A. diploid B. linear C. singular D. double stranded (Answer: A)
14. Which of the following is typically found as part of the prokaryotic genome? A. chloroplast DNA B. linear chromosomes C. plasmids D. mitochondrial DNA (Answer: C)
15. Serratia marcescens cells produce a red pigment at room temperature. The red color of the colonies is an example of which of the following? A. genotype B. phenotype C. change in DNA base composition D. adaptation to the environment (Answer: B)
16. Which of the following genes would not likely be encoded on a plasmid? A. genes encoding toxins that damage host tissue B. genes encoding antibacterial resistance C. genes encoding enzymes for glycolysis D. genes encoding enzymes for the degradation of an unusual substrate (Answer: C)
17. Histones are DNA binding proteins that are important for DNA packaging in which of the following? A. double-stranded and single-stranded DNA viruses B. archaea and bacteria C. bacteria and eukaryotes D. eukaryotes and archaea (Answer: D)
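Question 7 above is mechanical once the pairing rules are fixed: complement each base (A↔T, G↔C) and reverse the result so it reads 5ʹ→3ʹ, since the two strands are antiparallel. A minimal sketch, using the sequence from the question:

```python
# Watson-Crick pairing (A<->T, G<->C); the strands are antiparallel, so
# the complement must be reversed to read 5'->3'.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand_5_to_3):
    """Return the complementary strand, also written 5'->3'."""
    return "".join(PAIR[base] for base in reversed(strand_5_to_3))

seq = "ATTCCGGATCGA"               # 5'-ATTCCGGATCGA-3'
print(reverse_complement(seq))     # TCGATCCGGAAT, i.e., choice D
```

The same pairing rules answer question 5: every adenine is matched by a thymine on the opposite strand, so 30% adenine implies 30% thymine.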
10.1 Using Microbiology to Discover the Secrets of Life Learning Objectives Describe the discovery of nucleic acid and nucleotides Explain the historical experiments that led to the characterization of DNA Describe how microbiology and microorganisms have been used to discover the biochemistry of genes Explain how scientists established the link between DNA and heredity Clinical Focus Part 1 Alex is a 22-year-old college student who vacationed in Puerto Vallarta, Mexico, for spring break. Unfortunately, two days after flying home to Ohio, he began to experience abdominal cramping and extensive watery diarrhea. Because of his discomfort, he sought medical attention at a large Cincinnati hospital nearby. What types of infections or other conditions may be responsible? Jump to the next Clinical Focus box. Through the early 20th century, DNA was not yet recognized as the genetic material responsible for heredity, the passage of traits from one generation to the next. In fact, much of the research was dismissed until the mid-20th century. The scientific community believed, incorrectly, that the process of inheritance involved a blending of parental traits that produced an intermediate physical appearance in offspring; this hypothetical process appeared to be correct because of what we know now as continuous variation, which results from the action of many genes to determine a particular characteristic, like human height. Offspring appear to be a “blend” of their parents’ traits when we look at characteristics that exhibit continuous variation. The blending theory of inheritance asserted that the original parental traits were lost or absorbed by the blending in the offspring, but we now know that this is not the case. Two separate lines of research, begun in the mid to late 1800s, ultimately led to the discovery and characterization of DNA and the foundations of genetics, the science of heredity. These lines of research began to converge in the 1920s, and research using microbial systems ultimately resulted in significant contributions to elucidating the molecular basis of genetics. Discovery and Characterization of DNA Modern understanding of DNA has evolved from the discovery of nucleic acid to the development of the double-helix model. In the 1860s, Friedrich Miescher (1844–1895), a physician by profession, was the first person to isolate phosphorus-rich chemicals from leukocytes (white blood cells) from the pus on used bandages from a local surgical clinic. He named these chemicals (which would eventually be known as RNA and DNA) “nuclein” because they were isolated from the nuclei of the cells. His student Richard Altmann (1852–1900) termed it “nucleic acid” 20 years later, when he discovered the acidic nature of nuclein. In the last two decades of the 19th century, German biochemist Albrecht Kossel (1853–1927) isolated and characterized the five different nucleotide bases composing nucleic acid. These are adenine, guanine, cytosine, thymine (in DNA), and uracil (in RNA). Kossel received the Nobel Prize in Physiology or Medicine in 1910 for his work on nucleic acids and for his considerable work on proteins, including the discovery of histidine. Foundations of Genetics Despite the discovery of DNA in the late 1800s, scientists did not make the association with heredity for many more decades. To make this connection, scientists, including a number of microbiologists, performed many experiments on plants, animals, and bacteria.
Mendel’s Pea Plants While Miescher was isolating and discovering DNA in the 1860s, Austrian monk and botanist Johann Gregor Mendel (1822–1884) was experimenting with garden peas, demonstrating and documenting basic patterns of inheritance, now known as Mendel’s laws. In 1856, Mendel began his decade-long research into inheritance patterns. He used the diploid garden pea, Pisum sativum, as his primary model system because it naturally self-fertilizes and is highly inbred, producing “true-breeding” pea plant lines—plants that always produce offspring that look like the parent. By experimenting with true-breeding pea plants, Mendel avoided the appearance of unexpected traits in offspring that might occur if he used plants that were not true-breeding. Mendel performed hybridizations, which involve mating two true-breeding individuals (P generation) that have different traits, and examined the characteristics of their offspring (first filial generation, F1) as well as the offspring of self-fertilization of the F1 generation (second filial generation, F2) ( Figure 10.2 ). In 1865, Mendel presented the results of his experiments with nearly 30,000 pea plants to the local natural history society. He demonstrated that traits are transmitted faithfully from parents to offspring independently of other traits. In 1866, he published his work, “Experiments in Plant Hybridization,” 1 in the Proceedings of the Natural History Society of Brünn. Mendel’s work went virtually unnoticed by the scientific community, which believed, incorrectly, in the theory of blending of traits in continuous variation. 1 J.G. Mendel. “Versuche über Pflanzenhybriden.” Verhandlungen des naturforschenden Vereines in Brünn, Bd. Abhandlungen 4 (1865):3–7. (For English translation, see http://www.mendelweb.org/Mendel.plain.html) He was not recognized for his extraordinary scientific contributions during his lifetime. In fact, it was not until 1900 that his work was rediscovered, reproduced, and revitalized by scientists on the brink of discovering the chromosomal basis of heredity. The Chromosomal Theory of Inheritance Mendel carried out his experiments long before chromosomes were visualized under a microscope. However, with the improvement of microscopic techniques during the late 1800s, cell biologists could stain and visualize subcellular structures with dyes and observe their actions during meiosis. They were able to observe chromosomes replicating, condensing from an amorphous nuclear mass into distinct X-shaped bodies and migrating to separate cellular poles. The speculation that chromosomes might be the key to understanding heredity led several scientists to examine Mendel’s publications and re-evaluate his model in terms of the behavior of chromosomes during mitosis and meiosis. In 1902, Theodor Boveri (1862–1915) observed that in sea urchins, nuclear components (chromosomes) determined proper embryonic development. That same year, Walter Sutton (1877–1916) observed the separation of chromosomes into daughter cells during meiosis. Together, these observations led to the development of the Chromosomal Theory of Inheritance, which identified chromosomes as the genetic material responsible for Mendelian inheritance. Despite compelling correlations between the behavior of chromosomes during meiosis and Mendel’s observations, the Chromosomal Theory of Inheritance was proposed long before there was any direct evidence that traits were carried on chromosomes.
Thomas Hunt Morgan (1866–1945) and his colleagues spent several years carrying out crosses with the fruit fly, Drosophila melanogaster . They performed meticulous microscopic observations of fly chromosomes and correlated these observations with resulting fly characteristics. Their work provided the first experimental evidence to support the Chromosomal Theory of Inheritance in the early 1900s. In 1915, Morgan and his “Fly Room” colleagues published The Mechanism of Mendelian Heredity, which identified chromosomes as the cellular structures responsible for heredity. For his many significant contributions to genetics, Morgan received the Nobel Prize in Physiology or Medicine in 1933. In the late 1920s, Barbara McClintock (1902–1992) developed chromosomal staining techniques to visualize and differentiate between the different chromosomes of maize (corn). In the 1940s and 1950s, she identified a breakage event on chromosome 9, which she named the dissociation locus ( Ds ). Ds could change position within the chromosome. She also identified an activator locus ( Ac ). Ds chromosome breakage could be activated by an Ac element (transposase enzyme). At first, McClintock’s finding of these jumping genes , which we now call transposons , was not accepted by the scientific community. It wasn’t until the 1960s and later that transposons were discovered in bacteriophages, bacteria, and Drosophila . Today, we know that transposons are mobile segments of DNA that can move within the genome of an organism. They can regulate gene expression, protein expression, and virulence (ability to cause disease). Microbes and Viruses in Genetic Research Microbiologists have also played a crucial part in our understanding of genetics. Experimental organisms such as Mendel ’s garden peas, Morgan’s fruit flies, and McClintock ’s corn had already been used successfully to pave the way for an understanding of genetics. However, microbes and viruses were (and still are) excellent model systems for the study of genetics because, unlike peas, fruit flies, and corn, they are propagated more easily in the laboratory, growing to high population densities in a small amount of space and in a short time. In addition, because of their structural simplicity, microbes and viruses are more readily manipulated genetically. Fortunately, despite significant differences in size, structure, reproduction strategies, and other biological characteristics, there is biochemical unity among all organisms; they have in common the same underlying molecules responsible for heredity and the use of genetic material to give cells their varying characteristics. In the words of French scientist Jacques Monod , “What is true for E. coli is also true for the elephant,” meaning that the biochemistry of life has been maintained throughout evolution and is shared in all forms of life, from simple unicellular organisms to large, complex organisms. This biochemical continuity makes microbes excellent models to use for genetic studies. In a clever set of experiments in the 1930s and 1940s, German scientist Joachim Hämmerling (1901–1980), using the single-celled alga Acetabularia as a microbial model, established that the genetic information in a eukaryotic cell is housed within the nucleus . Acetabularia spp. are unusually large algal cells that grow asymmetrically, forming a “foot” containing the nucleus, which is used for substrate attachment; a stalk; and an umbrella-like cap—structures that can all be easily seen with the naked eye. 
In an early set of experiments, Hämmerling removed either the cap or the foot of the cells and observed whether new caps or feet were regenerated ( Figure 10.3 ). He found that when the foot of these cells was removed, new feet did not grow; however, when caps were removed from the cells, new caps were regenerated. This suggested that the hereditary information was located in the nucleus-containing foot of each cell. In another set of experiments, Hämmerling used two species of Acetabularia that have different cap morphologies, A. crenulata and A. mediterranea ( Figure 10.4 ). He cut the caps from both types of cells and then grafted the stalk from an A. crenulata onto an A. mediterranea foot, and vice versa. Over time, he observed that the grafted cell with the A. crenulata foot and A. mediterranea stalk developed a cap with the A. crenulata morphology. Conversely, the grafted cell with the A. mediterranea foot and A. crenulata stalk developed a cap with the A. mediterranea morphology. He microscopically confirmed the presence of nuclei in the feet of these cells and attributed the development of these cap morphologies to the nucleus of each grafted cell. Thus, he showed experimentally that the nucleus was the location of genetic material that dictated a cell’s properties. Another microbial model, the red bread mold Neurospora crassa , was used by George Beadle and Edward Tatum to demonstrate the relationship between genes and the proteins they encode. Beadle had worked with fruit flies in Morgan ’s laboratory but found them too complex to perform certain types of experiments. N. crassa , on the other hand, is a simpler organism and has the ability to grow on a minimal medium because it contains enzymatic pathways that allow it to use the medium to produce its own vitamins and amino acids. Beadle and Tatum irradiated the mold with X-rays to induce changes to a sequence of nucleic acids, called mutations . They mated the irradiated mold spores and attempted to grow them on both a complete medium and a minimal medium. They looked for mutants that grew on a complete medium, supplemented with vitamins and amino acids, but did not grow on the minimal medium lacking these supplements. Such molds theoretically contained mutations in the genes that encoded biosynthetic pathways. Upon finding such mutants, they systematically tested each to determine which vitamin or amino acid it was unable to produce ( Figure 10.5 ) and published this work in 1941. 2 2 G.W. Beadle, E.L. Tatum. “Genetic Control of Biochemical Reactions in Neurospora.” Proceedings of the National Academy of Sciences 27 no. 11 (1941):499–506. Subsequent work by Beadle, Tatum, and colleagues showed that they could isolate different classes of mutants that required a particular supplement, like the amino acid arginine ( Figure 10.6 ). With some knowledge of the arginine biosynthesis pathway, they identified three classes of arginine mutants by supplementing the minimal medium with intermediates (citrulline or ornithine) in the pathway. The three mutants differed in their abilities to grow in each of the media, which led the group of scientists to propose, in 1945, that each type of mutant had a defect in a different gene in the arginine biosynthesis pathway. This led to the so-called one gene–one enzyme hypothesis , which suggested that each gene encodes one enzyme. Subsequent knowledge about the processes of transcription and translation led scientists to revise this to the “one gene–one polypeptide” hypothesis. 
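The inference behind those three mutant classes is simple enough to state as code: a mutant blocked at one step of the pathway grows only when the medium supplies a compound downstream of its block. The sketch below is a schematic reconstruction of that logic, not Beadle and Tatum’s actual data; it assumes the simplified pathway order precursor, then ornithine, then citrulline, then arginine.

```python
# Schematic of Beadle & Tatum's reasoning (simplified pathway order
# assumed: precursor -> ornithine -> citrulline -> arginine). A mutant
# whose broken enzyme acts at step i grows only on supplements that
# enter the pathway after that step.
PATHWAY = ["precursor", "ornithine", "citrulline", "arginine"]

def grows(blocked_step, supplement):
    """True if the supplement lies downstream of the blocked step."""
    return PATHWAY.index(supplement) > blocked_step

for blocked in range(3):  # three mutant classes, one per enzyme
    profile = {s: grows(blocked, s) for s in PATHWAY[1:]}
    print(f"Class blocked at step {blocked + 1}: {profile}")
# Each class shows a distinct growth profile, which is how the mutants
# were assigned to defects in different genes of the pathway.
```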
Although there are some genes that do not encode polypeptides (but rather encode transfer RNAs [tRNAs] or ribosomal RNAs [rRNAs], which we will discuss later), the one gene–one enzyme hypothesis is true in many cases, especially in microbes. Beadle and Tatum’s discovery of the link between genes and corresponding characteristics earned them the 1958 Nobel Prize in Physiology or Medicine and has since become the basis for modern molecular genetics. Link to Learning To learn more about the experiments of Beadle and Tatum, visit this website from the DNA Learning Center. Check Your Understanding What organism did Morgan and his colleagues use to develop the Chromosomal Theory of Inheritance? What traits did they track? What did Hämmerling prove with his experiments on Acetabularia? DNA as the Molecule Responsible for Heredity By the beginning of the 20th century, a great deal of work had already been done on characterizing DNA and establishing the foundations of genetics, including attributing heredity to chromosomes found within the nucleus. Despite all of this research, it was not until well into the 20th century that these lines of research converged and scientists began to consider that DNA could be the genetic material that offspring inherited from their parents. DNA, containing only four different nucleotides, was thought to be structurally too simple to encode such complex genetic information. Instead, protein was thought to have the complexity required to serve as cellular genetic information because it is composed of 20 different amino acids that could be combined in a huge variety of combinations. Microbiologists played a pivotal role in the research that determined that DNA is the molecule responsible for heredity.
Griffith concluded that something had passed from the heat-killed S strain into the live R strain and "transformed" it into the pathogenic S strain; he called this the "transforming principle." These experiments are now famously known as Griffith's transformation experiments. In 1944, Oswald Avery, Colin MacLeod, and Maclyn McCarty were interested in exploring Griffith's transforming principle further. They isolated the S strain from infected dead mice, heat-killed it, and inactivated various components of the S extract, conducting a systematic elimination study (Figure 10.8). They used enzymes that specifically degraded proteins, RNA, and DNA and mixed the S extract with each of these individual enzymes. Then, they tested each extract/enzyme combination's resulting ability to transform the R strain, as observed by the diffuse growth of the S strain in culture media and confirmed visually by growth on plates. They found that when DNA was degraded, the resulting mixture was no longer able to transform the R strain bacteria, whereas no other enzymatic treatment was able to prevent transformation. This led them to conclude that DNA was the transforming principle. Despite their results, many scientists did not accept their conclusion, instead believing that there were protein contaminants within their extracts. Check Your Understanding How did Avery, MacLeod, and McCarty's experiments show that DNA was the transforming principle first described by Griffith? Hershey and Chase's Proof of DNA as Genetic Material Alfred Hershey and Martha Chase performed their own experiments in 1952 and were able to provide confirmatory evidence that DNA, not protein, was the genetic material (Figure 10.9). 4 Hershey and Chase were studying a bacteriophage, a virus that infects bacteria. Viruses typically have a simple structure: a protein coat, called the capsid, and a nucleic acid core that contains the genetic material, either DNA or RNA (see Viruses). The particular bacteriophage they were studying was the T2 bacteriophage, which infects E. coli cells. As we now know, T2 attaches to the surface of the bacterial cell and then injects its nucleic acid into the cell. The phage DNA makes multiple copies of itself using the host machinery, and eventually the host cell bursts, releasing a large number of bacteriophages. 4 A.D. Hershey, M. Chase. "Independent Functions of Viral Protein and Nucleic Acid in Growth of Bacteriophage." Journal of General Physiology 36 no. 1 (1952):39–56. Hershey and Chase labeled the protein coat in one batch of phage using radioactive sulfur, 35 S, because sulfur is found in the amino acids methionine and cysteine but not in nucleic acids. They labeled the DNA in another batch using radioactive phosphorus, 32 P, because phosphorus is found in DNA and RNA but not typically in protein. Each batch of phage was allowed to infect cells separately. After infection, Hershey and Chase put each phage–bacteria suspension in a blender, which detached the phage coats from the host cells, and spun the resulting suspension down in a centrifuge. The heavier bacterial cells settled out and formed a pellet, whereas the lighter phage particles stayed in the supernatant. In the tube with the protein-labeled phage, the radioactivity remained only in the supernatant. In the tube with the DNA-labeled phage, the radioactivity was detected only in the bacterial cells.
Hershey and Chase concluded that it was the phage DNA injected into the cell that carried the information to produce more phage particles, thus demonstrating that DNA, not protein, is the genetic material. As a result of their work, the scientific community more broadly accepted DNA as the molecule responsible for heredity. By the time Hershey and Chase published their experiment in the early 1950s, microbiologists and other scientists had been researching heredity for over 80 years. Building on one another's research during that time, these scientists reached the general agreement that DNA was the genetic material responsible for heredity (Figure 10.10). This knowledge set the stage for the age of molecular biology to come and the significant advancements in biotechnology and systems biology that we are experiencing today. Link to Learning To learn more about the experiments involved in the history of genetics and the discovery of DNA as the genetic material of cells, visit this website from the DNA Learning Center. Check Your Understanding How did Hershey and Chase use microbes to prove that DNA is genetic material? 10.2 Structure and Function of DNA Learning Objectives Describe the biochemical structure of deoxyribonucleotides Identify the base pairs used in the synthesis of deoxyribonucleotides Explain why the double helix of DNA is described as antiparallel In Microbial Metabolism, we discussed the microbial catabolism of three classes of macromolecules: proteins, lipids, and carbohydrates. In this chapter, we will discuss the genetic role of a fourth class of molecules: nucleic acids. Like other macromolecules, nucleic acids are composed of monomers, called nucleotides, which are polymerized to form large strands. Each nucleic acid strand contains certain nucleotides that appear in a certain order within the strand, called its base sequence. The base sequence of deoxyribonucleic acid (DNA) is responsible for carrying and retaining the hereditary information in a cell. In Mechanisms of Microbial Genetics, we will discuss in detail the ways in which DNA uses its own base sequence to direct its own synthesis, as well as the synthesis of RNA and proteins, which, in turn, gives rise to products with diverse structure and function. In this section, we will discuss the basic structure and function of DNA. DNA Nucleotides The building blocks of nucleic acids are nucleotides. Nucleotides that compose DNA are called deoxyribonucleotides. The three components of a deoxyribonucleotide are a five-carbon sugar called deoxyribose, a phosphate group, and a nitrogenous base, a nitrogen-containing ring structure that is responsible for complementary base pairing between nucleic acid strands (Figure 10.11). The carbon atoms of the five-carbon deoxyribose are numbered 1ʹ, 2ʹ, 3ʹ, 4ʹ, and 5ʹ (1ʹ is read as "one prime"). A nucleoside comprises the five-carbon sugar and nitrogenous base. Each deoxyribonucleotide is named according to its nitrogenous base (Figure 10.12). The nitrogenous bases adenine (A) and guanine (G) are the purines; they have a double-ring structure, with a six-membered ring fused to a five-membered ring. The pyrimidines, cytosine (C) and thymine (T), are smaller nitrogenous bases that have only a single six-membered ring structure.
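As a quick self-check on this classification, here is a minimal Python sketch (not from the text; the function name is invented) that maps each DNA base letter to its name and ring class:

```python
# Minimal sketch mapping each DNA base to its name and ring class,
# as described above. Purines have a fused double-ring structure;
# pyrimidines have a single six-membered ring.
PURINES = {"A": "adenine", "G": "guanine"}
PYRIMIDINES = {"C": "cytosine", "T": "thymine"}

def classify_base(letter):
    if letter in PURINES:
        return f"{PURINES[letter]} (purine, fused double ring)"
    if letter in PYRIMIDINES:
        return f"{PYRIMIDINES[letter]} (pyrimidine, single six-membered ring)"
    raise ValueError(f"not a DNA base: {letter!r}")

for base in "AGCT":
    print(base, "=", classify_base(base))
```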
Individual nucleoside triphosphates combine with each other by covalent bonds known as 5ʹ-3ʹ phosphodiester bonds, or linkages, whereby the phosphate group attached to the 5ʹ carbon of the sugar of one nucleotide bonds to the hydroxyl group of the 3ʹ carbon of the sugar of the next nucleotide. Phosphodiester bonding between nucleotides forms the sugar-phosphate backbone, the alternating sugar-phosphate structure composing the framework of a nucleic acid strand (Figure 10.13). During the polymerization process, deoxynucleotide triphosphates (dNTPs) are used. To construct the sugar-phosphate backbone, the two terminal phosphate groups are released from the dNTP as pyrophosphate during phosphodiester bond formation; the pyrophosphate is subsequently hydrolyzed, releasing the energy used to drive nucleotide polymerization. The resulting strand of nucleic acid has a free phosphate group at the 5ʹ carbon end and a free hydroxyl group at the 3ʹ carbon end. Check Your Understanding What is meant by the 5ʹ and 3ʹ ends of a nucleic acid strand? Discovering the Double Helix By the early 1950s, considerable evidence had accumulated indicating that DNA was the genetic material of cells, and the race was on to discover its three-dimensional structure. Around this time, Austrian biochemist Erwin Chargaff 5 (1905–2002) examined the content of DNA in different species and discovered that adenine, thymine, guanine, and cytosine were not found in equal quantities, and that their relative proportions varied from species to species but not between individuals of the same species. He found that the amount of adenine was very close to equaling the amount of thymine, and the amount of cytosine was very close to equaling the amount of guanine, or A = T and G = C. These relationships are known as Chargaff's rules. 5 N. Kresge et al. "Chargaff's Rules: The Work of Erwin Chargaff." Journal of Biological Chemistry 280 (2005):e21. Other scientists were also actively exploring this field during the mid-20th century. In 1952, American scientist Linus Pauling (1901–1994) was the world's leading structural chemist and the odds-on favorite to solve the structure of DNA. Pauling had earlier discovered the structure of protein α helices using X-ray diffraction, and, based upon X-ray diffraction images of DNA made in his laboratory, he proposed a triple-stranded model of DNA. 6 At the same time, British researchers Rosalind Franklin (1920–1958) and her graduate student R.G. Gosling were also using X-ray diffraction to understand the structure of DNA (Figure 10.14). It was Franklin's scientific expertise that resulted in the production of better-defined X-ray diffraction images of DNA that would clearly show the overall double-helix structure of DNA. 6 L. Pauling, "A Proposed Structure for the Nucleic Acids." Proceedings of the National Academy of Sciences of the United States of America 39 no. 2 (1953):84–97. James Watson (1928–), an American scientist, and Francis Crick (1916–2004), a British scientist, were working together in the 1950s to discover DNA's structure. They used Chargaff's rules and Franklin and Wilkins' X-ray diffraction images of DNA fibers to piece together the purine-pyrimidine pairing of the double helical DNA molecule (Figure 10.15). In April 1953, Watson and Crick published their model of the DNA double helix in Nature.
7 The same issue additionally included papers by Wilkins and colleagues, 8 as well as by Franklin and Gosling, 9 each describing different aspects of the molecular structure of DNA. In 1962, James Watson, Francis Crick, and Maurice Wilkins were awarded the Nobel Prize in Physiology or Medicine. Unfortunately, by then Franklin had died, and Nobel Prizes at the time were not awarded posthumously. Work continued, however, on learning about the structure of DNA. In 1973, Alexander Rich (1924–2015) and colleagues were able to analyze DNA crystals to confirm and further elucidate DNA structure. 10 7 J.D. Watson, F.H.C. Crick. "A Structure for Deoxyribose Nucleic Acid." Nature 171 no. 4356 (1953):737–738. 8 M.H.F. Wilkins et al. "Molecular Structure of Deoxypentose Nucleic Acids." Nature 171 no. 4356 (1953):738–740. 9 R. Franklin, R.G. Gosling. "Molecular Configuration in Sodium Thymonucleate." Nature 171 no. 4356 (1953):740–741. 10 R.O. Day et al. "A Crystalline Fragment of the Double Helix: The Structure of the Dinucleoside Phosphate Guanylyl-3',5'-Cytidine." Proceedings of the National Academy of Sciences of the United States of America 70 no. 3 (1973):849–853. Check Your Understanding Which scientists are given most of the credit for describing the molecular structure of DNA? DNA Structure Watson and Crick proposed that DNA is made up of two strands that are twisted around each other to form a right-handed helix. The two DNA strands are antiparallel, such that the 3ʹ end of one strand faces the 5ʹ end of the other (Figure 10.16). The 3ʹ end of each strand has a free hydroxyl group, while the 5ʹ end of each strand has a free phosphate group. The sugar and phosphate of the polymerized nucleotides form the backbone of the structure, whereas the nitrogenous bases are stacked inside. These nitrogenous bases on the interior of the molecule interact with each other through base pairing. Analysis of the diffraction patterns of DNA has determined that there are approximately 10 base pairs per turn in DNA. The asymmetrical spacing of the sugar-phosphate backbones generates major grooves (where the backbones are far apart) and minor grooves (where the backbones are close together) (Figure 10.16). These grooves are locations where proteins can bind to DNA. The binding of these proteins can alter the structure of DNA, regulate replication, or regulate transcription of DNA into RNA. Base pairing takes place between a purine and a pyrimidine. In DNA, adenine (A) and thymine (T) are complementary base pairs, and cytosine (C) and guanine (G) are also complementary base pairs, explaining Chargaff's rules (Figure 10.17). The base pairs are stabilized by hydrogen bonds; adenine and thymine form two hydrogen bonds between them, whereas cytosine and guanine form three hydrogen bonds between them. In the laboratory, exposing the two DNA strands of the double helix to high temperatures or to certain chemicals can break the hydrogen bonds between complementary bases, thus separating the strands into two single strands of DNA (single-stranded DNA [ssDNA]). This process is called DNA denaturation and is analogous to protein denaturation, as described in Proteins. The ssDNA strands can also be put back together as double-stranded DNA (dsDNA) through reannealing or renaturing: cooling or removing the chemical denaturants allows these hydrogen bonds to re-form. The ability to artificially manipulate DNA in this way is the basis for several important techniques in biotechnology (Figure 10.18).
Because of the additional hydrogen bond between cytosine and guanine, DNA with a high GC content is more difficult to denature than DNA with a lower GC content. Link to Learning View an animation on DNA structure from the DNA Learning Center to learn more. Check Your Understanding What are the two complementary base pairs of DNA and how are they bonded together? DNA Function DNA stores the information needed to build and control the cell. The transmission of this information from mother to daughter cells is called vertical gene transfer, and it occurs through the process of DNA replication. DNA is replicated when a cell makes a duplicate copy of its DNA; the cell then divides, resulting in the correct distribution of one DNA copy to each resulting cell. DNA can also be enzymatically degraded and used as a source of nucleosides and nucleotides for the cell. Unlike other macromolecules, DNA does not serve a structural role in cells. Check Your Understanding How does DNA transmit genetic information to offspring? Eye on Ethics Paving the Way for Women in Science and Health Professions Historically, women have been underrepresented in the sciences and in medicine, and often their pioneering contributions have gone relatively unnoticed. For example, although Rosalind Franklin performed the X-ray diffraction studies demonstrating the double helical structure of DNA, it is Watson and Crick who became famous for this discovery, building on her data. There still remains great controversy over whether their acquisition of her data was appropriate and whether personality conflicts and gender bias contributed to the delayed recognition of her significant contributions. Similarly, Barbara McClintock did pioneering work in maize (corn) genetics from the 1930s through 1950s, discovering transposons (jumping genes), but she was not recognized until much later, receiving the Nobel Prize in Physiology or Medicine in 1983 (Figure 10.19). Today, women still remain underrepresented in many fields of science and medicine. While more than half of the undergraduate degrees in science are awarded to women, only 46% of doctoral degrees in science are awarded to women. In academia, the number of women at each level of career advancement continues to decrease, with women holding less than one-third of tenure-track positions for Ph.D.-level scientists and less than one-quarter of the full professorships at 4-year colleges and universities. 11 Even in the health professions, as in nearly all other fields, women are often underrepresented in many medical careers and earn significantly less than their male counterparts, as shown in a 2013 study published in the Journal of the American Medical Association Internal Medicine. 12 11 N.H. Wolfinger. "For Female Scientists, There's No Good Time to Have Children." The Atlantic July 29, 2013. http://www.theatlantic.com/sexes/archive/2013/07/for-female-scientists-theres-no-good-time-to-have-children/278165/. 12 S.A. Seabury et al. "Trends in the Earnings of Male and Female Health Care Professionals in the United States, 1987 to 2010." Journal of the American Medical Association Internal Medicine 173 no. 18 (2013):1748–1750. Why do such disparities continue to exist, and how do we break these cycles? The situation is complex and likely results from the combination of various factors, including how society conditions the behaviors of girls from a young age and supports their interests, both professionally and personally.
Some have suggested that women do not belong in the laboratory, including Nobel Prize winner Tim Hunt, whose 2015 public comments suggesting that women are too emotional for science 13 were met with widespread condemnation. 13 E. Chung. "Tim Hunt, Sexism and Science: The Real 'Trouble With Girls' in Labs." CBC News Technology and Science, June 12, 2015. http://www.cbc.ca/news/technology/tim-hunt-sexism-and-science-the-real-trouble-with-girls-in-labs-1.3110133. Accessed 8/4/2016. Perhaps girls should be supported more from a young age in the areas of science and math (Figure 10.19). Science, technology, engineering, and mathematics (STEM) programs sponsored by the American Association of University Women (AAUW) 14 and the National Aeronautics and Space Administration (NASA) 15 are excellent examples of programs that offer such support. Contributions by women in science should be made known more widely to the public, and marketing targeted to young girls should include more images of historically and professionally successful female scientists and medical professionals, encouraging all bright young minds, including girls and women, to pursue careers in science and medicine. 14 American Association of University Women. "Building a STEM Pipeline for Girls and Women." http://www.aauw.org/what-we-do/stem-education/. Accessed June 10, 2016. 15 National Aeronautics and Space Administration. "Outreach Programs: Women and Girls Initiative." http://women.nasa.gov/outreach-programs/. Accessed June 10, 2016. Clinical Focus Part 2 Based upon his symptoms, Alex's physician suspects that he is suffering from a foodborne illness that he acquired during his travels. Possibilities include bacterial infection (e.g., enterotoxigenic E. coli, Vibrio cholerae, Campylobacter jejuni, Salmonella), viral infection (rotavirus or norovirus), or protozoan infection (Giardia lamblia, Cryptosporidium parvum, or Entamoeba histolytica). His physician orders a stool sample to identify possible causative agents (e.g., bacteria, cysts) and to look for the presence of blood, because certain types of infectious agents (like C. jejuni, Salmonella, and E. histolytica) are associated with the production of bloody stools. Alex's stool sample showed neither blood nor cysts. Following analysis of his stool sample and based upon his recent travel history, the hospital physician suspected that Alex was suffering from traveler's diarrhea caused by enterotoxigenic E. coli (ETEC), the most common cause of traveler's diarrhea. To verify the diagnosis and rule out other possibilities, Alex's physician ordered a diagnostic lab test of his stool sample to look for DNA sequences encoding specific virulence factors of ETEC. The physician instructed Alex to drink plenty of fluids to replace what he was losing and discharged him from the hospital. ETEC produces several plasmid-encoded virulence factors that make it pathogenic compared with typical E. coli. These include the secreted toxins heat-labile enterotoxin (LT) and heat-stable enterotoxin (ST), as well as colonization factor (CF). Both LT and ST cause the excretion of chloride ions from intestinal cells into the intestinal lumen, causing a consequent loss of water from intestinal cells and resulting in diarrhea. CF is a bacterial protein that helps the bacterium adhere to the lining of the small intestine. Why did Alex's physician use genetic analysis instead of either isolation of bacteria from the stool sample or direct Gram stain of the stool sample alone?
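To make the idea of a sequence-based diagnostic concrete, here is a toy Python sketch. The marker strings below are invented placeholders, not real ETEC gene sequences, and a clinical assay (e.g., PCR) is far more involved; the point is only that detection reduces to finding gene-specific sequences in DNA from the sample.

```python
# Toy model of a DNA-based diagnostic: scan sample DNA for virulence-gene
# marker sequences. The markers are invented placeholders, not real ETEC
# sequences; real assays amplify targets by PCR rather than string search.
MARKERS = {
    "LT": "ATGCCGTAC",   # placeholder for a heat-labile enterotoxin marker
    "ST": "GGATTACAG",   # placeholder for a heat-stable enterotoxin marker
    "CF": "TTCGGCATT",   # placeholder for a colonization factor marker
}

def screen_sample(sample_dna):
    """Return the names of all markers found in the sample sequence."""
    return [name for name, marker in MARKERS.items() if marker in sample_dna]

sample = "CCATGCCGTACAAGGATTACAGTTTCGGCATTAA"  # made-up sequencing read
print(screen_sample(sample))  # ['LT', 'ST', 'CF'] -> consistent with ETEC
```

Note that this approach detects the genes themselves regardless of whether the bacteria can be cultured or distinguished by Gram stain, which hints at the answer to the question above.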
10.3 Structure and Function of RNA Learning Objectives Describe the biochemical structure of ribonucleotides Describe the similarities and differences between RNA and DNA Describe the functions of the three main types of RNA used in protein synthesis Explain how RNA can serve as hereditary information Structurally speaking, ribonucleic acid (RNA) is quite similar to DNA. However, whereas DNA molecules are typically long and double stranded, RNA molecules are much shorter and are typically single stranded. RNA molecules perform a variety of roles in the cell but are mainly involved in the process of protein synthesis (translation) and its regulation. RNA Structure RNA is typically single stranded and is made of ribonucleotides that are linked by phosphodiester bonds. A ribonucleotide in the RNA chain contains ribose (the pentose sugar), one of the four nitrogenous bases (A, U, G, and C), and a phosphate group. The subtle structural difference between the sugars (ribose carries a hydroxyl group at the 2ʹ carbon, whereas deoxyribose has only a hydrogen atom at that position) gives DNA added stability, making DNA more suitable for the storage of genetic information, whereas the relative instability of RNA makes it more suitable for its more short-term functions. The RNA-specific pyrimidine uracil forms a complementary base pair with adenine and is used instead of the thymine used in DNA. Even though RNA is single stranded, most types of RNA molecules show extensive intramolecular base pairing between complementary sequences within the RNA strand, creating a predictable three-dimensional structure essential for their function (Figure 10.20 and Figure 10.21). Check Your Understanding How does the structure of RNA differ from the structure of DNA? Functions of RNA in Protein Synthesis Cells access the information stored in DNA by creating RNA to direct the synthesis of proteins through the process of translation. Proteins within a cell have many functions, including building cellular structures and serving as enzyme catalysts for cellular chemical reactions that give cells their specific characteristics. The three main types of RNA directly involved in protein synthesis are messenger RNA (mRNA), ribosomal RNA (rRNA), and transfer RNA (tRNA). In 1961, French scientists François Jacob and Jacques Monod hypothesized the existence of an intermediary between DNA and its protein products, which they called messenger RNA. 16 Evidence supporting their hypothesis was gathered soon afterward, showing that information from DNA is transmitted to the ribosome for protein synthesis using mRNA. If DNA serves as the complete library of cellular information, mRNA serves as a photocopy of the specific information needed at a particular point in time, providing the instructions to make a protein. 16 A. Rich. "The Era of RNA Awakening: Structural Biology of RNA in the Early Years." Quarterly Reviews of Biophysics 42 no. 2 (2009):117–137. The mRNA carries the message from the DNA, which controls all of the activities of the cell. If a cell requires a certain protein to be synthesized, the gene for this product is "turned on" and the mRNA is synthesized through the process of transcription (see RNA Transcription). The mRNA then interacts with ribosomes and other cellular machinery (Figure 10.22) to direct the synthesis of the protein it encodes during the process of translation (see Protein Synthesis).
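The mRNA "photocopy" idea can be expressed in a few lines of code. This is a simplified sketch (the function name is invented, and real transcription involves RNA polymerase, promoters, and termination signals): the mRNA is built as the complement of the template DNA strand, with uracil used in place of thymine.

```python
# Simplified sketch of the base-pairing logic of transcription: mRNA is
# the complement of the template DNA strand, with uracil (U) replacing
# thymine. Real transcription involves RNA polymerase, promoters, and
# termination signals not modeled here.
TEMPLATE_TO_MRNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template):
    """Given a template DNA strand read 3'->5', return the mRNA 5'->3'."""
    return "".join(TEMPLATE_TO_MRNA[base] for base in template)

print(transcribe("TACGGTCAT"))  # -> AUGCCAGUA
```

The antiparallel convention matters here: reading the template 3ʹ to 5ʹ yields an mRNA written 5ʹ to 3ʹ, matching the directionality described in the DNA Structure section.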
mRNA is relatively unstable and short-lived in the cell, especially in prokaryotic cells, ensuring that proteins are only made when needed. rRNA and tRNA are stable types of RNA. In prokaryotes and eukaryotes, tRNA and rRNA are encoded in the DNA, then copied into long RNA molecules that are cut to release smaller fragments containing the individual mature RNA species. In eukaryotes, synthesis, cutting, and assembly of rRNA into ribosomes takes place in the nucleolus region of the nucleus, but these activities occur in the cytoplasm of prokaryotes. Neither of these types of RNA carries instructions to direct the synthesis of a polypeptide, but they play other important roles in protein synthesis. Ribosomes are composed of rRNA and protein. As its name suggests, rRNA is a major constituent of ribosomes, composing up to about 60% of the ribosome by mass and providing the location where the mRNA binds. The rRNA ensures the proper alignment of the mRNA, tRNA, and the ribosomes; the rRNA of the ribosome also has an enzymatic activity (peptidyl transferase) and catalyzes the formation of the peptide bonds between two aligned amino acids during protein synthesis. Although rRNA had long been thought to serve primarily a structural role, its catalytic role within the ribosome was proven in 2000. 17 Scientists in the laboratories of Thomas Steitz (1940–) and Peter Moore (1939–) at Yale University were able to crystallize the ribosome structure from Haloarcula marismortui, a halophilic archaeon isolated from the Dead Sea. Because of the importance of this work, Steitz shared the 2009 Nobel Prize in Chemistry with other scientists who made significant contributions to the understanding of ribosome structure. 17 P. Nissen et al. "The Structural Basis of Ribosome Activity in Peptide Bond Synthesis." Science 289 no. 5481 (2000):920–930. Transfer RNA is the third main type of RNA and one of the smallest, usually only 70–90 nucleotides long. It carries the correct amino acid to the site of protein synthesis in the ribosome. It is the base pairing between the tRNA and mRNA that allows for the correct amino acid to be inserted in the polypeptide chain being synthesized (Figure 10.23). Any mutations in the tRNA or rRNA can result in global problems for the cell because both are necessary for proper protein synthesis (Table 10.1).

Table 10.1 Structure and Function of RNA
mRNA. Structure: short, unstable, single-stranded RNA corresponding to a gene encoded within DNA. Function: serves as intermediary between DNA and protein; used by the ribosome to direct synthesis of the protein it encodes.
rRNA. Structure: longer, stable RNA molecules composing 60% of the ribosome's mass. Function: ensures the proper alignment of the mRNA, tRNA, and ribosome during protein synthesis; catalyzes peptide bond formation between amino acids.
tRNA. Structure: short (70–90 nucleotides), stable RNA with extensive intramolecular base pairing; contains an amino acid binding site and an mRNA binding site. Function: carries the correct amino acid to the site of protein synthesis in the ribosome.

Check Your Understanding What are the functions of the three major types of RNA molecules involved in protein synthesis? RNA as Hereditary Information Although RNA does not serve as the hereditary information in most cells, RNA does hold this function for many viruses that do not contain DNA. Thus, RNA clearly does have the additional capacity to serve as genetic information. Although RNA is typically single stranded within cells, there is significant diversity in viruses.
Rhinoviruses, which cause the common cold; influenza viruses; and the Ebola virus are single-stranded RNA viruses. Rotaviruses, which cause severe gastroenteritis in children and in immunocompromised individuals, are examples of double-stranded RNA viruses. Because double-stranded RNA is uncommon in eukaryotic cells, its presence serves as an indicator of viral infection. The implications for a virus having an RNA genome instead of a DNA genome are discussed in more detail in Viruses. 10.4 Structure and Function of Cellular Genomes Learning Objectives Define gene and genotype and differentiate genotype from phenotype Describe chromosome structure and packaging Compare prokaryotic and eukaryotic chromosomes Explain why extrachromosomal DNA is important in a cell Thus far, we have discussed the structure and function of individual pieces of DNA and RNA. In this section, we will discuss how all of an organism's genetic material, collectively referred to as its genome, is organized inside of the cell. Since an organism's genetics to a large extent dictate its characteristics, it should not be surprising that organisms differ in the arrangement of their DNA and RNA. Genotype versus Phenotype All cellular activities are encoded within a cell's DNA. The sequence of bases within a DNA molecule represents the genetic information of the cell. Segments of DNA molecules are called genes, and individual genes contain the instructional code necessary for synthesizing various proteins, enzymes, or stable RNA molecules. The full collection of genes that a cell contains within its genome is called its genotype. However, a cell does not express all of its genes simultaneously. Instead, it turns on (expresses) or turns off certain genes when necessary. The set of genes being expressed at any given point in time determines the cell's activities and its observable characteristics, referred to as its phenotype. Genes that are always expressed are known as constitutive genes; some constitutive genes are known as housekeeping genes because they are necessary for the basic functions of the cell. While the genotype of a cell remains constant, the phenotype may change in response to environmental signals (e.g., changes in temperature or nutrient availability) that affect which nonconstitutive genes are expressed. For example, the oral bacterium Streptococcus mutans produces a sticky slime layer that allows it to adhere to teeth, forming dental plaque; however, the genes that control the production of the slime layer are only expressed in the presence of sucrose (table sugar). Thus, while the genotype of S. mutans is constant, its phenotype changes depending on the presence or absence of sucrose in its environment. Temperature can also regulate gene expression. For example, the gram-negative bacterium Serratia marcescens, a pathogen frequently associated with hospital-acquired infections, produces a red pigment at 28 °C but not at 37 °C, the normal internal temperature of the human body (Figure 10.24). Organization of Genetic Material The vast majority of an organism's genome is organized into the cell's chromosomes, which are discrete DNA structures within cells that control cellular activity. Recall that while eukaryotic chromosomes are housed in the membrane-bound nucleus, most prokaryotes contain a single, circular chromosome that is found in an area of the cytoplasm called the nucleoid (see Unique Characteristics of Prokaryotic Cells). A chromosome may contain several thousand genes.
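The genotype/phenotype distinction described above lends itself to a tiny model. The sketch below is hypothetical (the 30 °C cutoff is an invented simplification of the 28 °C versus 37 °C observation): the gene content never changes, but the observable trait depends on an environmental input.

```python
# Hypothetical sketch of genotype vs. phenotype, loosely modeled on the
# Serratia marcescens example above: gene content is constant, but the
# expressed trait depends on temperature. The 30 °C cutoff is an invented
# simplification of the 28 °C vs. 37 °C observation.
GENOTYPE = {"pigment_genes_present": True}  # fixed for this organism

def phenotype(temperature_c):
    expressed = GENOTYPE["pigment_genes_present"] and temperature_c < 30
    return "red colonies" if expressed else "unpigmented colonies"

print(phenotype(28))  # red colonies
print(phenotype(37))  # unpigmented colonies
```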
Organization of Eukaryotic Chromosomes Chromosome structure differs somewhat between eukaryotic and prokaryotic cells. Eukaryotic chromosomes are typically linear, and eukaryotic cells contain multiple distinct chromosomes. Many eukaryotic cells contain two copies of each chromosome and, therefore, are diploid. The length of a chromosome greatly exceeds the length of the cell, so a chromosome needs to be packaged into a very small space to fit within the cell. For example, the combined length of all of the 3 billion base pairs 18 of DNA of the human genome would measure approximately 2 meters if completely stretched out, and some eukaryotic genomes are many times larger than the human genome. DNA supercoiling refers to the process by which DNA is twisted to fit inside the cell. Supercoiling may result in DNA that is either underwound (less than one turn of the helix per 10 base pairs) or overwound (more than one turn per 10 base pairs) relative to its normal relaxed state. Proteins known to be involved in supercoiling include topoisomerases; these enzymes help maintain the structure of supercoiled chromosomes, preventing overwinding of DNA during certain cellular processes like DNA replication. 18 National Human Genome Research Institute. "The Human Genome Project Completion: Frequently Asked Questions." https://www.genome.gov/11006943. Accessed June 10, 2016. During DNA packaging, DNA-binding proteins called histones perform various levels of DNA wrapping and attachment to scaffolding proteins. The combination of DNA with these attached proteins is referred to as chromatin. In eukaryotes, the packaging of DNA by histones may be influenced by environmental factors that affect the presence of methyl groups on certain cytosine nucleotides of DNA. The influence of environmental factors on DNA packaging is called epigenetics. Epigenetics is another mechanism for regulating gene expression without altering the sequence of nucleotides. Epigenetic changes can be maintained through multiple rounds of cell division and, therefore, can be heritable. Link to Learning View this animation from the DNA Learning Center to learn more about DNA packaging in eukaryotes. Organization of Prokaryotic Chromosomes Chromosomes in bacteria and archaea are usually circular, and a prokaryotic cell typically contains only a single chromosome within the nucleoid. Because the chromosome contains only one copy of each gene, prokaryotes are haploid. As in eukaryotic cells, DNA supercoiling is necessary for the genome to fit within the prokaryotic cell. The DNA in the bacterial chromosome is arranged in several supercoiled domains. As with eukaryotes, topoisomerases are involved in supercoiling DNA. DNA gyrase is a type of topoisomerase, found in bacteria and some archaea, that helps prevent the overwinding of DNA. (Some antibiotics kill bacteria by targeting DNA gyrase.) In addition, histone-like proteins bind DNA and aid in DNA packaging. Other proteins bind to the origin of replication, the location in the chromosome where DNA replication initiates. Because different regions of DNA are packaged differently, some regions of chromosomal DNA are more accessible to enzymes and thus may be used more readily as templates for gene expression. Interestingly, several bacteria, including Helicobacter pylori and Shigella flexneri, have been shown to induce epigenetic changes in their hosts upon infection, leading to chromatin remodeling that may cause long-term effects on host immunity. 19 19 H. Bierne et al.
“Epigenetics and Bacterial Infections.” Cold Spring Harbor Perspectives in Medicine 2 no. 12 (2012):a010272. Check Your Understanding What is the difference between a cell’s genotype and its phenotype? How does DNA fit inside cells? Noncoding DNA In addition to genes, a genome also contains many regions of noncoding DNA that do not encode proteins or stable RNA products. Noncoding DNA is commonly found in areas prior to the start of coding sequences of genes as well as in intergenic regions (i.e., DNA sequences located between genes) ( Figure 10.25 ). Prokaryotes appear to use their genomes very efficiently, with only an average of 12% of the genome being taken up by noncoding sequences. In contrast, noncoding DNA can represent about 98% of the genome in eukaryotes, as seen in humans, but the percentage of noncoding DNA varies between species. 20 These noncoding DNA regions were once referred to as “junk DNA”; however, this terminology is no longer widely accepted because scientists have since found roles for some of these regions, many of which contribute to the regulation of transcription or translation through the production of small noncoding RNA molecules, DNA packaging , and chromosomal stability. Although scientists may not fully understand the roles of all noncoding regions of DNA, it is generally believed that they do have purposes within the cell. 20 R.J. Taft et al. “The Relationship between Non-Protein-Coding DNA and Eukaryotic Complexity.” Bioessays 29 no. 3 (2007):288–299. Check Your Understanding What is the role of noncoding DNA? Extrachromosomal DNA Although most DNA is contained within a cell’s chromosomes, many cells have additional molecules of DNA outside the chromosomes, called extrachromosomal DNA , that are also part of its genome. The genomes of eukaryotic cells would also include the chromosomes from any organelles such as mitochondria and/or chloroplasts that these cells maintain ( Figure 10.26 ). The maintenance of circular chromosomes in these organelles is a vestige of their prokaryotic origins and supports the endosymbiotic theory (see Foundations of Modern Cell Theory ). In some cases, genomes of certain DNA viruses can also be maintained independently in host cells during latent viral infection. In these cases, these viruses are another form of extrachromosomal DNA. For example, the human papillomavirus (HPV) may be maintained in infected cells in this way. Besides chromosomes, some prokaryotes also have smaller loops of DNA called plasmids that may contain one or a few genes not essential for normal growth ( Figure 3.12 ). Bacteria can exchange these plasmids with other bacteria in a process known as horizontal gene transfer ( HGT) . The exchange of genetic material on plasmids sometimes provides microbes with new genes beneficial for growth and survival under special conditions. In some cases, genes obtained from plasmids may have clinical implications, encoding virulence factors that give a microbe the ability to cause disease or make a microbe resistant to certain antibiotics. Plasmids are also used heavily in genetic engineering and biotechnology as a way to move genes from one cell to another. The role of plasmids in horizontal gene transfer and biotechnology will be discussed further in Mechanisms of Microbial Genetics and Modern Applications of Microbial Genetics . Check Your Understanding How are plasmids involved in antibiotic resistance? 
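As a thought experiment tied to the Check Your Understanding question just above, the following Python sketch (purely schematic; the gene name is a generic placeholder) models a plasmid transfer: the recipient cell's chromosome is unchanged, yet it gains whatever genes ride along on the plasmid, such as an antibiotic-resistance gene.

```python
# Schematic sketch of horizontal gene transfer by plasmid exchange. The
# gene name "resistance_gene" is a generic placeholder. The recipient's
# chromosome is untouched; its new capability comes from the plasmid.
donor_plasmid = {"resistance_gene": "inactivates an antibiotic"}

recipient = {"chromosomal_genes": ["core metabolism"], "plasmids": []}

def transfer_plasmid(cell, plasmid):
    """Model conjugation: the recipient gains a copy of the plasmid."""
    cell["plasmids"].append(dict(plasmid))
    return cell

recipient = transfer_plasmid(recipient, donor_plasmid)
is_resistant = any("resistance_gene" in p for p in recipient["plasmids"])
print(is_resistant)  # True -- the recipient can now survive the antibiotic
```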
Case in Point Lethal Plasmids Maria, a 20-year-old anthropology student from Texas, recently became ill in the African nation of Botswana, where she was conducting research as part of a study-abroad program. Maria's research was focused on traditional African methods of tanning hides for the production of leather. Over a period of three weeks, she visited a tannery daily for several hours to observe and participate in the tanning process. One day, after returning from the tannery, Maria developed a fever, chills, and a headache, along with chest pain, muscle aches, nausea, and other flu-like symptoms. Initially, she was not concerned, but when her fever spiked and she began to cough up blood, her African host family became alarmed and rushed her to the hospital, where her condition continued to worsen. After learning about her recent work at the tannery, the physician suspected that Maria had been exposed to anthrax. He ordered a chest X-ray, a blood sample, and a spinal tap, and immediately started her on a course of intravenous penicillin. Unfortunately, lab tests confirmed the physician's presumptive diagnosis. Maria's chest X-ray exhibited pleural effusion, the accumulation of fluid in the space between the pleural membranes, and a Gram stain of her blood revealed the presence of gram-positive, rod-shaped bacteria in short chains, consistent with Bacillus anthracis. Blood and bacteria were also shown to be present in her cerebrospinal fluid, indicating that the infection had progressed to meningitis. Despite supportive treatment and aggressive antibiotic therapy, Maria slipped into an unresponsive state and died three days later. Anthrax is a disease caused by the introduction of endospores from the gram-positive bacterium B. anthracis into the body. Once infected, patients typically develop meningitis, often with fatal results. In Maria's case, she inhaled the endospores while handling the hides of animals that had been infected. The genome of B. anthracis illustrates how small structural differences can lead to major differences in virulence. In 2003, the genomes of B. anthracis and Bacillus cereus, a similar but less pathogenic bacterium of the same genus, were sequenced and compared. 21 Researchers discovered that the 16S rRNA gene sequences of these bacteria are more than 99% identical, meaning that they are actually members of the same species despite their traditional classification as separate species. Although their chromosomal sequences also revealed a great deal of similarity, several virulence factors of B. anthracis were found to be encoded on two large plasmids not found in B. cereus. The plasmid pXO1 encodes a three-part toxin that suppresses the host immune system, whereas the plasmid pXO2 encodes a capsule that further protects the bacterium from the host immune system (Figure 10.27). Since B. cereus lacks these plasmids, it does not produce these virulence factors, and although it is still pathogenic, it is typically associated with mild cases of diarrhea from which the body can quickly recover. Unfortunately for Maria, the presence of these toxin-encoding plasmids in B. anthracis gives it its lethal virulence. 21 N. Ivanova et al. "Genome Sequence of Bacillus cereus and Comparative Analysis with Bacillus anthracis." Nature 423 no. 6935 (2003):87–91. What do you think would happen to the pathogenicity of B. anthracis if it lost one or both of its plasmids?
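The ">99% identical" comparison of 16S rRNA genes mentioned above boils down to simple counting once two sequences are aligned. The sketch below (function name invented; toy sequences, not real 16S data) computes percent identity for two equal-length, pre-aligned sequences; real comparisons require an alignment step first.

```python
# Minimal sketch of percent identity between two pre-aligned, equal-length
# sequences, the metric behind the >99% figure quoted for the B. anthracis
# and B. cereus 16S rRNA genes. Toy sequences, not real 16S data.
def percent_identity(seq_a, seq_b):
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

a = "ACGTACGTGACCTA"
b = "ACGTACGTGACCTT"   # one mismatch in 14 positions
print(f"{percent_identity(a, b):.1f}% identical")  # 92.9% identical
```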
Clinical Focus Resolution Within 24 hours, the results of the diagnostic test analysis of Alex's stool sample revealed that it was positive for heat-labile enterotoxin (LT), heat-stable enterotoxin (ST), and colonization factor (CF), confirming the hospital physician's suspicion of ETEC. During a follow-up visit, Alex's family physician noted that Alex's symptoms were not resolving quickly, and the lingering discomfort was preventing him from returning to classes. The family physician prescribed a course of ciprofloxacin, which fortunately resolved Alex's symptoms within a few days. Alex likely got his infection from ingesting contaminated food or water. Emerging industrialized countries like Mexico are still developing sanitation practices that prevent the contamination of water with fecal material. Travelers in such countries should avoid the ingestion of undercooked foods, especially meats, seafood, vegetables, and unpasteurized dairy products. They should also avoid using water that has not been treated; this includes drinking water, ice cubes, and even water used for brushing teeth. Using bottled water for these purposes is a good alternative. Good hygiene (handwashing) can also aid in preventing an ETEC infection. Alex had not been careful about his food or water consumption, which led to his illness. Alex's symptoms were very similar to those of cholera, caused by the gram-negative bacterium Vibrio cholerae, which also produces a toxin similar to ST and LT. At some point in the evolutionary history of ETEC, a nonpathogenic strain of E. coli similar to those typically found in the gut may have acquired the genes encoding the ST and LT toxins from V. cholerae. The fact that the genes encoding those toxins are encoded on extrachromosomal plasmids in ETEC supports the idea that these genes were acquired by E. coli and are likely maintained in bacterial populations through horizontal gene transfer. Viral Genomes Viral genomes exhibit significant diversity in structure. Some viruses have genomes that consist of DNA as their genetic material. This DNA may be single stranded, as exemplified by human parvoviruses, or double stranded, as seen in the herpesviruses and poxviruses. Additionally, although all cellular life uses DNA as its genetic material, some viral genomes are made of either single-stranded or double-stranded RNA molecules, as we have discussed. Viral genomes are typically smaller than most bacterial genomes, encoding only a few genes, because they rely on their hosts to carry out many of the functions required for their replication. The diversity of viral genome structures and their implications for viral replication life cycles are discussed in more detail in The Viral Life Cycle. Check Your Understanding Why do viral genomes vary widely among viruses? Micro Connections Genome Size Matters There is great variation in the size of genomes among different organisms. Most eukaryotes maintain multiple chromosomes; humans, for example, have 23 pairs, giving them 46 chromosomes. Despite being large at 3 billion base pairs, the human genome is far from the largest genome. Plants often maintain very large genomes, up to 150 billion base pairs, and commonly are polyploid, having multiple copies of each chromosome. The size of bacterial genomes also varies considerably, although they tend to be smaller than eukaryotic genomes (Figure 10.28).
Some bacterial genomes may be as small as only 112,000 base pairs. Often, the size of a bacterium's genome directly relates to how much the bacterium depends on its host for survival. When a bacterium relies on the host cell to carry out certain functions, it loses the genes encoding the abilities to carry out those functions itself. These types of bacterial endosymbionts are reminiscent of the prokaryotic origins of mitochondria and chloroplasts. From a clinical perspective, obligate and facultative intracellular pathogens also tend to have small genomes (some around 1 million base pairs). Because host cells can supply most of their nutrients, these pathogens tend to have a reduced number of genes encoding metabolic functions, making their cultivation in the laboratory difficult, if not impossible. Due to their small sizes, the genomes of organisms like Mycoplasma genitalium (580,000 base pairs), Chlamydia trachomatis (1.0 million), Rickettsia prowazekii (1.1 million), and Treponema pallidum (1.1 million) were some of the earlier bacterial genomes sequenced. Respectively, these pathogens cause urethritis and pelvic inflammation, chlamydia, typhus, and syphilis. Whereas obligate intracellular pathogens have unusually small genomes, other bacteria with a great variety of metabolic and enzymatic capabilities have unusually large genomes. Pseudomonas aeruginosa, for example, is a bacterium commonly found in the environment and is able to grow on a wide range of substrates. Its genome contains 6.3 million base pairs, endowing it with broad metabolic abilities and the ability to produce virulence factors that cause several types of opportunistic infections. Interestingly, there is significant variability in genome size among viruses as well, ranging from 3,500 base pairs to 2.5 million base pairs, with the largest viral genomes significantly exceeding the size of many bacterial genomes. This great variation in viral genome size further contributes to the diversity of viral genome characteristics already discussed. Link to Learning Visit the genome database of the National Center for Biotechnology Information (NCBI) to see the genomes that have been sequenced and their sizes.
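To put the sizes quoted in this feature side by side, here is a quick tabulation (values in base pairs, taken directly from the text; the snippet just sorts and prints them):

```python
# Genome sizes quoted in this feature (base pairs, from the text),
# sorted to make the roughly four orders of magnitude of variation
# easy to see at a glance.
genome_sizes_bp = {
    "Mycoplasma genitalium": 580_000,
    "Chlamydia trachomatis": 1_000_000,
    "Rickettsia prowazekii": 1_100_000,
    "Treponema pallidum": 1_100_000,
    "Pseudomonas aeruginosa": 6_300_000,
    "Homo sapiens": 3_000_000_000,
}
for organism, size in sorted(genome_sizes_bp.items(), key=lambda kv: kv[1]):
    print(f"{organism:>24}: {size:>13,} bp")
```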
Biology
Chapter Outline
5.1 Components and Structure
5.2 Passive Transport
5.3 Active Transport
5.4 Bulk Transport

Introduction
The plasma membrane, which is also called the cell membrane, has many functions, but the most basic one is to define the borders of the cell and keep the cell functional. The plasma membrane is selectively permeable. This means that the membrane allows some materials to freely enter or leave the cell, while other materials cannot move freely, but require the use of a specialized structure, and occasionally, even energy investment for crossing.
[ { "answer": { "ans_choice": 0, "ans_text": "protein" }, "bloom": "1", "hl_context": "<hl> Proteins make up the second major component of plasma membranes . <hl> <hl> Integral proteins ( some specialized types are called integrins ) are , as their name suggests , integrated completely into the membrane structure , and their hydrophobic membrane-spanning regions interact with the hydrophobic region of the the phospholipid bilayer ( Figure 5.2 ) . <hl> Single-pass integral membrane proteins usually have a hydrophobic transmembrane segment that consists of 20 – 25 amino acids . Some span only part of the membrane — associating with a single layer — while others stretch from one side of the membrane to the other , and are exposed on either side . Some complex proteins are composed of up to 12 segments of a single protein , which are extensively folded and embedded in the membrane ( Figure 5.5 ) . This type of protein has a hydrophilic region or regions , and one or several mildly hydrophobic regions . This arrangement of regions of the protein tends to orient the protein alongside the phospholipids , with the hydrophobic region of the protein adjacent to the tails of the phospholipids and the hydrophilic region or regions of the protein protruding from the membrane and in contact with the cytosol or extracellular fluid . <hl> Peripheral proteins are found on the exterior and interior surfaces of membranes , attached either to integral proteins or to phospholipids . <hl> Peripheral proteins , along with integral proteins , may serve as enzymes , as structural attachments for the fibers of the cytoskeleton , or as part of the cell ’ s recognition sites . These are sometimes referred to as “ cell-specific ” proteins . The body recognizes its own proteins and attacks foreign proteins associated with invasive pathogens .", "hl_sentences": "Proteins make up the second major component of plasma membranes . Integral proteins ( some specialized types are called integrins ) are , as their name suggests , integrated completely into the membrane structure , and their hydrophobic membrane-spanning regions interact with the hydrophobic region of the the phospholipid bilayer ( Figure 5.2 ) . Peripheral proteins are found on the exterior and interior surfaces of membranes , attached either to integral proteins or to phospholipids .", "question": { "cloze_format": "___ is a plasma membrane component that can be either found on its surface or embedded in the membrane structure.", "normal_format": "Which plasma membrane component can be either found on its surface or embedded in the membrane structure?", "question_choices": [ "protein", "cholesterol", "carbohydrate", "phospholipid" ], "question_id": "fs-id1511651", "question_text": "Which plasma membrane component can be either found on its surface or embedded in the membrane structure?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "double bonds in the fatty acid tail" }, "bloom": null, "hl_context": "<hl> The mosaic characteristics of the membrane explain some but not all of its fluidity . <hl> There are two other factors that help maintain this fluid characteristic . <hl> One factor is the nature of the phospholipids themselves . <hl> <hl> In their saturated form , the fatty acids in phospholipid tails are saturated with bound hydrogen atoms . <hl> There are no double bonds between adjacent carbon atoms . This results in tails that are relatively straight . 
<hl> In contrast , unsaturated fatty acids do not contain a maximal number of hydrogen atoms , but they do contain some double bonds between adjacent carbon atoms ; a double bond results in a bend in the string of carbons of approximately 30 degrees ( Figure 5.3 ) . <hl>", "hl_sentences": "The mosaic characteristics of the membrane explain some but not all of its fluidity . One factor is the nature of the phospholipids themselves . In their saturated form , the fatty acids in phospholipid tails are saturated with bound hydrogen atoms . In contrast , unsaturated fatty acids do not contain a maximal number of hydrogen atoms , but they do contain some double bonds between adjacent carbon atoms ; a double bond results in a bend in the string of carbons of approximately 30 degrees ( Figure 5.3 ) .", "question": { "cloze_format": "The characteristic of a phospholipid that contributes to the fluidity of the membrane is ___.", "normal_format": "Which characteristic of a phospholipid contributes to the fluidity of the membrane?", "question_choices": [ "its head", "cholesterol", "a saturated fatty acid tail", "double bonds in the fatty acid tail" ], "question_id": "fs-id1268735", "question_text": "Which characteristic of a phospholipid contributes to the fluidity of the membrane?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "identification of the cell" }, "bloom": null, "hl_context": "<hl> These carbohydrates on the exterior surface of the cell — the carbohydrate components of both glycoproteins and glycolipids — are collectively referred to as the glycocalyx ( meaning “ sugar coating ” ) . <hl> <hl> The glycocalyx is highly hydrophilic and attracts large amounts of water to the surface of the cell . <hl> This aids in the interaction of the cell with its watery environment and in the cell ’ s ability to obtain substances dissolved in the water . <hl> As discussed above , the glycocalyx is also important for cell identification , self / non-self determination , and embryonic development , and is used in cell-cell attachments to form tissues . <hl>", "hl_sentences": "These carbohydrates on the exterior surface of the cell — the carbohydrate components of both glycoproteins and glycolipids — are collectively referred to as the glycocalyx ( meaning “ sugar coating ” ) . The glycocalyx is highly hydrophilic and attracts large amounts of water to the surface of the cell . As discussed above , the glycocalyx is also important for cell identification , self / non-self determination , and embryonic development , and is used in cell-cell attachments to form tissues .", "question": { "cloze_format": "The primary function of carbohydrates attached to the exterior of cell membranes is (that it) ___.", "normal_format": "What is the primary function of carbohydrates attached to the exterior of cell membranes?", "question_choices": [ "identification of the cell", "flexibility of the membrane", "strengthening the membrane", "channels through membrane" ], "question_id": "fs-id2163365", "question_text": "What is the primary function of carbohydrates attached to the exterior of cell membranes?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "from an area with a high concentration of water to one of lower concentration" }, "bloom": null, "hl_context": "Osmosis is a special case of diffusion . <hl> Water , like other substances , moves from an area of high concentration to one of low concentration . 
<hl> An obvious question is what makes water move at all ? Imagine a beaker with a semipermeable membrane separating the two sides or halves ( Figure 5.11 ) . On both sides of the membrane the water level is the same , but there are different concentrations of a dissolved substance , or solute , that cannot cross the membrane ( otherwise the concentrations on each side would be balanced by the solute crossing the membrane ) . If the volume of the solution on both sides of the membrane is the same , but the concentrations of solute are different , then there are different amounts of water , the solvent , on either side of the membrane .", "hl_sentences": "Water , like other substances , moves from an area of high concentration to one of low concentration .", "question": { "cloze_format": "Water moves via osmosis _________.", "normal_format": "How does water move via osmosis?", "question_choices": [ "throughout the cytoplasm", "from an area with a high concentration of other solutes to a lower one", "from an area with a high concentration of water to one of lower concentration", "from an area with a low concentration of water to one of higher concentration" ], "question_id": "fs-id1422483", "question_text": "Water moves via osmosis _________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "concentration gradient" }, "bloom": null, "hl_context": "Factors That Affect Diffusion Molecules move constantly in a random manner , at a rate that depends on their mass , their environment , and the amount of thermal energy they possess , which in turn is a function of temperature . This movement accounts for the diffusion of molecules through whatever medium in which they are localized . A substance will tend to move into any space available to it until it is evenly distributed throughout it . <hl> After a substance has diffused completely through a space , removing its concentration gradient , molecules will still move around in the space , but there will be no net movement of the number of molecules from one area to another . <hl> <hl> This lack of a concentration gradient in which there is no net movement of a substance is known as dynamic equilibrium . <hl> While diffusion will go forward in the presence of a concentration gradient of a substance , several factors affect the rate of diffusion . <hl> Each separate substance in a medium , such as the extracellular fluid , has its own concentration gradient , independent of the concentration gradients of other materials . <hl> <hl> In addition , each substance will diffuse according to that gradient . <hl> Within a system , there will be different rates of diffusion of the different substances in the medium .", "hl_sentences": "After a substance has diffused completely through a space , removing its concentration gradient , molecules will still move around in the space , but there will be no net movement of the number of molecules from one area to another . This lack of a concentration gradient in which there is no net movement of a substance is known as dynamic equilibrium . Each separate substance in a medium , such as the extracellular fluid , has its own concentration gradient , independent of the concentration gradients of other materials . 
In addition , each substance will diffuse according to that gradient .", "question": { "cloze_format": "The principal force driving movement in diffusion is the __________.", "normal_format": "What is the principal force driving movement in diffusion?", "question_choices": [ "temperature", "particle size", "concentration gradient", "membrane surface area" ], "question_id": "fs-id1243838", "question_text": "The principal force driving movement in diffusion is the __________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "They have no way of controlling their tonicity." }, "bloom": "2", "hl_context": "<hl> Tonicity is a concern for all living things . <hl> <hl> For example , paramecia and amoebas , which are protists that lack cell walls , have contractile vacuoles . <hl> <hl> This vesicle collects excess water from the cell and pumps it out , keeping the cell from lysing as it takes on water from its environment ( Figure 5.15 ) . <hl>", "hl_sentences": "Tonicity is a concern for all living things . For example , paramecia and amoebas , which are protists that lack cell walls , have contractile vacuoles . This vesicle collects excess water from the cell and pumps it out , keeping the cell from lysing as it takes on water from its environment ( Figure 5.15 ) .", "question": { "cloze_format": "The problem that is faced by organisms that live in fresh water is that ___.", "normal_format": "What problem is faced by organisms that live in fresh water?", "question_choices": [ "Their bodies tend to take in too much water.", "They have no way of controlling their tonicity.", "Only salt water poses problems for animals that live in it.", "Their bodies tend to lose too much water to their environment." ], "question_id": "fs-id1986270", "question_text": "What problem is faced by organisms that live in fresh water?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "diffusion is constantly moving solutes in opposite directions" }, "bloom": null, "hl_context": "<hl> To move substances against a concentration or electrochemical gradient , the cell must use energy . <hl> This energy is harvested from ATP generated through the cell ’ s metabolism . <hl> Active transport mechanisms , collectively called pumps , work against electrochemical gradients . <hl> <hl> Small substances constantly pass through plasma membranes . <hl> Active transport maintains concentrations of ions and other substances needed by living cells in the face of these passive movements . Much of a cell ’ s supply of metabolic energy may be spent maintaining these processes . ( Most of a red blood cell ’ s metabolic energy is used to maintain the imbalance between exterior and interior sodium and potassium levels required by the cell . ) Because active transport mechanisms depend on a cell ’ s metabolism for energy , they are sensitive to many metabolic poisons that interfere with the supply of ATP .", "hl_sentences": "To move substances against a concentration or electrochemical gradient , the cell must use energy . Active transport mechanisms , collectively called pumps , work against electrochemical gradients . Small substances constantly pass through plasma membranes .", "question": { "cloze_format": "Active transport must function continuously because __________.", "normal_format": "Why must active transport function continuously? 
", "question_choices": [ "plasma membranes wear out", "not all membranes are amphiphilic", "facilitated transport opposes active transport", "diffusion is constantly moving solutes in opposite directions" ], "question_id": "fs-id2000653", "question_text": "Active transport must function continuously because __________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "by expelling more cations than are taken in" }, "bloom": "2", "hl_context": "We have discussed simple concentration gradients — differential concentrations of a substance across a space or a membrane — but in living systems , gradients are more complex . <hl> Because ions move into and out of cells and because cells contain proteins that do not move across the membrane and are mostly negatively charged , there is also an electrical gradient , a difference of charge , across the plasma membrane . <hl> <hl> The interior of living cells is electrically negative with respect to the extracellular fluid in which they are bathed , and at the same time , cells have higher concentrations of potassium ( K + ) and lower concentrations of sodium ( Na + ) than does the extracellular fluid . <hl> So in a living cell , the concentration gradient of Na + tends to drive it into the cell , and the electrical gradient of Na + ( a positive ion ) also tends to drive it inward to the negatively charged interior . The situation is more complex , however , for other elements such as potassium . The electrical gradient of K + , a positive ion , also tends to drive it into the cell , but the concentration gradient of K + tends to drive K + out of the cell ( Figure 5.16 ) . The combined gradient of concentration and electrical charge that affects an ion is called its electrochemical gradient .", "hl_sentences": "Because ions move into and out of cells and because cells contain proteins that do not move across the membrane and are mostly negatively charged , there is also an electrical gradient , a difference of charge , across the plasma membrane . The interior of living cells is electrically negative with respect to the extracellular fluid in which they are bathed , and at the same time , cells have higher concentrations of potassium ( K + ) and lower concentrations of sodium ( Na + ) than does the extracellular fluid .", "question": { "cloze_format": "The sodium-potassium pump makes the interior of the cell negatively charged ___.", "normal_format": "How does the sodium-potassium pump make the interior of the cell negatively charged?", "question_choices": [ "by expelling anions", "by pulling in anions", "by expelling more cations than are taken in", "by taking in and expelling an equal number of cations" ], "question_id": "fs-id1421748", "question_text": "How does the sodium-potassium pump make the interior of the cell negatively charged?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "electrochemical gradient" }, "bloom": null, "hl_context": "We have discussed simple concentration gradients — differential concentrations of a substance across a space or a membrane — but in living systems , gradients are more complex . Because ions move into and out of cells and because cells contain proteins that do not move across the membrane and are mostly negatively charged , there is also an electrical gradient , a difference of charge , across the plasma membrane . 
The interior of living cells is electrically negative with respect to the extracellular fluid in which they are bathed , and at the same time , cells have higher concentrations of potassium ( K + ) and lower concentrations of sodium ( Na + ) than does the extracellular fluid . So in a living cell , the concentration gradient of Na + tends to drive it into the cell , and the electrical gradient of Na + ( a positive ion ) also tends to drive it inward to the negatively charged interior . The situation is more complex , however , for other elements such as potassium . The electrical gradient of K + , a positive ion , also tends to drive it into the cell , but the concentration gradient of K + tends to drive K + out of the cell ( Figure 5.16 ) . <hl> The combined gradient of concentration and electrical charge that affects an ion is called its electrochemical gradient . <hl>", "hl_sentences": "The combined gradient of concentration and electrical charge that affects an ion is called its electrochemical gradient .", "question": { "cloze_format": "The combination of an electrical gradient and a concentration gradient is called the ___.", "normal_format": "What is the combination of an electrical gradient and a concentration gradient called?", "question_choices": [ "potential gradient", "electrical potential", "concentration potential", "electrochemical gradient" ], "question_id": "fs-id2188234", "question_text": "What is the combination of an electrical gradient and a concentration gradient called?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "It fuses with and becomes part of the plasma membrane." }, "bloom": "1", "hl_context": "The reverse process of moving material into a cell is the process of exocytosis . <hl> Exocytosis is the opposite of the processes discussed above in that its purpose is to expel material from the cell into the extracellular fluid . <hl> <hl> Waste material is enveloped in a membrane and fuses with the interior of the plasma membrane . <hl> This fusion opens the membranous envelope on the exterior of the cell , and the waste material is expelled into the extracellular space ( Figure 5.23 ) . Other examples of cells releasing molecules via exocytosis include the secretion of proteins of the extracellular matrix and secretion of neurotransmitters into the synaptic cleft by synaptic vesicles .", "hl_sentences": "Exocytosis is the opposite of the processes discussed above in that its purpose is to expel material from the cell into the extracellular fluid . Waste material is enveloped in a membrane and fuses with the interior of the plasma membrane .", "question": { "cloze_format": "After exocytosis, the effect on the membrane of a vesicle, is that ___.", "normal_format": "What happens to the membrane of a vesicle after exocytosis?", "question_choices": [ "It leaves the cell.", "It is disassembled by the cell.", "It fuses with and becomes part of the plasma membrane.", "It is used again in another exocytosis event." ], "question_id": "fs-id1853625", "question_text": "What happens to the membrane of a vesicle after exocytosis?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "phagocytosis" }, "bloom": "1", "hl_context": "<hl> Phagocytosis Active Large macromolecules , whole cells , or cellular structures <hl> <hl> Phagocytosis ( the condition of “ cell eating ” ) is the process by which large particles , such as cells or relatively large particles , are taken in by a cell . 
<hl> For example , when microorganisms invade the human body , a type of white blood cell called a neutrophil will remove the invaders through this process , surrounding and engulfing the microorganism , which is then destroyed by the neutrophil ( Figure 5.20 ) .", "hl_sentences": "Phagocytosis Active Large macromolecules , whole cells , or cellular structures Phagocytosis ( the condition of “ cell eating ” ) is the process by which large particles , such as cells or relatively large particles , are taken in by a cell .", "question": { "cloze_format": "The transport mechanism that can bring whole cells into a cell is (the) ___ .", "normal_format": "Which transport mechanism can bring whole cells into a cell?", "question_choices": [ "pinocytosis", "phagocytosis", "facilitated transport", "primary active transport" ], "question_id": "fs-id2116533", "question_text": "Which transport mechanism can bring whole cells into a cell?" }, "references_are_paraphrase": null } ]
5.1 Components and Structure Learning Objectives By the end of this section, you will be able to: Understand the fluid mosaic model of cell membranes Describe the functions of phospholipids, proteins, and carbohydrates in membranes Discuss membrane fluidity A cell’s plasma membrane defines the cell, outlines its borders, and determines the nature of its interaction with its environment (see Table 5.1 for a summary). Cells exclude some substances, take in others, and excrete still others, all in controlled quantities. The plasma membrane must be very flexible to allow certain cells, such as red blood cells and white blood cells, to change shape as they pass through narrow capillaries. These are the more obvious functions of a plasma membrane. In addition, the surface of the plasma membrane carries markers that allow cells to recognize one another, which is vital for tissue and organ formation during early development, and which later plays a role in the “self” versus “non-self” distinction of the immune response. Among the most sophisticated functions of the plasma membrane is the ability to transmit signals by means of complex, integral proteins known as receptors. These proteins act both as receivers of extracellular inputs and as activators of intracellular processes. These membrane receptors provide extracellular attachment sites for effectors like hormones and growth factors, and they activate intracellular response cascades when their effectors are bound. Occasionally, receptors are hijacked by viruses (HIV, human immunodeficiency virus, is one example) that use them to gain entry into cells, and at times, the genes encoding receptors become mutated, causing the process of signal transduction to malfunction with disastrous consequences. Fluid Mosaic Model The existence of the plasma membrane was identified in the 1890s, and its chemical components were identified in 1915. The principal components identified at that time were lipids and proteins. The first widely accepted model of the plasma membrane’s structure was proposed in 1935 by Hugh Davson and James Danielli; it was based on the “railroad track” appearance of the plasma membrane in early electron micrographs. They theorized that the structure of the plasma membrane resembles a sandwich, with protein being analogous to the bread, and lipids being analogous to the filling. In the 1950s, advances in microscopy, notably transmission electron microscopy (TEM), allowed researchers to see that the core of the plasma membrane consisted of a double, rather than a single, layer. A new model that better explains both the microscopic observations and the function of that plasma membrane was proposed by S.J. Singer and Garth L. Nicolson in 1972. The explanation proposed by Singer and Nicolson is called the fluid mosaic model . The model has evolved somewhat over time, but it still best accounts for the structure and functions of the plasma membrane as we now understand them. The fluid mosaic model describes the structure of the plasma membrane as a mosaic of components—including phospholipids, cholesterol, proteins, and carbohydrates—that gives the membrane a fluid character. Plasma membranes range from 5 to 10 nm in thickness. For comparison, human red blood cells, visible via light microscopy, are approximately 8 µm wide, or approximately 1,000 times wider than a plasma membrane. The membrane does look a bit like a sandwich ( Figure 5.2 ). 
The principal components of a plasma membrane are lipids (phospholipids and cholesterol), proteins, and carbohydrates attached to some of the lipids and some of the proteins. A phospholipid is a molecule consisting of glycerol, two fatty acids, and a phosphate-linked head group. Cholesterol, another lipid composed of four fused carbon rings, is found alongside the phospholipids in the core of the membrane. The proportions of proteins, lipids, and carbohydrates in the plasma membrane vary with cell type, but for a typical human cell, protein accounts for about 50 percent of the composition by mass, lipids (of all types) account for about 40 percent of the composition by mass, with the remaining 10 percent of the composition by mass being carbohydrates. However, the concentration of proteins and lipids varies with different cell membranes. For example, myelin, an outgrowth of the membrane of specialized cells that insulates the axons of the peripheral nerves, contains only 18 percent protein and 76 percent lipid. The mitochondrial inner membrane contains 76 percent protein and only 24 percent lipid. The plasma membrane of human red blood cells is 30 percent lipid. Carbohydrates are present only on the exterior surface of the plasma membrane and are attached to proteins, forming glycoproteins , or attached to lipids, forming glycolipids . Phospholipids The main fabric of the membrane is composed of amphiphilic, phospholipid molecules. The hydrophilic or “water-loving” areas of these molecules (which look like a collection of balls in an artist’s rendition of the model) ( Figure 5.2 ) are in contact with the aqueous fluid both inside and outside the cell. Hydrophobic , or water-hating molecules, tend to be non-polar. They interact with other non-polar molecules in chemical reactions, but generally do not interact with polar molecules. When placed in water, hydrophobic molecules tend to form a ball or cluster. The hydrophilic regions of the phospholipids tend to form hydrogen bonds with water and other polar molecules on both the exterior and interior of the cell. Thus, the membrane surfaces that face the interior and exterior of the cell are hydrophilic. In contrast, the interior of the cell membrane is hydrophobic and will not interact with water. Therefore, phospholipids form an excellent two-layer cell membrane that separates fluid within the cell from the fluid outside of the cell. A phospholipid molecule ( Figure 5.3 ) consists of a three-carbon glycerol backbone with two fatty acid molecules attached to carbons 1 and 2, and a phosphate-containing group attached to the third carbon. This arrangement gives the overall molecule an area described as its head (the phosphate-containing group), which has a polar character or negative charge, and an area called the tail (the fatty acids), which has no charge. The head can form hydrogen bonds, but the tail cannot. A molecule with this arrangement of a positively or negatively charged area and an uncharged, or non-polar, area is referred to as amphiphilic or “dual-loving.” This characteristic is vital to the structure of a plasma membrane because, in water, phospholipids tend to become arranged with their hydrophobic tails facing each other and their hydrophilic heads facing out. In this way, they form a lipid bilayer—a barrier composed of a double layer of phospholipids that separates the water and other materials on one side of the barrier from the water and other materials on the other side. 
In fact, phospholipids heated in an aqueous solution tend to spontaneously form small spheres or droplets (called micelles or liposomes), with their hydrophilic heads forming the exterior and their hydrophobic tails on the inside (Figure 5.4). Proteins Proteins make up the second major component of plasma membranes. Integral proteins (some specialized types are called integrins) are, as their name suggests, integrated completely into the membrane structure, and their hydrophobic membrane-spanning regions interact with the hydrophobic region of the phospholipid bilayer (Figure 5.2). Single-pass integral membrane proteins usually have a hydrophobic transmembrane segment that consists of 20–25 amino acids. Some span only part of the membrane—associating with a single layer—while others stretch from one side of the membrane to the other, and are exposed on either side. Some complex proteins are composed of up to 12 segments of a single protein, which are extensively folded and embedded in the membrane (Figure 5.5). This type of protein has a hydrophilic region or regions, and one or several mildly hydrophobic regions. This arrangement of regions of the protein tends to orient the protein alongside the phospholipids, with the hydrophobic region of the protein adjacent to the tails of the phospholipids and the hydrophilic region or regions of the protein protruding from the membrane and in contact with the cytosol or extracellular fluid. Peripheral proteins are found on the exterior and interior surfaces of membranes, attached either to integral proteins or to phospholipids. Peripheral proteins, along with integral proteins, may serve as enzymes, as structural attachments for the fibers of the cytoskeleton, or as part of the cell's recognition sites. These are sometimes referred to as "cell-specific" proteins. The body recognizes its own proteins and attacks foreign proteins associated with invasive pathogens. Carbohydrates Carbohydrates are the third major component of plasma membranes. They are always found on the exterior surface of cells and are bound either to proteins (forming glycoproteins) or to lipids (forming glycolipids) (Figure 5.2). These carbohydrate chains may consist of 2–60 monosaccharide units and can be either straight or branched. Along with peripheral proteins, carbohydrates form specialized sites on the cell surface that allow cells to recognize each other. These sites have unique patterns that allow the cell to be recognized, much the way that the facial features unique to each person allow him or her to be recognized. This recognition function is very important to cells, as it allows the immune system to differentiate between body cells (called "self") and foreign cells or tissues (called "non-self"). Similar types of glycoproteins and glycolipids are found on the surfaces of viruses and may change frequently, preventing immune cells from recognizing and attacking them. These carbohydrates on the exterior surface of the cell—the carbohydrate components of both glycoproteins and glycolipids—are collectively referred to as the glycocalyx (meaning "sugar coating"). The glycocalyx is highly hydrophilic and attracts large amounts of water to the surface of the cell. This aids in the interaction of the cell with its watery environment and in the cell's ability to obtain substances dissolved in the water.
As discussed above, the glycocalyx is also important for cell identification, self/non-self determination, and embryonic development, and is used in cell-cell attachments to form tissues. Evolution Connection How Viruses Infect Specific Organs Glycoprotein and glycolipid patterns on the surfaces of cells give many viruses an opportunity for infection. HIV and hepatitis viruses infect only specific organs or cells in the human body. HIV is able to penetrate the plasma membranes of a subtype of lymphocytes called T-helper cells, as well as some monocytes and central nervous system cells. The hepatitis virus attacks liver cells. These viruses are able to invade these cells, because the cells have binding sites on their surfaces that are specific to and compatible with certain viruses ( Figure 5.6 ). Other recognition sites on the virus’s surface interact with the human immune system, prompting the body to produce antibodies. Antibodies are made in response to the antigens or proteins associated with invasive pathogens, or in response to foreign cells, such as might occur with an organ transplant. These same sites serve as places for antibodies to attach and either destroy or inhibit the activity of the virus. Unfortunately, these recognition sites on HIV change at a rapid rate because of mutations, making the production of an effective vaccine against the virus very difficult, as the virus evolves and adapts. A person infected with HIV will quickly develop different populations, or variants, of the virus that are distinguished by differences in these recognition sites. This rapid change of surface markers decreases the effectiveness of the person’s immune system in attacking the virus, because the antibodies will not recognize the new variations of the surface patterns. In the case of HIV, the problem is compounded by the fact that the virus specifically infects and destroys cells involved in the immune response, further incapacitating the host. Membrane Fluidity The mosaic characteristic of the membrane, described in the fluid mosaic model, helps to illustrate its nature. The integral proteins and lipids exist in the membrane as separate but loosely attached molecules. These resemble the separate, multicolored tiles of a mosaic picture, and they float, moving somewhat with respect to one another. The membrane is not like a balloon, however, that can expand and contract; rather, it is fairly rigid and can burst if penetrated or if a cell takes in too much water. However, because of its mosaic nature, a very fine needle can easily penetrate a plasma membrane without causing it to burst, and the membrane will flow and self-seal when the needle is extracted. The mosaic characteristics of the membrane explain some but not all of its fluidity. There are two other factors that help maintain this fluid characteristic. One factor is the nature of the phospholipids themselves. In their saturated form, the fatty acids in phospholipid tails are saturated with bound hydrogen atoms. There are no double bonds between adjacent carbon atoms. This results in tails that are relatively straight. In contrast, unsaturated fatty acids do not contain a maximal number of hydrogen atoms, but they do contain some double bonds between adjacent carbon atoms; a double bond results in a bend in the string of carbons of approximately 30 degrees ( Figure 5.3 ). 
Thus, if saturated fatty acids, with their straight tails, are compressed by decreasing temperatures, they press in on each other, making a dense and fairly rigid membrane. If unsaturated fatty acids are compressed, the "kinks" in their tails elbow adjacent phospholipid molecules away, maintaining some space between the phospholipid molecules. This "elbow room" helps to maintain fluidity in the membrane at temperatures at which membranes with saturated fatty acid tails in their phospholipids would "freeze" or solidify. The relative fluidity of the membrane is particularly important in a cold environment. A cold environment tends to compress membranes composed largely of saturated fatty acids, making them less fluid and more susceptible to rupturing. Many organisms (fish are one example) are capable of adapting to cold environments by changing the proportion of unsaturated fatty acids in their membranes in response to the lowering of the temperature. Link to Learning Visit this site to see animations of the fluidity and mosaic quality of membranes. Animals have an additional membrane constituent that assists in maintaining fluidity. Cholesterol, which lies alongside the phospholipids in the membrane, tends to dampen the effects of temperature on the membrane. Thus, this lipid functions as a buffer, preventing lower temperatures from inhibiting fluidity and preventing increased temperatures from increasing fluidity too much. Thus, cholesterol extends, in both directions, the range of temperature in which the membrane is appropriately fluid and consequently functional. Cholesterol also serves other functions, such as organizing clusters of transmembrane proteins into lipid rafts.

The Components and Functions of the Plasma Membrane
Component: Location
Phospholipid: Main fabric of the membrane
Cholesterol: Attached between phospholipids and between the two phospholipid layers
Integral proteins (for example, integrins): Embedded within the phospholipid layer(s); may or may not penetrate through both layers
Peripheral proteins: On the inner or outer surface of the phospholipid bilayer; not embedded within the phospholipids
Carbohydrates (components of glycoproteins and glycolipids): Generally attached to proteins on the outside membrane layer
Table 5.1

Career Connection Immunologist The variations in peripheral proteins and carbohydrates that affect a cell's recognition sites are of prime interest in immunology. These changes are taken into consideration in vaccine development. Many infectious diseases, such as smallpox, polio, diphtheria, and tetanus, were conquered by the use of vaccines. Immunologists are the physicians and scientists who research and develop vaccines, as well as treat and study allergies or other immune problems. Some immunologists study and treat autoimmune problems (diseases in which a person's immune system attacks his or her own cells or tissues, such as lupus) and immunodeficiencies, whether acquired (such as acquired immunodeficiency syndrome, or AIDS) or hereditary (such as severe combined immunodeficiency, or SCID). Immunologists are called in to help treat organ transplantation patients, who must have their immune systems suppressed so that their bodies will not reject a transplanted organ. Some immunologists work to understand natural immunity and the effects of a person's environment on it. Others work on questions about how the immune system affects diseases such as cancer.
In the past, the importance of having a healthy immune system in preventing cancer was not at all understood. To work as an immunologist, a PhD or MD is required. In addition, immunologists undertake at least 2–3 years of training in an accredited program and must pass an examination given by the American Board of Allergy and Immunology. Immunologists must possess knowledge of the functions of the human body as they relate to issues beyond immunization, and knowledge of pharmacology and medical technology, such as medications, therapies, test materials, and surgical procedures. 5.2 Passive Transport Learning Objectives By the end of this section, you will be able to: Explain why and how passive transport occurs Understand the processes of osmosis and diffusion Define tonicity and describe its relevance to passive transport Plasma membranes must allow certain substances to enter and leave a cell, and prevent some harmful materials from entering and some essential materials from leaving. In other words, plasma membranes are selectively permeable —they allow some substances to pass through, but not others. If they were to lose this selectivity, the cell would no longer be able to sustain itself, and it would be destroyed. Some cells require larger amounts of specific substances than do other cells; they must have a way of obtaining these materials from extracellular fluids. This may happen passively, as certain materials move back and forth, or the cell may have special mechanisms that facilitate transport. Some materials are so important to a cell that it spends some of its energy, hydrolyzing adenosine triphosphate (ATP), to obtain these materials. Red blood cells use some of their energy doing just that. All cells spend the majority of their energy to maintain an imbalance of sodium and potassium ions between the interior and exterior of the cell. The most direct forms of membrane transport are passive. Passive transport is a naturally occurring phenomenon and does not require the cell to exert any of its energy to accomplish the movement. In passive transport, substances move from an area of higher concentration to an area of lower concentration. A physical space in which there is a range of concentrations of a single substance is said to have a concentration gradient . Selective Permeability Plasma membranes are asymmetric: the interior of the membrane is not identical to the exterior of the membrane. In fact, there is a considerable difference between the array of phospholipids and proteins between the two leaflets that form a membrane. On the interior of the membrane, some proteins serve to anchor the membrane to fibers of the cytoskeleton. There are peripheral proteins on the exterior of the membrane that bind elements of the extracellular matrix. Carbohydrates, attached to lipids or proteins, are also found on the exterior surface of the plasma membrane. These carbohydrate complexes help the cell bind substances that the cell needs in the extracellular fluid. This adds considerably to the selective nature of plasma membranes ( Figure 5.7 ). Recall that plasma membranes are amphiphilic: They have hydrophilic and hydrophobic regions. This characteristic helps the movement of some materials through the membrane and hinders the movement of others. Lipid-soluble material with a low molecular weight can easily slip through the hydrophobic lipid core of the membrane. 
Substances such as the fat-soluble vitamins A, D, E, and K readily pass through the plasma membranes in the digestive tract and other tissues. Fat-soluble drugs and hormones also gain easy entry into cells and are readily transported into the body’s tissues and organs. Molecules of oxygen and carbon dioxide have no charge and so pass through membranes by simple diffusion. Polar substances present problems for the membrane. While some polar molecules connect easily with the outside of a cell, they cannot readily pass through the lipid core of the plasma membrane. Additionally, while small ions could easily slip through the spaces in the mosaic of the membrane, their charge prevents them from doing so. Ions such as sodium, potassium, calcium, and chloride must have special means of penetrating plasma membranes. Simple sugars and amino acids also need help with transport across plasma membranes, achieved by various transmembrane proteins (channels). Diffusion Diffusion is a passive process of transport. A single substance tends to move from an area of high concentration to an area of low concentration until the concentration is equal across a space. You are familiar with diffusion of substances through the air. For example, think about someone opening a bottle of ammonia in a room filled with people. The ammonia gas is at its highest concentration in the bottle; its lowest concentration is at the edges of the room. The ammonia vapor will diffuse, or spread away, from the bottle, and gradually, more and more people will smell the ammonia as it spreads. Materials move within the cell’s cytosol by diffusion, and certain materials move through the plasma membrane by diffusion ( Figure 5.8 ). Diffusion expends no energy. On the contrary, concentration gradients are a form of potential energy, dissipated as the gradient is eliminated. Each separate substance in a medium, such as the extracellular fluid, has its own concentration gradient, independent of the concentration gradients of other materials. In addition, each substance will diffuse according to that gradient. Within a system, there will be different rates of diffusion of the different substances in the medium. Factors That Affect Diffusion Molecules move constantly in a random manner, at a rate that depends on their mass, their environment, and the amount of thermal energy they possess, which in turn is a function of temperature. This movement accounts for the diffusion of molecules through whatever medium in which they are localized. A substance will tend to move into any space available to it until it is evenly distributed throughout it. After a substance has diffused completely through a space, removing its concentration gradient, molecules will still move around in the space, but there will be no net movement of the number of molecules from one area to another. This lack of a concentration gradient in which there is no net movement of a substance is known as dynamic equilibrium. While diffusion will go forward in the presence of a concentration gradient of a substance, several factors affect the rate of diffusion. Extent of the concentration gradient: The greater the difference in concentration, the more rapid the diffusion. The closer the distribution of the material gets to equilibrium, the slower the rate of diffusion becomes. Mass of the molecules diffusing: Heavier molecules move more slowly; therefore, they diffuse more slowly. The reverse is true for lighter molecules. 
Temperature: Higher temperatures increase the energy and therefore the movement of the molecules, increasing the rate of diffusion. Lower temperatures decrease the energy of the molecules, thus decreasing the rate of diffusion. Solvent density: As the density of a solvent increases, the rate of diffusion decreases. The molecules slow down because they have a more difficult time getting through the denser medium. If the medium is less dense, diffusion increases. Because cells primarily use diffusion to move materials within the cytoplasm, any increase in the cytoplasm's density will inhibit the movement of the materials. An example of this is a person experiencing dehydration. As the body's cells lose water, the rate of diffusion decreases in the cytoplasm, and the cells' functions deteriorate. Neurons tend to be very sensitive to this effect. Dehydration frequently leads to unconsciousness and possibly coma because of the decrease in diffusion rate within the cells. Solubility: As discussed earlier, nonpolar or lipid-soluble materials pass through plasma membranes more easily than polar materials, allowing a faster rate of diffusion. Surface area and thickness of the plasma membrane: Increased surface area increases the rate of diffusion, whereas a thicker membrane reduces it. Distance travelled: The greater the distance that a substance must travel, the slower the rate of diffusion. This places an upper limitation on cell size. A large, spherical cell will die because nutrients or waste cannot reach or leave the center of the cell, respectively. Therefore, cells must either be small in size, as in the case of many prokaryotes, or be flattened, as with many single-celled eukaryotes. A variation of diffusion is the process of filtration. In filtration, material moves according to its concentration gradient through a membrane; sometimes the rate of diffusion is enhanced by pressure, causing the substances to filter more rapidly. This occurs in the kidney, where blood pressure forces large amounts of water and accompanying dissolved substances, or solutes, out of the blood and into the renal tubules. The rate of diffusion in this instance is almost totally dependent on pressure. One of the effects of high blood pressure is the appearance of protein in the urine, which is "squeezed through" by the abnormally high pressure. Facilitated transport In facilitated transport, also called facilitated diffusion, materials diffuse across the plasma membrane with the help of membrane proteins. A concentration gradient exists that would allow these materials to diffuse into the cell without expending cellular energy. However, these materials are ions or polar molecules that are repelled by the hydrophobic parts of the cell membrane. Facilitated transport proteins shield these materials from the repulsive force of the membrane, allowing them to diffuse into the cell. The material being transported is first attached to protein or glycoprotein receptors on the exterior surface of the plasma membrane. This allows the material that is needed by the cell to be removed from the extracellular fluid. The substances are then passed to specific integral proteins that facilitate their passage. Some of these integral proteins are collections of beta-pleated sheets that form a pore or channel through the phospholipid bilayer. Others are carrier proteins that bind the substance and aid its diffusion through the membrane.
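Before looking at channels and carriers in detail, it is worth tying together the diffusion-rate factors listed above. The chapter does not state it explicitly, but the standard formalism is Fick's first law, which makes the net rate of diffusion proportional to the surface area and the concentration difference, and inversely proportional to the membrane thickness. A minimal sketch in Python; every numerical value here is invented for illustration and is not a measurement from this text:

```python
# Fick's first law for net diffusion across a membrane:
#   rate = P * A * (C_out - C_in)
# P is a permeability coefficient (the diffusion coefficient divided by
# membrane thickness, also reflecting lipid solubility and molecular size),
# A is the membrane surface area, and C_out - C_in is the gradient.

def diffusion_rate(permeability, area, c_out, c_in):
    """Net rate of passive diffusion into the cell (arbitrary units)."""
    return permeability * area * (c_out - c_in)

# A small, lipid-soluble molecule such as O2 crosses easily (high P)...
print(diffusion_rate(permeability=1.0, area=5.0, c_out=10.0, c_in=2.0))    # 40.0
# ...while a polar molecule facing the same gradient crosses far more slowly.
print(diffusion_rate(permeability=0.001, area=5.0, c_out=10.0, c_in=2.0))  # 0.04
# When the gradient is gone, the net rate is zero: dynamic equilibrium.
print(diffusion_rate(permeability=1.0, area=5.0, c_out=6.0, c_in=6.0))     # 0.0
```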
Channels The integral proteins involved in facilitated transport are collectively referred to as transport proteins, and they function as either channels for the material or carriers. In both cases, they are transmembrane proteins. Channels are specific for the substance that is being transported. Channel proteins have hydrophilic domains exposed to the intracellular and extracellular fluids; they additionally have a hydrophilic channel through their core that provides a hydrated opening through the membrane layers (Figure 5.9). Passage through the channel allows polar compounds to avoid the nonpolar central layer of the plasma membrane that would otherwise slow or prevent their entry into the cell. Aquaporins are channel proteins that allow water to pass through the membrane at a very high rate. Channel proteins are either open at all times or they are "gated," which controls the opening of the channel. The attachment of a particular ion to the channel protein may control the opening, or other mechanisms or substances may be involved. In some tissues, sodium and chloride ions pass freely through open channels, whereas in other tissues a gate must be opened to allow passage. An example of this occurs in the kidney, where both forms of channels are found in different parts of the renal tubules. Cells involved in the transmission of electrical impulses, such as nerve and muscle cells, have gated channels for sodium, potassium, and calcium in their membranes. Opening and closing of these channels changes the relative concentrations on opposing sides of the membrane of these ions, resulting in the facilitation of electrical transmission along membranes (in the case of nerve cells) or in muscle contraction (in the case of muscle cells). Carrier Proteins Another type of protein embedded in the plasma membrane is a carrier protein. This aptly named protein binds a substance and, in doing so, triggers a change of its own shape, moving the bound molecule from the outside of the cell to its interior (Figure 5.10); depending on the gradient, the material may move in the opposite direction. Carrier proteins are typically specific for a single substance. This selectivity adds to the overall selectivity of the plasma membrane. The exact mechanism for the change of shape is poorly understood. Proteins can change shape when their hydrogen bonds are affected, but this may not fully explain this mechanism. Each carrier protein is specific to one substance, and there are a finite number of these proteins in any membrane. This can cause problems in transporting enough of the material for the cell to function properly. When all of the proteins are bound to their ligands, they are saturated and the rate of transport is at its maximum. Increasing the concentration gradient at this point will not result in an increased rate of transport. An example of this process occurs in the kidney. Glucose, water, salts, ions, and amino acids needed by the body are filtered in one part of the kidney. This filtrate, which includes glucose, is then reabsorbed in another part of the kidney. Because there are only a finite number of carrier proteins for glucose, if more glucose is present than the proteins can handle, the excess is not transported and it is excreted from the body in the urine. In a diabetic individual, this is described as "spilling glucose into the urine."
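The saturation behavior just described, a fixed maximum rate once every carrier is occupied, is commonly modeled with a simple saturating curve of the Michaelis-Menten form. The chapter does not give this equation; the constants below are invented purely for illustration:

```python
# Carrier-mediated transport saturates because the number of carrier
# proteins is finite. A simple saturating model:
#   rate = v_max * C / (k_half + C)
# v_max is the rate with every carrier occupied; k_half is the
# concentration at which transport runs at half its maximum.

def carrier_rate(concentration, v_max=100.0, k_half=5.0):
    """Transport rate through a saturable carrier (arbitrary units)."""
    return v_max * concentration / (k_half + concentration)

for c in (1, 5, 50, 500):
    print(c, round(carrier_rate(c), 1))
# Output: 16.7, 50.0, 90.9, 99.0. Raising the concentration tenfold,
# from 50 to 500, barely raises the rate because the carriers are
# already nearly saturated. This is why excess filtered glucose is lost
# in the urine once the kidney's glucose carriers run at capacity.
# Simple diffusion, by contrast, keeps increasing linearly with the gradient.
```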
A different group of carrier proteins called glucose transport proteins, or GLUTs, are involved in transporting glucose and other hexose sugars through plasma membranes within the body. Channel and carrier proteins transport material at different rates. Channel proteins transport much more quickly than do carrier proteins. Channel proteins facilitate diffusion at a rate of tens of millions of molecules per second, whereas carrier proteins work at a rate of a thousand to a million molecules per second. Osmosis Osmosis is the movement of water through a semipermeable membrane according to the concentration gradient of water across the membrane, which is inversely proportional to the concentration of solutes. While diffusion transports material across membranes and within cells, osmosis transports only water across a membrane, and the membrane limits the diffusion of solutes in the water. Not surprisingly, the aquaporins that facilitate water movement play a large role in osmosis, most prominently in red blood cells and the membranes of kidney tubules. Mechanism Osmosis is a special case of diffusion. Water, like other substances, moves from an area of high concentration to one of low concentration. An obvious question is: what makes water move at all? Imagine a beaker with a semipermeable membrane separating the two sides or halves (Figure 5.11). On both sides of the membrane the water level is the same, but there are different concentrations of a dissolved substance, or solute, that cannot cross the membrane (otherwise the concentrations on each side would be balanced by the solute crossing the membrane). If the volume of the solution on both sides of the membrane is the same, but the concentrations of solute are different, then there are different amounts of water, the solvent, on either side of the membrane. To illustrate this, imagine two full glasses of water. One has a single teaspoon of sugar in it, whereas the second one contains one-quarter cup of sugar. If the total volume of the solutions in both cups is the same, which cup contains more water? Because the large amount of sugar in the second cup takes up much more space than the teaspoon of sugar in the first cup, the first cup has more water in it. Returning to the beaker example, recall that it has a mixture of solutes on either side of the membrane. A principle of diffusion is that the molecules move around and will spread evenly throughout the medium if they can. However, only the material capable of getting through the membrane will diffuse through it. In this example, the solute cannot diffuse through the membrane, but the water can. Water has a concentration gradient in this system. Thus, water will diffuse down its concentration gradient, crossing the membrane to the side where it is less concentrated. This diffusion of water through the membrane—osmosis—will continue until the concentration gradient of water goes to zero or until the hydrostatic pressure of the water balances the osmotic pressure. Osmosis proceeds constantly in living systems. Tonicity Tonicity describes how an extracellular solution can change the volume of a cell by affecting osmosis. A solution's tonicity often directly correlates with the osmolarity of the solution. Osmolarity describes the total solute concentration of the solution.
A solution with low osmolarity has a greater number of water molecules relative to the number of solute particles; a solution with high osmolarity has fewer water molecules with respect to solute particles. In a situation in which solutions of two different osmolarities are separated by a membrane permeable to water, though not to the solute, water will move from the side of the membrane with lower osmolarity (and more water) to the side with higher osmolarity (and less water). This effect makes sense if you remember that the solute cannot move across the membrane, and thus the only component in the system that can move—the water—moves along its own concentration gradient. An important distinction that concerns living systems is that osmolarity measures the number of particles (which may be molecules) in a solution. Therefore, a solution that is cloudy with cells may have a lower osmolarity than a solution that is clear, if the second solution contains more dissolved molecules than there are cells. Hypotonic Solutions Three terms—hypotonic, isotonic, and hypertonic—are used to relate the osmolarity of a cell to the osmolarity of the extracellular fluid that contains the cells. In a hypotonic situation, the extracellular fluid has lower osmolarity than the fluid inside the cell, and water enters the cell. (In living systems, the point of reference is always the cytoplasm, so the prefix hypo- means that the extracellular fluid has a lower concentration of solutes, or a lower osmolarity, than the cell cytoplasm.) It also means that the extracellular fluid has a higher concentration of water in the solution than does the cell. In this situation, water will follow its concentration gradient and enter the cell. Hypertonic Solutions As for a hypertonic solution, the prefix hyper- refers to the extracellular fluid having a higher osmolarity than the cell's cytoplasm; therefore, the fluid contains less water than the cell does. Because the cell has a relatively higher concentration of water, water will leave the cell. Isotonic Solutions In an isotonic solution, the extracellular fluid has the same osmolarity as the cell. If the osmolarity of the cell matches that of the extracellular fluid, there will be no net movement of water into or out of the cell, although water will still move in and out. Blood cells and plant cells in hypertonic, isotonic, and hypotonic solutions take on characteristic appearances (Figure 5.12). Art Connection A doctor injects a patient with what the doctor thinks is an isotonic saline solution. The patient dies, and an autopsy reveals that many red blood cells have been destroyed. Do you think the solution the doctor injected was really isotonic? Link to Learning For a video illustrating the process of diffusion in solutions, visit this site.
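The three tonicity terms reduce to a single comparison of osmolarities, and a few lines of code make the logic explicit. A minimal sketch; the 300 mOsm/L reference value is an approximate, commonly cited figure for human cells (an assumption here, not a number from this text):

```python
# Classify an extracellular solution relative to the cytoplasm and report
# the direction of net water movement. Water moves toward the side with
# the higher solute concentration (that is, the lower water concentration).

CELL_OSMOLARITY = 300  # mOsm/L, an approximate textbook value for human cells

def tonicity(extracellular_osmolarity, cell_osmolarity=CELL_OSMOLARITY):
    if extracellular_osmolarity < cell_osmolarity:
        return "hypotonic: net water movement into the cell (cell swells)"
    if extracellular_osmolarity > cell_osmolarity:
        return "hypertonic: net water movement out of the cell (cell shrinks)"
    return "isotonic: no net water movement"

print(tonicity(150))  # a dilute "saline" like the Art Connection's: hypotonic,
                      # so red blood cells swell and may lyse
print(tonicity(300))  # matched solution: isotonic
print(tonicity(600))  # concentrated solution: hypertonic, the cell crenates
```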
Tonicity in Living Systems In a hypotonic environment, water enters a cell, and the cell swells. In an isotonic condition, the relative concentrations of solute and solvent are equal on both sides of the membrane. There is no net water movement; therefore, there is no change in the size of the cell. In a hypertonic solution, water leaves a cell and the cell shrinks. If either the hypo- or hyper- condition goes to excess, the cell's functions become compromised, and the cell may be destroyed. A red blood cell will burst, or lyse, when it swells beyond the plasma membrane's capability to expand. Remember, the membrane resembles a mosaic, with discrete spaces between the molecules composing it. If the cell swells, and the spaces between the lipids and proteins become too large, the cell will break apart. In contrast, when excessive amounts of water leave a red blood cell, the cell shrinks, or crenates. This has the effect of concentrating the solutes left in the cell, making the cytosol denser and interfering with diffusion within the cell. The cell's ability to function will be compromised, and the cell may die. Various living things have ways of controlling the effects of osmosis—a mechanism called osmoregulation. Some organisms, such as plants, fungi, bacteria, and some protists, have cell walls that surround the plasma membrane and prevent cell lysis in a hypotonic solution. The plasma membrane can only expand to the limit of the cell wall, so the cell will not lyse. In fact, the cytoplasm in plants is always slightly hypertonic to the cellular environment, and water will always enter a cell if water is available. This inflow of water produces turgor pressure, which stiffens the cell walls of the plant (Figure 5.13). In nonwoody plants, turgor pressure supports the plant. Conversely, if the plant is not watered, the extracellular fluid will become hypertonic, causing water to leave the cell. In this condition, the cell does not shrink because the cell wall is not flexible. However, the cell membrane detaches from the wall and constricts the cytoplasm. This is called plasmolysis. Plants lose turgor pressure in this condition and wilt (Figure 5.14). Tonicity is a concern for all living things. For example, paramecia and amoebas, which are protists that lack cell walls, have contractile vacuoles. This vesicle collects excess water from the cell and pumps it out, keeping the cell from lysing as it takes on water from its environment (Figure 5.15). Many marine invertebrates have internal salt levels matched to their environments, making them isotonic with the water in which they live. Fish, however, must spend approximately five percent of their metabolic energy maintaining osmotic homeostasis. Freshwater fish live in an environment that is hypotonic to their cells. These fish actively take in salt through their gills and excrete diluted urine to rid themselves of excess water. Saltwater fish live in the reverse environment, which is hypertonic to their cells, and they secrete salt through their gills and excrete highly concentrated urine. In vertebrates, the kidneys regulate the amount of water in the body. Osmoreceptors are specialized cells in the brain that monitor the concentration of solutes in the blood. If the levels of solutes increase beyond a certain range, a hormone is released that retards water loss through the kidney and dilutes the blood to safer levels. Animals also have high concentrations of albumin, which is produced by the liver, in their blood. This protein is too large to pass easily through plasma membranes and is a major factor in controlling the osmotic pressures applied to tissues. 5.3 Active Transport Learning Objectives By the end of this section, you will be able to: Understand how electrochemical gradients affect ions Distinguish between primary active transport and secondary active transport Active transport mechanisms require the use of the cell's energy, usually in the form of adenosine triphosphate (ATP).
If a substance must move into the cell against its concentration gradient—that is, if the concentration of the substance inside the cell is greater than its concentration in the extracellular fluid (and vice versa)—the cell must use energy to move the substance. Some active transport mechanisms move small-molecular weight materials, such as ions, through the membrane. Other mechanisms transport much larger molecules. Electrochemical Gradient We have discussed simple concentration gradients—differential concentrations of a substance across a space or a membrane—but in living systems, gradients are more complex. Because ions move into and out of cells and because cells contain proteins that do not move across the membrane and are mostly negatively charged, there is also an electrical gradient, a difference of charge, across the plasma membrane. The interior of living cells is electrically negative with respect to the extracellular fluid in which they are bathed, and at the same time, cells have higher concentrations of potassium (K+) and lower concentrations of sodium (Na+) than does the extracellular fluid. So in a living cell, the concentration gradient of Na+ tends to drive it into the cell, and the electrical gradient of Na+ (a positive ion) also tends to drive it inward to the negatively charged interior. The situation is more complex, however, for other elements such as potassium. The electrical gradient of K+, a positive ion, also tends to drive it into the cell, but the concentration gradient of K+ tends to drive K+ out of the cell (Figure 5.16). The combined gradient of concentration and electrical charge that affects an ion is called its electrochemical gradient. Art Connection Injection of a potassium solution into a person's blood is lethal; this is used in capital punishment and euthanasia. Why do you think a potassium solution injection is lethal?
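The chapter treats the electrochemical gradient qualitatively, but the balance point between an ion's concentration gradient and the electrical gradient can be made precise. The standard expression, not given in this text, is the Nernst equation, which states the membrane voltage at which the two gradients acting on an ion X with charge z exactly cancel:

$$E_X = \frac{RT}{zF}\,\ln\frac{[X]_{\text{out}}}{[X]_{\text{in}}}$$

where R is the gas constant, T the absolute temperature, and F the Faraday constant. Because K+ is more concentrated inside the cell than outside, the ratio in the logarithm is less than one and the equilibrium potential for K+ is negative; raising extracellular K+ pushes that equilibrium toward zero, which hints at why the potassium injection in the Art Connection above is so disruptive to electrically excitable cells.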
Moving Against a Gradient To move substances against a concentration or electrochemical gradient, the cell must use energy. This energy is harvested from ATP generated through the cell's metabolism. Active transport mechanisms, collectively called pumps, work against electrochemical gradients. Small substances constantly pass through plasma membranes. Active transport maintains concentrations of ions and other substances needed by living cells in the face of these passive movements. Much of a cell's supply of metabolic energy may be spent maintaining these processes. (Most of a red blood cell's metabolic energy is used to maintain the imbalance between exterior and interior sodium and potassium levels required by the cell.) Because active transport mechanisms depend on a cell's metabolism for energy, they are sensitive to many metabolic poisons that interfere with the supply of ATP. Two mechanisms exist for the transport of small-molecular weight material and small molecules. Primary active transport moves ions across a membrane and creates a difference in charge across that membrane; it is directly dependent on ATP. Secondary active transport describes the movement of material driven by the electrochemical gradient established by primary active transport; it does not directly require ATP. Carrier Proteins for Active Transport An important membrane adaptation for active transport is the presence of specific carrier proteins or pumps to facilitate movement: there are three types of these proteins or transporters (Figure 5.17). A uniporter carries one specific ion or molecule. A symporter carries two different ions or molecules, both in the same direction. An antiporter also carries two different ions or molecules, but in different directions. All of these transporters can also transport small, uncharged organic molecules like glucose. These three types of carrier proteins are also found in facilitated diffusion, but they do not require ATP to work in that process. Some examples of pumps for active transport are Na+-K+ ATPase, which carries sodium and potassium ions, and H+-K+ ATPase, which carries hydrogen and potassium ions. Both of these are antiporter carrier proteins. Two other carrier proteins are Ca2+ ATPase and H+ ATPase, which carry only calcium and only hydrogen ions, respectively. Both are pumps. Primary Active Transport The primary active transport that moves sodium and potassium ions across the membrane allows secondary active transport to occur. The second transport method is still considered active because it depends on the use of energy, as does primary transport (Figure 5.18). One of the most important pumps in animal cells is the sodium-potassium pump (Na+-K+ ATPase), which maintains the electrochemical gradient (and the correct concentrations of Na+ and K+) in living cells. The sodium-potassium pump moves three Na+ out of the cell for every two K+ it moves in. The Na+-K+ ATPase exists in two forms, depending on its orientation to the interior or exterior of the cell and its affinity for either sodium or potassium ions. The process consists of the following six steps: (1) With the enzyme oriented towards the interior of the cell, the carrier has a high affinity for sodium ions, and three such ions bind to the protein. (2) ATP is hydrolyzed by the protein carrier, and a low-energy phosphate group attaches to it. (3) As a result, the carrier changes shape and re-orients itself towards the exterior of the membrane; the protein's affinity for sodium decreases, and the three sodium ions leave the carrier. (4) The shape change increases the carrier's affinity for potassium ions, and two such ions attach to the protein; subsequently, the low-energy phosphate group detaches from the carrier. (5) With the phosphate group removed and potassium ions attached, the carrier protein repositions itself towards the interior of the cell. (6) The carrier protein, in its new configuration, has a decreased affinity for potassium, so the two ions are released into the cytoplasm; the protein now has a higher affinity for sodium ions, and the process starts again. Several things have happened as a result of this process. At this point, there are more sodium ions outside of the cell than inside and more potassium ions inside than out. For every three ions of sodium that move out, two ions of potassium move in. This results in the interior being slightly more negative relative to the exterior. This difference in charge is important in creating the conditions necessary for the secondary process. The sodium-potassium pump is, therefore, an electrogenic pump (a pump that creates a charge imbalance), creating an electrical imbalance across the membrane and contributing to the membrane potential. Link to Learning Watch this video to see a simulation of active transport in a sodium-potassium ATPase.
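Because the pump exports three Na+ for every two K+ it imports, each cycle moves one net positive charge out of the cell; this bookkeeping is exactly what makes the pump electrogenic. A minimal arithmetic sketch (the cycle counts are invented for illustration):

```python
# Net charge moved by the sodium-potassium pump (Na+-K+ ATPase).
# Each cycle hydrolyzes one ATP, exports 3 Na+, and imports 2 K+,
# so one net elementary positive charge leaves the cell per cycle.

NA_OUT_PER_CYCLE = 3  # Na+ ions pumped out per ATP hydrolyzed
K_IN_PER_CYCLE = 2    # K+ ions pumped in per ATP hydrolyzed

def net_charge_exported(cycles):
    """Net elementary positive charges moved out of the cell."""
    return cycles * (NA_OUT_PER_CYCLE - K_IN_PER_CYCLE)

print(net_charge_exported(1))      # 1 net positive charge per cycle
print(net_charge_exported(1_000))  # 1000 cycles: 1000 net charges exported
# This running export of positive charge leaves the interior slightly
# negative relative to the exterior, contributing to the membrane potential.
```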
As sodium ion concentrations build outside of the plasma membrane because of the action of the primary active transport process, an electrochemical gradient is created. If a channel protein exists and is open, the sodium ions will be pulled through the membrane. This movement is used to transport other substances that can attach themselves to the transport protein through the membrane (Figure 5.19). Many amino acids, as well as glucose, enter a cell this way. This secondary process is also used to store high-energy hydrogen ions in the mitochondria of plant and animal cells for the production of ATP. The potential energy that accumulates in the stored hydrogen ions is translated into kinetic energy as the ions surge through the channel protein ATP synthase, and that energy is used to convert ADP into ATP. Art Connection If the pH outside the cell decreases, would you expect the amount of amino acids transported into the cell to increase or decrease? 5.4 Bulk Transport Learning Objectives By the end of this section, you will be able to: Describe endocytosis, including phagocytosis, pinocytosis, and receptor-mediated endocytosis Understand the process of exocytosis In addition to moving small ions and molecules through the membrane, cells also need to remove and take in larger molecules and particles (see Table 5.2 for examples). Some cells are even capable of engulfing entire unicellular microorganisms. You might have correctly hypothesized that the uptake and release of large particles by the cell requires energy. A large particle, however, cannot pass through the membrane, even with energy supplied by the cell. Endocytosis Endocytosis is a type of active transport that moves particles, such as large molecules, parts of cells, and even whole cells, into a cell. There are different variations of endocytosis, but all share a common characteristic: the plasma membrane of the cell invaginates, forming a pocket around the target particle. The pocket pinches off, resulting in the particle being contained in a newly created intracellular vesicle formed from the plasma membrane. Phagocytosis Phagocytosis (the condition of “cell eating”) is the process by which large particles, such as whole cells or comparably large particles, are taken in by a cell. For example, when microorganisms invade the human body, a type of white blood cell called a neutrophil will remove the invaders through this process, surrounding and engulfing the microorganism, which is then destroyed by the neutrophil (Figure 5.20). In preparation for phagocytosis, a portion of the inward-facing surface of the plasma membrane becomes coated with a protein called clathrin, which stabilizes this section of the membrane. The coated portion of the membrane then extends from the body of the cell and surrounds the particle, eventually enclosing it. Once the vesicle containing the particle is enclosed within the cell, the clathrin disengages from the membrane and the vesicle merges with a lysosome for the breakdown of the material in the newly formed compartment (endosome). When accessible nutrients from the degradation of the vesicular contents have been extracted, the newly formed endosome merges with the plasma membrane and releases its contents into the extracellular fluid. The endosomal membrane again becomes part of the plasma membrane. Pinocytosis A variation of endocytosis is called pinocytosis.
This literally means “cell drinking” and was named at a time when the assumption was that the cell was purposefully taking in extracellular fluid. In reality, this is a process that takes in molecules, including water, which the cell needs from the extracellular fluid. Pinocytosis results in a much smaller vesicle than does phagocytosis, and the vesicle does not need to merge with a lysosome (Figure 5.21). A variation of pinocytosis is called potocytosis. This process uses a coating protein, called caveolin, on the cytoplasmic side of the plasma membrane, which performs a similar function to clathrin. The cavities in the plasma membrane that form the vacuoles have membrane receptors and lipid rafts in addition to caveolin. The vacuoles or vesicles formed in caveolae (singular caveola) are smaller than those in pinocytosis. Potocytosis is used to bring small molecules into the cell and to transport these molecules through the cell for their release on the other side of the cell, a process called transcytosis. Receptor-mediated Endocytosis A targeted variation of endocytosis employs receptor proteins in the plasma membrane that have a specific binding affinity for certain substances (Figure 5.22). In receptor-mediated endocytosis, as in phagocytosis, clathrin is attached to the cytoplasmic side of the plasma membrane. If uptake of a compound is dependent on receptor-mediated endocytosis and the process is ineffective, the material will not be removed from the tissue fluids or blood. Instead, it will stay in those fluids and increase in concentration. Some human diseases are caused by the failure of receptor-mediated endocytosis. For example, the form of cholesterol termed low-density lipoprotein or LDL (also referred to as “bad” cholesterol) is removed from the blood by receptor-mediated endocytosis. In the human genetic disease familial hypercholesterolemia, the LDL receptors are defective or missing entirely. People with this condition have life-threatening levels of cholesterol in their blood, because their cells cannot clear LDL particles from their blood. Although receptor-mediated endocytosis is designed to bring specific substances that are normally found in the extracellular fluid into the cell, other substances may gain entry into the cell at the same site. Flu viruses, diphtheria toxin, and cholera toxin all have sites that cross-react with normal receptor-binding sites and gain entry into cells. Link to Learning See receptor-mediated endocytosis in action, and click on different parts for a focused animation. Exocytosis The reverse of moving material into a cell is the process of exocytosis. Exocytosis is the opposite of the processes discussed above in that its purpose is to expel material from the cell into the extracellular fluid. Waste material is enveloped in a membrane that fuses with the interior of the plasma membrane. This fusion opens the membranous envelope on the exterior of the cell, and the waste material is expelled into the extracellular space (Figure 5.23). Other examples of cells releasing molecules via exocytosis include the secretion of proteins of the extracellular matrix and secretion of neurotransmitters into the synaptic cleft by synaptic vesicles.
Methods of Transport, Energy Requirements, and Types of Material Transported
Transport Method | Active/Passive | Material Transported
Diffusion | Passive | Small-molecular weight material
Osmosis | Passive | Water
Facilitated transport/diffusion | Passive | Sodium, potassium, calcium, glucose
Primary active transport | Active | Sodium, potassium, calcium
Secondary active transport | Active | Amino acids, lactose
Phagocytosis | Active | Large macromolecules, whole cells, or cellular structures
Pinocytosis and potocytosis | Active | Small molecules (liquids/water)
Receptor-mediated endocytosis | Active | Large quantities of macromolecules
Table 5.2
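The electrochemical gradients discussed above can be quantified with the Nernst equation, a standard result from physiology that this chapter does not itself present. The short Python sketch below uses typical mammalian ion concentrations—assumed illustrative values, not figures from this text—to show why the K+ and Na+ gradients pull in opposite directions across the membrane.

```python
import math

def nernst_potential(z, conc_out, conc_in, temp_k=310.0):
    """Equilibrium (Nernst) potential, in volts, for an ion of charge z.

    E = (R*T / (z*F)) * ln([out]/[in]); R and F are physical constants.
    """
    R = 8.314      # gas constant, J/(mol*K)
    F = 96485.0    # Faraday constant, C/mol
    return (R * temp_k) / (z * F) * math.log(conc_out / conc_in)

# Assumed typical concentrations in mM (illustrative only).
print(f"E_K  = {nernst_potential(+1, 5.0, 140.0) * 1000:+.1f} mV")   # about -89 mV
print(f"E_Na = {nernst_potential(+1, 145.0, 12.0) * 1000:+.1f} mV")  # about +66 mV
```

The negative value for K+ and the positive value for Na+ reflect the text's point: the K+ concentration gradient pushes K+ out even as the cell's negative interior pulls it in, while both gradients drive Na+ inward.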
principles_of_accounting,_volume_2:_managerial_accounting
Summary 11.1 Describe Capital Investment Decisions and How They Are Applied Capital investment decisions select a project for future business development. These projects typically require a large outlay of cash, provide an uncertain return, and tie up resources for an extended period of time. Having a large number of alternatives requires a careful budgeting and analysis process. This process includes determining capital needs, exploring resource limitations, establishing baseline criteria for alternatives, evaluating alternatives using screening and preference decisions, and making the decision. Screening decisions help eliminate undesirable alternatives that may waste time and money. Preference decisions rank alternatives emerging from the screening process to help make the final decision. Both decision avenues use capital budgeting methods to select between alternatives. 11.2 Evaluate the Payback and Accounting Rate of Return in Capital Investment Decisions The payback method determines how long it will take a company to recoup its investment. Annual cash flows are compared to the initial investment, but the time value of money is not considered and cash flows beyond the payback period are ignored. The accounting rate of return considers incremental net income as it compares to the initial investment. Time value of money is not considered with this method. Incremental net income determines the net income expected if the company accepts the investment opportunity, as opposed to not investing. Incremental net income is the difference between incremental revenues and incremental expenses. 11.3 Explain the Time Value of Money and Calculate Present and Future Values of Lump Sums and Annuities A dollar is worth more today than it will be in the future. This is due to many reasons, including the power of investment in today’s economy, market inflation, and the ability to use the money in the present to make more money in the future, with interest. Present value expresses the future value of a dollar in today’s (present) value. Present value tables, showing the present value factor at the intersection of periods and interest rate, are multiplied by the final payout amount to compute today’s value. The future value shows what the value of an investment will be after a certain period of time. Future value tables, showing the future value factor at the intersection of periods and interest rate, are multiplied by the initial investment amount to compute future value. A lump sum is a one-time payment after a certain period of time, whereas an ordinary annuity involves equal installments in a series of payments over time. A business can use lump sum or ordinary annuity calculations for present value and future value calculations. 11.4 Use Discounted Cash Flow Models to Make Capital Investment Decisions The discounted cash flow model assigns values to a project’s alternatives using the time value of money and discounts future cash flows back to present value. Two measurement tools are used in discounted cash flows: net present value and internal rate of return. Net present value considers an expected rate of return, converts future cash flows into present value, and compares that to the initial investment cost. If the outcome is positive, the company would look to invest in the project. Internal rate of return shows the profitability of an investment, where NPV equals zero. If the corresponding interest rate exceeds the expected rate of return, the company would invest in the project.
11.5 Compare and Contrast Non-Time Value-Based Methods and Time Value-Based Methods in Capital Investment Decisions The payback method uses a simple calculation, removes unviable alternatives quickly, and considers investment risk. However, it disregards the time value of money, ignores profitability, and does not consider cash flows after recouping the investment. The accounting rate of return uses a simple calculation, considers profitability, and removes unviable options quickly. However, it disregards the time value of money, values return rates more than risk, and ignores external influential factors. Net present value considers the time value of money, can rank higher-risk investments, and compares future earnings in today’s value. However, it cannot easily compare dissimilar investment opportunities, it uses a more difficult calculation, and it has limitations with the estimation of an expected rate of return. Internal rate of return considers the time value of money, removes the dollar bias, and leads a company to a decision, unlike non-time value methods. However, it is biased toward return rates rather than toward consideration of higher-risk investments, it is a more difficult calculation, and it does not consider the time it will take to recoup an investment.
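As a rough illustration of the time-value tools summarized above, the following Python sketch computes the present value of a lump sum and of an ordinary annuity, a project NPV, and an IRR found by bisection. The function names are our own, and the cash-flow figures are invented for demonstration; they do not come from any example in the chapter.

```python
def pv_lump_sum(fv, rate, periods):
    """Present value of a single future payment."""
    return fv / (1 + rate) ** periods

def pv_ordinary_annuity(payment, rate, periods):
    """Present value of equal end-of-period payments."""
    return payment * (1 - (1 + rate) ** -periods) / rate

def npv(rate, initial_investment, cash_flows):
    """Sum of discounted future cash flows, less the initial outlay."""
    total = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
    return total - initial_investment

def irr(initial_investment, cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    """Rate at which NPV equals zero, found by bisection.

    Assumes NPV is positive at `lo` and negative at `hi`.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, initial_investment, cash_flows) > 0:
            lo = mid   # NPV still positive: break-even rate is higher
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical project: $50,000 outlay returning $15,000 a year for 5 years.
flows = [15_000] * 5
print(round(pv_lump_sum(15_000, 0.10, 5), 2))          # 9313.82
print(round(pv_ordinary_annuity(15_000, 0.10, 5), 2))  # 56861.80
print(round(npv(0.10, 50_000, flows), 2))              # 6861.80 -> positive, invest
print(round(irr(50_000, flows), 4))                    # ~0.1524 (about 15.2%)
```

The discount factor 1/(1 + rate)^periods plays the same role as the present value factor looked up in the tables described in section 11.3; the code simply computes the factor instead of reading it from a table.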
Chapter Outline 11.1 Describe Capital Investment Decisions and How They Are Applied 11.2 Evaluate the Payback and Accounting Rate of Return in Capital Investment Decisions 11.3 Explain the Time Value of Money and Calculate Present and Future Values of Lump Sums and Annuities 11.4 Use Discounted Cash Flow Models to Make Capital Investment Decisions 11.5 Compare and Contrast Non-Time Value-Based Methods and Time Value-Based Methods in Capital Investment Decisions Why It Matters Jerry Price owns Milling Manufacturing, a production facility geared toward entrepreneurial product development. Initially, Jerry purchased several milling machines, but after seven years, the machines have become obsolete due to technological advances. Jerry must purchase new machines to continue business growth, and there are several options available. How does he choose the best machines for his business? What factors must he consider before purchase? Jerry must consider several important factors—both financial and non-financial—as he makes this decision. First, he needs to consider the commitment of his initial capital investment. He also needs to compare differences between options such as warranties, the production capacities of different machines, and maintenance and repair costs. Another factor is the useful life of the new equipment—in other words, both its physical and its technological life. He will also consider how long it will take to recoup the cost of the investment, the impact on cash flow, and how the passage of time affects the value of the asset to the organization—its monetary value, which considers depreciation to determine what the asset is actually worth to the organization in terms of dollars (i.e., “what could we sell it for?”). Jerry will consider the value of the dollar invested today in purchasing the machine as opposed to the value of the dollar in the future that might be better spent on another project. This last factor is significant because the new equipment will probably provide part of his down payment on future replacement equipment. There are also nonfinancial factors to consider, such as changes to customer satisfaction and employee morale. Jerry knows this equipment choice goes well beyond color or price preferences. The decision has a long-lasting influence on company direction and opportunity, and he needs to utilize capital budgeting analysis to help him make this decision.
[ { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> Capital investment decisions occur on a frequent basis , and it is important for a company to determine its project needs to establish a path for business development . <hl> This decision is not as obvious or as simple as it may seem . <hl> There is a lot at stake with a large outlay of capital , and the long-term financial impact may be unknown due to the capital outlay decreasing or increasing over time . <hl> To help reduce the risk involved in capital investment , a process is required to thoughtfully select the best opportunity for the company . <hl> Capital investment ( sometimes also referred to as capital budgeting ) is a company ’ s contribution of funds toward the acquisition of long-lived ( long-term or capital ) assets for further growth . <hl> Long-term assets can include investments such as the purchase of new equipment , the replacement of old machinery , the expansion of operations into new facilities , or even the expansion into new products or markets . These capital expenditures are different from operating expenses . An operating expense is a regularly-occurring expense used to maintain the current operations of the company , but a capital expenditure is one used to grow the business and produce a future economic benefit .", "hl_sentences": "Capital investment decisions occur on a frequent basis , and it is important for a company to determine its project needs to establish a path for business development . There is a lot at stake with a large outlay of capital , and the long-term financial impact may be unknown due to the capital outlay decreasing or increasing over time . Capital investment ( sometimes also referred to as capital budgeting ) is a company ’ s contribution of funds toward the acquisition of long-lived ( long-term or capital ) assets for further growth .", "question": { "cloze_format": "Capital investment decisions often involve all of the following except ________.", "normal_format": "Which of the following is NOT often involved in capital investment decisions?", "question_choices": [ "qualitative factors or considerations", "short periods of time", "large amounts of money", "risk" ], "question_id": "fs-idm379358192", "question_text": "Capital investment decisions often involve all of the following except ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "political prominence" }, "bloom": null, "hl_context": "If one or more of the alternatives meets or exceeds the minimum expectations , a preference decision is considered . <hl> A preference decision compares potential projects that meet screening decision criteria and will rank the alternatives in order of importance , feasibility , or desirability to differentiate among alternatives . <hl> Once the company determines the rank order , it is able to make a decision on the best avenue to pursue ( Figure 11.2 ) . 
When making the final decision , all financial and non-financial factors are deliberated .", "hl_sentences": "A preference decision compares potential projects that meet screening decision criteria and will rank the alternatives in order of importance , feasibility , or desirability to differentiate among alternatives .", "question": { "cloze_format": "Preference decisions compare potential projects that meet screening decision criteria and will be ranked in their preference order to differentiate between alternatives with respect to all of the following characteristics except ________.", "normal_format": "Preference decisions compare potential projects that meet screening decision criteria and will NOT be ranked in their preference order to differentiate between alternatives with respect to which of the following characteristic?", "question_choices": [ "political prominence", "feasibility", "desirability", "importance" ], "question_id": "fs-idm482789072", "question_text": "Preference decisions compare potential projects that meet screening decision criteria and will be ranked in their preference order to differentiate between alternatives with respect to all of the following characteristics except ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "Alternatives are the options available for investment . For example , if a company needs to purchase new printing equipment , all possible printing equipment options are considered alternatives . <hl> Since there are so many alternative possibilities , a company will need to establish baseline criteria for the investment . <hl> Baseline criteria are measurement methods that can help differentiate among alternatives . <hl> Common measurement methods include the payback method , accounting rate of return , net present value , or internal rate of return . <hl> These methods have varying degrees of complexity and will be discussed in greater detail in Evaluate the Payback and Accounting Rate of Return in Capital Investment Decisions and Explain the Time Value of Money and Calculate Present and Future Values of Lump Sums and Annuities", "hl_sentences": "Since there are so many alternative possibilities , a company will need to establish baseline criteria for the investment . Common measurement methods include the payback method , accounting rate of return , net present value , or internal rate of return .", "question": { "cloze_format": "The ___ would not be an acceptable baseline criterion.", "normal_format": "The third step for making a capital investment decision is to establish baseline criteria for alternatives. Which of the following would not be an acceptable baseline criterion?", "question_choices": [ "payback method", "accounting rate of return", "internal rate of return", "inventory turnover" ], "question_id": "fs-idm385854864", "question_text": "The third step for making a capital investment decision is to establish baseline criteria for alternatives. Which of the following would not be an acceptable baseline criterion?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> Discounting is the procedure used to calculate the present value of an individual payment or a series of payments that will be received in the future based on an assumed interest rate or return on investment . 
<hl> Let ’ s look at a simple example to explain the concept of discounting .", "hl_sentences": "Discounting is the procedure used to calculate the present value of an individual payment or a series of payments that will be received in the future based on an assumed interest rate or return on investment .", "question": { "cloze_format": "The process that determines the present value of a single payment or stream of payments to be received is ________.", "normal_format": "Which process determines the present value of a single payment or stream of payments to be received?", "question_choices": [ "compounding", "discounting", "annuity", "lump-sum" ], "question_id": "fs-idm198862496", "question_text": "The process that determines the present value of a single payment or stream of payments to be received is ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "compounding" }, "bloom": null, "hl_context": "In our current example , interest is calculated once a year . However , interest can also be calculated in numerous ways . Some of the most common interest calculations are daily , monthly , quarterly , or annually . One concept important to understand in interest calculations is that of compounding . <hl> Compounding is the process of earning interest on previous interest earned , along with the interest earned on the original investment . <hl>", "hl_sentences": "Compounding is the process of earning interest on previous interest earned , along with the interest earned on the original investment .", "question": { "cloze_format": "The process of reinvesting interest earned to generate additional earnings over time is ________.", "normal_format": "Which is the process of reinvesting interest earned to generate additional earnings over time?", "question_choices": [ "compounding", "discounting", "annuity", "lump-sum" ], "question_id": "fs-idm200590624", "question_text": "The process of reinvesting interest earned to generate additional earnings over time is ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "payback period method" }, "bloom": null, "hl_context": "A company will be presented with many alternatives for investment . It is up to management to analyze each investment ’ s possibilities using capital budgeting methods . The company will want to first screen each possibility with the payback method and accounting rate of return . <hl> The payback method will show the company how long it will take to recoup their investment , while accounting rate of return gives them the profitability of the alternatives . <hl> <hl> This screening will typically get rid of non-viable options and allow the company to further consider a select few alternatives . <hl> A more detailed analysis is found in time-value methods , such as net present value and internal rate of return . Net present value converts future cash flows into today ’ s valuation for comparability purposes to see if an initial outlay of cash is worth future earnings . The internal rate of return determines the minimum expected return on a project given the present value of cash flow expectations and the initial investment . Analyzing these opportunities , with consideration given to time value of money , allows a company to make an informed decision on how to make large capital expenditures . 
<hl> The internal rate of return ( IRR ) and the net present value ( NPV ) methods are types of discounted cash flow analysis that require taking estimated future payments from a project and discounting them into present values . <hl> The difference between the two methods is that the NPV calculation determines the project ’ s estimated return in dollars and the IRR provides the percentage rate of return from a project needed to break even . As previously discussed , time value of money methods assume that the value of money today is worth more now than in the future . <hl> The payback period and accounting rate of return methods do not consider this concept when performing calculations and analyzing results . <hl> <hl> That is why they are typically only used as basic screening tools . <hl> <hl> To decide the best option between alternatives , a company performs preference measurement using tools , such as net present value and internal rate of return that do consider the time value of money concept . <hl> <hl> Net present value ( NPV ) discounts future cash flows to their present value at the expected rate of return and compares that to the initial investment . <hl> NPV does not determine the actual rate of return earned by a project . The internal rate of return ( IRR ) shows the profitability or growth potential of an investment at the point where NPV equals zero , so it determines the actual rate of return a project earns . As the name implies , net present value is stated in dollars , whereas the internal rate of return is stated as an interest rate . Both NPV and IRR require the company to determine a rate of return to be used as the target return rate , such as the minimum required rate of return or the weighted average cost of capital , which will be discussed in Balanced Scorecard and Other Performance Measures . <hl> The discount cash flow model assigns a value to a business opportunity using time-value measurement tools . <hl> The model considers future cash flows of the project , discounts them back to present time , and compares the outcome to an expected rate of return . If the outcome exceeds the expected rate of return and initial investment cost , the company would consider the investment . If the outcome does not exceed the expected rate of return or the initial investment , the company may not consider investment . When considering the discounted cash flow process , the time value of money plays a major role .", "hl_sentences": "The payback method will show the company how long it will take to recoup their investment , while accounting rate of return gives them the profitability of the alternatives . This screening will typically get rid of non-viable options and allow the company to further consider a select few alternatives . The internal rate of return ( IRR ) and the net present value ( NPV ) methods are types of discounted cash flow analysis that require taking estimated future payments from a project and discounting them into present values . The payback period and accounting rate of return methods do not consider this concept when performing calculations and analyzing results . That is why they are typically only used as basic screening tools . To decide the best option between alternatives , a company performs preference measurement using tools , such as net present value and internal rate of return that do consider the time value of money concept . 
Net present value ( NPV ) discounts future cash flows to their present value at the expected rate of return and compares that to the initial investment . The discount cash flow model assigns a value to a business opportunity using time-value measurement tools .", "question": { "cloze_format": "___ does not assign a value to a business opportunity using time-value measurement tools.", "normal_format": "Which of the following does not assign a value to a business opportunity using time-value measurement tools?", "question_choices": [ "internal rate of return (IRR) method", "net present value (NPV)", "discounted cash flow model", "payback period method" ], "question_id": "fs-idm239047488", "question_text": "Which of the following does not assign a value to a business opportunity using time-value measurement tools?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> The internal rate of return ( IRR ) and the net present value ( NPV ) methods are types of discounted cash flow analysis that require taking estimated future payments from a project and discounting them into present values . <hl> <hl> The difference between the two methods is that the NPV calculation determines the project ’ s estimated return in dollars and the IRR provides the percentage rate of return from a project needed to break even . <hl> As previously discussed , time value of money methods assume that the value of money today is worth more now than in the future . The payback period and accounting rate of return methods do not consider this concept when performing calculations and analyzing results . That is why they are typically only used as basic screening tools . To decide the best option between alternatives , a company performs preference measurement using tools , such as net present value and internal rate of return that do consider the time value of money concept . <hl> Net present value ( NPV ) discounts future cash flows to their present value at the expected rate of return and compares that to the initial investment . <hl> NPV does not determine the actual rate of return earned by a project . The internal rate of return ( IRR ) shows the profitability or growth potential of an investment at the point where NPV equals zero , so it determines the actual rate of return a project earns . As the name implies , net present value is stated in dollars , whereas the internal rate of return is stated as an interest rate . Both NPV and IRR require the company to determine a rate of return to be used as the target return rate , such as the minimum required rate of return or the weighted average cost of capital , which will be discussed in Balanced Scorecard and Other Performance Measures .", "hl_sentences": "The internal rate of return ( IRR ) and the net present value ( NPV ) methods are types of discounted cash flow analysis that require taking estimated future payments from a project and discounting them into present values . The difference between the two methods is that the NPV calculation determines the project ’ s estimated return in dollars and the IRR provides the percentage rate of return from a project needed to break even . 
Net present value ( NPV ) discounts future cash flows to their present value at the expected rate of return and compares that to the initial investment .", "question": { "cloze_format": "The ___ discounts future cash flows to their present value at the expected rate of return, and compares that to the initial investment.", "normal_format": "Which of the following discounts future cash flows to their present value at the expected rate of return, and compares that to the initial investment?", "question_choices": [ "internal rate of return (IRR) method", "net present value (NPV)", "discounted cash flow model", "future value method" ], "question_id": "fs-idm204855984", "question_text": "Which of the following discounts future cash flows to their present value at the expected rate of return, and compares that to the initial investment?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "internal rate of return (IRR) method" }, "bloom": null, "hl_context": "The internal rate of return ( IRR ) and the net present value ( NPV ) methods are types of discounted cash flow analysis that require taking estimated future payments from a project and discounting them into present values . <hl> The difference between the two methods is that the NPV calculation determines the project ’ s estimated return in dollars and the IRR provides the percentage rate of return from a project needed to break even . <hl> <hl> IRR is the discounted rate ( interest rate ) point at which NPV equals zero . <hl> In other words , the IRR is the point at which the present value cash inflows equal the initial investment cost . To consider investment , IRR needs to meet or exceed the required rate of return for the investment type . If IRR does not meet the required rate of return , the company will forgo investment . As previously discussed , time value of money methods assume that the value of money today is worth more now than in the future . The payback period and accounting rate of return methods do not consider this concept when performing calculations and analyzing results . That is why they are typically only used as basic screening tools . To decide the best option between alternatives , a company performs preference measurement using tools , such as net present value and internal rate of return that do consider the time value of money concept . Net present value ( NPV ) discounts future cash flows to their present value at the expected rate of return and compares that to the initial investment . NPV does not determine the actual rate of return earned by a project . <hl> The internal rate of return ( IRR ) shows the profitability or growth potential of an investment at the point where NPV equals zero , so it determines the actual rate of return a project earns . <hl> As the name implies , net present value is stated in dollars , whereas the internal rate of return is stated as an interest rate . Both NPV and IRR require the company to determine a rate of return to be used as the target return rate , such as the minimum required rate of return or the weighted average cost of capital , which will be discussed in Balanced Scorecard and Other Performance Measures .", "hl_sentences": "The difference between the two methods is that the NPV calculation determines the project ’ s estimated return in dollars and the IRR provides the percentage rate of return from a project needed to break even . IRR is the discounted rate ( interest rate ) point at which NPV equals zero . 
The internal rate of return ( IRR ) shows the profitability or growth potential of an investment at the point where NPV equals zero , so it determines the actual rate of return a project earns .", "question": { "cloze_format": "This calculation determines profitability or growth potential of an investment, expressed as a percentage, at the point where NPV equals zero ___ .", "normal_format": "Which calculation determines the profitability or growth potential of an investment, expressed as a percentage, at the point where NPV equals zero?", "question_choices": [ "internal rate of return (IRR) method", "net present value (NPV)", "discounted cash flow model", "future value method" ], "question_id": "fs-idm440036112", "question_text": "This calculation determines profitability or growth potential of an investment, expressed as a percentage, at the point where NPV equals zero" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "A company will be presented with many alternatives for investment . It is up to management to analyze each investment ’ s possibilities using capital budgeting methods . The company will want to first screen each possibility with the payback method and accounting rate of return . The payback method will show the company how long it will take to recoup their investment , while accounting rate of return gives them the profitability of the alternatives . This screening will typically get rid of non-viable options and allow the company to further consider a select few alternatives . A more detailed analysis is found in time-value methods , such as net present value and internal rate of return . Net present value converts future cash flows into today ’ s valuation for comparability purposes to see if an initial outlay of cash is worth future earnings . <hl> The internal rate of return determines the minimum expected return on a project given the present value of cash flow expectations and the initial investment . <hl> Analyzing these opportunities , with consideration given to time value of money , allows a company to make an informed decision on how to make large capital expenditures . IRR is the discounted rate ( interest rate ) point at which NPV equals zero . In other words , the IRR is the point at which the present value cash inflows equal the initial investment cost . <hl> To consider investment , IRR needs to meet or exceed the required rate of return for the investment type . <hl> <hl> If IRR does not meet the required rate of return , the company will forgo investment . <hl>", "hl_sentences": "The internal rate of return determines the minimum expected return on a project given the present value of cash flow expectations and the initial investment . To consider investment , IRR needs to meet or exceed the required rate of return for the investment type . If IRR does not meet the required rate of return , the company will forgo investment .", "question": { "cloze_format": "The IRR method assumes that cash flows are reinvested at ________.", "normal_format": "How does the IRR method assume that cash flows are reinvested? ", "question_choices": [ "the internal rate of return", "the company’s discount rate", "the lower of the company’s discount rate or internal rate of return", "an average of the internal rate of return and the discount rate" ], "question_id": "fs-idm205218496", "question_text": "The IRR method assumes that cash flows are reinvested at ________." }, "references_are_paraphrase": 0 } ]
11
11.1 Describe Capital Investment Decisions and How They Are Applied Assume that you own a small printing store that provides custom printing applications for general business use. Your printers are used daily, which is good for business but results in heavy wear on each printer. After some time, and after a few too many repairs, you consider whether it is best to continue to use the printers you have or to invest some of your money in a new set of printers. A capital investment decision like this one is not an easy one to make, but it is a common occurrence faced by companies every day. Companies will use a step-by-step process to determine their capital needs, assess their ability to invest in a capital project, and decide which capital expenditures are the best use of their resources. Fundamentals of Capital Investment Decisions Capital investment (sometimes also referred to as capital budgeting ) is a company’s contribution of funds toward the acquisition of long-lived (long-term or capital) assets for further growth. Long-term assets can include investments such as the purchase of new equipment, the replacement of old machinery, the expansion of operations into new facilities, or even the expansion into new products or markets. These capital expenditures are different from operating expenses. An operating expense is a regularly-occurring expense used to maintain the current operations of the company, but a capital expenditure is one used to grow the business and produce a future economic benefit. Capital investment decisions occur on a frequent basis, and it is important for a company to determine its project needs to establish a path for business development. This decision is not as obvious or as simple as it may seem. There is a lot at stake with a large outlay of capital, and the long-term financial impact may be unknown due to the capital outlay decreasing or increasing over time. To help reduce the risk involved in capital investment, a process is required to thoughtfully select the best opportunity for the company. The process for capital decision-making involves several steps: Determine capital needs for both new and existing projects. Identify and establish resource limitations. Establish baseline criteria for alternatives. Evaluate alternatives using screening and preference decisions. Make the decision. The company must first determine its needs by deciding what capital improvements require immediate attention. For example, the company may determine that certain machinery requires replacement before any new buildings are acquired for expansion. Or, the company may determine that the new machinery and building expansion both require immediate attention. This latter situation would require a company to consider how to choose which investment to pursue first, or whether to pursue both capital investments concurrently. Concepts In Practice Brexit The decision to invest money in capital expenditures may not only be impacted by internal company objectives, but also by external factors. In 2016, Great Britain voted to leave the European Union (EU) (termed “Brexit”), which separates their trade interests and single-market economy from other participating European nations. This has led to uncertainty for United Kingdom (UK) businesses. Because of this instability, capital spending slowed or remained stagnant immediately following the Brexit vote and has not yet recovered growth momentum. 
The largest decrease in capital spending has occurred in the expansions of businesses into new markets. The UK is expected to separate from the EU in 2019. 1 G. Jackson “UK Business Investment Stalls in Year since Brexit Vote.” The Financial Times. August 24, 2017. https://www.ft.com/content/daff3ffe-88ac-11e7-8bb1-5ba57d47eff7 The second step, exploring resource limitations, evaluates the company’s ability to invest in capital expenditures given the availability of funds and time. Sometimes a company may have enough resources to cover capital investments in many projects. Many times, however, they only have enough resources to invest in a limited number of opportunities. If this is the situation, the company must evaluate both the time and money needed to acquire each asset. Time allocation considerations can include employee commitments and project set-up requirements. Fund limitations may result from a lack of capital fundraising, tied-up capital in non-liquid assets, or extensive up-front acquisition costs that extend beyond investment means (Table 11.1). Once the ability to invest has been established, the company needs to establish baseline criteria for alternatives.
Resource Limitations
Time Considerations: Employee commitments; Project set-up; Time frame necessary to secure financing
Money Considerations: Lack of liquidity; Tied up in non-liquid assets; Up-front acquisition costs
Table 11.1 When resources are limited, capital budgeting procedures are needed.
Alternatives are the options available for investment. For example, if a company needs to purchase new printing equipment, all possible printing equipment options are considered alternatives. Since there are so many alternative possibilities, a company will need to establish baseline criteria for the investment. Baseline criteria are measurement methods that can help differentiate among alternatives. Common measurement methods include the payback method, accounting rate of return, net present value, or internal rate of return. These methods have varying degrees of complexity and will be discussed in greater detail in Evaluate the Payback and Accounting Rate of Return in Capital Investment Decisions and Explain the Time Value of Money and Calculate Present and Future Values of Lump Sums and Annuities. To evaluate alternatives, businesses will use the measurement methods to compare outcomes. The outcomes will not only be compared against other alternatives, but also against a predetermined rate of return on the investment (or minimum expectation) established for each project consideration. The rate of return concept is discussed in more detail in Balanced Scorecard and Other Performance Measures. A company may use experience or industry standards to predetermine factors used to evaluate alternatives. Alternatives will first be evaluated against the predetermined criteria for that investment opportunity, in a screening decision. The screening decision allows companies to remove alternatives that would be less desirable to pursue given their inability to meet basic standards. For example, if there were three different printing equipment options and a minimum return had been established, any printers that did not meet that minimum return requirement would be removed from consideration. If one or more of the alternatives meets or exceeds the minimum expectations, a preference decision is considered.
A preference decision compares potential projects that meet screening decision criteria and will rank the alternatives in order of importance, feasibility, or desirability to differentiate among alternatives. Once the company determines the rank order, it is able to make a decision on the best avenue to pursue ( Figure 11.2 ). When making the final decision, all financial and non-financial factors are deliberated. Ethical Considerations Volkswagen Diesel Emissions Scandal Sometimes a company makes capital decisions due to outside pressures or unforeseen circumstances. The New York Times reported in 2015 that the car company Volkswagen was “scarred by an emissions-cheating scandal,” and “would need to cut its budget next year for new technology and research—a reversal after years of increased spending aimed at becoming the world’s biggest carmaker.” 2 This was a huge setback for Volkswagen , not only because the company had budgeted and planned to become the largest car company in the world, but also because the scandal damaged its reputation and set it back financially. 2 Jack Ewing and Jad Mouawad. “VW Cuts Its R&D Budget in Face of Costly Emissions Scandal.” New York Times . November 20, 2015. https://www.nytimes.com/2015/11/21/business/international/volkswagen-emissions-scandal.html Volkswagen “set aside about 9 billion euros ($9.6 billion) to cover costs related to making the cars compliant with pollution regulations;” however, the sums were “unlikely to cover the costs of potential legal judgments or other fines.” 3 All of the costs related to the company’s unethical actions needed to be included in the capital budget, as company resources were limited. Volkswagen used capital budgeting procedures to allocate funds for buying back the improperly manufactured cars and paying any legal claims or penalties. Other companies might take other approaches, but an unethical action that results in lawsuits and fines often requires an adjustment to the capital decision-making process. 3 Jack Ewing and Jad Mouawad. “VW Cuts Its R&D Budget in Face of Costly Emissions Scandal.” New York Times . November 20, 2015. https://www.nytimes.com/2015/11/21/business/international/volkswagen-emissions-scandal.html Let’s broadly consider what the five-step process for capital decision-making looks like for Melanie’s Sewing Studio. Melanie owns a sewing studio that produces fabric patterns for wholesale. Determine capital needs for both new and existing projects. Upon review of her future needs, Melanie determines that her five-year-old commercial sewing machine could be replaced. The old machine is still working, but production has slowed in recent months with an increase in repair needs and replacement parts. Melanie expects a new sewing machine to make her production process more efficient, which could also increase her current business volume. She decides to explore the possibility of purchasing a new sewing machine. Identify and establish resource limitations. Melanie must consider if she has enough time and money to invest in a new sewing machine. The Sewing Studio has been in business for three years and has shown steady financial growth year over year. Melanie expects to make enough profit to afford a capital investment of $50,000. If she does purchase a new sewing machine, she will have to train her staff on how to use the machine and will have to cease production while the new machine is installed. She anticipates a loss of $20,000 for training and production time. 
The estimation of the $20,000 loss is based on the downtime in production for both labor and product output. Establish baseline criteria for alternatives. Melanie is considering two different sewing machines for purchase. Before she evaluates which option is a better investment, she must establish minimum requirements for the investment. She determines that the new machine must return her initial investment back to her in three years at a rate of 20%, and the initial investment cost cannot exceed her future earnings. This establishes a baseline for what she considers reasonable for this type of investment, and she will not consider any investment alternative that does not meet these minimum criteria. Evaluate alternatives using screening and preference decisions. Now that she has established minimum requirements for the new machine, she can evaluate each of these machines to see if they meet or exceed her criteria. The first sewing machine costs $45,000. She is expected to recoup her initial investment in two-and-a-half years. The return rate is 25%, and her future earnings would exceed the initial cost of the machine. The second machine will cost $55,000. She expects to recoup her initial investment in three years. The return rate is 18%, and her future earnings would be less than the initial cost of the machine. Make the decision. Melanie will now decide which sewing machine to invest in. The first machine meets or exceeds her established minimum requirements in cost, payback, return rate, and future earnings compared to the initial investment. For the second machine, the $55,000 cost exceeds the cash available for investment. In addition, the second machine does not meet the return rate of 20%, and the anticipated future earnings do not compare well to the value of the initial investment. Based on this information, Melanie would choose to purchase the first sewing machine. These steps make it seem as if narrowing down the alternatives and making a selection is a simple process. However, a company needs to use analysis techniques, including the payback method and the accounting rate of return method, as well as other, more sophisticated and complex techniques, to help them make screening and preference decisions. These techniques can assist management in making a final investment decision that is best for the company. We begin learning about these various screening and preference decisions in Evaluate the Payback and Accounting Rate of Return in Capital Investment Decisions. Link to Learning More and more companies are using capital expenditure software in budgeting analysis and management. One company using this software is Solarcentury, a United Kingdom-based solar company. Read this case study on Solarcentury’s advantages to capital budgeting resulting from this software investment to learn more. 11.2 Evaluate the Payback and Accounting Rate of Return in Capital Investment Decisions Many companies are presented with investment opportunities continuously and must sift through both viable and nonviable options to identify the best possible expenditure for business growth. The process to select the best option requires careful budgeting and analysis. In conducting their analysis, a company may use various evaluation methods with differing inputs and analysis features.
These methods are often broken into two broad categories: (1) those that consider the time value of money, or the fact that a dollar today differs from a dollar in the future due to inflation and the ability to invest today's money for future growth, and (2) those analysis methods that do not consider the time value of money. We will examine the non-time value methods first. Non-Time Value Methods Non-time value methods do not compare the value of a dollar today to the value of a dollar in the future and are often used as screening tools. Two non-time value evaluative methods are the payback method and the accounting rate of return. Fundamentals of the Payback Method The payback method (PM) computes the length of time it takes a company to recover its initial investment. In other words, it calculates how long it will take until either the amount earned or the costs saved are equal to or greater than the costs of the project. This can be useful when a company is focused solely on retrieving its funds from a project investment as quickly as possible. Businesses do not want their money tied up in capital assets that have limited liquidity. The longer money is unavailable, the less ability the company has to use these funds for other growth purposes. This extended length of time is also a concern because it produces a riskier opportunity. Therefore, a company would like to get its money returned to it as quickly as possible. One way to focus on this is to consider the payback period when making a capital budget decision. The payback method is limited in that it only considers the time frame to recoup an investment based on expected annual cash flows, and it doesn’t consider the effects of the time value of money. The payback period is calculated when there are even or uneven annual cash flows. Cash flow is money coming into or out of the company as a result of a business activity. A cash inflow can be money received or cost savings from a capital investment. A cash outflow can be money paid or increased cost expenditures from capital investment. Cash flow will estimate the ability of the company to pay long-term debt, its liquidity, and its ability to grow. Cash flows appear on the statement of cash flows. Cash flows are different than net income. Net income will represent all company activities affecting revenues and expenses regardless of the occurrence of a cash transaction and will appear on the income statement. A company will estimate the future cash inflows and outflows to be generated by the capital investment. It’s important to remember that the cash inflows can be caused by an increase in cash receipts or by a reduction in cash expenditures. For example, if a new piece of equipment would reduce the production costs for a company from $120,000 a year to $80,000 a year, we would consider this a $40,000 cash inflow. While the company does not actually receive the $40,000 in cash, it does save $40,000 in operating costs, giving it a positive cash inflow of $40,000. Cash flow can also be generated through increased production volume. For example, a company purchases a new building costing $100,000 that will allow it to house more space for production. This new space allows it to produce more product to sell, which increases cash sales by $300,000. The $300,000 is a new cash inflow. The difference between cash inflows and cash outflows is the net cash inflow or outflow, depending on which cash flow is larger.
Annual net cash flows are then related to the initial investment to determine a payback period in years. When the expected net annual cash flow is an even amount each period, payback can be computed as follows: Payback Period = Initial Investment ÷ Annual Net Cash Flow. The result is the number of years it will take to recover the cash made in the original investment. For example, a printing company is considering a printer with an initial investment cost of $150,000. They expect an annual net cash flow of $20,000. The payback period is Payback Period = $150,000 ÷ $20,000 = 7.5 years. The initial investment cost of $150,000 is divided by the annual cash flow of $20,000 to compute an expected payback period of 7.5 years. Depending on the company’s payback period requirements for this type of investment, they may pass this option through the screening process to be considered in a preference decision. For example, the company might require a payback period of 5 years. Since 7.5 years is greater than 5 years, the company would probably not consider moving this alternative to a preference decision. If the company required a payback period of 9 years, the company would consider moving this alternative to a preference decision, since the number of years is less than the requirement. When net annual cash flows are uneven over the years, as opposed to even as in the previous example, the company requires a more detailed calculation to determine payback. Uneven cash flows occur when different amounts are returned each year. In the previous printing company example, the initial investment cost was $150,000 and even cash flows were $20,000 per year. However, in most examples, organizations experience uneven cash flows in a multiple-year ownership period. For example, an uneven cash flow distribution might be a return of $10,000 in year one, $20,000 in years two and three, $15,000 in years four and five, and $20,000 in year six and beyond. In this case, then, the payback period is 8.5 years. In a second example of the payback period for uneven cash flows, consider a company that will need to determine the net cash flow for each period and figure out the point at which cash flows equal or exceed the initial investment. This could arise in the middle of a year, prompting a calculation to determine the partial year payback. The company would add the partial year payback to the prior years’ payback to get the payback period for uneven cash flows. For example, a company may make an initial investment of $40,000 and receive net cash flows of $10,000 in years one and two, $5,000 in years three and four, and $7,500 for years five and beyond. We know that somewhere between years 5 and 6, the company recovers the money. In years one and two they recovered a total of $20,000 ($10,000 + $10,000), in years three and four they recovered an additional $10,000 ($5,000 + $5,000), and in year five they recovered $7,500, for a total through year five of $37,500. This left an outstanding balance after year five of $2,500 ($40,000 – $37,500) to fully recover the costs of the investment. In year six, they had a cash flow of $7,500. This is more than they needed to recoup their initial investment. To get a more specific calculation, we need to compute the partial year’s payback. Partial Year Payback = $2,500 ÷ $7,500 = 0.33 years (rounded). Therefore, the total payback period is 5.33 years (5 years + 0.33 years).
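Both payback calculations are easy to express in a few lines of code. The Python sketch below is a minimal illustration: the helper names are our own, the even-flow call reproduces the printing-company example above, and the uneven-flow call reproduces the $40,000 example.

```python
def payback_even(initial_investment, annual_cash_flow):
    """Payback period when net annual cash flows are equal each year."""
    return initial_investment / annual_cash_flow

def payback_uneven(initial_investment, yearly_cash_flows):
    """Payback period for uneven flows, including the partial-year fraction."""
    remaining = initial_investment
    for year, cash_flow in enumerate(yearly_cash_flows, start=1):
        if cash_flow >= remaining:
            # Recovery happens during this year; add the partial year.
            return (year - 1) + remaining / cash_flow
        remaining -= cash_flow
    raise ValueError("investment not recovered within the given horizon")

print(payback_even(150_000, 20_000))  # 7.5 years

# $40,000 investment with the uneven flows from the example above.
flows = [10_000, 10_000, 5_000, 5_000, 7_500, 7_500]
print(round(payback_uneven(40_000, flows), 2))  # 5.33 years
```

The loop mirrors the hand calculation: subtract each year's cash flow from the outstanding balance, then divide the final remainder by that year's cash flow to get the partial year.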
Demonstration of the Payback Method

For illustration, consider Baby Goods Manufacturing (BGM), a large manufacturing company specializing in the production of various baby products sold to retailers. BGM is considering investment in a new metal press machine with an initial investment cost of $50,000 and expected annual net cash flows of $15,000. The payback period is calculated as follows:

Payback Period = $50,000 / $15,000 = 3.33 years

We divide the initial investment of $50,000 by the annual inflow of $15,000 to arrive at a payback period of 3.33 years. Assume that BGM will not allow a payback period of more than 7 years for this type of investment. Since this computed payback period meets their initial screening requirement, they can pass this investment opportunity on to a preference decision level. If BGM had an expected or maximum allowable payback period of 2 years, the same investment would not have passed their screening requirement and would be dropped from consideration.

To illustrate the concept of uneven cash flows, let's assume BGM shows the following expected net cash flows instead. Recall that the initial investment in the metal press machine is $50,000. Between years 6 and 7, the outstanding balance of the initial investment is recovered. To determine the more specific payback period, we calculate the partial year payback.

Partial Year Payback = $5,000 / $10,000 = 0.5 years

The total payback period is 6.5 years (6 years + 0.5 years).

Think It Through

Capital Investment

You are the accountant at a large firm looking to make a capital investment in a future project. Your company is considering two project investments. Project A's payback period is 3 years, and Project B's payback period is 5.5 years. Your company requires a payback period of no more than 5 years on such projects. Which project should they further consider? Why? Is there an argument that can be made to advance either project or neither project? Why? What other factors might be necessary to make that decision?

Fundamentals of the Accounting Rate of Return Method

The accounting rate of return (ARR) computes the return on investment based on changes to net income. It shows how much extra income the company could expect if it undertakes the proposed project. Unlike the payback method, ARR compares income to the initial investment rather than cash flows. This method is useful because it reviews revenues, cost savings, and expenses associated with the investment and, in some cases, can provide a more complete picture of the impact, rather than focusing solely on the cash flows produced. However, ARR is limited in that it does not consider the value of money over time, similar to the payback method. The accounting rate of return is computed as follows:

ARR = (Incremental Revenues − Incremental Expenses) / (Initial Investment − Salvage Value)

Incremental revenues represent the increase to revenue if the investment is made, as opposed to if the investment is rejected. The increase to revenues includes any cost savings that occur because of the project. Incremental expenses show the change to expenses if the project is accepted as opposed to maintaining the current conditions. Incremental expenses also include depreciation of the acquired asset. The difference between incremental revenues and incremental expenses is called the incremental net income. The initial investment is the original amount invested in the project; however, any salvage (residual) value for the capital asset needs to be subtracted from the initial investment before obtaining ARR. The concept of salvage value was addressed in Long-Term Assets.
Basically, it is the anticipated future fair market value (FMV) of an asset when it is to be sold or used as a trade-in for a replacement asset. For example, assume that you bought a commercial printer for $40,000 five years ago with an anticipated salvage value of $8,000, and you are now considering replacing it. Assume that as of the date of replacement after the five-year holding period, the old printer has an FMV of $8,000. If the new printer has a purchase price of $45,000 and the seller is going to take the old printer as a trade-in, then you would owe $37,000 for the new printer. If the printer had been sold for $8,000 instead of being used as a trade-in, the $8,000 could have been used as a down payment, and the company would still owe $37,000. This amount is the price of $45,000 minus the $8,000 FMV.

There is one more point to make with this example. The fair market value is not the same as the book value. The book value is the original cost less the accumulated depreciation that has been taken. For example, if you buy a long-term asset for $60,000 and the accumulated depreciation that you have taken is $42,000, then the asset's book value would be $18,000. The fair market value could be more, less, or the same as the book value.

For example, a piano manufacturer is considering investment in a new tuning machine. The initial investment will cost $300,000. Incremental revenues, including cost savings, are $200,000, and incremental expenses, including depreciation, are $125,000. ARR is computed as:

ARR = ($200,000 − $125,000) / $300,000 = 0.25 or 25%

This outcome means the company can expect an increase of 25% to net income, or an extra 25 cents on each dollar invested, if they make the investment. The company will have a minimum expected return that this project will need to meet or exceed before further consideration is given. ARR, like the payback method, should not be used as the sole determining factor to invest in a capital asset. Also, note that the ARR calculation does not consider uneven annual income growth, or depreciation methods other than straight-line depreciation.

Demonstration of the Accounting Rate of Return Method

Returning to the BGM example, the company is still considering the metal press machine because it passed the payback period screening of less than 7 years. BGM has a set rate of return of 25% expected for the metal press machine investment. The company expects incremental revenues of $20,000 and incremental expenses of $5,000. Remember that the initial investment cost is $50,000. BGM computes ARR as follows:

ARR = ($20,000 − $5,000) / $50,000 = 0.3 or 30%

The ARR in this situation is 30%, exceeding the required hurdle rate of 25%. A hurdle rate is the minimum required rate of return on an investment to consider an alternative for further evaluation. In this case, BGM would move this investment option to a preference decision level. If we were to add a salvage value of $5,000 into the situation, the computation would change as follows:

ARR = ($20,000 − $5,000) / ($50,000 − $5,000) = 0.33 or 33% (rounded)

The ARR still exceeds the hurdle rate of 25%, so BGM would still forward the investment opportunity for further consideration.
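Since ARR is a simple ratio, these figures are easy to check in code. A minimal Python sketch (the helper is ours, written for this example only):

```python
def accounting_rate_of_return(incr_revenues, incr_expenses,
                              initial_investment, salvage_value=0.0):
    """ARR = incremental net income / (initial investment - salvage value)."""
    incremental_net_income = incr_revenues - incr_expenses
    return incremental_net_income / (initial_investment - salvage_value)

# BGM's metal press: 30% with no salvage value, about 33% with $5,000 salvage.
print(accounting_rate_of_return(20_000, 5_000, 50_000))                   # 0.3
print(round(accounting_rate_of_return(20_000, 5_000, 50_000, 5_000), 2))  # 0.33
```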
Now let's say BGM changes their required return rate to 35%. In both cases, the project ARR (30% without the salvage value, 33% with it) would be less than the required rate, so BGM would not further consider either investment.

Your Turn

Analyzing Hurdle Rate

Turner Printing is looking to invest in a printer, which costs $60,000. Turner expects a 15% rate of return on this printer investment. The company expects incremental revenues of $30,000 and incremental expenses of $15,000. There is no salvage value for the printer. What is the accounting rate of return (ARR) for this printer? Did it meet the hurdle rate of 15%?

Solution

ARR is 25%, calculated as ($30,000 – $15,000) / $60,000. Since 25% exceeds the hurdle rate of 15%, the company would consider moving this alternative to a preference decision.

Both the payback period and the accounting rate of return are useful analytical tools in certain situations, particularly when used in conjunction with other evaluative techniques. In certain situations, the non-time value methods can provide relevant and useful information. However, when considering projects with long lives and significant costs to initiate, there are more advanced models that can be used. These models are typically based on time value of money principles, the basics of which are explained here.

Your Turn

Analyzing Investments

Your company is considering making an investment in equipment that will cost $240,000. The equipment is expected to generate annual cash flows of $60,000, provide incremental revenues of $200,000, and incur incremental expenses of $140,000 annually (depreciation expense is included in the $140,000). Calculate the payback period and the accounting rate of return.

Solution

Payback Period = $240,000 / $60,000 = 4 years

ARR = ($200,000 – $140,000) / $240,000 = 25%

11.3 Explain the Time Value of Money and Calculate Present and Future Values of Lump Sums and Annuities

Your mother gives you $100 cash for a birthday present and says, “Spend it wisely.” You want to purchase the latest cellular telephone on the market but wonder if this is really the best use of your money. You have a choice: You can spend the money now or spend it in the future. What should you do? Is there a benefit to spending it now as opposed to saving for later use? Does time have an impact on the value of your money in the future? Businesses are confronted with these questions and more when deciding how to allocate investment money. A major factor that affects their investment decisions is the concept of the time value of money.

Time Value of Money Fundamentals

The concept of the time value of money asserts that the value of a dollar today is worth more than the value of a dollar in the future. This is typically because a dollar today can be used now to earn more money in the future. There is also, typically, the possibility of future inflation, which decreases the value of a dollar over time and could lead to a reduction in economic buying power.

At this point, potential effects of inflation can probably best be demonstrated by a couple of examples. The first example is the Ford Mustang. The first Ford Mustang sold in 1964 for $2,368. Today's cheapest Mustang starts at a list price of $25,680. While a significant portion of this increase is due to additional features on newer models, much of the increase is due to the inflation that occurred between 1964 and 2019. Similar inflation characteristics can be demonstrated with housing prices.
After World War II, a typical small home often sold for between $16,000 and $30,000. Many of these same homes today are selling for hundreds of thousands of dollars. Much of the increase is due to the location of the property, but a significant part is also attributed to inflation. The annual inflation rate for the Mustang between 1964 and 2019 was approximately 4.5%. If we assume that the home sold for $16,500 in 1948 and the price of the home in 2019 was about $500,000, that's an annual appreciation rate of almost 5%.

Today's dollar is also more valuable because there is less risk than if the dollar were tied up in a long-term investment, which may or may not yield the expected results. On the other hand, delaying payment from an investment may be beneficial if there is an opportunity to earn interest. The longer payment is delayed, the more available earning potential there is. This can be enticing to businesses and may persuade them to take on the risk of deferment.

Businesses consider the time value of money before making an investment decision. They need to know what the future value is of their investment compared to today's present value and what potential earnings they could see because of delayed payment. These considerations include present and future values. Before you learn about present and future values, it is important to examine two types of cash flows: lump sums and annuities.

Lump Sums and Annuities

A lump sum is a one-time payment or repayment of funds at a particular point in time. A lump sum can be either a present value or a future value. For a lump sum, the present value is the value of a given amount today. For example, if you deposited $5,000 into a savings account today at a given rate of interest, say 6%, with the goal of taking it out in exactly three years, the $5,000 today would be a present value lump sum. Assume for simplicity's sake that the account pays 6% at the end of each year, and it also compounds interest on the interest earned in any earlier years.

In our current example, interest is calculated once a year. However, interest can also be calculated in numerous ways. Some of the most common interest calculations are daily, monthly, quarterly, or annually. One concept important to understand in interest calculations is that of compounding. Compounding is the process of earning interest on previous interest earned, along with the interest earned on the original investment. Returning to our example, if $5,000 is deposited into a savings account for three years earning 6% interest compounded annually, the amount the $5,000 investment would be worth at the end of three years is $5,955.08 ($5,000 × 1.06 = $5,300; $5,300 × 1.06 = $5,618; $5,618 × 1.06 = $5,955.08). The $5,955.08 is the future value of $5,000 invested for three years at 6%. More formally, future value is the amount to which either a single investment or a series of investments will grow over a specified time at a given interest rate or rates. The initial $5,000 investment is the present value. Again, more formally, present value is the current value of a single future investment or a series of investments for a specified time at a given interest rate or rates. Another way to phrase this is to say the $5,000 is the present value of $5,955.08 when the initial amount was invested at 6% for three years. The interest earned over the three-year period would be $955.08, and the remaining $5,000 would be the original deposit.
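The year-by-year compounding above can be reproduced with a few lines of Python. This is a minimal sketch of the same arithmetic, not a library function:

```python
def future_value(present_value, rate, periods):
    """Compound a single deposit: FV = PV * (1 + rate) ** periods."""
    return present_value * (1 + rate) ** periods

# $5,000 at 6%, compounded annually for three years.
print(round(future_value(5_000, 0.06, 3), 2))   # 5955.08
```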
As shown in the example, the future value of a lump sum is the value of the given investment at some point in the future. It is also possible to have a series of payments that constitute a series of lump sums. Assume that a business receives the following four cash flows. They constitute a series of lump sums because they are not all the same amount. The company would be receiving a stream of four cash flows that are all lump sums. In some situations, the cash flows that occur each time period are the same amount; in other words, the cash flows are even each period. These types of even cash flows occurring at even intervals, such as once a year, are known as an annuity. The following figure shows an annuity that consists of four payments of $12,000 made at the end of each of four years. The nature of cash flows—single sum cash flows, even series of cash flows, or uneven series of cash flows—has different effects on compounding.

Compounding

Compounding can be applied in many types of financial transactions, such as funding a retirement account or college savings account. Assume that an individual invests $10,000 in a four-year certificate of deposit account that pays 10% interest at the end of each year (in this case 12/31). Any interest earned during the year will be retained until the end of the four-year period and will also earn 10% interest annually. Through the effects of compounding—earning interest on interest—the investor earned $4,641 in interest from the four-year investment. If the investor had removed the interest earned instead of reinvesting it in the account, the investor would have earned $1,000 a year for four years, or $4,000 interest ($10,000 × 10% = $1,000 per year × 4 years = $4,000 total interest). Compounding is a concept that is used to determine future value (more detailed calculations of future value will be covered later in this section). But what about present value? Does compounding play a role in determining present value? The term applied to finding present value is called discounting.

Discounting

Discounting is the procedure used to calculate the present value of an individual payment or a series of payments that will be received in the future based on an assumed interest rate or return on investment. Let's look at a simple example to explain the concept of discounting. Assume that you want to accumulate sufficient funds to buy a new car and that you will need $5,000 in three years. Also, assume that your invested funds will earn 8% a year for the three years, and you reinvest any interest earned during the three-year period. If you wanted to set aside adequate funds now to meet the three-year goal, you would need to invest $3,969.16 today in the account earning 8% for three years. After three years, the $3,969.16 would earn $1,030.84 and grow to exactly the $5,000 that you will need. This is an example of discounting: taking a future value and determining its current, or present, value. An understanding of future value applications and calculations will aid in the understanding of present value uses and calculations.

Future Value

There are benefits to investing money now in hopes of a larger return in the future. These future earnings are possible because of interest payments received as an incentive for tying up money long-term. Knowing what these future earnings will be can help a business decide if the current investment is worth the long-term potential.
Recall that the future value (FV) is the value of an investment after a certain period of time. Future value considers the initial amount invested, the time period of earnings, and the earnings interest rate in the calculation. For example, a bank might consider the future value of a loan, based on whether a long-time client meets a certain interest rate return, when determining whether to approve the loan. To do so, the bank needs some means to determine the future value of the loan. The bank could use formulas, future value tables, a financial calculator, or a spreadsheet application. The same is true for present value calculations.

Due to the variety of calculators and spreadsheet applications, we will present the determination of both present and future values using tables. In many college courses today, these tables are used primarily because they are relatively simple to understand while demonstrating the material. For those who prefer formulas, the different formulas used to create each table are printed at the top of the corresponding table. In many finance classes, you will learn how to utilize the formulas. Regarding the use of a financial calculator, while all are similar, the user manual or a quick internet search will provide specific directions for each financial calculator. As for a spreadsheet application such as Microsoft Excel, there are some common formulas, shown in Table 11.2. In addition, Appendix C provides links to videos and tutorials on using specific aspects of Excel, such as future and present value techniques.

Excel Formulas (Table 11.2)

Time Value Component | Excel Formula Shorthand | Excel Formula Detailed
Present Value Single Sum | =PV | =PV(Rate, N, Payment, FV)
Future Value Single Sum | =FV | =FV(Rate, N, Payment, PV)
Present Value Annuity | =PV | =PV(Rate, N, Payment, FV, Type)
Future Value Annuity | =FV | =FV(Rate, N, Payment, PV, Type)
Net Present Value | =NPV | =NPV(Rate, CF2, CF3, CF4) + CF1
Internal Rate of Return | =IRR | =IRR(Invest, CF1, CF2, CF3)

Rate = annual interest rate
N = number of periods
Payment = annual payment amount, entered as a negative number; use 0 when calculating both present value of a single sum and future value of a single sum
FV = future value
PV = current or present value
Type = 0 for regular annuity, 1 for annuity due
CF = cash flow for a period; thus CF1 = cash flow period 1, CF2 = cash flow period 2, etc.
Invest = initial investment entered as a negative number

Since we will be using the tables in the examples in the body of the chapter, it is important to know there are four possible tables, each used under specific conditions (Table 11.3).

Time Value of Money Tables (Table 11.3)

Situation | Table Heading
Future Value – Lump Sum | Future Value of $1
Future Value – Annuity (even payment stream) | Future Value of an Annuity
Present Value – Lump Sum | Present Value of $1
Present Value – Annuity (even payment stream) | Present Value of an Annuity

In the prior situation, the bank would use either the Future Value of $1 table or the Future Value of an Ordinary Annuity table, samples of which are provided in Appendix B. To use the correct table, the bank needs to determine whether the customer will pay them back at the end of the loan term or periodically throughout the term of the loan. The Future Value of $1 table is used if the customer will pay back at the end of the period; if the payments will be made periodically throughout the term of the loan, they will use the Future Value of an Annuity table.
Choosing the correct table to use is critical for accurate determination of the future value. The application in other business matters is the same: a business also needs to consider whether it is making an investment with a repayment in one lump sum or in an annuity structure before choosing a table and making the calculation. In the tables, the columns show interest rates (i) and the rows show periods (n). The interest columns represent the anticipated interest rate payout for that investment. Interest rates can be based on experience, industry standards, federal fiscal policy expectations, and investment risk. Periods represent the number of years until payment is received. The intersection of the expected payout years and the interest rate is a number called a future value factor. The future value factor is multiplied by the initial investment cost to produce the future value of the expected cash flows (or investment return).

Future Value of $1

A lump sum payment is the present value of an investment when the return will occur at the end of the period in one installment. To determine this return, the Future Value of $1 table is used. For example, you are saving for a vacation you plan to take in 6 years and want to know how much your initial savings will yield in the future. You decide to place $4,500 in an investment account now that yields an anticipated annual return of 8%. Looking at the FV table, n = 6 years and i = 8%, which returns a future value factor of 1.587. Multiplying this factor by the initial investment amount of $4,500 produces $7,141.50. This means your initial savings of $4,500 will be worth approximately $7,141.50 in 6 years.

Future Value of an Ordinary Annuity

An ordinary annuity is one in which the payments are made at the end of each period in equal installments. A future value ordinary annuity looks at the value of the current investment in the future, if periodic payments were made throughout the life of the series. For example, you are saving for retirement and expect to contribute $10,000 per year for the next 15 years to a 401(k) retirement plan. The plan anticipates a periodic interest yield of 12%. How much would your investment be worth in the future meeting these criteria? In this case, you would use the Future Value of an Ordinary Annuity table. The relevant factor where n = 15 and i = 12% is 37.280. Multiplying the factor by the amount of the cash flow yields a future value of these installment savings of 37.280 × $10,000 = $372,800. Therefore, you could expect your investment to be worth $372,800 at the end of 15 years, given the parameters. Let's now examine how present value differs from future value in use and computation.

Your Turn

Determining Future Value

Determine the future value for each of the following situations. Use the future value tables provided in Appendix B when needed, and round answers to the nearest cent where required.

A. You are saving for a car and you put away $5,000 in a savings account. You want to know how much your initial savings will be worth in 7 years if you have an anticipated annual interest rate of 5%.
B. You are saving for retirement and make contributions of $11,500 per year for the next 14 years to your 403(b) retirement plan. The interest rate yield is 8%.

Solution

A. Use the FV of $1 table. The future value factor where n = 7 and i = 5% is 1.407. 1.407 × $5,000 = $7,035.
B. Use the FV of an ordinary annuity table. The future value factor where n = 14 and i = 8% is 24.215. 24.215 × $11,500 = $278,472.50.
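Rather than looking factors up in Appendix B, both future value factors can be computed directly from the formulas printed at the top of the tables. A minimal Python sketch (our own helpers), checked against the Your Turn solutions above:

```python
def fv_factor(rate, n):
    """Future Value of $1 factor: (1 + i) ** n."""
    return (1 + rate) ** n

def fv_annuity_factor(rate, n):
    """Future Value of an Ordinary Annuity factor: ((1 + i)**n - 1) / i."""
    return ((1 + rate) ** n - 1) / rate

print(round(fv_factor(0.05, 7), 3))           # 1.407  -> 1.407 * 5,000 = 7,035
print(round(fv_annuity_factor(0.08, 14), 3))  # 24.215 -> 24.215 * 11,500 = 278,472.50
```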
Present Value

A future dollar cannot be directly compared with today's dollar; the two exist at different times and have different values. Present value (PV) expresses the future value of an investment in today's dollars. This allows a company to see if the investment's initial cost is more or less than the future return. For example, a bank might consider the present value of giving a customer a loan before extending funds to ensure that the risk and the interest earned are worth the initial outlay of cash.

Similar to the Future Value tables, the columns of the Present Value tables show interest rates (i) and the rows show periods (n). Periods represent how often interest is compounded (paid); that is, periods could represent days, weeks, months, quarters, years, or any interest time period. For our examples and assessments, the period (n) will almost always be in years. The intersection of the expected payout years (n) and the interest rate (i) is a number called a present value factor. The present value factor is multiplied by the initial investment cost to produce the present value of the expected cash flows (or investment return). The two tables provided in Appendix B for present value are the Present Value of $1 and the Present Value of an Ordinary Annuity. As with the future value tables, choosing the correct table to use is critical for accurate determination of the present value.

Present Value of $1

When referring to present value, the lump sum return occurs at the end of a period. A business must determine if this delayed repayment, with interest, is worth the same as, more than, or less than the initial investment cost. If the deferred payment is more than the initial investment, the company would consider an investment. To calculate the present value of a lump sum, we should use the Present Value of $1 table. For example, you are interested in saving money for college and want to calculate how much you would need to put in the bank today to return a sum of $40,000 in 10 years. The bank pays an interest rate of 3% per year during these 10 years. Looking at the PV table, n = 10 years and i = 3% returns a present value factor of 0.744. Multiplying this factor by the return amount of $40,000 produces $29,760. This means you would need to put approximately $29,760 in the bank now to have $40,000 in 10 years.

As mentioned, to determine the present value or future value of cash flows, a financial calculator, a program such as Excel, knowledge of the appropriate formulas, or a set of tables must be used. Though we illustrate examples in the text using tables, we recognize the value of these other calculation instruments and have included chapter assessments that use multiple approaches to determining present and future value. Knowledge of different approaches to determining present and future value is useful, as there are situations, such as having fractional interest rates, 8.45% for example, in which a financial calculator or a program such as Excel would be needed to accurately determine present or future value.

Annuity Table

As discussed previously, annuities are a series of equal payments made over time, and ordinary annuities pay the equal installment at the end of each payment period within the series. This can help a business understand how their periodic returns translate into today's value.
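As with the future value factors, both present value factors can be computed rather than read from a table. A minimal Python sketch (our own helpers), reproducing the $29,760 college-savings figure above and the annuity factor used in the loan example that follows:

```python
def pv_factor(rate, n):
    """Present Value of $1 factor: 1 / (1 + i) ** n."""
    return 1 / (1 + rate) ** n

def pv_annuity_factor(rate, n):
    """Present Value of an Ordinary Annuity factor: (1 - (1 + i)**-n) / i."""
    return (1 - (1 + rate) ** -n) / rate

print(round(pv_factor(0.03, 10), 3))         # 0.744 -> 0.744 * 40,000 = 29,760
print(round(pv_annuity_factor(0.05, 5), 3))  # 4.329 -> 4.329 * 1,200 = 5,194.80
```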
For example, assume that Sam needs to borrow money for college and anticipates that she will be able to repay the loan in $1,200 annual payments for each of 5 years. If the lender charges 5% per year for similar loans, how much cash would the bank be willing to lend Sam today? In this case, she would use the Present Value of an Ordinary Annuity table in Appendix B, where n = 5 and i = 5%. This yields a present value factor of 4.329. The current value of the cash flow each period is calculated as 4.329 × $1,200 = $5,194.80. Therefore, Sam could borrow $5,194.80 now given the repayment parameters.

Our focus has been on examples of ordinary annuities (annuities due and other more complicated annuity examples are addressed in advanced accounting courses). With annuities due, the cash flow occurs at the start of the period. For example, if you wanted to deposit a lump sum of money into an account and make monthly rent payments starting today, the first payment would be made the same day that you made the deposit into the funding account. Because of this timing difference in the withdrawals from the annuity due, the process of calculating annuity due is somewhat different from the methods that you've covered for ordinary annuities.

Your Turn

Determining Present Value

Determine the present value for each of the following situations. Use the present value tables provided in Appendix B when needed, and round answers to the nearest cent where required.

a. You are saving for college and want to accumulate a sum of $100,000 in 12 years. The bank pays an interest rate of 5% over these 12 years.
b. You need to borrow money for college and can afford a yearly payment to the lending institution of $1,000 per year for the next 8 years. The interest rate charged by the lending institution is 3% per year.

Solution

a. Use the PV of $1 table. The present value factor where n = 12 and i = 5% is 0.557. 0.557 × $100,000 = $55,700.
b. Use the PV of an ordinary annuity table. The present value factor where n = 8 and i = 3% is 7.020. 7.020 × $1,000 = $7,020.

Link to Learning

For a lucky few, winning the lottery can be a dream come true, and the option to take a one-time payout or receive payments over several years does not seem to matter at the time. This lottery payout calculator shows how the time value of money may affect your take-home winnings.

11.4 Use Discounted Cash Flow Models to Make Capital Investment Decisions

Your company, Rudolph Incorporated, has begun analyzing two potential future project alternatives that have passed the basic screening using the non–time value methods of determining the payback period and the accounting rate of return. Both proposed projects seem reasonable, but your company typically selects only one option to pursue. Which one should you choose? How will you decide? A discounted cash flow model can assist with this process. In this section, we will discuss two commonly used time value of money–based options: the net present value method (NPV) and the internal rate of return (IRR). Both of these methods are based on the discounted cash flow process.

Fundamentals of the Discounted Cash Flow Model

The discounted cash flow model assigns a value to a business opportunity using time-value measurement tools. The model considers future cash flows of the project, discounts them back to present time, and compares the outcome to an expected rate of return. If the outcome exceeds the expected rate of return and initial investment cost, the company would consider the investment.
If the outcome does not exceed the expected rate of return or the initial investment, the company may not consider investment. When considering the discounted cash flow process, the time value of money plays a major role.

Time Value-Based Methods

As previously discussed, time value of money methods assume that the value of money today is worth more now than in the future. The payback period and accounting rate of return methods do not consider this concept when performing calculations and analyzing results, which is why they are typically only used as basic screening tools. To decide the best option between alternatives, a company performs preference measurement using tools, such as net present value and internal rate of return, that do consider the time value of money concept. Net present value (NPV) discounts future cash flows to their present value at the expected rate of return and compares that to the initial investment. NPV does not determine the actual rate of return earned by a project. The internal rate of return (IRR) shows the profitability or growth potential of an investment at the point where NPV equals zero, so it determines the actual rate of return a project earns. As the names imply, net present value is stated in dollars, whereas the internal rate of return is stated as an interest rate. Both NPV and IRR require the company to determine a rate of return to be used as the target return rate, such as the minimum required rate of return or the weighted average cost of capital, which will be discussed in Balanced Scorecard and Other Performance Measures.

A positive NPV implies that the present value of the cash inflows from the project is greater than the present value of the cash outflows, which represent the expenses and costs associated with the project. A positive NPV is typically considered a sign of a potentially good investment or project. However, other extenuating circumstances should be considered. For example, the company might not wish to borrow the necessary funding to make the investment because the company might be anticipating a downturn in the national economy.

An IRR analysis compares the calculated IRR with either a predetermined rate of return or the cost of borrowing the money to invest in the project in order to determine whether a potential investment or project is favorable. For example, assume that the investment or equipment purchase is expected to generate an IRR of 15% and the company's expected rate of return is 12%. In this case, similar to the NPV calculation, we assume that the proposed investment would be undertaken. However, remember that other factors must be considered, as they are with NPV.

When considering cash inflows—whether using NPV or IRR—the accountant should examine both profits generated and expenses reduced. Investments that are made may generate additional revenue or could reduce production costs. Both cases assume that the new product or other type of investment generates a positive cash inflow that will be compared to the cost outflows to determine whether there is an overall positive or negative net present value.

Additionally, a company would determine whether the projects being considered are mutually exclusive or not. If the projects or investment options are mutually exclusive, the company can evaluate and identify more than one alternative as a viable project or investment, but they can only invest in one option.
For example, if a company needs one new delivery truck, it might solicit proposals from five different truck dealers and conduct NPV and IRR evaluations. Even if all proposals pass the financial requirements of the NPV and IRR methods, only one proposal will be accepted. Another consideration occurs when a company has the ability to evaluate and accept multiple proposals. For example, an automobile manufacturer is considering expanding its number of dealerships in the United States over the next ten-year period and has allocated $30,000,000 to buy the land. They could purchase any number of properties. They conduct NPV and IRR analyses of fifteen properties and determine that four meet their required standards and market feasibility needs, and then purchase those four properties. The opportunities were not mutually exclusive: the number of properties purchased was driven by research and expansion projections, not by their need for only one option.

Continuing Application

Capital Budgeting Decisions

Gearhead Outfitters has expanded to many locations throughout its twenty-plus years in business. How did company management decide to expand? One of the financial tools a business can use is capital budgeting, which addresses many different issues involving the use of current cash flow for future return. As you've learned, capital outlay decisions can be evaluated through payback period, net present value, and methods involving rates of return. With this in mind, think about the capital budgeting issues Gearhead's management might have faced. For example, in deciding to expand, should the company buy a building or lease one? What method should be used to evaluate this? Purchasing a building might require more initial outlay, but the company will retain an asset. How will such a decision affect the bottom line? With respect to equipment, Gearhead could maintain a fleet of vehicles. Should the vehicles be purchased or leased? What will need to be considered in the process?

In developing and maintaining its strategy for sustainability, a business must not only consider day-to-day operations, but also address long-term decisions. Common capital budgeting items like equipment purchases to increase efficiency or reduce costs, decisions about replacement versus repair, and expansion all involve significant cash outlay. How will these items be evaluated? How long will recouping the initial investment take? How much revenue will be generated (or costs saved) through capital outlay? Does the company require a minimum rate of return before it moves forward with investment? If so, how is that return determined? Considering Gearhead's decision to expand, what are some specific capital budgeting decisions important for the company to consider in their long-term strategy?

Basic Characteristics of the Net Present Value Model

Net present value helps companies choose between alternatives at a particular point in time by determining which produces the higher NPV. To determine the NPV, the initial investment is subtracted from the present value of cash inflows and outflows associated with a project at a required rate of return. If the outcome is positive, the company should consider investment. If the outcome is negative, the company would forgo investment. We previously discussed the calculation for present value using the present value tables, where n is the number of years and i is the expected interest rate.
Once the present value factor is determined, it is multiplied by the expected net cash flows to produce the present value of future cash flows. The initial investment is subtracted from this present value calculation to determine the net present value. Recall that the Present Value of $1 table is used for a lump sum payout, whereas the Present Value of an Ordinary Annuity table is used for a series of equal payments occurring at the end of each period. Taking this distinction one step further, NPV requires the use of different tables depending on whether the future cash flows are equal or unequal in each time period. If the cash flows each period are equal, the company uses the Present Value of an Ordinary Annuity table, where the present value factor is multiplied by the cash flow amount for one period to get the present value. If the cash flows each period are unequal, the company uses the Present Value of $1 table, where the total present value is the sum of each of the unequal cash flows multiplied by the appropriate present value factor for each time period. This concept is discussed in the following example.

Assume that your company, Rudolph Incorporated, is determining the NPV for a new X-ray machine. The X-ray machine has an initial investment of $200,000 and an expected cash flow of $40,000 each period for the next 10 years. The expected $40,000 cash flows from the new X-ray machine can be attributed to either additional revenue generated or cost savings realized by more efficient operations of the new machine. Since these annual cash flows of $40,000 are the same amount in each period over the ten years, they form an annuity stream. The required rate of return on such an investment is 8%. The present value factor (i = 8%, n = 10) is 6.710 using the Present Value of an Ordinary Annuity table. Multiplying the present value factor (6.710) by the equal cash flow ($40,000) gives a present value of $268,400. NPV is found by taking the present value of $268,400 and subtracting the initial investment of $200,000 to arrive at $68,400. This is a positive NPV, so the company would consider investment.

If there are two investments that have a positive NPV, and the investments are mutually exclusive, meaning only one can be chosen, the more profitable of the two investments is typically the appropriate one for a company to choose. We can also use the profitability index to compare them. The profitability index measures the amount of profit returned for each dollar invested in a project. This is particularly useful when projects being evaluated are of a different size, as the profitability index scales the projects to make them comparable. The profitability index is found by taking the present value of the net cash flows and dividing by the initial investment cost. For example, Rudolph Incorporated is considering the X-ray machine that had present value cash flows of $268,400 (not considering salvage value) and an initial investment cost of $200,000. Another X-ray equipment option, Option B, produces present value cash flows of $290,000 and an initial investment cost of $240,000. The profitability index is computed as follows.

Option A: $268,400 / $200,000 = 1.342
Option B: $290,000 / $240,000 = 1.208

Based on this outcome, the company would invest in Option A, the project with the higher profitability index of 1.342.
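Here is a minimal Python sketch of the even-cash-flow NPV and the profitability index (the helpers are ours; the factor is computed from the annuity formula rather than read from the table, so results match the text to within rounding):

```python
def npv_even(investment, annual_flow, rate, n):
    """NPV for equal annual flows: PV annuity factor * flow - investment."""
    factor = (1 - (1 + rate) ** -n) / rate
    return factor * annual_flow - investment

def profitability_index(pv_of_flows, investment):
    """Present value of net cash flows per dollar invested."""
    return pv_of_flows / investment

# Rudolph's X-ray machine: $200,000 investment, $40,000/year, 10 years, 8%.
print(round(npv_even(200_000, 40_000, 0.08, 10)))        # about 68403 (~$68,400)
print(round(profitability_index(268_400, 200_000), 3))   # 1.342
print(round(profitability_index(290_000, 240_000), 3))   # 1.208
```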
If there were unequal cash flows each period, the Present Value of $1 table would be used with a more complex calculation. Each year's present value factor is determined and multiplied by that year's cash flow. Then all cash flows are added together to get one overall present value figure. This overall present value figure is used when finding the difference between present value and the initial investment cost. For example, let's say the X-ray machine information is the same, except now cash flows are as follows:

To find the overall present value, the following calculations take place using the Present Value of $1 table. The Present Value of $1 table is used because, each year, a new “lump sum” cash flow is received, so the cash flow in each period is different. The cash flows are treated as one-time lump sum payouts during that year. The present value for each period uses each year's present value factor at an interest rate of 8%. All the PVs are added together for a total present value of $219,990. The initial investment of $200,000 is subtracted from the $219,990 to arrive at a positive NPV of $19,990. In this case, the company would consider investment since the outcome is positive. (More complex considerations, such as depreciation, the effects of income taxes, and inflation, which could affect the overall NPV, are covered in advanced accounting courses.)

Your Turn

Analyzing a Postage Meter Investment

Yellow Industries is considering investment in a new postage meter system. The postage meter system would have an initial investment cost of $135,000. Annual net cash flows are $40,000 for the next 5 years, and the expected interest rate return is 10%. Calculate the net present value and decide whether or not Yellow Industries should invest in the new postage meter system.

Solution

Use the Present Value of an Ordinary Annuity table. The present value factor at n = 5 and i = 10% is 3.791. Present value = 3.791 × $40,000 = $151,640. NPV = $151,640 − $135,000 = $16,640. In this case, Yellow Industries should invest since the NPV is positive.

Calculation and Discussion of the Results of the Net Present Value Model

To demonstrate NPV, assume that a company, Rayford Machining, is considering buying a drill press that will have an initial investment cost of $50,000 and annual cash flows of $10,000 for the next 7 years. Assume that Rayford expects a 5% rate of return on such an investment. We need to determine the NPV when cash flows are equal. The present value factor (i = 5%, n = 7) is 5.786 using the Present Value of an Ordinary Annuity table. We multiply 5.786 by the equal cash flow of $10,000 to get a present value of $57,860. NPV is found by taking the present value of $57,860 and subtracting the initial investment of $50,000 to arrive at $7,860. This is a positive NPV, so the company would consider the investment.

Let's say Rayford Machining has another option, Option B, for a drill press purchase with an initial investment cost of $56,000 that produces present value cash flows of $60,500. The profitability index is computed as follows.

Option A: $57,860 / $50,000 = 1.157
Option B: $60,500 / $56,000 = 1.080

Based on this outcome, the company would invest in Option A, the project with the higher profitability index of 1.157. Now let's assume cash flows are unequal. Unequal cash flow information for Rayford Machining is summarized here.
To find the overall present value, the following calculations take place using the Present Value of $1 table. The present value for each period uses each year's present value factor at an interest rate of 5%. All individual year present values are added together for a total present value of $44,982. Subtracting the initial investment of $50,000 from the total present value of $44,982 gives a negative NPV of $5,018. In this case, Rayford Machining would not invest, since the outcome is negative. The negative NPV does not mean the investment would be unprofitable; rather, it means the investment does not return the desired 5% the company is looking for in the investments that it makes.

Basic Characteristics of the Internal Rate of Return Model

The internal rate of return model allows for the comparison of profitability or growth potential among alternatives. All external factors, such as inflation, are removed from the calculation, and the project with the highest return rate percentage is considered for investment. IRR is the discount rate (interest rate) at which NPV equals zero. In other words, the IRR is the point at which the present value of the cash inflows equals the initial investment cost. To consider investment, the IRR needs to meet or exceed the required rate of return for the investment type. If the IRR does not meet the required rate of return, the company will forgo investment.

To find IRR using the present value tables, we need to know the number of cash flow return periods (n) and the intersecting present value factor. When annual net cash flows are even, the present value factor is computed with the following formula:

Present Value Factor = Initial Investment / Annual Net Cash Flow

We find this present value factor in the present value table in the row with the corresponding number of periods (n). We then find the matching interest rate (i) at this present value factor. The corresponding interest rate at the number of periods (n) is the IRR. When cash flows are equal, use the Present Value of an Ordinary Annuity table to find IRR.

For example, a car manufacturer needs to replace welding equipment. The initial investment cost is $312,000 and each annual net cash flow is $49,944 for the next 9 years. We need to find the internal rate of return for this welding equipment. The expected rate of return for such a purchase is 6%. In this case, n = 9 and the present value factor is computed as follows.

Present Value Factor = $312,000 / $49,944 = 6.247 (rounded)

Looking at the Present Value of an Ordinary Annuity table, where n = 9 and the present value factor is 6.247, we discover that the corresponding return rate is 8%. This exceeds the expected return rate, so the company would typically invest in the project. If there is more than one viable option, the company will select the alternative with the highest IRR that exceeds the expected rate of return. Our tables are limited in scope, and therefore, a present value factor may fall between two interest rates. When this is the case, you may choose to identify an IRR range instead of a single interest rate figure. A spreadsheet program or financial calculator can produce a more accurate result and can also be used when cash flows are unequal.

Calculation and Discussion of the Results of the Internal Rate of Return Model

Assume that Rayford Machining wants to know the internal rate of return for the new drill press. The drill press has an initial investment cost of $50,000 and an annual cash flow of $10,000 for each of the next seven years.
The company expects a 7% rate of return on this type of investment. We calculate the present value factor as:

Present Value Factor = $50,000 / $10,000 = 5.000

Scanning the Present Value of an Ordinary Annuity table reveals that the interest rate where the present value factor is 5.000 and the number of periods is 7 lies between 8% and 10%. Since the required rate of return was 7%, Rayford would consider investment in this drill press.

Consider another example using Rayford, where they have two drill press purchase options. Option A has an IRR between 8% and 10%. The other option, Option B, has an initial investment cost of $60,500 and equal annual net cash flows of $13,256 for the next seven years. We calculate the present value factor as:

Present Value Factor = $60,500 / $13,256 = 4.564 (rounded)

Scanning the Present Value of an Ordinary Annuity table reveals that, when the present value factor is 4.564 and the number of periods is 7, the interest rate is 12%. This not only exceeds the 7% required rate, it also exceeds Option A's return of 8% to 10%. Therefore, if resources were limited, Rayford would select Option B over Option A.

Final Summary of the Discounted Cash Flow Models

The internal rate of return (IRR) and the net present value (NPV) methods are types of discounted cash flow analysis that require taking estimated future payments from a project and discounting them into present values. The difference between the two methods is that the NPV calculation determines the project's estimated return in dollars, while the IRR provides the percentage rate of return from a project needed to break even. When the NPV is determined to be $0, the present value of the cash inflows and the present value of the cash outflows are equal. For example, assume that the present value of the cash inflows is $10,000 and the present value of the cash outflows is also $10,000. In this example, the NPV would be $0. At a net present value of zero, the IRR would be exactly equal to the interest rate that was used to perform the NPV calculation. For example, in the previous example, where both the cash inflows and the cash outflows have present values of $10,000 and the NPV is $0, assume that they were discounted at an 8% interest rate. If you were to then calculate the internal rate of return, the IRR would be 8%, the same interest rate that gave us an NPV of $0.

Overall, it is important to understand that a company must consider the time value of money when making capital investment decisions. Knowing the present value of a future cash flow enables a company to better select between alternatives. The net present value compares the initial investment cost to the present value of future cash flows and requires a positive outcome before investment. The internal rate of return also considers the present value of future cash flows but expresses profitability as a percentage rate of return on the investment or project. These models allow two or more options to be compared, eliminating the bias of raw financial figures.

Think It Through

Choosing Investments

Companies are presented with viable alternatives that sometimes produce nearly identical results and profitability goals. If they have the ability to invest in both alternatives, they may do so. But what about when resources are constrained? How do they choose which investment is best for their company?
Consider this: you have two projects that met the payback period and accounting rate of return screenings identically. Project 1 produced an NPV of $45,000 and had an IRR between 5% and 8%. Project 2 produced an NPV of $35,000 and had an IRR of 10%. This leaves you with a difficult choice, since each alternative has a measurement that exceeds the other and the other variables are the same. Which project would you invest in and why?

11.5 Compare and Contrast Non-Time Value-Based Methods and Time Value-Based Methods in Capital Investment Decisions

When an investment opportunity is presented to a company, there are many financial and non-financial factors to consider. Using capital budgeting methods to narrow down the choices by removing unviable alternatives is an important process for any successful business. The four methods for capital budgeting analysis—payback period, accounting rate of return, net present value, and internal rate of return—all have their strengths and weaknesses, which are discussed as follows.

Summary of the Strengths and Weaknesses of the Non-Time Value-Based Capital Budgeting Methods

Non-time value-based capital budgeting methods are best used in an initial screening process when there are many alternatives to choose from. Two such methods are the payback method and the accounting rate of return. Their strengths and weaknesses are discussed in Table 11.4 and Table 11.5. The payback method determines the length of time needed to recoup an investment.

Payback Method (Table 11.4)
Strengths:
- Simple calculation
- Screens out many unviable alternatives quickly
- Removes high-risk investments from consideration
Weaknesses:
- Does not consider the time value of money
- The profitability of an investment is ignored
- Cash flows beyond the point of investment recovery are not considered

The accounting rate of return measures incremental increases to net income. This method has several strengths and weaknesses that are similar to those of the payback period but include a deeper evaluation of income.

Accounting Rate of Return (Table 11.5)
Strengths:
- Simple calculation
- Screens out many unviable options quickly
- Considers the impact on income rather than cash flows only (profitability)
Weaknesses:
- Does not consider the time value of money
- Return rates for the entire lifespan of the investment are not considered
- External factors, such as inflation, are ignored
- Return rates override the risk of investment

Because of the limited information each of the non-time value-based methods provides, they are typically used in conjunction with time value-based capital budgeting methods.

Summary of the Strengths and Weaknesses of the Time Value-Based Capital Budgeting Methods

Time value-based capital budgeting methods are best used after an initial screening process, when a company is choosing between few alternatives. They help determine the best of the alternatives that a company should pursue. Two such methods are net present value and internal rate of return. Their strengths and weaknesses are presented in Table 11.6 and Table 11.7. Net present value converts future cash flow dollars into current values to determine if the initial investment is less than the future returns.
Net Present Value (Table 11.6)
Strengths:
- Considers the time value of money
- Acknowledges higher-risk investments
- Makes future earnings comparable with today's values
- Allows for a selection among investments
Weaknesses:
- Requires a more difficult calculation than non-time value methods
- The required return rate is an estimate, so any changes to this condition, and the impact those changes have on earnings, are unknown
- Difficult to compare alternatives that have varying investment amounts

Internal rate of return looks at future cash flows as compared to an initial investment to find the rate of return on investment. The goal is to have an interest rate higher than the predetermined rate of return to consider investment.

Internal Rate of Return (Table 11.7)
Strengths:
- Considers the time value of money
- Easy to compare different-sized investments; removes dollar bias
- A predetermined rate of return is not required
- Allows for a selection among investments
Weaknesses:
- Does not acknowledge higher-risk investments because the focus is on return rates
- More difficult calculation than non-time value methods, and the outcome may be uncertain if not using a financial calculator or spreadsheet program
- If the time needed to recoup an investment is important, IRR will not place more importance on shorter-term investments

After a time value-based capital budgeting method is analyzed, a company can move toward a decision on an investment opportunity. This is of particular importance when resources are limited. Before discussing the mechanics of choosing the NPV versus the IRR method for decision-making, we first need to discuss one cardinal rule of using the NPV or IRR methods to evaluate time-sensitive investments or asset purchases: If a project or investment has a positive NPV, then it will, by definition, have an IRR that is above the interest rate used to calculate the NPV.

For example, assume that a company is considering buying a piece of equipment. They determine that it will cost $30,000 and will save them $10,000 a year in expenses for five years. They have decided that the interest rate that they will choose to calculate the NPV and to evaluate the purchase IRR is 8%, predicated on current loan rates available. Based on this sample data, the NPV will be positive $9,927 ($39,927 PV for the inflows and $30,000 PV for the outflows), and the IRR will be 19.86%. Since the calculations require at least an 8% return, the company would accept the project using either method. We will not spend additional time on the calculations at this point, since our purpose is to create numbers to analyze. If you want to duplicate the calculations, you can use a software program such as Excel or a financial calculator; a short script that reproduces these figures follows the Concepts In Practice box below.

Concepts In Practice

Solar Energy as Capital Investment

A recent capital investment decision that many company leaders need to make is whether or not to invest in solar energy. Solar energy is replacing fossil fuels as a power source, and it provides low-cost energy, reducing overhead costs. The expensive up-front installation costs can deter some businesses from making the initial investment. Businesses must now choose between an expensive initial capital outlay and the long-term benefits of solar power. A capital investment such as this would require an initial screening and preference process to determine if the cost savings and future benefits are worth more today than the current capital expenditure. If it makes financial sense, they may look to invest in this increasingly popular energy source.
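As noted above, here is a minimal Python sketch that reproduces the $9,927 NPV and the 19.86% IRR. The helpers are ours, not a library API; the IRR is found by bisection, which assumes NPV declines as the rate rises, as it does for this conventional pattern of one outlay followed by inflows:

```python
def npv(rate, cash_flows):
    """NPV of cash_flows, where cash_flows[0] is the time-0 outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-7):
    """Bisection search for the rate where NPV = 0 (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid   # NPV still positive: the break-even rate is higher.
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-30_000] + [10_000] * 5   # $30,000 outlay, $10,000 saved per year.
print(round(npv(0.08, flows)))     # about 9927
print(round(irr(flows) * 100, 2))  # about 19.86
```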
Now, we return to our comparison of the NPV and IRR methods. There are typically two situations to consider. The first involves projects that are not mutually exclusive, meaning we can consider more than one possibility. If a company is considering non-mutually exclusive opportunities, it will generally consider all options that have a positive NPV, or an IRR above the target rate of interest, as favorable options for an investment or asset purchase. In this situation, the NPV and IRR methods provide the same accept-or-reject decision: if the company accepts a project or investment under the NPV calculation, it will also accept it under the IRR method, and if it rejects the project under the NPV calculation, it will also reject it under the IRR method.

The second situation involves mutually exclusive opportunities. For example, if a company has one computer system and is considering replacing it, it might look at seven options that have favorable NPVs and IRRs, even though it needs only one computer system. In this case, it would choose only one of the seven possible options. With mutually exclusive options, it is possible that the NPV method will select Option A while the IRR method chooses Option D. The primary reason for this difference is that the NPV method measures results in dollars while the IRR measures them as an interest rate, so the two methods may select different options when the investments differ substantially in dollar cost. While both methods will identify investments or purchases that exceed the required standards of a positive NPV or an interest rate above the target interest rate, they might lead the company to choose different positive options. When this occurs, the company needs to consider other conditions, such as qualitative factors, to make its decision. Future cost accounting or finance courses will cover this content in more detail. The sketch below shows how such a ranking conflict can arise.
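To make the conflict concrete, consider two hypothetical mutually exclusive projects (all figures invented for illustration), both evaluated against the same 8% required rate. The sketch redefines the same from-scratch npv and irr helpers used earlier so that it runs on its own.

```python
# Two hypothetical mutually exclusive projects (illustrative figures only),
# both evaluated against the same 8% required rate of return.

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] occurs at time 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, low=0.0, high=1.0, tol=1e-7):
    """Internal rate of return via bisection (one sign change assumed)."""
    while high - low > tol:
        mid = (low + high) / 2
        if npv(mid, cash_flows) > 0:
            low = mid
        else:
            high = mid
    return (low + high) / 2

projects = {
    "large": [-100_000] + [30_000] * 5,  # bigger dollar payoff
    "small": [-10_000] + [4_000] * 5,    # bigger percentage return
}
for name, flows in projects.items():
    print(f"{name}: NPV ${npv(0.08, flows):,.0f}, IRR {irr(flows):.1%}")
# large: NPV $19,781, IRR 15.2%  <- ranked first by the NPV method
# small: NPV $5,971, IRR 28.6%   <- ranked first by the IRR method
```

Both projects clear the 8% hurdle, so either passes an accept-or-reject test; the ranking conflict appears only when the company must pick just one, at which point qualitative factors and resource constraints come into play.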
Final Comparison of the Four Capital Budgeting Options

A company will be presented with many alternatives for investment. It is up to management to analyze each investment’s possibilities using capital budgeting methods. The company will want to first screen each possibility with the payback method and the accounting rate of return. The payback method will show the company how long it will take to recoup its investment, while the accounting rate of return gives it the profitability of the alternatives. This screening will typically eliminate unviable options and allow the company to further consider a select few alternatives. A more detailed analysis is found in the time value-based methods, such as net present value and internal rate of return. Net present value converts future cash flows into today’s valuation for comparability purposes, to see if an initial outlay of cash is worth future earnings. The internal rate of return determines the expected rate of return on a project, given the present value of its expected cash flows and the initial investment. Analyzing these opportunities, with consideration given to the time value of money, allows a company to make an informed decision on how to make large capital expenditures.

Ethical Considerations

Barclays and the LIBOR Scandal

As discussed in Volkswagen Diesel Emissions Scandal, when a company makes an unethical decision, it must adjust its budget for fines and lawsuits. In 2012, Barclays, a British financial services company, was caught illegally manipulating LIBOR interest rates. LIBOR sets the interest rate for many types of loans. As CNN reported, “LIBOR, which stands for London Interbank Offered Rate, is the rate at which banks lend to each other, and is used globally to price financial products, such as mortgages, worth hundreds of trillions of dollars.”4

4 Charles Riley. “Remember the Libor Scandal? Well It's Coming Back to Haunt the Bank of England.” CNN. April 10, 2017. https://money.cnn.com/2017/04/10/investing/bank-of-england-libor-barclays/index.html

While Volkswagen decided to cover the costs related to fines and lawsuits by reducing its capital budget for technology and research, Barclays took a different approach. The company chose to “cut or claw back of about 450 million pounds ($680 million) of pay from its staff” and, from past pay packages, “another 140 million pounds ($212 million).”5 Instead of reducing other areas of its capital budget, Barclays decided to cover its fines and lawsuits by cutting employee compensation.

5 Steve Slater. “Barclays to Cut Pay by $890 Million over Scandals: Source.” Reuters. February 27, 2013. https://www.reuters.com/article/us-barclays-libor-pay/barclays-to-cut-pay-by-890-million-over-scandals-source-idUSBRE91Q0SD20130227

The LIBOR scandal involved a number of international banks and rocked the international banking community. An independent review of Barclays reported that “if Barclays is to achieve a material improvement in its reputation, it will need to continue to make changes to its top levels of pay so as to reflect talent and contribution more realistically, and in ways that mean something to the general public.”6 Previously, as described by the company website, “Barclays has been a leader in innovation; funding the world’s first industrial steam railway, naming the UK’s first female branch manager and introducing the world’s first ATM machine.”7 The positive reputation Barclays built over 300 years was tarnished by just one scandal, which demonstrates the difficulty of calculating just how much unethical behavior will cost a company’s reputation.

6 Anthony Salz. Salz Review: An Independent Review of Barclays’ Business Practices. April 3, 2013. https://online.wsj.com/public/resources/documents/SalzReview04032013.pdf

7 “Our History.” Barclays. n.d. https://www.banking.barclaysus.com/our-history.html

Link to Learning

A popular television show, Shark Tank, explores the decision-making process investors use when considering ownership in a new business. Entrepreneurs pitch their business concept and current position to the “sharks,” who evaluate the business using capital budgeting methods, such as payback period and net present value, to decide whether or not to invest in the entrepreneur’s company. Learn more about Shark Tank’s concept and success stories on the web.
biology
Chapter Outline
30.1 The Plant Body
30.2 Stems
30.3 Roots
30.4 Leaves
30.5 Transport of Water and Solutes in Plants
30.6 Plant Sensory Systems and Responses

Introduction

Plants are as essential to human existence as land, water, and air. Without plants, our day-to-day lives would be impossible because without oxygen from photosynthesis, aerobic life cannot be sustained. From providing food and shelter to serving as a source of medicines, oils, perfumes, and industrial products, plants provide humans with numerous valuable resources.

When you think of plants, most of the organisms that come to mind are vascular plants. These plants have tissues that conduct food and water, and they have seeds. Seed plants are divided into gymnosperms and angiosperms. Gymnosperms include the needle-leaved conifers—spruce, fir, and pine—as well as less familiar plants, such as ginkgos and cycads. Their seeds are not enclosed by a fleshy fruit. Angiosperms, also called flowering plants, constitute the majority of seed plants. They include broadleaved trees (such as maple, oak, and elm), vegetables (such as potatoes, lettuce, and carrots), grasses, and plants known for the beauty of their flowers (roses, irises, and daffodils, for example).

While individual plant species are unique, all share a common structure: a plant body consisting of stems, roots, and leaves. They all transport water, minerals, and sugars produced through photosynthesis through the plant body in a similar manner. All plant species also respond to environmental factors, such as light, gravity, competition, temperature, and predation.
[ { "answer": { "ans_choice": 2, "ans_text": "meristematic tissue" }, "bloom": null, "hl_context": "Plants are multicellular eukaryotes with tissue systems made of various cell types that carry out specific functions . <hl> Plant tissue systems fall into one of two general types : meristematic tissue , and permanent ( or non-meristematic ) tissue . <hl> <hl> Cells of the meristematic tissue are found in meristems , which are plant regions of continuous cell division and growth . <hl> Meristematic tissue cells are either undifferentiated or incompletely differentiated , and they continue to divide and contribute to the growth of the plant . In contrast , permanent tissue consists of plant cells that are no longer actively dividing .", "hl_sentences": "Plant tissue systems fall into one of two general types : meristematic tissue , and permanent ( or non-meristematic ) tissue . Cells of the meristematic tissue are found in meristems , which are plant regions of continuous cell division and growth .", "question": { "cloze_format": "Plant regions of continuous growth are made up of ________.", "normal_format": "What are plant regions of continuous growth made up of?", "question_choices": [ "dermal tissue", "vascular tissue", "meristematic tissue", "permanent tissue" ], "question_id": "fs-idm152914064", "question_text": "Plant regions of continuous growth are made up of ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "ground tissue" }, "bloom": null, "hl_context": "Meristems produce cells that quickly differentiate , or specialize , and become permanent tissue . Such cells take on specific roles and lose their ability to divide further . They differentiate into three main types : dermal , vascular , and ground tissue . Dermal tissue covers and protects the plant , and vascular tissue transports water , minerals , and sugars to different parts of the plant . <hl> Ground tissue serves as a site for photosynthesis , provides a supporting matrix for the vascular tissue , and helps to store water and sugars . <hl>", "hl_sentences": "Ground tissue serves as a site for photosynthesis , provides a supporting matrix for the vascular tissue , and helps to store water and sugars .", "question": { "cloze_format": "___ is the major site of photosynthesis.", "normal_format": "Which of the following is the major site of photosynthesis?", "question_choices": [ "apical meristem", "ground tissue", "xylem cells", "phloem cells" ], "question_id": "fs-idm166908656", "question_text": "Which of the following is the major site of photosynthesis?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "nodes" }, "bloom": null, "hl_context": "Plant stems , whether above or below ground , are characterized by the presence of nodes and internodes ( Figure 30.4 ) . <hl> Nodes are points of attachment for leaves , aerial roots , and flowers . <hl> The stem region between two nodes is called an internode . The stalk that extends from the stem to the base of the leaf is the petiole . An axillary bud is usually found in the axil — the area between the base of a leaf and the stem — where it can give rise to a branch or a flower . 
The apex ( tip ) of the shoot contains the apical meristem within the apical bud .", "hl_sentences": "Nodes are points of attachment for leaves , aerial roots , and flowers .", "question": { "cloze_format": "Stem regions at which leaves are attached are called ________.", "normal_format": "What stem regions at which leaves are attached are called?", "question_choices": [ "trichomes", "lenticels", "nodes", "internodes" ], "question_id": "fs-idp20795584", "question_text": "Stem regions at which leaves are attached are called ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "parenchyma cells" }, "bloom": null, "hl_context": "<hl> Parenchyma cells are the most common plant cells ( Figure 30.5 ) . <hl> <hl> They are found in the stem , the root , the inside of the leaf , and the pulp of the fruit . <hl> Parenchyma cells are responsible for metabolic functions , such as photosynthesis , and they help repair and heal wounds . Some parenchyma cells also store starch .", "hl_sentences": "Parenchyma cells are the most common plant cells ( Figure 30.5 ) . They are found in the stem , the root , the inside of the leaf , and the pulp of the fruit .", "question": { "cloze_format": "The cell types that form most of the inside of a plant are the ___ .", "normal_format": "Which of the following cell types forms most of the inside of a plant?", "question_choices": [ "meristem cells", "collenchyma cells", "sclerenchyma cells", "parenchyma cells" ], "question_id": "fs-idp22443296", "question_text": "Which of the following cell types forms most of the inside of a plant?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "vascular tissue" }, "bloom": null, "hl_context": "Secondary tissues are either simple ( composed of similar cell types ) or complex ( composed of different cell types ) . Dermal tissue , for example , is a simple tissue that covers the outer surface of the plant and controls gas exchange . <hl> Vascular tissue is an example of a complex tissue , and is made of two specialized conducting tissues : xylem and phloem . <hl> <hl> Xylem tissue transports water and nutrients from the roots to different parts of the plant , and includes three different cell types : vessel elements and tracheids ( both of which conduct water ) , and xylem parenchyma . <hl> <hl> Phloem tissue , which transports organic compounds from the site of photosynthesis to other parts of the plant , consists of four different cell types : sieve cells ( which conduct photosynthates ) , companion cells , phloem parenchyma , and phloem fibers . <hl> Unlike xylem conducting cells , phloem conducting cells are alive at maturity . The xylem and phloem always lie adjacent to each other ( Figure 30.3 ) . In stems , the xylem and the phloem form a structure called a vascular bundle ; in roots , this is termed the vascular stele or vascular cylinder . 30.2 Stems Learning Objectives By the end of this section , you will be able to :", "hl_sentences": "Vascular tissue is an example of a complex tissue , and is made of two specialized conducting tissues : xylem and phloem . Xylem tissue transports water and nutrients from the roots to different parts of the plant , and includes three different cell types : vessel elements and tracheids ( both of which conduct water ) , and xylem parenchyma . 
Phloem tissue , which transports organic compounds from the site of photosynthesis to other parts of the plant , consists of four different cell types : sieve cells ( which conduct photosynthates ) , companion cells , phloem parenchyma , and phloem fibers .", "question": { "cloze_format": "Tracheids, vessel elements, sieve-tube cells, and companion cells are components of ________.", "normal_format": "Tracheids, vessel elements, sieve-tube cells, and companion cells are components of what?", "question_choices": [ "vascular tissue", "meristematic tissue", "ground tissue", "dermal tissue" ], "question_id": "fs-idm7316720", "question_text": "Tracheids, vessel elements, sieve-tube cells, and companion cells are components of ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "apical meristem" }, "bloom": null, "hl_context": "Most primary growth occurs at the apices , or tips , of stems and roots . <hl> Primary growth is a result of rapidly dividing cells in the apical meristems at the shoot tip and root tip . <hl> Subsequent cell elongation also contributes to primary growth . The growth of shoots and roots during primary growth enables plants to continuously seek water ( roots ) or sunlight ( shoots ) .", "hl_sentences": "Primary growth is a result of rapidly dividing cells in the apical meristems at the shoot tip and root tip .", "question": { "cloze_format": "The primary growth of a plant is due to the action of the ________.", "normal_format": "The primary growth of a plant is due to the action of which of the following?", "question_choices": [ "lateral meristem", "vascular cambium", "apical meristem", "cork cambium" ], "question_id": "fs-idm83890816", "question_text": "The primary growth of a plant is due to the action of the ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "increase in thickness or girth" }, "bloom": null, "hl_context": "Growth in plants occurs as the stems and roots lengthen . Some plants , especially those that are woody , also increase in thickness during their life span . The increase in length of the shoot and the root is referred to as primary growth , and is the result of cell division in the shoot apical meristem . <hl> Secondary growth is characterized by an increase in thickness or girth of the plant , and is caused by cell division in the lateral meristem . <hl> Figure 30.10 shows the areas of primary and secondary growth in a plant . Herbaceous plants mostly undergo primary growth , with hardly any secondary growth or increase in thickness . Secondary growth or “ wood ” is noticeable in woody plants ; it occurs in some dicots , but occurs very rarely in monocots .", "hl_sentences": "Secondary growth is characterized by an increase in thickness or girth of the plant , and is caused by cell division in the lateral meristem .", "question": { "cloze_format": "___ is an example of secondary growth.", "normal_format": "Which of the following is an example of secondary growth?", "question_choices": [ "increase in length", "increase in thickness or girth", "increase in root hairs", "increase in leaf number" ], "question_id": "fs-idm38281984", "question_text": "Which of the following is an example of secondary growth?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "dicots" }, "bloom": null, "hl_context": "Growth in plants occurs as the stems and roots lengthen . 
Some plants , especially those that are woody , also increase in thickness during their life span . The increase in length of the shoot and the root is referred to as primary growth , and is the result of cell division in the shoot apical meristem . Secondary growth is characterized by an increase in thickness or girth of the plant , and is caused by cell division in the lateral meristem . Figure 30.10 shows the areas of primary and secondary growth in a plant . Herbaceous plants mostly undergo primary growth , with hardly any secondary growth or increase in thickness . <hl> Secondary growth or “ wood ” is noticeable in woody plants ; it occurs in some dicots , but occurs very rarely in monocots . <hl>", "hl_sentences": "Secondary growth or “ wood ” is noticeable in woody plants ; it occurs in some dicots , but occurs very rarely in monocots .", "question": { "cloze_format": "Secondary growth in stems is usually seen in ________.", "normal_format": "Where is secondary growth in stems usually seen in?", "question_choices": [ "monocots", "dicots", "both monocots and dicots", "neither monocots nor dicots" ], "question_id": "fs-idp54958096", "question_text": "Secondary growth in stems is usually seen in ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "epiphytic roots" }, "bloom": null, "hl_context": "<hl> Epiphytic roots enable a plant to grow on another plant . <hl> For example , the epiphytic roots of orchids develop a spongy tissue to absorb moisture . The banyan tree ( Ficus sp . ) begins as an epiphyte , germinating in the branches of a host tree ; aerial roots develop from the branches and eventually reach the ground , providing additional support ( Figure 30.20 ) . In screwpine ( Pandanus sp . ) , a palm-like tree that grows in sandy tropical soils , aboveground prop roots develop from the nodes to provide additional support . 30.4 Leaves Learning Objectives By the end of this section , you will be able to :", "hl_sentences": "Epiphytic roots enable a plant to grow on another plant .", "question": { "cloze_format": "Roots that enable a plant to grow on another plant are called ________.", "normal_format": "What are roots that enable a plant to grow on another plant called?", "question_choices": [ "epiphytic roots", "prop roots", "adventitious roots", "aerial roots" ], "question_id": "fs-idp60672", "question_text": "Roots that enable a plant to grow on another plant are called ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "endodermis" }, "bloom": null, "hl_context": "The vascular tissue in the root is arranged in the inner portion of the root , which is called the stele ( Figure 30.18 ) . A layer of cells known as the endodermis separates the stele from the ground tissue in the outer portion of the root . <hl> The endodermis is exclusive to roots , and serves as a checkpoint for materials entering the root ’ s vascular system . <hl> A waxy substance called suberin is present on the walls of the endodermal cells . This waxy region , known as the Casparian strip , forces water and solutes to cross the plasma membranes of endodermal cells instead of slipping between the cells . <hl> This ensures that only materials required by the root pass through the endodermis , while toxic substances and pathogens are generally excluded . <hl> The outermost cell layer of the root ’ s vascular tissue is the pericycle , an area that can give rise to lateral roots . 
In dicot roots , the xylem and phloem of the stele are arranged alternately in an X shape , whereas in monocot roots , the vascular tissue is arranged in a ring around the pith .", "hl_sentences": "The endodermis is exclusive to roots , and serves as a checkpoint for materials entering the root ’ s vascular system . This ensures that only materials required by the root pass through the endodermis , while toxic substances and pathogens are generally excluded .", "question": { "cloze_format": "The ________ forces selective uptake of minerals in the root.", "normal_format": "What forces selective uptake of minerals in the root?", "question_choices": [ "pericycle", "epidermis", "endodermis", "root cap" ], "question_id": "fs-idp89826256", "question_text": "The ________ forces selective uptake of minerals in the root." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "zone of cell division" }, "bloom": null, "hl_context": "Root growth begins with seed germination . When the plant embryo emerges from the seed , the radicle of the embryo forms the root system . The tip of the root is protected by the root cap , a structure exclusive to roots and unlike any other plant structure . The root cap is continuously replaced because it gets damaged easily as the root pushes through soil . <hl> The root tip can be divided into three zones : a zone of cell division , a zone of elongation , and a zone of maturation and differentiation ( Figure 30.16 ) . <hl> <hl> The zone of cell division is closest to the root tip ; it is made up of the actively dividing cells of the root meristem . <hl> The zone of elongation is where the newly formed cells increase in length , thereby lengthening the root . Beginning at the first root hair is the zone of cell maturation where the root cells begin to differentiate into special cell types . All three zones are in the first centimeter or so of the root tip .", "hl_sentences": "The root tip can be divided into three zones : a zone of cell division , a zone of elongation , and a zone of maturation and differentiation ( Figure 30.16 ) . The zone of cell division is closest to the root tip ; it is made up of the actively dividing cells of the root meristem .", "question": { "cloze_format": "Newly-formed root cells begin to form different cell types in the ________.", "normal_format": "Where do newly-formed root cells begin to form different cell types?", "question_choices": [ "zone of elongation", "zone of maturation", "root meristem", "zone of cell division" ], "question_id": "fs-idp3441920", "question_text": "Newly-formed root cells begin to form different cell types in the ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "petiole" }, "bloom": null, "hl_context": "Plant stems , whether above or below ground , are characterized by the presence of nodes and internodes ( Figure 30.4 ) . Nodes are points of attachment for leaves , aerial roots , and flowers . The stem region between two nodes is called an internode . <hl> The stalk that extends from the stem to the base of the leaf is the petiole . <hl> An axillary bud is usually found in the axil — the area between the base of a leaf and the stem — where it can give rise to a branch or a flower . 
The apex ( tip ) of the shoot contains the apical meristem within the apical bud .", "hl_sentences": "The stalk that extends from the stem to the base of the leaf is the petiole .", "question": { "cloze_format": "The stalk of a leaf is known as the ________.", "normal_format": "What is the stalk of a leaf known as?", "question_choices": [ "petiole", "lamina", "stipule", "rachis" ], "question_id": "fs-idp137709888", "question_text": "The stalk of a leaf is known as the ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "compound" }, "bloom": null, "hl_context": "Leaves may be simple or compound ( Figure 30.23 ) . In a simple leaf , the blade is either completely undivided — as in the banana leaf — or it has lobes , but the separation does not reach the midrib , as in the maple leaf . <hl> In a compound leaf , the leaf blade is completely divided , forming leaflets , as in the locust tree . <hl> Each leaflet may have its own stalk , but is attached to the rachis . A palmately compound leaf resembles the palm of a hand , with leaflets radiating outwards from one point Examples include the leaves of poison ivy , the buckeye tree , or the familiar houseplant Schefflera sp . ( common name “ umbrella plant ” ) . Pinnately compound leaves take their name from their feather-like appearance ; the leaflets are arranged along the midrib , as in rose leaves ( Rosa sp . ) , or the leaves of hickory , pecan , ash , or walnut trees .", "hl_sentences": "In a compound leaf , the leaf blade is completely divided , forming leaflets , as in the locust tree .", "question": { "cloze_format": "Leaflets are a characteristic of ________ leaves.", "normal_format": "Leaflets are a characteristic of which leaves?", "question_choices": [ "alternate", "whorled", "compound", "opposite" ], "question_id": "fs-idp85203904", "question_text": "Leaflets are a characteristic of ________ leaves." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "mesophyll" }, "bloom": null, "hl_context": "Below the epidermis of dicot leaves are layers of cells known as the mesophyll , or “ middle leaf . ” The mesophyll of most leaves typically contains two arrangements of parenchyma cells : the palisade parenchyma and spongy parenchyma ( Figure 30.26 ) . The palisade parenchyma ( also called the palisade mesophyll ) has column-shaped , tightly packed cells , and may be present in one , two , or three layers . Below the palisade parenchyma are loosely arranged cells of an irregular shape . These are the cells of the spongy parenchyma ( or spongy mesophyll ) . The air space found between the spongy parenchyma cells allows gaseous exchange between the leaf and the outside atmosphere through the stomata . In aquatic plants , the intercellular spaces in the spongy parenchyma help the leaf float . <hl> Both layers of the mesophyll contain many chloroplasts . <hl> Guard cells are the only epidermal cells to contain chloroplasts . Like the stem , the leaf contains vascular bundles composed of xylem and phloem ( Figure 30.27 ) . The xylem consists of tracheids and vessels , which transport water and minerals to the leaves . The phloem transports the photosynthetic products from the leaf to the other parts of the plant . 
A single vascular bundle , no matter how large or small , always contains both xylem and phloem tissues .", "hl_sentences": "Both layers of the mesophyll contain many chloroplasts .", "question": { "cloze_format": "Cells of the ________ contain chloroplasts.", "normal_format": "Which cells contain chloroplasts?", "question_choices": [ "epidermis", "vascular tissue", "stomata", "mesophyll" ], "question_id": "fs-idp28882000", "question_text": "Cells of the ________ contain chloroplasts." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "spines instead of leaves" }, "bloom": null, "hl_context": "Leaf Adaptations Coniferous plant species that thrive in cold environments , like spruce , fir , and pine , have leaves that are reduced in size and needle-like in appearance . These needle-like leaves have sunken stomata and a smaller surface area : two attributes that aid in reducing water loss . <hl> In hot climates , plants such as cacti have leaves that are reduced to spines , which in combination with their succulent stems , help to conserve water . <hl> Many aquatic plants have leaves with wide lamina that can float on the surface of the water , and a thick waxy cuticle on the leaf surface that repels water . Link to Learning", "hl_sentences": "In hot climates , plants such as cacti have leaves that are reduced to spines , which in combination with their succulent stems , help to conserve water .", "question": { "cloze_format": "___ is most likely to be found in a desert environment.", "normal_format": "Which of the following is most likely to be found in a desert environment?", "question_choices": [ "broad leaves to capture sunlight", "spines instead of leaves", "needle-like leaves", "wide, flat leaves that can float" ], "question_id": "fs-idm14688192", "question_text": "Which of the following is most likely to be found in a desert environment?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "Water vapor is lost to the external environment, increasing the rate of transpiration." }, "bloom": null, "hl_context": "Leaves are covered by a waxy cuticle on the outer surface that prevents the loss of water . Regulation of transpiration , therefore , is achieved primarily through the opening and closing of stomata on the leaf surface . Stomata are surrounded by two specialized cells called guard cells , which open and close in response to environmental cues such as light intensity and quality , leaf water status , and carbon dioxide concentrations . Stomata must open to allow air containing carbon dioxide and oxygen to diffuse into the leaf for photosynthesis and respiration . <hl> When stomata are open , however , water vapor is lost to the external environment , increasing the rate of transpiration . <hl> Therefore , plants must maintain a balance between efficient photosynthesis and water loss .", "hl_sentences": "When stomata are open , however , water vapor is lost to the external environment , increasing the rate of transpiration .", "question": { "cloze_format": "When stomata open, it occurs that ___.", "normal_format": "When stomata open, what occurs?", "question_choices": [ "Water vapor is lost to the external environment, increasing the rate of transpiration.", "Water vapor is lost to the external environment, decreasing the rate of transpiration.", "Water vapor enters the spaces in the mesophyll, increasing the rate of transpiration.", "Water vapor enters the spaces in the mesophyll, increasing the rate of transpiration." 
], "question_id": "fs-idm129887328", "question_text": "When stomata open, what occurs?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "sieve-tube elements, companion cells" }, "bloom": null, "hl_context": "Photosynthates , such as sucrose , are produced in the mesophyll cells of photosynthesizing leaves . From there they are translocated through the phloem to where they are used or stored . Mesophyll cells are connected by cytoplasmic channels called plasmodesmata . <hl> Photosynthates move through these channels to reach phloem sieve-tube elements ( STEs ) in the vascular bundles . <hl> From the mesophyll cells , the photosynthates are loaded into the phloem STEs . The sucrose is actively transported against its concentration gradient ( a process requiring ATP ) into the phloem cells using the electrochemical potential of the proton gradient . This is coupled to the uptake of sucrose with a carrier protein called the sucrose-H + symporter . Phloem STEs have reduced cytoplasmic contents , and are connected by a sieve plate with pores that allow for pressure-driven bulk flow , or translocation , of phloem sap . Companion cells are associated with STEs . They assist with metabolic activities and produce energy for the STEs ( Figure 30.36 ) .", "hl_sentences": "Photosynthates move through these channels to reach phloem sieve-tube elements ( STEs ) in the vascular bundles .", "question": { "cloze_format": "The cells ___ are responsible for the movement of photosynthates through a plant.", "normal_format": "Which cells are responsible for the movement of photosynthates through a plant?", "question_choices": [ "tracheids, vessel elements", "tracheids, companion cells", "vessel elements, companion cells", "sieve-tube elements, companion cells" ], "question_id": "fs-idm12618624", "question_text": "Which cells are responsible for the movement of photosynthates through a plant?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "phototropin" }, "bloom": null, "hl_context": "<hl> The aptly-named phototropins are protein-based receptors responsible for mediating the phototropic response . <hl> Like all plant photoreceptors , phototropins consist of a protein portion and a light-absorbing portion , called the chromophore . In phototropins , the chromophore is a covalently-bound molecule of flavin ; hence , phototropins belong to a class of proteins called flavoproteins .", "hl_sentences": "The aptly-named phototropins are protein-based receptors responsible for mediating the phototropic response .", "question": { "cloze_format": "The main photoreceptor that triggers phototropism is a ________.", "normal_format": "What is the main photoreceptor that triggers phototropism?", "question_choices": [ "phytochrome", "cryptochrome", "phototropin", "carotenoid" ], "question_id": "fs-idp103337616", "question_text": "The main photoreceptor that triggers phototropism is a ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "mediates morphological changes in response to red and far-red light" }, "bloom": null, "hl_context": "The phytochromes are a family of chromoproteins with a linear tetrapyrrole chromophore , similar to the ringed tetrapyrrole light-absorbing head group of chlorophyll . <hl> Phytochromes have two photo-interconvertible forms : Pr and Pfr . <hl> <hl> Pr absorbs red light ( ~ 667 nm ) and is immediately converted to Pfr . 
<hl> Pfr absorbs far-red light ( ~ 730 nm ) and is quickly converted back to Pr . <hl> Absorption of red or far-red light causes a massive change to the shape of the chromophore , altering the conformation and activity of the phytochrome protein to which it is bound . <hl> Pfr is the physiologically active form of the protein ; therefore , exposure to red light yields physiological activity . Exposure to far-red light inhibits phytochrome activity . Together , the two forms represent the phytochrome system ( Figure 30.38 ) .", "hl_sentences": "Phytochromes have two photo-interconvertible forms : Pr and Pfr . Pr absorbs red light ( ~ 667 nm ) and is immediately converted to Pfr . Absorption of red or far-red light causes a massive change to the shape of the chromophore , altering the conformation and activity of the phytochrome protein to which it is bound .", "question": { "cloze_format": "Phytochrome is a plant pigment protein that ___.", "normal_format": "What is Phytochrome, a plant pigment protein?", "question_choices": [ "mediates plant infection", "promotes plant growth", "mediates morphological changes in response to red and far-red light", "inhibits plant growth" ], "question_id": "fs-idp154368896", "question_text": "Phytochrome is a plant pigment protein that:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "amyloplast" }, "bloom": null, "hl_context": "<hl> Amyloplasts ( also known as statoliths ) are specialized plastids that contain starch granules and settle downward in response to gravity . <hl> Amyloplasts are found in shoots and in specialized cells of the root cap . When a plant is tilted , the statoliths drop to the new bottom cell wall . A few hours later , the shoot or root will show growth in the new vertical direction .", "hl_sentences": "Amyloplasts ( also known as statoliths ) are specialized plastids that contain starch granules and settle downward in response to gravity .", "question": { "cloze_format": "A mutant plant has roots that grow in all directions. The organelle that you would expect to be missing in the cell is ___.", "normal_format": "A mutant plant has roots that grow in all directions. Which of the following organelles would you expect to be missing in the cell?", "question_choices": [ "mitochondria", "amyloplast", "chloroplast", "nucleus" ], "question_id": "fs-idm61790704", "question_text": "A mutant plant has roots that grow in all directions. Which of the following organelles would you expect to be missing in the cell?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "ethylene" }, "bloom": null, "hl_context": "<hl> Aging tissues ( especially senescing leaves ) and nodes of stems produce ethylene . <hl> <hl> The best-known effect of the hormone , however , is the promotion of fruit ripening . <hl> Ethylene stimulates the conversion of starch and acids to sugars . <hl> Some people store unripe fruit , such as avocadoes , in a sealed paper bag to accelerate ripening ; the gas released by the first fruit to mature will speed up the maturation of the remaining fruit . <hl> Ethylene also triggers leaf and fruit abscission , flower fading and dropping , and promotes germination in some cereals and sprouting of bulbs and potatoes .", "hl_sentences": "Aging tissues ( especially senescing leaves ) and nodes of stems produce ethylene . The best-known effect of the hormone , however , is the promotion of fruit ripening . 
Some people store unripe fruit , such as avocadoes , in a sealed paper bag to accelerate ripening ; the gas released by the first fruit to mature will speed up the maturation of the remaining fruit .", "question": { "cloze_format": "After buying green bananas or unripe avocadoes, they can be kept in a brown bag to ripen. The hormone released by the fruit and trapped in the bag is probably ___.", "normal_format": "After buying green bananas or unripe avocadoes, they can be kept in a brown bag to ripen. What is the hormone that released by the fruit and trapped in the bag?", "question_choices": [ "abscisic acid", "cytokinin", "ethylene", "gibberellic acid" ], "question_id": "fs-idm41201088", "question_text": "After buying green bananas or unripe avocadoes, they can be kept in a brown bag to ripen. The hormone released by the fruit and trapped in the bag is probably:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "abscisic acid" }, "bloom": null, "hl_context": "<hl> The plant hormone abscisic acid ( ABA ) was first discovered as the agent that causes the abscission or dropping of cotton bolls . <hl> However , more recent studies indicate that ABA plays only a minor role in the abscission process . ABA accumulates as a response to stressful environmental conditions , such as dehydration , cold temperatures , or shortened day lengths . Its activity counters many of the growth-promoting effects of GAs and auxins . <hl> ABA inhibits stem elongation and induces dormancy in lateral buds . <hl>", "hl_sentences": "The plant hormone abscisic acid ( ABA ) was first discovered as the agent that causes the abscission or dropping of cotton bolls . ABA inhibits stem elongation and induces dormancy in lateral buds .", "question": { "cloze_format": "A decrease in the level of ___ releases seeds from dormancy.", "normal_format": "A decrease in the level of which hormone releases seeds from dormancy?", "question_choices": [ "abscisic acid", "cytokinin", "ethylene", "gibberellic acid" ], "question_id": "fs-idp149917488", "question_text": "A decrease in the level of which hormone releases seeds from dormancy?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "thigmotropism" }, "bloom": null, "hl_context": "<hl> The movement of a plant subjected to constant directional pressure is called thigmotropism , from the Greek words thigma meaning “ touch , ” and tropism implying “ direction . ” Tendrils are one example of this . <hl> The meristematic region of tendrils is very touch sensitive ; light touch will evoke a quick coiling response . <hl> Cells in contact with a support surface contract , whereas cells on the opposite side of the support expand ( Figure 30.14 ) . <hl> Application of jasmonic acid is sufficient to trigger tendril coiling without a mechanical stimulus . A thigmonastic response is a touch response independent of the direction of stimulus Figure 30.24 . In the Venus flytrap , two modified leaves are joined at a hinge and lined with thin fork-like tines along the outer edges . Tiny hairs are located inside the trap . When an insect brushes against these trigger hairs , touching two or more of them in succession , the leaves close quickly , trapping the prey . Glands on the leaf surface secrete enzymes that slowly digest the insect . The released nutrients are absorbed by the leaves , which reopen for the next meal . 
Thigmomorphogenesis is a slow developmental change in the shape of a plant subjected to continuous mechanical stress . When trees bend in the wind , for example , growth is usually stunted and the trunk thickens . Strengthening tissue , especially xylem , is produced to add stiffness to resist the wind ’ s force . Researchers hypothesize that mechanical strain induces growth and differentiation to strengthen the tissues . Ethylene and jasmonate are likely involved in thigmomorphogenesis .", "hl_sentences": "The movement of a plant subjected to constant directional pressure is called thigmotropism , from the Greek words thigma meaning “ touch , ” and tropism implying “ direction . ” Tendrils are one example of this . Cells in contact with a support surface contract , whereas cells on the opposite side of the support expand ( Figure 30.14 ) .", "question": { "cloze_format": "A seedling germinating under a stone grows at an angle away from the stone and upward. This response to touch is called ________.", "normal_format": "A seedling germinating under a stone grows at an angle away from the stone and upward. What is this response to touch called?", "question_choices": [ "gravitropism", "thigmonasty", "thigmotropism", "skototropism" ], "question_id": "fs-idp109014384", "question_text": "A seedling germinating under a stone grows at an angle away from the stone and upward. This response to touch is called ________." }, "references_are_paraphrase": null } ]
30
30.1 The Plant Body

Learning Objectives
By the end of this section, you will be able to:
- Describe the shoot organ system and the root organ system
- Distinguish between meristematic tissue and permanent tissue
- Identify and describe the three regions where plant growth occurs
- Summarize the roles of dermal tissue, vascular tissue, and ground tissue
- Compare simple plant tissue with complex plant tissue

Like animals, plants contain cells with organelles in which specific metabolic activities take place. Unlike animals, however, plants use energy from sunlight to form sugars during photosynthesis. In addition, plant cells have cell walls, plastids, and a large central vacuole: structures that are not found in animal cells. Each of these cellular structures plays a specific role in plant structure and function.

Link to Learning
Watch Botany Without Borders, a video produced by the Botanical Society of America about the importance of plants.

Plant Organ Systems

In plants, just as in animals, similar cells working together form a tissue. When different types of tissues work together to perform a unique function, they form an organ; organs working together form organ systems. Vascular plants have two distinct organ systems: a shoot system and a root system. The shoot system consists of two portions: the vegetative (non-reproductive) parts of the plant, such as the leaves and the stems, and the reproductive parts of the plant, which include flowers and fruits. The shoot system generally grows above ground, where it absorbs the light needed for photosynthesis. The root system, which supports the plants and absorbs water and minerals, is usually underground. Figure 30.2 shows the organ systems of a typical plant.

Plant Tissues

Plants are multicellular eukaryotes with tissue systems made of various cell types that carry out specific functions. Plant tissue systems fall into one of two general types: meristematic tissue and permanent (or non-meristematic) tissue. Cells of the meristematic tissue are found in meristems, which are plant regions of continuous cell division and growth. Meristematic tissue cells are either undifferentiated or incompletely differentiated, and they continue to divide and contribute to the growth of the plant. In contrast, permanent tissue consists of plant cells that are no longer actively dividing.

Meristematic tissues consist of three types, based on their location in the plant. Apical meristems contain meristematic tissue located at the tips of stems and roots, which enables a plant to extend in length. Lateral meristems facilitate growth in thickness or girth in a maturing plant. Intercalary meristems occur only in monocots, at the bases of leaf blades and at nodes (the areas where leaves attach to a stem). This tissue enables the monocot leaf blade to increase in length from the leaf base; for example, it allows lawn grass leaves to elongate even after repeated mowing.

Meristems produce cells that quickly differentiate, or specialize, and become permanent tissue. Such cells take on specific roles and lose their ability to divide further. They differentiate into three main types: dermal, vascular, and ground tissue. Dermal tissue covers and protects the plant, and vascular tissue transports water, minerals, and sugars to different parts of the plant. Ground tissue serves as a site for photosynthesis, provides a supporting matrix for the vascular tissue, and helps to store water and sugars.
Permanent tissues are either simple (composed of similar cell types) or complex (composed of different cell types). Dermal tissue, for example, is a simple tissue that covers the outer surface of the plant and controls gas exchange. Vascular tissue is an example of a complex tissue, and is made of two specialized conducting tissues: xylem and phloem. Xylem tissue transports water and nutrients from the roots to different parts of the plant, and includes three different cell types: vessel elements and tracheids (both of which conduct water), and xylem parenchyma. Phloem tissue, which transports organic compounds from the site of photosynthesis to other parts of the plant, consists of four different cell types: sieve cells (which conduct photosynthates), companion cells, phloem parenchyma, and phloem fibers. Unlike xylem conducting cells, phloem conducting cells are alive at maturity. The xylem and phloem always lie adjacent to each other (Figure 30.3). In stems, the xylem and the phloem form a structure called a vascular bundle; in roots, this is termed the vascular stele or vascular cylinder.

30.2 Stems

Learning Objectives
By the end of this section, you will be able to:
- Describe the main function and basic structure of stems
- Compare and contrast the roles of dermal tissue, vascular tissue, and ground tissue
- Distinguish between primary growth and secondary growth in stems
- Summarize the origin of annual rings
- List and describe examples of modified stems

Stems are a part of the shoot system of a plant. They may range in length from a few millimeters to hundreds of meters, and also vary in diameter, depending on the plant type. Stems are usually above ground, although the stems of some plants, such as the potato, also grow underground. Stems may be herbaceous (soft) or woody in nature. Their main function is to provide support to the plant, holding leaves, flowers, and buds; in some cases, stems also store food for the plant. A stem may be unbranched, like that of a palm tree, or it may be highly branched, like that of a magnolia tree. The stem of the plant connects the roots to the leaves, helping to transport absorbed water and minerals to different parts of the plant. It also helps to transport the products of photosynthesis, namely sugars, from the leaves to the rest of the plant.

Plant stems, whether above or below ground, are characterized by the presence of nodes and internodes (Figure 30.4). Nodes are points of attachment for leaves, aerial roots, and flowers. The stem region between two nodes is called an internode. The stalk that extends from the stem to the base of the leaf is the petiole. An axillary bud is usually found in the axil—the area between the base of a leaf and the stem—where it can give rise to a branch or a flower. The apex (tip) of the shoot contains the apical meristem within the apical bud.

Stem Anatomy

The stem and other plant organs arise from the ground tissue, and are primarily made up of simple tissues formed from three types of cells: parenchyma, collenchyma, and sclerenchyma cells. Parenchyma cells are the most common plant cells (Figure 30.5). They are found in the stem, the root, the inside of the leaf, and the pulp of the fruit. Parenchyma cells are responsible for metabolic functions, such as photosynthesis, and they help repair and heal wounds. Some parenchyma cells also store starch. Collenchyma cells are elongated cells with unevenly thickened walls (Figure 30.6). They provide structural support, mainly to the stem and leaves.
These cells are alive at maturity and are usually found below the epidermis. The “strings” of a celery stalk are an example of collenchyma cells. Sclerenchyma cells also provide support to the plant, but unlike collenchyma cells, many of them are dead at maturity. There are two types of sclerenchyma cells: fibers and sclereids. Both types have secondary cell walls that are thickened with deposits of lignin, an organic compound that is a key component of wood. Fibers are long, slender cells; sclereids are smaller-sized. Sclereids give pears their gritty texture. Humans use sclerenchyma fibers to make linen and rope (Figure 30.7).

Visual Connection
Which layers of the stem are made of parenchyma cells?
- cortex and pith
- phloem
- sclerenchyma
- xylem

Like the rest of the plant, the stem has three tissue systems: dermal, vascular, and ground tissue. Each is distinguished by characteristic cell types that perform specific tasks necessary for the plant’s growth and survival.

Dermal Tissue

The dermal tissue of the stem consists primarily of epidermis, a single layer of cells covering and protecting the underlying tissue. Woody plants have a tough, waterproof outer layer of cork cells commonly known as bark, which further protects the plant from damage. Epidermal cells are the most numerous and least differentiated of the cells in the epidermis. The epidermis of a leaf also contains openings known as stomata, through which the exchange of gases takes place (Figure 30.8). Two cells, known as guard cells, surround each leaf stoma, controlling its opening and closing and thus regulating the uptake of carbon dioxide and the release of oxygen and water vapor. Trichomes are hair-like structures on the epidermal surface. They help to reduce transpiration (the loss of water by aboveground plant parts), increase solar reflectance, and store compounds that defend the leaves against predation by herbivores.

Vascular Tissue

The xylem and phloem that make up the vascular tissue of the stem are arranged in distinct strands called vascular bundles, which run up and down the length of the stem. When the stem is viewed in cross section, the vascular bundles of dicot stems are arranged in a ring. In plants with stems that live for more than one year, the individual bundles grow together and produce the characteristic growth rings. In monocot stems, the vascular bundles are randomly scattered throughout the ground tissue (Figure 30.9). Xylem tissue has three types of cells: xylem parenchyma, tracheids, and vessel elements. The latter two types conduct water and are dead at maturity. Tracheids are xylem cells with thick secondary cell walls that are lignified. Water moves from one tracheid to another through regions on the side walls known as pits, where secondary walls are absent. Vessel elements are xylem cells with thinner walls; they are shorter than tracheids. Each vessel element is connected to the next by means of a perforation plate at the end walls of the element. Water moves through the perforation plates to travel up the plant. Phloem tissue is composed of sieve-tube cells, companion cells, phloem parenchyma, and phloem fibers. A series of sieve-tube cells (also called sieve-tube elements) are arranged end to end to make up a long sieve tube, which transports organic substances such as sugars and amino acids. The sugars flow from one sieve-tube cell to the next through perforated sieve plates, which are found at the end junctions between two cells.
Although sieve-tube cells are still alive at maturity, their nucleus and other cell components have disintegrated. Companion cells are found alongside the sieve-tube cells, providing them with metabolic support. The companion cells contain more ribosomes and mitochondria than the sieve-tube cells, which lack some cellular organelles.

Ground Tissue

Ground tissue is mostly made up of parenchyma cells, but may also contain collenchyma and sclerenchyma cells that help support the stem. The ground tissue towards the interior of the vascular tissue in a stem or root is known as pith, while the layer of tissue between the vascular tissue and the epidermis is known as the cortex.

Growth in Stems

Growth in plants occurs as the stems and roots lengthen. Some plants, especially those that are woody, also increase in thickness during their life span. The increase in length of the shoot and the root is referred to as primary growth, and is the result of cell division in the shoot apical meristem. Secondary growth is characterized by an increase in thickness or girth of the plant, and is caused by cell division in the lateral meristem. Figure 30.10 shows the areas of primary and secondary growth in a plant. Herbaceous plants mostly undergo primary growth, with hardly any secondary growth or increase in thickness. Secondary growth or “wood” is noticeable in woody plants; it occurs in some dicots, but occurs very rarely in monocots. Some plant parts, such as stems and roots, continue to grow throughout a plant’s life: a phenomenon called indeterminate growth. Other plant parts, such as leaves and flowers, exhibit determinate growth, which ceases when a plant part reaches a particular size.

Primary Growth

Most primary growth occurs at the apices, or tips, of stems and roots. Primary growth is a result of rapidly dividing cells in the apical meristems at the shoot tip and root tip. Subsequent cell elongation also contributes to primary growth. The growth of shoots and roots during primary growth enables plants to continuously seek water (roots) or sunlight (shoots). The influence of the apical bud on overall plant growth is known as apical dominance, which diminishes the growth of axillary buds that form along the sides of branches and stems. Most coniferous trees exhibit strong apical dominance, thus producing the typical conical Christmas tree shape. If the apical bud is removed, then the axillary buds will start forming lateral branches. Gardeners make use of this fact when they prune plants by cutting off the tops of branches, thus encouraging the axillary buds to grow out, giving the plant a bushy shape.

Link to Learning
Watch this BBC Nature video showing how time-lapse photography captures plant growth at high speed.

Secondary Growth

The increase in stem thickness that results from secondary growth is due to the activity of the lateral meristems, which are lacking in herbaceous plants. Lateral meristems include the vascular cambium and, in woody plants, the cork cambium (see Figure 30.10). The vascular cambium is located just outside the primary xylem and to the interior of the primary phloem. The cells of the vascular cambium divide and form secondary xylem (tracheids and vessel elements) to the inside, and secondary phloem (sieve elements and companion cells) to the outside.
The thickening of the stem that occurs in secondary growth is due to the formation of secondary phloem and secondary xylem by the vascular cambium, plus the action of cork cambium, which forms the tough outermost layer of the stem. The cells of the secondary xylem contain lignin, which provides hardiness and strength. In woody plants, cork cambium is the outermost lateral meristem. It produces cork cells (bark) containing a waxy substance known as suberin that can repel water. The bark protects the plant against physical damage and helps reduce water loss. The cork cambium also produces a layer of cells known as phelloderm, which grows inward from the cambium. The cork cambium, cork cells, and phelloderm are collectively termed the periderm . The periderm substitutes for the epidermis in mature plants. In some plants, the periderm has many openings, known as lenticels , which allow the interior cells to exchange gases with the outside atmosphere ( Figure 30.11 ). This supplies oxygen to the living and metabolically active cells of the cortex, xylem and phloem. Annual Rings The activity of the vascular cambium gives rise to annual growth rings. During the spring growing season, cells of the secondary xylem have a large internal diameter and their primary cell walls are not extensively thickened. This is known as early wood, or spring wood. During the fall season, the secondary xylem develops thickened cell walls, forming late wood, or autumn wood, which is denser than early wood. This alternation of early and late wood is due largely to a seasonal decrease in the number of vessel elements and a seasonal increase in the number of tracheids. It results in the formation of an annual ring, which can be seen as a circular ring in the cross section of the stem ( Figure 30.12 ). An examination of the number of annual rings and their nature (such as their size and cell wall thickness) can reveal the age of the tree and the prevailing climatic conditions during each season. Stem Modifications Some plant species have modified stems that are especially suited to a particular habitat and environment ( Figure 30.13 ). A rhizome is a modified stem that grows horizontally underground and has nodes and internodes. Vertical shoots may arise from the buds on the rhizome of some plants, such as ginger and ferns. Corms are similar to rhizomes, except they are more rounded and fleshy (such as in gladiolus). Corms contain stored food that enables some plants to survive the winter. Stolons are stems that run almost parallel to the ground, or just below the surface, and can give rise to new plants at the nodes. Runners are a type of stolon that runs above the ground and produces new clone plants at nodes at varying intervals: strawberries are an example. Tubers are modified stems that may store starch, as seen in the potato ( Solanum sp.). Tubers arise as swollen ends of stolons, and contain many adventitious or unusual buds (familiar to us as the “eyes” on potatoes). A bulb , which functions as an underground storage unit, is a modification of a stem that has the appearance of enlarged fleshy leaves emerging from the stem or surrounding the base of the stem, as seen in the iris. Link to Learning Watch botanist Wendy Hodgson, of Desert Botanical Garden in Phoenix, Arizona, explain how agave plants were cultivated for food hundreds of years ago in the Arizona desert in this video: Finding the Roots of an Ancient Crop. Some aerial modifications of stems are tendrils and thorns ( Figure 30.14 ). 
Tendrils are slender, twining strands that enable a plant (like a vine or pumpkin) to seek support by climbing on other surfaces. Thorns are modified branches appearing as sharp outgrowths that protect the plant; common examples include roses, Osage orange, and devil’s walking stick. 30.3 Roots Learning Objectives By the end of this section, you will be able to: Identify the two types of root systems Describe the three zones of the root tip and summarize the role of each zone in root growth Describe the structure of the root List and describe examples of modified roots The roots of seed plants have three major functions: anchoring the plant to the soil, absorbing water and minerals and transporting them upwards, and storing the products of photosynthesis. Some roots are modified to absorb moisture and exchange gases. Most roots are underground. Some plants, however, also have adventitious roots, which emerge above the ground from the shoot. Types of Root Systems Root systems are mainly of two types ( Figure 30.15 ). Dicots have a tap root system, while monocots have a fibrous root system. A tap root system has a main root that grows down vertically, and from which many smaller lateral roots arise. Dandelions are a good example; the tap root usually breaks off when these weeds are pulled, and the plant can regrow another shoot from the remaining root. A tap root system penetrates deep into the soil. In contrast, a fibrous root system is located closer to the soil surface, and forms a dense network of roots that also helps prevent soil erosion (lawn grasses are a good example, as are wheat, rice, and corn). Some plants have a combination of tap roots and fibrous roots. Plants that grow in dry areas often have deep root systems, whereas plants growing in areas with abundant water are likely to have shallower root systems. Root Growth and Anatomy Root growth begins with seed germination. When the plant embryo emerges from the seed, the radicle of the embryo forms the root system. The tip of the root is protected by the root cap, a structure exclusive to roots and unlike any other plant structure. The root cap is continuously replaced because it gets damaged easily as the root pushes through soil. The root tip can be divided into three zones: a zone of cell division, a zone of elongation, and a zone of maturation and differentiation ( Figure 30.16 ). The zone of cell division is closest to the root tip; it is made up of the actively dividing cells of the root meristem. The zone of elongation is where the newly formed cells increase in length, thereby lengthening the root. Beginning at the first root hair is the zone of cell maturation where the root cells begin to differentiate into special cell types. All three zones are in the first centimeter or so of the root tip. The root has an outer layer of cells called the epidermis, which surrounds areas of ground tissue and vascular tissue. The epidermis provides protection and helps in absorption. Root hairs, which are extensions of root epidermal cells, increase the surface area of the root, greatly contributing to the absorption of water and minerals. Inside the root, the ground tissue forms two regions: the cortex and the pith ( Figure 30.17 ). Compared to stems, roots have lots of cortex and little pith. Both regions include cells that store photosynthetic products. The cortex is between the epidermis and the vascular tissue, whereas the pith lies between the vascular tissue and the center of the root.
The vascular tissue in the root is arranged in the inner portion of the root, which is called the stele ( Figure 30.18 ). A layer of cells known as the endodermis separates the stele from the ground tissue in the outer portion of the root. The endodermis is exclusive to roots, and serves as a checkpoint for materials entering the root’s vascular system. A waxy substance called suberin is present on the walls of the endodermal cells. This waxy region, known as the Casparian strip , forces water and solutes to cross the plasma membranes of endodermal cells instead of slipping between the cells. This ensures that only materials required by the root pass through the endodermis, while toxic substances and pathogens are generally excluded. The outermost cell layer of the root’s vascular tissue is the pericycle , an area that can give rise to lateral roots. In dicot roots, the xylem and phloem of the stele are arranged alternately in an X shape, whereas in monocot roots, the vascular tissue is arranged in a ring around the pith. Root Modifications Root structures may be modified for specific purposes. For example, some roots are bulbous and store starch. Aerial roots and prop roots are two forms of aboveground roots that provide additional support to anchor the plant. Tap roots, such as carrots, turnips, and beets, are examples of roots that are modified for food storage ( Figure 30.19 ). Epiphytic roots enable a plant to grow on another plant. For example, the epiphytic roots of orchids develop a spongy tissue to absorb moisture. The banyan tree ( Ficus sp.) begins as an epiphyte, germinating in the branches of a host tree; aerial roots develop from the branches and eventually reach the ground, providing additional support ( Figure 30.20 ). In screwpine ( Pandanus sp.), a palm-like tree that grows in sandy tropical soils, aboveground prop roots develop from the nodes to provide additional support. 30.4 Leaves Learning Objectives By the end of this section, you will be able to: Identify the parts of a typical leaf Describe the internal structure and function of a leaf Compare and contrast simple leaves and compound leaves List and describe examples of modified leaves Leaves are the main sites for photosynthesis: the process by which plants synthesize food. Most leaves are usually green, due to the presence of chlorophyll in the leaf cells. However, some leaves may have different colors, caused by other plant pigments that mask the green chlorophyll. The thickness, shape, and size of leaves are adapted to the environment. Each variation helps a plant species maximize its chances of survival in a particular habitat. Usually, the leaves of plants growing in tropical rainforests have larger surface areas than those of plants growing in deserts or very cold conditions, which are likely to have a smaller surface area to minimize water loss. Structure of a Typical Leaf Each leaf typically has a leaf blade called the lamina , which is also the widest part of the leaf. Some leaves are attached to the plant stem by a petiole . Leaves that do not have a petiole and are directly attached to the plant stem are called sessile leaves. Small green appendages usually found at the base of the petiole are known as stipules . Most leaves have a midrib, which travels the length of the leaf and branches to each side to produce veins of vascular tissue. The edge of the leaf is called the margin. Figure 30.21 shows the structure of a typical eudicot leaf. Within each leaf, the vascular tissue forms veins. 
The arrangement of veins in a leaf is called the venation pattern. Monocots and dicots differ in their patterns of venation ( Figure 30.22 ). Monocots have parallel venation; the veins run in straight lines across the length of the leaf without converging at a point. In dicots, however, the veins of the leaf have a net-like appearance, forming a pattern known as reticulate venation. One extant plant, the Ginkgo biloba, has dichotomous venation where the veins fork. Leaf Arrangement The arrangement of leaves on a stem is known as phyllotaxy. The number and placement of a plant’s leaves will vary depending on the species, with each species exhibiting a characteristic leaf arrangement. Leaves are classified as either alternate, spiral, or opposite. Plants that have only one leaf per node have leaves that are said to be either alternate—meaning the leaves alternate on each side of the stem in a flat plane—or spiral, meaning the leaves are arrayed in a spiral along the stem. In an opposite leaf arrangement, two leaves arise at the same point, with the leaves connecting opposite each other along the branch. If there are three or more leaves connected at a node, the leaf arrangement is classified as whorled. Leaf Form Leaves may be simple or compound ( Figure 30.23 ). In a simple leaf, the blade is either completely undivided—as in the banana leaf—or it has lobes, but the separation does not reach the midrib, as in the maple leaf. In a compound leaf, the leaf blade is completely divided, forming leaflets, as in the locust tree. Each leaflet may have its own stalk, but is attached to the rachis. A palmately compound leaf resembles the palm of a hand, with leaflets radiating outwards from one point. Examples include the leaves of poison ivy, the buckeye tree, or the familiar houseplant Schefflera sp. (common name “umbrella plant”). Pinnately compound leaves take their name from their feather-like appearance; the leaflets are arranged along the midrib, as in rose leaves ( Rosa sp.), or the leaves of hickory, pecan, ash, or walnut trees. Leaf Structure and Function The outermost layer of the leaf is the epidermis; it is present on both sides of the leaf and is called the upper and lower epidermis, respectively. Botanists call the upper side the adaxial surface (or adaxis) and the lower side the abaxial surface (or abaxis). The epidermis helps in the regulation of gas exchange. It contains stomata ( Figure 30.24 ): openings through which the exchange of gases takes place. Two guard cells surround each stoma, regulating its opening and closing. The epidermis is usually one cell layer thick; however, in plants that grow in very hot or very cold conditions, the epidermis may be several layers thick to protect against excessive water loss from transpiration. A waxy layer known as the cuticle covers the leaves of all plant species. The cuticle reduces the rate of water loss from the leaf surface. Other leaves may have small hairs (trichomes) on the leaf surface. Trichomes help to deter herbivory by restricting insect movements, or by storing toxic or bad-tasting compounds; they can also reduce the rate of transpiration by blocking air flow across the leaf surface ( Figure 30.25 ). Below the epidermis of dicot leaves are layers of cells known as the mesophyll, or “middle leaf.” The mesophyll of most leaves typically contains two arrangements of parenchyma cells: the palisade parenchyma and spongy parenchyma ( Figure 30.26 ).
The palisade parenchyma (also called the palisade mesophyll) has column-shaped, tightly packed cells, and may be present in one, two, or three layers. Below the palisade parenchyma are loosely arranged cells of an irregular shape. These are the cells of the spongy parenchyma (or spongy mesophyll). The air space found between the spongy parenchyma cells allows gaseous exchange between the leaf and the outside atmosphere through the stomata. In aquatic plants, the intercellular spaces in the spongy parenchyma help the leaf float. Both layers of the mesophyll contain many chloroplasts. Guard cells are the only epidermal cells to contain chloroplasts. Like the stem, the leaf contains vascular bundles composed of xylem and phloem ( Figure 30.27 ). The xylem consists of tracheids and vessels, which transport water and minerals to the leaves. The phloem transports the photosynthetic products from the leaf to the other parts of the plant. A single vascular bundle, no matter how large or small, always contains both xylem and phloem tissues. Leaf Adaptations Coniferous plant species that thrive in cold environments, like spruce, fir, and pine, have leaves that are reduced in size and needle-like in appearance. These needle-like leaves have sunken stomata and a smaller surface area: two attributes that aid in reducing water loss. In hot climates, plants such as cacti have leaves that are reduced to spines, which in combination with their succulent stems, help to conserve water. Many aquatic plants have leaves with wide lamina that can float on the surface of the water, and a thick waxy cuticle on the leaf surface that repels water. Link to Learning Watch “The Pale Pitcher Plant” episode of the video series Plants Are Cool, Too, a Botanical Society of America video about a carnivorous plant species found in Louisiana. Evolution Connection Plant Adaptations in Resource-Deficient Environments Roots, stems, and leaves are structured to ensure that a plant can obtain the required sunlight, water, soil nutrients, and oxygen resources. Some remarkable adaptations have evolved to enable plant species to thrive in less than ideal habitats, where one or more of these resources is in short supply. In tropical rainforests, light is often scarce, since many trees and plants grow close together and block much of the sunlight from reaching the forest floor. Many tropical plant species have exceptionally broad leaves to maximize the capture of sunlight. Other species are epiphytes: plants that grow on other plants that serve as a physical support. Such plants are able to grow high up in the canopy atop the branches of other trees, where sunlight is more plentiful. Epiphytes live on rain and minerals collected in the branches and leaves of the supporting plant. Bromeliads (members of the pineapple family), ferns, and orchids are examples of tropical epiphytes ( Figure 30.28 ). Many epiphytes have specialized tissues that enable them to efficiently capture and store water. Some plants have special adaptations that help them to survive in nutrient-poor environments. Carnivorous plants, such as the Venus flytrap and the pitcher plant ( Figure 30.29 ), grow in bogs where the soil is low in nitrogen. In these plants, leaves are modified to capture insects. The insect-capturing leaves may have evolved to provide these plants with a supplementary source of much-needed nitrogen. Many swamp plants have adaptations that enable them to thrive in wet areas, where their roots grow submerged underwater. 
In these aquatic areas, the soil is unstable and little oxygen is available to reach the roots. Trees such as mangroves ( Rhizophora sp.) growing in coastal waters produce aboveground roots that help support the tree ( Figure 30.30 ). Some species of mangroves, as well as cypress trees, have pneumatophores: upward-growing roots containing pores and pockets of tissue specialized for gas exchange. Wild rice is an aquatic plant with large air spaces in the root cortex. The air-filled tissue—called aerenchyma—provides a path for oxygen to diffuse down to the root tips, which are embedded in oxygen-poor bottom sediments. Link to Learning Watch Venus Flytraps: Jaws of Death, an extraordinary BBC close-up of the Venus flytrap in action. 30.5 Transport of Water and Solutes in Plants Learning Objectives By the end of this section, you will be able to: Define water potential and explain how it is influenced by solutes, pressure, gravity, and the matric potential Describe how water potential, evapotranspiration, and stomatal regulation influence how water is transported in plants Explain how photosynthates are transported in plants The structure of plant roots, stems, and leaves facilitates the transport of water, nutrients, and photosynthates throughout the plant. The phloem and xylem are the main tissues responsible for this movement. Water potential, evapotranspiration, and stomatal regulation influence how water and nutrients are transported in plants. To understand how these processes work, we must first understand the energetics of water potential. Water Potential Plants are phenomenal hydraulic engineers. Using only the basic laws of physics and the simple manipulation of potential energy, plants can move water to the top of a 116-meter-tall tree ( Figure 30.31 a ). Plants can also use hydraulics to generate enough force to split rocks and buckle sidewalks ( Figure 30.31 b ). Plants achieve this because of water potential. Water potential is a measure of the potential energy in water. Plant physiologists are not interested in the energy in any one particular aqueous system, but are very interested in water movement between two systems. In practical terms, therefore, water potential is the difference in potential energy between a given water sample and pure water (at atmospheric pressure and ambient temperature). Water potential is denoted by the Greek letter Ψ (psi) and is expressed in units of pressure (pressure is a form of energy) called megapascals (MPa). The potential of pure water (Ψpure H2O) is, by convenience of definition, designated a value of zero (even though pure water contains plenty of potential energy, that energy is ignored). Water potential values for the water in a plant root, stem, or leaf are therefore expressed relative to Ψpure H2O. The water potential in plant solutions is influenced by solute concentration, pressure, gravity, and factors called matrix effects. Water potential can be broken down into its individual components using the following equation: Ψsystem = Ψtotal = Ψs + Ψp + Ψg + Ψm, where Ψs, Ψp, Ψg, and Ψm refer to the solute, pressure, gravity, and matric potentials, respectively. “System” can refer to the water potential of the soil water (Ψsoil), root water (Ψroot), stem water (Ψstem), leaf water (Ψleaf) or the water in the atmosphere (Ψatmosphere): whichever aqueous system is under consideration.
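To make the additive bookkeeping of this equation concrete, the short sketch below sums the four components for a single hypothetical cell. The numeric values are illustrative assumptions, not measurements from the text.

# A minimal sketch of the water potential equation above (all values in MPa).
# The component values are hypothetical illustrations, not data from the text.

def water_potential(psi_s, psi_p, psi_g=0.0, psi_m=0.0):
    """Psi_total = Psi_s + Psi_p + Psi_g + Psi_m."""
    return psi_s + psi_p + psi_g + psi_m

# A leaf cell with negative solute potential and positive turgor pressure:
print(water_potential(psi_s=-0.8, psi_p=0.5))  # -0.3 MPa, i.e., below pure water (0 MPa)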
As the individual components change, they raise or lower the total water potential of a system. When this happens, water moves to equilibrate, moving from the system or compartment with a higher water potential to the system or compartment with a lower water potential. This brings the difference in water potential between the two systems (ΔΨ) back to zero (ΔΨ = 0). Therefore, for water to move through the plant from the soil to the air (a process called transpiration), Ψsoil must be > Ψroot > Ψstem > Ψleaf > Ψatmosphere. Water only moves in response to ΔΨ, not in response to the individual components. However, because the individual components influence the total Ψsystem, by manipulating the individual components (especially Ψs), a plant can control water movement. Solute Potential Solute potential (Ψs), also called osmotic potential, is negative in a plant cell and zero in distilled water. Typical values for cell cytoplasm are –0.5 to –1.0 MPa. Solutes reduce water potential (resulting in a negative Ψw) by consuming some of the potential energy available in the water. Solute molecules can dissolve in water because water molecules can bind to them via hydrogen bonds; a hydrophobic molecule like oil, which cannot bind to water, cannot go into solution. The energy in the hydrogen bonds between solute molecules and water is no longer available to do work in the system because it is tied up in the bond. In other words, the amount of available potential energy is reduced when solutes are added to an aqueous system. Thus, Ψs decreases with increasing solute concentration. Because Ψs is one of the four components of Ψsystem or Ψtotal, a decrease in Ψs will cause a decrease in Ψtotal. The internal water potential of a plant cell is more negative than pure water because of the cytoplasm’s high solute content ( Figure 30.32 ). Because of this difference in water potential, water will move from the soil into a plant’s root cells via the process of osmosis. This is why solute potential is sometimes called osmotic potential. Plant cells can metabolically manipulate Ψs (and by extension, Ψtotal) by adding or removing solute molecules. Therefore, plants have control over Ψtotal via their ability to exert metabolic control over Ψs. Visual Connection Positive water potential is placed on the left side of the tube by increasing Ψp such that the water level rises on the right side. Could you equalize the water level on each side of the tube by adding solute, and if so, how? Pressure Potential Pressure potential (Ψp), also called turgor potential, may be positive or negative ( Figure 30.32 ). Because pressure is an expression of energy, the higher the pressure, the more potential energy in a system, and vice versa. Therefore, a positive Ψp (compression) increases Ψtotal, and a negative Ψp (tension) decreases Ψtotal. Positive pressure inside cells is contained by the cell wall, producing turgor pressure. Pressure potentials are typically around 0.6–0.8 MPa, but can reach as high as 1.5 MPa in a well-watered plant. A Ψp of 1.5 MPa equates to about 218 pounds per square inch (1 MPa ≈ 145 psi, so 1.5 MPa × 145 psi/MPa ≈ 218 psi). As a comparison, most automobile tires are kept at a pressure of 30–34 psi. An example of the effect of turgor pressure is the wilting of leaves and their restoration after the plant has been watered ( Figure 30.33 ). Water is lost from the leaves via transpiration (approaching Ψp = 0 MPa at the wilting point) and restored by uptake via the roots.
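The soil-to-atmosphere ordering given at the start of this discussion can be checked mechanically. The sketch below walks a set of hypothetical water potential values along the transpiration path; the station values are assumptions chosen for illustration (real values vary widely with soil moisture and humidity) and only the ordering matters.

# Water moves only from higher to lower total water potential (values in MPa).
# These station values are illustrative assumptions, not measurements.
path = [("soil", -0.3), ("root", -0.5), ("stem", -0.7), ("leaf", -1.0), ("atmosphere", -95.0)]

for (a, psi_a), (b, psi_b) in zip(path, path[1:]):
    direction = "flows" if psi_a > psi_b else "does NOT flow"
    print(f"water {direction} from {a} ({psi_a} MPa) to {b} ({psi_b} MPa)")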
A plant can manipulate Ψp via its ability to manipulate Ψs and by the process of osmosis. If a plant cell increases the cytoplasmic solute concentration, Ψs will decline, Ψtotal will decline, the ΔΨ between the cell and the surrounding tissue will decline, water will move into the cell by osmosis, and Ψp will increase. Ψp is also under indirect plant control via the opening and closing of stomata. Stomatal openings allow water to evaporate from the leaf, reducing Ψp and Ψtotal of the leaf and increasing ΔΨ between the water in the leaf and the petiole, thereby allowing water to flow from the petiole into the leaf. Gravity Potential Gravity potential (Ψg) is always negative to zero; in a plant with no appreciable height, it is essentially zero. It always removes or consumes potential energy from the system. The force of gravity pulls water downwards to the soil, reducing the total amount of potential energy in the water in the plant (Ψtotal). The taller the plant, the taller the water column, and the more influential Ψg becomes. On a cellular scale and in short plants, this effect is negligible and easily ignored. However, over the height of a tall tree like a giant coastal redwood, the gravitational pull of –0.01 MPa per meter of height adds up to an extra 1 MPa of resistance that must be overcome for water to reach the leaves of the tallest trees. Plants are unable to manipulate Ψg. Matric Potential Matric potential (Ψm) is always negative to zero. It can be as low as –2 MPa in a dry seed, and it is zero in a water-saturated system. The binding of water to a matrix always removes or consumes potential energy from the system. Ψm is similar to solute potential because it involves tying up the energy in an aqueous system by forming hydrogen bonds between the water and some other component. However, in solute potential, the other components are soluble, hydrophilic solute molecules, whereas in Ψm, the other components are insoluble, hydrophilic molecules of the plant cell wall. Every plant cell has a cellulosic cell wall and the cellulose in the cell walls is hydrophilic, producing a matrix for adhesion of water: hence the name matric potential. Ψm is very large (negative) in dry tissues such as seeds or drought-affected soils. However, it quickly goes to zero as the seed takes up water or the soil hydrates. Ψm cannot be manipulated by the plant and is typically ignored in well-watered roots, stems, and leaves. Movement of Water and Minerals in the Xylem Solutes, pressure, gravity, and matric potential are all important for the transport of water in plants. Water moves from an area of higher total water potential (higher Gibbs free energy) to an area of lower total water potential. Gibbs free energy is the energy associated with a chemical reaction that can be used to do work. This is expressed as ΔΨ. Transpiration is the loss of water from the plant through evaporation at the leaf surface. It is the main driver of water movement in the xylem. Transpiration is caused by the evaporation of water at the leaf–atmosphere interface; it creates negative pressure (tension) equivalent to –2 MPa at the leaf surface. This value varies greatly depending on the vapor pressure deficit, which can be negligible at high relative humidity (RH) and substantial at low RH. Water from the roots is pulled up by this tension.
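As a worked check on the gravity figure above, the standard hydrostatic relation Ψg = −ρgh (not stated in the text, but the usual way this per-meter value is derived) reproduces both the per-meter rate and the roughly 1 MPa total for the 116 m redwood mentioned earlier in this section:

\Psi_g = -\rho g h \approx -\left(1000\ \mathrm{kg\,m^{-3}}\right)\left(9.8\ \mathrm{m\,s^{-2}}\right)\left(116\ \mathrm{m}\right) \approx -1.1\ \mathrm{MPa}

that is, about −0.01 MPa for each meter of height.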
At night, when stomata shut and transpiration stops, the water is held in the stem and leaf by the adhesion of water to the cell walls of the xylem vessels and tracheids, and the cohesion of water molecules to each other. This is called the cohesion–tension theory of sap ascent. Inside the leaf at the cellular level, water on the surface of mesophyll cells saturates the cellulose microfibrils of the primary cell wall. The leaf contains many large intercellular air spaces for the exchange of oxygen for carbon dioxide, which is required for photosynthesis. The wet cell wall is exposed to this leaf internal air space, and the water on the surface of the cells evaporates into the air spaces, decreasing the thin film on the surface of the mesophyll cells. This decrease creates a greater tension on the water in the mesophyll cells ( Figure 30.34 ), thereby increasing the pull on the water in the xylem vessels. The xylem vessels and tracheids are structurally adapted to cope with large changes in pressure. Rings in the vessels maintain their tubular shape, much like the rings on a vacuum cleaner hose keep the hose open while it is under pressure. Small perforations between vessel elements reduce the number and size of gas bubbles that can form via a process called cavitation. The formation of gas bubbles in xylem interrupts the continuous stream of water from the base to the top of the plant, causing a break termed an embolism in the flow of xylem sap. The taller the tree, the greater the tension forces needed to pull water, and the more cavitation events. In larger trees, the resulting embolisms can plug xylem vessels, making them non-functional. Visual Connection Which of the following statements is false?
a. Negative water potential draws water into the root hairs. Cohesion and adhesion draw water up the xylem. Transpiration draws water from the leaf.
b. Negative water potential draws water into the root hairs. Cohesion and adhesion draw water up the phloem. Transpiration draws water from the leaf.
c. Water potential decreases from the roots to the top of the plant.
d. Water enters the plant through root hairs and exits through stomata.
Transpiration—the loss of water vapor to the atmosphere through stomata—is a passive process, meaning that metabolic energy in the form of ATP is not required for water movement. The energy driving transpiration is the difference in energy between the water in the soil and the water in the atmosphere. However, transpiration is tightly controlled. Control of Transpiration The atmosphere to which the leaf is exposed drives transpiration, but also causes massive water loss from the plant. Up to 90 percent of the water taken up by roots may be lost through transpiration. Leaves are covered by a waxy cuticle on the outer surface that prevents the loss of water. Regulation of transpiration, therefore, is achieved primarily through the opening and closing of stomata on the leaf surface. Stomata are surrounded by two specialized cells called guard cells, which open and close in response to environmental cues such as light intensity and quality, leaf water status, and carbon dioxide concentrations. Stomata must open to allow air containing carbon dioxide and oxygen to diffuse into the leaf for photosynthesis and respiration. When stomata are open, however, water vapor is lost to the external environment, increasing the rate of transpiration. Therefore, plants must maintain a balance between efficient photosynthesis and water loss.
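The trade-off just described can be caricatured as a decision rule. The sketch below is a toy illustration only; the cue names, scales, and thresholds are invented for the example and do not come from the text or from any real physiological model.

# Toy sketch of guard-cell regulation (illustrative only; thresholds invented).
# The cues follow the text: light, leaf water status, and internal CO2 level.

def stoma_open(light: bool, leaf_water: float, internal_co2: float) -> bool:
    """Open in the light when the leaf is hydrated and CO2 has been drawn down."""
    return light and leaf_water > 0.5 and internal_co2 < 0.8

print(stoma_open(light=True, leaf_water=0.9, internal_co2=0.4))  # True: photosynthesize, accept water loss
print(stoma_open(light=True, leaf_water=0.2, internal_co2=0.4))  # False: drought-stressed leaf stays closed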
Plants have evolved over time to adapt to their local environment and reduce transpiration ( Figure 30.35 ). Desert plants (xerophytes) and plants that grow on other plants (epiphytes) have limited access to water. Such plants usually have a much thicker waxy cuticle than those growing in more moderate, well-watered environments (mesophytes). Aquatic plants (hydrophytes) also have their own set of anatomical and morphological leaf adaptations. Xerophytes and epiphytes often have a thick covering of trichomes or of stomata that are sunken below the leaf’s surface. Trichomes are specialized hair-like epidermal cells that secrete oils and other substances. These adaptations impede air flow across the stomatal pore and reduce transpiration. Multiple epidermal layers are also commonly found in these types of plants. Transportation of Photosynthates in the Phloem Plants need an energy source to grow. In seeds and bulbs, food is stored in polymers (such as starch) that are converted by metabolic processes into sucrose for newly developing plants. Once green shoots and leaves are growing, plants are able to produce their own food by photosynthesizing. The products of photosynthesis are called photosynthates, which are usually in the form of simple sugars such as sucrose. Structures that produce photosynthates for the growing plant are referred to as sources. Sugars produced in sources, such as leaves, need to be delivered to growing parts of the plant via the phloem in a process called translocation. The points of sugar delivery, such as roots, young shoots, and developing seeds, are called sinks. Seeds, tubers, and bulbs can be either a source or a sink, depending on the plant’s stage of development and the season. The products from the source are usually translocated to the nearest sink through the phloem. For example, the highest leaves will send photosynthates upward to the growing shoot tip, whereas lower leaves will direct photosynthates downward to the roots. Intermediate leaves will send products in both directions, unlike the flow in the xylem, which is always unidirectional (soil to leaf to atmosphere). The pattern of photosynthate flow changes as the plant grows and develops. Photosynthates are directed primarily to the roots early on, to shoots and leaves during vegetative growth, and to seeds and fruits during reproductive development. They are also directed to tubers for storage. Translocation: Transport from Source to Sink Photosynthates, such as sucrose, are produced in the mesophyll cells of photosynthesizing leaves. From there they are translocated through the phloem to where they are used or stored. Mesophyll cells are connected by cytoplasmic channels called plasmodesmata. Photosynthates move through these channels to reach phloem sieve-tube elements (STEs) in the vascular bundles. From the mesophyll cells, the photosynthates are loaded into the phloem STEs. The sucrose is actively transported against its concentration gradient (a process requiring ATP) into the phloem cells using the electrochemical potential of the proton gradient. This transport is coupled to the uptake of sucrose via a carrier protein called the sucrose-H+ symporter. Phloem STEs have reduced cytoplasmic contents, and are connected by a sieve plate with pores that allow for pressure-driven bulk flow, or translocation, of phloem sap. Companion cells are associated with STEs. They assist with metabolic activities and produce energy for the STEs ( Figure 30.36 ).
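The osmotic consequence of this loading, which the next paragraph describes, can be sketched numerically. All values below are hypothetical illustrations in MPa, chosen only to show the direction of the changes, not to match any measured sieve tube.

# Minimal sketch of phloem loading at a source (illustrative values, in MPa).
# Loading sucrose lowers the sieve tube's solute potential; water then enters
# from the adjacent xylem by osmosis, raising turgor pressure, which is what
# drives bulk flow toward the sink.

def psi_total(psi_s, psi_p):
    return psi_s + psi_p

psi_s, psi_p = -0.7, 0.1     # sieve-tube element, before loading
psi_xylem = -0.6             # neighboring xylem water

psi_s -= 0.8                 # active sucrose loading (sucrose-H+ symporter, costs ATP)
while psi_total(psi_s, psi_p) < psi_xylem:
    psi_p += 0.05            # osmotic water entry inflates turgor
print(f"turgor after loading: {psi_p:.2f} MPa")  # pressure now available to push sap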
Once in the phloem, the photosynthates are translocated to the closest sink. Phloem sap is an aqueous solution that contains up to 30 percent sugar, minerals, amino acids, and plant growth regulators. The high percentage of sugar decreases Ψs, which decreases the total water potential and causes water to move by osmosis from the adjacent xylem into the phloem tubes, thereby increasing pressure. This increase in total water potential causes the bulk flow of phloem from source to sink ( Figure 30.37 ). Sucrose concentration in the sink cells is lower than in the phloem STEs because the sink sucrose has been metabolized for growth, converted to starch for storage, or converted to other polymers, such as cellulose, for structural integrity. Unloading at the sink end of the phloem tube occurs by either diffusion or active transport of sucrose molecules from an area of high concentration to one of low concentration. Water diffuses from the phloem by osmosis and is then transpired or recycled via the xylem back into the phloem sap. 30.6 Plant Sensory Systems and Responses Learning Objectives By the end of this section, you will be able to: Describe how red and blue light affect plant growth and metabolic activities Discuss gravitropism Understand how hormones affect plant growth and development Describe thigmotropism, thigmonastism, and thigmogenesis Explain how plants defend themselves from predators and respond to wounds Animals can respond to environmental factors by moving to a new location. Plants, however, are rooted in place and must respond to the surrounding environmental factors. Plants have sophisticated systems to detect and respond to light, gravity, temperature, and physical touch. Receptors sense environmental factors and relay the information to effector systems—often through intermediate chemical messengers—to bring about plant responses. Plant Responses to Light Plants have a number of sophisticated uses for light that go far beyond their ability to photosynthesize low-molecular-weight sugars using only carbon dioxide, light, and water. Photomorphogenesis is the growth and development of plants in response to light. It allows plants to optimize their use of light and space. Photoperiodism is the ability to use light to track time. Plants can tell the time of day and time of year by sensing and using various wavelengths of sunlight. Phototropism is a directional response that allows plants to grow towards, or even away from, light. The sensing of light in the environment is important to plants; it can be crucial for competition and survival. The response of plants to light is mediated by different photoreceptors, which consist of a protein covalently bonded to a light-absorbing pigment called a chromophore. Together, the two are called a chromoprotein. The red/far-red and violet-blue regions of the visible light spectrum trigger structural development in plants. Sensory photoreceptors absorb light in these particular regions of the visible light spectrum because of the quality of light available in the daylight spectrum. In terrestrial habitats, light absorption by chlorophylls peaks in the blue and red regions of the spectrum. As light filters through the canopy and the blue and red wavelengths are absorbed, the spectrum shifts to the far-red end, shifting the plant community to those plants better adapted to respond to far-red light. Blue-light receptors allow plants to gauge the direction and abundance of sunlight, which is rich in blue–green emissions.
Water absorbs red light, which makes the detection of blue light essential for algae and aquatic plants. The Phytochrome System and the Red/Far-Red Response The phytochromes are a family of chromoproteins with a linear tetrapyrrole chromophore, similar to the ringed tetrapyrrole light-absorbing head group of chlorophyll. Phytochromes have two photo-interconvertible forms: Pr and Pfr. Pr absorbs red light (~667 nm) and is immediately converted to Pfr. Pfr absorbs far-red light (~730 nm) and is quickly converted back to Pr. Absorption of red or far-red light causes a massive change to the shape of the chromophore, altering the conformation and activity of the phytochrome protein to which it is bound. Pfr is the physiologically active form of the protein; therefore, exposure to red light yields physiological activity. Exposure to far-red light inhibits phytochrome activity. Together, the two forms represent the phytochrome system ( Figure 30.38 ). The phytochrome system acts as a biological light switch. It monitors the level, intensity, duration, and color of environmental light. The effect of red light is reversible by immediately shining far-red light on the sample, which converts the chromoprotein to the inactive Pr form. Additionally, Pfr can slowly revert to Pr in the dark, or break down over time. In all instances, the physiological response induced by red light is reversed. The active form of phytochrome (Pfr) can directly activate other molecules in the cytoplasm, or it can be trafficked to the nucleus, where it directly activates or represses specific gene expression. Once the phytochrome system evolved, plants adapted it to serve a variety of needs. Unfiltered, full sunlight contains much more red light than far-red light. Because chlorophyll absorbs strongly in the red region of the visible spectrum, but not in the far-red region, any plant in the shade of another plant on the forest floor will be exposed to red-depleted, far-red-enriched light. The preponderance of far-red light converts phytochrome in the shaded leaves to the Pr (inactive) form, slowing growth. The nearest non-shaded (or even less-shaded) areas on the forest floor have more red light; leaves exposed to these areas sense the red light, which activates the Pfr form and induces growth. In short, plant shoots use the phytochrome system to grow away from shade and towards light. Because competition for light is so fierce in a dense plant community, the evolutionary advantages of the phytochrome system are obvious. In seeds, the phytochrome system is not used to determine direction and quality of light (shaded versus unshaded). Instead, it is used merely to determine if there is any light at all. This is especially important in species with very small seeds, such as lettuce. Because of their size, lettuce seeds have few food reserves. Their seedlings cannot grow for long before they run out of fuel. If they germinated even a centimeter under the soil surface, the seedling would never make it into the sunlight and would die. In the dark, phytochrome is in the inactive Pr form and the seed will not germinate; it will only germinate if exposed to light at the surface of the soil. Upon exposure to light, Pr is converted to Pfr and germination proceeds. Plants also use the phytochrome system to sense the change of season. Photoperiodism is a biological response to the timing and duration of day and night. It controls flowering, setting of winter buds, and vegetative growth.
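Before turning to seasonal sensing, note that the switch-like behavior described above lends itself to a small state machine. The sketch below encodes only the transitions named in the text (red light converts Pr to Pfr, far-red light converts Pfr back, and Pfr reverts in darkness); reaction rates and intermediate states are deliberately omitted.

# Minimal state-machine sketch of the phytochrome light switch.
# Transitions follow the text; timing and kinetics are not modeled.

def phytochrome_step(state: str, light: str) -> str:
    if state == "Pr" and light == "red":
        return "Pfr"                      # active form: promotes physiological activity
    if state == "Pfr" and light in ("far-red", "dark"):
        return "Pr"                       # inactive form (reversion in the dark is slow)
    return state

state = "Pr"
for light in ["red", "far-red", "red", "dark"]:
    state = phytochrome_step(state, light)
    print(f"{light:>7} -> {state}")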
Detection of seasonal changes is crucial to plant survival. Although temperature and light intensity influence plant growth, they are not reliable indicators of season because they may vary from one year to the next. Day length is a better indicator of the time of year. As stated above, unfiltered sunlight is rich in red light but deficient in far-red light. Therefore, at dawn, all the phytochrome molecules in a leaf quickly convert to the active Pfr form, and remain in that form until sunset. In the dark, the Pfr form takes hours to slowly revert back to the Pr form. If the night is long (as in winter), all of the Pfr form reverts. If the night is short (as in summer), a considerable amount of Pfr may remain at sunrise. By sensing the Pr/Pfr ratio at dawn, a plant can determine the length of the day/night cycle. In addition, leaves retain that information for several days, allowing a comparison between the length of the previous night and the preceding several nights. Shorter nights indicate springtime to the plant; when the nights become longer, autumn is approaching. This information, along with sensing temperature and water availability, allows plants to determine the time of the year and adjust their physiology accordingly. Short-day (long-night) plants use this information to flower in the late summer and early fall, when nights exceed a critical length (often eight or fewer hours). Long-day (short-night) plants flower during the spring, when darkness is less than a critical length (often eight to fifteen hours). Not all plants use the phytochrome system in this way. Flowering in day-neutral plants is not regulated by daylength. Career Connection Horticulturist The word “horticulturist” comes from the Latin words for garden ( hortus ) and culture ( cultura ). This career has been revolutionized by progress made in the understanding of plant responses to environmental stimuli. Growers of crops, fruit, vegetables, and flowers were previously constrained by having to time their sowing and harvesting according to the season. Now, horticulturists can manipulate plants to increase leaf, flower, or fruit production by understanding how environmental factors affect plant growth and development. Greenhouse management is an essential component of a horticulturist’s education. To lengthen the night, plants are covered with a blackout shade cloth. Long-day plants are irradiated with red light in winter to promote early flowering. Fluorescent (cool white) light high in blue wavelengths encourages leafy growth and is excellent for starting seedlings. Incandescent lamps (standard light bulbs) are rich in red light, and promote flowering in some plants. The timing of fruit ripening can be advanced or delayed by applying plant hormones. Recently, considerable progress has been made in the development of plant breeds that are suited to different climates and resistant to pests and transportation damage. Both crop yield and quality have increased as a result of practical applications of the knowledge of plant responses to external stimuli and hormones. Horticulturists find employment in private and governmental laboratories, greenhouses, botanical gardens, and in the production or research fields. They improve crops by applying their knowledge of genetics and plant physiology. To prepare for a horticulture career, students take classes in botany, plant physiology, plant pathology, landscape design, and plant breeding.
To complement these traditional courses, horticulture majors add studies in economics, business, computer science, and communications. The Blue Light Responses Phototropism—the directional bending of a plant toward or away from a light source—is a response to blue wavelengths of light. Positive phototropism is growth towards a light source ( Figure 30.39 ), while negative phototropism (also called skototropism) is growth away from light. The aptly named phototropins are protein-based receptors responsible for mediating the phototropic response. Like all plant photoreceptors, phototropins consist of a protein portion and a light-absorbing portion, called the chromophore. In phototropins, the chromophore is a covalently bound molecule of flavin; hence, phototropins belong to a class of proteins called flavoproteins. Other responses under the control of phototropins are leaf opening and closing, chloroplast movement, and the opening of stomata. However, of all responses controlled by phototropins, phototropism has been studied the longest and is the best understood. In their 1880 treatise The Power of Movements in Plants, Charles Darwin and his son Francis first described phototropism as the bending of seedlings toward light. Darwin observed that light was perceived by the tip of the plant (the apical meristem), but that the response (bending) took place in a different part of the plant. They concluded that the signal had to travel from the apical meristem to the base of the plant. In 1913, Peter Boysen-Jensen demonstrated that a chemical signal produced in the plant tip was responsible for the bending at the base. He cut off the tip of a seedling, covered the cut section with a layer of gelatin, and then replaced the tip. The seedling bent toward the light when illuminated. However, when impermeable mica flakes were inserted between the tip and the cut base, the seedling did not bend. A refinement of the experiment showed that the signal traveled on the shaded side of the seedling. When the mica plate was inserted on the illuminated side, the plant did bend towards the light. Therefore, the chemical signal was a growth stimulant because the phototropic response involved faster cell elongation on the shaded side than on the illuminated side. We now know that as light passes through a plant stem, it is diffracted and generates phototropin activation across the stem. Most activation occurs on the lit side, causing the plant hormone indole acetic acid (IAA) to accumulate on the shaded side. Stem cells elongate under the influence of IAA. Cryptochromes are another class of blue-light-absorbing photoreceptors that also contain a flavin-based chromophore. Cryptochromes set the plant’s 24-hour activity cycle, also known as its circadian rhythm, using blue light cues. There is some evidence that cryptochromes work together with phototropins to mediate the phototropic response. Link to Learning Use the navigation menu in the left panel of this website to view images of plants in motion. Plant Responses to Gravity Whether they germinate in the light or in total darkness, shoots usually sprout up from the ground, and roots grow downward into the ground. A plant laid on its side in the dark will send shoots upward when given enough time. Gravitropism ensures that roots grow into the soil and that shoots grow toward sunlight. Growth of the shoot apical tip upward is called negative gravitropism, whereas growth of the roots downward is called positive gravitropism.
Amyloplasts (also known as statoliths) are specialized plastids that contain starch granules and settle downward in response to gravity. Amyloplasts are found in shoots and in specialized cells of the root cap. When a plant is tilted, the statoliths drop to the new bottom cell wall. A few hours later, the shoot or root will show growth in the new vertical direction. The mechanism that mediates gravitropism is reasonably well understood. When amyloplasts settle to the bottom of the gravity-sensing cells in the root or shoot, they physically contact the endoplasmic reticulum (ER), causing the release of calcium ions from inside the ER. This calcium signaling in the cells causes polar transport of the plant hormone IAA to the bottom of the cell. In roots, a high concentration of IAA inhibits cell elongation. The effect slows growth on the lower side of the root, while cells develop normally on the upper side. IAA has the opposite effect in shoots, where a higher concentration at the lower side of the shoot stimulates cell expansion, causing the shoot to grow up. After the shoot or root begins to grow vertically, the amyloplasts return to their normal position. Other hypotheses—involving the entire cell in the gravitropism effect—have been proposed to explain why some mutants that lack amyloplasts may still exhibit a weak gravitropic response. Growth Responses A plant’s sensory response to external stimuli relies on chemical messengers (hormones). Plant hormones affect all aspects of plant life, from flowering to fruit setting and maturation, and from phototropism to leaf fall. Potentially every cell in a plant can produce plant hormones. They can act in their cell of origin or be transported to other portions of the plant body, with many plant responses involving the synergistic or antagonistic interaction of two or more hormones. In contrast, animal hormones are produced in specific glands and transported to a distant site for action, and they act alone. Plant hormones are a group of unrelated chemical substances that affect plant morphogenesis. Five major plant hormones are traditionally described: auxins (particularly IAA), cytokinins, gibberellins, ethylene, and abscisic acid. In addition, other nutrients and environmental conditions can be characterized as growth factors. Auxins The term auxin is derived from the Greek word auxein, which means “to grow.” Auxins are the main hormones responsible for cell elongation in phototropism and gravitropism. They also control the differentiation of meristem into vascular tissue, and promote leaf development and arrangement. While many synthetic auxins are used as herbicides, IAA is the only naturally occurring auxin that shows physiological activity. Apical dominance—the inhibition of lateral bud formation—is triggered by auxins produced in the apical meristem. Flowering, fruit setting and ripening, and inhibition of abscission (leaf falling) are other plant responses under the direct or indirect control of auxins. Auxins also act as a relay for the effects of the blue light and red/far-red responses. Commercial use of auxins is widespread in plant nurseries and for crop production. IAA is used as a rooting hormone to promote growth of adventitious roots on cuttings and detached leaves. Applying synthetic auxins to tomato plants in greenhouses promotes normal fruit development. Outdoor application of auxin promotes synchronization of fruit setting and dropping to coordinate the harvesting season.
Fruits such as seedless cucumbers can be induced to set fruit by treating unfertilized plant flowers with auxins. Cytokinins The effect of cytokinins was first reported when it was found that adding the liquid endosperm of coconuts to developing plant embryos in culture stimulated their growth. The stimulating growth factor was found to be cytokinin, a hormone that promotes cytokinesis (cell division). Almost 200 naturally occurring or synthetic cytokinins are known to date. Cytokinins are most abundant in growing tissues, such as roots, embryos, and fruits, where cell division is occurring. Cytokinins are known to delay senescence in leaf tissues, promote mitosis, and stimulate differentiation of the meristem in shoots and roots. Many effects on plant development are under the influence of cytokinins, either in conjunction with auxin or another hormone. For example, apical dominance seems to result from a balance between auxins that inhibit lateral buds, and cytokinins that promote bushier growth. Gibberellins Gibberellins (GAs) are a group of about 125 closely related plant hormones that stimulate shoot elongation, seed germination, and fruit and flower maturation. GAs are synthesized in the root and stem apical meristems, young leaves, and seed embryos. In urban areas, GA antagonists are sometimes applied to trees under power lines to control growth and reduce the frequency of pruning. GAs break dormancy (a state of inhibited growth and development) in the seeds of plants that require exposure to cold or light to germinate. Abscisic acid is a strong antagonist of GA action. Other effects of GAs include gender expression, seedless fruit development, and the delay of senescence in leaves and fruit. Seedless grapes are obtained through standard breeding methods and contain inconspicuous seeds that fail to develop. Because GAs are produced by the seeds, and because fruit development and stem elongation are under GA control, these varieties of grapes would normally produce small fruit in compact clusters. Maturing grapes are routinely treated with GA to promote larger fruit size, as well as looser bunches (longer stems), which reduces the incidence of mildew infection ( Figure 30.40 ). Abscisic Acid The plant hormone abscisic acid (ABA) was first discovered as the agent that causes the abscission or dropping of cotton bolls. However, more recent studies indicate that ABA plays only a minor role in the abscission process. ABA accumulates as a response to stressful environmental conditions, such as dehydration, cold temperatures, or shortened day lengths. Its activity counters many of the growth-promoting effects of GAs and auxins. ABA inhibits stem elongation and induces dormancy in lateral buds. ABA induces dormancy in seeds by blocking germination and promoting the synthesis of storage proteins. Plants adapted to temperate climates require a long period of cold temperature before seeds germinate. This mechanism protects young plants from sprouting too early during unseasonably warm weather in winter. As the hormone gradually breaks down over winter, the seed is released from dormancy and germinates when conditions are favorable in spring. Another effect of ABA is to promote the development of winter buds; it mediates the conversion of the apical meristem into a dormant bud. Low soil moisture causes an increase in ABA, which causes stomata to close, reducing water loss. Ethylene Ethylene is associated with fruit ripening, flower wilting, and leaf fall.
Ethylene is unusual because it is a volatile gas (C2H4). Hundreds of years ago, when gas street lamps were installed in city streets, trees that grew close to lamp posts developed twisted, thickened trunks and shed their leaves earlier than expected. These effects were caused by ethylene volatilizing from the lamps. Aging tissues (especially senescing leaves) and nodes of stems produce ethylene. The best-known effect of the hormone, however, is the promotion of fruit ripening. Ethylene stimulates the conversion of starch and acids to sugars. Some people store unripe fruit, such as avocados, in a sealed paper bag to accelerate ripening; the gas released by the first fruit to mature will speed up the maturation of the remaining fruit. Ethylene also triggers leaf and fruit abscission, flower fading and dropping, and promotes germination in some cereals and sprouting of bulbs and potatoes. Ethylene is widely used in agriculture. Commercial fruit growers control the timing of fruit ripening with application of the gas. Horticulturalists inhibit leaf dropping in ornamental plants by removing ethylene from greenhouses using fans and ventilation. Nontraditional Hormones Recent research has discovered a number of compounds that also influence plant development. Their roles are less understood than the effects of the major hormones described so far. Jasmonates play a major role in defense responses to herbivory. Their levels increase when a plant is wounded by a predator, resulting in an increase in toxic secondary metabolites. They contribute to the production of volatile compounds that attract natural enemies of predators. For example, chewing of tomato plants by caterpillars leads to an increase in jasmonic acid levels, which in turn triggers the release of volatile compounds that attract predators of the pest. Oligosaccharins also play a role in plant defense against bacterial and fungal infections. They act locally at the site of injury, and can also be transported to other tissues. Strigolactones promote seed germination in some species and inhibit lateral apical development in the absence of auxins. Strigolactones also play a role in the establishment of mycorrhizae, a mutualistic association of plant roots and fungi. Brassinosteroids are important to many developmental and physiological processes. Signaling between these compounds and other hormones, notably auxin and GAs, amplifies their physiological effect. Apical dominance, seed germination, gravitropism, and resistance to freezing are all positively influenced by brassinosteroids. Root growth and fruit dropping are inhibited by these steroids. Plant Responses to Wind and Touch The shoot of a pea plant winds around a trellis, while a tree grows on an angle in response to strong prevailing winds. These are examples of how plants respond to touch or wind. The movement of a plant subjected to constant directional pressure is called thigmotropism, from the Greek words thigma meaning “touch,” and tropism implying “direction.” Tendrils are one example of this. The meristematic region of tendrils is very touch sensitive; light touch will evoke a quick coiling response. Cells in contact with a support surface contract, whereas cells on the opposite side of the support expand ( Figure 30.14 ). Application of jasmonic acid is sufficient to trigger tendril coiling without a mechanical stimulus. A thigmonastic response is a touch response independent of the direction of the stimulus ( Figure 30.24 ).
In the Venus flytrap, two modified leaves are joined at a hinge and lined with thin fork-like tines along the outer edges. Tiny hairs are located inside the trap. When an insect brushes against these trigger hairs, touching two or more of them in succession, the leaves close quickly, trapping the prey. Glands on the leaf surface secrete enzymes that slowly digest the insect. The released nutrients are absorbed by the leaves, which reopen for the next meal. Thigmomorphogenesis is a slow developmental change in the shape of a plant subjected to continuous mechanical stress. When trees bend in the wind, for example, growth is usually stunted and the trunk thickens. Strengthening tissue, especially xylem, is produced to add stiffness to resist the wind’s force. Researchers hypothesize that mechanical strain induces growth and differentiation to strengthen the tissues. Ethylene and jasmonate are likely involved in thigmomorphogenesis. Defense Responses against Herbivores and Pathogens Plants face two types of enemies: herbivores and pathogens. Herbivores, both large and small, use plants as food and actively chew them. Pathogens are agents of disease. These infectious organisms, such as fungi, bacteria, and nematodes, live off the plant and damage its tissues. Plants have developed a variety of strategies to discourage or kill attackers. The first line of defense in plants is an intact and impenetrable barrier. Bark and the waxy cuticle can protect against predators. Other adaptations against herbivory include thorns, which are modified branches, and spines, which are modified leaves. They discourage animals by causing physical damage and inducing rashes and allergic reactions. A plant’s exterior protection can be compromised by mechanical damage, which may provide an entry point for pathogens. If the first line of defense is breached, the plant must resort to a different set of defense mechanisms, such as toxins and enzymes. Secondary metabolites are compounds that are not directly derived from photosynthesis and are not necessary for respiration or plant growth and development. Many metabolites are toxic, and can even be lethal to animals that ingest them. Some metabolites are alkaloids, which discourage predators with noxious odors (such as the volatile oils of mint and sage) or repellent tastes (like the bitterness of quinine). Other alkaloids affect herbivores by causing either excessive stimulation (caffeine is one example) or the lethargy associated with opioids. Some compounds become toxic after ingestion; for instance, cyanogenic glycosides in the cassava root release cyanide only upon ingestion by the herbivore. Mechanical wounding and predator attacks activate defense and protection mechanisms both in the damaged tissue and at sites farther from the injury location. Some defense reactions occur within minutes; others take several hours. The infected and surrounding cells may die, thereby stopping the spread of infection. Long-distance signaling elicits a systemic response aimed at deterring the predator. As tissue is damaged, jasmonates may promote the synthesis of compounds that are toxic to predators. Jasmonates also elicit the synthesis of volatile compounds that attract parasitoids, which are insects that spend their developing stages in or on another insect, and eventually kill their host.
The plant may activate abscission of injured tissue if it is damaged beyond repair.
u.s._history
Summary 23.1 American Isolationism and the European Origins of War President Wilson had no desire to embroil the United States in the bloody and lengthy war that was devastating Europe. His foreign policy, through his first term and his campaign for reelection, focused on keeping the United States out of the war and involving the country in international affairs only when there was a moral imperative to do so. After his 1916 reelection, however, the free trade associated with neutrality proved impossible to secure against the total war strategies of the belligerents, particularly Germany’s submarine warfare. Ethnic ties to Europe meant that much of the general public was more than happy to remain neutral. Wilson’s reluctance to go to war was mirrored in Congress, where fifty-six members voted against the war resolution. The measure still passed, however, and the United States went to war against the wishes of many of its citizens. 23.2 The United States Prepares for War Wilson might have entered the war unwillingly, but once it became inevitable, he quickly moved to use federal legislation and government oversight to put into place the conditions for the nation’s success. First, he sought to ensure that all logistical needs—from fighting men to raw materials for wartime production—were in place and within government reach. From legislating rail service to encouraging Americans to buy liberty bonds and “bring the boys home sooner,” the government worked to make sure that the conditions for success were in place. Then came the more nuanced challenge of ensuring that a country of immigrants from both sides of the conflict fell in line as Americans, first and foremost. Aggressive propaganda campaigns, combined with a series of restrictive laws to silence dissenters, ensured that Americans would either support the war or at least stay silent. While some conscientious objectors and others spoke out, the government efforts were largely successful in silencing those who had favored neutrality. 23.3 A New Home Front The First World War remade the world for all Americans, whether they served abroad or stayed at home. For some groups, such as women and Black Americans, the war provided opportunities for advancement. As soldiers went to war, women and African Americans took on jobs that had previously been reserved for White men. In return for a no-strike pledge, workers gained the right to organize. Many of these shifts were temporary, however, and the end of the war came with a cultural expectation that the old social order would be reinstated. Some reform efforts also proved short-lived. President Wilson’s wartime agencies managed the wartime economy effectively but closed immediately with the end of the war (although similar agencies reappeared over a decade later with the New Deal). While patriotic fervor allowed Progressives to pass prohibition, the strong demand for alcohol made the law unsustainable. Women’s suffrage, however, was a Progressive movement that came to fruition in part because of the circumstances of the war, and unlike prohibition, it remained. 23.4 From War to Peace American involvement in World War I came late. Compared to the incredible carnage endured by Europe, the United States’ battles were brief and successful, although the appalling fighting conditions and significant casualties made it feel otherwise to Americans, both at war and at home.
For Wilson, victory in the fields of France was not followed by triumphs in Versailles or Washington, DC, where his vision of a new world order was summarily rejected by his allied counterparts and then by the U.S. Congress. Wilson had hoped that America’s political influence could steer the world to a place of more open and tempered international negotiations. His influence did lead to the creation of the League of Nations, but concerns at home impeded the process so completely that the United States never ratified the treaty that Wilson worked so hard to create. 23.5 Demobilization and Its Difficult Aftermath The end of a successful war did not bring the kind of celebration the country craved or anticipated. The flu pandemic, economic troubles, and racial and ideological tensions combined to make the immediate postwar experience in the United States one of anxiety and discontent. As the 1920 presidential election neared, Americans made it clear that they were seeking a break from the harsh realities that the country had been forced to face through the previous years of Progressive mandates and war. By voting in President Warren G. Harding in a landslide election, Americans indicated their desire for a government that would leave them alone, keep taxes low, and limit social Progressivism and international intervention.
Chapter Outline 23.1 American Isolationism and the European Origins of War 23.2 The United States Prepares for War 23.3 A New Home Front 23.4 From War to Peace 23.5 Demobilization and Its Difficult Aftermath Introduction On the eve of World War I, the U.S. government under President Woodrow Wilson opposed any entanglement in international military conflicts. But as the war engulfed Europe and the belligerents’ total war strategies targeted commerce and travel across the Atlantic, it became clear that the United States would not be able to maintain its position of neutrality. Still, the American public was of mixed opinion; many resisted the idea of American intervention and American lives lost, no matter how bad the circumstances. In 1918, artist George Bellows created a series of paintings intended to strengthen public support for the war effort. His paintings depicted German war atrocities in explicit and expertly captured detail, from children run through with bayonets to torturers happily resting while their victims suffered. The painting Return of the Useless ( Figure 23.1 ) shows Germans unloading sick or disabled labor camp prisoners from a boxcar. These paintings, while not regarded as Bellows’ most important artistic work, were typical of anti-German propaganda at the time. The U.S. government sponsored much of this propaganda out of concern that many American immigrants sympathized with the Central powers and would not support the U.S. war effort.
[ { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Wilson appointed former presidential candidate William Jennings Bryan , a noted anti-imperialist and proponent of world peace , as his Secretary of State . <hl> Bryan undertook his new assignment with great vigor , encouraging nations around the world to sign “ cooling off treaties , ” under which they agreed to resolve international disputes through talks , not war , and to submit any grievances to an international commission . Bryan also negotiated friendly relations with Colombia , including a $ 25 million apology for Roosevelt ’ s actions during the Panamanian Revolution , and worked to establish effective self-government in the Philippines in preparation for the eventual American withdrawal . Even with Bryan ’ s support , however , Wilson found that it was much harder than he anticipated to keep the United States out of world affairs ( Figure 23.3 ) . In reality , the United States was interventionist in areas where its interests — direct or indirect — were threatened .", "hl_sentences": "Wilson appointed former presidential candidate William Jennings Bryan , a noted anti-imperialist and proponent of world peace , as his Secretary of State .", "question": { "cloze_format": "In order to pursue his goal of using American influence overseas only when it was a moral imperative, Wilson put ___ in the position of Secretary of State.", "normal_format": "In order to pursue his goal of using American influence overseas only when it was a moral imperative, Wilson put which man in the position of Secretary of State?", "question_choices": [ "Charles Hughes", "Theodore Roosevelt", "William Jennings Bryan", "John Pershing" ], "question_id": "eip-idp15018256", "question_text": "In order to pursue his goal of using American influence overseas only when it was a moral imperative, Wilson put which man in the position of Secretary of State?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "because they refused to warn their targets before firing" }, "bloom": null, "hl_context": "<hl> One terrifying new piece of technological warfare was the German unterseeboot — an “ undersea boat ” or U-boat . <hl> By early 1915 , in an effort to break the British naval blockade of Germany and turn the tide of the war , the Germans dispatched a fleet of these submarines around Great Britain to attack both merchant and military ships . <hl> The U-boats acted in direct violation of international law , attacking without warning from beneath the water instead of surfacing and permitting the surrender of civilians or crew . <hl> By 1918 , German U-boats had sunk nearly five thousand vessels . Of greatest historical note was the attack on the British passenger ship , RMS Lusitania , on its way from New York to Liverpool on May 7 , 1915 . The German Embassy in the United States had announced that this ship would be subject to attack for its cargo of ammunition : an allegation that later proved accurate . Nonetheless , almost 1,200 civilians died in the attack , including 128 Americans . The attack horrified the world , galvanizing support in England and beyond for the war ( Figure 23.5 ) . This attack , more than any other event , would test President Wilson ’ s desire to stay out of what had been a largely European conflict .", "hl_sentences": "One terrifying new piece of technological warfare was the German unterseeboot — an “ undersea boat ” or U-boat . 
The U-boats acted in direct violation of international law , attacking without warning from beneath the water instead of surfacing and permitting the surrender of civilians or crew .", "question": { "cloze_format": "The German use of the unterseeboot was considered to defy international law ___.", "normal_format": "Why was the German use of the unterseeboot considered to defy international law?", "question_choices": [ "because other countries did not have similar technology", "because they refused to warn their targets before firing", "because they constituted cruel and unusual methods", "because no international consensus existed to employ submarine technology" ], "question_id": "eip-idm3796896", "question_text": "Why was the German use of the unterseeboot considered to defy international law?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "the Sedition Act" }, "bloom": null, "hl_context": "<hl> Wilson also created the War Industries Board , run by Bernard Baruch , to ensure adequate military supplies . <hl> The War Industries Board had the power to direct shipments of raw materials , as well as to control government contracts with private producers . Baruch used lucrative contracts with guaranteed profits to encourage several private firms to shift their production over to wartime materials . For those firms that refused to cooperate , Baruch ’ s government control over raw materials provided him with the necessary leverage to convince them to join the war effort , willingly or not . With the size of the army growing , the U . S . government next needed to ensure that there were adequate supplies — in particular food and fuel — for both the soldiers and the home front . Concerns over shortages led to the passage of the Lever Food and Fuel Control Act , which empowered the president to control the production , distribution , and price of all food products during the war effort . Using this law , Wilson created both a Fuel Administration and a Food Administration . <hl> The Fuel Administration , run by Harry Garfield , created the concept of “ fuel holidays , ” encouraging civilian Americans to do their part for the war effort by rationing fuel on certain days . <hl> Garfield also implemented “ daylight saving time ” for the first time in American history , shifting the clocks to allow more productive daylight hours . Herbert Hoover coordinated the Food Administration , and he too encouraged volunteer rationing by invoking patriotism . With the slogan “ food will win the war , ” Hoover encouraged “ Meatless Mondays , ” “ Wheatless Wednesdays , ” and other similar reductions , with the hope of rationing food for military use ( Figure 23.8 ) . <hl> To compose a fighting force , Congress passed the Selective Service Act in 1917 , which initially required all men aged twenty-one through thirty to register for the draft ( Figure 23.7 ) . <hl> <hl> In 1918 , the act was expanded to include all men between eighteen and forty-five . <hl> Through a campaign of patriotic appeals , as well as an administrative system that allowed men to register at their local draft boards rather than directly with the federal government , over ten million men registered for the draft on the very first day . By the war ’ s end , twenty-two million men had registered for the U . S . Army draft . Five million of these men were actually drafted , another 1.5 million volunteered , and over 500,000 additional men signed up for the navy or marines .
In all , two million men participated in combat operations overseas . Among the volunteers were also twenty thousand women , a quarter of whom went to France to serve as nurses or in clerical positions .", "hl_sentences": "Wilson also created the War Industries Board , run by Bernard Baruch , to ensure adequate military supplies . The Fuel Administration , run by Harry Garfield , created the concept of “ fuel holidays , ” encouraging civilian Americans to do their part for the war effort by rationing fuel on certain days . To compose a fighting force , Congress passed the Selective Service Act in 1917 , which initially required all men aged twenty-one through thirty to register for the draft ( Figure 23.7 ) . In 1918 , the act was expanded to include all men between eighteen and forty-five .", "question": { "cloze_format": "___ was not enacted in order to secure men and materials for the war effort.", "normal_format": "Which of the following was not enacted in order to secure men and materials for the war effort?", "question_choices": [ "the Food Administration", "the Selective Service Act", "the War Industries Board", "the Sedition Act" ], "question_id": "eip-idp232131680", "question_text": "Which of the following was not enacted in order to secure men and materials for the war effort?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Understandably , opposition to such repression began mounting . <hl> <hl> In 1917 , Roger Baldwin formed the National Civil Liberties Bureau — a forerunner to the American Civil Liberties Union , which was founded in 1920 — to challenge the government ’ s policies against wartime dissent and conscientious objection . <hl> In 1919 , the case of Schenck v . United States went to the U . S . Supreme Court to challenge the constitutionality of the Espionage and Sedition Acts . The case concerned Charles Schenck , a leader in the Socialist Party of Philadelphia , who had distributed fifteen thousand leaflets , encouraging young men to avoid conscription . The court ruled that during a time of war , the federal government was justified in passing such laws to quiet dissenters . The decision was unanimous , and in the court ’ s opinion , Justice Oliver Wendell Holmes wrote that such dissent presented a “ clear and present danger ” to the safety of the United States and the military , and was therefore justified . He further explained how the First Amendment right of free speech did not protect such dissent , in the same manner that a citizen could not be freely permitted to yell “ fire ! ” in a crowded theater , due to the danger it presented . Congress ultimately repealed most of the Espionage and Sedition Acts in 1921 , and several who were imprisoned for violation of those acts were then quickly released . But the Supreme Court ’ s deference to the federal government ’ s restrictions on civil liberties remained a volatile topic in future wars . 23.3 A New Home Front Learning Objectives By the end of this section , you will be able to : <hl> In addition to its propaganda campaign , the U . S . government also tried to secure broad support for the war effort with repressive legislation . <hl> The Trading with the Enemy Act of 1917 prohibited individual trade with an enemy nation and banned the use of the postal service for disseminating any literature deemed treasonous by the postmaster general . 
That same year , the Espionage Act prohibited giving aid to the enemy by spying , or espionage , as well as any public comments that opposed the American war effort . Under this act , the government could impose fines and imprisonment of up to twenty years . The Sedition Act , passed in 1918 , prohibited any criticism or disloyal language against the federal government and its policies , the U . S . Constitution , the military uniform , or the American flag . More than two thousand persons were charged with violating these laws , and many received prison sentences of up to twenty years . Immigrants faced deportation as punishment for their dissent . Not since the Alien and Sedition Acts of 1798 had the federal government so infringed on the freedom of speech of loyal American citizens . <hl> The Wilson administration created the Committee of Public Information under director George Creel , a former journalist , just days after the United States declared war on Germany . <hl> <hl> Creel employed artists , speakers , writers , and filmmakers to develop a propaganda machine . <hl> <hl> The goal was to encourage all Americans to make sacrifices during the war and , equally importantly , to hate all things German ( Figure 23.10 ) . <hl> <hl> Through efforts such as the establishment of “ loyalty leagues ” in ethnic immigrant communities , Creel largely succeeded in molding an anti-German sentiment around the country . <hl> The result ? Some schools banned the teaching of the German language and some restaurants refused to serve frankfurters , sauerkraut , or hamburgers , instead serving “ liberty dogs with liberty cabbage ” and “ liberty sandwiches . ” Symphonies refused to perform music written by German composers . The hatred of Germans grew so widespread that , at one point , at a circus , audience members cheered when , in an act gone horribly wrong , a Russian bear mauled a German animal trainer ( whose ethnicity was more a part of the act than reality ) .", "hl_sentences": "Understandably , opposition to such repression began mounting . In 1917 , Roger Baldwin formed the National Civil Liberties Bureau — a forerunner to the American Civil Liberties Union , which was founded in 1920 — to challenge the government ’ s policies against wartime dissent and conscientious objection . In addition to its propaganda campaign , the U . S . government also tried to secure broad support for the war effort with repressive legislation . The Wilson administration created the Committee of Public Information under director George Creel , a former journalist , just days after the United States declared war on Germany . Creel employed artists , speakers , writers , and filmmakers to develop a propaganda machine . The goal was to encourage all Americans to make sacrifices during the war and , equally importantly , to hate all things German ( Figure 23.10 ) . Through efforts such as the establishment of “ loyalty leagues ” in ethnic immigrant communities , Creel largely succeeded in molding an anti-German sentiment around the country .", "question": { "cloze_format": "___ was not used to control American dissent against the war effort.", "normal_format": "Which of the following was not used to control American dissent against the war effort?", "question_choices": [ "propaganda campaigns", "repressive legislation", "National Civil Liberties Bureau", "loyalty leagues" ], "question_id": "eip-idm52994880", "question_text": "Which of the following was not used to control American dissent against the war effort?"
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "Wilson only briefly investigated the longstanding animosity between labor and management before ordering the creation of the National Labor War Board in April 1918 . Quick negotiations with Gompers and the AFL resulted in a promise : Organized labor would make a “ no-strike pledge ” for the duration of the war , in exchange for the U . S . government ’ s protection of workers ’ rights to organize and bargain collectively . The federal government kept its promise and promoted the adoption of an eight-hour workday ( which had first been adopted by government employees in 1868 ) , a living wage for all workers , and union membership . As a result , union membership skyrocketed during the war , from 2.6 million members in 1916 to 4.1 million in 1919 . In short , American workers received better working conditions and wages , as a result of the country ’ s participation in the war . However , their economic gains were limited . While prosperity overall went up during the war , it was enjoyed more by business owners and corporations than by the workers themselves . <hl> Even though wages increased , inflation offset most of the gains . <hl> <hl> Prices in the United States increased an average of 15 – 20 percent annually between 1917 and 1920 . <hl> <hl> Individual purchasing power actually declined during the war due to the substantially higher cost of living . <hl> Business profits , in contrast , increased by nearly a third during the war .", "hl_sentences": "Even though wages increased , inflation offset most of the gains . Prices in the United States increased an average of 15 – 20 percent annually between 1917 and 1920 . Individual purchasing power actually declined during the war due to the substantially higher cost of living .", "question": { "cloze_format": "The war did not increase overall prosperity ____.", "normal_format": "Why did the war not increase overall prosperity?", "question_choices": [ "because inflation made the cost of living higher", "because wages were lowered due to the war effort", "because workers had no bargaining power due to the “no-strike pledge”", "because women and African American men were paid less for the same work" ], "question_id": "fs-idp47548592", "question_text": "Why did the war not increase overall prosperity?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "the passage of the Volstead Act" }, "bloom": null, "hl_context": "Alice Paul , of the National Women ’ s Party , organized more radical tactics , bringing national attention to the issue of women ’ s suffrage by organizing protests outside the White House and , later , hunger strikes among arrested protesters . African American suffragists , who had been active in the movement for decades , faced discrimination from their White counterparts . Some White leaders justified this treatment based on the concern that promoting Black women would erode public support . But overt racism played a significant role , as well . During the suffrage parade in 1913 , Black members were told to march at the rear of the line . Ida B . Wells-Barnett , a prominent voice for equality , first asked her local delegation to oppose this segregation ; they refused . Not to be dismissed , Wells-Barnett waited in the crowd until the Illinois delegation passed by , then stepped onto the parade route and took her place among them . 
<hl> By the end of the war , the abusive treatment of suffragist hunger-strikers in prison , women ’ s important contribution to the war effort , and the arguments of his suffragist daughter Jessie Woodrow Wilson Sayre moved President Wilson to understand women ’ s right to vote as an ethical mandate for a true democracy . <hl> <hl> He began urging congressmen and senators to adopt the legislation . <hl> <hl> The amendment finally passed in June 1919 , and the states ratified it by August 1920 . <hl> <hl> Specifically , the Nineteenth Amendment prohibited all efforts to deny the right to vote on the basis of sex . <hl> It took effect in time for American women to vote in the presidential election of 1920 . 23.4 From War to Peace Learning Objectives By the end of this section , you will be able to :", "hl_sentences": "By the end of the war , the abusive treatment of suffragist hunger-strikers in prison , women ’ s important contribution to the war effort , and the arguments of his suffragist daughter Jessie Woodrow Wilson Sayre moved President Wilson to understand women ’ s right to vote as an ethical mandate for a true democracy . He began urging congressmen and senators to adopt the legislation . The amendment finally passed in June 1919 , and the states ratified it by August 1920 . Specifically , the Nineteenth Amendment prohibited all efforts to deny the right to vote on the basis of sex .", "question": { "cloze_format": "___ did not influence the eventual passage of the Nineteenth Amendment.", "normal_format": "Which of the following did not influence the eventual passage of the Nineteenth Amendment?", "question_choices": [ "women’s contributions to the war effort", "the dramatic tactics and harsh treatment of radical suffragists", "the passage of the Volstead Act", "the arguments of President Wilson’s daughter" ], "question_id": "fs-idp87420528", "question_text": "Which of the following did not influence the eventual passage of the Nineteenth Amendment?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "the agreement that all nations in the League of Nations would be rendered equal" }, "bloom": null, "hl_context": "The sole piece of the original Fourteen Points that Wilson successfully fought to keep intact was the creation of a League of Nations . In a covenant agreed to at the conference , all member nations in the League would agree to defend all other member nations against military threats . <hl> Known as Article X , this agreement would basically render each nation equal in terms of power , as no member nation would be able to use its military might against a weaker member nation . <hl> Ironically , this article would prove to be the undoing of Wilson ’ s dream of a new world order .", "hl_sentences": "Known as Article X , this agreement would basically render each nation equal in terms of power , as no member nation would be able to use its military might against a weaker member nation .", "question": { "cloze_format": "Article X in the Treaty of Versailles was ____.", "normal_format": "What was Article X in the Treaty of Versailles?", "question_choices": [ "the “war guilt clause” that France required", "the agreement that all nations in the League of Nations would be rendered equal", "the Allies’ division of Germany’s holdings in Asia", "the refusal to allow Bolshevik Russia membership in the League of Nations" ], "question_id": "fs-idp34041472", "question_text": "What was Article X in the Treaty of Versailles?"
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "In the end , the Treaty of Versailles that officially concluded World War I resembled little of Wilson ’ s original Fourteen Points . The Japanese , French , and British succeeded in carving up many of Germany ’ s colonial holdings in Africa and Asia . The dissolution of the Ottoman Empire created new nations under the quasi-colonial rule of France and Great Britain , such as Iraq and Palestine . <hl> France gained much of the disputed territory along their border with Germany , as well as passage of a “ war guilt clause ” that demanded Germany take public responsibility for starting and prosecuting the war that led to so much death and destruction . <hl> <hl> Great Britain led the charge that resulted in Germany agreeing to pay reparations in excess of $ 33 billion to the Allies . <hl> As for Bolshevik Russia , Wilson had agreed to send American troops to their northern region to protect Allied supplies and holdings there , while also participating in an economic blockade designed to undermine Lenin ’ s power . This move would ultimately have the opposite effect of galvanizing popular support for the Bolsheviks .", "hl_sentences": "France gained much of the disputed territory along their border with Germany , as well as passage of a “ war guilt clause ” that demanded Germany take public responsibility for starting and prosecuting the war that led to so much death and destruction . Great Britain led the charge that resulted in Germany agreeing to pay reparations in excess of $ 33 billion to the Allies .", "question": { "cloze_format": "___ was not includedd in the Treaty of Versailles.", "normal_format": "Which of the following was not included in the Treaty of Versailles?", "question_choices": [ "extensive German reparations to be paid to the Allies", "a curtailment of German immigration to Allied nations", "France’s acquisition of disputed territory along the French-German border", "a mandate for Germany to accept responsibility for the war publicly" ], "question_id": "fs-idp48990656", "question_text": "Which of the following was not included in the Treaty of Versailles?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> Another element that greatly influenced the challenges of immediate postwar life was economic upheaval . <hl> <hl> As discussed above , wartime production had led to steady inflation ; the rising cost of living meant that few Americans could comfortably afford to live off their wages . <hl> When the government ’ s wartime control over the economy ended , businesses slowly recalibrated from the wartime production of guns and ships to the peacetime production of toasters and cars . Public demand quickly outpaced the slow production , leading to notable shortages of domestic goods . As a result , inflation skyrocketed in 1919 . By the end of the year , the cost of living in the United States was nearly double what it had been in 1916 . Workers , facing a shortage in wages to buy more expensive goods , and no longer bound by the no-strike pledge they made for the National War Labor Board , initiated a series of strikes for better hours and wages . In 1919 alone , more than four million workers participated in a total of nearly three thousand strikes : both records within all of American history . 
As world leaders debated the terms of the peace , the American public faced its own challenges at the conclusion of the First World War . Several unrelated factors intersected to create a chaotic and difficult time , just as massive numbers of troops rapidly demobilized and came home . <hl> Racial tensions , a terrifying flu epidemic , anticommunist hysteria , and economic uncertainty all combined to leave many Americans wondering what , exactly , they had won in the war . <hl> <hl> Adding to these problems was the absence of President Wilson , who remained in Paris for six months , leaving the country leaderless . <hl> The result of these factors was that , rather than a celebratory transition from wartime to peace and prosperity , and ultimately the Jazz Age of the 1920s , 1919 was a tumultuous year that threatened to tear the country apart .", "hl_sentences": "Another element that greatly influenced the challenges of immediate postwar life was economic upheaval . As discussed above , wartime production had led to steady inflation ; the rising cost of living meant that few Americans could comfortably afford to live off their wages . Racial tensions , a terrifying flu epidemic , anticommunist hysteria , and economic uncertainty all combined to leave many Americans wondering what , exactly , they had won in the war . Adding to these problems was the absence of President Wilson , who remained in Paris for six months , leaving the country leaderless .", "question": { "cloze_format": "___ was not a destabilizing factor immediately following the end of the war.", "normal_format": "Which of the following was not a destabilizing factor immediately following the end of the war?", "question_choices": [ "a flu pandemic", "a women’s liberation movement", "high inflation and economic uncertainty", "political paranoia" ], "question_id": "fs-idm74601120", "question_text": "Which of the following was not a destabilizing factor immediately following the end of the war?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "the murder of a Black boy who swam too close to a White beach" }, "bloom": null, "hl_context": "In addition to labor clashes , race riots shattered the peace at the home front . The race riots that had begun during the Great Migration only grew in postwar America . White soldiers returned home to find Black workers in their former jobs and neighborhoods , and were committed to restoring their position of White supremacy . Black soldiers returned home with a renewed sense of justice and strength , and were determined to assert their rights as men and as citizens . Meanwhile , southern lynchings continued to escalate , with White mobs burning African Americans at the stake . The mobs often used false accusations of indecency and assault on White women to justify the murders . During the “ Red Summer ” of 1919 , northern cities recorded twenty-five bloody race riots that killed over 250 people . <hl> Among these was the Chicago Race Riot of 1919 , where a White mob stoned a young Black boy to death because he swam too close to the “ White beach ” on Lake Michigan . <hl> Police at the scene did not arrest the perpetrator who threw the rock .
This crime prompted a week-long riot that left twenty-three Blacks and fifteen Whites dead , as well as millions of dollars ’ worth of damage to the city ( Figure 23.20 ) .", "hl_sentences": "Among these was the Chicago Race Riot of 1919 , where a White mob stoned a young Black boy to death because he swam too close to the “ White beach ” on Lake Michigan .", "question": { "cloze_format": "___ was the inciting event that led to the Chicago Race Riot of 1919.", "normal_format": "What was the inciting event that led to the Chicago Race Riot of 1919?", "question_choices": [ "a strike at a local factory", "a protest march of Black activists", "the murder of a Black boy who swam too close to a White beach", "the assault of a White man on a streetcar by Black youths" ], "question_id": "fs-idp170718528", "question_text": "What was the inciting event that led to the Chicago Race Riot of 1919?" }, "references_are_paraphrase": 0 } ]
23
23.1 American Isolationism and the European Origins of War Learning Objectives By the end of this section, you will be able to: Explain Woodrow Wilson’s foreign policy and the difficulties of maintaining American neutrality at the outset of World War I Identify the key factors that led to the U.S. declaration of war on Germany in April 1917 Unlike his immediate predecessors, President Woodrow Wilson had planned to shrink the role of the United States in foreign affairs. He believed that the nation needed to intervene in international events only when there was a moral imperative to do so. But as Europe’s political situation grew dire, it became increasingly difficult for Wilson to insist that the conflict growing overseas was not America’s responsibility. Germany’s war tactics struck most observers as morally reprehensible, while also putting American free trade with the Entente at risk. Despite campaign promises and diplomatic efforts, Wilson could only postpone American involvement in the war. WOODROW WILSON’S EARLY EFFORTS AT FOREIGN POLICY When Woodrow Wilson took over the White House in March 1913, he promised a less expansionist approach to American foreign policy than Theodore Roosevelt and William Howard Taft had pursued. Wilson did share the commonly held view that American values were superior to those of the rest of the world, that democracy was the best system to promote peace and stability, and that the United States should continue to actively pursue economic markets abroad. But he proposed an idealistic foreign policy based on morality, rather than American self-interest, and felt that American interference in another nation’s affairs should occur only when the circumstances rose to the level of a moral imperative. Wilson appointed former presidential candidate William Jennings Bryan, a noted anti-imperialist and proponent of world peace, as his Secretary of State. Bryan undertook his new assignment with great vigor, encouraging nations around the world to sign “cooling off treaties,” under which they agreed to resolve international disputes through talks, not war, and to submit any grievances to an international commission. Bryan also negotiated friendly relations with Colombia, including a $25 million apology for Roosevelt’s actions during the Panamanian Revolution, and worked to establish effective self-government in the Philippines in preparation for the eventual American withdrawal. Even with Bryan’s support, however, Wilson found that it was much harder than he anticipated to keep the United States out of world affairs ( Figure 23.3 ). In reality, the United States was interventionist in areas where its interests—direct or indirect—were threatened. Wilson’s greatest break from his predecessors occurred in Asia, where he abandoned Taft’s “dollar diplomacy,” a foreign policy that essentially used the power of U.S. economic dominance as a threat to gain favorable terms. Instead, Wilson revived diplomatic efforts to keep Japanese interference there at a minimum. But as World War I, also known as the Great War, began to unfold, and European nations largely abandoned their imperialistic interests in order to marshal their forces for self-defense, Japan demanded that China submit to a Japanese protectorate over the entire nation.
In 1917, William Jennings Bryan’s successor as Secretary of State, Robert Lansing, signed the Lansing-Ishii Agreement, which recognized Japanese control over the Manchurian region of China in exchange for Japan’s promise not to exploit the war to gain a greater foothold in the rest of the country. Furthering his goal of reducing overseas interventions, Wilson had promised not to rely on the Roosevelt Corollary, Theodore Roosevelt’s explicit policy that the United States could involve itself in Latin American politics whenever it felt that the countries in the Western Hemisphere needed policing. Once president, however, Wilson again found that it was more difficult to avoid American interventionism in practice than in rhetoric. Indeed, Wilson intervened more in Western Hemisphere affairs than either Taft or Roosevelt. In 1915, when a revolution in Haiti resulted in the murder of the Haitian president and threatened the safety of New York banking interests in the country, Wilson sent over three hundred U.S. Marines to establish order. Subsequently, the United States assumed control over the island’s foreign policy as well as its financial administration. One year later, in 1916, Wilson again sent marines to Hispaniola, this time to the Dominican Republic, to ensure prompt payment of a debt that nation owed. In 1917, Wilson sent troops to Cuba to protect American-owned sugar plantations from attacks by Cuban rebels; this time, the troops remained for four years. Wilson’s most noted foreign policy foray prior to World War I focused on Mexico, where rebel general Victoriano Huerta had seized control from a previous rebel government just weeks before Wilson’s inauguration. Wilson refused to recognize Huerta’s government, instead choosing to make an example of Mexico by demanding that it hold democratic elections and establish laws based on the moral principles he espoused. Officially, Wilson supported Venustiano Carranza, who opposed Huerta’s military control of the country. When American intelligence learned of a German ship allegedly preparing to deliver weapons to Huerta’s forces, Wilson ordered the U.S. Navy to land forces at Veracruz to stop the shipment. On April 22, 1914, a fight erupted between the U.S. Navy and Mexican troops, resulting in nearly 150 deaths, nineteen of them American. Although Carranza’s faction managed to overthrow Huerta in the summer of 1914, most Mexicans—including Carranza—had come to resent American intervention in their affairs. Carranza refused to work with Wilson and the U.S. government, and instead threatened to defend Mexico’s mineral rights against all American oil companies established there. Wilson then turned to support rebel forces that opposed Carranza, most notably Pancho Villa ( Figure 23.4 ). However, Villa lacked the strength in numbers or weapons to overtake Carranza; in 1915, Wilson reluctantly authorized official U.S. recognition of Carranza’s government. As a postscript, an irate Pancho Villa turned against Wilson, and on March 9, 1916, led a fifteen-hundred-man force across the border into New Mexico, where they attacked and burned the town of Columbus. Over one hundred people died in the attack, seventeen of them American. Wilson responded by sending General John Pershing into Mexico to capture Villa and return him to the United States for trial. With over eleven thousand troops at his disposal, Pershing marched three hundred miles into Mexico before an angry Carranza ordered U.S. troops to withdraw from the nation.
Although reelected in 1916, Wilson reluctantly ordered the withdrawal of U.S. troops from Mexico in 1917, avoiding war with Mexico and enabling preparations for American intervention in Europe. Again, as in China, Wilson’s attempt to impose a moral foreign policy had failed in light of economic and political realities. WAR ERUPTS IN EUROPE When a Serbian nationalist murdered the Archduke Franz Ferdinand of the Austro-Hungarian Empire on June 28, 1914, the underlying forces that led to World War I had already long been in motion and seemed, at first, to have little to do with the United States. At the time, the events that pushed Europe from ongoing tensions into war seemed very far away from U.S. interests. For nearly a century, nations had negotiated a series of mutual defense alliance treaties to secure themselves against their imperialistic rivals. Among the largest European powers, the Triple Entente included an alliance of France, Great Britain, and Russia. Opposite them stood the Central powers, principally Germany and Austria-Hungary, which the Ottoman Empire joined once the war began; Germany and Austria-Hungary had earlier formed the Triple Alliance with Italy, although Italy initially remained neutral. A series of “side treaties” likewise entangled the larger European powers to protect several smaller ones should war break out. At the same time that European nations committed each other to defense pacts, they jockeyed for power over empires overseas and invested heavily in large, modern militaries. Dreams of empire and military supremacy fueled an era of nationalism that was particularly pronounced in the newer nations of Germany and Italy, but also provoked separatist movements among Europeans. The Irish rose up in rebellion against British rule, for example. And in Bosnia’s capital of Sarajevo, Gavrilo Princip and his accomplices assassinated the Austro-Hungarian archduke in their fight for a pan-Slavic nation. Thus, when Serbia failed to accede to Austro-Hungarian demands in the wake of the archduke’s murder, Austria-Hungary declared war on Serbia with the confidence that it had the backing of Germany. This action, in turn, brought Russia into the conflict, due to a treaty in which it had agreed to defend Serbia. Germany followed suit by declaring war on Russia, fearing that Russia and France would seize this opportunity to move on Germany if it did not take the offensive. The eventual German invasion of Belgium drew Great Britain into the war, followed by the attack of the Ottoman Empire on Russia. By the end of August 1914, it seemed as if Europe had dragged the entire world into war. The Great War was unlike any war that came before it. Whereas in previous European conflicts, troops typically faced each other on open battlefields, World War I saw new military technologies that turned war into a conflict of prolonged trench warfare. Both sides used new artillery, tanks, airplanes, machine guns, barbed wire, and, eventually, poison gas: weapons that strengthened defenses and turned each military offense into barbarous sacrifices of thousands of lives with minimal territorial advances in return. By the end of the war, the total military death toll was ten million, with another million civilian deaths attributed to military action, and another six million civilian deaths caused by famine, disease, or other related factors. One terrifying new piece of technological warfare was the German unterseeboot—an “undersea boat” or U-boat.
By early 1915, in an effort to break the British naval blockade of Germany and turn the tide of the war, the Germans dispatched a fleet of these submarines around Great Britain to attack both merchant and military ships. The U-boats acted in direct violation of international law, attacking without warning from beneath the water instead of surfacing and permitting the surrender of civilians or crew. By 1918, German U-boats had sunk nearly five thousand vessels. Of greatest historical note was the attack on the British passenger ship, RMS Lusitania, on its way from New York to Liverpool on May 7, 1915. The German Embassy in the United States had announced that this ship would be subject to attack for its cargo of ammunition: an allegation that later proved accurate. Nonetheless, almost 1,200 civilians died in the attack, including 128 Americans. The attack horrified the world, galvanizing support in England and beyond for the war ( Figure 23.5 ). This attack, more than any other event, would test President Wilson’s desire to stay out of what had been a largely European conflict. THE CHALLENGE OF NEUTRALITY Despite the loss of American lives on the Lusitania, President Wilson stuck to his path of neutrality in Europe’s escalating war: in part out of moral principle, in part as a matter of practical necessity, and in part for political reasons. Few Americans wished to participate in the devastating battles that ravaged Europe, and Wilson did not want to risk losing his reelection by ordering an unpopular military intervention. Wilson’s “neutrality” did not mean isolation from all warring factions, but rather open markets for the United States and continued commercial ties with all belligerents. For Wilson, the conflict did not reach the threshold of a moral imperative for U.S. involvement; it was largely a European affair involving numerous countries with whom the United States wished to maintain working relations. In his message to Congress in 1914, the president noted that “Every man who really loves America will act and speak in the true spirit of neutrality, which is the spirit of impartiality and fairness and friendliness to all concerned.” Wilson understood that he was already looking at a difficult reelection bid. He had only won the 1912 election with 42 percent of the popular vote, and likely would not have been elected at all had Roosevelt not come back as a third-party candidate to run against his former protégé Taft. Wilson felt pressure from all different political constituents to take a position on the war, yet he knew that elections were seldom won with a campaign promise of “If elected, I will send your sons to war!” Facing pressure from some businessmen and other government officials who felt that the protection of America’s best interests required a stronger position in defense of the Allied forces, Wilson agreed to a “preparedness campaign” in the year prior to the election. This campaign included the passage of the National Defense Act of 1916, which more than doubled the size of the army to nearly 225,000, and the Naval Appropriations Act of 1916, which called for the expansion of the U.S. fleet, including battleships, destroyers, submarines, and other ships. As the 1916 election approached, the Republican Party hoped to capitalize on the fact that Wilson was making promises that he would not be able to keep. They nominated Charles Evans Hughes, a former governor of New York and sitting U.S. Supreme Court justice at the time of his nomination.
Hughes focused his campaign on what he considered Wilson’s foreign policy failures, but even as he did so, he himself tried to walk a fine line between neutrality and belligerence, depending on his audience. In contrast, Wilson and the Democrats capitalized on neutrality and campaigned under the slogan “Wilson—he kept us out of war.” The election itself remained too close to call on election night. Only when a tight race in California was decided two days later could Wilson claim victory in his reelection bid, again with less than 50 percent of the popular vote. Despite his victory based upon a policy of neutrality, Wilson would find true neutrality a difficult challenge. Several different factors pushed Wilson, however reluctantly, toward the inevitability of American involvement. A key factor driving U.S. engagement was economics. Great Britain was the country’s most important trading partner, and the Allies as a whole relied heavily on American imports from the earliest days of the war forward. Specifically, the value of all exports to the Allies quadrupled from $750 million to $3 billion in the first two years of the war. At the same time, the British naval blockade meant that exports to Germany all but ended, dropping from $350 million to $30 million. Likewise, numerous private banks in the United States made extensive loans—in excess of $500 million—to England. J. P. Morgan’s banking interests were among the largest lenders, due to his family’s connection to England. Another key factor complicating the decision to go to war was the deep ethnic division between native-born Americans and more recent immigrants. For those of Anglo-Saxon descent, the nation’s historic and ongoing relationship with Great Britain was paramount, but many Irish-Americans resented British rule over their place of birth and opposed support for the world’s most expansive empire. Millions of Jewish immigrants had fled anti-Semitic pogroms in Tsarist Russia and would have supported any nation fighting that authoritarian state. German Americans saw their nation of origin as a victim of British and Russian aggression and a French desire to settle old scores, whereas emigrants from Austria-Hungary and the Ottoman Empire were mixed in their sympathies for the old monarchies or ethnic communities that these empires suppressed. For interventionists, this lack of support for Great Britain and its allies among recent immigrants only strengthened their conviction. Germany’s use of submarine warfare also played a role in challenging U.S. neutrality. After the sinking of the Lusitania and the subsequent August 30 sinking of another British liner, the Arabic, Germany had promised to restrict its use of submarine warfare. Specifically, it promised to surface and visually identify any ship before firing, as well as to permit civilians to evacuate targeted ships. Instead, in February 1917, Germany intensified its use of submarines in an effort to end the war quickly before Great Britain’s naval blockade starved it out of food and supplies. The German high command wanted to continue unrestricted warfare on all Atlantic traffic, including unarmed American freighters, in order to cripple the British economy and secure a quick and decisive victory. Their goal: to bring an end to the war before the United States could intervene and tip the balance in this grueling war of attrition.
In February 1917, a German U-boat sank the British liner Laconia, killing two American passengers, and, in late March, quickly sank four more American ships. These attacks increased pressure on Wilson from all sides, as government officials, the general public, and both Democrats and Republicans urged him to declare war. The final element that led to American involvement in World War I was the so-called Zimmermann telegram. British intelligence intercepted and decoded a top-secret telegram from German foreign minister Arthur Zimmermann to the German ambassador to Mexico, instructing the latter to invite Mexico to join the war effort on the German side, should the United States declare war on Germany. It went on to encourage Mexico to invade the United States if such a declaration came to pass, as Mexico’s invasion would create a diversion and permit Germany a clear path to victory. In exchange, Zimmermann offered to return to Mexico land that was previously lost to the United States in the Mexican-American War, including Arizona, New Mexico, and Texas ( Figure 23.6 ). The likelihood that Mexico, weakened and torn by its own revolution and civil war, could wage war against the United States and recover territory lost in the Mexican-American war with Germany’s help was remote at best. But combined with Germany’s unrestricted use of submarine warfare and the sinking of American ships, the Zimmermann telegram made a powerful argument for a declaration of war. The outbreak of the Russian Revolution in February and abdication of Tsar Nicholas II in March raised the prospect of democracy in the Eurasian empire and removed an important moral objection to entering the war on the side of the Allies. On April 2, 1917, Wilson asked Congress to declare war on Germany. Congress debated for four days, and several senators and congressmen expressed their concerns that the war was being fought over U.S. economic interests more than strategic need or democratic ideals. When Congress voted on April 6, fifty-six members voted against the resolution, including the first woman ever elected to Congress, Representative Jeannette Rankin. This was the largest “no” vote against a war resolution in American history. Defining American Wilson’s Peace without Victory Speech Wilson’s last-ditch effort to avoid bringing the United States into World War I is captured in a speech he gave before the U.S. Senate on January 22, 1917. This speech, known as the “Peace without Victory” speech, exhorted the country to be patient, as the countries involved in the war were nearing a peace. Wilson stated: It must be a peace without victory. It is not pleasant to say this. I beg that I may be permitted to put my own interpretation upon it and that it may be understood that no other interpretation was in my thought. I am seeking only to face realities and to face them without soft concealments. Victory would mean peace forced upon the loser, a victor’s terms imposed upon the vanquished. It would be accepted in humiliation, under duress, at an intolerable sacrifice, and would leave a sting, a resentment, a bitter memory upon which terms of peace would rest, not permanently, but only as upon quicksand. Only a peace between equals can last, only a peace the very principle of which is equality and a common participation in a common benefit. Not surprisingly, this speech was not well received by either side fighting the war.
England resisted being put on the same moral ground as Germany, and France, battered by years of warfare, had no desire to end the war without victory and its spoils. Still, the speech as a whole illustrates Wilson’s idealistic, if failed, attempt to create a more benign and high-minded foreign policy role for the United States. Unfortunately, the Zimmermann telegram and the sinking of the American merchant ships proved too provocative for Wilson to remain neutral. Little more than two months after this speech, he asked Congress to declare war on Germany.

23.2 The United States Prepares for War

Learning Objectives
By the end of this section, you will be able to:
Identify the steps taken by the U.S. government to secure enough men, money, food, and supplies to prosecute World War I
Explain how the U.S. government attempted to sway popular opinion in favor of the war effort

Wilson knew that the key to America’s success in war lay largely in its preparation. With both the Allied and enemy forces entrenched in battles of attrition, and supplies running low on both sides, the United States needed, first and foremost, to secure enough men, money, food, and supplies to be successful. The country needed first to supply the basic requirements to fight a war, and then to ensure military leadership, public support, and strategic planning.

THE INGREDIENTS OF WAR

The First World War was, in many ways, a war of attrition, and the United States needed a large army to help the Allies. In 1917, when the United States declared war on Germany, the U.S. Army ranked seventh in the world in terms of size, with an estimated 200,000 enlisted men. In contrast, at the outset of the war in 1914, the German force included 4.5 million men, and the country ultimately mobilized over eleven million soldiers over the course of the entire war.

To compose a fighting force, Congress passed the Selective Service Act in 1917, which initially required all men aged twenty-one through thirty to register for the draft (Figure 23.7). In 1918, the act was expanded to include all men between eighteen and forty-five. Through a campaign of patriotic appeals, as well as an administrative system that allowed men to register at their local draft boards rather than directly with the federal government, over ten million men registered for the draft on the very first day. By the war’s end, twenty-two million men had registered for the U.S. Army draft. Five million of these men were actually drafted, another 1.5 million volunteered, and over 500,000 additional men signed up for the navy or marines. In all, two million men participated in combat operations overseas. Among the volunteers were also twenty thousand women, a quarter of whom went to France to serve as nurses or in clerical positions.

But the draft also provoked opposition, and almost 350,000 eligible Americans refused to register for military service. About 65,000 of these defied the conscription law as conscientious objectors, mostly on the grounds of their deeply held religious beliefs. Such opposition was not without risks, and whereas most objectors were never prosecuted, those who were found guilty at military hearings received stiff punishments: Courts handed down over two hundred prison sentences of twenty years or more, and seventeen death sentences.

With the size of the army growing, the U.S. government next needed to ensure that there were adequate supplies—in particular food and fuel—for both the soldiers and the home front.
Concerns over shortages led to the passage of the Lever Food and Fuel Control Act, which empowered the president to control the production, distribution, and price of all food and fuel products during the war effort. Using this law, Wilson created both a Fuel Administration and a Food Administration. The Fuel Administration, run by Harry Garfield, created the concept of “fuel holidays,” encouraging civilian Americans to do their part for the war effort by rationing fuel on certain days. Garfield also implemented “daylight saving time” for the first time in American history, shifting the clocks to allow more productive daylight hours. Herbert Hoover coordinated the Food Administration, and he too encouraged voluntary rationing by invoking patriotism. With the slogan “food will win the war,” Hoover encouraged “Meatless Mondays,” “Wheatless Wednesdays,” and other similar reductions, with the hope of conserving food for military use (Figure 23.8).

Wilson also created the War Industries Board, run by Bernard Baruch, to ensure adequate military supplies. The War Industries Board had the power to direct shipments of raw materials, as well as to control government contracts with private producers. Baruch used lucrative contracts with guaranteed profits to encourage several private firms to shift their production over to wartime materials. For those firms that refused to cooperate, Baruch’s government control over raw materials provided him with the necessary leverage to convince them to join the war effort, willingly or not.

As a way to move all the personnel and supplies around the country efficiently, Congress created the U.S. Railroad Administration. Logistical problems had led trains bound for the East Coast to get stranded as far away as Chicago. To prevent these problems, Wilson appointed William McAdoo, the Secretary of the Treasury, to lead this agency, which had extraordinary war powers to control the entire railroad industry, including traffic, terminals, rates, and wages.

Almost all the practical steps were in place for the United States to fight a successful war. The only step remaining was to figure out how to pay for it. The war effort was costly—with an eventual price tag in excess of $32 billion by 1920—and the government needed to finance it. The Liberty Loan Act allowed the federal government to sell liberty bonds to the American public, exhorting citizens to “do their part” to help the war effort and bring the troops home. The government ultimately raised $23 billion through liberty bonds. Additional monies came from the government’s use of federal income tax revenue, which was made possible by the passage of the Sixteenth Amendment to the U.S. Constitution in 1913. With the financing, transportation, equipment, food, and men in place, the United States was ready to enter the war. The next piece the country needed was public support.

CONTROLLING DISSENT

Although all the physical pieces required to fight a war fell quickly into place, the question of national unity was another concern. The American public was strongly divided on the subject of entering the war. While many felt it was the only choice, others protested strongly, feeling it was not America’s war to fight. Wilson needed to ensure that a nation of diverse immigrants, with ties to both sides of the conflict, thought of themselves as Americans first, and of their home country’s nationality second.
To do this, he initiated a propaganda campaign, pushing the “America First” message, which sought to convince Americans that they should do everything in their power to ensure an American victory, even if that meant silencing their own criticisms.

Americana
America First, America Above All
At the outset of the war, one of the greatest challenges for Wilson was the lack of national unity. The country, after all, was made up of immigrants, some recently arrived and some well established, but all with ties to their home countries. These home countries included Germany and Russia, as well as Great Britain and France. In an effort to ensure that Americans eventually supported the war, the government’s pro-war propaganda campaign focused on driving home that message. The posters below, shown in both English and Yiddish, prompted immigrants to remember what they owed to America (Figure 23.9).

Regardless of how patriotic immigrants might feel and act, however, an anti-German xenophobia overtook the country. German Americans were persecuted and their businesses shunned, whether or not they voiced any objection to the war. Some cities changed the names of streets and buildings that sounded German. Libraries withdrew German-language books from the shelves, and German Americans began to avoid speaking German for fear of reprisal. For some immigrants, the war was fought on two fronts: on the battlefields of France and again at home.

The Wilson administration created the Committee on Public Information under director George Creel, a former journalist, just days after the United States declared war on Germany. Creel employed artists, speakers, writers, and filmmakers to develop a propaganda machine. The goal was to encourage all Americans to make sacrifices during the war and, equally importantly, to hate all things German (Figure 23.10). Through efforts such as the establishment of “loyalty leagues” in ethnic immigrant communities, Creel largely succeeded in molding an anti-German sentiment around the country. The result? Some schools banned the teaching of the German language and some restaurants refused to serve frankfurters, sauerkraut, or hamburgers, instead serving “liberty dogs with liberty cabbage” and “liberty sandwiches.” Symphonies refused to perform music written by German composers. The hatred of Germans grew so widespread that, at one point, at a circus, audience members cheered when, in an act gone horribly wrong, a Russian bear mauled a German animal trainer (whose ethnicity was more a part of the act than reality).

In addition to its propaganda campaign, the U.S. government also tried to secure broad support for the war effort with repressive legislation. The Trading with the Enemy Act of 1917 prohibited individual trade with an enemy nation and banned the use of the postal service for disseminating any literature deemed treasonous by the postmaster general. That same year, the Espionage Act prohibited giving aid to the enemy by spying, or espionage, as well as any public comments that opposed the American war effort. Under this act, the government could impose fines and imprisonment of up to twenty years. The Sedition Act, passed in 1918, prohibited any criticism or disloyal language against the federal government and its policies, the U.S. Constitution, the military uniform, or the American flag. More than two thousand persons were charged with violating these laws, and many received prison sentences of up to twenty years.
Immigrants faced deportation as punishment for their dissent. Not since the Alien and Sedition Acts of 1798 had the federal government so infringed on the freedom of speech of loyal American citizens. In the months and years after these laws came into being, over one thousand people were convicted of violating them, primarily under the Espionage and Sedition Acts. More importantly, many more war critics were frightened into silence.

One notable prosecution was that of Socialist Party leader Eugene Debs, who received a ten-year prison sentence for encouraging draft resistance, which, under the Espionage Act, was considered “giving aid to the enemy.” Prominent Socialist Victor Berger was also prosecuted under the Espionage Act and subsequently twice denied his seat in Congress, to which he had been properly elected by the citizens of Milwaukee, Wisconsin. One of the more outrageous prosecutions was that of a film producer who released a film about the American Revolution: Prosecutors found the film seditious, and a court sentenced the producer to ten years in prison for portraying the British, who were now American allies, as the obedient soldiers of a monarchical empire.

State and local officials, as well as private citizens, aided the government’s efforts to investigate, identify, and crush subversion. Over 180,000 communities created local “councils of defense,” which encouraged members to report any antiwar comments to local authorities. This mandate encouraged spying on neighbors, teachers, local newspapers, and other individuals. In addition, a larger national organization—the American Protective League—received support from the Department of Justice to spy on prominent dissenters, as well as open their mail and physically assault draft evaders.

Understandably, opposition to such repression began mounting. In 1917, Roger Baldwin formed the National Civil Liberties Bureau—a forerunner to the American Civil Liberties Union, which was founded in 1920—to challenge the government’s policies against wartime dissent and conscientious objection. In 1919, the case of Schenck v. United States went to the U.S. Supreme Court to challenge the constitutionality of the Espionage and Sedition Acts. The case concerned Charles Schenck, a leader in the Socialist Party of Philadelphia, who had distributed fifteen thousand leaflets encouraging young men to avoid conscription. The court ruled that during a time of war, the federal government was justified in passing such laws to quiet dissenters. The decision was unanimous, and in the court’s opinion, Justice Oliver Wendell Holmes wrote that such dissent presented a “clear and present danger” to the safety of the United States and the military, and that the restrictions were therefore justified. He further explained that the First Amendment right of free speech did not protect such dissent, in the same manner that a citizen could not be freely permitted to yell “fire!” in a crowded theater, due to the danger it presented. Congress ultimately repealed the Sedition Act in 1921 (the Espionage Act remained on the books), and several of those imprisoned under these laws were then quickly released. But the Supreme Court’s deference to the federal government’s restrictions on civil liberties remained a volatile topic in future wars.
23.3 A New Home Front

Learning Objectives
By the end of this section, you will be able to:
Explain how the status of organized labor changed during the First World War
Describe how the lives of women and African Americans changed as a result of American participation in World War I
Explain how America’s participation in World War I allowed for the passage of prohibition and women’s suffrage

The lives of all Americans, whether they went abroad to fight or stayed on the home front, changed dramatically during the war. Restrictive laws censored dissent at home, and the armed forces demanded unconditional loyalty from millions of volunteers and conscripted soldiers. For organized labor, women, and African Americans in particular, the war brought changes to the prewar status quo. Some White women worked outside of the home for the first time, whereas others, like African American men, found that they were eligible for jobs that had previously been reserved for White men. African American women, too, were able to seek employment beyond the domestic servant jobs that had been their primary opportunity. These new options and freedoms were not easily erased after the war ended.

NEW OPPORTUNITIES BORN FROM WAR

After decades of limited involvement in the challenges between management and organized labor, the need for peaceful and productive industrial relations prompted the federal government during wartime to invite organized labor to the negotiating table. Samuel Gompers, head of the American Federation of Labor (AFL), sought to capitalize on these circumstances to better organize workers and secure for them better wages and working conditions. His efforts also solidified his own base of power. The increase in production that the war required exposed severe labor shortages in many states, a condition that was further exacerbated by the draft, which pulled millions of young men from the active labor force. Wilson only briefly investigated the longstanding animosity between labor and management before ordering the creation of the National War Labor Board in April 1918. Quick negotiations with Gompers and the AFL resulted in a promise: Organized labor would make a “no-strike pledge” for the duration of the war, in exchange for the U.S. government’s protection of workers’ rights to organize and bargain collectively. The federal government kept its promise and promoted the adoption of an eight-hour workday (which had first been adopted by government employees in 1868), a living wage for all workers, and union membership. As a result, union membership skyrocketed during the war, from 2.6 million members in 1916 to 4.1 million in 1919. In short, American workers received better working conditions and wages as a result of the country’s participation in the war.

However, workers’ economic gains were limited. While overall prosperity rose during the war, it was enjoyed more by business owners and corporations than by the workers themselves. Even though wages increased, inflation offset most of the gains. Prices in the United States increased an average of 15–20 percent annually between 1917 and 1920. Individual purchasing power actually declined during the war due to the substantially higher cost of living. Business profits, in contrast, increased by nearly a third during the war.

Women in Wartime
For women, the economic situation was complicated by the war, with the departure of wage-earning men and the higher cost of living pushing many toward less comfortable lives.
At the same time, however, wartime presented new opportunities for women in the workplace. More than one million women entered the workforce for the first time as a result of the war, while more than eight million working women found higher paying jobs, often in industry. Many women also found employment in what were typically considered male occupations, such as on the railroads (Figure 23.11), where the number of women tripled, and on assembly lines. After the war ended and men returned home to search for work, however, women were fired from their jobs and expected to return home to care for their families. Furthermore, even when they were doing men’s jobs, women were typically paid lower wages than male workers, and unions were ambivalent at best—and hostile at worst—to women workers. Even under these circumstances, wartime employment familiarized women with an alternative to a life of domesticity and dependency, making a life of employment, even a career, plausible for women. When, a generation later, World War II arrived, this trend would increase dramatically.

One notable group of women who exploited these new opportunities was the Women’s Land Army of America. First during World War I, then again in World War II, these women stepped up to run farms and other agricultural enterprises, as men left for the armed forces (Figure 23.11). Known as Farmerettes, some twenty thousand women—mostly college educated and from larger urban areas—served in this capacity. Their reasons for joining were manifold. For some, it was a way to serve their country during a time of war. Others hoped to capitalize on the efforts to further the fight for women’s suffrage.

Also of special note were the approximately thirty thousand American women who served in the military, as well as with a variety of humanitarian organizations, such as the Red Cross and YMCA, during the war. In addition to serving as military nurses (without rank), American women also served as telephone operators in France. Of this latter group, 230 women, known as “Hello Girls,” were bilingual and stationed in combat areas. Over eighteen thousand American women served as Red Cross nurses, providing much of the medical support available to American troops in France. Close to three hundred nurses died during service. Many of those who returned home continued to work in hospitals and home healthcare, helping wounded veterans heal both emotionally and physically from the scars of war.

African Americans in the Crusade for Democracy
African Americans also found that the war brought upheaval and opportunity. Blacks composed 13 percent of the enlisted military, with 350,000 men serving. Colonel Charles Young of the Tenth Cavalry served as the highest-ranking African American officer. Blacks served in segregated units and suffered from widespread racism in the military hierarchy, often serving in menial or support roles. Some troops saw combat, however, and were commended for serving with valor. The 369th Infantry, for example, known as the Harlem Hellfighters, served on the front lines in France for six months, longer than any other American unit. One hundred seventy-one men from that regiment received the Legion of Merit for meritorious service in combat. The regiment marched in a homecoming parade in New York City, was remembered in paintings (Figure 23.12), and was celebrated for bravery and leadership. The accolades given to them, however, in no way extended to the bulk of African Americans fighting in the war.
On the home front, African Americans, like American women, saw economic opportunities increase during the war. During the so-called Great Migration (discussed in a previous chapter), nearly 350,000 African Americans had fled the post-Civil War South for opportunities in northern urban areas. From 1910 to 1920, they moved north and found work in the steel, mining, shipbuilding, and automotive industries, among others. African American women also sought better employment opportunities beyond their traditional roles as domestic servants. By 1920, over 100,000 women had found work in diverse manufacturing industries, up from 70,000 in 1910.

Despite these opportunities, racism continued to be a major force in both the North and South. Worried that Black veterans would feel empowered to change the status quo of White supremacy, many White people took political, economic, and violent action against them. In a speech on the Senate floor in 1917, Mississippi Senator James K. Vardaman said, “Impress the negro with the fact that he is defending the flag, inflate his untutored soul with military airs, teach him that it is his duty to keep the emblem of the Nation flying triumphantly in the air—it is but a short step to the conclusion that his political rights must be respected.” Several municipalities passed residential codes designed to prohibit African Americans from settling in certain neighborhoods. Race riots also increased in frequency: In 1917 alone, there were race riots in twenty-five cities, including East Saint Louis, where thirty-nine Blacks were killed. In the South, White business and plantation owners feared that their cheap workforce was fleeing the region, and used violence to intimidate Blacks into staying. According to NAACP statistics, recorded incidents of lynching increased from thirty-eight in 1917 to eighty-three in 1919. Dozens of Black veterans were among the victims. The frequency of these killings did not start to decrease until 1923, when the number of annual lynchings dropped below thirty-five for the first time since the Civil War.

THE LAST VESTIGES OF PROGRESSIVISM

Across the United States, the war intersected with the last lingering efforts of the Progressives, who sought to use the war as motivation for their final push for change. It was in large part due to the war’s influence that Progressives were able to lobby for the passage of the Eighteenth and Nineteenth Amendments to the U.S. Constitution. The Eighteenth Amendment, prohibiting alcohol, and the Nineteenth Amendment, giving women the right to vote, received their final impetus due to the war effort.

Prohibition, as the anti-alcohol movement became known, had been a goal of many Progressives for decades. Organizations such as the Women’s Christian Temperance Union and the Anti-Saloon League linked alcohol consumption with any number of societal problems, and they had worked tirelessly with municipalities and counties to limit or prohibit alcohol on a local scale. But with the war, prohibitionists saw an opportunity for federal action. One factor that helped their cause was the strong anti-German sentiment that gripped the country, which turned sympathy away from the largely German-descended immigrants who ran the breweries. Furthermore, the public cry to ration food and grain—the latter being a key ingredient in both beer and hard alcohol—made prohibition seem even more patriotic. The states ratified the Eighteenth Amendment in January 1919, with its provisions to take effect one year later.
Specifically, the amendment prohibited the manufacture, sale, and transportation of intoxicating liquors. It did not prohibit the drinking of alcohol, as there was a widespread feeling that such language would be viewed as too intrusive on personal rights. However, by eliminating the manufacture, sale, and transport of such beverages, drinking was effectively outlawed. Shortly thereafter, Congress passed the Volstead Act, which defined intoxicating liquors, provided for the enforcement of the Eighteenth Amendment’s ban, and regulated the scientific and industrial uses of alcohol. The act also specifically excluded from prohibition the use of alcohol for religious rituals (Figure 23.13).

Unfortunately for proponents of the amendment, the ban on alcohol did not take effect until one full year following the end of the war. Almost immediately following the war, the general public began to oppose—and clearly violate—the law, making it very difficult to enforce. Doctors and druggists, who could prescribe whisky for medicinal purposes, found themselves inundated with requests. In the 1920s, organized crime and gangsters like Al Capone would capitalize on the persistent demand for liquor, making fortunes in the illegal trade. A lack of enforcement, compounded by an overwhelming desire by the public to obtain alcohol at all costs, eventually resulted in the repeal of prohibition in 1933.

The First World War also provided the impetus for another longstanding goal of some reformers: universal suffrage. Supporters of equal rights for women pointed to Wilson’s rallying cry of a war “to make the world safe for democracy” as hypocritical, saying he was sending American boys to die for such principles while simultaneously denying American women their democratic right to vote (Figure 23.14). Carrie Chapman Catt, president of the National American Woman Suffrage Association, capitalized on the growing patriotic fervor to point out that every woman who gained the vote could exercise that right in a show of loyalty to the nation, thus offsetting the dangers of draft-dodgers or naturalized Germans who already had the right to vote. Alice Paul, of the National Woman’s Party, organized more radical tactics, bringing national attention to the issue of women’s suffrage by organizing protests outside the White House and, later, hunger strikes among arrested protesters.

African American suffragists, who had been active in the movement for decades, faced discrimination from their White counterparts. Some White leaders justified this treatment based on the concern that promoting Black women would erode public support. But overt racism played a significant role as well. During the suffrage parade in 1913, Black members were told to march at the rear of the line. Ida B. Wells-Barnett, a prominent voice for equality, first asked her local delegation to oppose this segregation; they refused. Not to be dismissed, Wells-Barnett waited in the crowd until the Illinois delegation passed by, then stepped onto the parade route and took her place among them.

By the end of the war, the abusive treatment of suffragist hunger-strikers in prison, women’s important contribution to the war effort, and the arguments of his suffragist daughter Jessie Woodrow Wilson Sayre moved President Wilson to understand women’s right to vote as an ethical mandate for a true democracy. He began urging congressmen and senators to adopt the legislation. Congress finally passed the amendment in June 1919, and the states ratified it by August 1920.
Specifically, the Nineteenth Amendment prohibited all efforts to deny the right to vote on the basis of sex. It took effect in time for American women to vote in the presidential election of 1920.

23.4 From War to Peace

Learning Objectives
By the end of this section, you will be able to:
Identify the role that the United States played at the end of World War I
Describe Woodrow Wilson’s vision for the postwar world
Explain why the United States never formally approved the Treaty of Versailles nor joined the League of Nations

The American role in World War I was brief but decisive. While millions of soldiers went overseas, and many thousands paid with their lives, the country’s involvement was limited to the very end of the war. In fact, the peace process, with the international conference and the subsequent ratification debate, took longer than the time U.S. soldiers spent “in country” in France. For the Allies, American reinforcements came at a decisive moment in their defense of the western front, where a final offensive had exhausted German forces. For the United States, and for Wilson’s vision of a peaceful future, the fighting was faster and more successful than what was to follow.

WINNING THE WAR

When the United States declared war on Germany in April 1917, the Allied forces were close to exhaustion. Great Britain and France had already indebted themselves heavily in the procurement of vital American military supplies. Now, facing near-certain defeat, a British delegation to Washington, DC, requested immediate troop reinforcements to boost Allied spirits and help crush German fighting morale, which was already weakened by short supplies on the frontlines and hunger on the home front. Wilson agreed and immediately sent 200,000 American troops in June 1917. These soldiers were placed in “quiet zones” while they trained and prepared for combat.

By March 1918, the Germans had won the war on the eastern front. The Russian Revolution of the previous year had not only toppled the hated regime of Tsar Nicholas II but also ushered in a civil war from which the Bolshevik faction of Communist revolutionaries under the leadership of Vladimir Lenin emerged victorious. Weakened by war and internal strife, and eager to build a new Soviet Union, Russian delegates agreed to a peace treaty whose terms were generous to Germany. Thus emboldened, Germany quickly moved upon the Allied lines, prompting both the French and British to ask Wilson to forgo extensive training of U.S. troops and instead commit them to the front immediately. Although wary of the move, Wilson complied, ordering the commander of the American Expeditionary Force, General John “Blackjack” Pershing, to offer U.S. troops as replacements for the Allied units in need of relief. By May 1918, Americans were fully engaged in the war (Figure 23.15).

In a series of battles along the front that took place from May 28 through August 6, 1918, including the battles of Cantigny, Chateau Thierry, Belleau Wood, and the Second Battle of the Marne, American forces alongside the British and French armies succeeded in repelling the German offensive. The Battle of Cantigny, on May 28, was the first American offensive in the war: In less than two hours that morning, American troops overran the German headquarters in the village, thus convincing the French commanders of their ability to fight against the German line advancing towards Paris. The subsequent battles of Chateau Thierry and Belleau Wood proved to be the bloodiest of the war for American troops.
At the latter, faced with a German onslaught of mustard gas, artillery fire, and mortar fire, U.S. Marines attacked German units in the woods on six occasions—at times meeting them in hand-to-hand and bayonet combat—before finally repelling the advance. The U.S. forces suffered 10,000 casualties in the three-week battle, with almost 2,000 killed in total and 1,087 on a single day. Brutal as these battles were, American losses were small compared to the casualties suffered by France and Great Britain. Still, these summer battles turned the tide of the war, with the Germans in full retreat by the end of July 1918 (Figure 23.16).

My Story
Sgt. Charles Leon Boucher: Life and Death in the Trenches of France
Wounded in his shoulder by enemy forces, George, a machine gunner posted on the right end of the American platoon, was taken prisoner at the Battle of Seicheprey in 1918. However, as darkness set in that evening, another American soldier, Charlie, heard a noise from a gully beside the trench in which he had hunkered down. “I figured it must be the enemy mop-up patrol,” Charlie later said.

I only had a couple of bullets left in the chamber of my forty-five. The noise stopped and a head popped into sight. When I was about to fire, I gave another look and a white and distorted face proved to be that of George, so I grabbed his shoulders and pulled him down into our trench beside me. He must have had about twenty bullet holes in him but not one of them was well placed enough to kill him. He made an effort to speak so I told him to keep quiet and conserve his energy. I had a few malted milk tablets left and, I forced them into his mouth. I also poured the last of the water I had left in my canteen into his mouth.

Following a harrowing night, they began to crawl along the road back to their platoon. As they crawled, George explained how he survived being captured. Charlie later told how George “was taken to an enemy First Aid Station where his wounds were dressed. Then the doctor motioned to have him taken to the rear of their lines. But, the Sergeant Major pushed him towards our side and ‘No Mans Land,’ pulled out his Luger Automatic and shot him down. Then, he began to crawl towards our lines little by little, being shot at consistently by the enemy snipers till, finally, he arrived in our position.”

The story of Charlie and George, related later in life by Sgt. Charles Leon Boucher to his grandson, was one replayed many times over in various forms during the American Expeditionary Force’s involvement in World War I. The industrial scale of death and destruction was as new to American soldiers as to their European counterparts, and the survivors brought home physical and psychological scars that influenced the United States long after the war was won (Figure 23.17).

By the end of September 1918, over one million U.S. soldiers staged a full offensive into the Argonne Forest. By November—after nearly forty days of intense fighting—the German lines were broken, and the military command reported to German Emperor Kaiser Wilhelm II the desperate need to end the war and enter into peace negotiations. Facing civil unrest from the German people in Berlin, as well as the loss of support from his military high command, Kaiser Wilhelm abdicated his throne on November 9, 1918, and immediately fled by train to the Netherlands. Two days later, on November 11, 1918, Germany and the Allies agreed to an immediate armistice, thus bringing the fighting to a stop and signaling the beginning of the peace process.
When the armistice was declared, a total of 117,000 American soldiers had been killed and 206,000 wounded. The Allies as a whole suffered over 5.7 million military deaths, primarily of Russian, British, and French men. The Central powers suffered four million military deaths, half of them German soldiers. The total cost of the war to the United States alone was in excess of $32 billion, with interest expenses and veterans’ benefits eventually bringing the cost to well over $100 billion. Economically, emotionally, and geopolitically, the war had taken an enormous toll.

THE BATTLE FOR PEACE

While Wilson had been loath to involve the United States in the war, he saw the country’s eventual participation as justification for America’s involvement in developing a moral foreign policy for the entire world. The “new world order” he wished to create from the outset of his presidency was now within his grasp. The United States emerged from the war as the predominant world power, and Wilson sought to capitalize on that influence and impose his moral foreign policy on all the nations of the world.

The Paris Peace Conference
As early as January 1918—a full five months before U.S. military forces fired their first shot in the war, and eleven months before the actual armistice—Wilson announced his postwar peace plan before a joint session of Congress. Referring to what became known as the Fourteen Points, Wilson called for openness in all matters of diplomacy and trade, specifically, free trade, freedom of the seas, an end to secret treaties and negotiations, promotion of self-determination of all nations, and more. In addition, he called for the creation of a League of Nations to promote the new world order and preserve territorial integrity through open discussions in place of intimidation and war.

As the war concluded, Wilson announced, to the surprise of many, that he would attend the Paris Peace Conference himself, rather than deferring to the tradition of sending professional diplomats to represent the country (Figure 23.18). His decision influenced other nations to follow suit, and the Paris conference became the largest meeting of world leaders to date. For six months, beginning in December 1918, Wilson remained in Paris to personally conduct peace negotiations.

Although the French public greeted Wilson with overwhelming enthusiasm, other delegates at the conference had deep misgivings about the American president’s plans for a “peace without victory.” Specifically, Great Britain, France, and Italy sought to obtain some measure of revenge against Germany for drawing them into the war, to secure themselves against possible future aggressions from that nation, and also to maintain or even strengthen their own colonial possessions. Great Britain and France in particular sought substantial monetary reparations, as well as territorial gains, at Germany’s expense. Japan also desired concessions in Asia, whereas Italy sought new territory in Europe. Finally, the threat posed by a Bolshevik Russia under Vladimir Lenin, and more importantly, the danger of revolutions elsewhere, further spurred these allies to use the treaty negotiations to expand their territories and secure their strategic interests, rather than strive towards world peace.

In the end, the Treaty of Versailles that officially concluded World War I bore little resemblance to Wilson’s original Fourteen Points. The Japanese, French, and British succeeded in carving up many of Germany’s colonial holdings in Africa and Asia.
The dissolution of the Ottoman Empire created new nations, such as Iraq and Palestine, under the quasi-colonial rule of France and Great Britain. France gained much of the disputed territory along its border with Germany and secured the inclusion of a “war guilt clause” that demanded Germany take public responsibility for starting and prosecuting the war that led to so much death and destruction. Great Britain led the charge that resulted in Germany agreeing to pay reparations in excess of $33 billion to the Allies. As for Bolshevik Russia, Wilson had agreed to send American troops to its northern region to protect Allied supplies and holdings there, while also participating in an economic blockade designed to undermine Lenin’s power. This move would ultimately have the opposite effect, galvanizing popular support for the Bolsheviks.

The sole piece of the original Fourteen Points that Wilson successfully fought to keep intact was the creation of a League of Nations. Under a covenant agreed to at the conference, all member nations of the League would agree to defend any other member nation against military threats. Known as Article X, this agreement would effectively render each nation equal in terms of power, as no member nation would be able to use its military might against a weaker member nation. Ironically, this article would prove to be the undoing of Wilson’s dream of a new world order.

Ratification of the Treaty of Versailles
Although the other nations agreed to the final terms of the Treaty of Versailles, Wilson’s greatest battle lay in the ratification debate that awaited him upon his return. As with all treaties, this one would require two-thirds approval by the U.S. Senate for final ratification, something Wilson knew would be difficult to achieve. Even before Wilson’s return to Washington, Senator Henry Cabot Lodge, chairman of the Senate Foreign Relations Committee that oversaw ratification proceedings, issued a list of fourteen reservations he had regarding the treaty, most of which centered on the creation of a League of Nations. An isolationist in foreign policy issues, Lodge feared that Article X would require extensive American intervention, as more countries would seek its protection in all controversial affairs. But on the other side of the political spectrum, interventionists argued that Article X would impede the United States from using its rightfully attained military power to secure and protect America’s international interests.

Wilson’s greatest fight was with the Senate, where most Republicans opposed the treaty due to the clauses surrounding the creation of the League of Nations. Some Republicans, known as Irreconcilables, opposed the treaty on all grounds, whereas others, called Reservationists, would support the treaty if sufficient amendments were introduced that could eliminate Article X. In an effort to turn public support into a weapon against those in opposition, Wilson embarked on a cross-country railway speaking tour. He began travelling in September 1919, and the grueling pace, after the stress of the six months in Paris, proved too much. Wilson collapsed following a public event on September 25, 1919, and immediately returned to Washington. There he suffered a debilitating stroke, leaving his second wife, Edith Wilson, in charge as de facto president for a period of about six months.
Frustrated that his dream of a new world order was slipping away—a frustration compounded by the fact that, now an invalid, he was unable to articulate his own thoughts coherently—Wilson urged Democrats in the Senate to reject any effort to compromise on the treaty. As a result, the Senate voted on, and defeated, the originally worded treaty in November. When the treaty was reintroduced with “reservations,” or amendments, in March 1920, it again fell short of the necessary margin for ratification. As a result, the United States never became an official signatory of the Treaty of Versailles. Nor did the country join the League of Nations, a refusal that shattered the international authority and significance of the organization. Although Wilson received the 1919 Nobel Peace Prize for his efforts to create a model of world peace, he remained personally embarrassed and angry at his country’s refusal to be a part of that model. As a result of its rejection of the treaty, the United States technically remained at war with Germany until July 21, 1921, when the state of war formally came to a close with Congress’s quiet passage of the Knox-Porter Resolution.

23.5 Demobilization and Its Difficult Aftermath

Learning Objectives
By the end of this section, you will be able to:
Identify the challenges that the United States faced following the conclusion of World War I
Explain Warren G. Harding’s landslide victory in the 1920 presidential election

As world leaders debated the terms of the peace, the American public faced its own challenges at the conclusion of the First World War. Several unrelated factors intersected to create a chaotic and difficult time, just as massive numbers of troops rapidly demobilized and came home. Racial tensions, a terrifying flu epidemic, anticommunist hysteria, and economic uncertainty all combined to leave many Americans wondering what, exactly, they had won in the war. Adding to these problems was the absence of President Wilson, who remained in Paris for six months, leaving the country leaderless. The result of these factors was that, rather than a celebratory transition from wartime to peace and prosperity, and ultimately the Jazz Age of the 1920s, 1919 was a tumultuous year that threatened to tear the country apart.

DISORDER AND FEAR IN AMERICA

After the war ended, U.S. troops were demobilized and rapidly sent home. One unanticipated and unwanted effect of their return was the spread of a new strain of influenza that medical professionals had never before encountered. Within months of the war’s end, over twenty million Americans fell ill from the flu (Figure 23.19). Eventually, 675,000 Americans died before the disease mysteriously ran its course in the spring of 1919. Worldwide, recent estimates suggest that 500 million people suffered from this flu strain, with as many as fifty million people dying. Throughout the United States, from the fall of 1918 to the spring of 1919, fear of the flu gripped the country. Americans avoided public gatherings, children wore surgical masks to school, and undertakers ran out of coffins and burial plots in cemeteries. Hysteria grew as well, and instead of welcoming soldiers home with a postwar celebration, people hunkered down and hoped to avoid contagion.

Another element that greatly influenced the challenges of immediate postwar life was economic upheaval. As discussed above, wartime production had led to steady inflation; the rising cost of living meant that few Americans could comfortably afford to live off their wages.
When the government’s wartime control over the economy ended, businesses slowly recalibrated from the wartime production of guns and ships to the peacetime production of toasters and cars. Public demand quickly outpaced the slow production, leading to notable shortages of domestic goods. As a result, inflation skyrocketed in 1919. By the end of the year, the cost of living in the United States was nearly double what it had been in 1916. Workers, whose wages fell short of the prices of increasingly expensive goods, and who were no longer bound by the no-strike pledge they had made to the National War Labor Board, initiated a series of strikes for better hours and wages. In 1919 alone, more than four million workers participated in a total of nearly three thousand strikes: both records in American history to that point.

In addition to labor clashes, race riots shattered the peace on the home front. The race riots that had begun during the Great Migration only grew in postwar America. White soldiers returned home to find Black workers in their former jobs and neighborhoods, and were committed to restoring their position of White supremacy. Black soldiers returned home with a renewed sense of justice and strength, and were determined to assert their rights as men and as citizens. Meanwhile, southern lynchings continued to escalate, with White mobs burning African Americans at the stake. The mobs often used false accusations of indecency and assault on White women to justify the murders. During the “Red Summer” of 1919, northern cities recorded twenty-five bloody race riots that killed over 250 people. Among these was the Chicago Race Riot of 1919, in which a White mob stoned a young Black boy to death because he swam too close to the “White beach” on Lake Michigan. Police at the scene did not arrest the perpetrator who threw the rock. This crime prompted a week-long riot that left twenty-three Blacks and fifteen Whites dead and caused millions of dollars’ worth of damage to the city (Figure 23.20).

A massacre in Tulsa, Oklahoma, in 1921 proved even more deadly, with estimates of Black fatalities ranging from fifty to three hundred. Again, the violence arose from a dubious allegation of assault on a White girl by a Black teenager. After an incendiary newspaper article, a confrontation at the courthouse led to the deaths of ten White and two Black people. A riot ensued, with White groups pursuing Blacks as they retreated to the Greenwood section of the city. Both sides were armed, and gunfire and arson continued throughout the night. The next morning, the White groups began an assault on the Black neighborhoods, killing many Black residents and destroying homes and businesses. The Tulsa Massacre (also called the Tulsa Riot, Greenwood Massacre, or Black Wall Street Massacre) was widely reported at the time, but was omitted from many historical recollections, textbooks, and media for decades.

My Story
The Tulsa Race Riot and Three of Its Victims
B.C. Franklin was a prominent Black lawyer in Tulsa, Oklahoma. A survivor of the Tulsa Massacre, he penned a first-person account ten years after the events. The manuscript was uncovered in 2015 and has been published by the Smithsonian.

About mid-night, I arose and went to the north porch on the second floor of my hotel and, looking in a north-westerly direction, I saw the top of stand-pipe hill literally lighted up by blazes that came from the throats of machine guns, and I could hear bullets whizzing and cutting the air.
There was shooting now in every direction, and the sounds that came from the thousands and thousands of guns were deafening.... I reached my office in safety, but I knew that that safety would be short-lived. I now knew the mob-spirit. I knew too that government and law and order had broken down. I knew that mob law had been substituted in all its fiendishness and barbarity. I knew that the mobbist cared nothing about the written law and the constitution and I also knew that he had neither the patience nor the intelligence to distinguish between the good and the bad, the law-abiding and the lawless in my race. From my office window, I could see planes circling in mid-air. They grew in number and hummed, darted and dipped low. I could hear something like hail falling upon the top of my office building. Down East Archer, I saw the old Mid-Way hotel on fire, burning from its top, and then another and another and another building began to burn from the top.

While illness, economic hardship, and racial tensions all came from within, another destabilizing factor arrived from overseas. As revolutionary rhetoric emanating from Bolshevik Russia intensified in 1918 and 1919, a Red Scare erupted in the United States over fear that Communist infiltrators sought to overthrow the American government as part of an international revolution (Figure 23.21). When investigators uncovered a collection of thirty-six letter bombs at a New York City post office, with recipients that included several federal, state, and local public officials, as well as industrial leaders such as John D. Rockefeller, fears grew significantly. And when eight additional bombs actually exploded simultaneously on June 2, 1919, including one that destroyed the entrance to U.S. attorney general A. Mitchell Palmer’s house in Washington, the country was convinced that all radicals, of whatever ilk, were to blame. Socialists, Communists, members of the Industrial Workers of the World (Wobblies), and anarchists: They were all threats to be taken down.

Private citizens who considered themselves upstanding and loyal Americans, joined by discharged soldiers and sailors, raided radical meeting houses in many major cities, attacking any alleged radicals they found inside. By November 1919, Palmer’s new assistant in charge of the Bureau of Investigation, J. Edgar Hoover, organized nationwide raids on radical headquarters in twelve cities around the country. Subsequent “Palmer raids” resulted in the arrests of four thousand alleged American radicals, who were detained for weeks in overcrowded cells. Almost 250 of those arrested were subsequently deported on board a ship dubbed “the Soviet Ark” (Figure 23.22).

A RETURN TO NORMALCY

By 1920, Americans’ great expectations of making the world safer and more democratic had gone unfulfilled. The flu epidemic had demonstrated the limits of science and technology in making Americans less vulnerable. The Red Scare signified Americans’ fear of revolutionary politics and the persistence of violent capital-labor conflicts. And race riots made it clear that the nation was no closer to peaceful race relations either. After a long era of Progressive initiatives and new government agencies, followed by a costly war that did not end in a better world, most of the public sought to focus on economic progress and success in their private lives instead.
As the presidential election of 1920 unfolded, just how tired Americans were of an interventionist government—whether in terms of Progressive reform or international involvement—became exceedingly clear. Republicans, anxious to return to the White House after eight years of Wilsonian idealism, capitalized on this growing American sentiment to find the candidate who would promise a return to normalcy. The Republicans found their man in Senator Warren G. Harding from Ohio. Although not the most energetic candidate for the White House, Harding offered what party handlers desired—a candidate around whom they could mold their policies of low taxes, immigration restriction, and noninterference in world affairs. He also provided Americans with what they desired: a candidate who could look and act presidential, and yet leave them alone to live their lives as they wished.

Democratic leaders realized they had little chance at victory. Wilson remained adamant that the election be a referendum on his League of Nations, yet after his stroke, he was in no physical condition to run for a third term. Political infighting within his cabinet, most notably between A. Mitchell Palmer and William McAdoo, threatened to split the party convention until a compromise candidate could be found in Ohio governor James Cox. Cox chose as his vice presidential running mate the young Assistant Secretary of the Navy, Franklin Delano Roosevelt.

At a time when Americans wanted prosperity and normalcy, rather than continued interference in their lives, Harding won in an overwhelming landslide, with 404 votes to 127 in the Electoral College, and 60 percent of the popular vote. With the war, the flu epidemic, the Red Scare, and other issues behind them, Americans looked forward to Harding’s inauguration in 1921, and to an era of personal freedoms and hedonism that would come to be known as the Jazz Age.
Summary

4.1 Explain the Concepts and Guidelines Affecting Adjusting Entries
The next three steps in the accounting cycle are adjusting entries (journalizing and posting), preparing an adjusted trial balance, and preparing the financial statements. These steps consider end-of-period transactions and their impact on financial statements. Accrual basis accounting is used by companies governed by US GAAP or IFRS, and it requires revenues and expenses to be recorded in the accounting period in which they occur, not necessarily the period in which an associated cash event happens. This is unlike cash basis accounting, which delays reporting revenues and expenses until a cash event occurs. Companies need timely and consistent financial information presented for users to consider in their decision-making. Accounting periods help companies do this by breaking down information into months, quarters, half-years, and full years. A calendar year considers financial information for a company for the period of January 1 to December 31 of a specific year. A fiscal year is any twelve-month reporting cycle that does not begin on January 1 and end on December 31. An interim period is any reporting period that does not cover a full year, which is useful when users need timely information for making financial decisions.

4.2 Discuss the Adjustment Process and Illustrate Common Types of Adjusting Entries
Incorrect balances: Incorrect balances on the unadjusted trial balance occur because not every transaction produces an original source document that alerts the bookkeeper that it is time to make an entry. It is not that the accountant made an error; rather, an adjustment is required to update the balance.
Need for adjustments: Some account adjustments are needed to update records that may not have original source documents or that do not reflect change on a daily basis. The revenue recognition principle, expense recognition principle, and time period assumption all further the need for adjusting entries because they require that revenues and expenses be reported in the period in which they are earned and incurred.
Prepaid expenses: Prepaid expenses are assets paid for before their use. When they are used, the asset’s value is reduced and an expense is recognized. Some examples include supplies, insurance, and depreciation.
Unearned revenues: These are advance payments from customers for products or services yet to be provided. When the company provides the product or service, revenue is then recognized.
Accrued revenues: Accrued revenues are revenues earned in a period that have yet to be recorded and for which no money has been collected. Accrued revenues are updated at the end of the period to recognize the revenue and the money owed to the company.
Accrued expenses: Accrued expenses are expenses incurred in a period that have yet to be recorded and for which no money has been paid. Accrued expenses are updated to reflect the expense and the company’s liability.

4.3 Record and Post the Common Types of Adjusting Entries
Rules for adjusting entries: The rules for recording adjusting entries are as follows: every adjusting entry will have one income statement account and one balance sheet account, cash will never be in an adjusting entry, and the adjusting entry records the change in amount that occurred during the period.
Posting adjusting entries: Posting adjusting entries is the same process as posting general journal entries.
The additional adjustments may add accounts to the end of the period or may change account balances from the earlier journal entry step in the accounting cycle. 4.4 Use the Ledger Balances to Prepare an Adjusted Trial Balance Adjusted trial balance: The adjusted trial balance lists all general ledger accounts that have nonzero balances after the adjusting entries have been posted. This trial balance is an important step in the accounting process because it helps identify any computational errors throughout the first five steps in the cycle. 4.5 Prepare Financial Statements Using the Adjusted Trial Balance Income Statement: The income statement shows the net income or loss that results from revenue and expense activities occurring in a period. Statement of Retained Earnings: The statement of retained earnings shows the effects of net income (or loss) and dividends on the earnings the company retains. Balance Sheet: The balance sheet visually represents the accounting equation, showing that assets balance with liabilities and equity. 10-column worksheet: The 10-column worksheet organizes data from the trial balance all the way through the financial statements.
Chapter Outline 4.1 Explain the Concepts and Guidelines Affecting Adjusting Entries 4.2 Discuss the Adjustment Process and Illustrate Common Types of Adjusting Entries 4.3 Record and Post the Common Types of Adjusting Entries 4.4 Use the Ledger Balances to Prepare an Adjusted Trial Balance 4.5 Prepare Financial Statements Using the Adjusted Trial Balance Why It Matters As we learned in Analyzing and Recording Transactions, upon finishing college Mark Summers wanted to start his own dry-cleaning business called Supreme Cleaners. After four years, Mark finished college and opened Supreme Cleaners. During his first month of operations, Mark purchased dry-cleaning equipment and supplies. He also hired an employee, opened a savings account, and provided services to his first customers, among other things. Mark kept thorough records of all of the daily business transactions for the month. At the end of the month, Mark reviewed his trial balance and realized that some of the information was not up to date. His equipment and supplies had been used, making them less valuable. He had not yet paid his employee for work completed. His business savings account earned interest. Some of his customers had paid in advance for their dry cleaning, with Mark's business providing the service during the month. What should Mark do with all of these events? Does he have a responsibility to record these transactions? If so, how would he go about recording this information? How does it affect his financial statements? Mark will have to explore his accounting process to determine if these end-of-period transactions require recording and adjust his financial statements accordingly. This exploration is performed by taking the next few steps in the accounting cycle.
[ { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> An interim period is any reporting period shorter than a full year ( fiscal or calendar ) . <hl> <hl> This can encompass monthly , quarterly , or half-year statements . <hl> The information contained on these statements is timelier than waiting for a yearly accounting period to end . The most common interim period is three months , or a quarter . For companies whose common stock is traded on a major stock exchange , meaning these are publicly traded companies , quarterly statements must be filed with the SEC on a Form 10 - Q . The companies must file a Form 10 - K for their annual statements . As you ’ ve learned , the SEC is an independent agency of the federal government that provides oversight of public companies to maintain fair representation of company financial activities for investors to make informed decisions .", "hl_sentences": "An interim period is any reporting period shorter than a full year ( fiscal or calendar ) . This can encompass monthly , quarterly , or half-year statements .", "question": { "cloze_format": "A(n) ___ is any reporting period shorter than a full year (fiscal or calendar) and can encompass monthly, quarterly, or half-year statements.", "normal_format": "Which of the following is any reporting period shorter than a full year (fiscal or calendar) and can encompass monthly, quarterly, or half-year statements?", "question_choices": [ "fiscal year", "interim period", "calendar year", "fixed year" ], "question_id": "fs-idm404923552", "question_text": "Which of the following is any reporting period shorter than a full year (fiscal or calendar) and can encompass monthly, quarterly, or half-year statements?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "SEC (Securities and Exchange Commission)" }, "bloom": null, "hl_context": "An interim period is any reporting period shorter than a full year ( fiscal or calendar ) . This can encompass monthly , quarterly , or half-year statements . The information contained on these statements is timelier than waiting for a yearly accounting period to end . The most common interim period is three months , or a quarter . For companies whose common stock is traded on a major stock exchange , meaning these are publicly traded companies , quarterly statements must be filed with the SEC on a Form 10 - Q . The companies must file a Form 10 - K for their annual statements . <hl> As you ’ ve learned , the SEC is an independent agency of the federal government that provides oversight of public companies to maintain fair representation of company financial activities for investors to make informed decisions . 
<hl>", "hl_sentences": "As you ’ ve learned , the SEC is an independent agency of the federal government that provides oversight of public companies to maintain fair representation of company financial activities for investors to make informed decisions .", "question": { "cloze_format": "___ is the federal, independent agency that provides oversight of public companies to maintain fair representation of company financial activities for investors to make informed decisions.", "normal_format": "Which of the following is the federal, independent agency that provides oversight of public companies to maintain fair representation of company financial activities for investors to make informed decisions?", "question_choices": [ "IRS (Internal Revenue Service)", "SEC (Securities and Exchange Commission)", "FASB (Financial Accounting Standards Board)", "FDIC (Federal Deposit Insurance Corporation)" ], "question_id": "fs-idm392341680", "question_text": "Which of the following is the federal, independent agency that provides oversight of public companies to maintain fair representation of company financial activities for investors to make informed decisions?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "Public companies reporting their financial positions use either US generally accepted accounting principles ( GAAP ) or International Financial Reporting Standards ( IFRS ) , as allowed under the Securities and Exchange Commission ( SEC ) regulations . Also , companies , public or private , using US GAAP or IFRS prepare their financial statements using the rules of accrual accounting . <hl> Recall from Introduction to Financial Statements that accrual basis accounting prescribes that revenues and expenses must be recorded in the accounting period in which they were earned or incurred , no matter when cash receipts or payments occur . <hl> It is because of accrual accounting that we have the revenue recognition principle and the expense recognition principle ( also known as the matching principle ) .", "hl_sentences": "Recall from Introduction to Financial Statements that accrual basis accounting prescribes that revenues and expenses must be recorded in the accounting period in which they were earned or incurred , no matter when cash receipts or payments occur .", "question": { "cloze_format": "Revenues and expenses must be recorded in the accounting period in which they were earned or incurred, no matter when cash receipts or outlays occur under the accounting method ___ .", "normal_format": "Revenues and expenses must be recorded in the accounting period in which they were earned or incurred, no matter when cash receipts or outlays occur under which of the following accounting methods?", "question_choices": [ "accrual basis accounting", "cash basis accounting", "tax basis accounting", "revenue basis accounting" ], "question_id": "fs-idm372709552", "question_text": "Revenues and expenses must be recorded in the accounting period in which they were earned or incurred, no matter when cash receipts or outlays occur under which of the following accounting methods?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "accounting period" }, "bloom": null, "hl_context": "As we discussed , accrual accounting requires companies to report revenues and expenses in the accounting period in which they were earned or incurred . 
<hl> An accounting period breaks down company financial information into specific time spans , and can cover a month , a quarter , a half-year , or a full year . <hl> Public companies governed by GAAP are required to present quarterly ( three-month ) accounting period financial statements called 10 - Qs . However , most public and private companies keep monthly , quarterly , and yearly ( annual ) period information . This is useful to users needing up-to-date financial data to make decisions about company investment and growth . When the company keeps yearly information , the year could be based on a fiscal or calendar year . This is explained shortly .", "hl_sentences": "An accounting period breaks down company financial information into specific time spans , and can cover a month , a quarter , a half-year , or a full year .", "question": { "cloze_format": "The ___ breaks down company financial information into specific time spans, and can cover a month, quarter, half-year, or full year.", "normal_format": "Which of the following breaks down company financial information into specific time spans, and can cover a month, quarter, half-year, or full year?", "question_choices": [ "accounting period", "yearly period", "monthly period", "fiscal period" ], "question_id": "fs-idm357481840", "question_text": "Which of the following breaks down company financial information into specific time spans, and can cover a month, quarter, half-year, or full year?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> A fiscal year is a twelve-month reporting cycle that can begin in any month and records financial data for that consecutive twelve-month period . <hl> For example , a business may choose its fiscal year to begin on April 1 , 2019 , and end on March 31 , 2020 . This can be common practice for corporations and may best reflect the operational flow of revenues and expenses for a particular business . In addition to annual reporting , companies often need or choose to report financial statement information in interim periods .", "hl_sentences": "A fiscal year is a twelve-month reporting cycle that can begin in any month and records financial data for that consecutive twelve-month period .", "question": { "cloze_format": "The ___ is a twelve-month reporting cycle that can begin in any month, except January 1, and records financial data for that twelve-month consecutive period.", "normal_format": "Which of the following is a twelve-month reporting cycle that can begin in any month, except January 1, and records financial data for that twelve-month consecutive period?", "question_choices": [ "fixed year", "interim period", "calendar year", "fiscal year" ], "question_id": "fs-idm378377696", "question_text": "Which of the following is a twelve-month reporting cycle that can begin in any month, except January 1, and records financial data for that twelve-month consecutive period?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "deferral" }, "bloom": null, "hl_context": "<hl> Deferrals are prepaid expense and revenue accounts that have delayed recognition until they have been used or earned . <hl> This recognition may not occur until the end of a period or future periods . When deferred expenses and revenues have yet to be recognized , their information is stored on the balance sheet . 
As soon as the expense is incurred and the revenue is earned , the information is transferred from the balance sheet to the income statement . Two main types of deferrals are prepaid expenses and unearned revenues .", "hl_sentences": "Deferrals are prepaid expense and revenue accounts that have delayed recognition until they have been used or earned .", "question": { "cloze_format": "The type of adjustment that occurs when cash is either collected or paid, but the related income or expense is not reportable in the current period, is referred to as ___.", "normal_format": "Which type of adjustment occurs when cash is either collected or paid, but the related income or expense is not reportable in the current period?", "question_choices": [ "accrual", "deferral", "estimate", "cull" ], "question_id": "fs-idm318988096", "question_text": "Which type of adjustment occurs when cash is either collected or paid, but the related income or expense is not reportable in the current period?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> Accruals are types of adjusting entries that accumulate during a period , where amounts were previously unrecorded . <hl> The two specific types of adjustments are accrued revenues and accrued expenses .", "hl_sentences": "Accruals are types of adjusting entries that accumulate during a period , where amounts were previously unrecorded .", "question": { "cloze_format": "The ___ adjustment occurs when cash is not collected or paid, but the related income or expense is reportable in the current period.", "normal_format": "Which type of adjustment occurs when cash is not collected or paid, but the related income or expense is reportable in the current period?", "question_choices": [ "accrual", "deferral", "estimate", "cull" ], "question_id": "fs-idm364707296", "question_text": "Which type of adjustment occurs when cash is not collected or paid, but the related income or expense is reportable in the current period?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "Accumulated Depreciation is contrary to an asset account , such as Equipment . This means that the normal balance for Accumulated Depreciation is on the credit side . It houses all depreciation expensed in current and prior periods . Accumulated Depreciation will reduce the asset account for depreciation incurred up to that point . The difference between the asset ’ s value ( cost ) and accumulated depreciation is called the book value of the asset . <hl> When depreciation is recorded in an adjusting entry , Accumulated Depreciation is credited and Depreciation Expense is debited . <hl> Depreciation may also require an adjustment at the end of the period . Recall that depreciation is the systematic method to record the allocation of cost over a given period of certain assets . This allocation of cost is recorded over the useful life of the asset , or the time period over which an asset cost is allocated . <hl> The allocated cost up to that point is recorded in Accumulated Depreciation , a contra asset account . <hl> A contra account is an account paired with another account type , has an opposite normal balance to the paired account , and reduces the balance in the paired account at the end of a period . Deferrals are prepaid expense and revenue accounts that have delayed recognition until they have been used or earned . 
This recognition may not occur until the end of a period or future periods . When deferred expenses and revenues have yet to be recognized , their information is stored on the balance sheet . As soon as the expense is incurred and the revenue is earned , the information is transferred from the balance sheet to the income statement . <hl> Two main types of deferrals are prepaid expenses and unearned revenues . <hl>", "hl_sentences": "When depreciation is recorded in an adjusting entry , Accumulated Depreciation is credited and Depreciation Expense is debited . The allocated cost up to that point is recorded in Accumulated Depreciation , a contra asset account . Two main types of deferrals are prepaid expenses and unearned revenues .", "question": { "cloze_format": "If an adjustment includes an entry to Accumulated Depreciation, it is the ___ type of adjustment.", "normal_format": "If an adjustment includes an entry to Accumulated Depreciation, which type of adjustment is it?", "question_choices": [ "accrual", "deferral", "estimate", "cull" ], "question_id": "fs-idm628788832", "question_text": "If an adjustment includes an entry to Accumulated Depreciation, which type of adjustment is it?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "deferred revenue (unearned revenue)" }, "bloom": null, "hl_context": "<hl> Recall that unearned revenue represents a customer ’ s advanced payment for a product or service that has yet to be provided by the company . <hl> Since the company has not yet provided the product or service , it cannot recognize the customer ’ s payment as revenue . At the end of a period , the company will review the account to see if any of the unearned revenue has been earned . If so , this amount will be recorded as revenue in the current period .", "hl_sentences": "Recall that unearned revenue represents a customer ’ s advanced payment for a product or service that has yet to be provided by the company .", "question": { "cloze_format": "Rent collected in advance is an example of ___.", "normal_format": "Rent collected in advance is an example of which of the following?", "question_choices": [ "accrued expense", "accrued revenue", "deferred expense (prepaid expense)", "deferred revenue (unearned revenue)" ], "question_id": "fs-idm628928720", "question_text": "Rent collected in advance is an example of which of the following?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Recall from Analyzing and Recording Transactions that prepaid expenses ( prepayments ) are assets for which advanced payment has occurred , before the company can benefit from use . <hl> As soon as the asset has provided benefit to the company , the value of the asset used is transferred from the balance sheet to the income statement as an expense . <hl> Some common examples of prepaid expenses are supplies , depreciation , insurance , and rent . <hl>", "hl_sentences": "Recall from Analyzing and Recording Transactions that prepaid expenses ( prepayments ) are assets for which advanced payment has occurred , before the company can benefit from use . 
Some common examples of prepaid expenses are supplies , depreciation , insurance , and rent .", "question": { "cloze_format": "Rent paid in advance is an example of ___.", "normal_format": "Rent paid in advance is an example of which of the following?", "question_choices": [ "accrued expense", "accrued revenue", "deferred expense (prepaid expense)", "deferred revenue (unearned revenue)" ], "question_id": "fs-idm365613904", "question_text": "Rent paid in advance is an example of which of the following?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "accrued expense" }, "bloom": null, "hl_context": "<hl> Accrued expenses are expenses incurred in a period but have yet to be recorded , and no money has been paid . <hl> <hl> Some examples include interest , tax , and salary expenses . <hl>", "hl_sentences": "Accrued expenses are expenses incurred in a period but have yet to be recorded , and no money has been paid . Some examples include interest , tax , and salary expenses .", "question": { "cloze_format": "Salaries owed but not yet paid is an example of ___ .", "normal_format": "Salaries owed but not yet paid is an example of which of the following?", "question_choices": [ "accrued expense", "accrued revenue", "deferred expense (prepaid expense)", "deferred revenue (unearned revenue)" ], "question_id": "fs-idm364268864", "question_text": "Salaries owed but not yet paid is an example of which of the following?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> Accrued revenues are revenues earned in a period but have yet to be recorded , and no money has been collected . <hl> Some examples include interest , and services completed but a bill has yet to be sent to the customer .", "hl_sentences": "Accrued revenues are revenues earned in a period but have yet to be recorded , and no money has been collected .", "question": { "cloze_format": "Revenue earned but not yet collected is an example of ___ .", "normal_format": "Revenue earned but not yet collected is an example of which of the following?", "question_choices": [ "accrued expense", "accrued revenue", "deferred expense (prepaid expense)", "deferred revenue (unearned revenue)" ], "question_id": "fs-idm364493680", "question_text": "Revenue earned but not yet collected is an example of which of the following?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "a debit to Depreciation Expense; a credit to Accumulated Depreciation" }, "bloom": null, "hl_context": "<hl> In the journal entry , Depreciation Expense – Equipment has a debit of $ 75 . <hl> This is posted to the Depreciation Expense – Equipment T-account on the debit side ( left side ) . <hl> Accumulated Depreciation – Equipment has a credit balance of $ 75 . <hl> This is posted to the Accumulated Depreciation – Equipment T-account on the credit side ( right side ) . Accumulated Depreciation is contrary to an asset account , such as Equipment . This means that the normal balance for Accumulated Depreciation is on the credit side . It houses all depreciation expensed in current and prior periods . Accumulated Depreciation will reduce the asset account for depreciation incurred up to that point . The difference between the asset ’ s value ( cost ) and accumulated depreciation is called the book value of the asset . 
<hl> When depreciation is recorded in an adjusting entry , Accumulated Depreciation is credited and Depreciation Expense is debited . <hl>", "hl_sentences": "In the journal entry , Depreciation Expense – Equipment has a debit of $ 75 . Accumulated Depreciation – Equipment has a credit balance of $ 75 . When depreciation is recorded in an adjusting entry , Accumulated Depreciation is credited and Depreciation Expense is debited .", "question": { "cloze_format": "The adjusting journal entry that is needed to record depreciation expense for the period is ___.", "normal_format": "What adjusting journal entry is needed to record depreciation expense for the period?", "question_choices": [ "a debit to Depreciation Expense; a credit to Cash", "a debit to Accumulated Depreciation; a credit to Depreciation Expense", "a debit to Depreciation Expense; a credit to Accumulated Depreciation", "a debit to Accumulated Depreciation; a credit to Cash" ], "question_id": "fs-idm490492736", "question_text": "What adjusting journal entry is needed to record depreciation expense for the period?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> Revenue recognition principle : Adjusting entries are necessary because the revenue recognition principle requires revenue recognition when earned , thus the need for an update to unearned revenues . <hl>", "hl_sentences": "Revenue recognition principle : Adjusting entries are necessary because the revenue recognition principle requires revenue recognition when earned , thus the need for an update to unearned revenues .", "question": { "cloze_format": "A transaction that requires an adjusting entry (debit) to Unearned Revenue is ___ . ", "normal_format": "Which of these transactions requires an adjusting entry (debit) to Unearned Revenue?", "question_choices": [ "revenue earned but not yet collected", "revenue collected but not yet earned", "revenue earned before being collected, when it is later collected", "revenue collected before being earned, when it is later earned" ], "question_id": "fs-idm237970976", "question_text": "Which of these transactions requires an adjusting entry (debit) to Unearned Revenue?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "It is the source document from which to prepare the financial statements" }, "bloom": null, "hl_context": "There are several steps in the accounting cycle that require the preparation of a trial balance : step 4 , preparing an unadjusted trial balance ; step 6 , preparing an adjusted trial balance ; and step 9 , preparing a post-closing trial balance . <hl> You might question the purpose of more than one trial balance . <hl> <hl> For example , why can we not go from the unadjusted trial balance straight into preparing financial statements for public consumption ? <hl> What is the purpose of the adjusted trial balance ? Does preparing more than one trial balance mean the company made a mistake earlier in the accounting cycle ? To answer these questions , let ’ s first explore the ( unadjusted ) trial balance , and why some accounts have incorrect balances .", "hl_sentences": "You might question the purpose of more than one trial balance . 
For example , why can we not go from the unadjusted trial balance straight into preparing financial statements for public consumption ?", "question": { "cloze_format": "A critical purpose that the adjusted trial balance serves is that ___ .", "normal_format": "What critical purpose does the adjusted trial balance serve?", "question_choices": [ "It proves that transactions have been posted correctly", "It is the source document from which to prepare the financial statements", "It shows the beginning balances of every account, to be used to start the new year’s records", "It proves that all journal entries have been made correctly." ], "question_id": "fs-idm343131664", "question_text": "What critical purpose does the adjusted trial balance serve?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Ending retained earnings information is taken from the statement of retained earnings , and asset , liability , and common stock information is taken from the adjusted trial balance as follows . <hl> The statement of retained earnings ( which is often a component of the statement of stockholders ’ equity ) shows how the equity ( or value ) of the organization has changed over a period of time . <hl> The statement of retained earnings is prepared second to determine the ending retained earnings balance for the period . <hl> <hl> The statement of retained earnings is prepared before the balance sheet because the ending retained earnings amount is a required element of the balance sheet . <hl> The following is the Statement of Retained Earnings for Printing Plus .", "hl_sentences": "Ending retained earnings information is taken from the statement of retained earnings , and asset , liability , and common stock information is taken from the adjusted trial balance as follows . The statement of retained earnings is prepared second to determine the ending retained earnings balance for the period . The statement of retained earnings is prepared before the balance sheet because the ending retained earnings amount is a required element of the balance sheet .", "question": { "cloze_format": "The ___ account's balance would be a different number on the Balance Sheet than it is on the adjusted trial balance.", "normal_format": "Which of the following accounts’ balance would be a different number on the Balance Sheet than it is on the adjusted trial balance?", "question_choices": [ "accumulated depreciation", "unearned service revenue", "retained earnings", "dividends" ], "question_id": "fs-idm355329344", "question_text": "Which of the following accounts’ balance would be a different number on the Balance Sheet than it is on the adjusted trial balance?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "Balance Sheet" }, "bloom": null, "hl_context": "<hl> Impact on the financial statements : Supplies is a balance sheet account , and Supplies Expense is an income statement account . <hl> This satisfies the rule that each adjusting entry will contain an income statement and balance sheet account . We see total assets decrease by $ 100 on the balance sheet . 
Supplies Expense increases overall expenses on the income statement , which reduces net income .", "hl_sentences": "Impact on the financial statements : Supplies is a balance sheet account , and Supplies Expense is an income statement account .", "question": { "cloze_format": "The financial statement on which the Supplies account would appear is the ___.", "normal_format": "On which financial statement would the Supplies account appear?", "question_choices": [ "Balance Sheet", "Income Statement", "Retained Earnings Statement", "Statement of Cash Flows" ], "question_id": "fs-idm236335040", "question_text": "On which financial statement would the Supplies account appear?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> The statement of retained earnings always leads with beginning retained earnings . <hl> Beginning retained earnings carry over from the previous period ’ s ending retained earnings balance . Since this is the first month of business for Printing Plus , there is no beginning retained earnings balance . Notice the net income of $ 4,665 from the income statement is carried over to the statement of retained earnings . <hl> Dividends are taken away from the sum of beginning retained earnings and net income to get the ending retained earnings balance of $ 4,565 for January . <hl> This ending retained earnings balance is transferred to the balance sheet . To prepare the financial statements , a company will look at the adjusted trial balance for account information . From this information , the company will begin constructing each of the statements , beginning with the income statement . Income statement s will include all revenue and expense accounts . <hl> The statement of retained earnings will include beginning retained earnings , any net income ( loss ) ( found on the income statement ) , and dividends . <hl> The balance sheet is going to include assets , contra assets , liabilities , and stockholder equity accounts , including ending retained earnings and common stock .", "hl_sentences": "The statement of retained earnings always leads with beginning retained earnings . Dividends are taken away from the sum of beginning retained earnings and net income to get the ending retained earnings balance of $ 4,565 for January . The statement of retained earnings will include beginning retained earnings , any net income ( loss ) ( found on the income statement ) , and dividends .", "question": { "cloze_format": "The financial statement on which the Dividends account would appear is the ___.", "normal_format": "On which financial statement would the Dividends account appear?", "question_choices": [ "Balance Sheet", "Income Statement", "Retained Earnings Statement", "Statement of Cash Flows" ], "question_id": "fs-idm177021984", "question_text": "On which financial statement would the Dividends account appear?" }, "references_are_paraphrase": null } ]
4.1 Explain the Concepts and Guidelines Affecting Adjusting Entries Analyzing and Recording Transactions was the first of three consecutive chapters covering the steps in the accounting cycle ( Figure 4.2 ). In Analyzing and Recording Transactions , we discussed the first four steps in the accounting cycle: identify and analyze transactions, record transactions to a journal, post journal information to the general ledger, and prepare an (unadjusted) trial balance. This chapter examines the next three steps in the cycle: record adjusting entries (journalizing and posting), prepare an adjusted trial balance, and prepare the financial statements ( Figure 4.3 ). As we progress through these steps, you learn why the trial balance in this phase of the accounting cycle is referred to as an “adjusted” trial balance. We also discuss the purpose of adjusting entries and the accounting concepts supporting their need. One of the first concepts we discuss is accrual accounting . Accrual Accounting Public companies reporting their financial positions use either US generally accepted accounting principles (GAAP) or International Financial Reporting Standards (IFRS), as allowed under the Securities and Exchange Commission (SEC) regulations. Also, companies, public or private, using US GAAP or IFRS prepare their financial statements using the rules of accrual accounting. Recall from Introduction to Financial Statements that accrual basis accounting prescribes that revenues and expenses must be recorded in the accounting period in which they were earned or incurred, no matter when cash receipts or payments occur. It is because of accrual accounting that we have the revenue recognition principle and the expense recognition principle (also known as the matching principle ). The accrual method is considered to better match revenues and expenses and standardizes reporting information for comparability purposes. Having comparable information is important to external users of information trying to make investment or lending decisions, and to internal users trying to make decisions about company performance, budgeting, and growth strategies. Some nonpublic companies may choose to use cash basis accounting rather than accrual basis accounting to report financial information. Recall from Introduction to Financial Statements that cash basis accounting is a method of accounting in which transactions are not recorded in the financial statements until there is an exchange of cash. Cash basis accounting sometimes delays or accelerates revenue and expense reporting until cash receipts or outlays occur. With this method, cash flows are used to measure business performance in a given period and can be simpler to track than accrual basis accounting. There are several other accounting methods or concepts that accountants will sometimes apply. The first is modified accrual accounting , which is commonly used in governmental accounting and merges accrual basis and cash basis accounting. The second is tax basis accounting that is used in establishing the tax effects of transactions in determining the tax liability of an organization. One fundamental concept to consider related to the accounting cycle—and to accrual accounting in particular—is the idea of the accounting period. The Accounting Period As we discussed, accrual accounting requires companies to report revenues and expenses in the accounting period in which they were earned or incurred. 
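To make the accrual idea concrete, here is a minimal sketch (hypothetical dates, not from the text) of how the two methods assign the same sale to different reporting periods:

```python
# A minimal sketch contrasting accrual and cash basis recognition.
# The dates and function are illustrative only, not a real accounting API.
from datetime import date

# A service performed in December 2023; the customer pays in January 2024.
service_performed = date(2023, 12, 15)
cash_received = date(2024, 1, 10)

def recognition_period(basis, earned_date, cash_date):
    """Return the (year, month) in which the revenue is recognized."""
    if basis == "accrual":
        d = earned_date   # recognize when earned, regardless of cash movement
    elif basis == "cash":
        d = cash_date     # recognize only when cash actually changes hands
    else:
        raise ValueError("unknown basis")
    return (d.year, d.month)

print(recognition_period("accrual", service_performed, cash_received))  # (2023, 12)
print(recognition_period("cash", service_performed, cash_received))     # (2024, 1)
```

The same transaction lands in December's income under accrual accounting but in January's under cash basis, which is exactly why the two methods can report different results for the same period.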
An accounting period breaks down company financial information into specific time spans, and can cover a month, a quarter, a half-year, or a full year. Public companies governed by GAAP are required to present quarterly (three-month) accounting period financial statements called 10-Qs . However, most public and private companies keep monthly, quarterly, and yearly (annual) period information. This is useful to users needing up-to-date financial data to make decisions about company investment and growth. When the company keeps yearly information, the year could be based on a fiscal or calendar year. This is explained shortly. Continuing Application Adjustment Process for Grocery Stores In every industry, adjustment entries are made at the end of the period to ensure revenue matches expenses. Companies with an online presence need to account for items sold that have not yet been shipped or are in the process of reaching the end user. But what about the grocery industry? At first glance, it might seem that no such adjustment entries are necessary. However, grocery stores have adapted to the current retail environment. For example, your local grocery store might provide catering services for a graduation party. If the contract requires the customer to put down a 50% deposit, and occurs near the end of a period, the grocery store will have unearned revenue until it provides the catering service. Once the party occurs, the grocery store needs to make an adjusting entry to reflect that revenue has been earned. The Fiscal Year and the Calendar Year A company may choose its yearly reporting period to be based on a calendar or fiscal year. If a company uses a calendar year , it is reporting financial data from January 1 to December 31 of a specific year. This may be useful for businesses needing to coincide with a traditional yearly tax schedule. It can also be easier to track for some businesses without formal reconciliation practices, and for small businesses. A fiscal year is a twelve-month reporting cycle that can begin in any month and records financial data for that consecutive twelve-month period. For example, a business may choose its fiscal year to begin on April 1, 2019, and end on March 31, 2020. This can be common practice for corporations and may best reflect the operational flow of revenues and expenses for a particular business. In addition to annual reporting, companies often need or choose to report financial statement information in interim periods. Interim Periods An interim period is any reporting period shorter than a full year (fiscal or calendar). This can encompass monthly, quarterly, or half-year statements. The information contained on these statements is timelier than waiting for a yearly accounting period to end. The most common interim period is three months, or a quarter. For companies whose common stock is traded on a major stock exchange, meaning these are publicly traded companies, quarterly statements must be filed with the SEC on a Form 10-Q. The companies must file a Form 10-K for their annual statements. As you’ve learned, the SEC is an independent agency of the federal government that provides oversight of public companies to maintain fair representation of company financial activities for investors to make informed decisions. In order for information to be useful to the user, it must be timely—that is, the user has to get it quickly enough so it is relevant to decision-making. 
You may recall from Analyzing and Recording Transactions that this is the basis of the time period assumption in accounting. For example, a potential or existing investor wants timely information by which to measure the performance of the company, and to help decide whether to invest, to stay invested, or to sell their stockholdings and invest elsewhere. This requires companies to organize their information and break it down into shorter periods. Internal and external users can then rely on the information that is both timely and relevant to decision-making. The accounting period a company chooses to use for financial reporting will impact the types of adjustments they may have to make to certain accounts. Ethical Considerations Illegal Cookie Jar Accounting Used to Manage Earnings From 2000 through the end of 2001, Bristol-Myers Squibb engaged in “Cookie Jar Accounting,” resulting in $150 million in SEC fines. The company manipulated its accounting to create a false indication of income and growth to create the appearance that it was meeting its own targets and Wall Street analysts’ earnings estimates during the years 2000 and 2001. The SEC describes some of what occurred: Bristol-Myers inflated its results primarily by (1) stuffing its distribution channels with excess inventory near the end of every quarter in amounts sufficient to meet its targets by making pharmaceutical sales to its wholesalers ahead of demand; and (2) improperly recognizing $1.5 billion in revenue from such pharmaceutical sales to its two biggest wholesalers. In connection with the $1.5 billion in revenue, Bristol-Myers covered these wholesalers’ carrying costs and guaranteed them a return on investment until they sold the products. When Bristol-Myers recognized the $1.5 billion in revenue upon shipment, it did so contrary to generally accepted accounting principles. 1 1 U.S. Securities and Exchange Commission. “Bristol-Myers Squibb Company Agrees to Pay $150 Million to Settle Fraud Charges.” August 4, 2004. https://www.sec.gov/news/press/2004-105.htm In addition to the improper distribution of product to manipulate earnings numbers, which was not enough to meet earnings targets, the company improperly used divestiture reserve funds (a “cookie jar” fund that is funded by the sale of assets such as product lines or divisions) to meet those targets. In this circumstance, earnings management was considered illegal, costing the company millions of dollars in fines. 4.2 Discuss the Adjustment Process and Illustrate Common Types of Adjusting Entries When a company reaches the end of a period, it must update certain accounts that have either been left unattended throughout the period or have not yet been recognized. Adjusting entries update accounting records at the end of a period for any transactions that have not yet been recorded. One important accounting principle to remember is that just as the accounting equation (Assets = Liabilities + Owner’s equity/or common stock/or capital) must be equal, it must remain equal after you make adjusting entries. Also note that in this equation, owner’s equity represents an individual owner (sole proprietorship), common stock represents a corporation’s owners’ interests, and capital represents a partnership’s owners’ interests. We discuss the effects of adjusting entries in greater detail throughout this chapter. 
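Since the accounting equation must stay in balance through every adjusting entry, a quick numerical check can make the point. The sketch below uses hypothetical balances (not Printing Plus figures): recording a $100 supplies adjustment moves value from an asset to an expense, and the equation still holds.

```python
# A minimal sketch with hypothetical account balances showing that an
# adjusting entry leaves Assets = Liabilities + Equity intact.
assets = {"Cash": 3600, "Supplies": 500}
liabilities = {"Unearned Revenue": 4000}
equity = {"Common Stock": 100, "Retained Earnings": 0}

def equation_holds():
    return sum(assets.values()) == sum(liabilities.values()) + sum(equity.values())

assert equation_holds()  # balanced before the adjustment

# Adjusting entry: debit Supplies Expense $100, credit Supplies $100.
# The expense lowers net income, which flows through to Retained Earnings.
assets["Supplies"] -= 100
equity["Retained Earnings"] -= 100

assert equation_holds()  # still balanced after the adjustment
print("Assets:", sum(assets.values()),
      "= L + E:", sum(liabilities.values()) + sum(equity.values()))
```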
There are several steps in the accounting cycle that require the preparation of a trial balance: step 4, preparing an unadjusted trial balance; step 6, preparing an adjusted trial balance; and step 9, preparing a post-closing trial balance. You might question the purpose of more than one trial balance. For example, why can we not go from the unadjusted trial balance straight into preparing financial statements for public consumption? What is the purpose of the adjusted trial balance? Does preparing more than one trial balance mean the company made a mistake earlier in the accounting cycle? To answer these questions, let’s first explore the (unadjusted) trial balance, and why some accounts have incorrect balances. Why Some Accounts Have Incorrect Balances on the Trial Balance The unadjusted trial balance may have incorrect balances in some accounts. Recall the trial balance from Analyzing and Recording Transactions for the example company, Printing Plus. The trial balance for Printing Plus shows Supplies of $500, which were purchased on January 30. Since this is a new company, Printing Plus would more than likely use some of its supplies right away, before the end of the month on January 31. Supplies are only an asset when they are unused. If Printing Plus used some of its supplies immediately on January 30, then why is the full $500 still in the Supplies account on January 31? How do we fix this incorrect balance? Similarly, what about Unearned Revenue? On January 9, the company received $4,000 from a customer for printing services to be performed. The company recorded this as a liability because it received payment without providing the service. To clear this liability, the company must perform the service. Assume that as of January 31 some of the printing services have been provided. Is the full $4,000 still a liability? Since a portion of the service was provided, a change to unearned revenue should occur. The company needs to correct this balance in the Unearned Revenue account. Having incorrect balances in Supplies and in Unearned Revenue on the company’s January 31 trial balance is not due to any error on the company’s part. The company followed all of the correct steps of the accounting cycle up to this point. So why are the balances still incorrect? Journal entries are recorded when an activity or event occurs that triggers the entry. Usually the trigger is from an original source. Recall that an original source can be a formal document substantiating a transaction, such as an invoice, purchase order, cancelled check, or employee time sheet. Not every transaction produces an original source document that will alert the bookkeeper that it is time to make an entry. When a company purchases supplies, the original order, receipt of the supplies, and receipt of the invoice from the vendor will all trigger journal entries. This trigger does not occur when using supplies from the supply closet. Similarly, for unearned revenue, when the company receives an advance payment from the customer for services not yet provided, the cash received will trigger a journal entry. When the company provides the printing services for the customer, the customer will not send the company a reminder that revenue has now been earned. Situations such as these are why businesses need to make adjusting entries. Think It Through Keep Calm and Adjust . . . Elliot Simmons owns a small law firm. He does the accounting himself and uses an accrual basis for accounting.
At the end of his first month, he reviews his records and realizes there are a few inaccuracies on his unadjusted trial balance. One difference is the supplies account; the figure on paper does not match the value of the supplies inventory still available. Another difference is interest earned from his bank account. He did not have anything recognizing these earnings. Why did his unadjusted trial balance have these errors? What can be attributed to the differences in supply figures? What can be attributed to the differences in interest earned? The Need for Adjusting Entries Adjusting entries update accounting records at the end of a period for any transactions that have not yet been recorded. These entries are necessary to ensure the income statement and balance sheet present the correct, up-to-date numbers. Adjusting entries are also necessary because the initial trial balance may not contain complete and current data due to several factors: The inefficiency of recording every single day-to-day event, such as the use of supplies. Some costs are not recorded during the period but must be recognized at the end of the period, such as depreciation, rent, and insurance. Some items are forthcoming for which original source documents have not yet been received, such as a utility bill. There are a few other guidelines that support the need for adjusting entries. Guidelines Supporting Adjusting Entries Several guidelines support the need for adjusting entries: Revenue recognition principle: Adjusting entries are necessary because the revenue recognition principle requires revenue recognition when earned, thus the need for an update to unearned revenues. Expense recognition (matching) principle: This requires matching expenses incurred to generate the revenues earned, which affects accounts such as insurance expense and supplies expense. Time period assumption: This requires that useful information be presented in shorter time periods such as years, quarters, or months. This means a company must recognize revenues and expenses in the proper period, requiring adjustment to certain accounts to meet these criteria. The required adjusting entries depend on what types of transactions the company has, but there are some common types of adjusting entries. Before we look at recording and posting the most common types of adjusting entries, we briefly discuss the various types of adjusting entries. Types of Adjusting Entries Adjusting entries require updates to specific account types at the end of the period. Not all accounts require updates, only those not naturally triggered by an original source document. There are two main types of adjusting entries that we explore further: deferrals and accruals. Deferrals Deferrals are prepaid expense and revenue accounts that have delayed recognition until they have been used or earned. This recognition may not occur until the end of a period or future periods. When deferred expenses and revenues have yet to be recognized, their information is stored on the balance sheet. As soon as the expense is incurred and the revenue is earned, the information is transferred from the balance sheet to the income statement. Two main types of deferrals are prepaid expenses and unearned revenues. Prepaid Expenses Recall from Analyzing and Recording Transactions that prepaid expenses (prepayments) are assets for which advanced payment has occurred, before the company can benefit from use.
As soon as the asset has provided benefit to the company, the value of the asset used is transferred from the balance sheet to the income statement as an expense. Some common examples of prepaid expenses are supplies, depreciation, insurance, and rent. When a company purchases supplies, it may not use all supplies immediately, but chances are the company has used some of the supplies by the end of the period. It is not worth it to record every time someone uses a pencil or piece of paper during the period, so at the end of the period, this account needs to be updated for the value of what has been used. Let’s say a company paid for supplies with cash in the amount of $400. At the end of the month, the company took an inventory of supplies used and determined the value of those supplies used during the period to be $150. The following entry occurs for the initial payment. Supplies increases (debit) for $400, and Cash decreases (credit) for $400. When the company recognizes the supplies usage, the following adjusting entry occurs. Supplies Expense is an expense account, increasing (debit) for $150, and Supplies is an asset account, decreasing (credit) for $150. This means $150 is transferred from the balance sheet (asset) to the income statement (expense). Notice that not all of the supplies are used. There is still a balance of $250 (400 – 150) in the Supplies account. This amount will carry over to future periods until used. The balances in the Supplies and Supplies Expense accounts show as follows. Depreciation may also require an adjustment at the end of the period. Recall that depreciation is the systematic method to record the allocation of cost over a given period of certain assets. This allocation of cost is recorded over the useful life of the asset, or the time period over which an asset cost is allocated. The allocated cost up to that point is recorded in Accumulated Depreciation, a contra asset account. A contra account is an account paired with another account type, has an opposite normal balance to the paired account, and reduces the balance in the paired account at the end of a period. Accumulated Depreciation is contrary to an asset account, such as Equipment. This means that the normal balance for Accumulated Depreciation is on the credit side. It houses all depreciation expensed in current and prior periods. Accumulated Depreciation will reduce the asset account for depreciation incurred up to that point. The difference between the asset’s value (cost) and accumulated depreciation is called the book value of the asset. When depreciation is recorded in an adjusting entry, Accumulated Depreciation is credited and Depreciation Expense is debited. For example, let’s say a company pays $2,000 for equipment that is supposed to last four years. The company wants to depreciate the asset over those four years equally. This means the asset will lose $500 in value each year ($2,000/four years). In the first year, the company would record the following adjusting entry to show depreciation of the equipment. Depreciation Expense increases (debit) and Accumulated Depreciation, Equipment, increases (credit). If the company wanted to compute the book value, it would take the original cost of the equipment and subtract accumulated depreciation. Book value of equipment = $2,000 – $500 = $1,500. This means that the current book value of the equipment is $1,500, and depreciation will be subtracted from this figure the next year.
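The book value arithmetic above generalizes to any year of the asset's life. Here is a minimal sketch of the straight-line computation from the example, assuming a zero salvage value as the text implies:

```python
# Straight-line depreciation sketch: $2,000 of equipment, four-year life.
cost = 2000
useful_life_years = 4
salvage_value = 0  # assumed zero, matching the text's $2,000 / 4 = $500 figure

annual_depreciation = (cost - salvage_value) / useful_life_years  # 500.0 per year

accumulated_depreciation = 0
for year in range(1, useful_life_years + 1):
    accumulated_depreciation += annual_depreciation
    book_value = cost - accumulated_depreciation  # cost less accumulated depreciation
    print(f"Year {year}: depreciation expense = {annual_depreciation:.0f}, "
          f"book value = {book_value:.0f}")
# Year 1 matches the text: expense of 500 and a book value of 1,500.
```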
The following account balances after adjustment are as follows: You will learn more about depreciation and its computation in Long-Term Assets . However, one important fact that we need to address now is that the book value of an asset is not necessarily the price at which the asset would sell. For example, you might have a building for which you paid $1,000,000 that currently has been depreciated to a book value of $800,000. However, today it could sell for more than, less than, or the same as its book value. The same is true about just about any asset you can name, except, perhaps, cash itself. Insurance policies can require advanced payment of fees for several months at a time, six months, for example. The company does not use all six months of insurance immediately but over the course of the six months. At the end of each month, the company needs to record the amount of insurance expired during that month. For example, a company pays $4,500 for an insurance policy covering six months. It is the end of the first month and the company needs to record an adjusting entry to recognize the insurance used during the month. The following entries show the initial payment for the policy and the subsequent adjusting entry for one month of insurance usage. In the first entry, Cash decreases (credit) and Prepaid Insurance increases (debit) for $4,500. In the second entry, Prepaid Insurance decreases (credit) and Insurance Expense increases (debit) for one month’s insurance usage found by taking the total $4,500 and dividing by six months (4,500/6 = 750). The account balances after adjustment are as follows: Similar to prepaid insurance, rent also requires advanced payment. Usually to rent a space, a company will need to pay rent at the beginning of the month. The company may also enter into a lease agreement that requires several months, or years, of rent in advance. Each month that passes, the company needs to record rent used for the month. Let’s say a company pays $8,000 in advance for four months of rent. After the first month, the company records an adjusting entry for the rent used. The following entries show initial payment for four months of rent and the adjusting entry for one month’s usage. In the first entry, Cash decreases (credit) and Prepaid Rent increases (debit) for $8,000. In the second entry, Prepaid Rent decreases (credit) and Rent Expense increases (debit) for one month’s rent usage found by taking the total $8,000 and dividing by four months (8,000/4 = 2,000). The account balances after adjustment are as follows: Another type of deferral requiring adjustment is unearned revenue. Unearned Revenues Recall that unearned revenue represents a customer’s advanced payment for a product or service that has yet to be provided by the company. Since the company has not yet provided the product or service, it cannot recognize the customer’s payment as revenue. At the end of a period, the company will review the account to see if any of the unearned revenue has been earned. If so, this amount will be recorded as revenue in the current period. For example, let’s say the company is a law firm. During the year, it collected retainer fees totaling $48,000 from clients. Retainer fees are money lawyers collect in advance of starting work on a case. When the company collects this money from its clients, it will debit cash and credit unearned fees. Even though not all of the $48,000 was probably collected on the same day, we record it as if it was for simplicity’s sake. 
In this case, Unearned Fee Revenue increases (credit) and Cash increases (debit) for $48,000. At the end of the year after analyzing the unearned fees account, 40% of the unearned fees have been earned. This 40% can now be recorded as revenue. Total revenue recorded is $19,200 ($48,000 × 40%). For this entry, Unearned Fee Revenue decreases (debit) and Fee Revenue increases (credit) for $19,200, which is the 40% earned during the year. The company will have the following balances in the two accounts: Besides deferrals, other types of adjusting entries include accruals. Accruals Accruals are types of adjusting entries that accumulate during a period, where amounts were previously unrecorded. The two specific types of adjustments are accrued revenues and accrued expenses. Accrued Revenues Accrued revenues are revenues earned in a period but have yet to be recorded, and no money has been collected. Some examples include interest, and services completed but a bill has yet to be sent to the customer. Interest can be earned from bank account holdings, notes receivable, and some accounts receivables (depending on the contract). Interest had been accumulating during the period and needs to be adjusted to reflect interest earned at the end of the period. Note that this interest has not been paid at the end of the period, only earned. This aligns with the revenue recognition principle to recognize revenue when earned, even if cash has yet to be collected. For example, assume that a company has one outstanding note receivable in the amount of $100,000. Interest on this note is 5% per year. Three months have passed, and the company needs to record interest earned on this outstanding loan. The calculation for the interest revenue earned is $100,000 × 5% × 3/12 = $1,250. The following adjusting entry occurs. Interest Receivable increases (debit) for $1,250 because interest has not yet been paid. Interest Revenue increases (credit) for $1,250 because interest was earned in the three-month period but had been previously unrecorded. Previously unrecorded service revenue can arise when a company provides a service but did not yet bill the client for the work. This means the customer has also not yet paid for services. Since there was no bill to trigger a transaction, an adjustment is required to recognize revenue earned at the end of the period. For example, a company performs landscaping services in the amount of $1,500. However, they have not yet received payment. At the period end, the company would record the following adjusting entry. Accounts Receivable increases (debit) for $1,500 because the customer has not yet paid for services completed. Service Revenue increases (credit) for $1,500 because service revenue was earned but had been previously unrecorded. Accrued Expenses Accrued expenses are expenses incurred in a period but have yet to be recorded, and no money has been paid. Some examples include interest, tax, and salary expenses. Interest expense arises from notes payable and other loan agreements. The company has accumulated interest during the period but has not recorded or paid the amount. This creates a liability that the company must pay at a future date. You cover more details about computing interest in Current Liabilities , so for now amounts are given. For example, a company accrued $300 of interest during the period. The following entry occurs at the end of the period. Interest Expense increases (debit) and Interest Payable increases (credit) for $300. 
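The interest amounts in these accrual examples all come from the same pro-rata formula: principal × annual rate × the fraction of the year elapsed. A minimal sketch (the function name is ours, not a standard API):

```python
# Accrued interest sketch: principal x annual rate x fraction of year elapsed.
def accrued_interest(principal, annual_rate, months_elapsed):
    return principal * annual_rate * (months_elapsed / 12)

# Note receivable from the example: $100,000 at 5% per year, three months elapsed.
print(accrued_interest(100_000, 0.05, 3))  # 1250.0 -> debit Interest Receivable,
                                           #           credit Interest Revenue
```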
The following are the updated ledger balances after posting the adjusting entry. Taxes are paid only at certain times during the year, not necessarily every month. Taxes the company owes during a period that are unpaid require adjustment at the end of a period. This creates a liability for the company. Some tax expense examples are income and sales taxes. For example, a company has accrued income taxes for the month of $9,000. The company would record the following adjusting entry. Income Tax Expense increases (debit) and Income Tax Payable increases (credit) for $9,000. The following are the updated ledger balances after posting the adjusting entry. Many salaried employees are paid once a month. The salary the employee earned during the month might not be paid until the following month. For example, the employee is paid for the prior month’s work on the first of the next month. The financial statements must remain up to date, so an adjusting entry is needed during the month to show salaries previously unrecorded and unpaid at the end of the month. Let’s say a company has five salaried employees, each earning $2,500 per month. In our example, assume that they do not get paid for this work until the first of the next month. The following is the adjusting journal entry for salaries. Salaries Expense increases (debit) and Salaries Payable increases (credit) for $12,500 ($2,500 per employee × five employees). The following are the updated ledger balances after posting the adjusting entry. In Record and Post the Common Types of Adjusting Entries, we explore some of these adjustments specifically for our company Printing Plus and show how these entries affect our general ledger (T-accounts). Your Turn Adjusting Entries Review the three adjusting entries that follow. Using the table provided, for each entry write down the income statement account and balance sheet account used in the adjusting entry in the appropriate column. Then in the last column answer yes or no.
Table 4.1 (template): Example | Income Statement Account | Balance Sheet Account | Cash in Entry?
Solution (Table 4.2):
1 | Supplies Expense | Supplies | no
2 | Service Revenue | Unearned Revenue | no
3 | Rent Expense | Prepaid Machine Rent | no
Your Turn Adjusting Entries Take Two Did we continue to follow the rules of adjusting entries in these two examples? Explain.
Table 4.3 (template): Example | Income Statement Account | Balance Sheet Account | Cash in Entry?
Solution Yes, we did. Each entry has one income statement account and one balance sheet account, and cash does not appear in either of the adjusting entries.
Table 4.4:
1 | Electricity Expense | Accounts Payable | no
2 | Salaries Expense | Salaries Payable | no
4.3 Record and Post the Common Types of Adjusting Entries Before beginning adjusting entry examples for Printing Plus, let’s consider some rules governing adjusting entries: (1) every adjusting entry will have at least one income statement account and one balance sheet account; (2) cash will never be in an adjusting entry; and (3) the adjusting entry records the change in amount that occurred during the period. What are “income statement” and “balance sheet” accounts? Income statement accounts include revenues and expenses. Balance sheet accounts are asset, liability, and stockholders’ equity accounts, since they appear on a balance sheet. The second rule tells us that cash can never be in an adjusting entry.
This is true because paying or receiving cash triggers a journal entry. This means that every transaction with cash will be recorded at the time of the exchange. By the time we reach the adjusting entries, there will be no cash paid or received that has not already been recorded. If accountants find themselves in a situation where the cash account must be adjusted, the necessary adjustment to cash will be a correcting entry, not an adjusting entry. With an adjusting entry, the amount of change occurring during the period is recorded. For example, if the supplies account had a $300 balance at the beginning of the month and $100 is still available in the supplies account at the end of the month, the company would record an adjusting entry for the $200 used during the month ($300 – $100). Similarly, for unearned revenues, the company would record how much of the revenue was earned during the period. Let’s now consider new transaction information for Printing Plus. Concepts In Practice Earnings Management Recording adjusting entries seems so cut and dried. It looks like you just follow the rules and all of the numbers come out 100 percent correct on all financial statements. But in reality this is not always the case. The fact that you have to make estimates in some cases, such as estimating residual value and useful life for depreciation, tells you that the numbers will not be 100 percent correct unless the accountant has ESP. Some companies engage in something called earnings management, where they mostly follow the rules of accounting but stretch the truth a little to make it look like they are more profitable. Some companies do this by recording revenue before they should. Others leave assets on the books instead of expensing them when they should, to decrease total expenses and increase profit. Take Mexico-based home-building company Desarrolladora Homex S.A.B. de C.V. This company reported revenue earned on more than 100,000 homes it had not even built yet. The SEC’s complaint states that Homex reported revenues from a project site where every planned home was said to have been “built and sold by Dec. 31, 2011. Satellite images of the project site on March 12, 2012, show it was still largely undeveloped and the vast majority of supposedly sold homes remained unbuilt.” 2 2 U.S. Securities and Exchange Commission. “SEC Charges Mexico-Based Homebuilder in $3.3 Billion Accounting Fraud.” Press release, March 3, 2017. https://www.sec.gov/news/pressrelease/2017-60.html Is managing your earnings illegal? In some situations it is just an unethical stretch of the truth, easy enough to pull off because of the estimates made in adjusting entries. You can simply change your estimate and insist the new estimate is really better, when in fact it may just be a way to improve the bottom line. For example, you might change the annual depreciation expense calculated on expensive plant assets from a ten-year useful life (a reasonable estimate) to a twenty-year useful life (not so reasonable), insisting your company will be able to use these assets for twenty years while knowing that is a slim possibility. Doubling the useful life cuts in half the annual depreciation expense you would have had, which makes a positive impact on net income; the short sketch following this box checks that arithmetic. This method of earnings management would probably not be considered illegal but is definitely a breach of ethics. In other situations, companies manage their earnings in a way that the SEC believes is actual fraud and charges the company with the illegal activity.
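A few lines of Python can verify the claim that doubling an estimated useful life halves annual straight-line depreciation. The asset cost and residual value here are assumed for illustration only.

    # Illustrative sketch: straight-line depreciation under two useful lives.
    def straight_line_depreciation(cost, residual_value, useful_life_years):
        return (cost - residual_value) / useful_life_years

    cost, residual = 1000000, 100000  # hypothetical plant asset figures
    print(straight_line_depreciation(cost, residual, 10))  # 90000.0 per year
    print(straight_line_depreciation(cost, residual, 20))  # 45000.0 per year, half as much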
Recording Common Types of Adjusting Entries Recall the transactions for Printing Plus discussed in Analyzing and Recording Transactions.
Jan. 3, 2019: issues shares of common stock for $20,000 cash
Jan. 5, 2019: purchases equipment on account for $3,500, payment due within the month
Jan. 9, 2019: receives $4,000 cash in advance from a customer for services not yet rendered
Jan. 10, 2019: provides $5,500 in services to a customer who asks to be billed for the services
Jan. 12, 2019: pays a $300 utility bill with cash
Jan. 14, 2019: distributes $100 cash in dividends to stockholders
Jan. 17, 2019: receives $2,800 cash from a customer for services rendered
Jan. 18, 2019: pays in full, with cash, for the equipment purchased on January 5
Jan. 20, 2019: pays $3,600 cash in salaries expense to employees
Jan. 23, 2019: receives cash payment in full from the customer on the January 10 transaction
Jan. 27, 2019: provides $1,200 in services to a customer who asks to be billed for the services
Jan. 30, 2019: purchases supplies on account for $500, payment due within three months
On January 31, 2019, Printing Plus makes adjusting entries for the following transactions.
On January 31, Printing Plus took an inventory of its supplies and discovered that $100 of supplies had been used during the month.
The equipment purchased on January 5 depreciated $75 during the month of January.
Printing Plus performed $600 of services during January for the customer from the January 9 transaction.
Reviewing the company bank statement, Printing Plus discovers $140 of interest earned during the month of January that was previously uncollected and unrecorded.
Employees earned $1,500 in salaries for the period of January 21–January 31 that had been previously unpaid and unrecorded.
We now record the adjusting entries from January 31, 2019, for Printing Plus. Transaction 13: On January 31, Printing Plus took an inventory of its supplies and discovered that $100 of supplies had been used during the month. Analysis: $100 of supplies were used during January. Supplies is an asset that is decreasing (credit). Supplies is a type of prepaid expense that, when used, becomes an expense. Supplies Expense would increase (debit) for the $100 of supplies used during January. Impact on the financial statements: Supplies is a balance sheet account, and Supplies Expense is an income statement account. This satisfies the rule that each adjusting entry will contain an income statement account and a balance sheet account. We see total assets decrease by $100 on the balance sheet. Supplies Expense increases overall expenses on the income statement, which reduces net income. Transaction 14: The equipment purchased on January 5 depreciated $75 during the month of January. Analysis: Equipment lost value in the amount of $75 during January. This depreciation will impact the Accumulated Depreciation–Equipment account and the Depreciation Expense–Equipment account. While we are not doing depreciation calculations here, you will come across more complex calculations in the future. Accumulated Depreciation–Equipment is a contra asset account (contrary to Equipment) and increases (credit) for $75. Depreciation Expense–Equipment is an expense account that is increasing (debit) for $75. Impact on the financial statements: Accumulated Depreciation–Equipment is a contra account to Equipment. When calculating the book value of Equipment, Accumulated Depreciation–Equipment will be deducted from the original cost of the equipment.
Therefore, total assets will decrease by $75 on the balance sheet. Depreciation Expense will increase overall expenses on the income statement, which reduces net income. Transaction 15: Printing Plus performed $600 of services during January for the customer from the January 9 transaction. Analysis: The customer from the January 9 transaction gave the company $4,000 in advance payment for services. By the end of January the company had earned $600 of the advance payment. This means that the company still has yet to provide $3,400 in services to that customer. Since some of the unearned revenue is now earned, Unearned Revenue would decrease. Unearned Revenue is a liability account and decreases on the debit side. The company can now recognize the $600 as earned revenue. Service Revenue increases (credit) for $600. Impact on the financial statements: Unearned Revenue is a liability account and will decrease total liabilities and equity by $600 on the balance sheet. Service Revenue will increase overall revenue on the income statement, which increases net income. Transaction 16: Reviewing the company bank statement, Printing Plus discovers $140 of interest earned during the month of January that was previously uncollected and unrecorded. Analysis: Interest is revenue for the company on money kept in a savings account at the bank. The company only sees the bank statement at the end of the month and needs to record interest revenue that has not yet been collected or recorded. Interest Revenue is a revenue account that increases (credit) for $140. Since Printing Plus has yet to collect this interest revenue, it is considered a receivable. Interest Receivable increases (debit) for $140. Impact on the financial statements: Interest Receivable is an asset account and will increase total assets by $140 on the balance sheet. Interest Revenue will increase overall revenue on the income statement, which increases net income. Transaction 17: Employees earned $1,500 in salaries for the period of January 21–January 31 that had been previously unpaid and unrecorded. Analysis: Salaries have accumulated since January 21 and will not be paid in the current period. Since the salaries expense occurred in January, the expense recognition principle requires recognition in January. Salaries Expense is an expense account that is increasing (debit) for $1,500. Since the company has not yet paid salaries for this time period, Printing Plus owes the employees this money. This creates a liability for Printing Plus. Salaries Payable increases (credit) for $1,500. Impact on the financial statements: Salaries Payable is a liability account and will increase total liabilities and equity by $1,500 on the balance sheet. Salaries Expense will increase overall expenses on the income statement, which decreases net income. We now explore how these adjusting entries impact the general ledger (T-accounts). Your Turn Deferrals versus Accruals Label each of the following as a deferral or an accrual, and explain your answer. (1) The company recorded supplies usage for the month. (2) A customer paid in advance for services, and the company recorded revenue earned after providing service to that customer. (3) The company recorded salaries that had been earned by employees but were previously unrecorded and have not yet been paid. Solution (1) The company is recording a deferred expense: it was deferring the recognition of supplies as supplies expense until it had used the supplies. (2) The company has deferred revenue.
It deferred the recognition of the revenue until it was actually earned. The customer already paid the cash, and the amount is currently on the balance sheet as a liability. (3) The company has an accrued expense: it is bringing the salaries that have been incurred, accumulated since the last paycheck, onto the books for the first time during the adjusting entry. Cash will be given to the employees at a later time. Link to Learning Several internet sites can provide additional information on adjusting entries. One very good site with many tools to help you study this topic is Accounting Coach, which is available free of charge. Visit the website and take a quiz on accounting basics to test your knowledge. Posting Adjusting Entries Once you have journalized all of your adjusting entries, the next step is posting the entries to your ledger. Posting adjusting entries is no different than posting the regular daily journal entries. T-accounts will be the visual representation for the Printing Plus general ledger. Transaction 13: On January 31, Printing Plus took an inventory of its supplies and discovered that $100 of supplies had been used during the month. Journal entry and T-accounts: In the journal entry, Supplies Expense has a debit of $100. This is posted to the Supplies Expense T-account on the debit side (left side). Supplies has a credit of $100. This is posted to the Supplies T-account on the credit side (right side). You will notice there is already a debit balance in this account from the purchase of supplies on January 30. The $100 is deducted from $500 to get a final debit balance of $400. Transaction 14: The equipment purchased on January 5 depreciated $75 during the month of January. Journal entry and T-accounts: In the journal entry, Depreciation Expense–Equipment has a debit of $75. This is posted to the Depreciation Expense–Equipment T-account on the debit side (left side). Accumulated Depreciation–Equipment has a credit of $75. This is posted to the Accumulated Depreciation–Equipment T-account on the credit side (right side). Transaction 15: Printing Plus performed $600 of services during January for the customer from the January 9 transaction. Journal entry and T-accounts: In the journal entry, Unearned Revenue has a debit of $600. This is posted to the Unearned Revenue T-account on the debit side (left side). You will notice there is already a credit balance in this account from the January 9 customer payment. The $600 debit is subtracted from the $4,000 credit to get a final balance of $3,400 (credit). Service Revenue has a credit of $600. This is posted to the Service Revenue T-account on the credit side (right side). You will notice there is already a credit balance in this account from other revenue transactions in January. The $600 is added to the previous $9,500 balance in the account to get a new final credit balance of $10,100. Transaction 16: Reviewing the company bank statement, Printing Plus discovers $140 of interest earned during the month of January that was previously uncollected and unrecorded. Journal entry and T-accounts: In the journal entry, Interest Receivable has a debit of $140. This is posted to the Interest Receivable T-account on the debit side (left side). Interest Revenue has a credit of $140. This is posted to the Interest Revenue T-account on the credit side (right side). A compact sketch of this posting arithmetic appears below, before the final adjusting entry.
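This Python sketch treats debits as positive amounts and credits as negative ones; the account names and starting balances mirror the Printing Plus examples, and the signed-number representation is an illustrative convention rather than a prescribed method.

    # Illustrative sketch: posting adjusting entries to a simple ledger.
    # Debits are positive, credits negative; a negative balance is a credit balance.
    ledger = {"Supplies": 500, "Supplies Expense": 0,
              "Unearned Revenue": -4000, "Service Revenue": -9500}

    def post(entry):
        # entry maps account name -> signed amount (debit > 0, credit < 0)
        for account, amount in entry.items():
            ledger[account] = ledger.get(account, 0) + amount

    post({"Supplies Expense": 100, "Supplies": -100})         # transaction 13
    post({"Unearned Revenue": 600, "Service Revenue": -600})  # transaction 15

    print(ledger["Supplies"])          # 400    -> $400 debit balance
    print(ledger["Unearned Revenue"])  # -3400  -> $3,400 credit balance
    print(ledger["Service Revenue"])   # -10100 -> $10,100 credit balance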
Transaction 17: Employees earned $1,500 in salaries for the period of January 21–January 31 that had been previously unpaid and unrecorded. Journal entry and T-accounts: In the journal entry, Salaries Expense has a debit of $1,500. This is posted to the Salaries Expense T-account on the debit side (left side). You will notice there is already a debit balance in this account from the January 20 employee salary expense. The $1,500 debit is added to the $3,600 debit to get a final balance of $5,100 (debit). Salaries Payable has a credit of $1,500. This is posted to the Salaries Payable T-account on the credit side (right side). T-accounts Summary Once all adjusting journal entries have been posted to T-accounts, we can check to make sure the accounting equation remains balanced. Following is a summary showing the T-accounts for Printing Plus including adjusting entries. The sum on the assets side of the accounting equation equals $29,965, found by adding together the final balances in each asset account ($24,800 + $1,200 + $140 + $400 + $3,500 – $75). To find the total on the liabilities and equity side of the equation, we need to find the difference between debits and credits. Credits on the liabilities and equity side of the equation total $35,640 ($500 + $1,500 + $3,400 + $20,000 + $10,100 + $140). Debits on the liabilities and equity side of the equation total $5,675 ($100 + $100 + $5,100 + $300 + $75). The difference, $35,640 – $5,675, equals $29,965. Thus, the equation remains balanced with $29,965 on the asset side and $29,965 on the liabilities and equity side. Now that we have the T-account information, and have confirmed the accounting equation remains balanced, we can create the adjusted trial balance in our sixth step in the accounting cycle. Link to Learning When posting any kind of journal entry to a general ledger, it is important to have an organized system for recording, to avoid account discrepancies and misreporting. To do this, companies can streamline their general ledger and remove any unnecessary processes or accounts. Check out the article “Encourage General Ledger Efficiency” from the Journal of Accountancy, which discusses some strategies to improve general ledger efficiency. 4.4 Use the Ledger Balances to Prepare an Adjusted Trial Balance Once all of the adjusting entries have been posted to the general ledger, we are ready to start working on preparing the adjusted trial balance. Preparing an adjusted trial balance is the sixth step in the accounting cycle. An adjusted trial balance is a list of all general ledger accounts with nonzero balances after the adjusting entries have been posted. This trial balance is an important step in the accounting process because it helps identify any computational errors throughout the first five steps in the cycle. As with the unadjusted trial balance, transferring information from T-accounts to the adjusted trial balance requires consideration of the final balance in each account. If the final balance in the ledger account (T-account) is a debit balance, you will record the total in the left column of the trial balance. If the final balance in the ledger account (T-account) is a credit balance, you will record the total in the right column. Once all ledger accounts and their balances are recorded, the debit and credit columns on the adjusted trial balance are totaled to see if the figures in each column match. The final total in the debit column must be the same dollar amount that is determined in the final credit column; this equality check is sketched below.
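The balances in this sketch mirror the Printing Plus figures discussed next; the account names are paraphrased from the chapter, and the code is illustrative only, not part of the chapter’s formal presentation.

    # Illustrative sketch: totaling the adjusted trial balance columns.
    debit_balances = {"Cash": 24800, "Accounts Receivable": 1200,
                      "Interest Receivable": 140, "Supplies": 400,
                      "Equipment": 3500, "Dividends": 100,
                      "Supplies Expense": 100,
                      "Depreciation Expense-Equipment": 75,
                      "Salaries Expense": 5100, "Utility Expense": 300}
    credit_balances = {"Accumulated Depreciation-Equipment": 75,
                       "Accounts Payable": 500, "Salaries Payable": 1500,
                       "Unearned Revenue": 3400, "Common Stock": 20000,
                       "Service Revenue": 10100, "Interest Revenue": 140}

    total_debits = sum(debit_balances.values())
    total_credits = sum(credit_balances.values())
    print(total_debits, total_credits)  # 35715 35715
    assert total_debits == total_credits, "adjusted trial balance is out of balance"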
Let’s now take a look at the adjusted T-accounts and adjusted trial balance for Printing Plus to see how the information is transferred from these T-accounts to the adjusted trial balance. We focus only on those general ledger accounts that had balance adjustments. For example, Interest Receivable is an adjusted account that has a final balance of $140 on the debit side. This balance is transferred to the Interest Receivable account in the debit column on the adjusted trial balance. Supplies ($400), Supplies Expense ($100), Salaries Expense ($5,100), and Depreciation Expense–Equipment ($75) also have debit final balances in their adjusted T-accounts, so this information will be transferred to the debit column on the adjusted trial balance. Accumulated Depreciation–Equipment ($75), Salaries Payable ($1,500), Unearned Revenue ($3,400), Service Revenue ($10,100), and Interest Revenue ($140) all have credit final balances in their T-accounts. These credit balances transfer to the credit column on the adjusted trial balance. Once all balances are transferred to the adjusted trial balance, we sum each of the debit and credit columns. The debit and credit columns both total $35,715, which means they are equal and in balance. After the adjusted trial balance is complete, we next prepare the company’s financial statements. Think It Through Cash or Accrual Basis Accounting? You are a new accountant at a salon. The salon had previously used cash basis accounting to prepare its financial records but is now considering switching to an accrual basis method. You have been tasked with determining if this transition is appropriate. When you go through the records you notice that this transition will greatly impact how the salon reports revenues and expenses. The salon will now report some revenues and expenses before it receives or pays cash. How will this change positively impact its business reporting? How will it negatively impact its business reporting? If you were the accountant, would you recommend the salon transition from cash basis to accrual basis? Concepts In Practice Why Is the Adjusted Trial Balance So Important? As you have learned, the adjusted trial balance is an important step in the accounting process. But outside of the accounting department, why is the adjusted trial balance important to the rest of the organization? An employee or customer may not immediately see the impact of the adjusted trial balance on his or her involvement with the company. The adjusted trial balance is the key point at which to ensure that all debits and credits in the general ledger accounts balance before information is transferred to the financial statements. Financial statements drive decision-making for a business. Budgeting for employee salaries, revenue expectations, sales prices, expense reductions, and long-term growth strategies are all impacted by what is provided on the financial statements. So if the company skips over creating an adjusted trial balance to make sure all accounts are balanced or adjusted, it runs the risk of creating incorrect financial statements and making important decisions based on inaccurate financial information. 4.5 Prepare Financial Statements Using the Adjusted Trial Balance Once you have prepared the adjusted trial balance, you are ready to prepare the financial statements. Preparing financial statements is the seventh step in the accounting cycle.
Remember that we have four financial statements to prepare: an income statement, a statement of retained earnings, a balance sheet, and the statement of cash flows. These financial statements were introduced in Introduction to Financial Statements, and Statement of Cash Flows dedicates in-depth discussion to that statement. To prepare the financial statements, a company will look at the adjusted trial balance for account information. From this information, the company will begin constructing each of the statements, beginning with the income statement. Income statements will include all revenue and expense accounts. The statement of retained earnings will include beginning retained earnings, any net income (loss) (found on the income statement), and dividends. The balance sheet will include assets, contra assets, liabilities, and stockholders’ equity accounts, including ending retained earnings and common stock. Your Turn Magnificent Adjusted Trial Balance Go over the adjusted trial balance for Magnificent Landscaping Service. Identify which financial statement each account will go on: Balance Sheet, Statement of Retained Earnings, or Income Statement. Solution Balance Sheet: Cash, accounts receivable, office supplies, prepaid insurance, equipment, accumulated depreciation (equipment), accounts payable, salaries payable, unearned lawn mowing revenue, and common stock. Statement of Retained Earnings: Dividends. Income Statement: Lawn mowing revenue, gas expense, advertising expense, depreciation expense (equipment), supplies expense, and salaries expense. Income Statement An income statement shows the organization’s financial performance for a given period of time. When preparing an income statement, revenues will always come before expenses in the presentation. For Printing Plus, the following is its January 2019 Income Statement. Revenue and expense information is taken from the adjusted trial balance as follows: Total revenues are $10,240, while total expenses are $5,575. Total expenses are subtracted from total revenues to get a net income of $4,665. If total expenses were more than total revenues, Printing Plus would have a net loss rather than a net income. This net income figure is used to prepare the statement of retained earnings. Concepts In Practice The Importance of Accurate Financial Statements Financial statements give a glimpse into the operations of a company, and investors, lenders, owners, and others rely on the accuracy of this information when making future investing, lending, and growth decisions. When one of these statements is inaccurate, the financial implications are great. For example, Celadon Group misreported revenues over the span of three years and elevated earnings during those years. The total overreported income was approximately $200–$250 million. This gross misreporting misled investors and led to the removal of Celadon Group from the New York Stock Exchange. Not only did this negatively impact Celadon Group’s stock price and lead to criminal investigations, but investors and lenders were left to wonder what might happen to their investment. That is why it is so important to go through the detailed accounting process to reduce errors early on and hopefully prevent misinformation from reaching financial statements. The business must have strong internal controls and best practices to ensure the information is presented fairly. 3 3 James Jaillet. “Celadon under Criminal Investigation over Financial Statements.” Commercial Carrier Journal,
July 25, 2018. https://www.ccjdigital.com/200520-2/ Statement of Retained Earnings The statement of retained earnings (which is often a component of the statement of stockholders’ equity) shows how the equity (or value) of the organization has changed over a period of time. The statement of retained earnings is prepared second, before the balance sheet, because the ending retained earnings amount it determines is a required element of the balance sheet. The following is the Statement of Retained Earnings for Printing Plus. Net income information is taken from the income statement, and dividends information is taken from the adjusted trial balance as follows. The statement of retained earnings always leads with beginning retained earnings. Beginning retained earnings carry over from the previous period’s ending retained earnings balance. Since this is the first month of business for Printing Plus, there is no beginning retained earnings balance. Notice the net income of $4,665 from the income statement is carried over to the statement of retained earnings. Dividends are subtracted from the sum of beginning retained earnings and net income to get the ending retained earnings balance of $4,565 for January. This ending retained earnings balance is transferred to the balance sheet. Link to Learning Concepts Statements give the Financial Accounting Standards Board (FASB) a guide to creating accounting principles and consider the limitations of financial statement reporting. See the FASB’s “Concepts Statements” page to learn more. Balance Sheet The balance sheet is the third statement prepared, after the statement of retained earnings, and lists what the organization owns (assets), what it owes (liabilities), and what the shareholders control (equity) on a specific date. Remember that the balance sheet represents the accounting equation, where assets equal liabilities plus stockholders’ equity. The following is the Balance Sheet for Printing Plus. Ending retained earnings information is taken from the statement of retained earnings, and asset, liability, and common stock information is taken from the adjusted trial balance as follows. Looking at the asset section of the balance sheet, Accumulated Depreciation–Equipment is included as a contra asset account to Equipment. The accumulated depreciation ($75) is subtracted from the original cost of the equipment ($3,500) to show the book value of the equipment ($3,425). The accounting equation is balanced, as shown on the balance sheet, because total assets equal $29,965, as do the total liabilities and stockholders’ equity. There is a worksheet approach a company may use to make sure end-of-period adjustments translate to the correct financial statements. IFRS Connection Financial Statements Both US-based companies and those headquartered in other countries produce the same primary financial statements: Income Statement, Balance Sheet, and Statement of Cash Flows. The presentation of these three primary financial statements is largely similar with respect to what should be reported under US GAAP and IFRS, but some interesting differences can arise, especially when presenting the Balance Sheet.
While both US GAAP and IFRS require the same minimum elements to be reported on the Income Statement, such as revenues, expenses, taxes, and net income, to name a few, publicly traded companies in the United States have further requirements placed by the SEC on the reporting of financial statements. For example, IFRS-based financial statements are only required to report the current period of information and the information for the prior period. US GAAP has no requirement for reporting prior periods, but the SEC requires that companies present one prior period for the Balance Sheet and three prior periods for the Income Statement. Under both IFRS and US GAAP, companies can report more than the minimum requirements. Presentation differences are most noticeable between the two forms of GAAP in the Balance Sheet. Under US GAAP there is no specific requirement on how accounts should be presented. However, the SEC requires that companies present their Balance Sheet information in liquidity order, which means current assets are listed first, with cash being the first account presented, as it is a company’s most liquid account. Liquidity refers to how easily an item can be converted to cash. IFRS requires that accounts be classified into current and noncurrent categories for both assets and liabilities, but no specific presentation format is required. Thus, for US companies, the first category always seen on a Balance Sheet is Current Assets, and the first account balance reported is cash. This is not always the case under IFRS. While many Balance Sheets of international companies will be presented in the same manner as those of a US company, the lack of a required format means that a company can present noncurrent assets first, followed by current assets. The accounts of a Balance Sheet using IFRS might appear as shown here. Review the annual report of Stora Enso, an international company that utilizes the illustrated format in presenting its Balance Sheet, also called the Statement of Financial Position. The Balance Sheet is found on page 31 of the report. Some of the biggest differences that occur on financial statements prepared under US GAAP versus IFRS relate primarily to measurement or timing issues: in other words, how a transaction is valued and when it is recorded. Ten-Column Worksheets The 10-column worksheet is an all-in-one spreadsheet showing the transition of account information from the trial balance through the financial statements. Accountants use the 10-column worksheet to help calculate end-of-period adjustments. Using a 10-column worksheet is an optional step companies may use in their accounting process. Here is a picture of a 10-column worksheet for Printing Plus. There are five sets of columns, each set having a column for debit and credit, for a total of 10 columns. The five column sets are the trial balance, adjustments, adjusted trial balance, income statement, and the balance sheet. After a company posts its day-to-day journal entries, it can begin transferring that information to the trial balance columns of the 10-column worksheet. The trial balance information for Printing Plus was shown previously; its debit and credit columns both equal $34,000. Once the trial balance information is on the worksheet, the next step is to fill in the adjusting information from the posted adjusting journal entries; a short sketch of this column arithmetic appears below, ahead of the detailed walkthrough.
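Since adding debits and credits is like adding positive and negative numbers, the adjusted-trial-balance column arithmetic described in the next paragraphs can be sketched in a few lines of Python. Debits are positive and credits negative, the figures mirror the Printing Plus worksheet, and the helper name is hypothetical.

    # Illustrative sketch: adjusted trial balance = trial balance + adjustments.
    def adjusted_balance(trial_balance_amount, adjustment_amount):
        return trial_balance_amount + adjustment_amount

    print(adjusted_balance(24800, 0))     # Cash: 24800, no adjustment
    print(adjusted_balance(0, 140))       # Interest Receivable: 140 debit
    print(adjusted_balance(-4000, 600))   # Unearned Revenue: -3400, a $3,400 credit
    print(adjusted_balance(-9500, -600))  # Service Revenue: -10100, a $10,100 credit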
The adjustments total of $2,415 balances in the debit and credit columns. The next step is to record information in the adjusted trial balance columns. To get the numbers in these columns, you take the number in the trial balance column and add or subtract any number found in the adjustment column. For example, Cash shows an unadjusted balance of $24,800. There is no adjustment in the adjustment columns, so the Cash balance from the unadjusted balance column is transferred over to the adjusted trial balance columns at $24,800. Interest Receivable did not exist in the trial balance information, so the balance in the adjustment column of $140 is transferred over to the adjusted trial balance column. Unearned Revenue had a credit balance of $4,000 in the trial balance column and a debit adjustment of $600 in the adjustment column. Remember that adding debits and credits is like adding positive and negative numbers. This means the $600 debit is subtracted from the $4,000 credit to get a credit balance of $3,400, which is carried over to the adjusted trial balance column. Service Revenue had a $9,500 credit balance in the trial balance column and a $600 credit balance in the adjustments column. To get the $10,100 credit balance in the adjusted trial balance column requires adding together both credits in the trial balance and adjustment columns ($9,500 + $600). You will do the same process for all accounts. Once all accounts have balances in the adjusted trial balance columns, add the debits and credits to make sure they are equal. In the case of Printing Plus, the balances equal $35,715. If you check the adjusted trial balance for Printing Plus, you will see the same equal balance is present. Next you will take all of the figures in the adjusted trial balance columns and carry them over to either the income statement columns or the balance sheet columns. Your Turn Income Statement and Balance Sheet Take a couple of minutes and fill in the income statement and balance sheet columns. Total them when you are done. Do not panic when they do not balance. They will not balance at this time. Solution Looking at the income statement columns, we see that all revenue and expense accounts are listed in either the debit or credit column. This is a reminder that the income statement itself does not organize information into debits and credits, but we do use this presentation on a 10-column worksheet. You will notice that when the debit and credit income statement columns are totaled, the balances are not the same. The debit balance equals $5,575, and the credit balance equals $10,240. Why do they not balance? If the debit and credit columns equaled each other, it would mean the expenses equal the revenues. This would happen if a company broke even, meaning the company did not make or lose any money. If there is a difference between the two numbers, that difference is the amount of net income, or net loss, the company has earned. In the Printing Plus case, the credit side is the higher figure at $10,240. The credit side represents revenues. This means revenues exceed expenses, thus giving the company a net income. If the debit column were larger, this would mean the expenses were larger than revenues, leading to a net loss. You want to calculate the net income and enter it onto the worksheet. The $4,665 net income is found by taking the credit of $10,240 and subtracting the debit of $5,575. When entering net income, it should be written in the column with the lower total. In this instance, that would be the debit side.
You then add together the $5,575 and $4,665 to get a total of $10,240. This balances the two columns for the income statement. If you review the income statement, you see that net income is in fact $4,665. We now consider the last two columns for the balance sheet. In these columns we record all asset, liability, and equity accounts. When adding the total debits and credits, you notice they do not balance. The debit column equals $30,140, and the credit column equals $25,475. How do we get the columns to balance? Treat the income statement and balance sheet columns like a double-entry accounting system: if you have a debit on the income statement side, you must have a credit of the same amount on the balance sheet side. In this case we added a debit of $4,665 to the income statement column. This means we must add a credit of $4,665 to the balance sheet column. Once we add the $4,665 to the credit side of the balance sheet column, the two columns equal $30,140. You may notice that dividends are included in our 10-column worksheet balance sheet columns even though this account is not included on a balance sheet. So why is it included here? There is actually a very good reason we put dividends in the balance sheet columns. When you prepare a balance sheet, you must first have the most updated retained earnings balance. To get that balance, you take the beginning retained earnings balance + net income – dividends. If you look at the worksheet for Printing Plus, you will notice there is no retained earnings account. That is because the company just started business this month and has no beginning retained earnings balance. If you look in the balance sheet columns, we do have the new, up-to-date retained earnings, but it is spread out through two numbers: the dividends balance of $100 and net income of $4,665. If you combine these two individual numbers ($4,665 – $100), you will have your updated retained earnings balance of $4,565, as seen on the statement of retained earnings. The totals on the 10-column worksheet will not match the balance sheet, because the 10-column worksheet categorizes all accounts by the type of balance they have, debit or credit, which leads to a final balance of $30,140, while the balance sheet classifies the accounts by type of account (assets and contra assets, liabilities, and equity), which leads to a final balance of $29,965. Even though the accounts hold the same numbers, the totals on the worksheet and the totals on the balance sheet differ because of the different presentation methods. Link to Learning Publicly traded companies release their financial statements quarterly for open viewing by the general public, which can usually be viewed on their websites. One such company is Alphabet, Inc. (trade name Google). Take a look at Alphabet’s quarter ended March 31, 2018, financial statements from the SEC Form 10-Q. Your Turn Frank’s Net Income and Loss What amount of net income/loss does Frank have? Solution In Completing the Accounting Cycle, we continue our discussion of the accounting cycle, completing the last steps of journalizing and posting closing entries and preparing a post-closing trial balance.
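As a recap of the worksheet arithmetic in this section, here is one final illustrative Python sketch tying the column totals, net income, and ending retained earnings together. The figures mirror the Printing Plus example; the variable names are hypothetical.

    # Illustrative recap of the 10-column worksheet arithmetic.
    income_stmt_debits, income_stmt_credits = 5575, 10240  # expenses, revenues
    net_income = income_stmt_credits - income_stmt_debits
    print(net_income)  # 4665

    balance_sheet_debits, balance_sheet_credits = 30140, 25475
    # Adding net income to the lower (credit) column balances the worksheet.
    print(balance_sheet_credits + net_income == balance_sheet_debits)  # True

    beginning_retained_earnings, dividends = 0, 100
    ending_retained_earnings = beginning_retained_earnings + net_income - dividends
    print(ending_retained_earnings)  # 4565, as on the statement of retained earnings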
American Government
Summary 13.1 Guardians of the Constitution and Individual Rights From humble beginnings, the judicial branch has evolved over the years to a significance that would have been difficult for the Constitution’s framers to envision. While they understood and prioritized the value of an independent judiciary in a common law system, they could not have predicted the critical role the courts would play in the interpretation of the Constitution, our understanding of the law, the development of public policy, and the preservation and expansion of individual rights and liberties over time. 13.2 The Dual Court System The U.S. judicial system features a dual court model, with courts at both the federal and state levels, and the U.S. Supreme Court at the top. While cases may sometimes be eligible for both state and federal review, each level has its own distinct jurisdiction. There are trial and appellate courts at both levels, but there are also remarkable differences among the states in their laws, politics, and culture, meaning that no two state court systems are exactly alike. The diversity of courts across the nation can have both positive and negative effects for citizens, depending on their situation. While it provides for various opportunities for an issue or interest to be heard, it may also lead to case-by-case treatment of individuals, groups, or issues that is not always the same or even-handed across the nation. 13.3 The Federal Court System The structure of today’s three-tiered federal court system, largely established by Congress, is quite clear-cut. The system’s reliance on precedent ensures a consistent and stable institution that is still capable of slowly evolving over the years—such as by increasingly reflecting the diverse population it serves. Presidents hope their judicial nominees will make rulings consistent with the chief executive’s own ideological leanings. But the lifetime tenure of federal court members gives them the flexibility to act in ways that may or may not reflect what their nominating president intended. Perfect alignment between nominating president and justice is not expected; a judge might be liberal on most issues but conservative on others, or vice versa. However, presidents have sometimes been surprised by the decisions made by their nominees, as President Eisenhower was by Justice Earl Warren and President Reagan was by Justice Anthony Kennedy. 13.4 The Supreme Court A unique institution, the U.S. Supreme Court today is an interesting mix of the traditional and the modern. On one hand, it still holds to many of the formal traditions, processes, and procedures it has followed for many decades. Its public proceedings remain largely ceremonial and are never filmed or photographed. At the same time, the Court has taken on new cases involving contemporary matters before a nine-justice panel that is more diverse today than ever before. When considering whether to take on a case and then later when ruling on it, the justices rely on a number of internal and external players who assist them with and influence their work, including, but not limited to, their law clerks, the U.S. solicitor general, interest groups, and the mass media. 13.5 Judicial Decision-Making and Implementation by the Supreme Court Like the executive and legislative branches, the judicial system wields power that is not absolute. There remain many checks on its power and limits to its rulings.
Judicial decisions are also affected by various internal and external factors, including legal, personal, ideological, and political influences. To stay relevant, Court decisions have to keep up with the changing times, and the justices’ decision-making power is subject to the support afforded by the other branches of government in implementation and enforcement. Nevertheless, the courts have evolved into an indispensable part of our government system—a separate and coequal branch that interprets law, makes policy, guards the Constitution, and protects individual rights.
Chapter Outline
13.1 Guardians of the Constitution and Individual Rights
13.2 The Dual Court System
13.3 The Federal Court System
13.4 The Supreme Court
13.5 Judicial Decision-Making and Implementation by the Supreme Court
Introduction If democratic institutions struggle to balance individual freedoms and collective well-being, the judiciary is arguably the branch where the individual has the best chance to be heard. For those seeking protection on the basis of sexual orientation, for example, in recent years the courts have expanded rights, culminating in 2015 when the Supreme Court ruled that same-sex couples have the right to marry in all fifty states (Figure 13.1). 1 The U.S. courts pride themselves on two achievements: (1) as part of the framers’ system of checks and balances, they protect the sanctity of the U.S. Constitution from breaches by the other branches of government, and (2) they protect individual rights against societal and governmental oppression. At the federal level, nine Supreme Court judges are nominated by the president and confirmed by the Senate for lifetime appointments. Hence, democratic control over them is indirect at best, but this provides them the independence they need to carry out their duties. However, court power is confined to rulings on those cases the courts decide to hear. 2 How do the courts make decisions, and how do they exercise their power to protect individual rights? How are the courts structured, and what distinguishes the Supreme Court from all others? This chapter answers these and other questions in delineating the power of the judiciary in the United States.
[ { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> While the U . S . Supreme Court and state supreme courts exert power over many when reviewing laws or declaring acts of other branches unconstitutional , they become particularly important when an individual or group comes before them believing there has been a wrong . <hl> A citizen or group that feels mistreated can approach a variety of institutional venues in the U . S . system for assistance in changing policy or seeking support . Organizing protests , garnering special interest group support , and changing laws through the legislative and executive branches are all possible , but an individual is most likely to find the courts especially well-suited to analyzing the particulars of his or her case . Perhaps Marshall feared a confrontation with the Jefferson administration and thought Madison would refuse his directive anyway . In any case , his ruling shows an interesting contrast in the early Court . On one hand , it humbly declined a power — issuing a writ of mandamus — given to it by Congress , but on the other , it laid the foundation for legitimizing a much more important one — judicial review . <hl> Marbury never got his commission , but the Court ’ s ruling in the case has become more significant for the precedent it established : As the first time the Court declared an act of Congress unconstitutional , it established the power of judicial review , a key power that enables the judicial branch to remain a powerful check on the other branches of government . <hl> In 1803 , the Supreme Court declared for itself the power of judicial review , a power to which Hamilton had referred but that is not expressly mentioned in the Constitution . Judicial review is the power of the courts , as part of the system of checks and balances , to look at actions taken by the other branches of government and the states and determine whether they are constitutional . If the courts find an action to be unconstitutional , it becomes null and void . <hl> Judicial review was established in the Supreme Court case Marbury v . Madison , when , for the first time , the Court declared an act of Congress to be unconstitutional . <hl> 9 Wielding this power is a role Marshall defined as the “ very essence of judicial duty , ” and it continues today as one of the most significant aspects of judicial power . <hl> Judicial review lies at the core of the court ’ s ability to check the other branches of government — and the states . <hl>", "hl_sentences": "While the U . S . Supreme Court and state supreme courts exert power over many when reviewing laws or declaring acts of other branches unconstitutional , they become particularly important when an individual or group comes before them believing there has been a wrong . Marbury never got his commission , but the Court ’ s ruling in the case has become more significant for the precedent it established : As the first time the Court declared an act of Congress unconstitutional , it established the power of judicial review , a key power that enables the judicial branch to remain a powerful check on the other branches of government . Judicial review was established in the Supreme Court case Marbury v . Madison , when , for the first time , the Court declared an act of Congress to be unconstitutional . 
Judicial review lies at the core of the court ’ s ability to check the other branches of government — and the states .", "question": { "cloze_format": "The Supreme Court’s power of judicial review ________.", "normal_format": "Which of the following is correct about the Supreme Court’s power of judicial review?", "question_choices": [ "is given to it in the original constitution", "enables it to declare acts of the other branches unconstitutional", "allows it to hear cases", "establishes the three-tiered court system" ], "question_id": "fs-id1163755372994", "question_text": "The Supreme Court’s power of judicial review ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "an appeals court" }, "bloom": null, "hl_context": "The first session of the first U . S . Congress laid the framework for today ’ s federal judicial system , established in the Judiciary Act of 1789 . Although legislative changes over the years have altered it , the basic structure of the judicial branch remains as it was set early on : At the lowest level are the district courts , where federal cases are tried , witnesses testify , and evidence and arguments are presented . <hl> A losing party who is unhappy with a district court decision may appeal to the circuit courts , or U . S . courts of appeals , where the decision of the lower court is reviewed . <hl> <hl> Still further , appeal to the U . S . Supreme Court is possible , but of the thousands of petitions for appeal , the Supreme Court will typically hear fewer than one hundred a year . <hl> 3", "hl_sentences": "A losing party who is unhappy with a district court decision may appeal to the circuit courts , or U . S . courts of appeals , where the decision of the lower court is reviewed . Still further , appeal to the U . S . Supreme Court is possible , but of the thousands of petitions for appeal , the Supreme Court will typically hear fewer than one hundred a year .", "question": { "cloze_format": "The Supreme Court most typically functions as ________.", "normal_format": "What does the Supreme Court most typically function as?", "question_choices": [ "a district court", "a trial court", "a court of original jurisdiction", "an appeals court" ], "question_id": "fs-id1163757280013", "question_text": "The Supreme Court most typically functions as ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> To add further explanation to Article III , Alexander Hamilton wrote details about the federal judiciary in Federalist No . <hl> 78 . In explaining the importance of an independent judiciary separated from the other branches of government , he said “ interpretation ” was a key role of the courts as they seek to protect people from unjust laws . <hl> But he also believed “ the Judiciary Department ” would “ always be the least dangerous ” because “ with no influence over either the sword or the purse , ” it had “ neither force nor will , but merely judgment . ” The courts would only make decisions , not take action . <hl> With no control over how those decisions would be implemented and no power to enforce their choices , they could exercise only judgment , and their power would begin and end there . 
Hamilton would no doubt be surprised by what the judiciary has become : a key component of the nation ’ s constitutional democracy , finding its place as the chief interpreter of the Constitution and the equal of the other two branches , though still checked and balanced by them .", "hl_sentences": "To add further explanation to Article III , Alexander Hamilton wrote details about the federal judiciary in Federalist No . But he also believed “ the Judiciary Department ” would “ always be the least dangerous ” because “ with no influence over either the sword or the purse , ” it had “ neither force nor will , but merely judgment . ” The courts would only make decisions , not take action .", "question": { "cloze_format": "In Federalist No. 78, Alexander Hamilton characterized the judiciary as the ________ branch of government.", "normal_format": "In Federalist No. 78, Alexander Hamilton characterized the judiciary as what branch of government?", "question_choices": [ "most unnecessary", "strongest", "least dangerous", "most political" ], "question_id": "fs-id1163757287791", "question_text": "In Federalist No. 78, Alexander Hamilton characterized the judiciary as the ________ branch of government." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "at the state level" }, "bloom": null, "hl_context": "Although the Supreme Court tends to draw the most public attention , it typically hears fewer than one hundred cases every year . <hl> In fact , the entire federal side — both trial and appellate — handles proportionately very few cases , with about 90 percent of all cases in the U . S . court system being heard at the state level . <hl> 27 The several hundred thousand cases handled every year on the federal side pale in comparison to the several million handled by the states .", "hl_sentences": "In fact , the entire federal side — both trial and appellate — handles proportionately very few cases , with about 90 percent of all cases in the U . S . court system being heard at the state level .", "question": { "cloze_format": "Of all the court cases in the United States, the majority are handled ________.", "normal_format": "Of all the court cases in the United States, how are the majority handled?", "question_choices": [ "by the U.S. Supreme Court", "at the state level", "by the circuit courts", "by the U.S. district courts" ], "question_id": "fs-id1163758655880", "question_text": "Of all the court cases in the United States, the majority are handled ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Courts hear two different types of disputes : criminal and civil . <hl> Under criminal law , governments establish rules and punishments ; laws define conduct that is prohibited because it can harm others and impose punishment for committing such an act . Crimes are usually labeled felonies or misdemeanors based on their nature and seriousness ; felonies are the more serious crimes . When someone commits a criminal act , the government ( state or national , depending on which law has been broken ) charges that person with a crime , and the case brought to court contains the name of the charging government , as in Miranda v . Arizona discussed below . 26 On the other hand , civil law cases involve two or more private ( non-government ) parties , at least one of whom alleges harm or injury committed by the other . 
In both criminal and civil matters , the courts decide the remedy and resolution of the case , and in all cases , the U . S . Supreme Court is the final court of appeal .", "hl_sentences": "Courts hear two different types of disputes : criminal and civil .", "question": { "cloze_format": "Both state and federal courts hear matters that involve ________.", "normal_format": "Both state and federal courts hear matters that involve which of the following?", "question_choices": [ "civil law only", "criminal law only", "both civil and criminal law", "neither civil nor criminal law" ], "question_id": "fs-id1163758513860", "question_text": "Both state and federal courts hear matters that involve ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "it involves a federal question" }, "bloom": null, "hl_context": "<hl> Hear cases that involve “ interstate ” matters , “ diversity of citizenship ” involving parties of two different states , or between a U . S . citizen and a citizen of another nation ( and with a damage claim of at least $ 75,000 ) <hl> <hl> Hear both civil and criminal matters , although many criminal cases involving federal law are tried in state courts <hl> <hl> Hear cases that involve a “ federal question , ” involving the Constitution , federal laws or treaties , or a “ federal party ” in which the U . S . government is a party to the case <hl> <hl> Federal Courts <hl>", "hl_sentences": "Hear cases that involve “ interstate ” matters , “ diversity of citizenship ” involving parties of two different states , or between a U . S . citizen and a citizen of another nation ( and with a damage claim of at least $ 75,000 ) Hear both civil and criminal matters , although many criminal cases involving federal law are tried in state courts Hear cases that involve a “ federal question , ” involving the Constitution , federal laws or treaties , or a “ federal party ” in which the U . S . government is a party to the case Federal Courts", "question": { "cloze_format": "A state case is more likely to be heard by the federal courts when ________.", "normal_format": "When is a state case more likely to be heard by the federal courts?", "question_choices": [ "it involves a federal question", "a governor requests a federal court hearing", "it involves a criminal matter", "the state courts are unable to come up with a decision" ], "question_id": "fs-id1163758364559", "question_text": "A state case is more likely to be heard by the federal courts when ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> On the U . S . Supreme Court , there are nine justices — one chief justice and eight associate justices . <hl> <hl> Circuit courts each contain three justices , whereas federal district courts have just one judge each . <hl> As the national court of last resort for all other courts in the system , the Supreme Court plays a vital role in setting the standards of interpretation that the lower courts follow . The Supreme Court ’ s decisions are binding across the nation and establish the precedent by which future cases are resolved in all the system ’ s tiers .", "hl_sentences": "On the U . S . Supreme Court , there are nine justices — one chief justice and eight associate justices . 
Circuit courts each contain three justices , whereas federal district courts have just one judge each .", "question": { "cloze_format": "Besides the Supreme Court, there are lower courts in the national system called ________.", "normal_format": "Besides the Supreme Court, there are lower courts in the national system called what?", "question_choices": [ "state and federal courts", "district and circuit courts", "state and local courts", "civil and common courts" ], "question_id": "fs-id1163758372924", "question_text": "Besides the Supreme Court, there are lower courts in the national system called ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "stare decisis" }, "bloom": null, "hl_context": "<hl> The U . S . court system operates on the principle of stare decisis ( Latin for stand by things decided ) , which means that today ’ s decisions are based largely on rulings from the past , and tomorrow ’ s rulings rely on what is decided today . <hl> <hl> Stare decisis is especially important in the U . S . common law system , in which the consistency of precedent ensures greater certainty and stability in law and constitutional interpretation , and it also contributes to the solidity and legitimacy of the court system itself . <hl> As former Supreme Court justice Benjamin Cardozo summarized it years ago , “ Adherence to precedent must then be the rule rather than the exception if litigants are to have faith in the even-handed administration of justice in the courts . ” 37", "hl_sentences": "The U . S . court system operates on the principle of stare decisis ( Latin for stand by things decided ) , which means that today ’ s decisions are based largely on rulings from the past , and tomorrow ’ s rulings rely on what is decided today . Stare decisis is especially important in the U . S . common law system , in which the consistency of precedent ensures greater certainty and stability in law and constitutional interpretation , and it also contributes to the solidity and legitimacy of the court system itself .", "question": { "cloze_format": "In standing by precedent, a judge relies on the principle of ________.", "normal_format": "In standing by precedent, a judge relies on which principle?", "question_choices": [ "stare decisis", "amicus curiae", "judicial activism", "laissez-faire" ], "question_id": "fs-id1163758839661", "question_text": "In standing by precedent, a judge relies on the principle of ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> The original court in 1789 had six justices , but Congress set the number at nine in 1869 , and it has remained there ever since . <hl> There is one chief justice , who is the lead or highest-ranking judge on the Court , and eight associate justice s . <hl> All nine serve lifetime terms , after successful nomination by the president and confirmation by the Senate . <hl> The U . S . courts pride themselves on two achievements : ( 1 ) as part of the framers ’ system of checks and balances , they protect the sanctity of the U . S . Constitution from breaches by the other branches of government , and ( 2 ) they protect individual rights against societal and governmental oppression . <hl> At the federal level , nine Supreme Court judges are nominated by the president and confirmed by the Senate for lifetime appointments . 
<hl> Hence , democratic control over them is indirect at best , but this provides them the independence they need to carry out their duties . However , court power is confined to rulings on those cases the courts decide to hear . 2", "hl_sentences": "The original court in 1789 had six justices , but Congress set the number at nine in 1869 , and it has remained there ever since . All nine serve lifetime terms , after successful nomination by the president and confirmation by the Senate . At the federal level , nine Supreme Court judges are nominated by the president and confirmed by the Senate for lifetime appointments .", "question": { "cloze_format": "The justices of the Supreme Court are ________.", "normal_format": "Which of the following is correct about the justices of the Supreme Court?", "question_choices": [ "elected by citizens", "chosen by the Congress", "confirmed by the president", "nominated by the president and confirmed by the Senate" ], "question_id": "fs-id1163758534602", "question_text": "The justices of the Supreme Court are ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "one chief justice and eight associate justices" }, "bloom": null, "hl_context": "The original court in 1789 had six justices , but Congress set the number at nine in 1869 , and it has remained there ever since . <hl> There is one chief justice , who is the lead or highest-ranking judge on the Court , and eight associate justice s . <hl> All nine serve lifetime terms , after successful nomination by the president and confirmation by the Senate .", "hl_sentences": "There is one chief justice , who is the lead or highest-ranking judge on the Court , and eight associate justice s .", "question": { "cloze_format": "The Supreme Court consists of ________.", "normal_format": "What does the Supreme Court consist of?", "question_choices": [ "nine associate justices", "one chief justice and eight associate justices", "thirteen judges", "one chief justice and five associate justices" ], "question_id": "fs-id1163757290685", "question_text": "The Supreme Court consists of ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "Most often , the petitioner is asking the Supreme Court to grant a writ of certiorari , a request that the lower court send up its record of the case for review . Once a writ of certiorari ( cert . <hl> for short ) has been granted , the case is scheduled on the Court ’ s docket . <hl> <hl> The Supreme Court exercises discretion in the cases it chooses to hear , but four of the nine Justices must vote to accept a case . <hl> <hl> This is called the Rule of Four . <hl>", "hl_sentences": "for short ) has been granted , the case is scheduled on the Court ’ s docket . The Supreme Court exercises discretion in the cases it chooses to hear , but four of the nine Justices must vote to accept a case . This is called the Rule of Four .", "question": { "cloze_format": "A case will be placed on the Court’s docket when ________ justices agree to do so.", "normal_format": "For a case to be placed on the Court’s docket, how many justices need to agree to do so?", "question_choices": [ "four", "five", "six", "all" ], "question_id": "fs-id11637573162720", "question_text": "A case will be placed on the Court’s docket when ________ justices agree to do so." 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "filing amicus curiae briefs" }, "bloom": null, "hl_context": "Both the executive and legislative branches check and balance the judiciary in many different ways . The president can leave a lasting imprint on the bench through his or her nominations , even long after leaving office . <hl> The president may also influence the Court through the solicitor general ’ s involvement or through the submission of amicus briefs in cases in which the United States is not a party . <hl> Once a case has been placed on the docket , briefs , or short arguments explaining each party ’ s view of the case , must be submitted — first by the petitioner putting forth his or her case , then by the respondent . After initial briefs have been filed , both parties may file subsequent briefs in response to the first . Likewise , people and groups that are not party to the case but are interested in its outcome may file an amicus curiae ( “ friend of the court ” ) brief giving their opinion , analysis , and recommendations about how the Court should rule . <hl> Interest groups in particular can become heavily involved in trying to influence the judiciary by filing amicus briefs — both before and after a case has been granted cert . <hl> And , as noted earlier , if the United States is not party to a case , the solicitor general may file an amicus brief on the government ’ s behalf .", "hl_sentences": "The president may also influence the Court through the solicitor general ’ s involvement or through the submission of amicus briefs in cases in which the United States is not a party . Interest groups in particular can become heavily involved in trying to influence the judiciary by filing amicus briefs — both before and after a case has been granted cert .", "question": { "cloze_format": "One of the main ways interest groups participate in Supreme Court cases is by ________.", "normal_format": "What is one of the main ways interest groups participate in Supreme Court cases?", "question_choices": [ "giving monetary contributions to the justices", "lobbying the justices", "filing amicus curiae briefs", "protesting in front of the Supreme Court building" ], "question_id": "fs-id1163757305271", "question_text": "One of the main ways interest groups participate in Supreme Court cases is by ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> The solicitor general is the lawyer who represents the federal government before the Supreme Court : He or she decides which cases ( in which the United States is a party ) should be appealed from the lower courts and personally approves each one presented ( Figure 13.11 ) . <hl> Most of the cases the solicitor general brings to the Court will be given a place on the docket . About two-thirds of all Supreme Court cases involve the federal government . 
53", "hl_sentences": "The solicitor general is the lawyer who represents the federal government before the Supreme Court : He or she decides which cases ( in which the United States is a party ) should be appealed from the lower courts and personally approves each one presented ( Figure 13.11 ) .", "question": { "cloze_format": "The lawyer who represents the federal government and argues cases before the Supreme Court is the ________.", "normal_format": "Who is the lawyer who represents the federal government and argues cases before the Supreme Court?", "question_choices": [ "solicitor general", "attorney general", "U.S. attorney", "chief justice" ], "question_id": "fs-id1163757194580", "question_text": "The lawyer who represents the federal government and argues cases before the Supreme Court is the ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "A justice ’ s decisions are influenced by how he or she defines his role as a jurist , with some justices believing strongly in judicial activism , or the need to defend individual rights and liberties , and they aim to stop actions and laws by other branches of government that they see as infringing on these rights . <hl> A judge or justice who views the role with an activist lens is more likely to use his or her judicial power to broaden personal liberty , justice , and equality . <hl> <hl> Still others believe in judicial restraint , which leads them to defer decisions ( and thus policymaking ) to the elected branches of government and stay focused on a narrower interpretation of the Bill of Rights . <hl> These justices are less likely to strike down actions or laws as unconstitutional and are less likely to focus on the expansion of individual liberties . While it is typically the case that liberal actions are described as unnecessarily activist , conservative decisions can be activist as well .", "hl_sentences": "A judge or justice who views the role with an activist lens is more likely to use his or her judicial power to broaden personal liberty , justice , and equality . Still others believe in judicial restraint , which leads them to defer decisions ( and thus policymaking ) to the elected branches of government and stay focused on a narrower interpretation of the Bill of Rights .", "question": { "cloze_format": "When using judicial restraint, a judge will usually ________.", "normal_format": "When using judicial restraint, what will a judge usually do?", "question_choices": [ "refuse to rule on a case", "overrule any act of Congress he or she doesn’t like", "defer to the decisions of the elected branches of government", "make mostly liberal rulings" ], "question_id": "fs-id1163756284095", "question_text": "When using judicial restraint, a judge will usually ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "concurring opinion" }, "bloom": null, "hl_context": "Most typically , though , the Court will put forward a majority opinion . If he or she is in the majority , the chief justice decides who will write the opinion . If not , then the most senior justice ruling with the majority chooses the writer . Likewise , the most senior justice in the dissenting group can assign a member of that group to write the dissenting opinion ; however , any justice who disagrees with the majority may write a separate dissenting opinion . 
<hl> If a justice agrees with the outcome of the case but not with the majority ’ s reasoning in it , that justice may write a concurring opinion . <hl>", "hl_sentences": "If a justice agrees with the outcome of the case but not with the majority ’ s reasoning in it , that justice may write a concurring opinion .", "question": { "cloze_format": "When a Supreme Court ruling is made, justices may write a ________ to show they agree with the majority but for different reasons.", "normal_format": "When a Supreme Court ruling is made, what may justices write to show they agree with the majority but for different reasons?", "question_choices": [ "brief", "dissenting opinion", "majority opinion", "concurring opinion" ], "question_id": "fs-id1163758521651", "question_text": "When a Supreme Court ruling is made, justices may write a ________ to show they agree with the majority but for different reasons." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> The Court relies on the executive to implement or enforce its decisions and on the legislative branch to fund them . <hl> <hl> As the Jackson and Lincoln stories indicate , presidents may simply ignore decisions of the Court , and Congress may withhold funding needed for implementation and enforcement . <hl> Fortunately for the courts , these situations rarely happen , and the other branches tend to provide support rather than opposition . In general , presidents have tended to see it as their duty to both obey and enforce Court rulings , and Congress seldom takes away the funding needed for the president to do so . Likewise , Congress has checks on the judiciary . It retains the power to modify the federal court structure and its appellate jurisdiction , and the Senate may accept or reject presidential nominees to the federal courts . <hl> Faced with a court ruling that overturns one of its laws , Congress may rewrite the law or even begin a constitutional amendment process . <hl> The U . S . courts pride themselves on two achievements : ( 1 ) as part of the framers ’ system of checks and balances , they protect the sanctity of the U . S . Constitution from breaches by the other branches of government , and ( 2 ) they protect individual rights against societal and governmental oppression . <hl> At the federal level , nine Supreme Court judges are nominated by the president and confirmed by the Senate for lifetime appointments . <hl> Hence , democratic control over them is indirect at best , but this provides them the independence they need to carry out their duties . However , court power is confined to rulings on those cases the courts decide to hear . 2", "hl_sentences": "The Court relies on the executive to implement or enforce its decisions and on the legislative branch to fund them . As the Jackson and Lincoln stories indicate , presidents may simply ignore decisions of the Court , and Congress may withhold funding needed for implementation and enforcement . Faced with a court ruling that overturns one of its laws , Congress may rewrite the law or even begin a constitutional amendment process . 
At the federal level , nine Supreme Court judges are nominated by the president and confirmed by the Senate for lifetime appointments .", "question": { "cloze_format": "___ is a check that the legislative branch has over the courts.", "normal_format": "Which of the following is a check that the legislative branch has over the courts?", "question_choices": [ "Senate approval is needed for the appointment of justices and federal judges.", "Congress may rewrite a law the courts have declared unconstitutional.", "Congress may withhold funding needed to implement court decisions.", "all of the above" ], "question_id": "fs-id1163758500816", "question_text": "Which of the following is a check that the legislative branch has over the courts?" }, "references_are_paraphrase": null } ]
13.1 Guardians of the Constitution and Individual Rights

Learning Objectives

By the end of this section, you will be able to:
- Describe the evolving role of the courts since the ratification of the Constitution
- Explain why courts are uniquely situated to protect individual rights
- Recognize how the courts make public policy

Under the Articles of Confederation, there was no national judiciary. The U.S. Constitution changed that, but its Article III, which addresses “the judicial power of the United States,” is the shortest and least detailed of the three articles that created the branches of government. It calls for the creation of “one supreme Court” and establishes the Court’s jurisdiction, or its authority to hear cases and make decisions about them, and the types of cases the Court may hear. It distinguishes which are matters of original jurisdiction and which are for appellate jurisdiction. Under original jurisdiction, a case is heard for the first time, whereas under appellate jurisdiction, a court hears a case on appeal from a lower court and may change the lower court’s decision. The Constitution also limits the Supreme Court’s original jurisdiction to those rare cases of disputes between states, or between the United States and foreign ambassadors or ministers. So, for the most part, the Supreme Court is an appeals court, operating under appellate jurisdiction and hearing appeals from the lower courts. The rest of the development of the judicial system and the creation of the lower courts were left in the hands of Congress.

To add further explanation to Article III, Alexander Hamilton wrote details about the federal judiciary in Federalist No. 78. In explaining the importance of an independent judiciary separated from the other branches of government, he said “interpretation” was a key role of the courts as they seek to protect people from unjust laws. But he also believed “the Judiciary Department” would “always be the least dangerous” because “with no influence over either the sword or the purse,” it had “neither force nor will, but merely judgment.” The courts would only make decisions, not take action. With no control over how those decisions would be implemented and no power to enforce their choices, they could exercise only judgment, and their power would begin and end there. Hamilton would no doubt be surprised by what the judiciary has become: a key component of the nation’s constitutional democracy, finding its place as the chief interpreter of the Constitution and the equal of the other two branches, though still checked and balanced by them.

The first session of the first U.S. Congress laid the framework for today’s federal judicial system, established in the Judiciary Act of 1789. Although legislative changes over the years have altered it, the basic structure of the judicial branch remains as it was set early on: At the lowest level are the district courts, where federal cases are tried, witnesses testify, and evidence and arguments are presented. A losing party who is unhappy with a district court decision may appeal to the circuit courts, or U.S. courts of appeals, where the decision of the lower court is reviewed. Still further, appeal to the U.S. Supreme Court is possible, but of the thousands of petitions for appeal, the Supreme Court will typically hear fewer than one hundred a year. 3

Link to Learning

This public site maintained by the Administrative Office of the U.S. Courts provides detailed information from and about the judicial branch.
HUMBLE BEGINNINGS

Starting in New York in 1790, the early Supreme Court focused on establishing its rules and procedures and perhaps trying to carve its place as the new government’s third branch. However, given the difficulty of getting all the justices even to show up, and with no permanent home or building of its own for decades, finding its footing in the early days proved to be a monumental task. Even when the federal government moved to the nation’s capital in 1800, the Court had to share space with Congress in the Capitol building. This ultimately meant that “the high bench crept into an undignified committee room in the Capitol beneath the House Chamber.” 4

It was not until the Court’s 146th year of operation that Congress, at the urging of Chief Justice—and former president—William Howard Taft, provided the designation and funding for the Supreme Court’s own building, “on a scale in keeping with the importance and dignity of the Court and the Judiciary as a coequal, independent branch of the federal government.” 5 It was a symbolic move that recognized the Court’s growing role as a significant part of the national government (Figure 13.2).

But it took years for the Court to get to that point, and it faced a number of setbacks on the way to such recognition. In their first case of significance, Chisholm v. Georgia (1793), the justices ruled that the federal courts could hear cases brought by a citizen of one state against a citizen of another state, and that Article III, Section 2, of the Constitution did not protect the states from facing such an interstate lawsuit. 6 However, their decision was almost immediately overturned by the Eleventh Amendment, passed by Congress in 1794 and ratified by the states in 1795. In protecting the states, the Eleventh Amendment put a prohibition on the courts by stating, “The Judicial power of the United States shall not be construed to extend to any suit in law or equity, commenced or prosecuted against one of the United States by Citizens of another State, or by Citizens or Subjects of any Foreign State.” It was an early hint that Congress had the power to change the jurisdiction of the courts as it saw fit and stood ready to use it.

In an atmosphere of perceived weakness, the first chief justice, John Jay, an author of The Federalist Papers and appointed by President George Washington, resigned his post to become governor of New York and later declined President John Adams’s offer of a subsequent term. 7 In fact, the Court might have remained in a state of what Hamilton called its “natural feebleness” if not for the man who filled the vacancy Jay had refused—the fourth chief justice, John Marshall. Often credited with defining the modern court, clarifying its power, and strengthening its role, Marshall served in the chief’s position for thirty-four years. One landmark case during his tenure changed the course of the judicial branch’s history (Figure 13.3). 8

In 1803, the Supreme Court declared for itself the power of judicial review, a power to which Hamilton had referred but that is not expressly mentioned in the Constitution. Judicial review is the power of the courts, as part of the system of checks and balances, to look at actions taken by the other branches of government and the states and determine whether they are constitutional. If the courts find an action to be unconstitutional, it becomes null and void. Judicial review was established in the Supreme Court case Marbury v. Madison, when, for the first time, the Court declared an act of Congress to be unconstitutional. 9 Wielding this power is a role Marshall defined as the “very essence of judicial duty,” and it continues today as one of the most significant aspects of judicial power. Judicial review lies at the core of the court’s ability to check the other branches of government—and the states.

Since Marbury, the power of judicial review has continually expanded, and the Court has not only ruled actions of Congress and the president to be unconstitutional, but it has also extended its power to include the review of state and local actions. The power of judicial review is not confined to the Supreme Court but is also exercised by the lower federal courts and even the state courts. Any legislative or executive action at the federal or state level inconsistent with the U.S. Constitution or a state constitution can be subject to judicial review. 10

Milestone

Marbury v. Madison (1803)

The Supreme Court found itself in the middle of a dispute between the outgoing presidential administration of John Adams and that of incoming president (and opposition party member) Thomas Jefferson. It was an interesting circumstance at the time, particularly because Jefferson and the man who would decide the case—John Marshall—were themselves political rivals. President Adams had appointed William Marbury to a position in Washington, DC, but his commission was not delivered before Adams left office. So Marbury petitioned the Supreme Court to use its power under the Judiciary Act of 1789 and issue a writ of mandamus to force the new president’s secretary of state, James Madison, to deliver the commission documents. It was a task Madison refused to do.

A unanimous Court under the leadership of Chief Justice John Marshall ruled that although Marbury was entitled to the job, the Court did not have the power to issue the writ and order Madison to deliver the documents, because the provision in the Judiciary Act that had given the Court that power was unconstitutional. 11 Perhaps Marshall feared a confrontation with the Jefferson administration and thought Madison would refuse his directive anyway. In any case, his ruling shows an interesting contrast in the early Court. On one hand, it humbly declined a power—issuing a writ of mandamus—given to it by Congress, but on the other, it laid the foundation for legitimizing a much more important one—judicial review. Marbury never got his commission, but the Court’s ruling in the case has become more significant for the precedent it established: As the first time the Court declared an act of Congress unconstitutional, it established the power of judicial review, a key power that enables the judicial branch to remain a powerful check on the other branches of government.

Consider the dual nature of John Marshall’s opinion in Marbury v. Madison: On one hand, it limits the power of the courts, yet on the other it also expands their power. Explain the different aspects of the decision in terms of these contrasting results.

THE COURTS AND PUBLIC POLICY

Even with judicial review in place, the courts do not always stand ready just to throw out actions of the other branches of government. More broadly, as Marshall put it, “it is emphatically the province and duty of the judicial department to say what the law is.” 12 The United States has a common law system in which law is largely developed through binding judicial decisions.
With roots in medieval England, the system was inherited by the American colonies along with many other British traditions. 13 It stands in contrast to code law systems, which provide very detailed and comprehensive laws that do not leave room for much interpretation and judicial decision-making. With code law in place, as it is in many nations of the world, it is the job of judges to simply apply the law. But under common law, as in the United States, they interpret it. Often referred to as a system of judge-made law, common law provides the opportunity for the judicial branch to have stronger involvement in the process of law-making itself, largely through its ruling and interpretation on a case-by-case basis.

In their role as policymakers, Congress and the president tend to consider broad questions of public policy and their costs and benefits. But the courts consider specific cases with narrower questions, thus enabling them to focus more closely than other government institutions on the exact context of the individuals, groups, or issues affected by the decision. This means that while the legislature can make policy through statute, and the executive can form policy through regulations and administration, the judicial branch can also influence policy through its rulings and interpretations. As cases are brought to the courts, court decisions can help shape policy.

Consider health care, for example. In 2010, President Barack Obama signed into law the Patient Protection and Affordable Care Act (ACA), a statute that brought significant changes to the nation’s healthcare system. With its goal of providing more widely attainable and affordable health insurance and health care, “Obamacare” was hailed by some but soundly denounced by others as bad policy. People who opposed the law and understood that a congressional repeal would not happen any time soon looked to the courts for help. They challenged the constitutionality of the law in National Federation of Independent Business v. Sebelius, hoping the Supreme Court would overturn it. 14 The practice of judicial review enabled the law’s critics to exercise this opportunity, even though their hopes were ultimately dashed when, by a narrow 5–4 margin, the Supreme Court upheld the health care law as a constitutional extension of Congress’s power to tax.

Since this 2012 decision, the ACA has continued to face challenges, the most notable of which have also been decided by court rulings. It faced a setback in 2014, for instance, when the Supreme Court ruled in Burwell v. Hobby Lobby that, for religious reasons, some for-profit corporations could be exempt from the requirement that employers provide insurance coverage of contraceptives for their female employees. 15 But the ACA also attained a victory in King v. Burwell, when the Court upheld the ability of the federal government to provide tax credits for people who bought their health insurance through an exchange created by the law. 16

With each ACA case it has decided, the Supreme Court has served as the umpire, upholding the law and some of its provisions on one hand, but ruling some aspects of it unconstitutional on the other. Both supporters and opponents of the law have claimed victory and faced defeat. In each case, the Supreme Court has further defined and fine-tuned the law passed by Congress and the president, determining which parts stay and which parts go, thus having its say in the way the act has manifested itself, the way it operates, and the way it serves its public purpose.
In this same vein, the courts have become the key interpreters of the U.S. Constitution, continuously interpreting it and applying it to modern times and circumstances. For example, it was in 2015 that we learned a man’s threat to kill his ex-wife, written in rap lyrics and posted to her Facebook wall, was not a real threat and thus could not be prosecuted as a felony under federal law. 17 Certainly, when the Bill of Rights first declared that government could not abridge freedom of speech, its framers could never have envisioned Facebook—or any other modern technology for that matter. But freedom of speech, just like many constitutional concepts, has come to mean different things to different generations, and it is the courts that have designed the lens through which we understand the Constitution in modern times. It is often said that the Constitution changes less by amendment and more by the way it is interpreted. Rather than collecting dust on a shelf, the nearly 230-year-old document has come with us into the modern age, and the accepted practice of judicial review has helped carry it along the way.

COURTS AS A LAST RESORT

While the U.S. Supreme Court and state supreme courts exert power over many when reviewing laws or declaring acts of other branches unconstitutional, they become particularly important when an individual or group comes before them believing there has been a wrong. A citizen or group that feels mistreated can approach a variety of institutional venues in the U.S. system for assistance in changing policy or seeking support. Organizing protests, garnering special interest group support, and changing laws through the legislative and executive branches are all possible, but an individual is most likely to find the courts especially well-suited to analyzing the particulars of his or her case. The adversarial judicial system comes from the common law tradition: In a court case, it is one party versus the other, and it is up to an impartial person or group, such as the judge or jury, to determine which party prevails.

The federal court system is most often called upon when a case touches on constitutional rights. For example, when Samantha Elauf, a Muslim woman, was denied a job working for the clothing retailer Abercrombie & Fitch because a headscarf she wears as religious practice violated the company’s dress code, the Supreme Court ruled that her right to an accommodation of her religious practice had been violated, making it possible for her to sue the store for monetary damages.

Elauf had applied for an Abercrombie sales job in Oklahoma in 2008. Her interviewer recommended her based on her qualifications, but she was never given the job because the clothing retailer wanted to avoid having to accommodate her religious practice of wearing a headscarf, or hijab. In so doing, the Court ruled, Abercrombie violated Title VII of the Civil Rights Act of 1964, which prohibits employers from discriminating on the basis of race, color, religion, sex, or national origin, and requires them to accommodate religious practices. 18

Rulings like this have become particularly important for members of religious minority groups, including Muslims, Sikhs, and Jews, who now feel more protected from employment discrimination based on their religious attire, head coverings, or beards. 19 Such decisions illustrate how the expansion of individual rights and liberties for particular persons or groups over the years has come about largely as a result of court rulings made for individuals on a case-by-case basis.
Although the United States prides itself on the Declaration of Independence’s statement that “all men are created equal,” and “equal protection of the laws” is a written constitutional principle of the Fourteenth Amendment, the reality is less than perfect. But it is evolving. Changing times and technology have altered, and will continue to alter, the way fundamental constitutional rights are defined and applied, and the courts have proven themselves to be crucial in that definition and application.

Societal traditions, public opinion, and politics have often stood in the way of the full expansion of rights and liberties to different groups, and not everyone has agreed that these rights should be expanded as they have been by the courts. Schools were long segregated by race until the Court ordered desegregation in Brown v. Board of Education (1954), and even then, many stood in opposition and tried to block students at the entrances to all-white schools. 20 Factions have formed on opposite sides of the abortion and handgun debates, because many do not agree that women should have abortion rights or that individuals should have the right to a handgun. People disagree about whether members of the LGBT community should be allowed to marry or whether arrested persons should be read their rights, guaranteed an attorney, and/or have their cell phones protected from police search.

But the Supreme Court has ruled in favor of all these issues and others. Even without unanimous agreement among citizens, Supreme Court decisions have made all these possibilities a reality, a particularly important one for the individuals who become the beneficiaries (Table 13.1). The judicial branch has often made decisions the other branches were either unwilling or unable to make, and Hamilton was right in Federalist No. 78 when he said that without the courts exercising their duty to defend the Constitution, “all the reservations of particular rights or privileges would amount to nothing.”

Table 13.1 Examples of Supreme Court Cases Involving Individuals

Case Name                     Year   Court’s Decision
Brown v. Board of Education   1954   Public schools must be desegregated.
Gideon v. Wainwright          1963   Poor criminal defendants must be provided an attorney.
Miranda v. Arizona            1966   Criminal suspects must be read their rights.
Roe v. Wade                   1973   Women have a constitutional right to abortion.
McDonald v. Chicago           2010   An individual has the right to a handgun in his or her home.
Riley v. California           2014   Police may not search a cell phone without a warrant.
Obergefell v. Hodges          2015   Same-sex couples have the right to marry in all states.

Over time, the courts have made many decisions that have broadened the rights of individuals. This table is a sampling of some of these Supreme Court cases.

The courts seldom if ever grant rights to a person instantly and upon request. In a number of cases, they have expressed reluctance to expand rights without limit, and they still balance that expansion with the government’s need to govern, provide for the common good, and serve a broader societal purpose. For example, the Supreme Court has upheld the constitutionality of the death penalty, ruling that the Eighth Amendment does not prevent a person from being put to death for committing a capital crime and that the government may consider “retribution and the possibility of deterrence” when it seeks capital punishment for a crime that so warrants it. 21
In other words, there is a greater good—more safety and security—that may be more important than sparing the life of an individual who has committed a heinous crime. Yet the Court has also put limits on the ability to impose the death penalty, ruling, for example, that the government may not execute a person with cognitive disabilities, a person who was under eighteen at the time of the crime, or a child rapist who did not kill his victim. 22

So the job of the courts on any given issue is never quite done, as justices continuously keep their eye on government laws, actions, and policy changes as cases are brought to them and then decide whether those laws, actions, and policies can stand or must go. Even with an issue such as the death penalty, about which the Court has made several rulings, there is always the possibility that further judicial interpretation of what does (or does not) violate the Constitution will be needed.

This happened, for example, as recently as 2015 in a case involving the use of lethal injection as capital punishment in the state of Oklahoma, where death-row inmates are put to death through the use of three drugs—a sedative to bring about unconsciousness (midazolam), followed by two others that cause paralysis and stop the heart. A group of these inmates challenged the use of midazolam as unconstitutional. They argued that since it could not reliably cause unconsciousness, its use constituted an Eighth Amendment violation against cruel and unusual punishment and should be stopped by the courts. The Supreme Court rejected the inmates’ claims, ruling that Oklahoma could continue to use midazolam as part of its three-drug protocol. 23 But with four of the nine justices dissenting from that decision, a sharply divided Court leaves open a greater possibility of more death-penalty cases to come. The 2015–2016 session alone includes four such cases, challenging death-sentencing procedures in such states as Florida, Georgia, and Kansas. 24

Therefore, we should not underestimate the power and significance of the judicial branch in the United States. Today, the courts have become a relevant player, gaining enough clout and trust over the years to take their place as a separate yet coequal branch.

13.2 The Dual Court System

Learning Objectives

By the end of this section, you will be able to:
- Describe the dual court system and its three tiers
- Explain how you are protected and governed by different U.S. court systems
- Compare the positive and negative aspects of a dual court system

Before the writing of the U.S. Constitution and the establishment of the permanent national judiciary under Article III, the states had courts. Each of the thirteen colonies had also had its own courts, based on the British common law model. The judiciary today continues as a dual court system, with courts at both the national and state levels. Both levels have three basic tiers consisting of trial courts, appellate courts, and finally courts of last resort, typically called supreme courts, at the top (Figure 13.4).

To add to the complexity, the state and federal court systems sometimes intersect and overlap each other, and no two states are exactly alike when it comes to the organization of their courts. Since a state’s court system is created by the state itself, each one differs in structure, the number of courts, and even name and jurisdiction. Thus, the organization of state courts closely resembles but does not perfectly mirror the more clear-cut system found at the federal level. 25
Still, we can summarize the overall three-tiered structure of the dual court model and consider the relationship that the national and state sides share with the U.S. Supreme Court, as illustrated in Figure 13.4. Cases heard by the U.S. Supreme Court come from two primary pathways: (1) the circuit courts, or U.S. courts of appeals (after the cases have originated in the federal district courts), and (2) state supreme courts (when there is a substantive federal question in the case). In a later section of the chapter, we discuss the lower courts and the movement of cases through the dual court system to the U.S. Supreme Court. But first, to better understand how the dual court system operates, we consider the types of cases state and local courts handle and the types for which the federal system is better designed.

COURTS AND FEDERALISM

Courts hear two different types of disputes: criminal and civil. Under criminal law, governments establish rules and punishments; laws define conduct that is prohibited because it can harm others and impose punishment for committing such an act. Crimes are usually labeled felonies or misdemeanors based on their nature and seriousness; felonies are the more serious crimes. When someone commits a criminal act, the government (state or national, depending on which law has been broken) charges that person with a crime, and the case brought to court contains the name of the charging government, as in Miranda v. Arizona, discussed below. 26 On the other hand, civil law cases involve two or more private (non-government) parties, at least one of whom alleges harm or injury committed by the other. In both criminal and civil matters, the courts decide the remedy and resolution of the case, and in all cases, the U.S. Supreme Court is the final court of appeal.

Link to Learning

This site provides an interesting challenge: Look at the different cases presented and decide whether each would be heard in the state or federal courts. You can check your results at the end.

Although the Supreme Court tends to draw the most public attention, it typically hears fewer than one hundred cases every year. In fact, the entire federal side—both trial and appellate—handles proportionately very few cases, with about 90 percent of all cases in the U.S. court system being heard at the state level. 27 The several hundred thousand cases handled every year on the federal side pale in comparison to the several million handled by the states.

State courts really are the core of the U.S. judicial system, and they are responsible for a huge area of law. Most crimes and criminal activity, such as robbery, rape, and murder, are violations of state laws, and cases are thus heard by state courts. State courts also handle civil matters; personal injury, malpractice, divorce, family, juvenile, probate, and contract disputes and real estate cases, to name just a few, are usually state-level cases.

The federal courts, on the other hand, will hear any case that involves a foreign government, patent or copyright infringement, Native American rights, maritime law, bankruptcy, or a controversy between two or more states. Cases arising from activities across state lines (interstate commerce) are also subject to federal court jurisdiction, as are cases in which the United States is a party. A dispute between two parties not from the same state or nation and in which damages of at least $75,000 are claimed is handled at the federal level. Such a case is known as a diversity of citizenship case. 28
However, some cases cut across the dual court system and may end up being heard in both state and federal courts. Any case has the potential to make it to the federal courts if it invokes the U.S. Constitution or federal law. It could be a criminal violation of federal law, such as assault with a gun, the illegal sale of drugs, or bank robbery. Or it could be a civil violation of federal law, such as employment discrimination or securities fraud. Also, any perceived violation of a liberty protected by the Bill of Rights, such as freedom of speech or the protection against cruel and unusual punishment, can be argued before the federal courts. A summary of the basic jurisdictions of the state and federal sides is provided in Table 13.2.

Table 13.2 Jurisdiction of the Courts: State vs. Federal

State Courts
- Hear most day-to-day cases, covering 90 percent of all cases
- Hear both civil and criminal matters
- Help the states retain their own sovereignty in judicial matters over their state laws, distinct from the national government

Federal Courts
- Hear cases that involve a “federal question,” involving the Constitution, federal laws or treaties, or a “federal party” in which the U.S. government is a party to the case
- Hear both civil and criminal matters, although many criminal cases involving federal law are tried in state courts
- Hear cases that involve “interstate” matters, “diversity of citizenship” involving parties of two different states, or between a U.S. citizen and a citizen of another nation (and with a damage claim of at least $75,000)

While we may certainly distinguish between the two sides of a jurisdiction, looking on a case-by-case basis will sometimes complicate the seemingly clear-cut division between the state and federal sides. It is always possible that issues of federal law may start in the state courts before they make their way over to the federal side. And any case that starts out at the state and/or local level on state matters can make it into the federal system on appeal—but only on points that involve a federal law or question, and usually after all avenues of appeal in the state courts have been exhausted. 29

Consider the case Miranda v. Arizona. 30 Ernesto Miranda, arrested for kidnapping and rape, which are violations of state law, was easily convicted and sentenced to prison after a key piece of evidence—his own signed confession—was presented at trial in the Arizona court. On appeal first to the Arizona Supreme Court and then to the U.S. Supreme Court to exclude the confession on the grounds that its admission was a violation of his constitutional rights, Miranda won the case. By a slim 5–4 margin, the justices ruled that the confession had to be excluded from evidence because in obtaining it, the police had violated Miranda’s Fifth Amendment right against self-incrimination and his Sixth Amendment right to an attorney. In the opinion of the Court, because of the coercive nature of police interrogation, no confession can be admissible unless a suspect is made aware of his rights and then in turn waives those rights. For this reason, Miranda’s original conviction was overturned.

Yet the Supreme Court considered only the violation of Miranda’s constitutional rights, not whether he was guilty of the crimes with which he was charged. So there were still crimes committed for which Miranda had to face charges.
He was therefore retried in state court in 1967, the second time without the confession as evidence, found guilty again based on witness testimony and other evidence, and sent to prison.

Miranda’s story is a good example of the tandem operation of the state and federal court systems. His guilt or innocence of the crimes was a matter for the state courts, whereas the constitutional questions raised by his trial were a matter for the federal courts. Although he won his case before the Supreme Court, which established a significant precedent that criminal suspects must be read their so-called Miranda rights before police questioning, the victory did not do much for Miranda himself. After serving prison time, he was stabbed to death in a bar fight in 1976 while out on parole, and due to a lack of evidence, no one was ever convicted in his death.

THE IMPLICATIONS OF A DUAL COURT SYSTEM

From an individual’s perspective, the dual court system has both benefits and drawbacks. On the plus side, each person has more than just one court system ready to protect his or her rights. The dual court system provides alternate venues in which to appeal for assistance, as Ernesto Miranda’s case illustrates. The U.S. Supreme Court found for Miranda an extension of his Fifth Amendment protections—a constitutional right to remain silent when faced with police questioning. It was a right he could not get solely from the state courts in Arizona, but one those courts had to honor nonetheless.

The fact that a minority voice like Miranda’s can be heard in court, and that his or her grievance can be resolved in his or her favor if warranted, says much about the role of the judiciary in a democratic republic. In Miranda’s case, a resolution came from the federal courts, but it can also come from the state side. In fact, the many differences among the state courts themselves may enhance an individual’s potential to be heard. State courts vary in the degree to which they take on certain types of cases or issues, give access to particular groups, or promote certain interests. If a particular issue or topic is not taken up in one place, it may be handled in another, giving rise to many different opportunities for an interest to be heard somewhere across the nation. In their research, Paul Brace and Melinda Hall found that state courts are important instruments of democracy because they provide different alternatives and varying arenas for political access. They wrote, “Regarding courts, one size does not fit all, and the republic has survived in part because federalism allows these critical variations.” 31

But the existence of the dual court system and variations across the states and nation also mean that there are different courts in which a person could face charges for a crime or for a violation of another person’s rights. Except for the fact that the U.S. Constitution binds judges and justices in all the courts, it is state law that governs the authority of state courts, so judicial rulings about what is legal or illegal may differ from state to state. These differences are particularly pronounced when the laws across the states and the nation are not the same, as we see with marijuana laws today.

Finding a Middle Ground

Marijuana Laws and the Courts

There are so many differences in marijuana laws between states, and between the states and the national government, that uniform application of treatment in courts across the nation is nearly impossible (Figure 13.5).
What is legal in one state may be illegal in another, and state laws do not cross state geographic boundary lines—but people do. What’s more, a person residing in any of the fifty states is still subject to federal law. For example, a person over the age of twenty-one may legally buy marijuana for recreational use in four states and for medicinal purposes in nearly half the states, but could face charges—and time in court—for possession in a neighboring state where marijuana use is not legal. Under federal law, too, marijuana is still regulated as a Schedule 1 (most dangerous) drug, and federal authorities often find themselves pitted against states that have legalized it. Such differences can lead, somewhat ironically, to arrests and federal criminal charges for people who have marijuana in states where it is legal, or to federal raids on growers and dispensaries that would otherwise be operating legally under their state’s law.

Differences among the states have also prompted a number of lawsuits against states with legalized marijuana, as people opposed to those state laws seek relief from (none other than) the courts. They want the courts to resolve the issue, which has left in its wake contradictions and conflicts between states that have legalized marijuana and those that have not, as well as conflicts between states and the national government. These lawsuits include at least one filed by the states of Nebraska and Oklahoma against Colorado. Citing concerns over cross-border trafficking, difficulties with law enforcement, and violations of the Constitution’s supremacy clause, Nebraska and Oklahoma have petitioned the U.S. Supreme Court to intervene and rule on the legality of Colorado’s marijuana law, hoping to get it overturned. 32 The Supreme Court has yet to take up the case.

How do you think differences among the states and differences between federal and state law regarding marijuana use can affect the way a person is treated in court? What, if anything, should be done to rectify the disparities in application of the law across the nation?

Where you are physically located can affect not only what is allowable and what is not, but also how cases are judged. For decades, political scientists have confirmed that political culture affects the operation of government institutions, and when we add to that the differing political interests and cultures at work within each state, we end up with court systems that vary greatly in their judicial and decision-making processes. 33 Each state court system operates with its own individual set of biases. People with varying interests, ideologies, behaviors, and attitudes run the disparate legal systems, so the results they produce are not always the same. Moreover, the selection method for judges at the state and local level varies. In some states, judges are elected rather than appointed, which can affect their rulings. Just as the laws vary across the states, so do judicial rulings and interpretations, and the judges who make them. That means there may not be uniform application of the law—even of the same law—nationwide.

We are somewhat bound by geography and do not always have the luxury of picking and choosing the venue for our particular case. So, while having such a decentralized and varied set of judicial operations affects the kinds of cases that make it to the courts and gives citizens alternate locations to get their case heard, it may also lead to disparities in the way they are treated once they get there.
13.3 The Federal Court System

Learning Objectives
By the end of this section, you will be able to:
Describe the differences between the U.S. district courts, circuit courts, and the Supreme Court
Explain the significance of precedent in the courts’ operations
Describe how judges are selected for their positions

Congress has made numerous changes to the federal judicial system throughout the years, but the three-tiered structure of the system is quite clear-cut today. Federal cases typically begin at the lowest federal level, the district (or trial) court. Losing parties may appeal their case to the higher courts—first to the circuit courts, or U.S. courts of appeals, and then, if chosen by the justices, to the U.S. Supreme Court. Decisions of the higher courts are binding on the lower courts. The precedent set by each ruling, particularly by the Supreme Court’s decisions, both builds on principles and guidelines set by earlier cases and frames the ongoing operation of the courts, steering the direction of the entire system. Reliance on precedent has enabled the federal courts to operate with a logic and consistency that has helped validate their role as the key interpreters of the Constitution and the law—a legitimacy particularly vital in the United States, where citizens do not elect federal judges and justices but are still subject to their rulings.

THE THREE TIERS OF FEDERAL COURTS
There are ninety-four U.S. district courts in the fifty states and U.S. territories, of which eighty-nine are in the states (at least one in each state). The others are in Washington, DC; Puerto Rico; Guam; the U.S. Virgin Islands; and the Northern Mariana Islands. These are the trial courts of the national system, in which federal cases are tried, witness testimony is heard, and evidence is presented. No district court crosses state lines, and a single judge oversees each one. Some cases are heard by a jury, and some are not. There are thirteen U.S. courts of appeals, or circuit courts: eleven across the nation and two in Washington, DC (the DC circuit and the federal circuit courts), as illustrated in Figure 13.6. Each court is overseen by a rotating panel of three judges who do not hold trials but instead review the rulings of the trial (district) courts within their geographic circuit. As authorized by Congress, there are currently 179 circuit court judges. The circuit courts are often referred to as the intermediate appellate courts of the federal system, since their rulings can be appealed to the U.S. Supreme Court. Moreover, different circuits can hold differing legal and cultural views, which can lead to differing outcomes on similar legal questions. In such scenarios, clarification from the U.S. Supreme Court might be needed. Today’s federal court system was not an overnight creation; it has been changing and transitioning for more than two hundred years through various acts of Congress. Since district courts are not called for in Article III of the Constitution, Congress established them and narrowly defined their jurisdiction, at first limiting them to handling only cases that arose within the district. Beginning in 1789, when there were just thirteen, the district courts became the basic organizational units of the federal judicial system. Gradually over the next hundred years, Congress expanded their jurisdiction, in particular over federal questions, which enables them to review constitutional issues and matters of federal law. In the Judicial Code of 1911, Congress made the U.S.
district courts the sole general-jurisdiction trial courts of the federal judiciary, a role they had previously shared with the circuit courts. 34 The circuit courts started out as the trial courts for most federal criminal cases and for some civil suits, including those initiated by the United States and those involving citizens of different states. But early on, they did not have their own judges; the local district judge and two Supreme Court justices formed each circuit court panel. (That is how the name “circuit” arose—judges in the early circuit courts traveled from town to town to hear cases, following prescribed paths or circuits to arrive at destinations where they were needed. 35) Circuit courts also exercised appellate jurisdiction (meaning they received appeals of federal district court cases) over most civil suits that originated in the district courts; however, that role ended in 1891, and their appellate jurisdiction was turned over to the newly created circuit courts, or U.S. courts of appeals. The original circuit courts—the ones that did not have “of appeals” added to their name—were abolished in 1911, fully replaced by these new circuit courts of appeals. 36 While we often focus primarily on the district and circuit courts of the federal system, other federal trial courts exist that have more specialized jurisdictions, such as the Court of International Trade, Court of Federal Claims, and U.S. Tax Court. Specialized federal appeals courts include the Court of Appeals for the Armed Forces and the Court of Appeals for Veterans Claims. Cases from any of these courts may also be appealed to the Supreme Court, although that result is very rare. On the U.S. Supreme Court, there are nine justices—one chief justice and eight associate justices. Circuit court panels contain three judges each, whereas federal district courts have just one judge each. As the national court of last resort for all other courts in the system, the Supreme Court plays a vital role in setting the standards of interpretation that the lower courts follow. The Supreme Court’s decisions are binding across the nation and establish the precedent by which future cases are resolved in all the system’s tiers. The U.S. court system operates on the principle of stare decisis (Latin for “stand by things decided”), which means that today’s decisions are based largely on rulings from the past, and tomorrow’s rulings rely on what is decided today. Stare decisis is especially important in the U.S. common law system, in which the consistency of precedent ensures greater certainty and stability in law and constitutional interpretation, and it also contributes to the solidity and legitimacy of the court system itself. As former Supreme Court justice Benjamin Cardozo summarized it years ago, “Adherence to precedent must then be the rule rather than the exception if litigants are to have faith in the even-handed administration of justice in the courts.” 37

Link to Learning
With a focus on federal courts and the public, this website reveals the different ways the federal courts affect the lives of U.S. citizens and how those citizens interact with the courts.

When the legal facts of one case are the same as the legal facts of another, stare decisis dictates that they should be decided the same way, and judges are reluctant to disregard precedent without justification. However, that does not mean there is no flexibility or that new precedents or rulings can never be created. They often are.
Certainly, court interpretations can change as times and circumstances change—and as the courts themselves change when new judges are selected and take their place on the bench. For example, the membership of the Supreme Court had changed entirely between Plessy v. Ferguson (1896), which established the doctrine of “separate but equal,” and Brown v. Board of Education (1954), which required integration. 38

THE SELECTION OF JUDGES
Judges fulfill a vital role in the U.S. judicial system and are carefully selected. At the federal level, the president nominates a candidate to a judgeship or justice position, and the nominee must be confirmed by a majority vote in the U.S. Senate, a function of the Senate’s “advice and consent” role. All judges and justices in the national courts serve lifetime terms of office. The president sometimes chooses nominees from a list of candidates maintained by the American Bar Association, a national professional organization of lawyers. 39 The president’s nominee is then discussed (and sometimes hotly debated) in the Senate Judiciary Committee. After a committee vote, the candidate must be confirmed by a majority vote of the full Senate. He or she is then sworn in, taking an oath of office to uphold the Constitution and the laws of the United States. When a vacancy occurs in a lower federal court, by custom, the president consults with that state’s U.S. senators before making a nomination. Through such senatorial courtesy, senators exert considerable influence on the selection of judges in their state, especially those senators who share a party affiliation with the president. In many cases, a senator can block a proposed nominee just by voicing his or her opposition. Thus, a presidential nominee typically does not get far without the support of the senators from the nominee’s home state. Most presidential appointments to the federal judiciary go unnoticed by the public, but when a president has the rarer opportunity to make a Supreme Court appointment, it draws more attention. That is particularly true now, when many people get their news primarily from the Internet and social media. It was not surprising to see not only television news coverage but also blogs and tweets about President Obama’s most recent nominees to the high court, Sonia Sotomayor and Elena Kagan (Figure 13.7). Presidential nominees for the courts typically reflect the chief executive’s own ideological position. With a confirmed nominee serving a lifetime appointment, a president’s ideological legacy has the potential to live on long after the end of his or her term. 40 President Obama surely considered the ideological leanings of his two Supreme Court appointees, and both Sotomayor and Kagan have consistently ruled in a more liberal ideological direction. The timing of the two nominations also dovetailed nicely with the Democratic Party’s gaining control of the Senate in the 111th Congress of 2009–2011, which helped guarantee their confirmations. But some nominees turn out to be surprises or end up ruling in ways that the president who nominated them did not anticipate. Democratic-appointed judges sometimes side with conservatives, just as Republican-appointed judges sometimes side with liberals. Republican Dwight D. Eisenhower reportedly called his nomination of Earl Warren as chief justice—in an era that saw substantial broadening of civil and criminal rights—“the biggest damn fool mistake” he had ever made.
Sandra Day O’Connor, nominated by Republican president Ronald Reagan, often became a champion for women’s rights. David Souter, nominated by Republican George H. W. Bush, more often than not sided with the Court’s liberal wing. And even on the present-day Court, Anthony Kennedy, a Reagan appointee, has become known as the Court’s swing vote, sometimes siding with the more conservative justices but sometimes not. Current chief justice John Roberts, though most typically an ardent member of the Court’s more conservative wing, has twice voted to uphold provisions of the Affordable Care Act. Once a justice has started his or her lifetime tenure on the Court and years begin to pass, many people simply forget which president nominated him or her. For better or worse, sometimes it is only a controversial nominee who leaves a president’s legacy behind. For example, the Reagan presidency is often remembered for two controversial nominees to the Supreme Court—Robert Bork and Douglas Ginsburg, the former accused of taking an overly conservative and “extremist view of the Constitution” 41 and the latter of having used marijuana while a student and then a professor at Harvard University (Figure 13.8). President George W. Bush’s nomination of Harriet Miers was withdrawn in the face of criticism from both sides of the political spectrum that questioned her ideological leanings and especially her qualifications, suggesting she was not ready for the job. 42 After Miers’ withdrawal, the Senate went on to confirm Bush’s subsequent nomination of Samuel Alito, who remains on the Court today. The 2016 presidential election between Hillary Clinton and Donald Trump was especially important because the next president is likely to choose three justices. Presidential legacy and controversial nominations notwithstanding, there is one certainty about the overall look of the federal court system: What was once a predominantly white, male, Protestant institution is today much more diverse. As a look at Table 13.3 reveals, the membership of the Supreme Court has changed with the passing years.

Table 13.3 Supreme Court Justice Firsts
First Catholic: Roger B. Taney (nominated in 1836)
First Jew: Louis J. Brandeis (1916)
First (and only) former U.S. president: William Howard Taft (1921)
First African American: Thurgood Marshall (1967)
First woman: Sandra Day O’Connor (1981)
First Hispanic American: Sonia Sotomayor (2009)

The lower courts are also more diverse today. In the past few decades, the U.S. judiciary has expanded to include more women and minorities at both the federal and state levels. 43 However, the number of women and people of color on the courts still lags behind the overall number of white men. As of 2009, the federal judiciary consisted of 70 percent white men, 15 percent white women, and between 1 and 8 percent African American, Hispanic American, and Asian American men and women. 44

13.4 The Supreme Court

Learning Objectives
By the end of this section, you will be able to:
Analyze the structure and important features of the Supreme Court
Explain how the Supreme Court selects cases to hear
Discuss the Supreme Court’s processes and procedures

The Supreme Court of the United States, sometimes abbreviated SCOTUS, is a one-of-a-kind institution.
While a look at the Supreme Court typically focuses on the nine justices themselves, they represent only the top layer of an entire branch of government that includes many administrators, lawyers, and assistants who contribute to and help run the overall judicial system. The Court has its own set of rules for choosing cases, and it follows a unique set of procedures for hearing them. Its decisions not only affect the outcome of the individual case before the justices, but they also create lasting impacts on legal and constitutional interpretation for the future.

THE STRUCTURE OF THE SUPREME COURT
The original court in 1789 had six justices, but Congress set the number at nine in 1869, and it has remained there ever since. There is one chief justice, who is the lead or highest-ranking judge on the Court, and eight associate justices. All nine serve lifetime terms, after successful nomination by the president and confirmation by the Senate. The current Court is fairly diverse in terms of gender, religion (Christians and Jews), ethnicity, and ideology, as well as length of tenure. Some justices have served for three decades, whereas others were only recently appointed by President Obama. Figure 13.9 lists the names of the eight justices serving on the Court as of November 2016, along with their year of appointment and the president who nominated them. With the death of Associate Justice Antonin Scalia in February 2016, there remain three current justices who are considered part of the Court’s more conservative wing—Chief Justice Roberts and Associate Justices Thomas and Alito—while four are considered more liberal-leaning—Justices Ginsburg, Breyer, Sotomayor, and Kagan (Figure 13.10). Justice Kennedy has become known as the “swing” vote, particularly on decisions like the Court’s same-sex marriage rulings in 2015, because he sometimes takes a more liberal position and sometimes a more conservative one. Had the Democrats retained the presidency in 2016, the replacement for Scalia’s spot on the Court could have swung many key votes in a moderate or liberal direction. However, with Republican Donald Trump winning the election and the Republicans retaining Senate control, it is likely that the replacement in 2017 will be more conservative.

Link to Learning
While not formally connected with the public the way elected leaders are, the Supreme Court nonetheless offers visitors a great deal of information at its official website. For unofficial summaries of recent Supreme Court cases or news about the Court, visit the Oyez website or SCOTUSblog.

In fact, none of the justices works completely in an ideological bubble. While their numerous opinions have revealed certain ideological tendencies, they still consider each case as it comes to them, and they don’t always rule in a consistently predictable or expected way. Furthermore, they don’t work exclusively on their own. Each justice has three or four law clerks, recent law school graduates who temporarily work for him or her, do research, help prepare the justice with background information, and assist with the writing of opinions. The law clerks’ work and recommendations influence whether the justices will choose to hear a case, as well as how they will rule. As the profile below reveals, the role of the clerks is as significant as it is varied.
Insider Perspective
Profile of a United States Supreme Court Clerk
A Supreme Court clerkship is one of the most sought-after legal positions, giving “thirty-six young lawyers each year a chance to leave their fingerprints all over constitutional law.” 45 A number of current and former justices were themselves clerks, including Chief Justice John Roberts, Justices Stephen Breyer and Elena Kagan, and former chief justice William Rehnquist. Supreme Court clerks are often reluctant to share insider information about their experiences, but it is always fascinating and informative to hear about their jobs. Former clerk Philippa Scarlett, who worked for Justice Stephen Breyer, describes four main responsibilities: 46
Review the cases: Clerks participate in a “cert. pool” (short for writ of certiorari, a request that the lower court send up its record of the case for review) and make recommendations about which cases the Court should choose to hear.
Prepare the justices for oral argument: Clerks analyze the filed briefs (short arguments explaining each party’s side of the case) and the law at issue in each case waiting to be heard.
Research and draft judicial opinions: Clerks do detailed research to assist justices in writing an opinion, whether it is the majority opinion or a dissenting or concurring opinion.
Help with emergencies: Clerks also assist the justices in deciding on emergency applications to the Court, many of which are applications by prisoners to stay their death sentences and are sometimes submitted within hours of a scheduled execution.
Explain the role of law clerks in the Supreme Court system. What is your opinion about the role they play and the justices’ reliance on them?

HOW THE SUPREME COURT SELECTS CASES
The Supreme Court begins its annual session on the first Monday in October and ends late the following June. Every year, there are literally thousands of people who would like to have their case heard before the Supreme Court, but the justices will select only a handful to be placed on the docket, which is the list of cases scheduled on the Court’s calendar. The Court typically accepts fewer than 2 percent of the up to ten thousand cases it is asked to review every year. 47 Case names, written in italics, list the name of a petitioner versus a respondent, as in Roe v. Wade, for example. 48 For a case on appeal, you can tell which party lost at the lower level of court by looking at the case name: The party unhappy with the decision of the lower court is the one bringing the appeal and is thus the petitioner, or the first-named party in the case. For example, in Brown v. Board of Education (1954), Oliver Brown was one of the thirteen parents who brought suit against the Topeka public schools for discrimination based on racial segregation. Most often, the petitioner is asking the Supreme Court to grant a writ of certiorari, a request that the lower court send up its record of the case for review. Once a writ of certiorari (cert. for short) has been granted, the case is scheduled on the Court’s docket. The Supreme Court exercises discretion in the cases it chooses to hear, but four of the nine justices must vote to accept a case. This is called the Rule of Four. For decisions about cert., the Court’s Rule 10 (Considerations Governing Review on Writ of Certiorari) takes precedence. 49 The Court is more likely to grant certiorari when there is a conflict on an issue between or among the lower courts.
Examples of conflicts include (1) conflicting decisions among different courts of appeals on the same matter, (2) decisions by an appeals court or a state court conflicting with precedent, and (3) state court decisions that conflict with federal decisions. Occasionally, the Court will fast-track a case that has special urgency, such as Bush v. Gore in the wake of the 2000 election. 50 Past research indicated that the amount of interest-group activity surrounding a case before it is granted cert. had a significant impact on whether the Supreme Court put the case on its agenda: the more activity, the more likely the case would be placed on the docket. 51 But more recent research broadens that perspective, suggesting that too much interest-group activity when the Court is considering a case for its docket may actually have a diminishing impact and that external actors may have less influence on the work of the Court than they have had in the past. 52 Still, the Court takes into consideration external influences, not just from interest groups but also from the public, from media attention, and from a key governmental actor—the solicitor general. The solicitor general is the lawyer who represents the federal government before the Supreme Court: He or she decides which cases (in which the United States is a party) should be appealed from the lower courts and personally approves each one presented (Figure 13.11). Most of the cases the solicitor general brings to the Court will be given a place on the docket. About two-thirds of all Supreme Court cases involve the federal government. 53 The solicitor general determines the position the government will take on a case. The attorneys of his or her office prepare and file the petitions and briefs, and the solicitor general (or an assistant) presents the oral arguments before the Court. In other cases in which the United States is not the petitioner or the respondent, the solicitor general may choose to intervene or comment as a third party. Before a case is granted cert., the justices will sometimes ask the solicitor general to comment on or file a brief in the case, indicating their potential interest in getting it on the docket. The solicitor general may also recommend that the justices decline to hear a case. Though research has shown that the solicitor general’s special influence on the Court is not unlimited, it remains quite significant. To be sure, the Court does not always agree with the solicitor general, and “while justices are not lemmings who will unwittingly fall off legal cliffs for tortured solicitor general recommendations, they nevertheless often go along with them even when we least expect them to.” 54 Some have credited Donald B. Verrilli, the solicitor general under President Obama, with holding special sway over the five-justice majority ruling on same-sex marriage in June 2015. Indeed, his position that denying homosexuals the right to marry would mean “thousands and thousands of people are going to live out their lives and go to their deaths without their states ever recognizing the equal dignity of their relationships” became a foundational point of the Court’s opinion, written by Justice Kennedy. 55
With such power over the Court, the solicitor general is sometimes referred to as “the tenth justice.”

SUPREME COURT PROCEDURES
Once a case has been placed on the docket, briefs, or short arguments explaining each party’s view of the case, must be submitted—first by the petitioner putting forth his or her case, then by the respondent. After initial briefs have been filed, both parties may file subsequent briefs in response to the first. Likewise, people and groups that are not party to the case but are interested in its outcome may file an amicus curiae (“friend of the court”) brief giving their opinion, analysis, and recommendations about how the Court should rule. Interest groups in particular can become heavily involved in trying to influence the judiciary by filing amicus briefs—both before and after a case has been granted cert. And, as noted earlier, if the United States is not party to a case, the solicitor general may file an amicus brief on the government’s behalf. With briefs filed, the Court hears oral arguments in cases from October through April. The proceedings are quite ceremonial. When the Court is in session, the robed justices make a formal entrance into the courtroom to a standing audience and the sound of a banging gavel. The Court’s marshal announces their entrance with the traditional chant: “The Honorable, the Chief Justice and the Associate Justices of the Supreme Court of the United States. Oyez! Oyez! Oyez! [Hear ye!] All persons having business before the Honorable, the Supreme Court of the United States, are admonished to draw near and give their attention, for the Court is now sitting. God save the United States and this Honorable Court!” 56 It has not gone unnoticed that the Court, which has defended the First Amendment’s religious protection and the traditional separation of church and state, opens its every public session with a mention of God. During oral arguments, each side’s lawyers have thirty minutes to make their legal case, though the justices often interrupt the presentations with questions. The justices consider oral arguments not as a forum for a lawyer to restate the merits of his or her case as written in the briefs, but as an opportunity to get answers to any questions they may have. 57 When the United States is party to a case, the solicitor general (or one of his or her assistants) will argue the government’s position; even in other cases, the solicitor general may still be given time to express the government’s position on the dispute. When oral arguments have been concluded, the justices have to decide the case, and they do so in conference, which is held in private twice a week when the Court is in session and once a week when it is not. The conference is also a time to discuss petitions for certiorari, but for those cases already heard, each justice may state his or her views on the case, ask questions, or raise concerns. The chief justice speaks first about a case, then each justice speaks in turn, in descending order of seniority, ending with the most recently appointed justice. 58 The justices take an initial vote in private before the decision is officially announced to the public. Oral arguments are open to the public, but cameras are not allowed in the courtroom, so the only picture we get is one drawn by an artist’s hand, an illustration or rendering.
Cameras seem to be everywhere today, especially to provide security in places such as schools, public buildings, and retail stores, so the lack of live coverage of Supreme Court proceedings may seem unusual or old-fashioned. Over the years, groups have called for the Court to let go of this tradition and open its operations to more “sunshine” and greater transparency. Nevertheless, the justices have resisted the pressure and remain neither filmed nor photographed during oral arguments. 59

13.5 Judicial Decision-Making and Implementation by the Supreme Court

Learning Objectives
By the end of this section, you will be able to:
Describe how the Supreme Court decides cases and issues opinions
Identify the various influences on the Supreme Court
Explain how the judiciary is checked by the other branches of government

The courts are the least covered and least publicly known of the three branches of government. The inner workings of the Supreme Court and its day-to-day operations certainly do not get as much public attention as its rulings, and only a very small number of its announced decisions are enthusiastically discussed and debated. The Court’s 2015 decision on same-sex marriage was the exception, not the rule, since most court opinions are filed away quietly in the United States Reports, sought out mostly by judges, lawyers, researchers, and others with a particular interest in reading or studying them. Thus, we sometimes envision the justices formally robed and cloistered away in their chambers, unaffected by the world around them, but the reality is that they are not that isolated, and a number of outside factors influence their decisions. Though they lack their own mechanism for enforcement of their rulings and their power remains checked and balanced by the other branches, the effect of the justices’ opinions on the workings of government, politics, and society in the United States is much more significant than the attention they attract might indicate.

JUDICIAL OPINIONS
Every Court opinion sets precedent for the future. The Supreme Court’s decisions are not always unanimous, however; the published majority opinion, or explanation of the justices’ decision, is the one with which a majority of the nine justices agree. It can represent a vote as narrow as five in favor to four against. A tied vote is rare but can occur at a time of vacancy, absence, or abstention from a case, perhaps where there is a conflict of interest. In the event of a tied vote, the decision of the lower court stands. Most typically, though, the Court will put forward a majority opinion. If he or she is in the majority, the chief justice decides who will write the opinion. If not, then the most senior justice ruling with the majority chooses the writer. Likewise, the most senior justice in the dissenting group can assign a member of that group to write the dissenting opinion; however, any justice who disagrees with the majority may write a separate dissenting opinion. If a justice agrees with the outcome of the case but not with the majority’s reasoning in it, that justice may write a concurring opinion. Court decisions are released at different times throughout the Court’s term, but all opinions are announced publicly before the Court adjourns for the summer. Some of the most controversial and hotly debated rulings are released near or on the last day of the term and thus are avidly anticipated (Figure 13.12).

Link to Learning
One of the most prominent writers on judicial decision-making in the U.S.
system is Dr. Forrest Maltzman of George Washington University. Maltzman’s articles, chapters, and manuscripts, along with articles by other prominent authors in the field, are downloadable at this site.

INFLUENCES ON THE COURT
Many of the same players who influence whether the Court will grant cert. in a case, discussed earlier in this chapter, also play a role in its decision-making, including law clerks, the solicitor general, interest groups, and the mass media. But additional legal, personal, ideological, and political influences weigh on the Supreme Court and its decision-making process. On the legal side, courts, including the Supreme Court, cannot make a ruling unless they have a case before them, and even with a case, courts must rule on its facts. Although the courts’ role is interpretive, judges and justices are still constrained by the facts of the case, the Constitution, the relevant laws, and the courts’ own precedent. A justice’s decisions are influenced by how he or she defines his or her role as a jurist. Some justices believe strongly in judicial activism, or the need to defend individual rights and liberties, and aim to stop actions and laws by other branches of government that they see as infringing on these rights. A judge or justice who views the role with an activist lens is more likely to use his or her judicial power to broaden personal liberty, justice, and equality. Still others believe in judicial restraint, which leads them to defer decisions (and thus policymaking) to the elected branches of government and stay focused on a narrower interpretation of the Bill of Rights. These justices are less likely to strike down actions or laws as unconstitutional and are less likely to focus on the expansion of individual liberties. While it is typically liberal actions that are described as unnecessarily activist, conservative decisions can be activist as well. Critics of the judiciary often deride activist courts for involving themselves too heavily in matters they believe are better left to the elected legislative and executive branches. However, as Justice Anthony Kennedy has said, “An activist court is a court that makes a decision you don’t like.” 60 Justices’ personal beliefs and political attitudes also matter in their decision-making. Although we may prefer to believe a justice can leave political ideology or party identification outside the doors of the courtroom, the reality is that a more liberal-thinking judge may tend to make more liberal decisions and a more conservative-leaning judge may tend toward more conservative ones. Although this is not true 100 percent of the time, and an individual’s decisions are sometimes a cause for surprise, the influence of ideology is real, and at a minimum, it often guides presidents to aim for nominees who mirror their own political or ideological image. It is likely not possible to find a potential justice who is completely apolitical. And the courts themselves are affected by another “court”—the court of public opinion. Though somewhat isolated from politics and the volatility of the electorate, justices may still be swayed by special-interest pressure, the leverage of elected or other public officials, the mass media, and the general public. As times change and the opinions of the population change, the courts’ interpretation is likely to keep up with those changes, lest the courts face the danger of losing their own relevance.
Take, for example, rulings on sodomy laws: In 1986, the Supreme Court upheld the constitutionality of the State of Georgia’s ban on sodomy, 61 but it reversed its decision seventeen years later, invalidating sodomy laws in Texas and thirteen other states. 62 No doubt the Court considered what had been happening nationwide: In the 1960s, sodomy was banned in all the states. By 1986, that number had been reduced by about half. By 2002, thirty-six states had repealed their sodomy laws, and most states were only selectively enforcing them. Changes in state laws, along with an emerging LGBT movement, no doubt swayed the Court and led it to the reversal of its earlier ruling with the 2003 decision, Lawrence v. Texas (Figure 13.13). 63 Heralded by advocates of gay rights as important progress toward greater equality, the ruling in Lawrence v. Texas illustrates that the Court is willing to reflect upon what is going on in the world. Even with their heavy reliance on precedent and reluctance to throw out past decisions, justices are not completely inflexible and do tend to change and evolve with the times.

Get Connected!
The Importance of Jury Duty
Since judges and justices are not elected, we sometimes consider the courts removed from the public; however, this is not always the case, and there are times when average citizens may get involved with the courts firsthand as part of their decision-making process at either the state or federal level. At some point, if you haven’t already been called, you may receive a summons for jury duty from your local court system. You may be asked to serve on federal jury duty, such as U.S. district court duty or federal grand jury duty, but service at the local level, in the state court system, is much more common. While your first reaction may be to start planning a way to get out of it, participating in jury service is vital to the operation of the judicial system, because it provides individuals in court the chance to be heard and to be tried fairly by a group of their peers. And jury duty has benefits for those who serve as well. You will no doubt come away better informed about how the judicial system works and ready to share your experiences with others. Who knows? You might even get an unexpected surprise, as some citizens in Dallas, Texas, did recently when former President George W. Bush showed up to serve jury duty with them. Have you ever been called to jury duty? Describe your experience. What did you learn about the judicial process? What advice would you give to someone called to jury duty for the first time? If you’ve never been called to jury duty, what questions do you have for those who have?

THE COURTS AND THE OTHER BRANCHES OF GOVERNMENT
Both the executive and legislative branches check and balance the judiciary in many different ways. The president can leave a lasting imprint on the bench through his or her nominations, even long after leaving office. The president may also influence the Court through the solicitor general’s involvement or through the submission of amicus briefs in cases in which the United States is not a party. President Franklin D. Roosevelt even attempted to stack the odds in his favor in 1937, with a “court-packing scheme” in which he tried to get a bill passed through Congress that would have reorganized the judiciary and enabled him to appoint up to six additional judges to the high court (Figure 13.14).
The bill never passed, but other presidents have also been accused of trying similar moves at different courts in the federal system. Most recently, some members of Congress suggested that President Obama was attempting to “pack” the District of Columbia Circuit Court of Appeals with three nominees. Obama was filling vacancies, not adding judges, but the “packing” term was still bandied about. 64 Likewise, Congress has checks on the judiciary. It retains the power to modify the federal court structure and its appellate jurisdiction, and the Senate may accept or reject presidential nominees to the federal courts. Faced with a court ruling that overturns one of its laws, Congress may rewrite the law or even begin a constitutional amendment process. But the most significant check on the Supreme Court is executive and legislative leverage over the implementation and enforcement of its rulings. This process is called judicial implementation. While it is true that courts play a major role in policymaking, they have no mechanism to make their rulings a reality. Remember that it was Alexander Hamilton in Federalist No. 78 who remarked that the courts had “neither force nor will, but merely judgment.” And even years later, when the Supreme Court ruled in 1832 that the State of Georgia’s seizure of Native American lands was unconstitutional, 65 President Andrew Jackson is reported to have said, “John Marshall has made his decision, now let him enforce it,” and the Court’s ruling was basically ignored. 66 Abraham Lincoln, too, famously ignored Chief Justice Roger B. Taney’s order finding unconstitutional Lincoln’s suspension of habeas corpus rights in 1861, early in the Civil War. Thus, court rulings matter only to the extent they are heeded and followed. The Court relies on the executive to implement or enforce its decisions and on the legislative branch to fund them. As the Jackson and Lincoln stories indicate, presidents may simply ignore decisions of the Court, and Congress may withhold funding needed for implementation and enforcement. Fortunately for the courts, these situations rarely happen, and the other branches tend to provide support rather than opposition. In general, presidents have tended to see it as their duty to both obey and enforce Court rulings, and Congress seldom takes away the funding needed for the president to do so. For example, in 1957, President Dwight D. Eisenhower called out the military by executive order to enforce the Supreme Court’s order to racially integrate the public schools in Little Rock, Arkansas. Eisenhower told the nation: “Whenever normal agencies prove inadequate to the task and it becomes necessary for the executive branch of the federal government to use its powers and authority to uphold federal courts, the president’s responsibility is inescapable.” 67 Executive Order 10730 nationalized the Arkansas National Guard to enforce desegregation because the governor refused to use the state National Guard troops to protect the black students trying to enter the school (Figure 13.15). So what becomes of court decisions is largely due to their credibility, their viability, and the assistance given by the other branches of government. It is also somewhat a matter of tradition and the way the United States has gone about its judicial business for more than two centuries. Although not everyone agrees with the decisions made by the Court, rulings are generally accepted and followed, and the Court is respected as the key interpreter of the laws and the Constitution.
Over time, its rulings have become yet another way policy is legitimately made and justice more adequately served in the United States.
principles_of_accounting,_volume_2:_managerial_accounting
Summary 6.1 Calculate Predetermined Overhead and Total Cost under the Traditional Allocation Method Manufacturing overhead is estimated for the upcoming period. An activity base is selected to allocate overhead. This is traditionally direct labor hours, direct labor cost, or machine hours. A predetermined overhead rate is calculated by dividing the estimated overhead by the allocation base. Overhead is allocated to each product based on the estimated predetermined overhead rate and the number of units in the selected activity base. 6.2 Describe and Identify Cost Drivers Overhead costs are analyzed and grouped based on similar activity bases. A cost driver, such as inspections, machine setups, or order taking, is selected for each cost grouping. Analysis of cost drivers allows for better selection of true overhead cost drivers and more appropriate allocation of overhead. 6.3 Calculate Activity-Based Product Costs Costs can be traced to the unit level or batch level. There are five steps in the ABC process:
identify activities needed for production
assign overhead expenses
assign a cost driver for each expense
determine a predetermined overhead rate
allocate overhead to each product
6.4 Compare and Contrast Traditional and Activity-Based Costing Systems Traditional allocation assigns overhead based on a single overhead rate, while ABC assigns overhead based on several cost pools and the activities that drive costs. Traditional allocation is optimal when the manufacturing process is labor driven and overhead increases based on traditional activity bases, such as direct labor hours, direct labor dollars, or machine hours. ABC costing is optimal when the manufacturing process is technology driven and overhead increases based on various activities that differ for each product. 6.5 Compare and Contrast Variable and Absorption Costing Absorption costing assigns all manufacturing costs to products, whereas variable costing only assigns variable costs to the products. Income statements from both methods can be reconciled by starting with the net income or loss using variable costing and adding the amount of fixed costs included in ending inventory and subtracting the fixed costs included in beginning inventory. Variable costing is not considered GAAP compliant but lends itself to cost-volume-profit analysis.
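To make the allocation arithmetic in 6.1 and 6.3 concrete, here is a minimal sketch in Python of both methods. Every product name, cost pool, driver, and dollar amount below is hypothetical, chosen only for illustration; none of the figures come from the chapter:

```python
# Minimal sketch: traditional (plant-wide rate) vs. activity-based overhead allocation.
# All figures are hypothetical and exist only to illustrate the arithmetic.

# --- Traditional allocation: one predetermined rate for the whole plant ---
estimated_overhead = 100_000.0    # estimated manufacturing overhead for the period
estimated_labor_hours = 20_000.0  # selected activity base: direct labor hours
plantwide_rate = estimated_overhead / estimated_labor_hours  # $5.00 per direct labor hour

labor_hours_used = {"Standard": 15_000, "Deluxe": 5_000}
traditional = {p: h * plantwide_rate for p, h in labor_hours_used.items()}
# Standard: $75,000; Deluxe: $25,000

# --- Activity-based costing: one predetermined rate per cost pool ---
# cost pool -> (estimated pool cost, estimated annual volume of its cost driver)
pools = {
    "machine setups": (40_000.0, 400),     # driver: number of setups
    "inspections":    (30_000.0, 1_500),   # driver: number of inspections
    "machining":      (30_000.0, 10_000),  # driver: machine hours
}
# each product's actual use of each cost driver
usage = {
    "Standard": {"machine setups": 100, "inspections": 500,   "machining": 7_000},
    "Deluxe":   {"machine setups": 300, "inspections": 1_000, "machining": 3_000},
}

rates = {pool: cost / volume for pool, (cost, volume) in pools.items()}
abc = {
    product: sum(rates[pool] * qty for pool, qty in drivers.items())
    for product, drivers in usage.items()
}

for product in traditional:
    print(f"{product}: traditional ${traditional[product]:,.0f}, ABC ${abc[product]:,.0f}")
# Standard: traditional $75,000, ABC $41,000
# Deluxe:   traditional $25,000, ABC $59,000
```

Note how the low-volume Deluxe product absorbs far more overhead under ABC in this sketch, which is the shift of per-unit cost between high- and low-volume products that the 6.4 comparison describes.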
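The 6.5 reconciliation is likewise a single line of arithmetic. In this sketch the income and fixed-overhead amounts are made up for illustration:

```python
# Reconciling variable-costing income to absorption-costing income.
# All amounts are hypothetical.
variable_costing_income = 50_000.0
fixed_oh_in_ending_inventory = 8_000.0     # fixed overhead deferred into ending inventory
fixed_oh_in_beginning_inventory = 3_000.0  # fixed overhead released from beginning inventory

absorption_costing_income = (variable_costing_income
                             + fixed_oh_in_ending_inventory
                             - fixed_oh_in_beginning_inventory)
print(absorption_costing_income)  # 55000.0
```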
Chapter Outline 6.1 Calculate Predetermined Overhead and Total Cost under the Traditional Allocation Method 6.2 Describe and Identify Cost Drivers 6.3 Calculate Activity-Based Product Costs 6.4 Compare and Contrast Traditional and Activity-Based Costing Systems 6.5 Compare and Contrast Variable and Absorption Costing Why It Matters Barry thinks of his education as a job and spends forty hours a week in class or studying. Barry estimates he has about eighty hours per week to allocate between school and other activities and believes everyone should follow his fifty-fifty rule of time allocation. His roommate, Kamil, disagrees with Barry and argues that allocating 50 percent of one’s time to class and studying is not a great formula because everyone has different activities and responsibilities. Kamil points out, for example, that he has a job tutoring other students, is involved with student activities, and plays in a band, while Barry spends some of his nonstudy time doing volunteer work and working out. Kamil plans each week based on how many hours he will need for each activity: classes, studying and coursework, tutoring, and practicing and performing with his band. In essence, he considers the details of each week’s needs to budget his time. Kamil explains to Barry that being aware of the activities that consume his limited resources (time, in this example) helps him to better plan his week. He adds that individuals who have activities with lots of time commitments (class, work, study, exercise, family, friends, and so on) must be efficient with their time or they risk doing poorly in one or more areas. Kamil argues these individuals cannot simply assign a percentage of their time to each activity but should use each specific activity as the basis for allocating their time. Barry insists that assigning a set percentage to everything is easy and the better method. Who is correct?
[ { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "Finally , step five is to allocate the overhead costs to each product . The predetermined overhead rate found in step four is applied to the actual level of the cost driver used by each product . <hl> As with the traditional overhead allocation method , the actual overhead costs are accumulated in an account called manufacturing overhead and then applied to each of the products in this step . <hl> <hl> Once the costs are grouped into similar cost pools , the activities in each pool are analyzed to determine which activity “ drives ” the costs in that pool , leading to the third step of ABC : identify the cost driver for each cost pool and estimate an annual level of activity for each cost driver . <hl> As you ’ ve learned , the cost driver is the specific activity that drives the costs in the cost pools . Table 6.3 shows some activities and cost drivers for those activities . ABC is a five-stage process that allocates overhead more precisely than traditional allocation does by applying it to the products that use those activities . ABC works best in complex processes where the expenses are not driven by a single cost driver . <hl> Instead , several cost drivers are used as the overhead costs are analyzed and grouped into activities , and each activity is allocated based on each group ’ s cost driver . <hl> The five stages of the ABC process are : <hl> Another benefit of looking at cost drivers is that doing so allows a company to analyze all costs . <hl> <hl> A company can differentiate among costs that drive overhead and have value , those that do not drive overhead but still add value , and those that may or may not drive the overhead but do not add any value . <hl> For example , a furniture manufacturer produces and sells wooden tables in various colors . The painting process involves a white base coat , a color coat , and a clear protective top coat . The three coats are applied in a sealed room using a spraying process followed by an ultraviolet drying process . The depreciation on the spraying machines and the ultraviolet bulbs used in the painting process are overhead costs . <hl> These costs drive or increase overhead , and they add value to the product by increasing the quality . <hl> <hl> Costs associated with repainting or fixing any blemishes are overhead costs that are necessary to sell the product but would not be considered value-added costs . <hl> <hl> The goal is to eliminate as many of the non-value-added costs as possible and subsequently reduce overhead costs . <hl>", "hl_sentences": "As with the traditional overhead allocation method , the actual overhead costs are accumulated in an account called manufacturing overhead and then applied to each of the products in this step . Once the costs are grouped into similar cost pools , the activities in each pool are analyzed to determine which activity “ drives ” the costs in that pool , leading to the third step of ABC : identify the cost driver for each cost pool and estimate an annual level of activity for each cost driver . Instead , several cost drivers are used as the overhead costs are analyzed and grouped into activities , and each activity is allocated based on each group ’ s cost driver . Another benefit of looking at cost drivers is that doing so allows a company to analyze all costs . 
A company can differentiate among costs that drive overhead and have value , those that do not drive overhead but still add value , and those that may or may not drive the overhead but do not add any value . These costs drive or increase overhead , and they add value to the product by increasing the quality . Costs associated with repainting or fixing any blemishes are overhead costs that are necessary to sell the product but would not be considered value-added costs . The goal is to eliminate as many of the non-value-added costs as possible and subsequently reduce overhead costs .", "question": { "cloze_format": "___ is not a step in analyzing the cost driver for manufacturing overhead.", "normal_format": "Which is not a step in analyzing the cost driver for manufacturing overhead?", "question_choices": [ "identify the cost", "identify non-value-added costs", "analyze the effect on manufacturing overhead", "identify the correlation between the potential driver and manufacturing overhead" ], "question_id": "fs-idm398700768", "question_text": "Which is not a step in analyzing the cost driver for manufacturing overhead?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "the proportion of that product’s use of the cost driver" }, "bloom": null, "hl_context": "<hl> ABC costing assigns a proportion of overhead costs on the basis of the activities under the presumption that the activities drive the overhead costs . <hl> <hl> As such , ABC costing converts the indirect costs into product costs . <hl> There are also cost systems with a different approach . Instead of focusing on the overhead costs incurred by the product unit , these methods focus on assigning the fixed overhead costs to inventory . Notice that steps one through three represent the process of allocating overhead costs to activities , and steps four and five represent the process of allocating the overhead costs that have been assigned to activities to the products to which they pertain . <hl> Thus , the five steps of ABC involve two major processes : first , allocating overhead costs to the various activities to get a cost per activity , and then allocating the cost per activity to each product based on that product ’ s usage of the activities . <hl>", "hl_sentences": "ABC costing assigns a proportion of overhead costs on the basis of the activities under the presumption that the activities drive the overhead costs . As such , ABC costing converts the indirect costs into product costs . Thus , the five steps of ABC involve two major processes : first , allocating overhead costs to the various activities to get a cost per activity , and then allocating the cost per activity to each product based on that product ’ s usage of the activities .", "question": { "cloze_format": "Overhead costs are assigned to each product based on ________.", "normal_format": "Overhead costs are assigned to each product based on what?", "question_choices": [ "the proportion of that product’s use of the cost driver", "a predetermined overhead rate for a single cost driver", "price of the product", "machine hours per product" ], "question_id": "fs-idm406179312", "question_text": "Overhead costs are assigned to each product based on ________." 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Musicality is considering switching to an activity-based costing approach for determining overhead and has collected data to help them decide which overhead allocation method they should use . <hl> Performing the analysis requires these steps :", "hl_sentences": "Musicality is considering switching to an activity-based costing approach for determining overhead and has collected data to help them decide which overhead allocation method they should use .", "question": { "cloze_format": "A reason a company would implement activity-based costing is that ___ .", "normal_format": "Which of the following is a reason a company would implement activity-based costing?", "question_choices": [ "The cost of record keeping is high.", "The additional data obtained through traditional allocation are not worth the cost.", "They want to improve the data on which decisions are made.", "A company only has one cost driver." ], "question_id": "fs-idm368188848", "question_text": "Which of the following is a reason a company would implement activity-based costing?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "estimated overhead for the activity/estimated use of the cost driver for the activity" }, "bloom": null, "hl_context": "<hl> A predetermined overhead rate is calculated at the start of the accounting period by dividing the estimated manufacturing overhead by the estimated activity base . <hl> <hl> The predetermined overhead rate is then applied to production to facilitate determining a standard cost for a product . <hl> <hl> This estimated overhead rate will allow a company to determine a cost for the product without having to wait , possibly several months , until all of the actual overhead costs are determined , and to help with issues such as seasonal production or variable overhead costs , such as utilities . <hl>", "hl_sentences": "A predetermined overhead rate is calculated at the start of the accounting period by dividing the estimated manufacturing overhead by the estimated activity base . The predetermined overhead rate is then applied to production to facilitate determining a standard cost for a product . This estimated overhead rate will allow a company to determine a cost for the product without having to wait , possibly several months , until all of the actual overhead costs are determined , and to help with issues such as seasonal production or variable overhead costs , such as utilities .", "question": { "cloze_format": "The correct formula for computing the overhead rate is ___.", "normal_format": "Which is the correct formula for computing the overhead rate?", "question_choices": [ "estimated use of the cost driver for production/estimated overhead for the activity", "estimated overhead for the product/estimated use of the cost driver for the activity", "estimated use of the cost driver for production/estimated overhead for the activity", "estimated overhead for the activity/estimated use of the cost driver for the activity" ], "question_id": "fs-idm369491648", "question_text": "Which is the correct formula for computing the overhead rate?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "identify the cost drivers, identify the cost pools, calculate the overhead application rate for each cost pool, assign the costs to the products" }, "bloom": null, "hl_context": "<hl> Finally , step five is to allocate the overhead costs to each product . <hl> The predetermined overhead rate found in step four is applied to the actual level of the cost driver used by each product . As with the traditional overhead allocation method , the actual overhead costs are accumulated in an account called manufacturing overhead and then applied to each of the products in this step . <hl> The fourth step is to compute the predetermined overhead rate for each of the cost drivers . <hl> This portion of the process is similar to finding the traditional predetermined overhead rate , where the overhead rate is divided by direct labor dollars , direct labor hours , or machine hours . Each cost driver will have its own overhead rate , which is why ABC is a more accurate method of allocating overhead . <hl> Once the costs are grouped into similar cost pools , the activities in each pool are analyzed to determine which activity “ drives ” the costs in that pool , leading to the third step of ABC : identify the cost driver for each cost pool and estimate an annual level of activity for each cost driver . <hl> As you ’ ve learned , the cost driver is the specific activity that drives the costs in the cost pools . Table 6.3 shows some activities and cost drivers for those activities . <hl> The second step is assigning overhead costs to the identified activities . <hl> <hl> In this step , overhead costs are assigned to each of the activities to become a cost pool . <hl> A cost pool is a list of costs incurred when related activities are performed . Table 6.2 illustrates the various cost pools along with their activities and related costs . <hl> The first step is to identify activities needed for production . <hl> An activity is an action or process involved in the production of inventory . There can be many activities that consume resources , and management will need to narrow down the activities to those that have the biggest impact on overhead costs . Examples of these activities include :", "hl_sentences": "Finally , step five is to allocate the overhead costs to each product . The fourth step is to compute the predetermined overhead rate for each of the cost drivers . Once the costs are grouped into similar cost pools , the activities in each pool are analyzed to determine which activity “ drives ” the costs in that pool , leading to the third step of ABC : identify the cost driver for each cost pool and estimate an annual level of activity for each cost driver . The second step is assigning overhead costs to the identified activities . In this step , overhead costs are assigned to each of the activities to become a cost pool . 
The first step is to identify activities needed for production .", "question": { "cloze_format": "The proper order of tasks in an ABC system is ___.", "normal_format": "What is the proper order of tasks in an ABC system?", "question_choices": [ "identify the cost drivers, assign the costs to the products, calculate the overhead application rate for each cost pool, identify the cost pools", "assign the costs to the products, identify the cost drivers, calculate the overhead application rate for each cost pool, identify the cost pools", "identify the cost drivers, identify the cost pools, calculate the overhead application rate for each cost pool, assign the costs to the products", "identify the cost pools, identify the cost drivers, calculate the overhead application rate for each cost pool, assign the costs to the products" ], "question_id": "fs-idm367668256", "question_text": "What is the proper order of tasks in an ABC system?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> ABC is a five-stage process that allocates overhead more precisely than traditional allocation does by applying it to the products that use those activities . <hl> <hl> ABC works best in complex processes where the expenses are not driven by a single cost driver . <hl> <hl> Instead , several cost drivers are used as the overhead costs are analyzed and grouped into activities , and each activity is allocated based on each group ’ s cost driver . <hl> The five stages of the ABC process are :", "hl_sentences": "ABC is a five-stage process that allocates overhead more precisely than traditional allocation does by applying it to the products that use those activities . ABC works best in complex processes where the expenses are not driven by a single cost driver . Instead , several cost drivers are used as the overhead costs are analyzed and grouped into activities , and each activity is allocated based on each group ’ s cost driver .", "question": { "cloze_format": "A task that is not typically associated with ABC systems is ___ .", "normal_format": "Which is not a task typically associated with ABC systems?", "question_choices": [ "calculating the overhead application rate for each cost pool", "applying a single cost rate", "identifying a cost driver", "more correctly allocating overhead costs" ], "question_id": "fs-idm356761712", "question_text": "Which is not a task typically associated with ABC systems?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "Activity-based cost systems are more accurate than traditional cost systems." }, "bloom": null, "hl_context": "<hl> As shown with Musicality ’ s products , not only are there different costs for each product when comparing traditional allocation with an activity-based costing , but ABC showed that the Solo product creates a loss for the company . <hl> <hl> Activity-based costing is a more accurate method , because it assigns overhead based on the activities that drive the overhead costs . <hl> It can be concluded , then , that the cost and subsequent gross loss for each unit ’ s sales provide a more accurate picture than the overall cost and gross profit under the traditional method . 
Table 6.4 compares the cost per unit using the different cost systems and shows how different the costs can be depending on the method used .", "hl_sentences": "As shown with Musicality ’ s products , not only are there different costs for each product when comparing traditional allocation with an activity-based costing , but ABC showed that the Solo product creates a loss for the company . Activity-based costing is a more accurate method , because it assigns overhead based on the activities that drive the overhead costs .", "question": { "cloze_format": "A true statement is that ___ .", "normal_format": "Which statement is correct?", "question_choices": [ "Activity-based cost systems are less costly than traditional cost systems.", "Activity-based cost systems are easier to implement than traditional cost systems.", "Activity-based cost systems are more accurate than traditional cost systems.", "Activity-based cost systems provide the same data as traditional cost systems." ], "question_id": "fs-idm384356896", "question_text": "Which statement is correct?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> Using an ABC method to better assign unit-level , batch-level , product-level , and factory-level costs can increase the per-unit costs of the low-volume products and decrease the per-unit costs of the high-volume products . <hl> <hl> Adopting an ABC overhead allocation system can allow a company to shift manufacturing overhead costs between products based on their volume . <hl>", "hl_sentences": "Using an ABC method to better assign unit-level , batch-level , product-level , and factory-level costs can increase the per-unit costs of the low-volume products and decrease the per-unit costs of the high-volume products . Adopting an ABC overhead allocation system can allow a company to shift manufacturing overhead costs between products based on their volume .", "question": { "cloze_format": "Activity-based costing systems ___.", "normal_format": "What are activity-based costing systems?", "question_choices": [ "use a single predetermined overhead rate based on machine hours instead of on direct labor", "frequently increase the overhead allocation to at least one product while decreasing the overhead allocation to at least one other product", "limit the number of cost pools", "always result in an increase of at least one product’s selling price" ], "question_id": "fs-idm398848960", "question_text": "Activity-based costing systems:" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "when multiple products have similar product volumes and costs" }, "bloom": null, "hl_context": "<hl> ABC is a five-stage process that allocates overhead more precisely than traditional allocation does by applying it to the products that use those activities . <hl> <hl> ABC works best in complex processes where the expenses are not driven by a single cost driver . <hl> <hl> Instead , several cost drivers are used as the overhead costs are analyzed and grouped into activities , and each activity is allocated based on each group ’ s cost driver . <hl> The five stages of the ABC process are :", "hl_sentences": "ABC is a five-stage process that allocates overhead more precisely than traditional allocation does by applying it to the products that use those activities . ABC works best in complex processes where the expenses are not driven by a single cost driver . 
Instead , several cost drivers are used as the overhead costs are analyzed and grouped into activities , and each activity is allocated based on each group ’ s cost driver .", "question": { "cloze_format": "Activity-based costing is preferable in a system ___ .", "normal_format": "When activity-based costing is preferable in a system?", "question_choices": [ "when multiple products have similar product volumes and costs", "with a large direct labor cost as a percentage of the total product cost", "with multiple, diverse products", "where management needs to support an increase in sales price" ], "question_id": "fs-idm392141248", "question_text": "Activity-based costing is preferable in a system:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "There are two major methods in manufacturing firms for valuing work in process and finished goods inventory for financial accounting purposes : variable costing and absorption costing . Variable costing , also called direct costing or marginal costing , is a method in which all variable costs ( direct material , direct labor , and variable overhead ) are assigned to a product and fixed overhead costs are expensed in the period incurred . Under variable costing , fixed overhead is not included in the value of inventory . <hl> In contrast , absorption costing , also called full costing , is a method that applies all direct costs , fixed overhead , and variable manufacturing overhead to the cost of the product . <hl> The value of inventory under absorption costing includes direct material , direct labor , and all overhead .", "hl_sentences": "In contrast , absorption costing , also called full costing , is a method that applies all direct costs , fixed overhead , and variable manufacturing overhead to the cost of the product .", "question": { "cloze_format": "Absorption costing is also referred to as ___.", "normal_format": "What is absorption costing referred to?", "question_choices": [ "direct costing", "marginal costing", "full costing", "variable costing" ], "question_id": "fs-idm388453376", "question_text": "Absorption costing is also referred to as:" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "direct material, direct labor, and all variable manufacturing overhead" }, "bloom": null, "hl_context": "<hl> Variable costing only includes the product costs that vary with output , which typically include direct material , direct labor , and variable manufacturing overhead . <hl> Fixed overhead is not considered a product cost under variable costing . Fixed manufacturing overhead is still expensed on the income statement , but it is treated as a period cost charged against revenue for each period . It does not include a portion of fixed overhead costs that remains in inventory and is not expensed , as in absorption costing . There are two major methods in manufacturing firms for valuing work in process and finished goods inventory for financial accounting purposes : variable costing and absorption costing . <hl> Variable costing , also called direct costing or marginal costing , is a method in which all variable costs ( direct material , direct labor , and variable overhead ) are assigned to a product and fixed overhead costs are expensed in the period incurred . <hl> Under variable costing , fixed overhead is not included in the value of inventory . 
In contrast , absorption costing , also called full costing , is a method that applies all direct costs , fixed overhead , and variable manufacturing overhead to the cost of the product . The value of inventory under absorption costing includes direct material , direct labor , and all overhead .", "hl_sentences": "Variable costing only includes the product costs that vary with output , which typically include direct material , direct labor , and variable manufacturing overhead . Variable costing , also called direct costing or marginal costing , is a method in which all variable costs ( direct material , direct labor , and variable overhead ) are assigned to a product and fixed overhead costs are expensed in the period incurred .", "question": { "cloze_format": "Under variable costing, a unit of product includes the costs of ___ .", "normal_format": "Under variable costing, a unit of product includes which costs?", "question_choices": [ "direct material, direct labor, and manufacturing overhead", "direct material, direct labor, and variable manufacturing overhead", "direct material, direct labor, and fixed manufacturing overhead", "direct material, direct labor, and all variable manufacturing overhead" ], "question_id": "fs-idm405079840", "question_text": "Under variable costing, a unit of product includes which costs?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "There are two major methods in manufacturing firms for valuing work in process and finished goods inventory for financial accounting purposes : variable costing and absorption costing . Variable costing , also called direct costing or marginal costing , is a method in which all variable costs ( direct material , direct labor , and variable overhead ) are assigned to a product and fixed overhead costs are expensed in the period incurred . Under variable costing , fixed overhead is not included in the value of inventory . <hl> In contrast , absorption costing , also called full costing , is a method that applies all direct costs , fixed overhead , and variable manufacturing overhead to the cost of the product . <hl> <hl> The value of inventory under absorption costing includes direct material , direct labor , and all overhead . <hl>", "hl_sentences": "In contrast , absorption costing , also called full costing , is a method that applies all direct costs , fixed overhead , and variable manufacturing overhead to the cost of the product . The value of inventory under absorption costing includes direct material , direct labor , and all overhead .", "question": { "cloze_format": "Under absorption costing, a unit of product includes the costs of ___ .", "normal_format": "Under absorption costing, a unit of product includes which costs?", "question_choices": [ "direct material, direct labor, and manufacturing overhead", "direct material, direct labor, and variable manufacturing overhead", "direct material, direct labor, and fixed manufacturing overhead", "direct material, direct labor, and all variable manufacturing overhead" ], "question_id": "fs-idm388912496", "question_text": "Under absorption costing, a unit of product includes which costs?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "that it is not well designed for cost-volume-profit analysis" }, "bloom": null, "hl_context": "<hl> Difficulty in understanding . 
<hl> <hl> The absorption costing method does not list the incremental fixed overhead costs and is more difficult to understand and analyze as compared to variable costing . <hl> The difference between the absorption and variable costing methods centers on the treatment of fixed manufacturing overhead costs . <hl> Absorption costing “ absorbs ” all of the costs used in manufacturing and includes fixed manufacturing overhead as product costs . <hl> Absorption costing is in accordance with GAAP , because the product cost includes fixed overhead . Variable costing considers the variable overhead costs and does not consider fixed overhead as part of a product ’ s cost . It is not in accordance with GAAP , because fixed overhead is treated as a period cost and is not included in the cost of the product .", "hl_sentences": "Difficulty in understanding . The absorption costing method does not list the incremental fixed overhead costs and is more difficult to understand and analyze as compared to variable costing . Absorption costing “ absorbs ” all of the costs used in manufacturing and includes fixed manufacturing overhead as product costs .", "question": { "cloze_format": "A downside to absorption costing is ___.", "normal_format": "What is a downside to absorption costing?", "question_choices": [ "not including fixed manufacturing overhead in the cost of the product", "that it is not really useful for managerial decisions", "that it is not allowable under GAAP", "that it is not well designed for cost-volume-profit analysis" ], "question_id": "fs-idm385510432", "question_text": "A downside to absorption costing is:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Sales less than Production . <hl> When a company produces more than it sells , net income will be less under variable costing than under absorption costing . <hl> <hl> In this scenario , there will be a buildup , or an increase , in inventory from the beginning of the period to the end of the period . <hl> Under variable costing , fixed manufacturing costs are still in the finished goods inventory account . But under absorption costing , those fixed costs have been expensed during the current production period and thus have reduced net income .", "hl_sentences": "When a company produces more than it sells , net income will be less under variable costing than under absorption costing . In this scenario , there will be a buildup , or an increase , in inventory from the beginning of the period to the end of the period .", "question": { "cloze_format": "When the number of units in ending inventory increases through the year, it is true that ___ .", "normal_format": "When the number of units in ending inventory increases through the year, which of the following is true?", "question_choices": [ "Net income is the same for variable and absorption costing.", "Net income is higher for variable costing than for absorption costing.", "Net income is higher for absorption costing than for variable costing.", "There is no relationship between net income and the costing method." ], "question_id": "fs-idm384849808", "question_text": "When the number of units in ending inventory increases through the year, which of the following is true?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "lower than under absorption costing" }, "bloom": null, "hl_context": "<hl> Variable costing only includes the product costs that vary with output , which typically include direct material , direct labor , and variable manufacturing overhead . <hl> Fixed overhead is not considered a product cost under variable costing . Fixed manufacturing overhead is still expensed on the income statement , but it is treated as a period cost charged against revenue for each period . <hl> It does not include a portion of fixed overhead costs that remains in inventory and is not expensed , as in absorption costing . <hl> The difference between the absorption and variable costing methods centers on the treatment of fixed manufacturing overhead costs . <hl> Absorption costing “ absorbs ” all of the costs used in manufacturing and includes fixed manufacturing overhead as product costs . <hl> <hl> Absorption costing is in accordance with GAAP , because the product cost includes fixed overhead . <hl> <hl> Variable costing considers the variable overhead costs and does not consider fixed overhead as part of a product ’ s cost . <hl> It is not in accordance with GAAP , because fixed overhead is treated as a period cost and is not included in the cost of the product .", "hl_sentences": "Variable costing only includes the product costs that vary with output , which typically include direct material , direct labor , and variable manufacturing overhead . It does not include a portion of fixed overhead costs that remains in inventory and is not expensed , as in absorption costing . Absorption costing “ absorbs ” all of the costs used in manufacturing and includes fixed manufacturing overhead as product costs . Absorption costing is in accordance with GAAP , because the product cost includes fixed overhead . Variable costing considers the variable overhead costs and does not consider fixed overhead as part of a product ’ s cost .", "question": { "cloze_format": "Product costs under variable costing are typically ___.", "normal_format": "What are product costs under variable costing typically?", "question_choices": [ "higher than under absorption costing", "lower than under absorption costing", "the same as with absorption costing", "higher than absorption costing when inventory increases" ], "question_id": "fs-idm389125680", "question_text": "Product costs under variable costing are typically:" }, "references_are_paraphrase": 0 } ]
Chapter 6
6.1 Calculate Predetermined Overhead and Total Cost under the Traditional Allocation Method

Both roommates make valid points about allocating limited resources. Ultimately, each must decide which method to use to allocate time, and they can make that decision based on their own analyses. Similarly, businesses and other organizations must create an allocation system for assigning limited resources, such as overhead. Whereas Kamil and Barry are discussing the allocation of hours, the issue of allocating costs raises similar questions. For example, for a manufacturer allocating maintenance costs, which are an overhead cost, is it better to allocate to each production department equally by the number of machines that need to be maintained or by the square footage of space that needs to be maintained?

In the past, overhead costs were typically allocated based on factors such as total direct labor hours, total direct labor costs, or total machine hours. This allocation process, often called the traditional allocation method, works most effectively when direct labor is a dominant component in production. However, many industries have evolved, primarily due to changes in technology, and their production processes have become more complicated, with more steps or components. Many of these industries have significantly reduced their use of direct labor and replaced it with technology, such as robotics or other machinery. For example, a mobile phone production facility in China replaced 90 percent of its workforce with robots. [1]

In these situations, a direct cost (labor) has been replaced by an overhead cost (e.g., depreciation on equipment). Because of this decrease in reliance on labor and/or changes in the types of production complexity and methods, the traditional method of overhead allocation becomes less effective in certain production environments. To account for these changes in technology and production, many organizations today have adopted an overhead allocation method known as activity-based costing (ABC). This chapter will explain the transition to ABC and provide a foundation in its mechanics.

Activity-based costing is an accounting method that recognizes the relationship between product costs and a production activity, such as the number of hours of engineering or design activity, the costs of the setup or preparation for the production of different products, or the costs of packaging different products after the production process is completed. Overhead costs are then allocated to production according to the use of that activity, such as the number of machine setups needed. In contrast, the traditional allocation method commonly uses cost drivers, such as direct labor or machine hours, as the single activity. Because of the use of multiple activities as cost drivers, ABC costing has advantages over the traditional allocation method, which assigns overhead using a single predetermined overhead rate. Those advantages come at a cost, both in resources and time, since additional information needs to be collected and analyzed.

[1] June Javelosa and Kristin Houser. "This Company Replaced 90% of Its Workforce with Machines. Here's What Happened." Futurism / World Economic Forum. https://www.weforum.org/agenda/2017/02/after-replacing-90-of-employees-with-robots-this-companys-productivity-soared
Chrysler, for instance, shifted its overhead allocation to ABC in 1991 and estimates that the benefits of cost savings, product improvement, and elimination of inefficiencies have been ten to twenty times greater than the investment in the program at some sites. It believes other sites experienced savings of fifty to one hundred times the cost to implement the system. [2]

As you've learned, understanding the cost needed to manufacture a product is critical to making many management decisions (Figure 6.2). Knowing the total and component costs of the product is necessary for price setting and for measuring the efficiency and effectiveness of the organization. Remember that product costs consist of direct materials, direct labor, and manufacturing overhead. It is relatively simple to understand each product's direct material and direct labor cost, but it is more complicated to determine the overhead component of each product's costs because there are a number of indirect and other costs to consider. A company's manufacturing overhead costs are all costs other than direct material, direct labor, or selling and administrative costs. Once a company has determined the overhead, it must establish how to allocate the cost. This allocation can come in the form of the traditional overhead allocation method or activity-based costing.

Component Categories under Traditional Allocation

Traditional allocation involves the allocation of factory overhead to products based on the volume of production resources consumed, such as the amount of direct labor hours consumed, direct labor cost, or machine hours used. In order to perform the traditional method, it is also important to understand each of the involved cost components: direct materials, direct labor, and manufacturing overhead. Direct materials and direct labor are cost categories that are relatively easy to trace to a product. Direct material comprises the supplies used in manufacturing that can be traced directly to the product. Direct labor is the work used in manufacturing that can be directly traced to the product. Although the processes for tracing the costs differ, both job order costing and process costing trace the material and labor through materials requisition requests and time cards or electronic mechanisms for measuring labor input. Job order costing traces the costs directly to the product, and process costing traces the costs to the manufacturing department.

Ethical Considerations: Ethical Cost Modeling

The proper use of management accounting skills to model financial and non-financial data optimizes the organization's evaluation and use of resources and assists in the proper evaluation of costs and revenues in an organization. The IFAC provides guidance on the use of cost models and how to ethically design proper cost models: "Cost models should be designed and maintained to reflect the cause-and-effect interrelationships and the behavioral dynamics of the way the organization functions. The information needs of decision makers at all levels of an organization should be taken into account, by incorporating an organization's business and operational models, strategy, structure, and competitive environment." [3]

[2] Joseph H. Ness and Thomas G. Cucuzza. "Tapping the Full Potential of ABC." Harvard Business Review. July-Aug. 1995. https://hbr.org/1995/07/tapping-the-full-potential-of-abc

[3] International Federation of Accountants (IFAC) PAIB Committee.
"Evaluating and Improving Costing in Organizations." International Good Practice Guidance. June 30, 2009. https://www.ifac.org/system/files/publications/files/IGPG-Evaluating-and-Improving-Costing-July-2009.pdf

Estimated Total Manufacturing Overhead Costs

The more challenging product component to track is manufacturing overhead. Overhead consists of indirect materials, indirect labor, and other costs closely associated with the manufacturing process but not tied to a specific product. Examples of other overhead costs include such items as depreciation on the factory machinery and insurance on the factory building. Indirect material comprises the supplies used in production that cannot be traced to an individual product, and indirect labor is the work done by employees not directly involved in the manufacturing process, such as the supervisors' salaries or the maintenance staff's wages. Because these costs cannot be traced directly to the product like direct costs are, they have to be allocated among all of the products produced and added, or applied, to the production and product cost.

For example, the recipe for shea butter has easily identifiable quantities of shea nuts and other ingredients. Based on the manufacturing process, it is also easy to determine the direct labor cost. But determining the exact overhead costs is not easy, as the cost of electricity needed to dry, crush, and roast the nuts changes depending on the moisture content of the nuts upon arrival.

Until now, you have learned to apply overhead to production based on a predetermined overhead rate, typically using an activity base. An activity base is considered to be a primary driver of overhead costs, and traditionally, direct labor hours or machine hours were used for it. For example, a production facility that is fairly labor intensive would likely determine that the more labor hours worked, the higher the overhead will be. As a result, management would likely view labor hours as the activity base when applying overhead costs.

A predetermined overhead rate is calculated at the start of the accounting period by dividing the estimated manufacturing overhead by the estimated activity base. The predetermined overhead rate is then applied to production to facilitate determining a standard cost for a product. This estimated overhead rate will allow a company to determine a cost for the product without having to wait, possibly several months, until all of the actual overhead costs are determined, and to help with issues such as seasonal production or variable overhead costs, such as utilities.

Calculation of Predetermined Overhead and Total Cost under Traditional Allocation

The predetermined overhead rate is set at the beginning of the year and is calculated as the estimated (budgeted) overhead costs for the year divided by the estimated (budgeted) level of activity for the year. This activity base is often direct labor hours, direct labor costs, or machine hours. Once a company determines the overhead rate, it determines the overhead rate per unit and adds the overhead per unit cost to the direct material and direct labor costs for the product to find the total cost.

To put this method into context, consider this example. Musicality Manufacturing developed a recording device similar to a microphone that allows musicians and music aficionados to record their playing or singing along with any song publicly available. There are three products that vary in features and ability: Solo, Band, and Orchestra.
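Before walking through Musicality's figures in prose, it can help to see the whole traditional calculation in one place. The following minimal sketch in Python uses the figures the next paragraphs introduce ($2,500,000 estimated overhead, 1,250,000 estimated direct labor hours, and the Solo product's estimated 350,000 labor hours, 140,000 units, $3.50 direct material, $10.00 direct labor, and $20.00 sales price per unit); the variable names are illustrative only.

```python
# Traditional allocation: one plant-wide predetermined overhead rate.
estimated_overhead = 2_500_000        # budgeted manufacturing overhead ($)
estimated_labor_hours = 1_250_000     # budgeted direct labor hours (the activity base)

overhead_rate = estimated_overhead / estimated_labor_hours
print(f"Predetermined overhead rate: ${overhead_rate:.2f} per direct labor hour")  # $2.00

# Apply the rate to the Solo product line.
solo_labor_hours = 350_000            # estimated direct labor hours for Solo
solo_units = 140_000                  # estimated Solo production in units

solo_overhead_applied = overhead_rate * solo_labor_hours      # $700,000 applied
solo_overhead_per_unit = solo_overhead_applied / solo_units   # $5.00 per unit

# Total product cost per unit = direct material + direct labor + applied overhead.
solo_cost_per_unit = 3.50 + 10.00 + solo_overhead_per_unit    # $18.50
solo_gross_profit_per_unit = 20.00 - solo_cost_per_unit       # $1.50 at a $20 sales price
print(f"Solo cost per unit: ${solo_cost_per_unit:.2f}, "
      f"gross profit per unit: ${solo_gross_profit_per_unit:.2f}")
```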
Musicality was started by musicians who majored in math and software engineering while in college. Their main concern was building a quality manufacturing plant, so they used the simpler traditional allocation method. They started by determining their direct costs, which are shown in Figure 6.3.

Musicality determines the overhead rate based on direct labor hours. At the beginning of the year, the company estimates total overhead costs to be $2,500,000 and total direct labor hours to be 1,250,000. The predetermined overhead rate is

$2,500,000 overhead ÷ 1,250,000 labor hours = $2.00 per labor hour

Musicality uses this information to determine the cost of each product. For example, the total direct labor hours estimated for the Solo product is 350,000 direct labor hours. With $2.00 of overhead per direct labor hour, the Solo product is estimated to have $700,000 of overhead applied. When the $700,000 of overhead applied is divided by the estimated production of 140,000 units of the Solo product, the estimated overhead per product for the Solo product is $5.00 per unit. The computation of the overhead cost per unit for all of the products is shown in Figure 6.4.

The overhead cost per unit from Figure 6.4 is combined with the direct material and direct labor costs as shown in Figure 6.3 to compute the total cost per unit as shown in Figure 6.5. After reviewing the product cost and consulting with the marketing department, the sales prices were set. The sales price, cost of each product, and resulting gross profit are shown in Figure 6.6. Sales of each product have been strong, and the total gross profit for each product is shown in Figure 6.7.

Using the Solo product as an example, 150,000 units are sold at a price of $20 per unit, resulting in sales of $3,000,000. The cost of goods sold consists of direct materials of $3.50 per unit, direct labor of $10 per unit, and manufacturing overhead of $5.00 per unit. With 150,000 units, the direct material cost is $525,000; the direct labor cost is $1,500,000; and the manufacturing overhead applied is $750,000, for a total cost of goods sold of $2,775,000. The resulting gross profit is $225,000, or $1.50 per unit.

Think It Through: Computing Actual Overhead Costs

As manufacturing technology becomes less expensive and more efficient, the mix between overhead and labor changes so that tasks become more computerized and involve less direct labor; the traditional use of direct labor hours or direct labor dollars changes accordingly. If the predetermined overhead rate is based on direct labor hours and set at the beginning of the year but manufacturing technology leads to a reduction in direct labor during the year, the number of direct labor hours may be less than estimated. This reduces the amount of overhead applied so that the overhead is more likely to be underapplied at the end of the year. Why do companies not wait until the end of the period and compute an actual overhead rate based on actual manufacturing costs and actual units?

6.2 Describe and Identify Cost Drivers

As you've learned, the most common bases for predetermined overhead are direct labor hours, direct labor dollars, or machine hours. Each of these costs is considered a cost driver because of the causal relationship between the base and the related costs: As the cost driver's usage increases, the cost of overhead increases as well. Table 6.1 shows various costs and potential cost drivers.
Table 6.1 Common Manufacturing Expenses and Potential Cost Drivers

Common Expense: Potential Cost Driver
Customer Service: Number of product returns from customers
Cleaning Equipment Costs: Number of square feet
Marketing Expenses: Number of customer contacts
Office Supplies: Number of employees
Green Floral Tape (indirect material): Number of customer orders
Website Maintenance Expense: Number of customer online orders

The more accurately a company can determine the cost drivers for its products, the more accurate the costing information will be, which in turn allows management to make better use of the cost data in making decisions. As technology changes, however, the mix between materials, labor, and overhead changes. Often, improved technology means less waste of material and fewer direct labor hours, but possibly more overhead. For example, technology has changed the way pharmaceuticals are manufactured. Advancing technology allows a smaller labor force to be more productive than the larger labor force of earlier years. While the labor cost has changed, this decrease may only be temporary as a labor force with higher costs and different skills is often needed. Additionally, an increase in technology often raises overhead costs. How accurate, then, is the company's product cost information if it has become more efficient in its production process? Should the company still be using a predetermined overhead application rate based on direct labor hours or machine hours? A detailed analysis of the cost drivers will answer these questions.

Another benefit of looking at cost drivers is that doing so allows a company to analyze all costs. A company can differentiate among costs that drive overhead and have value, those that do not drive overhead but still add value, and those that may or may not drive the overhead but do not add any value. For example, a furniture manufacturer produces and sells wooden tables in various colors. The painting process involves a white base coat, a color coat, and a clear protective top coat. The three coats are applied in a sealed room using a spraying process followed by an ultraviolet drying process. The depreciation on the spraying machines and the ultraviolet bulbs used in the painting process are overhead costs. These costs drive or increase overhead, and they add value to the product by increasing the quality. Costs associated with repainting or fixing any blemishes are overhead costs that are necessary to sell the product but would not be considered value-added costs. The goal is to eliminate as many of the non-value-added costs as possible and subsequently reduce overhead costs.

Cost Drivers and Overhead

In today's production environment, there are many activities within the production process that can contribute to the cost of the product, but determining the cost drivers may be complicated because some of those activities may change over time. Additionally, the appropriate level of assigning cost drivers needs to be determined. In some cases, overhead costs such as inspection increase with each unit inspected, and the costs need to be allocated on a per-unit level. In other cases, the overhead costs, such as machine setup costs, are incurred each time a batch of products is manufactured and need to be allocated at the batch level. For example, the labor hours for the staff taking, fulfilling, and inspecting orders may increase as the number of orders increases, driving up the overhead.
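To make the unit-level versus batch-level distinction concrete, here is a small, purely hypothetical sketch; the pool amount, number of setups, and batch sizes below are invented for illustration and do not come from the chapter's examples.

```python
# Hypothetical batch-level cost pool: machine setups.
setup_pool_cost = 40_000      # estimated setup costs for the period ($)
expected_setups = 100         # estimated number of setups (the cost driver)
rate_per_setup = setup_pool_cost / expected_setups   # $400 per setup

# Each batch triggers one setup regardless of batch size,
# so smaller batches absorb far more setup cost per unit.
for batch_units in (1_000, 100):
    per_unit_setup_cost = rate_per_setup / batch_units
    print(f"{batch_units}-unit batch: ${per_unit_setup_cost:.2f} setup cost per unit")
# 1,000-unit batch: $0.40 per unit; 100-unit batch: $4.00 per unit
```

A unit-level cost, such as inspecting every unit, would instead be applied at a constant rate per unit, so batch size would not affect the per-unit charge.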
Furthermore, the costs of taking orders or of quality inspections can vary per product and may not be captured properly. Technology improvements, including switching to automated processes for production, may decrease the labor hours of the production staff, driving the labor-related overhead downward but potentially increasing other overhead expenses. These activities (order taking, fulfillment, and quality inspections) are potential cost drivers associated with production, and they each drive the overhead at varying rates.

Think It Through: Identifying Cost Drivers

Cost drivers vary widely among companies. After costs are accumulated into cost pools, what information would help management select the appropriate cost driver? Name an appropriate cost driver for each of the following cost pools:

- Plant cleaning and maintenance
- Factory supervision
- Machine maintenance
- Machine setups

Identify Cost Drivers

How does a company determine its cost drivers for indirect materials, indirect labor, and other overhead costs? To begin the determination of appropriate cost drivers, an accountant analyzes the activities in the product production process that contribute to the cost of that product. An activity is any action that consumes company resources, such as taking orders for a product, setting up machines to produce the product, inspecting the product, and providing customer support before and through the order process.

For example, Musicality's direct costs can be traced to the products, but there are indirect costs associated with using various types of material for each product. While the Orchestra product has more intricate materials and labor, it has fewer costs associated with requisitioning and conveying materials to the production line than the other products have. Additionally, examining the inspection costs indicates the Orchestra product is a simple product to inspect, so random quality inspections are sufficient. But individual inspections for both the Solo and Band products are critical, and the overhead related to inspection costs should be based on the number of inspections.

As you can imagine, the unique aspects of the production process for each product affect the overhead cost of each product. However, these costs may not be allocated to the products appropriately when overhead is applied using a predetermined rate based on one activity. While Solo, Band, and Orchestra might appear to be different only in quality, they are actually very different from each other when it comes to manufacturing overhead costs. Whether the products produced require significantly different overhead resources or not, the company benefits from understanding what its cost drivers are. The more efficiently each product's activities are tracked, the more actual cost drivers are discovered, and the more accurately overhead can be assigned to each product.

Concepts In Practice: Cost Drivers for Small Businesses

The value of analyzing cost drivers can be used in budgeting beyond allocating overhead to products. American Express has forums designed to help small businesses be successful. Knowing the cost drivers for your business can help with budgeting. American Express states that all business activities are related to five main cost drivers: [4]

[4] American Express. "5 Cost Drivers to Help You Make Accurate Expense Projections." June 23, 2011.
https://www.americanexpress.com/us/small-business/openforum/articles/5-cost-drivers-to-help-you-make-accurate-expense-projections/

- Employee head count is often the driver for office supply expense.
- Salesperson head count is often the driver for auto and other employee travel expense.
- The number of leads required to reach the target sales goal is often the driver for advertising, public relations, social media, search engine optimization expense, and other expenses associated with generating leads.
- Sales and all related variable expenses are often the driver for commissions, bad debt, insurance expense, and so on.
- Fixed costs, such as postage, web hosting fees, business licenses, and banking fees, are often overlooked as cost drivers.

6.3 Calculate Activity-Based Product Costs

As technology changes the ratio between direct labor and overhead, more overhead costs are linked to drivers other than direct labor and machine hours. This shift in costs gives companies the opportunity to stop using the traditional single predetermined overhead rate applied to all units of production and instead use an overhead allocation approach based on the actual activities that drive overhead. Making this change allows management to obtain more accurate product cost information, which leads to more informed decisions. Activity-based costing (ABC) is the process that assigns overhead to products based on the various activities that drive overhead costs.

Historical Perspective on Determination of Manufacturing Overhead Allocation

All products consist of material, labor, and overhead, and the major cost components have historically been materials and labor. Manufacturing overhead was not a large cost of the product, so an overhead allocation method based on labor or machine hours was logical. For example, as shown in Figure 6.3, Musicality determined the direct costs and direct labor for their three products: Solo, Band, and Orchestra. Under the traditional method of costing, the predetermined overhead rate of $2 per direct labor hour was computed by dividing the estimated overhead by the estimated direct labor hours. Based on the number of direct labor hours and the number of units produced for each product, the overhead per product is shown in Figure 6.4.

As technology costs decreased and production methods became more efficient, overhead costs changed and became a much larger component of product costs. For many companies, and in many cases, overhead costs are now significantly larger than labor costs. For example, in the last few years, many industries have increased technology, and the amount of overhead has doubled. [5] Technology has changed the manufacturing labor force, and therefore, the type and cost of labor associated with those jobs have changed. In addition, technology has made it easier to track the various activities and their related overhead costs.

[5] Mary Ellen Biery. "A Sure-Fire Way to Boost the Bottom Line." Forbes. January 12, 2014. https://www.forbes.com/sites/sageworks/2014/01/12/control-overhead-compare-industry-data/#47a9ea69d068

Costs can be gathered on a unit level, batch level, product level, or factory level. The idea behind these various levels is that at each level, there are additional costs that are encountered, so a company must decide at which level or levels it is best for the company to accumulate costs. A unit-level cost is incurred each time a unit of product is produced and includes costs such as materials and labor.
A batch-level cost is incurred every time a batch of items is manufactured, for example, costs associated with purchasing and receiving materials. A product-level cost is incurred each time a product is produced and includes costs such as engineering costs, testing costs, or quality control costs. A factory-level cost is incurred because products are being produced and includes costs such as the plant supervisor's salary and rent on the factory building.

By definition, indirect labor is not traced to individual products. However, it is possible to track some indirect labor to several jobs or batches. A similar amount of information can be derived for indirect material. An example of an indirect material in some manufacturing processes is cleaning solution. For example, one type of cleaning solution is used in the manufacturing of pop sockets. It is not practical to measure every ounce of cleaning solution used in the manufacture of an individual pop socket; rather, it makes sense to allocate to a particular batch of pop sockets the cost of the cleaning solution needed to make that batch. Likewise, a manufacturer of frozen french fries uses a different type of solution to clean potatoes prior to making the french fries and would allocate the cost of the solution based on how much is used to make each batch of fries.

Establishing an Activity-Based Costing System

ABC is a five-stage process that allocates overhead more precisely than traditional allocation does by applying it to the products that use those activities. ABC works best in complex processes where the expenses are not driven by a single cost driver. Instead, several cost drivers are used as the overhead costs are analyzed and grouped into activities, and each activity is allocated based on each group's cost driver. The five stages of the ABC process are:

1. Identify the activities performed in the organization
2. Determine activity cost pools
3. Calculate activity rates for each cost pool
4. Allocate activity rates to products (or services)
5. Calculate unit product costs

The first step is to identify activities needed for production. An activity is an action or process involved in the production of inventory. There can be many activities that consume resources, and management will need to narrow down the activities to those that have the biggest impact on overhead costs. Examples of these activities include:

- Taking orders
- Setting up machines
- Purchasing material
- Assembling products
- Inspecting products
- Providing customer service

The second step is assigning overhead costs to the identified activities. In this step, overhead costs are assigned to each of the activities to become a cost pool. A cost pool is a list of costs incurred when related activities are performed. Table 6.2 illustrates the various cost pools along with their activities and related costs.

Table 6.2 Cost Pools and Their Activities and Related Costs

Production:
- Indirect labor setting up machines
- Indirect labor cost of accepting and verifying orders
- Machine maintenance costs
- Costs to operate the machine: utilities, insurance, etc.
Purchasing material:
- Preparing purchase requisitions for the material
- Cost to move material from receiving department into production
- Depreciation of equipment used to move material

Inspect products:
- Inspection supervisor costs
- Cost to move product to and from the inspection area

Assemble products:
- Cost of assembly machine
- Cost of label machine
- Cost of labels

Technological production:
- Website maintenance
- Depreciation of computers

For example, the production cost pool consists of costs such as indirect labor for those accepting the order, verifying the customer has credit to pay for the order, maintenance and depreciation on the machines used to produce the orders, and utilities and rent for operating the machines. Figure 6.8 illustrates how the costs in each pool are allocated to each product in a different proportion.

Once the costs are grouped into similar cost pools, the activities in each pool are analyzed to determine which activity "drives" the costs in that pool, leading to the third step of ABC: identify the cost driver for each cost pool and estimate an annual level of activity for each cost driver. As you've learned, the cost driver is the specific activity that drives the costs in the cost pools. Table 6.3 shows some activities and cost drivers for those activities.

Table 6.3 Activities and Their Common Cost Drivers

Cost Pool: Cost Driver
Customer order: Number of orders
Production: Machine setups
Purchasing materials: Purchase requisitions
Assembling products: Direct labor hours
Inspecting products: Inspection hours
Customer service: Number of contacts with customer

The fourth step is to compute the predetermined overhead rate for each of the cost drivers. This portion of the process is similar to finding the traditional predetermined overhead rate, where the estimated overhead is divided by direct labor dollars, direct labor hours, or machine hours. Each cost driver will have its own overhead rate, which is why ABC is a more accurate method of allocating overhead.

Finally, step five is to allocate the overhead costs to each product. The predetermined overhead rate found in step four is applied to the actual level of the cost driver used by each product. As with the traditional overhead allocation method, the actual overhead costs are accumulated in an account called manufacturing overhead and then applied to each of the products in this step.

Notice that steps one through three represent the process of allocating overhead costs to activities, and steps four and five represent the process of allocating the overhead costs that have been assigned to activities to the products to which they pertain. Thus, the five steps of ABC involve two major processes: first, allocating overhead costs to the various activities to get a cost per activity, and then allocating the cost per activity to each product based on that product's usage of the activities. Now that the steps involved have been detailed, let's demonstrate the calculations using the Musicality example.

Your Turn: Comparing Estimates to Actual Costs

A company has determined that its estimated 500,000 machine hours is the optimal driver for its estimated $1,000,000 machine overhead cost pool. The $750,000 in the material overhead cost pool should be allocated using the estimated 15,000 material requisition requests. How much is over- or underapplied if there were actually 490,000 machine hours and 15,500 material requisitions that resulted in $950,000 in the machine overhead cost pool and $780,000 in the material cost pool?
What does this difference indicate?

Solution

The predetermined overhead rate is $2 per machine hour ($1,000,000/500,000 machine hours) and $50 per material requisition ($750,000/15,000 requisitions). The actual and applied overhead can then be compared to determine whether each pool is over- or underapplied (a short calculation at the end of this discussion works out the exact amounts). The difference is a combination of factors: there were fewer machine hours than estimated, but there was also less overhead than estimated; there were more requisitions than estimated, and there was also more overhead.

The Calculation of Product Costs Using the Activity-Based Costing Allocation Method

Musicality is considering switching to an activity-based costing approach for determining overhead and has collected data to help them decide which overhead allocation method they should use. Performing the analysis requires these steps:

1. Identify cost pools necessary to complete the product. Musicality determined its cost pools are setting up machines, purchasing material, inspecting products, assembling products, and technological production.
2. Assign overhead cost to the cost pools. Musicality estimated the overhead for each cost pool.
3. Identify the cost driver for each activity, and estimate an annual activity for each driver. Musicality determined the driver and estimated activity for each product.
4. Compute the predetermined overhead rate for each cost driver.
5. Allocate overhead costs to products, assuming Musicality's activities were as estimated.

Now that Musicality has applied overhead to each product, they can calculate the cost per unit, and management can review its sales price and make necessary decisions regarding its products. The overhead cost per unit is the overhead for each product divided by the number of units of each product. The overhead per unit can be added to the unit cost for direct material and direct labor to compute the total product cost per unit.

The sales price was set after management reviewed the product cost with traditional allocation along with other factors such as competition and product demand. The current sales price, cost of each product using ABC, and the resulting gross profit are shown in Figure 6.9. The loss on each sale of the Solo product was not discovered until the company did the calculations for the ABC method, because the sales of the other products were strong enough for the company to retain a total gross profit. Additionally, the more accurate gross profit for each product calculated using ABC is shown in Figure 6.10.

The calculations Musicality did in order to switch to ABC revealed that the Solo product was generating a loss for every unit sold. Knowing this information will allow Musicality to consider whether they should make changes to generate a profit from the Solo product, such as increasing the selling price or carefully analyzing the costs to identify potential cost reductions. Musicality could also decide to continue selling Solo at a loss, because the other products are generating enough profit for the company to absorb the Solo product loss and still be profitable. Why would a company continue to sell a product that is generating a loss? Sometimes these products are ones for which the company is well known or that draw customers into the store. For example, companies will sometimes offer extreme sales, such as on Black Friday, to attract customers in the hope that the customers will purchase other products.
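As a check on the Your Turn exercise above, the over- or underapplication in each cost pool can be computed directly. All of the figures below come from that exercise; organizing the pools in a dictionary is simply one convenient way to express the step-four rates and the step-five application.

```python
# Step 4: predetermined rate per cost pool = estimated cost / estimated driver level.
pools = {
    "machine hours":         {"est_cost": 1_000_000, "est_driver": 500_000},
    "material requisitions": {"est_cost": 750_000,   "est_driver": 15_000},
}
for name, pool in pools.items():
    pool["rate"] = pool["est_cost"] / pool["est_driver"]  # $2.00/MH and $50.00/requisition

# Step 5: apply each pool's rate to the actual driver usage, then compare to actual cost.
actual_usage = {"machine hours": 490_000, "material requisitions": 15_500}
actual_cost = {"machine hours": 950_000, "material requisitions": 780_000}

for name, pool in pools.items():
    applied = pool["rate"] * actual_usage[name]
    variance = applied - actual_cost[name]          # positive = overapplied
    status = "overapplied" if variance > 0 else "underapplied"
    print(f"{name}: applied ${applied:,.0f}, actual ${actual_cost[name]:,.0f} "
          f"-> ${abs(variance):,.0f} {status}")
# machine hours: applied $980,000 vs. actual $950,000 -> $30,000 overapplied
# material requisitions: applied $775,000 vs. actual $780,000 -> $5,000 underapplied
```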
This information shows how valuable ABC can be in many situations for providing a more accurate picture than traditional allocation.

The Service Industries and Their Use of the Activity-Based Costing Allocation Method

ABC costing was developed to help management understand manufacturing costs and how they can be better managed. However, the service industry can apply the same principles to improve its cost management. Direct material and direct labor costs range from nonexistent to minimal in the service industry, which makes the overhead application even more important. The number and types of cost pools may be completely different in the service industry as compared to the manufacturing industry. For example, the health-care industry may have different overhead costs and cost drivers for the treatment of illnesses than they have for injuries. Some of the overhead related to monitoring a patient's health status may overlap, but most of the overhead related to diagnosis and treatment differ from each other.

Link to Learning

Activity-based costing is not restricted to manufacturing. Service industries also have cost drivers and can benefit from analyzing what drives their costs. See this report on activity-based costing at UPS for an example.

6.4 Compare and Contrast Traditional and Activity-Based Costing Systems

Calculating an accurate manufacturing cost for each product is a vital piece of information for a company's decision-making. For example, knowing the cost to produce a unit of product affects not only how a business budgets to manufacture that product, but it is often the starting point in determining the sales price.

An important component in determining the total production costs of a product or job is the proper allocation of overhead. For some companies, the often less-complicated traditional method does an excellent job of allocating overhead. However, for many products, the allocation of overhead is a more complex issue, and an activity-based costing (ABC) system is more appropriate.

Another factor to consider in determining which of the two major overhead allocation methods to use is the cost associated with collecting and analyzing information. When making their decision regarding which method to use, the company must consider these costs, both in time and money. Table 6.4 compares overhead in the two systems. In many cases, the ABC method is more expensive in terms of time and other costs.

The difference between the traditional method (using one cost driver) and the ABC method (using multiple cost drivers) is more complex than simply the number of cost drivers. When direct labor is a large portion of the product cost, the overhead costs tend to be consistently driven by one cost driver, which is typically direct labor or machine hours; the traditional method appropriately allocates those costs. When technology is a large portion of the product cost, the overhead costs tend to be driven by multiple drivers, so using multiple cost drivers in the ABC method allows for a more precise allocation of overhead.
Overhead in Traditional versus ABC Costing

                     Traditional                               ABC
Overhead assigned    Single cost driver                        Multiple cost drivers
Optimal usage        When direct labor is a large portion      When technology is a large portion
                     of the product cost                       of the product cost
Orientation          Cost driven                               Process driven

Table 6.4

As shown with Musicality's products, not only are there different costs for each product when comparing traditional allocation with activity-based costing, but ABC showed that the Solo product creates a loss for the company. Activity-based costing is a more accurate method, because it assigns overhead based on the activities that drive the overhead costs. It can be concluded, then, that the cost and subsequent gross loss for each unit's sales provide a more accurate picture than the overall cost and gross profit under the traditional method. Comparing the cost per unit under the two systems shows how different the costs can be depending on the method used. Advantages and Disadvantages of the Traditional Method of Calculating Overhead The traditional allocation system assigns manufacturing overhead based on a single cost driver, such as direct labor hours, direct labor dollars, or machine hours, and is optimal when there is a relationship between the activity base and overhead. This most often occurs when direct labor is a large part of the product cost. The theory supporting the single cost driver is that the cost driver selected increases as overhead increases, and further analysis is more costly than it is valuable. Each method has its advantages and disadvantages. These are advantages of the traditional method: All manufacturing costs are classified as material, labor, or overhead and assigned to products regardless of whether they drive or are driven by production. All manufacturing costs are considered to be part of the product cost, whereas nonmanufacturing costs are not considered to be production costs and are not assigned to products, regardless of whether the costs are based on the products. For example, the machines used to receive and process customer orders are necessary because product orders must be taken, but their costs are not allocated to particular products. There is only one overhead cost pool and a single measure of activity, such as direct labor hours, which makes the traditional method simple and less costly to maintain. The predetermined overhead rate is based on estimated costs at the budgeted level of activity. Therefore, the overhead rate is consistent across products, but overhead may be over- or underapplied. Disadvantages of the traditional method include: The use of the single cost driver does not allocate overhead as accurately as using multiple cost drivers. The use of the single cost driver may overallocate overhead to one product and underallocate overhead to another product, resulting in erroneous total costs and potentially setting an incorrect sales price. Traditional allocation assigns costs as period or product costs, and all product costs are included in the cost of inventory, which makes this method acceptable for generally accepted accounting principles (GAAP). Think It Through ABC Method and Financial Statements There are pros and cons to both the traditional and the ABC system. One advantage of the ABC system is that it provides more accurate information on the costs to manufacture products, but it does not show up on the financial statements.
Explain how this costing information has value if it does not appear on the financial statements. Advantages and Disadvantages of Creating an Activity-Based Costing System for Allocating Overhead While ABC systems more accurately allocate the costs based on the various resources used to make the product, they cost more to use and, therefore, are not always the best method. Management needs to consider each system and how it will work within its own organization. Some advantages of activity-based costing include: There are multiple overhead cost pools, and each has its own unique measure of activity. This provides more accurate rates for applying overhead, but it takes more time to implement and results in a higher cost. The allocation bases (i.e., measures of activity) often differ from those used in traditional allocation. Multiple cost pools allow management to group costs being influenced by similar drivers and to consider cost drivers beyond the typical labor or machine hour. This results in a more accurate overhead application rate. The activity rates may consider the level of activity at capacity instead of the budgeted level of activity. Both nonmanufacturing costs and manufacturing costs may be assigned to products. The main rationale in assigning costs is the relationship between the cost and the product. If the cost increases as the volume of the product increases, it is considered part of overhead. There are disadvantages to using ABC costing that management needs to consider when determining which method to use. Those disadvantages include: Some manufacturing costs may be excluded from product costs. For example, the cost to heat the factory may be excluded as a product cost because, while it is necessary for production, it does not fit into one of the activity-driven cost pools. It is more expensive, as there is a cost to collect and analyze cost driver information as well as to allocate overhead on the basis of multiple cost drivers. An ABC system takes much more time and effort to implement and operate, as information on cost drivers must be collected in an objective manner. The advantages and disadvantages of both methods are as previously listed, but what is the practical impact on the product cost? There are several items to consider at the product cost level: Adopting an ABC overhead allocation system can allow a company to shift manufacturing overhead costs between products based on their volume. Using an ABC method to better assign unit-level, batch-level, product-level, and factory-level costs can increase the per-unit costs of the low-volume products and decrease the per-unit costs of the high-volume products. The effects are not symmetrical; there is usually a larger change in the per-unit costs of the low-volume products. The cost of the products may include some period costs but not some of the product costs, so it is not considered GAAP compliant. The information is supplemental and very helpful to management, but the company still needs to compute the product's cost under the traditional method for financial reporting. Link to Learning Changing from the traditional allocation method to ABC costing is not as simple as having management dictate that employees follow the new system. There are often challenges that begin with convincing employees that it will provide benefits and that they should buy into the new system. See this 1995 article, Tapping the Full Potential of ABC, illustrating some of Chrysler's challenges, to learn more.
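To make the cost-shifting point above concrete, here is a small sketch with hypothetical figures (two products, two cost pools; none of these numbers come from the chapter). It allocates the same total overhead first with a single plant-wide labor-hour rate and then with ABC, showing the characteristically larger swing in the low-volume product's per-unit cost.

```python
# Hypothetical figures only: the same $200,000 of overhead allocated two ways.
pool_overhead = {"machine_setups": 80_000, "assembly": 120_000}

products = {
    "high_volume": {"units": 9_000, "labor_hours": 18_000, "setups": 100},
    "low_volume":  {"units": 1_000, "labor_hours": 2_000,  "setups": 300},
}

total_overhead = sum(pool_overhead.values())
total_labor_hours = sum(p["labor_hours"] for p in products.values())
total_setups = sum(p["setups"] for p in products.values())

for name, p in products.items():
    # Traditional: one plant-wide rate based on direct labor hours
    traditional = (total_overhead / total_labor_hours) * p["labor_hours"] / p["units"]
    # ABC: setup overhead follows setups; assembly overhead follows labor hours
    abc = ((pool_overhead["machine_setups"] / total_setups) * p["setups"]
           + (pool_overhead["assembly"] / total_labor_hours) * p["labor_hours"]
           ) / p["units"]
    print(f"{name}: traditional ${traditional:.2f}/unit, ABC ${abc:.2f}/unit")

# Output:
# high_volume: traditional $20.00/unit, ABC $14.22/unit
# low_volume: traditional $20.00/unit, ABC $72.00/unit
```

Under the single labor-hour rate, both products look identical at $20 of overhead per unit; because the low-volume product consumes a disproportionate share of setups, ABC moves overhead toward it, and its per-unit cost changes far more than the high-volume product's.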
6.5 Compare and Contrast Variable and Absorption Costing ABC costing assigns a proportion of overhead costs on the basis of the activities under the presumption that the activities drive the overhead costs. As such, ABC costing converts the indirect costs into product costs. There are also cost systems with a different approach. Instead of focusing on the overhead costs incurred by the product unit, these methods focus on assigning the fixed overhead costs to inventory. There are two major methods in manufacturing firms for valuing work in process and finished goods inventory for financial accounting purposes: variable costing and absorption costing. Variable costing, also called direct costing or marginal costing, is a method in which all variable costs (direct material, direct labor, and variable overhead) are assigned to a product and fixed overhead costs are expensed in the period incurred. Under variable costing, fixed overhead is not included in the value of inventory. In contrast, absorption costing, also called full costing, is a method that applies all direct costs, fixed overhead, and variable manufacturing overhead to the cost of the product. The value of inventory under absorption costing includes direct material, direct labor, and all overhead. One difference between the methods is that management may prefer one over the other for internal decision-making purposes. The other main difference is that only the absorption method is in accordance with GAAP. Variable Costing Versus Absorption Costing Methods The difference between the absorption and variable costing methods centers on the treatment of fixed manufacturing overhead costs. Absorption costing "absorbs" all of the costs used in manufacturing and includes fixed manufacturing overhead as product costs. Absorption costing is in accordance with GAAP, because the product cost includes fixed overhead. Variable costing considers the variable overhead costs and does not consider fixed overhead as part of a product's cost. It is not in accordance with GAAP, because fixed overhead is treated as a period cost and is not included in the cost of the product. Concepts In Practice Absorbing Costs through Overproduction While companies use absorption costing for their financial statements, many also use variable costing for decision-making. The Big Three auto companies made decisions based on absorption costing, and the result was the manufacturing of more vehicles than the market demanded. Why? With absorption costing, the fixed overhead costs, such as marketing, were allocated to inventory, and the larger the inventory, the lower the unit cost of that overhead. For example, if a fixed cost of $1,000 is allocated to 500 units, the cost is $2 per unit. But if there are 2,000 units, the per-unit cost is $0.50. While this was not the only reason for manufacturing too many cars, it kept the period costs hidden among the manufacturing costs. Using variable costing would have kept the costs separate and led to different decisions. Deferred Costs Absorption costing considers all fixed overhead as part of a product's cost and assigns it to the product. This treatment means that as inventories increase and are possibly carried over from the year of production to actual sales of the units in the next year, the company allocates a portion of the fixed manufacturing overhead costs from the current period to future periods.
Carrying over inventories and overhead costs is reflected in the ending inventory balances at the end of the production period, which become the beginning inventory balances at the start of the next period. It is anticipated that the units that were carried over will be sold in the next period. If the units are not sold, the costs will continue to be included in the costs of producing the units until they are sold. Finally, at the point of sale, whenever it happens, these deferred production costs, such as fixed overhead, become part of the costs of goods sold and flow through to the income statement in the period of the sale. This treatment is based on the expense recognition principle, which is one of the cornerstones of accrual accounting and is why the absorption method follows GAAP. The principle states that expenses should be recognized in the period in which the related revenues are earned. Including fixed overhead as a cost of the product ensures the fixed overhead is expensed (as part of cost of goods sold) when the sale is reported. For example, assume a new company has fixed overhead of $12,000 and manufactures 10,000 units. Direct materials cost is $3 per unit, direct labor is $15 per unit, and the variable manufacturing overhead is $7 per unit. Under absorption costing, the amount of fixed overhead in each unit is $1.20 ($12,000/10,000 units); variable costing does not include any fixed overhead as part of the cost of the product. Figure 6.11 shows the cost to produce the 10,000 units using absorption and variable costing. Assume each unit is sold for $33, so sales are $330,000 for the year. If the entire finished goods inventory is sold, the income is the same for both the absorption and variable cost methods. The difference is that the absorption cost method includes fixed overhead as part of the cost of goods sold, while the variable cost method includes it as an administrative cost, as shown in Figure 6.12. When the entire inventory is sold, the total fixed cost is expensed as the cost of goods sold under the absorption method or it is expensed as an administrative cost under the variable method; net income is the same under both methods. Now assume that 8,000 units are sold and 2,000 are still in finished goods inventory at the end of the year. The cost of the fixed overhead expensed on the income statement as cost of goods sold is $9,600 ($1.20/unit × 8,000 units), and the fixed overhead cost remaining in finished goods inventory is $2,400 ($1.20/unit × 2,000 units). The amount of the fixed overhead paid by the company is not totally expensed, because the number of units in ending inventory has increased. Eventually, the fixed overhead cost will be expensed when the inventory is sold in the next period. Figure 6.13 shows the cost to produce the 8,000 units of inventory that became cost of goods sold and the 2,000 units that remain in ending inventory. If the 8,000 units are sold for $33 each, the difference between absorption costing and variable costing is a timing difference. Under absorption costing, the 2,000 units in ending inventory include the $1.20 per unit share, or $2,400 of fixed cost. That cost will be expensed when the inventory is sold and accounts for the difference in net income under absorption and variable costing, as shown in Figure 6.14. Under variable costing, the fixed overhead is not considered a product cost and would not be assigned to ending inventory. The fixed overhead would have been expensed on the income statement as a period cost.
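The timing difference just described can be verified with a few lines of arithmetic. This sketch uses only the numbers given in the example above ($12,000 fixed overhead, 10,000 units produced, $25 of variable cost per unit, a $33 selling price) and reproduces the $2,400 difference when 2,000 units remain in ending inventory; like the example, it ignores selling and administrative costs.

```python
# The chapter's example: 10,000 units produced; variable cost $25 per unit
# ($3 direct material + $15 direct labor + $7 variable overhead); fixed
# overhead $12,000; selling price $33. Non-manufacturing costs are ignored,
# as in the example.

UNITS_PRODUCED = 10_000
PRICE = 33
VARIABLE_COST = 3 + 15 + 7                            # $25 per unit
FIXED_OVERHEAD = 12_000
FIXED_OH_PER_UNIT = FIXED_OVERHEAD / UNITS_PRODUCED   # $1.20 under absorption

def net_income(units_sold: int, method: str) -> float:
    sales = PRICE * units_sold
    if method == "absorption":
        # Fixed overhead attaches to units; only the sold units' share
        # is expensed as cost of goods sold.
        return sales - (VARIABLE_COST + FIXED_OH_PER_UNIT) * units_sold
    # Variable costing: all fixed overhead is expensed as a period cost.
    return sales - VARIABLE_COST * units_sold - FIXED_OVERHEAD

for sold in (10_000, 8_000):
    absorption = net_income(sold, "absorption")
    variable = net_income(sold, "variable")
    print(f"{sold:>6} sold: absorption ${absorption:,.0f}, "
          f"variable ${variable:,.0f}, difference ${absorption - variable:,.0f}")

# Output:
#  10000 sold: absorption $68,000, variable $68,000, difference $0
#   8000 sold: absorption $54,400, variable $52,000, difference $2,400
```

The $2,400 difference is exactly the $1.20 of fixed overhead attached to each of the 2,000 unsold units: absorption costing defers that amount in inventory, while variable costing expenses it immediately.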
Inventory Differences Because absorption costing defers costs, the ending inventory figure differs from that calculated using the variable costing method. As shown in Figure 6.13, the inventory figure under absorption costing considers both variable and fixed manufacturing costs, whereas under variable costing, it only includes the variable manufacturing costs. Suitability for Cost-Volume-Profit Analysis Using the absorption costing method on the income statement does not easily provide data for cost-volume-profit (CVP) computations. In the previous example, the fixed overhead cost per unit is $1.20 based on an activity of 10,000 units. If the company estimated 12,000 units, the fixed overhead cost per unit would decrease to $1 per unit. This calculation is possible, but it must be done multiple times each time the volume of activity changes in order to provide accurate data, because CVP analysis requires costs to be separated into their fixed and variable components, a distinction that absorption costing income statements do not make. Your Turn Comparing Variable and Absorption Methods A company expects to manufacture 7,000 units. Its direct material costs are $10 per unit, direct labor is $9 per unit, and variable overhead is $3 per unit. The fixed overhead is estimated at $49,000. How much would each unit cost under both the variable method and the absorption method? Solution The variable cost per unit is $22 (the total of direct material, direct labor, and variable overhead). The absorption cost per unit is the variable cost ($22) plus the per-unit cost of $7 ($49,000/7,000 units) for the fixed overhead, for a total of $29. Advantages and Disadvantages of the Variable Costing Method Variable costing only includes the product costs that vary with output, which typically include direct material, direct labor, and variable manufacturing overhead. Fixed overhead is not considered a product cost under variable costing. Fixed manufacturing overhead is still expensed on the income statement, but it is treated as a period cost charged against revenue for each period. It does not include a portion of fixed overhead costs that remains in inventory and is not expensed, as in absorption costing. If absorption costing is the method acceptable for financial reporting under GAAP, why would management prefer variable costing? Advocates of variable costing argue that the definition of fixed costs holds, and fixed manufacturing overhead costs will be incurred regardless of whether anything is actually produced. They also argue that fixed manufacturing overhead costs are true period expenses and have no future service potential, since incurring them now has no effect on whether these costs will have to be incurred again in the future. Advantages of the variable approach are: More useful for CVP analysis. Variable costing statements provide data that are immediately useful for CVP analysis because fixed and variable overhead are separate items. Financial statements prepared with absorption costing require additional computations to break out the fixed and variable costs from the product costs. Income is not affected by changes in production volume. Fixed overhead is treated as a period cost and does not vary as the volume of inventory changes. This results in income increasing in proportion to sales, which may not happen under absorption costing. Under absorption costing, the fixed overhead assigned to each unit changes as the production volume changes.
Therefore, the reported net income changes with production, since fixed costs are spread across the changing number of units. This can distort the income picture and may even result in income moving in an opposite direction from sales. Understandability . Managers may find it easier to understand variable costing reports because overhead changes with the cost driver. Fixed costs are more visible . Variable costing emphasizes the impact fixed costs have on income. The total amount of fixed costs for the period is reported after gross profit. This emphasizes the direct impact fixed costs have on net income, whereas in absorption costing, fixed costs are included as product costs and thus are part of cost of goods sold, which is a determinant of gross profit. Margins are less distorted . Gross margins are not distorted by the allocation of common fixed costs. This facilitates appraisal of the profitability of products, customers, and business segments. Common fixed costs , sometimes called allocated fixed costs, are costs of the organization that are shared by the various revenue-generating components of the business, such as divisions. Examples of these costs include the chief executive officer (CEO) salary and corporate headquarter costs, such as rent and insurance. These overhead costs are typically allocated to various components of the organization, such as divisions or production facilities. This is necessary, because these costs are needed for doing business but are generated by a part of the company that does not directly generate revenues to offset these costs. The company’s revenues are generated by the goods that are produced and sold by the various divisions of the company. Control is facilitated . Variable costing considers only variable production costs and facilitates the use of control mechanisms such as flexible budgets that are based on differing levels of production and therefore designed around variable costs, since fixed costs do not change within a relevant range of production. Incremental analysis is more straightforward . Variable cost corresponds closely with the current out-of-pocket expenditure necessary to manufacture goods and can therefore be used more readily in incremental analysis. While the variable cost method helps management make decisions, especially when the number of units in ending inventory fluctuates, there are some disadvantages: Financial reporting . The variable cost method is not acceptable for financial reporting under GAAP. GAAP requires expenses to be recognized in the same period as the related revenue, and the variable method expenses fixed overhead as a period cost regardless of how much inventory remains. Tax reporting . Tax laws in the United States and many other countries do not allow variable costing and require absorption costing. Advantages and Disadvantages of the Absorption Costing Method Under the absorption costing method, all costs of production, whether fixed or variable, are considered product costs. This means that absorption costing allocates a portion of fixed manufacturing overhead to each product. Advocates of absorption costing argue that fixed manufacturing overhead costs are essential to the production process and are an actual cost of the product. They further argue that costs should be categorized by function rather than by behavior, and these costs must be included as a product cost regardless of whether the cost is fixed or variable. The advantages of absorption costing include: Product cost . 
Absorption costing includes fixed overhead as part of the inventory cost, and it is expensed as cost of goods sold when inventory is sold. This represents a more complete list of costs involved in producing a product. Financial reporting. Absorption costing is the acceptable reporting method under GAAP. Tax reporting. Absorption costing is the method required for tax preparation in the United States and many other countries. While financial and tax reporting are the main advantages of absorption costing, there is one distinct disadvantage: Difficulty in understanding. The absorption costing method does not list the incremental fixed overhead costs and is more difficult to understand and analyze as compared to variable costing. Ethical Considerations Cost Accounting for Ethical Business Managers An ethical and evenhanded approach to providing clear and informative financial information regarding costing is the goal of the ethical accountant. Ethical business managers understand the benefits of using the appropriate costing systems and methods. The accountant's entire business organization needs to understand that the costing system is created to provide efficiency in assisting in making business decisions. Determining the appropriate costing system and the type of information to be provided to management goes beyond providing just accounting information. The costing system should provide the organization's management with factual and true financial information regarding the organization's operations and the performance of the organization. Unethical business managers can game the costing system by unfairly or unscrupulously influencing the outcome of the costing system's reports. Comparing the Operating Income Statements for Both Methods Assuming No Ending Inventory in the First Year, and the Existence of Ending Inventory in the Second Year In order to understand how to prepare income statements using both methods, consider a scenario in which a company has no ending inventory in the first year but does have ending inventory in the second year. Outdoor Nation, a manufacturer of residential, tabletop propane heaters, wants to determine whether absorption costing or variable costing is better for internal decision-making. It manufactures 5,000 units annually and sells them for $15 per unit. The total of direct material, direct labor, and variable overhead is $5 per unit with an additional $1 in variable sales cost paid when the units are sold. Additionally, fixed overhead is $15,000 per year, and fixed sales and administrative expenses are $21,000 per year. Production is estimated to hold steady at 5,000 units per year, while sales estimates are projected to be 5,000 units in year 1; 4,000 units in year 2; and 6,000 in year 3. Under absorption costing, the ending inventory costs include all manufacturing costs, including overhead. If fixed overhead is $15,000 per year and 5,000 units are manufactured each year, the fixed overhead per unit is $3: $15,000 / 5,000 units = $3 per unit. The projected income statement using absorption costing is shown in Figure 6.15. In variable costing, the fixed overhead is not included in the cost of goods sold even if it relates to manufacturing. As a result, the net income under variable costing differs from absorption costing by the same amount as the inventory differential. The projected income under variable costing is shown in Figure 6.16. The difference between the methods is attributable to the fixed overhead.
Therefore, the methods can be reconciled with each other, as shown in Figure 6.17. Each method results in different amounts for net income when the inventory amounts change. More specifically, the effects on income are: Sales and Production equal. When a company sells the same quantity of products produced during the period, the resulting net income will be identical whether absorption costing or variable costing is used. When sales equal production, all manufacturing costs are accounted for in net income, and none of the costs are waiting in finished goods inventory to be recognized in a future period. Remember, with absorption costing, all manufacturing costs are added to the cost of the product during the work in process phase; thus, as the goods are sold, all costs have been accounted for. With variable costing, only the variable costs of production are added to the cost of the product during the work in process phase, and the fixed costs are expensed in the period in which they are incurred. Thus, in the example where sales and production are equal, all costs have been accounted for since all of the produced inventory has moved through cost of goods sold. This means that net income under absorption costing would be the same as net income under variable costing. Sales less than Production. When a company produces more than it sells, net income will be less under variable costing than under absorption costing. In this scenario, there will be a buildup, or an increase, in inventory from the beginning of the period to the end of the period. Under absorption costing, a portion of the fixed manufacturing costs remains in the finished goods inventory account. But under variable costing, all of those fixed costs have been expensed during the current period and thus have reduced net income. Sales greater than Production. When a company sells more than it produces during the current period, this indicates it is selling goods produced in a prior period. This will result in net income under variable costing being greater than under absorption costing. With absorption costing, all manufacturing costs are captured in the finished goods inventory account, and as those goods are sold, those costs become expenses. Selling items that were produced in a prior period defers the recognition of the costs of those products until the future period in which they are sold. Variable costing results in all of the variable costs associated with the sold products being in the current period net income, but only the current period fixed expenses would be included in the current period net income. The fixed expenses associated with the items produced in a prior period were recognized in the period in which they were incurred, not the period in which the products are sold. This results in fewer expenses and therefore greater income with the variable cost method. Effect of differences in Sales and Production Long Term. The differences between net income generated under absorption costing and variable costing will be almost zero over the long run, as all costs associated with the production of goods will eventually be recognized in net income. The use of absorption versus variable costing creates more of a timing issue for the recognition of fixed expenses, and this is why net income would vary from period to period under the two methods but in the long run would not. In addition, absorption costing does allow for manipulation of income by managers through overproduction.
Increasing production at year-end results in a higher net income than if the additional goods had not been produced, since increasing the number of units decreases the fixed cost per unit. Under absorption costing, these fixed costs follow the units produced and do not become a part of cost of goods sold until they are sold. Instead, a portion of the fixed costs is in the inventory accounts. Why would a manager want to manipulate income by overproducing? If the manager's annual bonus or other compensation is linked to net income, then the manager may be motivated to overproduce in order to increase the potential for or the amount of a bonus. If the level of sales remains constant while manipulating the production level, such an action would increase the company's expenses (including the amount of bonus) while not increasing its revenue. Barring any other justification for the increase in production, such an action by the manager would typically be considered an ethical violation, since the manager's actions would be in the manager's best interests, but contrary to the best interests of the company. Link to Learning Absorption costing is not as well understood as variable costing because of its financial statement limitations. But understanding how it can help management make decisions is very important. See the Strategic CFO forum on Absorption Cost Accounting that helps managers understand its uses to learn more.
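As a closing illustration, the Outdoor Nation figures from this section can be used to reconcile the two methods across all three years. The sketch below uses only the inputs stated in the example (5,000 units produced per year at $15; $5 variable manufacturing cost plus $1 variable selling cost per unit sold; $15,000 fixed overhead; $21,000 fixed selling and administrative expense); the income figures it prints are derived from those inputs rather than copied from the text's figures.

```python
# Outdoor Nation: production holds at 5,000 units per year; sales are
# 5,000, 4,000, then 6,000 units. Fixed overhead per unit produced is
# $15,000 / 5,000 = $3, so the full absorption cost is $8 per unit.

PRODUCED, PRICE = 5_000, 15
VAR_MFG, VAR_SELL = 5, 1
FIXED_OH, FIXED_SA = 15_000, 21_000
FIXED_OH_PER_UNIT = FIXED_OH / PRODUCED   # $3

for year, sold in enumerate((5_000, 4_000, 6_000), start=1):
    sales = PRICE * sold
    # Absorption: $3 of fixed overhead travels with each unit sold
    # (valid here because the unit cost is the same $8 in every year,
    # so beginning inventory carries the same cost as current production).
    absorption = (sales - (VAR_MFG + FIXED_OH_PER_UNIT) * sold
                  - VAR_SELL * sold - FIXED_SA)
    # Variable: the full $15,000 of fixed overhead is expensed every year.
    variable = (sales - VAR_MFG * sold - VAR_SELL * sold
                - FIXED_OH - FIXED_SA)
    deferred_units = PRODUCED - sold   # change in inventory this year
    print(f"year {year}: absorption ${absorption:,.0f}, "
          f"variable ${variable:,.0f}, "
          f"difference ${absorption - variable:,.0f} "
          f"(= $3 x {deferred_units} change in inventory units)")

# Output:
# year 1: absorption $9,000, variable $9,000, difference $0 (= $3 x 0 ...)
# year 2: absorption $3,000, variable $0, difference $3,000 (= $3 x 1000 ...)
# year 3: absorption $15,000, variable $18,000, difference $-3,000 (= $3 x -1000 ...)
```

Over the three years, both methods report the same $27,000 of total income; each year's difference is exactly the $3 of fixed overhead deferred into (or released from) inventory, which is the reconciliation this section describes.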
Summary 12.1 The Economics of Cotton In the years before the Civil War, the South produced the bulk of the world’s supply of cotton. The Mississippi River Valley slave states became the epicenter of cotton production, an area of frantic economic activity where the landscape changed dramatically as land was transformed from pinewoods and swamps into cotton fields. Cotton’s profitability relied on the institution of slavery, which generated the product that fueled cotton mill profits in the North. When the international slave trade was outlawed in 1808, the domestic slave trade exploded, providing economic opportunities for Whites involved in many aspects of the trade and increasing the possibility of enslaved people’s dislocation and separation from kin and friends. Although the larger American and Atlantic markets relied on southern cotton in this era, the South depended on these other markets for food, manufactured goods, and loans. Thus, the market revolution transformed the South just as it had other regions. 12.2 African Americans in the Antebellum United States Slave labor in the antebellum South generated great wealth for plantation owners. Enslaved people, in contrast, endured daily traumas as human property. Enslaved people resisted their condition in a variety of ways, and many found some solace in Christianity and the communities they created in the slave quarters. While some free Black people achieved economic prosperity and even became slaveholders themselves, the vast majority found themselves restricted by the same White-supremacist assumptions upon which the institution of slavery was based. 12.3 Wealth and Culture in the South Although a small White elite owned the vast majority of enslaved people in the South, and most other White people could only aspire to slaveholders’ wealth and status, slavery shaped the social life of all White southerners in profound ways. Southern culture valued a behavioral code in which men’s honor, based on the domination of others and the protection of southern White womanhood, stood as the highest good. Slavery also decreased class tensions, binding Whites together on the basis of race despite their inequalities of wealth. Several defenses of slavery were prevalent in the antebellum era, including Calhoun’s argument that the South’s “concurrent majority” could overrule federal legislation deemed hostile to southern interests; the notion that slaveholders’ care of their chattel made the enslaved better off than wage workers in the North; and the profoundly racist ideas underlying polygenism. 12.4 The Filibuster and the Quest for New Slave States The decade of the 1850s witnessed various schemes to expand the American empire of slavery. The Ostend Manifesto articulated the right of the United States to forcefully seize Cuba if Spain would not sell it, while filibuster expeditions attempted to annex new slave states without the benefit of governmental approval. Those who pursued the goal of expanding American slavery believed they embodied the true spirit of White racial superiority.
Chapter Outline 12.1 The Economics of Cotton 12.2 African Americans in the Antebellum United States 12.3 Wealth and Culture in the South 12.4 The Filibuster and the Quest for New Slave States Introduction Nine new slave states entered the Union between 1789 and 1860, rapidly expanding and transforming the South into a region of economic growth built on slave labor. In the image above (Figure 12.1), innumerable enslaved workers load cargo onto a steamship in the Port of New Orleans, the commercial center of the antebellum South, while two well-dressed White men stand by talking. Commercial activity extends as far as the eye can see. By the mid-nineteenth century, southern commercial centers like New Orleans had become home to the greatest concentration of wealth in the United States. While most White southerners did not hold enslaved people, they aspired to join the ranks of elite slaveholders, who played a key role in the politics of both the South and the nation. Meanwhile, slavery shaped the culture and society of the South, which rested on a racial ideology of White supremacy and a vision of the United States as a White man's republic. Enslaved people endured the traumas of slavery by creating their own culture and using the Christian message of redemption to find hope for a world of freedom without violence.
[ { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "New Orleans had been part of the French empire before the United States purchased it , along with the rest of the Louisiana Territory , in 1803 . <hl> In the first half of the nineteenth century , it rose in prominence and importance largely because of the cotton boom , steam-powered river traffic , and its strategic position near the mouth of the Mississippi River . <hl> Steamboats moved down the river transporting cotton grown on plantations along the river and throughout the South to the port at New Orleans . From there , the bulk of American cotton went to Liverpool , England , where it was sold to British manufacturers who ran the cotton mills in Manchester and elsewhere . This lucrative international trade brought new wealth and new residents to the city . By 1840 , New Orleans alone had 12 percent of the nation ’ s total banking capital , and visitors often commented on the great cultural diversity of the city . In 1835 , Joseph Holt Ingraham wrote : “ Truly does New-Orleans represent every other city and nation upon earth . I know of none where is congregated so great a variety of the human species . ” Slave labor , cotton , and the steamship transformed the city from a relatively isolated corner of North America in the eighteenth century to a thriving metropolis that rivaled New York in importance ( Figure 12.5 ) . Almost no cotton was grown in the United States in 1787 , the year the federal constitution was written . <hl> However , following the War of 1812 , a huge increase in production resulted in the so-called cotton boom , and by midcentury , cotton became the key cash crop ( a crop grown to sell rather than for the farmer ’ s sole use ) of the southern economy and the most important American commodity . <hl> <hl> By 1850 , of the 3.2 million enslaved people in the country ’ s fifteen slave states , 1.8 million were producing cotton ; by 1860 , enslaved labor was producing over two billion pounds of cotton per year . <hl> Indeed , American cotton soon made up two-thirds of the global supply , and production continued to soar . <hl> By the time of the Civil War , South Carolina politician James Hammond confidently proclaimed that the North could never threaten the South because “ cotton is king . ” <hl>", "hl_sentences": "In the first half of the nineteenth century , it rose in prominence and importance largely because of the cotton boom , steam-powered river traffic , and its strategic position near the mouth of the Mississippi River . However , following the War of 1812 , a huge increase in production resulted in the so-called cotton boom , and by midcentury , cotton became the key cash crop ( a crop grown to sell rather than for the farmer ’ s sole use ) of the southern economy and the most important American commodity . By 1850 , of the 3.2 million enslaved people in the country ’ s fifteen slave states , 1.8 million were producing cotton ; by 1860 , enslaved labor was producing over two billion pounds of cotton per year . By the time of the Civil War , South Carolina politician James Hammond confidently proclaimed that the North could never threaten the South because “ cotton is king . ”", "question": { "cloze_format": "___ was not one of the effects of the cotton boom.", "normal_format": "Which of the following was not one of the effects of the cotton boom?", "question_choices": [ "U.S. 
trade increased with France and Spain.", "Northern manufacturing expanded.", "The need for slave labor grew.", "Port cities like New Orleans expanded." ], "question_id": "fs-idm225454704", "question_text": "Which of the following was not one of the effects of the cotton boom?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "the rise of a thriving domestic slave trade" }, "bloom": null, "hl_context": "<hl> As discussed above , after centuries of slave trade with West Africa , Congress banned the further importation of enslaved Africans beginning in 1808 . <hl> <hl> The domestic slave trade then expanded rapidly . <hl> As the cotton trade grew in size and importance , so did the domestic slave trade ; the cultivation of cotton gave new life and importance to slavery , increasing the value of enslaved individuals . To meet the South ’ s fierce demand for labor , American smugglers illegally transferred captives through Florida and later through Texas . Many more enslaved Africans arrived illegally from Cuba ; indeed , Cubans relied on the smuggling of enslaved people to prop up their finances . The largest number of enslaved people after 1808 , however , came from the massive , legal internal slave market in which slave states in the Upper South sold enslaved men , women , and children to states in the Lower South . For the enslaved , the domestic trade presented the full horrors of slavery as children were ripped from their mothers and fathers and families destroyed , creating heartbreak and alienation . <hl> In 1807 , the U . S . Congress abolished the foreign slave trade , a ban that went into effect on January 1 , 1808 . <hl> <hl> After this date , importing captives from Africa became illegal in the United States . <hl> <hl> While smuggling continued to occur , the end of the international slave trade meant that enslaved domestic people were in very high demand . <hl> Fortunately for Americans whose wealth depended upon the exploitation of slave labor , a fall in the price of tobacco had caused landowners in the Upper South to reduce their production of this crop and use more of their land to grow wheat , which was far more profitable . While tobacco was a labor-intensive crop that required many people to cultivate it , wheat was not . Former tobacco farmers in the older states of Virginia and Maryland found themselves with “ surplus ” enslaved people whom they were obligated to feed , clothe , and shelter . Some slaveholders responded to this situation by releasing enslaved people ; far more decided to sell their excess bondsmen . Virginia and Maryland therefore took the lead in the domestic slave trade , the trading of enslaved people within the borders of the United States .", "hl_sentences": "As discussed above , after centuries of slave trade with West Africa , Congress banned the further importation of enslaved Africans beginning in 1808 . The domestic slave trade then expanded rapidly . In 1807 , the U . S . Congress abolished the foreign slave trade , a ban that went into effect on January 1 , 1808 . After this date , importing captives from Africa became illegal in the United States . 
While smuggling continued to occur , the end of the international slave trade meant that enslaved domestic people were in very high demand .", "question": { "cloze_format": "The abolition of the foreign slave trade in 1807 led to _______.", "normal_format": "What did the abolition of the foreign slave trade in 1807 lead to?", "question_choices": [ "a dramatic decrease in the price and demand for enslaved people", "the rise of a thriving domestic slave trade", "a reform movement calling for the complete end to slavery in the United States", "the decline of cotton production" ], "question_id": "fs-idm179266624", "question_text": "The abolition of the foreign slave trade in 1807 led to _______." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "property" }, "bloom": null, "hl_context": "Below the wealthy planters were the yeoman farmers , or small landowners ( Figure 12.13 ) . Below yeomen were poor , landless White people , who made up the majority of White people in the South . These landless White men dreamed of owning land and enslaving people and served as slave overseers , drivers , and traders in the southern economy . In fact , owning land and enslaved people provided one of the only opportunities for upward social and economic mobility . <hl> In the South , living the American dream meant enslaving people , producing cotton , and owning land . <hl> <hl> The selling of enslaved people was a major business enterprise in the antebellum South , representing a key part of the economy . <hl> <hl> White men invested substantial sums in enslaved people , carefully calculating the annual returns they could expect from each enslaved person as well as the possibility of greater profits through natural increase . <hl> The domestic slave trade was highly visible , and like the infamous Middle Passage that brought captive Africans to the Americas , it constituted an equally disruptive and horrifying journey now called the second middle passage . Between 1820 and 1860 , White American traders sold a million or more captives in the domestic slave market . Groups of enslaved people were transported by ship from places like Virginia , a state that specialized in raising enslaved people for sale , to New Orleans , where they were sold to planters in the Mississippi Valley . Others made the overland trek from older states like North Carolina to new and booming Deep South states like Alabama . <hl> In addition to cotton , the great commodity of the antebellum South was human chattel . <hl> <hl> Slavery was the cornerstone of the southern economy . <hl> By 1850 , about 3.2 million enslaved people labored in the United States , 1.8 million of whom worked in the cotton fields . They faced arbitrary power abuses from Whites ; they coped by creating family and community networks . Storytelling , song , and Christianity also provided solace and allowed enslaved individuals to develop their own interpretations of their condition .", "hl_sentences": "In the South , living the American dream meant enslaving people , producing cotton , and owning land . The selling of enslaved people was a major business enterprise in the antebellum South , representing a key part of the economy . White men invested substantial sums in enslaved people , carefully calculating the annual returns they could expect from each enslaved person as well as the possibility of greater profits through natural increase . In addition to cotton , the great commodity of the antebellum South was human chattel . 
Slavery was the cornerstone of the southern economy .", "question": { "cloze_format": "Under the law in the antebellum South, enslaved people were ________.", "normal_format": "Under the law in the antebellum South, what were enslaved people?", "question_choices": [ "servants", "animals", "property", "indentures" ], "question_id": "fs-idm105747888", "question_text": "Under the law in the antebellum South, enslaved people were ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "enslaved no one" }, "bloom": null, "hl_context": "The South prospered , but its wealth was very unequally distributed . Upward social mobility did not exist for the millions of enslaved people who produced a good portion of the nation ’ s wealth , while poor southern White people envisioned a day when they might rise enough in the world to own enslaved people of their own . Because of the cotton boom , there were more millionaires per capita in the Mississippi River Valley by 1860 than anywhere else in the United States . <hl> However , in that same year , only 3 percent of White people enslaved more than fifty people , and two-thirds of White households in the South did not enslave any people at all ( Figure 12.11 ) . <hl> <hl> Distribution of wealth in the South became less democratic over time ; fewer Whites enslaved people in 1860 than in 1840 . <hl>", "hl_sentences": "However , in that same year , only 3 percent of White people enslaved more than fifty people , and two-thirds of White households in the South did not enslave any people at all ( Figure 12.11 ) . Distribution of wealth in the South became less democratic over time ; fewer Whites enslaved people in 1860 than in 1840 .", "question": { "cloze_format": "The largest group of Whites in the South _______.", "normal_format": "Which of the following is correct about the largest group of Whites in the South?", "question_choices": [ "enslaved no one", "enslaved between one and nine people each", "enslaved between ten and ninety-nine people each", "enslaved over one hundred people each" ], "question_id": "fs-idp244628928", "question_text": "The largest group of Whites in the South _______." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Calhoun ’ s idea of the concurrent majority found full expression in his 1850 essay “ Disquisition on Government . ” In this treatise , he wrote about government as a necessary means to ensure the preservation of society , since society existed to “ preserve and protect our race . ” If government grew hostile to society , then a concurrent majority had to take action , including forming a new government . <hl> “ Disquisition on Government ” advanced a profoundly anti-democratic argument . It illustrates southern leaders ’ intense suspicion of democratic majorities and their ability to effect legislation that would challenge southern interests . As the nation expanded in the 1830s and 1840s , the writings of abolitionists — a small but vocal group of northerners committed to ending slavery — reached a larger national audience . White southerners responded by putting forth arguments in defense of slavery , their way of life , and their honor . Calhoun became a leading political theorist defending slavery and the rights of the South , which he saw as containing an increasingly embattled minority . 
<hl> He advanced the idea of a concurrent majority , a majority of a separate region ( that would otherwise be in the minority of the nation ) with the power to veto or disallow legislation put forward by a hostile majority . <hl>", "hl_sentences": "Calhoun ’ s idea of the concurrent majority found full expression in his 1850 essay “ Disquisition on Government . ” In this treatise , he wrote about government as a necessary means to ensure the preservation of society , since society existed to “ preserve and protect our race . ” If government grew hostile to society , then a concurrent majority had to take action , including forming a new government . He advanced the idea of a concurrent majority , a majority of a separate region ( that would otherwise be in the minority of the nation ) with the power to veto or disallow legislation put forward by a hostile majority .", "question": { "cloze_format": "John C. Calhoun argued for greater rights for southerners because of ___.", "normal_format": "John C. Calhoun argued for greater rights for southerners with which idea?", "question_choices": [ "polygenism", "nullification", "concurrent majority", "paternalism" ], "question_id": "fs-idp257500480", "question_text": "John C. Calhoun argued for greater rights for southerners with which idea?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> Filibustering plots picked up pace in the 1850s as the drive for expansion continued . <hl> <hl> Slaveholders looked south to the Caribbean , Mexico , and Central America , hoping to add new slave states . <hl> Spanish Cuba became the objective of many American slaveholders in the 1850s , as debate over the island dominated the national conversation . Many who urged its annexation believed Cuba had to be made part of the United States to prevent it from going the route of Haiti , with enslaved Black people overthrowing their captors and creating another Black republic , a prospect horrifying to many in the United States . Americans also feared that the British , who had an interest in the sugar island , would make the first move and snatch Cuba from the United States . Since Britain had outlawed slavery in its colonies in 1833 , Black people on the island of Cuba would then be free . <hl> Southern expansionists had spearheaded the drive to add more territory to the United States . <hl> They applauded the Louisiana Purchase and fervently supported Native American removal , the annexation of Texas , and the Mexican-American War . <hl> Drawing inspiration from the annexation of Texas , proslavery expansionists hoped to replicate that feat by bringing Cuba and other territories into the United States and thereby enlarging the American empire of slavery . <hl>", "hl_sentences": "Filibustering plots picked up pace in the 1850s as the drive for expansion continued . Slaveholders looked south to the Caribbean , Mexico , and Central America , hoping to add new slave states . Southern expansionists had spearheaded the drive to add more territory to the United States . 
Drawing inspiration from the annexation of Texas , proslavery expansionists hoped to replicate that feat by bringing Cuba and other territories into the United States and thereby enlarging the American empire of slavery .", "question": { "cloze_format": "Southern expansionists conducted filibuster expeditions ___.", "normal_format": "Why did southern expansionists conduct filibuster expeditions?", "question_choices": [ "to gain political advantage", "to annex new slave states", "to prove they could raise an army", "to map unknown territories" ], "question_id": "fs-idp53796144", "question_text": "Why did southern expansionists conduct filibuster expeditions?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "Cuba" }, "bloom": null, "hl_context": "<hl> Controversy around the Ostend Manifesto caused President Pierce to step back from the plan to take Cuba . <hl> After his election , President Buchanan , despite his earlier expansionist efforts , denounced filibustering as the action of pirates . Filibustering caused an even wider gulf between the North and the South ( Figure 12.18 ) .", "hl_sentences": "Controversy around the Ostend Manifesto caused President Pierce to step back from the plan to take Cuba .", "question": { "cloze_format": "The Controversy at the heart of the Ostend Manifesto centered on the fate of ___.", "normal_format": "On the fate of what the controversy at the heart of the Ostend Manifesto was centered?", "question_choices": [ "Ostend, Belgium", "Nicaragua", "Cuba", "Louisiana" ], "question_id": "fs-idm217777168", "question_text": "The controversy at the heart of the Ostend Manifesto centered on the fate of:" }, "references_are_paraphrase": null } ]
12.1 The Economics of Cotton Learning Objectives By the end of this section, you will be able to: Explain the labor-intensive processes of cotton production Describe the importance of cotton to the Atlantic and American antebellum economy In the antebellum era—that is, in the years before the Civil War—American planters in the South continued to grow Chesapeake tobacco and Carolina rice as they had in the colonial era. Cotton, however, emerged as the antebellum South’s major commercial crop, eclipsing tobacco, rice, and sugar in economic importance. By 1860, the region was producing two-thirds of the world’s cotton. In 1793, Eli Whitney revolutionized the production of cotton when he invented the cotton gin , a device that separated the seeds from raw cotton. Suddenly, a process that was extraordinarily labor-intensive when done by hand could be completed quickly and easily. American plantation owners, who were searching for a successful staple crop to compete on the world market, found it in cotton. As a commodity, cotton had the advantage of being easily stored and transported. A demand for it already existed in the industrial textile mills in Great Britain, and in time, a steady stream of slave-grown American cotton would also supply northern textile mills. Southern cotton, picked and processed by enslaved labor, helped fuel the nineteenth-century Industrial Revolution in both the United States and Great Britain. KING COTTON Almost no cotton was grown in the United States in 1787, the year the federal constitution was written. However, following the War of 1812, a huge increase in production resulted in the so-called cotton boom , and by midcentury, cotton became the key cash crop (a crop grown to sell rather than for the farmer’s sole use) of the southern economy and the most important American commodity. By 1850, of the 3.2 million enslaved people in the country’s fifteen slave states, 1.8 million were producing cotton; by 1860, enslaved labor was producing over two billion pounds of cotton per year. Indeed, American cotton soon made up two-thirds of the global supply, and production continued to soar. By the time of the Civil War, South Carolina politician James Hammond confidently proclaimed that the North could never threaten the South because “cotton is king.” The crop grown in the South was a hybrid: Gossypium barbadense , known as Petit Gulf cotton, a mix of Mexican, Georgia, and Siamese strains. Petit Gulf cotton grew extremely well in different soils and climates. It dominated cotton production in the Mississippi River Valley—home of the new slave states of Louisiana, Mississippi, Arkansas, Tennessee, Kentucky, and Missouri—as well as in other states like Texas. Whenever new slave states entered the Union, White slaveholders sent armies of the enslaved to clear the land in order to grow and pick the lucrative crop. The phrase “to be sold down the river,” used by Harriet Beecher Stowe in her 1852 novel Uncle Tom’s Cabin , refers to this forced migration from the upper southern states to the Deep South, lower on the Mississippi, to grow cotton. The enslaved people who built this cotton kingdom with their forced labor started by clearing the land. Although the Jeffersonian vision of the settlement of new U.S. territories entailed White yeoman farmers single-handedly carving out small independent farms, the reality proved quite different. Entire old-growth forests and cypress swamps fell to the axe as enslaved people labored to strip the vegetation to make way for cotton. 
With the land cleared, they readied the earth by plowing and planting. To ambitious White planters, the extent of new land available for cotton production seemed almost limitless, and many planters simply leapfrogged from one area to the next, abandoning their fields every ten to fifteen years after the soil became exhausted. As a result, enslaved people composed the vanguard of this American expansion to the West. Cotton planting took place in March and April, when enslaved people planted seeds in rows around three to five feet apart. Over the next several months, from April to August, they carefully tended the plants. Weeding the cotton rows took significant energy and time. In August, after the cotton plants had flowered and the flowers had begun to give way to cotton bolls (the seed-bearing capsule that contains the cotton fiber), all the plantation’s enslaved men, women, and children worked together to pick the crop ( Figure 12.3 ). On each day of cotton picking, enslaved workers went to the fields with sacks, which they would fill as many times as they could. The effort was laborious, and a White “driver” employed the lash to make the enslaved people work as quickly as possible. Cotton planters projected the amount of cotton they could harvest based on the number of enslaved people under their control. In general, planters expected a good “hand,” or enslaved laborer, to work ten acres of land and pick two hundred pounds of cotton a day. An overseer or "master" measured each enslaved individual’s daily yield. Great pressure existed to meet the expected daily amount, and some overseers whipped enslaved people who picked less than expected. Cotton picking occurred as many as seven times a season as the plant grew and continued to produce bolls through the fall and early winter. During the picking season, enslaved people worked from sunrise to sunset with a ten-minute break at lunch; many slaveholders tended to give them little to eat, since spending on food would cut into their profits. Other slaveholders knew that feeding the enslaved could increase productivity and therefore provided what they thought would help ensure a profitable crop. Enslaved people’s day didn’t end after they picked the cotton; once they had brought it to the gin house to be weighed, they then had to care for the animals and perform other chores. Indeed, they often maintained their own gardens and livestock, which they tended after working the cotton fields, in order to supplement their supply of food. Sometimes the cotton was dried before it was ginned (put through the process of separating the seeds from the cotton fiber). The cotton gin allowed an enslaved laborer to remove the seeds from fifty pounds of cotton a day, compared to one pound if done by hand. After the seeds had been removed, the cotton was pressed into bales. These bales, weighing about four hundred to five hundred pounds, were wrapped in burlap cloth and sent down the Mississippi River. As the cotton industry boomed in the South, the Mississippi River quickly became the essential water highway in the United States. Steamboats, a crucial part of the transportation revolution thanks to their enormous freight-carrying capacity and ability to navigate shallow waterways, became a defining component of the cotton kingdom. Steamboats also illustrated the class and social distinctions of the antebellum age. While the decks carried precious cargo, ornate rooms graced the interior. 
In these spaces, Whites socialized in the ship’s saloons and dining halls while enslaved Black people served them ( Figure 12.4 ). Investors poured huge sums into steamships. In 1817, only seventeen plied the waters of western rivers, but by 1837, there were over seven hundred steamships in operation. Major new ports developed at St. Louis, Missouri; Memphis, Tennessee; and other locations. By 1860, some thirty-five hundred vessels were steaming in and out of New Orleans, carrying an annual cargo made up primarily of cotton that amounted to $220 million worth of goods (approximately $6.5 billion in 2014 dollars). New Orleans had been part of the French empire before the United States purchased it, along with the rest of the Louisiana Territory, in 1803. In the first half of the nineteenth century, it rose in prominence and importance largely because of the cotton boom, steam-powered river traffic, and its strategic position near the mouth of the Mississippi River. Steamboats moved down the river transporting cotton grown on plantations along the river and throughout the South to the port at New Orleans. From there, the bulk of American cotton went to Liverpool, England, where it was sold to British manufacturers who ran the cotton mills in Manchester and elsewhere. This lucrative international trade brought new wealth and new residents to the city. By 1840, New Orleans alone had 12 percent of the nation’s total banking capital, and visitors often commented on the great cultural diversity of the city. In 1835, Joseph Holt Ingraham wrote: “Truly does New-Orleans represent every other city and nation upon earth. I know of none where is congregated so great a variety of the human species.” Slave labor, cotton, and the steamship transformed the city from a relatively isolated corner of North America in the eighteenth century to a thriving metropolis that rivaled New York in importance ( Figure 12.5 ). THE DOMESTIC SLAVE TRADE The South’s dependence on cotton was matched by its dependence on stolen labor from enslaved people to harvest the cotton. Despite the rhetoric of the Revolution that “all men are created equal,” slavery not only endured in the American republic but formed the very foundation of the country’s economic success. Cotton and slavery occupied a central—and intertwined—place in the nineteenth-century economy. In 1807, the U.S. Congress abolished the foreign slave trade, a ban that went into effect on January 1, 1808. After this date, importing captives from Africa became illegal in the United States. While smuggling continued to occur, the end of the international slave trade meant that enslaved people already within the United States were in very high demand. Fortunately for Americans whose wealth depended upon the exploitation of slave labor, a fall in the price of tobacco had caused landowners in the Upper South to reduce their production of this crop and use more of their land to grow wheat, which was far more profitable. While tobacco was a labor-intensive crop that required many people to cultivate it, wheat was not. Former tobacco farmers in the older states of Virginia and Maryland found themselves with “surplus” enslaved people whom they were obligated to feed, clothe, and shelter. Some slaveholders responded to this situation by releasing enslaved people; far more decided to sell their excess bondsmen. Virginia and Maryland therefore took the lead in the domestic slave trade , the trading of enslaved people within the borders of the United States.
The domestic slave trade offered many economic opportunities for White men. Those who sold the enslaved could realize great profits, as could the slave brokers who served as middlemen between sellers and buyers. Other White men could benefit from the trade as owners of warehouses and pens in which the enslaved were held, or as suppliers of clothing and food for enslaved people on the move. Between 1790 and 1859, slaveholders in Virginia sold more than half a million people. In the early part of this period, many were sold to people living in Kentucky, Tennessee, and North and South Carolina. By the 1820s, however, people in Kentucky and the Carolinas had begun to sell many of the people they held in bondage. Maryland slave dealers sold at least 185,000 people. Kentucky slaveholders sold some seventy-one thousand individuals. Most of the slave traders forced these enslaved people south to Alabama, Louisiana, and Mississippi. New Orleans, the hub of commerce, boasted the largest slave market in the United States and grew to become the nation’s fourth-largest city as a result. Natchez, Mississippi, had the second-largest market. In Virginia, Maryland, the Carolinas, and elsewhere in the South, slave auctions happened every day. All told, the movement of enslaved people in the South made up one of the largest forced internal migrations in the United States. In each of the decades between 1820 and 1860, about 200,000 people were sold and relocated. The 1800 census recorded over one million African Americans, of whom nearly 900,000 had slave status. By 1860, the total number of African Americans increased to 4.4 million, and of that number, 3.95 million were held in bondage. For many of the enslaved, the domestic slave trade incited the terror of being sold away from family and friends. My Story Solomon Northup Remembers the New Orleans Slave Market Solomon Northup was a free Black man living in Saratoga, New York, when he was kidnapped and sold into slavery in 1841. He later escaped and wrote a book about his experiences: Twelve Years a Slave. Narrative of Solomon Northup, a Citizen of New-York, Kidnapped in Washington City in 1841 and Rescued in 1853 (the basis of a 2013 Academy Award–winning film) . This excerpt derives from Northup’s description of being sold in New Orleans, along with fellow captives Eliza and her children Randall and Emily. One old gentleman, who said he wanted a coachman, appeared to take a fancy to me. . . . The same man also purchased Randall. The little fellow was made to jump, and run across the floor, and perform many other feats, exhibiting his activity and condition. All the time the trade was going on, Eliza was crying aloud, and wringing her hands. She besought the man not to buy him, unless he also bought her self and Emily. . . . Freeman turned round to her, savagely, with his whip in his uplifted hand, ordering her to stop her noise, or he would flog her. He would not have such work—such snivelling; and unless she ceased that minute, he would take her to the yard and give her a hundred lashes. . . . Eliza shrunk before him, and tried to wipe away her tears, but it was all in vain. She wanted to be with her children, she said, the little time she had to live. All the frowns and threats of Freeman, could not wholly silence the afflicted mother. What does Northup’s narrative tell you about the experience of being enslaved? How does he characterize Freeman, the slave trader? How does he characterize Eliza?
THE SOUTH IN THE AMERICAN AND WORLD MARKETS The first half of the nineteenth century saw a market revolution in the United States, one in which industrialization brought changes to both the production and the consumption of goods. Some southerners of the time believed that their region’s reliance on a single cash crop and its use of stolen labor to produce it gave the South economic independence and made it immune from the effects of these changes, but this was far from the truth. Indeed, the production of cotton brought the South more firmly into the larger American and Atlantic markets. Northern mills depended on the South for supplies of raw cotton that was then converted into textiles. But this domestic cotton market paled in comparison to the Atlantic market. About 75 percent of the cotton produced in the United States was eventually exported abroad. Exporting at such high volumes made the United States the undisputed world leader in cotton production. Between the years 1820 and 1860, approximately 80 percent of the global cotton supply was produced in the United States. Nearly all the exported cotton was shipped to Great Britain, fueling its burgeoning textile industry and making the powerful British Empire increasingly dependent on American cotton and southern slavery. The power of cotton on the world market may have brought wealth to the South, but it also increased its economic dependence on other countries and other parts of the United States. Much of the corn and pork that enslaved people consumed came from farms in the West. Some of the inexpensive clothing, called “slops,” and shoes worn by enslaved people were manufactured in the North. The North also supplied the furnishings found in the homes of both wealthy planters and members of the middle class. Many of the trappings of domestic life, such as carpets, lamps, dinnerware, upholstered furniture, books, and musical instruments—all the accoutrements of comfortable living for southern Whites—were made in either the North or Europe. Southern planters also borrowed money from banks in northern cities, and in the southern summers, took advantage of the developments in transportation to travel to resorts at Saratoga, New York; Litchfield, Connecticut; and Newport, Rhode Island. 12.2 African Americans in the Antebellum United States Learning Objectives By the end of this section, you will be able to: Discuss the similarities and differences in the lives of enslaved and free Black people Describe the independent culture and customs that enslaved people developed In addition to cotton, the great commodity of the antebellum South was human chattel. Slavery was the cornerstone of the southern economy. By 1850, about 3.2 million enslaved people labored in the United States, 1.8 million of whom worked in the cotton fields. They faced arbitrary power abuses from Whites; they coped by creating family and community networks. Storytelling, song, and Christianity also provided solace and allowed enslaved individuals to develop their own interpretations of their condition. LIFE AS A SLAVE Southern Whites frequently relied upon the idea of paternalism —the premise that White slaveholders acted in the best interests of those they enslaved, taking responsibility for their care, feeding, discipline, and even their Christian morality—to justify the existence of slavery. This grossly misrepresented the reality of slavery, which was, by any measure, a dehumanizing, traumatizing, and horrifying human disaster and crime against humanity. 
Nevertheless, the enslaved were hardly passive victims of their conditions; they sought and found myriad ways to resist their shackles and develop their own communities and cultures. Enslaved people often used the notion of paternalism to their advantage, finding opportunities within this system to engage in acts of resistance and win a degree of freedom and autonomy. For example, some played into their masters’ racism by hiding their intelligence and feigning childishness and ignorance. The enslaved could then slow down the workday and sabotage the system in small ways by “accidentally” breaking tools, for example; the slaveholder, seeing the enslaved as unsophisticated and childlike, would believe these incidents were accidents rather than rebellions. Some enslaved individuals engaged in more dramatic forms of resistance, such as poisoning their captors slowly. Other enslaved people reported their fellow captives to their slaveholders, hoping to gain preferential treatment. Those who informed their holders about planned slave rebellions could often expect the slaveholder’s gratitude and, perhaps, more lenient treatment. Such expectations were always tempered by the individual personality and caprice of the slaveholder. Slaveholders used both psychological coercion and physical violence to prevent enslaved people from disobeying their wishes. Often, the most efficient way to discipline people was to threaten to sell them. The lash, while the most common form of punishment, was effective but not efficient; whippings sometimes left the victims incapacitated or even dead. Slaveholders and overseers also used punishment gear like neck braces, balls and chains, leg irons, and paddles with holes to produce blood blisters. The enslaved lived in constant terror of both physical violence and separation from family and friends ( Figure 12.6 ). Under southern law, enslaved people could not marry. Nonetheless, some slaveholders allowed marriages to promote the birth of children and to foster harmony on plantations. Some slaveholders even forced certain individuals to form unions, anticipating the birth of more children (and consequently greater profits) from them. Slaveholders sometimes allowed enslaved people to choose their own partners, but they could also veto a match. Enslaved couples always faced the prospect of being sold away from each other, and, once they had children, the horrifying reality that their children could be sold and sent away at any time. Enslaved parents had to show their children the best way to survive under slavery. This meant teaching them to be discreet, submissive, and guarded around White people. Parents also taught their children through the stories they told. Popular stories among the enslaved included tales of tricksters, sly captives, or animals like Brer Rabbit , who outwitted their antagonists ( Figure 12.7 ). Such stories provided comfort in humor and conveyed the sense of the wrongs of slavery. Enslaved people’s work songs commented on the harshness of their life and often had double meanings—a literal meaning that White people would not find offensive and a deeper meaning for the enslaved. African beliefs, including ideas about the spiritual world and the importance of African healers, survived in the South as well. White people who became aware of non-Christian rituals among the enslaved labeled such practices as witchcraft.
Among Africans, however, the rituals and use of various plants by respected enslaved healers created connections between the African past and the American South while also providing a sense of community and identity for enslaved individuals. Other African customs, including traditional naming patterns, the making of baskets, and the cultivation of certain native African plants that had been brought to the New World, also endured. Americana African Americans and Christian Spirituals Many of the enslaved embraced Christianity. Their holders emphasized a scriptural message of obedience to White people and a better day awaiting them in heaven, but enslaved people focused on the uplifting message of being freed from bondage. The styles of worship in the Methodist and Baptist churches, which emphasized emotional responses to scripture, attracted the enslaved to those traditions and inspired some to become preachers. Spiritual songs that referenced the Exodus (the biblical account of the Hebrews’ escape from slavery in Egypt), such as “Roll, Jordan, Roll,” allowed enslaved individuals to freely express messages of hope, struggle, and overcoming adversity ( Figure 12.8 ). What imagery might the Jordan River suggest to enslaved people working in the Deep South? What lyrics in this song suggest redemption and a better world ahead? THE FREE BLACK POPULATION Complicating the picture of the antebellum South was the existence of a large free Black population. In fact, more free Black people lived in the South than in the North; roughly 261,000 lived in slave states, while 226,000 lived in northern states without slavery. Most free Black people did not live in the Lower, or Deep, South: the states of Alabama, Arkansas, Florida, Georgia, Louisiana, Mississippi, South Carolina, and Texas. Instead, the largest number lived in the upper southern states of Delaware, Maryland, Virginia, North Carolina, and later Kentucky, Missouri, Tennessee, and the District of Columbia. Part of the reason for the large number of free Black people living in slave states was the many instances of manumission—the formal granting of freedom to enslaved people—that occurred as a result of the Revolution, when many slaveholders put into action the ideal that “all men are created equal” and released the people they enslaved. The transition in the Upper South to the staple crop of wheat, which did not require large numbers of enslaved laborers to produce, also spurred manumissions. Another large group of free Black people in the South had been free residents of Louisiana before the 1803 Louisiana Purchase, while still other free Black people came from Cuba and Haiti. Most free Black people in the South lived in cities, and a majority of free Black people were lighter-skinned women, a reflection of the interracial unions that formed between White men and Black women. Everywhere in the United States, Blackness had come to be associated with slavery, the station at the bottom of the social ladder. Both Whites and those with African ancestry tended to delineate varying degrees of lightness in skin color in a social hierarchy. In the slaveholding South, different names described one’s distance from Blackness or Whiteness: mulattos (those with one Black and one White parent), quadroons (those with one Black grandparent), and octoroons (those with one Black great-grandparent) ( Figure 12.9 ).
Lighter-skinned Black people often looked down on their darker counterparts, an indication of the ways in which both White and Black people internalized the racism of the age. Some free Black people in the South owned enslaved people themselves. Andrew Durnford, for example, was born in New Orleans in 1800, three years before the Louisiana Purchase. His father was White, and his mother was a free Black woman. Durnford became an American citizen after the Louisiana Purchase, rising to prominence as a Louisiana sugar planter and slaveholder. William Ellison, another free Black person who amassed great wealth and power in the South, was born into slavery in 1790 in South Carolina. After buying his freedom and that of his wife and daughter, he proceeded to purchase his own enslaved people, whom he then put to work manufacturing cotton gins. By the eve of the Civil War, Ellison had become one of the richest and largest slaveholders in the entire state. The phenomenon of free Black people amassing large fortunes within a slave society predicated on racial difference, however, was exceedingly rare. Most free Black people in the South lived under the specter of slavery and faced many obstacles. Beginning in the early nineteenth century, southern states increasingly made manumission illegal. They also devised laws that divested free Black people of their rights, such as the right to testify against Whites in court or the right to seek employment where they pleased. Interestingly, it was in the upper southern states that such laws were the harshest. In Virginia, for example, legislators made efforts to require free Black people to leave the state. In parts of the Deep South, free Black people were able to maintain their rights more easily. The difference in treatment between free Black people in the Deep South and those in the Upper South, historians have surmised, came down to economics. In the Deep South, slavery as an institution was strong and profitable. In the Upper South, the opposite was true. The anxiety of this economic uncertainty manifested in the form of harsh laws that targeted free Black people. SLAVE REVOLTS Captives resisted their enslavement in small ways every day, but this resistance did not usually translate into mass uprisings. The enslaved understood that the chances of ending slavery through rebellion were slim and would likely result in massive retaliation; many also feared the risk that participating in such actions would pose to themselves and their families. White slaveholders, however, constantly feared uprisings and took drastic steps, including torture and mutilation, whenever they believed that rebellions might be simmering. Gripped by the fear of insurrection, Whites often imagined revolts to be in the works even when no uprising actually happened. At least two major slave uprisings did occur in the antebellum South. In 1811, a major rebellion broke out in the sugar parishes of the booming territory of Louisiana. Inspired by the successful overthrow of the White planter class in Haiti, a group of people enslaved in Louisiana took up arms against slaveholders. Perhaps as many as five hundred joined the rebellion, led by Charles Deslondes, a mixed-race slave driver on a sugar plantation owned by Manuel Andry. The revolt began in January 1811 on Andry’s plantation. Deslondes and others attacked the Andry household, where they killed the slaveholder’s son (although Andry himself escaped).
The rebels then began traveling toward New Orleans, armed with weapons gathered at Andry’s plantation. Whites mobilized to stop the rebellion, but not before Deslondes and the other enslaved people set fire to three plantations and killed numerous White people. A small White force led by Andry ultimately captured Deslondes, whose body was mutilated and burned following his execution. Other rebels were beheaded, and their heads placed on pikes along the Mississippi River. The second rebellion, led by the enslaved Nat Turner, occurred in 1831 in Southampton County, Virginia. Turner had suffered not only from personal enslavement, but also from the additional trauma of having his wife sold away from him. Bolstered by Christianity, Turner became convinced that like Christ, he should lay down his life to end slavery. Mustering his relatives and friends, he began the rebellion August 22, killing scores of White people in the county. Whites mobilized quickly and within forty-eight hours had brought the rebellion to an end. Shocked by Nat Turner’s Rebellion, Virginia’s state legislature considered ending slavery in the state in order to provide greater security. In the end, legislators decided slavery would remain and that their state would continue to play a key role in the domestic slave trade. SLAVE MARKETS As discussed above, after centuries of slave trade with West Africa, Congress banned the further importation of enslaved Africans beginning in 1808. The domestic slave trade then expanded rapidly. As the cotton trade grew in size and importance, so did the domestic slave trade; the cultivation of cotton gave new life and importance to slavery, increasing the value of enslaved individuals. To meet the South’s fierce demand for labor, American smugglers illegally transferred captives through Florida and later through Texas. Many more enslaved Africans arrived illegally from Cuba; indeed, Cubans relied on the smuggling of enslaved people to prop up their finances. The largest number of enslaved people after 1808, however, came from the massive, legal internal slave market in which slave states in the Upper South sold enslaved men, women, and children to states in the Lower South. For the enslaved, the domestic trade presented the full horrors of slavery as children were ripped from their mothers and fathers and families destroyed, creating heartbreak and alienation. Some slaveholders sought to increase the number of enslaved children by placing enslaved males with fertile enslaved females, and slaveholders routinely raped enslaved females. The resulting births played an important role in slavery’s expansion in the first half of the nineteenth century, as many enslaved children were born as a result of rape. One account written by an enslaved person named William J. Anderson captures the horror of sexual exploitation in the antebellum South. Anderson wrote about how a Mississippi slaveholder divested a poor female slave of all wearing apparel, tied her down to stakes, and whipped her with a handsaw until he broke it over her naked body. In process of time he ravished [raped] her person, and became the father of a child by her. Besides, he always kept a colored Miss in the house with him. This is another curse of Slavery—concubinage and illegitimate connections—which is carried on to an alarming extent in the far South. A poor slave man who lives close by his wife, is permitted to visit her but very seldom, and other men, both White and colored, cohabit with her. 
It is undoubtedly the worst place of incest and bigamy in the world. A White man thinks nothing of putting a colored man out to carry the fore row [front row in field work], and carry on the same sport with the colored man’s wife at the same time. Anderson, a devout Christian, recognized and explained in his narrative that one of the evils of slavery was the way it undermined the family. Anderson was not the only critic of slavery to emphasize this point. Frederick Douglass, who had been enslaved in Maryland before escaping to the North in 1838, elaborated on this dimension of slavery in his 1845 narrative. He recounted how enslavers had to sell their own children whom they had with enslaved women to appease the White wives who despised their offspring. The selling of enslaved people was a major business enterprise in the antebellum South, representing a key part of the economy. White men invested substantial sums in enslaved people, carefully calculating the annual returns they could expect from each enslaved person as well as the possibility of greater profits through natural increase. The domestic slave trade was highly visible, and like the infamous Middle Passage that brought captive Africans to the Americas, it constituted an equally disruptive and horrifying journey now called the second middle passage . Between 1820 and 1860, White American traders sold a million or more captives in the domestic slave market. Groups of enslaved people were transported by ship from places like Virginia, a state that specialized in raising enslaved people for sale, to New Orleans, where they were sold to planters in the Mississippi Valley. Others made the overland trek from older states like North Carolina to new and booming Deep South states like Alabama. New Orleans had the largest slave market in the United States ( Figure 12.10 ). Slaveholders brought the people they enslaved there from the East (Virginia, Maryland, and the Carolinas) and the West (Tennessee and Kentucky) to be sold for work in the Mississippi Valley. The slave trade benefited Whites in the Chesapeake and Carolinas, providing them with extra income: A healthy young enslaved male in the 1850s could be sold for $1,000 (approximately $30,000 in 2014 dollars), and a planter who could sell ten such enslaved people collected a windfall. In fact, by the 1850s, the demand for enslaved people reached an all-time high, and prices therefore doubled. An enslaved person who would have sold for $400 in the 1820s could command a price of $800 in the 1850s. The high price of enslaved people in the 1850s and the inability of natural increase to satisfy demands led some southerners to demand the reopening of the international slave trade, a movement that caused a rift between the Upper South and the Lower South. White people in the Upper South who sold enslaved people to their counterparts in the Lower South worried that reopening the trade would lower prices and therefore hurt their profits. My Story John Brown on Slave Life in Georgia An enslaved person named John Brown lived in Virginia, North Carolina, and Georgia before he escaped and moved to England. While there, he dictated his autobiography to someone at the British and Foreign Anti-Slavery Society, who published it in 1855. I really thought my mother would have died of grief at being obliged to leave her two children, her mother, and her relations behind.
But it was of no use lamenting, the few things we had were put together that night, and we completed our preparations for being parted for life by kissing one another over and over again, and saying good bye till some of us little ones fell asleep. . . . And here I may as well tell what kind of man our new master was. He was of small stature, and thin, but very strong. He had sandy hair, a very red face, and chewed tobacco. His countenance had a very cruel expression, and his disposition was a match for it. He was, indeed, a very bad man, and used to flog us dreadfully. He would make his slaves work on one meal a day, until quite night, and after supper, set them to burn brush or spin cotton. We worked from four in the morning till twelve before we broke our fast, and from that time till eleven or twelve at night . . . we labored eighteen hours a day. —John Brown, Slave Life in Georgia: A Narrative of the Life, Sufferings, and Escape of John Brown, A Fugitive Slave, Now in England , 1855 What features of the domestic slave trade does Brown’s narrative illuminate? Why do you think he brought his story to an antislavery society? How do you think people responded to this narrative? 12.3 Wealth and Culture in the South Learning Objectives By the end of this section, you will be able to: Assess the distribution of wealth in the antebellum South Describe the southern culture of honor Identify the main proslavery arguments in the years prior to the Civil War During the antebellum years, wealthy southern planters formed an elite master class that wielded most of the economic and political power of the region. They created their own standards of gentility and honor, defining ideals of southern White manhood and womanhood and shaping the culture of the South. To defend the system of forced labor on which their economic survival and genteel lifestyles depended, elite southerners developed several proslavery arguments that they levied at those who would see the institution dismantled. SLAVERY AND THE WHITE CLASS STRUCTURE The South prospered, but its wealth was very unequally distributed. Upward social mobility did not exist for the millions of enslaved people who produced a good portion of the nation’s wealth, while poor southern White people envisioned a day when they might rise enough in the world to own enslaved people of their own. Because of the cotton boom, there were more millionaires per capita in the Mississippi River Valley by 1860 than anywhere else in the United States. However, in that same year, only 3 percent of White people enslaved more than fifty people, and two-thirds of White households in the South did not enslave any people at all ( Figure 12.11 ). Distribution of wealth in the South became less democratic over time; fewer Whites enslaved people in 1860 than in 1840. At the top of southern White society stood the planter elite, which comprised two groups. In the Upper South, an aristocratic gentry, generation upon generation of whom had grown up with slavery, held a privileged place. In the Deep South, an elite group of slaveholders gained new wealth from cotton. Some members of this group hailed from established families in the eastern states (Virginia and the Carolinas), while others came from humbler backgrounds. South Carolinian Nathaniel Heyward, a wealthy rice planter and member of the aristocratic gentry, came from an established family and sat atop the pyramid of southern slaveholders. He amassed an enormous estate; in 1850, he enslaved more than eighteen hundred people. 
When he died in 1851, he left an estate worth more than $2 million (approximately $63 million in 2014 dollars). As cotton production increased, new wealth flowed to the cotton planters. These planters became the staunchest defenders of slavery, and as their wealth grew, they gained considerable political power. One member of the planter elite was Edward Lloyd V, who came from an established and wealthy family of Talbot County, Maryland. Lloyd had inherited his position rather than rising to it through his own labors. His hundreds of enslaved people formed a crucial part of his wealth. Like those of many of the planter elite, Lloyd’s plantation was a masterpiece of elegant architecture and gardens ( Figure 12.12 ). One of the people enslaved on Lloyd’s plantation was Frederick Douglass, who escaped in 1838 and became an abolitionist leader, writer, statesman, and orator in the North. In his autobiography, Douglass described the plantation’s elaborate gardens and racehorses, but also its underfed and brutalized slave population. Lloyd provided employment opportunities to other Whites in Talbot County, many of whom served as slave traders and the “slave breakers” entrusted with beating and overworking unruly enslaved people into submission. Like other members of the planter elite, Lloyd himself served in a variety of local and national political offices. He was governor of Maryland from 1809 to 1811, a member of the House of Representatives from 1807 to 1809, and a senator from 1819 to 1826. As a representative and a senator, Lloyd defended slavery as the foundation of the American economy. Wealthy plantation owners like Lloyd came close to forming an American ruling class in the years before the Civil War. They helped shape foreign and domestic policy with one goal in view: to expand the power and reach of the cotton kingdom of the South. Socially, they cultivated a refined manner and believed White people, especially members of their class, should not perform manual labor. Rather, they created an identity for themselves based on a world of leisure in which horse racing and entertainment mattered greatly, and where the enslavement of others was the bedrock of civilization. Below the wealthy planters were the yeoman farmers, or small landowners ( Figure 12.13 ). Below yeomen were poor, landless White people, who made up the majority of White people in the South. These landless White men dreamed of owning land and enslaving people and served as slave overseers, drivers, and traders in the southern economy. In fact, owning land and enslaved people provided one of the only opportunities for upward social and economic mobility. In the South, living the American dream meant enslaving people, producing cotton, and owning land. Despite this unequal distribution of wealth, non-slaveholding Whites shared with White planters a common set of values, most notably a belief in White supremacy. Whites, whether rich or poor, were bound together by racism. Slavery defused class tensions among them, because no matter how poor they were, White southerners had race in common with the mighty plantation owners. Non-slaveholders accepted the rule of the planters as defenders of their shared interest in maintaining a racial hierarchy. Significantly, all Whites were also bound together by the constant, prevailing fear of slave uprisings. My Story D. R. Hundley on the Southern Yeoman D. R. Hundley was a well-educated planter, lawyer, and banker from Alabama.
Something of an amateur sociologist, he argued against the common northern assumption that the South was made up exclusively of two tiers of White residents: the very wealthy planter class and the very poor landless Whites. In his 1860 book, Social Relations in Our Southern States , Hundley describes what he calls the “Southern Yeomen,” a social group he insists is roughly equivalent to the middle-class farmers of the North. But you have no Yeomen in the South, my dear Sir? Beg your pardon, our dear Sir, but we have—hosts of them. I thought you had only poor White Trash? Yes, we dare say as much—and that the moon is made of green cheese! . . . Know, then, that the Poor Whites of the South constitute a separate class to themselves; the Southern Yeomen are as distinct from them as the Southern Gentleman is from the Cotton Snob. Certainly the Southern Yeomen are nearly always poor, at least so far as this world’s goods are to be taken into account. As a general thing they own no slaves; and even in case they do, the wealthiest of them rarely possess more than from ten to fifteen. . . . The Southern Yeoman much resembles in his speech, religious opinions, household arrangements, indoor sports, and family traditions, the middle class farmers of the Northern States. He is fully as intelligent as the latter, and is on the whole much better versed in the lore of politics and the provisions of our Federal and State Constitutions. . . . [A]lthough not as a class pecuniarily interested in slave property, the Southern Yeomanry are almost unanimously pro-slavery in sentiment. Nor do we see how any honest, thoughtful person can reasonably find fault with them on this account. —D. R. Hundley, Social Relations in Our Southern States , 1860 What elements of social relations in the South is Hundley attempting to emphasize for his readers? In what respects might his position as an educated and wealthy planter influence his understanding of social relations in the South? Because race bound all White people together as members of the master race, non-slaveholding White people took part in civic duties. They served on juries and voted. They also engaged in the daily rounds of maintaining slavery by serving on neighborhood patrols to ensure that enslaved people did not escape and that rebellions did not occur. The practical consequence of such activities was that the institution of slavery, and its perpetuation, became a source of commonality among different economic and social tiers that otherwise were separated by a gulf of difference. Southern planters exerted a powerful influence on the federal government. Seven of the first eleven presidents were enslavers, and more than half of the Supreme Court justices who served on the court from its inception to the Civil War came from slaveholding states. However, southern White yeoman farmers generally did not support an active federal government. They were suspicious of the state bank and supported President Jackson’s dismantling of the Second Bank of the United States. They also did not support taxes to create internal improvements such as canals and railroads; to them, government involvement in the economic life of the nation disrupted what they perceived as the natural workings of the economy. They also feared a strong national government might tamper with slavery.
Planters operated within a larger capitalist society, but the labor system they used to produce goods—that is, slavery—was similar to systems that existed before capitalism, such as feudalism and serfdom. Under capitalism, free workers are paid for their labor (by owners of capital) to produce commodities; the money from the sale of the goods is used to pay for the work performed. As enslaved people did not reap any earnings from their forced labor, some economic historians consider the antebellum plantation system a “pre-capitalist” system. HONOR IN THE SOUTH A complicated code of honor among privileged White southerners, dictating the beliefs and behavior of “gentlemen” and “ladies,” developed in the antebellum years. Maintaining appearances and reputation was supremely important. It can be argued that, as in many societies, the concept of honor in the antebellum South had much to do with control over dependents, whether enslaved people, wives, or relatives. Defending their honor and ensuring that they received proper respect became preoccupations of White people in the slaveholding South. To question another man’s assertions was to call his honor and reputation into question. Insults in the form of words or behavior, such as calling someone a coward, could trigger a rupture that might well end on the dueling ground ( Figure 12.14 ). Dueling had largely disappeared in the antebellum North by the early nineteenth century, but it remained an important part of the southern code of honor through the Civil War years. Southern White men, especially those of high social status, settled their differences with duels, before which antagonists usually attempted reconciliation, often through the exchange of letters addressing the alleged insult. If the challenger was not satisfied by the exchange, a duel would often result. The dispute between South Carolina’s James Hammond and his erstwhile friend (and brother-in-law) Wade Hampton II illustrates the southern culture of honor and the place of the duel in that culture. A strong friendship bound Hammond and Hampton together. Both stood at the top of South Carolina’s society as successful, married plantation owners involved in state politics. Prior to his election as governor of the state in 1842, Hammond became sexually involved with each of Hampton’s four teenage daughters, who were his nieces by marriage. “[A]ll of them rushing on every occasion into my arms,” Hammond confided in his private diary, “covering me with kisses, lolling on my lap, pressing their bodies almost into mine . . . and permitting my hands to stray unchecked.” Hampton found out about these dalliances, and in keeping with the code of honor, could have demanded a duel with Hammond. However, Hampton instead tried to use the liaisons to destroy his former friend politically. This effort proved disastrous for Hampton, because it represented a violation of the southern code of honor. “As matters now stand,” Hammond wrote, “he [Hampton] is a convicted dastard who, not having nerve to redress his own wrongs, put forward bullies to do it for him. . . . To challenge me [to a duel] would be to throw himself upon my mercy for he knows I am not bound to meet him [for a duel].” Because Hampton’s behavior marked him as a man who lacked honor, Hammond was no longer bound to meet Hampton in a duel even if Hampton were to demand one. Hammond’s reputation, though tarnished, remained high in the esteem of South Carolinians, and the governor went on to serve as a U.S. senator from 1857 to 1860. 
As for the four Hampton daughters, they never married; their names were disgraced, not only by the whispered-about scandal but by their father’s actions in response to it; and no man of honor in South Carolina would stoop so low as to marry them. GENDER AND THE SOUTHERN HOUSEHOLD The antebellum South was an especially male-dominated society. Far more than in the North, southern men, particularly wealthy planters, were patriarchs and sovereigns of their own household. Among the White members of the household, labor and daily ritual conformed to rigid gender delineations. Men represented their household in the larger world of politics, business, and war. Within the family, the patriarchal male was the ultimate authority. White women were relegated to the household and lived under the thumb and protection of the male patriarch. The ideal southern lady conformed to her prescribed gender role, a role that was largely domestic and subservient. While responsibilities and experiences varied across different social tiers, women’s subordinate state in relation to the male patriarch remained the same. Writers in the antebellum period were fond of celebrating the image of the ideal southern woman ( Figure 12.15 ). One such writer, Thomas Roderick Dew, president of Virginia’s College of William and Mary in the mid-nineteenth century, wrote approvingly of the virtue of southern women, a virtue he concluded derived from their natural weakness, piety, grace, and modesty. In his Dissertation on the Characteristic Differences Between the Sexes , he writes that southern women derive their power not by leading armies to combat, or of enabling her to bring into more formidable action the physical power which nature has conferred on her. No! It is but the better to perfect all those feminine graces, all those fascinating attributes, which render her the center of attraction, and which delight and charm all those who breathe the atmosphere in which she moves; and, in the language of Mr. Burke, would make ten thousand swords leap from their scabbards to avenge the insult that might be offered to her. By her very meekness and beauty does she subdue all around her. Such popular idealizations of elite southern White women, however, are difficult to reconcile with their lived experience: in their own words, these women frequently described the trauma of childbirth, the loss of children, and the loneliness of the plantation. My Story Louisa Cheves McCord’s “Woman’s Progress” Louisa Cheves McCord was born in Charleston, South Carolina, in 1810. A child of some privilege in the South, she received an excellent education and became a prolific writer. As the excerpt from her poem “Woman’s Progress” indicates, some southern women also contributed to the idealization of southern White womanhood.
Sweet Sister! stoop not thou to be a man!
Man has his place as woman hers; and she
As made to comfort, minister and help;
Moulded for gentler duties, ill fulfils
His jarring destinies. Her mission is
To labour and to pray; to help, to heal,
To soothe, to bear; patient, with smiles, to suffer;
And with self-abnegation noble lose
Her private interest in the dearer weal
Of those she loves and lives for. Call not this—
(The all-fulfilling of her destiny;
She the world’s soothing mother)—call it not,
With scorn and mocking sneer, a drudgery.
The ribald tongue profanes Heaven’s holiest things,
But holy still they are. The lowliest tasks
Are sanctified in nobly acting them.
Christ washed the apostles’ feet, not thus cast shame
Upon the God-like in him. Woman lives
Man’s constant prophet. If her life be true
And based upon the instincts of her being,
She is a living sermon of that truth
Which ever through her gentle actions speaks,
That life is given to labour and to love.
—Louisa Susanna Cheves McCord, “Woman’s Progress,” 1853
What womanly virtues does Louisa Cheves McCord emphasize? How might her social status, as an educated southern woman of great privilege, influence her understanding of gender relations in the South? For slaveholding Whites, the male-dominated household operated to protect gendered divisions and prevalent gender norms; for enslaved women, however, the same system exposed them to brutality and frequent sexual domination. The demands on the labor of enslaved women made it impossible for them to perform the role of domestic caretaker that was so idealized by southern men. That slaveholders put them out into the fields, where they frequently performed work traditionally thought of as male, bore little resemblance to the ideal image of gentleness and delicacy reserved for White women. Nor did the enslaved woman’s role as daughter, wife, or mother garner any patriarchal protection. Each of these roles and the relationships they defined was subject to the prerogative of a slaveholder, who could freely violate enslaved women’s persons, sell off their children, or separate them from their families. DEFENDING SLAVERY With the rise of democracy during the Jacksonian era in the 1830s, slaveholders worried about the power of the majority. If political power went to a majority that was hostile to slavery, the South—and the honor of White southerners—would be imperiled. White southerners keen on preserving the institution of slavery bristled at what they perceived to be northern attempts to deprive them of their livelihood. Powerful southerners like South Carolinian John C. Calhoun ( Figure 12.16 ) highlighted laws like the Tariff of 1828 as evidence of the North’s desire to destroy the southern economy and, by extension, its culture. Such a tariff, he and others concluded, would disproportionately harm the South, which relied heavily on imports, and benefit the North, which would receive protections for its manufacturing centers. The tariff appeared to open the door for other federal initiatives, including the abolition of slavery. Because of this perceived threat to southern society, Calhoun argued that states could nullify federal laws. This belief illustrated the importance of the states’ rights argument to the southern states. It also showed slaveholders’ willingness to unite against the federal government when they believed it acted unjustly against their interests. As the nation expanded in the 1830s and 1840s, the writings of abolitionists—a small but vocal group of northerners committed to ending slavery—reached a larger national audience. White southerners responded by putting forth arguments in defense of slavery, their way of life, and their honor. Calhoun became a leading political theorist defending slavery and the rights of the South, which he saw as containing an increasingly embattled minority. He advanced the idea of a concurrent majority , a majority of a separate region (that would otherwise be in the minority of the nation) with the power to veto or disallow legislation put forward by a hostile majority.
Calhoun’s idea of the concurrent majority found full expression in his 1850 essay “Disquisition on Government.” In this treatise, he wrote about government as a necessary means to ensure the preservation of society, since society existed to “preserve and protect our race.” If government grew hostile to society, then a concurrent majority had to take action, including forming a new government. “Disquisition on Government” advanced a profoundly anti-democratic argument. It illustrates southern leaders’ intense suspicion of democratic majorities and their ability to effect legislation that would challenge southern interests. White southerners reacted strongly to abolitionists’ attacks on slavery. In making their defense of slavery, they critiqued wage labor in the North. They argued that the Industrial Revolution had brought about a new type of slavery—wage slavery—and that this form of “slavery” was far worse than the slave labor used on southern plantations. Defenders of the institution also lashed out directly at abolitionists such as William Lloyd Garrison for daring to call into question their way of life. Indeed, Virginians cited Garrison as the instigator of Nat Turner’s 1831 rebellion. The Virginian George Fitzhugh contributed to the defense of slavery with his book Sociology for the South, or the Failure of Free Society (1854). Fitzhugh argued that laissez-faire capitalism, as celebrated by Adam Smith, benefited only the quick-witted and intelligent, leaving the ignorant at a huge disadvantage. Slaveholders, he argued, took care of the ignorant—in Fitzhugh’s argument, the enslaved people of the South. Southerners provided the enslaved with care from birth to death, he asserted; this offered a stark contrast to the wage slavery of the North, where workers were at the mercy of economic forces beyond their control. Fitzhugh’s ideas exemplified southern notions of paternalism. Defining American George Fitzhugh’s Defense of Slavery George Fitzhugh, a southern writer of social treatises, was a staunch supporter of slavery, not as a necessary evil but as what he argued was a necessary good, a way to take care of enslaved people and keep them from being a burden on society. He published Sociology for the South, or the Failure of Free Society in 1854, in which he laid out what he believed to be the benefits of slavery to both the enslaved and society as a whole. According to Fitzhugh: [I]t is clear the Athenian democracy would not suit a negro nation, nor will the government of mere law suffice for the individual negro. He is but a grown up child and must be governed as a child . . . The master occupies towards him the place of parent or guardian. . . . The negro is improvident; will not lay up in summer for the wants of winter; will not accumulate in youth for the exigencies of age. He would become an insufferable burden to society. Society has the right to prevent this, and can only do so by subjecting him to domestic slavery. In the last place, the negro race is inferior to the White race, and living in their midst, they would be far outstripped or outwitted in the chase of free competition. . . . Our negroes are not only better off as to physical comfort than free laborers, but their moral condition is better. What arguments does Fitzhugh use to promote slavery? What basic premise underlies his ideas? Can you think of a modern parallel to Fitzhugh’s argument? The North also produced defenders of slavery, including Louis Agassiz, a Harvard professor of zoology and geology. 
Agassiz helped to popularize polygenism , the idea that different human races came from separate origins. According to this formulation, no single human family origin existed, and Black people made up a race wholly separate from the White race. Agassiz’s notion gained widespread popularity in the 1850s with the 1854 publication of George Gliddon and Josiah Nott’s Types of Mankind and other books. The theory of polygenism codified racism, giving the notion of Black inferiority the lofty mantle of science. One popular advocate of the idea posited that Black people occupied a place in evolution between the Greeks and chimpanzees ( Figure 12.17 ). 12.4 The Filibuster and the Quest for New Slave States Learning Objectives By the end of this section, you will be able to: Explain the expansionist goals of advocates of slavery Describe the filibuster expeditions undertaken during the antebellum era Southern expansionists had spearheaded the drive to add more territory to the United States. They applauded the Louisiana Purchase and fervently supported Native American removal, the annexation of Texas, and the Mexican-American War. Drawing inspiration from the annexation of Texas, proslavery expansionists hoped to replicate that feat by bringing Cuba and other territories into the United States and thereby enlarging the American empire of slavery. In the 1850s, the expansionist drive among White southerners intensified. Among southern imperialists, one way to push for the creation of an American empire of slavery was through the actions of filibusters—men who led unofficial military operations intended to seize land from foreign countries or foment revolution there. These unsanctioned military adventures were not part of the official foreign policy of the United States; American citizens simply formed themselves into private armies to forcefully annex new land without the government’s approval. An 1818 federal law made it a crime to undertake such adventures, which was an indication of both the reality of efforts at expansion through these illegal expeditions and the government’s effort to create a U.S. foreign policy. Nonetheless, Americans continued to filibuster throughout the nineteenth century. In 1819, an expedition of two hundred Americans invaded Spanish Texas, intent on creating a republic modeled on the United States, only to be driven out by Spanish forces. Using force, taking action, and asserting White supremacy in these militaristic drives were seen by many as an ideal of American male vigor. President Jackson epitomized this military prowess as an officer in the Tennessee militia, where earlier in the century he had played a leading role in ending the Creek War and driving Native peoples out of Alabama and Georgia. His reputation helped him to win the presidency in 1828 and again in 1832. Filibustering plots picked up pace in the 1850s as the drive for expansion continued. Slaveholders looked south to the Caribbean, Mexico, and Central America, hoping to add new slave states. Spanish Cuba became the objective of many American slaveholders in the 1850s, as debate over the island dominated the national conversation. Many who urged its annexation believed Cuba had to be made part of the United States to prevent it from going the route of Haiti, with enslaved Black people overthrowing their captors and creating another Black republic, a prospect horrifying to many in the United States. 
Americans also feared that the British, who had an interest in the sugar island, would make the first move and snatch Cuba from the United States. Since Britain had outlawed slavery in its colonies in 1833, Black people on the island of Cuba would then be free. Narciso López, a Venezuelan who wanted to end Spanish control of the island, gained American support. He tried five times to take the island, with his last effort occurring in the summer of 1851 when he led an armed group from New Orleans. Thousands came out to cheer his small force as they set off to wrest Cuba from the Spanish. Unfortunately for López and his supporters, however, the effort to take Cuba did not produce the hoped-for spontaneous uprising of the Cuban people. Spanish authorities in Cuba captured and executed López and the American filibusters. Efforts to take Cuba continued under President Franklin Pierce, who had announced at his inauguration in 1853 his intention to pursue expansion. In 1854, American diplomats met in Ostend, Belgium, to find a way to gain Cuba. They wrote a secret memo, known as the Ostend Manifesto (thought to be penned by James Buchanan, who was elected president two years later), stating that if Spain refused to sell Cuba to the United States, the United States was justified in taking the island as a national security measure. The contents of this memo were supposed to remain secret, but details were leaked to the public, leading the House of Representatives to demand a copy. Many in the North were outraged over what appeared to be a southern scheme, orchestrated by what they perceived as the Slave Power—a term they used to describe the disproportionate influence that elite slaveholders wielded—to expand slavery. European powers also reacted with anger. Southern annexationists, however, applauded the effort to take Cuba. The Louisiana legislature in 1854 asked the federal government to take decisive action, and John Quitman, a former Mississippi governor, raised money from slaveholders to fund efforts to take the island. Controversy around the Ostend Manifesto caused President Pierce to step back from the plan to take Cuba. After his election, President Buchanan, despite his earlier expansionist efforts, denounced filibustering as the action of pirates. Filibustering caused an even wider gulf between the North and the South ( Figure 12.18 ). Cuba was not the only territory in slaveholders’ expansionist sights: some focused on Mexico and Central America. In 1855, Tennessee-born William Walker, along with an army of no more than sixty mercenaries, gained control of the Central American nation of Nicaragua. Previously, Walker had launched a successful invasion of Mexico, dubbing his conquered land the Republic of Sonora . In a relatively short period of time, Walker was dislodged from Sonora by Mexican authorities and forced to retreat back to the United States. His conquest of Nicaragua garnered far more attention, catapulting him into national popularity as the heroic embodiment of White supremacy ( Figure 12.19 ). Why Nicaragua? Nicaragua presented a tempting target because it provided a quick route from the Caribbean to the Pacific: Only twelve miles of land stood between the Pacific Ocean, the inland Lake Nicaragua, and the river that drained into the Atlantic. Shipping from the East Coast to the West Coast of the United States had to travel either by land across the continent, south around the entire continent of South America, or through Nicaragua. 
Previously, American tycoon Cornelius Vanderbilt (Figure 12.19) had recognized the strategic importance of Nicaragua and worked with the Nicaraguan government to control shipping there. The filibustering of William Walker may have excited expansionist-minded southerners, but it greatly upset Vanderbilt’s business interests in the region. Walker clung to the racist, expansionist philosophies of the proslavery South. In 1856, Walker made slavery legal in Nicaragua—it had been illegal there for thirty years—in a move to gain the support of the South, and he also reopened the slave trade. That same year, he was elected president of Nicaragua, but in 1857, he was chased from the country. When he returned to Central America in 1860, he was captured by the British and turned over to Honduran authorities, who executed him by firing squad.
psychology
Summary 11.1 What Is Personality? Personality has been studied for over 2,000 years, beginning with Hippocrates. More recent theories of personality have been proposed, including Freud’s psychodynamic perspective, which holds that personality is formed through early childhood experiences. Other perspectives then emerged in reaction to the psychodynamic perspective, including the learning, humanistic, biological, trait, and cultural perspectives. 11.2 Freud and the Psychodynamic Perspective Sigmund Freud presented the first comprehensive theory of personality. He was also the first to recognize that much of our mental life takes place outside of our conscious awareness. Freud also proposed three components to our personality: the id, ego, and superego. The job of the ego is to balance the sexual and aggressive drives of the id with the moral ideal of the superego. Freud also said that personality develops through a series of psychosexual stages. In each stage, pleasure focuses on a specific erogenous zone. Failure to resolve a stage can leave one fixated in that stage, resulting in unhealthy personality traits. Successful resolution of the stages leads to a healthy adult. 11.3 Neo-Freudians: Adler, Erikson, Jung, and Horney The neo-Freudians were psychologists whose work followed from Freud’s. They generally agreed with Freud that childhood experiences matter, but they decreased the emphasis on sex and focused more on the social environment and effects of culture on personality. Some of the notable neo-Freudians are Alfred Adler, Carl Jung, Erik Erikson, and Karen Horney. The neo-Freudian approaches have been criticized because they tend to be philosophical rather than based on sound scientific research. For example, Jung’s conclusions about the existence of the collective unconscious are based on myths, legends, dreams, and art. In addition, as with Freud’s psychoanalytic theory, the neo-Freudians based much of their theories of personality on information from their patients. 11.4 Learning Approaches Behavioral theorists view personality as significantly shaped by the reinforcements and consequences outside the organism. People behave in a consistent manner based on prior learning. B. F. Skinner, a prominent behaviorist, said that we demonstrate consistent behavior patterns because we have developed certain response tendencies. Walter Mischel focused on how personal goals play a role in the self-regulation process. Albert Bandura said that one’s environment can determine behavior, but at the same time, people can influence the environment with both their thoughts and behaviors, which is known as reciprocal determinism. Bandura also emphasized how we learn from watching others. He felt that this type of learning also plays a part in the development of our personality. Bandura discussed the concept of self-efficacy, which is our level of confidence in our own abilities. Finally, Julian Rotter proposed the concept of locus of control, which refers to our beliefs about the power we have over our lives. He said that people fall along a continuum between a purely internal and a purely external locus of control. 11.5 Humanistic Approaches Humanistic psychologists Abraham Maslow and Carl Rogers focused on the growth potential of healthy individuals. They believed that people strive to become self-actualized. Both Rogers’s and Maslow’s theories greatly contributed to our understanding of the self. 
They emphasized free will and self-determination, with each individual desiring to become the best person they can be. 11.6 Biological Approaches Some aspects of our personalities are largely controlled by genetics; however, environmental factors (such as family interactions) and maturation can affect the ways in which children’s personalities are expressed. 11.7 Trait Theorists Trait theorists attempt to explain our personality by identifying our stable characteristics and ways of behaving. They have identified important dimensions of personality. The Five Factor Model is the most widely accepted trait theory today. The five factors are openness, conscientiousness, extroversion, agreeableness, and neuroticism. These traits occur along a continuum. 11.8 Cultural Understandings of Personality The culture in which you live is one of the most important environmental factors that shapes your personality. Western ideas about personality may not be applicable to other cultures. In fact, there is evidence that the strength of personality traits varies across cultures. Individualist cultures and collectivist cultures place emphasis on different basic values. People who live in individualist cultures tend to believe that independence, competition, and personal achievement are important. People who live in collectivist cultures value social harmony, respectfulness, and group needs over individual needs. There are three approaches that can be used to study personality in a cultural context: the cultural-comparative approach, the indigenous approach, and the combined approach, which incorporates elements of both views. 11.9 Personality Assessment Personality tests are techniques designed to measure one’s personality. They are used to diagnose psychological problems as well as to screen candidates for college and employment. There are two types of personality tests: self-report inventories and projective tests. The MMPI is one of the most common self-report inventories. It asks a series of true/false questions that are designed to provide a clinical profile of an individual. Projective tests use ambiguous images or other ambiguous stimuli to assess an individual’s unconscious fears, desires, and challenges. The Rorschach Inkblot Test, the TAT, the RISB, and the C-TCB are all forms of projective tests.
Chapter Outline 11.1 What Is Personality? 11.2 Freud and the Psychodynamic Perspective 11.3 Neo-Freudians: Adler, Erikson, Jung, and Horney 11.4 Learning Approaches 11.5 Humanistic Approaches 11.6 Biological Approaches 11.7 Trait Theorists 11.8 Cultural Understandings of Personality 11.9 Personality Assessment Introduction Three months before William Jefferson Blythe III was born, his father died in a car accident. He was raised in Hope, Arkansas, by his mother, Virginia Dell, and his grandparents. When he turned four, his mother married Roger Clinton, Jr., an alcoholic who physically abused her. Six years later, Virginia gave birth to another son, Roger. William, who later took the last name Clinton from his stepfather, became the 42nd president of the United States. While Bill Clinton was making his political ascent, his half-brother, Roger Clinton, was arrested numerous times on charges including drug possession, conspiracy to distribute cocaine, and driving under the influence, and he served time in jail. Two brothers, raised by the same people, took radically different paths in their lives. Why did they make the choices they did? What internal forces shaped their decisions? Personality psychology can help us answer these questions and more.
[ { "answer": { "ans_choice": 3, "ans_text": "long term, stable and not easily changed" }, "bloom": null, "hl_context": "Personality refers to the long-standing traits and patterns that propel individuals to consistently think , feel , and behave in specific ways . Our personality is what makes us unique individuals . Each person has an idiosyncratic pattern of enduring , long-term characteristics and a manner in which he or she interacts with other individuals and the world around them . <hl> Our personalities are thought to be long term , stable , and not easily changed . <hl> The word personality comes from the Latin word persona . In the ancient world , a persona was a mask worn by an actor . While we tend to think of a mask as being worn to conceal one ’ s identity , the theatrical mask was originally used to either represent or project a specific personality trait of a character ( Figure 11.2 ) .", "hl_sentences": "Our personalities are thought to be long term , stable , and not easily changed .", "question": { "cloze_format": "Personality is thought to be ________.", "normal_format": "What is personality thought to be?", "question_choices": [ "short term and easily changed", "a pattern of short-term characteristics", "unstable and short term", "long term, stable and not easily changed" ], "question_id": "fs-idm147588048", "question_text": "Personality is thought to be ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "personality" }, "bloom": null, "hl_context": "<hl> Personality refers to the long-standing traits and patterns that propel individuals to consistently think , feel , and behave in specific ways . <hl> Our personality is what makes us unique individuals . Each person has an idiosyncratic pattern of enduring , long-term characteristics and a manner in which he or she interacts with other individuals and the world around them . Our personalities are thought to be long term , stable , and not easily changed . The word personality comes from the Latin word persona . In the ancient world , a persona was a mask worn by an actor . While we tend to think of a mask as being worn to conceal one ’ s identity , the theatrical mask was originally used to either represent or project a specific personality trait of a character ( Figure 11.2 ) .", "hl_sentences": "Personality refers to the long-standing traits and patterns that propel individuals to consistently think , feel , and behave in specific ways .", "question": { "cloze_format": "The long-standing traits and patterns that propel individuals to consistently think, feel, and behave in specific ways are known as ________.", "normal_format": "The long-standing traits and patterns that propel individuals to consistently think, feel, and behave in specific ways are known as what?", "question_choices": [ "psychodynamic", "temperament", "humors", "personality" ], "question_id": "fs-idm142790960", "question_text": "The long-standing traits and patterns that propel individuals to consistently think, feel, and behave in specific ways are known as ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "Freud" }, "bloom": null, "hl_context": "In the centuries after Galen , other researchers contributed to the development of his four primary temperament types , most prominently Immanuel Kant ( in the 18th century ) and psychologist Wilhelm Wundt ( in the 19th century ) ( Eysenck , 2009 ; Stelmack & Stalikas , 1991 ; Wundt , 1874/1886 ) ( Figure 11.4 ) . 
Kant agreed with Galen that everyone could be sorted into one of the four temperaments and that there was no overlap between the four categories ( Eysenck , 2009 ) . He developed a list of traits that could be used to describe the personality of a person from each of the four temperaments . However , Wundt suggested that a better description of personality could be achieved using two major axes : emotional / nonemotional and changeable / unchangeable . The first axis separated strong from weak emotions ( the melancholic and choleric temperaments from the phlegmatic and sanguine ) . The second axis divided the changeable temperaments ( choleric and sanguine ) from the unchangeable ones ( melancholic and phlegmatic ) ( Eysenck , 2009 ) . <hl> Sigmund Freud ’ s psychodynamic perspective of personality was the first comprehensive theory of personality , explaining a wide variety of both normal and abnormal behaviors . <hl> According to Freud , unconscious drives influenced by sex and aggression , along with childhood sexuality , are the forces that influence our personality . Freud attracted many followers who modified his ideas to create new theories about personality . These theorists , referred to as neo-Freudians , generally agreed with Freud that childhood experiences matter , but they reduced the emphasis on sex and focused more on the social environment and effects of culture on personality . The perspective of personality proposed by Freud and his followers was the dominant theory of personality for the first half of the 20th century .", "hl_sentences": "Sigmund Freud ’ s psychodynamic perspective of personality was the first comprehensive theory of personality , explaining a wide variety of both normal and abnormal behaviors .", "question": { "cloze_format": "________ is credited with the first comprehensive theory of personality.", "normal_format": "Who is credited with the first comprehensive theory of personality?", "question_choices": [ "Hippocrates", "Gall", "Wundt", "Freud" ], "question_id": "fs-idp47985824", "question_text": "________ is credited with the first comprehensive theory of personality." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "phrenology" }, "bloom": null, "hl_context": "<hl> In 1780 , Franz Gall , a German physician , proposed that the distances between bumps on the skull reveal a person ’ s personality traits , character , and mental abilities ( Figure 11.3 ) . <hl> According to Gall , measuring these distances revealed the sizes of the brain areas underneath , providing information that could be used to determine whether a person was friendly , prideful , murderous , kind , good with languages , and so on . <hl> Initially , phrenology was very popular ; however , it was soon discredited for lack of empirical support and has long been relegated to the status of pseudoscience ( Fancher , 1979 ) . <hl>", "hl_sentences": "In 1780 , Franz Gall , a German physician , proposed that the distances between bumps on the skull reveal a person ’ s personality traits , character , and mental abilities ( Figure 11.3 ) . 
Initially , phrenology was very popular ; however , it was soon discredited for lack of empirical support and has long been relegated to the status of pseudoscience ( Fancher , 1979 ) .", "question": { "cloze_format": "An early science that tried to correlate personality with measurements of parts of a person’s skull is known as ________.", "normal_format": "An early science that tried to correlate personality with measurements of parts of a person’s skull is known as what?", "question_choices": [ "phrenology", "psychology", "physiology", "personality psychology" ], "question_id": "fs-idm96722816", "question_text": "An early science that tried to correlate personality with measurements of parts of a person’s skull is known as ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "pleasure" }, "bloom": null, "hl_context": "Levels of Consciousness To explain the concept of conscious versus unconscious experience , Freud compared the mind to an iceberg ( Figure 11.5 ) . He said that only about one-tenth of our mind is conscious , and the rest of our mind is unconscious . Our unconscious refers to that mental activity of which we are unaware and are unable to access ( Freud , 1923 ) . According to Freud , unacceptable urges and desires are kept in our unconscious through a process called repression . For example , we sometimes say things that we don ’ t intend to say by unintentionally substituting another word for the one we meant . You ’ ve probably heard of a Freudian slip , the term used to describe this . Freud suggested that slips of the tongue are actually sexual or aggressive urges , accidentally slipping out of our unconscious . Speech errors such as this are quite common . Rather than seeing them as a reflection of unconscious desires , linguists today have found that slips of the tongue tend to occur when we are tired , nervous , or not at our optimal level of cognitive functioning ( Motley , 2002 ) . According to Freud , our personality develops from a conflict between two forces : our biological aggressive and pleasure-seeking drives versus our internal ( socialized ) control over these drives . Our personality is the result of our efforts to balance these two competing forces . Freud suggested that we can understand this by imagining three interacting systems within our minds . He called them the id , ego , and superego ( Figure 11.6 ) . The unconscious id contains our most primitive drives or urges , and is present from birth . It directs impulses for hunger , thirst , and sex . <hl> Freud believed that the id operates on what he called the “ pleasure principle , ” in which the id seeks immediate gratification . <hl> Through social interactions with parents and others in a child ’ s environment , the ego and superego develop to help control the id . The superego develops as a child interacts with others , learning the social rules for right and wrong . The superego acts as our conscience ; it is our moral compass that tells us how we should behave . It strives for perfection and judges our behavior , leading to feelings of pride or — when we fall short of the ideal — feelings of guilt . In contrast to the instinctual id and the rule-based superego , the ego is the rational part of our personality . It ’ s what Freud considered to be the self , and it is the part of our personality that is seen by others . Its job is to balance the demands of the id and superego in the context of reality ; thus , it operates on what Freud called the “ reality principle . 
” The ego helps the id satisfy its desires in a realistic way .", "hl_sentences": "Freud believed that the id operates on what he called the “ pleasure principle , ” in which the id seeks immediate gratification .", "question": { "cloze_format": "The id operates on the ________ principle.", "normal_format": "On which principle does the id operate?", "question_choices": [ "reality", "pleasure", "instant gratification", "guilt" ], "question_id": "fs-idm135064480", "question_text": "The id operates on the ________ principle." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "regression" }, "bloom": null, "hl_context": "Another defense mechanism is reaction formation , in which someone expresses feelings , thoughts , and behaviors opposite to their inclinations . In the above example , Joe made fun of a homosexual peer while himself being attracted to males . <hl> In regression , an individual acts much younger than their age . <hl> For example , a four-year-old child who resents the arrival of a newborn sibling may act like a baby and revert to drinking out of a bottle . In projection , a person refuses to acknowledge her own unconscious feelings and instead sees those feelings in someone else . Other defense mechanisms include rationalization , displacement , and sublimation . The id and superego are in constant conflict , because the id wants instant gratification regardless of the consequences , but the superego tells us that we must behave in socially acceptable ways . Thus , the ego ’ s job is to find the middle ground . It helps satisfy the id ’ s desires in a rational way that will not lead us to feelings of guilt . According to Freud , a person who has a strong ego , which can balance the demands of the id and the superego , has a healthy personality . Freud maintained that imbalances in the system can lead to neurosis ( a tendency to experience negative emotions ) , anxiety disorders , or unhealthy behaviors . For example , a person who is dominated by their id might be narcissistic and impulsive . A person with a dominant superego might be controlled by feelings of guilt and deny themselves even socially acceptable pleasures ; conversely , if the superego is weak or absent , a person might become a psychopath . An overly dominant superego might be seen in an over-controlled individual whose rational grasp on reality is so strong that they are unaware of their emotional needs , or , in a neurotic who is overly defensive ( overusing ego defense mechanisms ) . Defense Mechanisms Freud believed that feelings of anxiety result from the ego ’ s inability to mediate the conflict between the id and superego . When this happens , Freud believed that the ego seeks to restore balance through various protective measures known as defense mechanisms ( Figure 11.7 ) . <hl> When certain events , feelings , or yearnings cause an individual anxiety , the individual wishes to reduce that anxiety . <hl> <hl> To do that , the individual ’ s unconscious mind uses ego defense mechanisms , unconscious protective behaviors that aim to reduce anxiety . <hl> The ego , usually conscious , resorts to unconscious strivings to protect the ego from being overwhelmed by anxiety . When we use defense mechanisms , we are unaware that we are using them . Further , they operate in various ways that distort reality . According to Freud , we all use ego defense mechanisms . While everyone uses defense mechanisms , Freud believed that overuse of them may be problematic . 
For example , let ’ s say Joe Smith is a high school football player . Deep down , Joe feels sexually attracted to males . His conscious belief is that being gay is immoral and that if he were gay , his family would disown him and he would be ostracized by his peers . Therefore , there is a conflict between his conscious beliefs ( being gay is wrong and will result in being ostracized ) and his unconscious urges ( attraction to males ) . The idea that he might be gay causes Joe to have feelings of anxiety . How can he decrease his anxiety ? Joe may find himself acting very “ macho , ” making gay jokes , and picking on a school peer who is gay . This way , Joe ’ s unconscious impulses are further submerged .", "hl_sentences": "In regression , an individual acts much younger than their age . When certain events , feelings , or yearnings cause an individual anxiety , the individual wishes to reduce that anxiety . To do that , the individual ’ s unconscious mind uses ego defense mechanisms , unconscious protective behaviors that aim to reduce anxiety .", "question": { "cloze_format": "The ego defense mechanism in which a person who is confronted with anxiety returns to a more immature behavioral stage is called ________.", "normal_format": "What is the ego defense mechanism in which a person who is confronted with anxiety returns to a more immature behavioral stage called?", "question_choices": [ "repression", "regression", "reaction formation", "rationalization" ], "question_id": "fs-idm2953264", "question_text": "The ego defense mechanism in which a person who is confronted with anxiety returns to a more immature behavioral stage is called ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "phallic" }, "bloom": null, "hl_context": "<hl> Freud ’ s third stage of psychosexual development is the phallic stage ( 3 – 6 years ) , corresponding to the age when children become aware of their bodies and recognize the differences between boys and girls . <hl> The erogenous zone in this stage is the genitals . Conflict arises when the child feels a desire for the opposite-sex parent , and jealousy and hatred toward the same-sex parent . <hl> For boys , this is called the Oedipus complex , involving a boy's desire for his mother and his urge to replace his father who is seen as a rival for the mother ’ s attention . <hl> At the same time , the boy is afraid his father will punish him for his feelings , so he experiences castration anxiety . The Oedipus complex is successfully resolved when the boy begins to identify with his father as an indirect way to have the mother . Failure to resolve the Oedipus complex may result in fixation and development of a personality that might be described as vain and overly ambitious . 
For boys , this is called the Oedipus complex , involving a boy's desire for his mother and his urge to replace his father who is seen as a rival for the mother ’ s attention .", "question": { "cloze_format": "The Oedipus complex occurs in the ________ stage of psychosexual development.", "normal_format": "The Oedipus complex occurs in which stage of psychosexual development?", "question_choices": [ "oral", "anal", "phallic", "latency" ], "question_id": "fs-idm43251568", "question_text": "The Oedipus complex occurs in the ________ stage of psychosexual development." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "collective unconscious" }, "bloom": null, "hl_context": "<hl> The collective unconscious is a universal version of the personal unconscious , holding mental patterns , or memory traces , which are common to all of us ( Jung , 1928 ) . <hl> <hl> These ancestral memories , which Jung called archetypes , are represented by universal themes in various cultures , as expressed through literature , art , and dreams ( Jung ) . <hl> Jung said that these themes reflect common experiences of people the world over , such as facing death , becoming independent , and striving for mastery . <hl> Jung ( 1964 ) believed that through biology , each person is handed down the same themes and that the same types of symbols — such as the hero , the maiden , the sage , and the trickster — are present in the folklore and fairy tales of every culture . <hl> In Jung ’ s view , the task of integrating these unconscious archetypal aspects of the self is part of the self-realization process in the second half of life . With this orientation toward self-realization , Jung parted ways with Freud ’ s belief that personality is determined solely by past events and anticipated the humanistic movement with its emphasis on self-actualization and orientation toward the future .", "hl_sentences": "The collective unconscious is a universal version of the personal unconscious , holding mental patterns , or memory traces , which are common to all of us ( Jung , 1928 ) . These ancestral memories , which Jung called archetypes , are represented by universal themes in various cultures , as expressed through literature , art , and dreams ( Jung ) . Jung ( 1964 ) believed that through biology , each person is handed down the same themes and that the same types of symbols — such as the hero , the maiden , the sage , and the trickster — are present in the folklore and fairy tales of every culture .", "question": { "cloze_format": "The universal bank of ideas, images, and concepts that have been passed down through the generations from our ancestors refers to ________.", "normal_format": "What does the universal bank of ideas, images, and concepts that have been passed down through the generations from our ancestors refer to?", "question_choices": [ "archetypes", "intuition", "collective unconscious", "personality types" ], "question_id": "fs-idm4144288", "question_text": "The universal bank of ideas, images, and concepts that have been passed down through the generations from our ancestors refers to ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "will power" }, "bloom": null, "hl_context": "One of Mischel ’ s most notable contributions to personality psychology was his ideas on self-regulation . 
According to Lecci & Magnavita ( 2013 ) , “ Self-regulation is the process of identifying a goal or set of goals and , in pursuing these goals , using both internal ( e . g . , thoughts and affect ) and external ( e . g . , responses of anything or anyone in the environment ) feedback to maximize goal attainment ” ( p . 6.3 ) . <hl> Self-regulation is also known as will power . <hl> When we talk about will power , we tend to think of it as the ability to delay gratification . For example , Bettina ’ s teenage daughter made strawberry cupcakes , and they looked delicious . However , Bettina forfeited the pleasure of eating one , because she is training for a 5K race and wants to be fit and do well in the race . Would you be able to resist getting a small reward now in order to get a larger reward later ? This is the question Mischel investigated in his now-classic marshmallow test .", "hl_sentences": "Self-regulation is also known as will power .", "question": { "cloze_format": "Self-regulation is also known as ________.", "normal_format": "What is self-regulation also known as?", "question_choices": [ "self-efficacy", "will power", "internal locus of control", "external locus of control" ], "question_id": "fs-idp99048256", "question_text": "Self-regulation is also known as ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "self-efficacy" }, "bloom": null, "hl_context": "Bandura ( 1977 , 1995 ) has studied a number of cognitive and personal factors that affect learning and personality development , and most recently has focused on the concept of self-efficacy . <hl> Self-efficacy is our level of confidence in our own abilities , developed through our social experiences . <hl> Self-efficacy affects how we approach challenges and reach goals . In observational learning , self-efficacy is a cognitive factor that affects which behaviors we choose to imitate as well as our success in performing those behaviors .", "hl_sentences": "Self-efficacy is our level of confidence in our own abilities , developed through our social experiences .", "question": { "cloze_format": "Your level of confidence in your own abilities is known as ________.", "normal_format": "Your level of confidence in your own abilities is known as what?", "question_choices": [ "self-efficacy", "self-concept", "self-control", "self-esteem" ], "question_id": "fs-idp171884848", "question_text": "Your level of confidence in your own abilities is known as ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "external" }, "bloom": null, "hl_context": "Julian Rotter ( 1966 ) proposed the concept of locus of control , another cognitive factor that affects learning and personality development . Distinct from self-efficacy , which involves our belief in our own abilities , locus of control refers to our beliefs about the power we have over our lives . In Rotter ’ s view , people possess either an internal or an external locus of control ( Figure 11.11 ) . Those of us with an internal locus of control ( “ internals ” ) tend to believe that most of our outcomes are the direct result of our efforts . <hl> Those of us with an external locus of control ( “ externals ” ) tend to believe that our outcomes are outside of our control . <hl> Externals see their lives as being controlled by other people , luck , or chance . For example , say you didn ’ t spend much time studying for your psychology test and went out to dinner with friends instead . 
When you receive your test score , you see that you earned a D . If you possess an internal locus of control , you would most likely admit that you failed because you didn ’ t spend enough time studying and decide to study more for the next test . On the other hand , if you possess an external locus of control , you might conclude that the test was too hard and not bother studying for the next test , because you figure you will fail it anyway . Researchers have found that people with an internal locus of control perform better academically , achieve more in their careers , are more independent , are healthier , are better able to cope , and are less depressed than people who have an external locus of control ( Benassi , Sweeney , & Durfour , 1988 ; Lefcourt , 1982 ; Maltby , Day , & Macaskill , 2007 ; Whyte , 1977 , 1978 , 1980 ) .", "hl_sentences": "Those of us with an external locus of control ( “ externals ” ) tend to believe that our outcomes are outside of our control .", "question": { "cloze_format": "Jane believes that she got a bad grade on her psychology paper because her professor doesn’t like her. Jane most likely has an _______ locus of control.", "normal_format": "Jane believes that she got a bad grade on her psychology paper because her professor doesn’t like her. Which type of locus of control does Jane most likely have?", "question_choices": [ "internal", "external", "intrinsic", "extrinsic" ], "question_id": "fs-idp109083568", "question_text": "Jane believes that she got a bad grade on her psychology paper because her professor doesn’t like her. Jane most likely has an _______ locus of control." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "all of our thoughts and feelings about ourselves" }, "bloom": null, "hl_context": "Another humanistic theorist was Carl Rogers . <hl> One of Rogers ’ s main ideas about personality regards self-concept , our thoughts and feelings about ourselves . <hl> How would you respond to the question , “ Who am I ? ” Your answer can show how you see yourself . If your response is primarily positive , then you tend to feel good about who you are , and you see the world as a safe and positive place . If your response is mainly negative , then you may feel unhappy with who you are . Rogers further divided the self into two categories : the ideal self and the real self . The ideal self is the person that you would like to be ; the real self is the person you actually are . Rogers focused on the idea that we need to achieve consistency between these two selves . We experience congruence when our thoughts about our real self and ideal self are very similar — in other words , when our self-concept is accurate . High congruence leads to a greater sense of self-worth and a healthy , productive life . Parents can help their children achieve this by giving them unconditional positive regard , or unconditional love . According to Rogers ( 1980 ) , “ As persons are accepted and prized , they tend to develop a more caring attitude towards themselves ” ( p . 116 ) . Conversely , when there is a great discrepancy between our ideal and actual selves , we experience a state Rogers called incongruence , which can lead to maladjustment . Both Rogers ’ s and Maslow ’ s theories focus on individual choices and do not believe that biology is deterministic . 
11.6 Biological Approaches Learning Objectives By the end of this section , you will be able to :", "hl_sentences": "One of Rogers ’ s main ideas about personality regards self-concept , our thoughts and feelings about ourselves .", "question": { "cloze_format": "Self-concept refers to ________.", "normal_format": "What does self-concept refer to?", "question_choices": [ "our level of confidence in our own abilities", "all of our thoughts and feelings about ourselves", "the belief that we control our own outcomes", "the belief that our outcomes are outside of our control" ], "question_id": "fs-idm88196816", "question_text": "Self-concept refers to ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "congruence" }, "bloom": null, "hl_context": "Another humanistic theorist was Carl Rogers . One of Rogers ’ s main ideas about personality regards self-concept , our thoughts and feelings about ourselves . How would you respond to the question , “ Who am I ? ” Your answer can show how you see yourself . If your response is primarily positive , then you tend to feel good about who you are , and you see the world as a safe and positive place . If your response is mainly negative , then you may feel unhappy with who you are . Rogers further divided the self into two categories : the ideal self and the real self . The ideal self is the person that you would like to be ; the real self is the person you actually are . Rogers focused on the idea that we need to achieve consistency between these two selves . <hl> We experience congruence when our thoughts about our real self and ideal self are very similar — in other words , when our self-concept is accurate . <hl> High congruence leads to a greater sense of self-worth and a healthy , productive life . Parents can help their children achieve this by giving them unconditional positive regard , or unconditional love . According to Rogers ( 1980 ) , “ As persons are accepted and prized , they tend to develop a more caring attitude towards themselves ” ( p . 116 ) . Conversely , when there is a great discrepancy between our ideal and actual selves , we experience a state Rogers called incongruence , which can lead to maladjustment . Both Rogers ’ s and Maslow ’ s theories focus on individual choices and do not believe that biology is deterministic . 11.6 Biological Approaches Learning Objectives By the end of this section , you will be able to :", "hl_sentences": "We experience congruence when our thoughts about our real self and ideal self are very similar — in other words , when our self-concept is accurate .", "question": { "cloze_format": "The idea that people’s ideas about themselves should match their actions is called ________.", "normal_format": "What is the idea that people’s ideas about themselves should match their actions called?", "question_choices": [ "confluence", "conscious", "conscientiousness", "congruence" ], "question_id": "fs-idm199631728", "question_text": "The idea that people’s ideas about themselves should match their actions is called ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "temperament" }, "bloom": null, "hl_context": "<hl> Most contemporary psychologists believe temperament has a biological basis due to its appearance very early in our lives ( Rothbart , 2011 ) . 
<hl> As you learned when you studied lifespan development , Thomas and Chess ( 1977 ) found that babies could be categorized into one of three temperaments : easy , difficult , or slow to warm up . However , environmental factors ( family interactions , for example ) and maturation can affect the ways in which children ’ s personalities are expressed ( Carter et al . , 2008 ) . The concept of personality has been studied for at least 2,000 years , beginning with Hippocrates in 370 BCE ( Fazeli , 2012 ) . <hl> Hippocrates theorized that personality traits and human behaviors are based on four separate temperaments associated with four fluids ( “ humors ” ) of the body : choleric temperament ( yellow bile from the liver ) , melancholic temperament ( black bile from the kidneys ) , sanguine temperament ( red blood from the heart ) , and phlegmatic temperament ( white phlegm from the lungs ) ( Clark & Watson , 2008 ; Eysenck & Eysenck , 1985 ; Lecci & Magnavita , 2013 ; Noga , 2007 ) . <hl> <hl> Centuries later , the influential Greek physician and philosopher Galen built on Hippocrates ’ s theory , suggesting that both diseases and personality differences could be explained by imbalances in the humors and that each person exhibits one of the four temperaments . <hl> <hl> For example , the choleric person is passionate , ambitious , and bold ; the melancholic person is reserved , anxious , and unhappy ; the sanguine person is joyful , eager , and optimistic ; and the phlegmatic person is calm , reliable , and thoughtful ( Clark & Watson , 2008 ; Stelmack & Stalikas , 1991 ) . <hl> Galen ’ s theory was prevalent for over 1,000 years and continued to be popular through the Middle Ages .", "hl_sentences": "Most contemporary psychologists believe temperament has a biological basis due to its appearance very early in our lives ( Rothbart , 2011 ) . Hippocrates theorized that personality traits and human behaviors are based on four separate temperaments associated with four fluids ( “ humors ” ) of the body : choleric temperament ( yellow bile from the liver ) , melancholic temperament ( black bile from the kidneys ) , sanguine temperament ( red blood from the heart ) , and phlegmatic temperament ( white phlegm from the lungs ) ( Clark & Watson , 2008 ; Eysenck & Eysenck , 1985 ; Lecci & Magnavita , 2013 ; Noga , 2007 ) . Centuries later , the influential Greek physician and philosopher Galen built on Hippocrates ’ s theory , suggesting that both diseases and personality differences could be explained by imbalances in the humors and that each person exhibits one of the four temperaments . For example , the choleric person is passionate , ambitious , and bold ; the melancholic person is reserved , anxious , and unhappy ; the sanguine person is joyful , eager , and optimistic ; and the phlegmatic person is calm , reliable , and thoughtful ( Clark & Watson , 2008 ; Stelmack & Stalikas , 1991 ) .", "question": { "cloze_format": "The way a person reacts to the world, starting when they are very young, including the person’s activity level is known as ________.", "normal_format": "What is known as the way a person reacts to the world, starting when they are very young, including their activity level?", "question_choices": [ "traits", "temperament", "heritability", "personality" ], "question_id": "fs-idp99300112", "question_text": "The way a person reacts to the world, starting when they are very young, including the person’s activity level is known as ________." 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "a difficult baby" }, "bloom": null, "hl_context": "Most contemporary psychologists believe temperament has a biological basis due to its appearance very early in our lives ( Rothbart , 2011 ) . <hl> As you learned when you studied lifespan development , Thomas and Chess ( 1977 ) found that babies could be categorized into one of three temperaments : easy , difficult , or slow to warm up . <hl> However , environmental factors ( family interactions , for example ) and maturation can affect the ways in which children ’ s personalities are expressed ( Carter et al . , 2008 ) .", "hl_sentences": "As you learned when you studied lifespan development , Thomas and Chess ( 1977 ) found that babies could be categorized into one of three temperaments : easy , difficult , or slow to warm up .", "question": { "cloze_format": "Brianna is 18 months old. She cries frequently, is hard to soothe, and wakes frequently during the night. According to Thomas and Chess, she would be considered ________.", "normal_format": "Brianna is 18 months old. She cries frequently, is hard to soothe, and wakes frequently during the night. What would Thomas and Chess consider her to be?", "question_choices": [ "an easy baby", "a difficult baby", "a slow to warm up baby", "a colicky baby" ], "question_id": "fs-idm11446400", "question_text": "Brianna is 18 months old. She cries frequently, is hard to soothe, and wakes frequently during the night. According to Thomas and Chess, she would be considered ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "very similar" }, "bloom": null, "hl_context": "<hl> In the field of behavioral genetics , the Minnesota Study of Twins Reared Apart — a well-known study of the genetic basis for personality — conducted research with twins from 1979 to 1999 . <hl> <hl> In studying 350 pairs of twins , including pairs of identical and fraternal twins reared together and apart , researchers found that identical twins , whether raised together or apart , have very similar personalities ( Bouchard , 1994 ; Bouchard , Lykken , McGue , Segal , & Tellegen , 1990 ; Segal , 2012 ) . <hl> These findings suggest the heritability of some personality traits . Heritability refers to the proportion of difference among people that is attributed to genetics . Some of the traits that the study reported as having more than a 0.50 heritability ratio include leadership , obedience to authority , a sense of well-being , alienation , resistance to stress , and fearfulness . The implication is that some aspects of our personalities are largely controlled by genetics ; however , it ’ s important to point out that traits are not determined by a single gene , but by a combination of many genes , as well as by epigenetic factors that control whether the genes are expressed .", "hl_sentences": "In the field of behavioral genetics , the Minnesota Study of Twins Reared Apart — a well-known study of the genetic basis for personality — conducted research with twins from 1979 to 1999 . 
In studying 350 pairs of twins , including pairs of identical and fraternal twins reared together and apart , researchers found that identical twins , whether raised together or apart , have very similar personalities ( Bouchard , 1994 ; Bouchard , Lykken , McGue , Segal , & Tellegen , 1990 ; Segal , 2012 ) .", "question": { "cloze_format": "According to the findings of the Minnesota Study of Twins Reared Apart, identical twins, whether raised together or apart, have ________ personalities.", "normal_format": "According to the findings of the Minnesota Study of Twins Reared Apart, what kind of personalities do identical twins have, whether raised together or apart?", "question_choices": [ "slightly different", "very different", "slightly similar", "very similar" ], "question_id": "fs-idm30533600", "question_text": "According to the findings of the Minnesota Study of Twins Reared Apart, identical twins, whether raised together or apart, have ________ personalities." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "inborn, genetically based personality differences" }, "bloom": null, "hl_context": "<hl> Psychologists Hans and Sybil Eysenck were personality theorists ( Figure 11.13 ) who focused on temperament , the inborn , genetically based personality differences that you studied earlier in the chapter . <hl> They believed personality is largely governed by biology . The Eysencks ( Eysenck , 1990 , 1992 ; Eysenck & Eysenck , 1963 ) viewed people as having two specific personality dimensions : extroversion / introversion and neuroticism / stability .", "hl_sentences": "Psychologists Hans and Sybil Eysenck were personality theorists ( Figure 11.13 ) who focused on temperament , the inborn , genetically based personality differences that you studied earlier in the chapter .", "question": { "cloze_format": "Temperament refers to ________.", "normal_format": "What does temperament refer to?", "question_choices": [ "inborn, genetically based personality differences", "characteristic ways of behaving", "conscientiousness, agreeableness, neuroticism, openness, and extroversion", "degree of introversion-extroversion" ], "question_id": "fs-idp143462352", "question_text": "Temperament refers to ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "anxious" }, "bloom": null, "hl_context": "According to their theory , people high on the trait of extroversion are sociable and outgoing , and readily connect with others , whereas people high on the trait of introversion have a higher need to be alone , engage in solitary behaviors , and limit their interactions with others . <hl> In the neuroticism / stability dimension , people high on neuroticism tend to be anxious ; they tend to have an overactive sympathetic nervous system and , even with low stress , their bodies and emotional state tend to go into a flight-or-fight reaction . <hl> In contrast , people high on stability tend to need more stimulation to activate their flight-or-fight reaction and are considered more emotionally stable . Based on these two dimensions , the Eysencks ’ theory divides people into four quadrants . 
These quadrants are sometimes compared with the four temperaments described by the Greeks : melancholic , choleric , phlegmatic , and sanguine ( Figure 11.14 ) .", "hl_sentences": "In the neuroticism / stability dimension , people high on neuroticism tend to be anxious ; they tend to have an overactive sympathetic nervous system and , even with low stress , their bodies and emotional state tend to go into a flight-or-fight reaction .", "question": { "cloze_format": "According to the Eysencks’ theory, people who score high on neuroticism tend to be ________.", "normal_format": "According to the Eysencks’ theory, people who score high on neuroticism tend to be what?", "question_choices": [ "calm", "stable", "outgoing", "anxious" ], "question_id": "fs-idp99350112", "question_text": "According to the Eysencks’ theory, people who score high on neuroticism tend to be ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "individualist" }, "bloom": null, "hl_context": "Individualist cultures and collectivist cultures place emphasis on different basic values . <hl> People who live in individualist cultures tend to believe that independence , competition , and personal achievement are important . <hl> <hl> Individuals in Western nations such as the United States , England , and Australia score high on individualism ( Oyserman , Coon , & Kemmelmier , 2002 ) . <hl> People who live in collectivist cultures value social harmony , respectfulness , and group needs over individual needs . Individuals who live in countries in Asia , Africa , and South America score high on collectivism ( Hofstede , 2001 ; Triandis , 1995 ) . These values influence personality . For example , Yang ( 2006 ) found that people in individualist cultures displayed more personally oriented personality traits , whereas people in collectivist cultures displayed more socially oriented personality traits .", "hl_sentences": "People who live in individualist cultures tend to believe that independence , competition , and personal achievement are important . Individuals in Western nations such as the United States , England , and Australia score high on individualism ( Oyserman , Coon , & Kemmelmier , 2002 ) .", "question": { "cloze_format": "The United States is considered a ________ culture.", "normal_format": "Which type of culture is the United States considered?", "question_choices": [ "collectivistic", "individualist", "traditional", "nontraditional" ], "question_id": "fs-idm898704", "question_text": "The United States is considered a ________ culture." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "selective migration" }, "bloom": null, "hl_context": "One explanation for the regional differences is selective migration ( Rentfrow et al . , 2013 ) . <hl> Selective migration is the concept that people choose to move to places that are compatible with their personalities and needs . <hl> For example , a person high on the agreeable scale would likely want to live near family and friends , and would choose to settle or remain in such an area . 
In contrast , someone high on openness would prefer to settle in a place that is recognized as diverse and innovative ( such as California ) .", "hl_sentences": "Selective migration is the concept that people choose to move to places that are compatible with their personalities and needs .", "question": { "cloze_format": "The concept that people choose to move to places that are compatible with their personalities and needs is known as ________.", "normal_format": "What is the concept that people choose to move to places compatible with their personalities and needs?", "question_choices": [ "selective migration", "personal oriented personality", "socially oriented personality", "individualism" ], "question_id": "fs-idp5072432", "question_text": "The concept that people choose to move to places that are compatible with their personalities and needs is known as ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "Minnesota Multiphasic Personality Inventory (MMPI)" }, "bloom": null, "hl_context": "<hl> A third projective test is the Rotter Incomplete Sentence Blank ( RISB ) developed by Julian Rotter in 1950 ( recall his theory of locus of control , covered earlier in this chapter ) . <hl> There are three forms of this test for use with different age groups : the school form , the college form , and the adult form . The tests include 40 incomplete sentences that people are asked to complete as quickly as possible ( Figure 11.19 ) . The average time for completing the test is approximately 20 minutes , as responses are only 1 – 2 words in length . This test is similar to a word association test , and like other types of projective tests , it is presumed that responses will reveal desires , fears , and struggles . The RISB is used in screening college students for adjustment problems and in career counseling ( Holaday , Smith , & Sherry , 2010 ; Rotter & Rafferty 1950 ) . For many decades , these traditional projective tests have been used in cross-cultural personality assessments . However , it was found that test bias limited their usefulness ( Hoy-Watkins & Jenkins-Moore , 2008 ) . It is difficult to assess the personalities and lifestyles of members of widely divergent ethnic / cultural groups using personality instruments based on data from a single culture or race ( Hoy-Watkins & Jenkins-Moore , 2008 ) . For example , when the TAT was used with African-American test takers , the result was often shorter story length and low levels of cultural identification ( Duzant , 2005 ) . Therefore , it was vital to develop other personality assessments that explored factors such as race , language , and level of acculturation ( Hoy-Watkins & Jenkins-Moore , 2008 ) . To address this need , Robert Williams developed the first culturally specific projective test designed to reflect the everyday life experiences of African Americans ( Hoy-Watkins & Jenkins-Moore , 2008 ) . The updated version of the instrument is the Contemporized-Themes Concerning Blacks Test ( C-TCB ) ( Williams , 1972 ) . The C-TCB contains 20 color images that show scenes of African-American lifestyles . When the C-TCB was compared with the TAT for African Americans , it was found that use of the C-TCB led to increased story length , higher degrees of positive feelings , and stronger identification with the C-TCB ( Hoy , 1997 ; Hoy-Watkins & Jenkins-Moore , 2008 ) . 
The TEMAS Multicultural Thematic Apperception Test is another tool designed to be culturally relevant to minority groups , especially Hispanic youths . TEMAS — standing for “ Tell Me a Story ” but also a play on the Spanish word temas ( themes ) — uses images and storytelling cues that relate to minority culture ( Constantino , 1982 ) . <hl> A second projective test is the Thematic Apperception Test ( TAT ) , created in the 1930s by Henry Murray , an American psychologist , and a psychoanalyst named Christiana Morgan . <hl> A person taking the TAT is shown 8 – 12 ambiguous pictures and is asked to tell a story about each picture . The stories give insight into their social world , revealing hopes , fears , interests , and goals . The storytelling format helps to lower a person ’ s resistance to divulging unconscious personal details ( Cramer , 2004 ) . The TAT has been used in clinical settings to evaluate psychological disorders ; more recently , it has been used in counseling settings to help clients gain a better understanding of themselves and achieve personal growth . Standardization of test administration is virtually nonexistent among clinicians , and the test tends to be modest to low on validity and reliability ( Aronow , Weiss , & Rezinkoff , 2001 ; Lilienfeld , Wood , & Garb , 2000 ) . Despite these shortcomings , the TAT has been one of the most widely used projective tests . <hl> The Rorschach Inkblot Test was developed in 1921 by a Swiss psychologist named Hermann Rorschach ( pronounced “ ROAR-shock ” ) . <hl> It is a series of symmetrical inkblot cards that are presented to a client by a psychologist . Upon presentation of each card , the psychologist asks the client , “ What might this be ? ” What the test-taker sees reveals unconscious feelings and struggles ( Piotrowski , 1987 ; Weiner , 2003 ) . The Rorschach has been standardized using the Exner system and is effective in measuring depression , psychosis , and anxiety . Another method for assessment of personality is projective testing . This kind of test relies on one of the defense mechanisms proposed by Freud — projection — as a way to assess unconscious processes . During this type of testing , a series of ambiguous cards is shown to the person being tested , who then is encouraged to project his feelings , impulses , and desires onto the cards — by telling a story , interpreting an image , or completing a sentence . Many projective tests have undergone standardization procedures ( for example , Exner , 2002 ) and can be used to assess whether someone has unusual thoughts or a high level of anxiety , or is likely to become volatile . <hl> Some examples of projective tests are the Rorschach Inkblot Test , the Thematic Apperception Test ( TAT ) , the Contemporized-Themes Concerning Blacks test , the TEMAS ( Tell-Me-A-Story ) , and the Rotter Incomplete Sentence Blank ( RISB ) . <hl> <hl> One of the most widely used personality inventories is the Minnesota Multiphasic Personality Inventory ( MMPI ) , first published in 1943 , with 504 true / false questions , and updated to the MMPI - 2 in 1989 , with 567 questions . <hl> The original MMPI was based on a small , limited sample , composed mostly of Minnesota farmers and psychiatric patients ; the revised inventory was based on a more representative , national sample to allow for better standardization . The MMPI - 2 takes 1 – 2 hours to complete . 
Responses are scored to produce a clinical profile composed of 10 scales : hypochondriasis , depression , hysteria , psychopathic deviance ( social deviance ) , masculinity versus femininity , paranoia , psychasthenia ( obsessive / compulsive qualities ) , schizophrenia , hypomania , and social introversion . There is also a scale to ascertain risk factors for alcohol abuse . In 2008 , the test was again revised , using more advanced methods , to the MMPI - 2 - RF . This version takes about one-half the time to complete and has only 338 questions ( Figure 11.18 ) . Despite the new test ’ s advantages , the MMPI - 2 is more established and is still more widely used . Typically , the tests are administered by computer . Although the MMPI was originally developed to assist in the clinical diagnosis of psychological disorders , it is now also used for occupational screening , such as in law enforcement , and in college , career , and marital counseling ( Ben-Porath & Tellegen , 2008 ) . In addition to clinical scales , the tests also have validity and reliability scales . ( Recall the concepts of reliability and validity from your study of psychological research . ) One of the validity scales , the Lie Scale ( or “ L ” Scale ) , consists of 15 items and is used to ascertain whether the respondent is “ faking good ” ( underreporting psychological problems to appear healthier ) . For example , if someone responds “ yes ” to a number of unrealistically positive items such as “ I have never told a lie , ” they may be trying to “ fake good ” or appear better than they actually are .", "hl_sentences": "A third projective test is the Rotter Incomplete Sentence Blank ( RISB ) developed by Julian Rotter in 1950 ( recall his theory of locus of control , covered earlier in this chapter ) . A second projective test is the Thematic Apperception Test ( TAT ) , created in the 1930s by Henry Murray , an American psychologist , and a psychoanalyst named Christiana Morgan . The Rorschach Inkblot Test was developed in 1921 by a Swiss psychologist named Hermann Rorschach ( pronounced “ ROAR-shock ” ) . Some examples of projective tests are the Rorschach Inkblot Test , the Thematic Apperception Test ( TAT ) , the Contemporized-Themes Concerning Blacks test , the TEMAS ( Tell-Me-A-Story ) , and the Rotter Incomplete Sentence Blank ( RISB ) . One of the most widely used personality inventories is the Minnesota Multiphasic Personality Inventory ( MMPI ) , first published in 1943 , with 504 true / false questions , and updated to the MMPI - 2 in 1989 , with 567 questions .", "question": { "cloze_format": "___ is not a projective test.", "normal_format": "Which of the following is NOT a projective test?", "question_choices": [ "Minnesota Multiphasic Personality Inventory (MMPI)", "Rorschach Inkblot Test", "Thematic Apperception Test (TAT)", "Rotter Incomplete Sentence Blank (RISB)" ], "question_id": "fs-idm398688", "question_text": "Which of the following is NOT a projective test?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "projective test" }, "bloom": null, "hl_context": "<hl> Another method for assessment of personality is projective testing . <hl> This kind of test relies on one of the defense mechanisms proposed by Freud — projection — as a way to assess unconscious processes . 
<hl> During this type of testing , a series of ambiguous cards is shown to the person being tested , who then is encouraged to project his feelings , impulses , and desires onto the cards — by telling a story , interpreting an image , or completing a sentence . <hl> Many projective tests have undergone standardization procedures ( for example , Exner , 2002 ) and can be used to access whether someone has unusual thoughts or a high level of anxiety , or is likely to become volatile . Some examples of projective tests are the Rorschach Inkblot Test , the Thematic Apperception Test ( TAT ) , the Contemporized-Themes Concerning Blacks test , the TEMAS ( Tell-Me-A-Story ) , and the Rotter Incomplete Sentence Blank ( RISB ) .", "hl_sentences": "Another method for assessment of personality is projective testing . During this type of testing , a series of ambiguous cards is shown to the person being tested , who then is encouraged to project his feelings , impulses , and desires onto the cards — by telling a story , interpreting an image , or completing a sentence .", "question": { "cloze_format": "A personality assessment in which a person responds to ambiguous stimuli, revealing unconscious feelings, impulses, and desires ________.", "normal_format": "A personality assessment in which a person responds to ambiguous stimuli, revealing unconscious feelings, impulses, and desires which of the following?", "question_choices": [ "self-report inventory", "projective test", "Minnesota Multiphasic Personality Inventory (MMPI)", "Myers-Briggs Type Indicator (MBTI)" ], "question_id": "fs-idm436576", "question_text": "A personality assessment in which a person responds to ambiguous stimuli, revealing unconscious feelings, impulses, and desires ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "Minnesota Multiphasic Personality Inventory (MMPI)" }, "bloom": null, "hl_context": "<hl> One of the most widely used personality inventories is the Minnesota Multiphasic Personality Inventory ( MMPI ) , first published in 1943 , with 504 true / false questions , and updated to the MMPI - 2 in 1989 , with 567 questions . <hl> The original MMPI was based on a small , limited sample , composed mostly of Minnesota farmers and psychiatric patients ; the revised inventory was based on a more representative , national sample to allow for better standardization . The MMPI - 2 takes 1 – 2 hours to complete . Responses are scored to produce a clinical profile composed of 10 scales : hypochondriasis , depression , hysteria , psychopathic deviance ( social deviance ) , masculinity versus femininity , paranoia , psychasthenia ( obsessive / compulsive qualities ) , schizophrenia , hypomania , and social introversion . There is also a scale to ascertain risk factors for alcohol abuse . In 2008 , the test was again revised , using more advanced methods , to the MMPI - 2 - RF . This version takes about one-half the time to complete and has only 338 questions ( Figure 11.18 ) . Despite the new test ’ s advantages , the MMPI - 2 is more established and is still more widely used . Typically , the tests are administered by computer . Although the MMPI was originally developed to assist in the clinical diagnosis of psychological disorders , it is now also used for occupational screening , such as in law enforcement , and in college , career , and marital counseling ( Ben-Porath & Tellegen , 2008 ) . In addition to clinical scales , the tests also have validity and reliability scales . 
( Recall the concepts of reliability and validity from your study of psychological research . ) One of the validity scales , the Lie Scale ( or “ L ” Scale ) , consists of 15 items and is used to ascertain whether the respondent is “ faking good ” ( underreporting psychological problems to appear healthier ) . For example , if someone responds “ yes ” to a number of unrealistically positive items such as “ I have never told a lie , ” they may be trying to “ fake good ” or appear better than they actually are .", "hl_sentences": "One of the most widely used personality inventories is the Minnesota Multiphasic Personality Inventory ( MMPI ) , first published in 1943 , with 504 true / false questions , and updated to the MMPI - 2 in 1989 , with 567 questions .", "question": { "cloze_format": "___ employs a series of true/false questions.", "normal_format": "Which personality assessment employs a series of true/false questions?", "question_choices": [ "Minnesota Multiphasic Personality Inventory (MMPI)", "Thematic Apperception Test (TAT)", "Rotter Incomplete Sentence Blank (RISB)", "Myers-Briggs Type Indicator (MBTI)" ], "question_id": "fs-idm1236192", "question_text": "Which personality assessment employs a series of true/false questions?" }, "references_are_paraphrase": 0 } ]
11
11.1 What Is Personality? Learning Objectives By the end of this section, you will be able to: Define personality Describe early theories about personality development Personality refers to the long-standing traits and patterns that propel individuals to consistently think, feel, and behave in specific ways. Our personality is what makes us unique individuals. Each person has an idiosyncratic pattern of enduring, long-term characteristics and a manner in which he or she interacts with other individuals and the world around them. Our personalities are thought to be long term, stable, and not easily changed. The word personality comes from the Latin word persona . In the ancient world, a persona was a mask worn by an actor. While we tend to think of a mask as being worn to conceal one’s identity, the theatrical mask was originally used to either represent or project a specific personality trait of a character ( Figure 11.2 ). Historical Perspectives The concept of personality has been studied for at least 2,000 years, beginning with Hippocrates in 370 BCE (Fazeli, 2012). Hippocrates theorized that personality traits and human behaviors are based on four separate temperaments associated with four fluids (“humors”) of the body: choleric temperament (yellow bile from the liver), melancholic temperament (black bile from the kidneys), sanguine temperament (red blood from the heart), and phlegmatic temperament (white phlegm from the lungs) (Clark & Watson, 2008; Eysenck & Eysenck, 1985; Lecci & Magnavita, 2013; Noga, 2007). Centuries later, the influential Greek physician and philosopher Galen built on Hippocrates’s theory, suggesting that both diseases and personality differences could be explained by imbalances in the humors and that each person exhibits one of the four temperaments. For example, the choleric person is passionate, ambitious, and bold; the melancholic person is reserved, anxious, and unhappy; the sanguine person is joyful, eager, and optimistic; and the phlegmatic person is calm, reliable, and thoughtful (Clark & Watson, 2008; Stelmack & Stalikas, 1991). Galen’s theory was prevalent for over 1,000 years and continued to be popular through the Middle Ages. In 1780, Franz Gall, a German physician, proposed that the distances between bumps on the skull reveal a person’s personality traits, character, and mental abilities ( Figure 11.3 ). According to Gall, measuring these distances revealed the sizes of the brain areas underneath, providing information that could be used to determine whether a person was friendly, prideful, murderous, kind, good with languages, and so on. Initially, phrenology was very popular; however, it was soon discredited for lack of empirical support and has long been relegated to the status of pseudoscience (Fancher, 1979). In the centuries after Galen, other researchers contributed to the development of his four primary temperament types, most prominently Immanuel Kant (in the 18th century) and psychologist Wilhelm Wundt (in the 19th century) (Eysenck, 2009; Stelmack & Stalikas, 1991; Wundt, 1874/1886) ( Figure 11.4 ). Kant agreed with Galen that everyone could be sorted into one of the four temperaments and that there was no overlap between the four categories (Eysenck, 2009). He developed a list of traits that could be used to describe the personality of a person from each of the four temperaments. However, Wundt suggested that a better description of personality could be achieved using two major axes: emotional/nonemotional and changeable/unchangeable. 
The first axis separated strong from weak emotions (the melancholic and choleric temperaments from the phlegmatic and sanguine). The second axis divided the changeable temperaments (choleric and sanguine) from the unchangeable ones (melancholic and phlegmatic) (Eysenck, 2009). Sigmund Freud’s psychodynamic perspective of personality was the first comprehensive theory of personality, explaining a wide variety of both normal and abnormal behaviors. According to Freud, unconscious drives influenced by sex and aggression, along with childhood sexuality, are the forces that influence our personality. Freud attracted many followers who modified his ideas to create new theories about personality. These theorists, referred to as neo-Freudians, generally agreed with Freud that childhood experiences matter, but they reduced the emphasis on sex and focused more on the social environment and effects of culture on personality. The perspective of personality proposed by Freud and his followers was the dominant theory of personality for the first half of the 20th century. Other major theories then emerged, including the learning, humanistic, biological, evolutionary, trait, and cultural perspectives. In this chapter, we will explore these various perspectives on personality in depth. Link to Learning View this video for a brief overview of some of the psychological perspectives on personality. 11.2 Freud and the Psychodynamic Perspective Learning Objectives By the end of this section, you will be able to: Describe the assumptions of the psychodynamic perspective on personality development Define and describe the nature and function of the id, ego, and superego Define and describe the defense mechanisms Define and describe the psychosexual stages of personality development Sigmund Freud (1856–1939) is probably the most controversial and misunderstood psychological theorist. When reading Freud’s theories, it is important to remember that he was a medical doctor, not a psychologist. There was no such thing as a degree in psychology at the time that he received his education, which can help us understand some of the controversy over his theories today. However, Freud was the first to systematically study and theorize the workings of the unconscious mind in the manner that we associate with modern psychology. In the early years of his career, Freud worked with Josef Breuer, a Viennese physician. During this time, Freud became intrigued by the story of one of Breuer’s patients, Bertha Pappenheim, who was referred to by the pseudonym Anna O. (Launer, 2005). Anna O. had been caring for her dying father when she began to experience symptoms such as partial paralysis, headaches, blurred vision, amnesia, and hallucinations (Launer, 2005). In Freud’s day, these symptoms were commonly referred to as hysteria. Anna O. turned to Breuer for help. He spent 2 years (1880–1882) treating Anna O. and discovered that allowing her to talk about her experiences seemed to bring some relief of her symptoms. Anna O. called his treatment the “talking cure” (Launer, 2005). Despite the fact that Freud never met Anna O., her story served as the basis for the 1895 book, Studies on Hysteria , which he co-authored with Breuer. Based on Breuer’s description of Anna O.’s treatment, Freud concluded that hysteria was the result of sexual abuse in childhood and that these traumatic experiences had been hidden from consciousness. Breuer disagreed with Freud, which soon ended their work together.
However, Freud continued to work to refine talk therapy and build his theory on personality. Levels of Consciousness To explain the concept of conscious versus unconscious experience, Freud compared the mind to an iceberg ( Figure 11.5 ). He said that only about one-tenth of our mind is conscious , and the rest of our mind is unconscious . Our unconscious refers to that mental activity of which we are unaware and are unable to access (Freud, 1923). According to Freud, unacceptable urges and desires are kept in our unconscious through a process called repression. For example, we sometimes say things that we don’t intend to say by unintentionally substituting another word for the one we meant. You’ve probably heard of a Freudian slip, the term used to describe this. Freud suggested that slips of the tongue are actually sexual or aggressive urges, accidentally slipping out of our unconscious. Speech errors such as this are quite common. Rather than seeing them as a reflection of unconscious desires, linguists today have found that slips of the tongue tend to occur when we are tired, nervous, or not at our optimal level of cognitive functioning (Motley, 2002). According to Freud, our personality develops from a conflict between two forces: our biological aggressive and pleasure-seeking drives versus our internal (socialized) control over these drives. Our personality is the result of our efforts to balance these two competing forces. Freud suggested that we can understand this by imagining three interacting systems within our minds. He called them the id, ego, and superego ( Figure 11.6 ). The unconscious id contains our most primitive drives or urges, and is present from birth. It directs impulses for hunger, thirst, and sex. Freud believed that the id operates on what he called the “pleasure principle,” in which the id seeks immediate gratification. Through social interactions with parents and others in a child’s environment, the ego and superego develop to help control the id. The superego develops as a child interacts with others, learning the social rules for right and wrong. The superego acts as our conscience; it is our moral compass that tells us how we should behave. It strives for perfection and judges our behavior, leading to feelings of pride or—when we fall short of the ideal—feelings of guilt. In contrast to the instinctual id and the rule-based superego, the ego is the rational part of our personality. It’s what Freud considered to be the self, and it is the part of our personality that is seen by others. Its job is to balance the demands of the id and superego in the context of reality; thus, it operates on what Freud called the “reality principle.” The ego helps the id satisfy its desires in a realistic way. The id and superego are in constant conflict, because the id wants instant gratification regardless of the consequences, but the superego tells us that we must behave in socially acceptable ways. Thus, the ego’s job is to find the middle ground. It helps satisfy the id’s desires in a rational way that will not lead us to feelings of guilt. According to Freud, a person who has a strong ego, which can balance the demands of the id and the superego, has a healthy personality. Freud maintained that imbalances in the system can lead to neurosis (a tendency to experience negative emotions), anxiety disorders, or unhealthy behaviors. For example, a person who is dominated by their id might be narcissistic and impulsive.
A person with a dominant superego might be controlled by feelings of guilt and deny themselves even socially acceptable pleasures; conversely, if the superego is weak or absent, a person might become a psychopath. An overly dominant superego might be seen in an over-controlled individual whose rational grasp on reality is so strong that they are unaware of their emotional needs, or, in a neurotic who is overly defensive (overusing ego defense mechanisms). Defense Mechanisms Freud believed that feelings of anxiety result from the ego’s inability to mediate the conflict between the id and superego. When this happens, Freud believed that the ego seeks to restore balance through various protective measures known as defense mechanisms ( Figure 11.7 ). When certain events, feelings, or yearnings cause an individual anxiety, the individual wishes to reduce that anxiety. To do that, the individual’s unconscious mind uses ego defense mechanisms , unconscious protective behaviors that aim to reduce anxiety. The ego, usually conscious, resorts to unconscious strivings to protect the ego from being overwhelmed by anxiety. When we use defense mechanisms, we are unaware that we are using them. Further, they operate in various ways that distort reality. According to Freud, we all use ego defense mechanisms. While everyone uses defense mechanisms, Freud believed that overuse of them may be problematic. For example, let’s say Joe Smith is a high school football player. Deep down, Joe feels sexually attracted to males. His conscious belief is that being gay is immoral and that if he were gay, his family would disown him and he would be ostracized by his peers. Therefore, there is a conflict between his conscious beliefs (being gay is wrong and will result in being ostracized) and his unconscious urges (attraction to males). The idea that he might be gay causes Joe to have feelings of anxiety. How can he decrease his anxiety? Joe may find himself acting very “macho,” making gay jokes, and picking on a school peer who is gay. This way, Joe’s unconscious impulses are further submerged. There are several different types of defense mechanisms. For instance, in repression, anxiety-causing memories from consciousness are blocked. As an analogy, let’s say your car is making a strange noise, but because you do not have the money to get it fixed, you just turn up the radio so that you no longer hear the strange noise. Eventually you forget about it. Similarly, in the human psyche, if a memory is too overwhelming to deal with, it might be repressed and thus removed from conscious awareness (Freud, 1920). This repressed memory might cause symptoms in other areas. Another defense mechanism is reaction formation , in which someone expresses feelings, thoughts, and behaviors opposite to their inclinations. In the above example, Joe made fun of a homosexual peer while himself being attracted to males. In regression , an individual acts much younger than their age. For example, a four-year-old child who resents the arrival of a newborn sibling may act like a baby and revert to drinking out of a bottle. In projection , a person refuses to acknowledge her own unconscious feelings and instead sees those feelings in someone else. Other defense mechanisms include rationalization , displacement , and sublimation . Link to Learning Watch this video for a review of Freud’s defense mechanisms. 
Stages of Psychosexual Development Freud believed that personality develops during early childhood: Childhood experiences shape our personalities as well as our behavior as adults. He asserted that we develop via a series of stages during childhood. Each of us must pass through these childhood stages, and if we do not have the proper nurturing and parenting during a stage, we will be stuck, or fixated, in that stage, even as adults. In each psychosexual stage of development , the child’s pleasure-seeking urges, coming from the id, are focused on a different area of the body, called an erogenous zone. The stages are oral, anal, phallic, latency, and genital ( Table 11.1 ). Freud’s psychosexual development theory is quite controversial. To understand the origins of the theory, it is helpful to be familiar with the political, social, and cultural influences of Freud’s day in Vienna at the turn of the 20th century. During this era, a climate of sexual repression, combined with limited understanding and education surrounding human sexuality, heavily influenced Freud’s perspective. Given that sex was a taboo topic, Freud assumed that negative emotional states (neuroses) stemmed from suppression of unconscious sexual and aggressive urges. For Freud, his own recollections and interpretations of patients’ experiences and dreams were sufficient proof that psychosexual stages were universal events in early childhood.

Stage | Age (years) | Erogenous Zone | Major Conflict | Adult Fixation Example
Oral | 0–1 | Mouth | Weaning off breast or bottle | Smoking, overeating
Anal | 1–3 | Anus | Toilet training | Neatness, messiness
Phallic | 3–6 | Genitals | Oedipus/Electra complex | Vanity, overambition
Latency | 6–12 | None | None | None
Genital | 12+ | Genitals | None | None
Table 11.1 Freud’s Stages of Psychosexual Development

Oral Stage In the oral stage (birth to 1 year), pleasure is focused on the mouth. Eating and the pleasure derived from sucking (nipples, pacifiers, and thumbs) play a large part in a baby’s first year of life. At around 1 year of age, babies are weaned from the bottle or breast, and this process can create conflict if not handled properly by caregivers. According to Freud, an adult who smokes, drinks, overeats, or bites her nails is fixated in the oral stage of her psychosexual development; she may have been weaned too early or too late, resulting in these fixation tendencies, all of which seek to ease anxiety. Anal Stage After passing through the oral stage, children enter what Freud termed the anal stage (1–3 years). In this stage, children experience pleasure in their bowel and bladder movements, so it makes sense that the conflict in this stage is over toilet training. Freud suggested that success at the anal stage depended on how parents handled toilet training. Parents who offer praise and rewards encourage positive results and can help children feel competent. Parents who are harsh in toilet training can cause a child to become fixated at the anal stage, leading to the development of an anal-retentive personality. The anal-retentive personality is stingy and stubborn, has a compulsive need for order and neatness, and might be considered a perfectionist. If parents are too lenient in toilet training, the child might also become fixated and display an anal-expulsive personality. The anal-expulsive personality is messy, careless, disorganized, and prone to emotional outbursts.
Phallic Stage Freud’s third stage of psychosexual development is the phallic stage (3–6 years), corresponding to the age when children become aware of their bodies and recognize the differences between boys and girls. The erogenous zone in this stage is the genitals. Conflict arises when the child feels a desire for the opposite-sex parent, and jealousy and hatred toward the same-sex parent. For boys, this is called the Oedipus complex, involving a boy's desire for his mother and his urge to replace his father who is seen as a rival for the mother’s attention. At the same time, the boy is afraid his father will punish him for his feelings, so he experiences castration anxiety . The Oedipus complex is successfully resolved when the boy begins to identify with his father as an indirect way to have the mother. Failure to resolve the Oedipus complex may result in fixation and development of a personality that might be described as vain and overly ambitious. Girls experience a comparable conflict in the phallic stage—the Electra complex. The Electra complex, while often attributed to Freud, was actually proposed by Freud’s protégé, Carl Jung (Jung & Kerenyi, 1963). A girl desires the attention of her father and wishes to take her mother’s place. Jung also said that girls are angry with the mother for not providing them with a penis—hence the term penis envy . While Freud initially embraced the Electra complex as a parallel to the Oedipus complex, he later rejected it, yet it remains as a cornerstone of Freudian theory, thanks in part to academics in the field (Freud, 1931/1968; Scott, 2005). Latency Period Following the phallic stage of psychosexual development is a period known as the latency period (6 years to puberty). This period is not considered a stage, because sexual feelings are dormant as children focus on other pursuits, such as school, friendships, hobbies, and sports. Children generally engage in activities with peers of the same sex, which serves to consolidate a child’s gender-role identity. Genital Stage The final stage is the genital stage (from puberty on). In this stage, there is a sexual reawakening as the incestuous urges resurface. The young person redirects these urges to other, more socially acceptable partners (who often resemble the other-sex parent). People in this stage have mature sexual interests, which for Freud meant a strong desire for the opposite sex. Individuals who successfully completed the previous stages, reaching the genital stage with no fixations, are said to be well-balanced, healthy adults. While most of Freud’s ideas have not found support in modern research, we cannot discount the contributions that Freud has made to the field of psychology. It was Freud who pointed out that a large part of our mental life is influenced by the experiences of early childhood and takes place outside of our conscious awareness; his theories paved the way for others. 11.3 Neo-Freudians: Adler, Erikson, Jung, and Horney Learning Objectives By the end of this section, you will be able to: Discuss the concept of the inferiority complex Discuss the core differences between Erikson’s and Freud’s views on personality Discuss Jung’s ideas of the collective unconscious and archetypes Discuss the work of Karen Horney, including her revision of Freud’s “penis envy” Freud attracted many followers who modified his ideas to create new theories about personality. 
These theorists, referred to as neo-Freudians, generally agreed with Freud that childhood experiences matter, but deemphasized sex, focusing more on the social environment and effects of culture on personality. Four notable neo-Freudians include Alfred Adler, Erik Erikson, Carl Jung (pronounced “Yoong”), and Karen Horney (pronounced “HORN-eye”). Alfred Adler Alfred Adler , a colleague of Freud’s and the first president of the Vienna Psychoanalytical Society (Freud’s inner circle of colleagues), was the first major theorist to break away from Freud ( Figure 11.8 ). He subsequently founded a school of psychology called individual psychology , which focuses on our drive to compensate for feelings of inferiority. Adler (1937, 1956) proposed the concept of the inferiority complex . An inferiority complex refers to a person’s feelings that they lack worth and don’t measure up to the standards of others or of society. Adler’s ideas about inferiority represent a major difference between his thinking and Freud’s. Freud believed that we are motivated by sexual and aggressive urges, but Adler (1930, 1961) believed that feelings of inferiority in childhood are what drive people to attempt to gain superiority and that this striving is the force behind all of our thoughts, emotions, and behaviors. Adler also believed in the importance of social connections, seeing childhood development emerging through social development rather than the sexual stages Freud outlined. Adler noted the inter-relatedness of humanity and the need to work together for the betterment of all. He said, “The happiness of mankind lies in working together, in living as if each individual had set himself the task of contributing to the common welfare” (Adler, 1964, p. 255) with the main goal of psychology being “to recognize the equal rights and equality of others” (Adler, 1961, p. 691). With these ideas, Adler identified three fundamental social tasks that all of us must experience: occupational tasks (careers), societal tasks (friendship), and love tasks (finding an intimate partner for a long-term relationship). Rather than focus on sexual or aggressive motives for behavior as Freud did, Adler focused on social motives. He also emphasized conscious rather than unconscious motivation, since he believed that the three fundamental social tasks are explicitly known and pursued. That is not to say that Adler did not also believe in unconscious processes—he did—but he felt that conscious processes were more important. One of Adler’s major contributions to personality psychology was the idea that our birth order shapes our personality. He proposed that older siblings, who start out as the focus of their parents’ attention but must share that attention once a new child joins the family, compensate by becoming overachievers. The youngest children, according to Adler, may be spoiled, leaving the middle child with the opportunity to minimize the negative dynamics of the youngest and oldest children. Despite popular attention, research has not conclusively confirmed Adler’s hypotheses about birth order. Link to Learning One of Adler’s major contributions to personality psychology was the idea that our birth order shapes our personality. Follow this link to view a summary of birth order theory. Erik Erikson As an art school dropout with an uncertain future, young Erik Erikson met Freud’s daughter, Anna Freud, while he was tutoring the children of an American couple undergoing psychoanalysis in Vienna. 
It was Anna Freud who encouraged Erikson to study psychoanalysis. Erikson received his diploma from the Vienna Psychoanalytic Institute in 1933, and as Nazism spread across Europe, he fled the country and immigrated to the United States that same year. As you learned when you studied lifespan development, Erikson later proposed a psychosocial theory of development, suggesting that an individual’s personality develops throughout the lifespan—a departure from Freud’s view that personality is fixed in early life. In his theory, Erikson emphasized the social relationships that are important at each stage of personality development, in contrast to Freud’s emphasis on sex. Erikson identified eight stages, each of which represents a conflict or developmental task ( Table 11.2 ). The development of a healthy personality and a sense of competence depend on the successful completion of each task.

Stage | Age (years) | Developmental Task | Description
1 | 0–1 | Trust vs. mistrust | Trust (or mistrust) that basic needs, such as nourishment and affection, will be met
2 | 1–3 | Autonomy vs. shame/doubt | Sense of independence in many tasks develops
3 | 3–6 | Initiative vs. guilt | Take initiative on some activities, may develop guilt when success not met or boundaries overstepped
4 | 7–11 | Industry vs. inferiority | Develop self-confidence in abilities when competent or sense of inferiority when not
5 | 12–18 | Identity vs. confusion | Experiment with and develop identity and roles
6 | 19–29 | Intimacy vs. isolation | Establish intimacy and relationships with others
7 | 30–64 | Generativity vs. stagnation | Contribute to society and be part of a family
8 | 65– | Integrity vs. despair | Assess and make sense of life and meaning of contributions
Table 11.2 Erikson’s Psychosocial Stages of Development

Carl Jung Carl Jung ( Figure 11.9 ) was a Swiss psychiatrist and protégé of Freud, who later split off from Freud and developed his own theory, which he called analytical psychology . The focus of analytical psychology is on working to balance opposing forces of conscious and unconscious thought, and experience within one’s personality. According to Jung, this work is a continuous learning process—mainly occurring in the second half of life—of becoming aware of unconscious elements and integrating them into consciousness. Jung’s split from Freud was based on two major disagreements. First, Jung, like Adler and Erikson, did not accept that sexual drive was the primary motivator in a person’s mental life. Second, although Jung agreed with Freud’s concept of a personal unconscious, he thought it to be incomplete. In addition to the personal unconscious, Jung focused on the collective unconscious. The collective unconscious is a universal version of the personal unconscious, holding mental patterns, or memory traces, which are common to all of us (Jung, 1928). These ancestral memories, which Jung called archetypes , are represented by universal themes in various cultures, as expressed through literature, art, and dreams (Jung). Jung said that these themes reflect common experiences of people the world over, such as facing death, becoming independent, and striving for mastery. Jung (1964) believed that through biology, each person is handed down the same themes and that the same types of symbols—such as the hero, the maiden, the sage, and the trickster—are present in the folklore and fairy tales of every culture.
In Jung’s view, the task of integrating these unconscious archetypal aspects of the self is part of the self-realization process in the second half of life. With this orientation toward self-realization, Jung parted ways with Freud’s belief that personality is determined solely by past events and anticipated the humanistic movement with its emphasis on self-actualization and orientation toward the future. Jung also proposed two attitudes or approaches toward life: extroversion and introversion (Jung, 1923) ( Table 11.3 ). These ideas are considered Jung’s most important contributions to the field of personality psychology, as almost all models of personality now include these concepts. If you are an extrovert, then you are a person who is energized by being outgoing and socially oriented: You derive your energy from being around others. If you are an introvert, then you are a person who may be quiet and reserved, or you may be social, but your energy is derived from your inner psychic activity. Jung believed a balance between extroversion and introversion best served the goal of self-realization.

Introvert | Extrovert
Energized by being alone | Energized by being with others
Avoids attention | Seeks attention
Speaks slowly and softly | Speaks quickly and loudly
Thinks before speaking | Thinks out loud
Stays on one topic | Jumps from topic to topic
Prefers written communication | Prefers verbal communication
Pays attention easily | Distractible
Cautious | Acts first, thinks later
Table 11.3 Introverts and Extroverts

Another concept proposed by Jung was the persona, which he referred to as a mask that we adopt. According to Jung, we consciously create this persona; however, it is derived from both our conscious experiences and our collective unconscious. What is the purpose of the persona? Jung believed that it is a compromise between who we really are (our true self) and what society expects us to be. We hide those parts of ourselves that are not aligned with society’s expectations. Link to Learning Jung’s view of extroverted and introverted types serves as a basis of the Myers-Briggs Type Indicator (MBTI). This questionnaire describes a person’s degree of introversion versus extroversion, thinking versus feeling, intuition versus sensation, and judging versus perceiving. This site provides a modified questionnaire based on the MBTI. Connect the Concepts Are Archetypes Genetically Based? Jung proposed that human responses to archetypes are similar to instinctual responses in animals. One criticism of Jung is that there is no evidence that archetypes are biologically based or similar to animal instincts (Roesler, 2012). Jung formulated his ideas about 100 years ago, and great advances have been made in the field of genetics since that time. We’ve found that human babies are born with certain capacities, including the ability to acquire language. However, we’ve also found that symbolic information (such as archetypes) is not encoded on the genome and that babies cannot decode symbolism, refuting the idea of a biological basis to archetypes. Rather than being seen as purely biological, more recent research suggests that archetypes emerge directly from our experiences and are reflections of linguistic or cultural characteristics (Young-Eisendrath, 1995). Today, most Jungian scholars believe that the collective unconscious and archetypes are based on both innate and environmental influences, with the differences being in the role and degree of each (Sotirova-Kohli et al., 2013).
Karen Horney Karen Horney was one of the first women trained as a Freudian psychoanalyst. During the Great Depression, Horney moved from Germany to the United States, and subsequently moved away from Freud’s teachings. Like Jung, Horney believed that each individual has the potential for self-realization and that the goal of psychoanalysis should be moving toward a healthy self rather than exploring early childhood patterns of dysfunction. Horney also disagreed with the Freudian idea that girls have penis envy and are jealous of male biological features. According to Horney, any jealousy is most likely culturally based, due to the greater privileges that males often have, meaning that the differences between men’s and women’s personalities are culturally based, not biologically based. She further suggested that men have womb envy, because they cannot give birth. Horney’s theories focused on the role of unconscious anxiety. She suggested that normal growth can be blocked by basic anxiety stemming from needs not being met, such as childhood experiences of loneliness and/or isolation. How do children learn to handle this anxiety? Horney suggested three styles of coping ( Table 11.4 ). The first coping style, moving toward people , relies on affiliation and dependence. These children become dependent on their parents and other caregivers in an effort to receive attention and affection, which provides relief from anxiety (Burger, 2008). When these children grow up, they tend to use this same coping strategy to deal with relationships, expressing an intense need for love and acceptance (Burger, 2008). The second coping style, moving against people , relies on aggression and assertiveness. Children with this coping style find that fighting is the best way to deal with an unhappy home situation, and they deal with their feelings of insecurity by bullying other children (Burger, 2008). As adults, people with this coping style tend to lash out with hurtful comments and exploit others (Burger, 2008). The third coping style, moving away from people , centers on detachment and isolation. These children handle their anxiety by withdrawing from the world. They need privacy and tend to be self-sufficient. When these children are adults, they continue to avoid such things as love and friendship, and they also tend to gravitate toward careers that require little interaction with others (Burger, 2008).

Coping Style | Description | Example
Moving toward people | Affiliation and dependence | Child seeking positive attention and affection from parent; adult needing love
Moving against people | Aggression and manipulation | Child fighting or bullying other children; adult who is abrasive and verbally hurtful, or who exploits others
Moving away from people | Detachment and isolation | Child withdrawn from the world and isolated; adult loner
Table 11.4 Horney’s Coping Styles

Horney believed these three styles are ways in which people typically cope with day-to-day problems; however, the three coping styles can become neurotic strategies if they are used rigidly and compulsively, leading a person to become alienated from others.
11.4 Learning Approaches Learning Objectives By the end of this section, you will be able to: Describe the behaviorist perspective on personality Describe the cognitive perspective on personality Describe the social cognitive perspective on personality In contrast to the psychodynamic approaches of Freud and the neo-Freudians, which relate personality to inner (and hidden) processes, the learning approaches focus only on observable behavior. This illustrates one significant advantage of the learning approaches over psychodynamics: Because learning approaches involve observable, measurable phenomena, they can be scientifically tested. The Behavioral Perspective Behaviorists do not believe in biological determinism: They do not see personality traits as inborn. Instead, they view personality as significantly shaped by the reinforcements and consequences outside of the organism. In other words, people behave in a consistent manner based on prior learning. B. F. Skinner , a strict behaviorist, believed that environment was solely responsible for all behavior, including the enduring, consistent behavior patterns studied by personality theorists. As you may recall from your study on the psychology of learning, Skinner proposed that we demonstrate consistent behavior patterns because we have developed certain response tendencies (Skinner, 1953). In other words, we learn to behave in particular ways. We increase the behaviors that lead to positive consequences, and we decrease the behaviors that lead to negative consequences. Skinner disagreed with Freud’s idea that personality is fixed in childhood. He argued that personality develops over our entire life, not only in the first few years. Our responses can change as we come across new situations; therefore, we can expect more variability over time in personality than Freud would anticipate. For example, consider a young woman, Greta, a risk taker. She drives fast and participates in dangerous sports such as hang gliding and kiteboarding. But after she gets married and has children, the system of reinforcements and punishments in her environment changes. Speeding and extreme sports are no longer reinforced, so she no longer engages in those behaviors. In fact, Greta now describes herself as a cautious person. The Social-Cognitive Perspective Albert Bandura agreed with Skinner that personality develops through learning . He disagreed, however, with Skinner’s strict behaviorist approach to personality development, because he felt that thinking and reasoning are important components of learning. He presented a social-cognitive theory of personality that emphasizes both learning and cognition as sources of individual differences in personality. In social-cognitive theory, the concepts of reciprocal determinism, observational learning, and self-efficacy all play a part in personality development. Reciprocal Determinism In contrast to Skinner’s idea that the environment alone determines behavior, Bandura (1990) proposed the concept of reciprocal determinism , in which cognitive processes, behavior, and context all interact, each factor influencing and being influenced by the others simultaneously ( Figure 11.10 ). Cognitive processes refer to all characteristics previously learned, including beliefs, expectations, and personality characteristics. Behavior refers to anything that we do that may be rewarded or punished. Finally, the context in which the behavior occurs refers to the environment or situation, which includes rewarding/punishing stimuli. 
Consider, for example, that you’re at a festival and one of the attractions is bungee jumping from a bridge. Do you do it? In this example, the behavior is bungee jumping. Cognitive factors that might influence this behavior include your beliefs and values, and your past experiences with similar behaviors. Finally, context refers to the reward structure for the behavior. According to reciprocal determinism, all of these factors are in play. Observational Learning Bandura’s key contribution to learning theory was the idea that much learning is vicarious. We learn by observing someone else’s behavior and its consequences, which Bandura called observational learning. He felt that this type of learning also plays a part in the development of our personality. Just as we learn individual behaviors, we learn new behavior patterns when we see them performed by other people or models. Drawing on the behaviorists’ ideas about reinforcement, Bandura suggested that whether we choose to imitate a model’s behavior depends on whether we see the model reinforced or punished. Through observational learning, we come to learn what behaviors are acceptable and rewarded in our culture, and we also learn to inhibit deviant or socially unacceptable behaviors by seeing what behaviors are punished. We can see the principles of reciprocal determinism at work in observational learning. For example, personal factors determine which behaviors in the environment a person chooses to imitate, and those environmental events in turn are processed cognitively according to other personal factors. Self-Efficacy Bandura (1977, 1995) has studied a number of cognitive and personal factors that affect learning and personality development, and most recently has focused on the concept of self-efficacy. Self-efficacy is our level of confidence in our own abilities, developed through our social experiences. Self-efficacy affects how we approach challenges and reach goals. In observational learning, self-efficacy is a cognitive factor that affects which behaviors we choose to imitate as well as our success in performing those behaviors. People who have high self-efficacy believe that their goals are within reach, have a positive view of challenges seeing them as tasks to be mastered, develop a deep interest in and strong commitment to the activities in which they are involved, and quickly recover from setbacks. Conversely, people with low self-efficacy avoid challenging tasks because they doubt their ability to be successful, tend to focus on failure and negative outcomes, and lose confidence in their abilities if they experience setbacks. Feelings of self-efficacy can be specific to certain situations. For instance, a student might feel confident in her ability in English class but much less so in math class. Julian Rotter and Locus of Control Julian Rotter (1966) proposed the concept of locus of control, another cognitive factor that affects learning and personality development. Distinct from self-efficacy, which involves our belief in our own abilities, locus of control refers to our beliefs about the power we have over our lives. In Rotter’s view, people possess either an internal or an external locus of control ( Figure 11.11 ). Those of us with an internal locus of control (“internals”) tend to believe that most of our outcomes are the direct result of our efforts. Those of us with an external locus of control (“externals”) tend to believe that our outcomes are outside of our control. 
Externals see their lives as being controlled by other people, luck, or chance. For example, say you didn’t spend much time studying for your psychology test and went out to dinner with friends instead. When you receive your test score, you see that you earned a D. If you possess an internal locus of control, you would most likely admit that you failed because you didn’t spend enough time studying and decide to study more for the next test. On the other hand, if you possess an external locus of control, you might conclude that the test was too hard and not bother studying for the next test, because you figure you will fail it anyway. Researchers have found that people with an internal locus of control perform better academically, achieve more in their careers, are more independent, are healthier, are better able to cope, and are less depressed than people who have an external locus of control (Benassi, Sweeney, & Dufour, 1988; Lefcourt, 1982; Maltby, Day, & Macaskill, 2007; Whyte, 1977, 1978, 1980). Link to Learning Take the Locus of Control questionnaire. Scores range from 0 to 13. A low score on this questionnaire indicates an internal locus of control, and a high score indicates an external locus of control. Walter Mischel and the Person-Situation Debate Walter Mischel was a student of Julian Rotter and taught for years at Stanford, where he was a colleague of Albert Bandura. Mischel surveyed several decades of empirical psychological literature regarding trait prediction of behavior, and his conclusion shook the foundations of personality psychology. Mischel found that the data did not support the central principle of the field—that a person’s personality traits are consistent across situations. His report triggered a decades-long period of self-examination, known as the person-situation debate, among personality psychologists. Mischel suggested that perhaps we were looking for consistency in the wrong places. He found that although behavior was inconsistent across different situations, it was much more consistent within situations—so that a person’s behavior in one situation would likely be repeated in a similar one. And as you will see next regarding his famous “marshmallow test,” Mischel also found that behavior is consistent in equivalent situations across time. One of Mischel’s most notable contributions to personality psychology was his ideas on self-regulation. According to Lecci & Magnavita (2013), “Self-regulation is the process of identifying a goal or set of goals and, in pursuing these goals, using both internal (e.g., thoughts and affect) and external (e.g., responses of anything or anyone in the environment) feedback to maximize goal attainment” (p. 6.3). Self-regulation is also known as will power. When we talk about will power, we tend to think of it as the ability to delay gratification. For example, Bettina’s teenage daughter made strawberry cupcakes, and they looked delicious. However, Bettina forfeited the pleasure of eating one, because she is training for a 5K race and wants to be fit and do well in the race. Would you be able to resist getting a small reward now in order to get a larger reward later? This is the question Mischel investigated in his now-classic marshmallow test. Mischel designed a study to assess self-regulation in young children. In the marshmallow study, Mischel and his colleagues placed a preschool child in a room with one marshmallow on the table.
The child was told that he could either eat the marshmallow now, or wait until the researcher returned to the room and then he could have two marshmallows (Mischel, Ebbesen & Raskoff, 1972). This was repeated with hundreds of preschoolers. What Mischel and his team found was that young children differ in their degree of self-control. Mischel and his colleagues continued to follow this group of preschoolers through high school, and what do you think they discovered? The children who had more self-control in preschool (the ones who waited for the bigger reward) were more successful in high school. They had higher SAT scores, had positive peer relationships, and were less likely to have substance abuse issues; as adults, they also had more stable marriages (Mischel, Shoda, & Rodriguez, 1989; Mischel et al., 2010). On the other hand, those children who had poor self-control in preschool (the ones who grabbed the one marshmallow) were not as successful in high school, and they were found to have academic and behavioral problems. Link to Learning To learn more about the marshmallow test and view the test given to children in Colombia, follow the link below to Joachim de Posada’s TEDTalks video. Today, the debate is mostly resolved, and most psychologists consider both the situation and personal factors in understanding behavior. For Mischel (1993), people are situation processors. The children in the marshmallow test each processed, or interpreted, the rewards structure of that situation in their own way. Mischel’s approach to personality stresses the importance of both the situation and the way the person perceives the situation. Instead of behavior being determined by the situation, people use cognitive processes to interpret the situation and then behave in accordance with that interpretation. 11.5 Humanistic Approaches Learning Objectives By the end of this section, you will be able to: Discuss the contributions of Abraham Maslow and Carl Rogers to personality development As the “third force” in psychology, humanism is touted as a reaction both to the pessimistic determinism of psychoanalysis, with its emphasis on psychological disturbance, and to the behaviorists’ view of humans passively reacting to the environment, which has been criticized as making people out to be personality-less robots. It does not suggest that psychoanalytic, behaviorist, and other points of view are incorrect but argues that these perspectives do not recognize the depth and meaning of human experience, and fail to recognize the innate capacity for self-directed change and transforming personal experiences. This perspective focuses on how healthy people develop. One pioneering humanist, Abraham Maslow , studied people who he considered to be healthy, creative, and productive, including Albert Einstein, Eleanor Roosevelt, Thomas Jefferson, Abraham Lincoln, and others. Maslow (1950, 1970) found that such people share similar characteristics, such as being open, creative, loving, spontaneous, compassionate, concerned for others, and accepting of themselves. When you studied motivation, you learned about one of the best-known humanistic theories, Maslow’s hierarchy of needs theory, in which Maslow proposes that human beings have certain needs in common and that these needs must be met in a certain order. The highest need is the need for self-actualization, which is the achievement of our fullest potential. Another humanistic theorist was Carl Rogers.
One of Rogers’s main ideas about personality regards self-concept , our thoughts and feelings about ourselves. How would you respond to the question, “Who am I?” Your answer can show how you see yourself. If your response is primarily positive, then you tend to feel good about who you are, and you see the world as a safe and positive place. If your response is mainly negative, then you may feel unhappy with who you are. Rogers further divided the self into two categories: the ideal self and the real self. The ideal self is the person that you would like to be; the real self is the person you actually are. Rogers focused on the idea that we need to achieve consistency between these two selves. We experience congruence when our thoughts about our real self and ideal self are very similar—in other words, when our self-concept is accurate . High congruence leads to a greater sense of self-worth and a healthy, productive life. Parents can help their children achieve this by giving them unconditional positive regard, or unconditional love. According to Rogers (1980), “As persons are accepted and prized, they tend to develop a more caring attitude towards themselves” (p. 116). Conversely, when there is a great discrepancy between our ideal and actual selves, we experience a state Rogers called incongruence , which can lead to maladjustment. Both Rogers’s and Maslow’s theories focus on individual choices and do not believe that biology is deterministic. 11.6 Biological Approaches Learning Objectives By the end of this section, you will be able to: Discuss the findings of the Minnesota Study of Twins Reared Apart as they relate to personality and genetics Discuss temperament and describe the three infant temperaments identified by Thomas and Chess Discuss the evolutionary perspective on personality development How much of our personality is in-born and biological, and how much is influenced by the environment and culture we are raised in? Psychologists who favor the biological approach believe that inherited predispositions as well as physiological processes can be used to explain differences in our personalities (Burger, 2008). In the field of behavioral genetics, the Minnesota Study of Twins Reared Apart —a well-known study of the genetic basis for personality—conducted research with twins from 1979 to 1999. In studying 350 pairs of twins, including pairs of identical and fraternal twins reared together and apart, researchers found that identical twins, whether raised together or apart, have very similar personalities (Bouchard, 1994; Bouchard, Lykken, McGue, Segal, & Tellegen, 1990; Segal, 2012). These findings suggest the heritability of some personality traits. Heritability refers to the proportion of difference among people that is attributed to genetics. Some of the traits that the study reported as having more than a 0.50 heritability ratio include leadership, obedience to authority, a sense of well-being, alienation, resistance to stress, and fearfulness. The implication is that some aspects of our personalities are largely controlled by genetics; however, it’s important to point out that traits are not determined by a single gene, but by a combination of many genes, as well as by epigenetic factors that control whether the genes are expressed. Link to Learning To what extent is our personality dictated by our genetic makeup? View this video to learn more. 
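To make the heritability idea concrete, here is a minimal sketch of two classic back-of-the-envelope estimators used in twin research: Falconer's formula, which doubles the gap between identical-twin and fraternal-twin correlations, and the reared-apart shortcut, where the correlation between identical twins raised separately approximates heritability directly. The correlation values below are invented for illustration; they are not figures from the Minnesota study, and the study itself used more sophisticated models.

```python
# Hedged sketch: classic heritability estimators from twin correlations.
# All correlation values are hypothetical, not the Minnesota Study of
# Twins Reared Apart's actual data.

def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Falconer's formula: h^2 ~= 2 * (r_MZ - r_DZ).
    Identical (MZ) twins share ~100% of segregating genes and
    fraternal (DZ) twins ~50%, so doubling the correlation gap
    estimates the genetic share of trait variance."""
    return 2.0 * (r_mz - r_dz)

# Hypothetical trait correlations:
r_mz_together = 0.62   # identical twins reared together
r_dz_together = 0.33   # fraternal twins reared together
r_mz_apart = 0.55      # identical twins reared apart

print(f"Falconer estimate:     h2 = {falconer_h2(r_mz_together, r_dz_together):.2f}")
# For MZ twins reared apart, shared environment is (ideally) absent,
# so their correlation by itself approximates heritability:
print(f"Reared-apart estimate: h2 = {r_mz_apart:.2f}")
```

Both estimators would attribute roughly half of the person-to-person variation in this hypothetical trait to genetics, in the spirit of the 0.50-plus ratios the study reported for traits such as leadership and resistance to stress.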
Temperament Most contemporary psychologists believe temperament has a biological basis due to its appearance very early in our lives (Rothbart, 2011). As you learned when you studied lifespan development, Thomas and Chess (1977) found that babies could be categorized into one of three temperaments: easy, difficult, or slow to warm up. However, environmental factors (family interactions, for example) and maturation can affect the ways in which children’s personalities are expressed (Carter et al., 2008). Research suggests that there are two dimensions of our temperament that are important parts of our adult personality—reactivity and self-regulation (Rothbart, Ahadi, & Evans, 2000). Reactivity refers to how we respond to new or challenging environmental stimuli; self-regulation refers to our ability to control that response (Rothbart & Derryberry, 1981; Rothbart, Sheese, Rueda, & Posner, 2011). For example, one person may immediately respond to new stimuli with a high level of anxiety, while another barely notices them. Connect the Concepts Body Type and Temperament Is there an association between your body type and your temperament? The constitutional perspective, which examines the relationship between the structure of the human body and behavior, seeks to answer this question (Genovese, 2008). The first comprehensive system of constitutional psychology was proposed by American psychologist William H. Sheldon (1940, 1942). He believed that your body type can be linked to your personality. Sheldon devoted his career to observing human bodies and temperaments. Based on his observations and interviews of hundreds of people, he proposed three body/personality types, which he called somatotypes. The three somatotypes are ectomorphs, endomorphs, and mesomorphs ( Figure 11.12 ). Ectomorphs are thin with a small bone structure and very little fat on their bodies. According to Sheldon, the ectomorph personality is anxious, self-conscious, artistic, thoughtful, quiet, and private. They enjoy intellectual stimulation and feel uncomfortable in social situations. Actors Adrien Brody and Nicole Kidman would be characterized as ectomorphs. Endomorphs are the opposite of ectomorphs. Endomorphs have narrow shoulders and wide hips, and carry extra fat on their round bodies. Sheldon described endomorphs as being relaxed, comfortable, good-humored, even-tempered, sociable, and tolerant. Endomorphs enjoy affection and detest disapproval. Queen Latifah and Jack Black would be considered endomorphs. The third somatotype is the mesomorph. This body type falls between the ectomorph and the endomorph. Mesomorphs have large bone structure, well-defined muscles, broad shoulders, narrow waists, and attractive, strong bodies. According to Sheldon, mesomorphs are adventurous, assertive, competitive, and fearless. They are curious and enjoy trying new things, but can also be obnoxious and aggressive. Channing Tatum and Scarlett Johansson would likely be mesomorphs. Sheldon (1949) also conducted further research into somatotypes and criminality. He measured the physical proportions of hundreds of juvenile delinquent boys in comparison to male college students, and found that problem youth were primarily mesomorphs. Why might this be? Perhaps it’s because they are quick to anger and don’t have the restraint demonstrated by ectomorphs. Maybe it’s because a person with a mesomorphic body type reflects high levels of testosterone, which may lead to more aggressive behavior.
Can you think of other explanations for Sheldon’s findings? Sheldon’s method of somatotyping is not without criticism, as it has been considered largely subjective (Carter & Heath, 1990; Cortés & Gatti, 1972; Parnell, 1958). More systematic and controlled research methods did not support his findings (Eysenck, 1970). Consequently, it’s not uncommon to see his theory labeled as pseudoscience, much like Gall’s theory of phrenology (Rafter, 2007; Rosenbaum, 1995). However, studies involving correlations between somatotype, temperament, and children’s school performance (Sanford et al., 1943; Parnell, 1958); somatotype and performance of pilots during wartime (Damon, 1955); and somatotype and temperament (Peterson, Liivamagi, & Koskel, 2006) did support his theory. 11.7 Trait Theorists Learning Objectives By the end of this section, you will be able to: Discuss early trait theories of Cattell and Eysenck Discuss the Big Five factors and describe someone who is high and low on each of the five traits Trait theorists believe personality can be understood via the approach that all people have certain traits, or characteristic ways of behaving. Do you tend to be sociable or shy? Passive or aggressive? Optimistic or pessimistic? Moody or even-tempered? Early trait theorists tried to describe all human personality traits. For example, one trait theorist, Gordon Allport (Allport & Odbert, 1936), found 4,500 words in the English language that could describe people. He organized these personality traits into three categories: cardinal traits, central traits, and secondary traits. A cardinal trait is one that dominates your entire personality, and hence your life—such as Ebenezer Scrooge’s greed and Mother Teresa’s altruism. Cardinal traits are not very common: Few people have personalities dominated by a single trait. Instead, our personalities typically are composed of multiple traits. Central traits are those that make up our personalities (such as loyal, kind, agreeable, friendly, sneaky, wild, and grouchy). Secondary traits are those that are not quite as obvious or as consistent as central traits. They are present under specific circumstances and include preferences and attitudes. For example, one person gets angry when people try to tickle him; another can only sleep on the left side of the bed; and yet another always orders her salad dressing on the side. And you—although not normally an anxious person—feel nervous before making a speech in front of your English class. In an effort to make the list of traits more manageable, Raymond Cattell (1946, 1957) narrowed down the list to about 171 traits. However, saying that a trait is either present or absent does not accurately reflect a person’s uniqueness, because all of our personalities are actually made up of the same traits; we differ only in the degree to which each trait is expressed. Cattell (1957) identified 16 factors or dimensions of personality: warmth, reasoning, emotional stability, dominance, liveliness, rule-consciousness, social boldness, sensitivity, vigilance, abstractedness, privateness, apprehension, openness to change, self-reliance, perfectionism, and tension ( Table 11.5 ). He developed a personality assessment based on these 16 factors, called the 16PF. Instead of a trait being present or absent, each dimension is scored over a continuum, from high to low. For example, your level of warmth describes how warm, caring, and nice to others you are. If you score low on this index, you tend to be more distant and cold.
A high score on this index signifies you are supportive and comforting.

Table 11.5 Personality Factors Measured by the 16PF Questionnaire (each factor runs from a low-score to a high-score description)
Warmth: Reserved, detached (low) to Outgoing, supportive (high)
Intellect: Concrete thinker (low) to Analytical (high)
Emotional stability: Moody, irritable (low) to Stable, calm (high)
Aggressiveness: Docile, submissive (low) to Controlling, dominant (high)
Liveliness: Somber, prudent (low) to Adventurous, spontaneous (high)
Dutifulness: Unreliable (low) to Conscientious (high)
Social assertiveness: Shy, restrained (low) to Uninhibited, bold (high)
Sensitivity: Tough-minded (low) to Sensitive, caring (high)
Paranoia: Trusting (low) to Suspicious (high)
Abstractness: Conventional (low) to Imaginative (high)
Introversion: Open, straightforward (low) to Private, shrewd (high)
Anxiety: Confident (low) to Apprehensive (high)
Openmindedness: Closeminded, traditional (low) to Curious, experimental (high)
Independence: Outgoing, social (low) to Self-sufficient (high)
Perfectionism: Disorganized, casual (low) to Organized, precise (high)
Tension: Relaxed (low) to Stressed (high)

Link to Learning Follow this link to an assessment based on Cattell’s 16PF questionnaire to see which personality traits dominate your personality. Psychologists Hans and Sybil Eysenck were personality theorists ( Figure 11.13 ) who focused on temperament, the inborn, genetically based personality differences that you studied earlier in the chapter. They believed personality is largely governed by biology. The Eysencks (Eysenck, 1990, 1992; Eysenck & Eysenck, 1963) viewed people as having two specific personality dimensions: extroversion/introversion and neuroticism/stability. According to their theory, people high on the trait of extroversion are sociable and outgoing, and readily connect with others, whereas people high on the trait of introversion have a higher need to be alone, engage in solitary behaviors, and limit their interactions with others. In the neuroticism/stability dimension, people high on neuroticism tend to be anxious; they tend to have an overactive sympathetic nervous system and, even with low stress, their bodies and emotional state tend to go into a fight-or-flight reaction. In contrast, people high on stability tend to need more stimulation to activate their fight-or-flight reaction and are considered more emotionally stable. Based on these two dimensions, the Eysencks’ theory divides people into four quadrants. These quadrants are sometimes compared with the four temperaments described by the Greeks: melancholic, choleric, phlegmatic, and sanguine ( Figure 11.14 ). Later, the Eysencks added a third dimension: psychoticism versus superego control (Eysenck, Eysenck & Barrett, 1985). In this dimension, people who are high on psychoticism tend to be independent thinkers, cold, nonconformists, impulsive, antisocial, and hostile, whereas people who are high on superego control tend to have high impulse control—they are more altruistic, empathetic, cooperative, and conventional (Eysenck, Eysenck & Barrett, 1985). While Cattell’s 16 factors may be too broad, the Eysencks’ two-factor system has been criticized for being too narrow. Another personality theory, called the Five Factor Model, effectively hits a middle ground, with its five factors referred to as the Big Five personality traits. It is the most popular theory in personality psychology today and the most accurate approximation of the basic trait dimensions (Funder, 2001). The five traits are openness to experience, conscientiousness, extroversion, agreeableness, and neuroticism ( Figure 11.15 ). A helpful way to remember the traits is by using the mnemonic OCEAN. In the Five Factor Model, each person has each trait, but they occur along a spectrum.
Openness to experience is characterized by imagination, feelings, actions, and ideas. People who score high on this trait tend to be curious and have a wide range of interests. Conscientiousness is characterized by competence, self-discipline, thoughtfulness, and achievement-striving (goal-directed behavior). People who score high on this trait are hardworking and dependable. Numerous studies have found a positive correlation between conscientiousness and academic success (Akomolafe, 2013; Chamorro-Premuzic & Furnham, 2008; Conrad & Patry, 2012; Noftle & Robins, 2007; Wagerman & Funder, 2007). Extroversion is characterized by sociability, assertiveness, excitement-seeking, and emotional expression. People who score high on this trait are usually described as outgoing and warm. Not surprisingly, people who score high on both extroversion and openness are more likely to participate in adventure and risky sports due to their curious and excitement-seeking nature (Tok, 2011). The fourth trait is agreeableness, which is the tendency to be pleasant, cooperative, trustworthy, and good-natured. People who score low on agreeableness tend to be described as rude and uncooperative, yet one recent study reported that men who scored low on this trait actually earned more money than men who were considered more agreeable (Judge, Livingston, & Hurst, 2012). The last of the Big Five traits is neuroticism, which is the tendency to experience negative emotions. People high on neuroticism tend to experience emotional instability and are characterized as angry, impulsive, and hostile. Watson and Clark (1984) found that people reporting high levels of neuroticism also tend to report feeling anxious and unhappy. In contrast, people who score low in neuroticism tend to be calm and even-tempered. The Big Five personality factors each represent a range between two extremes. In reality, most of us tend to lie somewhere midway along the continuum of each factor, rather than at polar ends. It’s important to note that the Big Five traits are relatively stable over our lifespan, with some tendency for the traits to increase or decrease slightly. Researchers have found that conscientiousness increases through young adulthood into middle age, as we become better able to manage our personal relationships and careers (Donnellan & Lucas, 2008). Agreeableness also increases with age, peaking between ages 50 and 70 (Terracciano, McCrae, Brant, & Costa, 2005). Neuroticism and extroversion tend to decline slightly with age (Donnellan & Lucas, 2008; Terracciano et al., 2005). Additionally, the Big Five traits have been shown to exist across ethnicities, cultures, and ages, and may have substantial biological and genetic components (Jang, Livesley, & Vernon, 1996; Jang et al., 2006; McCrae & Costa, 1997; Schmitt et al., 2007). Link to Learning To find out about your personality and where you fall on the Big Five traits, follow this link to take the Big Five personality test. 11.8 Cultural Understandings of Personality Learning Objectives By the end of this section, you will be able to: Discuss personality differences of people from collectivist and individualist cultures Discuss the three approaches to studying personality in a cultural context As you have learned in this chapter, personality is shaped by both genetic and environmental factors. The culture in which you live is one of the most important environmental factors that shapes your personality (Triandis & Suh, 2002).
The term culture refers to all of the beliefs, customs, art, and traditions of a particular society. Culture is transmitted to people through language as well as through the modeling of culturally acceptable and nonacceptable behaviors that are either rewarded or punished (Triandis & Suh, 2002). With these ideas in mind, personality psychologists have become interested in the role of culture in understanding personality. They ask whether personality traits are the same across cultures or if there are variations. It appears that there are both universal and culture-specific aspects that account for variation in people’s personalities. Why might it be important to consider cultural influences on personality? Western ideas about personality may not be applicable to other cultures (Benet-Martinez & Oishi, 2008). In fact, there is evidence that the strength of personality traits varies across cultures. Let’s take a look at some of the Big Five factors (conscientiousness, neuroticism, openness, and extroversion) across cultures. As you will learn when you study social psychology, Asian cultures are more collectivist, and people in these cultures tend to be less extroverted. People in Central and South American cultures tend to score higher on openness to experience, whereas Europeans score higher on neuroticism (Benet-Martinez & Karakitapoglu-Aygun, 2003). There also seem to be regional personality differences within the United States ( Figure 11.16 ). Researchers analyzed responses from over 1.5 million individuals in the United States and found that there are three distinct regional personality clusters: Cluster 1, which is in the Upper Midwest and Deep South, is dominated by people who fall into the “friendly and conventional” personality; Cluster 2, which includes the West, is dominated by people who are more relaxed, emotionally stable, calm, and creative; and Cluster 3, which includes the Northeast, has more people who are stressed, irritable, and depressed. People who live in Clusters 2 and 3 are also generally more open (Rentfrow et al., 2013). One explanation for the regional differences is selective migration (Rentfrow et al., 2013). Selective migration is the concept that people choose to move to places that are compatible with their personalities and needs. For example, a person high on the agreeableness scale would likely want to live near family and friends, and would choose to settle or remain in such an area. In contrast, someone high on openness would prefer to settle in a place that is recognized as diverse and innovative (such as California). Personality in Individualist and Collectivist Cultures Individualist cultures and collectivist cultures place emphasis on different basic values. People who live in individualist cultures tend to believe that independence, competition, and personal achievement are important. Individuals in Western nations such as the United States, England, and Australia score high on individualism (Oyserman, Coon, & Kemmelmeier, 2002). People who live in collectivist cultures value social harmony, respectfulness, and group needs over individual needs. Individuals who live in countries in Asia, Africa, and South America score high on collectivism (Hofstede, 2001; Triandis, 1995). These values influence personality. For example, Yang (2006) found that people in individualist cultures displayed more personally oriented personality traits, whereas people in collectivist cultures displayed more socially oriented personality traits.
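The regional clusters described above came from applying cluster analysis to millions of Big Five profiles. As a rough illustration of that general technique (not Rentfrow and colleagues' actual pipeline or data), the sketch below groups synthetic five-trait profiles into three clusters with k-means; all numbers are invented.

```python
# Illustrative k-means clustering of synthetic Big Five profiles, loosely in
# the spirit of Rentfrow et al. (2013). Not their method or data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=42)

# 1,000 fake respondents; columns are openness, conscientiousness,
# extroversion, agreeableness, and neuroticism on a 1-5 continuum.
profiles = rng.uniform(low=1.0, high=5.0, size=(1000, 5))

# Partition respondents into three groups, analogous to the three regional
# personality clusters reported in the study.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(profiles)

# Each cluster's centroid summarizes its typical trait profile.
traits = ["O", "C", "E", "A", "N"]
for i, center in enumerate(kmeans.cluster_centers_):
    summary = ", ".join(f"{t}={v:.2f}" for t, v in zip(traits, center))
    print(f"Cluster {i}: {summary}")
```

With real survey data, the cluster centroids would play the role of the "friendly and conventional" or "relaxed and creative" profiles named in the study.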
Approaches to Studying Personality in a Cultural Context There are three approaches that can be used to study personality in a cultural context: the cultural-comparative approach; the indigenous approach; and the combined approach, which incorporates elements of both views. Since ideas about personality have a Western basis, the cultural-comparative approach seeks to test Western ideas about personality in other cultures to determine whether they can be generalized and if they have cultural validity (Cheung, van de Vijver, & Leong, 2011). For example, recall from the previous section on the trait perspective that researchers used the cultural-comparative approach to test the universality of McCrae and Costa’s Five Factor Model. They found applicability in numerous cultures around the world, with the Big Five traits being stable in many cultures (McCrae & Costa, 1997; McCrae et al., 2005). The indigenous approach came about in reaction to the dominance of Western approaches to the study of personality in non-Western settings (Cheung et al., 2011). Because Western-based personality assessments cannot fully capture the personality constructs of other cultures, the indigenous model has led to the development of personality assessment instruments that are based on constructs relevant to the culture being studied (Cheung et al., 2011). The third approach to cross-cultural studies of personality is the combined approach, which serves as a bridge between Western and indigenous psychology as a way of understanding both universal and cultural variations in personality (Cheung et al., 2011). 11.9 Personality Assessment Learning Objectives By the end of this section, you will be able to: Discuss the Minnesota Multiphasic Personality Inventory Recognize and describe common projective tests used in personality assessment Roberto, Mikhail, and Nat are college friends and all want to be police officers. Roberto is quiet and shy, lacks self-confidence, and usually follows others. He is a kind person, but lacks motivation. Mikhail is loud and boisterous, a leader. He works hard, but is impulsive and drinks too much on the weekends. Nat is thoughtful and well liked. He is trustworthy, but sometimes he has difficulty making quick decisions. Of these three men, who would make the best police officer? What qualities and personality factors make someone a good police officer? What makes someone a bad or dangerous police officer? A police officer’s job is very high in stress, and law enforcement agencies want to make sure they hire the right people. Personality testing is often used for this purpose—to screen applicants for employment and job training. Personality tests are also used in criminal cases and custody battles, and to assess psychological disorders. This section explores the best known among the many different types of personality tests. Self-Report Inventories Self-report inventories are a kind of objective test used to assess personality. They typically use multiple-choice items or numbered scales, which represent a range from 1 (strongly disagree) to 5 (strongly agree). They often are called Likert scales after their developer, Rensis Likert (1932) ( Figure 11.17 ). One of the most widely used personality inventories is the Minnesota Multiphasic Personality Inventory (MMPI), first published in 1943, with 504 true/false questions, and updated to the MMPI-2 in 1989, with 567 questions.
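Before looking at the MMPI's structure in detail, it may help to see how a Likert-style inventory is typically turned into a score. The snippet below is a generic illustration; the items, the reverse-keying, and the trait label are all hypothetical and are not taken from the MMPI or any published scale.

```python
# Generic scoring of a short Likert-style self-report scale (hypothetical
# items). Responses run from 1 (strongly disagree) to 5 (strongly agree).

responses = {
    "I enjoy meeting new people.": 4,
    "I prefer to spend evenings alone.": 2,  # reverse-keyed item
    "Large parties energize me.": 5,
}

# Reverse-keyed items are flipped so that a higher number always points in
# the same direction (here, toward greater extroversion).
reverse_keyed = {"I prefer to spend evenings alone."}

SCALE_MAX = 5
total = sum(
    (SCALE_MAX + 1 - score) if item in reverse_keyed else score
    for item, score in responses.items()
)
print(f"Extroversion score: {total} / {len(responses) * SCALE_MAX}")  # 13 / 15
```

Real inventories add validity checks on top of this basic arithmetic, which is where the MMPI's validity and reliability scales, discussed next, come in.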
The original MMPI was based on a small, limited sample, composed mostly of Minnesota farmers and psychiatric patients; the revised inventory was based on a more representative, national sample to allow for better standardization. The MMPI-2 takes 1–2 hours to complete. Responses are scored to produce a clinical profile composed of 10 scales: hypochondriasis, depression, hysteria, psychopathic deviance (social deviance), masculinity versus femininity, paranoia, psychasthenia (obsessive/compulsive qualities), schizophrenia, hypomania, and social introversion. There is also a scale to ascertain risk factors for alcohol abuse. In 2008, the test was again revised, using more advanced methods, to the MMPI-2-RF. This version takes about one-half the time to complete and has only 338 questions ( Figure 11.18 ). Despite the new test’s advantages, the MMPI-2 is more established and is still more widely used. Typically, the tests are administered by computer. Although the MMPI was originally developed to assist in the clinical diagnosis of psychological disorders, it is now also used for occupational screening, such as in law enforcement, and in college, career, and marital counseling (Ben-Porath & Tellegen, 2008). In addition to clinical scales, the tests also have validity and reliability scales. (Recall the concepts of reliability and validity from your study of psychological research.) One of the validity scales, the Lie Scale (or “L” Scale), consists of 15 items and is used to ascertain whether the respondent is “faking good” (underreporting psychological problems to appear healthier). For example, if someone responds “yes” to a number of unrealistically positive items such as “I have never told a lie,” they may be trying to “fake good” or appear better than they actually are. Reliability scales test an instrument’s consistency over time, assuring that if you take the MMPI-2-RF today and then again 5 years later, your two scores will be similar. Beutler, Nussbaum, and Meredith (1988) gave the MMPI to newly recruited police officers and then to the same police officers 2 years later. After 2 years on the job, police officers’ responses indicated an increased vulnerability to alcoholism, somatic symptoms (vague, unexplained physical complaints), and anxiety. When the test was given an additional 2 years later (4 years after starting on the job), the results suggested high risk for alcohol-related difficulties. Projective Tests Another method for assessment of personality is projective testing. This kind of test relies on one of the defense mechanisms proposed by Freud—projection—as a way to assess unconscious processes. During this type of testing, a series of ambiguous cards is shown to the person being tested, who then is encouraged to project their feelings, impulses, and desires onto the cards—by telling a story, interpreting an image, or completing a sentence. Many projective tests have undergone standardization procedures (for example, Exner, 2002) and can be used to assess whether someone has unusual thoughts or a high level of anxiety, or is likely to become volatile. Some examples of projective tests are the Rorschach Inkblot Test, the Thematic Apperception Test (TAT), the Contemporized-Themes Concerning Blacks test, the TEMAS (Tell-Me-A-Story), and the Rotter Incomplete Sentence Blank (RISB). The Rorschach Inkblot Test was developed in 1921 by a Swiss psychologist named Hermann Rorschach (pronounced “ROAR-shock”).
It is a series of symmetrical inkblot cards that are presented to a client by a psychologist. Upon presentation of each card, the psychologist asks the client, “What might this be?” What the test-taker sees reveals unconscious feelings and struggles (Piotrowski, 1987; Weiner, 2003). The Rorschach has been standardized using the Exner system and is effective in measuring depression, psychosis, and anxiety. A second projective test is the Thematic Apperception Test (TAT), created in the 1930s by Henry Murray, an American psychologist, and a psychoanalyst named Christiana Morgan. A person taking the TAT is shown 8–12 ambiguous pictures and is asked to tell a story about each picture. The stories give insight into their social world, revealing hopes, fears, interests, and goals. The storytelling format helps to lower a person’s resistance to divulging unconscious personal details (Cramer, 2004). The TAT has been used in clinical settings to evaluate psychological disorders; more recently, it has been used in counseling settings to help clients gain a better understanding of themselves and achieve personal growth. Standardization of test administration is virtually nonexistent among clinicians, and the test tends to be modest to low on validity and reliability (Aronow, Weiss, & Reznikoff, 2001; Lilienfeld, Wood, & Garb, 2000). Despite these shortcomings, the TAT has been one of the most widely used projective tests. A third projective test is the Rotter Incomplete Sentence Blank (RISB), developed by Julian Rotter in 1950 (recall his theory of locus of control, covered earlier in this chapter). There are three forms of this test for use with different age groups: the school form, the college form, and the adult form. The tests include 40 incomplete sentences that people are asked to complete as quickly as possible ( Figure 11.19 ). The average time for completing the test is approximately 20 minutes, as responses are only 1–2 words in length. This test is similar to a word association test, and like other types of projective tests, it is presumed that responses will reveal desires, fears, and struggles. The RISB is used in screening college students for adjustment problems and in career counseling (Holaday, Smith, & Sherry, 2010; Rotter & Rafferty, 1950). For many decades, these traditional projective tests have been used in cross-cultural personality assessments. However, it was found that test bias limited their usefulness (Hoy-Watkins & Jenkins-Moore, 2008). It is difficult to assess the personalities and lifestyles of members of widely divergent ethnic/cultural groups using personality instruments based on data from a single culture or race (Hoy-Watkins & Jenkins-Moore, 2008). For example, when the TAT was used with African-American test takers, the result was often shorter story length and low levels of cultural identification (Duzant, 2005). Therefore, it was vital to develop other personality assessments that explored factors such as race, language, and level of acculturation (Hoy-Watkins & Jenkins-Moore, 2008). To address this need, Robert Williams developed the first culturally specific projective test designed to reflect the everyday life experiences of African Americans (Hoy-Watkins & Jenkins-Moore, 2008). The updated version of the instrument is the Contemporized-Themes Concerning Blacks Test (C-TCB) (Williams, 1972). The C-TCB contains 20 color images that show scenes of African-American lifestyles.
When the C-TCB was compared with the TAT for African Americans, it was found that use of the C-TCB led to increased story length, higher degrees of positive feelings, and stronger identification with the C-TCB (Hoy, 1997; Hoy-Watkins & Jenkins-Moore, 2008). The TEMAS Multicultural Thematic Apperception Test is another tool designed to be culturally relevant to minority groups, especially Hispanic youths. TEMAS—standing for “Tell Me a Story” but also a play on the Spanish word temas (themes)—uses images and storytelling cues that relate to minority culture (Constantino, 1982).
U.S. History
Summary 28.1 The Challenges of Peacetime At the end of World War II, U.S. servicemen and women returned to civilian life, and all hoped the prosperity of the war years would continue. The GI Bill eased many veterans’ return by providing them with unemployment compensation, low-interest loans, and money to further their education; however, African American, Mexican American, and gay veterans were often unable to take advantage of these benefits fully or at all. Meanwhile, Japanese Americans faced an uphill struggle in their attempts to return to normalcy, and many women who had made significant professional gains in wartime found themselves dismissed from their positions. President Harry Truman attempted to extend Roosevelt’s New Deal with his own Fair Deal, which had the goal of improving wages, housing, and healthcare, and protecting the rights of African Americans. Confronted by a Congress dominated by Republicans and southern Democrats, however, Truman was able to achieve only some of his goals. 28.2 The Cold War Joy at the ending of World War II was quickly replaced by fears of conflict with the Soviet Union. The Cold War heated up as both the United States and Soviet Union struggled for world dominance. Fearing Soviet expansion, the United States committed itself to assisting countries whose governments faced overthrow by Communist forces and gave billions of dollars to war-torn Europe to help it rebuild. While the United States achieved victory in its thwarting of Soviet attempts to cut Berlin off from the West, the nation was less successful in its attempts to prevent Communist expansion in Korea. The development of atomic weapons by the Soviet Union and the arrest of Soviet spies in the United States and Britain roused fears in the United States that Communist agents were seeking to destroy the nation from within. Loyalty board investigations and hearings before House and Senate committees attempted to root out Soviet sympathizers in the federal government and in other sectors of American society, including Hollywood and the military. 28.3 The American Dream In 1953, Dwight D. Eisenhower became president of the United States. Fiscally conservative but ideologically moderate, he sought to balance the budget while building a strong system of national defense. This defense policy led to a greater emphasis on the possible use of nuclear weapons in any confrontation with the Soviet Union. Committed to maintaining peace, however, Eisenhower avoided engaging the United States in foreign conflicts; during his presidency, the economy boomed. Young Americans married in record numbers, moved to the growing suburbs, and gave birth to the largest generation to date in U.S. history. As middle-class adults, they conformed to the requirements of corporate jobs and suburban life, while their privileged children enjoyed a consumer culture tailored to their desires. 28.4 Popular Culture and Mass Media Young Americans in the postwar period had more disposable income and enjoyed greater material comfort than their forebears. These factors allowed them to devote more time and money to leisure activities and the consumption of popular culture. Rock and roll, which drew from African American roots in the blues, embraced themes popular among teenagers, such as young love and rebellion against authority. At the same time, traditional forms of entertainment, such as motion pictures, came under increasing competition from a relatively new technology, television. 
28.5 The African American Struggle for Civil Rights After World War II, African American efforts to secure greater civil rights increased across the United States. African American lawyers such as Thurgood Marshall championed cases intended to destroy the Jim Crow system of segregation that had dominated the American South since Reconstruction. The landmark Supreme Court case Brown v. Board of Education prohibited segregation in public schools, but not all school districts integrated willingly, and President Eisenhower had to use the military to desegregate Little Rock’s Central High School. The courts and the federal government did not assist African Americans in asserting their rights in other cases. In Montgomery, Alabama, it was the grassroots efforts of African American citizens who boycotted the city’s bus system that brought about change. Throughout the region, many White southerners made their opposition to these efforts known. Too often, this opposition manifested itself in violence and tragedy, as in the murder of Emmett Till.
Chapter Outline 28.1 The Challenges of Peacetime 28.2 The Cold War 28.3 The American Dream 28.4 Popular Culture and Mass Media 28.5 The African American Struggle for Civil Rights Introduction Is This Tomorrow? ( Figure 28.1 ), a 1947 comic book, highlights one way that the federal government and some Americans revived popular sentiment in opposition to Communism. The United States and the Soviet Union, allies during World War II, had different visions for the postwar world. As Joseph Stalin, premier of the Soviet Union, tightened his grip on the countries of Eastern Europe, Americans began to fear that it was his goal to spread the Communist revolution throughout the world and make newly independent nations puppets of the Soviet Union. To enlist as many Americans as possible in the fight against Soviet domination, the U.S. government and purveyors of popular culture churned out propaganda intended to convince average citizens of the dangers posed by the Soviet Union. Artwork such as the cover of Is This Tomorrow? , which depicts Russians attacking Americans, including a struggling woman and an African American veteran still wearing his uniform, played upon postwar fears of Communism and of a future war with the Soviet Union. These fears dominated American life and affected foreign policy, military strategy, urban planning, popular culture, and the civil rights movement.
Review Questions
1. Truman referred to his program of economic and social reform as the ________. (A) New Deal; (B) Square Deal; (C) Fair Deal; (D) Straight Deal. Answer: C.
2. Which of the following pieces of Truman's domestic agenda was rejected by Congress? (A) the Taft-Hartley Act; (B) national healthcare; (C) the creation of a civil rights commission; (D) funding for schools. Answer: B.
3. What was the policy of trying to limit the expansion of Soviet influence abroad? (A) restraint; (B) containment; (C) isolationism; (D) quarantine. Answer: B.
4. The Truman administration tried to help Europe recover from the devastation of World War II with the ________. (A) Economic Development Bank; (B) Atlantic Free Trade Zone; (C) Byrnes Budget; (D) Marshall Plan. Answer: D.
5. The name of the first manmade satellite, launched by the Soviet Union in 1957, was ________. (A) Triton; (B) Cosmolskaya; (C) Pravda; (D) Sputnik. Answer: D.
6. The first Levittown was built ________. (A) in Bucks County, Pennsylvania; (B) in Nassau County, New York; (C) near Newark, New Jersey; (D) near Pittsburgh, Pennsylvania. Answer: B.
7. The disc jockey who popularized rock and roll was ________. (A) Bill Haley; (B) Elvis Presley; (C) Alan Freed; (D) Ed Sullivan. Answer: C.
8. The NAACP lawyer who became known as "Mr. Civil Rights" was ________. (A) Earl Warren; (B) Jackie Robinson; (C) Orval Faubus; (D) Thurgood Marshall. Answer: D.
9. The Arkansas governor who tried to prevent the integration of Little Rock High School was ________. (A) Charles Hamilton Houston; (B) Kenneth Clark; (C) Orval Faubus; (D) Clark Clifford. Answer: C.
28.1 The Challenges of Peacetime Learning Objectives By the end of this section, you will be able to: Identify the issues that the nation faced during demobilization Explain the goals and objectives of the Truman administration Evaluate the actions taken by the U.S. government to address the concerns of returning veterans The decade and a half immediately following the end of World War II was one in which middle- and working-class Americans hoped for a better life than the one they lived before the war. These hopes were tainted by fears of economic hardship, as many who experienced the Great Depression feared a return to economic decline. Others clamored for the opportunity to spend the savings they had accumulated through long hours on the job during the war when consumer goods were rarely available. African Americans who had served in the armed forces and worked in the defense industry did not wish to return to “normal.” Instead, they wanted the same rights and opportunities that other Americans had. Still other citizens were less concerned with the economy or civil rights; instead, they looked with suspicion at the Soviet presence in Eastern Europe. What would happen now that the United States and the Soviet Union were no longer allies, and the other nations that had long helped maintain a balance of power were left seriously damaged by the war? Harry Truman, president for less than a year when the war ended, was charged with addressing all of these concerns and giving the American people a “fair deal.” DEMOBILIZATION AND THE RETURN TO CIVILIAN LIFE The most immediate task to be completed after World War II was demobilizing the military and reintegrating the veterans into civilian life. In response to popular pressure and concerns over the budget, the United States sought to demobilize its armed forces as quickly as possible. Many servicemen, labeled the “Ohio boys” (Over the Hill in October), threatened to vote Republican if they were not home by Christmas 1946. Understandably, this placed a great deal of pressure on the still-inexperienced president to shrink the size of the U.S. military. Not everyone wanted the government to reduce America’s military might, however. Secretary of the Navy James Forrestal and Secretary of War Robert P. Patterson warned Truman in October 1945 that an overly rapid demobilization jeopardized the nation’s strategic position in the world. While Truman agreed with their assessment, he felt powerless to put a halt to demobilization. In response to mounting political pressure, the government reduced the size of the U.S. military from a high of 12 million in June 1945 to 1.5 million in June 1947—still more troops than the nation ever had in arms during peacetime. Soldiers and sailors were not the only ones dismissed from service. As the war drew to a close, millions of women working the jobs of men who had gone off to fight were dismissed by their employers, often because the demand for war materiel had declined and because government propaganda encouraged them to go home to make way for the returning troops. While most women workers surveyed at the end of the war wished to keep their jobs (75–90 percent, depending on the study), many did in fact leave them. Nevertheless, throughout the late 1940s and the 1950s, women continued to make up approximately one-third of the U.S. labor force. Readjustment to postwar life was difficult for the returning troops. The U.S. Army estimated that as many as 20 percent of its casualties were psychological.
Although many eagerly awaited their return to civilian status, others feared that they would not be able to resume a humdrum existence after the experience of fighting on the front lines. Veterans also worried that they wouldn’t find work and that civilian defense workers were better positioned to take advantage of the new jobs opening up in the peacetime economy. Some felt that their wives and children would not welcome their presence, and some children did indeed resent the return of fathers who threatened to disrupt the mother-child household. Those on the home front worried as well. Doctors warned fiancées, wives, and mothers that soldiers might return with psychological problems that would make them difficult to live with. The GI Bill of Rights Well before the end of the war, Congress had passed one of the most significant and far-reaching pieces of legislation to ease veterans’ transition into civilian life: the Servicemen’s Readjustment Act, also known as the GI Bill ( Figure 28.3 ). Every honorably discharged veteran who had seen active duty, but not necessarily combat, was eligible to receive a year’s worth of unemployment compensation. This provision not only calmed veterans’ fears regarding their ability to support themselves, but it also prevented large numbers of men—as well as some women—from suddenly entering a job market that did not have enough positions for them. Another way that the GI Bill averted a glut in the labor market was by giving returning veterans the opportunity to pursue an education; it paid for tuition at a college or vocational school, and gave them a stipend to live on while they completed their studies. The result was a dramatic increase in the number of students—especially male ones—enrolled in American colleges and universities. In 1940, only 5.5 percent of American men had a college degree. By 1950, that percentage had increased to 7.3 percent, as more than two million servicemen took advantage of the benefits offered by the GI Bill to complete college. The numbers continued to grow throughout the 1950s. Upon graduation, these men were prepared for skilled blue-collar or white-collar jobs that paved the way for many to enter the middle class. The creation of a well-educated, skilled labor force helped the U.S. economy as well. Other benefits offered by the GI Bill included low-interest loans to purchase homes or start small businesses. However, not all veterans were able to take advantage of the GI Bill. African American veterans could use their educational benefits only to attend schools that accepted Black students. The approximately nine thousand servicemen and women who were dishonorably discharged because they were gay or lesbian were ineligible for GI Bill benefits. Benefits for some Mexican American veterans, mainly in Texas, were also denied or delayed. The Return of the Japanese While most veterans received assistance to help in their adjustment to postwar life, others returned home to an uncertain future without the promise of government aid to help them resume their prewar lives. Japanese Americans from the West Coast who had been interned during the war also confronted the task of rebuilding their lives. In December 1944, Franklin Roosevelt had declared an end to the forced relocation of Japanese Americans, and as of January 1945, they were free to return to their homes. In many areas, however, neighbors clung to their prejudices and denounced those of Japanese descent as disloyal and dangerous. 
These feelings had been worsened by wartime propaganda, which often featured horrific accounts of Japanese mistreatment of prisoners, and by the statements of military officers to the effect that the Japanese were inherently savage. Facing such animosity, many Japanese American families chose to move elsewhere. Those who did return often found that in their absence, “friends” and neighbors had sold possessions that had been left with them for safekeeping. Many homes had been vandalized and farms destroyed. When Japanese Americans reopened their businesses, former customers sometimes boycotted them.

THE FAIR DEAL
Early in his presidency, Truman sought to build on the promises of Roosevelt’s New Deal. Besides demobilizing the armed forces and preparing for the homecoming of servicemen and women, he also had to guide the nation through the process of returning to a peacetime economy. To this end, he proposed an ambitious program of social legislation that included establishing a federal minimum wage, expanding Social Security and public housing, and prohibiting child labor. Wartime price controls were retained for some items but removed from others, like meat. In his 1949 inaugural address, Truman referred to his programs as the “Fair Deal,” a nod to his predecessor’s New Deal.

He wanted the Fair Deal to include Americans of color and became the first president to address the National Association for the Advancement of Colored People (NAACP). He also took decisive steps towards extending civil rights to African Americans by establishing, by executive order in December 1946, a Presidential Committee on Civil Rights to investigate racial discrimination in the United States. Truman also desegregated the armed forces, again by executive order, in July 1948, overriding many objections that the military was no place for social experimentation.

Congress, however, which was dominated by Republicans and southern conservative Democrats, refused to pass more “radical” pieces of legislation, such as a bill providing for national healthcare. The American Medical Association spent some $1.5 million to defeat Truman’s healthcare proposal, which it sought to discredit as socialized medicine in order to appeal to Americans’ fear of Communism. The same Congress also refused to make lynching a federal crime or outlaw the poll tax that reduced the access of poor Americans to the ballot box. Congress also rejected a bill that would have made Roosevelt’s Fair Employment Practices Committee, which prohibited racial discrimination by companies doing business with the federal government, permanent. At the same time, it passed many conservative pieces of legislation. For example, the Taft-Hartley Act, which limited the power of unions, became law despite Truman’s veto.

28.2 The Cold War

Learning Objectives
By the end of this section, you will be able to:
Explain how and why the Cold War emerged in the wake of World War II
Describe the steps taken by the U.S. government to oppose Communist expansion in Europe and Asia
Discuss the government’s efforts to root out Communist influences in the United States

As World War II drew to a close, the alliance that had made the United States and the Soviet Union partners in their defeat of the Axis powers—Germany, Italy, and Japan—began to fall apart. Both sides realized that their visions for the future of Europe and the world were incompatible.
Joseph Stalin, the premier of the Soviet Union, wished to retain hold of Eastern Europe and establish Communist, pro-Soviet governments there, in an effort both to expand Soviet influence and to protect the Soviet Union from future invasions. He also sought to bring Communist revolution to Asia and to developing nations elsewhere in the world. The United States wanted to expand its influence as well by protecting or installing democratic governments throughout the world. It sought to combat the influence of the Soviet Union by forming alliances with Asian, African, and Latin American nations, and by helping these countries to establish or expand prosperous, free-market economies. The end of the war left the industrialized nations of Europe and Asia physically devastated and economically exhausted by years of invasion, battle, and bombardment. With Great Britain, France, Germany, Italy, Japan, and China reduced to shadows of their former selves, the United States and the Soviet Union emerged as the last two superpowers and quickly found themselves locked in a contest for military, economic, social, technological, and ideological supremacy.

FROM ISOLATIONISM TO ENGAGEMENT
The United States had a long history of avoiding foreign alliances that might require the commitment of its troops abroad. However, in accepting the realities of the post-World War II world, in which traditional powers like Great Britain or France were no longer strong enough to police the globe, the United States realized that it would have to make a permanent change in its foreign policy, shifting from relative isolation to active engagement.

On assuming the office of president upon the death of Franklin Roosevelt, Harry Truman was already troubled by Soviet actions in Europe. He disliked the concessions made by Roosevelt at Yalta, which had allowed the Soviet Union to install a Communist government in Poland. At the Potsdam conference, held from July 17 to August 2, 1945, Truman also opposed Stalin’s plans to demand large reparations from Germany. He feared that the burden this would impose on Germany might lead to another cycle of German rearmament and aggression—a fear based on that nation’s development after World War I (Figure 28.4).

Although the United States and the Soviet Union did finally reach an agreement at Potsdam, this was the final occasion on which they cooperated for quite some time. Each remained convinced that its own economic and political systems were superior to the other’s, and the two superpowers quickly found themselves drawn into conflict. The decades-long struggle between them for technological and ideological supremacy became known as the Cold War. So called because it did not include direct military confrontation between Soviet and U.S. troops, the Cold War was fought with a variety of other weapons: espionage and surveillance, political assassinations, propaganda, and the formation of alliances with other nations. It also became an arms race, as both countries competed to build the greatest stockpile of nuclear weapons, and also competed for influence in poorer nations, supporting opposite sides in wars in some of those nations, such as Korea and Vietnam.

CONTAINMENT ABROAD
In February 1946, George Kennan, a State Department official stationed at the U.S. embassy in Moscow, sent an eight-thousand-word message to Washington, DC.
In what became known as the “Long Telegram,” Kennan maintained that Soviet leaders believed that the only way to protect the Soviet Union was to destroy “rival” nations and their influence over weaker nations. According to Kennan, the Soviet Union was not so much a revolutionary regime as a totalitarian bureaucracy that was unable to accept the prospect of a peaceful coexistence of the United States and itself. He advised that the best way to thwart Soviet plans for the world was to contain Soviet influence—primarily through economic policy—to those places where it already existed and prevent its political expansion into new areas. This strategy, which came to be known as the policy of containment, formed the basis for U.S. foreign policy and military decision making for more than thirty years.

As Communist governments came to power elsewhere in the world, American policymakers extended their strategy of containment to what became known as the domino theory under the Eisenhower administration: Neighbors of Communist nations, the assumption went, were likely to succumb to the same allegedly dangerous and infectious ideology. Like dominoes toppling one another, entire regions would eventually be controlled by the Soviets. The demand for anti-Communist containment appeared as early as March 1946 in a speech by Winston Churchill, in which he referred to an Iron Curtain that divided Europe into the “free” West and the Communist East controlled by the Soviet Union.

The commitment to containing Soviet expansion made necessary the ability to mount a strong military offense and defense. In pursuit of this goal, the U.S. military was reorganized under the National Security Act of 1947. This act streamlined the government in matters of security by creating the National Security Council and establishing the Central Intelligence Agency (CIA) to conduct surveillance and espionage in foreign nations. It also created the Department of the Air Force, which was combined with the Departments of the Army and Navy in 1949 to form one Department of Defense.

The Truman Doctrine
In Europe, the end of World War II witnessed the rise of a number of internal struggles for control of countries that had been occupied by Nazi Germany. Great Britain occupied Greece as the Nazi regime there collapsed. The British aided the authoritarian government of Greece in its battles against Greek Communists. In March 1947, Great Britain announced that it could no longer afford the cost of supporting government military activities and withdrew from participation in the Greek civil war. Stepping into this power vacuum, the United States announced the Truman Doctrine, which offered support to Greece and Turkey in the form of financial assistance, weaponry, and troops to help train their militaries and bolster their governments against Communism. Eventually, the program was expanded to include any state trying to withstand a Communist takeover. The Truman Doctrine thus became a hallmark of U.S. Cold War policy.

Defining American
The Truman Doctrine
In 1947, Great Britain, which had assumed responsibility for the disarming of German troops in Greece at the end of World War II, could no longer afford to provide financial support for the authoritarian Greek government, which was attempting to win a civil war against Greek leftist rebels. President Truman, unwilling to allow a Communist government to come to power there, requested that Congress provide funds for the government of Greece to continue its fight against the rebels.
Truman also requested aid for the government of Turkey to fight the forces of Communism in that country. He said:

At the present moment in world history nearly every nation must choose between alternative ways of life. The choice is too often not a free one. Should we fail to aid Greece and Turkey in this fateful hour, the effect will be far reaching to the West as well as to the East. The seeds of totalitarian regimes are nurtured by misery and want. They spread and grow in the evil soil of poverty and strife. They reach their full growth when the hope of a people for a better life has died. We must keep that hope alive. The free peoples of the world look to us for support in maintaining their freedoms. If we falter in our leadership, we may endanger the peace of the world—and we shall surely endanger the welfare of our own nation. Great responsibilities have been placed upon us by the swift movement of events. I am confident that the Congress will face these responsibilities squarely.

What role is Truman suggesting that the United States assume in the postwar world? Does the United States still assume this role?

The Marshall Plan
By 1946, the American economy was growing significantly. At the same time, the economic situation in Europe was disastrous. The war had turned much of Western Europe into a battlefield, and the rebuilding of factories, public transportation systems, and power stations progressed exceedingly slowly. Starvation loomed as a real possibility for many. As a result of these conditions, Communism was making significant inroads in both Italy and France. These concerns led Truman, along with Secretary of State George C. Marshall, to propose to Congress the European Recovery Program, popularly known as the Marshall Plan. Between its implementation in April 1948 and its termination in 1951, this program gave $13 billion in economic aid to European nations.

Truman’s motivation was economic and political, as well as humanitarian. The plan stipulated that the European nations had to work together in order to receive aid, thus enforcing unity through enticement, while seeking to undercut the political popularity of French and Italian Communists and dissuading moderates from forming coalition governments with them. Likewise, much of the money had to be spent on American goods, boosting the postwar economy of the United States as well as the American cultural presence in Europe. Stalin regarded the program as a form of bribery. The Soviet Union refused to accept aid from the Marshall Plan, even though it could have done so, and forbade the Communist states of Eastern Europe to accept U.S. funds as well. Those states that did accept aid began to experience an economic recovery.

My Story
George C. Marshall and the Nobel Peace Prize
The youngest child of a Pennsylvania businessman and Democrat, George C. Marshall (Figure 28.5) chose a military career. He attended the Virginia Military Institute, was a veteran of World War I, and spent the rest of his life either in the military or otherwise in the service of his country, including as President Truman’s Secretary of State. He was awarded the Nobel Peace Prize in 1953, the only soldier ever to receive that honor. Below is an excerpt of his remarks as he accepted the award.

There has been considerable comment over the awarding of the Nobel Peace Prize to a soldier. I am afraid this does not seem as remarkable to me as it quite evidently appears to others. I know a great deal of the horrors and tragedies of war.
Today, as chairman of the American Battle Monuments Commission, it is my duty to supervise the construction and maintenance of military cemeteries in many countries overseas, particularly in Western Europe. The cost of war in human lives is constantly spread before me, written neatly in many ledgers whose columns are gravestones. I am deeply moved to find some means or method of avoiding another calamity of war. Almost daily I hear from the wives, or mothers, or families of the fallen. The tragedy of the aftermath is almost constantly before me. I share with you an active concern for some practical method for avoiding war. . . . A very strong military posture is vitally necessary today. How long it must continue I am not prepared to estimate, but I am sure that it is too narrow a basis on which to build a dependable, long-enduring peace. The guarantee for a long continued peace will depend on other factors in addition to a moderated military strength, and no less important. Perhaps the most important single factor will be a spiritual regeneration to develop goodwill, faith, and understanding among nations. Economic factors will undoubtedly play an important part. Agreements to secure a balance of power, however disagreeable they may seem, must likewise be considered. And with all these there must be wisdom and the will to act on that wisdom.

What steps did Marshall recommend be taken to maintain a lasting peace? To what extent have today’s nations heeded his advice?

Showdown in Europe
The lack of consensus with the Soviets on the future of Germany led the United States, Great Britain, and France to support joining their respective occupation zones into a single, independent state. In December 1946, they took steps to do so, but the Soviet Union did not wish the western zones of the country to unify under a democratic, pro-capitalist government. The Soviet Union also feared the possibility of a unified West Berlin, located entirely within the Soviet zone of occupation. In June 1948, three days after the western allies authorized the introduction of a new currency in western Germany—the Deutsche Mark—Stalin ordered all land and water routes to the western zones of the city of Berlin cut off. He hoped to starve the western parts of the city into submission; the Berlin blockade was also a test of the emerging U.S. policy of containment.

Unwilling to abandon Berlin, the United States, Great Britain, and France began to deliver all needed supplies to West Berlin by air (Figure 28.6). In April 1949, the three countries joined Canada and eight Western European nations to form the North Atlantic Treaty Organization (NATO), an alliance pledging its members to mutual defense in the event of attack. On May 12, 1949, a year and approximately two million tons of supplies later, the Soviets admitted defeat and ended the blockade of Berlin. On May 23, the Federal Republic of Germany (FRG), consisting of the unified western zones and commonly referred to as West Germany, was formed. The Soviets responded by creating the German Democratic Republic, or East Germany, in October 1949.

CONTAINMENT AT HOME
In 1949, two incidents severely disrupted American confidence in the ability of the United States to contain the spread of Communism and limit Soviet power in the world. First, on August 29, 1949, the Soviet Union exploded its first atomic bomb—no longer did the United States have a monopoly on nuclear power.
A few months later, on October 1, 1949, Chinese Communist Party leader Mao Zedong announced the triumph of the Chinese Communists over their Nationalist foes in a civil war that had been raging since 1927. The Nationalist forces, under their leader Chiang Kai-shek, departed for Taiwan in December 1949.

Immediately, there were suspicions that spies had passed bomb-making secrets to the Soviets and that Communist sympathizers in the U.S. State Department had hidden information that might have enabled the United States to ward off the Communist victory in China. Indeed, in February 1950, Wisconsin senator Joseph McCarthy, a Republican, charged in a speech that the State Department was filled with Communists. Also in 1950, the imprisonment in Great Britain of Klaus Fuchs, a German-born physicist who had worked on the Manhattan Project and was then convicted of passing nuclear secrets to the Soviets, increased American fears. Information given by Fuchs to the British implicated a number of American citizens as well. The most infamous trial of suspected American spies was that of Julius and Ethel Rosenberg, who were executed in June 1953 despite a lack of evidence against them. Several decades later, evidence was found that Julius, but not Ethel, had in fact given information to the Soviet Union.

Fears that Communists within the United States were jeopardizing the country’s security had existed even before the victory of Mao Zedong and the arrest and conviction of the atomic spies. Roosevelt’s New Deal and Truman’s Fair Deal were often criticized as “socialist,” which many mistakenly associated with Communism, and Democrats were often branded Communists by Republicans. In response, on March 21, 1947, Truman signed Executive Order 9835, which provided the Federal Bureau of Investigation with broad powers to investigate federal employees and identify potential security risks. State and municipal governments instituted their own loyalty boards to find and dismiss potentially disloyal workers.

In addition to loyalty review boards, the House Committee on Un-American Activities (HUAC) was established in 1938 to investigate claims of disloyalty and subversive activities among private citizens. It directed much of its attention to rooting out suspected Communists in business, academia, and the media. HUAC was particularly interested in Hollywood because it feared that Communist sympathizers might use motion pictures as pro-Soviet propaganda. Witnesses were subpoenaed and required to testify before the committee; refusal could result in imprisonment. Those who invoked Fifth Amendment protections, or were otherwise suspected of Communist sympathies, often lost their jobs or found themselves on a blacklist, which prevented them from securing employment. Notable artists who were blacklisted in the 1940s and 1950s include composer Leonard Bernstein, novelist Dashiell Hammett, playwright and screenwriter Lillian Hellman, actor and singer Paul Robeson, and musician Artie Shaw.

TO THE TRENCHES AGAIN
Just as the U.S. government feared the possibility of Communist infiltration of the United States, so too was it alert for signs that Communist forces were on the move elsewhere. The Soviet Union had been granted control of the northern half of the Korean peninsula at the end of World War II, and the United States had control of the southern portion. The Soviet Union displayed little interest in extending its power into South Korea, and Stalin did not wish to risk confrontation with the United States over Korea.
North Korea’s leaders, however, wished to reunify the peninsula under Communist rule. In April 1950, Stalin finally gave permission to North Korea’s leader Kim Il Sung to invade South Korea and provided the North Koreans with weapons and military advisors. On June 25, 1950, troops of the North Korean People’s Democratic Army crossed the thirty-eighth parallel, the border between North and South Korea. The first major test of the U.S. policy of containment in Asia had begun, for the domino theory held that a victory by North Korea might lead to further Communist expansion in Asia, in the virtual backyard of the United States’ chief new ally in East Asia—Japan.

The United Nations (UN), which had been established in 1945, was quick to react. On June 27, the UN Security Council denounced North Korea’s actions and called upon UN members to help South Korea defeat the invading forces. As a permanent member of the Security Council, the Soviet Union could have vetoed the action, but it had boycotted UN meetings following the awarding of China’s seat on the Security Council to Taiwan instead of to Mao Zedong’s People’s Republic of China. On June 27, Truman ordered U.S. military forces into South Korea. They established a defensive line on the far southern part of the Korean peninsula near the town of Pusan. A U.S.-led invasion at Inchon on September 15 halted the North Korean advance and turned it into a retreat (Figure 28.7).

As North Korean forces moved back across the thirty-eighth parallel, UN forces under the command of U.S. General Douglas MacArthur followed. MacArthur’s goal was not only to drive the North Korean army out of South Korea but to destroy Communist North Korea as well. At this stage, he had the support of President Truman; however, as UN forces approached the Yalu River, the border between China and North Korea, MacArthur’s and Truman’s objectives diverged. Chinese premier Zhou Enlai, who had provided supplies and military advisors for North Korea before the conflict began, sent troops into battle to support North Korea and caught U.S. troops by surprise. Following a costly retreat from North Korea’s Chosin Reservoir, a swift advance of Chinese and North Korean forces, and another invasion of Seoul, MacArthur urged Truman to deploy nuclear weapons against China. Truman, however, did not wish to risk a broader war in Asia. MacArthur criticized Truman’s decision and voiced his disagreement in a letter to a Republican congressman, who subsequently allowed the letter to become public. In April 1951, Truman accused MacArthur of insubordination and relieved him of his command. The Joint Chiefs of Staff agreed, calling the escalation MacArthur had urged “the wrong war, at the wrong place, at the wrong time, and with the wrong enemy.” Nonetheless, the public gave MacArthur a hero’s welcome in New York with the largest ticker tape parade in the nation’s history.

By July 1951, the UN forces had recovered from the setbacks earlier in the year and pushed North Korean and Chinese forces back across the thirty-eighth parallel, and peace talks began. However, combat raged on for more than two additional years. The primary source of contention was the fate of prisoners of war. The Chinese and North Koreans insisted that their prisoners be returned to them, but many of these men did not wish to be repatriated. Finally, an armistice agreement was signed on July 27, 1953. A border between North and South Korea, one quite close to the original thirty-eighth parallel line, was agreed upon.
A demilitarized zone between the two nations was established, and both sides agreed that prisoners of war would be allowed to choose whether to be returned to their homelands. Five million people died in the three-year conflict. Of these, around 36,500 were U.S. soldiers; a majority were Korean civilians.

As the war in Korea came to an end, so did one of the most frightening anti-Communist campaigns in the United States. After charging the U.S. State Department with harboring Communists, Senator Joseph McCarthy had continued to make similar accusations against other government agencies. Prominent Republicans like Senator Robert Taft and Congressman Richard Nixon regarded McCarthy as an asset who targeted Democratic politicians, and they supported his actions. In 1953, as chair of the Senate Committee on Government Operations, McCarthy investigated the Voice of America, which broadcast news and pro-U.S. propaganda to foreign countries, and the State Department’s overseas libraries. After an aborted effort to investigate Protestant clergy, McCarthy turned his attention to the U.S. Army. This proved to be the end of the senator’s political career. From April to June 1954, the Army-McCarthy Hearings were televised, and the American public, able to witness his use of intimidation and innuendo firsthand, rejected McCarthy’s approach to rooting out Communism in the United States (Figure 28.8). In December 1954, the U.S. Senate officially condemned his actions with a censure, ending his prospects for political leadership.

One particularly heinous aspect of the hunt for Communists in the United States, likened by playwright Arthur Miller to the witch hunts of old, was its effort to root out gay men and lesbians employed by the government. Many anti-Communists, including McCarthy, believed that gay men, referred to by Senator Everett Dirksen as “lavender lads,” were morally weak and thus were particularly likely to betray their country. Many also believed that lesbians and gay men were prone to being blackmailed by Soviet agents because of their sexual orientation, which at the time was regarded by psychiatrists as a form of mental illness.

28.3 The American Dream

Learning Objectives
By the end of this section, you will be able to:
Describe President Dwight D. Eisenhower’s domestic and foreign policies
Discuss gender roles in the 1950s
Discuss the growth of the suburbs and the effect of suburbanization on American society

Against the backdrop of the Cold War, Americans dedicated themselves to building a peaceful and prosperous society after the deprivation and instability of the Great Depression and World War II. Dwight D. Eisenhower, the general who led the United States to victory in Europe in 1945, proved to be the perfect president for the new era. Lacking strong conservative positions, he steered a middle path between conservatism and liberalism, and presided over a peacetime decade of economic growth and social conformity. In foreign affairs, Eisenhower’s New Look policy simultaneously expanded the nation’s nuclear arsenal and prevented the expansion of the defense budget for conventional forces.

WE LIKE IKE
After Harry Truman declined to run again for the presidency, the election of 1952 emerged as a contest between the Democratic nominee, Illinois governor Adlai Stevenson, and Republican Dwight D. Eisenhower, who had directed American forces in Europe during World War II (Figure 28.9).
Eisenhower campaigned largely on a promise to end the war in Korea, a conflict the public had grown weary of fighting. He also vowed to fight Communism both at home and abroad, a commitment he demonstrated by choosing as his running mate Richard M. Nixon, a congressman who had made a name for himself by pursuing Communists, notably former State Department employee and suspected Soviet agent Alger Hiss. In 1952, Eisenhower supporters enthusiastically proclaimed “We Like Ike,” and Eisenhower defeated Stevenson by winning 54 percent of the popular vote and 87 percent of the electoral vote (Figure 28.10).

When he assumed office in 1953, Eisenhower employed a leadership style he had developed during his years of military service. He was calm and willing to delegate authority regarding domestic affairs to his cabinet members, allowing him to focus his own efforts on foreign policy. Unlike many earlier presidents, such as Harry Truman, Eisenhower was largely nonpartisan and consistently sought a middle ground between liberalism and conservatism. He strove to balance the federal budget, which appealed to conservative Republicans, but retained much of the New Deal and even expanded Social Security. He maintained high levels of defense spending but, in his farewell speech in 1961, warned about the growth of the military-industrial complex, the matrix of relationships between officials in the Department of Defense and executives in the defense industry who all benefited from increases in defense spending. He disliked the tactics of Joseph McCarthy but did not oppose him directly, preferring to remain above the fray. He saw himself as a leader called upon to do his best for his country, not as a politician engaged in a contest for advantage over rivals.

In keeping with his goal of a balanced budget, Eisenhower switched the emphasis in defense from larger conventional forces to greater stockpiles of nuclear weapons. His New Look strategy embraced nuclear “massive retaliation,” a plan for a nuclear response to a first Soviet strike so devastating that the attackers would not be able to respond. Some labeled this approach “Mutually Assured Destruction,” or MAD.

Part of preparing for a possible war with the Soviet Union was informing the American public what to do in the event of a nuclear attack. The government provided instructions for building and equipping bomb shelters in the basement or backyard, and some cities constructed municipal shelters. Schools purchased dog tags to help identify students in the aftermath of an attack and showed children instructional films telling them what to do if atomic bombs were dropped on the city where they lived.

Americana
“A Guide for Surviving Nuclear War”
To prepare its citizens for the possibility of nuclear war, in 1950, the U.S. government published and distributed informative pamphlets such as “A Guide for Surviving Nuclear War,” excerpted here.

Just like fire bombs and ordinary high explosives, atomic weapons cause most of their death and damage by blast and heat. So first let’s look at a few things you can do to escape these two dangers. Even if you have only a second’s warning, there is one important thing you can do to lessen your chances of injury by blast: Fall flat on your face. More than half of all wounds are the result of being bodily tossed about or being struck by falling and flying objects. If you lie down flat, you are least likely to be thrown about.
If you have time to pick a good spot, there is less chance of your being struck by flying glass and other things. If you are inside a building, the best place to flatten out is close against the cellar wall. If you haven’t time to get down there, lie down along an inside wall, or duck under a bed or table. . . . If caught out-of-doors, either drop down alongside the base of a good substantial building—avoid flimsy, wooden ones likely to be blown over on top of you—or else jump in any handy ditch or gutter. When you fall flat to protect yourself from a bombing, don’t look up to see what is coming. Even during the daylight hours, the flash from a bursting A-bomb can cause several moments of blindness, if you’re facing that way. To prevent it, bury your face in your arms and hold it there for 10 to 12 seconds after the explosion. . . . If you work in the open, always wear full-length, loose-fitting, light-colored clothes in time of emergency. Never go around with your sleeves rolled up. Always wear a hat—the brim could save you a serious face burn.

What do you think was the purpose of these directions? Do you think they could actually help people survive an atomic bomb blast? If not, why publish such booklets?

Government and industry allocated enormous amounts of money to the research and development of more powerful weapons. This investment generated rapid strides in missile technology as well as increasingly sensitive radar. Computers that could react more quickly than humans and thereby shoot down speeding missiles were also investigated. Many scientists on both sides of the Cold War, including captured Germans such as rocket engineer Wernher von Braun, worked on these devices. An early success for the West came in 1950, when Alan Turing, a British mathematician who had helped break Germany’s Enigma code during World War II, proposed a test of whether a machine could convincingly mimic human thought. His ideas led scientists to consider the possibility of developing true artificial intelligence.

However, the United States often feared that the Soviets were making greater strides in developing technology with potential military applications. This was especially true following the Soviet Union’s launch of Sputnik (Figure 28.11), the first manmade satellite, in October 1957. In September 1958, Congress passed the National Defense Education Act, which pumped over $775 million into educational programs over four years, especially those programs that focused on math and science. Congressional appropriations to the National Science Foundation also increased by $100 million in a single year, from $34 million in 1958 to $134 million in 1959. One consequence of this increased funding was the growth of science and engineering programs at American universities.

In the diplomatic sphere, Eisenhower pushed Secretary of State John Foster Dulles to take a firmer stance against the Soviets to reassure European allies of continued American support. At the same time, keenly sensing that the stalemate in Korea had cost Truman his popularity, Eisenhower worked to avoid being drawn into foreign wars. Thus, when the French found themselves fighting Vietnamese Communists for control of France’s former colony of Indochina, Eisenhower provided money but not troops. Likewise, the United States took no steps when Hungary attempted to break away from Soviet domination in 1956. The United States also refused to be drawn in when Great Britain, France, and Israel invaded the Suez Canal Zone following Egypt’s nationalization of the canal in 1956.
Indeed, Eisenhower, wishing to avoid conflict with the Soviet Union, threatened to impose economic sanctions on the invading countries if they did not withdraw.

SUBURBANIZATION
Although the Eisenhower years were marked by fear of the Soviet Union and its military might, they were also a time of peace and prosperity. Even as many Americans remained mired in poverty, many others who had historically faced limited economic opportunities, such as African Americans and union workers, were better off financially in the 1950s and rose into the ranks of the middle class. Wishing to build the secure life that the Great Depression had deprived their parents of, young men and women married in record numbers and purchased homes where they could start families of their own. In 1940, the rate of homeownership in the United States was 43.6 percent. By 1960, it was almost 62 percent. Many of these newly purchased homes had been built in the new suburban areas that began to encircle American cities after the war.

Although middle-class families had begun to move to the suburbs beginning in the nineteenth century, suburban growth accelerated rapidly after World War II. Several factors contributed to this development. During World War II, the United States had suffered from a housing shortage, especially in cities with shipyards or large defense plants. Now that the war was over, real estate developers and contractors rushed to alleviate the scarcity. Unused land on the fringes of American cities provided the perfect place for new housing, which attracted not only the middle class, which had long sought homes outside the crowded cities, but also blue-collar workers who took advantage of the low-interest mortgages offered by the GI Bill.

An additional factor was the use of prefabricated construction techniques pioneered during World War II, which allowed houses complete with plumbing, electrical wiring, and appliances to be built and painted in a day. Employing these methods, developers built acres of inexpensive tract housing throughout the country. One of the first developers to take advantage of this method was William Levitt, who purchased farmland in Nassau County, Long Island, in 1947 and built thousands of prefabricated houses. The new community was named Levittown. Levitt’s houses cost only $8,000 and could be bought with little or no down payment. The first day they were offered for sale, more than one thousand were purchased. Levitt went on to build similar developments, also called Levittown, in New Jersey and Pennsylvania (Figure 28.12). As developers around the country rushed to emulate him, the name Levittown became synonymous with suburban tract housing, in which entire neighborhoods were built to either a single plan or a mere handful of designs. The houses were so similar that workers told of coming home late at night and walking into the wrong one. Levittown homes were similar in other ways as well; most were owned by White families. Levitt used restrictive language in his agreements with potential homeowners to ensure that only Whites would live in his communities.

In the decade between 1950 and 1960, the suburbs grew by 46 percent. The transition from urban to suburban life exerted profound effects on both the economy and society. For example, fifteen of the largest U.S. cities saw their tax bases shrink significantly in the postwar period, and the apportionment of seats in the House of Representatives shifted to the suburbs and away from urban areas.
The development of the suburbs also increased reliance on the automobile for transportation. Suburban men drove to work in nearby cities or, when possible, were driven to commuter rail stations by their wives. In the early years of suburban development, before schools, parks, and supermarkets were built, access to an automobile was crucial, and the pressure on families to purchase a second one was strong. As families rushed to purchase them, the annual production of passenger cars leaped from 2.2 million to 8 million between 1946 and 1955, and by 1960, about 20 percent of suburban families owned two cars. The growing number of cars on the road changed consumption patterns, and drive-in and drive-through convenience stores, restaurants, and movie theaters began to dot the landscape. The first McDonald’s drive-in opened in San Bernardino, California, in 1948 to cater to drivers in a hurry.

As drivers jammed highways and small streets in record numbers, cities and states rushed to build additional roadways and ease congestion. To help finance these massive construction efforts, states began taxing gasoline, and the federal government provided billions of dollars for the construction of the interstate highway system (Figure 28.13). The resulting construction projects, designed to make it easier for suburbanites to commute to and from cities, often destroyed urban working-class neighborhoods. Increased funding for highway construction also left less money for public transportation, making it impossible for those who could not afford automobiles to live in the suburbs.

THE ORGANIZATION MAN
As the government poured money into the defense industry and into universities that conducted research for the government, the economy boomed. The construction and automobile industries employed thousands, as did the industries they relied upon: steel, oil and gasoline refining, rubber, and lumber. As people moved into new homes, their purchases of appliances, carpeting, furniture, and home decorations spurred growth in other industries. The building of miles of roads also employed thousands. Unemployment was low, and wages for members of both the working and middle classes were high.

Following World War II, the majority of White Americans were members of the middle class, based on such criteria as education, income, and home ownership. Even most blue-collar families could afford such elements of a middle-class lifestyle as new cars, suburban homes, and regular vacations. Most African Americans, however, were not members of the middle class. In 1950, the median income for White families was $20,656, whereas for Black families it was $11,203. By 1960, when the average White family earned $28,485 a year, Black people still lagged behind at $15,786; nevertheless, this represented a more than 40 percent increase in African American income in the space of a decade.

While working-class men found jobs in factories and on construction crews, those in the middle class often worked for corporations that, as a result of government spending, had grown substantially during World War II and were still getting larger. Such corporations, far too large to allow managers to form personal relationships with all of their subordinates, valued conformity to company rules and standards above all else. In his best-selling book The Organization Man, however, William H. Whyte criticized the notion that conformity was the best path to success and self-fulfillment.
Conformity was still the watchword of suburban life: Many neighborhoods had rules mandating what types of clotheslines could be used and prohibited residents from parking their cars on the street. Above all, conforming to societal norms meant marrying young and having children. In the post-World War II period, marriage rates rose; the average age at first marriage dropped to twenty-three for men and twenty for women. Between 1946 and 1964, married couples also gave birth to the largest generation in U.S. history to date; this baby boom resulted in the cohort known as the baby boomers. Conformity also required that the wives of both working- and middle-class men stay home and raise children instead of working for wages outside the home. Most conformed to this norm, at least while their children were young. Nevertheless, 40 percent of women with young children and half of women with older children sought at least part-time employment. They did so partly out of necessity and partly to pay for the new elements of “the good life”—second cars, vacations, and college education for their children.

The children born during the baby boom were members of a more privileged generation than their parents had been. Entire industries sprang up to cater to their need for clothing, toys, games, books, and breakfast cereals. For the first time in U.S. history, attending high school was an experience shared by the majority, regardless of race or region. As the baby boomers grew into adolescence, marketers realized that they not only controlled large amounts of disposable income earned at part-time jobs, but they exerted a great deal of influence over their parents’ purchases as well. Madison Avenue began to appeal to teenage interests. Boys yearned for cars, and girls of all ethnicities wanted boyfriends who had them. New fashion magazines for adolescent girls, such as Seventeen, advertised the latest clothing and cosmetics, and teen romance magazines, like Copper Romance, a publication for young African American women, filled drugstore racks. The music and movie industries also altered their products to appeal to affluent adolescents who were growing tired of parental constraints.

28.4 Popular Culture and Mass Media

Learning Objectives
By the end of this section, you will be able to:
Describe Americans’ different responses to rock and roll music
Discuss the way contemporary movies and television reflected postwar American society

With a greater generational consciousness than previous generations, the baby boomers sought to define and redefine their identities in numerous ways. Music, especially rock and roll, reflected their desire to rebel against adult authority. Other forms of popular culture, such as movies and television, sought to entertain, while reinforcing values such as religious faith, patriotism, and conformity to societal norms.

ROCKING AROUND THE CLOCK
In the late 1940s, some White country musicians began to experiment with the rhythms of the blues, a decades-old musical genre of rural southern Black people. This experimentation led to the creation of a new musical form known as rockabilly, and by the 1950s, rockabilly had developed into rock and roll. Rock and roll music celebrated themes such as young love and freedom from the oppression of middle-class society.
It quickly grew in favor among American teens, thanks largely to the efforts of disc jockey Alan Freed, who named and popularized the music by playing it on the radio in Cleveland, where he also organized the first rock and roll concert, and later in New York. The theme of rebellion against authority, present in many rock and roll songs, appealed to teens. In 1954, Bill Haley and His Comets provided youth with an anthem for their rebellion—“Rock Around the Clock” (Figure 28.14). The song, used in the 1955 movie Blackboard Jungle about a White teacher at a troubled inner-city high school, seemed to be calling for teens to declare their independence from adult control.

Haley illustrated how White artists could take musical motifs from the African American community and achieve mainstream success. Teen heartthrob Elvis Presley rose to stardom doing the same. Thus, besides encouraging a feeling of youthful rebellion, rock and roll also began to tear down color barriers, as White youths sought out African American musicians such as Chuck Berry and Little Richard (Figure 28.14).

While youth had found an outlet for their feelings and concerns, parents were much less enthused about rock and roll and the values it seemed to promote. Many regarded the music as a threat to American values. When Elvis Presley appeared on The Ed Sullivan Show, a popular television variety program, the camera deliberately focused on his torso and did not show his swiveling hips or legs shaking in time to the music. Despite adults’ dislike of the genre, or perhaps because of it, more than 68 percent of the music played on the radio in 1956 was rock and roll.

HOLLYWOOD ON THE DEFENSIVE
At first, Hollywood encountered difficulties in adjusting to the post-World War II environment. Although domestic audiences reached a record high in 1946 and the war’s end meant expanding international markets too, the groundwork for the eventual dismantling of the traditional studio system was laid in 1948, with a landmark decision by the U.S. Supreme Court. Previously, film studios had owned their own movie theater chains in which they exhibited the films they produced; however, in United States v. Paramount Pictures, Inc., this vertical integration of the industry—the complete control by one firm of the production, distribution, and exhibition of motion pictures—was deemed a violation of antitrust laws.

The HUAC hearings also targeted Hollywood. When eleven “unfriendly witnesses” were called to testify before Congress about Communism in the film industry in October 1947, only playwright Bertolt Brecht answered questions. The other ten, who refused to testify, were cited for contempt of Congress on November 24. The next day, film executives declared that the so-called “Hollywood Ten” would no longer be employed in the industry until they had sworn they were not Communists (Figure 28.15). Eventually, more than three hundred actors, screenwriters, directors, musicians, and other entertainment professionals were placed on the industry blacklist. Some never worked in Hollywood again; others directed films or wrote screenplays under assumed names.

Hollywood reacted aggressively to these various challenges. Filmmakers tried new techniques, such as the widescreen formats CinemaScope and Cinerama, as well as 3-D. Audiences were drawn to movies not because of gimmicks, however, but because of the stories they told.
Dramas and romantic comedies continued to be popular fare for adults, and, to appeal to teens, studios produced large numbers of horror films and movies starring music idols such as Elvis. Many films took espionage, a timely topic, as their subject matter, and science fiction hits such as Invasion of the Body Snatchers, about a small town whose inhabitants fall prey to space aliens, played on audience fears of both Communist invasion and nuclear technology.

THE TRIUMPH OF TELEVISION
By far the greatest challenge to Hollywood, however, came from the relatively new medium of television. Although the technology had been developed in the late 1920s, through much of the 1940s only a fairly small audience of the wealthy had access to it. As a result, programming was limited. With the post-World War II economic boom, all this changed. In 1950, there were just under 4 million households with a television set, or 9 percent of all U.S. households. Five years later, that number had grown to over 30 million, or nearly 65 percent of all U.S. households (Figure 28.16).

Various types of programs were broadcast on the handful of major networks: situation comedies, variety programs, game shows, soap operas, talk shows, medical dramas, adventure series, cartoons, and police procedurals. Many comedies presented an idealized image of White suburban family life: Happy housewife mothers, wise fathers, and mischievous but not dangerously rebellious children were constants on shows like Leave It to Beaver and Father Knows Best in the late 1950s. These shows also reinforced certain perspectives on the values of individualism and family—values that came to be redefined as “American” in opposition to alleged Communist collectivism. Westerns, which stressed unity in the face of danger and the ability to survive in hostile environments, were popular too. Programming for children began to emerge with shows such as Captain Kangaroo, Romper Room, and The Mickey Mouse Club, designed to appeal to members of the baby boom.

28.5 The African American Struggle for Civil Rights

Learning Objectives
By the end of this section, you will be able to:
Explain how Presidents Truman and Eisenhower addressed civil rights issues
Discuss efforts by African Americans to end discrimination and segregation
Describe southern Whites’ response to the civil rights movement

In the aftermath of World War II, African Americans began to mount organized resistance to racially discriminatory policies in force throughout much of the United States. In the South, they used a combination of legal challenges and grassroots activism to begin dismantling the racial segregation that had stood for nearly a century following the end of Reconstruction. Community activists and civil rights leaders targeted racially discriminatory housing practices, segregated transportation, and legal requirements that African Americans and Whites be educated separately. While many of these challenges were successful, life did not necessarily improve for African Americans. Hostile Whites fought these changes in any way they could, including by resorting to violence.

EARLY VICTORIES
During World War II, many African Americans had supported the “Double V Campaign,” which called on them to defeat foreign enemies while simultaneously fighting against segregation and discrimination at home.
After World War II ended, many returned home to discover that, despite their sacrifices, the United States was not willing to extend them any greater rights than they had enjoyed before the war. Particularly rankling was the fact that although African American veterans were legally entitled to draw benefits under the GI Bill, discriminatory practices prevented them from doing so. For example, many banks would not give them mortgages if they wished to buy homes in predominantly African American neighborhoods, which banks often considered too risky an investment. However, African Americans who attempted to purchase homes in White neighborhoods often found themselves unable to do so because of real estate covenants that prevented owners from selling their property to Black people. Indeed, when a Black family purchased a Levittown house in 1957, they were subjected to harassment and threats of violence.

The postwar era, however, saw African Americans make greater use of the courts to defend their rights. In 1944, an African American woman, Irene Morgan, was arrested in Virginia for refusing to give up her seat on an interstate bus and sued to have her conviction overturned. In Morgan v. the Commonwealth of Virginia in 1946, the U.S. Supreme Court ruled that the conviction should be overturned because it violated the interstate commerce clause of the Constitution. This victory emboldened some civil rights activists to launch the Journey of Reconciliation, a bus trip taken by eight African American men and eight White men through the states of the Upper South to test the South’s enforcement of the Morgan decision.

Other victories followed. In 1948, in Shelley v. Kraemer, the U.S. Supreme Court held that courts could not enforce real estate covenants that restricted the purchase or sale of property based on race. In 1950, the NAACP brought a case before the U.S. Supreme Court that it hoped would help to undermine the concept of “separate but equal” as espoused in the 1896 decision in Plessy v. Ferguson, which gave legal sanction to segregated school systems. Sweatt v. Painter was a case brought by Heman Marion Sweatt, who sued the University of Texas for denying him admission to its law school because state law prohibited integrated education. Texas attempted to form a separate law school for African Americans only, but in its decision on the case, the U.S. Supreme Court rejected this solution, holding that the separate school provided neither equal facilities nor “intangibles,” such as the ability to form relationships with other future lawyers, that a professional school should provide.

Not all efforts to enact desegregation required the use of the courts, however. On April 15, 1947, Jackie Robinson started for the Brooklyn Dodgers, playing first base. He was the first African American to play baseball in the National League, breaking the color barrier. Although African Americans had their own baseball teams in the Negro Leagues, Robinson opened the gates for them to play in direct competition with White players in the major leagues. Other African American athletes also began to challenge the segregation of American sports. At the 1948 Summer Olympics, Alice Coachman, an African American, was the only American woman to take a gold medal in the games (Figure 28.17). These changes, while symbolically significant, were mere cracks in the wall of segregation.
DESEGREGATION AND INTEGRATION
Until 1954, racial segregation in education was not only legal but was required in seventeen states and permissible in several others (Figure 28.18). Utilizing evidence provided in sociological studies conducted by Kenneth Clark and Gunnar Myrdal, however, Thurgood Marshall, then chief counsel for the NAACP, successfully argued the landmark case Brown v. Board of Education of Topeka, Kansas before the U.S. Supreme Court led by Chief Justice Earl Warren. Marshall showed that the practice of segregation in public schools made African American students feel inferior. Even if the facilities provided were equal in nature, the Court noted in its decision, the very fact that some students were separated from others on the basis of their race made segregation unconstitutional.

Defining American
Thurgood Marshall on Fighting Racism
As a law student in 1933, Thurgood Marshall (Figure 28.19) was recruited by his mentor Charles Hamilton Houston to assist in gathering information for the defense of a Black man in Virginia accused of killing two White women. His continued close association with Houston led Marshall to aggressively defend Black people in the court system and to use the courts as the weapon by which equal rights might be extracted from the U.S. Constitution and a White racist system. Houston also suggested that it would be important to establish legal precedents regarding the Plessy v. Ferguson ruling of separate but equal.

By 1938, Marshall had become “Mr. Civil Rights” and formally organized the NAACP’s Legal Defense and Education Fund in 1940 to garner the resources to take on cases to break the racist justice system of America. A direct result of Marshall’s energies and commitment was his 1940 victory in a Supreme Court case, Chambers v. Florida, which held that confessions obtained by violence and torture were inadmissible in a court of law. His most well-known case was Brown v. Board of Education in 1954, which held that state laws establishing separate public schools for Black and White students were unconstitutional.

Later in life, Marshall reflected on his career fighting racism in a speech at Howard Law School in 1978:

Be aware of that myth, that everything is going to be all right. Don’t give in. I add that, because it seems to me, that what we need to do today is to refocus. Back in the 30s and 40s, we could go no place but to court. We knew then, the court was not the final solution. Many of us knew the final solution would have to be politics, if for no other reason, politics is cheaper than lawsuits. So now we have both. We have our legal arm, and we have our political arm. Let’s use them both. And don’t listen to this myth that it can be solved by either or that it has already been solved. Take it from me, it has not been solved.

When Marshall says that the problems of racism have not been solved, to what was he referring?

Plessy v. Ferguson had been overturned. The challenge now was to integrate schools. A year later, the U.S. Supreme Court ordered southern school systems to begin desegregation “with all deliberate speed.” Some school districts voluntarily integrated their schools. For many other districts, however, “deliberate speed” was very, very slow. It soon became clear that enforcing Brown v. the Board of Education would require presidential intervention. Eisenhower did not agree with the U.S. Supreme Court’s decision and did not wish to force southern states to integrate their schools.
However, as president, he was responsible for doing so. In 1957, Central High School in Little Rock, Arkansas, was forced to accept its first nine African American students, who became known as the Little Rock Nine. In response, Arkansas governor Orval Faubus called out the state National Guard to prevent the students from attending classes, removing the troops only after Eisenhower told him to do so. A subsequent attempt by the nine students to attend school resulted in mob violence. Eisenhower then placed the Arkansas National Guard under federal control and sent the U.S. Army’s 101st Airborne Division to escort the students to and from school as well as from class to class (Figure 28.20). It was the first time since the end of Reconstruction that federal troops had been used to protect the rights of African Americans in the South.

Throughout the course of the school year, the Little Rock Nine were insulted, harassed, and physically assaulted; nevertheless, they returned to school each day. At the end of the school year, the first African American student graduated from Central High. At the beginning of the 1958–1959 school year, Orval Faubus ordered all of Little Rock’s public schools closed. In the opinion of White segregationists, keeping all students out of school was preferable to having them attend integrated schools. In 1959, the U.S. Supreme Court ruled that the schools had to be reopened and that the process of desegregation had to proceed.

WHITE RESPONSES
Efforts to desegregate public schools led to a backlash among most southern Whites. Many greeted the Brown decision with horror; some World War II veterans questioned how the government they had fought for could betray them in such a fashion. Some White parents promptly withdrew their children from public schools and enrolled them in all-White private academies, many newly created for the sole purpose of keeping White children from attending integrated schools. Often, these “academies” held classes in neighbors’ basements or living rooms.

Other White southerners turned to state legislatures or courts to solve the problem of school integration. Orders to integrate school districts were routinely challenged in court. When the lawsuits proved unsuccessful, many southern school districts responded by closing all public schools, as Orval Faubus had done after Central High School was integrated. One county in Virginia closed its public schools for five years rather than see them integrated. Besides suing school districts, many southern segregationists filed lawsuits against the NAACP, trying to bankrupt the organization. Many national politicians supported the segregationist efforts. In 1956, ninety-six members of Congress signed “The Southern Manifesto,” in which they accused the U.S. Supreme Court of misusing its power and violating the principle of states’ rights, which maintained that states had rights equal to those of the federal government.

Unfortunately, many White southern racists, frightened by challenges to the social order, responded with violence. When Little Rock’s Central High School desegregated, an irate Ku Klux Klansman from a neighboring community sent a letter to the members of the city’s school board in which he denounced them as Communists and threatened to kill them. White rage sometimes erupted into murder. In August 1955, both White and Black Americans were shocked by the brutality of the murder of Emmett Till. Till, a fourteen-year-old boy from Chicago, had been vacationing with relatives in Mississippi.
While visiting a White-owned store, he had made a remark to the White woman behind the counter. A few days later, the husband and brother-in-law of the woman came to the home of Till’s relatives in the middle of the night and abducted the boy. Till’s beaten and mutilated body was found in a nearby river three days later. Till’s mother insisted on an open-casket funeral; she wished to use her son’s body to reveal the brutality of southern racism. The murder of a child who had been guilty of no more than a casual remark captured the nation’s attention, as did the acquittal of the two men who admitted killing him.

THE MONTGOMERY BUS BOYCOTT
One of those inspired by Till’s death was Rosa Parks, an NAACP member from Montgomery, Alabama, who became the face of the 1955–1956 Montgomery Bus Boycott. City ordinances in Montgomery segregated the city’s buses, forcing African American passengers to ride in the back section. They had to enter through the rear of the bus, could not share seats with White passengers, and, if the front of the bus was full and a White passenger requested an African American’s seat, had to relinquish their place to the White rider. The bus company also refused to hire African American drivers even though most of the people who rode the buses were Black.

On December 1, 1955, Rosa Parks refused to give her seat to a White man, and the Montgomery police arrested her. After being bailed out of jail, she decided to fight the laws requiring segregation in court. To support her, the Women’s Political Council, a group of African American female activists, organized a boycott of Montgomery’s buses. News of the boycott spread through newspaper notices and by word of mouth; ministers rallied their congregations to support the Women’s Political Council. Their efforts were successful, and forty thousand African American riders did not take the bus on December 5, the first day of the boycott. Other African American leaders within the city embraced the boycott and maintained it beyond December 5, Rosa Parks’ court date. Among them was a young minister named Martin Luther King, Jr.

For the next year, Black Montgomery residents avoided the city’s buses. Some organized carpools. Others paid for rides in African American-owned taxis, whose drivers reduced their fees. Most walked to and from school, work, and church for 381 days, the duration of the boycott. In June 1956, an Alabama federal court found the segregation ordinance unconstitutional. The city appealed, but the U.S. Supreme Court upheld the decision. The city’s buses were desegregated.
american_government
Summary
5.1 What Are Civil Rights and How Do We Identify Them?
The equal protection clause of the Fourteenth Amendment gives all people and groups in the United States the right to be treated equally regardless of individual attributes. That logic has been expanded in the twenty-first century to cover attributes such as race, color, ethnicity, sex, gender, sexual orientation, religion, and disability. People may still be treated unequally by the government, but only if there is at least a rational basis for it, such as a disability that makes a person unable to perform the essential functions required by a job, or if a person is too young to be trusted with an important responsibility, like driving safely. If the characteristic on which discrimination is based is related to sex, race, or ethnicity, the reason for it must serve, respectively, an important government interest or a compelling government interest.

5.2 The African American Struggle for Equality
Following the Civil War and the freeing of all slaves by the Thirteenth Amendment, a Republican Congress hoped to protect the freedmen from vengeful southern whites by passing the Fourteenth and Fifteenth Amendments, granting them citizenship, guaranteeing equal protection under the law, and securing the right to vote for black men. The end of Reconstruction, however, allowed white Southerners to regain control of the South’s political and legal system and institute openly discriminatory Jim Crow laws. While some early efforts to secure civil rights were successful, the greatest gains came after World War II. Through a combination of lawsuits, congressional acts, and direct action (such as President Truman’s executive order to desegregate the U.S. military), African Americans regained their voting rights and were guaranteed protection against discrimination in employment. Schools and public accommodations were desegregated. While much has been achieved, the struggle for equal treatment continues.

5.3 The Fight for Women’s Rights
At the time of the Revolution and for many decades following it, married women had no right to control their own property, vote, or run for public office. Beginning in the 1840s, a women’s movement emerged among women who were active in the abolition and temperance movements. Although some of their goals, such as achieving property rights for married women, were reached early on, their biggest goal, winning the right to vote, required the 1920 passage of the Nineteenth Amendment. Women secured more rights in the 1960s and 1970s, such as reproductive rights and the right not to be discriminated against in employment or education. Women continue to face many challenges: they are still paid less than men and are underrepresented in executive positions and elected office.

5.4 Civil Rights for Indigenous Groups: Native Americans, Alaskans, and Hawaiians
At the beginning of U.S. history, Indians were considered citizens of sovereign nations and thus ineligible for U.S. citizenship, and they were forced off their ancestral lands and onto reservations. Interest in Indian rights arose in the late nineteenth century, and in the 1930s, Native Americans were granted a degree of control over reservation lands and the right to govern themselves. Following World War II, they won greater rights to govern themselves, educate their children, decide how tribal lands should be used (to build casinos, for example), and practice traditional religious rituals without federal interference.
Alaska Natives and Native Hawaiians have faced similar difficulties, but since the 1960s, they have been somewhat successful in having lands restored to them or obtaining compensation for their loss. Despite these achievements, members of these groups still tend to be poorer, less educated, less likely to be employed, and more likely to suffer addictions or to be incarcerated than other racial and ethnic groups in the United States.

5.5 Equal Protection for Other Groups
Many Hispanics and Latinos were deprived of their right to vote and forced to attend segregated schools. Asian Americans were also segregated and sometimes banned from immigrating to the United States. The achievements of the African American civil rights movement, such as the Civil Rights Act of 1964, benefited these groups, however, and Latinos and Asians also brought lawsuits on their own behalf. Many, like the Chicano youth of the Southwest, also engaged in direct action. This brought important gains, especially in education. Recent concerns over illegal immigration have resulted in renewed attempts to discriminate against Latinos, however.

For a long time, fear of discovery kept many LGBT people closeted and thus hindered their efforts to form a united response to discrimination. Since World War II, however, the LGBT community has achieved the right to same-sex marriage and protection from discrimination in other areas of life as well. The Americans with Disabilities Act, enacted in 1990, has recognized the equal rights of people with disabilities to employment, transportation, and access to public education. People with disabilities still face much discrimination, however, and LGBT people are frequently victims of hate crimes. Some of the most serious forms of discrimination today are directed at religious minorities like Muslims, and many conservative Christians believe the recognition of LGBT rights threatens their religious freedoms.
Chapter Outline
5.1 What Are Civil Rights and How Do We Identify Them?
5.2 The African American Struggle for Equality
5.3 The Fight for Women’s Rights
5.4 Civil Rights for Indigenous Groups: Native Americans, Alaskans, and Hawaiians
5.5 Equal Protection for Other Groups

Introduction
The United States’ founding principles are liberty, equality, and justice. However, not all its citizens have always enjoyed equal opportunities, the same treatment under the law, or all the liberties extended to others. Well into the twentieth century, many were routinely discriminated against because of sex, race, ethnicity or country of origin, religion, sexual orientation, or physical or mental abilities. When we consider the experiences of white women and ethnic minorities, for much of U.S. history the majority of the country’s people have been deprived of basic rights and opportunities, and sometimes of citizenship itself.

The fight to secure equal rights for all continues today. While many changes must still be made, the past one hundred years, especially the past few decades, have brought significant gains for people long discriminated against. Yet, as the protest over the building of an Islamic community center in Lower Manhattan demonstrates (Figure 5.1), people still encounter prejudice, injustice, and negative stereotypes that lead to discrimination, marginalization, and even exclusion from civic life. What is the difference between civil liberties and civil rights? How did the African American struggle for civil rights evolve? What challenges did women overcome in securing the right to vote, and what obstacles do they and other U.S. groups still face? This chapter addresses these and other questions in exploring the essential concepts of civil rights.
[ { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Because affirmative action attempts to redress discrimination on the basis of race or ethnicity , it is generally subject to the strict scrutiny standard , which means the burden of proof is on the government to demonstrate the necessity of racial discrimination to achieve a compelling governmental interest . <hl> In 1978 , in Bakke v . California , the Supreme Court upheld affirmative action and said that colleges and universities could consider race when deciding whom to admit but could not establish racial quotas . 57 In 2003 , the Supreme Court reaffirmed the Bakke decision in Grutter v . Bollinger , which said that taking race or ethnicity into account as one of several factors in admitting a student to a college or university was acceptable , but a system setting aside seats for a specific quota of minority students was not . 58 All these issues are back under discussion in the Supreme Court with the re-arguing of Fisher v . University of Texas . 59 In Fisher v . University of Texas ( 2013 , known as Fisher I ) , University of Texas student Abigail Fisher brought suit to declare UT ’ s race-based admissions policy as inconsistent with Grutter . The court did not see the UT policy that way and allowed it , so long as it remained narrowly tailored and not quota-based . Fisher II ( 2016 ) was decided by a 4 – 3 majority . It allowed race-based admissions , but required that the utility of such an approach had to be re-established on a regular basis . Should race be a factor in deciding who will be admitted to a particular college ? Why or why not ? 5.3 The Fight for Women ’ s Rights <hl> Discrimination against members of racial , ethnic , or religious groups or those of various national origins is reviewed to the greatest degree by the courts , which apply the strict scrutiny standard in these cases . <hl> Under strict scrutiny , the burden of proof is on the government to demonstrate that there is a compelling governmental interest in treating people from one group differently from those who are not part of that group — the law or action can be “ narrowly tailored ” to achieve the goal in question , and that it is the “ least restrictive means ” available to achieve that goal . 10 In other words , if there is a non-discriminatory way to accomplish the goal in question , discrimination should not take place . In the modern era , laws and actions that are challenged under strict scrutiny have rarely been upheld . Strict scrutiny , however , was the legal basis for the Supreme Court ’ s 1944 upholding of the legality of the internment of Japanese Americans during World War II , discussed later in this chapter . 11 Finally , affirmative action consists of government programs and policies designed to benefit members of groups historically subject to discrimination . Much of the controversy surrounding affirmative action is about whether strict scrutiny should be applied to these cases .", "hl_sentences": "Because affirmative action attempts to redress discrimination on the basis of race or ethnicity , it is generally subject to the strict scrutiny standard , which means the burden of proof is on the government to demonstrate the necessity of racial discrimination to achieve a compelling governmental interest . 
Discrimination against members of racial , ethnic , or religious groups or those of various national origins is reviewed to the greatest degree by the courts , which apply the strict scrutiny standard in these cases .", "question": { "cloze_format": "The legal standard that the courts would use in deciding their case is the ___ .", "normal_format": "A group of African American students believes a college admissions test that is used by a public university discriminates against them. What legal standard would the courts use in deciding their case?", "question_choices": [ "rational basis test", "intermediate scrutiny", "strict scrutiny", "equal protection" ], "question_id": "fs-id1164435652845", "question_text": "A group of African American students believes a college admissions test that is used by a public university discriminates against them. What legal standard would the courts use in deciding their case?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "intermediate scrutiny" }, "bloom": null, "hl_context": "The changes wrought by the Fourteenth Amendment were more extensive . <hl> In addition to introducing the equal protection clause to the Constitution , this amendment also extended the due process clause of the Fifth Amendment to the states , required the states to respect the privileges or immunities of all citizens , and , for the first time , defined citizenship at the national and state levels . <hl> People could no longer be excluded from citizenship based solely on their race . Although some of these provisions were rendered mostly toothless by the courts or the lack of political action to enforce them , others were pivotal in the expansion of civil rights . <hl> Discrimination based on gender or sex is generally examined with intermediate scrutiny . <hl> The standard of intermediate scrutiny was first applied by the Supreme Court in Craig v . Boren ( 1976 ) and again in Clark v . Jeter ( 1988 ) . 7 It requires the government to demonstrate that treating men and women differently is “ substantially related to an important governmental objective . ” This puts the burden of proof on the government to demonstrate why the unequal treatment is justifiable , not on the individual who alleges unfair discrimination has taken place . In practice , this means laws that treat men and women differently are sometimes upheld , although usually they are not . For example , in the 1980s and 1990s , the courts ruled that states could not operate single-sex institutions of higher education and that such schools , like South Carolina ’ s military college The Citadel , shown in Figure 5.2 , must admit both male and female students . 8 Women in the military are now also allowed to serve in all combat roles , although the courts have continued to allow the Selective Service System ( the draft ) to register only men and not women . 9 We can contrast civil rights with civil liberties , which are limitations on government power designed to protect our fundamental freedoms . For example , the Eighth Amendment prohibits the application of “ cruel and unusual punishments ” to those convicted of crimes , a limitation on government power . As another example , the guarantee of equal protection means the laws and the Constitution must be applied on an equal basis , limiting the government ’ s ability to discriminate or treat some people differently , unless the unequal treatment is based on a valid reason , such as age . 
A law that imprisons Asian Americans twice as long as Latinos for the same offense , or a law that says people with disabilities don ’ t have the right to contact members of Congress while other people do , would treat some people differently from others for no valid reason and might well be unconstitutional . <hl> According to the Supreme Court ’ s interpretation of the Equal Protection Clause , “ all persons similarly circumstanced shall be treated alike . ” 4 If people are not similarly circumstanced , however , they may be treated differently . <hl> Asian Americans and Latinos who have broken the same law are similarly circumstanced ; however , a blind driver or a ten-year-old driver is differently circumstanced than a sighted , adult driver . Civil rights are , at the most fundamental level , guarantees by the government that it will treat people equally , particularly people belonging to groups that have historically been denied the same rights and opportunities as others . The proclamation that “ all men are created equal ” appears in the Declaration of Independence , and the due process clause of the Fifth Amendment to the U . S . Constitution requires that the federal government treat people equally . <hl> According to Chief Justice Earl Warren in the Supreme Court case of Bolling v . Sharpe ( 1954 ) , “ discrimination may be so unjustifiable as to be violative of due process . ” 3 Additional guarantees of equality are provided by the equal protection clause of the Fourteenth Amendment , ratified in 1868 , which states in part that “ No State shall . <hl> . . deny to any person within its jurisdiction the equal protection of the laws . ” Thus , between the Fifth and Fourteenth Amendments , neither state governments nor the federal government may treat people unequally unless unequal treatment is necessary to maintain important governmental interests , like public safety .", "hl_sentences": "In addition to introducing the equal protection clause to the Constitution , this amendment also extended the due process clause of the Fifth Amendment to the states , required the states to respect the privileges or immunities of all citizens , and , for the first time , defined citizenship at the national and state levels . Discrimination based on gender or sex is generally examined with intermediate scrutiny . According to the Supreme Court ’ s interpretation of the Equal Protection Clause , “ all persons similarly circumstanced shall be treated alike . ” 4 If people are not similarly circumstanced , however , they may be treated differently . According to Chief Justice Earl Warren in the Supreme Court case of Bolling v . Sharpe ( 1954 ) , “ discrimination may be so unjustifiable as to be violative of due process . ” 3 Additional guarantees of equality are provided by the equal protection clause of the Fourteenth Amendment , ratified in 1868 , which states in part that “ No State shall .", "question": { "cloze_format": "The equal protection clause became part of the Constitution as a result of ________.", "normal_format": "The equal protection clause became part of the Constitution as a result of which of the following?", "question_choices": [ "affirmative action", "the Fourteenth Amendment", "intermediate scrutiny", "strict scrutiny" ], "question_id": "fs-id1164435565673", "question_text": "The equal protection clause became part of the Constitution as a result of ________." 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "The answer to this question lies in the purpose of the discriminatory practice . In most cases when the courts are deciding whether discrimination is unlawful , the government has to demonstrate only that it has a good reason for engaging in it . Unless the person or group challenging the law can prove otherwise , the courts will generally decide the discriminatory practice is allowed . <hl> In these cases , the courts are applying the rational basis test . <hl> <hl> That is , as long as there ’ s a reason for treating some people differently that is “ rationally related to a legitimate government interest , ” the discriminatory act or law or policy is acceptable . <hl> 5 For example , since letting blind people operate cars would be dangerous to others on the road , the law forbidding them to drive is reasonably justified on the grounds of safety ; thus , it is allowed even though it discriminates against the blind . Similarly , when universities and colleges refuse to admit students who fail to meet a certain test score or GPA , they can discriminate against students with weaker grades and test scores because these students most likely do not possess the knowledge or skills needed to do well in their classes and graduate from the institution . The universities and colleges have a legitimate reason for denying these students entrance . We can contrast civil rights with civil liberties , which are limitations on government power designed to protect our fundamental freedoms . For example , the Eighth Amendment prohibits the application of “ cruel and unusual punishments ” to those convicted of crimes , a limitation on government power . As another example , the guarantee of equal protection means the laws and the Constitution must be applied on an equal basis , limiting the government ’ s ability to discriminate or treat some people differently , unless the unequal treatment is based on a valid reason , such as age . A law that imprisons Asian Americans twice as long as Latinos for the same offense , or a law that says people with disabilities don ’ t have the right to contact members of Congress while other people do , would treat some people differently from others for no valid reason and might well be unconstitutional . <hl> According to the Supreme Court ’ s interpretation of the Equal Protection Clause , “ all persons similarly circumstanced shall be treated alike . ” 4 If people are not similarly circumstanced , however , they may be treated differently . <hl> <hl> Asian Americans and Latinos who have broken the same law are similarly circumstanced ; however , a blind driver or a ten-year-old driver is differently circumstanced than a sighted , adult driver . <hl>", "hl_sentences": "In these cases , the courts are applying the rational basis test . That is , as long as there ’ s a reason for treating some people differently that is “ rationally related to a legitimate government interest , ” the discriminatory act or law or policy is acceptable . According to the Supreme Court ’ s interpretation of the Equal Protection Clause , “ all persons similarly circumstanced shall be treated alike . ” 4 If people are not similarly circumstanced , however , they may be treated differently . 
Asian Americans and Latinos who have broken the same law are similarly circumstanced ; however , a blind driver or a ten-year-old driver is differently circumstanced than a sighted , adult driver .", "question": { "cloze_format": "The type of discrimination that would be subject to the rational basis test is ___.", "normal_format": "Which of the following types of discrimination would be subject to the rational basis test?", "question_choices": [ "A law that treats men differently from women", "An action by a state governor that treats Asian Americans differently from other citizens", "A law that treats whites differently from other citizens", "A law that treats 10-year-olds differently from 28-year-olds" ], "question_id": "fs-id1164435609939", "question_text": "Which of the following types of discrimination would be subject to the rational basis test?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "The landmark court decision of the judicial phase of the civil rights movement settled the Brown v . Board of Education case in 1954 . <hl> 32 In this case , the Supreme Court unanimously overturned its decision in Plessy v . Ferguson as it pertained to public education , stating that a separate but equal education was a logical impossibility . <hl> <hl> Even with the same funding and equivalent facilities , a segregated school could not have the same teachers or environment as the equivalent school for another race . <hl> The court also rested its decision in part on social science studies suggesting that racial discrimination led to feelings of inferiority among African American children . The only way to dispel this sense of inferiority was to end segregation and integrate public schools . With blacks effectively disenfranchised , the restored southern state governments undermined guarantees of equal treatment in the Fourteenth Amendment . They passed laws that excluded African Americans from juries and allowed the imprisonment and forced labor of “ idle ” black citizens . The laws also called for segregation of whites and blacks in public places under the doctrine known as “ separate but equal . ” As long as nominally equal facilities were provided for both whites and blacks , it was legal to require members of each race to use the facilities designated for them . Similarly , state and local governments passed laws limiting what neighborhoods blacks and whites could live in . Collectively , these discriminatory laws came to be known as Jim Crow laws . <hl> The Supreme Court upheld the separate but equal doctrine in 1896 in Plessy v . Ferguson , consistent with the Fourteenth Amendment ’ s equal protection clause , and allowed segregation to continue . <hl> 29", "hl_sentences": "32 In this case , the Supreme Court unanimously overturned its decision in Plessy v . Ferguson as it pertained to public education , stating that a separate but equal education was a logical impossibility . Even with the same funding and equivalent facilities , a segregated school could not have the same teachers or environment as the equivalent school for another race . The Supreme Court upheld the separate but equal doctrine in 1896 in Plessy v . 
Ferguson , consistent with the Fourteenth Amendment ’ s equal protection clause , and allowed segregation to continue .", "question": { "cloze_format": "The Supreme Court decision ruling that “separate but equal” was constitutional and allowed racial segregation to take place was ________.", "normal_format": "Which Supreme Court decision was ruling that “separate but equal” was constitutional and allowed racial segregation to take place?", "question_choices": [ "Brown v. Board of Education", "Plessy v. Ferguson", "Loving v. Virginia", "Shelley v. Kraemer" ], "question_id": "fs-id1164435664283", "question_text": "The Supreme Court decision ruling that “separate but equal” was constitutional and allowed racial segregation to take place was ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "vividly illustrated the continued resistance to black civil rights in the Deep South" }, "bloom": null, "hl_context": "<hl> The organizations ’ leaders planned a march from Selma to Montgomery in March 1965 . <hl> <hl> Their first attempt to march was violently broken up by state police and sheriff ’ s deputies ( Figure 5.8 ) . <hl> <hl> The second attempt was aborted because King feared it would lead to a brutal confrontation with police and violate a court order from a federal judge who had been sympathetic to the movement in the past . <hl> <hl> That night , three of the marchers , white ministers from the north , were attacked and beaten with clubs by members of the Ku Klux Klan ; one of the victims died from his injuries . <hl> <hl> Televised images of the brutality against protesters and the death of a minister led to greater public sympathy for the cause . <hl> Eventually , a third march was successful in reaching the state capital of Montgomery . 45", "hl_sentences": "The organizations ’ leaders planned a march from Selma to Montgomery in March 1965 . Their first attempt to march was violently broken up by state police and sheriff ’ s deputies ( Figure 5.8 ) . The second attempt was aborted because King feared it would lead to a brutal confrontation with police and violate a court order from a federal judge who had been sympathetic to the movement in the past . That night , three of the marchers , white ministers from the north , were attacked and beaten with clubs by members of the Ku Klux Klan ; one of the victims died from his injuries . Televised images of the brutality against protesters and the death of a minister led to greater public sympathy for the cause .", "question": { "cloze_format": "The 1965 Selma-to-Montgomery march was an important milestone in the civil rights movement because it ________.", "normal_format": "The 1965 Selma-to-Montgomery march was an important milestone in the civil rights movement because it what?", "question_choices": [ "vividly illustrated the continued resistance to black civil rights in the Deep South", "did not encounter any violent resistance", "led to the passage of the Civil Rights Act of 1964", "was the first major protest after the death of Martin Luther King, Jr." ], "question_id": "fs-id1164435546302", "question_text": "The 1965 Selma-to-Montgomery march was an important milestone in the civil rights movement because it ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B. 
suffrage for women" }, "bloom": null, "hl_context": "<hl> In 1848 , Stanton and Mott called for a women ’ s rights convention , the first ever held specifically to address the subject , at Seneca Falls , New York . <hl> <hl> At the Seneca Falls Convention , Stanton wrote the Declaration of Sentiments , which was modeled after the Declaration of Independence and proclaimed women were equal to men and deserved the same rights . <hl> <hl> Among the rights Stanton wished to see granted to women was suffrage , the right to vote . <hl> When called upon to sign the Declaration , many of the delegates feared that if women demanded the right to vote , the movement would be considered too radical and its members would become a laughingstock . The Declaration passed , but the resolution demanding suffrage was the only one that did not pass unanimously . 65", "hl_sentences": "In 1848 , Stanton and Mott called for a women ’ s rights convention , the first ever held specifically to address the subject , at Seneca Falls , New York . At the Seneca Falls Convention , Stanton wrote the Declaration of Sentiments , which was modeled after the Declaration of Independence and proclaimed women were equal to men and deserved the same rights . Among the rights Stanton wished to see granted to women was suffrage , the right to vote .", "question": { "cloze_format": "At the world’s first women’s rights convention in 1848, the most contentious issue proved to be _________.", "normal_format": "At the world’s first women’s rights convention in 1848, the most contentious issue proved to be what?", "question_choices": [ "A. the right to education for women", "B. suffrage for women", "C. access to the professions for women", "D. greater property rights for women" ], "question_id": "fs-id1164435732193", "question_text": "At the world’s first women’s rights convention in 1848, the most contentious issue proved to be _________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> The more radical National Woman ’ s Party ( NWP ) , led by Alice Paul , advocated the use of stronger tactics . <hl> <hl> The NWP held public protests and picketed outside the White House ( Figure 5.12 ) . <hl> 71 Demonstrators were often beaten and arrested , and suffragists were subjected to cruel treatment in jail . When some , like Paul , began hunger strikes to call attention to their cause , their jailers force-fed them , an incredibly painful and invasive experience for the women . 72 Finally , in 1920 , the triumphant passage of the Nineteenth Amendment granted all women the right to vote . Women were also granted the right to vote on matters involving liquor licenses , in school board elections , and in municipal elections in several states . However , this was often done because of stereotyped beliefs that associated women with moral reform and concern for children , not as a result of a belief in women ’ s equality . Furthermore , voting in municipal elections was restricted to women who owned property . 70 In 1890 , the two suffragist groups united to form the National American Woman Suffrage Association ( NAWSA ) . <hl> To call attention to their cause , members circulated petitions , lobbied politicians , and held parades in which hundreds of women and girls marched through the streets ( Figure 5.11 ) . <hl>", "hl_sentences": "The more radical National Woman ’ s Party ( NWP ) , led by Alice Paul , advocated the use of stronger tactics . 
The NWP held public protests and picketed outside the White House ( Figure 5.12 ) . To call attention to their cause , members circulated petitions , lobbied politicians , and held parades in which hundreds of women and girls marched through the streets ( Figure 5.11 ) .", "question": { "cloze_format": "NAWSA differs from the NWP because ___.", "normal_format": "How did NAWSA differ from the NWP?", "question_choices": [ "NAWSA worked to win votes for women on a state-by-state basis while the NWP wanted an amendment added to the Constitution.", "NAWSA attracted mostly middle-class women while NWP appealed to the working class.", "The NWP favored more confrontational tactics like protests and picketing while NAWSA circulated petitions and lobbied politicians.", "The NWP sought to deny African Americans the vote, but NAWSA wanted to enfranchise all women." ], "question_id": "fs-id1164435579428", "question_text": "How did NAWSA differ from the NWP?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "affirmative action" }, "bloom": null, "hl_context": "<hl> Finding a Middle Ground Affirmative Action One of the major controversies regarding race in the United States today is related to affirmative action , the practice of ensuring that members of historically disadvantaged or underrepresented groups have equal access to opportunities in education , the workplace , and government contracting . <hl> The phrase affirmative action originated in the Civil Rights Act of 1964 and Executive Order 11246 , and it has drawn controversy ever since . The Civil Rights Act of 1964 prohibited discrimination in employment , and Executive Order 11246 , issued in 1965 , forbade employment discrimination not only within the federal government but by federal contractors and contractors and subcontractors who received government funds .", "hl_sentences": "Finding a Middle Ground Affirmative Action One of the major controversies regarding race in the United States today is related to affirmative action , the practice of ensuring that members of historically disadvantaged or underrepresented groups have equal access to opportunities in education , the workplace , and government contracting .", "question": { "cloze_format": "The doctrine that people who do jobs that require the same level of skill, training, or education are thus entitled to equal pay is known as ________.", "normal_format": "What is the doctrine that people who do jobs that require the same level of skill, training, or education are thus entitled to equal pay known as?", "question_choices": [ "the glass ceiling", "substantial compensation", "comparable worth", "affirmative action" ], "question_id": "fs-id1164435608261", "question_text": "The doctrine that people who do jobs that require the same level of skill, training, or education are thus entitled to equal pay is known as ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "The next year , in Worcester v . Georgia , the Court ruled that whites could not enter tribal lands without the tribe ’ s permission . White Georgians , however , refused to abide by the Court ’ s decision , and President Andrew Jackson , a former Indian fighter , refused to enforce it . 95 Between 1831 and 1838 , members of several southern tribes , including the Cherokees , were forced by the U . S . Army to move west along routes shown in Figure 5.14 . 
<hl> The forced removal of the Cherokees to Oklahoma Territory , which had been set aside for settlement by displaced tribes and designated Indian Territory , resulted in the death of one-quarter of the tribe ’ s population . <hl> <hl> 96 The Cherokees remember this journey as the Trail of Tears . <hl>", "hl_sentences": "The forced removal of the Cherokees to Oklahoma Territory , which had been set aside for settlement by displaced tribes and designated Indian Territory , resulted in the death of one-quarter of the tribe ’ s population . 96 The Cherokees remember this journey as the Trail of Tears .", "question": { "cloze_format": "The tribe from Georgia to Oklahoma of which the Trail of Tears is the name given to its forced removal is ___.", "normal_format": "The Trail of Tears is the name given to the forced removal of which tribe from Georgia to Oklahoma?", "question_choices": [ "Lakota", "Paiute", "Navajo", "Cherokee" ], "question_id": "fs-id1164436863368", "question_text": "The Trail of Tears is the name given to the forced removal of this tribe from Georgia to Oklahoma." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "a radical group of Native American activists who occupied the settlement of Wounded Knee on the Pine Ridge Reservation" }, "bloom": null, "hl_context": "<hl> In 1973 , members of the American Indian Movement ( AIM ) , a more radical group than the occupiers of Alcatraz , temporarily took over the offices of the Bureau of Indian Affairs in Washington , DC . <hl> <hl> The following year , members of AIM and some two hundred Oglala Lakota supporters occupied the town of Wounded Knee on the Lakota tribe ’ s Pine Ridge Reservation in South Dakota , the site of an 1890 massacre of Lakota men , women , and children by the U . S . Army ( Figure 5.16 ) . <hl> Many of the Oglala were protesting the actions of their half-white tribal chieftain , who they claimed had worked too closely with the BIA . The occupiers also wished to protest the failure of the Justice Department to investigate acts of white violence against Lakota tribal members outside the bounds of the reservation .", "hl_sentences": "In 1973 , members of the American Indian Movement ( AIM ) , a more radical group than the occupiers of Alcatraz , temporarily took over the offices of the Bureau of Indian Affairs in Washington , DC . The following year , members of AIM and some two hundred Oglala Lakota supporters occupied the town of Wounded Knee on the Lakota tribe ’ s Pine Ridge Reservation in South Dakota , the site of an 1890 massacre of Lakota men , women , and children by the U . S . Army ( Figure 5.16 ) .", "question": { "cloze_format": "AIM was ________.", "normal_format": "What was AIM?", "question_choices": [ "a federal program that returned control of Native American education to tribal governments", "a radical group of Native American activists who occupied the settlement of Wounded Knee on the Pine Ridge Reservation", "an attempt to reduce the size of reservations", "a federal program to give funds to Native American tribes to help their members open small businesses that would employ tribal members" ], "question_id": "fs-id1164436775778", "question_text": "AIM was ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "the United Farm Workers union" }, "bloom": null, "hl_context": "Mexican American civil rights leaders were active in other areas as well . 
<hl> Throughout the 1960s , Cesar Chavez and Dolores Huerta fought for the rights of Mexican American agricultural laborers through their organization , the United Farm Workers ( UFW ) , a union for migrant workers they founded in 1962 . <hl> Chavez , Huerta , and the UFW proclaimed their solidarity with Filipino farm workers by joining them in a strike against grape growers in Delano , California , in 1965 . Chavez consciously adopted the tactics of the African American civil rights movement . In 1965 , he called upon all U . S . consumers to boycott California grapes ( Figure 5.17 ) , and in 1966 , he led the UFW on a 300 - mile march to Sacramento , the state capital , to bring the state farm workers ’ problems to the attention of the entire country . The strike finally ended in 1970 when the grape growers agreed to give the pickers better pay and benefits . 127", "hl_sentences": "Throughout the 1960s , Cesar Chavez and Dolores Huerta fought for the rights of Mexican American agricultural laborers through their organization , the United Farm Workers ( UFW ) , a union for migrant workers they founded in 1962 .", "question": { "cloze_format": "Mexican American farm workers in California organized ________ to demand higher pay from their employers.", "normal_format": "What did Mexican American farm workers in California organize to demand higher pay from their employers?", "question_choices": [ "the bracero program", "Operation Wetback", "the United Farm Workers union", "the Mattachine Society" ], "question_id": "fs-id1164435628809", "question_text": "Mexican American farm workers in California organized ________ to demand higher pay from their employers." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> Because Asian Americans are often stereotypically regarded as “ the model minority ” ( because it is assumed they are generally financially successful and do well academically ) , it is easy to forget that they have also often been discriminated against and denied their civil rights . <hl> Indeed , in the nineteenth century , Asians were among the most despised of all immigrant groups and were often subjected to the same laws enforcing segregation and forbidding interracial marriage as were African Americans and American Indians .", "hl_sentences": "Because Asian Americans are often stereotypically regarded as “ the model minority ” ( because it is assumed they are generally financially successful and do well academically ) , it is easy to forget that they have also often been discriminated against and denied their civil rights .", "question": { "cloze_format": "___ best describes attitudes toward Asian immigrants in the late nineteenth and early twentieth centuries.", "normal_format": "Which of the following best describes attitudes toward Asian immigrants in the late nineteenth and early twentieth centuries?", "question_choices": [ "Asian immigrants were welcomed to the United States and swiftly became financially successful.", "Asian immigrants were disliked by whites who feared competition for jobs, and several acts of Congress sought to restrict immigration and naturalization of Asians.", "Whites feared Asian immigrants because Japanese and Chinese Americans were often disloyal to the U.S. government.", "Asian immigrants got along well with whites but not with Mexican Americans or African Americans." 
], "question_id": "fs-id1164435635892", "question_text": "Which of the following best describes attitudes toward Asian immigrants in the late nineteenth and early twentieth centuries?" }, "references_are_paraphrase": 0 } ]
5
5.1 What Are Civil Rights and How Do We Identify Them?
Learning Objectives
By the end of this section, you will be able to:
Define the concept of civil rights
Describe the standards that courts use when deciding whether a discriminatory law or regulation is unconstitutional
Identify three core questions for recognizing a civil rights problem

The belief that people should be treated equally under the law is one of the cornerstones of political thought in the United States. Yet not all citizens have been treated equally throughout the nation’s history, and some are treated differently even today. For example, until 1920, nearly all women in the United States lacked the right to vote. Black men received the right to vote in 1870, but as late as 1940 only 3 percent of African American adults living in the South were registered to vote, largely due to laws designed to keep them from the polls. 2 Americans were not allowed to enter into legal marriage with a member of the same sex in many U.S. states until 2015. Some types of unequal treatment are considered acceptable, while others are not. No one would consider it acceptable to allow a ten-year-old to vote, because a child lacks the ability to understand important political issues, but all reasonable people would agree that it is wrong to mandate racial segregation or to deny someone the right to vote on the basis of race. It is important to understand which types of inequality are unacceptable and why.

DEFINING CIVIL RIGHTS
Civil rights are, at the most fundamental level, guarantees by the government that it will treat people equally, particularly people belonging to groups that have historically been denied the same rights and opportunities as others. The proclamation that “all men are created equal” appears in the Declaration of Independence, and the due process clause of the Fifth Amendment to the U.S. Constitution requires that the federal government treat people equally. According to Chief Justice Earl Warren in the Supreme Court case of Bolling v. Sharpe (1954), “discrimination may be so unjustifiable as to be violative of due process.” 3 Additional guarantees of equality are provided by the equal protection clause of the Fourteenth Amendment, ratified in 1868, which states in part that “No State shall . . . deny to any person within its jurisdiction the equal protection of the laws.” Thus, between the Fifth and Fourteenth Amendments, neither state governments nor the federal government may treat people unequally unless unequal treatment is necessary to maintain important governmental interests, like public safety.

We can contrast civil rights with civil liberties, which are limitations on government power designed to protect our fundamental freedoms. For example, the Eighth Amendment prohibits the application of “cruel and unusual punishments” to those convicted of crimes, a limitation on government power. As another example, the guarantee of equal protection means the laws and the Constitution must be applied on an equal basis, limiting the government’s ability to discriminate or treat some people differently, unless the unequal treatment is based on a valid reason, such as age. A law that imprisons Asian Americans twice as long as Latinos for the same offense, or a law that says people with disabilities don’t have the right to contact members of Congress while other people do, would treat some people differently from others for no valid reason and might well be unconstitutional.
According to the Supreme Court’s interpretation of the Equal Protection Clause, “all persons similarly circumstanced shall be treated alike.” 4 If people are not similarly circumstanced, however, they may be treated differently. Asian Americans and Latinos who have broken the same law are similarly circumstanced; however, a blind driver or a ten-year-old driver is differently circumstanced than a sighted, adult driver.

IDENTIFYING DISCRIMINATION
Laws that treat one group of people differently from others are not always unconstitutional. In fact, the government engages in legal discrimination quite often. In most states, you must be eighteen years old to smoke cigarettes and twenty-one to drink alcohol; these laws discriminate against the young. To get a driver’s license so you can legally drive a car on public roads, you have to be a minimum age and pass tests showing your knowledge, practical skills, and vision. Perhaps you are attending a public college or university run by the government; the school you attend has an open admissions policy, which means the school admits all who apply. Not all public colleges and universities have an open admissions policy, however. These schools may require that students have a high school diploma or a particular score on the SAT or ACT or a GPA above a certain number. In a sense, this is discrimination, because these requirements treat people unequally; people who do not have a high school diploma or a high enough GPA or SAT score are not admitted.

How can the federal, state, and local governments discriminate in all these ways even though the equal protection clause seems to suggest that everyone be treated the same? The answer to this question lies in the purpose of the discriminatory practice. In most cases when the courts are deciding whether discrimination is unlawful, the government has to demonstrate only that it has a good reason for engaging in it. Unless the person or group challenging the law can prove otherwise, the courts will generally decide the discriminatory practice is allowed. In these cases, the courts are applying the rational basis test. That is, as long as there’s a reason for treating some people differently that is “rationally related to a legitimate government interest,” the discriminatory act or law or policy is acceptable. 5 For example, since letting blind people operate cars would be dangerous to others on the road, the law forbidding them to drive is reasonably justified on the grounds of safety; thus, it is allowed even though it discriminates against the blind. Similarly, when universities and colleges refuse to admit students who fail to meet a certain test score or GPA, they can discriminate against students with weaker grades and test scores because these students most likely do not possess the knowledge or skills needed to do well in their classes and graduate from the institution. The universities and colleges have a legitimate reason for denying these students entrance.

The courts, however, are much more skeptical when it comes to certain other forms of discrimination. Because of the United States’ history of discrimination against people of non-white ancestry, women, and members of ethnic and religious minorities, the courts apply more stringent rules to policies, laws, and actions that discriminate on the basis of race, ethnicity, gender, religion, or national origin. 6 Discrimination based on gender or sex is generally examined with intermediate scrutiny.
The standard of intermediate scrutiny was first applied by the Supreme Court in Craig v. Boren (1976) and again in Clark v. Jeter (1988). 7 It requires the government to demonstrate that treating men and women differently is “substantially related to an important governmental objective.” This puts the burden of proof on the government to demonstrate why the unequal treatment is justifiable, not on the individual who alleges unfair discrimination has taken place. In practice, this means laws that treat men and women differently are sometimes upheld, although usually they are not. For example, in the 1980s and 1990s, the courts ruled that states could not operate single-sex institutions of higher education and that such schools, like South Carolina’s military college The Citadel, shown in Figure 5.2, must admit both male and female students. 8 Women in the military are now also allowed to serve in all combat roles, although the courts have continued to allow the Selective Service System (the draft) to register only men and not women. 9

Discrimination against members of racial, ethnic, or religious groups or those of various national origins is reviewed to the greatest degree by the courts, which apply the strict scrutiny standard in these cases. Under strict scrutiny, the burden of proof is on the government to demonstrate that there is a compelling governmental interest in treating people from one group differently from those who are not part of that group, that the law or action is “narrowly tailored” to achieve the goal in question, and that it is the “least restrictive means” available to achieve that goal. 10 In other words, if there is a non-discriminatory way to accomplish the goal in question, discrimination should not take place. In the modern era, laws and actions that are challenged under strict scrutiny have rarely been upheld. Strict scrutiny, however, was the legal basis for the Supreme Court’s 1944 upholding of the legality of the internment of Japanese Americans during World War II, discussed later in this chapter. 11

Finally, affirmative action consists of government programs and policies designed to benefit members of groups historically subject to discrimination. Much of the controversy surrounding affirmative action is about whether strict scrutiny should be applied to these cases.

PUTTING CIVIL RIGHTS IN THE CONSTITUTION
At the time of the nation’s founding, of course, the treatment of many groups was unequal: hundreds of thousands of people of African descent were not free, the rights of women were decidedly fewer than those of men, and the native peoples of North America were generally not considered U.S. citizens at all. While the early United States was perhaps a more inclusive society than most of the world at that time, equal treatment of all was at best still a radical idea.

The aftermath of the Civil War marked a turning point for civil rights. The Republican majority in Congress was enraged by the actions of the reconstituted governments of the southern states. In these states, many former Confederate politicians and their sympathizers returned to power and attempted to circumvent the Thirteenth Amendment’s freeing of slaves by passing laws known as the black codes. These laws were designed to reduce former slaves to the status of serfs or indentured servants; blacks were not just denied the right to vote but also could be arrested and jailed for vagrancy or idleness if they lacked jobs.
Blacks were excluded from public schools and state colleges and were subject to violence at the hands of whites (Figure 5.3). 12 To override the southern states' actions, lawmakers in Congress proposed two amendments to the Constitution designed to give political equality and power to former slaves; once passed by Congress and ratified by the necessary number of states, these became the Fourteenth and Fifteenth Amendments. The Fourteenth Amendment, in addition to including the equal protection clause as noted above, also was designed to ensure that the states would respect the civil liberties of freed slaves. The Fifteenth Amendment was proposed to ensure the right to vote for black men, which will be discussed in more detail later in this chapter.

IDENTIFYING CIVIL RIGHTS ISSUES

When we look back at the past, it's relatively easy to identify civil rights issues that arose. But looking into the future is much harder. For example, few people fifty years ago would have identified the rights of the LGBT community as an important civil rights issue or predicted it would become one, yet in the intervening decades it has certainly done so. Similarly, in past decades the rights of those with disabilities, particularly mental disabilities, were often ignored by the public at large. Many people with disabilities were institutionalized and given little further thought, and within the past century, it was common for those with mental disabilities to be subject to forced sterilization. 13 Today, most of us view this treatment as barbaric.

Clearly, then, new civil rights issues can emerge over time. How can we, as citizens, identify them as they emerge and distinguish genuine claims of discrimination from claims by those who have merely been unable to convince a majority to agree with their viewpoints? For example, how do we decide if twelve-year-olds are discriminated against because they are not allowed to vote? We can identify true discrimination by applying the following analytical process:

- Which groups? First, identify the group of people who are facing discrimination.
- Which right(s) are threatened? Second, what right or rights are being denied to members of this group?
- What do we do? Third, what can the government do to bring about a fair situation for the affected group? Is proposing and enacting such a remedy realistic?

Get Connected! Join the Fight for Civil Rights

One way to get involved in the fight for civil rights is to stay informed. The Southern Poverty Law Center (SPLC) is a not-for-profit advocacy group based in Montgomery, Alabama. Lawyers for the SPLC specialize in civil rights litigation and represent many people whose rights have been violated, from victims of hate crimes to undocumented immigrants. They provide summaries of important civil rights cases under their Docket section.

Activity: Visit the SPLC website to find current information about a variety of different hate groups. In what part of the country do hate groups seem to be concentrated? Where are hate incidents most likely to occur? What might be some reasons for this?

Link to Learning

Civil rights institutes are found throughout the United States and especially in the South. One of the most prominent civil rights institutes is the Birmingham Civil Rights Institute, which is located in Alabama.
5.2 The African American Struggle for Equality

Learning Objectives

By the end of this section, you will be able to:
- Identify key events in the history of African American civil rights
- Explain how the courts, Congress, and the executive branch supported the civil rights movement
- Describe the role of grassroots efforts in the civil rights movement

Many groups in U.S. history have sought recognition as equal citizens. Although each group's efforts have been notable and important, arguably the greatest, longest, and most violent struggle was that of African Americans, whose once-inferior legal status was even written into the text of the Constitution. Their fight for freedom and equality provided the legal and moral foundation for others who sought recognition of their equality later on.

SLAVERY AND THE CIVIL WAR

In the Declaration of Independence, Thomas Jefferson made the radical statement that "all men are created equal" and "are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness." Yet like other wealthy landowners of his time, Jefferson also owned hundreds of other human beings as his personal property. He recognized this contradiction and personally considered the institution of slavery to be a "hideous blot" on the nation. 14 However, in order to forge a political union that would stand the test of time, he and the other founders—and later the framers of the Constitution—chose not to address the issue in any definitive way. Political support for abolition was very much a minority stance at the time, although after the Revolution many of the northern states did abolish slavery for a variety of reasons. 15

As the new United States expanded westward, however, the issue of slavery became harder to ignore and ignited much controversy. Many opponents of slavery were willing to accept the institution if it remained largely confined to the South but did not want it to spread westward. They feared the expansion of slavery would lead to the political dominance of the South over the North and would disadvantage small farmers in the newly acquired western territories who could not afford slaves. 16 Abolitionists, primarily in the North, also argued that slavery was both immoral and contrary to basic U.S. values; they demanded an end to it.

The spread of slavery into the West seemed inevitable, however, following the Supreme Court's ruling in the case Dred Scott v. Sandford, 17 decided in 1857. Scott, who had been born into slavery but had spent time in free states and territories, argued that his temporary residence in a territory where slavery had been banned by the federal government had made him a free man. The Supreme Court rejected his argument. In fact, the Court's majority stated that Scott had no legal right to sue for his freedom at all because blacks (whether free or slave) were not and could not become U.S. citizens. Thus, Scott lacked standing even to appear before the Court. The Court also held that Congress lacked the power to decide whether slavery would be permitted in a territory that had been acquired after the Constitution was ratified, in effect prohibiting the federal government from passing any laws that would limit the expansion of slavery into any part of the West.

Ultimately, of course, the issue was decided by the Civil War (1861–1865), with the southern states seceding to defend their "states' rights" to determine their own destinies without interference by the federal government.
Foremost among the rights claimed by the southern states was the right to decide whether their residents would be allowed to own slaves. 18

Although at the beginning of the war President Abraham Lincoln had been willing to allow slavery to continue in the South to preserve the Union, he changed his policies regarding abolition over the course of the war. The first step was the issuance of the Emancipation Proclamation on January 1, 1863 (Figure 5.4). Although it stated "all persons held as slaves . . . henceforward shall be free," the proclamation was limited in effect to the states that had rebelled. Slaves in states that had remained within the Union, such as Maryland and Delaware, and in parts of the Confederacy that were already occupied by the Union army, were not set free. Although slaves in states in rebellion were technically freed, because Union troops controlled relatively small portions of these states at the time, it was impossible to ensure that enslaved people were freed in reality and not simply on paper. 19

RECONSTRUCTION

At the end of the Civil War, the South entered a period called Reconstruction (1865–1877) during which state governments were reorganized before the rebellious states were allowed to be readmitted to the Union. As part of this process, the Republican Party pushed for a permanent end to slavery. A constitutional amendment to this effect was passed by the House of Representatives in January 1865, after having already been approved by the Senate in April 1864, and it was ratified in December 1865 as the Thirteenth Amendment. The amendment's first section states, "Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction." In effect, this amendment outlawed slavery in the United States.

The changes wrought by the Fourteenth Amendment were more extensive. In addition to introducing the equal protection clause to the Constitution, this amendment also extended the due process clause of the Fifth Amendment to the states, required the states to respect the privileges or immunities of all citizens, and, for the first time, defined citizenship at the national and state levels. People could no longer be excluded from citizenship based solely on their race. Although some of these provisions were rendered mostly toothless by the courts or the lack of political action to enforce them, others were pivotal in the expansion of civil rights.

The Fifteenth Amendment stated that people could not be denied the right to vote based on "race, color, or previous condition of servitude." This construction allowed states to continue to decide the qualifications of voters as long as those qualifications were ostensibly race-neutral. Thus, while states could not deny African American men the right to vote on the basis of race, they could deny it to women on the basis of sex or to people who could not prove they were literate.

Although the immediate effect of these provisions was quite profound, over time the Republicans in Congress gradually lost interest in pursuing Reconstruction policies, and Reconstruction came to an end in 1877 with the withdrawal of the Union army and the end of military rule in the South. 20 Following the army's removal, political control of the South fell once again into the hands of white men, and violence was used to discourage blacks from exercising the rights they had been granted. 21
The revocation of voting rights, or disenfranchisement, took a number of forms; not every southern state used the same methods, and some states used more than one, but they all disproportionately affected black voter registration and turnout. 22

Perhaps the most famous of the tools of disenfranchisement were literacy tests and understanding tests. Literacy tests, which had been used in the North since the 1850s to disqualify naturalized European immigrants from voting, called on the prospective voter to demonstrate his (and later her) ability to read a particular passage of text. However, since voter registration officials had discretion to decide what text the voter was to read, they could give easy passages to voters they wanted to register (typically whites) and more difficult passages to those whose registration they wanted to deny (typically blacks). Understanding tests required the prospective voter to explain the meaning of a particular passage of text, often a provision of the U.S. Constitution, or answer a series of questions related to citizenship. Again, since the official examining the prospective voter could decide which passage or questions to choose, the difficulty of the test might vary dramatically between white and black applicants. 23 Even had these tests been administered fairly and equitably, however, most blacks would have been at a huge disadvantage, because few could read. Although schools for blacks had existed in some places, southern states had made it largely illegal to teach slaves to read and write. At the beginning of the Civil War, only 5 percent of blacks could read and write, and most of them lived in the North. 24 Some were able to take advantage of educational opportunities after they were freed, but many were not able to gain effective literacy.

In some states, poorer, less literate white voters feared being disenfranchised by the literacy and understanding tests. Some states introduced a loophole, known as the grandfather clause, to allow less literate whites to vote. The grandfather clause exempted those who had been allowed to vote in that state prior to the Civil War and their descendants from literacy and understanding tests. 25 Because blacks were not allowed to vote prior to the Civil War, but most white men had been voting at a time when there were no literacy tests, this loophole allowed most illiterate whites to vote (Figure 5.5) while leaving obstacles in place for blacks who wanted to vote as well. Time limits were often placed on these provisions because state legislators realized that they might quickly be declared unconstitutional, but they lasted long enough to allow illiterate white men to register to vote. 26

In states where the voting rights of poor whites were less of a concern, another tool for disenfranchisement was the poll tax (Figure 5.6). This was an annual per-person tax, typically one or two dollars (on the order of $20 to $50 today), that a person had to pay to register to vote. People who didn't want to vote didn't have to pay, but in several states the poll tax was cumulative, so if you decided to vote you would have to pay not only the tax due for that year but any poll tax from previous years as well. Because former slaves were usually quite poor, they were less likely than white men to be able to pay poll taxes. 27
Although these methods were usually sufficient to ensure that blacks were kept away from the polls, some dedicated African Americans did manage to register to vote despite the obstacles placed in their way. To ensure their vote was largely meaningless, the white elites used their control of the Democratic Party to create the white primary: primary elections in which only whites were allowed to vote. The state party organizations argued that as private groups, rather than part of the state government, they had no obligation to follow the Fifteenth Amendment's requirement not to deny the right to vote on the basis of race. Furthermore, they contended, voting for nominees to run for office was not the same as electing those who would actually hold office. So they held primary elections to choose the Democratic nominee in which only white citizens were allowed to vote. 28 Once the nominee had been chosen, he or she might face token opposition from a Republican or minor-party candidate in the general election, but since white voters had agreed beforehand to support whoever won the Democrats' primary, the outcome of the general election was a foregone conclusion.

With blacks effectively disenfranchised, the restored southern state governments undermined guarantees of equal treatment in the Fourteenth Amendment. They passed laws that excluded African Americans from juries and allowed the imprisonment and forced labor of "idle" black citizens. The laws also called for segregation of whites and blacks in public places under the doctrine known as "separate but equal." As long as nominally equal facilities were provided for both whites and blacks, it was legal to require members of each race to use the facilities designated for them. Similarly, state and local governments passed laws limiting what neighborhoods blacks and whites could live in. Collectively, these discriminatory laws came to be known as Jim Crow laws. In 1896, in Plessy v. Ferguson, the Supreme Court ruled that the separate but equal doctrine was consistent with the Fourteenth Amendment's equal protection clause, allowing segregation to continue. 29

CIVIL RIGHTS IN THE COURTS

By the turn of the twentieth century, the position of African Americans was quite bleak. Even outside the South, racial inequality was a fact of everyday life. African American leaders and thinkers themselves disagreed on the right path forward. Some, like Booker T. Washington, argued that acceptance of inequality and segregation over the short term would allow African Americans to focus their efforts on improving their educational and social status until whites were forced to acknowledge them as equals. W. E. B. Du Bois, however, argued for a more confrontational approach and in 1909 helped found the National Association for the Advancement of Colored People (NAACP) as a rallying point for securing equality. Liberal whites dominated the organization in its early years, but African Americans assumed control over its operations in the 1920s. 30

The NAACP soon focused on a strategy of overturning Jim Crow laws through the courts. Perhaps its greatest series of legal successes consisted of its efforts to challenge segregation in education. Early cases brought by the NAACP dealt with racial discrimination in higher education. In 1938, the Supreme Court essentially gave states a choice: they could either integrate institutions of higher education, or they could establish an equivalent university or college for African Americans. 31
Southern states chose to establish colleges for blacks rather than allow them into all-white state institutions. Although this ruling expanded opportunities for professional and graduate education in areas such as law and medicine for African Americans by requiring states to provide institutions for them to attend, it nevertheless allowed segregated colleges and universities to continue to exist.

Link to Learning

The NAACP was pivotal in securing African American civil rights and today continues to address civil rights violations, such as police brutality and the disproportionate percentage of African American convicts that are given the death penalty.

The landmark decision of the judicial phase of the civil rights movement came in 1954, when the Supreme Court ruled in Brown v. Board of Education. 32 In this case, the Supreme Court unanimously overturned its decision in Plessy v. Ferguson as it pertained to public education, stating that a separate but equal education was a logical impossibility. Even with the same funding and equivalent facilities, a segregated school could not have the same teachers or environment as the equivalent school for another race. The Court also rested its decision in part on social science studies suggesting that racial discrimination led to feelings of inferiority among African American children. The only way to dispel this sense of inferiority was to end segregation and integrate public schools.

It is safe to say this ruling was controversial. While integration of public schools took place without much incident in some areas of the South, particularly where there were few black students, elsewhere it was often confrontational—or nonexistent. In recognition of the fact that southern states would delay school integration for as long as possible, civil rights activists urged the federal government to enforce the Supreme Court's decision. In a demonstration organized by A. Philip Randolph and Bayard Rustin, approximately twenty-five thousand African Americans gathered in Washington, DC, on May 17, 1957, to participate in a Prayer Pilgrimage for Freedom. 33 A few months later, in Little Rock, Arkansas, Governor Orval Faubus resisted court-ordered integration and mobilized National Guard troops to keep black students out of Central High School. President Eisenhower then called up the Arkansas National Guard for federal duty (essentially taking the troops out of Faubus's hands) and sent soldiers of the 101st Airborne Division to escort students to and from classes, as shown in Figure 5.7. To avoid integration, Faubus closed four high schools in Little Rock the following school year. 34 In Virginia, state leaders employed a strategy of "massive resistance" to school integration, which led to the closure of a large number of public schools across the state, some for years. 35 Although de jure segregation, segregation mandated by law, had ended on paper, in practice, few efforts were made to integrate schools in most school districts with substantial black student populations until the late 1960s. Many white southerners who objected to sending their children to school with blacks then established private academies that admitted only white students. 36

Advances were made in the courts in areas other than public education.
In many neighborhoods in northern cities, which technically were not segregated, residents were required to sign restrictive real estate covenants promising that if they moved, they would not sell their houses to African Americans and sometimes not to Chinese, Japanese, Mexicans, Filipinos, Jews, and other ethnic minorities as well. 37 In the case of Shelley v. Kraemer (1948), the Supreme Court held that while such covenants did not violate the Fourteenth Amendment because they consisted of agreements between private citizens, their provisions could not be enforced by courts. 38 Because state courts are government institutions and the Fourteenth Amendment prohibits the government from denying people equal protection of the law, the courts' enforcement of such covenants would be a violation of the amendment. Thus, if a white family chose to sell its house to a black family and the other homeowners in the neighborhood tried to sue the seller, the court would not hear the case. In 1967, the Supreme Court struck down a Virginia law that prohibited interracial marriage in Loving v. Virginia. 39

LEGISLATING CIVIL RIGHTS

Beyond these favorable court rulings, however, progress toward equality for African Americans remained slow in the 1950s. In 1962, Congress proposed what later became the Twenty-Fourth Amendment, which banned the poll tax in elections to federal (but not state or local) office; the amendment went into effect after being ratified in early 1964. Several southern states continued to require residents to pay poll taxes in order to vote in state elections until 1966 when, in the case of Harper v. Virginia Board of Elections, the Supreme Court declared that requiring payment of a poll tax in order to vote in an election at any level was unconstitutional. 40

The slow rate of progress led to frustration within the African American community. Newer, grassroots organizations such as the Southern Christian Leadership Conference (SCLC), Congress of Racial Equality (CORE), and Student Non-Violent Coordinating Committee (SNCC) challenged the NAACP's position as the leading civil rights organization and questioned its legal-focused strategy. These newer groups tended to prefer more confrontational approaches, including the use of direct action campaigns relying on marches and demonstrations. The strategies of nonviolent resistance and civil disobedience, or the refusal to obey an unjust law, had been effective in the campaign led by Mahatma Gandhi to liberate colonial India from British rule in the 1930s and 1940s.

Civil rights pioneers adopted these measures in the 1955–1956 Montgomery bus boycott. After Rosa Parks refused to give up her bus seat to a white person and was arrested, a group of black women carried out a day-long boycott of Montgomery's public transit system. This boycott was then extended for over a year and overseen by union organizer E. D. Nixon. The effort desegregated public transportation in that city. 41 Direct action also took such forms as the sit-in campaigns to desegregate lunch counters that began in Greensboro, North Carolina, in 1960, and the 1961 Freedom Rides in which black and white volunteers rode buses and trains through the South to enforce a 1946 Supreme Court decision that desegregated interstate transportation (Morgan v. Virginia). 42 While such focused campaigns could be effective, they often had little impact in places where they were not replicated.
In addition, some of the campaigns led to violence against both the campaigns' leaders and ordinary people. Rosa Parks, a longtime NAACP member and graduate of the Highlander Folk School for civil rights activists, whose actions had begun the Montgomery boycott, received death threats; E. D. Nixon's home was bombed; and the Freedom Riders were attacked in Alabama. 43

As the campaign for civil rights continued and gained momentum, President John F. Kennedy called for Congress to pass new civil rights legislation, which began to work its way through Congress in 1963. The resulting law (pushed heavily and then signed by President Lyndon B. Johnson after Kennedy's assassination) was the Civil Rights Act of 1964, which had wide-ranging effects on U.S. society. Not only did the act outlaw government discrimination and the unequal application of voting qualifications by race, but it also, for the first time, outlawed segregation and other forms of discrimination by most businesses that were open to the public, including hotels, theaters, and restaurants that were not private clubs. It outlawed discrimination on the basis of race, ethnicity, religion, sex, or national origin by most employers, and it created the Equal Employment Opportunity Commission (EEOC) to monitor employment discrimination claims and help enforce this provision of the law. The provisions that affected private businesses and employers were legally justified not by the Fourteenth Amendment's guarantee of equal protection of the laws but instead by Congress's power to regulate interstate commerce. 44

Even though the Civil Rights Act of 1964 had a monumental impact over the long term, it did not end efforts by many southern whites to maintain the white-dominated political power structure in the region. Progress in registering African American voters remained slow in many states despite increased federal activity supporting it, so civil rights leaders including Martin Luther King, Jr. decided to draw the public eye to the area where resistance to voter registration drives was greatest. The SCLC and SNCC particularly focused their attention on the city of Selma, Alabama, which had been the site of violent reactions against civil rights activities. The organizations' leaders planned a march from Selma to Montgomery in March 1965. Their first attempt to march was violently broken up by state police and sheriff's deputies (Figure 5.8). The second attempt was aborted because King feared it would lead to a brutal confrontation with police and violate a court order from a federal judge who had been sympathetic to the movement in the past. That night, three of the marchers, white ministers from the North, were attacked and beaten with clubs by members of the Ku Klux Klan; one of the victims died from his injuries. Televised images of the brutality against protesters and the death of a minister led to greater public sympathy for the cause. Eventually, a third march was successful in reaching the state capital of Montgomery. 45

Link to Learning

The 1987 PBS documentary Eyes on the Prize won several Emmys and other awards for its coverage of major events in the civil rights movement, including the Montgomery bus boycott, the battle for school integration in Little Rock, the march from Selma to Montgomery, and Martin Luther King, Jr.'s leadership of the march on Washington, DC.

The events at Selma galvanized support in Congress for a follow-up bill solely dealing with the right to vote.
The Voting Rights Act of 1965 went beyond previous laws by requiring greater oversight of elections by federal officials. Literacy and understanding tests, and other devices used to discriminate against voters on the basis of race, were banned. The Voting Rights Act proved to have much more immediate and dramatic effect than the laws that preceded it; what had been a fairly slow process of improving voter registration and participation was replaced by a rapid increase of black voter registration rates—although white registration rates increased over this period as well. 46 To many people's way of thinking, however, the Supreme Court turned back the clock when it gutted a core aspect of the Voting Rights Act in Shelby County v. Holder (2013). 47 No longer would states need federal approval to change laws and policies related to voting. Indeed, many states with a history of voter discrimination quickly resumed restrictive practices with laws requiring photo ID and limiting early voting. Some of the new restrictions are already being challenged in the courts. 48

Not all African Americans in the civil rights movement were comfortable with gradual change. Instead of using marches and demonstrations to change people's attitudes, calling for tougher civil rights laws, or suing for their rights in court, some favored more immediate action that forced whites to give in to their demands. Men like Malcolm X, a minister and prominent spokesman for the Nation of Islam, and groups like the Black Panthers were willing to use violence to achieve their goals (Figure 5.9). 49 These activists called for Black Power and Black Pride, not assimilation into white society. Their position was attractive to many young African Americans, especially after Martin Luther King, Jr. was assassinated in 1968.

CONTINUING CHALLENGES FOR AFRICAN AMERICANS

The civil rights movement for African Americans did not end with the passage of the Voting Rights Act in 1965. For the last fifty years, the African American community has faced challenges related to both past and current discrimination; progress on both fronts remains slow, uneven, and often frustrating.

Legacies of the de jure segregation of the past remain in much of the United States. Many African Americans still live in predominantly black neighborhoods where their ancestors were forced by laws and housing covenants to live. 50 Even those who live in the suburbs, once largely white, tend to live in suburbs that are mostly black. 51 Some two million African American young people attend schools whose student body is composed almost entirely of students of color. 52 During the late 1960s and early 1970s, efforts to tackle these problems were stymied by large-scale public opposition, not just in the South but across the nation. Attempts to integrate public schools through the use of busing—transporting students from one segregated neighborhood to another to achieve more racially balanced schools—were particularly unpopular and helped contribute to "white flight" from cities to the suburbs. 53 This white flight has created de facto segregation, a form of segregation that results from the choices of individuals to live in segregated communities without government action or support.

Today, a lack of high-paying jobs in many urban areas, combined with persistent racism, has trapped many African Americans in poor neighborhoods.
While the Civil Rights Act of 1964 created opportunities for members of the black middle class to advance economically and socially, and to live in the same neighborhoods as the white middle class did, their departure left many black neighborhoods mired in poverty and without the strong community ties that existed during the era of legal segregation. Many of these neighborhoods also suffered from high rates of crime and violence. 54 Police also appear, consciously or subconsciously, to engage in racial profiling: singling out blacks (and Latinos) for greater attention than members of other racial and ethnic groups, as former FBI director James B. Comey has admitted. 55 When incidents of real or perceived injustice arise, as recently occurred after a series of deaths of young black men at the hands of police in Ferguson, Missouri; Staten Island, New York; and Baltimore, Maryland, many African Americans turn to the streets to protest because they believe that politicians—white and black alike—fail to pay sufficient attention to these problems.

The most serious concerns of the black community today appear to revolve around poverty resulting from the legacies of slavery and Jim Crow. While the public mood may have shifted toward greater concern about economic inequality in the United States, substantial policy changes to immediately improve the economic standing of African Americans in general have not followed, assuming that government-based policies and solutions are indeed the answer. The Obama administration recently proposed new rules under the Fair Housing Act that may, in time, lead to more integrated communities. 56 Meanwhile, grassroots movements to improve neighborhoods and local schools have taken root in many black communities across America, and perhaps in those movements is the hope for greater future progress.

Finding a Middle Ground

Affirmative Action

One of the major controversies regarding race in the United States today is related to affirmative action, the practice of ensuring that members of historically disadvantaged or underrepresented groups have equal access to opportunities in education, the workplace, and government contracting. The phrase affirmative action originated in the Civil Rights Act of 1964 and Executive Order 11246, and it has drawn controversy ever since. The Civil Rights Act of 1964 prohibited discrimination in employment, and Executive Order 11246, issued in 1965, forbade employment discrimination not only within the federal government but by federal contractors and subcontractors who received government funds.

Clearly, African Americans, as well as other groups, have been subject to discrimination in the past and present, limiting their opportunity to compete on a level playing field with those who face no such challenge. Opponents of affirmative action, however, point out that many of its beneficiaries are ethnic minorities from relatively affluent backgrounds, while whites and Asian Americans who grew up in poverty are expected to succeed despite facing many of the same handicaps. Because affirmative action attempts to redress discrimination on the basis of race or ethnicity, it is generally subject to the strict scrutiny standard, which means the burden of proof is on the government to demonstrate the necessity of racial discrimination to achieve a compelling governmental interest.
In 1978, in Regents of the University of California v. Bakke, the Supreme Court upheld affirmative action and said that colleges and universities could consider race when deciding whom to admit but could not establish racial quotas. 57 In 2003, the Supreme Court reaffirmed the Bakke decision in Grutter v. Bollinger, which said that taking race or ethnicity into account as one of several factors in admitting a student to a college or university was acceptable, but a system setting aside seats for a specific quota of minority students was not. 58 All these issues returned to the Supreme Court with the re-arguing of Fisher v. University of Texas. 59 In Fisher v. University of Texas (2013, known as Fisher I), University of Texas student Abigail Fisher brought suit arguing that UT's race-based admissions policy was inconsistent with Grutter. The Court did not see the UT policy that way and allowed it, so long as it remained narrowly tailored and not quota-based. Fisher II (2016) was decided by a 4–3 majority. It allowed race-based admissions but required that the utility of such an approach be reestablished on a regular basis.

Should race be a factor in deciding who will be admitted to a particular college? Why or why not?

5.3 The Fight for Women's Rights

Learning Objectives

By the end of this section, you will be able to:
- Describe early efforts to achieve rights for women
- Explain why the Equal Rights Amendment failed to be ratified
- Describe the ways in which women acquired greater rights in the twentieth century
- Analyze why women continue to experience unequal treatment

Along with African Americans, women of all races and ethnicities have long been discriminated against in the United States, and the women's rights movement began at the same time as the movement to abolish slavery in the United States. Indeed, the women's movement came about largely as a result of the difficulties women encountered while trying to abolish slavery. The trailblazing Seneca Falls Convention for women's rights was held in 1848, more than a decade before the Civil War. But the abolition and African American civil rights movements largely eclipsed the women's movement throughout most of the nineteenth century. Women began to campaign actively again in the late nineteenth and early twentieth centuries, and another movement for women's rights began in the 1960s.

THE EARLY WOMEN'S RIGHTS MOVEMENT AND WOMEN'S SUFFRAGE

At the time of the American Revolution, women had few rights. Although single women were allowed to own property, married women were not. When women married, their separate legal identities were erased under the legal principle of coverture. Not only did women adopt their husbands' names, but all personal property they owned legally became their husbands' property. Husbands could not sell their wives' real property—such as land or, in some states, slaves—without their permission, but they were allowed to manage it and retain the profits. If women worked outside the home, their husbands were entitled to their wages. 60 So long as a man provided food, clothing, and shelter for his wife, she was not legally allowed to leave him. Divorce was difficult and in some places impossible to obtain. 61 Higher education for women was not available, and women were barred from professional positions in medicine, law, and ministry.

Following the Revolution, women's conditions did not improve. Women were not granted the right to vote by any of the states except New Jersey, which at first allowed all taxpaying property owners to vote.
However, in 1807, the law changed to limit the vote to men. 62 Changes in property laws actually hurt women by making it easier for their husbands to sell their real property without their consent.

Although women had few rights, they nevertheless played an important role in transforming American society. This was especially true in the 1830s and 1840s, a time when numerous social reform movements swept across the United States. Many women were active in these causes, especially the abolition movement and the temperance movement, which tried to end the excessive consumption of liquor. They often found they were hindered in their efforts, however, either by the law or by widely held beliefs that they were weak, silly creatures who should leave important issues to men. 63 One of the leaders of the early women's movement, Elizabeth Cady Stanton (Figure 5.10), was shocked and angered when she sought to attend an 1840 antislavery meeting in London, only to learn that women would not be allowed to participate and had to sit apart from the men. At this convention, she made the acquaintance of another American female abolitionist, Lucretia Mott (Figure 5.10), who was also appalled by the male reformers' treatment of women. 64

In 1848, Stanton and Mott called for a women's rights convention, the first ever held specifically to address the subject, at Seneca Falls, New York. At the Seneca Falls Convention, Stanton wrote the Declaration of Sentiments, which was modeled after the Declaration of Independence and proclaimed women were equal to men and deserved the same rights. Among the rights Stanton wished to see granted to women was suffrage, the right to vote. When called upon to sign the Declaration, many of the delegates feared that if women demanded the right to vote, the movement would be considered too radical and its members would become a laughingstock. The Declaration passed, but the resolution demanding suffrage was the only one that did not pass unanimously. 65

Along with other feminists (advocates of women's equality), such as her friend and colleague Susan B. Anthony, Stanton fought for rights for women besides suffrage, including the right to seek higher education. As a result of their efforts, several states passed laws that allowed married women to retain control of their property and let divorced women keep custody of their children. 66 Amelia Bloomer, another activist, also campaigned for dress reform, believing women could lead better lives and be more useful to society if they were not restricted by voluminous heavy skirts and tight corsets.

The women's rights movement attracted many women who, like Stanton and Anthony, were active in either the temperance movement, the abolition movement, or both movements. Sarah and Angelina Grimke, the daughters of a wealthy slaveholding family in South Carolina, became first abolitionists and then women's rights activists. 67 Many of these women realized that their effectiveness as reformers was limited by laws that prohibited married women from signing contracts and by social proscriptions against women addressing male audiences. Without such rights, women found it difficult to rent halls in which to deliver lectures or to hire printers to produce antislavery literature.

Following the Civil War and the abolition of slavery, the women's rights movement fragmented. Stanton and Anthony denounced the Fifteenth Amendment because it granted voting rights only to black men and not to women of any race. 68
The fight for women's rights did not die, however. In 1869, Stanton and Anthony formed the National Woman Suffrage Association (NWSA), which demanded that the Constitution be amended to grant the right to vote to all women. It also called for more lenient divorce laws and an end to sex discrimination in employment. The less radical Lucy Stone formed the American Woman Suffrage Association (AWSA) in the same year; AWSA hoped to win suffrage for women by working on a state-by-state basis instead of seeking to amend the Constitution. 69 Four western states—Utah, Colorado, Wyoming, and Idaho—did extend the right to vote to women in the late nineteenth century, but no other states did. Women were also granted the right to vote on matters involving liquor licenses, in school board elections, and in municipal elections in several states. However, this was often done because of stereotyped beliefs that associated women with moral reform and concern for children, not as a result of a belief in women's equality. Furthermore, voting in municipal elections was restricted to women who owned property. 70

In 1890, the two suffragist groups united to form the National American Woman Suffrage Association (NAWSA). To call attention to their cause, members circulated petitions, lobbied politicians, and held parades in which hundreds of women and girls marched through the streets (Figure 5.11). The more radical National Woman's Party (NWP), led by Alice Paul, advocated the use of stronger tactics. The NWP held public protests and picketed outside the White House (Figure 5.12). 71 Demonstrators were often beaten and arrested, and suffragists were subjected to cruel treatment in jail. When some, like Paul, began hunger strikes to call attention to their cause, their jailers force-fed them, an incredibly painful and invasive experience for the women. 72 Finally, in 1920, the ratification of the Nineteenth Amendment granted all women the right to vote.

CIVIL RIGHTS AND THE EQUAL RIGHTS AMENDMENT

Just as the passage of the Thirteenth, Fourteenth, and Fifteenth Amendments did not result in equality for African Americans, the Nineteenth Amendment did not end discrimination against women in education, employment, or other areas of life; such discrimination remained legal. Although women could vote, they very rarely ran for or held public office. Women continued to be underrepresented in the professions, and relatively few sought advanced degrees. Until the mid-twentieth century, the ideal in U.S. society was typically for women to marry, have children, and become housewives. Those who sought work for pay outside the home were routinely denied jobs because of their sex and, when they did find employment, were paid less than men. Women who wished to remain childless or limit the number of children they had in order to work or attend college found it difficult to do so. In some states it was illegal to sell contraceptive devices, and abortions were largely illegal and difficult for women to obtain.

A second women's rights movement emerged in the 1960s to address these problems. Title VII of the Civil Rights Act of 1964 prohibited discrimination in employment on the basis of sex as well as race, color, national origin, and religion. Nevertheless, women continued to be denied jobs because of their sex and were often sexually harassed at the workplace.
In 1966, feminists who were angered by the lack of progress made by women and by the government's lackluster enforcement of Title VII organized the National Organization for Women (NOW). NOW promoted workplace equality, including equal pay for women, and also called for the greater presence of women in public office, the professions, and graduate and professional degree programs.

NOW also declared its support for the Equal Rights Amendment (ERA), which mandated equal treatment for all regardless of sex. The ERA, written by Alice Paul and Crystal Eastman, was first proposed to Congress, unsuccessfully, in 1923. It was introduced in every Congress thereafter but did not pass both the House and the Senate until 1972. The amendment was then sent to the states for ratification with a deadline of March 22, 1979. Although many states ratified the amendment in 1972 and 1973, the ERA still lacked sufficient support as the deadline drew near. Opponents, including both women and men, argued that passage would subject women to military conscription and deny them alimony and custody of their children should they divorce. 73 In 1978, Congress voted to extend the deadline for ratification to June 30, 1982. Even with the extension, however, the amendment failed to receive the support of the required thirty-eight states; by the time the deadline arrived, it had been ratified by only thirty-five states, some of which had rescinded their ratifications, and no new state ratified the ERA during the extension period (Figure 5.13).

Although the ERA failed to be ratified, Title IX of the United States Education Amendments of 1972 passed into law as a federal statute (not as an amendment, as the ERA was meant to be). Title IX applies to all educational institutions that receive federal aid and prohibits discrimination on the basis of sex in academic programs, dormitory space, health-care access, and school activities including sports. Thus, if a school receives federal aid, it cannot spend more funds on programs for men than on programs for women.

CONTINUING CHALLENGES FOR WOMEN

There is no doubt that women have made great progress since the Seneca Falls Convention. Today, more women than men attend college, and they are more likely than men to graduate. 74 Women are represented in all the professions, and approximately half of all law and medical school students are women. 75 Women have held Cabinet positions and have been elected to Congress. They have run for president and vice president, and three female justices currently serve on the Supreme Court. Women are also represented in all branches of the military and can serve in combat. As a result of the 1973 Supreme Court decision in Roe v. Wade, women now have legal access to abortion. 76

Nevertheless, women are still underrepresented in some jobs and are less likely to hold executive positions than are men. Many believe the glass ceiling, an invisible barrier caused by discrimination, prevents women from rising to the highest levels of American organizations, including corporations, governments, academic institutions, and religious groups. Women earn less money than men for the same work. As of 2014, fully employed women earned seventy-nine cents for every dollar earned by a fully employed man. 77 Women are also more likely to be single parents than are men. 78 As a result, more women live below the poverty line than do men, and, as of 2012, households headed by single women are twice as likely as those headed by single men to live below the poverty line. 79
Women remain underrepresented in elective offices. As of April 2016, women held only about 20 percent of seats in Congress and only about 25 percent of seats in state legislatures. 80 Women remain subject to sexual harassment in the workplace and are more likely than men to be the victims of domestic violence. Approximately one-third of all women have experienced domestic violence; one in five women is assaulted during her college years. 81

Many in the United States continue to call for a ban on abortion, and states have attempted to restrict women's access to the procedure. For example, many states have required abortion clinics to meet the same standards set for hospitals, such as corridor size and parking lot capacity, despite a lack of evidence regarding the benefits of such standards. Abortion clinics, which are smaller than hospitals, often cannot meet such standards. Other restrictions include mandated counseling before the procedure and the need for minors to secure parental permission before obtaining abortion services. 82 Whole Woman's Health v. Hellerstedt (2016) cited the lack of evidence for the benefit of larger clinics and struck down two Texas laws that imposed special requirements on doctors in order to perform abortions. 83 Furthermore, the federal government will not pay for abortions for low-income women except in cases of rape or incest or in situations in which carrying the fetus to term would endanger the life of the mother. 84

To address these issues, many have called for additional protections for women. These include laws mandating equal pay for equal work. According to the doctrine of comparable worth, people should be compensated equally for work requiring comparable skills, responsibilities, and effort. Thus, even though women are underrepresented in certain fields, they should receive the same wages as men if performing jobs requiring the same level of accountability, knowledge, skills, and/or working conditions, even though the specific job may be different. For example, garbage collectors are largely male. The chief job requirements are the ability to drive a sanitation truck and to lift heavy bins and toss their contents into the back of the truck. The average wage for a garbage collector is $15.34 an hour. 85 Daycare workers are largely female, and the average pay is $9.12 an hour. 86 However, the work arguably requires more skills and carries more responsibility. Daycare workers must be able to feed, clean, and dress small children; prepare meals for them; entertain them; give them medicine if required; and teach them basic skills. They must be educated in first aid and assume responsibility for the children's safety. In terms of the skills and physical activity required and the associated level of responsibility of the job, daycare workers should be paid at least as much as garbage collectors and perhaps more. Women's rights advocates also call for stricter enforcement of laws prohibiting sexual harassment, and for harsher punishment, such as mandatory arrest, for perpetrators of domestic violence.

Insider Perspective

Harry Burn and the Tennessee General Assembly

In 1919, the proposed Nineteenth Amendment to the Constitution, extending the right to vote to all adult female citizens of the United States, was passed by both houses of Congress and sent to the states for ratification. Ratification by thirty-six states was needed.
Throughout 1919 and 1920, the amendment dragged through legislature after legislature as pro- and anti-suffrage advocates made their arguments. By the summer of 1920, only one more state had to ratify it before it became law. The amendment passed through Tennessee's state Senate and went to its House of Representatives. Arguments were bitter and intense. Pro-suffrage advocates argued that the amendment would reward women for their service to the nation during World War I and that women's supposedly greater morality would help to clean up politics. Those opposed claimed women would be degraded by entrance into the political arena and that their interests were already represented by their male relatives.

On August 18, the amendment was brought for a vote before the House. The vote was closely divided, and it seemed unlikely it would pass. But as a young anti-suffrage representative waited for his vote to be counted, he remembered a note he had received from his mother that day. In it, she urged him, "Hurrah and vote for suffrage!" At the last minute, Harry Burn abruptly changed his ballot. The amendment passed the House by one vote, and eight days later, the Nineteenth Amendment was added to the Constitution.

How are women perceived in politics today compared to the 1910s? What were the competing arguments for Harry Burn's vote?

Link to Learning

The website for the National Women's History Project contains a variety of resources for learning more about the women's rights movement and women's history. It features a history of the women's movement, a "This Day in Women's History" page, and quizzes to test your knowledge.

5.4 Civil Rights for Indigenous Groups: Native Americans, Alaskans, and Hawaiians

Learning Objectives

By the end of this section, you will be able to:
- Outline the history of discrimination against Native Americans
- Describe the expansion of Native American civil rights from 1960 to 1990
- Discuss the persistence of problems Native Americans face today

Native Americans have long suffered the effects of segregation and discrimination imposed by the U.S. government and the larger white society. Ironically, Native Americans were not granted the full rights and protections of U.S. citizenship until long after African Americans and women were, with many having to wait until the Nationality Act of 1940 to become citizens. 87 This was long after the passage of the Fourteenth Amendment in 1868, which granted citizenship to African Americans but not, the Supreme Court decided in Elk v. Wilkins (1884), to Native Americans. 88 White women had been citizens of the United States since its very beginning even though they were not granted the full rights of citizenship. Furthermore, Native Americans are the only group of Americans who were forcibly removed en masse from the lands on which they and their ancestors had lived so that others could claim this land and its resources. This issue remains relevant today, as can be seen in the recent protests of the Dakota Access Pipeline, which have led to intense confrontations between those in charge of the pipeline and Native Americans.

NATIVE AMERICANS LOSE THEIR LAND AND THEIR RIGHTS

From the very beginning of European settlement in North America, Native Americans were abused and exploited. Early British settlers attempted to enslave the members of various tribes, especially in the southern colonies and states. 89
Following the American Revolution, the U.S. government assumed responsibility for conducting negotiations with Indian tribes, all of which were designated as sovereign nations, and regulating commerce with them. Because Indians were officially regarded as citizens of other nations, they were denied U.S. citizenship. 90

As white settlement spread westward over the course of the nineteenth century, Indian tribes were forced to move from their homelands. Although the federal government signed numerous treaties guaranteeing Indians the right to live in the places where they had traditionally farmed, hunted, or fished, land-hungry white settlers routinely violated these agreements and the federal government did little to enforce them. 91 In 1830, Congress passed the Indian Removal Act, which forced Native Americans to move west of the Mississippi River. 92

Not all tribes were willing to leave their land, however. The Cherokee in particular resisted, and in the 1820s, the state of Georgia tried numerous tactics to force them from their territory. Efforts intensified in 1829 after gold was discovered there. Wishing to remain where they were, the tribe sued the state of Georgia. 93 In 1831, the Supreme Court decided in Cherokee Nation v. Georgia that Indian tribes were not sovereign nations, but also that tribes were entitled to their ancestral lands and could not be forced to move from them. 94 The next year, in Worcester v. Georgia, the Court ruled that whites could not enter tribal lands without the tribe's permission. White Georgians, however, refused to abide by the Court's decision, and President Andrew Jackson, a former Indian fighter, refused to enforce it. 95 Between 1831 and 1838, members of several southern tribes, including the Cherokees, were forced by the U.S. Army to move west along routes shown in Figure 5.14. The forced removal of the Cherokees to Oklahoma Territory, which had been set aside for settlement by displaced tribes and designated Indian Territory, resulted in the death of one-quarter of the tribe's population. 96 The Cherokees remember this journey as the Trail of Tears.

By the time of the Civil War, most Indian tribes had been relocated west of the Mississippi. However, once large numbers of white Americans and European immigrants had also moved west after the Civil War, Native Americans once again found themselves displaced. They were confined to reservations, which are federal lands set aside for their use where non-Indians could not settle. Reservation land was usually poor, however, and attempts to farm or raise livestock, not traditional occupations for most western tribes anyway, often ended in failure. Unable to feed themselves, the tribes became dependent on the Bureau of Indian Affairs (BIA) in Washington, DC, for support. Protestant missionaries were allowed to "adopt" various tribes, to convert them to Christianity and thus speed their assimilation. In an effort to hasten this process, Indian children were taken from their parents and sent to boarding schools, many of them run by churches, where they were forced to speak English and abandon their traditional cultures. 97

In 1887, the Dawes Severalty Act, another effort to assimilate Indians to white society, divided reservation lands into individual allotments. Native Americans who accepted these allotments and agreed to sever tribal ties were also given U.S. citizenship. All lands remaining after the division of reservations into allotments were offered for sale by the federal government to white farmers and ranchers.
As a result, Indians swiftly lost control of reservation land. 98 In 1898, the Curtis Act dealt the final blow to Indian sovereignty by abolishing all tribal governments. 99 THE FIGHT FOR NATIVE AMERICAN RIGHTS As Indians were removed from their tribal lands and increasingly saw their traditional cultures being destroyed over the course of the nineteenth century, a movement to protect their rights began to grow. Sarah Winnemucca ( Figure 5.15 ), a member of the Paiute tribe, lectured throughout the East in the 1880s in order to acquaint white audiences with the injustices suffered by the western tribes. 100 Santee Dakota physician Charles Eastman ( Figure 5.15 ) also worked for Native American rights. In 1924, the Indian Citizenship Act granted citizenship to all Native Americans born after its passage. Native Americans born before the act took effect, who had not already become citizens as a result of the Dawes Severalty Act or service in the army in World War I, had to wait until the Nationality Act of 1940 to become citizens. In 1934, Congress passed the Indian Reorganization Act , which ended the division of reservation land into allotments. It returned to Native American tribes the right to institute self-government on their reservations, write constitutions, and manage their remaining lands and resources. It also provided funds for Native Americans to start their own businesses and attain a college education. 101 Despite the Indian Reorganization Act, conditions on the reservations did not improve dramatically. Most tribes remained impoverished, and many Native Americans, despite the fact that they were now U.S. citizens, were denied the right to vote by the states in which they lived. States justified this violation of the Fifteenth Amendment by claiming that Native Americans might be U.S. citizens but were not state residents because they lived on reservations. Other states denied Native Americans voting rights if they did not pay taxes. 102 Despite states’ actions, the federal government continued to uphold the rights of tribes to govern themselves. Federal concern for tribal sovereignty was part of an effort on the government’s part to end its control of, and obligations to, Indian tribes. 103 In the 1960s, a modern Native American civil rights movement, inspired by the African American civil rights movement, began to grow. In 1969, a group of Native American activists from various tribes, part of a new Pan-Indian movement, took control of Alcatraz Island in San Francisco Bay, which had once been the site of a federal prison. Attempting to strike a blow for Red Power, the power of Native Americans united by a Pan-Indian identity and demanding federal recognition of their rights, they maintained control of the island for more than a year and a half. They claimed the land as compensation for the federal government’s violation of numerous treaties and offered to pay for it with beads and trinkets. In January 1970, some of the occupiers began to leave the island. Some may have been disheartened by the accidental death of the daughter of one of the activists. In May 1970, all electricity and telephone service to the island was cut off by the federal government, and more of the occupiers began to leave. In June, the few people remaining on the island were removed by the government. Though the goals of the activists were not achieved, the occupation of Alcatraz had brought national attention to the concerns of Native American activists.
104 In 1972, members of the American Indian Movement (AIM) , a more radical group than the occupiers of Alcatraz, temporarily took over the offices of the Bureau of Indian Affairs in Washington, DC. The following year, members of AIM and some two hundred Oglala Lakota supporters occupied the town of Wounded Knee on the Lakota tribe’s Pine Ridge Reservation in South Dakota, the site of an 1890 massacre of Lakota men, women, and children by the U.S. Army ( Figure 5.16 ). Many of the Oglala were protesting the actions of their half-white tribal chairman, who they claimed had worked too closely with the BIA. The occupiers also wished to protest the failure of the Justice Department to investigate acts of white violence against Lakota tribal members outside the bounds of the reservation. The occupation led to a confrontation between the Native American protestors and the FBI and U.S. Marshals. Violence erupted; two Native American activists were killed, and a marshal was shot ( Figure 5.16 ). After the second death, the Lakota called for an end to the occupation and negotiations began with the federal government. Two of AIM’s leaders, Russell Means and Dennis Banks, were arrested, but the case against them was later dismissed. 105 Violence continued on the Pine Ridge Reservation for several years after the siege; the reservation had the highest per capita murder rate in the United States. Two FBI agents were among those who were killed. The Oglala blamed the continuing violence on the federal government. 106 Link to Learning The official website of the American Indian Movement provides information about ongoing issues in Native American communities in both North and South America. The current relationship between the U.S. government and Native American tribes was established by the Indian Self-Determination and Education Assistance Act of 1975. Under the act, tribes assumed control of programs that had formerly been controlled by the BIA, such as education and resource management, and the federal government provided the funding. 107 Many tribes have also used their new freedom from government control to legalize gambling and to open casinos on their reservations. Although the states in which these casinos are located have attempted to control gaming on Native American lands, the Supreme Court and the Indian Gaming Regulatory Act of 1988 have limited their ability to do so. 108 The 1978 American Indian Religious Freedom Act granted tribes the right to conduct traditional ceremonies and rituals, including those that use otherwise prohibited substances like peyote cactus and eagle bones, which can be procured only from vulnerable or protected species. 109 ALASKA NATIVES AND NATIVE HAWAIIANS REGAIN SOME RIGHTS Alaska Natives and Native Hawaiians suffered many of the same abuses as Native Americans, including loss of land and forced assimilation. Following the discovery of oil in Alaska, however, the state, in an effort to gain undisputed title to oil-rich land, settled the issue of Alaska Natives’ land claims with the passage of the Alaska Native Claims Settlement Act in 1971. According to the terms of the act, Alaska Natives received 44 million acres of resource-rich land and more than $900 million in cash in exchange for relinquishing claims to ancestral lands to which the state wanted title.
110 Native Hawaiians also lost control of their land—nearly two million acres—through the overthrow of the Hawaiian monarchy in 1893 and the subsequent formal annexation of the Hawaiian Islands by the United States in 1898. The indigenous population rapidly decreased in number, and white settlers tried to erase all trace of traditional Hawaiian culture. Two acts passed by Congress in 1900 and 1959, when the territory was granted statehood, returned slightly more than one million acres of federally owned land to the state of Hawaii. The state was to hold it in trust and use profits from the land to improve the condition of Native Hawaiians. 111 In September 2015, the U.S. Department of the Interior, the same department that contains the Bureau of Indian Affairs, created guidelines for Native Hawaiians who wish to govern themselves in a relationship with the federal government similar to that established with Native American and Alaska Native tribes. Such a relationship would grant Native Hawaiians power to govern themselves while remaining U.S. citizens. Voting began in fall 2015 for delegates to a constitutional convention that would determine whether or not such a relationship should exist between Native Hawaiians and the federal government. 112 When non-Native Hawaiians and some Native Hawaiians brought suit on the grounds that, by allowing only Native Hawaiians to vote, the process discriminated against members of other ethnic groups, a federal district court found the election to be legal. However, the Supreme Court has ordered that votes not be counted until an appeal of the lower court’s decision has been heard by the Ninth U.S. Circuit Court of Appeals. 113 Despite significant advances, American Indians, Alaska Natives, and Native Hawaiians still trail behind U.S. citizens of other ethnic backgrounds in many important areas. These groups continue to suffer widespread poverty and high unemployment. Some of the poorest counties in the United States are those in which Native American reservations are located. These minorities are also less likely than white Americans, African Americans, or Asian Americans to complete high school or college. 114 Many American Indian and Alaska Native tribes endure high rates of infant mortality, alcoholism, and suicide. 115 Native Hawaiians are also more likely to live in poverty than whites in Hawaii, and they are more likely than white Hawaiians to be homeless or unemployed. 116 5.5 Equal Protection for Other Groups Learning Objectives By the end of this section, you will be able to: Discuss the discrimination faced by Hispanic/Latino Americans and Asian Americans Describe the influence of the African American civil rights movement on Hispanic/Latino, Asian American, and LGBT civil rights movements Describe federal actions to improve opportunities for people with disabilities Describe discrimination faced by religious minorities Many groups in American society have faced and continue to face challenges in achieving equality, fairness, and equal protection under the laws and policies of the federal government and/or the states. Some of these groups are often overlooked because they are not as large a percentage of the U.S. population as women or African Americans, and because organized movements to achieve equality for them are relatively young. This does not mean, however, that the discrimination they face has not been as longstanding or as severe.
HISPANIC/LATINO CIVIL RIGHTS Hispanics and Latinos in the United States have faced many of the same problems as African Americans and Native Americans. Although the terms Hispanic and Latino are often used interchangeably, they are not the same. Hispanic usually refers to native speakers of Spanish. Latino refers to people who come from, or whose ancestors came from, Latin America. Not all Hispanics are Latinos. Latinos may be of any race or ethnicity; they may be of European, African, or Native American descent, or of mixed ethnic background. Thus, people from Spain are Hispanic but are not Latino. 117 Many Latinos became part of the U.S. population following the annexation of Texas by the United States in 1845 and of California, Arizona, New Mexico, Nevada, Utah, and Colorado following the War with Mexico in 1848. Most were subject to discrimination and could find employment only as poorly paid migrant farm workers, railroad workers, and unskilled laborers. 118 The Spanish-speaking population of the United States increased following the Spanish-American War in 1898 with the incorporation of Puerto Rico as a U.S. territory. In 1917, during World War I, the Jones Act granted U.S. citizenship to Puerto Ricans. In the early twentieth century, waves of violence aimed at Mexicans and Mexican Americans swept the Southwest. Mexican Americans in Arizona and in parts of Texas were denied the right to vote, which they had previously possessed, and Mexican American children were barred from attending Anglo-American schools. During the Great Depression of the 1930s, Mexican immigrants and many Mexican Americans, both U.S.-born and naturalized citizens, living in the Southwest and Midwest were deported by the government so that Anglo-Americans could take the jobs that they had once held. 119 When the United States entered World War II, however, Mexicans were invited to immigrate to the United States as farmworkers under the Bracero ( bracero meaning “manual laborer” in Spanish) Program to take the place of American farmworkers so that those men could enlist in the armed services. 120 Mexican Americans and Puerto Ricans did not passively accept discriminatory treatment, however. In 1903, Mexican farmworkers joined with Japanese farmworkers, who were also poorly paid, to form the first union to represent agricultural laborers. In 1929, Latino civil rights activists formed the League of United Latin American Citizens (LULAC) to protest against discrimination and to fight for greater rights for Latinos. 121 Just as in the case of African Americans, however, true civil rights advances for Hispanics and Latinos did not take place until the end of World War II. Hispanic and Latino activists targeted the same racist practices as did African Americans and used many of the same tactics to end them. In 1946, Mexican American parents in California, with the assistance of the NAACP, sued several California school districts that forced Mexican and Mexican American children to attend segregated schools. In the case of Mendez v. Westminster (1947), the Court of Appeals for the Ninth Circuit held that the segregation of Mexican and Mexican American students into separate schools was unconstitutional. 122 Although Latinos made some civil rights advances in the decades following World War II, discrimination continued.
Alarmed by the large number of undocumented Mexicans crossing the border into the United States in the 1950s, the United States government began Operation Wetback ( wetback is a derogatory term for Mexicans living unofficially in the United States). From 1953 to 1958, more than three million Mexican immigrants, and some Mexican Americans as well, were deported from California, Texas, and Arizona. 123 To limit the entry of Hispanic and Latino immigrants to the United States, in 1965 Congress imposed an immigration quota of 120,000 newcomers from the Western Hemisphere. At the same time that the federal government sought to restrict Hispanic and Latino immigration to the United States, the Mexican American civil rights movement grew stronger and more radical, just as the African American civil rights movement had done. While African Americans demanded Black Power and called for Black Pride, young Mexican American civil rights activists called for Brown Power and began to refer to themselves as Chicanos , a term disliked by many older, conservative Mexican Americans, in order to stress their pride in their hybrid Spanish-Native American cultural identity. 124 Demands by Mexican American activists often focused on improving education for their children, and they called upon school districts to hire teachers and principals who were bilingual in English and Spanish, to teach Mexican and Mexican American history, and to offer instruction in both English and Spanish for children with limited ability to communicate in English. 125 Milestone East L.A. Student Walkouts In March 1968, Chicano students at five high schools in East Los Angeles went on strike to demand better education for students of Mexican ancestry. Los Angeles schools did not allow Latino students to speak Spanish in class, and the curriculum gave no place to Mexican history. Guidance counselors also encouraged students, regardless of their interests or ability, to pursue vocational careers instead of setting their sights on college. Some students were placed in classes for the mentally challenged even though they were of normal intelligence. As a result, the dropout rate among Mexican American students was very high. School administrators refused to meet with the student protestors to discuss their grievances. After a week, police were sent in to end the strike. Thirteen of the organizers of the walkout were arrested and charged with conspiracy to disturb the peace. After Sal Castro, a teacher who had led the striking students, was dismissed from his job, activists held a sit-in at school district headquarters until Castro was reinstated. Student protests spread across the Southwest, and in response many schools did change. That same year, Congress passed the Bilingual Education Act , which required school districts with large numbers of Hispanic or Latino students to provide instruction in Spanish. 126 Bilingual education remains controversial, even among Hispanics and Latinos. What are some arguments they might raise both for and against it? Are these different from arguments coming from whites? Mexican American civil rights leaders were active in other areas as well. Throughout the 1960s, Cesar Chavez and Dolores Huerta fought for the rights of Mexican American agricultural laborers through their organization, the United Farm Workers (UFW), a union for migrant workers they founded in 1962.
Chavez, Huerta, and the UFW proclaimed their solidarity with Filipino farm workers by joining them in a strike against grape growers in Delano, California, in 1965. Chavez consciously adopted the tactics of the African American civil rights movement. In 1965, he called upon all U.S. consumers to boycott California grapes ( Figure 5.17 ), and in 1966, he led the UFW on a 300-mile march to Sacramento, the state capital, to bring the state farm workers’ problems to the attention of the entire country. The strike finally ended in 1970 when the grape growers agreed to give the pickers better pay and benefits. 127 As Latino immigration to the United States increased in the late twentieth and early twenty-first centuries, discrimination also increased in many places. In 1994, California voters passed Proposition 187 . The proposition sought to deny non-emergency health services, food stamps, welfare, and Medicaid to undocumented immigrants. It also banned children from attending public school unless they could present proof that they and their parents were legal residents of the United States. A federal court found it unconstitutional in 1997 on the grounds that the law’s intention was to regulate immigration, a power held only by the federal government. 128 In 2005, discussion began in Congress on proposed legislation that would make it a felony to enter the United States illegally or to give assistance to anyone who had done so. Although the bill quickly died, on May 1, 2006, hundreds of thousands of people, primarily Latinos, staged public demonstrations in major U.S. cities, refusing to work or attend school for one day. 129 The protestors claimed that people seeking a better life should not be treated as criminals and that undocumented immigrants already living in the United States should have the opportunity to become citizens. Following the failure to make undocumented immigration a felony under federal law, several states attempted to impose their own sanctions on illegal immigration. In April 2010, Arizona passed a law that made illegal immigration a state crime. The law also forbade undocumented immigrants from seeking work and allowed law enforcement officers to arrest people suspected of being in the U.S. illegally. Thousands protested the law, claiming that it encouraged racial profiling. In 2012, in Arizona v. United States , the U.S. Supreme Court struck down those provisions of the law that made it a state crime to reside in the United States illegally, forbade undocumented immigrants to take jobs, and allowed the police to arrest those suspected of being illegal immigrants. 130 The court, however, upheld the authority of the police to ascertain the immigration status of someone suspected of being an undocumented alien if the person had been stopped or arrested by the police for other reasons. 131 Today, Latinos constitute the largest minority group in the United States. They also have one of the highest birth rates of any ethnic group. 132 Although Hispanics lag behind whites in terms of income and high school graduation rates, they are enrolling in college at higher rates than whites. 133 Topics that remain at the forefront of public debate today include immigration reform, the DREAM Act (a proposal for granting undocumented immigrants permanent residency in stages), and court action on President Obama’s executive orders on immigration. 
ASIAN AMERICAN CIVIL RIGHTS Because Asian Americans are often stereotypically regarded as “the model minority” (because it is assumed they are generally financially successful and do well academically), it is easy to forget that they have also often been discriminated against and denied their civil rights. Indeed, in the nineteenth century, Asians were among the most despised of all immigrant groups and were often subjected to the same laws enforcing segregation and forbidding interracial marriage as were African Americans and American Indians. The Chinese were the first large group of Asians to immigrate to the United States. They arrived in large numbers in the mid-nineteenth century to work in the mining industry and on the Central Pacific Railroad. Others worked as servants or cooks or operated laundries. Their willingness to work for less money than whites led white workers in California to call for a ban on Chinese immigration. In 1882, Congress passed the Chinese Exclusion Act , which prevented Chinese from immigrating to the United States for ten years and prevented Chinese already in the country from becoming citizens ( Figure 5.18 ). In 1892, the Geary Act extended the ban on Chinese immigration for another ten years. In 1913, California passed a law preventing all Asians, not just the Chinese, from owning land. With the passage of the Immigration Act of 1924, all Asians, with the exception of Filipinos, were prevented from immigrating to the United States or becoming naturalized citizens. Laws in several states barred marriage between Chinese and white Americans, and some cities with large Asian populations required Asian children to attend segregated schools. 134 During World War II, citizens of Japanese descent living on the West Coast, whether naturalized immigrants or Japanese Americans born in the United States, were subjected to the indignity of being removed from their communities and interned under Executive Order 9066 ( Figure 5.19 ). The reason was fear that they might prove disloyal to the United States and give assistance to Japan. Although Italians and Germans suspected of disloyalty were also interned by the U.S. government, only the Japanese were imprisoned solely on the basis of their ethnicity. None of the more than 110,000 Japanese and Japanese American internees was ever found to have committed a disloyal act against the United States, and many young Japanese American men served in the U.S. Army during the war. 135 Although Japanese American Fred Korematsu challenged the right of the government to imprison law-abiding citizens, the Supreme Court decision in the 1944 case of Korematsu v. United States upheld the actions of the government as a necessary precaution in a time of war. 136 When internees returned from the camps after the war was over, many of them discovered that the houses, cars, and businesses they had left behind, often in the care of white neighbors, had been sold or destroyed. 137 Link to Learning Explore the resources at Japanese American Internment and Digital History to learn more about experiences of Japanese Americans during World War II. The growth of the African American, Chicano, and Native American civil rights movements in the 1960s inspired many Asian Americans to demand their own rights. Discrimination against Asian Americans, regardless of national origin, increased during the Vietnam War.
Ironically, violence directed indiscriminately against Chinese, Japanese, Koreans, and Vietnamese caused members of these groups to unite around a shared pan-Asian identity, much as Native Americans had in the Pan-Indian movement. In 1968, students of Asian ancestry at the University of California at Berkeley formed the Asian American Political Alliance . Asian American students also joined Chicano, Native American, and African American students to demand that colleges offer ethnic studies courses. 138 In 1974, in the case of Lau v. Nichols , Chinese American students in San Francisco sued the school district, claiming its failure to provide them with assistance in learning English denied them equal educational opportunities. 139 The Supreme Court found in favor of the students. The Asian American movement is no longer as active as other civil rights movements are. Although discrimination persists, Americans of Asian ancestry are generally more successful than members of other ethnic groups. They have higher rates of high school and college graduation and higher average income than other groups. 140 Although educational achievement and economic success do not protect them from discrimination, they do place them in a much better position to defend their rights. THE FIGHT FOR CIVIL RIGHTS IN THE LGBT COMMUNITY Laws against homosexuality, which was regarded as a sin and a moral failing, existed in most states throughout the nineteenth and twentieth centuries. By the late nineteenth century, homosexuality had come to be regarded as a form of mental illness as well as a sin, and gay men were often erroneously believed to be pedophiles. 141 As a result, lesbians, gay men, bisexuals, and transgender people, collectively known as the LGBT community, had to keep their sexual orientation hidden or “closeted.” Secrecy became even more important in the 1950s, when fear of gay men increased and the federal government believed they could be led into disloyal acts either as a result of their “moral weakness” or through blackmail by Soviet agents. As a result, many men lost or were denied government jobs. Fears of lesbians also increased after World War II as U.S. society stressed conformity to traditional gender roles and the importance of marriage and childrearing. 142 The very secrecy in which lesbian, gay, bisexual, and transgender people had to live made it difficult for them to organize to fight for their rights as other, more visible groups had done. Some organizations did exist, however. The Mattachine Society , established in 1950, was one of the first groups to champion the rights of gay men. Its goal was to unite gay men who otherwise lived in secrecy and to fight against abuse. The Mattachine Society often worked with the Daughters of Bilitis , a lesbian rights organization. Among the early issues targeted by the Mattachine Society was police entrapment of male homosexuals. 143 In the 1960s, the gay and lesbian rights movements began to grow more radical, in a manner similar to other civil rights movements. In 1962, gay Philadelphians demonstrated in front of Independence Hall. In 1966, transgender prostitutes who were tired of police harassment rioted in San Francisco. In June 1969, gay men, lesbians, and transgender people erupted in violence when New York City police attempted to arrest customers at a gay bar in Greenwich Village called the Stonewall Inn .
The patrons’ ability to resist arrest and fend off the police inspired many members of New York’s LGBT community, and the riots persisted over several nights. New organizations promoting LGBT rights that emerged after Stonewall were more radical and confrontational than the Mattachine Society and the Daughters of Bilitis had been. These groups, like the Gay Activists Alliance and the Gay Liberation Front , called not just for equality before the law and protection against abuse but also for “liberation,” Gay Power, and Gay Pride. 144 Although LGBT people gained their civil rights later than many other groups, changes did occur beginning in the 1970s, remarkably quickly when we consider how long other minority groups had fought for their rights. In 1973, the American Psychiatric Association ended its classification of homosexuality as a mental disorder. In 1994, the U.S. military adopted the policy of “Don’t ask, don’t tell.” This policy, codified in Department of Defense Directive 1304.26 , officially prohibited discrimination against suspected gays, lesbians, and bisexuals by the U.S. military. It also prohibited superior officers from asking about or investigating the sexual orientation of those below them in rank. 145 However, those gays, lesbians, and bisexuals who spoke openly about their sexual orientation were still subject to dismissal because it remained illegal for anyone except heterosexuals to serve in the armed forces. The policy ended in 2011, and now gays, lesbians, and bisexuals may serve openly in the military. 146 In 2003, in the case of Lawrence v. Texas , the Supreme Court ruled unconstitutional state laws that criminalized sexual intercourse between consenting adults of the same sex. 147 Beginning in 2000, several states made it possible for same-sex couples to enter into legal relationships known as civil unions or domestic partnerships. These arrangements extended many of the same protections enjoyed by heterosexual married couples to same-sex couples. LGBT activists, however, continued to fight for the right to marry. Same-sex marriages would allow partners to enjoy exactly the same rights as married heterosexual couples and accord their relationships the same dignity and importance. In 2004, Massachusetts became the first state to grant legal status to same-sex marriage. Other states quickly followed. This development prompted a backlash among many religious conservatives, who considered homosexuality a sin and argued that allowing same-sex couples to marry would lessen the value and sanctity of heterosexual marriage. Many states passed laws banning same-sex marriage, and many gay and lesbian couples challenged these laws, successfully, in the courts. Finally, in Obergefell v. Hodges , the Supreme Court overturned state bans and made same-sex marriage legal throughout the United States on June 26, 2015 ( Figure 5.20 ). 148 The legalization of same-sex marriage throughout the United States led some people to feel their religious beliefs were under attack, and many religiously conservative business owners have refused to acknowledge LGBT rights or the legitimacy of same-sex marriages. Shortly before the Obergefell ruling, the Indiana legislature passed a Religious Freedom Restoration Act (RFRA). Congress had already passed such a law in 1993; it was intended to extend protection to minority religions, such as by allowing rituals of the Native American Church. However, the Supreme Court in City of Boerne v.
Flores (1997) ruled that the 1993 law applied only to the federal government and not to state governments. 149 Thus several state legislatures later passed their own Religious Freedom Restoration Acts. These laws state that the government cannot “substantially burden an individual’s exercise of religion” unless it would serve a “compelling governmental interest” to do so. They allow individuals, a category that also includes businesses and other organizations, to discriminate against others, primarily same-sex couples and LGBT people, if the individual’s religious beliefs are opposed to homosexuality. LGBT Americans still encounter difficulties in other areas as well. Discrimination continues in housing and employment, although federal courts are increasingly treating employment discrimination against transgender people as a form of sex discrimination prohibited by the Civil Rights Act of 1964. The federal Department of Housing and Urban Development has also indicated that refusing to rent or sell homes to transgender people may be considered sex discrimination. 150 Violence against members of the LGBT community remains a serious problem; this violence occurs on the streets and in their homes. 151 The enactment of the Matthew Shepard and James Byrd Jr. Hate Crimes Prevention Act, also known as the Matthew Shepard Act , in 2009 made it a federal hate crime to attack someone based on his or her gender, gender identity, sexual orientation, or disability and made it easier for federal, state, and local authorities to investigate hate crimes, but it has not necessarily made the world safer for LGBT Americans. CIVIL RIGHTS AND THE AMERICANS WITH DISABILITIES ACT People with disabilities make up one of the last groups whose civil rights have been recognized. For a long time, they were denied employment and access to public education, especially if they were mentally or developmentally challenged. Many were merely institutionalized. A eugenics movement in the United States in the late nineteenth and early to mid-twentieth centuries sought to encourage childbearing among physically and mentally fit whites and discourage it among those with physical or mental disabilities. Many states passed laws prohibiting marriage among people who had what were believed to be hereditary “defects.” Among those affected were people who were blind or deaf, those with epilepsy, people with mental or developmental disabilities, and those suffering mental illnesses. In some states, programs existed to sterilize people considered “feeble minded” by the standards of the time, against their will or without their consent. 152 When this practice was challenged by a “feeble-minded” woman in a state institution in Virginia, the Supreme Court, in the 1927 case of Buck v. Bell , upheld the right of state governments to sterilize those people believed likely to have children who would become dependent upon public welfare. 153 Some of these programs persisted into the 1970s, as Figure 5.21 shows. 154 By the 1970s, however, concern for extending equal opportunities to all led to the passage of two important acts by Congress. In 1973, the Rehabilitation Act made it illegal to discriminate against people with disabilities in federal employment or in programs run by federal agencies or receiving federal funding. This was followed by the Education for All Handicapped Children Act of 1975, which required public schools to educate children with disabilities.
The act specified that schools consult with parents to create a plan tailored for each child’s needs that would provide an educational experience as close as possible to that received by other children. In 1990, the Americans with Disabilities Act (ADA) greatly expanded opportunities and protections for people of all ages with disabilities. It also significantly expanded the categories and definition of disability. The ADA prohibits discrimination in employment based on disability. It also requires employers to make reasonable accommodations available to workers who need them. Finally, the ADA mandates that public transportation and public accommodations be made accessible to those with disabilities. The Act was passed despite the objections of some who argued that the cost of providing accommodations would be prohibitive for small businesses. Link to Learning The community of people with disabilities is well organized in the twenty-first century, as evidenced by the considerable network of disability rights organizations in the United States. THE RIGHTS OF RELIGIOUS MINORITIES The right to worship as a person chooses was one of the reasons for the initial settlement of the United States. Thus, it is ironic that many people throughout U.S. history have been denied their civil rights because of their status as members of a religious minority. Beginning in the early nineteenth century with the immigration of large numbers of Irish Catholics to the United States, anti-Catholicism became a common feature of American life and remained so until the mid-twentieth century. Catholic immigrants were denied jobs, and in the 1830s and 1840s anti-Catholic literature accused Catholic priests and nuns of committing horrific acts. Anti-Mormon sentiment was also quite common, and Mormons were accused of kidnapping women and building armies for the purpose of dominating their non-Mormon neighbors. At times, these fears led to acts of violence. A convent in Charlestown, Massachusetts, was burned to the ground in 1834. 155 In 1844, Joseph Smith, the founder of the Mormon religion, and his brother were murdered by a mob in Illinois. 156 For many years, American Jews faced discrimination in employment, education, and housing based on their religion. Many of the restrictive real estate covenants that prohibited people from selling their homes to African Americans also prohibited them from selling to Jews, and a “gentlemen’s agreement” among the most prestigious universities in the United States limited the number of Jewish students accepted. Indeed, a tradition of confronting discrimination led many American Jews to become actively involved in the civil rights movements for women and African Americans. 157 Today, open discrimination against Jews in the United States is less common, although anti-Semitic sentiments still remain. In the twenty-first century, especially after the September 11 attacks, Muslims are the religious minority most likely to face discrimination. Although Title VII of the Civil Rights Act of 1964 prevents employment discrimination on the basis of religion and requires employers to make reasonable accommodations so that employees can engage in religious rituals and practices, Muslim employees are often discriminated against. Often the source of controversy is the wearing of head coverings by observant Muslims, which some employers claim violates uniform policies or dress codes, even when non-Muslim coworkers are allowed to wear head coverings that are not part of work uniforms. 
158 Hate crimes against Muslims have also increased since 9/11, and many Muslims believe they are subject to racial profiling by law enforcement officers who suspect them of being terrorists. 159 In another irony, many Christians have recently argued that they are being deprived of their rights because of their religious beliefs and have used this claim to justify their refusal to acknowledge the rights of others. For example, the owner of Hobby Lobby Stores, a conservative Christian, argued that his company’s health-care plan should not have to pay for contraception because his religious beliefs were opposed to the practice. In 2014, in the case of Burwell v. Hobby Lobby Stores, Inc. , the Supreme Court ruled in his favor. 160 As discussed earlier, many conservative Christians have also argued that they should not have to recognize same-sex marriages because they consider homosexuality to be a sin.
u.s._history
Summary 5.1 Confronting the National Debt: The Aftermath of the French and Indian War The British Empire had gained supremacy in North America with its victory over the French in 1763. Almost all of the North American territory east of the Mississippi fell under Great Britain’s control, and British leaders took this opportunity to try to create a more coherent and unified empire after decades of lax oversight. Victory over the French had proved very costly, and the British government attempted to better regulate its expanded empire in North America. The initial steps the British took in 1763 and 1764 raised suspicions among some colonists about the intent of the home government. These suspicions would grow over the coming years. 5.2 The Stamp Act and the Sons and Daughters of Liberty Though Parliament designed the 1765 Stamp Act to deal with the financial crisis in the Empire, it had unintended consequences. Outrage over the act created a degree of unity among otherwise unconnected American colonists, giving them a chance to act together both politically and socially. The crisis of the Stamp Act allowed colonists to loudly proclaim their identity as defenders of British liberty. With the repeal of the Stamp Act in 1766, liberty-loving subjects of the king celebrated what they viewed as a victory. 5.3 The Townshend Acts and Colonial Protest Like the Stamp Act in 1765, the Townshend Acts led many colonists to work together against what they perceived to be an unconstitutional measure, generating the second major crisis in British Colonial America. The experience of resisting the Townshend Acts provided another shared experience among colonists from diverse regions and backgrounds, while the partial repeal convinced many that liberty had once again been defended. Nonetheless, Great Britain’s debt crisis still had not been solved. 5.4 The Destruction of the Tea and the Coercive Acts The colonial rejection of the Tea Act, especially the destruction of the tea in Boston Harbor, recast the decade-long argument between British colonists and the home government as an intolerable conspiracy against liberty and an excessive overreach of parliamentary power. The Coercive Acts were punitive in nature, awakening the worst fears of otherwise loyal members of the British Empire in America. 5.5 Disaffection: The First Continental Congress and American Identity The First Continental Congress, which comprised elected representatives from twelve of the thirteen American colonies, represented a direct challenge to British authority. In its Declaration and Resolves, colonists demanded the repeal of all repressive acts passed since 1773. The delegates also recommended that the colonies raise militias, lest the British respond to the Congress’s proposed boycott of British goods with force. While the colonists still considered themselves British subjects, they were slowly retreating from British authority, creating their own de facto government via the First Continental Congress.
Chapter Outline 5.1 Confronting the National Debt: The Aftermath of the French and Indian War 5.2 The Stamp Act and the Sons and Daughters of Liberty 5.3 The Townshend Acts and Colonial Protest 5.4 The Destruction of the Tea and the Coercive Acts 5.5 Disaffection: The First Continental Congress and American Identity Introduction The Bostonians Paying the Excise-man, or Tarring and Feathering ( Figure 5.1 ), shows five Patriots tarring and feathering the Commissioner of Customs, John Malcolm, a sea captain, army officer, and staunch Loyalist . The print shows the Boston Tea Party, a protest against the Tea Act of 1773, and the Liberty Tree, an elm tree near Boston Common that became a rallying point against the Stamp Act of 1765. When the crowd threatened to hang Malcolm if he did not renounce his position as a royal customs officer, he reluctantly agreed and the protestors allowed him to go home. The scene represents the animosity toward those who supported royal authority and illustrates the high tide of unrest in the colonies after the British government imposed a series of imperial reform measures during the years 1763–1774. The government’s formerly lax oversight of the colonies ended as the architects of the British Empire put these new reforms in place. The British hoped to gain greater control over colonial trade and frontier settlement as well as to reduce the administrative cost of the colonies and the enormous debt left by the French and Indian War. Each step the British took, however, generated a backlash. Over time, imperial reforms pushed many colonists toward separation from the British Empire.
[ { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "Great Britain ’ s newly enlarged empire meant a greater financial burden , and the mushrooming debt from the war was a major cause of concern . <hl> The war nearly doubled the British national debt , from £ 75 million in 1756 to £ 133 million in 1763 . <hl> <hl> Interest payments alone consumed over half the national budget , and the continuing military presence in North America was a constant drain . <hl> The Empire needed more revenue to replenish its dwindling coffers . Those in Great Britain believed that British subjects in North America , as the major beneficiaries of Great Britain ’ s war for global supremacy , should certainly shoulder their share of the financial burden .", "hl_sentences": "The war nearly doubled the British national debt , from £ 75 million in 1756 to £ 133 million in 1763 . Interest payments alone consumed over half the national budget , and the continuing military presence in North America was a constant drain .", "question": { "cloze_format": "___ was a cause of the British National Debt in 1763.", "normal_format": "Which of the following was a cause of the British National Debt in 1763?", "question_choices": [ " drought in Great Britain", " the French and Indian War", " the continued British military presence in the American colonies", " both B and C" ], "question_id": "fs-idm24270480", "question_text": "Which of the following was a cause of the British National Debt in 1763?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": " It strengthened enforcement of molasses smuggling laws. " }, "bloom": null, "hl_context": "Grenville also pushed Parliament to pass the Sugar Act of 1764 , which actually lowered duties on British molasses by half , from six pence per gallon to three . Grenville designed this measure to address the problem of rampant colonial smuggling with the French sugar islands in the West Indies . <hl> The act attempted to make it easier for colonial traders , especially New England mariners who routinely engaged in illegal trade , to comply with the imperial law . <hl>", "hl_sentences": "The act attempted to make it easier for colonial traders , especially New England mariners who routinely engaged in illegal trade , to comply with the imperial law .", "question": { "cloze_format": "The main purpose of the Sugar Act of 1764 was ___.", "normal_format": "What was the main purpose of the Sugar Act of 1764?", "question_choices": [ " It raised taxes on sugar. ", " It raised taxes on molasses. ", " It strengthened enforcement of molasses smuggling laws. ", " It required colonists to purchase only sugar distilled in Great Britain. " ], "question_id": "fs-idm41661056", "question_text": "What was the main purpose of the Sugar Act of 1764?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "to declare null and void any laws the colonies had passed to govern and tax themselves" }, "bloom": null, "hl_context": "In March 1766 , the new prime minister , Lord Rockingham , compelled Parliament to repeal the Stamp Act . Colonists celebrated what they saw as a victory for their British liberty ; in Boston , merchant John Hancock treated the entire town to drinks . <hl> However , to appease opponents of the repeal , who feared that it would weaken parliamentary power over the American colonists , Rockingham also proposed the Declaratory Act . 
<hl> <hl> This stated in no uncertain terms that Parliament ’ s power was supreme and that any laws the colonies may have passed to govern and tax themselves were null and void if they ran counter to parliamentary law . <hl> 5.3 The Townshend Acts and Colonial Protest Learning Objectives By the end of this section , you will be able to : The Stamp Act signaled a shift in British policy after the French and Indian War . Before the Stamp Act , the colonists had paid taxes to their colonial governments or indirectly through higher prices , not directly to the Crown ’ s appointed governors . This was a time-honored liberty of representative legislatures of the colonial governments . <hl> The passage of the Stamp Act meant that starting on November 1 , 1765 , the colonists would contribute £ 60,000 per year — 17 percent of the total cost — to the upkeep of the ten thousand British soldiers in North America ( Figure 5.6 ) . <hl> Because the Stamp Act raised constitutional issues , it triggered the first serious protest against British imperial policy . <hl> Prime Minister Grenville , author of the Sugar Act of 1764 , introduced the Stamp Act in the early spring of 1765 . <hl> <hl> Under this act , anyone who used or purchased anything printed on paper had to buy a revenue stamp ( Figure 5.5 ) for it . <hl> In the same year , 1765 , Parliament also passed the Quartering Act , a law that attempted to solve the problems of stationing troops in North America . The Parliament understood the Stamp Act and the Quartering Act as an assertion of their power to control colonial policy .", "hl_sentences": "However , to appease opponents of the repeal , who feared that it would weaken parliamentary power over the American colonists , Rockingham also proposed the Declaratory Act . This stated in no uncertain terms that Parliament ’ s power was supreme and that any laws the colonies may have passed to govern and tax themselves were null and void if they ran counter to parliamentary law . The passage of the Stamp Act meant that starting on November 1 , 1765 , the colonists would contribute £ 60,000 per year — 17 percent of the total cost — to the upkeep of the ten thousand British soldiers in North America ( Figure 5.6 ) . Prime Minister Grenville , author of the Sugar Act of 1764 , introduced the Stamp Act in the early spring of 1765 . Under this act , anyone who used or purchased anything printed on paper had to buy a revenue stamp ( Figure 5.5 ) for it .", "question": { "cloze_format": "___ was not a goal of the Stamp Act.", "normal_format": "Which of the following was not a goal of the Stamp Act?", "question_choices": [ "to gain control over the colonists", "to raise revenue for British troops stationed in the colonies", "to raise revenue to pay off British debt from the French and Indian War", "to declare null and void any laws the colonies had passed to govern and tax themselves" ], "question_id": "fs-idp11149920", "question_text": "Which of the following was not a goal of the Stamp Act?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "Forming in Boston in the summer of 1765 , the Sons of Liberty were artisans , shopkeepers , and small-time merchants willing to adopt extralegal means of protest . Before the act had even gone into effect , the Sons of Liberty began protesting . On August 14 , they took aim at Andrew Oliver , who had been named the Massachusetts Distributor of Stamps . 
<hl> After hanging Oliver in effigy — that is , using a crudely made figure as a representation of Oliver — the unruly crowd stoned and ransacked his house , finally beheading the effigy and burning the remains . <hl> Such a brutal response shocked the royal governmental officials , who hid until the violence had spent itself . Andrew Oliver resigned the next day . By that time , the mob had moved on to the home of Lieutenant Governor Thomas Hutchinson who , because of his support of Parliament ’ s actions , was considered an enemy of English liberty . The Sons of Liberty barricaded Hutchinson in his home and demanded that he renounce the Stamp Act ; he refused , and the protesters looted and burned his house . Furthermore , the Sons ( also called “ True Sons ” or “ True-born Sons ” to make clear their commitment to liberty and distinguish them from the likes of Hutchinson ) continued to lead violent protests with the goal of securing the resignation of all appointed stamp collectors ( Figure 5.8 ) .", "hl_sentences": "After hanging Oliver in effigy — that is , using a crudely made figure as a representation of Oliver — the unruly crowd stoned and ransacked his house , finally beheading the effigy and burning the remains .", "question": { "cloze_format": "The Sons of Liberty were responsible for ___.", "normal_format": "For which of the following activities were the Sons of Liberty responsible?", "question_choices": [ "the Stamp Act Congress", "the hanging and beheading of a stamp commissioner in effigy", "the massacre of Conestoga in Pennsylvania", "the introduction of the Virginia Stamp Act Resolutions" ], "question_id": "fs-idm155168", "question_text": "For which of the following activities were the Sons of Liberty responsible?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "greater colonial unity" }, "bloom": null, "hl_context": "Townshend also orchestrated the Vice-Admiralty Court Act , which established three more vice-admiralty courts , in Boston , Philadelphia , and Charleston , to try violators of customs regulations without a jury . Before this , the only colonial vice-admiralty court had been in far-off Halifax , Nova Scotia , but with three local courts , smugglers could be tried more efficiently . Since the judges of these courts were paid a percentage of the worth of the goods they recovered , leniency was rare . <hl> All told , the Townshend Acts resulted in higher taxes and stronger British power to enforce them . <hl> <hl> Four years after the end of the French and Indian War , the Empire continued to search for solutions to its debt problem and the growing sense that the colonies needed to be brought under control . <hl>", "hl_sentences": "All told , the Townshend Acts resulted in higher taxes and stronger British power to enforce them . Four years after the end of the French and Indian War , the Empire continued to search for solutions to its debt problem and the growing sense that the colonies needed to be brought under control .", "question": { "cloze_format": "____ was not a goal of the Townshend Acts.", "normal_format": "Which of the following was not one of the goals of the Townshend Acts?", "question_choices": [ "higher taxes", "greater colonial unity", "greater British control over the colonies", "reduced power of the colonial governments" ], "question_id": "fs-idm2243168", "question_text": "Which of the following was not one of the goals of the Townshend Acts?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "Great Britain ’ s response to this threat of disobedience served only to unite the colonies further . The colonies ’ initial response to the Massachusetts Circular was lukewarm at best . <hl> However , back in Great Britain , the secretary of state for the colonies — Lord Hillsborough — demanded that Massachusetts retract the letter , promising that any colonial assemblies that endorsed it would be dissolved . <hl> <hl> This threat had the effect of pushing the other colonies to Massachusetts ’ s side . <hl> Even the city of Philadelphia , which had originally opposed the Circular , came around . <hl> In Massachusetts in 1768 , Samuel Adams wrote a letter that became known as the Massachusetts Circular . <hl> <hl> Sent by the Massachusetts House of Representatives to the other colonial legislatures , the letter laid out the unconstitutionality of taxation without representation and encouraged the other colonies to again protest the taxes by boycotting British goods . <hl> Adams wrote , “ It is , moreover , [ the Massachusetts House of Representatives ] humble opinion , which they express with the greatest deference to the wisdom of the Parliament , that the acts made there , imposing duties on the people of this province , with the sole and express purpose of raising a revenue , are infringements of their natural and constitutional rights ; because , as they are not represented in the Parliament , his Majesty ’ s Commons in Britain , by those acts , grant their property without their consent . ” Note that even in this letter of protest , the humble and submissive tone shows the Massachusetts Assembly ’ s continued deference to parliamentary authority . Even in that hotbed of political protest , it is a clear expression of allegiance and the hope for a restoration of “ natural and constitutional rights . ”", "hl_sentences": "However , back in Great Britain , the secretary of state for the colonies — Lord Hillsborough — demanded that Massachusetts retract the letter , promising that any colonial assemblies that endorsed it would be dissolved . This threat had the effect of pushing the other colonies to Massachusetts ’ s side . In Massachusetts in 1768 , Samuel Adams wrote a letter that became known as the Massachusetts Circular . Sent by the Massachusetts House of Representatives to the other colonial legislatures , the letter laid out the unconstitutionality of taxation without representation and encouraged the other colonies to again protest the taxes by boycotting British goods .", "question": { "cloze_format": "___ was most responsible for the colonies' endorsement of Samuel Adam's Massachusetts Circular. ", "normal_format": "Which event was most responsible for the colonies’ endorsement of Samuel Adams’s Massachusetts Circular?", "question_choices": [ "the Townshend Duties", "the Indemnity Act", "the Boston Massacre", "Lord Hillsborough’s threat to dissolve the colonial assemblies that endorsed the letter" ], "question_id": "fs-idp28471760", "question_text": "Which event was most responsible for the colonies’ endorsement of Samuel Adams’s Massachusetts Circular?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "Violence continued to break out on occasion , as in 1772 , when Rhode Island colonists boarded and burned the British revenue ship Gaspée in Narragansett Bay ( Figure 5.12 ) . 
<hl> Colonists had attacked or burned British customs ships in the past , but after the Gaspée Affair , the British government convened a Royal Commission of Inquiry . <hl> <hl> This Commission had the authority to remove the colonists , who were charged with treason , to Great Britain for trial . <hl> <hl> Some colonial protestors saw this new ability as another example of the overreach of British power . <hl>", "hl_sentences": "Colonists had attacked or burned British customs ships in the past , but after the Gaspée Affair , the British government convened a Royal Commission of Inquiry . This Commission had the authority to remove the colonists , who were charged with treason , to Great Britain for trial . Some colonial protestors saw this new ability as another example of the overreach of British power .", "question": { "cloze_format": "The statement about the Gaspée affair that is true is that ___.", "normal_format": "Which of the following is true of the Gaspée affair?", "question_choices": [ "Colonists believed that the British response represented an overreach of power. ", "It was the first time colonists attacked a revenue ship. ", "It was the occasion of the first official death in the war for independence. ", "The ship’s owner, John Hancock, was a respectable Boston merchant. " ], "question_id": "fs-idm96514944", "question_text": "Which of the following is true of the Gaspée affair?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "to help revive the struggling East India Company" }, "bloom": null, "hl_context": "<hl> The Tea Act of 1773 gave the British East India Company the ability to export its tea directly to the colonies without paying import or export duties and without using middlemen in either Great Britain or the colonies . <hl> <hl> Even with the Townshend tax , the act would allow the East India Company to sell its tea at lower prices than the smuggled Dutch tea , thus undercutting the smuggling trade . <hl>", "hl_sentences": "The Tea Act of 1773 gave the British East India Company the ability to export its tea directly to the colonies without paying import or export duties and without using middlemen in either Great Britain or the colonies . Even with the Townshend tax , the act would allow the East India Company to sell its tea at lower prices than the smuggled Dutch tea , thus undercutting the smuggling trade .", "question": { "cloze_format": "The purpose of the Tea Act of 1773 was ___.", "normal_format": "What was the purpose of the Tea Act of 1773?", "question_choices": [ "to punish the colonists for their boycotting of British tea", "to raise revenue to offset the British national debt", "to help revive the struggling East India Company", "to pay the salaries of royal appointees" ], "question_id": "fs-idm47459552", "question_text": "What was the purpose of the Tea Act of 1773?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "to boycott all British goods and prepare for possible military action" }, "bloom": null, "hl_context": "<hl> The representatives at the First Continental Congress created a Continental Association to ensure that the full boycott was enforced across all the colonies . <hl> The Continental Association served as an umbrella group for colonial and local committees of observation and inspection . By taking these steps , the First Continental Congress established a governing network in opposition to royal authority . 
In the end , Paul Revere rode from Massachusetts to Philadelphia with the Suffolk Resolves , which became the basis of the Declaration and Resolves of the First Continental Congress . <hl> In the Declaration and Resolves , adopted on October 14 , the colonists demanded the repeal of all repressive acts passed since 1773 and agreed to a non-importation , non-exportation , and non-consumption pact against all British goods until the acts were repealed . <hl> In the “ Petition of Congress to the King ” on October 24 , the delegates adopted a further recommendation of the Suffolk Resolves and proposed that the colonies raise and regulate their own militias .", "hl_sentences": "The representatives at the First Continental Congress created a Continental Association to ensure that the full boycott was enforced across all the colonies . In the Declaration and Resolves , adopted on October 14 , the colonists demanded the repeal of all repressive acts passed since 1773 and agreed to a non-importation , non-exportation , and non-consumption pact against all British goods until the acts were repealed .", "question": { "cloze_format": "At the First Continental Congress it was decided ___.", "normal_format": "Which of the following was decided at the First Continental Congress?", "question_choices": [ "to declare war on Great Britain", "to boycott all British goods and prepare for possible military action", "to offer a conciliatory treaty to Great Britain", "to pay for the tea that was dumped in Boston Harbor" ], "question_id": "fs-idp17434368", "question_text": "Which of the following was decided at the First Continental Congress?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> In the end , Paul Revere rode from Massachusetts to Philadelphia with the Suffolk Resolves , which became the basis of the Declaration and Resolves of the First Continental Congress . <hl> In the Declaration and Resolves , adopted on October 14 , the colonists demanded the repeal of all repressive acts passed since 1773 and agreed to a non-importation , non-exportation , and non-consumption pact against all British goods until the acts were repealed . In the “ Petition of Congress to the King ” on October 24 , the delegates adopted a further recommendation of the Suffolk Resolves and proposed that the colonies raise and regulate their own militias .", "hl_sentences": "In the end , Paul Revere rode from Massachusetts to Philadelphia with the Suffolk Resolves , which became the basis of the Declaration and Resolves of the First Continental Congress .", "question": { "cloze_format": "The ___ colony provided the basis for the Declarations and Resolves.", "normal_format": "Which colony provided the basis for the Declarations and Resolves?", "question_choices": [ "Massachusetts", "Philadelphia", "Rhode Island", "New York" ], "question_id": "fs-idm2821152", "question_text": "Which colony provided the basis for the Declarations and Resolves?" }, "references_are_paraphrase": 0 } ]
5.1 Confronting the National Debt: The Aftermath of the French and Indian War

Learning Objectives
By the end of this section, you will be able to:
Discuss the status of Great Britain’s North American colonies in the years directly following the French and Indian War
Describe the size and scope of the British debt at the end of the French and Indian War
Explain how the British Parliament responded to the debt crisis
Outline the purpose of the Proclamation Line, the Sugar Act, and the Currency Act

Great Britain had much to celebrate in 1763. The long and costly war with France had finally ended, and Great Britain had emerged victorious. British subjects on both sides of the Atlantic celebrated the strength of the British Empire. Colonial pride ran high; to live under the British Constitution and to have defeated the hated French Catholic menace brought great joy to British Protestants everywhere in the Empire. From Maine to Georgia, British colonists joyously celebrated the victory and sang the refrain of “Rule, Britannia! Britannia, rule the waves! Britons never, never, never shall be slaves!”

Despite the celebratory mood, the victory over France also produced major problems within the British Empire, problems that would have serious consequences for British colonists in the Americas. During the war, many Native American tribes had sided with the French, who supplied them with guns. After the 1763 Treaty of Paris that ended the French and Indian War (or the Seven Years’ War), British colonists had to defend the frontier, where French colonists and their tribal allies remained a powerful force. The most organized resistance, Pontiac’s Rebellion, highlighted tensions the settlers increasingly interpreted in racial terms.

The massive debt the war generated at home, however, proved to be the most serious issue facing Great Britain. The frontier had to be secure in order to prevent another costly war. Greater enforcement of imperial trade laws had to be put into place. Parliament had to find ways to raise revenue to pay off the crippling debt from the war. Everyone would have to contribute their expected share, including the British subjects across the Atlantic.

PROBLEMS ON THE AMERICAN FRONTIER

With the end of the French and Indian War, Great Britain claimed a vast new expanse of territory, at least on paper. Under the terms of the Treaty of Paris, the French territory known as New France had ceased to exist. British territorial holdings now extended from Canada to Florida, and British military focus shifted to maintaining peace in the king’s newly enlarged lands. However, much of the land in the American British Empire remained under the control of powerful native confederacies, which made any claims of British mastery beyond the Atlantic coastal settlements hollow. Great Britain maintained ten thousand troops in North America after the war ended in 1763 to defend the borders and repel any attack by their imperial rivals.

British colonists, eager for fresh land, poured over the Appalachian Mountains to stake claims. The western frontier had long been a “middle ground” where different imperial powers (British, French, Spanish) had interacted and compromised with native peoples. That era of accommodation in the “middle ground” came to an end after the French and Indian War. Virginians (including George Washington) and other land-hungry colonists had already raised tensions in the 1740s with their quest for land.
Virginia landowners in particular eagerly looked to diversify their holdings beyond tobacco, which had stagnated in price and exhausted the fertility of the lands along the Chesapeake Bay. They invested heavily in the newly available land. This westward movement brought the settlers into conflict as never before with Native American tribes, such as the Shawnee, Seneca-Cayuga, Wyandot, and Delaware, who increasingly held their ground against any further intrusion by White settlers.

The treaty that ended the war between France and Great Britain proved to be a significant blow to native peoples, who had viewed the conflict as an opportunity to gain additional trade goods from both sides. With the French defeat, many Native Americans who had sided with France lost a valued trading partner as well as bargaining power over the British. Settlers’ encroachment on their land, as well as the increased British military presence, changed the situation on the frontier dramatically. After the war, British troops took over the former French forts but failed to court favor with the local tribes by distributing ample gifts, as the French had done. They also significantly reduced the amount of gunpowder and ammunition they sold to the Native Americans, worsening relationships further.

Native Americans’ resistance to colonists drew upon the teachings of Delaware (Lenni Lenape) prophet Neolin and the leadership of Ottawa war chief Pontiac. Neolin was a spiritual leader who preached a doctrine of shunning European culture and expelling Europeans from native lands. Neolin’s beliefs united Native Americans from many villages. In a broad-based alliance that came to be known as Pontiac’s Rebellion, Pontiac led a loose coalition of these native tribes against the colonists and the British army. Pontiac started bringing his coalition together as early as 1761, urging Native Americans to “drive [the Europeans] out and make war upon them.” The conflict began in earnest in 1763, when Pontiac and several hundred Ojibwas, Potawatomis, and Hurons laid siege to Fort Detroit. At the same time, Senecas, Shawnees, and Delawares laid siege to Fort Pitt. Over the next year, the war spread along the backcountry from Virginia to Pennsylvania.

Pontiac’s Rebellion (also known as Pontiac’s War) triggered horrific violence on both sides. Firsthand reports of Native American attacks tell of murder, scalping, dismemberment, and burning at the stake. These stories incited a deep racial hatred among colonists against all Native Americans.

The actions of a group of Scots-Irish settlers from Paxton (or Paxtang), Pennsylvania, in December 1763, illustrate the deadly situation on the frontier. Forming a mob known as the Paxton Boys, these frontiersmen attacked a nearby group of Conestoga of the Susquehannock tribe. The Conestoga had lived peacefully with local settlers, but the Paxton Boys viewed all Native Americans as savages, and they brutally murdered the six Conestoga they found at home and burned their houses. When Governor John Penn put the remaining fourteen Conestoga in protective custody in Lancaster, Pennsylvania, the Paxton Boys broke into the building and killed and scalped the Conestoga they found there (Figure 5.3). Although Governor Penn offered a reward for the capture of any Paxton Boys involved in the murders, no one ever identified the attackers. Some colonists reacted to the incident with outrage.
Benjamin Franklin described the Paxton Boys as “the barbarous Men who committed the atrocious act, in Defiance of Government, of all Laws human and divine, and to the eternal Disgrace of their Country and Colour,” stating that “the Wickedness cannot be covered, the Guilt will lie on the whole Land, till Justice is done on the Murderers. The blood of the innocent will cry to heaven for vengeance.” Yet, as the inability to bring the perpetrators to justice clearly indicates, the Paxton Boys had many more supporters than critics.

Pontiac’s Rebellion and the Paxton Boys’ actions were examples of early American race wars, in which both sides saw themselves as inherently different from the other and believed the other needed to be eradicated. The prophet Neolin’s message, which he said he received in a vision from the Master of Life, was: “Wherefore do you suffer the Whites to dwell upon your lands? Drive them away; wage war against them.” Pontiac echoed this idea in a meeting, exhorting tribes to join together against the British: “It is important for us, my brothers, that we exterminate from our lands this nation which seeks only to destroy us.” In his letter suggesting “gifts” to the natives of smallpox-infected blankets, Field Marshal Jeffrey Amherst said, “You will do well to inoculate the Indians by means of blankets, as well as every other method that can serve to extirpate this execrable race.”

Pontiac’s Rebellion came to an end in 1766, when it became clear that the French, who Pontiac had hoped would side with his forces, would not be returning. The repercussions, however, would last much longer. Race relations between Native Americans and Whites remained poisoned on the frontier.

Well aware of the problems on the frontier, the British government took steps to try to prevent bloodshed and another costly war. At the beginning of Pontiac’s uprising, the British issued the Proclamation of 1763, which forbade White settlement west of the Proclamation Line, a borderline running along the spine of the Appalachian Mountains (Figure 5.4). The Proclamation Line aimed to forestall further conflict on the frontier, the clear flashpoint of tension in British North America. British colonists who had hoped to move west after the war chafed at this restriction, believing the war had been fought and won to ensure the right to settle west. The Proclamation Line therefore came as a setback to their vision of westward expansion.

THE BRITISH NATIONAL DEBT

Great Britain’s newly enlarged empire meant a greater financial burden, and the mushrooming debt from the war was a major cause of concern. The war nearly doubled the British national debt, from £75 million in 1756 to £133 million in 1763. Interest payments alone consumed over half the national budget, and the continuing military presence in North America was a constant drain. The Empire needed more revenue to replenish its dwindling coffers. Those in Great Britain believed that British subjects in North America, as the major beneficiaries of Great Britain’s war for global supremacy, should certainly shoulder their share of the financial burden.

The British government began increasing revenues by raising taxes at home, even as various interest groups lobbied to keep their taxes low. Powerful members of the aristocracy, well represented in Parliament, successfully convinced Prime Minister John Stuart, third earl of Bute, to refrain from raising taxes on land.
The greater tax burden, therefore, fell on the lower classes in the form of increased import duties, which raised the prices of imported goods such as sugar and tobacco. George Grenville succeeded Bute as prime minister in 1763. Grenville determined to curtail government spending and make sure that, as subjects of the British Empire, the American colonists did their part to pay down the massive debt.

IMPERIAL REFORMS

The new era of greater British interest in the American colonies through imperial reforms picked up pace in the mid-1760s. In 1764, Prime Minister Grenville introduced the Currency Act of 1764, prohibiting the colonies from printing additional paper money and requiring colonists to pay British merchants in gold and silver instead of the colonial paper money already in circulation. The Currency Act aimed to standardize the currency used in Atlantic trade, a logical reform designed to help stabilize the Empire’s economy. This rule brought American economic activity under greater British control. Colonists relied on their own paper currency to conduct trade and, with gold and silver in short supply, they found their finances tight. Not surprisingly, they grumbled about the new imperial currency regulations.

Grenville also pushed Parliament to pass the Sugar Act of 1764, which actually lowered duties on British molasses by half, from six pence per gallon to three. Grenville designed this measure to address the problem of rampant colonial smuggling with the French sugar islands in the West Indies. The act attempted to make it easier for colonial traders, especially New England mariners who routinely engaged in illegal trade, to comply with the imperial law.

To give teeth to the 1764 Sugar Act, the law intensified enforcement provisions. Prior to the 1764 act, colonial violations of the Navigation Acts had been tried in local courts, where sympathetic colonial juries refused to convict merchants on trial. However, the Sugar Act required violators to be tried in vice-admiralty courts. These crown-sanctioned tribunals, which settled disputes that occurred at sea, operated without juries. Some colonists saw this feature of the 1764 act as dangerous. They argued that trial by jury had long been honored as a basic right of Englishmen under the British Constitution. To deprive defendants of a jury, they contended, meant reducing liberty-loving British subjects to political slavery. In the British Atlantic world, some colonists perceived this loss of liberty as parallel to the enslavement of Africans.

As loyal British subjects, colonists in America cherished their Constitution, an unwritten system of government that they celebrated as the best political system in the world. The British Constitution prescribed the roles of the King, the House of Lords, and the House of Commons. Each entity provided a check and balance against the worst tendencies of the others. If the King had too much power, the result would be tyranny. If the Lords had too much power, the result would be oligarchy. If the Commons had the balance of power, democracy or mob rule would prevail. The British Constitution promised representation of the will of British subjects, and without such representation, even the indirect tax of the Sugar Act was considered a threat to the settlers’ rights as British subjects.

Furthermore, some American colonists felt the colonies were on equal political footing with Great Britain. The Sugar Act meant they were secondary, mere adjuncts to the Empire.
All subjects of the British crown knew they had liberties under the constitution. The Sugar Act suggested that some in Parliament labored to deprive them of what made them uniquely British.

5.2 The Stamp Act and the Sons and Daughters of Liberty

Learning Objectives
By the end of this section, you will be able to:
Explain the purpose of the 1765 Stamp Act
Describe the colonial responses to the Stamp Act

In 1765, the British Parliament moved beyond the efforts during the previous two years to better regulate westward expansion and trade by putting in place the Stamp Act. As a direct tax on the colonists, the Stamp Act imposed an internal tax on almost every type of printed paper colonists used, including newspapers, legal documents, and playing cards. While the architects of the Stamp Act saw the measure as a way to defray the costs of the British Empire, it nonetheless gave rise to the first major colonial protest against British imperial control as expressed in the famous slogan “no taxation without representation.” The Stamp Act reinforced the sense among some colonists that Parliament was not treating them as equals of their peers across the Atlantic.

THE STAMP ACT AND THE QUARTERING ACT

Prime Minister Grenville, author of the Sugar Act of 1764, introduced the Stamp Act in the early spring of 1765. Under this act, anyone who used or purchased anything printed on paper had to buy a revenue stamp (Figure 5.5) for it. In the same year, 1765, Parliament also passed the Quartering Act, a law that attempted to solve the problems of stationing troops in North America. Parliament understood the Stamp Act and the Quartering Act as an assertion of its power to control colonial policy.

The Stamp Act signaled a shift in British policy after the French and Indian War. Before the Stamp Act, the colonists had paid taxes to their colonial governments or indirectly through higher prices, not directly to the Crown’s appointed governors. This was a time-honored liberty of representative legislatures of the colonial governments. The passage of the Stamp Act meant that starting on November 1, 1765, the colonists would contribute £60,000 per year—17 percent of the total cost—to the upkeep of the ten thousand British soldiers in North America (Figure 5.6). Because the Stamp Act raised constitutional issues, it triggered the first serious protest against British imperial policy.

Parliament also asserted its prerogative in 1765 with the Quartering Act. The Quartering Act of 1765 addressed the problem of housing British soldiers stationed in the American colonies. It required that they be provided with barracks or places to stay in public houses, and that if extra housing were necessary, then troops could be stationed in barns and other uninhabited private buildings. In addition, the costs of the troops’ food and lodging fell to the colonists. Since the time of James II, who ruled from 1685 to 1688, many British subjects had mistrusted the presence of a standing army during peacetime, and having to pay for the soldiers’ lodging and food was especially burdensome. Widespread evasion and disregard for the law occurred in almost all the colonies, but the issue was especially contentious in New York, the headquarters of British forces. When fifteen hundred troops arrived in New York in 1766, the New York Assembly refused to follow the Quartering Act.

COLONIAL PROTEST: GENTRY, MERCHANTS, AND THE STAMP ACT CONGRESS

For many British colonists living in America, the Stamp Act raised many concerns.
As a direct tax, it appeared to be an unconstitutional measure, one that deprived freeborn British subjects of their liberty, a concept they defined broadly to include various rights and privileges they enjoyed as British subjects, including the right to representation. According to the unwritten British Constitution, only representatives for whom British subjects voted could tax them. Parliament was in charge of taxation, and although it was a representative body, the colonies did not have “actual” (or direct) representation in it. Parliamentary members who supported the Stamp Act argued that the colonists had virtual representation, because the architects of the British Empire knew best how to maximize returns from its possessions overseas. However, this argument did not satisfy the protesters, who viewed themselves as having the same right as all British subjects to avoid taxation without their consent. With no representation in the House of Commons, where bills of taxation originated, they felt themselves deprived of this inherent right.

The British government knew the colonists might object to the Stamp Act’s expansion of parliamentary power, but Parliament believed the relationship of the colonies to the Empire was one of dependence, not equality. However, the Stamp Act had the unintended and ironic consequence of drawing colonists from very different areas and viewpoints together in protest. In Massachusetts, for instance, James Otis, a lawyer and defender of British liberty, became the leading voice for the idea that “Taxation without representation is tyranny.” In the Virginia House of Burgesses, firebrand and slaveholder Patrick Henry introduced the Virginia Stamp Act Resolutions, which denounced the Stamp Act and the British crown in language so strong that some conservative Virginians accused him of treason (Figure 5.7). Henry replied that Virginians were subject only to taxes that they themselves—or their representatives—imposed. In short, there could be no taxation without representation.

The colonists had never before formed a unified political front, so Grenville and Parliament did not fear true revolt. However, this was to change in 1765. In response to the Stamp Act, the Massachusetts Assembly sent letters to the other colonies, asking them to attend a meeting, or congress, to discuss how to respond to the act. Many American colonists from very different colonies found common cause in their opposition to the Stamp Act. Representatives from nine colonial legislatures met in New York in the fall of 1765 to reach a consensus. Could Parliament impose taxation without representation? The members of this first congress, known as the Stamp Act Congress, said no. These nine representatives had a vested interest in repealing the tax. Not only did it weaken their businesses and the colonial economy, but it also threatened their liberty under the British Constitution. They drafted a rebuttal to the Stamp Act, making clear that they desired only to protect their liberty as loyal subjects of the Crown. The document, called the Declaration of Rights and Grievances, outlined the unconstitutionality of taxation without representation and trials without juries. Meanwhile, popular protest was also gaining force.

MOBILIZATION: POPULAR PROTEST AGAINST THE STAMP ACT

The Stamp Act Congress was a gathering of landowning, educated White men who represented the political elite of the colonies and was the colonial equivalent of the British landed aristocracy.
While these gentry were drafting their grievances during the Stamp Act Congress, other colonists showed their distaste for the new act by boycotting British goods and protesting in the streets. Two groups, the Sons of Liberty and the Daughters of Liberty, led the popular resistance to the Stamp Act. Both groups considered themselves British patriots defending their liberty, just as their forebears had done in the time of James II.

Forming in Boston in the summer of 1765, the Sons of Liberty were artisans, shopkeepers, and small-time merchants willing to adopt extralegal means of protest. Before the act had even gone into effect, the Sons of Liberty began protesting. On August 14, they took aim at Andrew Oliver, who had been named the Massachusetts Distributor of Stamps. After hanging Oliver in effigy—that is, using a crudely made figure as a representation of Oliver—the unruly crowd stoned and ransacked his house, finally beheading the effigy and burning the remains. Such a brutal response shocked the royal governmental officials, who hid until the violence had spent itself. Andrew Oliver resigned the next day. By that time, the mob had moved on to the home of Lieutenant Governor Thomas Hutchinson who, because of his support of Parliament’s actions, was considered an enemy of English liberty. The Sons of Liberty barricaded Hutchinson in his home and demanded that he renounce the Stamp Act; he refused, and the protesters looted and burned his house. Furthermore, the Sons (also called “True Sons” or “True-born Sons” to make clear their commitment to liberty and distinguish them from the likes of Hutchinson) continued to lead violent protests with the goal of securing the resignation of all appointed stamp collectors (Figure 5.8).

Starting in early 1766, the Daughters of Liberty protested the Stamp Act by refusing to buy British goods and encouraging others to do the same. They avoided British tea, opting to make their own teas with local herbs and berries. They built a community—and a movement—around creating homespun cloth instead of buying British linen. Well-born women held “spinning bees,” at which they competed to see who could spin the most and the finest linen. An entry in The Boston Chronicle of April 7, 1766, states that on March 12, in Providence, Rhode Island, “18 Daughters of Liberty, young ladies of good reputation, assembled at the house of Doctor Ephraim Bowen, in this town. . . . There they exhibited a fine example of industry, by spinning from sunrise until dark, and displayed a spirit for saving their sinking country rarely to be found among persons of more age and experience.” At dinner, they “cheerfully agreed to omit tea, to render their conduct consistent. Besides this instance of their patriotism, before they separated, they unanimously resolved that the Stamp Act was unconstitutional, that they would purchase no more British manufactures unless it be repealed, and that they would not even admit the addresses of any gentlemen should they have the opportunity, without they determined to oppose its execution to the last extremity, if the occasion required.”

The Daughters’ non-importation movement broadened the protest against the Stamp Act, giving women a new and active role in the political dissent of the time. Women were responsible for purchasing goods for the home, so by exercising the power of the purse, they could wield more power than they had in the past. Although they could not vote, they could mobilize others and make a difference in the political landscape.
From a local movement, the protests of the Sons and Daughters of Liberty soon spread until there was a chapter in every colony. The Daughters of Liberty promoted the boycott on British goods while the Sons enforced it, threatening retaliation against anyone who bought imported goods or used stamped paper. In the protest against the Stamp Act, wealthy, lettered political figures like John Adams supported the goals of the Sons and Daughters of Liberty, even if they did not engage in the Sons’ violent actions. These men, who were lawyers, printers, and merchants, ran a propaganda campaign parallel to the Sons’ campaign of violence. In newspapers and pamphlets throughout the colonies, they published article after article outlining the reasons the Stamp Act was unconstitutional and urging peaceful protest. They officially condemned violent actions but did not have the protesters arrested; a degree of cooperation prevailed, despite the groups’ different economic backgrounds. Certainly, all the protesters saw themselves as acting in the best British tradition, standing up against the corruption (especially the extinguishing of their right to representation) that threatened their liberty (Figure 5.9).

THE DECLARATORY ACT

Back in Great Britain, news of the colonists’ reactions worsened an already volatile political situation. Grenville’s imperial reforms had brought about increased domestic taxes, and his unpopularity led to his dismissal by King George III. While many in Parliament still wanted such reforms, British merchants argued strongly for their repeal. These merchants had no interest in the philosophy behind the colonists’ desire for liberty; rather, their motive was that the non-importation of British goods by North American colonists was hurting their business. Many of the British at home were also appalled by the colonists’ violent reaction to the Stamp Act. Other Britons cheered what they saw as the manly defense of liberty by their counterparts in the colonies.

In March 1766, the new prime minister, Lord Rockingham, compelled Parliament to repeal the Stamp Act. Colonists celebrated what they saw as a victory for their British liberty; in Boston, merchant John Hancock treated the entire town to drinks. However, to appease opponents of the repeal, who feared that it would weaken parliamentary power over the American colonists, Rockingham also proposed the Declaratory Act. This stated in no uncertain terms that Parliament’s power was supreme and that any laws the colonies may have passed to govern and tax themselves were null and void if they ran counter to parliamentary law.

5.3 The Townshend Acts and Colonial Protest

Learning Objectives
By the end of this section, you will be able to:
Describe the purpose of the 1767 Townshend Acts
Explain why many colonists protested the 1767 Townshend Acts and the consequences of their actions

Colonists’ joy over the repeal of the Stamp Act and what they saw as their defense of liberty did not last long. The Declaratory Act of 1766 had articulated Great Britain’s supreme authority over the colonies, and Parliament soon began exercising that authority. In 1767, with the passage of the Townshend Acts, a tax on consumer goods in British North America, colonists believed their liberty as loyal British subjects had come under assault for a second time.

THE TOWNSHEND ACTS

Lord Rockingham’s tenure as prime minister was not long (1765–1766).
Rich landowners feared that if he were not taxing the colonies, Parliament would raise their taxes instead, sacrificing them to the interests of merchants and colonists. George III duly dismissed Rockingham. William Pitt, also sympathetic to the colonists, succeeded him. However, Pitt was old and ill with gout. His chancellor of the exchequer, Charles Townshend (Figure 5.10), whose job was to manage the Empire’s finances, took on many of his duties. Primary among these was raising the needed revenue from the colonies.

Townshend’s first act was to deal with the unruly New York Assembly, which had voted not to pay for supplies for the garrison of British soldiers that the Quartering Act required. In response, Townshend proposed the Restraining Act of 1767, which disbanded the New York Assembly until it agreed to pay for the garrison’s supplies, which it eventually did.

The Townshend Revenue Act of 1767 placed duties on various consumer items like paper, paint, lead, tea, and glass. These British goods had to be imported, since the colonies did not have the manufacturing base to produce them. Townshend hoped the new duties would not anger the colonists because they were external taxes, not internal ones like the Stamp Act. In 1766, in arguing before Parliament for the repeal of the Stamp Act, Benjamin Franklin had stated, “I never heard any objection to the right of laying duties to regulate commerce; but a right to lay internal taxes was never supposed to be in parliament, as we are not represented there.”

The Indemnity Act of 1767 exempted tea produced by the British East India Company from taxation when it was imported into Great Britain. When the tea was re-exported to the colonies, however, the colonists had to pay taxes on it because of the Revenue Act. Some critics of Parliament on both sides of the Atlantic saw this tax policy as an example of corrupt politicians giving preferable treatment to specific corporate interests, creating a monopoly. The sense that corruption had become entrenched in Parliament only increased colonists’ alarm.

In fact, the revenue collected from these duties was only nominally intended to support the British army in America. It actually paid the salaries of some royally appointed judges, governors, and other officials whom the colonial assemblies had traditionally paid. Thanks to the Townshend Revenue Act of 1767, however, these officials no longer relied on colonial leadership for payment. This change gave them a measure of independence from the assemblies, so they could implement parliamentary acts without fear that their pay would be withheld in retaliation. The Revenue Act thus appeared to sever the relationship between governors and assemblies, drawing royal officials closer to the British government and further away from the colonial legislatures.

The Revenue Act also gave the customs board greater powers to counteract smuggling. It granted “writs of assistance”—basically, search warrants—to customs commissioners who suspected the presence of contraband goods, which also opened the door to a new level of bribery and trickery on the waterfronts of colonial America. Furthermore, to ensure compliance, Townshend introduced the Commissioners of Customs Act of 1767, which created an American Board of Customs to enforce trade laws. Customs enforcement had been based in Great Britain, but rules were difficult to implement at such a distance, and smuggling was rampant.
The new customs board was based in Boston and would severely curtail smuggling in this large colonial seaport. Townshend also orchestrated the Vice-Admiralty Court Act, which established three more vice-admiralty courts, in Boston, Philadelphia, and Charleston, to try violators of customs regulations without a jury. Before this, the only colonial vice-admiralty court had been in far-off Halifax, Nova Scotia, but with three local courts, smugglers could be tried more efficiently. Since the judges of these courts were paid a percentage of the worth of the goods they recovered, leniency was rare. All told, the Townshend Acts resulted in higher taxes and stronger British power to enforce them. Four years after the end of the French and Indian War, the Empire continued to search for solutions to its debt problem and the growing sense that the colonies needed to be brought under control.

REACTIONS: THE NON-IMPORTATION MOVEMENT

Like the Stamp Act, the Townshend Acts produced controversy and protest in the American colonies. For a second time, many colonists resented what they perceived as an effort to tax them without representation and thus to deprive them of their liberty. The fact that the revenue the Townshend Acts raised would pay royal governors only made the situation worse, because it took control away from colonial legislatures that otherwise had the power to set and withhold a royal governor’s salary. The Restraining Act, which had been intended to isolate New York without angering the other colonies, had the opposite effect, showing the rest of the colonies how far beyond the British Constitution some members of Parliament were willing to go.

The Townshend Acts generated a number of protest writings, including “Letters from a Pennsylvania Farmer” by John Dickinson. In this influential pamphlet, which circulated widely in the colonies, Dickinson conceded that the Empire could regulate trade but argued that Parliament could not impose either internal taxes, like stamps, on goods or external taxes, like customs duties, on imports.

Americana
“Address to the Ladies”: Verse from The Boston Post-Boy and Advertiser

This verse, which ran in a Boston newspaper in November 1767, highlights how women were encouraged to take political action by boycotting British goods. Notice that the writer especially encourages women to avoid British tea (Bohea and Green Hyson) and linen, and to manufacture their own homespun cloth. Building on the protest of the 1765 Stamp Act by the Daughters of Liberty, the non-importation movement of 1767–1768 mobilized women as political actors.

Young ladies in town, and those that live round,
Let a friend at this season advise you:
Since money’s so scarce, and times growing worse
Strange things may soon hap and surprize you:
First then, throw aside your high top knots of pride
Wear none but your own country linnen;
of economy boast, let your pride be the most
What, if homespun they say is not quite so gay
As brocades, yet be not in a passion,
For when once it is known this is much wore in town,
One and all will cry out, ’tis the fashion!
And as one, all agree that you’ll not married be
To such as will wear London Fact’ry:
But at first sight refuse, tell’em such you do chuse
As encourage our own Manufact’ry.
No more Ribbons wear, nor in rich dress appear,
Love your country much better than fine things,
Begin without passion, ’twill soon be the fashion
To grace your smooth locks with a twine string.
Throw aside your Bohea, and your Green Hyson Tea,
And all things with a new fashion duty;
Procure a good store of the choice Labradore,
For there’ll soon be enough here to suit ye;
These do without fear and to all you’ll appear
Fair, charming, true, lovely, and cleaver;
Tho’ the times remain darkish, young men may be sparkish.
And love you much stronger than ever.

In Massachusetts in 1768, Samuel Adams wrote a letter that became known as the Massachusetts Circular. Sent by the Massachusetts House of Representatives to the other colonial legislatures, the letter laid out the unconstitutionality of taxation without representation and encouraged the other colonies to again protest the taxes by boycotting British goods. Adams wrote, “It is, moreover, [the Massachusetts House of Representatives] humble opinion, which they express with the greatest deference to the wisdom of the Parliament, that the acts made there, imposing duties on the people of this province, with the sole and express purpose of raising a revenue, are infringements of their natural and constitutional rights; because, as they are not represented in the Parliament, his Majesty’s Commons in Britain, by those acts, grant their property without their consent.” Note that even in this letter of protest, the humble and submissive tone shows the Massachusetts Assembly’s continued deference to parliamentary authority. Even in that hotbed of political protest, it is a clear expression of allegiance and the hope for a restoration of “natural and constitutional rights.”

Great Britain’s response to this threat of disobedience served only to unite the colonies further. The colonies’ initial response to the Massachusetts Circular was lukewarm at best. However, back in Great Britain, the secretary of state for the colonies—Lord Hillsborough—demanded that Massachusetts retract the letter, promising that any colonial assemblies that endorsed it would be dissolved. This threat had the effect of pushing the other colonies to Massachusetts’s side. Even the city of Philadelphia, which had originally opposed the Circular, came around.

The Daughters of Liberty once again supported and promoted the boycott of British goods. Women resumed spinning bees and again found substitutes for British tea and other goods. Many colonial merchants signed non-importation agreements, and the Daughters of Liberty urged colonial women to shop only with those merchants. The Sons of Liberty used newspapers and circulars to call out by name those merchants who refused to sign such agreements; sometimes these merchants were threatened with violence. For instance, a broadside from 1769–1770 reads:

WILLIAM JACKSON, an IMPORTER; at the BRAZEN HEAD, North Side of the TOWN-HOUSE, and Opposite the Town-Pump, [in] Corn-hill, BOSTON

It is desired that the SONS and DAUGHTERS of LIBERTY, would not buy any one thing of him, for in so doing they will bring disgrace upon themselves, and their Posterity, for ever and ever, AMEN.

The boycott in 1768–1769 turned the purchase of consumer goods into a political gesture. It mattered what you consumed. Indeed, the very clothes you wore indicated whether you were a defender of liberty in homespun or a protector of parliamentary rights in superfine British attire.

TROUBLE IN BOSTON

The Massachusetts Circular got Parliament’s attention, and in 1768, Lord Hillsborough sent four thousand British troops to Boston to deal with the unrest and put down any potential rebellion there.
The troops were a constant reminder of the assertion of British power over the colonies, an illustration of an unequal relationship between members of the same empire. As an added aggravation, British soldiers moonlighted as dockworkers, creating competition for employment. Boston’s labor system had traditionally been closed, privileging native-born laborers over outsiders, and jobs were scarce. Many Bostonians, led by the Sons of Liberty, mounted a campaign of harassment against British troops. The Sons of Liberty also helped protect the smuggling actions of the merchants; smuggling was crucial for the colonists’ ability to maintain their boycott of British goods.

John Hancock was one of Boston’s most successful merchants and prominent citizens. While he maintained too high a profile to work actively with the Sons of Liberty, he was known to support their aims, if not their means of achieving them. He was also one of the many prominent merchants who had made their fortunes by smuggling, which was rampant in the colonial seaports. In 1768, customs officials seized the Liberty, one of his ships, and violence erupted. Led by the Sons of Liberty, Bostonians rioted against customs officials, attacking the customs house and chasing out the officers, who ran to safety at Castle William, a British fort on a Boston harbor island. British soldiers crushed the riots, but over the next few years, clashes between British officials and Bostonians became common.

Conflict turned deadly on March 5, 1770, in a confrontation that came to be known as the Boston Massacre. On that night, a crowd of Bostonians from many walks of life started throwing snowballs, rocks, and sticks at the British soldiers guarding the customs house. In the resulting scuffle, some soldiers, goaded by the mob who hectored the soldiers as “lobster backs” (the reference to lobster equated the soldiers with bottom feeders, i.e., aquatic animals that feed on the lowest organisms in the food chain), fired into the crowd, killing five people. Crispus Attucks, the first man killed—and, though no one could have known it then, the first official casualty in the war for independence—was of Wampanoag and African descent. The bloodshed illustrated the level of hostility that had developed as a result of Boston’s occupation by British troops, the competition for scarce jobs between Bostonians and the British soldiers stationed in the city, and the larger question of Parliament’s efforts to tax the colonies.

The Sons of Liberty immediately seized on the event, characterizing the British soldiers as murderers and their victims as martyrs. Paul Revere, a silversmith and member of the Sons of Liberty, circulated an engraving that showed a line of grim redcoats firing ruthlessly into a crowd of unarmed, fleeing civilians. Among colonists who resisted British power, this view of the “massacre” confirmed their fears of a tyrannous government using its armies to curb the freedom of British subjects. But to others, the attacking mob was equally to blame for pelting the British with rocks and insulting them. It was not only British Loyalists who condemned the unruly mob. John Adams, one of the city’s strongest supporters of peaceful protest against Parliament, represented the British soldiers at their murder trial. Adams argued that the mob’s lawlessness required the soldiers’ response, and that without law and order, a society was nothing.
He argued further that the soldiers were the tools of a much broader program, which transformed a street brawl into the injustice of imperial policy. Of the eight soldiers on trial, the jury acquitted six, convicting the other two of the reduced charge of manslaughter.

Adams argued: “Facts are stubborn things; and whatever may be our wishes, our inclinations, or the dictates of our passions, they cannot alter the state of facts and evidence: nor is the law less stable than the fact; if an assault was made to endanger their lives, the law is clear, they had a right to kill in their own defense; if it was not so severe as to endanger their lives, yet if they were assaulted at all, struck and abused by blows of any sort, by snow-balls, oyster-shells, cinders, clubs, or sticks of any kind; this was a provocation, for which the law reduces the offence of killing, down to manslaughter, in consideration of those passions in our nature, which cannot be eradicated. To your candour and justice I submit the prisoners and their cause.”

Americana
Propaganda and the Sons of Liberty

Long after the British soldiers had been tried and punished, the Sons of Liberty maintained a relentless propaganda campaign against British oppression. Many of them were printers or engravers, and they were able to use public media to sway others to their cause. Shortly after the incident outside the customs house, Paul Revere created “The bloody massacre perpetrated in King Street Boston on March 5th 1770 by a party of the 29th Regt.” (Figure 5.11), based on an image by engraver Henry Pelham. The picture—which represents only the protesters’ point of view—shows the ruthlessness of the British soldiers and the helplessness of the crowd of civilians. Notice the subtle details Revere uses to help convince the viewer of the civilians’ innocence and the soldiers’ cruelty. Although eyewitnesses said the crowd started the fight by throwing snowballs and rocks, in the engraving they are innocently standing by. Revere also depicts the crowd as well dressed and well-to-do, when in fact they were laborers and probably looked quite a bit rougher.

Newspaper articles and pamphlets that the Sons of Liberty circulated implied that the “massacre” was a planned murder. In the Boston Gazette on March 12, 1770, an article describes the soldiers as striking first. It goes on to discuss this version of the events: “On hearing the noise, one Samuel Atwood came up to see what was the matter; and entering the alley from dock square, heard the latter part of the combat; and when the boys had dispersed he met the ten or twelve soldiers aforesaid rushing down the alley towards the square and asked them if they intended to murder people? They answered Yes, by God, root and branch! With that one of them struck Mr. Atwood with a club which was repeated by another; and being unarmed, he turned to go off and received a wound on the left shoulder which reached the bone and gave him much pain.”

What do you think most people in the United States think of when they consider the Boston Massacre? How does the propaganda of the Sons of Liberty still affect the way we think of this event?

PARTIAL REPEAL

As it turned out, the Boston Massacre occurred after Parliament had partially repealed the Townshend Acts. By the late 1760s, the American boycott of British goods had drastically reduced British trade.
Once again, merchants who lost money because of the boycott strongly pressured Parliament to loosen its restrictions on the colonies and break the non-importation movement. Charles Townshend died suddenly in 1767 and was replaced by Lord North, who was inclined to look for a more workable solution with the colonists. North convinced Parliament to drop all the Townshend duties except the tax on tea. The administrative and enforcement provisions under the Townshend Acts—the American Board of Customs Commissioners and the vice-admiralty courts—remained in place. To those who had protested the Townshend Acts for several years, the partial repeal appeared to be a major victory. For a second time, colonists had rescued liberty from an unconstitutional parliamentary measure. The hated British troops in Boston departed. The consumption of British goods skyrocketed after the partial repeal, an indication of the American colonists’ desire for the items linking them to the Empire.

5.4 The Destruction of the Tea and the Coercive Acts

Learning Objectives
By the end of this section, you will be able to:
Describe the socio-political environment in the colonies in the early 1770s
Explain the purpose of the Tea Act of 1773 and discuss colonial reactions to it
Identify and describe the Coercive Acts

The Tea Act of 1773 triggered a reaction with far more significant consequences than either the 1765 Stamp Act or the 1767 Townshend Acts. Colonists who had joined in protest against those earlier acts renewed their efforts in 1773. They understood that Parliament had again asserted its right to impose taxes without representation, and they feared the Tea Act was designed to seduce them into conceding this important principle by lowering the price of tea to the point that colonists might abandon their scruples. They also deeply resented the East India Company’s monopoly on the sale of tea in the American colonies; this resentment sprang from the knowledge that some members of Parliament had invested heavily in the company.

SMOLDERING RESENTMENT

Even after the partial repeal of the Townshend duties, however, suspicion of Parliament’s intentions remained high. This was especially true in port cities like Boston and New York, where British customs agents were a daily irritant and reminder of British power. In public houses and squares, people met and discussed politics. Philosopher John Locke’s Two Treatises of Government, published almost a century earlier, influenced political thought about the role of government to protect life, liberty, and property. The Sons of Liberty issued propaganda ensuring that colonists remained aware when Parliament overreached itself.

Violence continued to break out on occasion, as in 1772, when Rhode Island colonists boarded and burned the British revenue ship Gaspée in Narragansett Bay (Figure 5.12). Colonists had attacked or burned British customs ships in the past, but after the Gaspée Affair, the British government convened a Royal Commission of Inquiry. This Commission had the authority to remove the colonists, who were charged with treason, to Great Britain for trial. Some colonial protestors saw this new ability as another example of the overreach of British power.

Samuel Adams, along with Joseph Warren and James Otis, re-formed the Boston Committee of Correspondence, which functioned as a form of shadow government, to address the fear of British overreach. Soon towns all over Massachusetts had formed their own committees, and many other colonies followed suit.
These committees, which had between seven and eight thousand members in all, identified enemies of the movement and communicated the news of the day. Sometimes they provided a version of events that differed from royal interpretations, and slowly, the committees began to supplant royal governments as sources of information. They later formed the backbone of communication among the colonies in the rebellion against the Tea Act, and eventually in the revolt against the British crown.

THE TEA ACT OF 1773

Parliament did not enact the Tea Act of 1773 in order to punish the colonists, assert parliamentary power, or even raise revenues. Rather, the act was a straightforward order of economic protectionism for a British tea firm, the East India Company, that was on the verge of bankruptcy. In the colonies, tea was the one remaining consumer good subject to the hated Townshend duties. Protest leaders and their followers still avoided British tea, drinking smuggled Dutch tea as a sign of patriotism.

The Tea Act of 1773 gave the British East India Company the ability to export its tea directly to the colonies without paying import or export duties and without using middlemen in either Great Britain or the colonies. Even with the Townshend tax, the act would allow the East India Company to sell its tea at lower prices than the smuggled Dutch tea, thus undercutting the smuggling trade.

This act was unwelcome to those in British North America who had grown displeased with the pattern of imperial measures. By granting a monopoly to the East India Company, the act not only cut out colonial merchants who would otherwise sell the tea themselves; it also reduced their profits from smuggled foreign tea. These merchants were among the most powerful and influential people in the colonies, so their dissatisfaction carried some weight. Moreover, because the tea tax that the Townshend Acts imposed remained in place, tea had intense power to symbolize the idea of “no taxation without representation.”

COLONIAL PROTEST: THE DESTRUCTION OF THE TEA

The 1773 act reignited the worst fears among the colonists. To the Sons and Daughters of Liberty and those who followed them, the act appeared to be proof positive that a handful of corrupt members of Parliament were violating the British Constitution. Veterans of the protest movement had grown accustomed to interpreting British actions in the worst possible light, so the 1773 act appeared to be part of a large conspiracy against liberty.

As they had done to protest earlier acts and taxes, colonists responded to the Tea Act with a boycott. The Committees of Correspondence helped to coordinate resistance in all of the colonial port cities, so up and down the East Coast, British tea-carrying ships were unable to come to shore and unload their wares. In Charlestown, Boston, Philadelphia, and New York, the equivalent of millions of dollars’ worth of tea was held hostage, either locked in storage warehouses or rotting in the holds of ships as they were forced to sail back to Great Britain.

In Boston, Thomas Hutchinson, now the royal governor of Massachusetts, vowed that radicals like Samuel Adams would not keep the ships from unloading their cargo. He urged the merchants who would have accepted the tea from the ships to stand their ground and receive the tea once it had been unloaded. When the Dartmouth sailed into Boston Harbor in November 1773, it had twenty days to unload its cargo of tea and pay the duty before it had to return to Great Britain.
Two more ships, the Eleanor and the Beaver, followed soon after. Samuel Adams and the Sons of Liberty tried to keep the captains of the ships from paying the duties and posted groups around the ships to make sure the tea would not be unloaded. On December 16, just as the Dartmouth’s deadline approached, townspeople gathered at the Old South Meeting House determined to take action. From this gathering, a group of Sons of Liberty and their followers approached the three ships. Some were disguised as Mohawks. Protected by a crowd of spectators, they systematically dumped all the tea into the harbor, destroying goods worth almost $1 million in today’s dollars, a very significant loss. This act soon inspired further acts of resistance up and down the East Coast. However, not all colonists, and not even all Patriots, supported the dumping of the tea. The wholesale destruction of property shocked people on both sides of the Atlantic.

PARLIAMENT RESPONDS: THE COERCIVE ACTS
In London, response to the destruction of the tea was swift and strong. The violent destruction of property infuriated King George III and the prime minister, Lord North (Figure 5.13), who insisted the loss be repaid. Though some American merchants put forward a proposal for restitution, the Massachusetts Assembly refused to make payments. Massachusetts’s resistance to British authority united different factions in Great Britain against the colonies. North had lost patience with the unruly British subjects in Boston. He declared: “The Americans have tarred and feathered your subjects, plundered your merchants, burnt your ships, denied all obedience to your laws and authority; yet so clement and so long forbearing has our conduct been that it is incumbent on us now to take a different course. Whatever may be the consequences, we must risk something; if we do not, all is over.” Both Parliament and the king agreed that Massachusetts should be forced to both pay for the tea and yield to British authority.

In early 1774, leaders in Parliament responded with a set of four measures designed to punish Massachusetts, commonly known as the Coercive Acts. The Boston Port Act shut down Boston Harbor until the East India Company was repaid. The Massachusetts Government Act placed the colonial government under the direct control of crown officials and made traditional town meetings subject to the governor’s approval. The Administration of Justice Act allowed the royal governor to unilaterally move any trial of a crown officer out of Massachusetts, a change designed to prevent hostile Massachusetts juries from deciding these cases. This act was especially infuriating to John Adams and others who emphasized the time-honored rule of law. They saw this part of the Coercive Acts as striking at the heart of fair and equitable justice. Finally, the Quartering Act encompassed all the colonies and allowed British troops to be housed in occupied buildings.

At the same time, Parliament also passed the Quebec Act, which expanded the boundaries of Quebec westward and extended religious tolerance to Roman Catholics in the province. For many Protestant colonists, especially Congregationalists in New England, this forced tolerance of Catholicism was the most objectionable provision of the act. Additionally, expanding the boundaries of Quebec raised troubling questions for many colonists who eyed the West, hoping to expand the boundaries of their provinces.
The Quebec Act appeared gratuitous, a slap in the face to colonists already angered by the Coercive Acts. American Patriots renamed the Coercive and Quebec measures the Intolerable Acts. Some in London also thought the acts went too far; see the cartoon “The Able Doctor, or America Swallowing the Bitter Draught” (Figure 5.14) for one British view of what Parliament was doing to the colonies. Meanwhile, punishments designed to hurt only one colony (Massachusetts, in this case) had the effect of mobilizing all the colonies to its side. The Committees of Correspondence had already been active in coordinating an approach to the Tea Act. Now the talk would turn to these new, intolerable assaults on the colonists’ rights as British subjects.

5.5 Disaffection: The First Continental Congress and American Identity

Learning Objectives
By the end of this section, you will be able to:
Describe the state of affairs between the colonies and the home government in 1774
Explain the purpose and results of the First Continental Congress

Disaffection—the loss of affection toward the home government—had reached new levels by 1774. Many colonists viewed the Intolerable Acts as a turning point; they now felt they had to take action. The result was the First Continental Congress, a direct challenge to Lord North and British authority in the colonies. Still, it would be a mistake to assume there was a groundswell of support for separating from the British Empire and creating a new, independent nation. Strong ties still bound the Empire together, and colonists did not agree about the proper response. Loyalists tended to be property holders, established residents who feared the loss of their property. To them the protests seemed to promise nothing but mob rule, and the violence and disorder they provoked were shocking. On both sides of the Atlantic, opinions varied.

After the passage of the Intolerable Acts in 1774, the Committees of Correspondence and the Sons of Liberty went straight to work, spreading warnings about how the acts would affect the liberty of all colonists, not just urban merchants and laborers. The Massachusetts Government Act had shut down the colonial government there, but resistance-minded colonists began meeting in extralegal assemblies. One of these assemblies, the Massachusetts Provincial Congress, passed the Suffolk Resolves in September 1774, which laid out a plan of resistance to the Intolerable Acts. Meanwhile, the First Continental Congress was convening to discuss how to respond to the acts themselves.

The First Continental Congress was made up of elected representatives of twelve of the thirteen American colonies. (Georgia’s royal governor blocked the move to send representatives from that colony, an indication of the continued strength of the royal government despite the crisis.) The representatives met in Philadelphia from September 5 through October 26, 1774, and at first they did not agree at all about the appropriate response to the Intolerable Acts. Joseph Galloway of Pennsylvania argued for a conciliatory approach; he proposed that an elected Grand Council in America, like the Parliament in Great Britain, should be paired with a royally appointed President General, who would represent the authority of the Crown. More radical factions argued for a move toward separation from the Crown. In the end, Paul Revere rode from Massachusetts to Philadelphia with the Suffolk Resolves, which became the basis of the Declaration and Resolves of the First Continental Congress.
In the Declaration and Resolves, adopted on October 14, the colonists demanded the repeal of all repressive acts passed since 1773 and agreed to a non-importation, non-exportation, and non-consumption pact against all British goods until the acts were repealed. In the “Petition of Congress to the King” on October 24, the delegates adopted a further recommendation of the Suffolk Resolves and proposed that the colonies raise and regulate their own militias. The representatives at the First Continental Congress created a Continental Association to ensure that the full boycott was enforced across all the colonies. The Continental Association served as an umbrella group for colonial and local committees of observation and inspection. By taking these steps, the First Continental Congress established a governing network in opposition to royal authority.

Defining American
The First List of Un-American Activities
In her book Toward A More Perfect Union: Virtue and the Formation of American Republics, historian Ann Fairfax Withington explores actions the delegates to the First Continental Congress took during the weeks they were together. Along with their efforts to bring about the repeal of the Intolerable Acts, the delegates also banned certain activities they believed would undermine their fight against what they saw as British corruption. In particular, the delegates prohibited horse races, cockfights, the theater, and elaborate funerals.

The reasons for these prohibitions provide insight into the state of affairs in 1774. Both horse races and cockfights encouraged gambling and, for the delegates, gambling threatened to prevent the unity of action and purpose they desired. In addition, cockfighting appeared immoral and corrupt because the roosters were fitted with razors and fought to the death (Figure 5.15). The ban on the theater aimed to do away with another corrupt British practice. Critics had long believed that theatrical performances drained money from working people. Moreover, they argued, theatergoers learned to lie and deceive from what they saw on stage. The delegates felt banning the theater would demonstrate their resolve to act honestly and without pretense in their fight against corruption. Finally, eighteenth-century mourning practices often required lavish spending on luxury items and even the employment of professional mourners who, for a price, would shed tears at the grave. Prohibiting these practices reflected the idea that luxury bred corruption, and the First Continental Congress wanted to demonstrate that the colonists would do without British vices. Congress emphasized the need to be frugal and self-sufficient when confronted with corruption.

The First Continental Congress banned all four activities—horse races, cockfights, the theater, and elaborate funerals—and entrusted the Continental Association with enforcement. Rejecting what they saw as corruption coming from Great Britain, the delegates were also identifying themselves as standing apart from their British relatives. They cast themselves as virtuous defenders of liberty against a corrupt Parliament. In the Declaration and Resolves and the Petition of Congress to the King, the delegates to the First Continental Congress refer to George III as “Most Gracious Sovereign” and to themselves as “inhabitants of the English colonies in North America” or “inhabitants of British America,” indicating that they still considered themselves British subjects of the king, not American citizens.
At the same time, however, they were slowly moving away from British authority, creating their own de facto government in the First Continental Congress. One of the provisions of the Congress was that it meet again in one year to mark its progress; the Congress was becoming an elected government.
principles_of_accounting,_volume_1:_financial_accounting
Summary

8.1 Analyze Fraud in the Accounting Workplace
The fraud triangle helps explain the mechanics of fraud by examining the common contributing factors of perceived opportunity, incentive, and rationalization. Due to the nature of their functions, internal and external auditors, through the implementation of effective internal controls, are in excellent positions to prevent opportunity-based fraud.

8.2 Define and Explain Internal Controls and Their Purpose within an Organization
A system of internal control is the policies combined with procedures created by management to protect the integrity of assets and ensure efficiency of operations. The system prevents losses and helps management maintain an effective means of performance.

8.3 Describe Internal Controls within an Organization
Principles of an effective internal control system include having clear responsibilities, documenting operations, having adequate insurance, separating duties, and setting clear responsibilities for action. Internal controls are applicable to all types of organizations: for-profit, not-for-profit, and governmental organizations.

8.4 Define the Purpose and Use of a Petty Cash Fund, and Prepare Petty Cash Journal Entries
The purpose of a petty cash fund is to make payments for small amounts that are immaterial, such as postage, minor repairs, or day-to-day supplies. A petty cash account is an imprest account, so it is only debited when the fund is initially established or increased in amount. Transactions to replenish the account involve a debit to the expenses and a credit to the cash account (e.g., bank account).

8.5 Discuss Management Responsibilities for Maintaining Internal Controls within an Organization
It is the responsibility of management to assure that the internal controls of a company are effective and in place. Though management has always had responsibility over internal controls, the Sarbanes-Oxley Act has added additional assurances that management takes this responsibility seriously, and it has placed sanctions against corporate officers and boards of directors who do not take appropriate responsibility. Although Sarbanes-Oxley applies only to public companies, proper internal controls are an important aspect of all businesses of any size. Tone at the top is a key component of a proper internal control system.

8.6 Define the Purpose of a Bank Reconciliation, and Prepare a Bank Reconciliation and Its Associated Journal Entries
The bank reconciliation is an internal document that verifies the accuracy of records maintained by the depositor and the financial institution. The balance on the bank statement is adjusted for outstanding checks and uncleared deposits. The record balance is adjusted for service charges and interest earned. The bank reconciliation is an internal control document that ensures transactions to the bank account are properly recorded, and it allows for verification of transactions.

8.7 Describe Fraud in Financial Statements and Sarbanes-Oxley Act Requirements
Financial statement fraud has occurred when financial statements intentionally hide illegal transactions or fail to accurately reflect the true financial condition of an entity. Cooking the books can be used to create false records to present to lenders or investors. It also is used to hide corporate looting of funds and other resources, or to increase stock prices.
Cooking the books is an intentional action and is often achieved through the manipulation of the entity’s revenues or accounts receivable. HealthSouth and Enron were used as examples of past corporate financial fraud. The section takes a brief look at the current state of SOX compliance.
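Two of the summaries above describe mechanical procedures, the petty cash replenishment entry (8.4) and the bank reconciliation (8.6), that a short worked example can make concrete. The following Python sketch uses entirely hypothetical account names and amounts; it illustrates the logic rather than any prescribed journal entry or reconciliation format.

```python
# Illustrative sketch only; all account names and amounts are hypothetical.

# (1) Petty cash replenishment (imprest system, Summary 8.4): the Petty Cash
# account itself is not touched -- each expense receipt is debited and the
# cash (bank) account is credited for the total of the receipts.
receipts = {"Postage Expense": 18.00, "Supplies Expense": 27.50, "Delivery Expense": 12.25}

replenishment_entry = {
    "debits": receipts,                            # one debit per expense receipt
    "credits": {"Cash": sum(receipts.values())},   # single credit to Cash
}
assert sum(replenishment_entry["debits"].values()) == sum(replenishment_entry["credits"].values())

# (2) Bank reconciliation (Summary 8.6): each side is adjusted for the items
# the other side does not yet know about, and the adjusted balances must agree.
bank_balance = 4_250.00          # balance per bank statement
deposits_in_transit = 600.00     # recorded on the books, not yet at the bank
outstanding_checks = 890.00      # written by the company, not yet cleared

book_balance = 3_985.00          # balance per company records
interest_earned = 5.00           # on the statement, not yet on the books
service_charges = 30.00          # on the statement, not yet on the books

adjusted_bank = bank_balance + deposits_in_transit - outstanding_checks
adjusted_book = book_balance + interest_earned - service_charges

print(f"Adjusted bank balance: {adjusted_bank:,.2f}")
print(f"Adjusted book balance: {adjusted_book:,.2f}")
assert adjusted_bank == adjusted_book  # reconciled: both equal 3,960.00
```

In a reconciliation like this, journal entries would follow only for the book-side adjustments (the interest earned and the service charge), since the bank-side items clear on their own once the deposits and checks reach the bank.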
Chapter Outline
8.1 Analyze Fraud in the Accounting Workplace
8.2 Define and Explain Internal Controls and Their Purpose within an Organization
8.3 Describe Internal Controls within an Organization
8.4 Define the Purpose and Use of a Petty Cash Fund, and Prepare Petty Cash Journal Entries
8.5 Discuss Management Responsibilities for Maintaining Internal Controls within an Organization
8.6 Define the Purpose of a Bank Reconciliation, and Prepare a Bank Reconciliation and Its Associated Journal Entries
8.7 Describe Fraud in Financial Statements and Sarbanes-Oxley Act Requirements

Why It Matters
One of Jennifer’s fondest memories was visiting her grandparents’ small country store when she was a child. She was impressed by how happy the customers seemed to be in the welcoming environment. While attending college, she decided that the college community needed a coffee/pastry shop where students and the local citizens could congregate, spend time together, and enjoy a coffee or other beverage, along with a pastry that Jennifer would buy from a local bakery. In a sense, she wanted to replicate the environment that people found in her grandparents’ store.

After graduation, while she was in the planning stage, she asked her former accounting professor for advice on planning and operating a business, since she had heard that the attrition rate for new businesses is quite high. The professor told her that one of the most important factors was the selection, hiring, and treatment of happy and productive personnel. The professor further stated that, with the right personnel, many problems that companies might face, such as fraud, theft, and the violation of the organization’s internal control policies and principles, can be lessened. To emphasize her point, the professor cited a statistic from the National Restaurant Association’s 2016 Restaurant Operations Report that restaurant staff were responsible for an estimated 75% of inventory theft. 1 This statistic led to the professor’s final gem of wisdom for Jennifer: hire the right people, create a pleasant work environment, and also create an environment that does not tempt your personnel to consider fraudulent or felonious activities.

1 National Restaurant Association. “2016 Restaurant Operations Report.” 2016. https://www.restaurant.org/research/reports/restaurant-operations-report
[ { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Incentive ( or pressure ) is another element necessary for a person to commit fraud . <hl> <hl> The different types of pressure are typically found in ( 1 ) vices , such as gambling or drug use ; ( 2 ) financial pressures , such as greed or living beyond their means ; ( 3 ) work pressure , such as being unhappy with a job ; and ( 4 ) other pressures , such as the desire to appear successful . <hl> Pressure may be more recognizable than rationalization , for instance , when coworkers seem to be living beyond their means or complain that they want to get even with their employer because of low pay or other perceived slights .", "hl_sentences": "Incentive ( or pressure ) is another element necessary for a person to commit fraud . The different types of pressure are typically found in ( 1 ) vices , such as gambling or drug use ; ( 2 ) financial pressures , such as greed or living beyond their means ; ( 3 ) work pressure , such as being unhappy with a job ; and ( 4 ) other pressures , such as the desire to appear successful .", "question": { "cloze_format": "A fraudster would perceive ___ as a pressure.", "normal_format": "Which of the following would a fraudster perceive as a pressure?", "question_choices": [ "lack of management oversight", "everyone does it", "living beyond one’s means", "lack of an internal audit function" ], "question_id": "fs-idm361923088", "question_text": "Which of the following would a fraudster perceive as a pressure?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "Internal controls and company policies are important to protect and safeguard assets and to protect all company data and are designed to protect the company from fraud." }, "bloom": null, "hl_context": "It should be clear how important internal control is to all businesses , regardless of size . <hl> An effective internal control system allows a business to monitor its employees , but it also helps a company protect sensitive customer data . <hl> Consider the 2017 massive data breach at Equifax that compromised data of over 143 million people . With proper internal controls functioning as intended , there would have been protective measures to ensure that no unauthorized parties had access to the data . <hl> Not only would internal controls prevent outside access to the data , but proper internal controls would protect the data from corruption , damage , or misuse . <hl> One of the issues faced by any organization is that internal control systems can be overridden and can be ineffective if not followed by management or employees . <hl> The use of internal controls in both accounting and operations can reduce the risk of fraud . <hl> In the unfortunate event that an organization is a victim of fraud , the internal controls should provide tools that can be used to identify who is responsible for the fraud and provide evidence that can be used to prosecute the individual responsible for the fraud . <hl> This chapter discusses internal controls in the context of accounting and controlling for cash in a typical business setting . <hl> These examples are applicable to the other ways in which an organization may protect its assets and protect itself against fraud . 
8.2 Define and Explain Internal Controls and Their Purpose within an Organization", "hl_sentences": "An effective internal control system allows a business to monitor its employees , but it also helps a company protect sensitive customer data . Not only would internal controls prevent outside access to the data , but proper internal controls would protect the data from corruption , damage , or misuse . The use of internal controls in both accounting and operations can reduce the risk of fraud . This chapter discusses internal controls in the context of accounting and controlling for cash in a typical business setting .", "question": { "cloze_format": "Internal control is said to be the backbone of all businesses. The best description of internal controls is that ___ .", "normal_format": "Internal control is said to be the backbone of all businesses. Which of the following is the best description of internal controls?", "question_choices": [ "Internal controls ensure that the financial statements published are correct.", "The only role of internal controls is to protect customer data.", "Internal controls and company policies are important to protect and safeguard assets and to protect all company data and are designed to protect the company from fraud.", "Internal controls are designed to keep employees from committing fraud against the company." ], "question_id": "fs-idm357461072", "question_text": "Internal control is said to be the backbone of all businesses. Which of the following is the best description of internal controls?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "The use of internal controls differs significantly across organizations of different sizes . <hl> In the case of small businesses , implementation of internal controls can be a challenge , due to cost constraints , or because a small staff may mean that one manager or owner will have full control over the organization and its operations . <hl> <hl> An owner in charge of all functions has enough knowledge to keep a close eye on all aspects of the organization and can track all assets appropriately . <hl> In smaller organizations in which responsibilities are delegated , procedures need to be developed in order to ensure that assets are tracked and used properly .", "hl_sentences": "In the case of small businesses , implementation of internal controls can be a challenge , due to cost constraints , or because a small staff may mean that one manager or owner will have full control over the organization and its operations . An owner in charge of all functions has enough knowledge to keep a close eye on all aspects of the organization and can track all assets appropriately .", "question": { "cloze_format": "The best way for owners of small businesses to maintain proper internal controls is that ___.", "normal_format": "What is the best way for owners of small businesses to maintain proper internal controls?", "question_choices": [ "The owner must have enough knowledge of all aspects of the company and have controls in place to track all assets.", "Small businesses do not need to worry about internal controls.", "Small businesses should make one of their employees in charge of all aspects of the company, giving the owner the ability to run the company and generate sales.", "Only managers need to be concerned about internal controls." 
], "question_id": "fs-idm392990528", "question_text": "What is the best way for owners of small businesses to maintain proper internal controls?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "Publish accurate financial statements on a regular basis." }, "bloom": null, "hl_context": "<hl> ensure that assets are kept secure <hl> <hl> monitor operations of the organization to ensure maximum efficiency <hl> <hl> ensure assets are properly used <hl> Here we address some of the practical aspects of internal control systems . <hl> The internal control system consists of the formal policies and procedures that do the following : <hl>", "hl_sentences": "ensure that assets are kept secure monitor operations of the organization to ensure maximum efficiency ensure assets are properly used The internal control system consists of the formal policies and procedures that do the following :", "question": { "cloze_format": "It is not considered to be part of the internal control structure of a company to ___.", "normal_format": "Which of the following is not considered to be part of the internal control structure of a company?", "question_choices": [ "Ensure that assets are kept secure.", "Monitor operations of the organization to ensure maximum efficiency.", "Publish accurate financial statements on a regular basis.", "Ensure assets are properly used." ], "question_id": "fs-idm684287760", "question_text": "Which of the following is not considered to be part of the internal control structure of a company?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> In addition , any documentation generated by daily operations should be managed according to internal controls . <hl> <hl> For example , when the Galaxy ’ s Best Yogurt closes each day , one employee should close out and reconcile the cash drawer using prenumbered forms in pen to ensure that no forms can be altered or changed by another employee who may have access to the cash . <hl> In case of an error , the employee responsible for making the change should initial any changes on the form . If there are special orders for cakes or other products , the order forms should be prenumbered . The use of prenumbered documents provides assurance that all sales are recorded . If a form is not prenumbered , an order can be prepared , and the employee can then take the money without ringing the order into the cash register , leaving no record of the sale .", "hl_sentences": "In addition , any documentation generated by daily operations should be managed according to internal controls . For example , when the Galaxy ’ s Best Yogurt closes each day , one employee should close out and reconcile the cash drawer using prenumbered forms in pen to ensure that no forms can be altered or changed by another employee who may have access to the cash .", "question": { "cloze_format": "The statement that would not address the issue of having cash transactions reported in the accounting records is ___ .", "normal_format": "There are several elements to internal controls. 
Which of the following would not address the issue of having cash transactions reported in the accounting records?", "question_choices": [ "One employee would have access to the cash register.", "The cash drawer should be closed out, and cash and the sales register should be reconciled on a prenumbered form.", "Ask customers to report to a manager if they do not receive a sales receipt or invoice.", "The person behind the cash register should also be responsible for making price adjustments." ], "question_id": "fs-idm686164688", "question_text": "There are several elements to internal controls. Which of the following would not address the issue of having cash transactions reported in the accounting records?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Separation of assets from custody ensures that the person who controls an asset cannot also keep the accounting records . This action prevents one employee from taking income from the business and entering a transaction on the accounting records to cover it up . <hl> For example , one person within an organization may open an envelope that contains a check , but a different person would enter the check into the organization ’ s accounting system . <hl> <hl> In the case of the Galaxy ’ s Best Yogurt , one employee may count the money in the cash register drawer at the end of the night and reconcile it with the sales , but a different employee would recount the money , prepare the bank deposit , and ensure that the deposit is made at the bank . <hl>", "hl_sentences": "For example , one person within an organization may open an envelope that contains a check , but a different person would enter the check into the organization ’ s accounting system . In the case of the Galaxy ’ s Best Yogurt , one employee may count the money in the cash register drawer at the end of the night and reconcile it with the sales , but a different employee would recount the money , prepare the bank deposit , and ensure that the deposit is made at the bank .", "question": { "cloze_format": "There are three employees in the accounting department: payroll clerk, accounts payable clerk, and accounts receivable clerk. The employee that should not make the daily deposit is the ___ .", "normal_format": "There are three employees in the accounting department: payroll clerk, accounts payable clerk, and accounts receivable clerk. Which one of these employees should not make the daily deposit?", "question_choices": [ "payroll clerk", "account payable clerk", "accounts receivable clerk", "none of them" ], "question_id": "fs-idm635347344", "question_text": "There are three employees in the accounting department: payroll clerk, accounts payable clerk, and accounts receivable clerk. Which one of these employees should not make the daily deposit?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "packing slip" }, "bloom": null, "hl_context": "<hl> As cash is spent from a petty cash fund , it is replaced with a receipt of the purchase . <hl> At all times , the balance in the petty cash box should be equal to the cash in the box plus the receipts showing purchases . As we have discussed , one of the hardest assets to control within any organization is cash . <hl> One way to control cash is for an organization to require that all payments be made by check . <hl> <hl> However , there are situations in which it is not practical to use a check . 
<hl> For example , imagine that the Galaxy ’ s Best Yogurt runs out of milk one evening . It is not possible to operate without milk , and the normal shipment does not come from the supplier for another 48 hours . To maintain operations , it becomes necessary to go to the grocery store across the street and purchase three gallons of milk . <hl> It is not efficient for time and cost to write a check for this small purchase , so companies set up a petty cash fund , which is a predetermined amount of cash held on hand to be used to make payments for small day-to-day purchases . <hl> <hl> A petty cash fund is a type of imprest account , which means that it contains a fixed amount of cash that is replaced as it is spent in order to maintain a set balance . <hl> <hl> Think It Through Hiring Approved Vendors One internal control that companies often have is an official “ approved vendor ” list for purchases . <hl> Why is it important to have an approved vendor list ? 8.4 Define the Purpose and Use of a Petty Cash Fund , and Prepare Petty Cash Journal Entries", "hl_sentences": "As cash is spent from a petty cash fund , it is replaced with a receipt of the purchase . One way to control cash is for an organization to require that all payments be made by check . However , there are situations in which it is not practical to use a check . It is not efficient for time and cost to write a check for this small purchase , so companies set up a petty cash fund , which is a predetermined amount of cash held on hand to be used to make payments for small day-to-day purchases . A petty cash fund is a type of imprest account , which means that it contains a fixed amount of cash that is replaced as it is spent in order to maintain a set balance . Think It Through Hiring Approved Vendors One internal control that companies often have is an official “ approved vendor ” list for purchases .", "question": { "cloze_format": "The ___ document is not needed to process a payment to a vendor.", "normal_format": "Which one of the following documents is not needed to process a payment to a vendor?", "question_choices": [ "vendor invoice", "packing slip", "check request", "purchase order" ], "question_id": "fs-idm685975760", "question_text": "Which one of the following documents is not needed to process a payment to a vendor?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "In terms of the application of security resources , some businesses use surveillance cameras focused on key areas of the organization , such as the cash register and areas where a majority of work is performed . <hl> Technology also allows businesses to use password protection on their data or systems so that employees cannot access systems and change data without authorization . 
<hl> Businesses may also track all employee activities within an information technology system .", "hl_sentences": "Technology also allows businesses to use password protection on their data or systems so that employees cannot access systems and change data without authorization .", "question": { "cloze_format": "The advantage of using technology in the internal control system is that ___ .", "normal_format": "What is the advantage of using technology in the internal control system?", "question_choices": [ "Passwords can be used to allow access by employees.", "Any cash received does not need to be reconciled because the computer tracks all transactions.", "Transactions are easily changed.", "Employees cannot steal because all cash transactions are recorded by the computer/cash register." ], "question_id": "fs-idm372711712", "question_text": "What is the advantage of using technology in the internal control system?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "cash" }, "bloom": null, "hl_context": "<hl> As we have discussed , one of the hardest assets to control within any organization is cash . <hl> One way to control cash is for an organization to require that all payments be made by check . However , there are situations in which it is not practical to use a check . For example , imagine that the Galaxy ’ s Best Yogurt runs out of milk one evening . It is not possible to operate without milk , and the normal shipment does not come from the supplier for another 48 hours . To maintain operations , it becomes necessary to go to the grocery store across the street and purchase three gallons of milk . It is not efficient for time and cost to write a check for this small purchase , so companies set up a petty cash fund , which is a predetermined amount of cash held on hand to be used to make payments for small day-to-day purchases . A petty cash fund is a type of imprest account , which means that it contains a fixed amount of cash that is replaced as it is spent in order to maintain a set balance .", "hl_sentences": "As we have discussed , one of the hardest assets to control within any organization is cash .", "question": { "cloze_format": "The asset that requires the strongest of internal controls is ___.", "normal_format": "Which of the following assets require the strongest of internal controls?", "question_choices": [ "inventory", "credit cards", "computer equipment", "cash" ], "question_id": "fs-idm404730784", "question_text": "Which of the following assets require the strongest of internal controls?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> As a result of the Enron failure and others that occurred during the same time frame , Congress passed the Sarbanes-Oxley Act ( SOX ) to regulate practice to manage conflicts of analysts , maintain governance , and impose guidelines for criminal conduct as well as sanctions for violations of conduct . <hl> <hl> It ensures that internal controls are properly documented , tested , and used consistently . <hl> The intent of the act was to ensure that corporate financial statements and disclosures are accurate and reliable . It is important to note that SOX only applies to public companies . A publicly traded company is one whose stock is traded ( bought and sold ) on an organized stock exchange . 
Smaller companies still struggle with internal control development and compliance due to a variety of reasons , such as cost and lack of resources .", "hl_sentences": "As a result of the Enron failure and others that occurred during the same time frame , Congress passed the Sarbanes-Oxley Act ( SOX ) to regulate practice to manage conflicts of analysts , maintain governance , and impose guidelines for criminal conduct as well as sanctions for violations of conduct . It ensures that internal controls are properly documented , tested , and used consistently .", "question": { "cloze_format": "A true statement about the Sarbanes-Oxley Act is that ___ .", "normal_format": "Which of the following is true about the Sarbanes-Oxley Act?", "question_choices": [ "It was passed to ensure that internal controls are properly documented and tested by public companies.", "It applies to both public and smaller companies.", "It requires all companies to report their internal control policies to the US Securities and Exchange Commission.", "It does not require additional costs or resources to have adequate controls." ], "question_id": "fs-idm269712544", "question_text": "Which of the following is true about the Sarbanes-Oxley Act?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "The auditor is required to only report weaknesses in the internal control design of the company he or she is auditing." }, "bloom": null, "hl_context": "<hl> Additionally , the work conducted by the auditor is to be overseen by the Public Company Accounting Oversight Board ( PCAOB ) . <hl> The PCAOB is a congressionally established , nonprofit corporation . Its creation was included in the Sarbanes-Oxley Act of 2002 to regulate conflict , control disclosures , and set sanction guidelines for any violation of regulations . <hl> The PCAOB was assigned the responsibilities of ensuring independent , accurate , and informative audit reports , monitoring the audits of securities brokers and dealers , and maintaining oversight of the accountants and accounting firms that audit publicly traded companies . <hl> Rotate who can lead the audit . <hl> The person in charge of the audit can serve for a period of no longer than seven years without a break of two years . <hl> <hl> Limit nonaudit services , such as consulting , that are provided to a client . <hl> <hl> Issue an internal control report following the evaluation of internal controls . <hl> <hl> As it pertains to internal controls , the SOX requires the certification and documentation of internal controls . <hl> <hl> Specifically , the act requires that the auditor do the following : <hl>", "hl_sentences": "Additionally , the work conducted by the auditor is to be overseen by the Public Company Accounting Oversight Board ( PCAOB ) . The PCAOB was assigned the responsibilities of ensuring independent , accurate , and informative audit reports , monitoring the audits of securities brokers and dealers , and maintaining oversight of the accountants and accounting firms that audit publicly traded companies . The person in charge of the audit can serve for a period of no longer than seven years without a break of two years . Limit nonaudit services , such as consulting , that are provided to a client . Issue an internal control report following the evaluation of internal controls . As it pertains to internal controls , the SOX requires the certification and documentation of internal controls . 
Specifically , the act requires that the auditor do the following :", "question": { "cloze_format": "The external auditor of a company has certain requirements due to Sarbanes-Oxley. The best description of these requirements is that ___.", "normal_format": "The external auditor of a company has certain requirements due to Sarbanes-Oxley. Which of the following best describes these requirements?", "question_choices": [ "The auditor is required to only report weaknesses in the internal control design of the company he or she is auditing.", "The auditor must issue an internal control report on the evaluation of internal controls overseen by the Public Company Accounting Oversight Board", "The auditor in charge can serve for a period of only two years.", "The Public Company Accounting Oversight Board reviews reports submitted by the auditors when no evaluations have been performed." ], "question_id": "fs-idm282587136", "question_text": "The external auditor of a company has certain requirements due to Sarbanes-Oxley. Which of the following best describes these requirements?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Additions such as interest or funds collected by the bank for the client : interest is added to the bank account as earned but is not reported on the financial records . <hl> These additions might also include funds collected by the bank for the client .", "hl_sentences": "Additions such as interest or funds collected by the bank for the client : interest is added to the bank account as earned but is not reported on the financial records .", "question": { "cloze_format": "The item that is found on a book side of the bank reconciliation is the ___ .", "normal_format": "Which of the following items are found on a book side of the bank reconciliation?", "question_choices": [ "beginning bank balance", "outstanding checks", "interest income", "error made by bank" ], "question_id": "fs-idm201336544", "question_text": "Which of the following items are found on a book side of the bank reconciliation?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "interest income" }, "bloom": null, "hl_context": "<hl> Additions such as interest or funds collected by the bank for the client : interest is added to the bank account as earned but is not reported on the financial records . <hl> These additions might also include funds collected by the bank for the client .", "hl_sentences": "Additions such as interest or funds collected by the bank for the client : interest is added to the bank account as earned but is not reported on the financial records .", "question": { "cloze_format": "A ___ is found on the bank side of the bank reconciliation.", "normal_format": "Which of the following are found on the bank side of the bank reconciliation?", "question_choices": [ "NSF check", "interest income", "wire transfer into client’s account", "deposit in transit" ], "question_id": "fs-idm207969632", "question_text": "Which of the following are found on the bank side of the bank reconciliation?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "to help nudge its stock price higher" }, "bloom": null, "hl_context": "A common reason to cook the books is to create a false set of a company ’ s books used to convince investors or lenders to provide money to the company . 
Investors and lenders rely on a properly prepared set of financial statements in making their decision to provide the company with money . Another reason to misstate a set of financial statements is to hide corporate looting such as excessive retirement perks of top executives , unpaid loans to top executives , improper stock options , and any other wrongful financial action . <hl> Yet another reason to misreport a company ’ s financial data is to drive the stock price higher . <hl> Internal controls assist the accountant in locating and identifying when management of a company wants to mislead the inventors or lenders .", "hl_sentences": "Yet another reason to misreport a company ’ s financial data is to drive the stock price higher .", "question": { "cloze_format": "A reason a company would want to overstate income is ___ .", "normal_format": "What would be a reason a company would want to overstate income?", "question_choices": [ "to help nudge its stock price higher", "to lower its tax bill", "to show a decrease in overall profits", "none of the above" ], "question_id": "fs-idm368358656", "question_text": "What would be a reason a company would want to overstate income?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "One of the most common ways companies cook the books is by manipulating revenue accounts or accounts receivables . <hl> Proper revenue recognition involves accounting for revenue when the company has met its obligation on a contract . <hl> Financial statement fraud involves early revenue recognition , or recognizing revue that does not exist , and receivable accountings , used in tandem with false revenue reporting . HealthSouth used a combination of false revenue accounts and misstated accounts receivable in a direct manipulation of the revenue accounts to commit a multibillion-dollar fraud between 1996 and 2002 . Several chief financial officers and other company officials went to prison as a result . 9 9 Melinda Dickinson . “ Former HealthSouth Boss Found Liable for $ 2.9 Billion . ” Reuters . June 18 , 2009 . https://www.reuters.com/article/us-healthsouth-scrushy/former-healthsouth-boss-found-liable-for-2-9-billion-idUSTRE55H4IP20090618", "hl_sentences": "Proper revenue recognition involves accounting for revenue when the company has met its obligation on a contract .", "question": { "cloze_format": "Revenue recognition occurs ___.", "normal_format": "At what point does revenue recognition occur?", "question_choices": [ "When the purchase order is received", "When the seller receives the money for the job", "When the seller has met “performance”", "When the purchaser makes payment" ], "question_id": "fs-idm366077808", "question_text": "At what point does revenue recognition occur?" }, "references_are_paraphrase": null } ]
8
8.1 Analyze Fraud in the Accounting Workplace

In this chapter, one of the major issues examined is the concept of fraud. Fraud can be defined in many ways, but for the purposes of this course we define it as the act of intentionally deceiving a person or organization or misrepresenting a relationship in order to secure some type of benefit, either financial or nonfinancial. We initially discuss it in a broader sense and then concentrate on the issue of fraud as it relates to the accounting environment and profession.

Workplace fraud is typically detected by anonymous tips or by accident, so many companies use the fraud triangle to help in the analysis of workplace fraud. Donald Cressey, an American criminologist and sociologist, developed the fraud triangle to help explain why law-abiding citizens sometimes commit serious workplace-related crimes. He determined that people who embezzled money from banks were typically otherwise law-abiding citizens who came into a “non-sharable financial problem.” A non-sharable financial problem is when a trusted individual has a financial issue or problem that he or she feels cannot be shared. However, the individual feels that the problem can be alleviated by surreptitiously violating the position of trust through some type of illegal response, such as embezzlement or other forms of misappropriation. The guilty party is typically able to rationalize the illegal action. Although they committed serious financial crimes, for many of them, it was their first offense.

The fraud triangle consists of three elements: incentive, opportunity, and rationalization (Figure 8.2). When an employee commits fraud, the elements of the fraud triangle provide assistance in understanding the employee’s methods and rationale. Each of the elements needs to be present for workplace fraud to occur.

Perceived opportunity is when a potential fraudster thinks that the internal controls are weak or sees a way to override them. This is the area in which an accountant has the greatest ability to mitigate fraud, as the accountant can review and test internal controls to locate weaknesses. After identifying a weak, circumvented, or nonexistent internal control, management, along with the accountant, can implement stronger internal controls.

Rationalization is a way for the potential fraudster to internalize the concept that the fraudulent actions are acceptable. A typical fraudster finds ways to personally justify his or her illegal and unethical behavior. Using rationalization as a tool to locate or combat fraud is difficult, because the outward signs may be difficult to recognize.

Incentive (or pressure) is another element necessary for a person to commit fraud. The different types of pressure are typically found in (1) vices, such as gambling or drug use; (2) financial pressures, such as greed or living beyond their means; (3) work pressure, such as being unhappy with a job; and (4) other pressures, such as the desire to appear successful. Pressure may be more recognizable than rationalization, for instance, when coworkers seem to be living beyond their means or complain that they want to get even with their employer because of low pay or other perceived slights.

Typically, all three elements of the triangle must be in place for an employee to commit fraud, but companies usually focus on the opportunity aspect of mitigating fraud because they can develop internal controls to manage the risk. The rationalization and pressure to commit fraud are harder to understand and identify.
Many organizations may recognize that an employee is under pressure, but many times the signs of pressure are missed. Virtually all types of businesses can fall victim to fraudulent behavior. For example, there have been scams involving grain silos in Texas inflating their inventory, the sale of mixed oils labeled as olive oil across the globe, and the tens of billions of dollars that Bernie Madoff swindled out of investors and not-for-profits.

To demonstrate how a fraud can occur, let’s examine a sample case in a little more detail. In 2015, a long-term employee of the SCICAP Federal Credit Union in Iowa was convicted of stealing over $2.7 million in cash over a 37-year period. The employee maintained two sets of financial records: one that provided customers with correct information as to how much money they had on deposit within their account, and a second set of books that, through a complex set of transactions, moved money out of customer accounts and into the employee’s account as well as those of members of her family. To ensure that no other employee within the small credit union would have access to the duplicate set of books, the employee never took a vacation over the 37-year period, and she was the only employee with password-protected access to the system where the electronic records were stored.

There were, at least, two obvious violations of solid internal control principles in this case. The first was the failure to require more than one person to have access to the records, which the employee was able to maintain by not taking a vacation. Allowing the employee to not share the password-protected access was a second violation. If more than one employee had had access to the system, the felonious employee probably would have been caught much earlier. What other potential failures in the internal control system might have been present? How does this example of fraud exhibit the three components of the fraud triangle?

Unfortunately, this is one of many examples that occur on a daily basis. In almost any city on almost any day, there are articles in local newspapers about a theft from a company by its employees. Although these thefts can involve assets such as inventory, most often, employee theft involves cash that the employee has access to as part of his or her day-to-day job.

Link to Learning
Small businesses have few employees, but often they have certain employees who are trusted with responsibilities that may not have complete internal control systems. This situation makes small businesses especially vulnerable to fraud. The article “Small Business Fraud and the Trusted Employee” from the Association of Certified Fraud Examiners describes how a trusted employee may come to commit fraud, and how a small business can prevent it from happening.

Accountants, and other members of the management team, are in a good position to control the perceived opportunity side of the fraud triangle through good internal controls, which are policies and procedures used by management and accountants of a company to protect assets and maintain proper and efficient operations within a company with the intent to minimize fraud. An internal auditor is an employee of an organization whose job is to provide an independent and objective evaluation of the company’s accounting and operational activities. Management typically reviews the recommendations and implements stronger internal controls.
Another important role is that of an external auditor, who generally works for an outside certified public accountant (CPA) firm or his or her own private practice and conducts audits and other assignments, such as reviews. Importantly, the external auditor is not an employee of the client. The external auditor prepares reports and then provides opinions as to whether or not the financial statements accurately reflect the financial conditions of the company, subject to generally accepted accounting principles (GAAP). External auditors can maintain their own practice, or they might be employed by national or regional firms.

Ethical Considerations
Internal Auditors and Their Code of Ethics
Internal auditors are employees of an organization who evaluate internal controls and other operational metrics, and then ethically report their findings to management. An internal auditor may be a Certified Internal Auditor (CIA), an accreditation granted by the Institute of Internal Auditors (IIA). The IIA defines internal auditing as “an independent, objective assurance and consulting activity designed to add value and improve an organization’s operations. It helps an organization accomplish its objectives by bringing a systematic, disciplined approach to evaluate and improve the effectiveness of risk management, control, and governance processes.” 2

Internal auditors have their own organizational code of ethics. According to the IIA, “the purpose of The Institute’s Code of Ethics is to promote an ethical culture in the profession of internal auditing.” 3 Company management relies on a disciplined and truthful approach to reporting. The internal auditor is expected to keep confidential any received information, while reporting results in an objective fashion. Management trusts internal auditors to perform their work in a competent manner and with integrity, so that the company can make the best decisions moving forward.

2 The Institute of Internal Auditors (IIA). “Code of Ethics.” n.d. https://na.theiia.org/standards-guidance/mandatory-guidance/Pages/Code-of-Ethics.aspx
3 The Institute of Internal Auditors (IIA). “Code of Ethics.” n.d. https://na.theiia.org/standards-guidance/mandatory-guidance/Pages/Code-of-Ethics.aspx

One of the issues faced by any organization is that internal control systems can be overridden and can be ineffective if not followed by management or employees. The use of internal controls in both accounting and operations can reduce the risk of fraud. In the unfortunate event that an organization is a victim of fraud, the internal controls should provide tools that can be used to identify who is responsible for the fraud and provide evidence that can be used to prosecute the individual responsible for the fraud. This chapter discusses internal controls in the context of accounting and controlling for cash in a typical business setting. These examples are applicable to the other ways in which an organization may protect its assets and protect itself against fraud.

8.2 Define and Explain Internal Controls and Their Purpose within an Organization

Internal controls are the systems used by an organization to manage risk and diminish the occurrence of fraud. The internal control structure is made up of the control environment, the accounting system, and procedures called control activities.
Several years ago, the Committee of Sponsoring Organizations (COSO), which is an independent, private-sector group whose five sponsoring organizations periodically identify and address specific accounting issues or projects, convened to address the issue of internal control deficiencies in the operations and accounting systems of organizations. They subsequently published a report that is known as COSO’s Internal Control-Integrated Framework. The five components that they determined were necessary in an effective internal control system make up the components in the internal controls triangle shown in Figure 8.3.

Here we address some of the practical aspects of internal control systems. The internal control system consists of the formal policies and procedures that do the following:

ensure assets are properly used
ensure that the accounting system is functioning properly
monitor operations of the organization to ensure maximum efficiency
ensure that assets are kept secure
ensure that employees are in compliance with corporate policies

A properly designed and functioning internal control system will not eliminate the risk of loss, but it will reduce the risk. Different organizations face different types of risk, but when internal control systems are lacking, the opportunity arises for fraud, misuse of the organization’s assets, and employee or workplace corruption. Part of an accountant’s function is to understand and assist in maintaining the internal control in the organization.

Link to Learning
See the Institute of Internal Auditors website to learn more about many of the professional functions of the internal auditor.

Internal control keeps the assets of a company safe and keeps the company from violating any laws, while fairly recording the financial activity of the company in the accounting records. Proper accounting records are used to create the financial statements that the owners use to evaluate the operations of a company, including all company and employee activities. Internal controls are more than just reviews of how items are recorded in the company’s accounting records; they also include comparing the accounting records to the actual operations of the company.

For example, a movie theater earns most of its profits from the sale of popcorn and soda at the concession stand. The prices of the items sold at the concession stand are typically high, even though the costs of popcorn and soda are low. Internal controls allow the owners to ensure that their employees do not give away the profits by giving away sodas and popcorn. If you were to go to the concession stand and ask for a cup of water, typically, the employee would give you a clear, small plastic cup called a courtesy cup. This internal control, the small plastic cup for nonpaying customers, helps align the accounting system and the theater’s operations. A movie theater does not use a system to directly account for the sale of popcorn, soda, or ice used. Instead, it accounts for the containers. A point-of-sale system compares the number of soda cups used in a shift to the number of sales recorded in the system to ensure that those numbers match. The same process accounts for popcorn buckets and other containers. Providing a courtesy cup ensures that customers drinking free water do not use the soda cups that would require a corresponding sale to appear in the point-of-sale system.
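Because the theater’s control compares containers used with sales recorded, rather than metering the popcorn and soda directly, the check itself is simple enough to sketch. The following Python is a hypothetical illustration of that end-of-shift comparison; the function name, container types, and counts are all invented for the example and do not describe any actual point-of-sale system.

```python
# Hypothetical illustration of the container-count control described above:
# compare containers used in a shift with the number of sales recorded for
# that container type. Courtesy cups for free water are tracked separately,
# so they never create a false shortage.

def check_shift(containers_used: dict, sales_recorded: dict) -> list:
    """Return the container types whose usage and recorded sales disagree."""
    exceptions = []
    for container, used in containers_used.items():
        sold = sales_recorded.get(container, 0)
        if used != sold:
            exceptions.append(f"{container}: {used} used vs. {sold} sold")
    return exceptions

# End-of-shift counts (all figures invented for the example).
containers_used = {"soda cup": 212, "popcorn bucket": 145, "courtesy cup": 38}
sales_recorded = {"soda cup": 205, "popcorn bucket": 145}

# Courtesy cups are free by design, so they are excluded from the comparison.
issues = check_shift(
    {k: v for k, v in containers_used.items() if k != "courtesy cup"},
    sales_recorded,
)
for issue in issues:
    print("Investigate:", issue)  # e.g., "soda cup: 212 used vs. 205 sold"
```

A gap flagged this way does not prove theft; it only marks the shift for follow-up, which is all a detective control of this kind is designed to do.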
The cost of the popcorn, soda, and ice will be recorded in the accounting system as an inventory item, but the internal control is the comparison of the recorded sales to the number of containers used. This is just one type of internal control. As we discuss the internal controls, we see that the internal controls are used both in accounting, to provide information for management to properly evaluate the operations of the company, and in business operations, to reduce fraud. It should be clear how important internal control is to all businesses, regardless of size. An effective internal control system allows a business to monitor its employees, but it also helps a company protect sensitive customer data. Consider the 2017 massive data breach at Equifax that compromised data of over 143 million people. With proper internal controls functioning as intended, there would have been protective measures to ensure that no unauthorized parties had access to the data. Not only would internal controls prevent outside access to the data, but proper internal controls would protect the data from corruption, damage, or misuse. Your Turn Bank Fraud in Enid, Oklahoma The retired mayor of Enid, Oklahoma, Ernst Currier, had a job as a loan officer and then as a senior vice president at Security National Bank . In his bank job, he allegedly opened 61 fraudulent loans. He used the identities of at least nine real people as well as eight fictitious people and stole about $6.2 million. 4 He was sentenced to 13 years in prison on 33 felony counts. 4 Jack Money. “Fraudulent Loans Lead to Enid Banker’s Arrest on Numerous Felony Complaints.” The Oklahoman . November 15, 2017. https://newsok.com/article/5572195/fraudulent-loans-lead-to-enid-bankers-arrest-on-numerous-felony-complaints Currier was able to circumvent one of the most important internal controls: segregation of duties. The American Institute of Certified Public Accountants (AICPA) states that segregation of duties “is based on shared responsibilities of a key process that disperses the critical functions of that process to more than one person or department. Without this separation in key processes, fraud and error risks are far less manageable.” 5 Currier used local residents’ identities and created false documents to open loans for millions of dollars and then collect the funds himself, without any oversight by any other employee. Creating these loans allowed him to walk up to the bank vault and take cash out of the bank without anyone questioning him. There was no segregation of duties for opening loans, or if there was, he was able to easily override those internal controls. 5 American Institute of Certified Public Accountants (AICPA). “Segregation of Duties.” n.d. https://www.aicpa.org/interestareas/informationtechnology/resources/value-strategy-through-segregation-of-duties.html How could internal controls have helped prevent Currier’s bank fraud in Enid, Oklahoma? Solution Simply having someone else confirm the existence of the borrower and make the payment for the loan directly to the borrower would have saved this small bank millions of dollars. Consider a bank that has to track deposits for thousands of customers. If a fire destroys the building housing the bank’s servers, how can the bank find the balances of each customer? Typically, organizations such as banks mirror their servers at several locations around the world as an internal control. 
The bank might have a main server in Tennessee but also mirror all data in real time to identical servers in Arizona, Montana, and even offshore in Iceland. With multiple copies of a server at multiple locations across the country, or even the world, in the event of disaster to one server, a backup server can take control of operations, protecting customer data and avoiding any service interruptions. Internal controls are the basic components of an internal control system , the sum of all internal controls and policies within an organization that protect assets and data. A properly designed system of internal controls aims to ensure the integrity of assets, allows for reliable accounting information and financial reporting, enhances efficiency within an organization, and provides guidelines and possible consequences for dealing with breaches. Internal controls drive many decisions and overall operational procedures within an organization. A properly designed internal control system will not prevent all loss from occurring, but it will significantly reduce the risk of loss and increase the chance of identifying the responsible party. Continuing Application Fraud Controls for Grocery Stores All businesses are concerned with internal controls over reporting and assets. For the grocery industry this concern is even greater, because profit margins on items are so small that any lost opportunity hurts profitability. How can an individual grocery store develop effective controls? Consider the two biggest items that a grocery store needs to control: food (inventory) and cash. Inventory controls are set up to stop shrinkage (theft). While it is not profitable for each aisle to be patrolled by a security guard, cameras throughout the store linked to a central location allow security staff to observe customers. More controls are placed on cash registers to prevent employees from stealing cash. Cameras at each register, cash counts at each shift change, and/or a supervisor who observes cashiers are some potential internal control methods. Grocery stores invest more resources in controlling cash because they have determined it to be the greatest opportunity for fraudulent activity. The Role of Internal Controls The accounting system is the backbone of any business entity, whether it is profit based or not. It is the responsibility of management to link the accounting system with other functional areas of the business and ensure that there is communication among employees, managers, customers, suppliers, and all other internal and external users of financial information. With a proper understanding of internal controls, management can design an internal control system that promotes a positive business environment that can most effectively serve its customers. For example, a customer enters a retail store to purchase a pair of jeans. As the cashier enters the jeans into the point-of-sale system, the following events occur internally: A sale is recorded in the company’s journal, which increases revenue on the income statement. If the transaction occurred by credit card, the bank typically transfers the funds into the store’s bank account in a timely manner. The pair of jeans is removed from the inventory of the store where the purchase was made. A new pair of jeans is ordered from the distribution center to replace what was purchased from the store’s inventory. The distribution center orders a new pair of jeans from the factory to replace its inventory. 
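The cascade of events just listed can be made concrete with a short sketch. The following Python is schematic, not any retailer’s actual system; the SKU, the price, and the account names are assumptions for illustration.

```python
# Schematic cascade triggered by one point-of-sale scan: record the sale,
# relieve store inventory, and request a one-for-one replacement from the
# distribution center.

store_inventory = {"jeans-32x32": 14}   # hypothetical SKU and on-hand count
journal = []                            # simplified journal: (account, debit, credit)
reorder_queue = []                      # replenishment requests to the distribution center

def sell_item(sku, price):
    # 1. Record the sale, increasing revenue on the income statement.
    journal.append(("Cash", price, 0.0))
    journal.append(("Sales Revenue", 0.0, price))
    # 2. Remove the item from the store's inventory.
    store_inventory[sku] -= 1
    # 3. Order a replacement so the store's stock is restored.
    reorder_queue.append(sku)

sell_item("jeans-32x32", 49.95)
print(store_inventory)   # {'jeans-32x32': 13}
print(reorder_queue)     # ['jeans-32x32']
```

Because each step leaves its own record, internal controls can later compare the journal, the physical inventory counts, and the reorder log against one another.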
Marketing professionals can monitor the trend and volume of jeans sold in a specific size over time. If an increase or decrease in sales volume of a specific size is noted, store inventory levels can be adjusted. The company can see in real time the exact inventory levels of all products in all stores at all times, and this can ensure the best customer access to products. Because many systems are linked through technology that drives decisions made by many stakeholders inside and outside of the organization, internal controls are needed to protect the integrity of the information and to ensure that it flows where it is needed. An internal control system also helps all stakeholders of an organization develop an understanding of the organization and provides assurance that all assets are being used efficiently and accurately. Environment Leading to the Sarbanes-Oxley Act Internal controls have grown in importance as a component of most business decisions, particularly as company structures have grown in complexity. Despite their importance, not all companies have given maintenance of controls top priority. Additionally, many small businesses do not have an adequate understanding of internal controls and therefore use inferior internal control systems. Many large companies have nonformalized processes, which can lead to systems that are not as efficient as they could be. The failure of the SCICAP Credit Union discussed earlier was a direct result of a small financial institution having a substandard internal control system that allowed employee theft. One of the largest corporate failures of all time was Enron, and the failure can be directly attributed to poor internal controls. Enron was one of the largest energy companies in the world in the late twentieth century. However, corrupt management attempted to hide weak financial performance by manipulating revenue recognition, valuation of assets on the balance sheet, and other financial reporting disclosures so that the company appeared to have significant growth. When this practice was uncovered, the owners of Enron stock lost $40 billion as the stock price dropped from $91 per share to less than $1 per share, as shown in Figure 8.4. 6 This failure could have been prevented had proper internal controls been in place. 6 Douglas O. Linder, ed. “Enron Historical Stock Price.” Famous Trials. n.d. https://www.famous-trials.com/images/ftrials/Enron/documents/enronstockchart.pdf For example, Enron and its accounting firm, Arthur Andersen, did not maintain an adequate degree of independence. Arthur Andersen provided a significant amount of services in both auditing and consulting, which prevented the firm from approaching the audit of Enron with a proper degree of independence. Also, among many other violations, Enron circumvented several accepted reporting requirements. As a result of the Enron failure and others that occurred during the same time frame, Congress passed the Sarbanes-Oxley Act (SOX) to regulate auditing practice, manage analysts’ conflicts of interest, strengthen corporate governance, and impose guidelines for criminal conduct as well as sanctions for violations. The act requires that internal controls be properly documented, tested, and used consistently. The intent of the act was to ensure that corporate financial statements and disclosures are accurate and reliable. It is important to note that SOX only applies to public companies. A publicly traded company is one whose stock is traded (bought and sold) on an organized stock exchange.
Smaller companies still struggle with internal control development and compliance for a variety of reasons, such as cost and lack of resources. Major Accounting Components of the Sarbanes-Oxley Act As it pertains to internal controls, the SOX requires the certification and documentation of internal controls. Specifically, the act requires that the auditor do the following:
Issue an internal control report following the evaluation of internal controls.
Limit nonaudit services, such as consulting, that are provided to a client.
Rotate who can lead the audit. The person in charge of the audit can serve no longer than seven years without a break of two years.
Additionally, the work conducted by the auditor is to be overseen by the Public Company Accounting Oversight Board (PCAOB). The PCAOB is a congressionally established, nonprofit corporation. Its creation was included in the Sarbanes-Oxley Act of 2002 to regulate conflict, control disclosures, and set sanction guidelines for any violation of regulations. The PCAOB was assigned the responsibilities of ensuring independent, accurate, and informative audit reports, monitoring the audits of securities brokers and dealers, and maintaining oversight of the accountants and accounting firms that audit publicly traded companies. Link to Learning Visit the Public Company Accounting Oversight Board (PCAOB) website to learn more about what it does. Any employee found to violate SOX standards can be subject to very harsh penalties, including fines of up to $5 million and 20 to 25 years in prison. The penalty is more severe for securities fraud (25 years) than for mail or wire fraud (20 years). The SOX is relatively long and detailed, with Section 404 having the most application to internal controls. Under Section 404, management of a company must perform annual audits to assess and document the effectiveness of all internal controls that have an impact on the financial reporting of the organization. Also, selected executives of the firm under audit must sign the audit report and attest that the audit fairly represents the financial records and conditions of the company. The financial reports and internal control system must be audited annually. The cost to comply with this act is very high, and there is debate as to how effective this regulation is. Two primary arguments have been made against the SOX requirements: complying with them is expensive, in terms of both money and workforce, and the results tend not to be conclusive. Proponents of the SOX requirements do not accept these arguments. One potential response to mandatory SOX compliance is for a company to decertify (remove) its stock for trade on the available stock exchanges. Since SOX affects publicly traded companies, decertifying its stock would eliminate the SOX compliance requirement. However, this has not proven to be a viable option, primarily because investors enjoy the protection SOX provides, especially the requirement that the companies in which they invest undergo a certified audit prepared by CPAs employed by national or regional accounting firms. Also, if a company takes its stock off of an organized stock exchange, many investors assume that the company is in financial trouble and that it wants to avoid an audit that might detect its problems. Your Turn The Growing Importance of the Report on Internal Controls Internal controls have become an important aspect of financial reporting.
As part of the financial statements, the auditor has to issue a report with an opinion on the financial statements, as well as internal controls. Use the internet and locate the annual report of a company, specifically the report on internal controls. What does this report tell the user of financial information? Solution The annual report informs the user about the financial results of the company, both in management’s discussion and in the financial statements. Part of the financial statements involves an independent auditor’s report on the integrity of the financial statements as well as the internal controls. Link to Learning Many companies have their own internal auditors on staff. The role of the internal auditor is to test and ensure that a company has proper internal controls in place, and that they are functioning. Read about how the internal audit works from I.S. Partners to learn more. 8.3 Describe Internal Controls within an Organization The use of internal controls differs significantly across organizations of different sizes. In the case of small businesses, implementation of internal controls can be a challenge, due to cost constraints, or because a small staff may mean that one manager or owner will have full control over the organization and its operations. An owner in charge of all functions has enough knowledge to keep a close eye on all aspects of the organization and can track all assets appropriately. In smaller organizations in which responsibilities are delegated, procedures need to be developed in order to ensure that assets are tracked and used properly. When an owner cannot have full oversight and control over an organization, internal control systems need to be developed. When an appropriate internal control system is in place, it is interlinked to all aspects of the entity’s operations. An appropriate internal control system links the accounting, finance, operations, human resources, marketing, and sales departments within an organization. It is important that the management team, as well as employees, recognize the importance of internal controls and their role in preventing losses, monitoring performance, and planning for the future. Elements of Internal Control A strong internal control system is based on the same consistent elements:
establishment of clear responsibilities
proper documentation
adequate insurance
separation of assets from custody
separation of duties
use of technology
Establishment of Clear Responsibilities A properly designed system of internal control clearly dictates responsibility for certain roles within an organization. When there is a clear statement of responsibility, issues that are uncovered can be easily traced and responsibility placed where it belongs. As an example, imagine that you are the manager of the Galaxy’s Best Yogurt. On any shift, you have three employees working in the store. One employee is designated as the shift supervisor who oversees the operations of the other two employees on the shift and ensures that the store is presented and functioning properly. Of the other two employees, one may be solely responsible for management of the cash register, while the other serves the customers. When only one employee has access to an individual cash register, if there is an overage or shortage of cash, it can be traced to the one employee who is in charge of the cash register. Proper Documentation An effective internal control system maintains proper documentation, including backups, to trace all transactions.
The documentation can consist of paper copies or of computer-generated documents stored, for example, on flash drives or in the cloud. Given the possibility of some type of natural (tornado or flood) or man-made (arson) disaster, even the most basic of businesses should create backup copies of documentation that are stored off-site. In addition, any documentation generated by daily operations should be managed according to internal controls. For example, when the Galaxy’s Best Yogurt closes each day, one employee should close out and reconcile the cash drawer using prenumbered forms in pen to ensure that no forms can be altered or changed by another employee who may have access to the cash. In case of an error, the employee responsible for making the change should initial any changes on the form. If there are special orders for cakes or other products, the order forms should be prenumbered. The use of prenumbered documents provides assurance that all sales are recorded. If a form is not prenumbered, an order can be prepared, and the employee can then take the money without ringing the order into the cash register, leaving no record of the sale. Adequate Insurance Insurance may be a significant cost to an organization (especially liability coverage), but it is necessary. With adequate insurance on an asset, if it is lost or destroyed, an outside party will reimburse the company for the loss. If assets are lost to fraud or theft, an insurance company will investigate the loss and will press criminal charges against any employee found to be involved. Very often, the employer will be hesitant to pursue criminal charges against an employee due to the risk of lawsuit or bad publicity. For example, a terminated employee might claim that the termination was age related and sue the company. Also, there might be a situation where the company experienced a loss, such as theft, and it does not want to let the general public know that there are potential deficiencies in its security system. If the insurance company presses charges on behalf of the company, this protects the organization and also acts as a deterrent if employees know that the insurance company will always prosecute theft. For example, suppose the manager of the Galaxy’s Best Yogurt stole $10,000 cash over a period of two years. The owner of the yogurt store will most likely file an insurance claim to recover the $10,000 that was stolen. With proper insurance, the insurance company will reimburse the yogurt store for the money but then has the right to press charges and recover its losses from the employee who was caught stealing. The store owner will have no control over the insurance company’s efforts to recover the $10,000 and will likely be forced to fire the employee in order to keep the insurance policy. Separation of Assets from Custody Separation of assets from custody ensures that the person who controls an asset cannot also keep the accounting records. This action prevents one employee from taking income from the business and entering a transaction on the accounting records to cover it up. For example, one person within an organization may open an envelope that contains a check, but a different person would enter the check into the organization’s accounting system.
In the case of the Galaxy’s Best Yogurt, one employee may count the money in the cash register drawer at the end of the night and reconcile it with the sales, but a different employee would recount the money, prepare the bank deposit, and ensure that the deposit is made at the bank. Separation of Duties A properly designed internal control system assures that at least two (if not more) people are involved with most transactions. The purpose of separating duties is to ensure that there is a check and balance in place. One common internal control is to have one employee place an inventory order and a different employee receive the order as it is delivered. For example, assume that an employee at the Galaxy’s Best Yogurt places an inventory order. In addition to the needed inventory, the employee orders an extra box of piecrusts. If that employee also receives the order, he or she can take the piecrusts home, and the store will still pay for them. Check signing is another important aspect of separation of duties. Typically, the person who writes a check should not also sign the check. Additionally, the person who places supply orders should not write checks to pay the bills for these supplies. Use of Technology Technology has made the process of internal control simpler and more approachable to all businesses. There are two reasons that the use of technology has become more prevalent. The first is the development of more user-friendly equipment, and the second is the reduction in costs of security resources. In the past, if a company wanted a security system, it often had to go to an outside security firm, and the costs of providing and monitoring the system were prohibitive for many small businesses. Currently, security systems have become relatively inexpensive, and not only do many small businesses now have them, they are now commonly used by residential homeowners. In terms of the application of security resources, some businesses use surveillance cameras focused on key areas of the organization, such as the cash register and areas where a majority of work is performed. Technology also allows businesses to use password protection on their data or systems so that employees cannot access systems and change data without authorization. Businesses may also track all employee activities within an information technology system. Even if a business uses all of the elements of a strong internal control system, the system is only as good as the oversight. As responsibilities, staffing, and even technology change, internal control systems need to be constantly reviewed and refined. Internal control reviews are typically not conducted by inside management but by internal auditors who provide an impartial perspective of where controls are working and where they can be improved. Purposes of Internal Controls within a Governmental Entity Internal controls apply not only to public and private corporations but also to governmental entities. Often, a government controls one of the most important assets of modern times: data. Unprotected financial information, including tax data, social security, and governmental identifications, could lead to identity theft and could even provide rogue nations access to data that could compromise the security of our country. Governmental entities require their contractors to have proper internal controls and to maintain proper codes of ethics. 
Ethical Considerations Ethics in Governmental Contractors Government entities are not the only organizations required to implement proper internal controls and codes of ethics. As part of the business relationship between different organizations, governmental agencies also require contractors and their subcontractors to implement internal controls to ensure compliance with proper ethical conduct. The Federal Acquisition Regulation (FAR) Council outlines regulations under FAR 3.10, 7 which require governmental contractors and their subcontractors to implement a written “Contractor Code of Business Ethics and Conduct,” and the proper internal controls to ensure that the code of ethics is followed. An employee training program, posting of agency inspector general hotline posters, and an internal control system to promote compliance with the established ethics code are also required. Contractors must disclose violations of federal criminal law involving fraud, conflicts of interest, bribery, or gratuity violations; violations of the civil False Claims Act; and significant overpayments on a contract not resulting from contract financing payments. 8 Such internal controls help ensure that an organization and its business relationships are properly managed. 7 Federal Acquisition Regulation. “Subpart 3.10: Contractor Code of Business Ethics and Conduct.” January 22, 2019. https://www.acquisition.gov/content/subpart-310-contractor-code-business-ethics-and-conduct 8 National Contract Management Association. https://www.ncmahq.org/ To recognize the significant need for internal controls within the government, and to ensure and enforce compliance, the US Government Accountability Office (GAO) has its own standards for internal control within the federal government. All government agencies are subject to governance under these standards, and one of the objectives of the GAO is to provide audits on agencies to ensure that proper controls are in place and within compliance. Standards for internal control within the federal government are located within a publication referred to as the “Green Book,” or Standards for Internal Control in the Federal Government. Link to Learning Government organizations have their own needs for internal controls. Read the GAO “Green Book” to learn more about these internal control procedures. Purposes of Internal Controls within a Not-for-Profit Not-for-profit (NFP) organizations have the same needs for internal control as many traditional for-profit entities. At the same time, there are unique challenges that these entities face. Based on the objectives and charters of NFP organizations, in many cases, those who run the organizations are volunteers. As volunteers, leaders of NFPs may not have the same training background and qualifications as those in a similar for-profit position. Additionally, a volunteer leader often splits time between the organization and a full-time career. For these reasons, internal controls in an NFP often are not properly implemented, and there may be a greater risk of control lapse. A control lapse occurs when there is a deviation from standard control protocol that leads to a failure in the internal control and/or fraud prevention processes or systems. A failure occurs in a situation when results did not achieve predetermined goals or meet expectations. Not-for-profit organizations have an extra category of finances that need protection, in addition to their assets. They need to ensure that incoming donations are used as intended. 
For example, many colleges and universities are classified as NFP organizations, and donations are a significant source of revenue. However, donations are often directed to a specific source. For example, suppose an alumnus of Alpha University wants to make a $1,000,000 donation to the business school for undergraduate student scholarships. Internal controls would track that donation to ensure it paid for scholarships for undergraduate students in the business school and was not used for any other purpose at the school, in order to avoid potential legal issues. Identify and Apply Principles of Internal Controls to the Receipt and Disbursement of Cash Cash can be a major part of many business operations. Imagine a Las Vegas casino, or a large grocery store, such as Publix Super Markets , Wegmans Food Markets , or ShopRite ; in any of these settings, millions of dollars in cash can change hands within a matter of minutes, and it can pass through the hands of thousands of employees. Internal controls ensure that all of this cash reaches the bank account of the business entity. The first control is monitoring. Not only are cameras strategically placed throughout the store to prevent shoplifting and crime by customers, but cameras are also located over all areas where cash changes hands, such as over every cash register, or in a casino over every gaming table. These cameras are constantly monitored, often offsite at a central location by personnel who have no relationship with the employees who handle the cash, and all footage is recorded. This close monitoring makes it more difficult for misuse of cash to occur. Additionally, access to cash is tightly controlled. Within a grocery store, each employee has his or her own cash drawer with a set amount of cash. At any time, any employee can reconcile the sales recorded within the system to the cash balance that should be in the drawer. If access to the drawer is restricted to one employee, that employee is responsible when cash is missing. If one specific employee is consistently short on cash, the company can investigate and monitor the employee closely to determine if the shortages are due to theft or if they are accidental, such as if they resulted from errors in counting change. Within a casino, each time a transaction occurs and when there is a shift change for the dealers, cash is counted in real time. Casino employees dispersed on the gaming floor are constantly monitoring play, in addition to those monitoring cameras behind the scenes. Technology plays a major role in the maintenance of internal controls, but other principles are also important. If an employee makes a mistake involving cash, such as making an error in a transaction on a cash register, the employee who made the mistake typically cannot correct the mistake. In most cases, a manager must review the mistake and clear it before any adjustments are made. These changes are logged to ensure that managers are not clearing mistakes for specific employees in a pattern that could signify collusion , which is considered to be a private cooperation or agreement primarily for a deceitful, illegal, or immoral cause or purpose. Duties are also separated to count cash on hand and ensure records are accurate. Often, at the end of the shift, a manager or employee other than the person responsible for the cash is responsible for counting cash on hand within the cash drawer. 
For example, at a grocery store, it is common for an employee who has been checking out customers for a shift to then count the money in the register and prepare a document providing the counts for the shift. This employee then submits the counted tray to a supervisor, such as a head cashier, who then repeats the counting and documentation process. The two counts should be equal. If there is a discrepancy, it should immediately be investigated. If the store accepts checks and credit/debit card payments, these methods of payment are also incorporated into the verification process. In many cases, the sales have also been documented either by a paper tape or by a computerized system. The ultimate goal is to determine if the cash, checks, and credit/debit card transactions equal the amount of sales for the shift. For example, if the shift’s register had sales of $800, then the documentation of counted cash and checks, plus the credit/debit card documentation, should also add up to $800. Despite increased use of credit cards by consumers, our economy is still driven by cash. As cash plays a very important role in society, efforts must be taken to control it and ensure that it reaches the proper areas within an organization. The cost of developing, maintaining, and monitoring internal controls is significant but necessary. Considering the millions of dollars of cash that can pass through the hands of employees on any given day, the high cost can be well worth it to protect the flow of cash within an organization. Link to Learning Internal controls are as important for not-for-profit businesses as they are within the for-profit sector. See this guide for not-for-profit businesses to set up and maintain proper internal control systems provided by the National Council of Nonprofits. Think It Through Hiring Approved Vendors One internal control that companies often have is an official “approved vendor” list for purchases. Why is it important to have an approved vendor list? 8.4 Define the Purpose and Use of a Petty Cash Fund, and Prepare Petty Cash Journal Entries As we have discussed, one of the hardest assets to control within any organization is cash. One way to control cash is for an organization to require that all payments be made by check. However, there are situations in which it is not practical to use a check. For example, imagine that the Galaxy’s Best Yogurt runs out of milk one evening. It is not possible to operate without milk, and the normal shipment does not come from the supplier for another 48 hours. To maintain operations, it becomes necessary to go to the grocery store across the street and purchase three gallons of milk. It is not efficient, in either time or cost, to write a check for this small purchase, so companies set up a petty cash fund, which is a predetermined amount of cash held on hand to be used to make payments for small day-to-day purchases. A petty cash fund is a type of imprest account, which means that it contains a fixed amount of cash that is replaced as it is spent in order to maintain a set balance. To maintain internal controls, managers can use a petty cash receipt (Figure 8.5), which tracks the use of the cash and requires a signature from the manager. As cash is spent from a petty cash fund, it is replaced with a receipt of the purchase. At all times, the balance in the petty cash box should be equal to the cash in the box plus the receipts showing purchases.
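This imprest identity — cash on hand plus signed receipts must always equal the fund’s fixed balance — is easy to check mechanically. The following Python sketch is illustrative only; the fund size and receipt amounts are assumptions that anticipate the Galaxy’s Best Yogurt example discussed next.

```python
# Imprest check: cash remaining plus receipts must equal the fixed fund balance.

FUND_BALANCE = 75.00  # the fixed (imprest) amount of the petty cash fund

def check_box(cash_on_hand, receipts):
    """Return the overage (+) or shortage (-) relative to the fixed balance."""
    return round(cash_on_hand + sum(receipts) - FUND_BALANCE, 2)

# A box holding $30.00 in cash and $45.00 in signed receipts balances exactly:
print(check_box(30.00, [10.00, 20.00, 15.00]))   # 0.0
# The same receipts with only $25.00 in cash reveal a $5.00 shortage:
print(check_box(25.00, [10.00, 20.00, 15.00]))   # -5.0
```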
For example, the Galaxy’s Best Yogurt maintains a petty cash box with a stated balance of $75 at all times. Upon review of the box, the balance is counted in the following way: the cash remaining in the box is added to the amounts on the signed receipts, and together they should equal the stated $75 balance. Because there may not always be a manager with check signing privileges available to sign a check for unexpected expenses, a petty cash account allows employees to make small and necessary purchases to support the function of a business when it is not practical to go through the formal expense process. In all cases, the amount of the purchase using petty cash would be considered to not be material in nature. Recall that materiality means that the dollar amount in question would have a significant impact on financial results or influence investor decisions. Demonstration of Typical Petty Cash Journal Entries Petty cash accounts are managed through a series of journal entries. Entries are needed to (1) establish the fund, (2) increase or decrease the balance of the fund (replenish the fund as cash is used), and (3) adjust for overages and shortages of cash. Consider the following example. The Galaxy’s Best Yogurt establishes a petty cash fund on July 1 by cashing a check for $75 from its checking account and placing the cash in the petty cash box. At this point, the petty cash box has $75 to be used for small expenses with the authorization of the responsible manager. The journal entry to establish the petty cash fund is a debit to Petty Cash for $75 and a credit to Cash for $75. As this petty cash fund is established, the account titled “Petty Cash” is created; this is an asset on the balance sheet of many small businesses. In this case, the cash account, which includes checking accounts, is decreased, while the funds are moved to the petty cash account. One asset is increasing, while another asset is decreasing by the same amount. Since the petty cash account is an imprest account, this balance will never change and will remain on the balance sheet at $75, unless management elects to change the petty cash balance. Throughout the month, several payments are made from the petty cash account of the Galaxy’s Best Yogurt. Assume the following activities: during July, petty cash is used to buy a postage stamp, milk, and window cleaner, with receipts totaling $45. At the end of July, in the petty cash box there should be a receipt for the postage stamp purchase, a receipt for the milk, a receipt for the window cleaner, and the remaining cash. The employee in charge of the petty cash box should sign each receipt when the purchase is made. The total amount of purchases from the receipts ($45), plus the remaining cash in the box ($30), should total $75. As the receipts are reviewed, the box must be replenished for what was spent during the month. The journal entry to replenish the petty cash account is a debit to each expense account listed on the receipts, for a total of $45, and a credit to Cash for $45. Typically, petty cash accounts are reimbursed at a fixed time period. Many small businesses will do this monthly, which ensures that the expenses are recognized within the proper accounting period. In the event that all of the cash in the account is used before the end of the established time period, it can be replenished in the same way at any time more cash is needed. If the petty cash account often needs to be replenished before the end of the accounting period, management may decide to increase the cash balance in the account. If, for example, management of the Galaxy’s Best Yogurt decides to increase the petty cash balance to $100 from the current balance of $75, the journal entry on August 1 is a debit to Petty Cash for $25 and a credit to Cash for $25, raising the fund’s fixed balance to $100.
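The replenishment mechanics, including the treatment of shortages and overages discussed next, can also be sketched in code. This is a hypothetical illustration: the expense account names and the split of the $45 in receipts are assumptions, not figures from the chapter.

```python
# Assemble a petty cash replenishment entry from the receipts in the box.
# Journal lines are (account, debit, credit) tuples; debits must equal credits.

FUND_BALANCE = 75.00

def replenishment_entry(receipts, cash_counted):
    lines = [(account, amount, 0.0) for account, amount in receipts.items()]
    difference = FUND_BALANCE - cash_counted - sum(receipts.values())
    if difference > 0:        # box is short: debit Cash Over and Short
        lines.append(("Cash Over and Short", difference, 0.0))
    elif difference < 0:      # box is over: credit Cash Over and Short
        lines.append(("Cash Over and Short", 0.0, -difference))
    lines.append(("Cash", 0.0, FUND_BALANCE - cash_counted))
    return lines

# July receipts totaling $45 with $30 of cash counted: a clean $45 replenishment.
receipts = {"Postage Expense": 10.00, "Supplies Expense": 20.00, "Maintenance Expense": 15.00}
for line in replenishment_entry(receipts, cash_counted=30.00):
    print(line)
```

In every case the debits equal the credits, and the Petty Cash account itself is untouched; only a deliberate change in the fund’s fixed balance, like the August 1 increase above, is posted to Petty Cash.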
If management at a later date decides to decrease the balance in the petty cash account, the August 1 entry would be reversed, with Cash being debited and Petty Cash being credited. Occasionally, errors may occur that affect the balance of the petty cash account. This may be the result of an employee not getting a receipt or getting back incorrect change from the store where the purchase was made. Such errors create a cash overage or shortage. Consider Galaxy’s expenses for July. During the month, $45 was spent on expenses. If the balance in the petty cash account is supposed to be $75, then the petty cash box should contain $45 in signed receipts and $30 in cash. Assume that when the box is counted, there are $45 in receipts and $25 in cash. In this case, the petty cash balance is $70, when it should be $75. This creates a $5 shortage that needs to be replaced from the checking account. The entry to record the cash shortage is a debit to the expense accounts for the $45 of receipts, a debit to Cash Over and Short for $5, and a credit to Cash for $50. When there is a shortage of cash, we record the shortage as a “debit,” and this has the same effect as an expense. If we have an overage of cash, we record the overage as a credit, and this has the same impact as if we are recording revenue. If there were a cash overage, the cash over and short account would be credited, and less cash would be needed to replenish the fund. In this case, the expense balance decreases, and the year-end balance is the net balance from all overages and shortages during the year. If a petty cash account is consistently short, this may be a warning sign that there is not a proper control of the account, and management may want to consider additional controls to better monitor petty cash. Think It Through Cash versus Debit Card A petty cash system in some businesses may be replaced by the use of a prepaid credit card (or debit card) on site. What would be the pros and cons of actually maintaining cash on premises for the petty cash system, versus a rechargeable debit card that employees may use for petty cash purposes? Which option would you select for your petty cash account if you were the owner of a small business? Link to Learning See this article on tips for companies to establish and manage petty cash systems to learn more. 8.5 Discuss Management Responsibilities for Maintaining Internal Controls within an Organization Because internal controls protect the integrity of financial statements, large companies have become highly regulated in their implementation. In addition to Section 404 of the SOX, which addresses reporting and testing requirements for internal controls, there are other sections of the act that govern management responsibility for internal controls. Although the auditor reviews internal controls and advises on the improvement of controls, ultimate responsibility for the controls rests with the management of the company.
Under SOX Section 302, in order to provide additional assurance to the financial markets, the chief executive officer (CEO) , who is the executive within a company with the highest-ranking title and the overall responsibility for management of the company, and the chief financial officer (CFO) , who is the corporation officer who reports to the CEO and oversees all of the accounting and finance concerns of a company, must personally certify that (1) they have reviewed the internal control report provided by the auditor; (2) the report does not contain any inaccurate information; and (3) they believe that all financial information fairly states the financial conditions, income, and cash flows of the entity. The sign-off under Section 302 makes the CEO and CFO personally responsible for financial reporting as well as internal control structure. While the executive sign-offs seem like they would be just a formality, they actually have a great deal of power in court cases. Prior to SOX, when an executive swore in court that he or she was not aware of the occurrence of some type of malfeasance, either committed by his or her firm or against his or her firm, the executive would claim a lack of knowledge of specific circumstances. The typical response was, “I can’t be expected to know everything.” In fact, in virtually all of the trials involving potential malfeasance, this claim was made and often was successful in a not-guilty verdict. The initial response to the new SOX requirements by many people was that there was already sufficient affirmation by the CEO and CFO and other executives to the accuracy and fairness of the financial statements and that the SOX requirements were unnecessary. However, it was determined that the SOX requirements provided a degree of legal responsibility that previously might have been assumed but not actually stated. Even if a company is not public and not governed by the SOX, it is important to note that the tone is set at the managerial level, called the tone at the top . If management respects the internal control system and emphasizes the importance of maintaining proper internal controls, the rest of the staff will follow and create a cohesive environment. A proper tone at the top demonstrates management’s commitment toward openness, honesty, integrity, and ethical behavior. Your Turn Defending the Sarbanes-Oxley Act You are having a conversation with the CFO of a public company. Imagine that the CFO complains that there is no benefit to Sections 302 and 404 of the Sarbanes-Oxley Act relative to the cost, as “our company has always valued internal controls before this regulation and never had an issue.” He believes that this regulation is an unnecessary overstep. How would you respond and defend the need for Sections 302 and 404 of the Sarbanes-Oxley Act? Solution I would tell the CFO the following: Everyone says that they have always valued internal controls, even those who did not. Better security for the public is worth the cost. The cost of compliance is more than recovered in the company’s market price for its stock. Think It Through Personal Internal Controls Technology plays a very important role in internal controls. One recent significant security breach through technology was the Equifax breach. What is an internal control that you can personally implement to protect your personal data as a result of this breach, or any other future breach? 
8.6 Define the Purpose of a Bank Reconciliation, and Prepare a Bank Reconciliation and Its Associated Journal Entries The bank is a very important partner to all businesses. Not only does the bank provide basic checking services, but it also processes credit card transactions, keeps cash safe, and may finance loans when needed. Bank accounts for businesses can involve thousands of transactions per month. Due to the number of ongoing transactions, an organization’s book balance for its checking account rarely is the same as the balance that the bank records reflect for the entity at any given point. These timing differences occur because some transactions are known to the organization before the bank, while others are known to the bank before the company. For example, if a company writes a check that has not cleared yet, the company would be aware of the transaction before the bank is. Similarly, the bank might have received funds on the company’s behalf and recorded them in the bank’s records for the company before the organization is aware of the deposit. With the large volume of transactions that impact a bank account, it becomes necessary to have an internal control system in place to assure that all cash transactions are properly recorded within the bank account, as well as on the ledger of the business. The bank reconciliation is the internal financial report that explains and documents any differences that may exist between the balance of a checking account as reflected by the bank’s records (bank balance) for a company and the company’s accounting records (company balance). The bank reconciliation is an internal document prepared by the company that owns the checking account. The transactions with timing differences are used to adjust and reconcile both the bank and company balances; after the bank reconciliation is prepared accurately, both the bank balance and the company balance will be the same amount. Note that the transactions the company is aware of have already been recorded (journalized) in its records. However, the transactions that the bank is aware of but the company is not must be journalized in the entity’s records. Fundamentals of the Bank Reconciliation Procedure The balance on a bank statement can differ from a company’s financial records due to one or more of the following circumstances:
An outstanding check: a check that was written and deducted from the financial records of the company but has not been cashed by the recipient, so the amount has not been removed from the bank account.
A deposit in transit: a deposit that was made by the business and recorded on its books but has not yet been recorded by the bank.
Deductions for a bank service fee: fees often charged by banks each month for management of the bank account. These may be fixed maintenance fees, per-check fees, or a fee for a check that was written for an amount greater than the balance in the checking account, called a nonsufficient funds (NSF) check. These fees are deducted by the bank from the account but would not appear on the financial records.
Errors initiated by either the client or the bank: for example, the client might record a check incorrectly in its records, for either a greater or lesser amount than was written. Also, the bank might report a check either for an incorrect amount or in the wrong client’s checking account.
Additions such as interest or funds collected by the bank for the client: interest is added to the bank account as earned but is not reported on the financial records. These additions might also include funds collected by the bank for the client.
Demonstration of a Bank Reconciliation A bank reconciliation is structured to include the information shown in Figure 8.6. Assume the following circumstances for Feeter Plumbing Company, a small business located in Northern Ohio. After all posting is up to date, at the end of the day on July 31, the book balance shows $32,760, and the bank statement balance shows $77,040. Checks 5523 for $9,620 and 6547 for $10,000 are outstanding. Check 5386 for $2,000 was removed from the bank account correctly but was recorded in the accounting records for $1,760. This was in payment of dues. This transaction resulted in an error of $240 that must be deducted from the company’s book balance. The July 31 night deposit of $34,300 was delivered to the bank after hours. As a result, the deposit is not on the bank statement, but it is on the financial records. Upon review of the bank statement, an error is uncovered: a check for $320 was removed from Feeter’s account that should have been removed from the account of another customer of the bank. In the bank statement is a note stating that the bank collected $60,000 in charges (payments) from the credit card company as well as $1,800 in interest. This transaction is on the bank statement but not in the company’s financial records. The bank notified Feeter that a $2,200 check was returned unpaid by customer Berson due to insufficient funds in Berson’s account. This check return is reflected on the bank statement but not in the records of Feeter. Bank service charges for the month are $80. They have not been recorded on Feeter’s records. Each item would be recorded on the bank reconciliation as follows:
Bank statement balance, July 31: $77,040
Add: deposit in transit, $34,300
Add: check charged to Feeter’s account in error, $320
Deduct: outstanding checks 5523 and 6547, $19,620
Adjusted bank balance: $92,040
Book balance, July 31: $32,760
Add: credit card collections received by the bank, $60,000
Add: interest collected by the bank, $1,800
Deduct: NSF check from Berson, $2,200
Deduct: error in recording check 5386, $240
Deduct: bank service charges, $80
Adjusted book balance: $92,040
Because both adjusted balances equal $92,040, the account reconciles. One important trait of the bank reconciliation is that it identifies transactions that have not been recorded by the company that are supposed to be recorded. Journal entries are required to adjust the book balance to the correct balance. In the case of Feeter, the first entry records the collection from the credit card company, as well as the interest collected: debit Cash for $61,800, credit Accounts Receivable for $60,000, and credit Interest Revenue for $1,800. The second entry adjusts the books for the check that was returned from Berson: debit Accounts Receivable for $2,200 and credit Cash for $2,200. The third entry adjusts for the recording error on check 5386: debit Dues Expense for $240 and credit Cash for $240. The final entry records the bank service charges that are deducted by the bank but have not been recorded on the records: debit Bank Service Charges Expense for $80 and credit Cash for $80. These standard entries bring the company’s records into agreement with the bank’s records and update Feeter’s general ledger cash account to reflect the adjustments made by the bank.
Financial statement fraud may take many different forms, but it is generally called cooking the books. This may occur for many reasons. A common reason to cook the books is to create a false set of a company’s books used to convince investors or lenders to provide money to the company. Investors and lenders rely on a properly prepared set of financial statements in making their decision to provide the company with money. Another reason to misstate a set of financial statements is to hide corporate looting such as excessive retirement perks of top executives, unpaid loans to top executives, improper stock options, and any other wrongful financial action. Yet another reason to misreport a company’s financial data is to drive the stock price higher. Internal controls assist the accountant in locating and identifying situations in which the management of a company wants to mislead the investors or lenders. The financial accountant or members of management who set out to cook the books are intentionally attempting to deceive the user of the financial statements. The actions of upper management are being concealed, and in most cases, the entire financial position of the company is being purposely misreported. Regardless of the reason for misstating the true condition of a company’s financial position, doing so misleads any person using the financial statements of a company to evaluate the company and its operations. How Companies Cook the Books to Misrepresent Their Financial Condition One of the most common ways companies cook the books is by manipulating revenue accounts or accounts receivable. Proper revenue recognition involves accounting for revenue when the company has met its obligation on a contract. Financial statement fraud involves early revenue recognition, or recognizing revenue that does not exist, often combined with manipulation of the receivable accounts to support the false revenue reporting. HealthSouth used a combination of false revenue accounts and misstated accounts receivable in a direct manipulation of the revenue accounts to commit a multibillion-dollar fraud between 1996 and 2002. Several chief financial officers and other company officials went to prison as a result. 9 9 Melinda Dickinson. “Former HealthSouth Boss Found Liable for $2.9 Billion.” Reuters. June 18, 2009. https://www.reuters.com/article/us-healthsouth-scrushy/former-healthsouth-boss-found-liable-for-2-9-billion-idUSTRE55H4IP20090618 Concepts In Practice Internal Controls at HealthSouth The fraud at HealthSouth was possible because some of the internal controls were ignored. The company failed to maintain standard segregation of duties and allowed management override of internal controls. The fraud required the collusion of the entire accounting department, concealing hundreds of thousands of fraudulent transactions through the use of falsified documents and fraudulent accounting schemes that included revenue recognition irregularities (such as recording accounts receivable as revenue before collection), misclassification of expenses and asset acquisitions, and fraudulent merger and acquisition accounting. The result was billions of dollars of fraud. Simply implementing and following proper internal control procedures would have stopped this massive fraud. 10 10 David McCann. “Two CFOs Tell a Tale of Fraud at HealthSouth.” CFO.com. March 27, 2017. http://ww2.cfo.com/fraud/2017/03/two-cfos-tell-tale-fraud-healthsouth/ Many companies may go to great lengths to perpetrate financial statement fraud.
Besides the direct manipulation of revenue accounts, there are many other ways fraudulent companies manipulate their financial statements. Companies with large inventory balances can misrepresent their inventory account balances and use this misrepresentation to overstate the amount of their assets to get larger loans, or use the increased balance to entice investors through claims of exaggerated revenues. The inventory accounts can also be used to overstate income. Such inventory manipulations can include the following:
Channel stuffing: encouraging customers to buy products under favorable terms. These terms include allowing the customer to return or even not pick up goods sold, without a corresponding reserve to account for the returns.
Sham sales: sales that have not occurred and for which there are no customers.
Bill-and-hold sales: recognition of income before the title transfers to the buyer, while holding the inventory in the seller’s warehouse.
Improper cutoff: recording sales of inventory in the wrong period and before the inventory is sold; this is a type of early revenue recognition.
Round-tripping: selling items with the promise to buy the items back, usually on credit, so there is no economic benefit.
These are just a few examples of the ways an organization might manipulate inventory or sales to create false revenue. One of the most famous financial statement frauds involved Enron, as discussed previously. Enron started as an interstate pipeline company, but then branched out into many different ventures. In addition to the internal control deficiencies discussed earlier, the financial statement fraud started when the company began to attempt to hide its losses. The fraudulent financial reporting schemes included building assets and immediately taking as income any projected profits on construction, and hiding the losses from operating assets in off-balance-sheet vehicles called special purpose entities, which are separate, often complicated legal entities that are often used to absorb risk for a corporation. Enron moved assets that were losing money off of its books and onto the books of the special purpose entities. This way, Enron could hide its bad business decisions and continue to report a profit, even though its assets were losing money. Enron’s financial statement fraud created false revenues with the misstatement of assets and liability balances. This was further supported by inadequate balance sheet footnotes and the related disclosures. In fact, disclosure requirements were later strengthened as a direct result of these special purpose entities. Sarbanes-Oxley Act Compliance Today The Enron scandal and related financial statement frauds led investors to require that public companies maintain better internal controls and develop stronger governance systems, and that auditors perform a better job of auditing public companies. These requirements, in turn, led to the regulations developed under SOX that were intended to protect the investing public. Since SOX was first passed, it has adapted to changing technology and now requires public companies to protect their accounting and financial data from hackers and other outside or internal forces through stronger internal controls designed to protect the data. The Journal of Accountancy supported these new requirements and reported that the results of SOX have been positive for both companies and investors.
Sarbanes-Oxley Act Compliance Today

The Enron scandal and related financial statement frauds led investors to require that public companies maintain better internal controls, develop stronger governance systems, and that auditors do a better job of auditing public companies. These requirements, in turn, led to the regulations developed under SOX that were intended to protect the investing public. Since SOX was first passed, it has adapted to changing technology and now requires public companies to protect their accounting and financial data from hackers and other outside or internal forces through stronger internal controls designed to protect the data. The Journal of Accountancy supported these new requirements and reported that the results of SOX have been positive for both companies and investors.

As discussed in the Journal of Accountancy article, 11 three conditions are increasingly affecting compliance with SOX requirements:

PCAOB requirements. The PCAOB has increased the requirements for inspection reports, with a greater emphasis on deficiency evaluation.
Revenue recognition. The Financial Accounting Standards Board has introduced a new standard for revenue recognition, which has required companies to update their control documentation.
Cybersecurity. Cybersecurity is the practice of protecting software, hardware, and data from digital attacks. As would be expected in today's environment, the number of recent cybersecurity disclosures has grown significantly.

11 Ken Tysiac. "Companies Spending More Time on SOX Compliance." Journal of Accountancy. June 12, 2017. https://www.journalofaccountancy.com/news/2017/jun/companies-spending-more-time-on-sox-compliance-201716857.html

Under current guidelines, SOX no longer applies only to the financial component of reporting and internal control; it now applies to information technology (IT) activities as well. A major change under the SOX guidelines involves the storage of a company's electronic records. While the act did not require a particular storage method, it did provide guidance on which records are to be stored and for how long. SOX now requires that all business records, electronic records, and electronic messages be stored for at least five years. The penalties for noncompliance include fines, imprisonment, or both.
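As a rough illustration of that five-year retention rule, the sketch below checks a small set of records against a retention window; the file names, dates, and the 365-day-year approximation are all invented for the example.

```python
# Minimal retention check under an assumed five-year rule.
from datetime import date, timedelta

RETENTION = timedelta(days=5 * 365)  # "at least five years," ignoring leap days

# (record name, creation date) -- invented examples
records = [
    ("invoice-2017-0042.pdf", date(2017, 3, 1)),
    ("email-archive-2023.zip", date(2023, 6, 15)),
]

today = date(2024, 1, 1)  # fixed date so the example is reproducible
for name, created in records:
    ok_to_purge = today - created >= RETENTION
    print(f"{name}: {'retention period satisfied' if ok_to_purge else 'must retain'}")
```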
Biology

Chapter Outline
1.1 The Science of Biology
1.2 Themes and Concepts of Biology

Introduction

Viewed from space, Earth offers no clues about the diversity of life forms that reside there. The first forms of life on Earth are thought to have been microorganisms that existed for billions of years in the ocean before plants and animals appeared. The mammals, birds, and flowers so familiar to us are all relatively recent, originating 130 to 200 million years ago. Humans have inhabited this planet for only the last 2.5 million years, and only in the last 200,000 years have humans started looking like we do today.
[ { "answer": { "ans_choice": 1, "ans_text": "microorganisms" }, "bloom": null, "hl_context": "Chapter Outline 1.1 The Science of Biology 1.2 Themes and Concepts of Biology Introduction Viewed from space , Earth offers no clues about the diversity of life forms that reside there . <hl> The first forms of life on Earth are thought to have been microorganisms that existed for billions of years in the ocean before plants and animals appeared . <hl> The mammals , birds , and flowers so familiar to us are all relatively recent , originating 130 to 200 million years ago . Humans have inhabited this planet for only the last 2.5 million years , and only in the last 200,000 years have humans started looking like we do today .", "hl_sentences": "The first forms of life on Earth are thought to have been microorganisms that existed for billions of years in the ocean before plants and animals appeared .", "question": { "cloze_format": "The first forms of life on Earth were ________.", "normal_format": "What were the first forms of life on Earth?", "question_choices": [ "plants", "microorganisms", "birds", "dinosaurs" ], "question_id": "fs-id2197874", "question_text": "The first forms of life on Earth were ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "hypothesis" }, "bloom": null, "hl_context": "<hl> Recall that a hypothesis is a suggested explanation that can be tested . <hl> To solve a problem , several hypotheses may be proposed . For example , one hypothesis might be , “ The classroom is warm because no one turned on the air conditioning . ” But there could be other responses to the question , and therefore other hypotheses may be proposed . A second hypothesis might be , “ The classroom is warm because there is a power failure , and so the air conditioning doesn ’ t work . ” The steps of the scientific method will be examined in detail later , but one of the most important aspects of this method is the testing of hypotheses by means of repeatable experiments . <hl> A hypothesis is a suggested explanation for an event , which can be tested . <hl> Although using the scientific method is inherent to science , it is inadequate in determining what science is . This is because it is relatively easy to apply the scientific method to disciplines such as physics and chemistry , but when it comes to disciplines like archaeology , psychology , and geology , the scientific method becomes less applicable as it becomes more difficult to repeat experiments .", "hl_sentences": "Recall that a hypothesis is a suggested explanation that can be tested . A hypothesis is a suggested explanation for an event , which can be tested .", "question": { "cloze_format": "A suggested and testable explanation for an event is called a ________.", "normal_format": "What is a suggested and testable explanation for an event called?", "question_choices": [ "hypothesis", "variable", "theory", "control" ], "question_id": "fs-id1260889", "question_text": "A suggested and testable explanation for an event is called a ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "computer science" }, "bloom": "2", "hl_context": "<hl> There is no complete agreement when it comes to defining what the natural sciences include , however . <hl> <hl> For some experts , the natural sciences are astronomy , biology , chemistry , earth science , and physics . 
<hl> Other scholars choose to divide natural sciences into life sciences , which study living things and include biology , and physical sciences , which study nonliving matter and include astronomy , geology , physics , and chemistry . Some disciplines such as biophysics and biochemistry build on both life and physical sciences and are interdisciplinary . Natural sciences are sometimes referred to as “ hard science ” because they rely on the use of quantitative data ; social sciences that study society and human behavior are more likely to use qualitative assessments to drive investigations and findings . What would you expect to see in a museum of natural sciences ? Frogs ? Plants ? Dinosaur skeletons ? Exhibits about how the brain functions ? A planetarium ? Gems and minerals ? Or , maybe all of the above ? <hl> Science includes such diverse fields as astronomy , biology , computer sciences , geology , logic , physics , chemistry , and mathematics ( Figure 1.4 ) . <hl> However , those fields of science related to the physical world and its phenomena and processes are considered natural sciences . Thus , a museum of natural sciences might contain any of the items listed above .", "hl_sentences": "There is no complete agreement when it comes to defining what the natural sciences include , however . For some experts , the natural sciences are astronomy , biology , chemistry , earth science , and physics . Science includes such diverse fields as astronomy , biology , computer sciences , geology , logic , physics , chemistry , and mathematics ( Figure 1.4 ) .", "question": { "cloze_format": "The science of ___ is not considered a natural science.", "normal_format": "Which of the following sciences is not considered a natural science?", "question_choices": [ "biology", "astronomy", "physics", "computer science" ], "question_id": "fs-id1512007", "question_text": "Which of the following sciences is not considered a natural science?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "inductive reasoning" }, "bloom": null, "hl_context": "<hl> Inductive reasoning is a form of logical thinking that uses related observations to arrive at a general conclusion . <hl> This type of reasoning is common in descriptive science . A life scientist such as a biologist makes observations and records them . These data can be qualitative or quantitative , and the raw data can be supplemented with drawings , pictures , photos , or videos . From many observations , the scientist can infer conclusions ( inductions ) based on evidence . Inductive reasoning involves formulating generalizations inferred from careful observation and the analysis of a large amount of data . Brain studies provide an example . In this type of research , many live brains are observed while people are doing a specific activity , such as viewing images of food . The part of the brain that “ lights up ” during this activity is then predicted to be the part controlling the response to the selected stimulus , in this case , images of food . The “ lighting up ” of the various areas of the brain is caused by excess absorption of radioactive sugar derivatives by active areas of the brain . The resultant increase in radioactivity is observed by a scanner . 
Then , researchers can stimulate that part of the brain to see if similar responses result .", "hl_sentences": "Inductive reasoning is a form of logical thinking that uses related observations to arrive at a general conclusion .", "question": { "cloze_format": "The type of logical thinking that uses related observations to arrive at a general conclusion is called ________.", "normal_format": "What type of logical thinking uses related observations to arrive at a general conclusion? ", "question_choices": [ "deductive reasoning", "the scientific method", "hypothesis-based science", "inductive reasoning" ], "question_id": "fs-id1967894", "question_text": "The type of logical thinking that uses related observations to arrive at a general conclusion is called ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "peer review" }, "bloom": null, "hl_context": "Whether scientific research is basic science or applied science , scientists must share their findings in order for other researchers to expand and build upon their discoveries . Collaboration with other scientists — when planning , conducting , and analyzing results — are all important for scientific research . For this reason , important aspects of a scientist ’ s work are communicating with peers and disseminating results to peers . Scientists can share results by presenting them at a scientific meeting or conference , but this approach can reach only the select few who are present . Instead , most scientists present their results in peer-reviewed manuscripts that are published in scientific journals . Peer-reviewed manuscripts are scientific papers that are reviewed by a scientist ’ s colleagues , or peers . These colleagues are qualified individuals , often experts in the same research area , who judge whether or not the scientist ’ s work is suitable for publication . <hl> The process of peer review helps to ensure that the research described in a scientific paper or grant proposal is original , significant , logical , and thorough . <hl> Grant proposals , which are requests for research funding , are also subject to peer review . Scientists publish their work so other scientists can reproduce their experiments under similar or different conditions to expand on the findings . The experimental results must be consistent with the findings of other scientists .", "hl_sentences": "The process of peer review helps to ensure that the research described in a scientific paper or grant proposal is original , significant , logical , and thorough .", "question": { "cloze_format": "The process of ________ helps to ensure that a scientist’s research is original, significant, logical, and thorough.", "normal_format": "Which process helps to ensure that a scientist’s research is original, significant, logical, and thorough?", "question_choices": [ "publication", "public speaking", "peer review", "the scientific method" ], "question_id": "fs-id1694297", "question_text": "The process of ________ helps to ensure that a scientist’s research is original, significant, logical, and thorough." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "cell" }, "bloom": null, "hl_context": "Some cells contain aggregates of macromolecules surrounded by membranes ; these are called organelles . Organelles are small structures that exist within cells . 
Examples of organelles include mitochondria and chloroplasts , which carry out indispensable functions : mitochondria produce energy to power the cell , while chloroplasts enable green plants to utilize the energy in sunlight to make sugars . <hl> All living things are made of cells ; the cell itself is the smallest fundamental unit of structure and function in living organisms . <hl> ( This requirement is why viruses are not considered living : they are not made of cells . To make new viruses , they have to invade and hijack the reproductive mechanism of a living cell ; only then can they obtain the materials they need to reproduce . ) Some organisms consist of a single cell and others are multicellular . Cells are classified as prokaryotic or eukaryotic . Prokaryotes are single-celled or colonial organisms that do not have membrane-bound nuclei ; in contrast , the cells of eukaryotes do have membrane-bound organelles and a membrane-bound nucleus .", "hl_sentences": "All living things are made of cells ; the cell itself is the smallest fundamental unit of structure and function in living organisms .", "question": { "cloze_format": "The smallest unit of biological structure that meets the functional requirements of “living” is the ________.", "normal_format": "What is the smallest unit of biological structure that meets the functional requirements of “living”?", "question_choices": [ "organ", "organelle", "cell", "macromolecule" ], "question_id": "fs-id2575162", "question_text": "The smallest unit of biological structure that meets the functional requirements of “living” is the ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "are not made of cells" }, "bloom": null, "hl_context": "Some cells contain aggregates of macromolecules surrounded by membranes ; these are called organelles . Organelles are small structures that exist within cells . Examples of organelles include mitochondria and chloroplasts , which carry out indispensable functions : mitochondria produce energy to power the cell , while chloroplasts enable green plants to utilize the energy in sunlight to make sugars . All living things are made of cells ; the cell itself is the smallest fundamental unit of structure and function in living organisms . <hl> ( This requirement is why viruses are not considered living : they are not made of cells . <hl> To make new viruses , they have to invade and hijack the reproductive mechanism of a living cell ; only then can they obtain the materials they need to reproduce . ) Some organisms consist of a single cell and others are multicellular . Cells are classified as prokaryotic or eukaryotic . Prokaryotes are single-celled or colonial organisms that do not have membrane-bound nuclei ; in contrast , the cells of eukaryotes do have membrane-bound organelles and a membrane-bound nucleus . Biology is the science that studies life , but what exactly is life ? This may sound like a silly question with an obvious response , but it is not always easy to define life . For example , a branch of biology called virology studies viruses , which exhibit some of the characteristics of living entities but lack others . <hl> It turns out that although viruses can attack living organisms , cause diseases , and even reproduce , they do not meet the criteria that biologists use to define life . <hl> Consequently , virologists are not biologists , strictly speaking . 
Similarly , some biologists study the early molecular evolution that gave rise to life ; since the events that preceded life are not biological events , these scientists are also excluded from biology in the strict sense of the term .", "hl_sentences": "( This requirement is why viruses are not considered living : they are not made of cells . It turns out that although viruses can attack living organisms , cause diseases , and even reproduce , they do not meet the criteria that biologists use to define life .", "question": { "cloze_format": "Viruses are not considered living because they ________.", "normal_format": "Why are viruses not considered living?", "question_choices": [ "are not made of cells", "lack cell nuclei", "do not contain DNA or RNA", "cannot reproduce" ], "question_id": "fs-id3095300", "question_text": "Viruses are not considered living because they ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "eukaryotic cells" }, "bloom": null, "hl_context": "In the past , biologists grouped living organisms into five kingdoms : animals , plants , fungi , protists , and bacteria . The organizational scheme was based mainly on physical features , as opposed to physiology , biochemistry , or molecular biology , all of which are used by modern systematics . The pioneering work of American microbiologist Carl Woese in the early 1970s has shown , however , that life on Earth has evolved along three lineages , now called domains — Bacteria , Archaea , and Eukarya . <hl> The first two are prokaryotic cells with microbes that lack membrane-enclosed nuclei and organelles . <hl> <hl> The third domain contains the eukaryotes and includes unicellular microorganisms together with the four original kingdoms ( excluding bacteria ) . <hl> Woese defined Archaea as a new domain , and this resulted in a new taxonomic tree ( Figure 1.17 ) . Many organisms belonging to the Archaea domain live under extreme conditions and are called extremophiles . To construct his tree , Woese used genetic relationships rather than similarities based on morphology ( shape ) . Some cells contain aggregates of macromolecules surrounded by membranes ; these are called organelles . Organelles are small structures that exist within cells . Examples of organelles include mitochondria and chloroplasts , which carry out indispensable functions : mitochondria produce energy to power the cell , while chloroplasts enable green plants to utilize the energy in sunlight to make sugars . All living things are made of cells ; the cell itself is the smallest fundamental unit of structure and function in living organisms . ( This requirement is why viruses are not considered living : they are not made of cells . To make new viruses , they have to invade and hijack the reproductive mechanism of a living cell ; only then can they obtain the materials they need to reproduce . ) Some organisms consist of a single cell and others are multicellular . Cells are classified as prokaryotic or eukaryotic . <hl> Prokaryotes are single-celled or colonial organisms that do not have membrane-bound nuclei ; in contrast , the cells of eukaryotes do have membrane-bound organelles and a membrane-bound nucleus . <hl>", "hl_sentences": "The first two are prokaryotic cells with microbes that lack membrane-enclosed nuclei and organelles . The third domain contains the eukaryotes and includes unicellular microorganisms together with the four original kingdoms ( excluding bacteria ) . 
Prokaryotes are single-celled or colonial organisms that do not have membrane-bound nuclei ; in contrast , the cells of eukaryotes do have membrane-bound organelles and a membrane-bound nucleus .", "question": { "cloze_format": "The presence of a membrane-enclosed nucleus is a characteristic of ________.", "normal_format": "What is the presence of a membrane-enclosed nucleus a characteristic of?", "question_choices": [ "prokaryotic cells", "eukaryotic cells", "living organisms", "bacteria" ], "question_id": "fs-id1443260", "question_text": "The presence of a membrane-enclosed nucleus is a characteristic of ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "population" }, "bloom": "2", "hl_context": "<hl> All the individuals of a species living within a specific area are collectively called a population . <hl> For example , a forest may include many pine trees . All of these pine trees represent the population of pine trees in this forest . Different populations may live in the same specific area . For example , the forest with the pine trees includes populations of flowering plants and also insects and microbial populations . A community is the sum of populations inhabiting a particular area . For instance , all of the trees , flowers , insects , and other populations in a forest form the forest ’ s community . The forest itself is an ecosystem . An ecosystem consists of all the living things in a particular area together with the abiotic , non-living parts of that environment such as nitrogen in the soil or rain water . At the highest level of organization ( Figure 1.16 ) , the biosphere is the collection of all ecosystems , and it represents the zones of life on earth . It includes land , water , and even the atmosphere to a certain extent . Visual Connection Which of the following statements is false ?", "hl_sentences": "All the individuals of a species living within a specific area are collectively called a population .", "question": { "cloze_format": "A group of individuals of the same species living in the same area is called a(n) ________.", "normal_format": "What is a group of individuals of the same species living in the same area called?", "question_choices": [ "family", "community", "population", "ecosystem" ], "question_id": "fs-id2196908", "question_text": "A group of individuals of the same species living in the same area is called a(n) ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "biosphere, ecosystem, community, population, organism" }, "bloom": "3", "hl_context": "<hl> All the individuals of a species living within a specific area are collectively called a population . <hl> For example , a forest may include many pine trees . <hl> All of these pine trees represent the population of pine trees in this forest . <hl> <hl> Different populations may live in the same specific area . <hl> For example , the forest with the pine trees includes populations of flowering plants and also insects and microbial populations . <hl> A community is the sum of populations inhabiting a particular area . <hl> <hl> For instance , all of the trees , flowers , insects , and other populations in a forest form the forest ’ s community . <hl> The forest itself is an ecosystem . <hl> An ecosystem consists of all the living things in a particular area together with the abiotic , non-living parts of that environment such as nitrogen in the soil or rain water . 
<hl> <hl> At the highest level of organization ( Figure 1.16 ) , the biosphere is the collection of all ecosystems , and it represents the zones of life on earth . <hl> It includes land , water , and even the atmosphere to a certain extent . Visual Connection Which of the following statements is false ? In larger organisms , cells combine to make tissues , which are groups of similar cells carrying out similar or related functions . Organs are collections of tissues grouped together performing a common function . Organs are present not only in animals but also in plants . An organ system is a higher level of organization that consists of functionally related organs . Mammals have many organ systems . For instance , the circulatory system transports blood through the body and to and from the lungs ; it includes organs such as the heart and blood vessels . <hl> Organisms are individual living entities . <hl> <hl> For example , each tree in a forest is an organism . <hl> Single-celled prokaryotes and single-celled eukaryotes are also considered organisms and are typically referred to as microorganisms .", "hl_sentences": "All the individuals of a species living within a specific area are collectively called a population . All of these pine trees represent the population of pine trees in this forest . Different populations may live in the same specific area . A community is the sum of populations inhabiting a particular area . For instance , all of the trees , flowers , insects , and other populations in a forest form the forest ’ s community . An ecosystem consists of all the living things in a particular area together with the abiotic , non-living parts of that environment such as nitrogen in the soil or rain water . At the highest level of organization ( Figure 1.16 ) , the biosphere is the collection of all ecosystems , and it represents the zones of life on earth . Organisms are individual living entities . For example , each tree in a forest is an organism .", "question": { "cloze_format": "The sequences of ___ represent the hierarchy of biological organization from the most inclusive to the least complex level.", "normal_format": "Which of the following sequences represents the hierarchy of biological organization from the most inclusive to the least complex level?", "question_choices": [ "organelle, tissue, biosphere, ecosystem, population", "organ, organism, tissue, organelle, molecule", "organism, community, biosphere, molecule, tissue, organ", "biosphere, ecosystem, community, population, organism" ], "question_id": "fs-id2169252", "question_text": "Which of the following sequences represents the hierarchy of biological organization from the most inclusive to the least complex level?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "at the branch tips" }, "bloom": "2", "hl_context": "The evolution of various life forms on Earth can be summarized in a phylogenetic tree ( Figure 1.17 ) . <hl> A phylogenetic tree is a diagram showing the evolutionary relationships among biological species based on similarities and differences in genetic or physical traits or both . <hl> <hl> A phylogenetic tree is composed of nodes and branches . <hl> <hl> The internal nodes represent ancestors and are points in evolution when , based on scientific evidence , an ancestor is thought to have diverged to form two new species . 
<hl> The length of each branch is proportional to the time elapsed since the split .", "hl_sentences": "A phylogenetic tree is a diagram showing the evolutionary relationships among biological species based on similarities and differences in genetic or physical traits or both . A phylogenetic tree is composed of nodes and branches . The internal nodes represent ancestors and are points in evolution when , based on scientific evidence , an ancestor is thought to have diverged to form two new species .", "question": { "cloze_format": "The place in a phylogenetic tree where you would expect to find the organism that had evolved most recently is ___ .", "normal_format": "Where in a phylogenetic tree would you expect to find the organism that had evolved most recently?", "question_choices": [ "at the base", "within the branches", "at the nodes", "at the branch tips" ], "question_id": "fs-id1967739", "question_text": "Where in a phylogenetic tree would you expect to find the organism that had evolved most recently?" }, "references_are_paraphrase": null } ]
1.1 The Science of Biology

Learning Objectives

By the end of this section, you will be able to:
Identify the shared characteristics of the natural sciences
Summarize the steps of the scientific method
Compare inductive reasoning with deductive reasoning
Describe the goals of basic science and applied science

What is biology? In simple terms, biology is the study of living organisms and their interactions with one another and their environments. This is a very broad definition because the scope of biology is vast. Biologists may study anything from the microscopic or submicroscopic view of a cell to ecosystems and the whole living planet ( Figure 1.2 ). Listening to the daily news, you will quickly realize how many aspects of biology are discussed every day. For example, recent news topics include Escherichia coli ( Figure 1.3 ) outbreaks in spinach and Salmonella contamination in peanut butter. Other subjects include efforts toward finding a cure for AIDS, Alzheimer's disease, and cancer. On a global scale, many researchers are committed to finding ways to protect the planet, solve environmental issues, and reduce the effects of climate change. All of these diverse endeavors are related to different facets of the discipline of biology.

The Process of Science

Biology is a science, but what exactly is science? What does the study of biology share with other scientific disciplines? Science (from the Latin scientia, meaning "knowledge") can be defined as knowledge that covers general truths or the operation of general laws, especially when acquired and tested by the scientific method. It becomes clear from this definition that the application of the scientific method plays a major role in science. The scientific method is a method of research with defined steps that include experiments and careful observation. The steps of the scientific method will be examined in detail later, but one of the most important aspects of this method is the testing of hypotheses by means of repeatable experiments. A hypothesis is a suggested explanation for an event, which can be tested. Although using the scientific method is inherent to science, it is inadequate in determining what science is. This is because it is relatively easy to apply the scientific method to disciplines such as physics and chemistry, but when it comes to disciplines like archaeology, psychology, and geology, the scientific method becomes less applicable as it becomes more difficult to repeat experiments. These areas of study are still sciences, however. Consider archeology—even though one cannot perform repeatable experiments, hypotheses may still be supported. For instance, an archeologist can hypothesize that an ancient culture existed based on finding a piece of pottery. Further hypotheses could be made about various characteristics of this culture, and these hypotheses may be found to be correct or false through continued support or contradictions from other findings. A hypothesis may become a verified theory. A theory is a tested and confirmed explanation for observations or phenomena. Science may be better defined as fields of study that attempt to comprehend the nature of the universe.

Natural Sciences

What would you expect to see in a museum of natural sciences? Frogs? Plants? Dinosaur skeletons? Exhibits about how the brain functions? A planetarium? Gems and minerals? Or, maybe all of the above?
Science includes such diverse fields as astronomy, biology, computer sciences, geology, logic, physics, chemistry, and mathematics ( Figure 1.4 ). However, those fields of science related to the physical world and its phenomena and processes are considered natural sciences. Thus, a museum of natural sciences might contain any of the items listed above.

There is no complete agreement when it comes to defining what the natural sciences include, however. For some experts, the natural sciences are astronomy, biology, chemistry, earth science, and physics. Other scholars choose to divide natural sciences into life sciences, which study living things and include biology, and physical sciences, which study nonliving matter and include astronomy, geology, physics, and chemistry. Some disciplines such as biophysics and biochemistry build on both life and physical sciences and are interdisciplinary. Natural sciences are sometimes referred to as "hard science" because they rely on the use of quantitative data; social sciences that study society and human behavior are more likely to use qualitative assessments to drive investigations and findings. Not surprisingly, the natural science of biology has many branches or subdisciplines. Cell biologists study cell structure and function, while biologists who study anatomy investigate the structure of an entire organism. Those biologists studying physiology, however, focus on the internal functioning of an organism. Some areas of biology focus on only particular types of living things. For example, botanists explore plants, while zoologists specialize in animals.

Scientific Reasoning

One thing is common to all forms of science: an ultimate goal "to know." Curiosity and inquiry are the driving forces for the development of science. Scientists seek to understand the world and the way it operates. To do this, they use two methods of logical thinking: inductive reasoning and deductive reasoning.

Inductive reasoning is a form of logical thinking that uses related observations to arrive at a general conclusion. This type of reasoning is common in descriptive science. A life scientist such as a biologist makes observations and records them. These data can be qualitative or quantitative, and the raw data can be supplemented with drawings, pictures, photos, or videos. From many observations, the scientist can infer conclusions (inductions) based on evidence. Inductive reasoning involves formulating generalizations inferred from careful observation and the analysis of a large amount of data. Brain studies provide an example. In this type of research, many live brains are observed while people are doing a specific activity, such as viewing images of food. The part of the brain that "lights up" during this activity is then predicted to be the part controlling the response to the selected stimulus, in this case, images of food. The "lighting up" of the various areas of the brain is caused by excess absorption of radioactive sugar derivatives by active areas of the brain. The resultant increase in radioactivity is observed by a scanner. Then, researchers can stimulate that part of the brain to see if similar responses result.

Deductive reasoning or deduction is the type of logic used in hypothesis-based science. In deductive reasoning, the pattern of thinking moves in the opposite direction as compared to inductive reasoning. Deductive reasoning is a form of logical thinking that uses a general principle or law to forecast specific results.
From those general principles, a scientist can extrapolate and predict the specific results that would be valid as long as the general principles are valid. Studies in climate change can illustrate this type of reasoning. For example, scientists may predict that if the climate becomes warmer in a particular region, then the distribution of plants and animals should change. These predictions have been made and tested, and many such changes have been found, such as the modification of arable areas for agriculture, with change based on temperature averages.

Both types of logical thinking are related to the two main pathways of scientific study: descriptive science and hypothesis-based science. Descriptive (or discovery) science, which is usually inductive, aims to observe, explore, and discover, while hypothesis-based science, which is usually deductive, begins with a specific question or problem and a potential answer or solution that can be tested. The boundary between these two forms of study is often blurred, and most scientific endeavors combine both approaches. The fuzzy boundary becomes apparent when thinking about how easily observation can lead to specific questions. For example, a gentleman in the 1940s observed that the burr seeds that stuck to his clothes and his dog's fur had a tiny hook structure. On closer inspection, he discovered that the burrs' gripping device was more reliable than a zipper. He eventually developed a company and produced the hook-and-loop fastener popularly known today as Velcro. Descriptive science and hypothesis-based science are in continuous dialogue.

The Scientific Method

Biologists study the living world by posing questions about it and seeking science-based responses. This approach is common to other sciences as well and is often referred to as the scientific method. The scientific method was used even in ancient times, but it was first documented by England's Sir Francis Bacon (1561–1626) ( Figure 1.5 ), who set up inductive methods for scientific inquiry. The scientific method is not exclusively used by biologists but can be applied to almost all fields of study as a logical, rational problem-solving method. The scientific process typically starts with an observation (often a problem to be solved) that leads to a question. Let's think about a simple problem that starts with an observation and apply the scientific method to solve the problem. One Monday morning, a student arrives at class and quickly discovers that the classroom is too warm. That is an observation that also describes a problem: the classroom is too warm. The student then asks a question: "Why is the classroom so warm?"

Proposing a Hypothesis

Recall that a hypothesis is a suggested explanation that can be tested. To solve a problem, several hypotheses may be proposed. For example, one hypothesis might be, "The classroom is warm because no one turned on the air conditioning." But there could be other responses to the question, and therefore other hypotheses may be proposed. A second hypothesis might be, "The classroom is warm because there is a power failure, and so the air conditioning doesn't work." Once a hypothesis has been selected, the student can make a prediction. A prediction is similar to a hypothesis but it typically has the format "If . . . then . . . ." For example, the prediction for the first hypothesis might be, "If the student turns on the air conditioning, then the classroom will no longer be too warm."

Testing a Hypothesis

A valid hypothesis must be testable.
It should also be falsifiable, meaning that it can be disproven by experimental results. Importantly, science does not claim to "prove" anything because scientific understandings are always subject to modification with further information. This step—openness to disproving ideas—is what distinguishes sciences from non-sciences. The presence of the supernatural, for instance, is neither testable nor falsifiable.

To test a hypothesis, a researcher will conduct one or more experiments designed to eliminate one or more of the hypotheses. Each experiment will have one or more variables and one or more controls. A variable is any part of the experiment that can vary or change during the experiment. The control group contains every feature of the experimental group except that it is not given the manipulation that is hypothesized about. Therefore, if the results of the experimental group differ from the control group, the difference must be due to the hypothesized manipulation, rather than some outside factor. Look for the variables and controls in the examples that follow. To test the first hypothesis, the student would find out if the air conditioning is on. If the air conditioning is turned on but does not work, there should be another reason, and this hypothesis should be rejected. To test the second hypothesis, the student could check if the lights in the classroom are functional. If so, there is no power failure and this hypothesis should be rejected. Each hypothesis should be tested by carrying out appropriate experiments. Be aware that rejecting one hypothesis does not determine whether or not the other hypotheses can be accepted; it simply eliminates one hypothesis that is not valid ( Figure 1.6 ). Using the scientific method, the hypotheses that are inconsistent with experimental data are rejected.

While this "warm classroom" example is based on observational results, other hypotheses and experiments might have clearer controls. For instance, a student might attend class on Monday and realize she had difficulty concentrating on the lecture. One hypothesis to explain this occurrence might be, "When I eat breakfast before class, I am better able to pay attention." The student could then design an experiment with a control to test this hypothesis. In hypothesis-based science, specific results are predicted from a general premise. This type of reasoning is called deductive reasoning: deduction proceeds from the general to the particular. But the reverse of the process is also possible: sometimes, scientists reach a general conclusion from a number of specific observations. This type of reasoning is called inductive reasoning, and it proceeds from the particular to the general. Inductive and deductive reasoning are often used in tandem to advance scientific knowledge ( Figure 1.7 ). In recent years a new approach to testing hypotheses has developed as a result of an exponential growth of data deposited in various databases. Using computer algorithms and statistical analyses of data in databases, a new field of so-called "data research" (also referred to as "in silico" research) provides new methods of data analyses and their interpretation. This will increase the demand for specialists in both biology and computer science, a promising career opportunity.
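To make the testing loop concrete, here is a toy sketch of the warm-classroom example; it is my own construction rather than anything from the text. Each hypothesis is paired with a check that could falsify it, and any hypothesis inconsistent with the evidence is rejected.

```python
# Toy model of hypothesis testing: pair each hypothesis with an
# observation that can falsify it. The "observations" below are stubs
# standing in for the student's actual checks.

def ac_is_off():
    return False  # pretend check: the air conditioning was already on

def power_is_out():
    return True   # pretend check: the lights are dead

hypotheses = [
    ("No one turned on the air conditioning", ac_is_off),
    ("A power failure disabled the air conditioning", power_is_out),
]

for claim, test in hypotheses:
    if test():
        print(f"Consistent with the evidence: {claim}")
    else:
        print(f"Rejected: {claim}")
```

Note what the loop does not do: a surviving hypothesis is not proven, merely not yet rejected, which mirrors the falsifiability point above.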
Visual Connection

In the example below, the scientific method is used to solve an everyday problem. Order the scientific method steps (numbered items) with the process of solving the everyday problem (lettered items). Based on the results of the experiment, is the hypothesis correct? If it is incorrect, propose some alternative hypotheses.

1. Observation
2. Question
3. Hypothesis (answer)
4. Prediction
5. Experiment
6. Result

a. There is something wrong with the electrical outlet.
b. If something is wrong with the outlet, my coffeemaker also won't work when plugged into it.
c. My toaster doesn't toast my bread.
d. I plug my coffee maker into the outlet.
e. My coffeemaker works.
f. Why doesn't my toaster work?

Visual Connection

Decide if each of the following is an example of inductive or deductive reasoning.
1. All flying birds and insects have wings. Birds and insects flap their wings as they move through the air. Therefore, wings enable flight.
2. Insects generally survive mild winters better than harsh ones. Therefore, insect pests will become more problematic if global temperatures increase.
3. Chromosomes, the carriers of DNA, separate into daughter cells during cell division. Therefore, DNA is the genetic material.
4. Animals as diverse as humans, insects, and wolves all exhibit social behavior. Therefore, social behavior must have an evolutionary advantage.

The scientific method may seem too rigid and structured. It is important to keep in mind that, although scientists often follow this sequence, there is flexibility. Sometimes an experiment leads to conclusions that favor a change in approach; often, an experiment brings entirely new scientific questions to the puzzle. Many times, science does not operate in a linear fashion; instead, scientists continually draw inferences and make generalizations, finding patterns as their research proceeds. Scientific reasoning is more complex than the scientific method alone suggests. Notice, too, that the scientific method can be applied to solving problems that aren't necessarily scientific in nature.

Two Types of Science: Basic Science and Applied Science

The scientific community has been debating for the last few decades about the value of different types of science. Is it valuable to pursue science for the sake of simply gaining knowledge, or does scientific knowledge only have worth if we can apply it to solving a specific problem or to bettering our lives? This question focuses on the differences between two types of science: basic science and applied science. Basic science or "pure" science seeks to expand knowledge regardless of the short-term application of that knowledge. It is not focused on developing a product or a service of immediate public or commercial value. The immediate goal of basic science is knowledge for knowledge's sake, though this does not mean that, in the end, it may not result in a practical application. In contrast, applied science or "technology" aims to use science to solve real-world problems, making it possible, for example, to improve a crop yield, find a cure for a particular disease, or save animals threatened by a natural disaster ( Figure 1.8 ). In applied science, the problem is usually defined for the researcher. Some individuals may perceive applied science as "useful" and basic science as "useless." A question these people might pose to a scientist advocating knowledge acquisition would be, "What for?" A careful look at the history of science, however, reveals that basic knowledge has resulted in many remarkable applications of great value. Many scientists think that a basic understanding of science is necessary before an application is developed; therefore, applied science relies on the results generated through basic science.
Other scientists think that it is time to move on from basic science and instead find solutions to actual problems. Both approaches are valid. It is true that there are problems that demand immediate attention; however, few solutions would be found without the help of the wide knowledge foundation generated through basic science.

One example of how basic and applied science can work together to solve practical problems occurred after the discovery of DNA structure led to an understanding of the molecular mechanisms governing DNA replication. Strands of DNA, unique in every human, are found in our cells, where they provide the instructions necessary for life. During DNA replication, DNA makes new copies of itself, shortly before a cell divides. Understanding the mechanisms of DNA replication enabled scientists to develop laboratory techniques that are now used to identify genetic diseases, pinpoint individuals who were at a crime scene, and determine paternity. Without basic science, it is unlikely that applied science would exist.

Another example of the link between basic and applied research is the Human Genome Project, a study in which each human chromosome was analyzed and mapped to determine the precise sequence of DNA subunits and the exact location of each gene. (The gene is the basic unit of heredity; an individual's complete collection of genes is his or her genome.) Other less complex organisms have also been studied as part of this project in order to gain a better understanding of human chromosomes. The Human Genome Project ( Figure 1.9 ) relied on basic research carried out with simple organisms and, later, with the human genome. An important end goal eventually became using the data for applied research, seeking cures and early diagnoses for genetically related diseases.

While research efforts in both basic science and applied science are usually carefully planned, it is important to note that some discoveries are made by serendipity, that is, by means of a fortunate accident or a lucky surprise. Penicillin was discovered when biologist Alexander Fleming accidentally left a petri dish of Staphylococcus bacteria open. An unwanted mold grew on the dish, killing the bacteria. The mold turned out to be Penicillium, and a new antibiotic was discovered. Even in the highly organized world of science, luck—when combined with an observant, curious mind—can lead to unexpected breakthroughs.

Reporting Scientific Work

Whether scientific research is basic science or applied science, scientists must share their findings in order for other researchers to expand and build upon their discoveries. Collaboration with other scientists when planning, conducting, and analyzing results is important for scientific research. For this reason, important aspects of a scientist's work are communicating with peers and disseminating results to peers. Scientists can share results by presenting them at a scientific meeting or conference, but this approach can reach only the select few who are present. Instead, most scientists present their results in peer-reviewed manuscripts that are published in scientific journals. Peer-reviewed manuscripts are scientific papers that are reviewed by a scientist's colleagues, or peers. These colleagues are qualified individuals, often experts in the same research area, who judge whether or not the scientist's work is suitable for publication.
The process of peer review helps to ensure that the research described in a scientific paper or grant proposal is original, significant, logical, and thorough. Grant proposals, which are requests for research funding, are also subject to peer review. Scientists publish their work so other scientists can reproduce their experiments under similar or different conditions to expand on the findings. The experimental results must be consistent with the findings of other scientists.

A scientific paper is very different from creative writing. Although creativity is required to design experiments, there are fixed guidelines when it comes to presenting scientific results. First, scientific writing must be brief, concise, and accurate. A scientific paper needs to be succinct but detailed enough to allow peers to reproduce the experiments. The scientific paper consists of several specific sections—introduction, materials and methods, results, and discussion. This structure is sometimes called the "IMRaD" format. There are usually acknowledgment and reference sections as well as an abstract (a concise summary) at the beginning of the paper. There might be additional sections depending on the type of paper and the journal where it will be published; for example, some review papers require an outline.

The introduction starts with brief, but broad, background information about what is known in the field. A good introduction also gives the rationale of the work; it justifies the work carried out and also briefly mentions the end of the paper, where the hypothesis or research question driving the research will be presented. The introduction refers to the published scientific work of others and therefore requires citations following the style of the journal. Using the work or ideas of others without proper citation is considered plagiarism.

The materials and methods section includes a complete and accurate description of the substances used, and the method and techniques used by the researchers to gather data. The description should be thorough enough to allow another researcher to repeat the experiment and obtain similar results, but it does not have to be verbose. This section will also include information on how measurements were made and what types of calculations and statistical analyses were used to examine raw data. Although the materials and methods section gives an accurate description of the experiments, it does not discuss them.

Some journals require a results section followed by a discussion section, but it is more common to combine both. If the journal does not allow the combination of both sections, the results section simply narrates the findings without any further interpretation. The results are presented by means of tables or graphs, but no duplicate information should be presented. In the discussion section, the researcher will interpret the results, describe how variables may be related, and attempt to explain the observations. It is indispensable to conduct an extensive literature search to put the results in the context of previously published scientific research. Therefore, proper citations are included in this section as well. Finally, the conclusion section summarizes the importance of the experimental findings. While the scientific paper almost certainly answered one or more scientific questions that were stated, any good research should lead to more questions.
Therefore, a well-done scientific paper leaves doors open for the researcher and others to continue and expand on the findings. Review articles do not follow the IMRaD format because they do not present original scientific findings, or primary literature; instead, they summarize and comment on findings that were published as primary literature and typically include extensive reference sections.

1.2 Themes and Concepts of Biology

Learning Objectives

By the end of this section, you will be able to:
Identify and describe the properties of life
Describe the levels of organization among living things
Recognize and interpret a phylogenetic tree
List examples of different subdisciplines in biology

Biology is the science that studies life, but what exactly is life? This may sound like a silly question with an obvious response, but it is not always easy to define life. For example, a branch of biology called virology studies viruses, which exhibit some of the characteristics of living entities but lack others. It turns out that although viruses can attack living organisms, cause diseases, and even reproduce, they do not meet the criteria that biologists use to define life. Consequently, virologists are not biologists, strictly speaking. Similarly, some biologists study the early molecular evolution that gave rise to life; since the events that preceded life are not biological events, these scientists are also excluded from biology in the strict sense of the term. From its earliest beginnings, biology has wrestled with three questions: What are the shared properties that make something "alive"? And once we know something is alive, how do we find meaningful levels of organization in its structure? And, finally, when faced with the remarkable diversity of life, how do we organize the different kinds of organisms so that we can better understand them? As new organisms are discovered every day, biologists continue to seek answers to these and other questions.

Properties of Life

All living organisms share several key characteristics or functions: order, sensitivity or response to the environment, reproduction, adaptation, growth and development, regulation, homeostasis, energy processing, and evolution. When viewed together, these nine characteristics serve to define life.

Order

Organisms are highly organized, coordinated structures that consist of one or more cells. Even very simple, single-celled organisms are remarkably complex: inside each cell, atoms make up molecules; these in turn make up cell organelles and other cellular inclusions. In multicellular organisms ( Figure 1.10 ), similar cells form tissues. Tissues, in turn, collaborate to create organs (body structures with a distinct function). Organs work together to form organ systems.

Sensitivity or Response to Stimuli

Organisms respond to diverse stimuli. For example, plants can bend toward a source of light, climb on fences and walls, or respond to touch ( Figure 1.11 ). Even tiny bacteria can move toward or away from chemicals (a process called chemotaxis) or light (phototaxis). Movement toward a stimulus is considered a positive response, while movement away from a stimulus is considered a negative response.

Link to Learning

Watch this video to see how plants respond to a stimulus—from opening to light, to wrapping a tendril around a branch, to capturing prey.

Reproduction

Single-celled organisms reproduce by first duplicating their DNA, and then dividing it equally as the cell prepares to divide to form two new cells.
Multicellular organisms often produce specialized reproductive germline cells that will form new individuals. When reproduction occurs, genes containing DNA are passed along to an organism's offspring. These genes ensure that the offspring will belong to the same species and will have similar characteristics, such as size and shape.

Growth and Development

Organisms grow and develop following specific instructions coded for by their genes. These genes provide instructions that will direct cellular growth and development, ensuring that a species' young ( Figure 1.12 ) will grow up to exhibit many of the same characteristics as its parents.

Regulation

Even the smallest organisms are complex and require multiple regulatory mechanisms to coordinate internal functions, respond to stimuli, and cope with environmental stresses. Two examples of internal functions regulated in an organism are nutrient transport and blood flow. Organs (groups of tissues working together) perform specific functions, such as carrying oxygen throughout the body, removing wastes, delivering nutrients to every cell, and cooling the body.

Homeostasis

In order to function properly, cells need to have appropriate conditions such as proper temperature, pH, and appropriate concentration of diverse chemicals. These conditions may, however, change from one moment to the next. Organisms are able to maintain internal conditions within a narrow range almost constantly, despite environmental changes, through homeostasis (literally, "steady state")—the ability of an organism to maintain constant internal conditions. For example, an organism needs to regulate body temperature through a process known as thermoregulation. Organisms that live in cold climates, such as the polar bear ( Figure 1.13 ), have body structures that help them withstand low temperatures and conserve body heat. Structures that aid in this type of insulation include fur, feathers, blubber, and fat. In hot climates, organisms have methods (such as perspiration in humans or panting in dogs) that help them to shed excess body heat.

Energy Processing

All organisms use a source of energy for their metabolic activities. Some organisms capture energy from the sun and convert it into chemical energy in food; others use chemical energy in molecules they take in as food ( Figure 1.14 ).

Levels of Organization of Living Things

Living things are highly organized and structured, following a hierarchy that can be examined on a scale from small to large. The atom is the smallest and most fundamental unit of matter. It consists of a nucleus surrounded by electrons. Atoms form molecules. A molecule is a chemical structure consisting of at least two atoms held together by one or more chemical bonds. Many molecules that are biologically important are macromolecules, large molecules that are typically formed by polymerization (a polymer is a large molecule that is made by combining smaller units called monomers, which are simpler than macromolecules). An example of a macromolecule is deoxyribonucleic acid (DNA) ( Figure 1.15 ), which contains the instructions for the structure and functioning of all living organisms.

Link to Learning

Watch this video that animates the three-dimensional structure of the DNA molecule shown in Figure 1.15 .

Some cells contain aggregates of macromolecules surrounded by membranes; these are called organelles. Organelles are small structures that exist within cells.
Examples of organelles include mitochondria and chloroplasts, which carry out indispensable functions: mitochondria produce energy to power the cell, while chloroplasts enable green plants to utilize the energy in sunlight to make sugars. All living things are made of cells; the cell itself is the smallest fundamental unit of structure and function in living organisms. (This requirement is why viruses are not considered living: they are not made of cells. To make new viruses, they have to invade and hijack the reproductive mechanism of a living cell; only then can they obtain the materials they need to reproduce.) Some organisms consist of a single cell and others are multicellular. Cells are classified as prokaryotic or eukaryotic. Prokaryotes are single-celled or colonial organisms that do not have membrane-bound nuclei; in contrast, the cells of eukaryotes do have membrane-bound organelles and a membrane-bound nucleus. In larger organisms, cells combine to make tissues, which are groups of similar cells carrying out similar or related functions. Organs are collections of tissues grouped together performing a common function. Organs are present not only in animals but also in plants. An organ system is a higher level of organization that consists of functionally related organs. Mammals have many organ systems. For instance, the circulatory system transports blood through the body and to and from the lungs; it includes organs such as the heart and blood vessels. Organisms are individual living entities. For example, each tree in a forest is an organism. Single-celled prokaryotes and single-celled eukaryotes are also considered organisms and are typically referred to as microorganisms. All the individuals of a species living within a specific area are collectively called a population. For example, a forest may include many pine trees. All of these pine trees represent the population of pine trees in this forest. Different populations may live in the same specific area. For example, the forest with the pine trees includes populations of flowering plants and also insects and microbial populations. A community is the sum of populations inhabiting a particular area. For instance, all of the trees, flowers, insects, and other populations in a forest form the forest’s community. The forest itself is an ecosystem. An ecosystem consists of all the living things in a particular area together with the abiotic, non-living parts of that environment such as nitrogen in the soil or rain water. At the highest level of organization (Figure 1.16), the biosphere is the collection of all ecosystems, and it represents the zones of life on Earth. It includes land, water, and even the atmosphere to a certain extent. Visual Connection Which of the following statements is false? Tissues exist within organs which exist within organ systems. Communities exist within populations which exist within ecosystems. Organelles exist within cells which exist within tissues. Communities exist within ecosystems which exist in the biosphere. The Diversity of Life The fact that biology, as a science, has such a broad scope has to do with the tremendous diversity of life on Earth. The source of this diversity is evolution, the process of gradual change during which new species arise from older species. Evolutionary biologists study the evolution of living things in everything from the microscopic world to ecosystems. The evolution of various life forms on Earth can be summarized in a phylogenetic tree (Figure 1.17).
A phylogenetic tree is a diagram showing the evolutionary relationships among biological species based on similarities and differences in genetic or physical traits or both. A phylogenetic tree is composed of nodes and branches. The internal nodes represent ancestors and are points in evolution when, based on scientific evidence, an ancestor is thought to have diverged to form two new species. The length of each branch is proportional to the time elapsed since the split. Evolution Connection Carl Woese and the Phylogenetic Tree In the past, biologists grouped living organisms into five kingdoms: animals, plants, fungi, protists, and bacteria. The organizational scheme was based mainly on physical features, as opposed to physiology, biochemistry, or molecular biology, all of which are used by modern systematics. The pioneering work of American microbiologist Carl Woese in the early 1970s has shown, however, that life on Earth has evolved along three lineages, now called domains—Bacteria, Archaea, and Eukarya. The first two are prokaryotic cells with microbes that lack membrane-enclosed nuclei and organelles. The third domain contains the eukaryotes and includes unicellular microorganisms together with the four original kingdoms (excluding bacteria). Woese defined Archaea as a new domain, and this resulted in a new taxonomic tree (Figure 1.17). Many organisms belonging to the Archaea domain live under extreme conditions and are called extremophiles. To construct his tree, Woese used genetic relationships rather than similarities based on morphology (shape). Woese’s tree was constructed from comparative sequencing of the genes that are universally distributed, present in every organism, and conserved (meaning that these genes have remained essentially unchanged throughout evolution). Woese’s approach was revolutionary because comparisons of physical features are insufficient to differentiate between the prokaryotes, which appear fairly similar in spite of their tremendous biochemical diversity and genetic variability (Figure 1.18). The comparison of homologous DNA and RNA sequences provided Woese with a sensitive device that revealed the extensive variability of prokaryotes, and which justified the separation of the prokaryotes into two domains: Bacteria and Archaea. Branches of Biological Study The scope of biology is broad and therefore contains many branches and subdisciplines. Biologists may pursue one of those subdisciplines and work in a more focused field. For instance, molecular biology and biochemistry study biological processes at the molecular and chemical level, including interactions among molecules such as DNA, RNA, and proteins, as well as the way they are regulated. Microbiology, the study of microorganisms, examines the structure and function of single-celled organisms. It is quite a broad branch itself, and depending on the subject of study, there are also microbial physiologists, ecologists, and geneticists, among others. Career Connection Forensic Scientist Forensic science is the application of science to answer questions related to the law. Biologists as well as chemists and biochemists can be forensic scientists. Forensic scientists provide scientific evidence for use in courts, and their job involves examining trace materials associated with crimes. Interest in forensic science has increased in the last few years, possibly because of popular television shows that feature forensic scientists on the job.
Also, the development of molecular techniques and the establishment of DNA databases have expanded the types of work that forensic scientists can do. Their job activities are primarily related to crimes against people such as murder, rape, and assault. Their work involves analyzing samples such as hair, blood, and other body fluids and also processing DNA (Figure 1.19) found in many different environments and materials. Forensic scientists also analyze other biological evidence left at crime scenes, such as insect larvae or pollen grains. Students who want to pursue careers in forensic science will most likely be required to take chemistry and biology courses as well as some intensive math courses. Another field of biological study, neurobiology, studies the biology of the nervous system, and although it is considered a branch of biology, it is also recognized as an interdisciplinary field of study known as neuroscience. Because of its interdisciplinary nature, this subdiscipline studies different functions of the nervous system using molecular, cellular, developmental, medical, and computational approaches. Paleontology, another branch of biology, uses fossils to study life’s history (Figure 1.20). Zoology and botany are the study of animals and plants, respectively. Biologists can also specialize as biotechnologists, ecologists, or physiologists, to name just a few areas. This is just a small sample of the many fields that biologists can pursue. Biology is the culmination of the achievements of the natural sciences from their inception to today. Excitingly, it is the cradle of emerging sciences, such as the biology of brain activity, genetic engineering of custom organisms, and the biology of evolution that uses the laboratory tools of molecular biology to retrace the earliest stages of life on Earth. A scan of news headlines—whether reporting on immunizations, a newly discovered species, sports doping, or a genetically modified food—demonstrates the way biology is active in and important to our everyday world.
principles_of_accounting,_volume_2:_managerial_accounting
Summary 9.1 Differentiate between Centralized and Decentralized Management Management control systems allow managers to develop a reporting structure to help the organization meet its strategic goals. In centralized organizations, primary decisions are made by the person or persons at the top of the organization. Decentralized organizations delegate decision-making authority throughout the organization. Daily decision-making involves frequent and immediate decisions. Strategic decision-making involves infrequent and long-term decisions. 9.2 Describe How Decision-Making Differs between Centralized and Decentralized Environments Segments are uniquely identifiable components of the business that facilitate the effective and efficient operation of the business. Organizational charts are used to graphically represent the authority structure of an organization. The CEO of a centralized organization will establish the strategy and make decisions that will be implemented throughout the organization. The CEO of a decentralized organization will establish strategic goals and empower managers to achieve the goals. 9.3 Describe the Types of Responsibility Centers A responsibility accounting structure helps management evaluate the financial performance of the segments in the organization. Responsibility centers are segments within a responsibility accounting structure. The five types of responsibility centers are cost centers, discretionary cost centers, revenue centers, profit centers, and investment centers. Cost centers are responsibility centers that focus only on expenses. Discretionary cost centers are responsibility centers that focus only on controllable expenses. Revenue centers are responsibility centers that focus on revenues. Profit centers are responsibility centers that focus on both revenues and expenses. Investment centers are responsibility centers that also consider the investments made by the responsibility center. Return on investment is a particular type of investment center structure that calculates a responsibility center’s profit percentage relative to the center’s investment. Residual income is a particular type of investment center structure that evaluates investments using a common cost of capital rate among all responsibility centers. Both calculations are sketched in the example following this summary. 9.4 Describe the Effects of Various Decisions on Performance Evaluation of Responsibility Centers Uncontrollable costs are costs that management or an organization has little or no ability to influence. Controllable costs are costs that managers or an organization can influence. Managers in a responsibility accounting structure should only be evaluated based on controllable costs. Businesses with segments that provide goods to other segments within the business often use a transfer pricing structure to record the transaction. The general transfer pricing model considers the opportunity costs involved in selling to internal rather than external customers. This method is difficult to implement, and businesses often choose other methods. The market price model uses market prices that would be used for external customers as the basis for internal transfers. The cost approach uses the company’s cost to make the product as the basis for establishing the transfer price. The negotiated model allows the selling and buying segments within the business to determine the transfer price. Transfer price arrangements are more difficult in international businesses because of complexities related to taxes, duties, and currency fluctuations.
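The return on investment and residual income measures named in the summary above are simple arithmetic, so a short sketch may help. The following Python snippet is an illustration only, not part of the text: the dollar amounts and the 10% cost of capital rate are hypothetical values chosen for the example.

# Illustrative sketch (hypothetical figures): evaluating an investment
# center with return on investment (ROI) and residual income (RI), as
# those measures are defined in the chapter summary.

def return_on_investment(segment_income, invested_assets):
    """ROI: the segment's profit as a percentage of its investment."""
    return segment_income / invested_assets

def residual_income(segment_income, invested_assets, cost_of_capital):
    """RI: segment income less a capital charge (investment x cost of capital rate)."""
    return segment_income - invested_assets * cost_of_capital

if __name__ == "__main__":
    income = 120_000   # hypothetical segment income for the year
    assets = 1_000_000 # hypothetical investment in the segment
    hurdle = 0.10      # company-wide cost of capital rate (10%), applied to all centers

    print(f"ROI: {return_on_investment(income, assets):.1%}")        # ROI: 12.0%
    print(f"RI:  ${residual_income(income, assets, hurdle):,.0f}")   # RI:  $20,000

A segment with a positive residual income covers the company-wide capital charge even if its ROI trails another segment's, which is why the summary describes residual income as using a common cost of capital rate among all responsibility centers.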
Chapter Outline 9.1 Differentiate between Centralized and Decentralized Management 9.2 Describe How Decision-Making Differs between Centralized and Decentralized Environments 9.3 Describe the Types of Responsibility Centers 9.4 Describe the Effects of Various Decisions on Performance Evaluation of Responsibility Centers Why It Matters Lauren is a good cook who can make delicious meals quickly, and she enjoys cooking tremendously. Several friends have suggested she consider opening a food truck. She is intrigued by this idea and decides to further explore the possibility. After several years of research and planning, Lauren opens her food truck and finds instant success. She is so busy that she decides to recruit several others to join her in her food truck business. While this is an exciting next step, she has some questions about expanding the food truck concept. In particular, she wants to know if she can grow the business while maintaining the level of quality in her food that has led to her success. Since the concept of multiple food trucks is similar to the franchising concept, Lauren reaches out to a good friend who is the founder of a franchise that now has 10 regional locations. Her friend shares with her the concepts of a decentralized business and responsibility accounting. Under this approach, her friend tells her, she will be able to allow the individual food truck owners to have autonomy over their food truck while achieving the broader goals of financial success and serving quality food.
[ { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "All businesses start with an idea . After putting the idea into action and forming the business , measuring the performance of the business is a crucial next step for the business owners . As the business begins operations , it is fairly easy for the entrepreneur to measure the performance because the owner is heavily involved in the daily activities and decisions of the business . As the business grows through increased sales volume , additional products and locations , and more employees , however , it becomes more complicated to measure the performance of the organization . <hl> Owners and managers must design organizational systems that allow for operational efficiency , performance measurement , and the achievement of organizational goals . <hl>", "hl_sentences": "Owners and managers must design organizational systems that allow for operational efficiency , performance measurement , and the achievement of organizational goals .", "question": { "cloze_format": "___ is not a common goal of an organization.", "normal_format": "Which of the following is not a common goal of an organization?", "question_choices": [ "operational efficiency", "being acquired by another business", "achieving strategic goals", "measuring financial performance" ], "question_id": "fs-idm351324208", "question_text": "Which of the following is not a common goal of an organization?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "a system that only measures profitability" }, "bloom": null, "hl_context": "It is important for those studying business ( and accounting , in particular ) to understand the concept of a management control system . <hl> A management control system is a structure within an organization that allows managers to establish , implement , and monitor progress toward the strategic goals of the organization . <hl>", "hl_sentences": "A management control system is a structure within an organization that allows managers to establish , implement , and monitor progress toward the strategic goals of the organization .", "question": { "cloze_format": "It does not describe a management control system that it (is) ___.", "normal_format": "Which of the following does not describe a management control system?", "question_choices": [ "establishes a company’s strategic goals", "implements a company’s strategic goals", "monitors a company’s strategic goals", "a system that only measures profitability" ], "question_id": "fs-idm354585712", "question_text": "Which of the following does not describe a management control system?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> In a centralized environment , the major decisions are made at the top by the CEO and then are carried out by everyone below the CEO . <hl> In a decentralized environment , the CEO sets the tone for the running of the organization and provides some decision-making guidelines , but the actual decisions for the day-to-day operations are made by the managers at the various levels of the organization . In other words , the essential difference between centralized and decentralized organizations involves decision-making . While no organization can be 100 % centralized or 100 % decentralized , organizations generally have a well-established structure that outlines the decision-making authority within the organization . 
<hl> Centralization is a business structure in which one individual makes the important decisions ( such as resource allocation ) and provides the primary strategic direction for the company . <hl> Most small businesses are centralized in that the owner makes all decisions regarding products , services , strategic direction , and most other significant areas . However , a business does not have to be small to be centralized . Apple is an example of a business with a centralized management structure . Within Apple , much of the decision-making responsibility lies with the Chief Executive Officer ( CEO ) Tim Cook , who assumed the leadership role within Apple following the death of Steve Jobs . Apple has long been viewed as an organization that maintains a high level of centralized control over the company ’ s strategic initiatives such as new product development , markets to operate in , and company acquisitions . Many businesses in rapidly changing technological environments have a centralized form of management structure . The decisions made by the lower level management are limited in a centralized environment .", "hl_sentences": "In a centralized environment , the major decisions are made at the top by the CEO and then are carried out by everyone below the CEO . Centralization is a business structure in which one individual makes the important decisions ( such as resource allocation ) and provides the primary strategic direction for the company .", "question": { "cloze_format": "In centralized organizations, primary decisions are made by ________.", "normal_format": "In centralized organizations, primary decisions are made by whom? ", "question_choices": [ "an individual at the top of the organization", "various managers throughout the organization", "outside consultants", "low-level management" ], "question_id": "fs-idm351268400", "question_text": "In centralized organizations, primary decisions are made by ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "quicker decisions and response time" }, "bloom": null, "hl_context": "<hl> Quick decision and response times — it is important for decisions to be made and implemented in a timely manner . <hl> <hl> In order to remain competitive , it is important for organizations to take advantage of opportunities that fit within the organization ’ s strategy . <hl> <hl> The advantages of centralized organizations include clarity in decision-making , streamlined implementation of policies and initiatives , and control over the strategic direction of the organization . <hl> The primary disadvantages of centralized organizations can include limited opportunities for employees to provide feedback and a higher risk of inflexibility .", "hl_sentences": "Quick decision and response times — it is important for decisions to be made and implemented in a timely manner . In order to remain competitive , it is important for organizations to take advantage of opportunities that fit within the organization ’ s strategy . 
The advantages of centralized organizations include clarity in decision-making , streamlined implementation of policies and initiatives , and control over the strategic direction of the organization .", "question": { "cloze_format": "A key advantage of a decentralized organization is ________.", "normal_format": "What is a key advantage of a decentralized organization?", "question_choices": [ "increased administrative costs", "quicker decisions and response time", "the ease of aligning segment and company goals", "duplication of efforts" ], "question_id": "fs-idm360587264", "question_text": "A key advantage of a decentralized organization is ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "New businesses , for example , are often centralized . When a business first opens , it is common for the owner ( s ) to be highly involved in the day-to-day operations . In addition , the small size of a new business allows the owner to have a high level of involvement in both the daily and the strategic decisions of the business . Daily decisions are ongoing , immediate decisions that must be made in order to effectively and efficiently meet the needs of the organization ’ s customers . <hl> Strategic decisions , on the other hand , are made fairly infrequently and involve long-term goals of the organization . <hl> Being actively involved in the business allows new business owners to gain experience in all aspects of the business so that they can get a sense of the patterns of the daily operations and the decisions that need to be made . For example , the owner can be involved in determining the number of workers needed to meet the day ’ s production goal . Having too many workers would be inefficient and require the company to incur unnecessary expenses . Having too few workers , on the other hand , may result in inferior quality of products , missed shipments , or lost sales .", "hl_sentences": "Strategic decisions , on the other hand , are made fairly infrequently and involve long-term goals of the organization .", "question": { "cloze_format": "Strategic decisions occur ________.", "normal_format": "How do strategic decisions occur?", "question_choices": [ "frequently and involve immediate decisions", "frequently and involve long-term decisions", "infrequently and involve long-term decisions", "infrequently and involve immediate decisions" ], "question_id": "fs-idm353530416", "question_text": "Strategic decisions occur ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "number of employees" }, "bloom": null, "hl_context": "Businesses are organized with the intention of creating efficiency and effectiveness in achieving organizational goals . To aid in this , larger businesses use segments , uniquely identifiable components of the business . A company often creates them because of the specific activities undertaken within a particular portion of the business . <hl> 4 Segments are often categorized within the organization based on the services provided ( i . e . , departments ) , products produced , or even by geographic region . <hl> The purpose of identifying distinguishable segments within an organization is to provide efficiency in decision-making and effectiveness in operational performance . 
4 In Building Blocks of Managerial Accounting , you learned that generally accepted accounting principles ( GAAP ) — also called accounting standards — provide official guidance to the accounting profession . Under the oversight of the Securities and Exchange Commission ( SEC ) , GAAP are created by the Financial Accounting Standards Board ( FASB ) . The official definition of segments as provided by FASB can be reviewed in ASC 280-10- 50 .", "hl_sentences": "4 Segments are often categorized within the organization based on the services provided ( i . e . , departments ) , products produced , or even by geographic region .", "question": { "cloze_format": "Segments are uniquely identifiable components of the business and can be categorized by all of the following except ________.", "normal_format": "Segments are uniquely identifiable components of the business and can NOT be categorized by what?", "question_choices": [ "products produced", "services provided", "geographical location", "number of employees" ], "question_id": "fs-idm348152800", "question_text": "Segments are uniquely identifiable components of the business and can be categorized by all of the following except ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Many organizations use an organizational chart to graphically represent the authority for decision-making and oversight . <hl> Organizational charts are similar in appearance to flowcharts . An organizational chart for a centralized organization is shown in Figure 9.2 . The middle tier represents position held by individuals or departments within the company . The lowest tier represents geographic locations in which the company operates . The lines connecting the boxes indicate the relationship among the segments and branch from the ultimate and decision-making authority . <hl> Organizational charts are typically arranged with the highest-ranking person ( or group ) listed at the top . <hl>", "hl_sentences": "Many organizations use an organizational chart to graphically represent the authority for decision-making and oversight . Organizational charts are typically arranged with the highest-ranking person ( or group ) listed at the top .", "question": { "cloze_format": "Organizational charts ________.", "normal_format": "Which of the following is correct about organizational charts?", "question_choices": [ "list the salaries of all employees", "outline the strategic goals of the organization", "show the structure of an organization", "help management measure financial performance" ], "question_id": "fs-idm346402432", "question_text": "Organizational charts ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "at the highest level of the organization and promoted downward" }, "bloom": null, "hl_context": "<hl> Establishing effective management control systems is important for organizations of all sizes . <hl> <hl> It is important for businesses to determine how they should structure the organization to ease decision-making and subsequent evaluation . <hl> <hl> First , levels of management within an organization help the organization form a structure that establishes levels of authority and roles within the organization . <hl> <hl> Lower-level management provides basic supervision and oversight for the operations of the organization . <hl> <hl> Mid-level management supervises and provides direction to lower-level management . 
<hl> <hl> Mid-level management often directs the various departments or divisions within the organization . <hl> <hl> Mid-level managers receive direction and are responsible for achieving the goals established by upper management . <hl> <hl> Upper management consists of the board of directors and chief executives charged with providing strategic guidance for the organization . <hl> <hl> Upper management has the ultimate authority within the organization and is accountable to the owners of the organization . <hl>", "hl_sentences": "Establishing effective management control systems is important for organizations of all sizes . It is important for businesses to determine how they should structure the organization to ease decision-making and subsequent evaluation . First , levels of management within an organization help the organization form a structure that establishes levels of authority and roles within the organization . Lower-level management provides basic supervision and oversight for the operations of the organization . Mid-level management supervises and provides direction to lower-level management . Mid-level management often directs the various departments or divisions within the organization . Mid-level managers receive direction and are responsible for achieving the goals established by upper management . Upper management consists of the board of directors and chief executives charged with providing strategic guidance for the organization . Upper management has the ultimate authority within the organization and is accountable to the owners of the organization .", "question": { "cloze_format": "In a centralized organization, the location where goals are established is ___ .", "normal_format": "In a centralized organization, where are goals established?", "question_choices": [ "at the lower level of the organization and promoted upward", "outside the organization based on best practices in the industry", "by each segment of the organization", "at the highest level of the organization and promoted downward" ], "question_id": "fs-idm353984576", "question_text": "In a centralized organization, where are goals established?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "<hl> Products and services to offer , prices to charge customers , markets in which to operate <hl> <hl> Personnel decisions such as hiring and compensation <hl> <hl> Facility and equipment purchases and upgrades <hl> <hl> Think It Through Determining the Best Structure Here are some examples of decisions that every business must make : <hl>", "hl_sentences": "Products and services to offer , prices to charge customers , markets in which to operate Personnel decisions such as hiring and compensation Facility and equipment purchases and upgrades Think It Through Determining the Best Structure Here are some examples of decisions that every business must make :", "question": { "cloze_format": "Managers in decentralized organizations make decisions relating to all of the following except ________.", "normal_format": "Managers in decentralized organizations make decisions relating to all of the following except which of the following?", "question_choices": [ "the company’s stock price", "equipment purchases", "personnel", "prices to charge customers" ], "question_id": "fs-idm350747920", "question_text": "Managers in decentralized organizations make decisions relating to all of the following except ________." 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "concentrated cost center" }, "bloom": null, "hl_context": "<hl> A cost center is an organizational segment in which a manager is held responsible only for costs . <hl> In these types of responsibility centers , there is a direct link between the costs incurred and the product or services produced . This link must be recognized by managers and properly structured within the responsibility accounting framework . This is not an easy task . There are several factors that organizations must consider when developing and using a responsibility accounting framework . <hl> Before discussing those factors , let ’ s explore the five types of responsibility centers : cost centers , discretionary cost centers , revenue centers , profit centers , and investment centers . <hl>", "hl_sentences": "A cost center is an organizational segment in which a manager is held responsible only for costs . Before discussing those factors , let ’ s explore the five types of responsibility centers : cost centers , discretionary cost centers , revenue centers , profit centers , and investment centers .", "question": { "cloze_format": "A ___ is not a type of responsibility center.", "normal_format": "Which of the following is not a type of responsibility center?", "question_choices": [ "concentrated cost center", "investment center", "profit center", "cost center" ], "question_id": "fs-idm362801104", "question_text": "Which of the following is not a type of responsibility center?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "The terminology changes slightly when we think about accountability relating to the financial performance of the segment . <hl> In a decentralized organization , the system of financial accountability for the various segments is administered through what is called responsibility accounting . <hl>", "hl_sentences": "In a decentralized organization , the system of financial accountability for the various segments is administered through what is called responsibility accounting .", "question": { "cloze_format": "A system that establishes financial accountability for operating segments within an organization is called ________.", "normal_format": "What is a system that establishes financial accountability for operating segments within an organization called?", "question_choices": [ "a financial statement", "an internal control system", "responsibility accounting", "centralization" ], "question_id": "fs-idm385680560", "question_text": "A system that establishes financial accountability for operating segments within an organization is called ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "profit center" }, "bloom": null, "hl_context": "<hl> A profit center is an organizational segment in which a manager is responsible for both revenues and costs ( such as a Starbucks store location ) . <hl> Of the responsibility centers explored so far , a profit center structure is the most complex because a manager must be well-versed in techniques to increase revenues , decrease expenses , and thereby increase profits while also meeting the strategic goals of the organization . <hl> A revenue center is an organizational segment in which a manager is held accountable only for revenues . <hl> As the name implies , the goal of a revenue center is to generate revenues for the business . 
In order to accomplish the goal of increasing revenues , the manager of a revenue center would focus on developing specific skillsets of the revenue center ’ s employees . The reservations group of Southwest Airlines is an example of a segment that may be structured as a revenue center . The employees should be well-trained in providing excellent customer service , handling customer complaints , and converting customer interactions into actual sales . As the financial performance of cost centers and discretionary cost centers is similar , so is the financial performance of a revenue center and a cost center . <hl> A cost center is an organizational segment in which a manager is held responsible only for costs . <hl> In these types of responsibility centers , there is a direct link between the costs incurred and the product or services produced . This link must be recognized by managers and properly structured within the responsibility accounting framework .", "hl_sentences": "A profit center is an organizational segment in which a manager is responsible for both revenues and costs ( such as a Starbucks store location ) . A revenue center is an organizational segment in which a manager is held accountable only for revenues . A cost center is an organizational segment in which a manager is held responsible only for costs .", "question": { "cloze_format": "A responsibility center in which managers are held accountable for both revenues and expenses is called a ________.", "normal_format": "What is a responsibility center in which managers are held accountable for revenues and expenses?", "question_choices": [ "discretionary cost center", "revenue center", "cost center", "profit center" ], "question_id": "fs-idm387158992", "question_text": "A responsibility center in which managers are held accountable for both revenues and expenses is called a ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "Residual income ( RI ) establishes a minimum level that all investments must attain in order to be accepted by management . This minimum acceptable level is defined as a dollar value and is applicable to all departments or segments of the business . <hl> Residual income is calculated by taking the segment income less the product of the investment value and cost of capital percentage . <hl> The formula is :", "hl_sentences": "Residual income is calculated by taking the segment income less the product of the investment value and cost of capital percentage .", "question": { "cloze_format": "A responsibility center structure that considers investments made by the operating segments by using a common cost of capital percentage is called ________.", "normal_format": "What a responsibility center structure that considers investments made by the operating segments by using a common cost of capital percentage is called?", "question_choices": [ "return on investment", "residual income", "a profit center", "a discretionary cost center" ], "question_id": "fs-idm383976064", "question_text": "A responsibility center structure that considers investments made by the operating segments by using a common cost of capital percentage is called ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "segment and company financial goals are congruent." }, "bloom": null, "hl_context": "Organizations must exercise care when establishing responsibility centers . 
In a responsibility accounting framework , decision-making authority is delegated to a specific manager or director of each segment . The manager or director will , in turn , be evaluated based on the financial performance of that segment or responsibility center . <hl> It is important , therefore , to establish a responsibility accounting framework that allows for an adequate and equitable evaluation of the financial performance of the responsibility center ( and , by default , the manager of the responsibility center ) as well as the attainment of the organization ’ s strategic goals . <hl> Often , businesses will use the segment structure to establish the responsibility accounting framework . <hl> You might think of segments and responsibility centers as two sides of the same coin : segments establish the structure for operational accountability whereas responsibility centers establish the structure for financial accountability . <hl> <hl> Both segments and responsibility centers ( which will likely be the same ) attempt to accomplish the same goal : ensure all sectors of the business achieve the organization ’ s strategic goals . <hl>", "hl_sentences": "It is important , therefore , to establish a responsibility accounting framework that allows for an adequate and equitable evaluation of the financial performance of the responsibility center ( and , by default , the manager of the responsibility center ) as well as the attainment of the organization ’ s strategic goals . You might think of segments and responsibility centers as two sides of the same coin : segments establish the structure for operational accountability whereas responsibility centers establish the structure for financial accountability . Both segments and responsibility centers ( which will likely be the same ) attempt to accomplish the same goal : ensure all sectors of the business achieve the organization ’ s strategic goals .", "question": { "cloze_format": "An important goal of a responsibility accounting framework is to help ensure ___.", "normal_format": "An important goal of a responsibility accounting framework is to help ensure which of the following?", "question_choices": [ "decision-making is made by the top executives.", "investments made by each segment are minimized.", "identification of operating segments that should be closed.", "segment and company financial goals are congruent." ], "question_id": "fs-idm372945376", "question_text": "An important goal of a responsibility accounting framework is to help ensure which of the following?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "Organizations incur various types of costs using decentralization and responsibility accounting , and they need to determine how the costs relate to particular segments of the organization within the responsibility accounting framework . One way to categorize costs is based on the level of autonomy the organization ( or responsibility center manager ) has over the costs . <hl> Controllable costs are costs that a company or manager can influence . <hl> Examples of controllable costs include the wages paid to employees of the company , the cost of training provided to employees , and the cost of maintaining buildings and equipment . As it relates to controllable costs , managers have a fair amount of discretion . 
While managers may choose to reduce controllable costs like the examples listed , the long-term implications of reducing certain controllable costs must be considered . For example , suppose a manager chooses to reduce the costs of maintaining buildings and equipment . While the manager would achieve the short-term goal of reducing expenses , it is important to also consider the long-term implications of those decisions . Often , deferring routine maintenance costs leads to a greater expense in the long-term because once the building or equipment ultimately needs repairs , the repairs will likely be more extensive , expensive , and time-consuming compared to investments in routine maintenance .", "hl_sentences": "Controllable costs are costs that a company or manager can influence .", "question": { "cloze_format": "Costs that a company or manager can influence are called ________.", "normal_format": "What are costs that a company or manager can influence called?", "question_choices": [ "discretionary costs", "fixed costs", "variable costs", "controllable costs" ], "question_id": "fs-idm368424800", "question_text": "Costs that a company or manager can influence are called ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "hourly rate of pay for the company’s purchasing manager" }, "bloom": null, "hl_context": "The goal of responsibility center accounting is to evaluate managers only on the decisions over which they have control . While many of the costs that managers will encounter are controllable , other costs are uncontrollable and originate from within the organization . Uncontrollable costs are those costs that the organization or manager has little or no ability to influence ( in the short-term , at least ) and therefore should not be incorporated into the analysis of either the manager or the segment ’ s performance . <hl> Examples of uncontrollable costs include the cost of electricity the company uses , the cost per gallon of fuel for a company ’ s delivery trucks , and the amount of real estate taxes charged by the municipalities in which the company operates . <hl> While there are some long-term ways that companies can influence these costs , the examples listed are generally considered uncontrollable .", "hl_sentences": "Examples of uncontrollable costs include the cost of electricity the company uses , the cost per gallon of fuel for a company ’ s delivery trucks , and the amount of real estate taxes charged by the municipalities in which the company operates .", "question": { "cloze_format": "An example of an uncontrollable cost would include all of the following except ________.", "normal_format": "Which of the following would not be included in an example of an uncontrollable cost?", "question_choices": [ "real estate taxes charged by the county in which the business operates", "per-gallon cost of fuel for the company’s delivery trucks", "hourly rate of pay for the company’s purchasing manager", "federal income tax rate paid by the company" ], "question_id": "fs-idm380126464", "question_text": "An example of an uncontrollable cost would include all of the following except ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> One category of uncontrollable costs is allocated costs . 
<hl> <hl> These are costs that are often allocated ( or charged ) to the segments within the organization based on some allocation formula or process , such as the costs of receiving support from corporate headquarters . <hl> These costs cannot be controlled by the responsibility center manager and thus should not be considered when that manager is being evaluated . Costs relevant to decision-making and financial performance evaluation will be further explored in Short-Term Decision-Making .", "hl_sentences": "One category of uncontrollable costs is allocated costs . These are costs that are often allocated ( or charged ) to the segments within the organization based on some allocation formula or process , such as the costs of receiving support from corporate headquarters .", "question": { "cloze_format": "Internal costs that are charged to the segments of a business are called ________.", "normal_format": "What are internal costs that are charged to the segments of a business?", "question_choices": [ "controllable costs", "variable costs", "fixed costs", "allocated costs" ], "question_id": "fs-idm370399824", "question_text": "Internal costs that are charged to the segments of a business are called ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "market-based approach" }, "bloom": null, "hl_context": "<hl> With the market price approach , the transfer price paid by the purchaser is the price the seller would use for an outside customer . <hl> <hl> Market-based prices are consistent with the responsibility accounting concepts of profit and investment centers , as managers of these units are evaluated based on purchasing and selling goods and services at market prices . <hl> Market-based transfer pricing is very common in a situation in which the seller is operating at full capacity . <hl> There are three primary transfer pricing approaches : market-based prices , cost-based prices , or negotiated prices . <hl>", "hl_sentences": "With the market price approach , the transfer price paid by the purchaser is the price the seller would use for an outside customer . Market-based prices are consistent with the responsibility accounting concepts of profit and investment centers , as managers of these units are evaluated based on purchasing and selling goods and services at market prices . There are three primary transfer pricing approaches : market-based prices , cost-based prices , or negotiated prices .", "question": { "cloze_format": "A transfer pricing arrangement that uses the price that would be charged to an external customer is a ________.", "normal_format": "What is a transfer pricing arrangement that uses the price that would be charged to an external customer?", "question_choices": [ "market-based approach", "negotiated approach", "cost approach", "decentralized approach" ], "question_id": "fs-idm387997760", "question_text": "A transfer pricing arrangement that uses the price that would be charged to an external customer is a ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> The three approaches to transfer pricing assume that the selling department has excess capacity to produce additional products to sell internally . <hl> What happens if the selling department does not have excess capacity — in other words , if the selling department can sell all that it produces to external customers ? 
If an internal department wants to purchase goods from the selling department , what would be an appropriate selling price ? In this case , the transfer price must take into consideration the opportunity cost of the contribution margin that would be lost from having to forego external sales in order to meet internal sales . <hl> Notice that the blending department has two categories of customers — external and internal . <hl> External customers purchase the soft drink mixtures and bottle the drinks under a different label , such as a store brand . Internally , the blending department “ sells ” the soft drink mixtures to the bottling department . Notice the “ sale ” by the blending department ( a positive amount ) and the “ purchase ” by the bottling department ( a negative amount ) net out to zero . This transaction does not impact the overall financial performance of the organization and allows the responsibility center managers to analyze the financial performance of the segment just as if these were transactions involving outside entities . Transfer pricing can affect goal congruence — alignment between the goals of the segment or responsibility center , or even an individual manager , with the strategic goals of the organization . Recall what you ’ ve learned regarding segments of the business . Often , segments will be arranged by the type of product produced or service offered . <hl> Segments often sell products to external customers . <hl> For example , assume a soft drink company has a segment — called the blending department — dedicated to producing various types of soft drinks . The company may have an external customer to which it sells unique soft drink flavors that the customer will bottle under a different brand name ( perhaps a store brand like Kroger or Meijer ) . The segment may also produce soft drinks for another segment within its own company — the bottling department , for example — for further processing and ultimate sale to external customers . <hl> When the internal transfer occurs between the blending segment and the bottling segment , the transaction will be structured as a sale for the blending segment and as a purchase for the bottling department . <hl> To facilitate the transaction , the company will establish a transfer price , even though the transaction is internal because each segment is responsible for its own profits and costs .", "hl_sentences": "The three approaches to transfer pricing assume that the selling department has excess capacity to produce additional products to sell internally . Notice that the blending department has two categories of customers — external and internal . Segments often sell products to external customers . 
When the internal transfer occurs between the blending segment and the bottling segment , the transaction will be structured as a sale for the blending segment and as a purchase for the bottling department .", "question": { "cloze_format": "A transfer pricing structure that considers the opportunity costs of selling to internal rather than external customers uses ________.", "normal_format": "What does a transfer pricing structure that considers the opportunity costs of selling to internal rather than external customers use?", "question_choices": [ "the cost approach", "the general transfer pricing approach", "the market-based approach", "the opportunity cost approach" ], "question_id": "fs-idm363178352", "question_text": "A transfer pricing structure that considers the opportunity costs of selling to internal rather than external customers uses ________." }, "references_are_paraphrase": 0 } ]
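The final questions above turn on the general transfer-pricing rule: the minimum acceptable transfer price is the selling segment's variable cost plus the opportunity cost of any contribution margin forgone on external sales. As a hedged illustration (the prices below are invented, and the capacity flag is a simplification of the excess-capacity discussion in the question contexts), a minimal sketch in Python:

# Illustrative sketch (hypothetical prices): the general transfer-pricing
# rule referenced in the questions above.

def min_transfer_price(variable_cost, market_price, has_excess_capacity):
    if has_excess_capacity:
        # No external sale is displaced, so there is no opportunity cost;
        # any price at or above variable cost benefits the seller.
        return variable_cost
    # At full capacity, each internal unit displaces an external sale,
    # so the forgone contribution margin is added back.
    opportunity_cost = market_price - variable_cost
    return variable_cost + opportunity_cost  # equals the external market price

print(min_transfer_price(4.00, 10.00, has_excess_capacity=True))   # 4.0
print(min_transfer_price(4.00, 10.00, has_excess_capacity=False))  # 10.0

At full capacity the rule collapses to the market price, which is consistent with the market-based approach described in the question contexts.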
9
9.1 Differentiate between Centralized and Decentralized Management All businesses start with an idea. After putting the idea into action and forming the business, measuring the performance of the business is a crucial next step for the business owners. As the business begins operations, it is fairly easy for the entrepreneur to measure the performance because the owner is heavily involved in the daily activities and decisions of the business. As the business grows through increased sales volume, additional products and locations, and more employees, however, it becomes more complicated to measure the performance of the organization. Owners and managers must design organizational systems that allow for operational efficiency, performance measurement, and the achievement of organizational goals. In this chapter, you will learn the difference between centralized and decentralized management and how that relates to decision-making. You will learn about responsibility accounting and the type of decision-making authority that may be granted through different responsibility centers. Finally, you will learn how certain types of decisions have differing effects, depending on the type of responsibility center. Management Control System It is important for those studying business (and accounting, in particular) to understand the concept of a management control system. A management control system is a structure within an organization that allows managers to establish, implement, and monitor progress toward the strategic goals of the organization. Establishing strategic goals within any organization is important. Strategic goals relate to all facets of the business, including which markets to operate in, what products and services to offer to customers, and how to recruit and retain a talented workforce. It is the responsibility of the organization’s management to establish strategic goals and to ensure that all activities of the business help meet these goals. Once an organization establishes its strategic goals, it must implement them. Implementing the strategic goals of the organization requires communicating those goals and providing plans that guide the work of those in the organization. The final component of a management control system is a set of mechanisms designed to monitor the activities of the organization and assess how well they are meeting the strategic goals. This aspect of the management control system includes the accounting system (both financial and managerial). Monitoring the performance of the organization allows management to repeat the activities that lead to good performance and to adjust activities that are not supporting the strategic goals. In addition, monitoring the activities of the organization provides feedback to management as to whether adjustments to the organization’s strategy are necessary. Establishing a management control system is very important to an organization. Organizations must continually evaluate ways to improve and remain competitive in an ever-changing market. This requires the organization to be both forward-looking (via strategic planning) and backward-looking (by evaluating what has occurred), constantly monitoring performance and making necessary adjustments.
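To make the establish-implement-monitor cycle concrete, here is a minimal sketch of the monitoring step. It is an illustration only: the goal names, targets, and actual figures are invented, and a real management control system would draw these values from the financial and managerial accounting systems mentioned above.

# Minimal sketch (invented goals and figures) of the monitoring step in a
# management control system: compare actual results to strategic goals and
# flag where management should adjust activities.

strategic_goals = {"revenue_growth": 0.08, "customer_retention": 0.90}
actual_results  = {"revenue_growth": 0.05, "customer_retention": 0.93}

for goal, target in strategic_goals.items():
    actual = actual_results[goal]
    status = "on track" if actual >= target else "needs adjustment"
    print(f"{goal}: target {target:.0%}, actual {actual:.0%} -> {status}")

In the terms of the passage that follows, single loop learning stops at this comparison of actuals to targets, while double loop learning would also ask whether the targets themselves are still the right goals.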
Concepts In Practice Double Loop Learning In the fall of 1977, Harvard professor Chris Argyris wrote an article entitled “Double Loop Learning in Organizations.” The article describes how organizations “learn,” defined by Argyris as “a process of detecting and correcting error.” 1 Argyris suggests there are two types of learning—single loop and double loop. 1 Chris Argyris, “Double Loop Learning in Organizations.” Harvard Business Review 55, no. 5 (1977): 115–116. Single loop learning is characterized as a system that evaluates the organization from the perspective of the organization’s present policies. The result of single loop learning is binary: the organization is either meeting or not meeting the company’s objectives. There is no further evaluation or additional information fed back into the management control system. Double loop learning, on the other hand, allows for a more comprehensive evaluation. In addition to evaluating whether or not the organization is meeting the current goals, double loop learning takes into consideration whether or not the current goals of the organization are relevant or should be adjusted in any way. That is, double loop learning requires organizations to evaluate the underlying assumptions that serve as the basis for establishing the current goals. Argyris’s introduction of double loop learning has had a significant impact on the study of management and organizations. The concept of double loop learning also highlights how accounting systems, both financial and managerial, play a vital role in helping the organization attain its strategic goals. Establishing effective management control systems is important for organizations of all sizes. It is important for businesses to determine how they should structure the organization to ease decision-making and subsequent evaluation. First, levels of management within an organization help the organization form a structure that establishes levels of authority and roles within the organization. Lower-level management provides basic supervision and oversight for the operations of the organization. Mid-level management supervises and provides direction to lower-level management. Mid-level management often directs the various departments or divisions within the organization. Mid-level managers receive direction and are responsible for achieving the goals established by upper management. Upper management consists of the board of directors and chief executives charged with providing strategic guidance for the organization. Upper management has the ultimate authority within the organization and is accountable to the owners of the organization. Once a company establishes its management levels, it must determine whether the business is set up as centralized or decentralized—opposite ends of a spectrum. Many businesses fall somewhere between the two ends. Understanding the structures of both centralized and decentralized organizations provides a foundation for understanding the variations in management accounting the organizations use. Ethical Considerations The Ethical Bakery Accountant Bakery accountant Keith Roberts worked at Archway & The Mother’s Cookie Company as the director of finance. According to the New York Times, Roberts found himself perplexed by some numbers: “he knew things had been bad—daily reports he had been monitoring for six months showed that cookie sales at the company had been dismal. But the financial data he was looking at showed much more robust sales.”
He could not figure out where the sales were coming from, and after researching the accounting records, he determined that the company was booking nonexistent sales. Why? Roberts reasoned that sham transactions allowed Archway, which was owned by a private-equity firm, Catterton Partners, to maintain access to badly needed money from its lender, Wachovia. Roberts played a major role in alerting Archway's auditing firm of the possibility of accounting fraud. When confronted about the deceptive accounting, Roberts's supervisor invoked a crucial period in the business as a rationale for the unorthodox accounting for sales. Roberts finally quit his job, and the accounting misstatements were brought to the attention of the bank and the auditors.
Centralized Organizations
Centralization is a business structure in which one individual makes the important decisions (such as resource allocation) and provides the primary strategic direction for the company. Most small businesses are centralized in that the owner makes all decisions regarding products, services, strategic direction, and most other significant areas. However, a business does not have to be small to be centralized. Apple is an example of a business with a centralized management structure. Within Apple, much of the decision-making responsibility lies with the Chief Executive Officer (CEO), Tim Cook, who assumed the leadership role within Apple following the death of Steve Jobs. Apple has long been viewed as an organization that maintains a high level of centralized control over the company's strategic initiatives, such as new product development, markets to operate in, and company acquisitions. Many businesses in rapidly changing technological environments have a centralized form of management structure. The decisions made by lower-level management are limited in a centralized environment.
The advantages of centralized organizations include clarity in decision-making, streamlined implementation of policies and initiatives, and control over the strategic direction of the organization. The primary disadvantages of centralized organizations can include limited opportunities for employees to provide feedback and a higher risk of inflexibility.
Decentralized Organizations
Decentralization is a business structure in which the decision-making is made at various levels of the organization. Typically, decentralized businesses are divided into smaller segments or groups in order to make it easier to measure the performance of the company and the individuals within each of the sub-groups.
Advantages of Decentralized Management
Many businesses operate in markets and industries that are highly competitive. In order to be successful, a company must work hard to develop strategic competitive advantages that distinguish the company from its peers. To accomplish this, the organizational structure must allow the organization to quickly adapt and take advantage of opportunities. Therefore, many organizations adopt a decentralized management structure in order to maintain a competitive advantage.
There are numerous advantages of decentralized management, such as:
Quick decision and response times—it is important for decisions to be made and implemented in a timely manner. In order to remain competitive, it is important for organizations to take advantage of opportunities that fit within the organization's strategy.
Better ability to expand company—it is important for organizations to constantly explore new opportunities to provide goods and services to their customers.
Skilled and/or specialized management—organizations must invest in developing highly skilled employees who are able to make sound decisions that help the organization achieve its goals.
Increased morale of employees—the success of an organization depends on its ability to obtain, develop, and retain highly motivated employees. Empowering employees to make decisions is one way to help increase employee morale.
Link between compensation and responsibility—promotional opportunities are often linked with a corresponding increase in compensation. In a decentralized organization, a compensation increase often corresponds to a commensurate increase in the responsibilities associated with learning new skills, increased decision-making authority, and supervision of other employees.
Better use of lower and middle management—many tasks must be performed in order to achieve success in an organization. Decentralized organizations often rely on lower and middle management to perform many of these tasks. This allows managers to gain valuable experience and expertise in different areas.
Disadvantages of Decentralized Management
While a decentralized organizational structure can be an advantage for many organizations, there are also disadvantages to this type of structure, including:
Coordination problems—it is important for an organization to be working toward a common goal. Because decision-making is delegated in a decentralized organization, it is often difficult to ensure that all segments of the company are working in a consistent manner to achieve the strategic goals of the organization.
Increased administrative costs due to duplication of efforts—because similar decisions need to be made and activities undertaken across all divisions of an organization, decentralized organizations are susceptible to duplicating efforts, which results in inefficiency and increased costs.
Incongruity in operations—when autonomy is dispersed throughout the organization, as is the case in decentralized organizations, division managers may be tempted to customize/alter the operations of the division in an effort to maximize efficiency and suit the best interest of the division. In this structure, it is important to ensure that the shortcuts taken by one division of the organization do not conflict with or disrupt the operations of another division within the organization.
Each department/division is often self-centered (its own fiefdom)—it is not uncommon for separate divisions within an organization to be measured on the performance of the division rather than of the entire company. In a decentralized organization, it is possible for division managers to prioritize divisional goals over organizational goals. Leaders of decentralized organizations should ensure the organization's goals remain the priority for all divisions to attain.
Significant, if not almost total, reliance on the divisional or department managers—because divisions within decentralized organizations have a high level of autonomy, the division may become operationally isolated from other divisions within the organization, focusing solely on the priorities of the division. If divisional or departmental managers do not have a wide breadth of experience or skills, the division may be at a disadvantage due to limited access to other expertise.
Concepts In Practice
Johnson & Johnson
Johnson & Johnson was founded in 1886. The first factory had 14 employees: eight women and six men. 2 Today, Johnson & Johnson employs over 125,000 associates and operates in over 60 countries. You may recognize some of Johnson & Johnson's products, which include Johnson's Baby Shampoo, Neutrogena, Band-Aid, Tylenol, Listerine, and Neosporin.
2 "Our Story." Johnson & Johnson. https://ourstory.jnj.com/timeline
William Weldon was Chief Executive Officer (CEO) of Johnson & Johnson from 2002 to 2012. Under Weldon's leadership, Johnson & Johnson operated under a decentralized structure. An interview Weldon gave on successfully operating a decentralized organization makes it clear that the key is the people within the organization. Weldon notes that to be successful, a decentralized organization must empower employees to innovate, develop expertise, and collaborate to achieve organizational goals.
Daily and Strategic Decision-Making
An underlying assumption is that businesses possess a single structure (either centralized or decentralized) at any given point. That is not necessarily the case. For example, businesses often add employees who specialize in the various needs of the organization. Over the life of an organization, it is not uncommon for businesses to demonstrate aspects of both centralization and decentralization.
New businesses, for example, are often centralized. When a business first opens, it is common for the owner(s) to be highly involved in the day-to-day operations. In addition, the small size of a new business allows the owner to have a high level of involvement in both the daily and the strategic decisions of the business. Daily decisions are ongoing, immediate decisions that must be made in order to effectively and efficiently meet the needs of the organization's customers. Strategic decisions, on the other hand, are made fairly infrequently and involve long-term goals of the organization.
Being actively involved in the business allows new business owners to gain experience in all aspects of the business so that they can get a sense of the patterns of the daily operations and the decisions that need to be made. For example, the owner can be involved in determining the number of workers needed to meet the day's production goal. Having too many workers would be inefficient and require the company to incur unnecessary expenses. Having too few workers, on the other hand, may result in inferior quality of products, missed shipments, or lost sales.
Additionally, an owner involved in daily operations has the opportunity to evaluate and, if necessary, alter any strategic goals that may impact the daily operations. Strategic goals relate to all facets of the business, including in which markets to operate, what products and services to offer to customers, how to recruit and retain a talented workforce, and many other aspects of the business. For an owner involved in daily operations, an example of a strategic decision is whether to pursue a cost leadership perspective. When pursuing a cost leadership perspective, companies undertake activities to eliminate costs in order to produce a product or provide a service that has a cost advantage compared to competing products or services.
While providing a high-quality good or service is important to a company pursuing a cost leadership perspective, the competitive advantage of the company is eliminating wasteful activities that add unnecessary costs, entering into strategic partnerships with suppliers and other companies, and focusing on activities that allow the organization to offer the good or service at a lower price than its competitors.
Being highly involved in both the daily and strategic decisions can be very beneficial as the business is established, but it is demanding on the business owner and, without adjustments, often cannot be sustained. As the business grows, management of a centralized organization faces a choice. Remaining highly involved in the daily decisions of the business results in a low level of involvement in the strategic decisions of the organization. While this may be effective in the short term, the risks associated with not establishing and adjusting long-term strategic goals increase. On the other hand, remaining highly involved in the strategic decisions of the business results in a low level of involvement in the daily decisions of the business. This, too, is risky because ineffectively managing daily business decisions may have long-term, negative consequences.
Ethical Considerations
Ethically Directed Strategic Management
Managers in some organizations follow legal and regulatory requirements to operate their business at the lowest level of acceptable behavior in their business environment in order to keep costs low; however, some stakeholders may expect more than the minimum level of ethics. Stakeholders of business organizations are now insisting on higher ethical standards from their organizations. Stakeholders are any group or individual who may be affected by the organization's business decisions. Organizations providing high-quality goods and services need to consider all of their stakeholders when developing a strategic decision-making process to direct the organization's strategic decisions.
Another alternative for growing businesses is to move toward a decentralized operating structure. The management of growing businesses with a decentralized structure has a low level of involvement in the daily decisions of the business. Instead, management in these businesses focuses on strategic decisions that impact the long-term success of the organization. The daily decisions are delegated to others, thereby allowing management to focus on developing, implementing, and monitoring the firm's performance with respect to the strategic goals of the business.
Think It Through
Centralized Structure at Procter & Gamble
The organizational chart shows the 10 product categories of Procter & Gamble. 3
3 "Company Strategy." Procter & Gamble. http://www.pginvestor.com/Company-Strategy/Index?KeyGenPage=208821
Review the different types of products that Procter & Gamble produces. Think of 2–3 instances where Procter & Gamble would adopt a centralized perspective in its operations. Why would this perspective be beneficial for Procter & Gamble? Don't forget to consider the ingredients used to make these products and how these products are sold to consumers.
9.2 Describe How Decision-Making Differs between Centralized and Decentralized Environments
Businesses are organized with the intention of creating efficiency and effectiveness in achieving organizational goals. To aid in this, larger businesses use segments, uniquely identifiable components of the business.
A company often creates them because of the specific activities undertaken within a particular portion of the business. 4 Segments are often categorized within the organization based on the services provided (i.e., departments), products produced, or even by geographic region. The purpose of identifying distinguishable segments within an organization is to provide efficiency in decision-making and effectiveness in operational performance.
4 In Building Blocks of Managerial Accounting, you learned that generally accepted accounting principles (GAAP)—also called accounting standards—provide official guidance to the accounting profession. Under the oversight of the Securities and Exchange Commission (SEC), GAAP are created by the Financial Accounting Standards Board (FASB). The official definition of segments as provided by FASB can be reviewed in ASC 280-10-50.
Organizational Charts
Many organizations use an organizational chart to graphically represent the authority for decision-making and oversight. Organizational charts are similar in appearance to flowcharts. An organizational chart for a centralized organization is shown in Figure 9.2. The middle tier represents positions held by individuals or departments within the company. The lowest tier represents geographic locations in which the company operates. The lines connecting the boxes indicate the relationships among the segments and branch from the ultimate decision-making authority. Organizational charts are typically arranged with the highest-ranking person (or group) listed at the top.
Notice the organization depicted in Figure 9.2 has segments based on departments as well as geographic regions. In addition, all lines connect directly to the president of the organization. This indicates that the president is responsible for the oversight and decision-making for the production and sales departments as well as the district (Northeast, Southwest, and Midwest) managers; essentially, the president has seven direct reports. In this centralized organizational structure, all decision-making responsibility resides with the president.
Figure 9.3 shows the same organization structured as a decentralized organization. Notice that the organization depicted in Figure 9.3 has the same segments, which represent departments and geographic regions. There are, however, noticeable differences between the centralized and decentralized structure. Instead of seven direct reports, the president now oversees five direct reports, three of which are based on geography—the Western, Southern, and Eastern regional managers. Notice, too, each regional manager is responsible for their respective production and sales departments. In this decentralized organization, all decision-making responsibility does not reside with the president; regional decisions are delegated to the three regional managers. Understand, however, that responsibility for achieving the organization's goals still ultimately resides with the company president.
In a centralized environment, the major decisions are made at the top by the CEO and then are carried out by everyone below the CEO. In a decentralized environment, the CEO sets the tone for the running of the organization and provides some decision-making guidelines, but the actual decisions for the day-to-day operations are made by the managers at the various levels of the organization. In other words, the essential difference between centralized and decentralized organizations involves decision-making.
While no organization can be 100% centralized or 100% decentralized, organizations generally have a well-established structure that outlines the decision-making authority within the organization.
Continuing Application
Centralized vs. Decentralized Management
Gearhead Outfitters was founded by Ted Herget in 1997 in a friend's living room in Jonesboro, AR. By 2003, the business moved to its downtown location. In 2006, a second Jonesboro location was opened. Over the next several years, the company's growth allowed for expansion to several different cities, miles and hours away. Eventually Little Rock, AR, Fayetteville, AR, Shreveport, LA, Springfield, MO, and Tulsa, OK became home to Gearhead branches.
With such growth, the company faced many management challenges. Would it be best for management to remain centralized, with decision-making coming from a single location, or should the process be decentralized, allowing local management the flexibility and autonomy to run individual locations? If local management is given autonomy to make its own decisions, will those decisions be in line with company goals or, perhaps, individual goals? How will management be evaluated? Will inventory management be a uniform process, or will people and the process have to adapt to accommodate differences in demand at each location?
These are just some of the hurdles that Gearhead needed to address. What are some other issues that Gearhead might have considered? Think in terms of inventory management, personnel, efficiencies, and leadership development. How could Gearhead have used decentralized management to grow and thrive? Conversely, what would the benefits of keeping all or some of the company's management decisions more centralized be?
How Does Decision-Making Differ in a Centralized versus a Decentralized Environment?
The CEO of a centralized organization will determine the direction of the company and determine how to get the company to its goals. The steps necessary to reach these goals are then passed along to the lower-level managers who carry out these steps and report back to the CEO. The CEO would then evaluate the results and incorporate any necessary operational changes. On the other hand, the CEO of a decentralized organization will determine the goals of the company and either pass along the goals to the divisional managers for them to determine how to reach these goals or work with the managers to determine the strategic plans and how to meet the goals laid out by those plans. The divisional managers will then meet with the managers below them to determine the best way to reach these goals. The lower-level managers are responsible for carrying out the plan and reporting their results to the manager above them. The higher-level managers will combine the results of several managers and evaluate those results before sending them to the divisional manager.
Think It Through
Determining the Best Structure
Here are some examples of decisions that every business must make:
Facility and equipment purchases and upgrades
Personnel decisions such as hiring and compensation
Products and services to offer, prices to charge customers, markets in which to operate
For each decision listed, identify and explain the best structure (centralized, decentralized, or both) for each of the following types of businesses:
Auto manufacturer with multiple production departments
Florist shop (with three part-time employees) owned by a local couple
Law firm with four attorneys
9.3 Describe the Types of Responsibility Centers
You've learned how segments are established within a business to improve decision-making and operational effectiveness and efficiency. In other words, segments allow management to establish a structure of operational accountability. The terminology changes slightly when we think about accountability relating to the financial performance of the segment. In a decentralized organization, the system of financial accountability for the various segments is administered through what is called responsibility accounting. Responsibility accounting is a basic component of accounting systems for many companies as their performance measurement process becomes more complex. The process involves assigning the responsibility of accounting for particular segments of the company to a specific individual or group. These segments are often structured as responsibility centers, in which designated supervisors or managers will have both the responsibility for the performance of the center and the authority to make decisions that affect the center.
Often, businesses will use the segment structure to establish the responsibility accounting framework. You might think of segments and responsibility centers as two sides of the same coin: segments establish the structure for operational accountability, whereas responsibility centers establish the structure for financial accountability. Both segments and responsibility centers (which will likely be the same) attempt to accomplish the same goal: ensure all sectors of the business achieve the organization's strategic goals. Before learning about the five types of responsibility centers in detail, it is important to understand the essence of responsibility accounting and responsibility centers.
Fundamentals of Responsibility Accounting and Responsibility Centers
Recall the discussion of management control systems. These systems allow management to establish, implement, monitor, and adjust the activities of the organization toward attainment of strategic goals. Responsibility accounting and the responsibility center framework focus on monitoring and adjusting activities based on financial performance. This framework allows management to gain valuable feedback relating to the financial performance of the organization and to identify any segment activity where adjustments are necessary.
Types of Responsibility Centers
Organizations must exercise care when establishing responsibility centers. In a responsibility accounting framework, decision-making authority is delegated to a specific manager or director of each segment. The manager or director will, in turn, be evaluated based on the financial performance of that segment or responsibility center.
It is important, therefore, to establish a responsibility accounting framework that allows for an adequate and equitable evaluation of the financial performance of the responsibility center (and, by default, the manager of the responsibility center) as well as the attainment of the organization's strategic goals. This is not an easy task. There are several factors that organizations must consider when developing and using a responsibility accounting framework. Before discussing those factors, let's explore the five types of responsibility centers: cost centers, discretionary cost centers, revenue centers, profit centers, and investment centers.
Cost Centers
A cost center is an organizational segment in which a manager is held responsible only for costs. In these types of responsibility centers, there is a direct link between the costs incurred and the products or services produced. This link must be recognized by managers and properly structured within the responsibility accounting framework.
An example of a cost center is the custodial department of a department store called Apparel World. On one hand, since the custodial department is structured as a cost center, the goal of the custodial department manager is to keep costs as low as possible, since this is the basis by which the manager will be evaluated by upper-level management. On the other hand, the custodial department manager, who is responsible for cleaning the store entrances, also wants to keep the store as clean as possible for the store's customers. If the store appears unclean and disorganized, customers will not continue to shop at the store. Therefore, the custodial department manager and upper-level management must work together to establish goals of the cost center (the custodial department, in this example) that satisfy the strategic goals of the business—maintaining a clean and organized store while minimizing the costs of managing the custodial department.
Figure 9.4 shows an example of what the cost center report might look like for the Apparel World custodial department. Let's use this report to explore how the department manager and upper-level management might review and use this information.
In total, in December, the custodial department incurred $980 more in actual expenses than budgeted (or expected) expenses. This represents a 5.2% increase over what was expected. Notice the terminology used to describe the financial information of the custodial department: the department "incurred $980 more of actual expenses," rather than the department "spent $980 more of actual expenses." Recall from Introduction to Financial Statements that financial statements are typically prepared using accrual accounting rather than cash accounting. Under accrual accounting, certain transactions are recorded regardless of when the cash is exchanged. Therefore, to say the custodial department "spent $19,725" or "spent $980 more for expenses" would technically be incorrect, since the cash may not have been spent.
The managers would then review each line item to determine what caused the $980 increase in expenses over what was expected. Keep in mind, the $980 represents the total overage from the budget, so it is possible that some expense accounts could have actually been below expectations. Unfortunately, that is not the case in the month of December because every line item, with the exception of department manager wages, exceeded the budgeted amount.
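The arithmetic behind a cost center report like Figure 9.4 is simple budget variance analysis: actual expense minus budgeted expense, expressed in dollars and as a percentage of budget. The following is a minimal sketch in Python. The report totals ($19,725 actual, $980 over budget) and the custodial wages (+$500) and cleaning supplies (+$155, 129.2%) overages come from the narrative above; the remaining line-item amounts are hypothetical stand-ins, derived only so the totals tie to the report.

```python
# Minimal budget variance sketch for a cost center report.
# Totals and the wages/supplies overages follow the Apparel World example;
# the other line-item amounts are hypothetical placeholders.

budget = {
    "custodial wages": 13_725,
    "department manager wages": 4_500,
    "cleaning equipment": 400,
    "cleaning supplies": 120,
}
actual = {
    "custodial wages": 14_225,
    "department manager wages": 4_500,
    "cleaning equipment": 725,
    "cleaning supplies": 275,
}

for item in budget:
    variance = actual[item] - budget[item]        # positive = over budget
    pct = variance / budget[item] * 100           # overage as a % of budget
    print(f"{item}: {variance:+,} ({pct:+.1f}%)")

total = sum(actual.values()) - sum(budget.values())
print(f"total: {total:+,} ({total / sum(budget.values()) * 100:+.1f}%)")  # +980 (+5.2%)
```

Running the sketch reproduces the figures discussed in the text: a $980 total overage, a 5.2% increase over budget, and a 129.2% overage in cleaning supplies.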
It was no surprise to management that the department manager's wages were exactly as expected. Even though the custodial department manager worked more hours in the month of December, the manager is a salaried employee, so the wages are the same regardless of the number of hours worked.
Upon further investigation, it was determined that in December, the town where the Apparel World store is located received an unusually high amount of snow. This had an impact on each of the expense amounts in the custodial department. Because of the need to shovel snow more often, some of the custodial staff had to work overtime to ensure customers could easily and safely enter the store. This led to an increase in custodial wages of $500 compared to the budgeted or expected amount, which was established based on the previous year, when snowfall in the area was closer to average.
The research conducted by management also identified that additional cleaning equipment (mop buckets, mops, and "wet floor" signs) was purchased. The increased snowfall also led to the purchase of more salt than usual for the sidewalks outside the store. Because it was important to promptly clean up the snow as well as the salt that was brought into the store on customers' shoes, additional equipment was purchased so that each entrance would have a mop and bucket. The custodial department manager decided this was the best course of action. Normally, the store uses a single mop and bucket to clean all entrances, but that would have taken more time and increased the risk of an accident. The increased application of salt partially explains the 129.2% (or $155) overage in the cleaning supplies expense account. Management has learned that the overage in this account was also caused by an increase in purchases of mop head replacements, floor cleaner, and paper towels.
After reviewing the December information and learning the causes of the increased expenses, the company determined that no corrective action was necessary going forward. The area received an unusually high level of snowfall that year, which was not something the custodial department manager could control. In fact, the upper-level managers praised the custodial department manager for taking action that was in the best interest of the store and its customers. The managers commented that they had received numerous compliments from customers regarding how easy and safe it was to enter the store compared to other local stores. The manager noted that, despite the increased snowfall, store sales were higher than expected and attributed much of the success to the work of the custodial department.
Discretionary Cost Centers
A discretionary cost center is similar to a cost center, with one distinguishing factor. A discretionary cost center is an organizational segment in which a manager is held responsible for controllable costs when there is not a well-defined relationship between the center's costs and its services or products. Examples include human resources and accounting departments. Human resources departments often establish policies that affect the entire organization. For instance, while a policy requiring all workers to have annual safety training for fires, injuries, and tornadoes is beneficial to the entire company, it is difficult to evaluate the human resources department manager's performance in relation to impacting the products or services the company provides.
As you might expect, reviewing the financial performance of a discretionary cost center is similar to the review of a cost center.
Revenue Centers
A revenue center is an organizational segment in which a manager is held accountable only for revenues. As the name implies, the goal of a revenue center is to generate revenues for the business. In order to accomplish the goal of increasing revenues, the manager of a revenue center would focus on developing specific skillsets of the revenue center's employees. The reservations group of Southwest Airlines is an example of a segment that may be structured as a revenue center. The employees should be well-trained in providing excellent customer service, handling customer complaints, and converting customer interactions into actual sales. Reviewing the financial performance of a revenue center is similar to reviewing that of a cost center, except that the focus is on revenues rather than costs.
Profit Centers
A profit center is an organizational segment in which a manager is responsible for both revenues and costs (such as a Starbucks store location). Of the responsibility centers explored so far, a profit center structure is the most complex because a manager must be well-versed in techniques to increase revenues, decrease expenses, and thereby increase profits while also meeting the strategic goals of the organization.
Let's return to the Apparel World department store. Figure 9.5 shows an example of what the profit center report might look like for the Apparel World children's clothing department. Just as with the cost center, let's walk through an analysis of the December children's clothing department profit center report.
Overall, the department's actual profit exceeded budgeted profit by $3,891, or 13.5%, compared to budgeted (or expected) profit. This increase was driven by a total revenue increase over budget of $29,200, or 19.8%. Recall from Building Blocks of Managerial Accounting that variable costs, unlike fixed costs, change in proportion to the level of activity in a business. Therefore, it should be no surprise that the expenses in the children's clothing department also increased. In fact, the expenses increased $25,309 (or 21.4%) versus the budgeted amount. The revenues of the department increased $29,200, while expenses increased $25,309, yielding an increase in profit of $3,891 over expectations.
The increase in revenue can be further analyzed. Because the store also sells accessories such as belts and socks, the children's clothing department tracks two revenue sources (also called streams)—clothing and accessories. Management was pleased to learn that clothing revenue exceeded expectations by $30,000, or 20.7%. Given the higher-than-usual level of snowfall in the area, this is an impressive increase, and the company can attribute a portion of the successful month to the employees of the custodial department, who worked extra hard to ensure customers could easily and safely enter the store.
The overall revenue of the department increased by $29,200. Since the clothing revenue increased by $30,000, the clothing accessories revenue stream must have experienced a decline. In fact, the accessories revenue dropped by 36.4%. While this is a large percentage, consider the fact that the actual value of the revenue decline was relatively minor—only $800 lower (as indicated by the negative amount) than expected.
This indicates the employees may not have encouraged customers to also purchase belts or socks with their clothing purchases. This is an opportunity for the department manager to remind employees to encourage customers to purchase accessories to complement their clothing purchases. Overall, the increase in revenue attained by the children's clothing department is a highlight for the store.
A review of the department's expenses shows increases in all expenses, except department manager wages and cost of accessories sold. When reviewing the profit center report, pay special attention to how the differences between the actual and budgeted expenses are calculated in this analysis. In the revenue section, a positive number indicates the revenue exceeded the budgeted amount, which means a favorable financial performance. In the expense section, a positive number indicates the expense exceeded the budgeted amount, which means an unfavorable financial performance.
As with the custodial department manager, the manager of the children's clothing department is also a salaried employee, so the wages do not change each month—the wages are a fixed cost for the department. Since the clothing accessories revenue declined, the cost of accessories sold also declined. The accessories expenses were $576 lower than expected. While this appears to be good news for the department, recall that clothing accessories revenue dropped by $800. Therefore, the department profit decreased by a net amount of $224 versus expectations ($800 revenue decline less a corresponding expense decrease of $576).
All other actual expenses were over budget, as indicated by the positive numbers. Remember, these are expenses, and in this analysis, they indicate unfavorable financial performance. It probably comes as no surprise that all of the expense overages are a result of the increased sales. Because of the increased sales, more associates were needed to cover each shift, and they worked more hours to cover the longer store hours, which caused wages to go over budget. The substantial increase in clothing revenue also caused the cost of clothing sold to increase proportionately. Similarly, the increased sales drove an increase in equipment/fixture repairs of $735 (or 253.4%) over budget due to repairs to cash registers and clothing racks. Because the store was open longer hours during the holiday season, the utilities expenses also exceeded budget by $275, or 44.4%.
Overall, the Apparel World department store management was pleased with the December financial performance of the children's clothing department. The department exceeded budgeted sales, which resulted in an increase in department profitability. The review also highlighted an area for improvement in the department—increasing accessory sales—which is easily corrected through additional training.
Notice that the review of the children's clothing department profit center report discussed differences measured in both dollars and percentages. When analyzing financial information, looking only at dollar values can be misleading. Displaying information as percentages—percentage of an entire amount or percentage change—standardizes the information and facilitates an easier and more accurate comparison, especially when dealing with segments (or companies) with vastly different sizes. Let's look at another scenario using Apparel World. The example so far has explored the financial performance review processes for a cost center and a profit center.
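The sign convention described above (positive revenue variances are favorable, positive expense variances are unfavorable) is easy to encode. Below is a minimal Python sketch using the December totals from the children's clothing department example ($29,200 revenue variance, $25,309 expense variance, $3,891 profit variance); the function name is illustrative, not from any accounting library.

```python
# Sketch of the profit center sign convention: a positive revenue variance
# is favorable (F), a positive expense variance is unfavorable (U).

def label_variance(variance: float, is_expense: bool = False) -> str:
    """Label a budget variance as favorable (F), unfavorable (U), or on budget."""
    if variance == 0:
        return "on budget"
    favorable = (variance < 0) if is_expense else (variance > 0)
    return "F" if favorable else "U"

# December totals for the children's clothing department (from the text).
revenue_variance = 29_200   # actual revenue minus budgeted revenue
expense_variance = 25_309   # actual expenses minus budgeted expenses
profit_variance = revenue_variance - expense_variance

print(f"revenue:  {revenue_variance:+,} ({label_variance(revenue_variance)})")
print(f"expenses: {expense_variance:+,} ({label_variance(expense_variance, is_expense=True)})")
print(f"profit:   {profit_variance:+,}")  # +3,891, matching the report
```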
Now assume that store management wants to compare two different profit centers—children's clothing and women's clothing. Figure 9.6 shows the December financial information for the children's clothing department, and Figure 9.7 shows the financial information for the women's clothing department.
Comparing the dollar differences in the two departments, notice that the children's clothing department is a smaller department, as measured by total revenue, than the women's clothing department. Now, let's compare the differences in the two departments by looking at the percentages. The children's clothing department financial information is shown in Figure 9.8, and the women's clothing department financial information is shown in Figure 9.9.
Does the comparison change when the dollar differences are shown as percentages? Which department was more effective at strengthening the store's financial position? Which department was more efficient with the December revenue? What other factors might the Apparel World management consider?
Adding percentages to the financial analysis allows managers to make more direct comparisons, in this case between separate departments. Simply reviewing the dollar differences can be misleading because of size differences between the departments being compared. The Women's Department added more value ($61,113) to the store's financial position, while the Children's Department was more efficient, converting 18.5% (or $0.185) of every dollar of revenue to profit.
Investment Centers
It is important for managers to continually invest in the business. Managers must choose investments that improve the value of the business by improving the customer experience, increasing customer loyalty, and, ultimately, increasing the value of the organization. A limitation of the centers explored so far—cost center, discretionary cost center, revenue center, and profit center—is that these structures do not account for the investments made by the various responsibility center managers. The final type of responsibility center—the investment center—takes into account and evaluates the investments made by the responsibility center managers. The goal of the investment center structure is to ensure that segment managers choose investments that add value and help the organization achieve its strategic goals. An investment center is an organizational segment (such as the northern region of Best Buy or the food trucks used in the Why It Matters opening case) in which a manager is accountable for profits (revenues minus expenses) and the invested capital used by the segment.
Concepts In Practice
Research and Development at Hershey's
As you know by now, financial statements tell users what has occurred in the past—the statements provide feedback value. Responsibility accounting is no exception—it is a system that measures the financial performance of what has already occurred and provides management with a measure of past events. Have you ever considered how companies measure the outcome of activities that have not yet occurred? As you've learned, many companies invest in research and development activities to determine how to improve existing products and to create entirely new products or processes. The Hershey Chocolate Company is one company that invests heavily in research and development. Hershey's has created an Advanced Technology & Foresight Lab, which looks for innovative ways to bring chocolate to the market.
Here are some of the innovative things that Hershey's has developed:
Sourcemap—an interactive, web-based tool to show consumers where the ingredients in their favorite Hershey's snack, such as the Hershey's Milk Chocolate with Almonds Bar, come from. There is also a video and short story for each point on the interactive map for more information.
SmartLabel—a scannable label on each Hershey's product that gives the user up-to-date ingredient, allergen, and other information.
3D Chocolate Printing—using a 3D printer, Hershey's has developed an innovative way to create customized chocolate candies. 5
5 Sue Gleiter. "Hershey Company Goes Futuristic with 3-D Printed Chocolates." PennLive. https://www.pennlive.com/food/index.ssf/2014/12/hersheys_3-d_chocolate.html
Measuring the financial success of innovations such as these is nearly impossible in the short run. However, in the long run, investments in product development help companies like Hershey's increase sales, reduce costs, gain market share, and remain competitive in the marketplace.
There are numerous methods used to evaluate the financial performance of investment centers. When discussing profit centers, we used the segment's profit or loss stated in dollars. Another method to evaluate segment financial performance involves using the profit margin percentage. The profit margin percentage is calculated by dividing the net profit (or loss) by net sales. This is a useful calculation to measure the organization's (or segment's) efficiency at converting revenue into profit (net income). While the dollar value of a segment's profit or loss is important, the advantage of using a percentage is that percentages allow for more direct comparisons of different-sized segments.
Let's return to the Apparel World example and look at the profit margin percentage for the children's and women's clothing departments. Figure 9.10 shows the December financial information for the children's clothing department, including the profit margin percentage. The actual profit margin percentage achieved by the children's clothing department was 18.5%, calculated by dividing the department profit of $32,647 by the total revenue of $176,400 ($32,647 / $176,400). The actual profit margin percentage was slightly lower than the expected percentage of 19.5% ($28,756 / $147,200).
To determine why the profit margin percentage slipped slightly compared to expectations, management could compare the actual revenue and expenses with the budgeted revenue and expenses using a vertical analysis, as shown in Financial Statement Analysis. Doing so would highlight the fact that the cost of clothing sold as a percentage of clothing revenue increased significantly compared to what was expected. Management would want to explore this further, looking at factors influencing both clothing revenue (sales prices and quantity) and the cost of the clothing (which may have increased).
Figure 9.11 shows the December financial information for the women's clothing department, including the profit margin percentage. The actual profit margin percentage of the women's clothing department was 14.6%, calculated by dividing the department profit of $61,113 by the total revenue of $417,280 ($61,113 / $417,280). The actual profit margin percentage was significantly lower than the expected percentage of 18.2% ($58,580 / $322,300).
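Since the profit margin percentage is simply profit divided by revenue, the comparison above can be reproduced in a few lines. A minimal Python sketch follows, using only the budgeted and actual figures quoted in the text for both departments.

```python
# Profit margin percentage = department profit / department revenue.
# Figures are the December budgeted and actual amounts quoted in the text.

departments = {
    # name: (actual_profit, actual_revenue, budget_profit, budget_revenue)
    "children's clothing": (32_647, 176_400, 28_756, 147_200),
    "women's clothing":    (61_113, 417_280, 58_580, 322_300),
}

for name, (profit, revenue, b_profit, b_revenue) in departments.items():
    actual_margin = profit / revenue * 100      # 18.5% and 14.6%
    budget_margin = b_profit / b_revenue * 100  # 19.5% and 18.2%
    print(f"{name}: actual {actual_margin:.1f}% vs. budget {budget_margin:.1f}%")
```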
As with the children's clothing department, a vertical analysis indicates the significant decrease from the budgeted profit margin percentage was a result of the cost of clothing sold. This would lead management to investigate possible causes that would have influenced the clothing revenue (sales prices and quantity), the cost of the clothing, or both.
Another method used to evaluate investment centers is called return on investment. Return on investment (ROI) is the department or segment's profit (or loss) divided by the investment base (ROI = Net Income / Investment Base). It is a measure of how effective the segment was at generating profit with a given level of investment. Another way to think about ROI is as a measure of leverage: the return on investment calculation measures how much profit the segment can realize per dollar invested.
Several points are in order regarding the definition of return on investment. In practice, the numerator (segment profit or loss) may have different names, depending upon the terms used by the organization. Some organizations may call this value net income (or loss) or operating income (or loss). These terms relate to the financial performance of the segment, and each organization decides how best to identify and quantify financial performance.
Another significant point in the definition of return on investment relates to the denominator (investment base). There is no uniform definition of "investment base" within the accounting/finance profession. Some organizations define investment base as operating assets, while others define the investment base as average operating assets. Other organizations use the book value of assets, and still others use the historical or even replacement cost of assets. There are valid arguments for all of these definitions for investment base. It is important not to be confused by these variations but instead to know the definition in a particular context and to use it consistently. For our purposes, the denominator in the return on investment formula will be "investment base," and the value will be provided.
Finally, you may recall from Long-Term Assets that accountants carefully consider where to place certain costs (either on the balance sheet as assets or on the income statement as expenses). While ROI typically deals with long-lived assets such as buildings and equipment that are charged to the balance sheet, the ROI approach also applies to certain "investments" that are expensed. For instance, advertising costs are expensed. If a segment is considering an advertising campaign, management would assess the effectiveness of the campaign in a manner similar to a traditional ROI analysis of large, capitalized investments. That is, management would want to assess the additional revenue (or profit) derived from the advertising campaign (the numerator in the ROI calculation) compared to the investment or cost of the advertising campaign (the denominator in the ROI calculation). To illustrate, suppose management was able to identify that an advertising campaign costing $2,500 brought in an additional $500 of profit. This would be a 20% return on investment ($500 / $2,500).
A return on investment analysis of an investment center begins with the same information as an analysis of a profit center. To explore return on investment, let's return to the December Apparel World profit center information analyzing the children's and women's clothing departments.
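Before applying ROI to the two departments, here is a minimal Python sketch of the calculation itself, verified against the advertising example just given ($500 of additional profit on a $2,500 campaign yields 20%). The function name is illustrative, not from any accounting library.

```python
# Minimal ROI sketch: profit (or loss) divided by the investment base.

def return_on_investment(profit: float, investment_base: float) -> float:
    """ROI expressed as a fraction, e.g., 0.20 for 20%."""
    return profit / investment_base

# Advertising example from the text: $500 additional profit, $2,500 cost.
roi = return_on_investment(profit=500, investment_base=2_500)
print(f"{roi:.1%}")  # 20.0%
```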
Assume that a smaller store in another location had the following profit for December:
Children's clothing department: $3,891
Women's clothing department: $2,533
Now assume that each department had an investment base of the following amounts:
Children's clothing department: $15,000
Women's clothing department: $65,000
To calculate the return on investment (ROI) for each department, divide the segment profit by the segment investment base. The ROI for each department is:
Children's clothing department: 25.9% ($3,891 / $15,000)
Women's clothing department: 3.9% ($2,533 / $65,000)
The children's clothing department contributed the most to the financial position of this Apparel World location ($3,891 vs. $2,533). In addition, the children's clothing department was able to better leverage every dollar invested into profit. Stated differently, for every dollar invested, the children's clothing department was able to realize $0.259 of profit, while the women's clothing department realized only $0.039 of profit for every dollar invested. It is also significant that the children's clothing department requires a smaller dollar value of investment. This conserves store resources (financial capital) and helps store management prioritize and efficiently allocate future resources. By investing in the children's clothing department, store management is able to invest a smaller dollar amount while achieving a higher rate of return (profitability) on that investment.
One of the criticisms of the ROI approach is that each segment evaluates potential investments only in relation to the individual segment's ROI. This may cause the individual segment manager to select only projects or activities that improve the individual segment's ROI and decline projects that improve the financial position of the overall company. Most often, segment managers are primarily evaluated based on the performance of the segment they manage, with only a small portion, if any, of their evaluation based on overall corporate performance. This means that the bonuses of a segment manager are largely dependent on how the segment performs, or in other words, on the decisions made by that segment manager. A manager may choose to forgo a project or activity because it will lower the segment's ROI even though the project would benefit the entire company. ROI and the many implications of its use are explained further and demonstrated in Balanced Scorecard and Other Performance Measures.
The final investment center evaluation method, residual income (RI), structures the investment selection process to incentivize segment managers to select projects that benefit the entire company, rather than only the specific segment.
Your Turn
Analyzing Historical Success
Companies want to be sure the investments they make are generating an acceptable return. Additionally, individual investors want to ensure they are receiving the highest financial return for the money they are investing. This article, published in the New York Times, listed Microsoft as one of the best investments since 1926 (based on a study by Hendrik Bessembinder). Based on stock market returns to investors, Microsoft ranked third, behind ExxonMobil and Apple. According to the article, "since 1986, it has had an annualized return of 25 percent." Other companies in the ranking included familiar company names such as General Electric (ranked #4), Walmart (ranked #10), McDonald's (#31), and Coca-Cola (#15). But does historical success ensure future success?
General Electric is listed in the article as the 4th highest-ranking company for creating wealth for investors. Conduct internet research to find out the condition of General Electric today. What do you think the future holds for General Electric? As the worldwide economy changes, General Electric seems to be struggling to evolve, and this issue potentially leaves the company with an uncertain future.
Residual income (RI) establishes a minimum level that all investments must attain in order to be accepted by management. This minimum acceptable level is defined as a dollar value and is applicable to all departments or segments of the business. Residual income is calculated by taking the segment income less the product of the investment value and the cost of capital percentage. The formula is:
Residual Income = Segment Income − (Investment × Cost of Capital)
As with the return on investment calculation, income can be defined as segment operating income (or loss) or segment profit (or loss). Some organizations may use different terms. In RI scenarios, the investment refers to a specific project the segment is considering. Investment, in RI calculations, should not be confused with the total investment base, which was used in the ROI calculation. Finally, the cost of capital, which is covered in Short-Term Decision-Making, refers to the rate at which the company raises (or earns) capital. Essentially, the cost of capital can be considered the same as the interest rate at which the company can borrow funds through a bank loan.
By establishing a standard cost of capital rate used by all segments of the company, the company is establishing a minimum investment level that all investment opportunities must achieve. For example, assume a company can borrow funds from a local bank at an interest rate of 10%. The company, then, does not want a segment accepting an investment opportunity that earns anything less than 10%. Therefore, the company will establish a threshold—the cost of capital percentage—that will be used to screen potential investments. At the same time, under the residual income structure, managers of the individual segments (also called responsibility centers) will be incentivized to undertake investments that benefit not only the segment but also the entire company.
Recall that the ROI of the children's clothing department was 25.9% ($3,891 profit / $15,000 investment). Under an ROI analysis, the manager of the children's clothing department would not accept an investment that earns less than 25.9% because the rate of return would be negatively impacted, even though the company may benefit. Under a residual income structure, managers would accept all investments with a positive residual income value because those investments would exceed the investment threshold established by the company.
Let's look at an example. Recall that the children's clothing department of Apparel World had an investment base of $15,000. Assume the cost of capital (understood as the rate on a bank loan) for Apparel World is 10%. This is the rate that Apparel World will also set as the rate it expects all responsibility centers to earn. Therefore, in the example, the expected amount of residual value—the profit goal, in a sense—for the children's clothing department is $1,500 ($15,000 investment base × 10% cost of capital). Management is pleased with the December performance of the children's clothing department because it earned a profit of $3,891, well in excess of the $1,500 goal.
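The residual income formula translates directly into code. The sketch below, in Python, uses the figures from the text: the $15,000 investment base, the 10% cost of capital, and the December profit of $3,891. The resulting $2,391 of residual income is derived here rather than quoted from the text, and the function name is illustrative.

```python
# Residual income = income - (investment x cost of capital).
# Function name is illustrative; figures come from the Apparel World example.

def residual_income(income: float, investment: float, cost_of_capital: float) -> float:
    """Income earned above the minimum required return on the investment."""
    return income - investment * cost_of_capital

# Children's clothing department, December.
threshold = 15_000 * 0.10                      # $1,500 minimum profit goal
ri = residual_income(income=3_891, investment=15_000, cost_of_capital=0.10)
print(f"profit goal: ${threshold:,.0f}; residual income: ${ri:,.0f}")  # $1,500; $2,391
```

The same function applied to the play area proposal discussed next ($5,001 of added income on a $50,000 investment at 10%) returns the $1 of residual income computed in the text.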
Now let's examine how the manager of the children's clothing department would evaluate a potential investment opportunity. Assume in December the manager had an opportunity to invest to upgrade the store by adding a supervised children's play area for children to use while parents shopped. The manager believes this enhancement might increase sales because parents could take their time shopping while knowing their children are safe and having fun. The upgrade would make the customer shopping experience more enjoyable for everyone.
The children's play area requires an investment of $50,000, and the expected increase in income as a result of the children's play area is $5,001. Because the Apparel World store has a cost of capital requirement of 10%, the manager would invest in the children's play area because the residual income on this investment would be positive. To be precise, the residual income is $1. Using the residual income formula, the residual income is $5,001 – ($50,000 × 10%) = $1.
While this is an exaggerated and oversimplified example, it is intended to highlight the fact that, as long as resources (funds) are available to invest, a responsibility center manager will (or should) accept projects that have a positive residual value. In this example, the children's clothing department would be in a better financial position by undertaking this project than if it rejected the project. The department earned $3,891 of profit in December but would have earned, based on the estimates, $3,892 if the department had added the children's play area. The benefit of a residual income approach is that all investments in all segments of the organization are evaluated using the same approach. Instead of having each segment select only investments that benefit only the segment, the residual income approach guides managers to select investments that benefit the entire organization.
9.4 Describe the Effects of Various Decisions on Performance Evaluation of Responsibility Centers
Organizations that use decentralization and responsibility accounting incur various types of costs, and they need to determine how those costs relate to particular segments of the organization within the responsibility accounting framework. One way to categorize costs is based on the level of autonomy the organization (or responsibility center manager) has over the costs. Controllable costs are costs that a company or manager can influence. Examples of controllable costs include the wages paid to employees of the company, the cost of training provided to employees, and the cost of maintaining buildings and equipment.
As it relates to controllable costs, managers have a fair amount of discretion. While managers may choose to reduce controllable costs like the examples listed, the long-term implications of reducing certain controllable costs must be considered. For example, suppose a manager chooses to reduce the costs of maintaining buildings and equipment. While the manager would achieve the short-term goal of reducing expenses, it is important to also consider the long-term implications of those decisions. Often, deferring routine maintenance costs leads to a greater expense in the long term because once the building or equipment ultimately needs repairs, the repairs will likely be more extensive, expensive, and time-consuming compared to investments in routine maintenance.
Think It Through

The Frequency of Maintenance

If you own your own vehicle, you may have been advised (maybe all too often) to have your vehicle maintained through routine oil changes, inspections, and other safety-related checks. With advancements in both car manufacturing and motor oil technology, the recommended mileage intervals between oil changes have increased significantly. If you ask some of your family members how often to change the oil in your vehicle, you might get a wide range of answers, including both time-based and mileage-based recommendations. It is not uncommon to hear that oil should be changed every three months or 3,000 miles. An article from the Edmunds.com website devoted to automobiles suggests that automobile manufacturers are extending the recommended intervals between oil changes to as much as 15,000 miles.

Do you know what the recommendation is for changing the oil in the vehicle you drive? Why do you think the recommendations have increased from the traditional 3,000 miles to longer intervals? How might a business apply these concepts to maintaining and upgrading equipment? If you were the accountant for a business, what factors would you recommend management consider when deciding how frequently to maintain equipment and how big a priority equipment maintenance should be?

The goal of responsibility center accounting is to evaluate managers only on the decisions over which they have control. While many of the costs that managers will encounter are controllable, other costs are uncontrollable. Uncontrollable costs are those costs that the organization or manager has little or no ability to influence (in the short term, at least) and therefore should not be incorporated into the analysis of either the manager’s or the segment’s performance. Examples of uncontrollable costs include the cost of electricity the company uses, the cost per gallon of fuel for a company’s delivery trucks, and the amount of real estate taxes charged by the municipalities in which the company operates. While there are some long-term ways that companies can influence these costs, the examples listed are generally considered uncontrollable.

One category of uncontrollable costs is allocated costs. These are costs that are often allocated (or charged) to the segments within the organization based on some allocation formula or process, such as the costs of receiving support from corporate headquarters. These costs cannot be controlled by the responsibility center manager and thus should not be considered when that manager is being evaluated. Costs relevant to decision-making and financial performance evaluation will be further explored in Short-Term Decision-Making.

Effects of Decisions on Performance Evaluation of Responsibility Centers

Suppose, as the manager of the maintenance department of a major airline, you become aware of a training session that is available to your mechanics. The disadvantages are that the training will require the mechanics to miss an entire week of work and the associated costs (travel, lodging, training session) are high. The advantage is that, as a result of the training, the time during which the planes are grounded for repairs will significantly decrease. What factors would influence your decision regarding whether or not to send mechanics to the training?
Considering the fact that each mechanic would miss an entire week of work, what factors would you consider in determining how many mechanics to send? Do these factors align with, or conflict with, what is best for the company or for you as the department manager? Is there a way to quantify the investment in the training compared with the benefit of quicker repairs for the airplanes?

Scenarios such as this are common for managers of the various responsibility centers: cost, discretionary cost, revenue, profit, and investment centers. Managers must be well-versed in using both financial and nonfinancial information to make decisions such as these in order to do what is best for the organization.

Ethical Considerations

Pro-Stakeholder Culture Opens Business Opportunities

The use of pro-stakeholder decision-making by managers in their responsibility centers allows managers to identify alternatives that are both profitable and consistent with stakeholders’ ethics-related demands. In an essay in Business Horizons, Michael Hitt and Jamie Collins explain that companies with a pro-stakeholder culture should better understand the multiple ethical demands of those stakeholders. They also argue that this understanding should “provide these firms with an advantage in recognizing economic opportunities associated with such concerns.” 6 The identification of these opportunities can make a manager’s decisions more profitable in the long run.

6 Michael Hitt and Jamie Collins, “Business Ethics, Strategic Decision Making, and Firm Performance,” Business Horizons 50, no. 5 (February 2007): 353–357.

Hitt and Collins go on to argue that “as products and services may be developed in response to consumers’ desires, stakeholders’ ethical expectations can, in fact, represent latent signals on emerging economic opportunities.” 7 Providing managers the ability to identify alternatives based upon stakeholders’ desires and demands gives them a broader decision-making platform that allows for decisions that are in the best interest of the organization.

7 Michael Hitt and Jamie Collins, “Business Ethics, Strategic Decision Making, and Firm Performance,” Business Horizons 50, no. 5 (February 2007): 353–357.

Often, one of the most challenging decisions a manager must make relates to transfer pricing, which is the pricing process put into place when one segment of a business “sells” goods to another segment of the same business. In order to understand the significance of transfer pricing, recall that the primary goal of a responsibility center manager is to manage costs and make decisions that contribute to the success of the company. In addition, the financial performance of the segment often impacts the manager’s compensation through bonuses and raises tied to that performance. Therefore, the decisions made by the manager will affect both the manager and the company.

Application of Transfer Pricing

Transfer pricing can affect goal congruence, the alignment between the goals of the segment or responsibility center (or even an individual manager) and the strategic goals of the organization. Recall what you’ve learned regarding segments of the business. Often, segments are arranged by the type of product produced or service offered. Segments often sell products to external customers. For example, assume a soft drink company has a segment, called the blending department, dedicated to producing various types of soft drinks.
The company may have an external customer to which it sells unique soft drink flavors that the customer bottles under a different brand name (perhaps a store brand like Kroger or Meijer). The segment may also produce soft drinks for another segment within its own company, such as the bottling department, for further processing and ultimate sale to external customers. When the internal transfer occurs between the blending segment and the bottling segment, the transaction will be structured as a sale for the blending segment and as a purchase for the bottling segment. Even though the transaction is internal, the company will establish a transfer price to facilitate it, because each segment is responsible for its own profits and costs.

Figure 9.12 shows a graphical representation of the transfer pricing structure for the soft drink company used in the example. Notice that the blending department has two categories of customers: external and internal. External customers purchase the soft drink mixtures and bottle the drinks under a different label, such as a store brand. Internally, the blending department “sells” the soft drink mixtures to the bottling department. Notice that the “sale” by the blending department (a positive amount) and the “purchase” by the bottling department (a negative amount) net out to zero. This transaction does not impact the overall financial performance of the organization, and it allows the responsibility center managers to analyze the financial performance of their segments just as if these were transactions involving outside entities.

What issues might this scenario cause as it relates to goal congruence, that is, meeting the goals of the corporation as a whole as well as the goals of the individual managers? In situations where the selling division, in this case the blending division, has excess capacity, meaning it can produce more than it currently sells, and ignoring goal congruence issues, the selling division would sell its products internally at variable cost. If there is no excess capacity, though, the opportunity cost of the contribution margin given up by taking internal sales instead of external sales would need to be considered. Let’s look at each of these general situations individually.

In the case of excess capacity, the selling division can produce the goods to sell internally with only variable costs increasing. Thus, it seems logical to make the transfer price the same as the variable cost, but is it? Reflecting back on the concept of responsibility centers, the idea is to give management decision-making authority and to evaluate and reward management based on how well they make decisions that lead to increased profitability for their segment. These managers are often rewarded with bonuses or other forms of compensation based on how well they reach certain profitability measures. Does selling goods at variable cost increase the profitability of the selling division? The answer, of course, is no. Thus, why would a manager who is rewarded based on profitability sell goods at variable cost? Obviously, the manager would prefer not to sell at variable cost and would rather sell the goods at some amount above variable cost, thus contributing to the segment’s profitability. What should the transfer price be? There are various options for choosing a transfer price.
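The zero net effect of an internal transfer at the company level can be illustrated with a small Python sketch. The quantity and the $4.00 transfer price here are hypothetical, chosen only for illustration.

```python
# Hypothetical internal transfer between the blending and bottling segments
units = 1_000
transfer_price = 4.00   # illustrative price per unit, not from the text

blending_sale = units * transfer_price        # recorded as a sale (+$4,000) by blending
bottling_purchase = -units * transfer_price   # recorded as a purchase (-$4,000) by bottling

# The two entries offset exactly, so overall company income is unchanged
print(blending_sale + bottling_purchase)      # 0.0
```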
Concepts In Practice

Transfer Pricing with Overseas Segments

An inherent assumption in transfer pricing is that the divisions of a company are located in the same country. While implementing a transfer pricing framework can be complex for a business located entirely in the United States, transfer pricing becomes even more complex when any of the divisions are located outside of the United States. Companies with overseas transactions involving transfer pricing must pay particular attention to compliance with taxes, foreign currency exchange rate fluctuations, and other regulations in the countries in which they operate. This can be expensive and difficult for companies to manage. While many firms use their own employees to manage the process, the Big Four accounting firms, for example, offer expertise in transfer pricing setup and regulatory compliance. This short video from Deloitte on transfer pricing provides more information about this valuable service that accountants provide.

Available Transfer Pricing Approaches

There are three primary transfer pricing approaches: market-based prices, cost-based prices, and negotiated prices.

Market Price Approach

With the market price approach, the transfer price paid by the purchaser is the price the seller would charge an outside customer. Market-based prices are consistent with the responsibility accounting concepts of profit and investment centers, as managers of these units are evaluated based on purchasing and selling goods and services at market prices. Market-based transfer pricing is very common when the seller is operating at full capacity.

The benefit of using a market price approach is that the company must stay familiar with market prices, which will likely occur naturally because the company also has outside sales. A potential disadvantage of this approach is that conflicts might arise when there are discrepancies between the current market price and the market price the company has set for transfers. Firms should decide at what point, and how frequently, to update the transfer prices used in a market approach.

For example, assume a company adopts a market approach for transfer prices. As time goes by, the market price will likely change, either increasing or decreasing. When this occurs, the firm must decide if and when to update its transfer prices. If current market prices are higher than the market price the company uses, the selling division will be happy because the price earned for intersegment (transfer) sales will increase while the inputs (costs) to provide the goods or services remain the same. An increase in the transfer price will, in turn, increase the profit margins of the selling division. The opposite is true for the purchasing division. If the transfer price increases to match the current market price, the purchasing division’s costs (cost of goods sold, in particular) will increase. Without a corresponding increase in the prices charged to its customers or an offset through cost reductions, the profit margins of the purchasing division will decrease. This situation could cause conflict between divisions within the same company, an unenviable position for management, as one manager is pleased with the transfer price and the other is not.
Both managers desire to improve the profits of their respective divisions, but in this situation, the purchasing division may feel it is giving up profits that are then being realized by the selling division, due to the increase in the market price of the goods and the use of a market-based transfer price.

Cost Approach

When the transfer price uses a cost approach, the price may be based on total variable cost, full cost, or a cost-plus scenario. In the variable cost scenario, as mentioned previously, the transfer of the goods would take place at the total of all variable costs incurred to produce the product. In a full-cost scenario, the goods would be transferred at the variable cost plus the fixed cost per unit associated with making that product. With a cost-plus transfer price, the goods would be transferred at either the variable cost or the full cost plus a predetermined markup percentage. For example, assume the variable cost to produce a product is $10 and the full cost is $12. If the company uses a cost-plus methodology with a 30% markup, the transfer price would be $13 ($10 × 130%) based on just the variable cost, or $15.60 ($12 × 130%) based on the full cost; a short sketch of these calculations follows the Negotiated Price Approach discussion below. When using the full cost as a basis for applying markup, it is important to understand that the cost structure may include costs that are irrelevant to establishing a transfer price (for example, costs unrelated to producing the actual product to be transferred, such as the fixed cost of the plant supervisor’s salary, which will exist whether the product is transferred internally or externally), and these may unnecessarily influence decisions.

The benefit of using a cost approach is that the company will invest effort in determining the actual costs involved in making a product or providing a service. The selling division should be able to justify to the purchasing division the cost that will be charged, which likely includes a profit margin, based on what the division would earn on a sale to an outside customer. At the same time, a deeper understanding of what drives the costs within a division provides an opportunity to identify activities that add unnecessary costs. Companies can, in turn, work to increase efficiency, eliminate unnecessary activity, and bring down costs. In essence, the selling division has to justify the costs it charges the purchasing division.

Negotiated Price Approach

Somewhere in between a transfer price based on cost and one based on market is a negotiated price approach, in which the company allows the buying segment and the selling segment to negotiate the transfer price. This is common in situations in which there is no external market. When an external price exists and is used as a starting point for establishing the transfer price, the organization must be aware of differences in specific costs between the source of the external price and its own organization. For example, a price from an external source may include a higher profit margin than the profit margin targeted by a company pursuing a cost leadership strategy. In this case, the external price should be reduced to account for such differences.
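Here is a minimal Python sketch of the cost-based variants described in the Cost Approach discussion above, using the $10 variable cost and $12 full cost example; the function name is illustrative.

```python
def cost_plus_price(base_cost, markup=0.0):
    """Transfer price as a cost base marked up by a percentage (e.g., 0.30 = 30%)."""
    return base_cost * (1 + markup)

variable_cost, full_cost = 10.00, 12.00
print(cost_plus_price(variable_cost))        # 10.00 -- variable cost transfer price
print(cost_plus_price(full_cost))            # 12.00 -- full cost transfer price
print(cost_plus_price(variable_cost, 0.30))  # 13.00 -- cost-plus on variable cost
print(cost_plus_price(full_cost, 0.30))      # 15.60 -- cost-plus on full cost
```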
One disadvantage of using a negotiated price system, however, is the possibility of creating a situation in which competition exists between a department and an outside vendor (as occurs when it is cheaper for a department to purchase from an outside vendor than from another department in the organization) or, worse yet, between departments of the same organization. It is paramount that, when selecting a transfer pricing methodology, the goals of the particular departments involved align with the overall strategic goals of the organization. A transfer pricing structure is not intended to facilitate competition between departments within the same company. Rather, a transfer pricing system should be viewed as a tool to help the company remain competitive in the marketplace and improve the company’s overall profit margin.

Other Transfer Pricing Issues

The three approaches to transfer pricing assume that the selling department has excess capacity to produce additional products to sell internally. What happens if the selling department does not have excess capacity, in other words, if the selling department can sell all that it produces to external customers? If an internal department wants to purchase goods from the selling department, what would be an appropriate selling price? In this case, the transfer price must take into consideration the opportunity cost of the contribution margin that would be lost from having to forgo external sales in order to meet internal sales.

Suppose in the previous example that the blending department is at full capacity but the bottling department wants to purchase some of its soft drink blends internally. Assume the variable cost to produce one unit of soft drink is $10, the fixed cost per unit is $2, and the market price for selling one unit is $18. What would be an appropriate transfer price in this situation? Since the blending department does not have the capacity to meet external sales plus internal sales, accepting the internal sales order would mean losing sales to external customers. The contribution margin per unit is $8 ($18 − $10). Thus, $8 per unit would be given up for each external unit that is sold internally instead. Looking only at costs, the blending department would be indifferent between an external sale at $18 and an internal sale at $18 ($10 variable cost + $8 contribution margin); a short sketch of this calculation follows this discussion. Obviously, there are other issues that need to be considered in these situations, such as the effect on external customers if demand cannot be met. Overall, if there is no excess capacity, the transfer price should take into consideration the opportunity cost lost by taking internal sales over external sales.

In addition to opportunity cost considerations, there are other transfer pricing issues. Recall that decentralized organizations delegate decision-making authority throughout the organization. A well-designed transfer pricing policy can contribute not only to the segment manager’s profits but also to overall corporate profits, in situations where the transfer price is lower than the external price. However, when a transfer pricing system is used to facilitate transactions between departments, an ill-designed policy is likely to lead to disputes between departments. It is possible the departments will come to view each other as competition rather than as strategic partners.
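To tie the no-excess-capacity discussion together, here is a minimal Python sketch of the blending department calculation above; the variable names are illustrative.

```python
# Blending department example above: the selling division has no excess capacity
market_price = 18.00    # per unit, external sale
variable_cost = 10.00   # per unit

# Each internal unit displaces an external sale, so the forgone
# contribution margin becomes part of the minimum transfer price
contribution_margin = market_price - variable_cost          # $8.00 forgone per unit
min_transfer_price = variable_cost + contribution_margin    # $18.00, equal to market

print(f"Minimum acceptable transfer price: ${min_transfer_price:.2f}")
```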
When departments come to view each other as competitors, it is important for upper-level management to establish a process that allows managers to resolve disagreements in a way that aligns with organizational, rather than departmental, goals. Transfer pricing systems become even more complicated when departments are located in different countries. The transfer price in an international setting must also account for differences in currencies and fluctuating exchange rates, as well as differences in regulations such as tariffs and duties, taxes, and other rules.

Transfer Pricing Example

Regal Paper has two divisions. The Paper Division produces copy paper, wrapping paper, and paper used on the outside of cardboard displays placed in grocery, office, and department stores. The Box Division produces cardboard boxes sold at Christmas, cardboard boxes purchased by manufacturers for packaging their goods, and cardboard displays for stores, particularly seasonal displays. Both divisions are profit centers, and each manager is evaluated and rewarded based on the division’s profitability.

The Box Division has approached the Paper Division to buy paper needed to cover cardboard displays that have been ordered by several major snack food manufacturers for the upcoming Super Bowl game. The Box Division has been buying the display coverings from an external seller for $12.50 per unit. Currently, the Paper Division has excess capacity and can fill the order for the 500,000 display coverings that the Box Division is requesting. The cost to the Paper Division to produce one display covering is as follows:

Variable costs: $8 per unit
Fixed cost: $1 per unit
Full cost: $9 per unit

What would be the transfer price per unit under each of the following scenarios?

Market-based transfer price. This transfer price is the same as the selling price to external customers, which is $12.
Cost-based transfer price. This transfer price is the same as the variable cost per unit, which is $8.
Full-cost-based transfer price. This transfer price is the variable cost plus the fixed cost per unit, which is $9.
Cost-plus, assuming a 20% markup. This marks up the cost-based transfer price by 20%: $8 × 120%, or $9.60.
Full-cost-plus, assuming a 20% markup. This marks up the full-cost-based transfer price by 20%: $9 × 120%, or $10.80.
Range of negotiated transfer price. The negotiated transfer price should fall between the lowest and highest possible prices: $8–$12.

What if Paper had no excess capacity? If the Paper Division had no excess capacity, the transfer price would be the variable cost plus the lost contribution margin, which is $8 + $4, or $12.

Which transfer price is best? If there is excess capacity, then typically a negotiated transfer price is best, as it allows the managers who are evaluated on that decision to have input into the decision and does not take away their autonomy. Table 9.1 shows the per-unit effect on income of each of the transfer pricing options for each division. Remember, the effects provided here cannot necessarily be generalized, as there are two critical factors: whether or not the selling department is at capacity, and the price at which the purchasing department could buy the goods externally, which in this case is $0.50 more per unit than the market price of the paper being sold by the Paper Division.
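A minimal Python sketch computing each of the Regal Paper transfer prices above; the names are illustrative, and the per-unit figures come from the example.

```python
# Regal Paper per-unit figures from the example above
variable_cost = 8.00
fixed_cost = 1.00
full_cost = variable_cost + fixed_cost       # $9.00
market_price = 12.00                         # Paper Division's external selling price
markup = 0.20

transfer_prices = {
    "Market":             market_price,                                    # $12.00
    "Cost (variable)":    variable_cost,                                   # $8.00
    "Full cost":          full_cost,                                       # $9.00
    "Cost plus 20%":      round(variable_cost * (1 + markup), 2),          # $9.60
    "Full cost plus 20%": round(full_cost * (1 + markup), 2),              # $10.80
    "No excess capacity": variable_cost + (market_price - variable_cost),  # $12.00
}
for method, price in transfer_prices.items():
    print(f"{method}: ${price:.2f}")
```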
Per-Unit Effect on Division Income of Various Transfer Pricing Methodologies

Transfer Pricing Method | Paper Division | Box Division
Market | $4 per unit increase in income ($12 SP − $8 VC) | $0.50 per unit increase in income ($12.50 − $12 SP)
Cost | $0 per unit increase in income ($8 SP − $8 VC) | $4.50 per unit increase in income ($12.50 − $8)
Full Cost | $1 per unit increase in income ($9 SP − $8 VC) | $3.50 per unit increase in income ($12.50 − $9)
Cost Plus (20%) | $1.60 per unit increase in income ($9.60 SP − $8 VC) | $2.90 per unit increase in income ($12.50 − $9.60)
Full Cost Plus (20%) | $2.80 per unit increase in income ($10.80 SP − $8 VC) | $1.70 per unit increase in income ($12.50 − $10.80)
Negotiated ($8–$12) | Increase in income between $0 and $4 per unit | Increase in income between $0.50 and $4.50 per unit
No Excess Capacity | $4 per unit increase in income ($12 SP − $8 VC) | $0.50 per unit increase in income ($12.50 − $12 SP)
SP = selling price; VC = variable cost

Table 9.1

As you can see, the transfer price can significantly affect the profitability of each division. It is easy to see which transfer prices most benefit the seller and which most benefit the buyer. Thus, as previously mentioned, a negotiated transfer price is often the best resolution for determining a transfer price.

Think It Through

Comparing Transfer Pricing and Outsourcing

Assume you are the president of a manufacturing firm that has a division that transfers products to other divisions within the company. The other divisions have recently complained that the transfer price charged to the departments has increased significantly over the past several quarters. They are frustrated because performance evaluations and bonuses are linked to the profitability of their respective departments. During a recent management meeting, a cost accountant suggests the company can solve this issue by transferring production to another supplier that has a lower cost of production due to lower labor costs. In addition to solving the conflict between departments, the company’s overall profitability will increase because of the substantial cost savings. Evaluate this scenario and explain how you would respond as the company’s president. Consider the perspectives of the various stakeholders in this situation.
Business Law I Essentials
Chapter Outline
10.1 Administrative Law
10.2 Regulatory Agencies

Introduction

Learning Outcome
Define the role of administrative bodies and regulation in the governmental rulemaking process.
[ { "answer": { "ans_choice": 3, "ans_text": "Congress." }, "bloom": null, "hl_context": "<hl> Although administrative agencies are created by Congress , most administrative agencies are part of the executive branch of the government . <hl> The executive branch of government of the United States is headed by the president of the United States . Administrative agencies are created to enforce and administer laws , and the executive branch was created to oversee administrative agencies . Administrative agencies conduct exams and investigations of the entities they regulate . As a result of being part of the executive branch of government , the leaders of administrative agencies are generally appointed by the executive branch .", "hl_sentences": "Although administrative agencies are created by Congress , most administrative agencies are part of the executive branch of the government .", "question": { "cloze_format": "___ is valid defense under Title VII.", "normal_format": "Which of the following is valid defense under Title VII?", "question_choices": [ "The president.", "The judicial branch.", "The Constitution.", "Congress." ], "question_id": "fs-212325318356", "question_text": "Administrative agencies are created by:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "c" }, "bloom": null, "hl_context": "<hl> One well-known federal agency is the Food and Drug Administration ( FDA ) . <hl> The FDA was created to protect the public ’ s health . The agency ’ s responsibilities are very broad . The agency fulfills its role by ensuring the safety and effectiveness of drugs consumed by people and animals , biological products , medical devices , food , and cosmetics . Specifically , the FDA regulates the things that the public consumes , including supplements , infant formula , bottled water , food additives , eggs , some meat , and other food products . The FDA also regulates biological items and medical devices , including vaccines , cellular therapy products , surgical implants , and dental devices . This federal agency began in 1906 with the passing of the Pure Food and Drugs Act .", "hl_sentences": "One well-known federal agency is the Food and Drug Administration ( FDA ) .", "question": { "cloze_format": "The FDA stands for ___ .", "normal_format": "What does the FDA stand for?", "question_choices": [ "The First Drug Administration.", "The Federal Drug Administration.", "The Food and Drug Administration.", "The Food and Diet Administration." ], "question_id": "fs-212318356", "question_text": "The FDA stands for:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "The President." }, "bloom": null, "hl_context": "Although administrative agencies are created by Congress , most administrative agencies are part of the executive branch of the government . <hl> The executive branch of government of the United States is headed by the president of the United States . <hl> Administrative agencies are created to enforce and administer laws , and the executive branch was created to oversee administrative agencies . Administrative agencies conduct exams and investigations of the entities they regulate . <hl> As a result of being part of the executive branch of government , the leaders of administrative agencies are generally appointed by the executive branch . <hl>", "hl_sentences": "The executive branch of government of the United States is headed by the president of the United States . 
As a result of being part of the executive branch of government , the leaders of administrative agencies are generally appointed by the executive branch .", "question": { "cloze_format": "___ appoints leaders to run administrative agencies.", "normal_format": "Who appoints leaders to run administrative agencies?", "question_choices": [ "The President.", "Congress.", "The judges.", "None of these are correct." ], "question_id": "fs-2123253183560", "question_text": "Who appoints leaders to run administrative agencies?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "d" }, "bloom": null, "hl_context": "The power of administrative agencies comes from the executive branch of the government . Congress passes laws to carry out specific directives . The passing of these laws often creates a need for a government agency that will implement and carry out these laws . The government is not able to perform the work itself or manage the employees who will do the work . Instead , it creates agencies to do this . <hl> Assigning this authority to agencies is called delegation . <hl> The agencies have focus and expertise in their specific area of authority . However , it is important to note that Congress gives these agencies just enough power to fulfill their responsibilities .", "hl_sentences": "Assigning this authority to agencies is called delegation .", "question": { "cloze_format": "The process of assigning authority to administrative agencies is called ___ .", "normal_format": "What is called the process of assigning authority to administrative agencies?", "question_choices": [ "An assignment.", "A directive.", "A passing.", "A delegation." ], "question_id": "fs-21232538356", "question_text": "The process of assigning authority to administrative agencies is called:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "c" }, "bloom": null, "hl_context": "The FTC is a well-known agency and is organized into bureaus . Each bureau is focused on an agency goal . The three bureaus are consumer protection , competition , and economics . The Bureau of Consumer Protection focuses on unfair and deceptive business practices by encouraging consumers to voice complaints , investigate , and file lawsuits against companies . It also develops rules to maintain fair practices and educates consumers and businesses about rights and responsibilities . The Bureau of Competition focuses on antitrust laws and , by doing so , supports lower prices and choices for the consumer . <hl> And , lastly , the Bureau of Economics concentrates on consumer protection investigation , rulemaking , and the economic impact of government regulations on businesses and consumers . <hl>", "hl_sentences": "And , lastly , the Bureau of Economics concentrates on consumer protection investigation , rulemaking , and the economic impact of government regulations on businesses and consumers .", "question": { "cloze_format": "The Bureau of Economics concentrates on all but the ___ .", "normal_format": "Which of the following the Bureau of Economics does NOT concentrate on?", "question_choices": [ "Consumer protection investigation.", "Rulemaking.", "Lower prices for consumers.", "Economic impact of government regulation." ], "question_id": "fs-212325356", "question_text": "The Bureau of Economics concentrates on all but the following:" }, "references_are_paraphrase": 0 } ]
Chapter 10
10.1 Administrative Law

Administrative law is also referred to as regulatory and public law. It is the law that is related to administrative agencies. Administrative agencies are established by statutes and governed by rules, regulations, orders, and judicial decisions. Agencies are created by federal or state governments to carry out certain goals or purposes. Federal agencies are created by an act of Congress. Congress writes out a law called an organic statute that lays out the purpose and structure of the agency. The agency is charged with carrying out that purpose, as described by Congress. Organic statutes are utilized to create administrative agencies, as well as to define their responsibilities and authority.

Industrialization

Administrative agencies have been around almost since the founding of the United States. However, industrialization had a big impact on the development of administrative laws. As people moved from farms and rural areas to cities to find work and raise families, the economy changed and became more complex. As a result of this economic change, the government saw a need to expand its regulation to protect and support the public. In the 20th century, the number of agencies expanded very quickly with the addition of the Food and Drug Administration (FDA) to regulate food and medication, the Federal Trade Commission (FTC) to regulate trade, and the Federal Reserve System (FRS) to regulate banks. These are just a few of the agencies created to regulate industries. Ultimately, this expansion occurred in response to the complexity of the economy.

Everyday Impact

Administrative law impacts the public on a daily basis. Administrative law is essentially the delegated power granted to administrative agencies to carry out specific functions. Government agencies endeavor to protect the rights of citizens, corporations, and any other entity through administrative laws. Administrative agencies were developed to protect consumers and the community. As a result, they are present in all aspects of life, including medicine, food, environment, and trade.

One well-known federal agency is the Food and Drug Administration (FDA). The FDA was created to protect the public’s health. The agency’s responsibilities are very broad. The agency fulfills its role by ensuring the safety and effectiveness of drugs consumed by people and animals, biological products, medical devices, food, and cosmetics. Specifically, the FDA regulates the things that the public consumes, including supplements, infant formula, bottled water, food additives, eggs, some meat, and other food products. The FDA also regulates biological items and medical devices, including vaccines, cellular therapy products, surgical implants, and dental devices. This federal agency began in 1906 with the passing of the Pure Food and Drugs Act.

EpiPens are automatic injection devices that deliver lifesaving medication in the event of exposure to an allergen, such as a bee sting or peanuts. The United States faced a shortage of EpiPens, so in 2018, the FDA took action to address the issue. The FDA approved a four-month extension of the expiration dates on specific lots of EpiPens. This extension impacted both the public and the organization that produces EpiPens. In the same year, the FDA approved the first generic EpiPen. The new generic version will be produced by a pharmaceutical company that has not previously produced the EpiPen.
These two actions impact consumers by increasing the supply of lifesaving EpiPens.

Another well-known agency is the Federal Trade Commission (FTC). The FTC was formed in 1914 when President Woodrow Wilson signed the Federal Trade Commission Act into law. The goal of the agency is to protect the consumer, encourage business competition, and further the interests of consumers by encouraging innovation. The FTC works within the United States as well as internationally to protect consumers and encourage competition. The agency fulfills this role by developing policies, partnering with law enforcement to ensure consumer protection, and helping to ensure that markets are open and free. For instance, management and enforcement of the Do Not Call List is part of the FTC’s consumer protection goals.

The FTC protects consumers from unfair or misleading practices. Phone scams are a common issue. Scammers go to great lengths to trick the public into donating to false charities, providing personal information, or giving access to financial information. The FTC is aware of these issues and has put rules in place to punish scammers and educate the public. The FTC created a phone scammer reporting process to help collect information about scammers so that they can be prosecuted. The agency also collects information about scammers and creates educational materials for the public. These materials are designed to help consumers identify possible phone scammers, avoid their tactics, and report their activities.

A complete list of U.S. government agencies can be found at https://www.usa.gov/federal-agencies/a.

10.2 Regulatory Agencies

The power of administrative agencies comes from the executive branch of the government. Congress passes laws to carry out specific directives. The passing of these laws often creates a need for a government agency that will implement and carry out these laws. The government is not able to perform the work itself or manage the employees who will do the work. Instead, it creates agencies to do this. Assigning this authority to agencies is called delegation. The agencies have focus and expertise in their specific area of authority. However, it is important to note that Congress gives these agencies just enough power to fulfill their responsibilities.

Although administrative agencies are created by Congress, most administrative agencies are part of the executive branch of the government. The executive branch of the United States government is headed by the president of the United States. Administrative agencies are created to enforce and administer laws, and the executive branch was created to oversee administrative agencies. Administrative agencies conduct exams and investigations of the entities they regulate. As a result of being part of the executive branch of government, the leaders of administrative agencies are generally appointed by the executive branch.

Administrative agencies also have responsibilities that mirror those of the judicial branch of government. Administrative law judges (ALJs) have two primary duties. First, they oversee procedural aspects, like depositions of witnesses related to a case. They have the ability to review rules and statutes and review decisions related to their agencies. They also determine the facts and then make a judgment related to whether or not the agency’s rules were broken.
They act like a trial judge in a court, but their jurisdiction is limited to evaluating whether rules established by certain government agencies were violated. They can award money and other benefits and can punish those found guilty of violating the rules.

Federal Agencies

Well-known federal agencies include the Federal Bureau of Investigation (FBI), Environmental Protection Agency (EPA), Food and Drug Administration (FDA), Federal Trade Commission (FTC), Federal Election Commission (FEC), and the National Labor Relations Board (NLRB). These agencies were created to serve specific purposes. For instance, the FBI was created to investigate federal crimes. A federal crime is one that violates federal criminal law, rather than a state’s criminal law. The EPA was created to combine federal functions that were instituted to protect the environment. The NLRB was created to carry out the National Labor Relations Act of 1935.

The goal of federal agencies is to protect the public. The EPA was created in response to concerns about the dumping of toxic chemicals in waterways and about air pollution. It began when the Cuyahoga River in Ohio burst into flames without warning. President Richard Nixon presented a plan to reduce pollution from cars, end the dumping of pollutants into waterways, tax businesses for some environmentally unfriendly practices, and reduce pollution in other ways. The EPA was created by Congress in response to these environmental concerns and President Nixon’s plan. It is given the authority and responsibility to protect the environment from businesses, so that the people can enjoy a clean and safe environment.

As mentioned in the previous section, the Federal Trade Commission (FTC) was created to protect the consumer. It investigates and addresses activities that limit competition between businesses. The organization enforces antitrust laws that prevent one organization from restraining competition or seeking to maintain full control over a market. In December of 2000, the FTC ruled on the merger of America Online, Inc. (AOL) and Time Warner, Inc. The FTC decided that the joining of these two companies would limit the ability of other organizations to compete in the cable internet marketplace. The FTC ordered the merged company, AOL Time Warner, to take certain actions that permitted competitors to engage, including opening its system to competitors’ internet services and not interfering with the transmission signal being passed through the system. Doing so prevented the large company from shutting out its competitors. These are just a few examples of administrative agencies that were created to protect the community from business activities that could negatively impact the environment or the consumer.

Agency Structure

Administrative agencies are made up of experts, and they are trusted by Congress to identify the agency structure that best serves their specific goals. Thus, each agency is structured differently. The FTC is a well-known agency and is organized into bureaus. Each bureau is focused on an agency goal. The three bureaus are consumer protection, competition, and economics. The Bureau of Consumer Protection focuses on unfair and deceptive business practices by encouraging consumers to voice complaints, by investigating, and by filing lawsuits against companies. It also develops rules to maintain fair practices and educates consumers and businesses about rights and responsibilities.
The Bureau of Competition focuses on antitrust laws and, by doing so, supports lower prices and more choices for the consumer. Lastly, the Bureau of Economics concentrates on consumer protection investigation, rulemaking, and the economic impact of government regulations on businesses and consumers.

Administrative Procedure Act (APA)

These agencies are not unrestrained in their operations. First, there are due process requirements created in the Constitution: rules must be reasonable and based on facts. Second, rules cannot violate anyone’s constitutional rights or civil liberties. Third, there must be an opportunity for the public to voice its support, or lack of support, for a rule. In 1946, the Administrative Procedure Act (APA) was enacted. Under the APA, agencies must follow certain procedures to make their rules enforceable. The Act set up a full system for the execution of administrative law by administrative agencies of the federal government. Although agencies have power, they must still act within the structures in place, including the Constitution, their span of authority, statutory limitations, and other restrictions. The APA outlines the roles, powers, and procedures of agencies. It organizes administrative functions into rulemaking and adjudication.
Biology
Chapter Outline
31.1 Nutritional Requirements of Plants
31.2 The Soil
31.3 Nutritional Adaptations of Plants

Introduction

Cucurbitaceae is a family of plants first cultivated in Mesoamerica, although several species are native to North America. The family includes many edible species, such as squash and pumpkin, as well as inedible gourds. In order for a plant to grow and develop into a mature, fruit-bearing plant, many requirements must be met and events must be coordinated. Seeds must germinate under the right conditions in the soil; therefore, temperature, moisture, and soil quality are important factors that play a role in germination and seedling development. Soil quality and climate are significant to plant distribution and growth. The young seedling will eventually grow into a mature plant, and the roots will absorb nutrients and water from the soil. At the same time, the aboveground parts of the plant will absorb carbon dioxide from the atmosphere and use energy from sunlight to produce organic compounds through photosynthesis. This chapter will explore the complex dynamics between plants and soils, and the adaptations that plants have evolved to make better use of nutritional resources.
[ { "answer": { "ans_choice": 2, "ans_text": "The element is inorganic." }, "bloom": null, "hl_context": "Plants require only light , water and about 20 elements to support all their biochemical needs : these 20 elements are called essential nutrients ( Table 31.1 ) . <hl> For an element to be regarded as essential , three criteria are required : 1 ) a plant cannot complete its life cycle without the element ; 2 ) no other element can perform the function of the element ; and 3 ) the element is directly involved in plant nutrition . <hl>", "hl_sentences": "For an element to be regarded as essential , three criteria are required : 1 ) a plant cannot complete its life cycle without the element ; 2 ) no other element can perform the function of the element ; and 3 ) the element is directly involved in plant nutrition .", "question": { "cloze_format": "For an element to be regarded as essential, all of the following criteria must be met, except ___\n", "normal_format": "For an element to be regarded as essential, all of the following criteria must be met, except which one?", "question_choices": [ "No other element can perform the function.", "The element is directly involved in plant nutrition.", "The element is inorganic.", "The plant cannot complete its lifecycle without the element." ], "question_id": "fs-idp100874880", "question_text": "For an element to be regarded as essential, all of the following criteria must be met, except:" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "carbon" }, "bloom": null, "hl_context": "The essential elements can be divided into two groups : macronutrients and micronutrients . Nutrients that plants require in larger amounts are called macronutrients . About half of the essential elements are considered macronutrients : carbon , hydrogen , oxygen , nitrogen , phosphorus , potassium , calcium , magnesium and sulfur . <hl> The first of these macronutrients , carbon ( C ) , is required to form carbohydrates , proteins , nucleic acids , and many other compounds ; it is therefore present in all macromolecules . <hl> On average , the dry weight ( excluding water ) of a cell is 50 percent carbon . As shown in Figure 31.3 , carbon is a key part of plant biomolecules .", "hl_sentences": "The first of these macronutrients , carbon ( C ) , is required to form carbohydrates , proteins , nucleic acids , and many other compounds ; it is therefore present in all macromolecules .", "question": { "cloze_format": "The nutrient that is part of carbohydrates, proteins, and nucleic acids, and that forms biomolecules, is ________.", "normal_format": "Which is the nutrient that is part of carbohydrates, proteins, and nucleic acids, and that forms biomolecules?", "question_choices": [ "nitrogen", "carbon", "magnesium", "iron" ], "question_id": "fs-idp4252816", "question_text": "The nutrient that is part of carbohydrates, proteins, and nucleic acids, and that forms biomolecules, is ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "soil" }, "bloom": "4", "hl_context": "Since plants require nutrients in the form of elements such as carbon and potassium , it is important to understand the chemical composition of plants . The majority of volume in a plant cell is water ; it typically comprises 80 to 90 percent of the plant ’ s total weight . <hl> Soil is the water source for land plants , and can be an abundant source of water , even if it appears dry . 
<hl> Plant roots absorb water from the soil through root hairs and transport it up to the leaves through the xylem . As water vapor is lost from the leaves , the process of transpiration and the polarity of water molecules ( which enables them to form hydrogen bonds ) draws more water from the roots up through the plant to the leaves ( Figure 31.2 ) . Plants need water to support cell structure , for metabolic functions , to carry nutrients , and for photosynthesis .", "hl_sentences": "Soil is the water source for land plants , and can be an abundant source of water , even if it appears dry .", "question": { "cloze_format": "The main water source for land plants is ___.", "normal_format": "What is the main water source for land plants?", "question_choices": [ "rain", "soil", "biomolecules", "essential nutrients" ], "question_id": "fs-idm33567072", "question_text": "What is the main water source for land plants?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "all of the above" }, "bloom": "3", "hl_context": "<hl> Temperature , moisture , and wind cause different patterns of weathering and therefore affect soil characteristics . <hl> The presence of moisture and nutrients from weathering will also promote biological activity : a key component of a quality soil . Plants obtain inorganic elements from the soil , which serves as a natural medium for land plants . Soil is the outer loose layer that covers the surface of Earth . Soil quality is a major determinant , along with climate , of plant distribution and growth . <hl> Soil quality depends not only on the chemical composition of the soil , but also the topography ( regional surface features ) and the presence of living organisms . <hl> In agriculture , the history of the soil , such as the cultivating practices and previous crops , modify the characteristics and fertility of that soil .", "hl_sentences": "Temperature , moisture , and wind cause different patterns of weathering and therefore affect soil characteristics . Soil quality depends not only on the chemical composition of the soil , but also the topography ( regional surface features ) and the presence of living organisms .", "question": { "cloze_format": "The factor that affects soil quality is the ___.", "normal_format": "Which factors affect soil quality?", "question_choices": [ "chemical composition", "history of the soil", "presence of living organisms and topography", "all of the above" ], "question_id": "fs-idm8094880", "question_text": "Which factors affect soil quality?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "horizons : soil profile" }, "bloom": null, "hl_context": "<hl> Soils are named and classified based on their horizons . <hl> <hl> The soil profile has four distinct layers : 1 ) O horizon ; 2 ) A horizon ; 3 ) B horizon , or subsoil ; and 4 ) C horizon , or soil base ( Figure 31.6 ) . <hl> The O horizon has freshly decomposing organic matter — humus — at its surface , with decomposed vegetation at its base . Humus enriches the soil with nutrients and enhances soil moisture retention . Topsoil — the top layer of soil — is usually two to three inches deep , but this depth can vary considerably . For instance , river deltas like the Mississippi River delta have deep layers of topsoil . Topsoil is rich in organic material ; microbial processes occur there , and it is the “ workhorse ” of plant production . 
The A horizon consists of a mixture of organic material with inorganic products of weathering , and it is therefore the beginning of true mineral soil . This horizon is typically darkly colored because of the presence of organic matter . In this area , rainwater percolates through the soil and carries materials from the surface . The B horizon is an accumulation of mostly fine material that has moved downward , resulting in a dense layer in the soil . In some soils , the B horizon contains nodules or a layer of calcium carbonate . The C horizon , or soil base , includes the parent material , plus the organic and inorganic material that is broken down to form soil . The parent material may be either created in its natural place , or transported from elsewhere to its present location . Beneath the C horizon lies bedrock . <hl> Soil distribution is not homogenous because its formation results in the production of layers ; together , the vertical section of a soil is called the soil profile . <hl> <hl> Within the soil profile , soil scientists define zones called horizons . <hl> A horizon is a soil layer with distinct physical and chemical properties that differ from those of other layers . Five factors account for soil formation : parent material , climate , topography , biological factors , and time . Parent Material", "hl_sentences": "Soils are named and classified based on their horizons . The soil profile has four distinct layers : 1 ) O horizon ; 2 ) A horizon ; 3 ) B horizon , or subsoil ; and 4 ) C horizon , or soil base ( Figure 31.6 ) . Soil distribution is not homogenous because its formation results in the production of layers ; together , the vertical section of a soil is called the soil profile . Within the soil profile , soil scientists define zones called horizons .", "question": { "cloze_format": "A soil consists of layers called ________ that taken together are called a ________.", "normal_format": "What does a soil consist of layers called, and what are those layers taken together called?", "question_choices": [ "soil profiles : horizon", "horizons : soil profile", "horizons : humus", "humus : soil profile" ], "question_id": "fs-idp9669392", "question_text": "A soil consists of layers called ________ that taken together are called a ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "bedrock" }, "bloom": null, "hl_context": "The organic and inorganic material in which soils form is the parent material . <hl> Mineral soils form directly from the weathering of bedrock , the solid rock that lies beneath the soil , and therefore , they have a similar composition to the original rock . <hl> Other soils form in materials that came from elsewhere , such as sand and glacial drift . Materials located in the depth of the soil are relatively unchanged compared with the deposited material . Sediments in rivers may have different characteristics , depending on whether the stream moves quickly or slowly . 
A fast-moving river could have sediments of rocks and sand , whereas a slow-moving river could have fine-textured material , such as clay .", "hl_sentences": "Mineral soils form directly from the weathering of bedrock , the solid rock that lies beneath the soil , and therefore , they have a similar composition to the original rock .", "question": { "cloze_format": "The term used to describe the solid rock that lies beneath the soil is ___.", "normal_format": "What is the term used to describe the solid rock that lies beneath the soil?", "question_choices": [ "sand", "bedrock", "clay", "loam" ], "question_id": "fs-idp26322320", "question_text": "What is the term used to describe the solid rock that lies beneath the soil?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "nitrogen fixation" }, "bloom": "3", "hl_context": "Soil bacteria , collectively called rhizobia , symbiotically interact with legume roots to form specialized structures called nodules , in which nitrogen fixation takes place . This process entails the reduction of atmospheric nitrogen to ammonia , by means of the enzyme nitrogenase . Therefore , using rhizobia is a natural and environmentally friendly way to fertilize plants , as opposed to chemical fertilization that uses a nonrenewable resource , such as natural gas . <hl> Through symbiotic nitrogen fixation , the plant benefits from using an endless source of nitrogen from the atmosphere . <hl> The process simultaneously contributes to soil fertility because the plant root system leaves behind some of the biologically available nitrogen . As in any symbiosis , both organisms benefit from the interaction : the plant obtains ammonia , and bacteria obtain carbon compounds generated through photosynthesis , as well as a protected niche in which to grow ( Figure 31.10 ) . Nitrogen is an important macronutrient because it is part of nucleic acids and proteins . Atmospheric nitrogen , which is the diatomic molecule N 2 , or dinitrogen , is the largest pool of nitrogen in terrestrial ecosystems . <hl> However , plants cannot take advantage of this nitrogen because they do not have the necessary enzymes to convert it into biologically useful forms . <hl> <hl> However , nitrogen can be “ fixed , ” which means that it can be converted to ammonia ( NH 3 ) through biological , physical , or chemical processes . <hl> <hl> As you have learned , biological nitrogen fixation ( BNF ) is the conversion of atmospheric nitrogen ( N 2 ) into ammonia ( NH 3 ) , exclusively carried out by prokaryotes such as soil bacteria or cyanobacteria . <hl> Biological processes contribute 65 percent of the nitrogen used in agriculture . The following equation represents the process : Plant cells need essential substances , collectively called nutrients , to sustain life . Plant nutrients may be composed of either organic or inorganic compounds . An organic compound is a chemical compound that contains carbon , such as carbon dioxide obtained from the atmosphere . Carbon that was obtained from atmospheric CO2 composes the majority of the dry mass within most plants . An inorganic compound does not contain carbon and is not part of , or produced by , a living organism . <hl> Inorganic substances , which form the majority of the soil solution , are commonly called minerals : those required by plants include nitrogen ( N ) and potassium ( K ) for structure and regulation . 
<hl>", "hl_sentences": "Through symbiotic nitrogen fixation , the plant benefits from using an endless source of nitrogen from the atmosphere . However , plants cannot take advantage of this nitrogen because they do not have the necessary enzymes to convert it into biologically useful forms . However , nitrogen can be “ fixed , ” which means that it can be converted to ammonia ( NH 3 ) through biological , physical , or chemical processes . As you have learned , biological nitrogen fixation ( BNF ) is the conversion of atmospheric nitrogen ( N 2 ) into ammonia ( NH 3 ) , exclusively carried out by prokaryotes such as soil bacteria or cyanobacteria . Inorganic substances , which form the majority of the soil solution , are commonly called minerals : those required by plants include nitrogen ( N ) and potassium ( K ) for structure and regulation .", "question": { "cloze_format": "(The) ___ is a process that produces an inorganic compound that plants can easily use.", "normal_format": "Which process produces an inorganic compound that plants can easily use?", "question_choices": [ "photosynthesis", "nitrogen fixation", "mycorrhization", "Calvin cycle" ], "question_id": "fs-idp83226592", "question_text": "Which process produces an inorganic compound that plants can easily use?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "phosphorus, zinc, and copper" }, "bloom": null, "hl_context": "<hl> Through mycorrhization , the plant obtains mainly phosphate and other minerals , such as zinc and copper , from the soil . <hl> The fungus obtains nutrients , such as sugars , from the plant root ( Figure 31.11 ) . Mycorrhizae help increase the surface area of the plant root system because hyphae , which are narrow , can spread beyond the nutrient depletion zone . Hyphae can grow into small soil pores that allow access to phosphorus that would otherwise be unavailable to the plant . The beneficial effect on the plant is best observed in poor soils . The benefit to fungi is that they can obtain up to 20 percent of the total carbon accessed by plants . Mycorrhizae functions as a physical barrier to pathogens . It also provides an induction of generalized host defense mechanisms , and sometimes involves production of antibiotic compounds by the fungi . There are two types of mycorrhizae : ectomycorrhizae and endomycorrhizae . Ectomycorrhizae form an extensive dense sheath around the roots , called a mantle . Hyphae from the fungi extend from the mantle into the soil , which increases the surface area for water and mineral absorption . This type of mycorrhizae is found in forest trees , especially conifers , birches , and oaks . Endomycorrhizae , also called arbuscular mycorrhizae , do not form a dense sheath over the root . Instead , the fungal mycelium is embedded within the root tissue . 
Endomycorrhizae are found in the roots of more than 80 percent of terrestrial plants .", "hl_sentences": "Through mycorrhization , the plant obtains mainly phosphate and other minerals , such as zinc and copper , from the soil .", "question": { "cloze_format": "Through mycorrhization, a plant obtains important nutrients such as ________.", "normal_format": "Through mycorrhization, what important nutrients does a plant obtain?", "question_choices": [ "phosphorus, zinc, and copper", "phosphorus, zinc, and calcium", "nickel, calcium, and zinc", "all of the above" ], "question_id": "fs-idp41382544", "question_text": "Through mycorrhization, a plant obtains important nutrients such as ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "parasite" }, "bloom": "1", "hl_context": "<hl> A parasitic plant depends on its host for survival . <hl> Some parasitic plants have no leaves . An example of this is the dodder ( Figure 31.12 ) , which has a weak , cylindrical stem that coils around the host and forms suckers . From these suckers , cells invade the host stem and grow to connect with the vascular bundles of the host . <hl> The parasitic plant obtains water and nutrients through these connections . <hl> The plant is a total parasite ( a holoparasite ) because it is completely dependent on its host . Other parasitic plants ( hemiparasites ) are fully photosynthetic and only use the host for water and minerals . There are about 4,100 species of parasitic plants . Plants obtain food in two different ways . Autotrophic plants can make their own food from inorganic raw materials , such as carbon dioxide and water , through photosynthesis in the presence of sunlight . Green plants are included in this group . <hl> Some plants , however , are heterotrophic : they are totally parasitic and lacking in chlorophyll . <hl> <hl> These plants , referred to as holo-parasitic plants , are unable to synthesize organic carbon and draw all of their nutrients from the host plant . <hl>", "hl_sentences": "A parasitic plant depends on its host for survival . The parasitic plant obtains water and nutrients through these connections . Some plants , however , are heterotrophic : they are totally parasitic and lacking in chlorophyll . These plants , referred to as holo-parasitic plants , are unable to synthesize organic carbon and draw all of their nutrients from the host plant .", "question": { "cloze_format": "The term that describes a plant that requires nutrition from a living host plant is ___.", "normal_format": "What term describes a plant that requires nutrition from a living host plant?", "question_choices": [ "parasite", "saprophyte", "epiphyte", "insectivorous" ], "question_id": "fs-idm132740496", "question_text": "What term describes a plant that requires nutrition from a living host plant?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "lichen" }, "bloom": "1", "hl_context": "A symbiont is a plant in a symbiotic relationship , with special adaptations such as mycorrhizae or nodule formation . <hl> Fungi also form symbiotic associations with cyanobacteria and green algae ( called lichens ) . <hl> Lichens can sometimes be seen as colorful growths on the surface of rocks and trees ( Figure 31.14 ) . The algal partner ( phycobiont ) makes food autotrophically , some of which it shares with the fungus ; the fungal partner ( mycobiont ) absorbs water and minerals from the environment , which are made available to the green alga . 
If one partner was separated from the other , they would both die .", "hl_sentences": "Fungi also form symbiotic associations with cyanobacteria and green algae ( called lichens ) .", "question": { "cloze_format": "___ is the term for the symbiotic association between fungi and cyanobacteria.", "normal_format": "What is the term for the symbiotic association between fungi and cyanobacteria?", "question_choices": [ "lichen", "mycorrhizae", "epiphyte", "nitrogen-fixing nodule" ], "question_id": "fs-idm37456816", "question_text": "What is the term for the symbiotic association between fungi and cyanobacteria?" }, "references_are_paraphrase": null } ]
31
31.1 Nutritional Requirements of Plants

Learning Objectives
By the end of this section, you will be able to:
Describe how plants obtain nutrients
List the elements and compounds required for proper plant nutrition
Describe an essential nutrient

Plants are unique organisms that can absorb nutrients and water through their root system, as well as carbon dioxide from the atmosphere. Soil quality and climate are the major determinants of plant distribution and growth. The combination of soil nutrients, water, and carbon dioxide, along with sunlight, allows plants to grow.

The Chemical Composition of Plants
Since plants require nutrients in the form of elements such as carbon and potassium, it is important to understand the chemical composition of plants. The majority of volume in a plant cell is water; it typically comprises 80 to 90 percent of the plant's total weight. Soil is the water source for land plants, and can be an abundant source of water, even if it appears dry. Plant roots absorb water from the soil through root hairs and transport it up to the leaves through the xylem. As water vapor is lost from the leaves, the process of transpiration and the polarity of water molecules (which enables them to form hydrogen bonds) draw more water from the roots up through the plant to the leaves (Figure 31.2). Plants need water to support cell structure, for metabolic functions, to carry nutrients, and for photosynthesis.

Plant cells need essential substances, collectively called nutrients, to sustain life. Plant nutrients may be composed of either organic or inorganic compounds. An organic compound is a chemical compound that contains carbon, such as carbon dioxide obtained from the atmosphere. Carbon that was obtained from atmospheric CO2 composes the majority of the dry mass within most plants. An inorganic compound does not contain carbon and is not part of, or produced by, a living organism. Inorganic substances, which form the majority of the soil solution, are commonly called minerals: those required by plants include nitrogen (N) and potassium (K) for structure and regulation.

Essential Nutrients
Plants require only light, water, and about 20 elements to support all their biochemical needs: these 20 elements are called essential nutrients (Table 31.1). For an element to be regarded as essential, three criteria are required: 1) a plant cannot complete its life cycle without the element; 2) no other element can perform the function of the element; and 3) the element is directly involved in plant nutrition.

Table 31.1 Essential Elements for Plant Growth
Macronutrients: carbon (C), hydrogen (H), oxygen (O), nitrogen (N), phosphorus (P), potassium (K), calcium (Ca), magnesium (Mg), sulfur (S)
Micronutrients: iron (Fe), manganese (Mn), boron (B), molybdenum (Mo), copper (Cu), zinc (Zn), chlorine (Cl), nickel (Ni), cobalt (Co), sodium (Na), silicon (Si)

Macronutrients and Micronutrients
The essential elements can be divided into two groups: macronutrients and micronutrients. Nutrients that plants require in larger amounts are called macronutrients. About half of the essential elements are considered macronutrients: carbon, hydrogen, oxygen, nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. The first of these macronutrients, carbon (C), is required to form carbohydrates, proteins, nucleic acids, and many other compounds; it is therefore present in all macromolecules. On average, the dry weight (excluding water) of a cell is 50 percent carbon.
As shown in Figure 31.3, carbon is a key part of plant biomolecules.

The next most abundant element in plant cells is nitrogen (N); it is part of proteins and nucleic acids. Nitrogen is also used in the synthesis of some vitamins. Hydrogen and oxygen are macronutrients that are part of many organic compounds, and also form water. Oxygen is necessary for cellular respiration; plants use oxygen to store energy in the form of ATP. Phosphorus (P), another macronutrient, is necessary to synthesize nucleic acids and phospholipids. As part of ATP, phosphorus enables food energy to be converted into chemical energy through oxidative phosphorylation. Likewise, light energy is converted into chemical energy during photophosphorylation in photosynthesis, and into chemical energy to be extracted during respiration. Sulfur is part of certain amino acids, such as cysteine and methionine, and is present in several coenzymes. Sulfur also plays a role in photosynthesis as part of the electron transport chain, where hydrogen gradients play a key role in the conversion of light energy into ATP. Potassium (K) is important because of its role in regulating stomatal opening and closing. As the openings for gas exchange, stomata help maintain a healthy water balance; a potassium ion pump supports this process.

Magnesium (Mg) and calcium (Ca) are also important macronutrients. The role of calcium is twofold: to regulate nutrient transport, and to support many enzyme functions. Magnesium is important to the photosynthetic process. These minerals, along with the micronutrients, which are described below, also contribute to the plant's ionic balance.

In addition to macronutrients, organisms require various elements in small amounts. These micronutrients, or trace elements, are present in very small quantities. They include boron (B), chlorine (Cl), manganese (Mn), iron (Fe), zinc (Zn), copper (Cu), molybdenum (Mo), nickel (Ni), silicon (Si), and sodium (Na).

Deficiencies in any of these nutrients, particularly the macronutrients, can adversely affect plant growth (Figure 31.4). Depending on the specific nutrient, a lack can cause stunted growth, slow growth, or chlorosis (yellowing of the leaves). Extreme deficiencies may result in leaves showing signs of cell death.

Link to Learning
Visit this website to participate in an interactive experiment on plant nutrient deficiencies. You can adjust the amounts of N, P, K, Ca, Mg, and Fe that plants receive . . . and see what happens.

Everyday Connection: Hydroponics
Hydroponics is a method of growing plants in a water-nutrient solution instead of soil. Since its advent, hydroponics has developed into a growing process that researchers often use. Scientists who are interested in studying plant nutrient deficiencies can use hydroponics to study the effects of different nutrient combinations under strictly controlled conditions. Hydroponics has also developed as a way to grow flowers, vegetables, and other crops in greenhouse environments. You might find hydroponically grown produce at your local grocery store. Today, many lettuces and tomatoes in your market have been hydroponically grown.

31.2 The Soil

Learning Objectives
By the end of this section, you will be able to:
Describe how soils are formed
Explain soil composition
Describe a soil profile

Plants obtain inorganic elements from the soil, which serves as a natural medium for land plants. Soil is the outer loose layer that covers the surface of Earth.
Soil quality is a major determinant, along with climate, of plant distribution and growth. Soil quality depends not only on the chemical composition of the soil, but also the topography (regional surface features) and the presence of living organisms. In agriculture, the history of the soil, such as the cultivating practices and previous crops, modifies the characteristics and fertility of that soil.

Soil develops very slowly over long periods of time, and its formation results from natural and environmental forces acting on mineral, rock, and organic compounds. Soils can be divided into two groups: organic soils are those that are formed from sedimentation and primarily composed of organic matter, while those that are formed from the weathering of rocks and are primarily composed of inorganic material are called mineral soils. Mineral soils are predominant in terrestrial ecosystems, where soils may be covered by water for part of the year or exposed to the atmosphere.

Soil Composition
Soil consists of these major components (Figure 31.5):
inorganic mineral matter, about 40 to 45 percent of the soil volume
organic matter, about 5 percent of the soil volume
water and air, about 50 percent of the soil volume

The amount of each of the four major components of soil depends on the amount of vegetation, soil compaction, and water present in the soil. A good healthy soil has sufficient air, water, minerals, and organic material to promote and sustain plant life.

Visual Connection
Soil compaction can result when soil is compressed by heavy machinery or even foot traffic. How might this compaction change the soil composition?

The organic material of soil, called humus, is made up of microorganisms (dead and alive), and dead animals and plants in varying stages of decay. Humus improves soil structure and provides plants with water and minerals. The inorganic material of soil consists of rock, slowly broken down into smaller particles that vary in size. Soil particles that are 0.1 to 2 mm in diameter are sand. Soil particles between 0.002 and 0.1 mm are called silt, and even smaller particles, less than 0.002 mm in diameter, are called clay. Some soils have no dominant particle size and contain a mixture of sand, silt, and humus; these soils are called loams.
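The particle-size boundaries just given amount to a simple classification rule, so a short example may help. The following Python sketch is only an illustration, assuming diameters are already measured in millimeters; the function name and the "gravel/rock" label for out-of-range particles are invented for this example.

    def classify_soil_particle(diameter_mm):
        """Classify a soil particle by diameter, using the size
        ranges given above (all values in millimeters)."""
        if diameter_mm < 0.002:
            return "clay"          # smaller than 0.002 mm
        elif diameter_mm < 0.1:
            return "silt"          # 0.002 mm up to 0.1 mm
        elif diameter_mm <= 2.0:
            return "sand"          # 0.1 mm up to 2 mm
        else:
            return "gravel/rock"   # larger than the soil-particle range

    print(classify_soil_particle(0.05))  # -> silt

A full texture classification would combine the proportions of sand, silt, and clay in a whole sample rather than classify single particles, but the size cutoffs above are the starting point.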
Link to Learning
Explore this interactive map from the USDA's National Cooperative Soil Survey to access soil data for almost any region in the United States.

Soil Formation
Soil formation is the consequence of a combination of biological, physical, and chemical processes. Soil should ideally contain 50 percent solid material and 50 percent pore space. About one-half of the pore space should contain water, and the other half should contain air. The organic component of soil serves as a cementing agent, returns nutrients to the plant, allows soil to store moisture, makes soil tillable for farming, and provides energy for soil microorganisms. Most soil microorganisms (bacteria, algae, or fungi) are dormant in dry soil, but become active once moisture is available.

Soil distribution is not homogenous because its formation results in the production of layers; together, the vertical section of a soil is called the soil profile. Within the soil profile, soil scientists define zones called horizons. A horizon is a soil layer with distinct physical and chemical properties that differ from those of other layers. Five factors account for soil formation: parent material, climate, topography, biological factors, and time.

Parent Material
The organic and inorganic material in which soils form is the parent material. Mineral soils form directly from the weathering of bedrock, the solid rock that lies beneath the soil, and therefore, they have a similar composition to the original rock. Other soils form in materials that came from elsewhere, such as sand and glacial drift. Materials located in the depth of the soil are relatively unchanged compared with the deposited material. Sediments in rivers may have different characteristics, depending on whether the stream moves quickly or slowly. A fast-moving river could have sediments of rocks and sand, whereas a slow-moving river could have fine-textured material, such as clay.

Climate
Temperature, moisture, and wind cause different patterns of weathering and therefore affect soil characteristics. The presence of moisture and nutrients from weathering will also promote biological activity: a key component of a quality soil.

Topography
Regional surface features (familiarly called "the lay of the land") can have a major influence on the characteristics and fertility of a soil. Topography affects water runoff, which strips away parent material and affects plant growth. Steep soils are more prone to erosion and may be thinner than soils that are relatively flat or level.

Biological Factors
The presence of living organisms greatly affects soil formation and structure. Animals and microorganisms can produce pores and crevices, and plant roots can penetrate into crevices to produce more fragmentation. Plant secretions promote the development of microorganisms around the root, in an area known as the rhizosphere. Additionally, leaves and other material that fall from plants decompose and contribute to soil composition.

Time
Time is an important factor in soil formation because soils develop over long periods. Soil formation is a dynamic process. Materials are deposited over time, decompose, and transform into other materials that can be used by living organisms or deposited onto the surface of the soil.

Physical Properties of the Soil
Soils are named and classified based on their horizons. The soil profile has four distinct layers: 1) O horizon; 2) A horizon; 3) B horizon, or subsoil; and 4) C horizon, or soil base (Figure 31.6). The O horizon has freshly decomposing organic matter (humus) at its surface, with decomposed vegetation at its base. Humus enriches the soil with nutrients and enhances soil moisture retention. Topsoil, the top layer of soil, is usually two to three inches deep, but this depth can vary considerably. For instance, river deltas like the Mississippi River delta have deep layers of topsoil. Topsoil is rich in organic material; microbial processes occur there, and it is the "workhorse" of plant production. The A horizon consists of a mixture of organic material with inorganic products of weathering, and it is therefore the beginning of true mineral soil. This horizon is typically darkly colored because of the presence of organic matter. In this area, rainwater percolates through the soil and carries materials from the surface. The B horizon is an accumulation of mostly fine material that has moved downward, resulting in a dense layer in the soil. In some soils, the B horizon contains nodules or a layer of calcium carbonate. The C horizon, or soil base, includes the parent material, plus the organic and inorganic material that is broken down to form soil.
The parent material may be either created in its natural place, or transported from elsewhere to its present location. Beneath the C horizon lies bedrock.

Visual Connection
Which horizon is considered the topsoil, and which is considered the subsoil?

Some soils may have additional layers, or lack one of these layers. The thickness of the layers is also variable, and depends on the factors that influence soil formation. In general, immature soils may have O, A, and C horizons, whereas mature soils may display all of these, plus additional layers (Figure 31.7).

Career Connection: Soil Scientist
A soil scientist studies the biological components, physical and chemical properties, distribution, formation, and morphology of soils. Soil scientists need to have a strong background in physical and life sciences, plus a foundation in mathematics. They may work for federal or state agencies, academia, or the private sector. Their work may involve collecting data, carrying out research, interpreting results, inspecting soils, conducting soil surveys, and recommending soil management programs. Many soil scientists work both in an office and in the field. According to the United States Department of Agriculture (USDA): "a soil scientist needs good observation skills to analyze and determine the characteristics of different types of soils. Soil types are complex and the geographical areas a soil scientist may survey are varied. Aerial photos or various satellite images are often used to research the areas. Computer skills and geographic information systems (GIS) help the scientist to analyze the multiple facets of geomorphology, topography, vegetation, and climate to discover the patterns left on the landscape." 1 Soil scientists play a key role in understanding the soil's past, analyzing present conditions, and making recommendations for future soil-related practices.

1 Natural Resources Conservation Service / United States Department of Agriculture. "Careers in Soil Science." http://soils.usda.gov/education/facts/careers.html

31.3 Nutritional Adaptations of Plants

Learning Objectives
By the end of this section, you will be able to:
Understand the nutritional adaptations of plants
Describe mycorrhizae
Explain nitrogen fixation

Plants obtain food in two different ways. Autotrophic plants can make their own food from inorganic raw materials, such as carbon dioxide and water, through photosynthesis in the presence of sunlight. Green plants are included in this group. Some plants, however, are heterotrophic: they are totally parasitic and lacking in chlorophyll. These plants, referred to as holo-parasitic plants, are unable to synthesize organic carbon and draw all of their nutrients from the host plant.

Plants may also enlist the help of microbial partners in nutrient acquisition. Particular species of bacteria and fungi have evolved along with certain plants to create a mutualistic symbiotic relationship with roots. This improves the nutrition of both the plant and the microbe. The formation of nodules in legume plants and mycorrhization can be considered among the nutritional adaptations of plants. However, these are not the only types of adaptations that we may find; many plants have other adaptations that allow them to thrive under specific conditions.

Link to Learning
This video reviews basic concepts about photosynthesis. In the left panel, click each tab to select a topic for review.
Nitrogen Fixation: Root and Bacteria Interactions
Nitrogen is an important macronutrient because it is part of nucleic acids and proteins. Atmospheric nitrogen, which is the diatomic molecule N2, or dinitrogen, is the largest pool of nitrogen in terrestrial ecosystems. However, plants cannot take advantage of this nitrogen because they do not have the necessary enzymes to convert it into biologically useful forms. Nitrogen can, however, be "fixed," which means that it can be converted to ammonia (NH3) through biological, physical, or chemical processes. As you have learned, biological nitrogen fixation (BNF) is the conversion of atmospheric nitrogen (N2) into ammonia (NH3), exclusively carried out by prokaryotes such as soil bacteria or cyanobacteria. Biological processes contribute 65 percent of the nitrogen used in agriculture. The following equation represents the process:

N2 + 16 ATP + 8 e− + 8 H+ → 2 NH3 + 16 ADP + 16 Pi + H2
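Because stoichiometry like this is easy to mis-transcribe, a quick atom count is a handy check that the equation balances. The Python sketch below is only an illustration: it tallies nitrogen and hydrogen for the species that exchange those atoms, and it deliberately leaves out ATP, ADP, and Pi, whose 16 phosphoryl groups balance by inspection.

    # Atom counts for the species that exchange N and H in the equation.
    # Electrons carry no atoms; ATP/ADP/Pi are omitted (they balance 16:16).
    reactants = {"N": 2, "H": 8}           # N2 plus 8 H+
    products = {"N": 2, "H": 2 * 3 + 2}    # 2 NH3 plus 1 H2

    assert reactants == products, "equation is unbalanced"
    print("N and H balance:", reactants, "==", products)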
The most important source of BNF is the symbiotic interaction between soil bacteria and legume plants, including many crops important to humans (Figure 31.9). The NH3 resulting from fixation can be transported into plant tissue and incorporated into amino acids, which are then made into plant proteins. Some legume seeds, such as soybeans and peanuts, contain high levels of protein and are among the most important agricultural sources of protein in the world.

Visual Connection
Farmers often rotate corn (a cereal crop) and soy beans (a legume), planting a field with each crop in alternate seasons. What advantage might this crop rotation confer?

Soil bacteria, collectively called rhizobia, symbiotically interact with legume roots to form specialized structures called nodules, in which nitrogen fixation takes place. This process entails the reduction of atmospheric nitrogen to ammonia, by means of the enzyme nitrogenase. Therefore, using rhizobia is a natural and environmentally friendly way to fertilize plants, as opposed to chemical fertilization that uses a nonrenewable resource, such as natural gas. Through symbiotic nitrogen fixation, the plant benefits from using an endless source of nitrogen from the atmosphere. The process simultaneously contributes to soil fertility because the plant root system leaves behind some of the biologically available nitrogen. As in any symbiosis, both organisms benefit from the interaction: the plant obtains ammonia, and bacteria obtain carbon compounds generated through photosynthesis, as well as a protected niche in which to grow (Figure 31.10).

Mycorrhizae: The Symbiotic Relationship between Fungi and Roots
A nutrient depletion zone can develop when there is rapid soil solution uptake, low nutrient concentration, low diffusion rate, or low soil moisture. These conditions are very common; therefore, most plants rely on fungi to facilitate the uptake of minerals from the soil. Fungi form symbiotic associations called mycorrhizae with plant roots, in which the fungi actually are integrated into the physical structure of the root. The fungi colonize the living root tissue during active plant growth. Through mycorrhization, the plant obtains mainly phosphate and other minerals, such as zinc and copper, from the soil. The fungus obtains nutrients, such as sugars, from the plant root (Figure 31.11).

Mycorrhizae help increase the surface area of the plant root system because hyphae, which are narrow, can spread beyond the nutrient depletion zone. Hyphae can grow into small soil pores that allow access to phosphorus that would otherwise be unavailable to the plant. The beneficial effect on the plant is best observed in poor soils. The benefit to fungi is that they can obtain up to 20 percent of the total carbon accessed by plants. Mycorrhizae also function as a physical barrier to pathogens. They provide an induction of generalized host defense mechanisms, and sometimes involve production of antibiotic compounds by the fungi.

There are two types of mycorrhizae: ectomycorrhizae and endomycorrhizae. Ectomycorrhizae form an extensive dense sheath around the roots, called a mantle. Hyphae from the fungi extend from the mantle into the soil, which increases the surface area for water and mineral absorption. This type of mycorrhizae is found in forest trees, especially conifers, birches, and oaks. Endomycorrhizae, also called arbuscular mycorrhizae, do not form a dense sheath over the root. Instead, the fungal mycelium is embedded within the root tissue. Endomycorrhizae are found in the roots of more than 80 percent of terrestrial plants.

Nutrients from Other Sources
Some plants cannot produce their own food and must obtain their nutrition from outside sources. This may occur with plants that are parasitic or saprophytic. Some plants are mutualistic symbionts, epiphytes, or insectivorous.

Plant Parasites
A parasitic plant depends on its host for survival. Some parasitic plants have no leaves. An example of this is the dodder (Figure 31.12), which has a weak, cylindrical stem that coils around the host and forms suckers. From these suckers, cells invade the host stem and grow to connect with the vascular bundles of the host. The parasitic plant obtains water and nutrients through these connections. The plant is a total parasite (a holoparasite) because it is completely dependent on its host. Other parasitic plants (hemiparasites) are fully photosynthetic and only use the host for water and minerals. There are about 4,100 species of parasitic plants.

Saprophytes
A saprophyte is a plant that does not have chlorophyll and gets its food from dead matter, similar to bacteria and fungi (note that fungi are often called saprophytes, which is incorrect, because fungi are not plants). Plants like these use enzymes to convert organic food materials into simpler forms from which they can absorb nutrients (Figure 31.13). Most saprophytes do not directly digest dead matter: instead, they parasitize fungi that digest dead matter, or are mycorrhizal, ultimately obtaining photosynthate from a fungus that derived photosynthate from its host. Saprophytic plants are uncommon; only a few species are described.

Symbionts
A symbiont is a plant in a symbiotic relationship, with special adaptations such as mycorrhizae or nodule formation. Fungi also form symbiotic associations with cyanobacteria and green algae (called lichens). Lichens can sometimes be seen as colorful growths on the surface of rocks and trees (Figure 31.14). The algal partner (phycobiont) makes food autotrophically, some of which it shares with the fungus; the fungal partner (mycobiont) absorbs water and minerals from the environment, which are made available to the green alga. If one partner were separated from the other, both would die.
Epiphytes
An epiphyte is a plant that grows on other plants, but is not dependent upon the other plant for nutrition (Figure 31.15). Epiphytes have two types of roots: clinging aerial roots, which absorb nutrients from humus that accumulates in the crevices of trees; and aerial roots, which absorb moisture from the atmosphere.

Insectivorous Plants
An insectivorous plant has specialized leaves to attract and digest insects. The Venus flytrap is popularly known for its insectivorous mode of nutrition, and has leaves that work as traps (Figure 31.16). The minerals it obtains from prey compensate for those lacking in the boggy (low pH) soil of its native North Carolina coastal plains. There are three sensitive hairs in the center of each half of each leaf. The edges of each leaf are covered with long spines. Nectar secreted by the plant attracts flies to the leaf. When a fly touches the sensory hairs, the leaf immediately closes. Next, fluids and enzymes break down the prey and minerals are absorbed by the leaf. Since this plant is popular in the horticultural trade, it is threatened in its original habitat.
principles_of_accounting,_volume_2:_managerial_accounting
Summary

5.1 Compare and Contrast Job Order Costing and Process Costing
The three categories of costs incurred in producing an item are direct material, direct labor, and manufacturing overhead. Process costing is the system of accumulating costs within each department for large-volume, mass-produced units. Process costing often groups direct labor and manufacturing overhead as conversion costs. Costs under GAAP are categorized as period costs when they are not related to production and instead cover a time period. Selling and administrative costs are period costs related to the sales of products and management of the company and are not directly tied to a specific product. Process costing determines the cost per unit through the use of equivalent units, or the number of units that would have been produced if production were sequential instead of in batches.

5.2 Explain and Identify Conversion Costs
Conversion costs are the costs of direct labor and manufacturing overhead used to convert raw materials into a finished product. Materials are added during various stages of the manufacturing process, such as the beginning or end, while conversion of the product from raw material into finished goods is considered to occur uniformly through the process. Thus, it is possible for a product to have all of its materials and not be complete. Equivalent units for direct materials can be different than the equivalent units for conversion costs because materials are added in steps through the manufacturing process, while conversion costs are incurred evenly throughout the process.

5.3 Explain and Compute Equivalent Units and Total Cost of Production in an Initial Processing Stage
Process costing has a work in process inventory account for each department. Equivalent units of production for materials may differ from the equivalent units for conversion costs. The total units to account for is the number of units in the beginning work in process inventory plus the number of units started into production; this total also represents the sum of the number of units completed and the number of units in the ending work in process inventory. The cost per equivalent unit for materials is the total of the material costs for the beginning work in process inventory and the total of material costs incurred during the period, divided by the equivalent units for materials. The cost per equivalent unit for conversion costs is the total of the conversion costs for the beginning work in process inventory and the total of conversion costs incurred during the period, divided by the equivalent units for conversion. The cost of units transferred to the next department is the number of units transferred times the total of the cost per equivalent unit of material plus the cost per equivalent unit for conversion costs.

5.4 Explain and Compute Equivalent Units and Total Cost of Production in a Subsequent Processing Stage
The total units to account for is the number of units in the beginning work in process inventory plus the number of units transferred from the prior department; this total also represents the number of units completed plus the number of units in the ending work in process inventory. The cost per equivalent unit for materials is the total of the material costs for the beginning work in process inventory plus the cost of material transferred in to the department plus the total of material costs incurred during the period, divided by the equivalent units for materials.
The cost per equivalent unit for conversion costs is the total of the conversion costs for the beginning work in process inventory plus the conversion costs transferred in plus the total of conversion costs incurred during the period, divided by the equivalent units for conversion.

5.5 Prepare Journal Entries for a Process Costing System
Traditional journal entries show the purchase of material and the incurring of overhead costs. Each department records the transfer of material from the storeroom into production, its direct labor costs, the application of overhead, and the transfer of goods to the next department or finished goods. The value of the inventory transferred to the next department or to finished goods equals the amount listed as transferred on the production cost report.
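Because the summary above is essentially a pair of formulas, a small worked example may make the pattern easier to see. The Python sketch below uses invented figures (none of these amounts come from the text): cost per equivalent unit is beginning work in process cost plus cost added, divided by equivalent units, and the cost transferred out is units transferred times the combined cost per equivalent unit.

    # Hypothetical department data; every number here is invented.
    beginning_wip_material = 2_000.00     # material cost in beginning WIP
    material_added = 10_000.00            # material cost added this period
    beginning_wip_conversion = 1_500.00   # conversion cost in beginning WIP
    conversion_added = 7_500.00           # conversion cost added this period

    equivalent_units_material = 12_000    # e.g., materials added at the start
    equivalent_units_conversion = 10_000  # ending WIP only partly converted
    units_transferred_out = 9_000

    # Cost per equivalent unit = (beginning cost + cost added) / equivalent units.
    cpu_material = (beginning_wip_material + material_added) / equivalent_units_material
    cpu_conversion = (beginning_wip_conversion + conversion_added) / equivalent_units_conversion

    # Cost of units transferred to the next department.
    cost_transferred = units_transferred_out * (cpu_material + cpu_conversion)

    print(f"cost per equivalent unit, material:   {cpu_material:.2f}")      # 1.00
    print(f"cost per equivalent unit, conversion: {cpu_conversion:.2f}")    # 0.90
    print(f"cost transferred out:                 {cost_transferred:.2f}")  # 17100.00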
Chapter Outline
5.1 Compare and Contrast Job Order Costing and Process Costing
5.2 Explain and Identify Conversion Costs
5.3 Explain and Compute Equivalent Units and Total Cost of Production in an Initial Processing Stage
5.4 Explain and Compute Equivalent Units and Total Cost of Production in a Subsequent Processing Stage
5.5 Prepare Journal Entries for a Process Costing System

Why It Matters
David and William's family has used a secret family recipe for generations to make amazing chocolate chip cookies. While in college, they helped their grandmother, who used only locally sourced products, make and sell the cookies to a local restaurant. They helped her become more efficient, discovered how to retain the quality taste while making larger batches, and developed a plate-sized version that could be decorated similar to a birthday cake. After creating an equally successful peanut butter cookie recipe, David and William decided to expand the business and sell to high-end grocers as well as to a second restaurant. They found it was optimal in terms of cost, efficiency, and quality to produce 100 cookies per batch for each regular-sized cookie and 5 cookies per batch for the large cookies. They surveyed restaurants and grocery stores and determined that each flavor should be offered in four different package sizes. They also analyzed the marketability at various sale prices.

David and William now know they need to use their information to identify the costs associated with making the cookies. They need to know the cost to produce one unit of their product in order to price their cookies correctly, determine the optimal product mix, manage efficiency and process improvement, and make other management decisions.
[ { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Process costing is the optimal costing system when a standardized process is used to manufacture identical products and the direct material , direct labor , and manufacturing overhead cannot be easily or economically traced to a specific unit . <hl> Process costing is used most often when manufacturing a product in batches . <hl> Each department or production process or batch process tracks its direct material and direct labor costs as well as the number of units in production . <hl> The actual cost to produce each unit through a process costing system varies , but the average result is an adequate determination of the cost for each manufactured unit . Examples of items produced and accounted for using a form of the process costing method could be soft drinks , petroleum products , or even furniture such as chairs , assuming that the company makes batches of the same chair , instead of customizing final products for individual customers . <hl> As you ’ ve learned , job order costing is the optimal accounting method when costs and production specifications are not identical for each product or customer but the direct material and direct labor costs can easily be traced to the final product . <hl> Job order costing is often a more complex system and is appropriate when the level of detail is necessary , as discussed in Job Order Costing . Examples of products manufactured using the job order costing method include tax returns or audits conducted by a public accounting firm , custom furniture , or , in a comprehensive example , semitrucks . At the Peterbilt factory in Denton , Texas , the company can build over 100,000 unique versions of their semitrucks without making the same truck twice .", "hl_sentences": "Process costing is the optimal costing system when a standardized process is used to manufacture identical products and the direct material , direct labor , and manufacturing overhead cannot be easily or economically traced to a specific unit . Each department or production process or batch process tracks its direct material and direct labor costs as well as the number of units in production . As you ’ ve learned , job order costing is the optimal accounting method when costs and production specifications are not identical for each product or customer but the direct material and direct labor costs can easily be traced to the final product .", "question": { "cloze_format": "The production characteristic best suited for process costing and not job order costing can be stated as follows: ___.", "normal_format": "Which of the following production characteristics is better suited for process costing and not job order costing?", "question_choices": [ "Each product batch is distinguishable from the prior batch.", "The costs are easily traced to a specific product.", "Costs are accumulated by department.", "The value of work in process is the direct material used, the direct labor incurred, and the overhead applied to the job in process." ], "question_id": "fs-idm469131584", "question_text": "Which of the following production characteristics is better suited for process costing and not job order costing?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "a paper manufacturing company" }, "bloom": null, "hl_context": "<hl> As previously mentioned , process costing is used when similar items are produced in large quantities . 
<hl> As such , many individuals immediately associate process costing with assembly line production . <hl> Process costing works best when products cannot be distinguished from each other and , in addition to obvious production line products like ice cream or paint , also works for more complex manufacturing of similar products like small engines . <hl> Conversely , products in a job order cost system are manufactured in small quantities and include custom jobs such as custom manufacturing products . They can also be legal or accounting tasks , movie production , or major projects such as construction activities .", "hl_sentences": "As previously mentioned , process costing is used when similar items are produced in large quantities . Process costing works best when products cannot be distinguished from each other and , in addition to obvious production line products like ice cream or paint , also works for more complex manufacturing of similar products like small engines .", "question": { "cloze_format": "A process costing system is most likely used by ___ .", "normal_format": "A process costing system is most likely used by which of the following?", "question_choices": [ "airplane manufacturing", "a paper manufacturing company", "an accounting firm specializing in tax returns", "a hospital" ], "question_id": "fs-idm205106176", "question_text": "A process costing system is most likely used by which of the following?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "A" }, "bloom": null, "hl_context": "When assigning costs to departments , it is important to separate the product costs from the period costs , which are those that are typically related with a particular time period , instead of attached to the production of an asset . Management often needs additional information to make decisions and needs the product costs further categorized as prime costs or conversion costs ( Figure 5.4 ) . <hl> Prime costs are costs that include the primary ( or direct ) product costs : direct material and direct labor . <hl> Conversion costs are the costs necessary to convert direct materials into a finished product : direct labor and manufacturing overhead , which includes other costs that are not classified as direct materials or direct labor , such as plant insurance , utilities , or property taxes . Also , note that direct labor is considered to be a component of both prime costs and conversion costs .", "hl_sentences": "Prime costs are costs that include the primary ( or direct ) product costs : direct material and direct labor .", "question": { "cloze_format": "(The) ___ is/are a prime cost.", "normal_format": "Which of the following is a prime cost?", "question_choices": [ "direct labor", "work in process inventory", "administrative labor", "factory maintenance expenses" ], "question_id": "fs-idm214657104", "question_text": "Which of the following is a prime cost?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "direct labor" }, "bloom": null, "hl_context": "When assigning costs to departments , it is important to separate the product costs from the period costs , which are those that are typically related with a particular time period , instead of attached to the production of an asset . Management often needs additional information to make decisions and needs the product costs further categorized as prime costs or conversion costs ( Figure 5.4 ) . 
Prime costs are costs that include the primary ( or direct ) product costs : direct material and direct labor . <hl> Conversion costs are the costs necessary to convert direct materials into a finished product : direct labor and manufacturing overhead , which includes other costs that are not classified as direct materials or direct labor , such as plant insurance , utilities , or property taxes . <hl> Also , note that direct labor is considered to be a component of both prime costs and conversion costs .", "hl_sentences": "Conversion costs are the costs necessary to convert direct materials into a finished product : direct labor and manufacturing overhead , which includes other costs that are not classified as direct materials or direct labor , such as plant insurance , utilities , or property taxes .", "question": { "cloze_format": "___ is a conversion cost.", "normal_format": "Which of the following is a conversion cost?", "question_choices": [ "raw materials", "direct labor", "sales commissions", "direct material used" ], "question_id": "fs-idm200869264", "question_text": "Which of the following is a conversion cost?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "When a company mass produces parts but allows customization on the final product , both systems are used ; this is common in auto manufacturing . <hl> Each part of the vehicle is mass produced , and its cost is calculated with process costing . <hl> <hl> However , specific cars have custom options , so each individual car costs the sum of the specific parts used . <hl> <hl> In a process cost system , costs are maintained by each department , and the method for determining the cost per individual unit is different than in a job order costing system . <hl> Rock City Percussion uses a process cost system because the drumsticks are produced in batches , and it is not economically feasible to trace the direct labor or direct material , like hickory , to a specific drumstick . Therefore , the costs are maintained by each department , rather than by job , as they are in job order costing .", "hl_sentences": "Each part of the vehicle is mass produced , and its cost is calculated with process costing . However , specific cars have custom options , so each individual car costs the sum of the specific parts used . In a process cost system , costs are maintained by each department , and the method for determining the cost per individual unit is different than in a job order costing system .", "question": { "cloze_format": "During production, the costs in process costing are accumulated ___.", "normal_format": "During production, how are the costs in process costing accumulated?", "question_choices": [ "to cost of goods sold", "to each individual product", "to manufacturing overhead", "to each individual department" ], "question_id": "fs-idm225538080", "question_text": "During production, how are the costs in process costing accumulated?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "The shaping department uses only wood as its direct material and water as its indirect material . In the shaping department , the material is added first . <hl> Then , machines cut the wood underwater into dowels , separate them , and move them to machines that shape the dowels into drumsticks . 
<hl> <hl> These machines need electricity to operate and personnel to monitor and adjust the processes and to maintain the equipment . <hl> When the shaping is finished , a conveyer belt transfers the sticks to the finishing department . <hl> Let ’ s return to our drumstick example to learn how to work with conversion costs . <hl> Rock City Percussion has two departments critical to manufacturing drumsticks : the shaping and packaging departments . When assigning costs to departments , it is important to separate the product costs from the period costs , which are those that are typically related with a particular time period , instead of attached to the production of an asset . Management often needs additional information to make decisions and needs the product costs further categorized as prime costs or conversion costs ( Figure 5.4 ) . Prime costs are costs that include the primary ( or direct ) product costs : direct material and direct labor . <hl> Conversion costs are the costs necessary to convert direct materials into a finished product : direct labor and manufacturing overhead , which includes other costs that are not classified as direct materials or direct labor , such as plant insurance , utilities , or property taxes . <hl> Also , note that direct labor is considered to be a component of both prime costs and conversion costs .", "hl_sentences": "Then , machines cut the wood underwater into dowels , separate them , and move them to machines that shape the dowels into drumsticks . These machines need electricity to operate and personnel to monitor and adjust the processes and to maintain the equipment . Let ’ s return to our drumstick example to learn how to work with conversion costs . Conversion costs are the costs necessary to convert direct materials into a finished product : direct labor and manufacturing overhead , which includes other costs that are not classified as direct materials or direct labor , such as plant insurance , utilities , or property taxes .", "question": { "cloze_format": "___ is the list that contains only conversion costs for an inflatable raft manufacturing corporation.", "normal_format": "Which of the following lists contains only conversion costs for an inflatable raft manufacturing corporation?", "question_choices": [ "vinyl for raft, machine operator, electricity, insurance", "machine operator, electricity, depreciation, plastic for air valves", "machine operator, electricity, depreciation, insurance", "vinyl for raft, electricity, insurance, plastic for air valves" ], "question_id": "fs-idm363777888", "question_text": "Which of the following lists contains only conversion costs for an inflatable raft manufacturing corporation?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "costs of the units in the beginning inventory and costs added during the period" }, "bloom": null, "hl_context": "Once the equivalent units for materials and conversion are known , the cost per equivalent unit is computed in a similar manner as the units accounted for . <hl> The costs for material and conversion need to reconcile with the total beginning inventory and the costs incurred for the department during that month . <hl> In addition to the equivalent units , it is necessary to track the units completed as well as the units remaining in ending inventory . A similar process is used to account for the costs completed and transferred . <hl> Reconciling the number of units and the costs is part of the process costing system . 
<hl> The reconciliation involves the total of beginning inventory and units started into production . <hl> This total is called “ units to account for , ” while the total of beginning inventory costs and costs added to production is called “ costs to be accounted for . ” Knowing the total units or costs to account for is helpful since it also equals the units or costs transferred out plus the amount remaining in ending inventory . <hl>", "hl_sentences": "The costs for material and conversion need to reconcile with the total beginning inventory and the costs incurred for the department during that month . Reconciling the number of units and the costs is part of the process costing system . This total is called “ units to account for , ” while the total of beginning inventory costs and costs added to production is called “ costs to be accounted for . ” Knowing the total units or costs to account for is helpful since it also equals the units or costs transferred out plus the amount remaining in ending inventory .", "question": { "cloze_format": "The costs to be accounted for consist of ___.", "normal_format": "The costs to be accounted for consist of which of the following?", "question_choices": [ "costs of the units added during the period", "costs of the units in ending inventory", "costs of the units started and transferred during the period", "costs of the units in the beginning inventory and costs added during the period" ], "question_id": "fs-idm203001392", "question_text": "The costs to be accounted for consist of which of the following?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> The similarities between job order cost systems and process cost systems are the product costs of materials , labor , and overhead , which are used determine the cost per unit , and the inventory values . <hl> The differences between the two systems are shown in Table 5.1 .", "hl_sentences": "The similarities between job order cost systems and process cost systems are the product costs of materials , labor , and overhead , which are used determine the cost per unit , and the inventory values .", "question": { "cloze_format": "___ is the step in which materials, labor, and overhead are detailed.", "normal_format": "Which of the following is the step in which materials, labor, and overhead are detailed?", "question_choices": [ "determining the units to which costs are assigned", "determining the equivalent units of production", "determining the cost per equivalent units", "allocating the costs to the units transferred out and the units partially completed" ], "question_id": "fs-idm235943968", "question_text": "Which of the following is the step in which materials, labor, and overhead are detailed?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Overhead costs are accumulated in a manufacturing overhead account and applied to each department on the basis of a predetermined overhead rate . Properly allocating overhead to each department depends on finding an activity that provides a fair basis for the allocation . <hl> It needs to be an activity common to each department and influential in driving the cost of manufacturing overhead . <hl> In traditional costing systems , the most common activities used are machine hours , direct labor in dollars , or direct labor in hours . 
If the number of machine hours can be related to the manufacturing overhead , the overhead can be applied to each department based on the machine hours . The formula for overhead allocation is : During July , the shaping department requisitioned $ 10,179 in direct material . <hl> Similar to job order costing , indirect material costs are accumulated in the manufacturing overhead account . <hl> The overhead costs are applied to each department based on a predetermined overhead rate . In the example , assume that there was an indirect material cost for water of $ 400 in July that will be recorded as manufacturing overhead . The journal entry to record the requisition and usage of direct materials and overhead is :", "hl_sentences": "It needs to be an activity common to each department and influential in driving the cost of manufacturing overhead . Similar to job order costing , indirect material costs are accumulated in the manufacturing overhead account .", "question": { "cloze_format": "Assigning indirect costs to departments is completed by ________.", "normal_format": "Assigning indirect costs to departments is completed by what?", "question_choices": [ "applying the predetermined overhead rate", "debiting the manufacturing costs incurred", "applying the costs to manufacturing overhead", "applying the costs to work in process inventory" ], "question_id": "fs-idm383545408", "question_text": "Assigning indirect costs to departments is completed by ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "work in process inventory" }, "bloom": null, "hl_context": "<hl> In a process cost system , each department accumulates its costs to compute the value of work in process inventory , so there will be a work in process inventory for each manufacturing or production department as well as an inventory cost for finished goods inventory . <hl> Manufacturing departments are often organized by the various stages of the production process . For example , blending , baking , and packaging could each be categorized as manufacturing or production departments for the cookie producer , while cutting , assembly , and finishing could be manufacturing or production departments with accompanying costs for a furniture manufacturer . Each department , or process , will have its own work in process inventory account , but there will only be one finished goods inventory account . Manufacturing costs or product costs include all expenses required to manufacture the product : direct materials , direct labor , and manufacturing overhead . <hl> Since process costing assigns the costs to each department , the inventory at the end of the period includes the finished goods inventory , and the work in process inventory for each manufacturing department . <hl> For example , using the departments shown in Figure 5.3 , raw materials inventory is the cost paid for the materials that remain in the storeroom until requested .", "hl_sentences": "In a process cost system , each department accumulates its costs to compute the value of work in process inventory , so there will be a work in process inventory for each manufacturing or production department as well as an inventory cost for finished goods inventory . 
Since process costing assigns the costs to each department , the inventory at the end of the period includes the finished goods inventory , and the work in process inventory for each manufacturing department .", "question": { "cloze_format": "In a process costing system, the account that shows the overhead assigned to the department is the ___ .", "normal_format": "In a process costing system, which account shows the overhead assigned to the department?", "question_choices": [ "cost of goods sold", "finished goods inventory", "raw material inventory", "work in process inventory" ], "question_id": "fs-idm369029280", "question_text": "In a process costing system, which account shows the overhead assigned to the department?" }, "references_are_paraphrase": 0 } ]
5.1 Compare and Contrast Job Order Costing and Process Costing As you’ve learned, job order costing is the optimal accounting method when costs and production specifications are not identical for each product or customer but the direct material and direct labor costs can easily be traced to the final product. Job order costing is often a more complex system and is appropriate when the level of detail is necessary, as discussed in Job Order Costing . Examples of products manufactured using the job order costing method include tax returns or audits conducted by a public accounting firm, custom furniture, or, in a comprehensive example, semitrucks. At the Peterbilt factory in Denton, Texas, the company can build over 100,000 unique versions of its semitrucks without making the same truck twice. Process costing is the optimal costing system when a standardized process is used to manufacture identical products and the direct material, direct labor, and manufacturing overhead cannot be easily or economically traced to a specific unit. Process costing is used most often when manufacturing a product in batches. Each department, production process, or batch process tracks its direct material and direct labor costs as well as the number of units in production. The actual cost to produce each unit through a process costing system varies, but the average result is an adequate determination of the cost for each manufactured unit. Examples of items produced and accounted for using a form of the process costing method could be soft drinks, petroleum products, or even furniture such as chairs, assuming that the company makes batches of the same chair, instead of customizing final products for individual customers. For example, small companies, such as David and William’s, and large companies, such as Nabisco , use similar cost-determination processes. In order to understand how much each product costs—for example, Oreo cookies—Nabisco uses process costing to track the direct materials, direct labor, and manufacturing overhead used in the manufacturing of its products. Oreo production has six distinct steps or departments: (1) make the cookie dough, (2) press the cookie dough into a molding machine, (3) bake the cookies, (4) make the filling and apply it to the cookies, (5) put the cookies together into a sandwich, and (6) place the cookies into plastic trays and packages. Each department keeps track of its direct materials used and direct labor incurred, and manufacturing overhead applied to facilitate determining the cost of a batch of Oreo cookies. As previously mentioned, process costing is used when similar items are produced in large quantities. As such, many individuals immediately associate process costing with assembly line production. Process costing works best when products cannot be distinguished from each other and, in addition to obvious production line products like ice cream or paint, also works for more complex manufacturing of similar products like small engines. Conversely, products in a job order cost system are manufactured in small quantities and include custom manufacturing jobs. They can also be legal or accounting tasks, movie production, or major projects such as construction activities. The difference between process costing and job order costing relates to how the costs are assigned to the products. In either costing system, the ability to obtain and analyze cost data is needed.
As a result, the costing system selected should be the one that best matches the manufacturing process. A job order cost system is often more expensive to maintain than a basic process costing system, since there is a cost associated with assigning the individual material and labor to the product. Thus, a job order cost system is used for custom jobs when it is easy to determine the cost of materials and labor used for each job. A process cost system is often less expensive to maintain and works best when items are identical and it is difficult to trace the exact cost of materials and labor to the final product. For example, assume that your company uses three production processes to make jigsaw puzzles. The first process glues the picture on the cardboard backing, the second process cuts the puzzle into pieces, and the final process loads the pieces into the boxes and seals them. Tracing the complete costs for the batch of similar puzzles would likely entail three steps, with three separate costing system components. In this environment, it would be difficult and not economically feasible to trace the exact materials and the exact labor to each individual puzzle; rather, it would be more efficient to trace the costs per batch of puzzles. The costing system used typically depends on whether the company can most efficiently and economically trace the costs to the job (favoring a job order costing system) or to the production department or batch (favoring a process costing system). While the costing systems are different from each other, management uses the information provided to make similar managerial decisions, such as setting the sales price. For example, in a job order cost system, each job is unique, which allows management to establish individual prices for individual projects. Management also needs to establish a sales price for a product produced with a process costing system, but this system is not designed to stop the production process and individually cost each batch of a product, so management must set a price that will work for many batches of the product. In addition to setting the sales price, managers need to know the cost of their products in order to determine the value of inventory, plan production, determine labor needs, and make long- and short-term plans. They also need to know the costs to determine when a new product should be added or an old product removed from production. In this chapter, you will learn when and why process costing is used. You’ll also learn the concepts of conversion costs and equivalent units of production and how to use these for calculating the unit and total cost of items produced using a process costing system. Basic Managerial Accounting Terms Used in Job Order Costing and Process Costing Regardless of the costing system used, manufacturing costs consist of direct material, direct labor, and manufacturing overhead. Figure 5.2 shows a partial organizational chart for Rock City Percussion, a drumstick manufacturer. In this example, two groups—administrative and manufacturing—report directly to the chief financial officer (CFO). Each group has a vice president responsible for several departments. The organizational chart also shows the departments that report to the production department, illustrating the production arrangement. The material storage unit stores the types of wood used (hickory, maple, and birch), the tips (nylon and felt), and packaging materials.
Understanding the company’s organization is an important first step in any costing system. Next is understanding the production process. The most basic drumstick is made of hickory and has a wooden tip. When the popular size 5A stick is manufactured, the hickory stored in the materials storeroom is delivered to the shaping department, where the wood is cut into pieces, shaped into dowels, and formed into the size 5A profile while under a stream of water. The sticks are dried and then sent to the packaging department, where the sticks are embossed with the Rock City Percussion logo, inspected, paired, packaged, and shipped to retail outlets such as Guitar Center . The manufacturing process is described in Figure 5.3 . The different units within Rock City Percussion illustrate the two main cost categories of a manufacturing company: manufacturing costs and administrative costs. Link to Learning Understanding the full manufacturing process for a product helps with tracking costs. This video on how drumsticks are made shows the production process for drumsticks at one company, starting with the raw wood and ending with packaging. Manufacturing Costs Manufacturing costs or product costs include all expenses required to manufacture the product: direct materials, direct labor, and manufacturing overhead. Since process costing assigns the costs to each department, the inventory at the end of the period includes the finished goods inventory and the work in process inventory for each manufacturing department. For example, using the departments shown in Figure 5.3 , raw materials inventory is the cost paid for the materials that remain in the storeroom until requested. While still in production, the work in process units are moved from one department to the next until they are completed, so the work in process inventory includes all of the units in the shaping and packaging departments. When the units are completed, they are transferred to finished goods inventory and become part of cost of goods sold when the product is sold. When assigning costs to departments, it is important to separate the product costs from the period costs , which are costs typically associated with a particular time period rather than attached to the production of an asset. Management often needs additional information to make decisions and needs the product costs further categorized as prime costs or conversion costs ( Figure 5.4 ). Prime costs are costs that include the primary (or direct) product costs: direct material and direct labor. Conversion costs are the costs necessary to convert direct materials into a finished product: direct labor and manufacturing overhead, which includes other costs that are not classified as direct materials or direct labor, such as plant insurance, utilities, or property taxes. Also, note that direct labor is considered to be a component of both prime costs and conversion costs. Job order costing tracks prime costs to assign direct material and direct labor to individual products (jobs). Process costing also tracks prime costs to assign direct material and direct labor to each production department (batch). Manufacturing overhead is another cost of production, and it is applied to products (job order) or departments (process) based on an appropriate activity base.
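Because direct labor belongs to both groupings, a short sketch can make the overlap concrete. The figures below are hypothetical illustrations, not amounts from the chapter:

```python
# A minimal sketch showing how the same three product costs split into
# prime costs and conversion costs. All dollar amounts are hypothetical.
direct_materials = 10_000
direct_labor = 15_000
manufacturing_overhead = 7_000

prime_costs = direct_materials + direct_labor             # direct product costs
conversion_costs = direct_labor + manufacturing_overhead  # costs to convert materials

print(f"Prime costs:      ${prime_costs:,}")       # $25,000
print(f"Conversion costs: ${conversion_costs:,}")  # $22,000
# Direct labor appears in both totals, as the text explains.
```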
Ethical Considerations The Unethical Bakery Accountant 1, 2 According to the Federal Bureau of Investigation (FBI), “Sandy Jenkins was a shy, daydreaming accountant at the Collin Street Bakery , the world’s most famous fruitcake company. He was tired of feeling invisible, so he started stealing—and got a little carried away.” Being unethical netted the accountant ten years in federal prison, and his wife Kay was sentenced to five years’ probation and 100 hours of community service, and she was required to write a formal apology to the bakery. According to the FBI, “Jenkins spent over $11 million on a Black American Express card alone—roughly $98,000 per month over the course of the scheme—for a couple that had a legitimate income, through the Bakery, of approximately $50,000 per year.” How did this happen? Texas Monthly reports that Sandy found a way to write unapproved checks in the accounting system. Working within the company’s accounting system, he created checks that appeared to be “signed” by the owner of the company, Bob McNutt. McNutt was perplexed as to why his bakery was not more profitable year after year. The accountant was stealing the money while making the stolen checks appear to be paying for material costs or operating costs. According to Texas Monthly , “Once Sandy was sure that nobody had noticed the first fraudulent check, he tried it again. And again and again. Each time, Sandy would repeat the scheme, pairing his fraudulent check with one that appeared legitimate. Someone would have to closely examine the checks to see any discrepancies, and that seemed unlikely.” The multimillion-dollar fraud was exposed when another accountant looked closely at the checks and noticed discrepancies. 1 Katy Vine. “Just Desserts.” Texas Monthly . October 2010. https://features.texasmonthly.com/editorial/just-desserts/ 2 Federal Bureau of Investigation (FBI). “Former Collin Street Bakery Executive and Wife Sentenced.” September 16, 2015. https://www.fbi.gov/contact-us/field-offices/dallas/news/press-releases/former-collin-street-bakery-executive-and-wife-sentenced Selling and Administrative Expenses Selling and administrative (S&A) expenses are period costs, which means that they are recorded in the period in which they were incurred. Selling and administrative expenses typically are not directly assigned to the items produced or services provided and include costs of departments not directly associated with manufacturing but necessary to operate the business. The selling costs component of S&A expenses is related to the promotion and sale of the company’s products, while administrative expenses are related to the administration of the company. Some examples of S&A expenses include marketing costs; administration building rent; the chief executive officer’s salary expense; and the accounting, payroll, and data processing department expenses. These general rules for S&A expenses, however, have their exceptions. For example, some items that are classified as overhead, such as plant insurance, are period costs but are classified as overhead and are attached to the items produced as product costs. The expense recognition principle is the primary reason to separate the costs of production from the other expenses of the company. This principle requires costs to be recorded in the period in which they are incurred. The costs are expensed when matched to the revenue with which they are associated; this is commonly referred to as having the expenses follow the revenues .
Period costs are expensed during the period in which they are incurred; this allows a company to apply the administrative and other expenses shown on the income statement to the same period in which the company earns income. Under generally accepted accounting principles (GAAP), separating the production costs and assigning them to the department results in the costs of the product staying with the work in process inventory for each department. This follows the expense recognition principle because the cost of the product is expensed when revenue from the sale is recognized. Equivalent Units In a process cost system, costs are maintained by each department, and the method for determining the cost per individual unit is different than in a job order costing system. Rock City Percussion uses a process cost system because the drumsticks are produced in batches, and it is not economically feasible to trace the direct labor or direct material, like hickory, to a specific drumstick. Therefore, the costs are maintained by each department, rather than by job, as they are in job order costing. How does an organization determine the cost of each unit in a process costing environment? The costs in each department are allocated to the number of units produced in a given period. This requires determination of the number of units produced, but this is not always an easy process. At the end of the accounting period, there are typically units still in production, and these units are only partially complete. Think of it this way: At midnight on the last day of the month, all accounting numbers need to be determined in order to process the financial statements for that month, but the production process does not stop at the end of each accounting period. However, the number of units produced must be calculated at the end of the accounting period to determine the number of equivalent units , or the number of units that would have been produced if the units were produced sequentially and in their entirety in a particular time period. The number of equivalent units is different from the number of actual units and represents the number of full or whole units that could have been produced given the amount of effort applied. To illustrate, consider this analogy. You have five large pizzas, each containing eight slices. Your friends served themselves, and when they were finished eating, there were several partial pizzas left. In equivalent units, determine how many whole pizzas are left if the remaining slices are divided as shown in Figure 5.5 : Pie 1 had one slice, Pie 2 had two slices, Pie 3 had two slices, Pie 4 had three slices, and Pie 5 had eight slices. Together, there are sixteen slices left. Since there are eight slices per pizza, the leftover pizza would be considered two full equivalent units of pizzas. The equivalent unit is determined separately for direct materials and for conversion costs as part of the computation of the per-unit cost for both material and conversion costs. Major Characteristics of Process Costing Process costing is the optimal system for a company to use when the production process results in many similar units. It is used when production is continuous or occurs in large batches and it is difficult to trace a particular input cost to a specific individual product. For example, before David and William found ways to make five large cookies per batch, their family always made one large cookie per batch.
In order to make five cookies at a time, they had to gather the ingredients and baking materials, including five bowls and five cookie sheets. The exact amount of ingredients for one large cookie was mixed in each separate bowl and then placed on the cookie sheet. When this method was used, it was easy to establish that exactly one egg, two cups of flour, three-quarter cup of chocolate chips, three-quarter cup of sugar, one-quarter teaspoon salt, and so forth, were in each cookie. This made it easy to determine the exact cost of each cookie. But if David and William used one bowl instead of five bowls, measured the ingredients into it, and then divided the dough into five large cookies, they could not know for certain that each cookie had exactly two cups of flour. One cookie may have 1 7/8 cups and another may have 1 15/16 cups, and one cookie may have a few more chocolate chips than another. It is also impossible to trace the chocolate chips from each bag to each cookie because the chips were mixed together. These variations do not affect the taste and are not important in this type of accounting. Process costing is optimal when the products are relatively homogeneous or indistinguishable from one another, such as bottles of vegetable oil or boxes of cereal. Often, process costing makes sense if the individual costs or values of each unit are not significant. For example, it would not be cost effective for a restaurant to make each cup of iced tea separately or to track the direct material and direct labor used to make each eight-ounce glass of iced tea served to a customer. In this scenario, job order costing is a less efficient accounting method because it costs more to track the costs per eight ounces of iced tea than the cost of a batch of tea. Overall, when it is difficult or not economically feasible to track the costs of a product individually, process costing is typically the best cost system to use. Process costing can also accommodate increasingly complex business scenarios. While making drumsticks may sound simple, an immense amount of technology is involved. Rock City Percussion makes 8,000 hickory sticks per day, four days each week. The sticks made of maple and birch are manufactured on the fifth day of the week. It is difficult to tell the first drumstick made on Monday from the 32,000th one made on Thursday, so a computer matches the sticks in pairs based on the tone produced. Process costing measures and assigns the costs to the associated department. The basic 5A hickory stick consists only of hickory as direct material. The rest of the manufacturing process involves direct labor and manufacturing overhead, so the focus is on properly assigning those costs. Thus, process costing works well for simple production processes such as cereal, rubber, and steel, and for more complicated production processes such as the manufacturing of electronics and watches, if there is a degree of similarity in the production process. In a process cost system, each department accumulates its costs to compute the value of work in process inventory, so there will be a work in process inventory for each manufacturing or production department as well as an inventory cost for finished goods inventory. Manufacturing departments are often organized by the various stages of the production process.
For example, blending, baking, and packaging could each be categorized as manufacturing or production departments for the cookie producer, while cutting, assembly, and finishing could be manufacturing or production departments with accompanying costs for a furniture manufacturer. Each department, or process, will have its own work in process inventory account, but there will only be one finished goods inventory account. There are two methods used to compute the values in the work in process and finished goods inventories. The first method is the weighted-average method, which includes all costs (costs incurred during the current period and costs incurred during the prior period and carried over to the current period). This method is often favored because, in process cost production, there is typically little product left at the end of the period and most has been transferred out. The second method is the first-in, first-out (FIFO) method, which calculates the unit costs based on the assumption that the first units sold come from the prior period’s work in process that was carried over into the current period and completed. After these units are sold, the newer completed units can then be sold. The theory is similar to the FIFO inventory valuation process that you learned about in Inventory . (Since the FIFO process costing method is more complicated than the weighted-average method, the FIFO method is typically covered in more advanced accounting courses.) With processing, it is difficult to establish how much of each material, and exactly how much time, is in each unit of finished product. This will require the use of the equivalent unit computation, and management selects the method (weighted average or FIFO) that best fits its information system. Process costing can also be used by service organizations that provide homogeneous services and often do not have inventory to value, such as a hotel reservation system. Although it has no inventory, the hotel might want to know its costs per reservation for a period. It could allocate the total costs incurred by the reservation system based on the number of inquiries it served. For example, assume that in a year the hotel incurred costs of $200,000 and served 50,000 potential guests. It could determine an average cost by dividing costs by number of inquiries, or $200,000/50,000 = $4.00 per potential guest. In the case of a not-for-profit company, the same process could be used to determine the average costs incurred by a department that performs interviews. The department’s costs would be allocated based on the number of cases processed. For example, assume a not-for-profit pet adoption organization has an annual budget of $180,000 and typically matches 900 shelter animals with new owners each year. The average cost would be $200 per match. Similarities between Process Costing and Job Order Costing Both process costing and job order costing maintain the costs of direct material, direct labor, and manufacturing overhead. The process of production does not change because of the costing method. The costing method is chosen based on the production process. In job order cost production, the costs can be directly traced to the job, and the job cost sheet contains the total expenses for that job. Process costing is optimal when the costs cannot be traced directly to the job. For example, it would be impossible for David and William to trace the exact amount of eggs in each chocolate chip cookie.
It is also impossible to trace the exact amount of hickory in a drumstick. Even two sticks made sequentially may have different weights because the wood varies in density. These types of manufacturing are optimal for the process cost system. The similarities between job order cost systems and process cost systems are the product costs of materials, labor, and overhead, which are used to determine the cost per unit, and the inventory values. The differences between the two systems are shown in Table 5.1. Differences between Job Order Costing and Process Costing: In job order costing, product costs are traced to the product and recorded on each job’s individual job cost sheet; in process costing, product costs are traced to departments or processes. In job order costing, each department tracks its expenses and adds them to the job cost sheet, and as jobs move from one department to another, the job cost sheet moves to the next department as well; in process costing, each department tracks its expenses, the number of units started or transferred in, and the number of units transferred to the next department. In job order costing, unit costs are computed using the job cost sheet; in process costing, unit costs are computed using the departmental costs and the equivalent units produced. In job order costing, finished goods inventory includes the products completed but not sold, and all incomplete jobs are work in process inventory; in process costing, finished goods inventory is the number of units completed at the per unit cost, and work in process inventory is the cost per unit times the equivalent units remaining to be completed. Table 5.1 Concepts In Practice Choosing Between Process Costing and Job Order Costing Process costing and job order costing are both acceptable methods for tracking costs and production levels. Some companies use a single method, while some companies use both, which creates a hybrid costing system. The system a company uses depends on the nature of the product the company manufactures. Companies that mass produce a product allocate the costs to each department and use process costing. For example, General Mills uses process costing for its cereal, pasta, baking products, and pet foods. Job order systems are used for custom orders because the cost of the direct material and direct labor are traced directly to the job being produced. For example, Boeing uses job order costing to manufacture planes. When a company mass produces parts but allows customization on the final product, both systems are used; this is common in auto manufacturing. Each part of the vehicle is mass produced, and its cost is calculated with process costing. However, specific cars have custom options, so each individual car costs the sum of the specific parts used. Think It Through Direct or Indirect Material Around Again is a wooden frame manufacturer. Wood and fastener metals are typically added at the beginning of the process and are easily tracked as direct material. Sometimes, after inspection, the product needs to be reworked and additional pieces are added. Because the frames have already been through each department, the additional work is typically minor and often entails simply adding an additional fastener to keep the back of the frame intact. Other times, all the frame needs is additional glue for a corner piece. How does a company differentiate between direct and indirect material? Many direct material costs, such as the wood in the frame, are easy to identify as direct costs because the material is identifiable in the final product. But not all readily identifiable material is a direct material cost.
Technology makes it easy to track costs as small as one fastener or ounce of glue. However, if each fastener had to be requisitioned and each ounce of glue recorded, the product would take longer to make and the direct labor cost would be higher. So, while it is possible to track the cost of each individual product, the additional information may not be worth the additional expense. Managerial accountants work with management to decide which products should be accounted for as direct material and tracked individually, versus which should be considered indirect material and allocated to the departments through overhead application. Should Around Again consider the fasteners or glue added after inspection as direct material or indirect material? 5.2 Explain and Identify Conversion Costs In a processing environment, there are two concepts important to determining the cost of products produced. These are the concepts of equivalent units and conversion costs. As you have learned, equivalent units are the number of units that would have been produced if one unit was completed before starting a second unit. For example, four units that are one-fourth finished would equal one equivalent unit. Conversion costs are the labor and overhead expenses that “convert” raw materials into a completed unit. Each department tracks its conversion costs in order to determine the quantity and cost per unit (Explain and Compute Equivalent Units and Total Cost of Production in an Initial Processing Stage details the steps used to determine the cost per unit; we discuss this concept in more detail later). Management often uses the cost information generated to set the sales price; to set standard usage data and price for material, labor, and overhead; and to allow management to evaluate the efficiency of production and plan for the future. Definition of Conversion Costs Conversion costs are the total of direct labor and factory overhead costs. They are combined because it is the labor and overhead together that convert the raw material into the finished product. Remember that factory, manufacturing, or organizational overhead (you might see all three terms in practice) is composed of three sources: indirect materials, indirect labor, and all other overhead costs that are not indirect materials or indirect labor. Materials are often added in stages at discrete points of production, such as at the beginning, middle, or end of a process, but conversion is usually applied equally throughout the process. For example, in the opening example, David and William do not add direct material (ingredients) evenly throughout the cookie-making process. The ingredients are all added at the beginning of the production process, so the process begins with the direct materials, and labor and overhead are added throughout the rest of the process. Conversion costs can be explained through the process of making Just Born ’s Peeps. Just Born makes 5.5 million Peeps per day using three ingredients and the following process: 3 3 Just Born. “Marshmallow Peeps Factory Tour.” n.d. http://www.justborn.com/resource/corporate/popups/virtualTour.cfm (1) Use machines to add and mix the sugar, corn syrup, and gelatin into a mixture called a slurry. (2) Send the slurry through a whipper to give the marshmallow its fluffy texture. (3) Color the sugar. (4) Deposit marshmallows on sugar-coated belts in the Peep shape. (5) Send the Peeps on belts through a wind tunnel that stirs up the sugar to coat the entire shape. (6) Add eyes, and inspect.
(7) Move the Peeps via belt into their appropriate tray, and wrap with cellophane. In the Peep-making process, the direct materials of sugar, corn syrup, gelatin, color, and packaging materials are added at the beginning of steps 1, 3, and 7. While the fully automated production does not need direct labor, it does need indirect labor in each step to ensure the machines are operating properly and to perform inspections (step 6). Mechanics of Applying Conversion Costs Let’s return to our drumstick example to learn how to work with conversion costs. Rock City Percussion has two departments critical to manufacturing drumsticks: the shaping and packaging departments. The shaping department uses only wood as its direct material and water as its indirect material. In the shaping department, the material is added first. Then, machines cut the wood underwater into dowels, separate them, and move them to machines that shape the dowels into drumsticks. These machines need electricity to operate and personnel to monitor and adjust the processes and to maintain the equipment. When the shaping is finished, a conveyor belt transfers the sticks to the packaging department. Since the drumsticks are made by performing one process on one batch at a time, instead of producing one stick at a time from start to finish, it is difficult to determine the exact materials, labor, and overhead for a single pair of drumsticks. It is easier to track the materials and conversion costs for one batch and have those costs follow the batch to the next process. Therefore, once the batch of sticks gets to the second process—the packaging department—it already has costs attached to it. In other words, the packaging department receives both the drumsticks and their related costs from the shaping department. For the basic size 5A stick, the packaging department adds material at the beginning of the process. The 5A uses only packaging sleeves as its direct material, while other types may also include nylon, felt, and/or the ingredients for the proprietary handgrip. Direct labor and manufacturing overhead are used to test, weigh, and sound-match the drumsticks into pairs. Thus, at the end of the accounting period, there are two work in process inventories: one in the shaping department and one in the packaging department. Direct materials are added at the beginning of the process in both the shaping and packaging departments, so the work in process inventory for those departments is 100% complete with regard to materials, but it is not complete with regard to conversion costs. If the units were 100% complete with regard to conversion costs, then they would have been transferred to the next department. Link to Learning Management needs to understand its costs in order to set prices, budget for the upcoming year, and evaluate performance. Sometimes individuals become managers due to their knowledge of the production process but not necessarily the costs. Managers can view this information on the importance of identifying prime and conversion costs from Investopedia, a resource for managers. 5.3 Explain and Compute Equivalent Units and Total Cost of Production in an Initial Processing Stage As described previously, process costing can have more than one work in process account. Determining the value of the work in process inventory accounts is challenging because each product is at varying stages of completion and the computation needs to be done for each department.
Trying to determine the value of those partial stages of completion requires application of the equivalent unit computation. The equivalent unit computation determines the number of units if each is manufactured in its entirety before manufacturing the next unit. For example, forty units that are 25% complete would be ten (40 × 25%) units that are totally complete. Direct material is added in stages, such as the beginning, middle, or end of the process, while conversion costs are expensed evenly over the process. Often there is a different percentage of completion for materials than there is for labor. For example, if material is added at the beginning of the process, forty units that are 100% complete with respect to material and 25% complete with respect to conversion costs would be the same as forty equivalent units of material and ten (40 × 25%) equivalent units of conversion costs. For example, during the month of July, Rock City Percussion purchased raw material inventory of $25,000 for the shaping department. Although each department tracks the direct material it uses in its own department, all material is held in the material storeroom. The inventory will be requisitioned for each department as needed. During the month, Rock City Percussion’s shaping department requested $10,179 in direct material and started into production 8,700 hickory drumsticks of size 5A. There was no beginning inventory in the shaping department, and 7,500 drumsticks were completed in that department and transferred to the packaging department. Wood is the only direct material in the shaping department, and it is added at the beginning of the process, so the work in process (WIP) is considered to be 100% complete with respect to direct materials. At the end of the month, the drumsticks still in the shaping department were estimated to be 35% complete with respect to conversion costs. All materials are added at the beginning of the shaping process. While beginning the size 5A drumsticks, the shaping department incurred direct material, direct labor, and overhead costs in July. These costs are then used to calculate the equivalent units and total production costs in a four-step process. Step One: Determining the Units to Which Costs Will Be Assigned In addition to the equivalent units, it is necessary to track the units completed as well as the units remaining in ending inventory. A similar process is used to account for the costs completed and transferred. Reconciling the number of units and the costs is part of the process costing system. The reconciliation involves the total of beginning inventory and units started into production. This total is called “units to account for,” while the total of beginning inventory costs and costs added to production is called “costs to be accounted for.” Knowing the total units or costs to account for is helpful since it also equals the units or costs transferred out plus the amount remaining in ending inventory. When the new batch of hickory sticks was started on July 1, Rock City Percussion did not have any beginning inventory and started 8,700 units, so the total number of units to account for in the reconciliation is 8,700. The shaping department completed 7,500 units and transferred them to the packaging department. No units were lost to spoilage , which consists of any units that are not fit for sale due to breakage or other imperfections. Since the maximum number of units that could possibly be completed is 8,700, the number of units in the shaping department’s ending inventory must be 1,200.
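Because the step one reconciliation is simple addition and subtraction, it can be written as a self-checking computation. A minimal sketch using the shaping department’s July figures:

```python
# Step One as arithmetic: units to account for must equal units accounted for.
# Figures are the shaping department's July numbers from the text.
beginning_wip = 0
units_started = 8_700
units_to_account_for = beginning_wip + units_started   # 8,700 "units to account for"

units_transferred_out = 7_500
ending_wip = units_to_account_for - units_transferred_out  # 1,200 still in shaping

assert units_transferred_out + ending_wip == units_to_account_for
```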
The total of the 7,500 units completed and transferred out and the 1,200 units in ending inventory equal the 8,700 possible units in the shaping department. Step Two: Computing the Equivalent Units of Production All of the materials have been added to the shaping department, but all of the conversion elements have not; the numbers of equivalent units for material costs and for conversion costs remaining in ending inventory are different. All of the units transferred to the next department must be 100% complete with regard to that department’s cost or they would not be transferred. So the number of units transferred is the same for material units and for conversion units. The process cost system must calculate the equivalent units of production for units completed (with respect to materials and conversion) and for ending work in process with respect to materials and conversion. For the shaping department, the units are 100% complete with regard to materials costs and 35% complete with regard to conversion costs. The 7,500 units completed and transferred out to the packaging department must be 100% complete with regard to materials and conversion, so they make up 7,500 (7,500 × 100%) units. The 1,200 ending work in process units are 100% complete with regard to material and have 1,200 (1,200 × 100%) equivalent units for material. The 1,200 ending work in process units are only 35% complete with regard to conversion costs and represent 420 (1,200 × 35%) equivalent units. Step Three: Determining the Cost per Equivalent Unit Once the equivalent units for materials and conversion are known, the cost per equivalent unit is computed in a similar manner as the units accounted for. The costs for material and conversion need to reconcile with the total beginning inventory and the costs incurred for the department during that month. The total materials cost for the period (including any beginning inventory costs) is computed and divided by the equivalent units for materials. The same process is then completed for the total conversion costs. The total of the cost per unit for material ($1.17) and for conversion costs ($2.80) is the total cost of each unit transferred to the packaging department ($3.97). Step Four: Allocating the Costs to the Units Transferred Out and Partially Completed in the Shaping Department Now you can determine the cost of the units transferred out and the cost of the units still in process in the shaping department. To calculate the goods transferred out, simply take the units transferred out times the sum of the two equivalent unit costs (materials and conversion) because all items transferred to the next department are complete with respect to materials and conversion, so each unit brings all its costs. But the ending WIP value is determined by taking the product of the work in process material units and the cost per equivalent unit for materials plus the product of the work in process conversion units and the cost per equivalent unit for conversion. This information is accumulated in a production cost report . This report shows the costs used in the preparation of a product, including the cost per unit for materials and conversion costs, and the amount of work in process and finished goods inventory. A complete production cost report for the shaping department is illustrated in Figure 5.6 .
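Steps two through four are a handful of multiplications and divisions, so they translate directly into a short sketch. This is illustrative only: the $10,179 material cost comes from the text, while the $22,176 total conversion cost is an assumed figure backed out of the chapter’s $2.80 per equivalent unit.

```python
def department_summary(transferred_out, ending_wip, pct_converted,
                       materials_cost, conversion_cost):
    # Step Two: equivalent units (materials added at the start of the process).
    eu_materials = transferred_out + ending_wip                   # 8,700
    eu_conversion = transferred_out + ending_wip * pct_converted  # 7,920

    # Step Three: cost per equivalent unit.
    cpu_materials = materials_cost / eu_materials                 # $1.17
    cpu_conversion = conversion_cost / eu_conversion              # $2.80

    # Step Four: allocate costs to units transferred out and to ending WIP.
    cost_transferred = transferred_out * (cpu_materials + cpu_conversion)
    ending_wip_value = (ending_wip * cpu_materials
                        + ending_wip * pct_converted * cpu_conversion)
    return round(cost_transferred, 2), round(ending_wip_value, 2)

print(department_summary(7_500, 1_200, 0.35, 10_179, 22_176))
# (29775.0, 2580.0) -> $29,775 transferred out, $2,580 left in ending WIP
```

The same function, rerun with the Kyler Industries figures in the Your Turn that follows, answers that exercise as well.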
Your Turn Calculating Inventory Transferred and Work in Process Costs Kyler Industries started a new batch of paint on October 1. The new batch consists of 8,700 cans of paint, of which 7,500 were completed and transferred to finished goods. During October, the manufacturing process recorded the following expenses: direct materials of $10,353; direct labor of $17,970; and applied overhead of $9,000. The inventory still in process is 100% complete with respect to materials and 30% complete with respect to conversion. What is the cost of inventory transferred out and work in process? Assume that there is no beginning work in process inventory. Solution: Apply the same four steps shown for the shaping department; the sketch above can be rerun with these figures. 5.4 Explain and Compute Equivalent Units and Total Cost of Production in a Subsequent Processing Stage In many production departments, units are typically transferred from the initial stage to the next stage in the process. When the units are transferred, the accumulated cost per unit is transferred along with them. Since the unit being produced includes work from all of the prior departments, the transferred-in cost is the cost of the work performed in all earlier departments. When the hickory size 5A drumsticks have completed the shaping process, they are transferred to the packaging department along with the inventory costs of $29,775. The inventory costs of $29,775 were $8,775 for materials and $21,000 for conversion costs and were calculated in Figure 5.6 . During the month of July, Rock City Percussion purchased raw material inventory of $2,000 for the packaging department. As with the shaping department, the packaging department tracks its costs and requisitions the raw material from the material storeroom. The packaging department has computed direct material costs of $2,000, direct labor costs of $13,000, and applied overhead of $9,100, for a total of $22,100 in conversion costs (direct labor plus overhead). Equivalent units are computed for this department, and a new cost per unit is computed. As with calculating the equivalent units and total cost of production in the initial processing stage, there are four steps for calculating these costs in a subsequent processing stage. Step One: Determining the Stage 2 Units to Which Costs Will Be Assigned In the initial manufacturing department, there is beginning inventory, and units are started in production. In subsequent stages, instead of starting new units, units are transferred in from the prior department, but the accounting process is the same. Returning to the example, Rock City Percussion had a beginning inventory of 750 units in the packaging department. When the 7,500 sticks are transferred into the packaging department from the shaping department, the total number of units to account for in the reconciliation is 8,250, which is the total of the beginning WIP and the units transferred in. The reconciliation of units to account for is the same for each department. The units that were completed and transferred out plus the ending inventory equal the total units to account for. The packaging department for Rock City Percussion completed 6,500 units and transferred them into finished goods inventory. Since the maximum number of units that could possibly be completed is 8,250 and no units were lost to spoilage, the number of units in the packaging department’s ending inventory must be 1,750. The total of the 6,500 units completed and transferred out and the 1,750 units in ending inventory equal the 8,250 possible units in the packaging department. Step Two: Computing the Stage 2 Equivalent Units of Production The only direct material added in the packaging department for the 5A sticks is packaging.
The packaging materials are added at the beginning of the process, so all the materials have been added before the units are transferred out, but all of the conversion elements have not. As a result, the number of equivalent units for material costs and for conversion costs remaining in ending inventory is different for the packaging department. As you’ve learned, all of the units transferred to the next department must be 100% complete with regard to that department’s cost, or they would not be transferred. The process cost system must calculate the equivalent units of production for units completed (with respect to materials and conversion) and for ending WIP with respect to materials and conversion. For the packaging department, the units are 100% complete with regard to materials costs and 40% complete with regard to conversion costs. The 6,500 units completed and transferred out to finished goods must be 100% complete with regard to materials and conversion, so they make up 6,500 (6,500 × 100%) units. The 1,750 ending WIP units are 100% complete with regard to material and have 1,750 (1,750 × 100%) equivalent units for material. The 1,750 ending WIP units are only 40% complete with regard to conversion costs and represent 700 (1,750 × 40%) equivalent units. Step Three: Determining the Stage 2 Cost per Equivalent Unit Once the equivalent units for materials and conversion are known for the packaging department, the cost per equivalent unit is computed in a manner similar to the calculation for the units accounted for. The costs for material and conversion need to reconcile with the department’s beginning inventory and the costs incurred for the department during that month. The total materials costs for the period (including any beginning inventory costs) are computed and divided by the equivalent units for materials. The same process is then completed for the total conversion costs. The total of the cost per unit for materials ($1.50) and for conversion costs ($6.90) is the total cost of each unit transferred to finished goods inventory ($8.40). Step Four: Allocating the Costs to the Units in the Packaging Department Now you can determine the cost of the units transferred out and the cost of the units still in process in the packaging department. For the goods transferred out, simply take the units transferred out times the sum of the two equivalent unit costs (materials and conversion) because all items transferred to the next department are complete with respect to materials and conversion, so each unit brings all its costs. But the ending WIP value is determined by taking the product of the work in process materials units and the cost per equivalent unit for materials plus the product of the work in process conversion units and the cost per equivalent unit for conversion. Link to Learning Knowing the cost to produce a unit is critical to management’s decisions. Sometimes that knowledge leads to management’s decision to stop production, but sometimes that decision isn’t as simple as it seems. The cost to produce a penny is more than one cent, and yet, the United States still makes pennies. See this article from Forbes that explains the difference among cost , worth , and value to learn more.
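As with the shaping department, step four reduces to a few lines of arithmetic. A brief sketch, using only the per-unit costs and unit counts given above for the packaging department:

```python
# Step Four for the packaging department, using figures given in the text:
# $1.50 materials and $6.90 conversion per equivalent unit, 6,500 units
# transferred out, and 1,750 ending WIP units that are 40% converted.
cpu_materials, cpu_conversion = 1.50, 6.90
transferred_out, ending_wip, pct_converted = 6_500, 1_750, 0.40

cost_transferred = transferred_out * (cpu_materials + cpu_conversion)
ending_wip_value = (ending_wip * cpu_materials
                    + ending_wip * pct_converted * cpu_conversion)
print(round(cost_transferred, 2), round(ending_wip_value, 2))
# 54600.0 7455.0 -> $54,600 to finished goods, $7,455 left in packaging WIP
```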
5.5 Prepare Journal Entries for a Process Costing System Calculating the costs associated with the various processes within a process costing system is only a part of the accounting process. Journal entries are used to record and report the financial information relating to the transactions. The example that follows illustrates how the journal entries reflect the process costing system by recording the flow of goods and costs through the process costing environment. Purchased Materials for Multiple Departments Each department within Rock City Percussion has a separate work in process inventory account. Raw materials totaling $33,500 were ordered prior to being requisitioned by each department: $25,000 for the shaping department and $8,500 for the packaging department. The July 1 journal entry to record the purchases on account is: Direct Materials Requisitioned by the Shaping and Packaging Departments and Indirect Material Used During July, the shaping department requisitioned $10,179 in direct material. Similar to job order costing, indirect material costs are accumulated in the manufacturing overhead account. The overhead costs are applied to each department based on a predetermined overhead rate. In the example, assume that there was an indirect material cost for water of $400 in July that will be recorded as manufacturing overhead. The journal entry to record the requisition and usage of direct materials and overhead is: During July, the packaging department requisitioned $2,000 in direct material, and overhead costs for indirect material totaled $300 for the month of July. The journal entry to record the requisition and usage of materials is: Direct Labor Paid by All Production Departments During July, the shaping department incurred $15,000 in direct labor costs and $600 in indirect labor. The journal entry to record the labor costs is: During July, the packaging department incurred $13,000 of direct labor costs and indirect labor of $1,000. The journal entry to record the labor costs is: Applied Manufacturing Overhead to All Production Departments Manufacturing overhead includes indirect material, indirect labor, and other types of manufacturing overhead. It is difficult, if not impossible, to trace manufacturing overhead to a specific product, and yet the total cost per unit needs to include overhead in order to make management decisions. Overhead costs are accumulated in a manufacturing overhead account and applied to each department on the basis of a predetermined overhead rate. Properly allocating overhead to each department depends on finding an activity that provides a fair basis for the allocation. It needs to be an activity common to each department and influential in driving the cost of manufacturing overhead. In traditional costing systems, the most common activities used are machine hours, direct labor in dollars, or direct labor in hours. If the number of machine hours can be related to the manufacturing overhead, the overhead can be applied to each department based on the machine hours. The formula for overhead allocation is: applied overhead = predetermined overhead rate × actual amount of the activity base used, where the predetermined overhead rate is the total estimated overhead divided by the total estimated activity base. Rock City Percussion determined that machine hours is the appropriate base to use when allocating overhead. The estimated annual overhead cost is $340,000 per year.
It was also estimated that the total machine hours will be 34,000 hours, so the allocation rate is computed as: $340,000 ÷ 34,000 machine hours = $10 per machine hour. The shaping department used 700 machine hours, and with an overhead application rate of $10 per machine hour, the journal entry to record the overhead allocation is: The packaging department used 910 machine hours, and with an overhead application rate of $10 per machine hour, the journal entry to record the overhead allocation is: Transferred Costs of Finished Goods from the Shaping Department to the Packaging Department When the units are transferred from the shaping department to the packaging department, they are transferred at $3.97 per unit, as calculated previously. The amount transferred from the shaping department is the same amount listed on the production cost report in Figure 5.6 . The journal entry is: Transferred Goods from the Packaging Department to Finished Goods The computation of inventory for the packaging department is shown in Figure 5.7 . The value of the inventory transferred to finished goods in the production cost report is the same as in the journal entry: Recording the Cost of Goods Sold Out of the Finished Goods Inventory Each unit is a package of two drumsticks that cost $8.40 to make and sells for $24.99. There are two transactions when recording a sale. One entry is to transfer the inventory from finished goods inventory to cost of goods sold and is at the cost of the product. The second transaction is to record the sale at the sales price. The compound entry to record both transactions for the sale of 500 units on account is: Link to Learning The importance of properly recording the production process is illustrated in this report on work in process inventory from InventoryOps.com.
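Several of the entries above share the same underlying arithmetic, which a compact sketch can summarize. This is a summary of the computations only, not a posting routine; the journal entries themselves appear in the chapter’s exhibits.

```python
# Arithmetic behind the overhead application and the sale, using this
# section's figures.
estimated_overhead = 340_000
estimated_machine_hours = 34_000
rate = estimated_overhead / estimated_machine_hours   # $10 per machine hour

machine_hours = {"shaping": 700, "packaging": 910}
applied = {dept: hours * rate for dept, hours in machine_hours.items()}
print(applied)   # {'shaping': 7000.0, 'packaging': 9100.0}

# Sale of 500 two-stick packages: one entry at cost, one at the sales price.
units_sold, unit_cost, unit_price = 500, 8.40, 24.99
print(round(units_sold * unit_cost, 2))    # 4200.0  -> cost of goods sold
print(round(units_sold * unit_price, 2))   # 12495.0 -> sales revenue on account
```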
Summary 14.1 Explain the Process of Securing Equity Financing through the Issuance of Stock The process of forming a corporation involves several steps, which result in a legal entity that can issue stock, enter into contracts, buy and sell assets, and borrow funds. The corporate form has several advantages, which include the ability to function as a separate legal entity, limited liability, transferable ownership, continuing existence, and ease of raising capital. The disadvantages of operating as a corporation include the costs of organization, regulation, and potential double taxation. There are a number of considerations when choosing whether to finance with debt or equity as a means to raise capital, including dilution of ownership, the repayment obligation, the cash obligation, budgeting reliability, cost savings, and the risk assessment by creditors. The Securities and Exchange Commission regulates large and small public corporations. There are key differences between public corporations that experience an IPO and private corporations. A corporation’s shares continue to be bought and sold by the public in the secondary market after an IPO. The process of marketing a company’s stock involves several steps. Capital stock consists of two classes of stock—common and preferred, each providing the company with the ability to attract capital from investors. Shares of stock are categorized as authorized, issued, and outstanding. Shares of stock are measured based on their market or par value. Some stock is no-par, which may carry a stated value. A company’s primary class of stock issued is common stock, and each share represents a partial claim to ownership or a share of the company’s business. Common shareholders have four rights: the right to vote, the right to share in corporate net income through dividends, the right to share in any distribution of assets upon liquidation, and a preemptive right. Preferred stock, by definition, has preferred characteristics, which are more advantageous to shareholders than those of common stock. These include dividend preferences such as cumulative and participating and a preference for asset distribution upon liquidation. These shares can also be callable or convertible. 14.2 Analyze and Record Transactions for the Issuance and Repurchase of Stock The initial issuance of common stock reflects the sale of the first stock by a corporation. Common stock issued for cash at a price above par value creates an additional paid-in capital account for the excess of the issue price over the par value. Stock issued in exchange for property or services is recorded at the fair market value of the stock or the asset or services received, whichever is more clearly determinable. Stock with a stated value is treated as if the stated value is a par value. The entire issue price of no-par stock with no stated value is credited to the capital stock account. Preferred stock issued above its par or stated value creates an additional paid-in capital account for the excess of the issue price over the par value. A corporation reports a stock’s par or stated value, the number of shares authorized, issued, and outstanding, and, if preferred, the dividend rate on the face of the balance sheet. Treasury stock is a corporation’s stock that the corporation purchased back. A company may buy back its stock for strategic purposes against competitors, to create demand, or to use for employee stock option plans.
The acquisition of treasury stock creates a contra equity account, Treasury Stock, reported in the stockholders’ equity section of the balance sheet. When a corporation reissues its treasury stock at an amount above the cost, it generates a credit to the Additional Paid-in Capital from Treasury stock account. When a corporation reissues its treasury stock at an amount below cost, the Additional Paid-in Capital from Treasury stock account is reduced first, then any excess is debited to Retained Earnings. 14.3 Record Transactions and the Effects on Financial Statements for Cash Dividends, Property Dividends, Stock Dividends, and Stock Splits Dividends are a distribution of corporate earnings, though some companies reinvest earnings rather than declare dividends. There are three dividend dates: date of declaration, date of record, and date of payment. Cash dividends are accounted for as a reduction of retained earnings and create a liability when declared. When dividends are declared and a company has only common stock issued, the reduction of retained earnings is the amount per share times the number of outstanding shares. A property dividend occurs when a company declares and distributes assets other than cash. They are recorded at the fair market value of the asset being distributed. A stock dividend is a distribution of shares of stock to existing shareholders in lieu of a cash dividend. A small stock dividend occurs when a stock dividend distribution is less than 25% of the total outstanding shares based on the outstanding shares prior to the dividend distribution. The entry requires a decrease to Retained Earnings for the market value of the shares to be distributed. A large stock dividend involves a distribution of stock to existing shareholders that is larger than 25% of the total outstanding shares just before the distribution. The journal entry requires a decrease to Retained Earnings and a credit to Stock Dividends Distributable for the par or stated value of the shares to be distributed. Some corporations employ stock splits to keep their stock price competitive in the market. A traditional stock split occurs when a company’s board of directors issues new shares to existing shareholders in place of the old shares by increasing the number of shares and reducing the par value of each share. 14.4 Compare and Contrast Owners’ Equity versus Retained Earnings Owner’s equity reflects an owner’s investment value in a company. The three forms of business utilize different accounts and transactions relative to owners’ equity. Retained earnings is the primary component of a company’s earned capital. It generally consists of the cumulative net income minus any cumulative losses less dividends declared. A statement of retained earnings shows the changes in the retained earnings account during the period. Restricted retained earnings is the portion of a company’s earnings that has been designated for a particular purpose due to legal or contractual obligations. A company’s board of directors may designate a portion of a company’s retained earnings for a particular purpose such as future expansion, special projects, or as part of a company’s risk management plan. The amount designated is classified as appropriated retained earnings. The statement of stockholders’ equity provides the changes between the beginning and ending balances of each of the stockholders’ equity accounts, including retained earnings. 
Prior period adjustments are corrections of errors that occurred on previous periods’ financial statements. They are reported on a company’s statement of retained earnings as an adjustment to the beginning balance. 14.5 Discuss the Applicability of Earnings per Share as a Method to Measure Performance Earnings per share (EPS) measures the portion of a corporation’s profit allocated to each outstanding share of common stock. EPS is calculated by dividing the profit earned for common shareholders by the weighted average common shares of stock outstanding. Because EPS is a key profitability measure that both current and potential common stockholders monitor, it is important to understand how to interpret it.
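The EPS computation can be illustrated with a short sketch. The amounts below are hypothetical, chosen only to show the mechanics of the formula, including the weighting of shares issued partway through the year:

```python
# EPS = (net income - preferred dividends) / weighted average common shares.
# Hypothetical amounts; not drawn from the chapter.

net_income = 420_000
preferred_dividends = 20_000

# 100,000 shares outstanding all year; 30,000 more issued on October 1,
# so the new shares are weighted for the 3 months they were outstanding.
weighted_average_shares = 100_000 * (12 / 12) + 30_000 * (3 / 12)   # 107,500

eps = (net_income - preferred_dividends) / weighted_average_shares
print(f"EPS = ${eps:.2f}")   # EPS = $3.72
```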
Chapter Outline 14.1 Explain the Process of Securing Equity Financing through the Issuance of Stock 14.2 Analyze and Record Transactions for the Issuance and Repurchase of Stock 14.3 Record Transactions and the Effects on Financial Statements for Cash Dividends, Property Dividends, Stock Dividends, and Stock Splits 14.4 Compare and Contrast Owners’ Equity versus Retained Earnings 14.5 Discuss the Applicability of Earnings per Share as a Method to Measure Performance Why It Matters Chad and Rick have experienced resounding success operating their three Mexican restaurants named La Cantina. They are now ready to expand and open two more restaurants. The partners realize this will require significant funds for leasing locations, purchasing and installing equipment, and setting up operations. They have tentatively decided to form a new corporation for their future restaurant operations. The partners researched some of the characteristics of corporations and have learned that a corporation can sell shares of stock in exchange for funds to finance its operations and buy new equipment. The sale of shares will dilute the partners’ ownership interest in the restaurants but will enable them to finance the expansion without borrowing any money. Chad and Rick are not ready to go public with the offering of their shares because the three current restaurants are not widely recognized. A public offering of the shares in a corporation is typically done when a company is recognized and investment banks and venture capitalists can create enough interest for a large number of investors. When a corporation is starting up, its shares are typically sold to friends and family, and then to angel investors. Many successful companies, like Amazon and Dell, started this way. Partners Chad and Rick locate possible investors and then share their restaurant’s financial information and business plan. The investors will not participate in management or work at the restaurants, but they will be stockholders along with Chad and Rick. Stockholders own part of the corporation by holding ownership in shares of the corporation’s stock. The corporate form of business will enable Chad, Rick, and other shareholders to minimize their liability. The most that the investors can lose is the amount they have invested in the corporation. In addition, Chad and Rick will be able to receive a salary from the new corporation because they will manage the operations, and all of the shareholders will be able to share in the corporation’s profits through the receipt of dividends.
[ { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "A company ’ s charter may authorize more than one class of stock . <hl> Preferred stock has unique rights that are “ preferred , ” or more advantageous , to shareholders than common stock . <hl> The classification of preferred stock is often a controversial area in accounting as some researchers believe preferred stock has characteristics closer to that of a stock / bond hybrid security , with characteristics of debt rather than a true equity item . <hl> For example , unlike common stockholders , preferred shareholders typically do not have voting rights ; in this way , they are similar to bondholders . <hl> <hl> In addition , preferred shares do not share in the common stock dividend distributions . <hl> Instead , the “ preferred ” classification entitles shareholders to a dividend that is fixed ( assuming sufficient dividends are declared ) , similar to the fixed interest rate associated with bonds and other debt items . Preferred stock also mimics debt in that preferred shareholders have a priority of dividend payments over common stockholders . While there may be characteristics of both debt and equity , preferred stock is still reported as part of stockholders ’ equity on the balance sheet .", "hl_sentences": "Preferred stock has unique rights that are “ preferred , ” or more advantageous , to shareholders than common stock . For example , unlike common stockholders , preferred shareholders typically do not have voting rights ; in this way , they are similar to bondholders . In addition , preferred shares do not share in the common stock dividend distributions .", "question": { "cloze_format": "___ is not a characteristic that sets preferred stock apart from common stock.", "normal_format": "Which of the following is not a characteristic that sets preferred stock apart from common stock?", "question_choices": [ "voting rights", "dividend payments", "transferability", "ownership" ], "question_id": "fs-idm265626000", "question_text": "Which of the following is not a characteristic that sets preferred stock apart from common stock?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "has been sold to investors" }, "bloom": null, "hl_context": "Chad and Rick have successfully incorporated La Cantina and are ready to issue common stock to themselves and the newly recruited investors . The proceeds will be used to open new locations . The corporate charter of the corporation indicates that the par value of its common stock is $ 1.50 per share . <hl> When stock is sold to investors , it is very rarely sold at par value . <hl> <hl> Most often , shares are issued at a value in excess of par . <hl> <hl> This is referred to as issuing stock at a premium . <hl> Stock with no par value that has been assigned a stated value is treated very similarly to stock with a par value .", "hl_sentences": "When stock is sold to investors , it is very rarely sold at par value . Most often , shares are issued at a value in excess of par . This is referred to as issuing stock at a premium .", "question": { "cloze_format": "Issued stock is defined as stock that ________.", "normal_format": "What is the definition of Issued stock?", "question_choices": [ "is available for sale", "that is held by the corporation", "has been sold to investors", "has no voting rights" ], "question_id": "fs-idm265620448", "question_text": "Issued stock is defined as stock that ________." 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "value assigned by the incorporation documents" }, "bloom": null, "hl_context": "<hl> Most corporate charters specify the par value assigned to each share of stock . <hl> This value is printed on the stock certificates and is often referred to as a face value because it is printed on the “ face ” of the certificate . <hl> Incorporators typically set the par value at a very small arbitrary amount because it is used internally for accounting purposes and has no economic significance . <hl> Because par value often has some legal significance , it is considered to be legal capital . In some states , par value is the minimum price at which the stock can be sold . If for some reason a share of stock with a par value of one dollar was issued for less than its par value of one dollar known as issuing at a stock discount , the shareholder could be held liable for the difference between the issue price and the par value if liquidation occurs and any creditors remain unpaid . <hl> Stock Values Two of the most important values associated with stock are market value and par value . <hl> The market value of stock is the price at which the stock of a public company trades on the stock market . This amount does not appear in the corporation ’ s accounting records , nor in the company ’ s financial statements .", "hl_sentences": "Most corporate charters specify the par value assigned to each share of stock . Incorporators typically set the par value at a very small arbitrary amount because it is used internally for accounting purposes and has no economic significance . Stock Values Two of the most important values associated with stock are market value and par value .", "question": { "cloze_format": "Par value of a stock refers to the ________.", "normal_format": "To what refers par value of a stock?", "question_choices": [ "issue price of a stock", "value assigned by the incorporation documents", "maximum selling price of a stock", "dividend to be paid by the corporation" ], "question_id": "fs-idm265609856", "question_text": "Par value of a stock refers to the ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> Regulate securities markets <hl> <hl> Enforce federal securities laws <hl> <hl> Facilitate capital information <hl> <hl> Inform and protect investors <hl> The Securities and Exchange Commission ( SEC ) ( www.sec.gov ) is a government agency that regulates large and small public corporations . <hl> Its mission is “ to protect investors , maintain fair , orderly , and efficient markets , and facilitate capital formation . ” 3 The SEC identifies these as its five primary responsibilities : 3 U . S . Securities and Exchange Commission . <hl> “ What We Do . ” June 10 , 2013 . https://www.sec.gov/Article/whatwedo.html", "hl_sentences": "Regulate securities markets Enforce federal securities laws Facilitate capital information Inform and protect investors Its mission is “ to protect investors , maintain fair , orderly , and efficient markets , and facilitate capital formation . ” 3 The SEC identifies these as its five primary responsibilities : 3 U . S . 
Securities and Exchange Commission .", "question": { "cloze_format": "___ is not one of the five primary responsibilities of the Securities and Exchange Commission (the SEC).", "normal_format": "Which of the following is not one of the five primary responsibilities of the Securities and Exchange Commission (the SEC)?", "question_choices": [ "inform and protect investors", "regulate securities law", "facilitate capital formation", "assure that dividends are paid by corporations" ], "question_id": "fs-idm265604928", "question_text": "Which of the following is not one of the five primary responsibilities of the Securities and Exchange Commission (the SEC)?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "common stock" }, "bloom": null, "hl_context": "<hl> A company ’ s primary class of stock issued is common stock , and each share represents a partial claim to ownership or a share of the company ’ s business . <hl> For many companies , this is the only class of stock they have authorized . Common stockholders have four basic rights .", "hl_sentences": "A company ’ s primary class of stock issued is common stock , and each share represents a partial claim to ownership or a share of the company ’ s business .", "question": { "cloze_format": "When a C corporation has only one class of stock it is referred to as ________.", "normal_format": "What stock is referred to when a C corporation has only one class of stock? ", "question_choices": [ "stated value stock", "par value stock", "common stock", "preferred stock" ], "question_id": "fs-idm281923056", "question_text": "When a C corporation has only one class of stock it is referred to as ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> The corporate charter specifies the number of authorized shares , which is the maximum number of shares that a corporation can issue to its investors as approved by the state in which the company is incorporated . <hl> Once shares are sold to investors , they are considered issued shares . Shares that are issued and are currently held by investors are called outstanding shares because they are “ out ” in the hands of investors . Occasionally , a company repurchases shares from investors . While these shares are still issued , they are no longer considered to be outstanding . These repurchased shares are called treasury stock .", "hl_sentences": "The corporate charter specifies the number of authorized shares , which is the maximum number of shares that a corporation can issue to its investors as approved by the state in which the company is incorporated .", "question": { "cloze_format": "The number of shares that a corporation’s incorporation documents allow it to sell is referred to as ________.", "normal_format": "What is the number of shares that a corporation’s incorporation documents allow it to sell referred to?", "question_choices": [ "issued stock", "outstanding stock", "common stock", "authorized stock" ], "question_id": "fs-idm271671264", "question_text": "The number of shares that a corporation’s incorporation documents allow it to sell is referred to as ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "referred to as paid-in capital" }, "bloom": null, "hl_context": "The company plans to issue most of the shares in exchange for cash , and other shares in exchange for kitchen equipment provided to the corporation by one of the new investors . 
Two common accounts in the equity section of the balance sheet are used when issuing stock — Common Stock and Additional Paid-in Capital from Common Stock . Common Stock consists of the par value of all shares of common stock issued . <hl> Additional paid-in capital from common stock consists of the excess of the proceeds received from the issuance of the stock over the stock ’ s par value . <hl> When a company has more than one class of stock , it usually keeps a separate additional paid-in capital account for each class .", "hl_sentences": "Additional paid-in capital from common stock consists of the excess of the proceeds received from the issuance of the stock over the stock ’ s par value .", "question": { "cloze_format": "The total amount of cash and other assets received by a corporation from the stockholders in exchange for the shares is ________.", "normal_format": "The total amount of cash and other assets received by a corporation from the stockholders in exchange for the shares is which of the following?", "question_choices": [ "always equal to par value", "referred to as retained earnings", "always below its stated value", "referred to as paid-in capital" ], "question_id": "fs-idm359426272", "question_text": "The total amount of cash and other assets received by a corporation from the stockholders in exchange for the shares is ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> Stock can be issued in exchange for cash , property , or services provided to the corporation . <hl> For example , an investor could give a delivery truck in exchange for a company ’ s stock . Another investor could provide legal fees in exchange for stock . The general rule is to recognize the assets received in exchange for stock at the asset ’ s fair market value .", "hl_sentences": "Stock can be issued in exchange for cash , property , or services provided to the corporation .", "question": { "cloze_format": "Stock can be issued for all except ___.", "normal_format": "Stock can be issued for all except which of the following?", "question_choices": [ "accounts payable", "state income tax payments", "property such as a delivery truck", "services provided to the corporation such as legal fees" ], "question_id": "fs-idm364456624", "question_text": "Stock can be issued for all except which of the following?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "A company ’ s board of directors has the power to formally vote to declare dividends . <hl> The date of declaration is the date on which the dividends become a legal liability , the date on which the board of directors votes to distribute the dividends . <hl> Cash and property dividends become liabilities on the declaration date because they represent a formal obligation to distribute economic resources ( assets ) to stockholders . 
On the other hand , stock dividends distribute additional shares of stock , and because stock is part of equity and not an asset , stock dividends do not become liabilities when declared .", "hl_sentences": "The date of declaration is the date on which the dividends become a legal liability , the date on which the board of directors votes to distribute the dividends .", "question": { "cloze_format": "The date the board of directors votes to declare and pay a cash dividend is called the ___.", "normal_format": "What is called the date the board of directors votes to declare and pay a cash dividend?", "question_choices": [ "date of stockholder’s meeting", "date of payment", "date of declaration", "date of liquidation" ], "question_id": "fs-idm355957472", "question_text": "The date the board of directors votes to declare and pay a cash dividend is called the:" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "It does not affect total equity but transfers amounts between equity components." }, "bloom": null, "hl_context": "<hl> In comparing the stockholders ’ equity section of the balance sheet before and after the large stock dividend , we can see that the total stockholders ’ equity is the same before and after the stock dividend , just as it was with a small dividend ( Figure 14.10 ) . <hl> <hl> There is no change in total assets , total liabilities , or total stockholders ’ equity when a small stock dividend , a large stock dividend , or a stock split occurs . <hl> Both types of stock dividends impact the accounts in stockholders ’ equity . A stock split causes no change in any of the accounts within stockholders ’ equity . The impact on the financial statement usually does not drive the decision to choose between one of the stock dividend types or a stock split . Instead , the decision is typically based on its effect on the market . Large stock dividends and stock splits are done in an attempt to lower the market price of the stock so that it is more affordable to potential investors . A small stock dividend is viewed by investors as a distribution of the company ’ s earnings . Both small and large stock dividends cause an increase in common stock and a decrease to retained earnings . This is a method of capitalizing ( increasing stock ) a portion of the company ’ s earnings ( retained earnings ) .", "hl_sentences": "In comparing the stockholders ’ equity section of the balance sheet before and after the large stock dividend , we can see that the total stockholders ’ equity is the same before and after the stock dividend , just as it was with a small dividend ( Figure 14.10 ) . There is no change in total assets , total liabilities , or total stockholders ’ equity when a small stock dividend , a large stock dividend , or a stock split occurs .", "question": { "cloze_format": "It is true of a stock dividend that ___.", "normal_format": "Which of the following is true of a stock dividend?", "question_choices": [ "It is a liability.", "The decision to issue a stock dividend resides with shareholders.", "It does not affect total equity but transfers amounts between equity components.", "It creates a cash reserve for shareholders." ], "question_id": "fs-idm367178320", "question_text": "Which of the following is true of a stock dividend?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> The statement of retained earnings is a subsection of the statement of stockholders ’ equity . 
<hl> While the retained earnings statement shows the changes between the beginning and ending balances of the retained earnings account during the period , the statement of stockholders ’ equity provides the changes between the beginning and ending balances of each of the stockholders ’ equity accounts , including retained earnings . The format typically displays a separate column for each stockholders ’ equity account , as shown for Clay Corporation in Figure 14.13 . The key events that occurred during the year — including net income , stock issuances , and dividends — are listed vertically . The stockholders ’ equity section of the company ’ s balance sheet displays only the ending balances of the accounts and does not provide the activity or changes during the period . <hl> The stockholders ’ equity section of the balance sheet for corporations contains two primary categories of accounts . <hl> <hl> The first is paid-in capital , or contributed capital — consisting of amounts paid in by owners . <hl> <hl> The second category is earned capital , consisting of amounts earned by the corporation as part of business operations . <hl> On the balance sheet , retained earnings is a key component of the earned capital section , while the stock accounts such as common stock , preferred stock , and additional paid-in capital are the primary components of the contributed capital section .", "hl_sentences": "The statement of retained earnings is a subsection of the statement of stockholders ’ equity . The stockholders ’ equity section of the balance sheet for corporations contains two primary categories of accounts . The first is paid-in capital , or contributed capital — consisting of amounts paid in by owners . The second category is earned capital , consisting of amounts earned by the corporation as part of business operations .", "question": { "cloze_format": "Stockholders’ equity consists of ___.", "normal_format": "Stockholders’ equity consists of which of the following?", "question_choices": [ "bonds payable", "retained earnings and accounts receivable", "retained earnings and paid-in capital", "discounts and premiums on bond payable" ], "question_id": "fs-idm418204144", "question_text": "Stockholders’ equity consists of which of the following?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "Dividends declared are added to retained earnings." }, "bloom": null, "hl_context": "<hl> Retained earnings is the primary component of a company ’ s earned capital . <hl> <hl> It generally consists of the cumulative net income minus any cumulative losses less dividends declared . <hl> A basic statement of retained earnings is referred to as an analysis of retained earnings because it shows the changes in the retained earnings account during the period . A company preparing a full set of financial statements may choose between preparing a statement of retained earnings , if the activity in its stock accounts is negligible , or a statement of stockholders ’ equity , for corporations with activity in their stock accounts . A statement of retained earnings for Clay Corporation for its second year of operations ( Figure 14.12 ) shows the company generated more net income than the amount of dividends it declared .", "hl_sentences": "Retained earnings is the primary component of a company ’ s earned capital . 
It generally consists of the cumulative net income minus any cumulative losses less dividends declared .", "question": { "cloze_format": "Retained earnings is accurately described by all of the following except by this statement: ___.", "normal_format": "Retained earnings is accurately described by all except which of the following statements?", "question_choices": [ "Retained earnings is the primary component of a company’s earned capital.", "Dividends declared are added to retained earnings.", "Net income is added to retained earnings.", "Net losses are accumulated in the retained earnings account." ], "question_id": "fs-idm393024048", "question_text": "Retained earnings is accurately described by all except which of the following statements?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> Retained earnings is often subject to certain restrictions . <hl> <hl> Restricted retained earnings is the portion of a company ’ s earnings that has been designated for a particular purpose due to legal or contractual obligations . <hl> Some of the restrictions reflect the laws of the state in which a company operates . Many states restrict retained earnings by the cost of treasury stock , which prevents the legal capital of the stock from dropping below zero . Other restrictions are contractual , such as debt covenants and loan arrangements ; these exist to protect creditors , often limiting the payment of dividends to maintain a minimum level of earned capital .", "hl_sentences": "Retained earnings is often subject to certain restrictions . Restricted retained earnings is the portion of a company ’ s earnings that has been designated for a particular purpose due to legal or contractual obligations .", "question": { "cloze_format": "If a company’s board of directors designates a portion of earnings for a particular purpose due to legal or contractual obligations, they are designated as ________.", "normal_format": "If a company’s board of directors designates a portion of earnings for a particular purpose due to legal or contractual obligations, they are designated as which of the following?", "question_choices": [ "retained earnings payable", "appropriated retained earnings", "cumulative retained earnings", "restricted retained earnings" ], "question_id": "fs-idm410948768", "question_text": "If a company’s board of directors designates a portion of earnings for a particular purpose due to legal or contractual obligations, they are designated as ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "prior period adjustments" }, "bloom": null, "hl_context": "<hl> Prior period adjustments are corrections of errors that appeared on previous periods ’ financial statements . <hl> These errors can stem from mathematical errors , misinterpretation of GAAP , or a misunderstanding of facts at the time the financial statements were prepared . Many errors impact the retained earnings account whose balance is carried forward from the previous period . Since the financial statements have already been issued , they must be corrected . The correction involves changing the financial statement amounts to the amounts they would have been had no errors occurred , a process known as restatement . The correction may impact both balance sheet and income statement accounts , requiring the company to record a transaction that corrects both . 
Since income statement accounts are closed at the end of every period , the journal entry will contain an entry to the Retained Earnings account . As such , prior period adjustments are reported on a company ’ s statement of retained earnings as an adjustment to the beginning balance of retained earnings . By directly adjusting beginning retained earnings , the adjustment has no effect on current period net income . The goal is to separate the error correction from the current period ’ s net income to avoid distorting the current period ’ s profitability . In other words , prior period adjustments are a way to go back and correct past financial statements that were misstated because of a reporting error .", "hl_sentences": "Prior period adjustments are corrections of errors that appeared on previous periods ’ financial statements .", "question": { "cloze_format": "Corrections of errors that occurred on a previous period’s financial statements are called ________.", "normal_format": "What are corrections of errors that occurred on a previous period’s financial statements called?", "question_choices": [ "restrictions", "deficits", "prior period adjustments", "restatements" ], "question_id": "fs-idm404841792", "question_text": "Corrections of errors that occurred on a previous period’s financial statements are called ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "<hl> Owners ’ equity represents the business owners ’ share of the company . <hl> <hl> It is often referred to as net worth or net assets in the financial world and as stockholders ’ equity or shareholders ’ equity when discussing businesses operations of corporations . <hl> From a practical perspective , it represents everything a company owns ( the company ’ s assets ) minus all the company owes ( its liabilities ) . While “ owners ’ equity ” is used for all three types of business organizations ( corporations , partnerships , and sole proprietorships ) , only sole proprietorships name the balance sheet account “ owner ’ s equity ” as the entire equity of the company belongs to the sole owner . Partnerships ( to be covered more thoroughly in Partnership Accounting ) often label this section of their balance sheet as “ partners ’ equity . ” All three forms of business utilize different accounting for the respective equity transactions and use different equity accounts , but they all rely on the same relationship represented by the basic accounting equation ( Figure 14.11 ) .", "hl_sentences": "Owners ’ equity represents the business owners ’ share of the company . It is often referred to as net worth or net assets in the financial world and as stockholders ’ equity or shareholders ’ equity when discussing businesses operations of corporations .", "question": { "cloze_format": "Owner’s equity represents ___.", "normal_format": "Owner’s equity represents which of the following?", "question_choices": [ "the amount of funding the company has from issuing bonds", "the sum of the retained earnings and accounts receivable account balances", "the total of retained earnings plus paid-in capital", "the business owner’s/owners’ share of the company, also known as net worth or net assets" ], "question_id": "fs-idm399518064", "question_text": "Owner’s equity represents which of the following?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "EBITDA" }, "bloom": null, "hl_context": "By removing the preferred dividends from net income , the numerator represents the profit available to common shareholders . Because preferred dividends represent the amount of net income to be distributed to preferred shareholders , this portion of the income is obviously not available for common shareholders . <hl> While there are a number of variations of measuring a company ’ s profit used in the financial world , such as NOPAT ( net operating profit after taxes ) and EBITDA ( earnings before interest , taxes , depreciation , and amortization ) , GAAP requires companies to calculate EPS based on a corporation ’ s net income , as this amount appears directly on a company ’ s income statement , which for public companies must be audited . <hl>", "hl_sentences": "While there are a number of variations of measuring a company ’ s profit used in the financial world , such as NOPAT ( net operating profit after taxes ) and EBITDA ( earnings before interest , taxes , depreciation , and amortization ) , GAAP requires companies to calculate EPS based on a corporation ’ s net income , as this amount appears directly on a company ’ s income statement , which for public companies must be audited .", "question": { "cloze_format": "___ is/are a measurement of earnings that represents the profit before interest, taxes, depreciation and amortization are subtracted.", "normal_format": "Which of the following is a measurement of earnings that represents the profit before interest, taxes, depreciation and amortization are subtracted?", "question_choices": [ "net income", "retained earnings", "EBITDA", "EPS" ], "question_id": "fs-idm268463008", "question_text": "Which of the following is a measurement of earnings that represents the profit before interest, taxes, depreciation and amortization are subtracted?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "<hl> Earnings per share ( EPS ) measures the portion of a corporation ’ s profit allocated to each outstanding share of common stock . <hl> Many financial analysts believe that EPS is the single most important tool in assessing a stock ’ s market price . A high or increasing earnings per share can drive up a stock price . Conversely , falling earnings per share can lower a stock ’ s market price . EPS is also a component in calculating the price-to-earnings ratio ( the market price of the stock divided by its earnings per share ) , which many investors find to be a key indicator of the value of a company ’ s stock .", "hl_sentences": "Earnings per share ( EPS ) measures the portion of a corporation ’ s profit allocated to each outstanding share of common stock .", "question": { "cloze_format": "___ measures that the portion of a corporation’s profit allocated to each outstanding share of common stock.", "normal_format": "Which of the following measures the portion of a corporation’s profit allocated to each outstanding share of common stock?", "question_choices": [ "retained earnings", "EPS", "EBITDA", "NOPAT" ], "question_id": "fs-idm274538944", "question_text": "Which of the following measures the portion of a corporation’s profit allocated to each outstanding share of common stock?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "NOPAT" }, "bloom": null, "hl_context": "By removing the preferred dividends from net income , the numerator represents the profit available to common shareholders . Because preferred dividends represent the amount of net income to be distributed to preferred shareholders , this portion of the income is obviously not available for common shareholders . <hl> While there are a number of variations of measuring a company ’ s profit used in the financial world , such as NOPAT ( net operating profit after taxes ) and EBITDA ( earnings before interest , taxes , depreciation , and amortization ) , GAAP requires companies to calculate EPS based on a corporation ’ s net income , as this amount appears directly on a company ’ s income statement , which for public companies must be audited . <hl>", "hl_sentences": "While there are a number of variations of measuring a company ’ s profit used in the financial world , such as NOPAT ( net operating profit after taxes ) and EBITDA ( earnings before interest , taxes , depreciation , and amortization ) , GAAP requires companies to calculate EPS based on a corporation ’ s net income , as this amount appears directly on a company ’ s income statement , which for public companies must be audited .", "question": { "cloze_format": "The measurement of earnings concept that consists of a company’s profit from operations after taxed are subtracted is ________.", "normal_format": "What is the measurement of earnings concept consisting of a company’s profit from operations after taxed are subtracted?", "question_choices": [ "ROI", "EPS", "EBITDA", "NOPAT" ], "question_id": "fs-idm268951680", "question_text": "The measurement of earnings concept that consists of a company’s profit from operations after taxed are subtracted is ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "Consistent improvement in EPS year after year is the indication of continuous improvement in the company’s earning power." }, "bloom": null, "hl_context": "<hl> Most analysts believe that a consistent improvement in EPS year after year is the indication of continuous improvement in the earning power of a company . <hl> This is what is seen in Cracker Barrel ’ s EPS amounts over each of the three years reported , moving from $ 6.85 to $ 7.91 to $ 8.40 . However , it is important to remember that EPS is calculated on historical data , which is not always predictive of the future . In addition , when EPS is used to compare different companies , significant differences may exist . If companies are in the same industry , that comparison may be more valuable than if they are in different industries . 
Basically , EPS should be a tool used in decision-making , utilized alongside other analytic tools .", "hl_sentences": "Most analysts believe that a consistent improvement in EPS year after year is the indication of continuous improvement in the earning power of a company .", "question": { "cloze_format": "Most analysts believe that the true statement about EPS is that ___ .", "normal_format": "Most analysts believe which of the following is true about EPS?", "question_choices": [ "Consistent improvement in EPS year after year is the indication of continuous improvement in the company’s earning power.", "Consistent improvement in EPS year after year is the indication of continuous decline in the company’s earning power.", "Consistent improvement in EPS year after year is the indication of fraud within the company.", "Consistent improvement in EPS year after year is the indication that the company will never suffer a year of net loss rather than net income." ], "question_id": "fs-idm274693808", "question_text": "Most analysts believe which of the following is true about EPS?" }, "references_are_paraphrase": null } ]
14
14.1 Explain the Process of Securing Equity Financing through the Issuance of Stock A corporation is a legal business structure involving one or more individuals (owners) who are legally distinct (separate) from the business that is created under state laws. The owners of a corporation are called stockholders (or shareholders) and may or may not be employees of the corporation. Most corporations rely on a combination of debt (liabilities) and equity (stock) to raise capital. Both debt and equity financing have the goal of obtaining funding, often referred to as capital, to be used to acquire other assets needed for operations or expansion. Capital consists of the total cash and other assets owned by a company, which are found on the left side of the accounting equation. The method of financing these assets is evidenced by looking at the right side of the accounting equation, recorded as either liabilities or shareholders’ equity. The Organization of a Corporation Incorporation is the process of forming a company into a corporate legal entity. The advantages of incorporating are available to a corporation regardless of size, from a corporation with one shareholder to those with hundreds of thousands of shareholders. To issue stock, an entity must first be incorporated in a state. The process of incorporating requires filing the appropriate paperwork and receiving approval from a governmental entity to operate as a corporation. Each state has separate requirements for creating a corporation, but ultimately, each state grants a corporation the right to conduct business in the respective state in which the corporation is formed. The steps to incorporate are similar in most states: The founders (incorporators) choose an available business name that complies with the state’s corporation rules. A state will not allow a corporation to choose a name that is already in use or that has been in use in recent years. Also, similar names might be disallowed. The founders of a corporation prepare articles of incorporation called a “charter,” which defines the basic structure and purpose of the corporation and the amount of capital stock that can be issued or sold. The founders file the articles of incorporation with the Department of State of the state in which the incorporation is desired. Once the articles are filed and any required fees are paid, the government approves the incorporation. The incorporators hold an organizational meeting to elect the board of directors. Board meetings must be documented with formal board minutes (a written record of the items discussed, decisions made, and action plans resulting from the meeting). The board of directors generally meets at least annually. Microsoft, for example, has 14 directors on its board. 1 Boards may have more or fewer directors than this, but most boards have a minimum of at least three directors. 1 Microsoft Corporation. “Board of Directors.” https://www.microsoft.com/en-us/Investor/corporate-governance/board-of-directors.aspx The board of directors prepares and adopts corporate bylaws. These bylaws lay out the operating rules for the corporation. Templates for drawing up corporate bylaws are usually available from the state to ensure that they conform with that state’s requirements. The board of directors agrees upon a par value for the stock. Par value is a legal concept discussed later in this section. The price that the company receives (the initial market value) will be determined by what the purchasing public is willing to pay. 
For example, the company might set the par value at $1 per share, while the investing public on the day of issuance might be willing to pay $30 per share for the stock. Concepts In Practice Deciding Where to Incorporate With 50 states to choose from, how do corporations decide where to incorporate? Many corporations are formed in either Delaware or Nevada for several reasons. Delaware is especially advantageous for large corporations because it has some of the most flexible business laws in the nation and its court system has a division specifically for handling business cases that operates without juries. Additionally, companies formed in Delaware that do not transact business in the state do not need to pay state corporate income tax. Delaware imposes no personal tax for non-residents, and shareholders can be non-residents. In addition, stock shares owned by non-Delaware residents are not subject to Delaware state taxation. Because of these advantages, Delaware dominated the share of business incorporation for several decades. In recent years, though, other states are seeking to compete for these businesses by offering similarly attractive benefits of incorporation. Nevada in particular has made headway. It has no state corporate income tax and does not impose any fees on shares or shareholders. After the initial setup fees, Nevada has no personal or franchise tax for corporations or their shareholders. Nevada, like Delaware, does not require shareholders to be state residents. If a corporation chooses to incorporate in Delaware, Nevada, or any state that is not its home state, it will need to register to do business in its home state. Corporations that transact in states other than their state of incorporation are considered foreign and may be subject to fees, local taxes, and annual reporting requirements that can be time-consuming and expensive. Advantages of the Corporate Form Compared to other forms of organization for businesses, corporations have several advantages. A corporation is a separate legal entity, it provides limited liability for its owner or owners, ownership is transferable, it has a continuing existence, and capital is generally easy to raise. Separate Legal Entity A sole proprietorship, a partnership, and a corporation are different types of business entities. However, only a corporation is a legal entity. As a separate legal entity, a corporation can obtain funds by selling shares of stock, it can incur debt, it can become a party to a contract, it can sue other parties, and it can be sued. The owners are separate from the corporation. This separate legal status complies with one of the basic accounting concepts—the accounting entity concept, which indicates that the economic activity of an entity (the corporation) must be kept separate from the personal financial affairs of the owners. Limited Liability Many individuals seek to incorporate a business because they want the protection of limited liability. A corporation usually limits the liability of an investor to the amount of his or her investment in the corporation. For example, if a corporation enters into a loan agreement to borrow a sum of money and is unable to repay the loan, the lender cannot recover the amount owed from the shareholders (owners) unless the owners signed a personal guarantee. This is the opposite of partnerships and sole proprietorships. 
In partnerships and sole proprietorships, the owners can be held responsible for any unpaid financial obligations of the business and can be sued to pay obligations. Transferable Ownership Shareholders in a corporation can transfer shares to other parties without affecting the corporation’s operations. In effect, the transfer takes place between the parties outside of the corporation. In most corporations, the company generally does not have to give permission for shares to be transferred to another party. No journal entry is recorded in the corporation’s accounting records when a shareholder sells his or her stock to another shareholder. However, a memo entry must be made in the corporate stock ownership records so any dividends can be issued to the correct shareholder. Continuing Existence From a legal perspective, a corporation is granted existence forever with no termination date. This legal aspect falls in line with the basic accounting concept of the going concern assumption, which states that absent any evidence to the contrary, a business will continue to operate in the indefinite future. Because ownership of shares in a corporation is transferable, re-incorporation is not necessary when ownership changes hands. This differs from a partnership, which ends when a partner dies, or from a sole proprietorship, which ends when the owner terminates the business. Ease of Raising Capital Because shares of stock can be easily transferred, corporations have a sizeable market of investors from whom to obtain capital. More than 65 million American households 2 hold investments in the securities markets. Compared to sole proprietorships (whose owners must obtain loans or invest their own funds) or to partnerships (which must typically obtain funds from the existing partners or seek other partners to join; although some partnerships are able to borrow from outside parties), a corporation will find that capital is relatively easy to raise. 2 Financial Samurai. “What Percent of Americans Hold Stocks?” February 18, 2019. https://www.financialsamurai.com/what-percent-of-americans-own-stocks/ Disadvantages of the Corporate Form As compared to other forms of business organization, there are also disadvantages to operating as a corporation. They include the costs of organization, regulation, and taxation. Costs of Organization Corporations incur costs associated with organizing the corporate entity, which include attorney fees, promotion costs, and filing fees paid to the state. These costs are debited to an account called organization costs. Assume that on January 1, Rayco Corporation made a payment for $750 to its attorney to prepare the incorporation documents and paid $450 to the state for filing fees. Rayco also incurred and paid $1,200 to advertise and promote the stock offering. The total organization costs are $2,400 ($750 + $450 + $1,200). The journal entry recorded by Rayco is a $2,400 debit to Organization Costs and a $2,400 credit to Cash. Organization costs are reported as part of the operating expenses on the corporation’s income statement. Regulation Compared to partnerships and sole proprietorships, corporations are subject to considerably more regulation both by the states in which they are incorporated and the states in which they operate. Each state provides limits to the powers that a corporation may exercise and specifies the rights and liabilities of shareholders. 
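As a quick check of the Rayco organization-cost example above, the sketch below totals the costs and shows the amounts in the resulting entry. The account names mirror those used in the example; this is an illustration, not part of the original text:

```python
# Rayco Corporation's organization costs, from the example above.
organization_costs = {
    "attorney fees (incorporation documents)": 750,
    "state filing fees": 450,
    "advertising and promotion of stock offering": 1_200,
}

total = sum(organization_costs.values())   # $2,400

# Journal entry on January 1:
print(f"Debit  Organization Costs   ${total:,}")
print(f"Credit Cash                 ${total:,}")
```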
The Securities and Exchange Commission (SEC) is a federal agency that regulates corporations whose shares are listed and traded on security exchanges such as the New York Stock Exchange (NYSE), the National Association of Securities Dealers Automated Quotations Exchange (NASDAQ), and others; it accomplishes this through required periodic filings and other regulations. States also require the filing of periodic reports and payment of annual fees. Taxation As legal entities, typical corporations (C corporations, named after the specific subchapter of the Internal Revenue Service code under which they are taxed) are subject to federal and state income taxes (in those states with corporate taxes) based on the income they earn. Stockholders are also subject to income taxes, both on the dividends they receive from corporations and any gains they realize when they dispose of their stock. The income taxation of both the corporate entity’s income and the stockholder’s dividend is referred to as double taxation because the income is taxed to the corporation that earned the income and then taxed again to stockholders when they receive a distribution of the corporation’s income. Corporations that are closely held (with fewer than 100 stockholders) can be classified as S corporations, so named because they have elected to be taxed under subchapter S of the Internal Revenue Service code. For the most part, S corporations pay no income taxes because the income of the corporation is divided among and passed through to each of the stockholders, each of whom pays income taxes on his or her share. Both Subchapter S (Sub S) corporations and similar Limited Liability Companies (LLCs) are not taxed at the business entity level but instead pass their taxable income to their owners. Financing Options: Debt versus Equity Before exploring the process for securing corporate financing through equity, it is important to review the advantages and disadvantages of acquiring capital through debt. When deciding whether to raise capital by issuing debt or equity, a corporation needs to consider dilution of ownership, repayment of debt, cash obligations, budgeting impacts, administrative costs, and credit risks. Dilution of Ownership The most significant consideration of whether a company should seek funding using debt or equity financing is the effect on the company’s financial position. Issuance of debt does not dilute the company’s ownership as no additional ownership shares are issued. Issuing debt, or borrowing, creates an increase in cash, an asset, and an increase in a liability, such as notes payable or bonds payable. Because borrowing is independent of an owner’s ownership interest in the business, it has no effect on stockholders’ equity, and ownership of the corporation remains the same as illustrated in the accounting equation in Figure 14.2. On the other hand, when a corporation issues stock, it is financing with equity. The same increase in cash occurs, but financing causes an increase in a capital stock account in stockholders’ equity as illustrated in the accounting equation in Figure 14.3. This increase in stockholders’ equity implies that more shareholders will be allowed to vote and will participate in the distribution of profits and assets upon liquidation. Repayment of Debt A second concern when choosing between debt and equity financing relates to the repayment to the lender. A lender is a debt holder entitled to repayment of the original principal amount of the loan plus interest. 
Once the debt is paid, the corporation has no additional obligation to the lender. This allows owners of a corporation to claim a larger portion of the future earnings than would be possible if more stock were sold to investors. In addition, the interest component of the debt is an expense, which reduces the amount of income on which a company’s income tax liability is calculated, thereby lowering the corporation’s tax liability and the actual cost of the loan to the company. Cash Obligations The most obvious difference between debt and equity financing is that with debt, the principal and interest must be repaid, whereas with equity, there is no repayment requirement. The decision to declare dividends is solely up to the board of directors, so if a company has limitations on cash, it can skip or defer the declaration of dividends. When a company obtains capital through debt, it must have sufficient cash available to cover the repayment. This can put pressure on the company to meet debt obligations when cash is needed for other uses. Budgeting Except in the case of variable interest loans, loan and interest payments are easy to estimate for the purpose of budgeting cash payments. Loan payments do not tend to be flexible; instead, the principal payment is required month after month. Moreover, interest costs incurred with debt are an additional fixed cost to the company, which raises the company’s break-even point (total revenue equals total costs) as well as its cash flow demands. Cost Differences Issuing debt rather than equity may reduce additional administration costs associated with having additional shareholders. These costs may include the costs for informational mailings, processing and direct-depositing dividend payments, and holding shareholder meetings. Issuing debt also saves the time associated with shareholder controversies, which can often defer certain management actions until a shareholder vote can be conducted. Risk Assessment by Creditors Borrowing commits the borrower to comply with debt covenants that can restrict both the financing options and the opportunities that extend beyond the main business function. This can limit a company’s vision or opportunities for change. For example, many debt covenants restrict a corporation’s debt-to-equity ratio, which measures the portion of debt used by a company relative to the amount of stockholders’ equity, calculated by dividing total debt by total equity. When a company borrows additional funds, its total debt (the numerator) rises. Because there is no change in total equity, the denominator remains the same, causing the debt-to-equity ratio to increase. Because an increase in this ratio usually means that the company will have more difficulty in repaying the debt, lenders and investors consider this an added risk. Accordingly, a business is limited in the amount of debt it can carry. A debt agreement may also restrict the company from borrowing additional funds. To increase the likelihood of debt repayment, a debt agreement often requires that a company’s assets serve as collateral or that the company’s owners guarantee repayment. Increased risks to the company from high-interest debt and high amounts of debt, particularly when the economy is unstable, include obstacles to growth and the potential for insolvency resulting from the costs of holding debt. These important considerations should be assessed prior to determining whether a company should choose debt or equity financing. 
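Because the debt-to-equity covenant test comes up in the exercise that follows, a small sketch of the ratio may help. The balances below are hypothetical and are not drawn from the chapter:

```python
# Debt-to-equity ratio = total debt / total equity.
# Hypothetical balances showing how each financing choice moves the ratio.

def debt_to_equity(total_debt: float, total_equity: float) -> float:
    return total_debt / total_equity

debt, equity = 500_000, 1_000_000
print(f"{debt_to_equity(debt, equity):.2f}")              # 0.50 before financing

# Borrow $300,000: debt rises, equity is unchanged, so the ratio rises.
print(f"{debt_to_equity(debt + 300_000, equity):.2f}")    # 0.80

# Issue $300,000 of stock: equity rises, debt is unchanged, so the ratio falls.
print(f"{debt_to_equity(debt, equity + 300_000):.2f}")    # 0.38
```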
Think It Through Financing a Business Expansion You are the CFO of a small corporation. The president, who is one of five shareholders, has created an innovative new product that is testing well with substantial demand. To begin manufacturing, $400,000 is needed to acquire the equipment. The corporation’s balance sheet shows total assets of $2,400,000 and total liabilities of $600,000. Most of the liabilities relate to debt that carries a covenant requiring that the company maintain a debt-to-equity ratio not exceeding 0.50 times. Determine the effect that each of the two options of obtaining additional capital will have on the debt covenant. Prepare a brief memo outlining the advantages of issuing shares of common stock. How Stocks Work The Securities and Exchange Commission (SEC) (www.sec.gov) is a government agency that regulates large and small public corporations. Its mission is “to protect investors, maintain fair, orderly, and efficient markets, and facilitate capital formation.” 3 The SEC identifies these as its five primary responsibilities: 3 U.S. Securities and Exchange Commission. “What We Do.” June 10, 2013. https://www.sec.gov/Article/whatwedo.html Inform and protect investors Facilitate capital formation Enforce federal securities laws Regulate securities markets Provide data Under the Securities Act of 1933 , 4 all corporations that make their shares available for sale publicly in the United States are expected to register with the SEC. The SEC’s registration requirement covers all securities—not simply shares of stock—including most tradable financial instruments. The Securities Act of 1933, also known as the “truth in securities law,” aims to provide investors with the financial data they need to make informed decisions. While some companies are exempt from filing documents with the SEC, those that offer securities for sale in the U.S. and that are not exempt must file a number of forms along with financial statements audited by certified public accountants. 4 U.S. Securities and Exchange Commission. “Registration under the Securities Act of 1933.” https://www.investor.gov/additional-resources/general-resources/glossary/registration-under-securities-act-1933 Private versus Public Corporations Both private and public corporations become incorporated in the same manner, through the state governmental agency that handles incorporation. The journal entries and financial reporting are the same whether a company is a public or a private corporation. A private corporation is usually owned by a relatively small number of investors. Its shares are not publicly traded, and the ownership of the stock is restricted to only those allowed by the board of directors. The SEC defines a publicly traded company as a company that “discloses certain business and financial information regularly to the public” and whose “securities trade on public markets.” 5 A company can initially operate as private and later decide to “go public,” while other companies go public at the point of incorporation. The process of going public refers to a company undertaking an initial public offering (IPO) by issuing shares of its stock to the public for the first time. After its IPO, the corporation becomes subject to public reporting requirements and its shares are frequently listed on a stock exchange. 6 5 U. S. Securities and Exchange Commission. “Public Companies.” https://www.investor.gov/introduction-investing/basics/how-market-works/public-companies 6 U.S. Securities and Exchange Commission.
“Companies, Going Public.” October 14, 2014. https://www.sec.gov/fast-answers/answers-comppublichtm.html Concepts In Practice Spreading the Risk The East India Company became the world’s first publicly traded company as the result of a single factor—risk. During the 1600s, individual companies felt it was too risky to sail from the European mainland to the East Indies. These islands held vast resources and trade opportunities, enticing explorers to cross the Atlantic Ocean in search of fortunes. In 1600, several shipping companies joined forces and formed “Governor and Company of Merchants of London trading with the East Indies,” which was referred to as the East India Company . This arrangement allowed the shipping companies—the investors—to purchase shares in multiple companies rather than investing in a single voyage. If a single ship out of a fleet was lost at sea, investors could still generate a profit from ships that successfully completed their voyages. 7 7 Johnson Hur. “History of The Stock Market.” BeBusinessed.com. October 2016. https://bebusinessed.com/history/history-of-the-stock-market/ The Secondary Market A corporation’s shares continue to be bought and sold by the public after the initial public offering. Investors interested in purchasing shares of a corporation’s stock have several options. One option is to buy stock on the secondary market , an organized market where previously issued stocks and bonds can be traded after they are issued. Many investors purchase through stock exchanges like the New York Stock Exchange or NASDAQ using a brokerage firm. A full-service brokerage firm provides investment advice as well as a variety of financial planning services, whereas a discount brokerage offers a reduced commission and often does not provide investment advice. Most of the stock trading —buying and selling of shares by investors—takes place through brokers , registered members of the stock exchange who buy and sell stock on behalf of others. Online access to trading has broadened the secondary market significantly over the past few decades. Alternatively, stocks can be purchased from investment bankers , who provide advice to companies wishing to issue new stock, purchase the stock from the company issuing the stock, and then resell the securities to the public. 8 8 Dr. Econ. “Why Do Investment Banks Syndicate a New Securities Issue (and Related Questions).” Federal Reserve Bank of San Francisco. December 1999. https://www.frbsf.org/education/publications/doctor-econ/1999/december/investment-bank-securities-retirement-insurance/ Marketing a Company’s Stock Once a corporation has completed the incorporation process, it can issue stock. Each share of stock sold entitles the shareholder (the investor) to a percentage of ownership in the company. Private corporations are usually owned by a small number of investors and are not traded on a public exchange. Regardless of whether the corporation is public or private, the steps to finding investors are similar: Have a trusted and reliable management team. These should be experienced professionals who can guide the corporation. Have a financial reporting system in place. Accurate financial reporting is key to providing potential investors with reliable information. Choose an investment banker to provide advice and to assist in raising capital. Investment bankers are individuals who work in a financial institution that is primarily in the business of raising capital for corporations. Write the company’s story.
This adds personality to the corporation. What is its mission, why will it be successful, and what sets the corporation apart? Approach potential investors. Selecting the right investment bankers will be extremely helpful with this step. Capital Stock A company’s corporate charter specifies the classes of shares and the number of shares of each class that a company can issue. There are two classes of capital stock—common stock and preferred stock. The two classes of stock enable a company to attract capital from investors with different risk preferences. Both classes of stock can be sold by either public or non-public companies; however, if a company issues only one class, it must be common stock. Companies report both common and preferred stock in the stockholders’ equity section of the balance sheet. Common Stock A company’s primary class of stock issued is common stock , and each share represents a partial claim to ownership or a share of the company’s business. For many companies, this is the only class of stock they have authorized. Common stockholders have four basic rights. Common stockholders have the right to vote on corporate matters, including the selection of corporate directors and other issues requiring the approval of owners. Each share of stock owned by an investor generally grants the investor one vote. Common stockholders have the right to share in corporate net income proportionally through dividends. If the corporation should have to liquidate, common stockholders have the right to share in any distribution of assets after all creditors and any preferred stockholders have been paid. In some jurisdictions, common shareholders have a preemptive right , which allows shareholders the option to maintain their ownership percentage when new shares of stock are issued by the company. For example, suppose a company has 1,000 shares of stock issued and plans to issue 200 more shares. A shareholder who currently owns 50 shares will be given the right to buy a percentage of the new issue equal to his current percentage of ownership. His current percentage of ownership is 5%: Original ownership percentage = 50 ÷ 1,000 = 5% This shareholder will be given the right to buy 5% of the new issue, or 10 new shares. Number of new shares to be purchased = 5% × 200 shares = 10 shares Should the shareholder choose not to buy the shares, the company can offer the shares to other investors. The purpose of the preemptive right is to prevent new issuances of stock from reducing the ownership percentage of the current shareholders. If the shareholder in our example is not offered the opportunity to buy 5% of the additional shares (his current ownership percentage) and the new shares are sold to other investors, the shareholder’s ownership percentage will drop because the total shares issued will increase. Total number of issued shares after the new issue = 1,000 + 200 = 1,200 shares New ownership percentage = 50 ÷ 1,200 = 4.17% The shareholder would now own only 4.17% of the corporation, compared to the previous 5%.
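A short sketch of the preemptive-right arithmetic from this example, with the same 1,000 issued shares, 200-share new issue, and 50-share holding (variable names are ours, for illustration):

```python
# A minimal sketch of the preemptive-right arithmetic described above.

shares_issued, new_issue, holding = 1_000, 200, 50

ownership = holding / shares_issued              # 50 / 1,000 = 5%
entitled = ownership * new_issue                 # 5% of 200 = 10 shares
print(f"May buy {entitled:.0f} new shares to stay at {ownership:.2%}")

# If the holder declines and the shares go to other investors:
diluted = holding / (shares_issued + new_issue)  # 50 / 1,200
print(f"Ownership after declining: {diluted:.2%}")  # 4.17%
```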
Preferred Stock A company’s charter may authorize more than one class of stock. Preferred stock has unique rights that are “preferred,” or more advantageous, to shareholders than common stock. The classification of preferred stock is often a controversial area in accounting, as some researchers believe preferred stock is closer to a stock/bond hybrid security, with characteristics of debt rather than of a true equity item. For example, unlike common stockholders, preferred shareholders typically do not have voting rights; in this way, they are similar to bondholders. In addition, preferred shares do not share in the common stock dividend distributions. Instead, the “preferred” classification entitles shareholders to a dividend that is fixed (assuming sufficient dividends are declared), similar to the fixed interest rate associated with bonds and other debt items. Preferred stock also mimics debt in that preferred shareholders have a priority of dividend payments over common stockholders. While there may be characteristics of both debt and equity, preferred stock is still reported as part of stockholders’ equity on the balance sheet. Not every corporation authorizes and issues preferred stock, and there are some important characteristics that corporations should consider when deciding to issue preferred stock. The price of preferred stock typically has less volatility in the stock market. This makes it easier for companies to more reliably budget the amount of the expected capital contribution, since the share price is not expected to fluctuate as freely as for common stock. For the investor, this means there is less chance of large gains or losses on the sale of preferred stock. The Status of Shares of Stock The corporate charter specifies the number of authorized shares , which is the maximum number of shares that a corporation can issue to its investors as approved by the state in which the company is incorporated. Once shares are sold to investors, they are considered issued shares . Shares that are issued and are currently held by investors are called outstanding shares because they are “out” in the hands of investors. Occasionally, a company repurchases shares from investors. While these shares are still issued, they are no longer considered to be outstanding. These repurchased shares are called treasury stock . Assume that Waystar Corporation has 2,000 shares of capital stock authorized in its corporate charter. During May, Waystar issues 1,500 of these shares to investors. These investors are now called stockholders because they “hold” shares of stock. Because the other 500 authorized shares have not been issued, they are considered unissued shares. Now assume that Waystar buys back 100 shares of stock from the investors who own the 1,500 shares. Only 1,400 of the issued shares are considered outstanding, because 100 shares are now held by the company as treasury shares.
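The relationships among authorized, issued, unissued, outstanding, and treasury shares in the Waystar example reduce to two subtractions, sketched below (variable names are ours):

```python
# A minimal sketch of the share-status arithmetic in the Waystar example.

authorized = 2_000   # maximum set by the corporate charter
issued = 1_500       # sold to investors in May
treasury = 100       # later repurchased by Waystar

unissued = authorized - issued      # 500 shares never sold
outstanding = issued - treasury     # 1,400 shares held by investors

print(f"unissued={unissued}, outstanding={outstanding}")
# Treasury shares remain issued but are no longer outstanding.
```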
Stock Values Two of the most important values associated with stock are market value and par value. The market value of stock is the price at which the stock of a public company trades on the stock market. This amount does not appear in the corporation’s accounting records, nor in the company’s financial statements. Most corporate charters specify the par value assigned to each share of stock. This value is printed on the stock certificates and is often referred to as a face value because it is printed on the “face” of the certificate. Incorporators typically set the par value at a very small arbitrary amount because it is used internally for accounting purposes and has no economic significance. Because par value often has some legal significance, it is considered to be legal capital. In some states, par value is the minimum price at which the stock can be sold. If a share of stock with a par value of one dollar is issued for less than its par value, known as issuing at a stock discount , the shareholder could be held liable for the difference between the issue price and the par value if liquidation occurs and any creditors remain unpaid. Some state laws allow corporations to issue no-par stock —a stock with no par value assigned. When this occurs, the company’s board of directors typically assigns a stated value to each share of stock, which serves as the company’s legal capital. Companies generally account for stated value in the accounting records in the same manner as par value. If the company’s board fails to assign a stated value to no-par stock, the entire proceeds of the stock sale are treated as legal capital. A portion of the stockholders’ equity section of Frontier Communications Corporation ’s balance sheet as of December 31, 2017 displays the reported preferred and common stock. The par value of the preferred stock is $0.01 per share and $0.25 per share for common stock. The legal capital of the preferred stock is $192.50, while the legal capital of the common stock is $19,883. 9 9 Frontier Communications Corporation. 10-K Filing. February 28, 2018. https://www.sec.gov/Archives/edgar/data/20520/000002052018000007/ftr-20171231x10k.htm#Exhibits_and_Financial_Statement_Schedul Ethical Considerations Shareholders, Stakeholders, and the Business Judgment Rule Shareholders are the owners of a corporation, whereas stakeholders have an interest in the outcome of the corporation’s decisions. Courts have ruled that “A business corporation is organized and carried on primarily for the profit of the stockholders,” as initially decided in the early case Dodge v. Ford Motor Co. , 204 Mich. 459, 170 N.W. 668 (Mich. 1919). This early case outlined the “business judgment rule.” It allows a corporation to use its judgment in how to run the company in the best interests of the shareholders, but also allows the corporation the ability to make decisions for the benefit of the company’s stakeholders. The business judgment rule has been expanded in numerous cases to include making decisions directly for the benefit of stakeholders, thereby allowing management to run a company in a prudent fashion. The stakeholder theories started in the Dodge case have been expanded to allow corporations to make decisions for the corporation’s benefit, including decisions that support stakeholder rights. Prudent management of a corporation includes making decisions that support stakeholders and shareholders. A shareholder is also a stakeholder in any decision. A stakeholder is anyone with an interest in the outcome of the corporation’s decisions, even if that person owns no financial interest in the corporation. Corporations need to take a proactive approach to managing stakeholder concerns and issues. Strategies on how to manage stakeholder needs have been developed from both a moral perspective and a risk management perspective. Both approaches allow management to understand the issues related to their stakeholders and to make decisions in the best interest of the corporation and its owners. Proper stakeholder management should allow corporations to develop profitable long-term plans that lead to greater viability of the corporation.
14.2 Analyze and Record Transactions for the Issuance and Repurchase of Stock Chad and Rick have successfully incorporated La Cantina and are ready to issue common stock to themselves and the newly recruited investors. The proceeds will be used to open new locations. The corporation’s charter indicates that the par value of its common stock is $1.50 per share. When stock is sold to investors, it is very rarely sold at par value. Most often, shares are issued at a value in excess of par. This is referred to as issuing stock at a premium. Stock with no par value that has been assigned a stated value is treated very similarly to stock with a par value. Stock can be issued in exchange for cash, property, or services provided to the corporation. For example, an investor could give a delivery truck in exchange for a company’s stock. Another investor could provide legal fees in exchange for stock. The general rule is to recognize the assets received in exchange for stock at the asset’s fair market value. Typical Common Stock Transactions The company plans to issue most of the shares in exchange for cash, and other shares in exchange for kitchen equipment provided to the corporation by one of the new investors. Two common accounts in the equity section of the balance sheet are used when issuing stock—Common Stock and Additional Paid-in Capital from Common Stock. Common Stock consists of the par value of all shares of common stock issued. Additional paid-in capital from common stock consists of the excess of the proceeds received from the issuance of the stock over the stock’s par value. When a company has more than one class of stock, it usually keeps a separate additional paid-in capital account for each class. Issuing Common Stock with a Par Value in Exchange for Cash When a company issues new stock for cash, assets increase with a debit, and equity accounts increase with a credit. To illustrate, assume that La Cantina issues 8,000 shares of common stock to investors on January 1 for cash, with the investors paying cash of $21.50 per share. The total cash to be received is $172,000. 8,000 shares × $21.50 = $172,000 The transaction causes Cash to increase (debit) for the total cash received. The Common Stock account increases (credit) for the par value of the 8,000 shares issued: 8,000 × $1.50, or $12,000. The excess received over the par value is reported in the Additional Paid-in Capital from Common Stock account. Since the shares were issued for $21.50 per share, the excess over par value per share of $20 ($21.50 − $1.50) is multiplied by the number of shares issued to arrive at the Additional Paid-in Capital from Common Stock credit. ($21.50 − $1.50) × 8,000 = $160,000 Issuing Common Stock with a Par Value in Exchange for Property or Services When a company issues stock for property or services, the company increases the respective asset account with a debit and the respective equity accounts with credits. The asset received in the exchange—such as land, equipment, inventory, or any services provided to the corporation such as legal or accounting services—is recorded at the fair market value of the stock or the asset or services received, whichever is more clearly determinable. To illustrate, assume that La Cantina issues 2,000 shares of authorized common stock in exchange for legal services provided by an attorney.
The legal services have a value of $8,000 based on the amount the attorney would charge. Because La Cantina’s stock is not actively traded, the exchange will be valued at the more easily determinable market value of the legal services. La Cantina must recognize the market value of the legal services as an increase (debit) of $8,000 to its Legal Services Expense account. Similar to recording the stock issued for cash, the Common Stock account is increased by the par value of the issued stock, $1.50 × 2,000 shares, or $3,000. The excess of the value of the legal services over the par value of the stock appears as an increase (credit) to the Additional Paid-in Capital from Common Stock account: $8,000 − $3,000 = $5,000 Just after the issuance of both investments, the stockholders’ equity account, Common Stock, reflects the total par value of the issued stock; in this case, $3,000 + $12,000, or a total of $15,000. The amounts received in excess of the par value are accumulated in the Additional Paid-in Capital from Common Stock account in the amount of $5,000 + $160,000, or $165,000. A portion of the equity section of the balance sheet just after the two stock issuances by La Cantina will reflect the Common Stock account stock issuances as shown in Figure 14.4. Issuing No-Par Common Stock with a Stated Value Not all stock has a par value specified in the company’s charter. In most cases, no-par stock is assigned a stated value by the board of directors, which then becomes the legal capital value. Stock with a stated value is treated as if the stated value is a par value. Assume that La Cantina’s 8,000 shares of common stock had been issued at a stated value of $1.50 rather than at a par value. The total cash to be received remains $172,000 (8,000 shares × $21.50), which is recorded as an increase (debit) to Cash. The Common Stock account increases with a credit for the stated value of the 8,000 shares issued: 8,000 × $1.50, or $12,000. The excess received over the stated value is reported in the Additional Paid-in Capital from Common Stock account at $160,000, based on the issue price of $21.50 per share less the stated value of $1.50, or $20, times the 8,000 shares issued: ($21.50 − $1.50) × 8,000 = $160,000 The transaction looks identical except for the explanation. If the 8,000 shares of La Cantina’s common stock had been no-par, and no stated value had been assigned, the $172,000 would be debited to Cash, with a corresponding increase in the Common Stock account as a credit of $172,000. No entry would be made to the Additional Paid-in Capital account, as it is reserved for stock issue amounts above par or stated value. The entry would appear as: Issuing Preferred Stock A few months later, Chad and Rick need additional capital to develop a website to add an online presence and decide to issue all 1,000 of the company’s authorized preferred shares. The 5%, $8 par value preferred shares are sold at $45 each. The Cash account increases with a debit for $45 times 1,000 shares, or $45,000. The Preferred Stock account increases for the par value of the preferred stock, $8 times 1,000 shares, or $8,000. The excess of the issue price of $45 per share over the $8 par value, times the 1,000 shares, is credited as an increase to Additional Paid-in Capital from Preferred Stock, resulting in a credit of $37,000.
($45 − $8) × 1,000 = $37,000 The journal entry is: Figure 14.5 shows what the equity section of the balance sheet will reflect after the preferred stock is issued. Notice that the corporation presents preferred stock before common stock in the Stockholders’ Equity section of the balance sheet because preferred stock has preference over common stock in the case of liquidation. GAAP requires that several items be disclosed for each class of stock displayed in this section of the balance sheet, along with the respective account names. The required items to be disclosed are: Par or stated value Number of shares authorized Number of shares issued Number of shares outstanding If preferred stock, the dividend rate
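Both issuances above follow the same split of proceeds between the capital stock account and additional paid-in capital. A minimal sketch of that arithmetic, using the La Cantina figures (the helper function issuance_split is hypothetical, introduced only for illustration):

```python
def issuance_split(shares, issue_price, par_value):
    """Split cash proceeds between the capital stock account (par or
    stated value) and additional paid-in capital (the premium)."""
    cash = shares * issue_price
    par_total = shares * par_value
    return cash, par_total, cash - par_total

# Common stock: 8,000 shares of $1.50 par issued at $21.50
print(issuance_split(8_000, 21.50, 1.50))   # (172000.0, 12000.0, 160000.0)

# Preferred stock: 1,000 shares of $8 par issued at $45
print(issuance_split(1_000, 45, 8))         # (45000, 8000, 37000)
```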
Treasury Stock Sometimes a corporation decides to purchase its own stock in the market. These shares are referred to as treasury stock. A company might purchase its own outstanding stock for a number of possible reasons. It can be a strategic maneuver to prevent another company from acquiring a majority interest or to prevent a hostile takeover. A purchase can also create demand for the stock, which in turn raises the market price of the stock. Sometimes companies buy back shares to be used for employee stock options or profit-sharing plans. Think It Through Walt Disney Buys Back Stock The Walt Disney Company has consistently spent a large portion of its cash flows in buying back its own stock. According to The Motley Fool , the Walt Disney Company bought back 74 million shares in 2016 alone. Read the Motley Fool article and comment on other options that Walt Disney may have had to obtain financing. Acquiring Treasury Stock When a company purchases treasury stock, it is reflected on the balance sheet in a contra equity account. As a contra equity account, Treasury Stock has a debit balance, rather than the normal credit balances of other equity accounts. The total cost of treasury stock reduces total equity. In substance, treasury stock implies that a company owns shares of itself. However, owning a portion of one’s self is not possible. Treasury shares do not carry the basic common shareholder rights because they are not outstanding. Dividends are not paid on treasury shares, they provide no voting rights, and they do not receive a share of assets upon liquidation of the company. There are two possible methods to account for treasury stock—the cost method, which is discussed here, and the par value method, which is a more advanced accounting topic. The cost method is so named because the amount in the Treasury Stock account at any point in time represents the number of shares held in treasury times the original cost paid to acquire each treasury share. Assume La Cantina’s net income for the first year was $3,100,000, and that the company has 10,000 shares of common stock issued. During May, the company’s board of directors authorizes the repurchase of 800 shares of the company’s own common stock as treasury stock. Each share of the company’s common stock is selling for $25 on the open market on May 1, the date that La Cantina purchases the stock. La Cantina will pay the market price of the stock at $25 per share times the 800 shares it purchased, for a total cost of $20,000. The following journal entry is recorded for the purchase of the treasury stock under the cost method. Even though the company is purchasing stock, there is no asset recognized for the purchase. An entity cannot own part of itself, so no asset is acquired. Immediately after the purchase, the equity section of the balance sheet ( Figure 14.6 ) will show the total cost of the treasury shares as a deduction from total stockholders’ equity. Notice on the partial balance sheet that the number of common shares outstanding changes when treasury stock transactions occur. Initially, the company had 10,000 common shares issued and outstanding. The 800 repurchased shares are no longer outstanding, reducing the total outstanding to 9,200 shares. Concepts In Practice Reporting Treasury Stock for Nestlé Holdings Group Nestlé Holdings Group sells a number of major brands of food and beverages including Gerber , Häagen-Dazs , Purina , and Lean Cuisine . The company’s statement of stockholders’ equity shows that it began with 990 million Swiss francs (CHF) in treasury stock at the beginning of 2016. In 2017, it acquired additional shares at a cost of 3,547 million CHF, raising its total treasury stock to 4,537 million CHF at the end of 2017, primarily due to a share buy-back program. 10 10 Nestlé. “Annual Report 2017.” 2017. https://www.nestle.com/investors/annual-report Reissuing Treasury Stock above Cost Management typically does not hold treasury stock forever. The company can resell the treasury stock at cost, above cost, below cost, or retire it. If La Cantina reissues 100 of its treasury shares at cost ($25 per share) on July 3, a reversal of the original purchase for the 100 shares is recorded. This has the effect of increasing an asset, Cash, with a debit, and decreasing the Treasury Stock account with a credit. The original cost paid for each treasury share, $25, is multiplied by the 100 shares to be resold, or $2,500. The journal entry to record this sale of the treasury shares at cost is: If the treasury stock is resold at a price higher than its original purchase price, the company debits the Cash account for the amount of cash proceeds, reduces the Treasury Stock account with a credit for the cost of the treasury shares being sold, and credits the Paid-in Capital from Treasury Stock account for the difference. Even though the difference—the selling price less the cost—looks like a gain, it is treated as additional capital because gains and losses only result from the disposition of economic resources (assets). Treasury Stock is not an asset. Assume that on August 1, La Cantina sells another 100 shares of its treasury stock, but this time the selling price is $28 per share. The Cash account is increased by the selling price, $28 per share times the number of shares resold, 100, for a total debit to Cash of $2,800. The Treasury Stock account decreases by the cost of the 100 shares sold, 100 × $25 per share, for a total credit of $2,500, just as it did in the sale at cost. The difference is recorded as a credit of $300 to Additional Paid-in Capital from Treasury Stock. Reissuing Treasury Stock Below Cost If the treasury stock is reissued at a price below cost, the account used for the difference between the cash received from the resale and the original cost of the treasury stock depends on the balance in the Paid-in Capital from Treasury Stock account. Any balance that exists in this account will be a credit. The transaction will require a debit to the Paid-in Capital from Treasury Stock account to the extent of the balance.
If the transaction requires a debit greater than the balance in the Paid-in Capital account, any additional difference between the cost of the treasury stock and its selling price is recorded as a reduction of the Retained Earnings account as a debit. If there is no balance in the Additional Paid-in Capital from Treasury Stock account, the entire debit will reduce retained earnings. Assume that on October 9, La Cantina sells another 100 shares of its treasury stock, but this time at $23 per share. Cash is increased for the selling price, $23 per share times the number of shares resold, 100, for a total debit to Cash of $2,300. The Treasury Stock account decreases by the cost of the 100 shares sold, 100 × $25 per share, for a total credit of $2,500. The difference is recorded as a debit of $200 to the Additional Paid-in Capital from Treasury Stock account. Notice that the balance in this account from the August 1 transaction was $300, which was sufficient to offset the $200 debit. The transaction is recorded as: Treasury stock transactions have no effect on the number of shares authorized or issued. Because shares held in treasury are not outstanding, each treasury stock transaction will impact the number of shares outstanding. A corporation may also purchase its own stock and retire it. Retired stock reduces the number of shares issued. When stock is repurchased for retirement, the stock must be removed from the accounts so that it is not reported on the balance sheet. The balance sheet will appear as if the stock was never issued in the first place.
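The cost-method reissuance rules above can be summarized in a small sketch: proceeds above cost build up Paid-in Capital from Treasury Stock, and resales below cost draw that balance down before touching Retained Earnings. The function below is a hypothetical illustration that replays the La Cantina reissuances.

```python
COST = 25  # original cost per treasury share in the La Cantina examples

def reissue(shares, price, apic_treasury_balance):
    """Return (debit to Retained Earnings, new APIC-Treasury balance)."""
    difference = shares * (price - COST)
    if difference >= 0:              # at or above cost: build up the account
        return 0, apic_treasury_balance + difference
    shortfall = -difference          # below cost: draw the account down first
    absorbed = min(shortfall, apic_treasury_balance)
    return shortfall - absorbed, apic_treasury_balance - absorbed

balance = 0
_, balance = reissue(100, 25, balance)         # July 3, at cost: no effect
_, balance = reissue(100, 28, balance)         # August 1: credit of $300
re_debit, balance = reissue(100, 23, balance)  # October 9: $200 absorbed
print(re_debit, balance)                       # 0 100
```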
Your Turn Understanding Stockholders’ Equity Wilson Enterprises reports the following stockholders’ equity: Based on the partial balance sheet presented, answer the following questions: At what price was each share of treasury stock purchased? What is reflected in the additional paid-in capital account? Why is there a difference between the common stock shares issued and the shares outstanding? Solution A. $240,000 ÷ 20,000 = $12 per share. B. The difference between the market price and the par value when the stock was issued. C. Treasury stock. 14.3 Record Transactions and the Effects on Financial Statements for Cash Dividends, Property Dividends, Stock Dividends, and Stock Splits Do you remember playing the board game Monopoly when you were younger? If you landed on the Chance space, you picked a card. The Chance card may have paid a $50 dividend. At the time, you probably were just excited for the additional funds. For corporations, there are several reasons to consider sharing some of their earnings with investors in the form of dividends. Many investors view a dividend payment as a sign of a company’s financial health and are more likely to purchase its stock. In addition, corporations use dividends as a marketing tool to remind investors that their stock is a profit generator. This section explains the three types of dividends—cash dividends, property dividends, and stock dividends—along with stock splits, showing the journal entries involved and the reason why companies declare and pay dividends. The Nature and Purposes of Dividends Stock investors are typically driven by two factors—a desire to earn income in the form of dividends and a desire to benefit from the growth in the value of their investment. Members of a corporation’s board of directors understand the need to provide investors with a periodic return, and as a result, often declare dividends up to four times per year. However, companies can declare dividends whenever they want and are not limited in the number of annual declarations. Dividends are a distribution of a corporation’s earnings. They are not considered expenses, and they are not reported on the income statement. They are a distribution of the net income of a company and are not a cost of business operations. Concepts In Practice So Many Dividends The declaration and payment of dividends varies among companies. In December 2017 alone, 4,506 U.S. companies declared either cash, stock, or property dividends—the largest number of declarations since 2004. 12 It is likely that these companies waited to declare dividends until after financial statements were prepared, so that the board and other executives involved in the process were able to provide estimates of the 2017 earnings. 12 Ironman at Political Calculations. “Dividends by the Numbers through January 2018.” Seeking Alpha. February 9, 2018. https://seekingalpha.com/article/4145079-dividends-numbers-january-2018 Some companies choose not to pay dividends and instead reinvest all of their earnings back into the company. One common scenario occurs when a company is experiencing rapid growth. The company may want to reinvest all of its retained earnings to support and continue that growth. Another scenario is a mature business that believes retaining its earnings is more likely to result in an increased market value and stock price. In other instances, a business may want to use its earnings to purchase new assets or branch out into new areas. Most companies attempt dividend smoothing , the practice of paying dividends that are relatively equal period after period, even when earnings fluctuate. In exceptional circumstances, some corporations pay a special dividend , which is a one-time extra distribution of corporate earnings. A special dividend usually stems from a period of extraordinary earnings or a special transaction, such as the sale of a division. Some companies, such as Costco Wholesale Corporation , pay recurring dividends and periodically offer a special dividend. While Costco ’s regular quarterly dividend is $0.57 per share, the company issued a $7.00 per share cash dividend in 2017. 13 Companies that have both common and preferred stock must consider the characteristics of each class of stock. 13 Jing Pan. “Will Costco Wholesale Corporation Pay a Special Dividend in 2018?” Income Investors. May 9, 2018. https://www.incomeinvestors.com/will-costco-wholesale-corporation-pay-special-dividend-2018/38865/ Note that dividends are distributed or paid only to shares of stock that are outstanding. Treasury shares are not outstanding, so no dividends are declared or distributed for these shares. Regardless of the type of dividend, the declaration always causes a decrease in the retained earnings account. Dividend Dates A company’s board of directors has the power to formally vote to declare dividends. The date of declaration is the date on which the dividends become a legal liability, the date on which the board of directors votes to distribute the dividends. Cash and property dividends become liabilities on the declaration date because they represent a formal obligation to distribute economic resources (assets) to stockholders. On the other hand, stock dividends distribute additional shares of stock, and because stock is part of equity and not an asset, stock dividends do not become liabilities when declared.
At the time dividends are declared, the board establishes a date of record and a date of payment. The date of record establishes who is entitled to receive a dividend; stockholders who own stock on the date of record are entitled to receive a dividend even if they sell it prior to the date of payment. Investors who purchase shares after the date of record but before the payment date are not entitled to receive dividends since they did not own the stock on the date of record. These shares are said to be sold ex dividend . The date of payment is the date that payment is issued to the investor for the amount of the dividend declared. Cash Dividends Cash dividends are corporate earnings that companies pass along to their shareholders. To pay a cash dividend, the corporation must meet two criteria. First, there must be sufficient cash on hand to fulfill the dividend payment. Second, the company must have sufficient retained earnings; that is, it must have enough residual assets to cover the dividend such that the Retained Earnings account does not become a negative (debit) amount upon declaration. On the day the board of directors votes to declare a cash dividend, a journal entry is required to record the declaration as a liability. Accounting for Cash Dividends When Only Common Stock Is Issued Small private companies like La Cantina often have only one class of stock issued, common stock. Assume that on December 16, La Cantina’s board of directors declares a $0.50 per share dividend on common stock. As of the date of declaration, the company has 10,000 shares of common stock issued and holds 800 shares as treasury stock. The total cash dividend to be paid is based on the number of shares outstanding, which is the total shares issued less those in treasury. Outstanding shares are 10,000 − 800, or 9,200 shares. The cash dividend is: 9,200 shares × $0.50 = $4,600 The journal entry to record the declaration of the cash dividends involves a decrease (debit) to Retained Earnings (a stockholders’ equity account) and an increase (credit) to Cash Dividends Payable (a liability account). While a few companies may use a temporary account, Dividends Declared, rather than Retained Earnings, most companies debit Retained Earnings directly. Ultimately, any dividends declared cause a decrease to Retained Earnings. The second significant dividend date is the date of record. The date of record determines which shareholders will receive the dividends. There is no journal entry recorded; the company creates a list of the stockholders that will receive dividends. The date of payment is the third important date related to dividends. This is the date that dividend payments are prepared and sent to shareholders who owned stock on the date of record. The related journal entry is a fulfillment of the obligation established on the declaration date; it reduces the Cash Dividends Payable account (with a debit) and the Cash account (with a credit).
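A minimal sketch of the dividend arithmetic and the three dates, using the La Cantina figures from this example (variable names are ours):

```python
issued, treasury = 10_000, 800
dividend_per_share = 0.50

outstanding = issued - treasury               # 9,200 shares
total_dividend = outstanding * dividend_per_share
print(f"Cash dividend declared: ${total_dividend:,.2f}")  # $4,600.00

# Declaration date: debit Retained Earnings, credit Cash Dividends Payable.
# Date of record:   no journal entry; just the list of eligible holders.
# Payment date:     debit Cash Dividends Payable, credit Cash.
```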
Property Dividends A property dividend occurs when a company declares and distributes assets other than cash. The dividend typically involves either the distribution of shares of another company that the issuing corporation owns (one of its assets) or a distribution of inventory. For example, Walt Disney Company may choose to distribute tickets to visit its theme parks. Anheuser-Busch InBev , the company that owns the Budweiser and Michelob brands, may choose to distribute a case of beer to each shareholder. A property dividend may be declared when a company wants to reward its investors but doesn’t have the cash to distribute, or if it needs to hold onto its existing cash for other investments. Property dividends are not as common as cash or stock dividends. They are recorded at the fair market value of the asset being distributed. To illustrate accounting for a property dividend, assume that Duratech Corporation has 60,000 shares of $0.50 par value common stock outstanding at the end of its second year of operations, and the company’s board of directors declares a property dividend consisting of a package of soft drinks that it produces to each holder of common stock. The retail value of each case is $3.50. The amount of the dividend is calculated by multiplying the number of shares by the market value of each package: 60,000 shares × $3.50 = $210,000 The declaration to record the property dividend is a decrease (debit) to Retained Earnings for the value of the dividend and an increase (credit) to Property Dividends Payable for the $210,000. The journal entry to distribute the soft drinks on January 14 decreases the Property Dividends Payable account (debit) and decreases the Inventory account for the soft drinks distributed (credit). Comparing Small Stock Dividends, Large Stock Dividends, and Stock Splits Companies that do not want to issue cash or property dividends but still want to provide some benefit to shareholders may choose between small stock dividends, large stock dividends, and stock splits. Both small and large stock dividends occur when a company distributes additional shares of stock to existing stockholders. There is no change in total assets, total liabilities, or total stockholders’ equity when a small stock dividend, a large stock dividend, or a stock split occurs. Both types of stock dividends impact the accounts in stockholders’ equity. A stock split causes no change in any of the accounts within stockholders’ equity. The impact on the financial statement usually does not drive the decision to choose between one of the stock dividend types or a stock split. Instead, the decision is typically based on its effect on the market. Large stock dividends and stock splits are done in an attempt to lower the market price of the stock so that it is more affordable to potential investors. A small stock dividend is viewed by investors as a distribution of the company’s earnings. Both small and large stock dividends cause an increase in common stock and a decrease to retained earnings. This is a method of capitalizing (increasing stock) a portion of the company’s earnings (retained earnings). Stock Dividends Some companies issue shares of stock as a dividend rather than cash or property. This often occurs when the company has insufficient cash but wants to keep its investors happy. When a company issues a stock dividend , it distributes additional shares of stock to existing shareholders. These shareholders do not have to pay income taxes on stock dividends when they receive them; instead, they are taxed when the investor sells them in the future. A stock dividend distributes shares so that after the distribution, all stockholders have the exact same percentage of ownership that they held prior to the dividend. There are two types of stock dividends—small stock dividends and large stock dividends. The key difference is that small dividends are recorded at market value and large dividends are recorded at the stated or par value.
Small Stock Dividends A small stock dividend occurs when a stock dividend distribution is less than 25% of the total outstanding shares based on the shares outstanding prior to the dividend distribution. To illustrate, assume that Duratech Corporation has 60,000 shares of $0.50 par value common stock outstanding at the end of its second year of operations. Duratech’s board of directors declares a 5% stock dividend on the last day of the year, and the market value of each share of stock on the same day was $9. Figure 14.9 shows the stockholders’ equity section of Duratech’s balance sheet just prior to the stock declaration. The 5% common stock dividend will require the distribution of 60,000 shares times 5%, or 3,000 additional shares of stock. An investor who owns 100 shares will receive 5 shares in the dividend distribution (5% × 100 shares). The journal entry to record the stock dividend declaration requires a decrease (debit) to Retained Earnings for the market value of the shares to be distributed: 3,000 shares × $9, or $27,000. An increase (credit) to the Common Stock Dividends Distributable account is recorded for the par value of the stock to be distributed: 3,000 × $0.50, or $1,500. The excess of the market value over the par value is reported as an increase (credit) to the Additional Paid-in Capital from Common Stock account in the amount of $25,500. If the company prepares a balance sheet prior to distributing the stock dividend, the Common Stock Dividends Distributable account is reported in the equity section of the balance sheet beneath the Common Stock account. The journal entry to record the stock dividend distribution requires a decrease (debit) to Common Stock Dividends Distributable to remove the distributable amount from that account, $1,500, and an increase (credit) to Common Stock for the same par value amount. To see the effects on the balance sheet, it is helpful to compare the stockholders’ equity section of the balance sheet before and after the small stock dividend. After the distribution, the total stockholders’ equity remains the same as it was prior to the distribution. The amounts within the accounts are merely shifted from the earned capital account (Retained Earnings) to the contributed capital accounts (Common Stock and Additional Paid-in Capital). However, the number of shares outstanding has changed. Prior to the distribution, the company had 60,000 shares outstanding. Just after the distribution, there are 63,000 outstanding. The difference is the 3,000 additional shares of the stock dividend distribution. The company still has the same total value of assets, so its value does not change at the time a stock distribution occurs. The increase in the number of outstanding shares does not dilute the total value of the shares held by the existing shareholders. The market value of the original shares plus the newly issued shares is the same as the market value of the original shares before the stock dividend. For example, assume an investor owns 200 shares with a market value of $10 each for a total market value of $2,000. She receives 10 shares as a stock dividend from the company. She now has 210 shares with a total market value of $2,000. Each share now has a theoretical market value of about $9.52.
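A minimal sketch of the small stock dividend arithmetic, using the Duratech figures from this example (variable names are ours):

```python
outstanding, par, market = 60_000, 0.50, 9.00
dividend_rate = 0.05                        # a small (under 25%) dividend

new_shares = outstanding * dividend_rate    # 3,000 shares
re_debit = new_shares * market              # $27,000 at market value
distributable = new_shares * par            # $1,500 at par value
apic_credit = re_debit - distributable      # $25,500
print(new_shares, re_debit, distributable, apic_credit)

# The holder's total value is unchanged; only the per-share value falls.
print(f"${2_000 / 210:.2f} per share")      # about $9.52 on 210 shares
```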
Large Stock Dividends A large stock dividend occurs when a distribution of stock to existing shareholders is greater than 25% of the total outstanding shares just before the distribution. The accounting for large stock dividends differs from that of small stock dividends because a large dividend impacts the stock’s market value per share. While there may be a subsequent change in the market price of the stock after a small dividend, it is not as abrupt as that with a large dividend. To illustrate, assume that Duratech Corporation’s balance sheet at the end of its second year of operations shows the following in the stockholders’ equity section prior to the declaration of a large stock dividend. Also assume that Duratech’s board of directors declares a 30% stock dividend on the last day of the year, when the market value of each share of stock was $9. The 30% stock dividend will require the distribution of 60,000 shares times 30%, or 18,000 additional shares of stock. An investor who owns 100 shares will receive 30 shares in the dividend distribution (30% × 100 shares). The journal entry to record the stock dividend declaration requires a decrease (debit) to Retained Earnings and an increase (credit) to Common Stock Dividends Distributable for the par or stated value of the shares to be distributed: 18,000 shares × $0.50, or $9,000. The journal entry is: The subsequent distribution will reduce the Common Stock Dividends Distributable account with a debit and increase the Common Stock account with a credit for the $9,000. There is no consideration of the market value in the accounting records for a large stock dividend because the number of shares issued in a large dividend is large enough to impact the market; as such, it causes an immediate reduction of the market price of the company’s stock. In comparing the stockholders’ equity section of the balance sheet before and after the large stock dividend, we can see that the total stockholders’ equity is the same before and after the stock dividend, just as it was with a small dividend ( Figure 14.10 ). Similar to the distribution of a small dividend, the amounts within the accounts are shifted from the earned capital account (Retained Earnings) to the contributed capital account (Common Stock), though in different amounts. The number of shares outstanding has increased from the 60,000 shares prior to the distribution, to the 78,000 outstanding shares after the distribution. The difference is the 18,000 additional shares in the stock dividend distribution. No change to the company’s assets occurred; however, the potential subsequent increase in market value of the company’s stock will increase the investor’s perception of the value of the company. Stock Splits A traditional stock split occurs when a company’s board of directors issues new shares to existing shareholders in place of the old shares by increasing the number of shares and reducing the par value of each share. For example, in a 2-for-1 stock split, two shares of stock are distributed for each share held by a shareholder. From a practical perspective, shareholders return the old shares and receive two shares for each share they previously owned. The new shares have half the par value of the original shares, but now the shareholder owns twice as many. If a 5-for-1 split occurs, shareholders receive 5 new shares for each of the original shares they owned, and the new par value results in one-fifth of the original par value per share. While a company technically has no control over its common stock price, a stock’s market value is often affected by a stock split.
When a split occurs, the market value per share is reduced to balance the increase in the number of outstanding shares. In a 2-for-1 split, for example, the value per share typically will be reduced by half. As such, although the number of outstanding shares and the price change, the total market value remains constant. If you buy a candy bar for $1 and cut it in half, each half is now worth $0.50. The total value of the candy does not increase just because there are more pieces. A stock split is much like a large stock dividend in that both are large enough to cause a change in the market price of the stock. Additionally, the split indicates that share value has been increasing, suggesting growth is likely to continue and result in further increases in demand and value. Companies often make the decision to split stock when the stock price has increased enough to be out of line with competitors, and the business wants to continue to offer shares at an attractive price for small investors. Concepts In Practice Samsung Boasts a 50-to-1 Stock Split In May of 2018, Samsung Electronics 14 had a 50-to-1 stock split in an attempt to make it easier for investors to buy its stock. Samsung ’s market price of each share prior to the split was an incredible 2,650,000 won (the won is the South Korean currency), or $2,467.48. Buying one share of stock at this price is rather expensive for most people. As might be expected, even after a slight drop in trading activity just after the split announcement, the reduced market price of the stock generated significant interest from investors by making the price per share far more affordable. The split caused the price to drop to 53,000 won, or $49.35 per share. This made the stock more accessible to potential investors who were previously unable to afford a share at $2,467. 14 Joyce Lee. “Trading in Samsung Electronics Shares Surges after Stock Split.” Reuters . May 3, 2018. https://www.reuters.com/article/us-samsung-elec-stocks/samsung-elec-shares-open-at-53000-won-each-after-501-stock-split-idUSKBN1I500B A reverse stock split occurs when a company attempts to increase the market price per share by reducing the number of shares of stock. For example, a 1-for-3 stock split is called a reverse split since it reduces the number of shares of stock outstanding by two-thirds and triples the par or stated value per share. The effect on the market is to increase the market value per share. A primary motivator of companies invoking reverse splits is to avoid being delisted and taken off a stock exchange for failure to maintain the exchange’s minimum share price. Accounting for stock splits is quite simple. No journal entry is recorded for a stock split. Instead, the company prepares a memo entry in its journal that indicates the nature of the stock split and indicates the new par value. The balance sheet will reflect the new par value and the new number of shares authorized, issued, and outstanding after the stock split. To illustrate, assume that Duratech’s board of directors declares a 4-for-1 common stock split on its $0.50 par value stock. Just before the split, the company has 60,000 shares of common stock outstanding, and its stock was selling at $24 per share. The split causes the number of shares outstanding to increase by four times to 240,000 shares (4 × 60,000), and the par value to decline to one-fourth of its original value, to $0.125 per share ($0.50 ÷ 4). No change occurs to the dollar amount of any general ledger account.
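A minimal sketch of the split arithmetic, using the Duratech figures (the asserts confirm that total par capital and total market value are unchanged):

```python
shares, par, price = 60_000, 0.50, 24.00
split = 4                                   # a 4-for-1 split

shares_after = shares * split               # 240,000 shares
par_after = par / split                     # $0.125 per share
price_after = price / split                 # about $6 per share

# Total par capital and total market value are unchanged:
assert shares * par == shares_after * par_after
assert shares * price == shares_after * price_after
print(shares_after, par_after, price_after)
```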
The split typically causes the market price of stock to decline immediately to one-fourth of the original value—from the $24 per share pre-split price to approximately $6 per share post-split ($24 ÷ 4), because the total value of the company did not change as a result of the split. The total stockholders’ equity on the company’s balance sheet before and after the split remains the same. Think It Through Accounting for a Stock Split You have just earned your MBA and landed your dream job with a large corporation as a manager trainee in the corporate accounting department. Your employer plans to offer a 3-for-2 stock split. Briefly indicate the accounting entries necessary to recognize the split in the company’s accounting records and the effect the split will have on the company’s balance sheet. Your Turn Dividend Accounting Cynadyne, Inc. has 4,000 shares of $0.20 par value common stock authorized, 2,800 issued, and 400 shares held in treasury at the end of its first year of operations. On May 1, the company declared a $1 per share cash dividend, with a date of record on May 12, to be paid on May 25. What journal entries will be prepared to record the dividends? Solution A journal entry for the dividend declaration and a journal entry for the cash payout: To record the declaration on May 1, debit Retained Earnings and credit Cash Dividends Payable for $2,400 (2,400 outstanding shares × $1). On the date of record, May 12, no entry is made. To record the payment on May 25, debit Cash Dividends Payable and credit Cash for $2,400. Think It Through Recording Stock Transactions In its first year of operations, the following transactions occur for a company: Net profit for the year is $16,000 100 shares of $1 par value common stock are issued for $32 per share The company purchases 10 shares at $35 per share The company pays a cash dividend of $1.50 per share Prepare journal entries for the above transactions and provide the balance in the following accounts: Common Stock, Dividends, Paid-in Capital, Retained Earnings, and Treasury Stock. 14.4 Compare and Contrast Owners’ Equity versus Retained Earnings Owners’ equity represents the business owners’ share of the company. It is often referred to as net worth or net assets in the financial world and as stockholders’ equity or shareholders’ equity when discussing the business operations of corporations. From a practical perspective, it represents everything a company owns (the company’s assets) minus all the company owes (its liabilities). While “owners’ equity” is used for all three types of business organizations (corporations, partnerships, and sole proprietorships), only sole proprietorships name the balance sheet account “owner’s equity” as the entire equity of the company belongs to the sole owner. Partnerships (to be covered more thoroughly in Partnership Accounting ) often label this section of their balance sheet as “partners’ equity.” All three forms of business utilize different accounting for the respective equity transactions and use different equity accounts, but they all rely on the same relationship represented by the basic accounting equation ( Figure 14.11 ). Three Forms of Business Ownership Businesses operate in one of three forms—sole proprietorships, partnerships, or corporations. Sole proprietorships utilize a single account in owners’ equity in which the owner’s investments and net income of the company are accumulated and distributions to the owner are withdrawn.
Think It Through Recording Stock Transactions In your first year of operations, the following transactions occur for a company: net profit for the year is $16,000; 100 shares of $1 par value common stock are issued for $32 per share; the company purchases 10 shares at $35 per share; and the company pays a cash dividend of $1.50 per share. Prepare journal entries for the above transactions and provide the balance in the following accounts: Common Stock, Dividends, Paid-in Capital, Retained Earnings, and Treasury Stock. 14.4 Compare and Contrast Owners’ Equity versus Retained Earnings Owners’ equity represents the business owners’ share of the company. It is often referred to as net worth or net assets in the financial world and as stockholders’ equity or shareholders’ equity when discussing the business operations of corporations. From a practical perspective, it represents everything a company owns (the company’s assets) minus all the company owes (its liabilities). While “owners’ equity” is used for all three types of business organizations (corporations, partnerships, and sole proprietorships), only sole proprietorships name the balance sheet account “owner’s equity,” as the entire equity of the company belongs to the sole owner. Partnerships (to be covered more thoroughly in Partnership Accounting) often label this section of their balance sheet as “partners’ equity.” All three forms of business utilize different accounting for the respective equity transactions and use different equity accounts, but they all rely on the same relationship represented by the basic accounting equation (Figure 14.11). Three Forms of Business Ownership Businesses operate in one of three forms: sole proprietorships, partnerships, or corporations. Sole proprietorships utilize a single account in owners’ equity in which the owner’s investments and net income of the company are accumulated and distributions to the owner are withdrawn. Partnerships utilize a separate capital account for each partner; each capital account holds the respective partner’s investments and share of net income, reduced by distributions to that partner. Corporations differ from sole proprietorships and partnerships in that their operations are more complex, often due to size. Unlike these other entity forms, owners of a corporation usually change continuously. The stockholders’ equity section of the balance sheet for corporations contains two primary categories of accounts. The first is paid-in capital, or contributed capital, consisting of amounts paid in by owners. The second category is earned capital, consisting of amounts earned by the corporation as part of business operations. On the balance sheet, retained earnings is a key component of the earned capital section, while the stock accounts such as common stock, preferred stock, and additional paid-in capital are the primary components of the contributed capital section. Concepts In Practice Contributed Capital and Earned Capital The stockholders’ equity section of Cracker Barrel Old Country Store, Inc.’s consolidated balance sheet as of July 28, 2017, and July 29, 2016, shows the company’s contributed capital and the earned capital accounts. 15 15 Cracker Barrel. Cracker Barrel Old Country Store Annual Report 2017. September 22, 2017. http://investor.crackerbarrel.com/static-files/c05f90b8-1214-4f50-8508-d9a70301f51f Characteristics and Functions of the Retained Earnings Account Retained earnings is the primary component of a company’s earned capital. It generally consists of the cumulative net income, less any cumulative losses and dividends declared. A basic statement of retained earnings is referred to as an analysis of retained earnings because it shows the changes in the retained earnings account during the period. A company preparing a full set of financial statements may choose between preparing a statement of retained earnings, if the activity in its stock accounts is negligible, or a statement of stockholders’ equity, for corporations with activity in their stock accounts. A statement of retained earnings for Clay Corporation for its second year of operations (Figure 14.12) shows that the company generated more net income than the amount of dividends it declared. When the retained earnings balance drops below zero, this negative or debit balance is referred to as a deficit in retained earnings. Restrictions to Retained Earnings Retained earnings is often subject to certain restrictions. Restricted retained earnings is the portion of a company’s earnings that has been designated for a particular purpose due to legal or contractual obligations. Some of the restrictions reflect the laws of the state in which a company operates. Many states restrict retained earnings by the cost of treasury stock, which prevents the legal capital of the stock from dropping below zero. Other restrictions are contractual, such as debt covenants and loan arrangements; these exist to protect creditors, often limiting the payment of dividends to maintain a minimum level of earned capital. Appropriations of Retained Earnings A company’s board of directors may designate a portion of a company’s retained earnings for a particular purpose such as future expansion, special projects, or as part of a company’s risk management plan. The amount designated for a particular purpose is classified as appropriated retained earnings.
There are two options in accounting for appropriated retained earnings, both of which allow the corporation to inform the financial statement users of the company’s future plans. The first accounting option is to make no journal entry and disclose the amount of appropriation in the notes to the financial statements. The second option is to record a journal entry that transfers part of the unappropriated retained earnings into an Appropriated Retained Earnings account. To illustrate, assume that on March 3, Clay Corporation’s board of directors appropriates $12,000 of its retained earnings for future expansion. The company’s retained earnings account is first renamed as Unappropriated Retained Earnings. The journal entry decreases the Unappropriated Retained Earnings account with a debit and increases the Appropriated Retained Earnings account with a credit for $12,000. The company will report the appropriated retained earnings in the earned capital section of its balance sheet. It should be noted that an appropriation does not set aside funds, nor does it have any income statement, asset, or liability effect for the appropriated amount. The appropriation simply designates a portion of the company’s retained earnings for a specific purpose, while signaling that the earnings are being retained in the company and are not available for dividend distributions. Statement of Stockholders’ Equity The statement of retained earnings is a subsection of the statement of stockholders’ equity. While the retained earnings statement shows the changes between the beginning and ending balances of the retained earnings account during the period, the statement of stockholders’ equity provides the changes between the beginning and ending balances of each of the stockholders’ equity accounts, including retained earnings. The format typically displays a separate column for each stockholders’ equity account, as shown for Clay Corporation in Figure 14.13. The key events that occurred during the year, including net income, stock issuances, and dividends, are listed vertically. The stockholders’ equity section of the company’s balance sheet displays only the ending balances of the accounts and does not provide the activity or changes during the period. Nearly all public companies report a statement of stockholders’ equity rather than a statement of retained earnings because GAAP requires disclosure of the changes in stockholders’ equity accounts during each accounting period. It is significantly easier to see the changes in the accounts on a statement of stockholders’ equity than in a paragraph note to the financial statements. IFRS Connection Corporate Accounting and IFRS Both U.S. GAAP and IFRS require the reporting of the various owners’ accounts. Under U.S. GAAP, these accounts are presented in a statement that is most often called the Statement of Stockholders’ Equity. Under IFRS, this statement is usually called the Statement of Changes in Equity. Some of the biggest differences between U.S. GAAP and IFRS that arise in reporting the various accounts that appear in those statements relate to either categorization or terminology differences. U.S. GAAP divides owners’ accounts into two categories: contributed capital and retained earnings. IFRS uses three categories: share capital, accumulated profits and losses, and reserves. The first two IFRS categories correspond to the two categories used under U.S. GAAP. What about the third category, reserves?
Reserves is a category that is used to report items such as revaluation surpluses from revaluing long-term assets (see the Long-Term Assets Feature Box: IFRS Connection for details), as well as other equity transactions such as unrealized gains and losses on available-for-sale securities and transactions that fall under Other Comprehensive Income (topics typically covered in more advanced accounting classes). U.S. GAAP does not use the term “reserves” for any reporting. There are also differences in terminology between U.S. GAAP and IFRS, shown in Table 14.1.

Table 14.1 Terminology Differences between U.S. GAAP and IFRS

    U.S. GAAP                     IFRS
    Common stock                  Share capital
    Preferred stock               Preference shares
    Additional paid-in capital    Share premium
    Stockholders                  Shareholders
    Retained earnings             Retained profits or accumulated profits
    Retained earnings deficit     Accumulated losses

All of this information pertains to publicly traded corporations, but what about corporations that are not publicly traded? Most corporations in the U.S. are not publicly traded, so do these corporations use U.S. GAAP? Some do; some do not. A non-public corporation can use the cash basis, tax basis, or full accrual basis of accounting. Most corporations would use a full accrual basis of accounting such as U.S. GAAP. Cash and tax bases are most likely used only by sole proprietors or small partnerships. However, U.S. GAAP is not the only full accrual method available to non-public corporations. Two alternatives are IFRS and a simpler form of IFRS, known as IFRS for Small and Medium Sized Entities, or IFRS for SMEs for short. In 2008, the AICPA recognized the IASB as a standard setter of acceptable GAAP and designated IFRS and IFRS for SMEs as an acceptable set of generally accepted accounting principles. However, it is up to each State Board of Accountancy to determine if that state will allow the use of IFRS or IFRS for SMEs by non-public entities incorporated in that state. What is an SME? Despite the use of size descriptors in the title, qualifying as a small or medium-sized entity has nothing to do with size. An SME is any entity that publishes general purpose financial statements for public use but does not have public accountability. In other words, the entity is not publicly traded. In addition, the entity, even if it is a partnership, cannot act as a fiduciary; for example, it cannot be a bank or insurance company and use SME rules. Why might a non-public corporation want to use IFRS for SMEs? First, IFRS for SMEs contains fewer and simpler standards. IFRS for SMEs has only about 300 pages of requirements, whereas regular IFRS is over 2,500 pages and U.S. GAAP is over 25,000 pages. Second, IFRS for SMEs is only modified every three years. This means entities using IFRS for SMEs don’t have to frequently adjust their accounting systems and reporting to new standards, whereas U.S. GAAP and IFRS are modified more frequently. Finally, if a corporation transacts business with international businesses, or hopes to attract international partners, seek capital from international sources, or be bought out by an international company, then having its financial statements in IFRS form would make these transactions easier. Prior Period Adjustments Prior period adjustments are corrections of errors that appeared on previous periods’ financial statements. These errors can stem from mathematical errors, misinterpretation of GAAP, or a misunderstanding of facts at the time the financial statements were prepared.
Many errors impact the retained earnings account, whose balance is carried forward from the previous period. Since the financial statements have already been issued, they must be corrected. The correction involves changing the financial statement amounts to the amounts they would have been had no errors occurred, a process known as restatement. The correction may impact both balance sheet and income statement accounts, requiring the company to record a transaction that corrects both. Since income statement accounts are closed at the end of every period, the journal entry will contain an entry to the Retained Earnings account. As such, prior period adjustments are reported on a company’s statement of retained earnings as an adjustment to the beginning balance of retained earnings. By directly adjusting beginning retained earnings, the adjustment has no effect on current period net income. The goal is to separate the error correction from the current period’s net income to avoid distorting the current period’s profitability. In other words, prior period adjustments are a way to go back and correct past financial statements that were misstated because of a reporting error. Concepts In Practice Are Companies Making Fewer Errors in Financial Reporting? According to Kevin LaCroix, additional reporting requirements created by the Sarbanes-Oxley Act prompted a surge in 2005 and 2006 in the number of companies that had to make corrections and reissue financial statements. Since that time, however, the number of companies making corrections has dropped by over 60%, due partially to a decline in the number of U.S. companies listed on stock exchanges and partially to tighter regulations. The severity of the errors that caused restatements has declined as well, primarily due to tighter regulation, which has forced companies to improve their internal controls. 16 16 Kevin M. LaCroix. “Financial Restatements Continue to Decline for U.S. Reporting Companies.” The D&O Diary. June 12, 2017. https://www.dandodiary.com/2017/06/articles/sox-generally/financial-restatements-continue-decline-u-s-reporting-companies/ To illustrate how to correct an error requiring a prior period adjustment, assume that in early 2020, Clay Corporation’s controller determined it had made an error when calculating depreciation in the preceding year, resulting in an understatement of depreciation of $1,000. The entry to correct the error contains a decrease to Retained Earnings on the statement of retained earnings for $1,000. Depreciation expense would have been $1,000 higher if the correct depreciation had been recorded. The entry to Retained Earnings adds an additional debit to the total debits that were previously part of the closing entry for the previous year. The credit is to the balance sheet account in which the $1,000 would have been recorded had the correct depreciation entry occurred, in this case, Accumulated Depreciation. Because the adjustment to retained earnings is due to an income statement amount that was recorded incorrectly, there will also be an income tax effect. The tax effect is shown in the statement of retained earnings in presenting the prior period adjustment. Assuming that Clay Corporation’s income tax rate is 30%, the tax effect of the $1,000 is a $300 (30% × $1,000) reduction in income taxes. The increase in expenses in the amount of $1,000, combined with the $300 decrease in income tax expense, results in a net $700 decrease in net income for the prior period.
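The net-of-tax arithmetic behind the $700 adjustment reduces to two lines. The following Python sketch is illustrative only, with the 30% rate taken from the example above.

    # Sketch: net-of-tax prior period adjustment for an understated expense.
    error_expense = 1_000                        # depreciation understated in the prior year
    tax_rate = 0.30

    tax_effect = error_expense * tax_rate        # 300.0 reduction in income taxes
    net_adjustment = error_expense - tax_effect  # 700.0 net decrease to beginning retained earnings
    print(tax_effect, net_adjustment)            # 300.0 700.0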
The $700 prior period correction is reported as an adjustment to beginning retained earnings, net of income taxes, as shown in Figure 14.14. Generally accepted accounting principles (GAAP), the set of accounting rules that companies are required to follow for financial reporting, requires companies to disclose in the notes to the financial statements the nature of any prior period adjustment and the related impact on the financial statement amounts. Link to Learning The correction of errors in financial statements is a complicated situation. Both shareholders and investors tend to view these corrections with deep suspicion. Many believe corporations are attempting to smooth earnings, hide possible problems, or cover up mistakes. The Journal of Accountancy, a periodical published by the AICPA, offers guidance on how to manage this process. Browse the Journal of Accountancy website for articles and cases of prior period adjustment issues. Concepts In Practice Tune into Financial News Tune in to a financial news program such as Squawk Box or Mad Money on CNBC, or a program on Bloomberg. Notice the terminology used to describe the corporations being analyzed. Notice the speed at which topics are discussed. Are these shows for the novice investor? How could this information impact potential investors? Link to Learning Log onto the Annual Reports website to access a comprehensive collection of more than 5,000 annual reports produced by publicly traded companies. The site is a tremendous resource for both school and investment-related research. Reading annual reports provides a different type of insight into corporations. Beyond the financial statements, annual reports give shareholders and the public a glimpse into the operations, mission, and charitable giving of a corporation. 14.5 Discuss the Applicability of Earnings per Share as a Method to Measure Performance Earnings per share (EPS) measures the portion of a corporation’s profit allocated to each outstanding share of common stock. Many financial analysts believe that EPS is the single most important tool in assessing a stock’s market price. A high or increasing earnings per share can drive up a stock price. Conversely, falling earnings per share can lower a stock’s market price. EPS is also a component in calculating the price-to-earnings ratio (the market price of the stock divided by its earnings per share), which many investors find to be a key indicator of the value of a company’s stock. Concepts In Practice Microsoft Earnings Announcement Exceeds Wall Street Targets While a company’s board of directors makes the final approval of the reports, a key goal of each company is to look favorable to investors while providing financial statements that accurately reflect the financial condition of the company. Each quarter, public companies report EPS through a public announcement as one of the key measures of their profitability. These announcements are highly anticipated by investors and analysts. The suspense is heightened because analysts provide earnings estimates to the public prior to each announcement release. According to Matt Weinberger of Business Insider, the announcement by Microsoft of its first quarter 2018 EPS, reported at $0.95 per share, higher than analysts’ estimates of $0.85 per share, caused the value of its stock to rise by more than 3% within hours of the announcement. 17 While revenue was the other key metric in Microsoft’s earnings announcement, EPS carried more weight in the surge of the company’s market price. 17 Matt Weinberger.
“Microsoft’s Cloud Business Is Driving a Revenue Surge That’s Well above Wall Street Targets.” Business Insider. April 26, 2018. https://www.businessinsider.com/microsoft-q3-fy18-earnings-revenue-eps-analysis-2018-4 Calculating Earnings per Share Earnings per share is the profit a company earns for each of its outstanding common shares. Both the balance sheet and income statement are needed to calculate EPS. The balance sheet provides details on the preferred dividend rate, the total par value of the preferred stock, and the number of common shares outstanding. The income statement indicates the net income for the period. The formula to calculate basic earnings per share is: Basic EPS = (Net income − Preferred dividends) ÷ Weighted average number of common shares outstanding. By removing the preferred dividends from net income, the numerator represents the profit available to common shareholders. Because preferred dividends represent the amount of net income to be distributed to preferred shareholders, this portion of the income is obviously not available for common shareholders. While there are a number of variations of measuring a company’s profit used in the financial world, such as NOPAT (net operating profit after taxes) and EBITDA (earnings before interest, taxes, depreciation, and amortization), GAAP requires companies to calculate EPS based on a corporation’s net income, as this amount appears directly on a company’s income statement, which for public companies must be audited. In the denominator, only common shares are used to determine earnings per share because EPS is a measure of earnings for each common share of stock. The denominator can fluctuate throughout the year as a company issues and buys back shares of its own stock. The weighted average number of shares is used in the denominator because of this fluctuation. To illustrate, assume that a corporation began the year with 600 shares of common stock outstanding and then on April 1 issued 1,000 more shares. During the period January 1 to March 31, the company had the original 600 shares outstanding. Once the new shares were issued, the company had the original 600 plus the new 1,000 shares, for a total of 1,600 shares for each of the next nine months, from April 1 to December 31. To determine the weighted average shares, apply these fractional weights to both of the share amounts, as shown in Figure 14.15: the 600 shares are weighted for 3/12 of the year and the 1,600 shares for 9/12, giving a weighted average of 1,350 shares (600 × 3/12 + 1,600 × 9/12 = 150 + 1,200). If the shares were not weighted, the calculation would not consider the time period during which the shares were outstanding. To illustrate how EPS is calculated, assume Sanaron Company earns $50,000 in net income during 2020. During the year, the company also declared a $10,000 dividend on preferred stock and a $14,000 dividend on common stock. The company had 5,000 common shares outstanding the entire year along with 2,000 preferred shares. Sanaron has generated $8 of earnings ($50,000 less the $10,000 of preferred dividends) for each of the 5,000 common shares of stock it has outstanding: Earnings per share = ($50,000 − $10,000) ÷ 5,000 = $8.00. Think It Through Ethical Calculations of Earnings per Share When a company issues new shares of stock and buys others back as treasury stock, EPS can be manipulated because both of these transactions affect the number of shares of stock outstanding. What are the ethical considerations involved in calculating EPS?
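Both calculations above, the weighted average share count and basic EPS, reduce to a few lines of arithmetic. The following Python sketch is illustrative only; the function names are not from the text.

    # Sketch: weighted average shares outstanding and basic EPS.
    def weighted_average_shares(periods):
        # periods: list of (shares_outstanding, months) pairs covering twelve months
        return sum(shares * months / 12 for shares, months in periods)

    # 600 shares for three months, then 1,600 shares for nine months
    print(weighted_average_shares([(600, 3), (1_600, 9)]))   # 1350.0

    def basic_eps(net_income, preferred_dividends, weighted_shares):
        return (net_income - preferred_dividends) / weighted_shares

    # Sanaron: $50,000 net income, $10,000 preferred dividends, 5,000 shares all year
    print(basic_eps(50_000, 10_000, 5_000))                  # 8.0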
Measuring Performance with EPS EPS is a key profitability measure that both current and potential common stockholders monitor. Its importance is accentuated by the fact that GAAP requires public companies to report EPS on the face of a company’s income statement. This is the only ratio that requires such prominent reporting. In fact, public companies are required to report two different earnings per share amounts on their income statements: basic and diluted. We’ve illustrated the calculation of basic EPS. Diluted EPS, which is not demonstrated here, involves the consideration of all securities, such as stocks and bonds, that could potentially dilute, or reduce, the basic EPS. Link to Learning Where can you find EPS information on public companies? Check out the Yahoo Finance website and search for EPS data for your favorite corporation. Common stock shares are normally purchased by investors to generate income through dividends or to sell at a profit in the future. Investors realize that inadequate EPS can result in poor or inconsistent dividend payments and fluctuating stock prices. As such, companies seek to produce EPS amounts that rise each period. However, an increase in EPS may not always reflect favorable performance, as there are multiple reasons that EPS may increase. One way EPS can increase is because of increased net income. On the other hand, it can also increase when a company buys back its own shares of stock. For example, assume that Ranadune Enterprises generated net income of $15,000 in 2019, with 20,000 shares of common stock and no preferred stock outstanding throughout the year. On January 1, 2020, the company buys back 2,500 shares of its common stock and holds them as treasury shares. Net income for 2020 stayed static at $15,000. Just before the repurchase of the stock, the company’s EPS is $0.75 per share: Earnings per share = $15,000 ÷ 20,000 shares = $0.75 per share. The purchase of treasury stock in 2020 reduces the common shares outstanding to 17,500, because treasury shares are considered issued but not outstanding (20,000 − 2,500). EPS for 2020 is now $0.86 per share even though earnings remain the same: Earnings per share = $15,000 ÷ 17,500 shares = $0.86 per share. This increase in EPS occurred because the net income is now spread over fewer shares of stock. Similarly, EPS can decline even when a company’s net income increases, if the number of shares increases faster than net income. Unfortunately, managers understand how the number of shares outstanding can affect EPS and are often in a position to manipulate EPS by creating transactions that target a desired EPS number.
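The buyback effect on EPS is simple arithmetic, sketched below in Python for concreteness; again, this is illustrative only.

    # Sketch: effect of a treasury stock buyback on EPS, holding net income constant.
    net_income = 15_000

    eps_before = net_income / 20_000             # 0.75
    eps_after = net_income / (20_000 - 2_500)    # about 0.857, reported as $0.86
    print(round(eps_before, 2), round(eps_after, 2))   # 0.75 0.86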
Ethical Considerations Stock Buybacks Drive Up Earnings per Share: Ethical? Public companies can increase their earnings per share by buying their own stock in the open market. The increase in earnings per share results because the number of shares is reduced by the purchase even though the earnings remain the same. With fewer shares and the same amount of earnings, the earnings per share increases without any change in overall profitability or operational efficiency. A MarketWatch article citing Goldman Sachs states, “S&P 500 companies will spend about $780 billion on share buybacks in 2017, marking a 30% rise from 2016.” 18 An article in Forbes provides some perspective by pointing out that buying back shares was legalized in 1982, but for the majority of the twentieth century, corporate buybacks of shares were considered illegal because “they were thought to be a form of stock market manipulation. . . . Buying back company stock can inflate a company’s share price and boost its earnings per share—metrics that often guide lucrative executive bonuses.” 19 Is a corporation buying back its own shares an ethical way to raise or maintain the price of its shares? 18 C. Linnane and T. Kilgore. “Share Buybacks Will Rise 30% to $780 Billion Next Year, Says Goldman Sachs.” MarketWatch. November 22, 2016. https://www.marketwatch.com/story/share-buybacks-will-return-with-a-vengeance-next-year-2016-11-21. 19 Arne Alsin. “The Ugly Truth Behind Stock Buybacks.” Forbes. Feb. 28, 2017. https://www.forbes.com/sites/aalsin/2017/02/28/shareholders-should-be-required-to-vote-on-stock-buybacks/#69b300816b1e Earnings per share is interpreted differently by different analysts. Some financial experts favor companies with higher EPS values. The reasoning is that a higher EPS is a reflection of strong earnings and therefore a good investment prospect. A more meaningful analysis occurs when EPS is tracked over a number of years, such as when presented in the comparative income statements for Cracker Barrel Old Country Store, Inc.’s respective year ends in 2017, 2016, and 2015, shown in Figure 14.16. 20 Cracker Barrel’s basic EPS is labeled as “net income per share: basic.” 20 Cracker Barrel. Cracker Barrel Old Country Store 2017 Annual Report. September 22, 2017. http://investor.crackerbarrel.com/static-files/c05f90b8-1214-4f50-8508-d9a70301f51f Most analysts believe that a consistent improvement in EPS year after year is an indication of continuous improvement in the earning power of a company. This is what is seen in Cracker Barrel’s EPS amounts over each of the three years reported, moving from $6.85 to $7.91 to $8.40. However, it is important to remember that EPS is calculated on historical data, which is not always predictive of the future. In addition, when EPS is used to compare different companies, significant differences may exist. If the companies are in the same industry, that comparison may be more valuable than if they are in different industries. Basically, EPS should be one tool used in decision-making, utilized alongside other analytic tools. Your Turn Would You Have Invested? What if, in 1997, you had invested $5,000 in Amazon? Today, your investment would be worth nearly $1 million. Potential investors viewing Amazon’s income statement in 1997 would have seen an EPS of negative $0.11. In other words, Amazon lost $0.11 for each share of common stock outstanding. Would you have invested? Solution Answers will vary. A strong response would include the idea that a negative or small EPS reflects the past historical operations of a company. EPS does not predict the future. Investors in 1997 looked beyond Amazon’s profitability and saw its business model as having strong future potential. Think It Through Using Earnings per Share in Decision-Making As a valued employee, you have been awarded 10 shares of the company’s stock. Congratulations!
How could you use earnings per share to help you decide whether to sell the stock now or hold on to it for the future?
14
14.1 What Is Marriage? What Is a Family?

Marriage and family are key structures in most societies. While the two institutions have historically been closely linked in American culture, their connection is becoming more complex. The relationship between marriage and family is an interesting topic of study to sociologists.

What is marriage? Different people define it in different ways. Not even sociologists are able to agree on a single meaning. For our purposes, we’ll define marriage as a legally recognized social contract between two people, traditionally based on a sexual relationship and implying a permanence of the union. In practicing cultural relativism, we should also consider variations, such as whether a legal union is required (think of “common law” marriage and its equivalents), or whether more than two people can be involved (consider polygamy). Other variations on the definition of marriage might include whether spouses are of opposite sexes or the same sex, and how one of the traditional expectations of marriage (to produce children) is understood today.

Sociologists are interested in the relationship between the institution of marriage and the institution of family because, historically, marriages are what create a family, and families are the most basic social unit upon which society is built. Both marriage and family create status roles that are sanctioned by society.

So what is a family? A husband, a wife, and two children—maybe even a pet—has served as the model for the traditional American family for most of the 20th century. But what about families that deviate from this model, such as a single-parent household or a homosexual couple without children? Should they be considered families as well? The question of what constitutes a family is a prime area of debate in family sociology, as well as in politics and religion. Social conservatives tend to define the family in terms of structure with each family member filling a certain role (like father, mother, or child). Sociologists, on the other hand, tend to define family more in terms of the manner in which members relate to one another than on a strict configuration of status roles. Here, we’ll define family as a socially recognized group (usually joined by blood, marriage, or adoption) that forms an emotional connection and serves as an economic unit of society. Sociologists identify different types of families based on how one enters into them. A family of orientation refers to the family into which a person is born. A family of procreation describes one that is formed through marriage. These distinctions have cultural significance related to issues of lineage.

Drawing on two sociological paradigms, the sociological understanding of what constitutes a family can be explained by symbolic interactionism as well as functionalism. These two theories indicate that families are groups in which participants view themselves as family members and act accordingly. In other words, families are groups in which people come together to form a strong primary group connection, maintaining emotional ties to one another over a long period of time. Such families may include groups of close friends or teammates. In addition, the functionalist perspective views families as groups that perform vital roles for society—both internally (for the family itself) and externally (for society as a whole). Families provide for one another’s physical, emotional, and social well-being. Parents care for and socialize children.
Later in life, adult children often care for elderly parents. While interactionism helps us to understand the subjective experience of belonging to a “family,” functionalism illuminates the many purposes of families and their role in the maintenance of a balanced society (Parsons and Bales 1956). We will go into more detail about how these theories apply to family later in this chapter.

Challenges Families Face

Americans, as a nation, are somewhat divided when it comes to determining what does and what does not constitute a family. In a 2010 survey conducted by professors at the University of Indiana, nearly all participants (99.8 percent) agreed that a husband, wife, and children constitute a family. Ninety-two percent stated that a husband and a wife without children still constitute a family. The numbers drop for less traditional structures: unmarried couples with children (83 percent), unmarried couples without children (39.6 percent), gay male couples with children (64 percent), and gay male couples without children (33 percent) (Powell et al. 2010). This survey revealed that children tend to be the key indicator in establishing “family” status: the percentage of individuals who agreed that unmarried couples and gay couples constitute a family nearly doubled when children were added. The study also revealed that 60 percent of Americans agreed that if you consider yourself a family, you are a family (a concept that reinforces an interactionist perspective) (Powell 2010).

The government, however, is not so flexible in its definition of “family.” The U.S. Census Bureau defines a family as “a group of two people or more (one of whom is the householder) related by birth, marriage, or adoption and residing together” (U.S. Census Bureau 2010). While this structured definition can be used as a means to consistently track family-related patterns over several years, it excludes individuals such as cohabitating unmarried heterosexual and homosexual couples. Legality aside, sociologists would argue that the general concept of family is more diverse and less structured than in years past. Society has given more leeway to the design of a family, making room for what works for its members (Jayson 2010).

Family is, indeed, a subjective concept, but it is a fairly objective fact that family (whatever one’s concept of it may be) is very important to Americans. In a 2010 survey by Pew Research Center in Washington, D.C., 76 percent of adults surveyed stated that family is “the most important” element of their life—just one percent said it was “not important” (Pew Research Center 2010). It is also very important to society. President Ronald Reagan notably stated, “The family has always been the cornerstone of American society. Our families nurture, preserve, and pass on to each succeeding generation the values we share and cherish, values that are the foundation of our freedoms” (Lee 2009). While the design of the family may have changed in recent years, the fundamentals of emotional closeness and support are still present. Most respondents to the Pew survey stated that their family today is at least as close as (45 percent) or closer than (40 percent) the family with which they grew up (Pew Research Center 2010).

Alongside the debate surrounding what constitutes a family is the question of what Americans believe constitutes a marriage. Many religious and social conservatives believe that marriage can only exist between a man and a woman, citing religious scripture and the basics of human reproduction as support.
Social liberals and progressives, on the other hand, believe that marriage can exist between two consenting adults—be they a man and a woman, or a woman and a woman—and that it would be discriminatory to deny such a couple the civil, social, and economic benefits of marriage.

Marriage Patterns

With single parenting and cohabitation (when a couple shares a residence but not a marriage) becoming more acceptable in recent years, people may be less motivated to get married. In a recent survey, 39 percent of respondents answered “yes” when asked whether marriage is becoming obsolete (Pew Research Center 2010). The institution of marriage is likely to continue, but some previous patterns of marriage will become outdated as new patterns emerge. In this context, cohabitation contributes to the phenomenon of people getting married for the first time at a later age than was typical in earlier generations (Glezer 1991). Furthermore, marriage will continue to be delayed as more people place education and career ahead of “settling down.”

One Partner or Many?

Americans typically equate marriage with monogamy, when someone is married to only one person at a time. In many countries and cultures around the world, however, having one spouse is not the only form of marriage. In a majority of cultures (78 percent), polygamy, or being married to more than one person at a time, is accepted (Murdock 1967), with most polygamous societies existing in northern Africa and east Asia (Altman and Ginat 1996). Instances of polygamy are almost exclusively in the form of polygyny. Polygyny refers to a man being married to more than one woman at the same time. The reverse, when a woman is married to more than one man at the same time, is called polyandry. It is far less common and only occurs in about one percent of the world’s cultures (Altman and Ginat 1996). The reasons for the overwhelming prevalence of polygamous societies are varied, but they often include issues of population growth, religious ideologies, and social status.

While the majority of societies accept polygyny, the majority of people do not practice it. Often fewer than 10 percent (and no more than 25–35 percent) of men in polygamous cultures have more than one wife; these husbands are often older, wealthy, high-status men (Altman and Ginat 1996). The average plural marriage involves no more than three wives. Negev Bedouin men in Israel, for example, typically have two wives, although it is acceptable to have up to four (Griver 2008). As urbanization increases in these cultures, polygamy is likely to decrease as a result of greater access to mass media, technology, and education (Altman and Ginat 1996).

In the United States, polygamy is considered by most to be socially unacceptable and it is illegal. The act of entering into marriage while still married to another person is referred to as bigamy and is considered a felony in most states. Polygamy in America is often associated with those of the Mormon faith, although in 1890 the Mormon Church officially renounced polygamy. Fundamentalist Mormons, such as those in the Fundamentalist Church of Jesus Christ of Latter Day Saints (FLDS), on the other hand, still hold tightly to the historic Mormon beliefs and practices and allow polygamy in their sect. The prevalence of polygamy among Mormons is often overestimated due to sensational media stories such as the Yearning for Zion ranch raid in Texas in 2008 and popular television shows such as HBO’s Big Love and TLC’s Sister Wives.
It is estimated that there are about 37,500 fundamentalist Mormons involved in polygamy in the United States, Canada, and Mexico, but that number has shown a steady decrease in the last 100 years (Useem 2007). American Muslims, however, are an emerging group with an estimated 20,000 practicing polygamy. Again, polygamy among American Muslims is uncommon and occurs only in approximately one percent of the population (Useem 2007). For now, polygamy among American Muslims has gone fairly unnoticed by mainstream society, but like fundamentalist Mormons whose practices were off the public’s radar for decades, they may someday find themselves at the center of social debate.

Residency and Lines of Descent

When considering one’s lineage, most Americans look to both their father’s and mother’s sides. Both paternal and maternal ancestors are considered part of one’s family. This pattern of tracing kinship is called bilateral descent. Note that kinship, or one’s traceable ancestry, can be based on blood or marriage or adoption. Sixty percent of societies, mostly modernized nations, follow a bilateral descent pattern. Unilateral descent (the tracing of kinship through one parent only) is practiced in the other 40 percent of the world’s societies, with high concentration in pastoral cultures (O’Neal 2006).

There are three types of unilateral descent: patrilineal, which follows the father’s line only; matrilineal, which follows the mother’s side only; and ambilineal, which follows either the father’s side only or the mother’s side only, depending on the situation. In patrilineal societies, such as those in rural China and India, only males carry on the family surname. This gives males the prestige of permanent family membership while females are seen as only temporary members (Harrell 2001). American society assumes some aspects of patrilineal descent. For instance, most children assume their father’s last name even if the mother retains her birth name.

In matrilineal societies, inheritance and family ties are traced to women. Matrilineal descent is common in Native American societies, notably the Crow and Cherokee tribes. In these societies, children are seen as belonging to the women and, therefore, one’s kinship is traced to one’s mother, grandmother, great grandmother, and so on (Mails 1996). In ambilineal societies, which are most common in Southeast Asian countries, parents may choose to associate their children with the kinship of either the mother or the father. This choice may be based on the desire to follow stronger or more prestigious kinship lines or on cultural customs such as men following their father’s side and women following their mother’s side (Lambert 2009).

Tracing one’s line of descent to one parent rather than the other can be relevant to the issue of residence. In many cultures, newly married couples move in with, or near to, family members. In a patrilocal residence system, it is customary for the wife to live with (or near) her husband’s blood relatives (or family of orientation). Patrilocal systems can be traced back thousands of years. In a DNA analysis of 4,600-year-old bones found in Germany, scientists found indicators of patrilocal living arrangements (Haak et al. 2008). Patrilocal residence is thought to be disadvantageous to women because it makes them outsiders in the home and community; it also keeps them disconnected from their own blood relatives.
In China, where patrilocal and patrilineal customs are common, the written symbols for maternal grandmother (wàipó) are separately translated to mean “outsider” and “women” (Cohen 2011). Similarly, in matrilocal residence systems, where it is customary for the husband to live with his wife’s blood relatives (or her family of orientation), the husband can feel disconnected and can be labeled as an outsider. The Minangkabau people, a matrilocal society that is indigenous to the highlands of West Sumatra in Indonesia, believe that home is the place of women and they give men little power in issues relating to the home or family (Joseph and Najmabadi 2003). Most societies that use patrilocal and patrilineal systems are patriarchal, but very few societies that use matrilocal and matrilineal systems are matriarchal, as family life is often considered an important part of the culture for women, regardless of their power relative to men.

Stages of Family Life

As we’ve established, the concept of family has changed greatly in recent decades. Historically, it was often thought that most (certainly many) families evolved through a series of predictable stages. Developmental or “stage” theories used to play a prominent role in family sociology (Strong and DeVault 1992). Today, however, these models have been criticized for their linear and conventional assumptions as well as for their failure to capture the diversity of family forms. While reviewing some of these once-popular theories, it is important to identify their strengths and weaknesses.

The set of predictable steps and patterns families experience over time is referred to as the family life cycle. One of the first designs of the family life cycle was developed by Paul Glick in 1955. In Glick’s original design, he asserted that most people will grow up, establish families, rear and launch their children, experience an “empty nest” period, and come to the end of their lives. This cycle will then continue with each subsequent generation (Glick 1989). Glick’s colleague, Evelyn Duvall, elaborated on the family life cycle by developing these classic stages of family (Strong and DeVault 1992):

Stage | Family Type        | Children
1     | Marriage Family    | Childless
2     | Procreation Family | Children ages 0 to 2.5
3     | Preschooler Family | Children ages 2.5 to 6
4     | School-age Family  | Children ages 6 to 13
5     | Teenage Family     | Children ages 13 to 20
6     | Launching Family   | Children begin to leave home
7     | Empty Nest Family  | “Empty nest”; adult children have left home

Table 14.1 Stage Theory. This table shows one example of how a “stage” theory might categorize the phases a family goes through.

The family life cycle was used to explain the different processes that occur in families over time. Sociologists view each stage as having its own structure with different challenges, achievements, and accomplishments that transition the family from one stage to the next. For example, the problems and challenges that a family experiences in Stage 1 as a married couple with no children are likely much different than those experienced in Stage 5 as a married couple with teenagers. The success of a family can be measured by how well they adapt to these challenges and transition into each stage. While sociologists use the family life cycle to study the dynamics of family over time, consumer and marketing researchers have used it to determine what goods and services families need as they progress through each stage (Murphy and Staples 1979).
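Duvall’s stages amount to a simple lookup keyed mostly on the age of the oldest child. The short Python sketch below is purely illustrative and not part of Duvall’s or Glick’s work; the function name, arguments, and boundary handling are assumptions added for demonstration.

```python
# Illustrative sketch of the stage boundaries in Table 14.1. The function
# name, arguments, and boundary handling are demonstration assumptions;
# the theory itself defines no lookup rule.

def duvall_stage(oldest_child_age=None, launching=False, nest_empty=False):
    """Return the classic family life cycle stage (1-7) for a family."""
    if nest_empty:
        return 7  # Empty Nest Family: adult children have left home
    if launching:
        return 6  # Launching Family: children begin to leave home
    if oldest_child_age is None:
        return 1  # Marriage Family: childless couple
    if oldest_child_age < 2.5:
        return 2  # Procreation Family: children ages 0 to 2.5
    if oldest_child_age < 6:
        return 3  # Preschooler Family: children ages 2.5 to 6
    if oldest_child_age < 13:
        return 4  # School-age Family: children ages 6 to 13
    return 5      # Teenage Family: children ages 13 to 20

print(duvall_stage())                     # 1: childless couple
print(duvall_stage(oldest_child_age=4))   # 3: preschooler family
print(duvall_stage(oldest_child_age=16))  # 5: teenage family
```

Encoding the table this way also makes the next paragraph’s criticism easy to see: a single-parent, blended, or childless-by-choice family either falls through the branches or is forced into a stage that does not describe it.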
As early “stage” theories have been criticized for generalizing family life and not accounting for differences in gender, ethnicity, culture, and lifestyle, less rigid models of the family life cycle have been developed. One example is the family life course, which recognizes the events that occur in the lives of families but views them as parts of a fluid course rather than as consecutive stages (Strong and DeVault 1992). This type of model accounts for changes in family development, such as the fact that in today’s society, childbearing does not always occur with marriage. It also sheds light on other shifts in the way family life is practiced. Society’s modern understanding of family rejects rigid “stage” theories and is more accepting of new, fluid models.

Sociology in the Real World
The Evolution of Television Families

Whether you grew up watching the Cleavers, the Waltons, the Huxtables, or the Simpsons, most of the iconic families you saw in television sitcoms included a father, a mother, and children cavorting under the same roof while comedy ensued. The 1960s was the height of the suburban American nuclear family on television with shows such as The Donna Reed Show and Father Knows Best. While some shows of this era portrayed single parents (My Three Sons and Bonanza, for instance), the single status almost always resulted from being widowed, not divorced or unwed.

Although family dynamics in real American homes were changing, the expectations for families portrayed on television were not. America’s first reality show, An American Family (which aired on PBS in 1973), chronicled Bill and Pat Loud and their children as a “typical” American family. During the series, the oldest son, Lance, announced to the family that he was gay, and at the series’ conclusion, Bill and Pat decided to divorce. Although the Louds’ union was among the 30 percent of marriages that ended in divorce in 1973, the family was featured on the cover of the March 12 issue of Newsweek with the title “The Broken Family” (Ruoff 2002).

Less traditional family structures in sitcoms gained popularity in the 1980s with shows such as Diff’rent Strokes (a widowed man with two adopted African-American sons) and One Day at a Time (a divorced woman with two teenage daughters). Still, traditional families such as those in Family Ties and The Cosby Show dominated the ratings. The late 1980s and the 1990s saw the introduction of the dysfunctional family. Shows such as Roseanne, Married with Children, and The Simpsons portrayed traditional nuclear families, but in a much less flattering light than those from the 1960s did (Museum of Broadcast Communications 2011).

Over the past 10 years, the nontraditional family has become somewhat of a tradition in television. While most situation comedies focus on single men and women without children, those that do portray families often stray from the classic structure: they include unmarried and divorced parents, adopted children, gay couples, and multigenerational households. Even those that do feature traditional family structures may show less-traditional characters in supporting roles, such as the brothers in the highly rated shows Everybody Loves Raymond and Two and a Half Men. Even wildly popular children’s programs such as Disney’s Hannah Montana and The Suite Life of Zack & Cody feature single parents. In 2009, ABC premiered an intensely nontraditional family with the broadcast of Modern Family.
The show follows an extended family that includes a divorced and remarried father with one stepchild, and his biological adult children—one of whom is in a traditional two-parent household and the other of whom is a gay man in a committed relationship raising an adopted daughter. While this dynamic may be more complicated than the typical “modern” family, its elements may resonate with many of today’s viewers. “The families on the shows aren't as idealistic, but they remain relatable,” states television critic Maureen Ryan. “The most successful shows, comedies especially, have families that you can look at and see parts of your family in them” (Respers France 2010).

14.2 Variations in Family Life

The combination of husband, wife, and children that 99.8 percent of Americans believe constitutes a family is not representative of 99.8 percent of U.S. families. According to 2010 census data, only 66 percent of children under age 17 live in a household with two married parents. This is a decrease from 77 percent in 1980 (U.S. Census 2011). This two-parent family structure is known as a nuclear family, referring to married parents and children as the nucleus, or core, of the group. Recent years have seen a rise in variations of the nuclear family with the parents not being married. Three percent of children live with two cohabiting parents (U.S. Census 2011).

Single Parents

Single-parent households are on the rise. In 2010, 27 percent of children lived with a single parent only, up from 25 percent in 2008. Of that 27 percent, 23 percent live with their mother and three percent live with their father. Ten percent of children living with their single mother and 20 percent of children living with their single father also live with the cohabitating partner of their parent (i.e., boyfriends or girlfriends). Stepparents are an additional family element in two-parent homes. Among children living in two-parent households, 9 percent live with a biological or adoptive parent and a stepparent. The majority (70 percent) of those children live with their biological mother and a stepfather. Family structure has been shown to vary with the age of the child. Older children (ages 15–17) are less likely to live with two parents than adolescent children (ages 6–14) or young children (ages 0–5). Older children who do live with two parents are also more likely to live with stepparents (U.S. Census 2011).

In some family structures a parent is not present at all. In 2010, three million children (4 percent of all children) lived with a guardian who was neither their biological nor adoptive parent. Of these children, 54 percent live with grandparents, 21 percent live with other relatives, and 24 percent live with non-relatives. This family structure is referred to as the extended family, and may include aunts, uncles, and cousins living in the same home. Foster parents account for about a quarter of non-relatives. The practice of grandparents acting as parents, whether alone or in combination with the child’s parent, is becoming widespread among today’s families (De Toledo and Brown 1995). Nine percent of all children live with a grandparent, and in nearly half of those cases, the grandparent maintains primary responsibility for the child (U.S. Census 2011). A grandparent functioning as the primary care provider often results from parental drug abuse, incarceration, or abandonment. Events like these can render the parent incapable of caring for his or her child.
Changes in the traditional family structure raise questions about how such societal shifts affect children. U.S. Census statistics have long shown that children living in homes with both parents grow up with more financial and educational advantages than children who are raised in single-parent homes (U.S. Census 1997). Parental marital status seems to be a significant indicator of advancement in a child’s life. Children living with a divorced parent typically have more advantages than children living with a parent who never married; this is particularly true of children who live with divorced fathers. This correlates with the statistic that never-married parents are typically younger, have fewer years of schooling, and have lower incomes (U.S. Census 1997). Six in ten children living with only their mother live near or below the poverty level. Of those being raised by never-married mothers, 69 percent live in or near poverty compared to 45 percent for divorced mothers (U.S. Census 1997). Though other factors such as age and education play a role in these differences, it can be inferred that marriage between parents is generally beneficial for children.

Cohabitation

Living together before or in lieu of marriage is a growing option for many couples. Cohabitation, when a man and woman live together in a sexual relationship without being married, was practiced by an estimated 7.5 million people (11.5 percent of the population) in 2010, which shows an increase of 13 percent since 2009 (U.S. Census 2010). This surge in cohabitation is likely due to the decrease in social stigma pertaining to the practice. In a 2010 National Center for Health Statistics survey, only 38 percent of the 13,000-person sample thought that cohabitation negatively impacted society (Jayson 2010). Of those who cohabitate, the majority are non-Hispanic with no high school diploma or GED and grew up in a single-parent household (U.S. Census 2010).

Cohabitating couples may choose to live together in an effort to spend more time together or to save money on living costs. Many couples view cohabitation as a “trial run” for marriage. Today, approximately 28 percent of men and women have cohabitated before their first marriage. By comparison, 18 percent of men and 23 percent of women married without ever cohabitating (U.S. Census Bureau 2010). The vast majority of cohabitating relationships eventually result in marriage; only 15 percent of men and women cohabitate only and do not marry. About one half of cohabitators transition into marriage within three years (U.S. Census 2010).

While couples may use this time to “work out the kinks” of a relationship before they wed, the most recent research has found that cohabitation has little effect on the success of a marriage. In fact, those who do not cohabitate before marriage have slightly better rates of remaining married for more than 10 years (Jayson 2010). Cohabitation may contribute to the increase in the number of men and women who delay marriage. The median age for marriage is the highest it has ever been since the U.S. Census kept records—age 26 for women and age 28 for men (U.S. Census 2010).

Same-Sex Couples

The number of same-sex couples has grown significantly in the past decade. The U.S. Census Bureau reported 594,000 same-sex couple households in the United States, a 50 percent increase from 2000. This increase is a result of more coupling, the growing social acceptance of homosexuality, and a subsequent increase in willingness to report it.
Nationally, same-sex couple households make up 1 percent of the population, ranging from as little as 0.29 percent in Wyoming to 4.01 percent in the District of Columbia (U.S. Census 2011). Legal recognition of same-sex couples as spouses is different in each state, as only six states and the District of Columbia have legalized same-sex marriage. The 2010 U.S. Census, however, allowed same-sex couples to report as spouses regardless of whether their state legally recognizes their relationship. Nationally, 25 percent of all same-sex households reported that they were spouses. In states where same-sex marriages are performed, nearly half (42.4 percent) of same-sex couple households were reported as spouses.

In terms of demographics, same-sex couples are not very different from opposite-sex couples. Same-sex couple households have an average age of 52 and an average household income of $91,558; opposite-sex couple households have an average age of 59 and an average household income of $95,075. Additionally, 31 percent of same-sex couples are raising children, not far from the 43 percent of opposite-sex couples (U.S. Census 2009). Of the children in same-sex couple households, 73 percent are biological children (of only one of the parents), 21 percent are adopted only, and 6 percent are a combination of biological and adopted (U.S. Census 2009).

While there is some concern from socially conservative groups regarding the well-being of children who grow up in same-sex households, research reports that same-sex parents are as effective as opposite-sex parents. In an analysis of 81 parenting studies, sociologists found no quantifiable data to support the notion that opposite-sex parenting is any better than same-sex parenting. Children of lesbian couples, however, were shown to have slightly lower rates of behavioral problems and higher rates of self-esteem (Biblarz and Stacey 2010).

Staying Single

Gay or straight, a new option for many Americans is simply to stay single. In 2010, there were 99.6 million unmarried individuals over age 18 in the United States, accounting for 44 percent of the total adult population (U.S. Census 2011). In 2010, never-married individuals in the 25 to 29 age bracket accounted for 62 percent of men and 48 percent of women, up from 19 percent and 11 percent, respectively, in 1970 (U.S. Census 2011). Single, or never-married, individuals are found in higher concentrations in large cities or metropolitan areas, with New York City being one of the highest.

Although both single men and single women report social pressure to get married, women are subject to greater scrutiny. Single women are often portrayed as unhappy “spinsters” or “old maids” who cannot find a man to marry them. Single men, on the other hand, are typically portrayed as lifetime bachelors who cannot settle down or simply “have not found the right girl.” Single women report feeling insecure and displaced in their families when their single status is disparaged (Roberts 2007). However, single women older than 35 report feeling secure and happy with their unmarried status, as many women in this category have found success in their education and careers. In general, women feel more independent and more prepared to live a large portion of their adult lives without a spouse or domestic partner than they did in the 1960s (Roberts 2007).

The decision to marry or not to marry can be based on a variety of factors, including religion and cultural expectations.
Asian individuals are the most likely to marry while African Americans are the least likely to marry (Venugopal 2011). Additionally, individuals who place no value on religion are more likely to be unmarried than those who place a high value on religion. For black women, however, the importance of religion made no difference in marital status (Bakalar 2010). In general, being single is not a rejection of marriage; rather, it is a lifestyle that does not necessarily include marriage. By age 40, according to census figures, 20 percent of women and 14 percent of men will have never married (U.S. Census Bureau 2011).

Sociological Research
Deceptive Divorce Rates

It is often cited that half of all marriages end in divorce. This statistic has made many people cynical when it comes to marriage, but it is misleading. Let’s take a closer look at the data.

Using National Center for Health Statistics data from 2003 that show a marriage rate of 7.5 (per 1,000 people) and a divorce rate of 3.8, it would appear that exactly one half of all marriages failed (Hurley 2005). This reasoning is deceptive, however, because instead of tracing actual marriages to see their longevity (or lack thereof), it compares two unrelated statistics: the number of marriages in a given year does not have a direct correlation to the divorces occurring that same year. Research published in the New York Times took a different approach—determining how many people had ever been married, and of those, how many later divorced. The result? According to this analysis, American divorce rates have only gone as high as 41 percent (Hurley 2005). Another way to calculate divorce rates would be through a cohort study. For instance, we could determine the percentage of marriages that are intact after, say, five or seven years, compared to marriages that have ended in divorce after five or seven years. Sociological researchers must remain aware of research methods and how statistical results are applied. As illustrated, different methodologies and different interpretations can lead to contradictory, and even misleading, results; the short sketch that follows makes the two calculations concrete.
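To see the difference in numbers, here is a minimal Python sketch. The two 2003 rates are the NCHS figures quoted above; the cohort counts are hypothetical values invented only to show the shape of the alternative calculation, not real data.

```python
# Naive ratio criticized above: one year's divorce rate divided by the
# same year's marriage rate (NCHS 2003 figures quoted in the text).
marriage_rate = 7.5  # marriages per 1,000 people, 2003
divorce_rate = 3.8   # divorces per 1,000 people, 2003
print(round(divorce_rate / marriage_rate, 2))  # 0.51: "half of marriages fail"

# Cohort approach: follow the marriages formed in one starting year and
# count how many of those same marriages later ended in divorce.
# These counts are hypothetical, chosen only to illustrate the method.
cohort_marriages = 1000        # marriages formed in the starting year
divorced_after_7_years = 180   # of those same couples, since divorced
print(divorced_after_7_years / cohort_marriages)  # 0.18 for this cohort
```

The naive ratio compares two different groups of couples, while the cohort figure follows a single group over time; that difference is why the two numbers can diverge so sharply.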
Theoretical Perspectives on Marriage and Family

Sociologists study families on both the macro and micro level to determine how families function. Sociologists may use a variety of theoretical perspectives to explain events that occur within and outside of the family.

Functionalism

When considering the role of family in society, functionalists uphold the notion that families are an important social institution and that they play a key role in stabilizing society. They also note that family members take on status roles in a marriage or family. The family—and its members—perform certain functions that facilitate the prosperity and development of society. Sociologist George Murdock conducted a survey of 250 societies and determined that there are four universal residual functions of the family: sexual, reproductive, educational, and economic (Lee 1985). According to Murdock, the family (which for him includes the state of marriage) regulates sexual relations between individuals. He does not deny the existence or impact of premarital or extramarital sex, but states that the family offers a socially legitimate sexual outlet for adults (Lee 1985). This outlet gives way to reproduction, which is a necessary part of ensuring the survival of society. Once children are produced, the family plays a vital role in training them for adult life. As the primary agent of socialization and enculturation, the family teaches young children the ways of thinking and behaving that follow social and cultural norms, values, beliefs, and attitudes. Parents teach their children manners and civility. A well-mannered child reflects a well-mannered parent.

Parents also teach children gender roles. Gender roles are an important part of the economic function of a family. In each family, there is a division of labor that consists of instrumental and expressive roles. Men tend to assume the instrumental roles in the family, which typically involve work outside of the family that provides financial support and establishes family status. Women tend to assume the expressive roles, which typically involve work inside of the family that provides emotional support and physical care for children (Crano and Aronoff 1978). According to functionalists, the differentiation of the roles on the basis of sex ensures that families are well balanced and coordinated. When family members move outside of these roles, the family is thrown out of balance and must recalibrate in order to function properly. For example, if the father assumes an expressive role such as providing daytime care for the children, the mother must take on an instrumental role such as gaining paid employment outside of the home in order for the family to maintain balance and function.

Conflict Theory

Conflict theorists are quick to point out that American families have been defined as private entities, the consequence of which has been to leave family matters to only those within the family. Many Americans are resistant to government intervention in the family: parents do not want the government to tell them how to raise their children or to become involved in domestic issues. Conflict theory highlights the role of power in family life and contends that the family is often not a haven but rather an arena where power struggles can occur. This exercise of power often entails the performance of family status roles. Conflict theorists may study conflicts as simple as the enforcement of rules from parent to child, or they may examine more serious issues such as domestic violence (spousal and child), sexual assault, marital rape, and incest.

The first study of marital power was performed in 1960. Researchers found that the person with the most access to valued resources held the most power. As money is one of the most valuable resources, men who worked in paid labor outside of the home held more power than women who worked inside the home (Blood and Wolfe 1960). Conflict theorists find disputes over the division of household labor to be a common source of marital discord. Household labor offers no wages and, therefore, no power. Studies indicate that when men do more housework, women experience more satisfaction in their marriages, reducing the incidence of conflict (Coltrane 2000). In general, conflict theorists tend to study areas of marriage and life that involve inequalities or discrepancies in power and authority, as they are reflective of the larger social structure.

Symbolic Interactionism

Interactionists view the world in terms of symbols and the meanings assigned to them (LaRossa and Reitzes 1993). The family itself is a symbol. To some, it is a father, mother, and children; to others, it is any union that involves respect and compassion. Interactionists stress that family is not an objective, concrete reality.
Like other social phenomena, it is a social construct that is subject to the ebb and flow of social norms and ever-changing meanings. Consider the meaning of other elements of family: “parent” was a symbol of a biological and emotional connection to a child; with more parent-child relationships developing through adoption, remarriage, or change in guardianship, the word “parent” today is less likely to be associated with a biological connection than with whoever is socially recognized as having the responsibility for a child’s upbringing. Similarly, the terms “mother” and “father” are no longer rigidly associated with the meanings of caregiver and breadwinner. These meanings are more free-flowing through changing family roles.

Interactionists also recognize how the family status roles of each member are socially constructed, playing an important part in how people perceive and interpret social behavior. Interactionists view the family as a group of role players or “actors” that come together to act out their parts in an effort to construct a family. These roles are up for interpretation. In the late 19th and early 20th century, a “good father,” for example, was one who worked hard to provide financial security for his children. Today, a “good father” is one who takes the time outside of work to promote his children’s emotional well-being, social skills, and intellectual growth—in some ways, a much more daunting task.

14.3 Challenges Families Face

As the structure of family changes over time, so do the challenges families face. Events like divorce and remarriage present new difficulties for families and individuals. Other long-standing domestic issues such as abuse continue to strain the health and stability of today’s families.

Divorce and Remarriage

Divorce, while fairly common and accepted in modern American society, was once a word that would only be whispered and was accompanied by gestures of disapproval. In 1960, divorce was generally uncommon, affecting only 9.1 out of every 1,000 married persons. That number more than doubled (to 20.3) by 1975 and peaked in 1980 at 22.6 (Popenoe 2007). Over the last quarter century, divorce rates have dropped steadily and are now similar to those in 1970. The dramatic increase in divorce rates after the 1960s has been associated with the liberalization of divorce laws and the shift in societal makeup due to women increasingly entering the workforce (Michael 1978). The decrease in divorce rates can be attributed to two probable factors: an increase in the age at which people get married, and an increased level of education among those who marry—both of which have been found to promote greater marital stability.

Divorce does not occur equally among all Americans; some segments of the American population are more likely to divorce than others. According to the American Community Survey (ACS), men and women in the Northeast have the lowest rates of divorce at 7.2 and 7.5 per 1,000 people. The South has the highest rate of divorce at 10.2 for men and 11.1 for women. Divorce rates are likely higher in the South because marriage rates are higher and marriage occurs at younger-than-average ages in this region. In the Northeast, the marriage rate is lower and first marriages tend to be delayed; therefore, the divorce rate is lower (U.S. Census Bureau 2011). The rate of divorce also varies by race. In a 2009 ACS study, American Indian and Alaskan Natives reported the highest percentages of currently divorced individuals (12.6 percent) followed by blacks (11.5 percent), whites (10.8 percent), Pacific Islanders (8 percent), Latinos (7.8 percent) and Asians (4.9 percent) (ACS 2011). In general, those who marry at a later age or have a college education have lower rates of divorce.

Year | Divorces and annulments | Population  | Rate per 1,000 total population
2009 | 840,000                 | 242,497,000 | 3.5
2008 | 844,000                 | 240,663,000 | 3.5
2007 | 856,000                 | 238,759,000 | 3.6
2006 | 872,000                 | 236,172,000 | 3.7
2005 | 847,000                 | 234,114,000 | 3.6
2004 | 879,000                 | 237,042,000 | 3.7
2003 | 927,000                 | 245,200,000 | 3.8
2002 | 955,000                 | 243,600,000 | 3.9
2001 | 940,000                 | 236,650,000 | 4.0
2000 | 944,000                 | 233,550,000 | 4.0

Table 14.2 Provisional number of divorces and annulments and rate: United States, 2000–2009. There has been a steady decrease in divorce over the past decade. (National Center for Health Statistics, CDC)
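The “Rate per 1,000 total population” column in Table 14.2 is a crude rate: the count of divorces divided by the total population, scaled to 1,000 people. As a quick check, the 2009 row can be recomputed in a few lines of Python using only figures that appear in the table.

```python
# Crude divorce rate = (divorces / total population) * 1,000.
# Figures are taken from the 2009 row of Table 14.2.
divorces_2009 = 840_000
population_2009 = 242_497_000
rate_per_1000 = divorces_2009 / population_2009 * 1000
print(round(rate_per_1000, 1))  # 3.5, matching the table
```

Because the denominator is the whole population rather than the number of married people, a crude rate like this says little about any individual marriage's chance of ending, the same caution raised in the “Deceptive Divorce Rates” box above.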
So what causes divorce? While more young people are choosing to postpone or opt out of marriage, those who enter into the union do so with the expectation that it will last. Many marital problems can be related to stress, especially financial stress. According to researchers participating in the University of Virginia’s National Marriage Project, couples who enter marriage without a strong asset base (like a home, savings, and a retirement plan) are 70 percent more likely to be divorced after three years than are couples with at least $10,000 in assets. This is connected to factors such as age and education level that correlate with low incomes.

The addition of children to a marriage creates added financial and emotional stress. Research has established that marriages enter their most stressful phase upon the birth of the first child (Popenoe and Whitehead 2007). This is particularly true for couples who have multiples (twins, triplets, and so on). Married couples with twins or triplets are 17 percent more likely to divorce than those with children from single births (McKay 2010). Another contributor to the likelihood of divorce is a general decline in marital satisfaction over time. As people get older, they may find that their values and life goals no longer match up with those of their spouse (Popenoe and Whitehead 2004).

Divorce is thought to have a cyclical pattern. Children of divorced parents are 40 percent more likely to divorce than children of married parents. And when we consider children whose parents divorced and then remarried, the likelihood of their own divorce rises to 91 percent (Wolfinger 2005). This might result from being socialized to a mindset that a broken marriage can be replaced rather than repaired (Wolfinger 2005). That sentiment is also reflected in the finding that when both partners of a married couple have been previously divorced, their marriage is 90 percent more likely to end in divorce (Wolfinger 2005).

People in a second marriage account for approximately 19.3 percent of all married persons, and those who have been married three or more times account for 5.2 percent (U.S. Census Bureau 2011). The vast majority (91 percent) of remarriages occur after divorce; only 9 percent occur after death of a spouse (Kreider 2006). Most men and women remarry within five years of a divorce, with the median length for men (three years) being lower than for women (4.4 years). This length of time has been fairly consistent since the 1950s. The majority of those who remarry are between the ages of 25 and 44 (Kreider 2006).
The general pattern of remarriage also shows that whites are more likely to remarry than black Americans. Marriage the second time around (or third or fourth) can be a very different process than the first. Remarriage lacks many of the classic courtship rituals of a first marriage. In a second marriage, individuals are less likely to deal with issues like parental approval, premarital sex, or desired family size (Elliot 2010). In a survey of households formed by remarriage, a mere 8 percent included only biological children of the remarried couple. Of the 49 percent of homes that include children, 24 percent included only the woman’s biological children, 3 percent included only the man’s biological children, and 9 percent included a combination of both spouses’ children (U.S. Census Bureau 2006).

Children of Divorce and Remarriage

Divorce and remarriage can be stressful for partners and children alike. Divorce is often justified by the notion that children are better off in a divorced family than in a family with parents who do not get along. However, long-term studies have determined that to be generally untrue. Research suggests that while marital conflict does not provide an ideal childrearing environment, going through a divorce can be damaging. Children are often confused and frightened by the threat to their family security. They may feel responsible for the divorce and attempt to bring their parents back together, often by sacrificing their own well-being (Amato 2000). Only in high-conflict homes do children benefit from divorce and the subsequent decrease in conflict. The majority of divorces come out of lower-conflict homes, and children from those homes are more negatively impacted by the stress of the divorce than the stress of unhappiness in the marriage (Amato 2000). Studies also suggest that stress levels for children are not improved when a child acquires a stepfamily through marriage. Although there may be increased economic stability, stepfamilies typically have a high level of interpersonal conflict (McLanahan and Sandefur 1994).

Children’s ability to deal with a divorce may depend on their age. Research has found that divorce may be most difficult for school-aged children, as they are old enough to understand the separation but not old enough to understand the reasoning behind it. Older teenagers are more likely to recognize the conflict that led to the divorce but may still feel fear, loneliness, guilt, and pressure to choose sides. Infants and preschool-age children may suffer the heaviest impact from the loss of routine that the marriage offered (Temke 2006).

Proximity to parents also makes a difference in a child’s well-being after divorce. Boys who live or have joint arrangements with their fathers show less aggression than those who are raised by their mothers only. Similarly, girls who live or have joint arrangements with their mothers tend to be more responsible and mature than those who are raised by their fathers only. Nearly three-fourths of the children of parents who are divorced live in a household headed by their mother, leaving many boys without a father figure residing in the home (U.S. Census Bureau 2011b). Still, researchers suggest that a strong parent-child relationship can greatly improve a child’s adjustment to divorce (Temke 2006).

There is empirical evidence that divorce has not discouraged children in terms of how they view marriage and family.
In a survey conducted by researchers from the University of Michigan, about three-quarters of high school seniors said it was “extremely important” to have a strong marriage and family life. And over half believed it was “very likely” that they would be in a lifelong marriage (Popenoe and Whitehead 2007). These numbers have continued to climb over the last 25 years.

Violence and Abuse

Violence and abuse are among the most disconcerting of the challenges that today’s families face. Abuse can occur between spouses, between parent and child, as well as between other family members. The frequency of violence among families is difficult to determine because many cases of spousal abuse and child abuse go unreported. In any case, studies have shown that abuse (reported or not) has a major impact on families and society as a whole.

Domestic Violence

Domestic violence is a significant social problem in the United States. It is often characterized as violence between household or family members, specifically spouses. To include unmarried, cohabitating, and same-sex couples, family sociologists have created the term intimate partner violence (IPV). Women are the primary victims of intimate partner violence. It is estimated that 1 in 4 women has experienced some form of IPV in her lifetime (compared to 1 in 7 men) (Catalano 2007).

IPV may include physical violence, such as punching, kicking, or other methods of inflicting physical pain; sexual violence, such as rape or other forced sexual acts; threats and intimidation that imply either physical or sexual abuse; and emotional abuse, such as harming another’s sense of self-worth through words or controlling another’s behavior. IPV often starts as emotional abuse and then escalates to other forms or combinations of abuse (Centers for Disease Control 2012). In 2010, of IPV acts that involved physical actions against women, 57 percent involved physical violence only; 9 percent involved rape and physical violence; 14 percent involved physical violence and stalking; 12 percent involved rape, physical violence, and stalking; and 4 percent involved rape only (CDC 2011). This is vastly different from IPV abuse patterns for men, which show that nearly all (92 percent) physical acts of IPV take the form of physical violence and fewer than one percent involve rape alone or in combination (Catalano 2007). IPV affects women at greater rates than men because women often take the passive role in relationships and may become emotionally dependent on their partner. Perpetrators of IPV work to establish and maintain such dependence in order to hold power and control over their victims, making them feel stupid, crazy, or ugly—in some way worthless.

IPV affects different segments of the population at different rates. The rate of IPV for black women (4.6 per 1,000 persons over the age of 12) is higher than that for white women (3.1). These numbers have been fairly stable for both racial groups over the last 10 years. However, the numbers have steadily increased for Native Americans and Alaskan Natives (up to 11.1 for females) (Catalano 2007). Those who are separated report higher rates of abuse than those with other marital statuses, as conflict is typically higher in those relationships. Similarly, those who are cohabitating are more likely than those who are married to experience IPV (Stets and Straus 1990).
Other researchers have found that the rate of IPV doubles for women in low-income disadvantaged areas when compared to IPV experienced by women who reside in more affluent areas (Benson and Fox 2004). Overall, women ages 20 to 24 are at the greatest risk of nonfatal abuse (Catalano 2007). Accurate statistics on IPV are difficult to determine, as it is estimated that more than half of nonfatal IPV goes unreported. It is not until victims choose to report crimes that patterns of abuse are exposed. Most victims studied stated that abuse had occurred for at least two years prior to their first report (Carlson, Harris, and Holden 1999). Sometimes abuse is reported to police by a third party, but it still may not be confirmed by victims. A study of domestic violence incident reports found that even when confronted by police about abuse, 29 percent of victims denied that abuse occurred. Surprisingly, 19 percent of their assailants were likely to admit to abuse (Felson, Ackerman, and Gallagher 2005). According to the National Crime Victimization Survey, victims cite varied reasons why they are reluctant to report abuse, as shown in Table 14.3.

Reason Abuse Is Unreported              % Females   % Males
Considered a Private Matter                 22         39
Fear of Retaliation                         12          5
To Protect the Abuser                       14         16
Belief That Police Won’t Do Anything         8          8

Table 14.3 Reasons victims give for failing to report abuse to police authorities (Catalano 2007).

Two-thirds of nonfatal IPV occurs inside the home, and approximately 10 percent occurs at the home of the victim’s friend or neighbor. The majority of abuse takes place between the hours of 6 p.m. and 6 a.m., and nearly half (42 percent) involves alcohol or drug use (Catalano 2007). Many perpetrators of IPV blame alcohol or drugs for their abuse, though studies have shown that alcohol and drugs do not cause IPV; they may only lower inhibitions (Hanson 2011). IPV has significant long-term effects on individual victims and on society. Studies have shown that IPV damage extends beyond the direct physical or emotional wounds. Extended IPV has been linked to unemployment among victims, as many have difficulty finding or holding employment. Additionally, nearly all women who report serious domestic problems exhibit symptoms of major depression (Goodwin, Chandler, and Meisel 2003). Female victims of IPV are also more likely to abuse alcohol or drugs, suffer from eating disorders, and attempt suicide (Silverman et al. 2001). IPV is indeed something that impacts more than just intimate partners. In a survey, 34 percent of respondents said they have witnessed IPV, and 59 percent said that they know a victim personally (Roper Starch Worldwide 1995). Many people want to help IPV victims but are hesitant to intervene because they feel that it is a personal matter or they fear retaliation from the abuser, reasons similar to those of victims who do not report IPV.

Child Abuse

Children are among the most helpless victims of abuse. In 2010, there were more than 3.3 million reports of child abuse involving an estimated 5.9 million children (Child Help 2011). Three-fifths of child abuse reports are made by professionals, including teachers, law enforcement personnel, and social services staff. The rest are made by anonymous sources, other relatives, parents, friends, and neighbors.
Child abuse may come in several forms, the most common being neglect (78.3 percent), followed by physical abuse (10.8 percent), sexual abuse (7.6 percent), psychological maltreatment (7.6 percent), and medical neglect (2.4 percent) (Child Help 2011). Some children suffer from a combination of these forms of abuse. The majority (81.2 percent) of perpetrators are parents; 6.2 percent are other relatives. Infants (children less than one year old) were the most victimized population, with an incidence rate of 20.6 per 1,000 infants. This age group is particularly vulnerable to neglect because they are entirely dependent on parents for care. Some parents do not purposely neglect their children; factors such as cultural values, standard of care in a community, and poverty can lead to hazardous levels of neglect. If information or assistance from public or private services is available and a parent fails to use those services, child welfare services may intervene (U.S. Department of Health and Human Services). Infants are also often victims of physical abuse, particularly in the form of violent shaking. This type of physical abuse is referred to as shaken-baby syndrome, which describes a group of medical symptoms, such as brain swelling and retinal hemorrhage, resulting from forcefully shaking or causing impact to an infant’s head. A baby’s cry is the number one trigger for shaking. Parents may find themselves unable to soothe a crying baby and may take their frustration out on the child by shaking him or her violently. Other stress factors, such as a poor economy, unemployment, and general dissatisfaction with parental life, may contribute to this type of abuse. While there is no official central registry of shaken-baby syndrome statistics, it is estimated that each year 1,400 babies die or suffer serious injury from being shaken (Barr 2007).

Social Policy and Debate: Corporal Punishment

Physical abuse in children may come in the form of beating, kicking, throwing, choking, hitting with objects, burning, or other methods. Injury inflicted by such behavior is considered abuse even if the parent or caregiver did not intend to harm the child. Other types of physical contact that are characterized as discipline (spanking, for example) are not considered abuse as long as no injury results (Child Welfare Information Gateway 2008). This issue is rather controversial among modern-day Americans. While some parents feel that physical discipline, or corporal punishment, is an effective way to respond to bad behavior, others feel that it is a form of abuse. According to a poll conducted by ABC News, 65 percent of respondents approve of spanking and 50 percent said that they sometimes spank their child. Tendency toward physical punishment may be affected by culture and education. Those who live in the South are more likely than those who live in other regions to spank their child. Those who do not have a college education are also more likely to spank their child (Crandall 2011). Currently, 23 states officially allow spanking in the school system; however, many parents may object, and school officials must follow a set of clear guidelines when administering this type of punishment (Crandall 2011). Studies have shown that spanking is not an effective form of punishment and may lead to aggression by the victim, particularly in those who are spanked at a young age (Berlin 2009).

Child abuse occurs at all socioeconomic and education levels and crosses ethnic and cultural lines.
Child abuse is often associated with stresses felt by parents, including financial stress; parents who demonstrate resilience to these stresses are less likely to abuse (Samuels 2011). Young parents are typically less capable of coping with stresses, particularly the stress of becoming a new parent. Teenage mothers are more likely to abuse their children than their older counterparts. As a parent’s age increases, the risk of abuse decreases. Children born to mothers age 15 or younger are twice as likely to be abused or neglected by age five as are children born to mothers ages 20–21 (George and Lee 1997). Drug and alcohol use is also a known contributor to child abuse. Children raised by substance abusers have a risk of physical abuse three times greater than other children, and neglect is four times as prevalent in these families (Child Welfare Information Gateway 2011). Other risk factors include social isolation, depression, low parental education, and a history of being mistreated as a child. Approximately 30 percent of abused children will later abuse their own children (Child Welfare Information Gateway 2006). The long-term effects of child abuse impact the physical, mental, and emotional well-being of a child. Injury, poor health, and mental instability occur at a high rate in this group, with 80 percent meeting the criteria for one or more psychiatric disorders, such as depression, anxiety, or suicidal behavior, by age 21. Abused children may also suffer from cognitive and social difficulties. Behavioral consequences will affect most, but not all, child abuse victims. Children of abuse are 25 percent more likely, as adolescents, to suffer from difficulties like poor academic performance and teen pregnancy, or to engage in behaviors like drug abuse and general delinquency. They are also more likely to participate in risky sexual acts that increase their chances of contracting a sexually transmitted disease (Child Welfare Information Gateway 2006). Other risky behaviors include drug and alcohol abuse. Because these consequences can affect the health care, education, and criminal justice systems, the problems resulting from child abuse do not belong just to the child and family, but to society as a whole.
anatomy_and_physiology
Chapter Objectives

After studying this chapter, you will be able to:
Identify the components and anatomy of the lymphatic system
Discuss the role of the innate immune response against pathogens
Describe the power of the adaptive immune response to cure disease
Explain immunological deficiencies and over-reactions of the immune system
Discuss the role of the immune response in transplantation and cancer
Describe the interaction of the immune and lymphatic systems with other body systems

Introduction

In June 1981, the Centers for Disease Control and Prevention (CDC), in Atlanta, Georgia, published a report of an unusual cluster of five patients in Los Angeles, California. All five were diagnosed with a rare pneumonia caused by a fungus called Pneumocystis jirovecii (formerly known as Pneumocystis carinii). Why was this unusual? Although commonly found in the lungs of healthy individuals, this fungus is an opportunistic pathogen that causes disease in individuals with suppressed or underdeveloped immune systems. The very young, whose immune systems have yet to mature, and the elderly, whose immune systems have declined with age, are particularly susceptible. The five patients from LA, though, were between 29 and 36 years of age and should have been in the prime of their lives, immunologically speaking. What could be going on? A few days later, a cluster of eight cases was reported in New York City, also involving young patients, this time exhibiting a rare form of skin cancer known as Kaposi’s sarcoma. This cancer of the cells that line the blood and lymphatic vessels had previously been observed as a relatively innocuous disease of the elderly. The disease that doctors saw in 1981 was frighteningly more severe, with multiple, fast-growing lesions that spread to all parts of the body, including the trunk and face. Could the immune systems of these young patients have been compromised in some way? Indeed, when they were tested, they exhibited extremely low numbers of a specific type of white blood cell in their bloodstreams, indicating that they had somehow lost a major part of the immune system. Acquired immune deficiency syndrome, or AIDS, turned out to be a new disease caused by the previously unknown human immunodeficiency virus (HIV). Although active HIV infection was nearly 100 percent fatal in the early years, the development of anti-HIV drugs has transformed it into a chronic, manageable disease rather than the certain death sentence it once was. One positive outcome of the emergence of HIV disease was that the public’s attention became focused as never before on the importance of having a functional and healthy immune system.
[ { "answer": { "ans_choice": 1, "ans_text": "macrophage" }, "bloom": "1", "hl_context": "<hl> A macrophage is an irregularly shaped phagocyte that is amoeboid in nature and is the most versatile of the phagocytes in the body . <hl> Macrophages move through tissues and squeeze through capillary walls using pseudopodia . They not only participate in innate immune responses but have also evolved to cooperate with lymphocytes as part of the adaptive immune response . Macrophages exist in many tissues of the body , either freely roaming through connective tissues or fixed to reticular fibers within specific tissues such as lymph nodes . When pathogens breach the body ’ s barrier defenses , macrophages are the first line of defense ( Table 21.3 ) . They are called different names , depending on the tissue : Kupffer cells in the liver , histiocytes in connective tissue , and alveolar macrophages in the lungs . Many of the cells of the immune system have a phagocytic ability , at least at some point during their life cycles . Phagocytosis is an important and effective mechanism of destroying pathogens during innate immune responses . The phagocyte takes the organism inside itself as a phagosome , which subsequently fuses with a lysosome and its digestive enzymes , effectively killing many pathogens . On the other hand , some bacteria including Mycobacteria tuberculosis , the cause of tuberculosis , may be resistant to these enzymes and are therefore much more difficult to clear from the body . <hl> Macrophages , neutrophils , and dendritic cells are the major phagocytes of the immune system . <hl>", "hl_sentences": "A macrophage is an irregularly shaped phagocyte that is amoeboid in nature and is the most versatile of the phagocytes in the body . Macrophages , neutrophils , and dendritic cells are the major phagocytes of the immune system .", "question": { "cloze_format": "The cell that is phagocytic is the ___.", "normal_format": "Which of the following cells is phagocytic?", "question_choices": [ "plasma cell", "macrophage", "B cell", "NK cell" ], "question_id": "fs-id1435493", "question_text": "Which of the following cells is phagocytic?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "thoracic duct" }, "bloom": null, "hl_context": "The overall drainage system of the body is asymmetrical ( see Figure 21.4 ) . The right lymphatic duct receives lymph from only the upper right side of the body . <hl> The lymph from the rest of the body enters the bloodstream through the thoracic duct via all the remaining lymphatic trunks . <hl> In general , lymphatic vessels of the subcutaneous tissues of the skin , that is , the superficial lymphatics , follow the same routes as veins , whereas the deep lymphatic vessels of the viscera generally follow the paths of arteries .", "hl_sentences": "The lymph from the rest of the body enters the bloodstream through the thoracic duct via all the remaining lymphatic trunks .", "question": { "cloze_format": "The structure that allows lymph from the lower right limb to enter the bloodstream is the ___.", "normal_format": "Which structure allows lymph from the lower right limb to enter the bloodstream?", "question_choices": [ "thoracic duct", "right lymphatic duct", "right lymphatic trunk", "left lymphatic trunk" ], "question_id": "fs-id1932906", "question_text": "Which structure allows lymph from the lower right limb to enter the bloodstream?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "macrophages" }, "bloom": "1", "hl_context": "<hl> A macrophage is an irregularly shaped phagocyte that is amoeboid in nature and is the most versatile of the phagocytes in the body . <hl> Macrophages move through tissues and squeeze through capillary walls using pseudopodia . <hl> They not only participate in innate immune responses but have also evolved to cooperate with lymphocytes as part of the adaptive immune response . <hl> Macrophages exist in many tissues of the body , either freely roaming through connective tissues or fixed to reticular fibers within specific tissues such as lymph nodes . When pathogens breach the body ’ s barrier defenses , macrophages are the first line of defense ( Table 21.3 ) . They are called different names , depending on the tissue : Kupffer cells in the liver , histiocytes in connective tissue , and alveolar macrophages in the lungs .", "hl_sentences": "A macrophage is an irregularly shaped phagocyte that is amoeboid in nature and is the most versatile of the phagocytes in the body . They not only participate in innate immune responses but have also evolved to cooperate with lymphocytes as part of the adaptive immune response .", "question": { "cloze_format": "The cells that are important in the innate immune response are ___.", "normal_format": "Which of the following cells is important in the innate immune response?", "question_choices": [ "B cells", "T cells", "macrophages", "plasma cells" ], "question_id": "fs-id1641348", "question_text": "Which of the following cells is important in the innate immune response?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "natural killer cell" }, "bloom": "1", "hl_context": "<hl> A fourth important lymphocyte is the natural killer cell , a participant in the innate immune response . <hl> A natural killer cell ( NK ) is a circulating blood cell that contains cytotoxic ( cell-killing ) granules in its extensive cytoplasm . It shares this mechanism with the cytotoxic T cells of the adaptive immune response . <hl> NK cells are among the body ’ s first lines of defense against viruses and certain types of cancer . <hl>", "hl_sentences": "A fourth important lymphocyte is the natural killer cell , a participant in the innate immune response . NK cells are among the body ’ s first lines of defense against viruses and certain types of cancer .", "question": { "cloze_format": "A ___ is a cell that would be most active in early, antiviral immune responses the first time one is exposed to pathogen.", "normal_format": "Which of the following cells would be most active in early, antiviral immune responses the first time one is exposed to pathogen?", "question_choices": [ "macrophage", "T cell", "neutrophil", "natural killer cell" ], "question_id": "fs-id1865619", "question_text": "Which of the following cells would be most active in early, antiviral immune responses the first time one is exposed to pathogen?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "tonsils" }, "bloom": "1", "hl_context": "<hl> Tonsils are lymphoid nodules located along the inner surface of the pharynx and are important in developing immunity to oral pathogens ( Figure 21.10 ) . <hl> <hl> The tonsil located at the back of the throat , the pharyngeal tonsil , is sometimes referred to as the adenoid when swollen . <hl> Such swelling is an indication of an active immune response to infection . 
Histologically , tonsils do not contain a complete capsule , and the epithelial layer invaginates deeply into the interior of the tonsil to form tonsillar crypts . These structures , which accumulate all sorts of materials taken into the body through eating and breathing , actually “ encourage ” pathogens to penetrate deep into the tonsillar tissues where they are acted upon by numerous lymphoid follicles and eliminated . This seems to be the major function of tonsils — to help children ’ s bodies recognize , destroy , and develop immunity to common environmental pathogens so that they will be protected in their later lives . Tonsils are often removed in those children who have recurring throat infections , especially those involving the palatine tonsils on either side of the throat , whose swelling may interfere with their breathing and / or swallowing .", "hl_sentences": "Tonsils are lymphoid nodules located along the inner surface of the pharynx and are important in developing immunity to oral pathogens ( Figure 21.10 ) . The tonsil located at the back of the throat , the pharyngeal tonsil , is sometimes referred to as the adenoid when swollen .", "question": { "cloze_format": "___ is/are a lymphoid nodule that is most likely to see food antigens first.", "normal_format": "Which of the lymphoid nodules is most likely to see food antigens first?", "question_choices": [ "tonsils", "Peyer’s patches", "bronchus-associated lymphoid tissue", "mucosa-associated lymphoid tissue" ], "question_id": "fs-id1909275", "question_text": "Which of the lymphoid nodules is most likely to see food antigens first?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "cold" }, "bloom": "1", "hl_context": "The hallmark of the innate immune response is inflammation . <hl> Inflammation is something everyone has experienced . <hl> <hl> Stub a toe , cut a finger , or do any activity that causes tissue damage and inflammation will result , with its four characteristics : heat , redness , pain , and swelling ( “ loss of function ” is sometimes mentioned as a fifth characteristic ) . <hl> It is important to note that inflammation does not have to be initiated by an infection , but can also be caused by tissue injuries . The release of damaged cellular contents into the site of injury is enough to stimulate the response , even in the absence of breaks in physical barriers that would allow pathogens to enter ( by hitting your thumb with a hammer , for example ) . The inflammatory reaction brings in phagocytic cells to the damaged area to clear cellular debris and to set the stage for wound repair ( Figure 21.14 ) .", "hl_sentences": "Inflammation is something everyone has experienced . Stub a toe , cut a finger , or do any activity that causes tissue damage and inflammation will result , with its four characteristics : heat , redness , pain , and swelling ( “ loss of function ” is sometimes mentioned as a fifth characteristic ) .", "question": { "cloze_format": "The sign that is not characteristic of inflammation is ___.", "normal_format": "Which of the following signs is not characteristic of inflammation?", "question_choices": [ "redness", "pain", "cold", "swelling" ], "question_id": "fs-id1386160", "question_text": "Which of the following signs is not characteristic of inflammation?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "microphages" }, "bloom": null, "hl_context": "The complement system is a series of proteins constitutively found in the blood plasma . As such , these proteins are not considered part of the early induced immune response , even though they share features with some of the antibacterial proteins of this class . <hl> Made in the liver , they have a variety of functions in the innate immune response , using what is known as the “ alternate pathway ” of complement activation . <hl> Additionally , complement functions in the adaptive immune response as well , in what is called the classical pathway . The complement system consists of several proteins that enzymatically alter and fragment later proteins in a series , which is why it is termed cascade . Once activated , the series of reactions is irreversible , and releases fragments that have the following actions : <hl> Early induced proteins are those that are not constitutively present in the body , but are made as they are needed early during the innate immune response . <hl> <hl> Interferons are an example of early induced proteins . <hl> Cells infected with viruses secrete interferons that travel to adjacent cells and induce them to make antiviral proteins . Thus , even though the initial cell is sacrificed , the surrounding cells are protected . Other early induced proteins specific for bacterial cell wall components are mannose-binding protein and C-reactive protein , made in the liver , which bind specifically to polysaccharide components of the bacterial cell wall . Phagocytes such as macrophages have receptors for these proteins , and they are thus able to recognize them as they are bound to the bacteria . This brings the phagocyte and bacterium into close proximity and enhances the phagocytosis of the bacterium by the process known as opsonization . Opsonization is the tagging of a pathogen for phagocytosis by the binding of an antibody or an antimicrobial protein . <hl> A fourth important lymphocyte is the natural killer cell , a participant in the innate immune response . <hl> A natural killer cell ( NK ) is a circulating blood cell that contains cytotoxic ( cell-killing ) granules in its extensive cytoplasm . It shares this mechanism with the cytotoxic T cells of the adaptive immune response . NK cells are among the body ’ s first lines of defense against viruses and certain types of cancer .", "hl_sentences": "Made in the liver , they have a variety of functions in the innate immune response , using what is known as the “ alternate pathway ” of complement activation . Early induced proteins are those that are not constitutively present in the body , but are made as they are needed early during the innate immune response . Interferons are an example of early induced proteins . A fourth important lymphocyte is the natural killer cell , a participant in the innate immune response .", "question": { "cloze_format": "___ is not important in the antiviral innate immune response.", "normal_format": "Which of the following is not important in the antiviral innate immune response?", "question_choices": [ "interferons", "natural killer cells", "complement", "microphages" ], "question_id": "fs-id1850767", "question_text": "Which of the following is not important in the antiviral innate immune response?" 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "opsonization" }, "bloom": "1", "hl_context": "Early induced proteins are those that are not constitutively present in the body , but are made as they are needed early during the innate immune response . Interferons are an example of early induced proteins . Cells infected with viruses secrete interferons that travel to adjacent cells and induce them to make antiviral proteins . Thus , even though the initial cell is sacrificed , the surrounding cells are protected . Other early induced proteins specific for bacterial cell wall components are mannose-binding protein and C-reactive protein , made in the liver , which bind specifically to polysaccharide components of the bacterial cell wall . Phagocytes such as macrophages have receptors for these proteins , and they are thus able to recognize them as they are bound to the bacteria . This brings the phagocyte and bacterium into close proximity and enhances the phagocytosis of the bacterium by the process known as opsonization . <hl> Opsonization is the tagging of a pathogen for phagocytosis by the binding of an antibody or an antimicrobial protein . <hl>", "hl_sentences": "Opsonization is the tagging of a pathogen for phagocytosis by the binding of an antibody or an antimicrobial protein .", "question": { "cloze_format": "Enhanced phagocytosis of a cell by the binding of a specific protein is called ________.", "normal_format": "What is the enhanced phagocytosis of a cell by the binding of a specific protein called?", "question_choices": [ "endocytosis", "opsonization", "anaphylaxis", "complement activation" ], "question_id": "fs-id2596426", "question_text": "Enhanced phagocytosis of a cell by the binding of a specific protein is called ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "increased blood flow" }, "bloom": "1", "hl_context": "Vasodilation . Many inflammatory mediators such as histamine are vasodilators that increase the diameters of local capillaries . <hl> This causes increased blood flow and is responsible for the heat and redness of inflamed tissue . <hl> <hl> It allows greater access of the blood to the site of inflammation . <hl>", "hl_sentences": "This causes increased blood flow and is responsible for the heat and redness of inflamed tissue . It allows greater access of the blood to the site of inflammation .", "question": { "cloze_format": "___ leads to the redness of inflammation.", "normal_format": "Which of the following leads to the redness of inflammation?", "question_choices": [ "increased vascular permeability", "anaphylactic shock", "increased blood flow", "complement activation" ], "question_id": "fs-id2044987", "question_text": "Which of the following leads to the redness of inflammation?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "antigen processing" }, "bloom": "1", "hl_context": "Two distinct types of MHC molecules , MHC class I and MHC class II , play roles in antigen presentation . Although produced from different genes , they both have similar functions . They bring processed antigen to the surface of the cell via a transport vesicle and present the antigen to the T cell and its receptor . Antigens from different classes of pathogens , however , use different MHC classes and take different routes through the cell to get to the surface for presentation . The basic mechanism , though , is the same . 
<hl> Antigens are processed by digestion , are brought into the endomembrane system of the cell , and then are expressed on the surface of the antigen-presenting cell for antigen recognition by a T cell . <hl> Intracellular antigens are typical of viruses , which replicate inside the cell , and certain other intracellular parasites and bacteria . These antigens are processed in the cytosol by an enzyme complex known as the proteasome and are then brought into the endoplasmic reticulum by the transporter associated with antigen processing ( TAP ) system , where they interact with class I MHC molecules and are eventually transported to the cell surface by a transport vesicle . Although Figure 21.16 shows T cell receptors interacting with antigenic determinants directly , the mechanism that T cells use to recognize antigens is , in reality , much more complex . T cells do not recognize free-floating or cell-bound antigens as they appear on the surface of the pathogen . They only recognize antigen on the surface of specialized cells called antigen-presenting cells . Antigens are internalized by these cells . <hl> Antigen processing is a mechanism that enzymatically cleaves the antigen into smaller pieces . <hl> The antigen fragments are then brought to the cell ’ s surface and associated with a specialized type of antigen-presenting protein known as a major histocompatibility complex ( MHC ) molecule . The MHC is the cluster of genes that encode these antigen-presenting molecules . The association of the antigen fragments with an MHC molecule on the surface of a cell is known as antigen presentation and results in the recognition of antigen by a T cell . This association of antigen and MHC occurs inside the cell , and it is the complex of the two that is brought to the surface . The peptide-binding cleft is a small indentation at the end of the MHC molecule that is furthest away from the cell membrane ; it is here that the processed fragment of antigen sits . MHC molecules are capable of presenting a variety of antigens , depending on the amino acid sequence , in their peptide-binding clefts . It is the combination of the MHC molecule and the fragment of the original peptide or carbohydrate that is actually physically recognized by the T cell receptor ( Figure 21.17 ) .", "hl_sentences": "Antigens are processed by digestion , are brought into the endomembrane system of the cell , and then are expressed on the surface of the antigen-presenting cell for antigen recognition by a T cell . Antigen processing is a mechanism that enzymatically cleaves the antigen into smaller pieces .", "question": { "cloze_format": "The taking in of antigen and digesting it for later presentation is called ________.", "normal_format": "What is the taking in of antigen and digesting it for later presentation called?", "question_choices": [ "antigen presentation", "antigen processing", "endocytosis", "exocytosis" ], "question_id": "fs-id1950416", "question_text": "The taking in of antigen and digesting it for later presentation is called ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "to increase the numbers of specific cells" }, "bloom": "1", "hl_context": "Mature T cells become activated by recognizing processed foreign antigen in association with a self-MHC molecule and begin dividing rapidly by mitosis . <hl> This proliferation of T cells is called clonal expansion and is necessary to make the immune response strong enough to effectively control a pathogen . 
<hl> How does the body select only those T cells that are needed against a specific pathogen ? Again , the specificity of a T cell is based on the amino acid sequence and the three-dimensional shape of the antigen-binding site formed by the variable regions of the two chains of the T cell receptor ( Figure 21.19 ) . Clonal selection is the process of antigen binding only to those T cells that have receptors specific to that antigen . Each T cell that is activated has a specific receptor “ hard-wired ” into its DNA , and all of its progeny will have identical DNA and T cell receptors , forming clones of the original T cell .", "hl_sentences": "This proliferation of T cells is called clonal expansion and is necessary to make the immune response strong enough to effectively control a pathogen .", "question": { "cloze_format": "Clonal expansion is so important ___ .", "normal_format": "Why is clonal expansion so important?", "question_choices": [ "to select for specific cells", "to secrete cytokines", "to kill target cells", "to increase the numbers of specific cells" ], "question_id": "fs-id2443791", "question_text": "Why is clonal expansion so important?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "negative selection." }, "bloom": "1", "hl_context": "Later , the cells become double positives that express both CD4 and CD8 markers and move from the cortex to the junction between the cortex and medulla . It is here that negative selection takes place . <hl> In negative selection , self-antigens are brought into the thymus from other parts of the body by professional antigen-presenting cells . <hl> The T cells that bind to these self-antigens are selected for negatively and are killed by apoptosis . In summary , the only T cells left are those that can bind to MHC molecules of the body with foreign antigens presented on their binding clefts , preventing an attack on one ’ s own body tissues , at least under normal circumstances . Tolerance can be broken , however , by the development of an autoimmune response , to be discussed later in this chapter .", "hl_sentences": "In negative selection , self-antigens are brought into the thymus from other parts of the body by professional antigen-presenting cells .", "question": { "cloze_format": "The elimination of self-reactive thymocytes is called ________.", "normal_format": "What is the elimination of self-reactive thymocytes called?", "question_choices": [ "positive selection.", "negative selection.", "tolerance.", "clonal selection." ], "question_id": "fs-id2169042", "question_text": "The elimination of self-reactive thymocytes is called ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "cytotoxic T cells" }, "bloom": "1", "hl_context": "Interferons have activity in slowing viral replication and are used in the treatment of certain viral diseases , such as hepatitis B and C , but their ability to eliminate the virus completely is limited . The cytotoxic T cell response , though , is key , as it eventually overwhelms the virus and kills infected cells before the virus can complete its replicative cycle . <hl> Clonal expansion and the ability of cytotoxic T cells to kill more than one target cell make these cells especially effective against viruses . 
<hl> In fact , without cytotoxic T cells , it is likely that humans would all die at some point from a viral infection ( if no vaccine were available ) .", "hl_sentences": "Clonal expansion and the ability of cytotoxic T cells to kill more than one target cell make these cells especially effective against viruses .", "question": { "cloze_format": "___ is/are a type of T cell that is most effective against viruses.", "normal_format": "Which type of T cell is most effective against viruses?", "question_choices": [ "Th1", "Th2", "cytotoxic T cells", "regulatory T cells" ], "question_id": "fs-id2293814", "question_text": "Which type of T cell is most effective against viruses?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "clonal anergy" }, "bloom": null, "hl_context": "B cell differentiation and the development of tolerance are not quite as well understood as it is in T cells . Central tolerance is the destruction or inactivation of B cells that recognize self-antigens in the bone marrow , and its role is critical and well established . In the process of clonal deletion , immature B cells that bind strongly to self-antigens expressed on tissues are signaled to commit suicide by apoptosis , removing them from the population . <hl> In the process of clonal anergy , however , B cells exposed to soluble antigen in the bone marrow are not physically deleted , but become unable to function . <hl>", "hl_sentences": "In the process of clonal anergy , however , B cells exposed to soluble antigen in the bone marrow are not physically deleted , but become unable to function .", "question": { "cloze_format": "Removing functionality from a B cell without killing it is called ________.", "normal_format": "What is removing functionality from a B cell without killing it called?", "question_choices": [ "clonal selection", "clonal expansion", "clonal deletion", "clonal anergy" ], "question_id": "fs-id1491920", "question_text": "Removing functionality from a B cell without killing it is called ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "IgG" }, "bloom": "1", "hl_context": "IgG is a major antibody of late primary responses and the main antibody of secondary responses in the blood . This is because class switching occurs during primary responses . IgG is a monomeric antibody that clears pathogens from the blood and can activate complement proteins ( although not as well as IgM ) , taking advantage of its antibacterial activities . <hl> Furthermore , this class of antibody is the one that crosses the placenta to protect the developing fetus from disease exits the blood to the interstitial fluid to fight extracellular pathogens . <hl>", "hl_sentences": "Furthermore , this class of antibody is the one that crosses the placenta to protect the developing fetus from disease exits the blood to the interstitial fluid to fight extracellular pathogens .", "question": { "cloze_format": "___ is a class of antibody that crosses the placenta in pregnant women.", "normal_format": "Which class of antibody crosses the placenta in pregnant women?", "question_choices": [ "IgM", "IgA", "IgE", "IgG" ], "question_id": "fs-id2919145", "question_text": "Which class of antibody crosses the placenta in pregnant women?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "IgD" }, "bloom": null, "hl_context": "In general , antibodies have two basic functions . 
They can act as the B cell antigen receptor or they can be secreted , circulate , and bind to a pathogen , often labeling it for identification by other forms of the immune response . Of the five antibody classes , notice that only two can function as the antigen receptor for naïve B cells : IgM and IgD ( Figure 21.22 ) . Mature B cells that leave the bone marrow express both IgM and IgD , but both antibodies have the same antigen specificity . <hl> Only IgM is secreted , however , and no other nonreceptor function for IgD has been discovered . <hl>", "hl_sentences": "Only IgM is secreted , however , and no other nonreceptor function for IgD has been discovered .", "question": { "cloze_format": "The class of antibody that has no known function other than as an antigen receptor is ___.", "normal_format": "Which class of antibody has no known function other than as an antigen receptor?", "question_choices": [ "IgM", "IgA", "IgE", "IgD" ], "question_id": "fs-id2170697", "question_text": "Which class of antibody has no known function other than as an antigen receptor?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "primary response" }, "bloom": "1", "hl_context": "<hl> IgG is a major antibody of late primary responses and the main antibody of secondary responses in the blood . <hl> <hl> This is because class switching occurs during primary responses . <hl> IgG is a monomeric antibody that clears pathogens from the blood and can activate complement proteins ( although not as well as IgM ) , taking advantage of its antibacterial activities . Furthermore , this class of antibody is the one that crosses the placenta to protect the developing fetus from disease exits the blood to the interstitial fluid to fight extracellular pathogens . IgM consists of five four-chain structures ( 20 total chains with 10 identical antigen-binding sites ) and is thus the largest of the antibody molecules . IgM is usually the first antibody made during a primary response . Its 10 antigen-binding sites and large shape allow it to bind well to many bacterial surfaces . It is excellent at binding complement proteins and activating the complement cascade , consistent with its role in promoting chemotaxis , opsonization , and cell lysis . Thus , it is a very effective antibody against bacteria at early stages of a primary antibody response . As the primary response proceeds , the antibody produced in a B cell can change to IgG , IgA , or IgE by the process known as class switching . <hl> Class switching is the change of one antibody class to another . <hl> While the class of antibody changes , the specificity and the antigen-binding sites do not . Thus , the antibodies made are still specific to the pathogen that stimulated the initial IgM response .", "hl_sentences": "IgG is a major antibody of late primary responses and the main antibody of secondary responses in the blood . This is because class switching occurs during primary responses . Class switching is the change of one antibody class to another .", "question": { "cloze_format": "Class switching occurs in ___ .", "normal_format": "When does class switching occur?", "question_choices": [ "primary response", "secondary response", "tolerance", "memory response" ], "question_id": "fs-id1472037", "question_text": "When does class switching occur?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "IgA" }, "bloom": "1", "hl_context": "<hl> IgA exists in two forms , a four-chain monomer in the blood and an eight-chain structure , or dimer , in exocrine gland secretions of the mucous membranes , including mucus , saliva , and tears . <hl> Thus , dimeric IgA is the only antibody to leave the interior of the body to protect body surfaces . IgA is also of importance to newborns , because this antibody is present in mother ’ s breast milk ( colostrum ) , which serves to protect the infant from disease .", "hl_sentences": "IgA exists in two forms , a four-chain monomer in the blood and an eight-chain structure , or dimer , in exocrine gland secretions of the mucous membranes , including mucus , saliva , and tears .", "question": { "cloze_format": "The class of antibody that is found in mucus is ___ .", "normal_format": "Which class of antibody is found in mucus?", "question_choices": [ "IgM", "IgA", "IgE", "IgD" ], "question_id": "fs-id1910429", "question_text": "Which class of antibody is found in mucus?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "lysosomal" }, "bloom": "1", "hl_context": "The body fights bacterial pathogens with a wide variety of immunological mechanisms , essentially trying to find one that is effective . <hl> Bacteria such as Mycobacterium leprae , the cause of leprosy , are resistant to lysosomal enzymes and can persist in macrophage organelles or escape into the cytosol . <hl> <hl> In such situations , infected macrophages receiving cytokine signals from Th1 cells turn on special metabolic pathways . <hl> <hl> Macrophage oxidative metabolism is hostile to intracellular bacteria , often relying on the production of nitric oxide to kill the bacteria inside the macrophage . <hl>", "hl_sentences": "Bacteria such as Mycobacterium leprae , the cause of leprosy , are resistant to lysosomal enzymes and can persist in macrophage organelles or escape into the cytosol . In such situations , infected macrophages receiving cytokine signals from Th1 cells turn on special metabolic pathways . Macrophage oxidative metabolism is hostile to intracellular bacteria , often relying on the production of nitric oxide to kill the bacteria inside the macrophage .", "question": { "cloze_format": "The ___ enzymes in macrophages are important for clearing intracellular bacteria.", "normal_format": "Which enzymes in macrophages are important for clearing intracellular bacteria?", "question_choices": [ "metabolic", "mitochondrial", "nuclear", "lysosomal" ], "question_id": "fs-id1841131", "question_text": "Which enzymes in macrophages are important for clearing intracellular bacteria?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "tuberculosis" }, "bloom": "1", "hl_context": "It is important to keep in mind that although the immune system has evolved to be able to control many pathogens , pathogens themselves have evolved ways to evade the immune response . <hl> An example already mentioned is in Mycobacterium tuberculosis , which has evolved a complex cell wall that is resistant to the digestive enzymes of the macrophages that ingest them , and thus persists in the host , causing the chronic disease tuberculosis . <hl> This section briefly summarizes other ways in which pathogens can “ outwit ” immune responses . But keep in mind , although it seems as if pathogens have a will of their own , they do not . 
All of these evasive “ strategies ” arose strictly by evolution , driven by selection .", "hl_sentences": "An example already mentioned is in Mycobacterium tuberculosis , which has evolved a complex cell wall that is resistant to the digestive enzymes of the macrophages that ingest them , and thus persists in the host , causing the chronic disease tuberculosis .", "question": { "cloze_format": "___ is a type of chronic lung disease that is caused by a Mycobacterium.", "normal_format": "What type of chronic lung disease is caused by a Mycobacterium?", "question_choices": [ "asthma", "emphysema", "tuberculosis", "leprosy" ], "question_id": "fs-id2105019", "question_text": "What type of chronic lung disease is caused by a Mycobacterium?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "complement" }, "bloom": "1", "hl_context": "IgM consists of five four-chain structures ( 20 total chains with 10 identical antigen-binding sites ) and is thus the largest of the antibody molecules . IgM is usually the first antibody made during a primary response . Its 10 antigen-binding sites and large shape allow it to bind well to many bacterial surfaces . It is excellent at binding complement proteins and activating the complement cascade , consistent with its role in promoting chemotaxis , opsonization , and cell lysis . <hl> Thus , it is a very effective antibody against bacteria at early stages of a primary antibody response . <hl> As the primary response proceeds , the antibody produced in a B cell can change to IgG , IgA , or IgE by the process known as class switching . Class switching is the change of one antibody class to another . While the class of antibody changes , the specificity and the antigen-binding sites do not . Thus , the antibodies made are still specific to the pathogen that stimulated the initial IgM response . The complement system is a series of proteins constitutively found in the blood plasma . As such , these proteins are not considered part of the early induced immune response , even though they share features with some of the antibacterial proteins of this class . <hl> Made in the liver , they have a variety of functions in the innate immune response , using what is known as the “ alternate pathway ” of complement activation . <hl> <hl> Additionally , complement functions in the adaptive immune response as well , in what is called the classical pathway . <hl> <hl> The complement system consists of several proteins that enzymatically alter and fragment later proteins in a series , which is why it is termed cascade . <hl> Once activated , the series of reactions is irreversible , and releases fragments that have the following actions :", "hl_sentences": "Thus , it is a very effective antibody against bacteria at early stages of a primary antibody response . Made in the liver , they have a variety of functions in the innate immune response , using what is known as the “ alternate pathway ” of complement activation . Additionally , complement functions in the adaptive immune response as well , in what is called the classical pathway . 
The complement system consists of several proteins that enzymatically alter and fragment later proteins in a series , which is why it is termed cascade .", "question": { "cloze_format": "___ are a type of immune response that is most directly effective against bacteria.", "normal_format": "Which type of immune response is most directly effective against bacteria?", "question_choices": [ "natural killer cells", "complement", "cytotoxic T cells", "helper T cells" ], "question_id": "fs-id1753105", "question_text": "Which type of immune response is most directly effective against bacteria?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "mutation" }, "bloom": "1", "hl_context": "<hl> Another method of immune evasion is mutation . <hl> <hl> Because viruses ’ surface molecules mutate continuously , viruses like influenza change enough each year that the flu vaccine for one year may not protect against the flu common to the next . <hl> New vaccine formulations must be derived for each flu season .", "hl_sentences": "Another method of immune evasion is mutation . Because viruses ’ surface molecules mutate continuously , viruses like influenza change enough each year that the flu vaccine for one year may not protect against the flu common to the next .", "question": { "cloze_format": "The reason that you have to be immunized with a new influenza vaccine each year is ___.", "normal_format": "What is the reason that you have to be immunized with a new influenza vaccine each year?", "question_choices": [ "the vaccine is only protective for a year", "mutation", "macrophage oxidative metabolism", "memory response" ], "question_id": "fs-id2464756", "question_text": "What is the reason that you have to be immunized with a new influenza vaccine each year?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "natural killer cells" }, "bloom": "1", "hl_context": "The primary mechanisms against viruses are NK cells , interferons , and cytotoxic T cells . Antibodies are effective against viruses mostly during protection , where an immune individual can neutralize them based on a previous exposure . Antibodies have no effect on viruses or other intracellular pathogens once they enter the cell , since antibodies are not able to penetrate the plasma membrane of the cell . Many cells respond to viral infections by downregulating their expression of MHC class I molecules . This is to the advantage of the virus , because without class I expression , cytotoxic T cells have no activity . NK cells , however , can recognize virally infected class I-negative cells and destroy them . <hl> Thus , NK and cytotoxic T cells have complementary activities against virally infected cells . <hl>", "hl_sentences": "Thus , NK and cytotoxic T cells have complementary activities against virally infected cells .", "question": { "cloze_format": "The immune response that works in concert with cytotoxic T cells against virally infected cells is ___ .", "normal_format": "Which type of immune response works in concert with cytotoxic T cells against virally infected cells?", "question_choices": [ "natural killer cells", "complement", "antibodies", "memory" ], "question_id": "fs-id1927907", "question_text": "Which type of immune response works in concert with cytotoxic T cells against virally infected cells?" 
}, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "type III" }, "bloom": "1", "hl_context": "Type II hypersensitivity , which involves IgG-mediated lysis of cells by complement proteins , occurs during mismatched blood transfusions and blood compatibility diseases such as erythroblastosis fetalis ( see section on transplantation ) . <hl> Type III hypersensitivity occurs with diseases such as systemic lupus erythematosus , where soluble antigens , mostly DNA and other material from the nucleus , and antibodies accumulate in the blood to the point that the antigen and antibody precipitate along blood vessel linings . <hl> These immune complexes often lodge in the kidneys , joints , and other organs where they can activate complement proteins and cause inflammation . <hl> The word “ hypersensitivity ” simply means sensitive beyond normal levels of activation . <hl> Allergies and inflammatory responses to nonpathogenic environmental substances have been observed since the dawn of history . Hypersensitivity is a medical term describing symptoms that are now known to be caused by unrelated mechanisms of immunity . Still , it is useful for this discussion to use the four types of hypersensitivities as a guide to understand these mechanisms ( Figure 21.28 ) .", "hl_sentences": "Type III hypersensitivity occurs with diseases such as systemic lupus erythematosus , where soluble antigens , mostly DNA and other material from the nucleus , and antibodies accumulate in the blood to the point that the antigen and antibody precipitate along blood vessel linings . The word “ hypersensitivity ” simply means sensitive beyond normal levels of activation .", "question": { "cloze_format": "The type of hypersensitivity that involves soluble antigen-antibody complexes is ___ .", "normal_format": "Which type of hypersensitivity involves soluble antigen-antibody complexes?", "question_choices": [ "type I", "type II", "type III", "type IV" ], "question_id": "fs-id2158272", "question_text": "Which type of hypersensitivity involves soluble antigen-antibody complexes?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "recruitment of immune cells" }, "bloom": "1", "hl_context": "Delayed hypersensitivity , or type IV hypersensitivity , is basically a standard cellular immune response . <hl> In delayed hypersensitivity , the first exposure to an antigen is called sensitization , such that on re-exposure , a secondary cellular response results , secreting cytokines that recruit macrophages and other phagocytes to the site . <hl> <hl> These sensitized T cells , of the Th1 class , will also activate cytotoxic T cells . <hl> <hl> The time it takes for this reaction to occur accounts for the 24 - to 72 - hour delay in development . <hl>", "hl_sentences": "In delayed hypersensitivity , the first exposure to an antigen is called sensitization , such that on re-exposure , a secondary cellular response results , secreting cytokines that recruit macrophages and other phagocytes to the site . These sensitized T cells , of the Th1 class , will also activate cytotoxic T cells . 
The time it takes for this reaction to occur accounts for the 24 - to 72 - hour delay in development .", "question": { "cloze_format": "(The) ___ cause(s) the delay in delayed hypersensitivity.", "normal_format": "What causes the delay in delayed hypersensitivity?", "question_choices": [ "inflammation", "cytokine release", "recruitment of immune cells", "histamine release" ], "question_id": "fs-id2302229", "question_text": "What causes the delay in delayed hypersensitivity?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "histamine release" }, "bloom": null, "hl_context": "Allergists use skin testing to identify allergens in type I hypersensitivity . <hl> In skin testing , allergen extracts are injected into the epidermis , and a positive result of a soft , pale swelling at the site surrounded by a red zone ( called the wheal and flare response ) , caused by the release of histamine and the granule mediators , usually occurs within 30 minutes . <hl> The soft center is due to fluid leaking from the blood vessels and the redness is caused by the increased blood flow to the area that results from the dilation of local blood vessels at the site . Antigens that cause allergic responses are often referred to as allergens . The specificity of the immediate hypersensitivity response is predicated on the binding of allergen-specific IgE to the mast cell surface . The process of producing allergen-specific IgE is called sensitization , and is a necessary prerequisite for the symptoms of immediate hypersensitivity to occur . Allergies and allergic asthma are mediated by mast cell degranulation that is caused by the crosslinking of the antigen-specific IgE molecules on the mast cell surface . <hl> The mediators released have various vasoactive effects already discussed , but the major symptoms of inhaled allergens are the nasal edema and runny nose caused by the increased vascular permeability and increased blood flow of nasal blood vessels . <hl> <hl> As these mediators are released with mast cell degranulation , type I hypersensitivity reactions are usually rapid and occur within just a few minutes , hence the term immediate hypersensitivity . <hl> The word “ hypersensitivity ” simply means sensitive beyond normal levels of activation . Allergies and inflammatory responses to nonpathogenic environmental substances have been observed since the dawn of history . <hl> Hypersensitivity is a medical term describing symptoms that are now known to be caused by unrelated mechanisms of immunity . <hl> Still , it is useful for this discussion to use the four types of hypersensitivities as a guide to understand these mechanisms ( Figure 21.28 ) .", "hl_sentences": "In skin testing , allergen extracts are injected into the epidermis , and a positive result of a soft , pale swelling at the site surrounded by a red zone ( called the wheal and flare response ) , caused by the release of histamine and the granule mediators , usually occurs within 30 minutes . The mediators released have various vasoactive effects already discussed , but the major symptoms of inhaled allergens are the nasal edema and runny nose caused by the increased vascular permeability and increased blood flow of nasal blood vessels . As these mediators are released with mast cell degranulation , type I hypersensitivity reactions are usually rapid and occur within just a few minutes , hence the term immediate hypersensitivity . 
Hypersensitivity is a medical term describing symptoms that are now known to be caused by unrelated mechanisms of immunity .", "question": { "cloze_format": "___ is/are a critical feature of immediate hypersensitivity.", "normal_format": "Which of the following is a critical feature of immediate hypersensitivity?", "question_choices": [ "inflammation", "cytotoxic T cells", "recruitment of immune cells", "histamine release" ], "question_id": "fs-id2326143", "question_text": "Which of the following is a critical feature of immediate hypersensitivity?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "rheumatic fever" }, "bloom": "1", "hl_context": "Environmental triggers seem to play large roles in autoimmune responses . One explanation for the breakdown of tolerance is that , after certain bacterial infections , an immune response to a component of the bacterium cross-reacts with a self-antigen . <hl> This mechanism is seen in rheumatic fever , a result of infection with Streptococcus bacteria , which causes strep throat . <hl> <hl> The antibodies to this pathogen ’ s M protein cross-react with an antigenic component of heart myosin , a major contractile protein of the heart that is critical to its normal function . <hl> <hl> The antibody binds to these molecules and activates complement proteins , causing damage to the heart , especially to the heart valves . <hl> On the other hand , some theories propose that having multiple common infectious diseases actually prevents autoimmune responses . The fact that autoimmune diseases are rare in countries that have a high incidence of infectious diseases supports this idea , another example of the hygiene hypothesis discussed earlier in this chapter .", "hl_sentences": "This mechanism is seen in rheumatic fever , a result of infection with Streptococcus bacteria , which causes strep throat . The antibodies to this pathogen ’ s M protein cross-react with an antigenic component of heart myosin , a major contractile protein of the heart that is critical to its normal function . The antibody binds to these molecules and activates complement proteins , causing damage to the heart , especially to the heart valves .", "question": { "cloze_format": "___ is an autoimmune disease of the heart.", "normal_format": "Which of the following is an autoimmune disease of the heart?", "question_choices": [ "rheumatoid arthritis", "lupus", "rheumatic fever", "Hashimoto’s thyroiditis" ], "question_id": "fs-id2021024", "question_text": "Which of the following is an autoimmune disease of the heart?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "epinephrine" }, "bloom": "1", "hl_context": "Most allergens are in themselves nonpathogenic and therefore innocuous . Some individuals develop mild allergies , which are usually treated with antihistamines . Others develop severe allergies that may cause anaphylactic shock , which can potentially be fatal within 20 to 30 minutes if untreated . This drop in blood pressure ( shock ) with accompanying contractions of bronchial smooth muscle is caused by systemic mast cell degranulation when an allergen is eaten ( for example , shellfish and peanuts ) , injected ( by a bee sting or being administered penicillin ) , or inhaled ( asthma ) . <hl> Because epinephrine raises blood pressure and relaxes bronchial smooth muscle , it is routinely used to counteract the effects of anaphylaxis and can be lifesaving . 
<hl> Patients with known severe allergies are encouraged to keep automatic epinephrine injectors with them at all times , especially when away from easy access to hospitals .", "hl_sentences": "Because epinephrine raises blood pressure and relaxes bronchial smooth muscle , it is routinely used to counteract the effects of anaphylaxis and can be lifesaving .", "question": { "cloze_format": "The drug that is used to counteract the effects of anaphylactic shock is ___.", "normal_format": "What drug is used to counteract the effects of anaphylactic shock?", "question_choices": [ "epinephrine", "antihistamines", "antibiotics", "aspirin" ], "question_id": "fs-id2573002", "question_text": "What drug is used to counteract the effects of anaphylactic shock?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "Kaposi’s sarcoma" }, "bloom": "1", "hl_context": "<hl> Immune Responses Against Cancer It is clear that with some cancers , for example Kaposi ’ s sarcoma , a healthy immune system does a good job at controlling them ( Figure 21.31 ) . <hl> This disease , which is caused by the human herpesvirus , is almost never observed in individuals with strong immune systems , such as the young and immunocompetent . Other examples of cancers caused by viruses include liver cancer caused by the hepatitis B virus and cervical cancer caused by the human papilloma virus . As these last two viruses have vaccines available for them , getting vaccinated can help prevent these two types of cancer by stimulating the immune response . Introduction In June 1981 , the Centers for Disease Control and Prevention ( CDC ) , in Atlanta , Georgia , published a report of an unusual cluster of five patients in Los Angeles , California . All five were diagnosed with a rare pneumonia caused by a fungus called Pneumocystis jirovecii ( formerly known as Pneumocystis carinii ) . Why was this unusual ? Although commonly found in the lungs of healthy individuals , this fungus is an opportunistic pathogen that causes disease in individuals with suppressed or underdeveloped immune systems . The very young , whose immune systems have yet to mature , and the elderly , whose immune systems have declined with age , are particularly susceptible . The five patients from LA , though , were between 29 and 36 years of age and should have been in the prime of their lives , immunologically speaking . What could be going on ? A few days later , a cluster of eight cases was reported in New York City , also involving young patients , this time exhibiting a rare form of skin cancer known as Kaposi ’ s sarcoma . This cancer of the cells that line the blood and lymphatic vessels was previously observed as a relatively innocuous disease of the elderly . The disease that doctors saw in 1981 was frighteningly more severe , with multiple , fast-growing lesions that spread to all parts of the body , including the trunk and face . Could the immune systems of these young patients have been compromised in some way ? Indeed , when they were tested , they exhibited extremely low numbers of a specific type of white blood cell in their bloodstreams , indicating that they had somehow lost a major part of the immune system . <hl> Acquired immune deficiency syndrome , or AIDS , turned out to be a new disease caused by the previously unknown human immunodeficiency virus ( HIV ) . 
<hl> Although nearly 100 percent fatal in those with active HIV infections in the early years , the development of anti-HIV drugs has transformed HIV infection into a chronic , manageable disease and not the certain death sentence it once was . One positive outcome resulting from the emergence of HIV disease was that the public ’ s attention became focused as never before on the importance of having a functional and healthy immune system .", "hl_sentences": "Immune Responses Against Cancer It is clear that with some cancers , for example Kaposi ’ s sarcoma , a healthy immune system does a good job at controlling them ( Figure 21.31 ) . Acquired immune deficiency syndrome , or AIDS , turned out to be a new disease caused by the previously unknown human immunodeficiency virus ( HIV ) .", "question": { "cloze_format": "The type of cancer that is associated with HIV disease is ___.", "normal_format": "Which type of cancer is associated with HIV disease?", "question_choices": [ "Kaposi’s sarcoma", "melanoma", "lymphoma", "renal cell carcinoma" ], "question_id": "fs-id1492110", "question_text": "Which type of cancer is associated with HIV disease?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 3, "ans_text": "graft-versus-host disease" }, "bloom": "1", "hl_context": "<hl> One disease of transplantation occurs with bone marrow transplants , which are used to treat various diseases , including SCID and leukemia . <hl> <hl> Because the bone marrow cells being transplanted contain lymphocytes capable of mounting an immune response , and because the recipient ’ s immune response has been destroyed before receiving the transplant , the donor cells may attack the recipient tissues , causing graft-versus-host disease . <hl> Symptoms of this disease , which usually include a rash and damage to the liver and mucosa , are variable , and attempts have been made to moderate the disease by first removing mature T cells from the donor bone marrow before transplanting it .", "hl_sentences": "One disease of transplantation occurs with bone marrow transplants , which are used to treat various diseases , including SCID and leukemia . Because the bone marrow cells being transplanted contain lymphocytes capable of mounting an immune response , and because the recipient ’ s immune response has been destroyed before receiving the transplant , the donor cells may attack the recipient tissues , causing graft-versus-host disease .", "question": { "cloze_format": "The disease that is associated with bone marrow transplants is (the) ___ .", "normal_format": "What disease is associated with bone marrow transplants?", "question_choices": [ "diabetes mellitus type I", "melanoma", "headache", "graft-versus-host disease" ], "question_id": "fs-id2584330", "question_text": "What disease is associated with bone marrow transplants?" }, "references_are_paraphrase": 0 } ]
21
21.1 Anatomy of the Lymphatic and Immune Systems Learning Objectives By the end of this section, you will be able to: Describe the structure and function of the lymphatic tissue (lymph fluid, vessels, ducts, and organs) Describe the structure and function of the primary and secondary lymphatic organs Discuss the cells of the immune system, how they function, and their relationship with the lymphatic system The immune system is the complex collection of cells and organs that destroys or neutralizes pathogens that would otherwise cause disease or death. The lymphatic system, for most people, is associated with the immune system to such a degree that the two systems are virtually indistinguishable. The lymphatic system is the system of vessels, cells, and organs that carries excess fluids to the bloodstream and filters pathogens from the blood. The swelling of lymph nodes during an infection and the transport of lymphocytes via the lymphatic vessels are but two examples of the many connections between these critical organ systems. Functions of the Lymphatic System A major function of the lymphatic system is to drain body fluids and return them to the bloodstream. Blood pressure causes leakage of fluid from the capillaries, resulting in the accumulation of fluid in the interstitial space—that is, the spaces between individual cells in the tissues. In humans, 20 liters of plasma is released into the interstitial space of the tissues each day due to capillary filtration. Once this filtrate is out of the bloodstream and in the tissue spaces, it is referred to as interstitial fluid. Of this, 17 liters is reabsorbed directly by the blood vessels. But what happens to the remaining three liters? This is where the lymphatic system comes into play. It drains the excess fluid and empties it back into the bloodstream via a series of vessels, trunks, and ducts. Lymph is the term used to describe interstitial fluid once it has entered the lymphatic system. When the lymphatic system is damaged in some way, such as by being blocked by cancer cells or destroyed by injury, protein-rich interstitial fluid accumulates (sometimes “backs up” from the lymph vessels) in the tissue spaces. This inappropriate accumulation of fluid, referred to as lymphedema, may lead to serious medical consequences.
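The fluid arithmetic above is worth making explicit. The short Python sketch below is an editorial illustration rather than part of the chapter: the 20-liter filtration and 17-liter reabsorption figures come from the passage above, while the idea of varying lymphatic drainage capacity to mimic a blockage (as in lymphedema) is an added assumption.

# Daily interstitial fluid balance, using the chapter's round numbers.
FILTERED_L_PER_DAY = 20.0    # plasma filtered out of the capillaries each day
REABSORBED_L_PER_DAY = 17.0  # reabsorbed directly by the blood vessels

def daily_fluid_accumulation(lymph_drainage_l_per_day):
    # Liters per day left behind in the tissue spaces after venous
    # reabsorption and lymphatic drainage (a deliberately simple model).
    excess = FILTERED_L_PER_DAY - REABSORBED_L_PER_DAY  # the 3 L of lymph
    drained = min(excess, lymph_drainage_l_per_day)
    return excess - drained

print(daily_fluid_accumulation(3.0))  # 0.0 -> healthy: all 3 L return as lymph
print(daily_fluid_accumulation(1.0))  # 2.0 -> blocked vessels: fluid accumulates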
As the vertebrate immune system evolved, the network of lymphatic vessels became convenient avenues for transporting the cells of the immune system. Additionally, the transport of dietary lipids and fat-soluble vitamins absorbed in the gut uses this system. Cells of the immune system not only use lymphatic vessels to make their way from interstitial spaces back into the circulation, but they also use lymph nodes as major staging areas for the development of critical immune responses. A lymph node is one of the small, bean-shaped organs located throughout the lymphatic system. Interactive Link Visit this website for an overview of the lymphatic system. What are the three main components of the lymphatic system? Structure of the Lymphatic System The lymphatic vessels begin as blind-ended capillaries (closed at one end), which feed into larger and larger lymphatic vessels and eventually empty into the bloodstream through a series of ducts. Along the way, the lymph travels through the lymph nodes, which are commonly found near the groin, armpits, neck, chest, and abdomen. Humans have about 500–600 lymph nodes throughout the body (Figure 21.2). A major distinction between the lymphatic and cardiovascular systems in humans is that lymph is not actively pumped by the heart, but is forced through the vessels by the movements of the body, the contraction of skeletal muscles during body movements, and breathing. One-way valves (semilunar valves) in lymphatic vessels keep the lymph moving toward the heart. Lymph flows from the lymphatic capillaries, through lymphatic vessels, and then is emptied into the circulatory system via the lymphatic ducts located at the junction of the jugular and subclavian veins in the neck. Lymphatic Capillaries Lymphatic capillaries, also called the terminal lymphatics, are vessels where interstitial fluid enters the lymphatic system to become lymph fluid. Located in almost every tissue in the body, these vessels are interlaced among the arterioles and venules of the circulatory system in the soft connective tissues of the body (Figure 21.3). Exceptions are the central nervous system, bone marrow, bones, teeth, and the cornea of the eye, which do not contain lymph vessels. Lymphatic capillaries are formed by a one-cell-thick layer of endothelial cells and represent the open end of the system, allowing interstitial fluid to flow into them via overlapping cells (see Figure 21.3). When interstitial pressure is low, the endothelial flaps close to prevent “backflow.” As interstitial pressure increases, the spaces between the cells open up, allowing the fluid to enter. Entry of fluid into lymphatic capillaries is also enabled by the collagen filaments that anchor the capillaries to surrounding structures. As interstitial pressure increases, the filaments pull on the endothelial cell flaps, opening them up even further to allow easy entry of fluid.
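The pressure-gated behavior of the overlapping endothelial flaps can be summarized as a one-way valve. The sketch below is a toy model added for illustration; the threshold rule and the linear flow assumption are editorial simplifications, not measurements from the text.

def net_inflow(interstitial_pressure, lymph_pressure, conductance=1.0):
    # Overlapping endothelial flaps act as one-way valves: fluid enters the
    # lymphatic capillary only when interstitial pressure exceeds the
    # pressure inside the vessel; the flaps close to prevent backflow.
    if interstitial_pressure > lymph_pressure:
        return conductance * (interstitial_pressure - lymph_pressure)
    return 0.0  # flaps closed: no backflow

print(net_inflow(5.0, 2.0))  # positive: flaps pulled open, fluid enters
print(net_inflow(1.0, 2.0))  # 0.0: flaps held shut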
In the small intestine, lymphatic capillaries called lacteals are critical for the transport of dietary lipids and lipid-soluble vitamins to the bloodstream. There, dietary triglycerides combine with other lipids and proteins and enter the lacteals to form a milky fluid called chyle. The chyle then travels through the lymphatic system, eventually entering the bloodstream. Larger Lymphatic Vessels, Trunks, and Ducts The lymphatic capillaries empty into larger lymphatic vessels, which are similar to veins in terms of their three-tunic structure and the presence of valves. These one-way valves are located fairly close to one another, and each one causes a bulge in the lymphatic vessel, giving the vessels a beaded appearance (see Figure 21.3). The superficial and deep lymphatics eventually merge to form larger lymphatic vessels known as lymphatic trunks. On the right side of the body, the right sides of the head, thorax, and right upper limb drain lymph fluid into the right subclavian vein via the right lymphatic duct (Figure 21.4). On the left side of the body, the remaining portions of the body drain into the larger thoracic duct, which drains into the left subclavian vein. The thoracic duct itself begins just beneath the diaphragm in the cisterna chyli, a sac-like chamber that receives lymph from the lower abdomen, pelvis, and lower limbs by way of the left and right lumbar trunks and the intestinal trunk. The overall drainage system of the body is asymmetrical (see Figure 21.4). The right lymphatic duct receives lymph from only the upper right side of the body. The lymph from the rest of the body enters the bloodstream through the thoracic duct via all the remaining lymphatic trunks. In general, lymphatic vessels of the subcutaneous tissues of the skin, that is, the superficial lymphatics, follow the same routes as veins, whereas the deep lymphatic vessels of the viscera generally follow the paths of arteries. The Organization of Immune Function The immune system is a collection of barriers, cells, and soluble proteins that interact and communicate with each other in extraordinarily complex ways. The modern model of immune function is organized into three phases based on the timing of their effects. The three temporal phases consist of the following: Barrier defenses such as the skin and mucous membranes, which act instantaneously to prevent pathogenic invasion into the body tissues The rapid but nonspecific innate immune response, which consists of a variety of specialized cells and soluble factors The slower but more specific and effective adaptive immune response, which involves many cell types and soluble factors, but is primarily controlled by white blood cells (leukocytes) known as lymphocytes, which help control immune responses The cells of the blood, including all those involved in the immune response, arise in the bone marrow via various differentiation pathways from hematopoietic stem cells (Figure 21.5). In contrast with embryonic stem cells, hematopoietic stem cells are present throughout adulthood and allow for the continuous differentiation of blood cells to replace those lost to age or function. These cells can be divided into three classes based on function: Phagocytic cells, which ingest pathogens to destroy them Lymphocytes, which specifically coordinate the activities of adaptive immunity Cells containing cytoplasmic granules, which help mediate immune responses against parasites and intracellular pathogens such as viruses Lymphocytes: B Cells, T Cells, Plasma Cells, and Natural Killer Cells As stated above, lymphocytes are the primary cells of adaptive immune responses (Table 21.1). The two basic types of lymphocytes, B cells and T cells, are identical morphologically, with a large central nucleus surrounded by a thin layer of cytoplasm. They are distinguished from each other by their surface protein markers as well as by the molecules they secrete. While B cells mature in red bone marrow and T cells mature in the thymus, they both initially develop from bone marrow. T cells migrate from bone marrow to the thymus gland, where they further mature. B cells and T cells are found in many parts of the body, circulating in the bloodstream and lymph, and residing in secondary lymphoid organs, including the spleen and lymph nodes, which will be described later in this section. The human body contains approximately 10^12 lymphocytes. B Cells B cells are immune cells that function primarily by producing antibodies. An antibody is any of the group of proteins that binds specifically to pathogen-associated molecules known as antigens. An antigen is a chemical structure on the surface of a pathogen that binds to T or B lymphocyte antigen receptors. Once activated by binding to antigen, B cells differentiate into cells that secrete a soluble form of their surface antibodies. These activated B cells are known as plasma cells. T Cells The T cell, on the other hand, does not secrete antibody but performs a variety of functions in the adaptive immune response. Different T cell types have the ability either to secrete soluble factors that communicate with other cells of the adaptive immune response or to destroy cells infected with intracellular pathogens.
The roles of T and B lymphocytes in the adaptive immune response will be discussed further in this chapter. Plasma Cells Another type of lymphocyte of importance is the plasma cell. A plasma cell is a B cell that has differentiated in response to antigen binding, and has thereby gained the ability to secrete soluble antibodies. These cells differ in morphology from standard B and T cells in that they contain a large amount of cytoplasm packed with the protein-synthesizing machinery known as rough endoplasmic reticulum. Natural Killer Cells A fourth important lymphocyte is the natural killer cell, a participant in the innate immune response. A natural killer cell (NK) is a circulating blood cell that contains cytotoxic (cell-killing) granules in its extensive cytoplasm. It shares this mechanism with the cytotoxic T cells of the adaptive immune response. NK cells are among the body’s first lines of defense against viruses and certain types of cancer.
Table 21.1 Lymphocytes
Type of lymphocyte | Primary function
B lymphocyte | Generates diverse antibodies
T lymphocyte | Secretes chemical messengers
Plasma cell | Secretes antibodies
NK cell | Destroys virally infected cells
Interactive Link Visit this website to learn about the many different cell types in the immune system and their very specialized jobs. What is the role of the dendritic cell in an HIV infection? Primary Lymphoid Organs and Lymphocyte Development Understanding the differentiation and development of B and T cells is critical to the understanding of the adaptive immune response. It is through this process that the body (ideally) learns to destroy only pathogens and leaves the body’s own cells relatively intact. The primary lymphoid organs are the bone marrow and thymus gland. The lymphoid organs are where lymphocytes mature, proliferate, and are selected, which enables them to attack pathogens without harming the cells of the body. Bone Marrow In the embryo, blood cells are made in the yolk sac. As development proceeds, this function is taken over by the spleen, lymph nodes, and liver. Later, the bone marrow takes over most hematopoietic functions, although the final stages of the differentiation of some cells may take place in other organs. The red bone marrow is a loose collection of cells where hematopoiesis occurs, and the yellow bone marrow is a site of energy storage, which consists largely of fat cells (Figure 21.6). The B cell undergoes nearly all of its development in the red bone marrow, whereas the immature T cell, called a thymocyte, leaves the bone marrow and matures largely in the thymus gland. Thymus The thymus gland is a bilobed organ found in the space between the sternum and the aorta of the heart (Figure 21.7). Connective tissue holds the lobes closely together but also separates them and forms a capsule. Interactive Link View the University of Michigan WebScope to explore the tissue sample in greater detail. The connective tissue capsule further divides the thymus into lobules via extensions called trabeculae. The outer region of the organ is known as the cortex and contains large numbers of thymocytes with some epithelial cells, macrophages, and dendritic cells (two types of phagocytic cells that are derived from monocytes). The cortex is densely packed, so it stains more intensely than the rest of the thymus (see Figure 21.7). The medulla, where thymocytes migrate before leaving the thymus, contains a less dense collection of thymocytes, epithelial cells, and dendritic cells. Aging and the Immune System
By the year 2050, 25 percent of the population of the United States will be 60 years of age or older. The CDC estimates that 80 percent of those 60 years and older have one or more chronic diseases associated with deficiencies of the immune system. This loss of immune function with age is called immunosenescence. To treat this growing population, medical professionals must better understand the aging process. One major cause of age-related immune deficiencies is thymic involution, the shrinking of the thymus gland that begins at birth, at a rate of about three percent tissue loss per year, and continues until 35–45 years of age, when the rate declines to about one percent loss per year for the rest of one’s life; a rough calculation of what these rates imply follows this passage. At that pace, the total loss of thymic epithelial tissue and thymocytes would occur at about 120 years of age. Thus, this age is a theoretical limit to a healthy human lifespan. Thymic involution has been observed in all vertebrate species that have a thymus gland. Animal studies have shown that transplanted thymic grafts between inbred strains of mice involuted according to the age of the donor and not of the recipient, implying that the process is genetically programmed. There is evidence that the thymic microenvironment, so vital to the development of naïve T cells, loses thymic epithelial cells according to the decreasing expression of the FOXN1 gene with age. It is also known that thymic involution can be altered by hormone levels. Sex hormones such as estrogen and testosterone enhance involution, and the hormonal changes in pregnant women cause a temporary thymic involution that reverses itself when the size of the thymus and its hormone levels return to normal, usually after lactation ceases. What does all this tell us? Can we reverse immunosenescence, or at least slow it down? The potential is there for using thymic transplants from younger donors to keep thymic output of naïve T cells high. Gene therapies that target gene expression are also seen as future possibilities. The more we learn through immunosenescence research, the more opportunities there will be to develop therapies, even though these will likely take decades to develop. The ultimate goal is for everyone to live and be healthy longer, but there may be limits to immortality imposed by our genes and hormones.
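The involution rates quoted above invite a quick back-of-the-envelope check. The Python sketch below is an editorial illustration: it assumes the percentages compound year over year (the text does not specify linear versus compounding loss) and takes 45 as the age at which the rate switches from three percent to one percent.

def thymic_tissue_remaining(age_years, switch_age=45, early_rate=0.03, late_rate=0.01):
    # Fraction of thymic tissue remaining, compounding the chapter's
    # ~3%/year loss until switch_age and ~1%/year thereafter.
    fraction = 1.0
    for year in range(int(age_years)):
        fraction *= 1.0 - (early_rate if year < switch_age else late_rate)
    return fraction

for age in (20, 45, 80, 120):
    print(age, round(thymic_tissue_remaining(age), 3))
# Under compounding, roughly 12% of the tissue would still remain at age 120,
# so the "total loss at about 120 years" figure is best read as a rough
# extrapolation rather than exact arithmetic.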
Secondary Lymphoid Organs and their Roles in Active Immune Responses Lymphocytes develop and mature in the primary lymphoid organs, but they mount immune responses from the secondary lymphoid organs. A naïve lymphocyte is one that has left the primary organ and entered a secondary lymphoid organ. Naïve lymphocytes are fully functional immunologically, but have yet to encounter an antigen to respond to. In addition to circulating in the blood and lymph, lymphocytes concentrate in secondary lymphoid organs, which include the lymph nodes, spleen, and lymphoid nodules. All of these tissues have many features in common, including the following: The presence of lymphoid follicles, the sites of the formation of lymphocytes, with specific B cell-rich and T cell-rich areas An internal structure of reticular fibers with associated fixed macrophages Germinal centers, which are the sites of rapidly dividing and differentiating B lymphocytes Specialized post-capillary vessels known as high endothelial venules; the cells lining these venules are thicker and more columnar than normal endothelial cells, which allows cells from the blood to directly enter these tissues Lymph Nodes Lymph nodes function to remove debris and pathogens from the lymph, and are thus sometimes referred to as the “filters of the lymph” (Figure 21.8). Any bacteria that infect the interstitial fluid are taken up by the lymphatic capillaries and transported to a regional lymph node. Dendritic cells and macrophages within this organ internalize and kill many of the pathogens that pass through, thereby removing them from the body. The lymph node is also the site of adaptive immune responses mediated by T cells, B cells, and accessory cells of the adaptive immune system. Like the thymus, the bean-shaped lymph nodes are surrounded by a tough capsule of connective tissue and are separated into compartments by trabeculae, the extensions of the capsule. In addition to the structure provided by the capsule and trabeculae, the structural support of the lymph node is provided by a series of reticular fibers laid down by fibroblasts. Interactive Link View the University of Michigan WebScope to explore the tissue sample in greater detail. The major routes into the lymph node are via afferent lymphatic vessels (see Figure 21.8). Cells and lymph fluid that leave the lymph node may do so by another set of vessels known as the efferent lymphatic vessels. Lymph enters the lymph node via the subcapsular sinus, which is occupied by dendritic cells, macrophages, and reticular fibers. Within the cortex of the lymph node are lymphoid follicles, which consist of germinal centers of rapidly dividing B cells surrounded by a layer of T cells and other accessory cells. As the lymph continues to flow through the node, it enters the medulla, which consists of medullary cords of B cells and plasma cells, and the medullary sinuses where the lymph collects before leaving the node via the efferent lymphatic vessels. Spleen In addition to the lymph nodes, the spleen is a major secondary lymphoid organ (Figure 21.9). It is about 12 cm (5 in) long and is attached to the lateral border of the stomach via the gastrosplenic ligament. The spleen is a fragile organ without a strong capsule, and is dark red due to its extensive vascularization. The spleen is sometimes called the “filter of the blood” because of its extensive vascularization and the presence of macrophages and dendritic cells that remove microbes and other materials from the blood, including dying red blood cells. The spleen also functions as the location of immune responses to blood-borne pathogens. The spleen is also divided by trabeculae of connective tissue, and within each splenic nodule is an area of red pulp, consisting of mostly red blood cells, and white pulp, which resembles the lymphoid follicles of the lymph nodes. Upon entering the spleen, the splenic artery splits into several arterioles (surrounded by white pulp) and eventually into sinusoids. Blood from the capillaries subsequently collects in the venous sinuses and leaves via the splenic vein.
The red pulp consists of reticular fibers with fixed macrophages attached, free macrophages, and all of the other cells typical of the blood, including some lymphocytes. The white pulp surrounds a central arteriole and consists of germinal centers of dividing B cells surrounded by T cells and accessory cells, including macrophages and dendritic cells. Thus, the red pulp primarily functions as a filtration system of the blood, using cells of the relatively nonspecific immune response, and white pulp is where adaptive T and B cell responses are mounted. Lymphoid Nodules The other lymphoid tissues, the lymphoid nodules, have a simpler architecture than the spleen and lymph nodes in that they consist of a dense cluster of lymphocytes without a surrounding fibrous capsule. These nodules are located in the respiratory and digestive tracts, areas routinely exposed to environmental pathogens. Tonsils are lymphoid nodules located along the inner surface of the pharynx and are important in developing immunity to oral pathogens (Figure 21.10). The tonsil located at the back of the throat, the pharyngeal tonsil, is sometimes referred to as the adenoid when swollen. Such swelling is an indication of an active immune response to infection. Histologically, tonsils do not contain a complete capsule, and the epithelial layer invaginates deeply into the interior of the tonsil to form tonsillar crypts. These structures, which accumulate all sorts of materials taken into the body through eating and breathing, actually “encourage” pathogens to penetrate deep into the tonsillar tissues where they are acted upon by numerous lymphoid follicles and eliminated. This seems to be the major function of tonsils—to help children’s bodies recognize, destroy, and develop immunity to common environmental pathogens so that they will be protected in their later lives. Tonsils are often removed in those children who have recurring throat infections, especially those involving the palatine tonsils on either side of the throat, whose swelling may interfere with their breathing and/or swallowing. Interactive Link View the University of Michigan WebScope to explore the tissue sample in greater detail. Mucosa-associated lymphoid tissue (MALT) consists of an aggregate of lymphoid follicles directly associated with the mucous membrane epithelia. MALT makes up dome-shaped structures found underlying the mucosa of the gastrointestinal tract, breast tissue, lungs, and eyes. Peyer’s patches, a type of MALT in the small intestine, are especially important for immune responses against ingested substances (Figure 21.11). Peyer’s patches contain specialized epithelial cells called M (or microfold) cells that sample material from the intestinal lumen and transport it to nearby follicles so that adaptive immune responses to potential pathogens can be mounted. A similar process occurs involving MALT in the mucosa and submucosa of the appendix. A blockage of the lumen triggers these cells to elicit an inflammatory response that can lead to appendicitis. Bronchus-associated lymphoid tissue (BALT) consists of lymphoid follicular structures with an overlying epithelial layer found along the bifurcations of the bronchi, and between bronchi and arteries. They also have the typically less-organized structure of other lymphoid nodules. These tissues, in addition to the tonsils, are effective against inhaled pathogens.
21.2 Barrier Defenses and the Innate Immune Response Learning Objectives By the end of this section, you will be able to: Describe the barrier defenses of the body Show how the innate immune response is important and how it helps guide and prepare the body for adaptive immune responses Describe various soluble factors that are part of the innate immune response Explain the steps of inflammation and how they lead to destruction of a pathogen Discuss early induced immune responses and their level of effectiveness The immune system can be divided into two overlapping mechanisms to destroy pathogens: the innate immune response, which is relatively rapid but nonspecific and thus not always effective, and the adaptive immune response, which is slower in its development during an initial infection with a pathogen, but is highly specific and effective at attacking a wide variety of pathogens (Figure 21.12). Any discussion of the innate immune response usually begins with the physical barriers that prevent pathogens from entering the body, destroy them after they enter, or flush them out before they can establish themselves in the hospitable environment of the body’s soft tissues. Barrier defenses are part of the body’s most basic defense mechanisms. The barrier defenses are not a response to infections, but they are continuously working to protect against a broad range of pathogens. The different modes of barrier defenses are associated with the external surfaces of the body, where pathogens may try to enter (Table 21.2). The primary barrier to the entrance of microorganisms into the body is the skin. Not only is the skin covered with a layer of dead, keratinized epithelium that is too dry for bacteria to grow in, but as these cells are continuously sloughed off from the skin, they carry bacteria and other pathogens with them. Additionally, sweat and other skin secretions may lower pH, contain toxic lipids, and physically wash microbes away.
Table 21.2 Barrier Defenses
Site | Specific defense | Protective aspect
Skin | Epidermal surface | Keratinized cells of surface, Langerhans cells
Skin (sweat/secretions) | Sweat glands, sebaceous glands | Low pH, washing action
Oral cavity | Salivary glands | Lysozyme
Stomach | Gastrointestinal tract | Low pH
Mucosal surfaces | Mucosal epithelium | Nonkeratinized epithelial cells
Normal flora (nonpathogenic bacteria) | Mucosal tissues | Prevent pathogens from growing on mucosal surfaces
Another barrier is the saliva in the mouth, which is rich in lysozyme—an enzyme that destroys bacteria by digesting their cell walls. The acidic environment of the stomach, which is fatal to many pathogens, is also a barrier. Additionally, the mucus layer of the gastrointestinal tract, respiratory tract, reproductive tract, eyes, ears, and nose traps both microbes and debris, and facilitates their removal. In the case of the upper respiratory tract, ciliated epithelial cells move potentially contaminated mucus upwards to the mouth, where it is then swallowed into the digestive tract, ending up in the harsh acidic environment of the stomach. Considering how often you breathe compared to how often you eat or perform other activities that expose you to pathogens, it is not surprising that multiple barrier mechanisms have evolved to work in concert to protect this vital area. Cells of the Innate Immune Response A phagocyte is a cell that is able to surround and engulf a particle or cell, a process called phagocytosis.
The phagocytes of the immune system engulf other particles or cells, either to clean an area of debris and old cells or to kill pathogenic organisms such as bacteria. The phagocytes are the body’s fast-acting, first line of immunological defense against organisms that have breached barrier defenses and have entered the vulnerable tissues of the body. Phagocytes: Macrophages and Neutrophils Many of the cells of the immune system have a phagocytic ability, at least at some point during their life cycles. Phagocytosis is an important and effective mechanism of destroying pathogens during innate immune responses. The phagocyte takes the organism inside itself as a phagosome, which subsequently fuses with a lysosome and its digestive enzymes, effectively killing many pathogens. On the other hand, some bacteria, including Mycobacterium tuberculosis, the cause of tuberculosis, may be resistant to these enzymes and are therefore much more difficult to clear from the body. Macrophages, neutrophils, and dendritic cells are the major phagocytes of the immune system. A macrophage is an irregularly shaped phagocyte that is amoeboid in nature and is the most versatile of the phagocytes in the body. Macrophages move through tissues and squeeze through capillary walls using pseudopodia. They not only participate in innate immune responses but have also evolved to cooperate with lymphocytes as part of the adaptive immune response. Macrophages exist in many tissues of the body, either freely roaming through connective tissues or fixed to reticular fibers within specific tissues such as lymph nodes. When pathogens breach the body’s barrier defenses, macrophages are the first line of defense (Table 21.3). They are called different names, depending on the tissue: Kupffer cells in the liver, histiocytes in connective tissue, and alveolar macrophages in the lungs. A neutrophil is a phagocytic cell that is attracted via chemotaxis from the bloodstream to infected tissues. These spherical cells are granulocytes. A granulocyte contains cytoplasmic granules, which in turn contain a variety of vasoactive mediators such as histamine. In contrast, macrophages are agranulocytes. An agranulocyte has few or no cytoplasmic granules. Whereas macrophages act like sentries, always on guard against infection, neutrophils can be thought of as military reinforcements that are called into a battle to hasten the destruction of the enemy. Although usually thought of as the primary pathogen-killing cells of the inflammatory process of the innate immune response, new research has suggested that neutrophils play a role in the adaptive immune response as well, just as macrophages do. A monocyte is a circulating precursor cell that differentiates into either a macrophage or dendritic cell, which can be rapidly attracted to areas of infection by signal molecules of inflammation.
Table 21.3 Phagocytic Cells of the Innate Immune System
Cell | Cell type | Primary location | Function in the innate immune response
Macrophage | Agranulocyte | Body cavities/organs | Phagocytosis
Neutrophil | Granulocyte | Blood | Phagocytosis
Monocyte | Agranulocyte | Blood | Precursor of macrophage/dendritic cell
Natural Killer Cells NK cells are a type of lymphocyte that have the ability to induce apoptosis, that is, programmed cell death, in cells infected with intracellular pathogens such as obligate intracellular bacteria and viruses. NK cells recognize these cells by mechanisms that are still not well understood, but that presumably involve their surface receptors.
NK cells can induce apoptosis, in which a cascade of events inside the cell causes its own death, by either of two mechanisms: 1) NK cells are able to respond to chemical signals and express the fas ligand. The fas ligand is a surface molecule that binds to the fas molecule on the surface of the infected cell, sending it apoptotic signals, thus killing the cell and the pathogen within it; or 2) The granules of the NK cells release perforins and granzymes. A perforin is a protein that forms pores in the membranes of infected cells. A granzyme is a protein-digesting enzyme that enters the cell via the perforin pores and triggers apoptosis intracellularly. Both mechanisms are especially effective against virally infected cells. If apoptosis is induced before the virus has the ability to synthesize and assemble all its components, no infectious virus will be released from the cell, thus preventing further infection. Recognition of Pathogens Cells of the innate immune response, the phagocytic cells, and the cytotoxic NK cells recognize patterns of pathogen-specific molecules, such as bacterial cell wall components or bacterial flagellar proteins, using pattern recognition receptors. A pattern recognition receptor (PRR) is a membrane-bound receptor that recognizes characteristic features of a pathogen and molecules released by stressed or damaged cells. These receptors, which are thought to have evolved prior to the adaptive immune response, are present on the cell surface whether they are needed or not. Their variety, however, is limited by two factors. First, because each receptor type must be encoded by a specific gene, recognizing every possible pathogen this way would require the cell to devote most or all of its DNA to receptor genes. Second, the variety of receptors is limited by the finite surface area of the cell membrane. Thus, the innate immune system must “get by” using only a limited number of receptors that are active against as wide a variety of pathogens as possible. This strategy is in stark contrast to the approach used by the adaptive immune system, which uses large numbers of different receptors, each highly specific to a particular pathogen. Should the cells of the innate immune system come into contact with a species of pathogen they recognize, the cell will bind to the pathogen and initiate phagocytosis (or cellular apoptosis in the case of an intracellular pathogen) in an effort to destroy the offending microbe. Receptors vary somewhat according to cell type, but they usually include receptors for bacterial components and for complement, discussed below. Soluble Mediators of the Innate Immune Response The previous discussions have alluded to chemical signals that can induce cells to change various physiological characteristics, such as the expression of a particular receptor. These soluble factors are secreted during innate or early induced responses, and later during adaptive immune responses. Cytokines and Chemokines A cytokine is a signaling molecule that allows cells to communicate with each other over short distances. Cytokines are secreted into the intercellular space, and the action of the cytokine induces the receiving cell to change its physiology. A chemokine is a soluble chemical mediator similar to cytokines except that its function is to attract cells (chemotaxis) from longer distances. Interactive Link Visit this website to learn about phagocyte chemotaxis.
Phagocyte chemotaxis is the movement of phagocytes according to the secretion of chemical messengers in the form of interleukins and other chemokines. By what means does a phagocyte destroy a bacterium that it has ingested? Early Induced Proteins Early induced proteins are those that are not constitutively present in the body, but are made as they are needed early during the innate immune response. Interferons are an example of early induced proteins. Cells infected with viruses secrete interferons that travel to adjacent cells and induce them to make antiviral proteins. Thus, even though the initial cell is sacrificed, the surrounding cells are protected. Other early induced proteins specific for bacterial cell wall components are mannose-binding protein and C-reactive protein, made in the liver, which bind specifically to polysaccharide components of the bacterial cell wall. Phagocytes such as macrophages have receptors for these proteins, and they are thus able to recognize them as they are bound to the bacteria. This brings the phagocyte and bacterium into close proximity and enhances the phagocytosis of the bacterium by the process known as opsonization. Opsonization is the tagging of a pathogen for phagocytosis by the binding of an antibody or an antimicrobial protein. Complement System The complement system is a series of proteins constitutively found in the blood plasma. As such, these proteins are not considered part of the early induced immune response, even though they share features with some of the antibacterial proteins of this class. Made in the liver, they have a variety of functions in the innate immune response, using what is known as the “alternate pathway” of complement activation. Additionally, complement functions in the adaptive immune response as well, in what is called the classical pathway. The complement system consists of several proteins that enzymatically alter and fragment later proteins in a series, which is why it is termed a cascade. Once activated, the series of reactions is irreversible, and releases fragments that have the following actions: Bind to the cell membrane of the pathogen that activates it, labeling it for phagocytosis (opsonization) Diffuse away from the pathogen and act as chemotactic agents to attract phagocytic cells to the site of inflammation Form damaging pores in the plasma membrane of the pathogen Figure 21.13 shows the classical pathway, which requires antibodies of the adaptive immune response. The alternate pathway does not require an antibody to become activated. The splitting of the C3 protein is the common step to both pathways. In the alternate pathway, C3 is activated spontaneously and, after reacting with the molecules factor P, factor B, and factor D, splits apart. The larger fragment, C3b, binds to the surface of the pathogen and C3a, the smaller fragment, diffuses outward from the site of activation and attracts phagocytes to the site of infection. Surface-bound C3b then activates the rest of the cascade, with the last five proteins, C5–C9, forming the membrane-attack complex (MAC). The MAC can kill certain pathogens by disrupting their osmotic balance. The MAC is especially effective against a broad range of bacteria. The classical pathway is similar, except the early stages of activation require the presence of antibody bound to antigen, and thus is dependent on the adaptive immune response.
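One reason a cascade of this kind is useful is amplification: each activated enzyme can process many copies of the next protein in the series, so a tiny trigger yields a large number of effector fragments. The Python sketch below is an editorial illustration with invented numbers; it is not measured complement kinetics.

def cascade_output(initial_activations, amplification_per_step, steps):
    # Each activated protein enzymatically activates many copies of the
    # next protein in the series (illustrative numbers only).
    active = initial_activations
    for _ in range(steps):
        active *= amplification_per_step
    return active

# One spontaneously activated C3-like trigger, 100-fold gain per enzymatic
# step, four downstream steps -> 100 million activated end products.
print(cascade_output(1, 100, 4))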
The earlier fragments of the cascade also have important functions. Phagocytic cells such as macrophages and neutrophils are attracted to an infection site by chemotactic attraction to smaller complement fragments. Additionally, once they arrive, their receptors for surface-bound C3b opsonize the pathogen for phagocytosis and destruction. Inflammatory Response The hallmark of the innate immune response is inflammation. Inflammation is something everyone has experienced. Stub a toe, cut a finger, or do any activity that causes tissue damage, and inflammation will result, with its four characteristics: heat, redness, pain, and swelling (“loss of function” is sometimes mentioned as a fifth characteristic). It is important to note that inflammation does not have to be initiated by an infection, but can also be caused by tissue injuries. The release of damaged cellular contents into the site of injury is enough to stimulate the response, even in the absence of breaks in physical barriers that would allow pathogens to enter (by hitting your thumb with a hammer, for example). The inflammatory reaction brings in phagocytic cells to the damaged area to clear cellular debris and to set the stage for wound repair (Figure 21.14). This reaction also brings in the cells of the innate immune system, allowing them to get rid of the sources of a possible infection. Inflammation is part of a very basic form of immune response. The process not only brings fluid and cells into the site to destroy the pathogen and remove it and debris from the site, but also helps to isolate the site, limiting the spread of the pathogen. Acute inflammation is a short-term inflammatory response to an insult to the body. If the cause of the inflammation is not resolved, however, it can lead to chronic inflammation, which is associated with major tissue destruction and fibrosis. Chronic inflammation is ongoing inflammation. It can be caused by foreign bodies, persistent pathogens, and autoimmune diseases such as rheumatoid arthritis. There are four important parts to the inflammatory response: Tissue Injury. The released contents of injured cells stimulate the release of mast cell granules and their potent inflammatory mediators such as histamine, leukotrienes, and prostaglandins. Histamine increases the diameter of local blood vessels (vasodilation), causing an increase in blood flow. Histamine also increases the permeability of local capillaries, causing plasma to leak out and form interstitial fluid. This causes the swelling associated with inflammation. Additionally, injured cells, phagocytes, and basophils are sources of inflammatory mediators, including prostaglandins and leukotrienes. Leukotrienes attract neutrophils from the blood by chemotaxis and increase vascular permeability. Prostaglandins cause vasodilation by relaxing vascular smooth muscle and are a major cause of the pain associated with inflammation. Nonsteroidal anti-inflammatory drugs such as aspirin and ibuprofen relieve pain by inhibiting prostaglandin production. Vasodilation. Many inflammatory mediators such as histamine are vasodilators that increase the diameters of local capillaries. This causes increased blood flow and is responsible for the heat and redness of inflamed tissue. It allows greater access of the blood to the site of inflammation. Increased Vascular Permeability. At the same time, inflammatory mediators increase the permeability of the local vasculature, causing leakage of fluid into the interstitial space, resulting in the swelling, or edema, associated with inflammation. Recruitment of Phagocytes.
Leukotrienes are particularly good at attracting neutrophils from the blood to the site of infection by chemotaxis. Following an early neutrophil infiltrate stimulated by macrophage cytokines, more macrophages are recruited to clean up the debris left over at the site. When local infections are severe, neutrophils are attracted to the sites of infections in large numbers, and as they phagocytose the pathogens and subsequently die, their accumulated cellular remains are visible as pus at the infection site. Overall, inflammation is valuable for many reasons. Not only are the pathogens killed and debris removed, but the increase in vascular permeability encourages the entry of clotting factors, the first step towards wound repair. Inflammation also facilitates the transport of antigen to lymph nodes by dendritic cells for the development of the adaptive immune response. 21.3 The Adaptive Immune Response: T lymphocytes and Their Functional Types Learning Objectives By the end of this section, you will be able to: Explain the advantages of the adaptive immune response over the innate immune response List the various characteristics of an antigen Describe the types of T cell antigen receptors Outline the steps of T cell development Describe the major T cell types and their functions Innate immune responses (and early induced responses) are in many cases ineffective at completely controlling pathogen growth. However, they slow pathogen growth and allow time for the adaptive immune response to strengthen and either control or eliminate the pathogen. The innate immune system also sends signals to the cells of the adaptive immune system, guiding them in how to attack the pathogen. Thus, these are the two important arms of the immune response. The Benefits of the Adaptive Immune Response The specificity of the adaptive immune response—its ability to specifically recognize and make a response against a wide variety of pathogens—is its great strength. Antigens, the small chemical groups often associated with pathogens, are recognized by receptors on the surface of B and T lymphocytes. The adaptive immune response to these antigens is so versatile that it can respond to nearly any pathogen. This increase in specificity comes because the adaptive immune response has a unique way to develop as many as 10^11, or 100 billion, different receptors to recognize nearly every conceivable pathogen. How could so many different types of antibodies be encoded? And what about the many specificities of T cells? There is not nearly enough DNA in a cell to have a separate gene for each specificity. The mechanism was finally worked out in the 1970s and 1980s using the new tools of molecular genetics.
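One way to see how limited DNA can encode so many specificities is combinatorial assembly: if each receptor chain is built by choosing one segment from each of a few small pools of gene segments, the possibilities multiply. The segment counts in the Python sketch below are invented for illustration (they are not the actual human gene-segment counts), but the arithmetic shows the principle behind the recombination mechanism referred to above.

# Hypothetical gene-segment pool sizes for the two chains of a receptor.
V1, D1, J1 = 50, 25, 6      # segment choices for chain 1 (illustrative)
V2, J2 = 70, 60             # segment choices for chain 2 (illustrative)
JUNCTIONAL_VARIANTS = 3000  # extra diversity from imprecise joining (illustrative)

chain1_options = V1 * D1 * J1    # 7,500 possible chain-1 combinations
chain2_options = V2 * J2         # 4,200 possible chain-2 combinations
total_receptors = chain1_options * chain2_options * JUNCTIONAL_VARIANTS
print(f"{total_receptors:.1e}")  # ~9.4e+10, on the order of 10**11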
Primary Disease and Immunological Memory The immune system’s first exposure to a pathogen is called a primary adaptive response. Symptoms of a first infection, called primary disease, are always relatively severe because it takes time for an initial adaptive immune response to a pathogen to become effective. Upon re-exposure to the same pathogen, a secondary adaptive immune response is generated, which is stronger and faster than the primary response. The secondary adaptive response often eliminates a pathogen before it can cause significant tissue damage or any symptoms. Without symptoms, there is no disease, and the individual is not even aware of the infection. This secondary response is the basis of immunological memory, which protects us from getting diseases repeatedly from the same pathogen. By this mechanism, an individual’s exposure to pathogens early in life spares the person from these diseases later in life. Self Recognition A third important feature of the adaptive immune response is its ability to distinguish between self-antigens, those that are normally present in the body, and foreign antigens, those that might be on a potential pathogen. As T and B cells mature, there are mechanisms in place that prevent them from recognizing self-antigen, preventing a damaging immune response against the body. These mechanisms are not 100 percent effective, however, and their breakdown leads to autoimmune diseases, which will be discussed later in this chapter. T Cell-Mediated Immune Responses The primary cells that control the adaptive immune response are the lymphocytes, the T and B cells. T cells are particularly important, as they not only control a multitude of immune responses directly, but also control B cell immune responses in many cases as well. Thus, many of the decisions about how to attack a pathogen are made at the T cell level, and knowledge of their functional types is crucial to understanding the functioning and regulation of adaptive immune responses as a whole. T lymphocytes recognize antigens based on a two-chain protein receptor. The most common and important of these are the alpha-beta T cell receptors (Figure 21.15). There are two chains in the T cell receptor, and each chain consists of two domains. The variable region domain is furthest away from the T cell membrane and is so named because its amino acid sequence varies between receptors. In contrast, the constant region domain has less variation. The differences in the amino acid sequences of the variable domains are the molecular basis of the diversity of antigens the receptor can recognize. Thus, the antigen-binding site of the receptor consists of the terminal ends of both receptor chains, and the amino acid sequences of those two areas combine to determine its antigenic specificity. Each T cell produces only one type of receptor and thus is specific for a single particular antigen. Antigens Antigens on pathogens are usually large and complex, and consist of many antigenic determinants. An antigenic determinant (epitope) is one of the small regions within an antigen to which a receptor can bind, and antigenic determinants are limited by the size of the receptor itself. They usually consist of six or fewer amino acid residues in a protein, or one or two sugar moieties in a carbohydrate antigen. Antigenic determinants on a carbohydrate antigen are usually less diverse than on a protein antigen. Carbohydrate antigens are found on bacterial cell walls and on red blood cells (the ABO blood group antigens). Protein antigens are complex because of the variety of three-dimensional shapes that proteins can assume, and are especially important for the immune responses to viruses and worm parasites. It is the interaction of the shape of the antigen and the complementary shape of the amino acids of the antigen-binding site that accounts for the chemical basis of specificity (Figure 21.16).
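A back-of-the-envelope count shows why even small epitopes support enormous diversity. The Python sketch below is an editorial illustration: it counts only linear six-residue peptide sequences built from the 20 standard amino acids and ignores conformational epitopes and carbohydrate determinants.

AMINO_ACIDS = 20    # the 20 standard amino acids
EPITOPE_LENGTH = 6  # the chapter's "six or fewer amino acid residues"

possible_epitopes = AMINO_ACIDS ** EPITOPE_LENGTH
print(possible_epitopes)  # 64,000,000 distinct six-residue sequences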
Antigen Processing and Presentation
Although Figure 21.16 shows T cell receptors interacting with antigenic determinants directly, the mechanism that T cells use to recognize antigens is, in reality, much more complex. T cells do not recognize free-floating or cell-bound antigens as they appear on the surface of the pathogen. They only recognize antigen on the surface of specialized cells called antigen-presenting cells. Antigens are internalized by these cells. Antigen processing is a mechanism that enzymatically cleaves the antigen into smaller pieces. The antigen fragments are then brought to the cell’s surface and associated with a specialized type of antigen-presenting protein known as a major histocompatibility complex (MHC) molecule. The MHC is the cluster of genes that encode these antigen-presenting molecules. The association of the antigen fragments with an MHC molecule on the surface of a cell is known as antigen presentation and results in the recognition of antigen by a T cell. This association of antigen and MHC occurs inside the cell, and it is the complex of the two that is brought to the surface. The peptide-binding cleft is a small indentation at the end of the MHC molecule that is furthest away from the cell membrane; it is here that the processed fragment of antigen sits. MHC molecules are capable of presenting a variety of antigens, depending on the amino acid sequence, in their peptide-binding clefts. It is the combination of the MHC molecule and the fragment of the original peptide or carbohydrate that is actually physically recognized by the T cell receptor (Figure 21.17).

Two distinct types of MHC molecules, MHC class I and MHC class II, play roles in antigen presentation. Although produced from different genes, they both have similar functions. They bring processed antigen to the surface of the cell via a transport vesicle and present the antigen to the T cell and its receptor. Antigens from different classes of pathogens, however, use different MHC classes and take different routes through the cell to get to the surface for presentation. The basic mechanism, though, is the same. Antigens are processed by digestion, are brought into the endomembrane system of the cell, and then are expressed on the surface of the antigen-presenting cell for antigen recognition by a T cell. Intracellular antigens are typical of viruses, which replicate inside the cell, and certain other intracellular parasites and bacteria. These antigens are processed in the cytosol by an enzyme complex known as the proteasome and are then brought into the endoplasmic reticulum by the transporter associated with antigen processing (TAP) system, where they interact with class I MHC molecules and are eventually transported to the cell surface by a transport vesicle. Extracellular antigens, characteristic of many bacteria, parasites, and fungi that do not replicate inside the cell’s cytoplasm, are brought into the endomembrane system of the cell by receptor-mediated endocytosis. The resulting vesicle fuses with vesicles from the Golgi complex, which contain pre-formed MHC class II molecules. After fusion of these two vesicles and the association of antigen and MHC, the new vesicle makes its way to the cell surface.

Professional Antigen-presenting Cells
Many cell types express class I molecules for the presentation of intracellular antigens. These MHC molecules may then stimulate a cytotoxic T cell immune response, eventually destroying the cell and the pathogen within. This is especially important when it comes to the most common class of intracellular pathogens, the virus. Viruses infect nearly every tissue of the body, so all these tissues must necessarily be able to express class I MHC or no T cell response can be made. On the other hand, class II MHC molecules are expressed only on the cells of the immune system, specifically cells that affect other arms of the immune response.
Thus, these cells are called “professional” antigen-presenting cells to distinguish them from those that bear class I MHC. The three types of professional antigen presenters are macrophages, dendritic cells, and B cells (Table 21.4). Macrophages stimulate T cells to release cytokines that enhance phagocytosis. Dendritic cells also kill pathogens by phagocytosis (see Figure 21.17), but their major function is to bring antigens to regional draining lymph nodes. The lymph nodes are the locations in which most T cell responses against pathogens of the interstitial tissues are mounted. Dendritic cells are found in the skin and in the lining of mucosal surfaces, such as the nasopharynx, stomach, lungs, and intestines. B cells may also present antigens to T cells, which are necessary for certain types of antibody responses, to be covered later in this chapter.

Classes of Antigen-presenting Cells
MHC | Cell type | Phagocytic? | Function
Class I | Many | No | Stimulates cytotoxic T cell immune response
Class II | Macrophage | Yes | Stimulates phagocytosis and presentation at primary infection site
Class II | Dendritic | Yes, in tissues | Brings antigens to regional lymph nodes
Class II | B cell | Yes, internalizes surface Ig and antigen | Stimulates antibody secretion by B cells
Table 21.4

T Cell Development and Differentiation
The process of eliminating T cells that might attack the cells of one’s own body is referred to as T cell tolerance. While thymocytes are in the cortex of the thymus, they are referred to as “double negatives,” meaning that they do not bear the CD4 or CD8 molecules that you can use to follow their pathways of differentiation (Figure 21.18). In the cortex of the thymus, they are exposed to cortical epithelial cells. In a process known as positive selection, double-negative thymocytes bind to the MHC molecules displayed on the thymic epithelia, and only those whose receptors recognize the MHC molecules of “self” are selected to survive. This mechanism kills many thymocytes during T cell differentiation. In fact, only two percent of the thymocytes that enter the thymus leave it as mature, functional T cells. Later, the cells become double positives that express both CD4 and CD8 markers and move from the cortex to the junction between the cortex and medulla. It is here that negative selection takes place. In negative selection, self-antigens are brought into the thymus from other parts of the body by professional antigen-presenting cells. The T cells that bind to these self-antigens are selected for negatively and are killed by apoptosis. In summary, the only T cells left are those that can bind to MHC molecules of the body with foreign antigens presented on their binding clefts, preventing an attack on one’s own body tissues, at least under normal circumstances. Tolerance can be broken, however, by the development of an autoimmune response, to be discussed later in this chapter. The cells that leave the thymus become single positives, expressing either CD4 or CD8, but not both (see Figure 21.18). The CD4+ T cells will bind to class II MHC and the CD8+ cells will bind to class I MHC. The discussion that follows explains the functions of these molecules and how they can be used to differentiate between the different T cell functional types.
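The two selection steps can be pictured as successive filters. In this toy model, the survival fractions are assumptions chosen only so that the overall yield matches the roughly two percent figure quoted above.

```python
# A toy model of T cell tolerance: thymocytes pass positive selection
# (must bind self-MHC) and then negative selection (must NOT bind strongly
# to self-antigen). The survival fractions are illustrative assumptions.

entering = 1_000_000                 # thymocytes entering the thymus
positive_survival = 0.05             # assumed fraction that binds self-MHC
negative_survival = 0.40             # assumed fraction that is not self-reactive

after_positive = entering * positive_survival
mature = after_positive * negative_survival

print(f"mature T cells leaving the thymus: {mature:,.0f} "
      f"({mature / entering:.0%} of those entering)")   # ~2%
```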
Mechanisms of T Cell-mediated Immune Responses
Mature T cells become activated by recognizing processed foreign antigen in association with a self-MHC molecule and begin dividing rapidly by mitosis. This proliferation of T cells is called clonal expansion and is necessary to make the immune response strong enough to effectively control a pathogen. How does the body select only those T cells that are needed against a specific pathogen? Again, the specificity of a T cell is based on the amino acid sequence and the three-dimensional shape of the antigen-binding site formed by the variable regions of the two chains of the T cell receptor (Figure 21.19). Clonal selection is the process of antigen binding only to those T cells that have receptors specific to that antigen. Each T cell that is activated has a specific receptor “hard-wired” into its DNA, and all of its progeny will have identical DNA and T cell receptors, forming clones of the original T cell.

Clonal Selection and Expansion
The clonal selection theory was proposed by Frank Burnet in the 1950s. However, the term clonal selection is not a complete description of the theory, as clonal expansion goes hand in glove with the selection process. The main tenet of the theory is that a typical individual has a multitude (10¹¹) of different types of T cell clones based on their receptors. In this use, a clone is a group of lymphocytes that share the same antigen receptor. Each clone is necessarily present in the body in low numbers. Otherwise, the body would not have room for lymphocytes with so many specificities. Only those clones of lymphocytes whose receptors are activated by the antigen are stimulated to proliferate. Keep in mind that most antigens have multiple antigenic determinants, so a T cell response to a typical antigen involves a polyclonal response. A polyclonal response is the stimulation of multiple T cell clones. Once activated, the selected clones increase in number and make many copies of each cell type, each clone with its unique receptor. By the time this process is complete, the body will have large numbers of specific lymphocytes available to fight the infection (see Figure 21.19).

The Cellular Basis of Immunological Memory
As already discussed, one of the major features of an adaptive immune response is the development of immunological memory. During a primary adaptive immune response, both memory T cells and effector T cells are generated. Memory T cells are long-lived and can even persist for a lifetime. Memory cells are primed to act rapidly. Thus, any subsequent exposure to the pathogen will elicit a very rapid T cell response. This rapid, secondary adaptive response generates large numbers of effector T cells so fast that the pathogen is often overwhelmed before it can cause any symptoms of disease. This is what is meant by immunity to a disease. The same pattern of primary and secondary immune responses occurs in B cells and the antibody response, as will be discussed later in the chapter.

T Cell Types and their Functions
In the discussion of T cell development, you saw that mature T cells express either the CD4 marker or the CD8 marker, but not both. These markers are cell adhesion molecules that keep the T cell in close contact with the antigen-presenting cell by directly binding to the MHC molecule (to a different part of the molecule than does the antigen). Thus, T cells and antigen-presenting cells are held together in two ways: by CD4 or CD8 attaching to MHC and by the T cell receptor binding to antigen (Figure 21.20). Although the correlation is not 100 percent, CD4-bearing T cells are associated with helper functions and CD8-bearing T cells are associated with cytotoxicity.
These functional distinctions based on CD4 and CD8 markers are useful in defining the function of each type.

Helper T Cells and their Cytokines
Helper T cells (Th), bearing the CD4 molecule, function by secreting cytokines that act to enhance other immune responses. There are two classes of Th cells, and they act on different components of the immune response. These cells are not distinguished by their surface molecules but by the characteristic set of cytokines they secrete (Table 21.5). Th1 cells are a type of helper T cell that secretes cytokines that regulate the immunological activity and development of a variety of cells, including macrophages and other types of T cells. Th2 cells, on the other hand, are cytokine-secreting cells that act on B cells to drive their differentiation into plasma cells that make antibody. In fact, T cell help is required for antibody responses to most protein antigens, and these are called T cell-dependent antigens.

Cytotoxic T cells
Cytotoxic T cells (Tc) are T cells that kill target cells by inducing apoptosis using the same mechanism as NK cells. They either express Fas ligand, which binds to the fas molecule on the target cell, or act by using perforins and granzymes contained in their cytoplasmic granules. As was discussed earlier with NK cells, killing a virally infected cell before the virus can complete its replication cycle results in the production of no infectious particles. As more Tc cells are developed during an immune response, they overwhelm the ability of the virus to cause disease. In addition, each Tc cell can kill more than one target cell, making them especially effective. Tc cells are so important in the antiviral immune response that some speculate that this was the main reason the adaptive immune response evolved in the first place.

Regulatory T Cells
Regulatory T cells (Treg), or suppressor T cells, are the most recently discovered of the types listed here, so less is understood about them. In addition to CD4, they bear the molecules CD25 and FOXP3. Exactly how they function is still under investigation, but it is known that they suppress other T cell immune responses. This is an important feature of the immune response, because if clonal expansion during immune responses were allowed to continue uncontrolled, these responses could lead to autoimmune diseases and other medical issues.

Not only do T cells directly destroy pathogens, but they regulate nearly all other types of the adaptive immune response as well, as evidenced by the functions of the T cell types, their surface markers, the cells they work on, and the types of pathogens they work against (see Table 21.5).

Functions of T Cell Types and Their Cytokines
T cell | Main target | Function | Pathogen | Surface marker | MHC | Cytokines or mediators
Tc | Infected cells | Cytotoxicity | Intracellular | CD8 | Class I | Perforins, granzymes, and fas ligand
Th1 | Macrophage | Helper inducer | Extracellular | CD4 | Class II | Interferon-γ and TGF-β
Th2 | B cell | Helper inducer | Extracellular | CD4 | Class II | IL-4, IL-6, IL-10, and others
Treg | Th cell | Suppressor | None | CD4, CD25 | ? | TGF-β and IL-10
Table 21.5
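Before turning to B cells, the clonal selection and expansion described in this section can be condensed into a short sketch. Receptors and epitopes are modeled as short strings, and every name and number here is an illustrative assumption.

```python
# A minimal sketch of clonal selection followed by clonal expansion.
# A clone is "selected" if its receptor matches one of the antigen's
# determinants; selected clones then divide repeatedly by mitosis.

import random

random.seed(1)
ALPHABET = "ABCDE"

# A repertoire of clones, each present in low numbers, each with one receptor.
repertoire = {"".join(random.choices(ALPHABET, k=3)): 10 for _ in range(500)}

# A typical antigen carries several antigenic determinants (epitopes),
# so the response is polyclonal.
antigen_determinants = set(random.choices(list(repertoire), k=4))

# Clonal selection: only matching clones are activated...
selected = {r: n for r, n in repertoire.items() if r in antigen_determinants}

# ...then clonal expansion: assume one division every 12 hours for 7 days
# (14 doublings), an illustrative rate.
expanded = {r: n * 2**14 for r, n in selected.items()}

for receptor, count in expanded.items():
    print(f"clone {receptor}: 10 cells -> {count:,} cells")
```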
21.4 The Adaptive Immune Response: B-lymphocytes and Antibodies

Learning Objectives
By the end of this section, you will be able to:
- Explain how B cells mature and how B cell tolerance develops
- Discuss how B cells are activated and differentiate into plasma cells
- Describe the structure of the antibody classes and their functions

Antibodies were the first component of the adaptive immune response to be characterized by scientists working on the immune system. It was already known that individuals who survived a bacterial infection were immune to re-infection with the same pathogen. Early microbiologists took serum from an immune patient and mixed it with a fresh culture of the same type of bacteria, then observed the bacteria under a microscope. The bacteria became clumped in a process called agglutination. When a different bacterial species was used, the agglutination did not happen. Thus, there was something in the serum of immune individuals that could specifically bind to and agglutinate bacteria.

Scientists now know the cause of the agglutination is an antibody molecule, also called an immunoglobulin. What is an antibody? An antibody protein is essentially a secreted form of a B cell receptor. (In fact, surface immunoglobulin is another name for the B cell receptor.) Not surprisingly, the same genes encode both the secreted antibodies and the surface immunoglobulins. One minor difference in the way these proteins are synthesized distinguishes a naïve B cell with antibody on its surface from an antibody-secreting plasma cell with no antibodies on its surface. The antibodies of the plasma cell have the exact same antigen-binding site and specificity as their B cell precursors. There are five different classes of antibody found in humans: IgM, IgD, IgG, IgA, and IgE. Each of these has specific functions in the immune response, so by learning about them, researchers can learn about the great variety of antibody functions critical to many adaptive immune responses. B cells do not recognize antigen in the complex fashion of T cells. B cells can recognize native, unprocessed antigen and do not require the participation of MHC molecules and antigen-presenting cells.

B Cell Differentiation and Activation
B cells differentiate in the bone marrow. During the process of maturation, up to 10¹¹ (100 billion) different clones of B cells are generated, which is similar to the diversity of antigen receptors seen in T cells. B cell differentiation and the development of tolerance are not quite as well understood as they are in T cells. Central tolerance is the destruction or inactivation of B cells that recognize self-antigens in the bone marrow, and its role is critical and well established. In the process of clonal deletion, immature B cells that bind strongly to self-antigens expressed on tissues are signaled to commit suicide by apoptosis, removing them from the population. In the process of clonal anergy, however, B cells exposed to soluble antigen in the bone marrow are not physically deleted, but become unable to function. Another mechanism called peripheral tolerance is a direct result of T cell tolerance. In peripheral tolerance, functional, mature B cells leave the bone marrow but have yet to be exposed to self-antigen. Most protein antigens require signals from helper T cells (Th2) to proceed to make antibody.
When a B cell binds to a self-antigen but receives no signals from a nearby Th2 cell to produce antibody, the cell is signaled to undergo apoptosis and is destroyed. This is yet another example of the control that T cells have over the adaptive immune response.

After B cells are activated by their binding to antigen, they differentiate into plasma cells. Plasma cells often leave the secondary lymphoid organs, where the response is generated, and migrate back to the bone marrow, where the whole differentiation process started. After secreting antibodies for a specific period, they die, as most of their energy is devoted to making antibodies and not to maintaining themselves. Thus, plasma cells are said to be terminally differentiated.

The final B cell of interest is the memory B cell, which results from the clonal expansion of an activated B cell. Memory B cells function in a way similar to memory T cells. They lead to a stronger and faster secondary response when compared to the primary response, as illustrated below.

Antibody Structure
Antibodies are glycoproteins consisting of two types of polypeptide chains with attached carbohydrates. The heavy chain and the light chain are the two polypeptides that form the antibody. The main differences between the classes of antibodies are in the differences between their heavy chains, but as you shall see, the light chains have an important role, forming part of the antigen-binding site on the antibody molecules.

Four-chain Models of Antibody Structures
All antibody molecules have two identical heavy chains and two identical light chains. (Some antibodies contain multiple units of this four-chain structure.) The Fc region of the antibody is formed by the two heavy chains coming together, usually linked by disulfide bonds (Figure 21.21). The Fc portion of the antibody is important in that many effector cells of the immune system have Fc receptors. Cells having these receptors can then bind to antibody-coated pathogens, greatly increasing the specificity of the effector cells. At the other end of the molecule are two identical antigen-binding sites.

Five Classes of Antibodies and their Functions
In general, antibodies have two basic functions. They can act as the B cell antigen receptor or they can be secreted, circulate, and bind to a pathogen, often labeling it for identification by other forms of the immune response. Of the five antibody classes, notice that only two can function as the antigen receptor for naïve B cells: IgM and IgD (Figure 21.22). Mature B cells that leave the bone marrow express both IgM and IgD, but both antibodies have the same antigen specificity. Only IgM is secreted, however, and no other nonreceptor function for IgD has been discovered.

IgM consists of five four-chain structures (20 total chains with 10 identical antigen-binding sites) and is thus the largest of the antibody molecules. IgM is usually the first antibody made during a primary response. Its 10 antigen-binding sites and large shape allow it to bind well to many bacterial surfaces. It is excellent at binding complement proteins and activating the complement cascade, consistent with its role in promoting chemotaxis, opsonization, and cell lysis. Thus, it is a very effective antibody against bacteria at early stages of a primary antibody response. As the primary response proceeds, the antibody produced in a B cell can change to IgG, IgA, or IgE by the process known as class switching. Class switching is the change of one antibody class to another.
While the class of antibody changes, the specificity and the antigen-binding sites do not. Thus, the antibodies made are still specific to the pathogen that stimulated the initial IgM response.

IgG is a major antibody of late primary responses and the main antibody of secondary responses in the blood. This is because class switching occurs during primary responses. IgG is a monomeric antibody that clears pathogens from the blood and can activate complement proteins (although not as well as IgM), taking advantage of its antibacterial activities. Furthermore, this class of antibody is the one that crosses the placenta to protect the developing fetus from disease and exits the blood to the interstitial fluid to fight extracellular pathogens.

IgA exists in two forms, a four-chain monomer in the blood and an eight-chain structure, or dimer, in exocrine gland secretions of the mucous membranes, including mucus, saliva, and tears. Thus, dimeric IgA is the only antibody to leave the interior of the body to protect body surfaces. IgA is also of importance to newborns, because this antibody is present in mother’s breast milk (colostrum), which serves to protect the infant from disease.

IgE is usually associated with allergies and anaphylaxis. It is present in the lowest concentration in the blood, because its Fc region binds strongly to an IgE-specific Fc receptor on the surfaces of mast cells. IgE makes mast cell degranulation very specific, such that if a person is allergic to peanuts, there will be peanut-specific IgE bound to his or her mast cells. In this person, eating peanuts will cause the mast cells to degranulate, sometimes causing severe allergic reactions, including anaphylaxis, a severe, systemic allergic response that can cause death.

Clonal Selection of B Cells
Clonal selection and expansion work much the same way in B cells as in T cells. Only B cells with appropriate antigen specificity are selected for and expanded (Figure 21.23). Eventually, the plasma cells secrete antibodies with antigenic specificity identical to those that were on the surfaces of the selected B cells. Notice in the figure that both plasma cells and memory B cells are generated simultaneously.

Primary versus Secondary B Cell Responses
Primary and secondary responses as they relate to T cells were discussed earlier. This section will look at these responses with B cells and antibody production. Because antibodies are easily obtained from blood samples, they are easy to follow and graph (Figure 21.24). As you will see from the figure, the primary response to an antigen (representing a pathogen) is delayed by several days. This is the time it takes for the B cell clones to expand and differentiate into plasma cells. The level of antibody produced is low, but it is sufficient for immune protection. The second time a person encounters the same antigen, there is no time delay, and the amount of antibody made is much higher. Thus, the secondary antibody response overwhelms the pathogens quickly and, in most situations, no symptoms are felt. When a different antigen is used, another primary response is made with its low antibody levels and time delay.
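The contrast just described, a delayed and low primary response versus a prompt and high secondary one, can be sketched numerically in the spirit of Figure 21.24. The lag times, rise rates, and peak levels are arbitrary illustrative assumptions, not measured values.

```python
# Schematic primary vs. secondary antibody responses (arbitrary units).

import math

def antibody_level(day, lag, rate, peak):
    """Logistic rise after a lag period."""
    if day < lag:
        return 0.0
    return peak / (1 + math.exp(-rate * (day - lag - 3)))

for day in range(0, 15, 2):
    primary = antibody_level(day, lag=5, rate=0.8, peak=10)     # slow, low
    secondary = antibody_level(day, lag=1, rate=1.5, peak=100)  # fast, high
    print(f"day {day:2d}: primary {primary:6.1f}  secondary {secondary:6.1f}")
```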
Active versus Passive Immunity
Immunity to pathogens, and the ability to control pathogen growth so that damage to the tissues of the body is limited, can be acquired by (1) the active development of an immune response in the infected individual or (2) the passive transfer of immune components from an immune individual to a nonimmune one. Both active and passive immunity have examples in the natural world and as part of medicine.

Active immunity is the resistance to pathogens acquired during an adaptive immune response within an individual (Table 21.6). Naturally acquired active immunity, the response to a pathogen, is the focus of this chapter. Artificially acquired active immunity involves the use of vaccines. A vaccine is a killed or weakened pathogen or its components that, when administered to a healthy individual, leads to the development of immunological memory (a weakened primary immune response) without causing much in the way of symptoms. Thus, with the use of vaccines, one can avoid the damage from disease that results from the first exposure to the pathogen, yet reap the benefits of protection from immunological memory. The advent of vaccines was one of the major medical advances of the twentieth century and led to the eradication of smallpox and the control of many infectious diseases, including polio, measles, and whooping cough.

Active versus Passive Immunity
 | Natural | Artificial
Active | Adaptive immune response | Vaccine response
Passive | Trans-placental antibodies/breastfeeding | Immune globulin injections
Table 21.6

Passive immunity arises from the transfer of antibodies to an individual without requiring them to mount their own active immune response. Naturally acquired passive immunity is seen during fetal development. IgG is transferred from the maternal circulation to the fetus via the placenta, protecting the fetus from infection and protecting the newborn for the first few months of its life. As already stated, a newborn benefits from the IgA antibodies it obtains from milk during breastfeeding. The fetus and newborn thus benefit from the immunological memory of the mother to the pathogens to which she has been exposed. In medicine, artificially acquired passive immunity usually involves injections of immunoglobulins, taken from animals previously exposed to a specific pathogen. This treatment is a fast-acting method of temporarily protecting an individual who was possibly exposed to a pathogen. The downside to both types of passive immunity is the lack of the development of immunological memory. Once the antibodies are transferred, they are effective for only a limited time before they degrade.

Interactive Link
Immunity can be acquired in an active or passive way, and it can be natural or artificial. Watch this video to see an animated discussion of passive and active immunity. What is an example of natural immunity acquired passively?
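Picking up the point that transferred antibodies protect only until they degrade: a back-of-the-envelope decay calculation shows why passive immunity lasts weeks to months. The 21-day IgG half-life is a commonly cited approximation, and the protective threshold is an arbitrary assumption.

```python
# Why passive immunity is temporary: transferred antibodies decay with no
# memory cells to replace them. A serum IgG half-life of ~21 days is assumed.

half_life_days = 21
starting_level = 100.0          # arbitrary units at the time of transfer
protective_threshold = 10.0     # assumed level needed for protection

level, day = starting_level, 0
while level >= protective_threshold:
    day += 7
    level = starting_level * 0.5 ** (day / half_life_days)

print(f"drops below the assumed protective threshold after ~{day} days")
# ~70 days: weeks to a few months, with no immunological memory afterward
```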
T cell-dependent versus T cell-independent Antigens
As discussed previously, Th2 cells secrete cytokines that drive the production of antibodies in a B cell, responding to complex antigens such as those made by proteins. On the other hand, some antigens are T cell independent. A T cell-independent antigen usually is in the form of repeated carbohydrate moieties found on the cell walls of bacteria. Each antibody on the B cell surface has two binding sites, and the repeated nature of T cell-independent antigen leads to crosslinking of the surface antibodies on the B cell. The crosslinking is enough to activate it in the absence of T cell cytokines. A T cell-dependent antigen, on the other hand, usually is not repeated to the same degree on the pathogen and thus does not crosslink surface antibody with the same efficiency. To elicit a response to such antigens, the B and T cells must come close together (Figure 21.25). The B cell must receive two signals to become activated. Its surface immunoglobulin must recognize native antigen. Some of this antigen is internalized, processed, and presented to the Th2 cells on a class II MHC molecule. The T cell then binds using its antigen receptor and is activated to secrete cytokines that diffuse to the B cell, finally activating it completely. Thus, the B cell receives signals from both its surface antibody and the T cell via its cytokines, and acts as a professional antigen-presenting cell in the process.

21.5 The Immune Response against Pathogens

Learning Objectives
By the end of this section, you will be able to:
- Explain the development of immunological competence
- Describe the mucosal immune response
- Discuss immune responses against bacterial, viral, fungal, and animal pathogens
- Describe different ways pathogens evade immune responses

Now that you understand the development of mature, naïve B cells and T cells, and some of their major functions, how do all of these various cells, proteins, and cytokines come together to actually resolve an infection? Ideally, the immune response will rid the body of a pathogen entirely. The adaptive immune response, with its rapid clonal expansion, is well suited to this purpose. Think of a primary infection as a race between the pathogen and the immune system. The pathogen bypasses barrier defenses and starts multiplying in the host’s body. During the first 4 to 5 days, the innate immune response will partially control, but not stop, pathogen growth. As the adaptive immune response gears up, however, it will begin to clear the pathogen from the body, while at the same time becoming stronger and stronger.

When following antibody responses in patients with a particular disease such as a virus, this clearance is referred to as seroconversion (sero- = “serum”). Seroconversion is the reciprocal relationship between virus levels in the blood and antibody levels. As the antibody levels rise, the virus levels decline, and this is a sign that the immune response is being at least partially effective (partially, because in many diseases, seroconversion does not necessarily mean a patient is getting well). An excellent example of this is seroconversion during HIV disease (Figure 21.26). Notice that antibodies are made early in this disease, and the increase in anti-HIV antibodies correlates with a decrease in detectable virus in the blood. Although these antibodies are an important marker for diagnosing the disease, they are not sufficient to completely clear the virus. Several years later, the vast majority of these individuals, if untreated, will lose their entire adaptive immune response, including the ability to make antibodies, during the final stages of AIDS.
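A toy model captures the reciprocal relationship that defines seroconversion: antibody output is driven by antigen load, and antibody in turn clears virus. The rate constants are arbitrary assumptions; only the crossing pattern of the two curves matters.

```python
# Toy seroconversion dynamics: virus grows, drives antibody production,
# and antibody clears virus. Units and rate constants are arbitrary.

virus, antibody = 100.0, 0.0
print(f"day  0: virus {virus:7.2f}  antibody {antibody:7.2f}")
for day in range(1, 8):
    antibody += 0.3 * virus                          # production scales with antigen load
    virus = max(virus * 1.5 - 0.9 * antibody, 0.0)   # growth minus antibody clearance
    print(f"day {day:2d}: virus {virus:7.2f}  antibody {antibody:7.2f}")
# Virus rises at first, then collapses as antibody levels climb:
# the reciprocal pattern seen in Figure 21.26.
```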
Everyday Connection
Disinfectants: Fighting the Good Fight?
“Wash your hands!” Parents have been telling their children this for generations. Dirty hands can spread disease. But is it possible to get rid of enough pathogens that children will never get sick? Are children who avoid exposure to pathogens better off? The answers to both of these questions appear to be no.

Antibacterial wipes, soaps, gels, and even toys with antibacterial substances embedded in their plastic are ubiquitous in our society. Still, these products do not rid the skin and gastrointestinal tract of bacteria, and it would be harmful to our health if they did. We need these nonpathogenic bacteria on and within our bodies to keep the pathogenic ones from growing. The urge to keep children perfectly clean is thus probably misguided. Children will get sick anyway, and the later benefits of immunological memory far outweigh the minor discomforts of most childhood diseases. In fact, getting diseases such as chickenpox or measles later in life is much harder on the adult and is associated with symptoms significantly worse than those seen in the childhood illnesses. Of course, vaccinations help children avoid some illnesses, but there are so many pathogens, we will never be immune to them all.

Could over-cleanliness be the reason that allergies are increasing in more developed countries? Some scientists think so. Allergies are based on an IgE antibody response. Many scientists think the system evolved to help the body rid itself of worm parasites. The hygiene hypothesis is the idea that the immune system is geared to respond to antigens, and if pathogens are not present, it will respond instead to inappropriate antigens such as allergens and self-antigens. This is one explanation for the rising incidence of allergies in developed countries, where the response to nonpathogens like pollen, shrimp, and cat dander causes allergic responses while not serving any protective function.

The Mucosal Immune Response
Mucosal tissues are major barriers to the entry of pathogens into the body. The IgA (and sometimes IgM) antibodies in mucus and other secretions can bind to the pathogen, and in the cases of many viruses and bacteria, neutralize them. Neutralization is the process of coating a pathogen with antibodies, making it physically impossible for the pathogen to bind to receptors. Neutralization, which occurs in the blood, lymph, and other body fluids and secretions, protects the body constantly. Neutralizing antibodies are the basis for the disease protection offered by vaccines. Vaccinations for diseases that commonly enter the body via mucous membranes, such as influenza, are usually formulated to enhance IgA production.

Immune responses in some mucosal tissues such as the Peyer’s patches (see Figure 21.11) in the small intestine take up particulate antigens by specialized cells known as microfold or M cells (Figure 21.27). These cells allow the body to sample potential pathogens from the intestinal lumen. Dendritic cells then take the antigen to the regional lymph nodes, where an immune response is mounted.

Defenses against Bacteria and Fungi
The body fights bacterial pathogens with a wide variety of immunological mechanisms, essentially trying to find one that is effective. Bacteria such as Mycobacterium leprae, the cause of leprosy, are resistant to lysosomal enzymes and can persist in macrophage organelles or escape into the cytosol. In such situations, infected macrophages receiving cytokine signals from Th1 cells turn on special metabolic pathways. Macrophage oxidative metabolism is hostile to intracellular bacteria, often relying on the production of nitric oxide to kill the bacteria inside the macrophage. Fungal infections, such as those from Aspergillus, Candida, and Pneumocystis, are largely opportunistic infections that take advantage of suppressed immune responses. Most of the same immune mechanisms effective against bacteria have similar effects on fungi, both of which have characteristic cell wall structures that protect their cells.

Defenses against Parasites
Worm parasites such as helminths are seen as the primary reason why the mucosal immune response, IgE-mediated allergy and asthma, and eosinophils evolved.
These parasites were at one time very common in human society. When infecting a human, often via contaminated food, some worms take up residence in the gastrointestinal tract. Eosinophils, attracted to the site by T cell cytokines, release their granule contents upon arrival. Mast cell degranulation also occurs, and the fluid leakage caused by the increase in local vascular permeability is thought to have a flushing action on the parasite, expelling its larvae from the body. Furthermore, if IgE labels the parasite, the eosinophils can bind to it via their Fc receptors.

Defenses against Viruses
The primary mechanisms against viruses are NK cells, interferons, and cytotoxic T cells. Antibodies are effective against viruses mostly during protection, where an immune individual can neutralize them based on a previous exposure. Antibodies have no effect on viruses or other intracellular pathogens once they enter the cell, since antibodies are not able to penetrate the plasma membrane of the cell. Many cells respond to viral infections by downregulating their expression of MHC class I molecules. This is to the advantage of the virus, because without class I expression, cytotoxic T cells have no activity. NK cells, however, can recognize virally infected class I-negative cells and destroy them. Thus, NK and cytotoxic T cells have complementary activities against virally infected cells. Interferons have activity in slowing viral replication and are used in the treatment of certain viral diseases, such as hepatitis B and C, but their ability to eliminate the virus completely is limited. The cytotoxic T cell response, though, is key, as it eventually overwhelms the virus and kills infected cells before the virus can complete its replicative cycle. Clonal expansion and the ability of cytotoxic T cells to kill more than one target cell make these cells especially effective against viruses. In fact, without cytotoxic T cells, it is likely that humans would all die at some point from a viral infection (if no vaccine were available).
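The complementary division of labor between cytotoxic T cells and NK cells described above can be summarized as a small decision function. This is a deliberately simplified sketch of the "missing self" logic, not a complete account of either cell type.

```python
# Simplified "missing self" logic: cytotoxic T cells (CTLs) need class I MHC
# to see viral antigen, while NK cells attack cells that stop displaying
# class I. Together the two cover both cases. Purely illustrative.

def fate(infected: bool, class_i_present: bool) -> str:
    if not class_i_present:
        return "killed by NK cell (missing class I betrays the cell)"
    if infected:
        return "killed by cytotoxic T cell (viral antigen shown on class I)"
    return "spared (healthy and displaying class I)"

for infected in (True, False):
    for class_i in (True, False):
        print(f"infected={infected!s:5} class I={class_i!s:5} -> "
              f"{fate(infected, class_i)}")
```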
Evasion of the Immune System by Pathogens
It is important to keep in mind that although the immune system has evolved to be able to control many pathogens, pathogens themselves have evolved ways to evade the immune response. An example already mentioned is in Mycobacterium tuberculosis, which has evolved a complex cell wall that is resistant to the digestive enzymes of the macrophages that ingest them, and thus persists in the host, causing the chronic disease tuberculosis. This section briefly summarizes other ways in which pathogens can “outwit” immune responses. But keep in mind, although it seems as if pathogens have a will of their own, they do not. All of these evasive “strategies” arose strictly by evolution, driven by selection.

Bacteria sometimes evade immune responses because they exist in multiple strains, such as different groups of Staphylococcus aureus. S. aureus is commonly found in minor skin infections, such as boils, and some healthy people harbor it in their nose. One small group of strains of this bacterium, however, called methicillin-resistant Staphylococcus aureus, has become resistant to multiple antibiotics and is essentially untreatable. Different bacterial strains differ in the antigens on their surfaces. The immune response against one strain (antigen) does not affect the other; thus, the species survives.

Another method of immune evasion is mutation. Because viruses’ surface molecules mutate continuously, viruses like influenza change enough each year that the flu vaccine for one year may not protect against the flu common to the next. New vaccine formulations must be derived for each flu season. Genetic recombination—the combining of gene segments from two different pathogens—is an efficient form of immune evasion. For example, the influenza virus contains gene segments that can recombine when two different viruses infect the same cell. Recombination between human and pig influenza viruses led to the 2009 H1N1 swine flu outbreak.

Pathogens can produce immunosuppressive molecules that impair immune function, and there are several different types. Viruses are especially good at evading the immune response in this way, and many types of viruses have been shown to suppress the host immune response in ways much more subtle than the wholesale destruction caused by HIV.

21.6 Diseases Associated with Depressed or Overactive Immune Responses

Learning Objectives
By the end of this section, you will be able to:
- Discuss inherited and acquired immunodeficiencies
- Explain the four types of hypersensitivity and how they differ
- Give an example of how autoimmune disease breaks tolerance

This section is about how the immune system goes wrong. When it goes haywire, and becomes too weak or too strong, it leads to a state of disease. The factors that maintain immunological homeostasis are complex and incompletely understood.

Immunodeficiencies
As you have seen, the immune system is quite complex. It has many pathways using many cell types and signals. Because it is so complex, there are many ways for it to go wrong. Inherited immunodeficiencies arise from gene mutations that affect specific components of the immune response. There are also acquired immunodeficiencies with potentially devastating effects on the immune system, such as HIV.

Inherited Immunodeficiencies
A list of all inherited immunodeficiencies is well beyond the scope of this book. The list is almost as long as the list of cells, proteins, and signaling molecules of the immune system itself. Some deficiencies, such as those for complement, cause only a higher susceptibility to some Gram-negative bacteria. Others are more severe in their consequences. Certainly, the most serious of the inherited immunodeficiencies is severe combined immunodeficiency disease (SCID). This disease is complex because it is caused by many different genetic defects. What groups them together is the fact that both the B cell and T cell arms of the adaptive immune response are affected. Children with this disease usually die of opportunistic infections within their first year of life unless they receive a bone marrow transplant. Such a procedure had not yet been perfected for David Vetter, the “boy in the bubble,” who was treated for SCID by having to live almost his entire life in a sterile plastic cocoon for the 12 years before his death from infection in 1984. One of the features that make bone marrow transplants work as well as they do is the proliferative capability of hematopoietic stem cells of the bone marrow. Only a small amount of bone marrow from a healthy donor is given intravenously to the recipient. It finds its own way to the bone where it populates it, eventually reconstituting the patient’s immune system, which is usually destroyed beforehand by treatment with radiation or chemotherapeutic drugs.
New treatments for SCID using gene therapy, inserting nondefective genes into cells taken from the patient and giving them back, have the advantage of not needing the tissue match required for standard transplants. Although not a standard treatment, this approach holds promise, especially for those in whom standard bone marrow transplantation has failed.

Human Immunodeficiency Virus/AIDS
Although many viruses cause suppression of the immune system, only one wipes it out completely, and that is the previously mentioned HIV. It is worth discussing the biology of this virus, which can lead to the well-known AIDS, so that its full effects on the immune system can be understood. The virus is transmitted through semen, vaginal fluids, and blood, and can be caught by risky sexual behaviors and the sharing of needles by intravenous drug users. There are sometimes, but not always, flu-like symptoms in the first 1 to 2 weeks after infection. This is later followed by seroconversion. The anti-HIV antibodies formed during seroconversion are the basis for most initial HIV screening done in the United States. Because seroconversion takes different lengths of time in different individuals, multiple HIV tests are given months apart to confirm or eliminate the possibility of infection. After seroconversion, the amount of virus circulating in the blood drops and stays at a low level for several years. During this time, the levels of CD4+ cells, especially helper T cells, decline steadily, until at some point, the immune response is so weak that opportunistic disease and eventually death result.

HIV uses CD4 as the receptor to get inside cells, but it also needs a co-receptor, such as CCR5 or CXCR4. These co-receptors, which usually bind to chemokines, present another target for anti-HIV drug development. Although other antigen-presenting cells are infected with HIV, given that CD4+ helper T cells play an important role in T cell immune responses and antibody responses, it should be no surprise that both types of immune responses are eventually seriously compromised. Treatment for the disease consists of drugs that target virally encoded proteins that are necessary for viral replication but are absent from normal human cells. By targeting the virus itself and sparing the cells, this approach has been successful in significantly prolonging the lives of HIV-positive individuals. On the other hand, an HIV vaccine has been 30 years in development and is still years away. Because the virus mutates rapidly to evade the immune system, scientists have been looking for parts of the virus that do not change and thus would be good targets for a vaccine candidate.

Hypersensitivities
The word “hypersensitivity” simply means sensitive beyond normal levels of activation. Allergies and inflammatory responses to nonpathogenic environmental substances have been observed since the dawn of history. Hypersensitivity is a medical term describing symptoms that are now known to be caused by unrelated mechanisms of immunity. Still, it is useful for this discussion to use the four types of hypersensitivities as a guide to understand these mechanisms (Figure 21.28).

Immediate (Type I) Hypersensitivity
Antigens that cause allergic responses are often referred to as allergens. The specificity of the immediate hypersensitivity response is predicated on the binding of allergen-specific IgE to the mast cell surface.
The process of producing allergen-specific IgE is called sensitization, and is a necessary prerequisite for the symptoms of immediate hypersensitivity to occur. Allergies and allergic asthma are mediated by mast cell degranulation that is caused by the crosslinking of the antigen-specific IgE molecules on the mast cell surface. The mediators released have various vasoactive effects already discussed, but the major symptoms of inhaled allergens are the nasal edema and runny nose caused by the increased vascular permeability and increased blood flow of nasal blood vessels. As these mediators are released with mast cell degranulation, type I hypersensitivity reactions are usually rapid and occur within just a few minutes, hence the term immediate hypersensitivity.

Most allergens are in themselves nonpathogenic and therefore innocuous. Some individuals develop mild allergies, which are usually treated with antihistamines. Others develop severe allergies that may cause anaphylactic shock, which can potentially be fatal within 20 to 30 minutes if untreated. This drop in blood pressure (shock) with accompanying contractions of bronchial smooth muscle is caused by systemic mast cell degranulation when an allergen is eaten (for example, shellfish and peanuts), injected (by a bee sting or being administered penicillin), or inhaled (asthma). Because epinephrine raises blood pressure and relaxes bronchial smooth muscle, it is routinely used to counteract the effects of anaphylaxis and can be lifesaving. Patients with known severe allergies are encouraged to keep automatic epinephrine injectors with them at all times, especially when away from easy access to hospitals.

Allergists use skin testing to identify allergens in type I hypersensitivity. In skin testing, allergen extracts are injected into the epidermis, and a positive result of a soft, pale swelling at the site surrounded by a red zone (called the wheal and flare response), caused by the release of histamine and the granule mediators, usually occurs within 30 minutes. The soft center is due to fluid leaking from the blood vessels and the redness is caused by the increased blood flow to the area that results from the dilation of local blood vessels at the site.

Type II and Type III Hypersensitivities
Type II hypersensitivity, which involves IgG-mediated lysis of cells by complement proteins, occurs during mismatched blood transfusions and blood compatibility diseases such as erythroblastosis fetalis (see section on transplantation). Type III hypersensitivity occurs with diseases such as systemic lupus erythematosus, where soluble antigens, mostly DNA and other material from the nucleus, and antibodies accumulate in the blood to the point that the antigen and antibody precipitate along blood vessel linings. These immune complexes often lodge in the kidneys, joints, and other organs where they can activate complement proteins and cause inflammation.

Delayed (Type IV) Hypersensitivity
Delayed hypersensitivity, or type IV hypersensitivity, is basically a standard cellular immune response. In delayed hypersensitivity, the first exposure to an antigen is called sensitization, such that on re-exposure, a secondary cellular response results, secreting cytokines that recruit macrophages and other phagocytes to the site. These sensitized T cells, of the Th1 class, will also activate cytotoxic T cells. The time it takes for this reaction to occur accounts for the 24- to 72-hour delay in development.
The classical test for delayed hypersensitivity is the tuberculin test for tuberculosis, where bacterial proteins from M. tuberculosis are injected into the skin. A couple of days later, a positive test is indicated by a raised red area that is hard to the touch, called an induration, which is a consequence of the cellular infiltrate, an accumulation of activated macrophages. A positive tuberculin test means that the patient has been exposed to the bacteria and exhibits a cellular immune response to it. Another type of delayed hypersensitivity is contact sensitivity, where substances such as the metal nickel cause a red and swollen area upon contact with the skin. The individual must have been previously sensitized to the metal. A much more severe case of contact sensitivity is poison ivy, but many of the harshest symptoms of the reaction are associated with the toxicity of its oils and are not T cell mediated.

Autoimmune Responses
The worst cases of the immune system over-reacting are autoimmune diseases. Somehow, tolerance breaks down and the immune systems in individuals with these diseases begin to attack their own bodies, causing significant damage. The trigger for these diseases is, more often than not, unknown, and the treatments are usually based on resolving the symptoms using immunosuppressive and anti-inflammatory drugs such as steroids. These diseases can be localized and crippling, as in rheumatoid arthritis, or diffuse in the body with multiple symptoms that differ in different individuals, as is the case with systemic lupus erythematosus (Figure 21.29).

Environmental triggers seem to play large roles in autoimmune responses. One explanation for the breakdown of tolerance is that, after certain bacterial infections, an immune response to a component of the bacterium cross-reacts with a self-antigen. This mechanism is seen in rheumatic fever, a result of infection with Streptococcus bacteria, which causes strep throat. The antibodies to this pathogen’s M protein cross-react with an antigenic component of heart myosin, a major contractile protein of the heart that is critical to its normal function. The antibody binds to these molecules and activates complement proteins, causing damage to the heart, especially to the heart valves. On the other hand, some theories propose that having multiple common infectious diseases actually prevents autoimmune responses. The fact that autoimmune diseases are rare in countries that have a high incidence of infectious diseases supports this idea, another example of the hygiene hypothesis discussed earlier in this chapter.

There are genetic factors in autoimmune diseases as well. Some diseases are associated with the MHC genes that an individual expresses. The reason for this association is likely because if one’s MHC molecules are not able to present a certain self-antigen, then that particular autoimmune disease cannot occur. Overall, there are more than 80 different autoimmune diseases, which are a significant health problem in the elderly. Table 21.7 lists several of the most common autoimmune diseases, the antigens that are targeted, and the segment of the adaptive immune response that causes the damage.
Autoimmune Diseases
Disease | Autoantigen | Symptoms
Celiac disease | Tissue transglutaminase | Damage to small intestine
Diabetes mellitus type I | Beta cells of pancreas | Low insulin production; inability to regulate serum glucose
Graves’ disease | Thyroid-stimulating hormone receptor (antibody mimics hormone and stimulates receptor) | Hyperthyroidism
Hashimoto’s thyroiditis | Thyroid-stimulating hormone receptor (antibody blocks receptor) | Hypothyroidism
Lupus erythematosus | Nuclear DNA and proteins | Damage of many body systems
Myasthenia gravis | Acetylcholine receptor in neuromuscular junctions | Debilitating muscle weakness
Rheumatoid arthritis | Joint capsule antigens | Chronic inflammation of joints
Table 21.7

21.7 Transplantation and Cancer Immunology

Learning Objectives
By the end of this section, you will be able to:
- Explain why blood typing is important and what happens when mismatched blood is used in a transfusion
- Describe how tissue typing is done during organ transplantation and the role of transplant anti-rejection drugs
- Show how the immune response is able to control some cancers and how this immune response might be enhanced by cancer vaccines

The immune responses to transplanted organs and to cancer cells are both important medical issues. With the use of tissue typing and anti-rejection drugs, transplantation of organs and the control of the anti-transplant immune response have made huge strides in the past 50 years. Today, these procedures are commonplace. Tissue typing is the determination of MHC molecules in the tissue to be transplanted to better match the donor to the recipient. The immune response to cancer, on the other hand, has been more difficult to understand and control. Although it is clear that the immune system can recognize some cancers and control them, others seem to be resistant to immune mechanisms.

The Rh Factor
Red blood cells can be typed based on their surface antigens. ABO blood type, in which individuals are type A, B, AB, or O according to their genetics, is one example. A separate antigen system seen on red blood cells is the Rh antigen. When someone is “A positive” for example, the positive refers to the presence of the Rh antigen, whereas someone who is “A negative” would lack this molecule.

An interesting consequence of Rh factor expression is seen in erythroblastosis fetalis, a hemolytic disease of the newborn (Figure 21.30). This disease occurs when mothers negative for Rh antigen have multiple Rh-positive children. During the birth of a first Rh-positive child, the mother makes a primary anti-Rh antibody response to the fetal blood cells that enter the maternal bloodstream. If the mother has a second Rh-positive child, IgG antibodies against Rh-positive blood mounted during this secondary response cross the placenta and attack the fetal blood, causing anemia. This is a consequence of the fact that the fetus is not genetically identical to the mother, and thus the mother is capable of mounting an immune response against it. This disease is treated with antibodies specific for Rh factor. These are given to the mother during the first and subsequent births, destroying any fetal blood that might enter her system and preventing the immune response.
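The conditions for hemolytic disease described above reduce to a short checklist: an Rh-negative mother, an Rh-positive fetus, and a prior sensitizing exposure that was not blocked by anti-Rh antibody treatment. A schematic check of that logic, not clinical guidance:

```python
# Schematic logic of hemolytic disease of the newborn, per the text above.

def at_risk(mother_rh_positive: bool, fetus_rh_positive: bool,
            prior_rh_positive_birth: bool, anti_rh_given: bool) -> bool:
    return (not mother_rh_positive        # mother must lack the Rh antigen
            and fetus_rh_positive         # fetus must carry it
            and prior_rh_positive_birth   # primary response already mounted
            and not anti_rh_given)        # prophylaxis prevents sensitization

print(at_risk(False, True, True, False))   # True: classic second-pregnancy case
print(at_risk(False, True, False, False))  # False: first pregnancy, primary response only
print(at_risk(False, True, True, True))    # False: anti-Rh treatment prevented memory
```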
Tissue Transplantation
Tissue transplantation is more complicated than blood transfusions because of two characteristics of MHC molecules. These molecules are the major cause of transplant rejection (hence the name “histocompatibility”). MHC polygeny refers to the multiple MHC proteins on cells, and MHC polymorphism refers to the multiple alleles for each individual MHC locus. Thus, there are many alleles in the human population that can be expressed (Table 21.8 and Table 21.9). When a donor organ expresses MHC molecules that are different from the recipient, the latter will often mount a cytotoxic T cell response to the organ and reject it. Histologically, if a biopsy of a transplanted organ exhibits massive infiltration of T lymphocytes within the first weeks after transplant, it is a sign that the transplant is likely to fail. The response is a classical, and very specific, primary T cell immune response. As far as medicine is concerned, the immune response in this scenario does the patient no good at all and causes significant harm.

Partial Table of Alleles of the Human MHC (Class I)
Gene | # of alleles | # of possible MHC I protein components
A | 2132 | 1527
B | 2798 | 2110
C | 1672 | 1200
E | 11 | 3
F | 22 | 4
G | 50 | 16
Table 21.8

Partial Table of Alleles of the Human MHC (Class II)
Gene | # of alleles | # of possible MHC II protein components
DRA | 7 | 2
DRB | 1297 | 958
DQA1 | 49 | 31
DQB1 | 179 | 128
DPA1 | 36 | 18
DPB1 | 158 | 136
DMA | 7 | 4
DMB | 13 | 7
DOA | 12 | 3
DOB | 13 | 5
Table 21.9

Immunosuppressive drugs such as cyclosporine A have made transplants more successful, but matching the MHC molecules is still key. In humans, there are six MHC molecules that show the most polymorphisms, three class I molecules (A, B, and C) and three class II molecules called DP, DQ, and DR. A successful transplant usually requires a match between at least 3–4 of these molecules, with more matches associated with greater success. Family members, since they share a similar genetic background, are much more likely to share MHC molecules than unrelated individuals do. In fact, due to the extensive polymorphisms in these MHC molecules, unrelated donors are found only through a worldwide database. The system is not foolproof, however, as there are not enough individuals in the system to provide the organs necessary to treat all patients needing them.

One disease of transplantation occurs with bone marrow transplants, which are used to treat various diseases, including SCID and leukemia. Because the bone marrow cells being transplanted contain lymphocytes capable of mounting an immune response, and because the recipient’s immune response has been destroyed before receiving the transplant, the donor cells may attack the recipient tissues, causing graft-versus-host disease. Symptoms of this disease, which usually include a rash and damage to the liver and mucosa, are variable, and attempts have been made to moderate the disease by first removing mature T cells from the donor bone marrow before transplanting it.
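The allele counts in Tables 21.8 and 21.9 explain why unrelated donors are found only through worldwide registries. Assuming, unrealistically, that every allele is equally common, the odds that two unrelated people share a full genotype at the six most polymorphic loci are astronomically small; real allele frequencies are skewed, so actual odds are far better, but the conclusion stands. Siblings, who inherit whole parental haplotypes, match fully about one time in four.

```python
# Order-of-magnitude estimate of an unrelated full MHC match, using allele
# counts from Tables 21.8 and 21.9 and a uniform-frequency assumption.

alleles = {"A": 2132, "B": 2798, "C": 1672,
           "DPB1": 158, "DQB1": 179, "DRB": 1297}

p_unrelated = 1.0
for locus, n in alleles.items():
    # Roughly 1/n chance per allele of matching at this locus, two alleles
    # per locus (an approximation that ignores genotype ordering).
    p_unrelated *= (1 / n) ** 2

print(f"unrelated full-genotype match: ~1 in {1 / p_unrelated:.1e}")
print("sibling full haplotype match: 1 in 4")
```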
Immune Responses Against Cancer
It is clear that with some cancers, for example Kaposi’s sarcoma, a healthy immune system does a good job at controlling them (Figure 21.31). This disease, which is caused by human herpesvirus 8, is almost never observed in individuals with strong immune systems, such as the young and immunocompetent. Other examples of cancers caused by viruses include liver cancer caused by the hepatitis B virus and cervical cancer caused by the human papilloma virus. As these last two viruses have vaccines available for them, getting vaccinated can help prevent these two types of cancer by stimulating the immune response.
On the other hand, as cancer cells are often able to divide and mutate rapidly, they may escape the immune response, just as certain pathogens such as HIV do. There are three stages in the immune response to many cancers: elimination, equilibrium, and escape. Elimination occurs when the immune response first develops toward antigens specific to the cancer and actively kills most cancer cells, followed by a period of controlled equilibrium during which the remaining cancer cells are held in check. Unfortunately, many cancers mutate, so they no longer express any specific antigens for the immune system to respond to, and a subpopulation of cancer cells escapes the immune response, continuing the disease process. This fact has led to extensive research in trying to develop ways to enhance the early immune response to completely eliminate the early cancer and thus prevent a later escape.
One method that has shown some success is the use of cancer vaccines, which differ from viral and bacterial vaccines in that they are directed against the cells of one’s own body. Treated cancer cells are injected into cancer patients to enhance their anti-cancer immune response and thereby prolong survival. The immune system has the capability to detect these cancer cells and proliferate faster than the cancer cells do, overwhelming the cancer in a similar way as it does viruses. Cancer vaccines have been developed for malignant melanoma, a highly fatal skin cancer, and renal (kidney) cell carcinoma. These vaccines are still in the development stages, but some positive and encouraging results have been obtained clinically.
It is tempting to focus on the complexity of the immune system and the problems it causes as a negative. The upside to immunity, however, is so much greater: The benefit of staying alive far outweighs the negatives caused when the system does sometimes go awry. Working on “autopilot,” the immune system helps to maintain your health and kill pathogens. The only time you really miss the immune response is when it is not being effective and illness results, or, as in the extreme case of HIV disease, the immune system is gone completely.
Everyday Connection
How Stress Affects the Immune Response: The Connections between the Immune, Nervous, and Endocrine Systems of the Body
The immune system cannot exist in isolation. After all, it has to protect the entire body from infection. Therefore, the immune system is required to interact with other organ systems, sometimes in complex ways. Thirty years of research focusing on the connections between the immune system, the central nervous system, and the endocrine system have led to a new science with the unwieldy name of psychoneuroimmunology. The physical connections between these systems have been known for centuries: All primary and secondary lymphoid organs are connected to sympathetic nerves. What is more complex, though, is the interaction of neurotransmitters, hormones, cytokines, and other soluble signaling molecules, and the mechanism of “crosstalk” between the systems. For example, white blood cells, including lymphocytes and phagocytes, have receptors for various neurotransmitters released by associated neurons. Additionally, hormones such as cortisol (naturally produced by the adrenal cortex) and prednisone (synthetic) are well known for their abilities to suppress T cell immune mechanisms, hence their prominent use in medicine as long-term, anti-inflammatory drugs.
One well-established interaction of the immune, nervous, and endocrine systems is the effect of stress on immune health. In the human vertebrate evolutionary past, stress was associated with the fight-or-flight response, largely mediated by the central nervous system and the adrenal medulla. This stress was necessary for survival. The physical action of fighting or running, whichever the animal decides, usually resolves the problem in one way or another. On the other hand, there are no physical actions to resolve most modern day stresses, including short-term stressors like taking examinations and long-term stressors such as being unemployed or losing a spouse. The effect of stress can be felt by nearly every organ system, and the immune system is no exception (Table 21.10).
Effects of Stress on Body Systems
System | Stress-related illness
Integumentary system | Acne, skin rashes, irritation
Nervous system | Headaches, depression, anxiety, irritability, loss of appetite, lack of motivation, reduced mental performance
Muscular and skeletal systems | Muscle and joint pain, neck and shoulder pain
Circulatory system | Increased heart rate, hypertension, increased probability of heart attacks
Digestive system | Indigestion, heartburn, stomach pain, nausea, diarrhea, constipation, weight gain or loss
Immune system | Depressed ability to fight infections
Male reproductive system | Lowered sperm production, impotence, reduced sexual desire
Female reproductive system | Irregular menstrual cycle, reduced sexual desire
Table 21.10
At one time, it was assumed that all types of stress reduced all aspects of the immune response, but the last few decades of research have painted a different picture. First, most short-term stress does not impair the immune system in healthy individuals enough to lead to a greater incidence of diseases. However, older individuals and those with suppressed immune responses due to disease or immunosuppressive drugs may respond even to short-term stressors by getting sicker more often. It has been found that short-term stress diverts the body’s resources towards enhancing innate immune responses, which have the ability to act fast and would seem to help the body prepare better for possible infections associated with the trauma that may result from a fight-or-flight exchange. The diverting of resources away from the adaptive immune response, however, causes its own share of problems in fighting disease. Chronic stress, unlike short-term stress, may inhibit immune responses even in otherwise healthy adults. The suppression of both innate and adaptive immune responses is clearly associated with increases in some diseases, as seen when individuals lose a spouse or have other long-term stresses, such as taking care of a spouse with a fatal disease or dementia. The new science of psychoneuroimmunology, while still in its relative infancy, has great potential to make exciting advances in our understanding of how the nervous, endocrine, and immune systems have evolved together and communicate with each other.
biology
Chapter Outline
39.1 Systems of Gas Exchange
39.2 Gas Exchange across Respiratory Surfaces
39.3 Breathing
39.4 Transport of Gases in Human Bodily Fluids
Introduction
Breathing is an involuntary event. How often a breath is taken and how much air is inhaled or exhaled are tightly regulated by the respiratory center in the brain. Humans, when they aren’t exerting themselves, breathe approximately 15 times per minute on average. Canines, like the dog in Figure 39.1, have a respiratory rate of about 15–30 breaths per minute. With every inhalation, air fills the lungs, and with every exhalation, air rushes back out. That air is doing more than just inflating and deflating the lungs in the chest cavity. The air contains oxygen that crosses the lung tissue, enters the bloodstream, and travels to organs and tissues. Oxygen (O2) enters the cells where it is used for metabolic reactions that produce ATP, a high-energy compound. At the same time, these reactions release carbon dioxide (CO2) as a by-product. CO2 is toxic and must be eliminated. Carbon dioxide exits the cells, enters the bloodstream, travels back to the lungs, and is expired out of the body during exhalation.
[ { "answer": { "ans_choice": 0, "ans_text": "provides body tissues with oxygen" }, "bloom": null, "hl_context": "Insect respiration is independent of its circulatory system ; therefore , the blood does not play a direct role in oxygen transport . <hl> Insects have a highly specialized type of respiratory system called the tracheal system , which consists of a network of small tubes that carries oxygen to the entire body . <hl> The tracheal system is the most direct and efficient respiratory system in active animals . The tubes in the tracheal system are made of a polymeric material called chitin . <hl> The primary function of the respiratory system is to deliver oxygen to the cells of the body ’ s tissues and remove carbon dioxide , a cell waste product . <hl> The main structures of the human respiratory system are the nasal cavity , the trachea , and lungs .", "hl_sentences": "Insects have a highly specialized type of respiratory system called the tracheal system , which consists of a network of small tubes that carries oxygen to the entire body . The primary function of the respiratory system is to deliver oxygen to the cells of the body ’ s tissues and remove carbon dioxide , a cell waste product .", "question": { "cloze_format": "The respiratory system ________.", "normal_format": "Which of the following is correct about the respiratory system?", "question_choices": [ "provides body tissues with oxygen", "provides body tissues with oxygen and carbon dioxide", "establishes how many breaths are taken per minute", "provides the body with carbon dioxide" ], "question_id": "fs-idm57160400", "question_text": "The respiratory system ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "prevent damage to the lungs" }, "bloom": null, "hl_context": "The air that organisms breathe contains particulate matter such as dust , dirt , viral particles , and bacteria that can damage the lungs or trigger allergic immune responses . The respiratory system contains several protective mechanisms to avoid problems or tissue damage . <hl> In the nasal cavity , hairs and mucus trap small particles , viruses , bacteria , dust , and dirt to prevent their entry . <hl> In mammals , pulmonary ventilation occurs via inhalation ( breathing ) . During inhalation , air enters the body through the nasal cavity located just inside the nose ( Figure 39.7 ) . As air passes through the nasal cavity , the air is warmed to body temperature and humidified . The respiratory tract is coated with mucus to seal the tissues from direct contact with air . Mucus is high in water . As air crosses these surfaces of the mucous membranes , it picks up water . These processes help equilibrate the air to the body conditions , reducing any damage that cold , dry air can cause . Particulate matter that is floating in the air is removed in the nasal passages via mucus and cilia . <hl> The processes of warming , humidifying , and removing particles are important protective mechanisms that prevent damage to the trachea and lungs . <hl> Thus , inhalation serves several purposes in addition to bringing oxygen into the respiratory system .", "hl_sentences": "In the nasal cavity , hairs and mucus trap small particles , viruses , bacteria , dust , and dirt to prevent their entry . The processes of warming , humidifying , and removing particles are important protective mechanisms that prevent damage to the trachea and lungs .", "question": { "cloze_format": "Air is warmed and humidified in the nasal passages. 
This helps to ________.", "normal_format": "Air is warmed and humidified in the nasal passages. What does this help?", "question_choices": [ "ward off infection", "decrease sensitivity during breathing", "prevent damage to the lungs", "all of the above" ], "question_id": "fs-idp72164784", "question_text": "Air is warmed and humidified in the nasal passages. This helps to ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "nasal cavity, larynx, trachea, bronchi, bronchioles, alveoli" }, "bloom": null, "hl_context": "<hl> Lungs : Bronchi and Alveoli The end of the trachea bifurcates ( divides ) to the right and left lungs . <hl> The lungs are not identical . The right lung is larger and contains three lobes , whereas the smaller left lung contains two lobes ( Figure 39.9 ) . The muscular diaphragm , which facilitates breathing , is inferior to ( below ) the lungs and marks the end of the thoracic cavity . <hl> In the lungs , air is diverted into smaller and smaller passages , or bronchi . <hl> <hl> Air enters the lungs through the two primary ( main ) bronchi ( singular : bronchus ) . <hl> <hl> Each bronchus divides into secondary bronchi , then into tertiary bronchi , which in turn divide , creating smaller and smaller diameter bronchioles as they split and spread through the lung . <hl> Like the trachea , the bronchi are made of cartilage and smooth muscle . At the bronchioles , the cartilage is replaced with elastic fibers . Bronchi are innervated by nerves of both the parasympathetic and sympathetic nervous systems that control muscle contraction ( parasympathetic ) or relaxation ( sympathetic ) in the bronchi and bronchioles , depending on the nervous system ’ s cues . In humans , bronchioles with a diameter smaller than 0.5 mm are the respiratory bronchioles . They lack cartilage and therefore rely on inhaled air to support their shape . As the passageways decrease in diameter , the relative amount of smooth muscle increases . <hl> The terminal bronchioles subdivide into microscopic branches called respiratory bronchioles . <hl> <hl> The respiratory bronchioles subdivide into several alveolar ducts . <hl> <hl> Numerous alveoli and alveolar sacs surround the alveolar ducts . <hl> The alveolar sacs resemble bunches of grapes tethered to the end of the bronchioles ( Figure 39.10 ) . In the acinar region , the alveolar ducts are attached to the end of each bronchiole . At the end of each duct are approximately 100 alveolar sacs , each containing 20 to 30 alveoli that are 200 to 300 microns in diameter . Gas exchange occurs only in alveoli . Alveoli are made of thin-walled parenchymal cells , typically one-cell thick , that look like tiny bubbles within the sacs . Alveoli are in direct contact with capillaries ( one-cell thick ) of the circulatory system . Such intimate contact ensures that oxygen will diffuse from alveoli into the blood and be distributed to the cells of the body . In addition , the carbon dioxide that was produced by cells as a waste product will diffuse from the blood into alveoli to be exhaled . The anatomical arrangement of capillaries and alveoli emphasizes the structural and functional relationship of the respiratory and circulatory systems . Because there are so many alveoli ( ~ 300 million per lung ) within each alveolar sac and so many sacs at the end of each alveolar duct , the lungs have a sponge-like consistency . This organization produces a very large surface area that is available for gas exchange . 
The surface area of alveoli in the lungs is approximately 75 m 2 . This large surface area , combined with the thin-walled nature of the alveolar parenchymal cells , allows gases to easily diffuse across the cells . <hl> From the nasal cavity , air passes through the pharynx ( throat ) and the larynx ( voice box ) , as it makes its way to the trachea ( Figure 39.7 ) . <hl> <hl> The main function of the trachea is to funnel the inhaled air to the lungs and the exhaled air back out of the body . <hl> The human trachea is a cylinder about 10 to 12 cm long and 2 cm in diameter that sits in front of the esophagus and extends from the larynx into the chest cavity where it divides into the two primary bronchi at the midthorax . It is made of incomplete rings of hyaline cartilage and smooth muscle ( Figure 39.8 ) . The trachea is lined with mucus-producing goblet cells and ciliated epithelia . The cilia propel foreign particles trapped in the mucus toward the pharynx . The cartilage provides strength and support to the trachea to keep the passage open . The smooth muscle can contract , decreasing the trachea ’ s diameter , which causes expired air to rush upwards from the lungs at a great force . The forced exhalation helps expel mucus when we cough . Smooth muscle can contract or relax , depending on stimuli from the external environment or the body ’ s nervous system .", "hl_sentences": "Lungs : Bronchi and Alveoli The end of the trachea bifurcates ( divides ) to the right and left lungs . In the lungs , air is diverted into smaller and smaller passages , or bronchi . Air enters the lungs through the two primary ( main ) bronchi ( singular : bronchus ) . Each bronchus divides into secondary bronchi , then into tertiary bronchi , which in turn divide , creating smaller and smaller diameter bronchioles as they split and spread through the lung . The terminal bronchioles subdivide into microscopic branches called respiratory bronchioles . The respiratory bronchioles subdivide into several alveolar ducts . Numerous alveoli and alveolar sacs surround the alveolar ducts . From the nasal cavity , air passes through the pharynx ( throat ) and the larynx ( voice box ) , as it makes its way to the trachea ( Figure 39.7 ) . The main function of the trachea is to funnel the inhaled air to the lungs and the exhaled air back out of the body .", "question": { "cloze_format": "The order of airflow during inhalation is ___ .", "normal_format": "Which is the order of airflow during inhalation?", "question_choices": [ "nasal cavity, trachea, larynx, bronchi, bronchioles, alveoli", "nasal cavity, larynx, trachea, bronchi, bronchioles, alveoli", "nasal cavity, larynx, trachea, bronchioles, bronchi, alveoli", "nasal cavity, trachea, larynx, bronchi, bronchioles, alveoli" ], "question_id": "fs-idp19628544", "question_text": "Which is the order of airflow during inhalation?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "amount of air that can be further inhaled after a normal breath" }, "bloom": null, "hl_context": "Table 39.1 The volume in the lung can be divided into four units : tidal volume , expiratory reserve volume , inspiratory reserve volume , and residual volume . Tidal volume ( TV ) measures the amount of air that is inspired and expired during a normal breath . On average , this volume is around one-half liter , which is a little less than the capacity of a 20 - ounce drink bottle . 
The expiratory reserve volume ( ERV ) is the additional amount of air that can be exhaled after a normal exhalation . It is the reserve amount that can be exhaled beyond what is normal . <hl> Conversely , the inspiratory reserve volume ( IRV ) is the additional amount of air that can be inhaled after a normal inhalation . <hl> The residual volume ( RV ) is the amount of air that is left after expiratory reserve volume is exhaled . The lungs are never completely empty : There is always some air left in the lungs after a maximal exhalation . If this residual volume did not exist and the lungs emptied completely , the lung tissues would stick together and the energy necessary to re-inflate the lung could be too great to overcome . Therefore , there is always some air remaining in the lungs . Residual volume is also important for preventing large fluctuations in respiratory gases ( O 2 and CO 2 ) . The residual volume is the only lung volume that cannot be measured directly because it is impossible to completely empty the lung of air . This volume can only be calculated rather than measured .", "hl_sentences": "Conversely , the inspiratory reserve volume ( IRV ) is the additional amount of air that can be inhaled after a normal inhalation .", "question": { "cloze_format": "The inspiratory reserve volume measures the ________.", "normal_format": "What does the inspiratory reserve volume measure?", "question_choices": [ "amount of air remaining in the lung after a maximal exhalation", "amount of air that the lung holds", "amount of air the can be further exhaled after a normal breath", "amount of air that can be further inhaled after a normal breath" ], "question_id": "fs-idm17333744", "question_text": "The inspiratory reserve volume measures the ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "Lungs exert a pressure on the air to reduce the oxygen pressure." }, "bloom": null, "hl_context": "Above , the partial pressure of oxygen in the lungs was calculated to be 150 mm Hg . <hl> However , lungs never fully deflate with an exhalation ; therefore , the inspired air mixes with this residual air and lowers the partial pressure of oxygen within the alveoli . <hl> <hl> This means that there is a lower concentration of oxygen in the lungs than is found in the air outside the body . <hl> Knowing the RQ , the partial pressure of oxygen in the alveoli can be calculated :", "hl_sentences": "However , lungs never fully deflate with an exhalation ; therefore , the inspired air mixes with this residual air and lowers the partial pressure of oxygen within the alveoli . This means that there is a lower concentration of oxygen in the lungs than is found in the air outside the body .", "question": { "cloze_format": "___ does not explain why the partial pressure of oxygen is lower in the lung than in the external air.", "normal_format": "Of the following, which does not explain why the partial pressure of oxygen is lower in the lung than in the external air?", "question_choices": [ "Air in the lung is humidified; therefore, water vapor pressure alters the pressure.", "Carbon dioxide mixes with oxygen.", "Oxygen is moved into the blood and is headed to the tissues.", "Lungs exert a pressure on the air to reduce the oxygen pressure." ], "question_id": "fs-idp94151328", "question_text": "Of the following, which does not explain why the partial pressure of oxygen is lower in the lung than in the external air?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "residual volume + expiratory reserve volume + tidal volume + inspiratory reserve volume" }, "bloom": null, "hl_context": "Capacities are measurements of two or more volumes . The vital capacity ( VC ) measures the maximum amount of air that can be inhaled or exhaled during a respiratory cycle . It is the sum of the expiratory reserve volume , tidal volume , and inspiratory reserve volume . The inspiratory capacity ( IC ) is the amount of air that can be inhaled after the end of a normal expiration . It is , therefore , the sum of the tidal volume and inspiratory reserve volume . The functional residual capacity ( FRC ) includes the expiratory reserve volume and the residual volume . The FRC measures the amount of additional air that can be exhaled after a normal exhalation . <hl> Lastly , the total lung capacity ( TLC ) is a measurement of the total amount of air that the lung can hold . <hl> <hl> It is the sum of the residual volume , expiratory reserve volume , tidal volume , and inspiratory reserve volume . <hl>", "hl_sentences": "Lastly , the total lung capacity ( TLC ) is a measurement of the total amount of air that the lung can hold . It is the sum of the residual volume , expiratory reserve volume , tidal volume , and inspiratory reserve volume .", "question": { "cloze_format": "The total lung capacity is calculated using the formula of ___ .", "normal_format": "The total lung capacity is calculated using which of the following formulas?", "question_choices": [ "residual volume + tidal volume + inspiratory reserve volume", "residual volume + expiratory reserve volume + inspiratory reserve volume", "expiratory reserve volume + tidal volume + inspiratory reserve volume", "residual volume + expiratory reserve volume + tidal volume + inspiratory reserve volume" ], "question_id": "fs-idp155016848", "question_text": "The total lung capacity is calculated using which of the following formulas?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "It would prevent inhalation because the intrapleural pressure would not change." }, "bloom": null, "hl_context": "There is always a slightly negative pressure within the thoracic cavity , which aids in keeping the airways of the lungs open . <hl> During inhalation , volume increases as a result of contraction of the diaphragm , and pressure decreases ( according to Boyle ’ s Law ) . <hl> This decrease of pressure in the thoracic cavity relative to the environment makes the cavity less than the atmosphere ( Figure 39.16 a ) . Because of this drop in pressure , air rushes into the respiratory passages . To increase the volume of the lungs , the chest wall expands . This results from the contraction of the intercostal muscles , the muscles that are connected to the rib cage . <hl> Lung volume expands because the diaphragm contracts and the intercostals muscles contract , thus expanding the thoracic cavity . <hl> This increase in the volume of the thoracic cavity lowers pressure compared to the atmosphere , so air rushes into the lungs , thus increasing its volume . The resulting increase in volume is largely attributed to an increase in alveolar space , because the bronchioles and bronchi are stiff structures that do not change in size . The chest wall expands out and away from the lungs . 
The lungs are elastic ; therefore , when air fills the lungs , the elastic recoil within the tissues of the lung exerts pressure back toward the interior of the lungs . These outward and inward forces compete to inflate and deflate the lung with every breath . Upon exhalation , the lungs recoil to force the air out of the lungs , and the intercostal muscles relax , returning the chest wall back to its original position ( Figure 39.16 b ) . <hl> The diaphragm also relaxes and moves higher into the thoracic cavity . <hl> <hl> This increases the pressure within the thoracic cavity relative to the environment , and air rushes out of the lungs . <hl> The movement of air out of the lungs is a passive event . No muscles are contracting to expel the air . Each lung is surrounded by an invaginated sac . The layer of tissue that covers the lung and dips into spaces is called the visceral pleura . A second layer of parietal pleura lines the interior of the thorax ( Figure 39.17 ) . The space between these layers , the intrapleural space , contains a small amount of fluid that protects the tissue and reduces the friction generated from rubbing the tissue layers together as the lungs contract and relax . Pleurisy results when these layers of tissue become inflamed ; it is painful because the inflammation increases the pressure within the thoracic cavity and reduces the volume of the lung . <hl> The Mechanics of Human Breathing Boyle ’ s Law is the gas law that states that in a closed space , pressure and volume are inversely related . <hl> <hl> As volume decreases , pressure increases and vice versa ( Figure 39.15 ) . <hl> The relationship between gas pressure and volume helps to explain the mechanics of breathing . All mammals have lungs that are the main organs for breathing . Lung capacity has evolved to support the animal ’ s activities . <hl> During inhalation , the lungs expand with air , and oxygen diffuses across the lung ’ s surface and enters the bloodstream . <hl> <hl> During exhalation , the lungs expel air and lung volume decreases . <hl> In the next few sections , the process of human breathing will be explained . Mammalian lungs are located in the thoracic cavity where they are surrounded and protected by the rib cage , intercostal muscles , and bound by the chest wall . <hl> The bottom of the lungs is contained by the diaphragm , a skeletal muscle that facilitates breathing . <hl> <hl> Breathing requires the coordination of the lungs , the chest wall , and most importantly , the diaphragm . <hl>", "hl_sentences": "During inhalation , volume increases as a result of contraction of the diaphragm , and pressure decreases ( according to Boyle ’ s Law ) . Lung volume expands because the diaphragm contracts and the intercostals muscles contract , thus expanding the thoracic cavity . The diaphragm also relaxes and moves higher into the thoracic cavity . This increases the pressure within the thoracic cavity relative to the environment , and air rushes out of the lungs . The Mechanics of Human Breathing Boyle ’ s Law is the gas law that states that in a closed space , pressure and volume are inversely related . As volume decreases , pressure increases and vice versa ( Figure 39.15 ) . During inhalation , the lungs expand with air , and oxygen diffuses across the lung ’ s surface and enters the bloodstream . During exhalation , the lungs expel air and lung volume decreases . The bottom of the lungs is contained by the diaphragm , a skeletal muscle that facilitates breathing . 
Breathing requires the coordination of the lungs , the chest wall , and most importantly , the diaphragm .", "question": { "cloze_format": "The way paralysis of the diaphragm alter inspiration is that ___ .", "normal_format": "How would paralysis of the diaphragm alter inspiration?", "question_choices": [ "It would prevent contraction of the intercostal muscles.", "It would prevent inhalation because the intrapleural pressure would not change.", "It would decrease the intrapleural pressure and allow more air to enter the lungs.", "It would slow expiration because the lung would not relax." ], "question_id": "fs-idp67913520", "question_text": "How would paralysis of the diaphragm alter inspiration?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "decrease the compliance of the lung" }, "bloom": null, "hl_context": "Examples of restrictive diseases are respiratory distress syndrome and pulmonary fibrosis . <hl> In both diseases , the airways are less compliant and they are stiff or fibrotic . <hl> <hl> There is a decrease in compliance because the lung tissue cannot bend and move . <hl> In these types of restrictive diseases , the intrapleural pressure is more positive and the airways collapse upon exhalation , which traps air in the lungs . Forced or functional vital capacity ( FVC ) , which is the amount of air that can be forcibly exhaled after taking the deepest breath possible , is much lower than in normal patients , and the time it takes to exhale most of the air is greatly prolonged ( Figure 39.18 ) . A patient suffering from these diseases cannot exhale the normal amount of air . <hl> Pulmonary diseases reduce the rate of gas exchange into and out of the lungs . <hl> <hl> Two main causes of decreased gas exchange are compliance ( how elastic the lung is ) and resistance ( how much obstruction exists in the airways ) . <hl> A change in either can dramatically alter breathing and the ability to take in oxygen and release carbon dioxide .", "hl_sentences": "In both diseases , the airways are less compliant and they are stiff or fibrotic . There is a decrease in compliance because the lung tissue cannot bend and move . Pulmonary diseases reduce the rate of gas exchange into and out of the lungs . Two main causes of decreased gas exchange are compliance ( how elastic the lung is ) and resistance ( how much obstruction exists in the airways ) .", "question": { "cloze_format": "Restrictive airway diseases ________.", "normal_format": "Which of the following is correct about restrictive airway diseases?", "question_choices": [ "increase the compliance of the lung", "decrease the compliance of the lung", "increase the lung volume", "decrease the work of breathing" ], "question_id": "fs-idm39676880", "question_text": "Restrictive airway diseases ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "both a and c" }, "bloom": null, "hl_context": "The number of breaths per minute is the respiratory rate . On average , under non-exertion conditions , the human respiratory rate is 12 – 15 breaths / minute . The respiratory rate contributes to the alveolar ventilation , or how much air moves into and out of the alveoli . Alveolar ventilation prevents carbon dioxide buildup in the alveoli . 
<hl> There are two ways to keep the alveolar ventilation constant : increase the respiratory rate while decreasing the tidal volume of air per breath ( shallow breathing ) , or decrease the respiratory rate while increasing the tidal volume per breath . <hl> In either case , the ventilation remains the same , but the work done and type of work needed are quite different . Both tidal volume and respiratory rate are closely regulated when oxygen demand increases .", "hl_sentences": "There are two ways to keep the alveolar ventilation constant : increase the respiratory rate while decreasing the tidal volume of air per breath ( shallow breathing ) , or decrease the respiratory rate while increasing the tidal volume per breath .", "question": { "cloze_format": "Alveolar ventilation remains constant when ________.", "normal_format": "When does alveolar ventilation remain constant?", "question_choices": [ "the respiratory rate is increased while the volume of air per breath is decreased", "the respiratory rate and the volume of air per breath are increased", "the respiratory rate is decreased while increasing the volume per breath", "both a and c" ], "question_id": "fs-idm53491072", "question_text": "Alveolar ventilation remains constant when ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "decreased body temperature" }, "bloom": "3", "hl_context": "and hydrogen ions ( H + ) . <hl> As the level of carbon dioxide in the blood increases , more H + is produced and the pH decreases . <hl> <hl> This increase in carbon dioxide and subsequent decrease in pH reduce the affinity of hemoglobin for oxygen . <hl> <hl> The oxygen dissociates from the Hb molecule , shifting the oxygen dissociation curve to the right . <hl> Therefore , more oxygen is needed to reach the same hemoglobin saturation level as when the pH was higher . A similar shift in the curve also results from an increase in body temperature . Increased temperature , such as from increased activity of skeletal muscle , causes the affinity of hemoglobin for oxygen to be reduced . Diseases like sickle cell anemia and thalassemia decrease the blood ’ s ability to deliver oxygen to tissues and its oxygen-carrying capacity . In sickle cell anemia , the shape of the red blood cell is crescent-shaped , elongated , and stiffened , reducing its ability to deliver oxygen ( Figure 39.21 ) . In this form , red blood cells cannot pass through the capillaries . This is painful when it occurs . Thalassemia is a rare genetic disease caused by a defect in either the alpha or the beta subunit of Hb . Patients with thalassemia produce a high number of red blood cells , but these cells have lower-than-normal levels of hemoglobin . Therefore , the oxygen-carrying capacity is diminished . <hl> , other environmental factors and diseases can affect oxygen carrying capacity and delivery . <hl> <hl> Carbon dioxide levels , blood pH , and body temperature affect oxygen-carrying capacity ( Figure 39.20 ) . <hl> When carbon dioxide is in the blood , it reacts with water to form bicarbonate", "hl_sentences": "As the level of carbon dioxide in the blood increases , more H + is produced and the pH decreases . This increase in carbon dioxide and subsequent decrease in pH reduce the affinity of hemoglobin for oxygen . The oxygen dissociates from the Hb molecule , shifting the oxygen dissociation curve to the right . , other environmental factors and diseases can affect oxygen carrying capacity and delivery . 
Carbon dioxide levels , blood pH , and body temperature affect oxygen-carrying capacity ( Figure 39.20 ) .", "question": { "cloze_format": "___ will NOT facilitate the transfer of oxygen to tissues.", "normal_format": "Which of the following will NOT facilitate the transfer of oxygen to tissues?", "question_choices": [ "decreased body temperature", "decreased pH of the blood", "increased carbon dioxide", "increased exercise" ], "question_id": "fs-idp17243280", "question_text": "Which of the following will NOT facilitate the transfer of oxygen to tissues?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "conversion to bicarbonate" }, "bloom": null, "hl_context": "<hl> Third , the majority of carbon dioxide molecules ( 85 percent ) are carried as part of the bicarbonate buffer system . <hl> <hl> In this system , carbon dioxide diffuses into the red blood cells . <hl> <hl> Carbonic anhydrase ( CA ) within the red blood cells quickly converts the carbon dioxide into carbonic acid ( H 2 CO 3 ) . <hl> Carbonic acid is an unstable intermediate molecule that immediately dissociates into bicarbonate ions <hl> Carbon dioxide molecules are transported in the blood from body tissues to the lungs by one of three methods : dissolution directly into the blood , binding to hemoglobin , or carried as a bicarbonate ion . <hl> Several properties of carbon dioxide in the blood affect its transport . First , carbon dioxide is more soluble in blood than oxygen . About 5 to 7 percent of all carbon dioxide is dissolved in the plasma . Second , carbon dioxide can bind to plasma proteins or can enter red blood cells and bind to hemoglobin . This form transports about 10 percent of the carbon dioxide . When carbon dioxide binds to hemoglobin , a molecule called carbaminohemoglobin is formed . Binding of carbon dioxide to hemoglobin is reversible . Therefore , when it reaches the lungs , the carbon dioxide can freely dissociate from the hemoglobin and be expelled from the body .", "hl_sentences": "Third , the majority of carbon dioxide molecules ( 85 percent ) are carried as part of the bicarbonate buffer system . In this system , carbon dioxide diffuses into the red blood cells . Carbonic anhydrase ( CA ) within the red blood cells quickly converts the carbon dioxide into carbonic acid ( H 2 CO 3 ) . Carbon dioxide molecules are transported in the blood from body tissues to the lungs by one of three methods : dissolution directly into the blood , binding to hemoglobin , or carried as a bicarbonate ion .", "question": { "cloze_format": "The majority of carbon dioxide in the blood is transported by ________.", "normal_format": "How is the majority of carbon dioxide in the blood transported?", "question_choices": [ "binding to hemoglobin", "dissolution in the blood", "conversion to bicarbonate", "binding to plasma proteins" ], "question_id": "fs-idp3605056", "question_text": "The majority of carbon dioxide in the blood is transported by ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "binding to hemoglobin" }, "bloom": null, "hl_context": "Although oxygen dissolves in blood , only a small amount of oxygen is transported this way . Only 1.5 percent of oxygen in the blood is dissolved directly into the blood itself . <hl> Most oxygen — 98.5 percent — is bound to a protein called hemoglobin and carried to the tissues .
<hl>", "hl_sentences": "Most oxygen — 98.5 percent — is bound to a protein called hemoglobin and carried to the tissues .", "question": { "cloze_format": "The majority of oxygen in the blood is transported by ________.", "normal_format": "The majority of oxygen in the blood is transported by which of the following?", "question_choices": [ "dissolution in the blood", "being carried as bicarbonate ions", "binding to blood plasma", "binding to hemoglobin" ], "question_id": "fs-idm78036128", "question_text": "The majority of oxygen in the blood is transported by ________." }, "references_are_paraphrase": null } ]
39
39.1 Systems of Gas Exchange
Learning Objectives
By the end of this section, you will be able to:
Describe the passage of air from the outside environment to the lungs
Explain how the lungs are protected from particulate matter
The primary function of the respiratory system is to deliver oxygen to the cells of the body’s tissues and remove carbon dioxide, a cell waste product. The main structures of the human respiratory system are the nasal cavity, the trachea, and lungs.
All aerobic organisms require oxygen to carry out their metabolic functions. Along the evolutionary tree, different organisms have devised different means of obtaining oxygen from the surrounding atmosphere. The environment in which the animal lives greatly determines how an animal respires. The complexity of the respiratory system is correlated with the size of the organism. As animal size increases, diffusion distances increase and the ratio of surface area to volume drops. In unicellular organisms, diffusion across the cell membrane is sufficient for supplying oxygen to the cell (Figure 39.2). Diffusion is a slow, passive transport process. In order for diffusion to be a feasible means of providing oxygen to the cell, the rate of oxygen uptake must match the rate of diffusion across the membrane. In other words, if the cell were very large or thick, diffusion would not be able to provide oxygen quickly enough to the inside of the cell. Therefore, dependence on diffusion as a means of obtaining oxygen and removing carbon dioxide remains feasible only for small organisms or those with highly flattened bodies, such as many flatworms (Platyhelminthes). Larger organisms had to evolve specialized respiratory tissues, such as gills, lungs, and respiratory passages accompanied by complex circulatory systems, to transport oxygen throughout their entire body.
Direct Diffusion
For small multicellular organisms, diffusion across the outer membrane is sufficient to meet their oxygen needs. Gas exchange by direct diffusion across surface membranes is efficient for organisms less than 1 mm in diameter. In simple organisms, such as cnidarians and flatworms, every cell in the body is close to the external environment. Their cells are kept moist and gases diffuse quickly via direct diffusion. Flatworms are small, literally flat worms, which ‘breathe’ through diffusion across the outer membrane (Figure 39.3). The flat shape of these organisms increases the surface area for diffusion, ensuring that each cell within the body is close to the outer membrane surface and has access to oxygen. If the flatworm had a cylindrical body, then the cells in the center would not be able to get oxygen.
Skin and Gills
Earthworms and amphibians use their skin (integument) as a respiratory organ. A dense network of capillaries lies just below the skin and facilitates gas exchange between the external environment and the circulatory system. The respiratory surface must be kept moist in order for the gases to dissolve and diffuse across cell membranes.
Organisms that live in water need to obtain oxygen from the water. Oxygen dissolves in water but at a lower concentration than in the atmosphere. The atmosphere has roughly 21 percent oxygen. In water, the oxygen concentration is much smaller than that. Fish and many other aquatic organisms have evolved gills to take up the dissolved oxygen from water (Figure 39.4). Gills are thin tissue filaments that are highly branched and folded.
When water passes over the gills, the dissolved oxygen in water rapidly diffuses across the gills into the bloodstream. The circulatory system can then carry the oxygenated blood to the other parts of the body. In animals that contain coelomic fluid instead of blood, oxygen diffuses across the gill surfaces into the coelomic fluid. Gills are found in mollusks, annelids, and crustaceans. The folded surfaces of the gills provide a large surface area to ensure that the fish gets sufficient oxygen. Diffusion is a process in which material travels from regions of high concentration to low concentration until equilibrium is reached. In this case, blood with a low concentration of oxygen molecules circulates through the gills. The concentration of oxygen molecules in water is higher than the concentration of oxygen molecules in gills. As a result, oxygen molecules diffuse from water (high concentration) to blood (low concentration), as shown in Figure 39.5. Similarly, carbon dioxide molecules in the blood diffuse from the blood (high concentration) to water (low concentration).
Tracheal Systems
Insect respiration is independent of its circulatory system; therefore, the blood does not play a direct role in oxygen transport. Insects have a highly specialized type of respiratory system called the tracheal system, which consists of a network of small tubes that carries oxygen to the entire body. The tracheal system is the most direct and efficient respiratory system in active animals. The tubes in the tracheal system are made of a polymeric material called chitin. Insect bodies have openings, called spiracles, along the thorax and abdomen. These openings connect to the tubular network, allowing oxygen to pass into the body (Figure 39.6) and regulating the diffusion of CO2 and water vapor. Air enters and leaves the tracheal system through the spiracles. Some insects can ventilate the tracheal system with body movements.
Mammalian Systems
In mammals, pulmonary ventilation occurs via inhalation (breathing). During inhalation, air enters the body through the nasal cavity located just inside the nose (Figure 39.7). As air passes through the nasal cavity, the air is warmed to body temperature and humidified. The respiratory tract is coated with mucus to seal the tissues from direct contact with air. Mucus is high in water. As air crosses these surfaces of the mucous membranes, it picks up water. These processes help equilibrate the air to the body conditions, reducing any damage that cold, dry air can cause. Particulate matter that is floating in the air is removed in the nasal passages via mucus and cilia. The processes of warming, humidifying, and removing particles are important protective mechanisms that prevent damage to the trachea and lungs. Thus, inhalation serves several purposes in addition to bringing oxygen into the respiratory system.
Visual Connection
Which of the following statements about the mammalian respiratory system is false?
When we breathe in, air travels from the pharynx to the trachea.
The bronchioles branch into bronchi.
Alveolar ducts connect to alveolar sacs.
Gas exchange between the lung and blood takes place in the alveolus.
From the nasal cavity, air passes through the pharynx (throat) and the larynx (voice box), as it makes its way to the trachea (Figure 39.7). The main function of the trachea is to funnel the inhaled air to the lungs and the exhaled air back out of the body.
The human trachea is a cylinder about 10 to 12 cm long and 2 cm in diameter that sits in front of the esophagus and extends from the larynx into the chest cavity where it divides into the two primary bronchi at the midthorax. It is made of incomplete rings of hyaline cartilage and smooth muscle (Figure 39.8). The trachea is lined with mucus-producing goblet cells and ciliated epithelia. The cilia propel foreign particles trapped in the mucus toward the pharynx. The cartilage provides strength and support to the trachea to keep the passage open. The smooth muscle can contract, decreasing the trachea’s diameter, which causes expired air to rush upwards from the lungs at a great force. The forced exhalation helps expel mucus when we cough. Smooth muscle can contract or relax, depending on stimuli from the external environment or the body’s nervous system.
Lungs: Bronchi and Alveoli
The end of the trachea bifurcates (divides) to the right and left lungs. The lungs are not identical. The right lung is larger and contains three lobes, whereas the smaller left lung contains two lobes (Figure 39.9). The muscular diaphragm, which facilitates breathing, is inferior to (below) the lungs and marks the end of the thoracic cavity.
In the lungs, air is diverted into smaller and smaller passages, or bronchi. Air enters the lungs through the two primary (main) bronchi (singular: bronchus). Each bronchus divides into secondary bronchi, then into tertiary bronchi, which in turn divide, creating smaller and smaller diameter bronchioles as they split and spread through the lung. Like the trachea, the bronchi are made of cartilage and smooth muscle. At the bronchioles, the cartilage is replaced with elastic fibers. Bronchi are innervated by nerves of both the parasympathetic and sympathetic nervous systems that control muscle contraction (parasympathetic) or relaxation (sympathetic) in the bronchi and bronchioles, depending on the nervous system’s cues. In humans, bronchioles with a diameter smaller than 0.5 mm are the respiratory bronchioles. They lack cartilage and therefore rely on inhaled air to support their shape. As the passageways decrease in diameter, the relative amount of smooth muscle increases.
The terminal bronchioles subdivide into microscopic branches called respiratory bronchioles. The respiratory bronchioles subdivide into several alveolar ducts. Numerous alveoli and alveolar sacs surround the alveolar ducts. The alveolar sacs resemble bunches of grapes tethered to the end of the bronchioles (Figure 39.10). In the acinar region, the alveolar ducts are attached to the end of each bronchiole. At the end of each duct are approximately 100 alveolar sacs, each containing 20 to 30 alveoli that are 200 to 300 microns in diameter. Gas exchange occurs only in alveoli. Alveoli are made of thin-walled parenchymal cells, typically one-cell thick, that look like tiny bubbles within the sacs. Alveoli are in direct contact with capillaries (one-cell thick) of the circulatory system. Such intimate contact ensures that oxygen will diffuse from alveoli into the blood and be distributed to the cells of the body. In addition, the carbon dioxide that was produced by cells as a waste product will diffuse from the blood into alveoli to be exhaled. The anatomical arrangement of capillaries and alveoli emphasizes the structural and functional relationship of the respiratory and circulatory systems.
Because there are so many alveoli (~300 million per lung) within each alveolar sac and so many sacs at the end of each alveolar duct, the lungs have a sponge-like consistency. This organization produces a very large surface area that is available for gas exchange. The surface area of alveoli in the lungs is approximately 75 m2. This large surface area, combined with the thin-walled nature of the alveolar parenchymal cells, allows gases to easily diffuse across the cells.
Link to Learning
Watch the following video to review the respiratory system.
Protective Mechanisms
The air that organisms breathe contains particulate matter such as dust, dirt, viral particles, and bacteria that can damage the lungs or trigger allergic immune responses. The respiratory system contains several protective mechanisms to avoid problems or tissue damage. In the nasal cavity, hairs and mucus trap small particles, viruses, bacteria, dust, and dirt to prevent their entry. If particulates do make it beyond the nose, or enter through the mouth, the bronchi and bronchioles of the lungs also contain several protective devices. The lungs produce mucus—a sticky substance made of mucin, a complex glycoprotein, as well as salts and water—that traps particulates. The bronchi and bronchioles contain cilia, small hair-like projections that line the walls of the bronchi and bronchioles (Figure 39.11). These cilia beat in unison and move mucus and particles out of the bronchi and bronchioles back up to the throat where it is swallowed and eliminated via the esophagus. In humans, for example, tar and other substances in cigarette smoke destroy or paralyze the cilia, making the removal of particles more difficult. In addition, smoking causes the lungs to produce more mucus, which the damaged cilia are not able to move. This causes a persistent cough, as the lungs try to rid themselves of particulate matter, and makes smokers more susceptible to respiratory ailments.
39.2 Gas Exchange across Respiratory Surfaces
Learning Objectives
By the end of this section, you will be able to:
Name and describe lung volumes and capacities
Understand how gas pressure influences how gases move into and out of the body
The structure of the lung maximizes its surface area to increase gas diffusion. Because of the enormous number of alveoli (approximately 300 million in each human lung), the surface area of the lung is very large (75 m2). Having such a large surface area increases the amount of gas that can diffuse into and out of the lungs.
Basic Principles of Gas Exchange
Gas exchange during respiration occurs primarily through diffusion. Diffusion is a process in which transport is driven by a concentration gradient. Gas molecules move from a region of high concentration to a region of low concentration. Blood that is low in oxygen concentration and high in carbon dioxide concentration undergoes gas exchange with air in the lungs. The air in the lungs has a higher concentration of oxygen than that of oxygen-depleted blood and a lower concentration of carbon dioxide. This concentration gradient allows for gas exchange during respiration. Partial pressure is a measure of the concentration of the individual components in a mixture of gases. The total pressure exerted by the mixture is the sum of the partial pressures of the components in the mixture. The rate of diffusion of a gas is proportional to its partial pressure within the total gas mixture. This concept is discussed further in detail below.
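The other recurring principle in these sections, surface area, can be made concrete with a little geometry: for a sphere, surface area grows with the square of the radius while volume grows with the cube, so the surface-to-volume ratio falls as a body gets bigger. A minimal sketch follows; the radii are arbitrary illustrative values, not figures from the text.

# Surface-area-to-volume ratio of a sphere falls as 3/r, which is why direct
# diffusion suffices only for very small (or very flat) organisms, and why
# larger animals evolved gills, tracheae, and alveoli to add surface area.
import math

def sa_to_volume_ratio(radius_mm: float) -> float:
    surface_area = 4 * math.pi * radius_mm ** 2
    volume = (4 / 3) * math.pi * radius_mm ** 3
    return surface_area / volume  # algebraically simplifies to 3 / radius_mm

for r in (0.5, 5.0, 50.0):  # arbitrary example radii in mm
    print(f"radius {r:>5} mm -> SA:V = {sa_to_volume_ratio(r):.2f} per mm")
# A 100-fold increase in radius cuts the ratio 100-fold, so big bodies need
# dedicated, highly folded respiratory surfaces to keep diffusion adequate.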
Lung Volumes and Capacities
Different animals have different lung capacities based on their activities. Cheetahs have evolved a much higher lung capacity than humans; it helps provide oxygen to all the muscles in the body and allows them to run very fast. Elephants also have a high lung capacity. In this case, it is not because they run fast but because they have a large body and must be able to take up oxygen in accordance with their body size.
Human lung size is determined by genetics, sex, and height. At maximal capacity, an average lung can hold almost six liters of air, but lungs do not usually operate at maximal capacity. Air in the lungs is measured in terms of lung volumes and lung capacities (Figure 39.12 and Table 39.1). Volume measures the amount of air for one function (such as inhalation or exhalation). Capacity is any two or more volumes (for example, how much can be inhaled from the end of a maximal exhalation).
Lung Volumes and Capacities (Avg Adult Male)
Volume/Capacity | Definition | Volume (liters) | Equations
Tidal volume (TV) | Amount of air inhaled during a normal breath | 0.5 | -
Expiratory reserve volume (ERV) | Amount of air that can be exhaled after a normal exhalation | 1.2 | -
Inspiratory reserve volume (IRV) | Amount of air that can be further inhaled after a normal inhalation | 3.1 | -
Residual volume (RV) | Air left in the lungs after a forced exhalation | 1.2 | -
Vital capacity (VC) | Maximum amount of air that can be moved in or out of the lungs in a single respiratory cycle | 4.8 | ERV+TV+IRV
Inspiratory capacity (IC) | Volume of air that can be inhaled in addition to a normal exhalation | 3.6 | TV+IRV
Functional residual capacity (FRC) | Volume of air remaining after a normal exhalation | 2.4 | ERV+RV
Total lung capacity (TLC) | Total volume of air in the lungs after a maximal inspiration | 6.0 | RV+ERV+TV+IRV
Forced expiratory volume (FEV1) | How much air can be forced out of the lungs over a specific time period, usually one second | ~4.1 to 5.5 | -
Table 39.1
The volume in the lung can be divided into four units: tidal volume, expiratory reserve volume, inspiratory reserve volume, and residual volume. Tidal volume (TV) measures the amount of air that is inspired and expired during a normal breath. On average, this volume is around one-half liter, which is a little less than the capacity of a 20-ounce drink bottle. The expiratory reserve volume (ERV) is the additional amount of air that can be exhaled after a normal exhalation. It is the reserve amount that can be exhaled beyond what is normal. Conversely, the inspiratory reserve volume (IRV) is the additional amount of air that can be inhaled after a normal inhalation. The residual volume (RV) is the amount of air that is left after expiratory reserve volume is exhaled. The lungs are never completely empty: There is always some air left in the lungs after a maximal exhalation. If this residual volume did not exist and the lungs emptied completely, the lung tissues would stick together and the energy necessary to re-inflate the lung could be too great to overcome. Therefore, there is always some air remaining in the lungs. Residual volume is also important for preventing large fluctuations in respiratory gases (O2 and CO2). The residual volume is the only lung volume that cannot be measured directly because it is impossible to completely empty the lung of air. This volume can only be calculated rather than measured. Capacities are measurements of two or more volumes.
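Because each capacity in Table 39.1 is just a sum of the four basic volumes, the table's equations can be checked numerically. A minimal sketch using the average adult male values from the table (the variable names are mine, not the text's):

# Verify the capacity equations in Table 39.1 from the four basic lung volumes.
# Values are the average adult male figures given in the table, in liters.
TV  = 0.5   # tidal volume
ERV = 1.2   # expiratory reserve volume
IRV = 3.1   # inspiratory reserve volume
RV  = 1.2   # residual volume

VC  = ERV + TV + IRV       # vital capacity
IC  = TV + IRV             # inspiratory capacity
FRC = ERV + RV             # functional residual capacity
TLC = RV + ERV + TV + IRV  # total lung capacity

print(f"VC  = {VC:.1f} L")   # 4.8 L
print(f"IC  = {IC:.1f} L")   # 3.6 L
print(f"FRC = {FRC:.1f} L")  # 2.4 L
print(f"TLC = {TLC:.1f} L")  # 6.0 L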
The vital capacity (VC) measures the maximum amount of air that can be inhaled or exhaled during a respiratory cycle. It is the sum of the expiratory reserve volume, tidal volume, and inspiratory reserve volume. The inspiratory capacity (IC) is the amount of air that can be inhaled after the end of a normal expiration. It is, therefore, the sum of the tidal volume and inspiratory reserve volume. The functional residual capacity (FRC) includes the expiratory reserve volume and the residual volume. The FRC measures the amount of air remaining in the lungs after a normal exhalation. Lastly, the total lung capacity (TLC) is a measurement of the total amount of air that the lung can hold. It is the sum of the residual volume, expiratory reserve volume, tidal volume, and inspiratory reserve volume.

Lung volumes are measured by a technique called spirometry. An important measurement taken during spirometry is the forced expiratory volume (FEV), which measures how much air can be forced out of the lung over a specific period, usually one second (FEV1). In addition, the forced vital capacity (FVC), which is the total amount of air that can be forcibly exhaled, is measured. The ratio of these values (the FEV1/FVC ratio) is used to diagnose lung diseases including asthma, emphysema, and fibrosis. If the FEV1/FVC ratio is high, the lungs are not compliant (meaning they are stiff and unable to bend properly), and the patient most likely has lung fibrosis; such patients exhale most of the lung volume very quickly. Conversely, when the FEV1/FVC ratio is low, there is resistance in the lung that is characteristic of asthma. In this instance, it is hard for the patient to get the air out of the lungs, and it takes a long time to reach the maximal exhalation volume. In either case, breathing is difficult and complications arise.

Career Connection
Respiratory Therapist
Respiratory therapists or respiratory practitioners evaluate and treat patients with lung and cardiovascular diseases. They work as part of a medical team to develop treatment plans for patients. Respiratory therapists may treat premature babies with underdeveloped lungs, patients with chronic conditions such as asthma, or older patients suffering from lung diseases such as emphysema and chronic obstructive pulmonary disease (COPD). They may operate advanced equipment such as compressed gas delivery systems, ventilators, blood gas analyzers, and resuscitators. Specialized programs to become a respiratory therapist generally lead to a bachelor's degree with a respiratory therapist specialty. Because of a growing aging population, career opportunities as a respiratory therapist are expected to remain strong.

Gas Pressure and Respiration
The respiratory process can be better understood by examining the properties of gases. Gases move freely, but gas particles are constantly hitting the walls of their vessel, thereby producing gas pressure. Air is a mixture of gases, primarily nitrogen (N2; 78.6 percent), oxygen (O2; 20.9 percent), water vapor (H2O; 0.5 percent), and carbon dioxide (CO2; 0.04 percent). Each gas component of that mixture exerts a pressure. The pressure of an individual gas in the mixture is the partial pressure of that gas. Approximately 21 percent of atmospheric gas is oxygen. Carbon dioxide, however, is found in relatively small amounts, 0.04 percent, so the partial pressure of oxygen is much greater than that of carbon dioxide.
The partial pressure of any gas can be calculated as

P = Patm × (percent content in mixture)

where Patm, the atmospheric pressure, is the sum of the partial pressures of all of the atmospheric gases:

Patm = PN2 + PO2 + PH2O + PCO2 = 760 mm Hg

The pressure of the atmosphere at sea level is 760 mm Hg. Therefore, the partial pressure of oxygen is

PO2 = 760 mm Hg × 0.21 = 160 mm Hg

and for carbon dioxide:

PCO2 = 760 mm Hg × 0.0004 = 0.3 mm Hg

At high altitudes, Patm decreases but the percent composition of the air does not change; each partial pressure falls only because Patm falls.

When the air mixture reaches the lung, it has been humidified. The pressure of the water vapor in the lung does not change the total pressure of the air, but it must be included in the partial pressure calculation. For this calculation, the water vapor pressure (47 mm Hg) is subtracted from the atmospheric pressure:

760 mm Hg − 47 mm Hg = 713 mm Hg

and the partial pressure of oxygen becomes

(760 mm Hg − 47 mm Hg) × 0.21 = 150 mm Hg

These pressures determine the gas exchange, or the flow of gas, in the system. Oxygen and carbon dioxide flow according to their pressure gradients, from high to low. Therefore, understanding the partial pressure of each gas aids in understanding how gases move in the respiratory system.

Gas Exchange across the Alveoli
In the body, oxygen is used by cells of the body's tissues and carbon dioxide is produced as a waste product. The ratio of carbon dioxide production to oxygen consumption is the respiratory quotient (RQ). RQ varies between 0.7 and 1.0. If just glucose were used to fuel the body, the RQ would equal 1.0: one mole of carbon dioxide would be produced for every mole of oxygen consumed. Glucose, however, is not the only fuel for the body; protein and fat are also used. Because of this, less carbon dioxide is produced than oxygen is consumed, and the RQ is, on average, about 0.7 for fat and about 0.8 for protein.

The RQ is used to calculate the partial pressure of oxygen in the alveolar spaces within the lung, the alveolar PO2. Above, the partial pressure of oxygen in the inspired, humidified air was calculated to be 150 mm Hg. However, lungs never fully deflate with an exhalation; the inspired air therefore mixes with this residual air, which lowers the partial pressure of oxygen within the alveoli. This means that there is a lower concentration of oxygen in the lungs than is found in the air outside the body. Knowing the RQ, the partial pressure of oxygen in the alveoli can be calculated:

alveolar PO2 = inspired PO2 − (alveolar PCO2 / RQ)

With an RQ of 0.8 and an alveolar PCO2 of 40 mm Hg, the alveolar PO2 is

alveolar PO2 = 150 mm Hg − (40 mm Hg / 0.8) = 100 mm Hg

Notice that this pressure is less than that of the external air. Therefore, the oxygen will flow from the inspired air in the lung (PO2 = 150 mm Hg) into the bloodstream (PO2 = 100 mm Hg) (Figure 39.13).
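The calculations above translate directly into a few lines of Python. This is a minimal sketch of the arithmetic in this section; the function names are illustrative, and the values are the ones used in the text.

```python
P_ATM = 760.0   # atmospheric pressure at sea level, mm Hg
P_H2O = 47.0    # water vapor pressure of humidified airway air, mm Hg

def partial_pressure(total_pressure, fraction):
    """Each gas contributes pressure in proportion to its fraction of the mixture."""
    return total_pressure * fraction

print(round(partial_pressure(P_ATM, 0.21)))          # 160 mm Hg for O2 in dry air
print(round(partial_pressure(P_ATM, 0.0004), 1))     # 0.3 mm Hg for CO2
print(round(partial_pressure(P_ATM - P_H2O, 0.21)))  # 150 mm Hg for humidified, inspired O2

def alveolar_po2(inspired_po2, alveolar_pco2, rq):
    """Alveolar PO2 as given in the text: inspired PO2 - (alveolar PCO2 / RQ)."""
    return inspired_po2 - alveolar_pco2 / rq

print(round(alveolar_po2(150.0, 40.0, 0.8)))  # 100 mm Hg
```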
In the lungs, oxygen diffuses out of the alveoli and into the capillaries surrounding the alveoli. Oxygen (about 98 percent) binds reversibly to the respiratory pigment hemoglobin found in red blood cells (RBCs). RBCs carry oxygen to the tissues, where oxygen dissociates from the hemoglobin and diffuses into the cells of the tissues. More specifically, PO2 is higher in the alveoli (alveolar PO2 = 100 mm Hg) than blood PO2 in the capillaries (40 mm Hg). Because this pressure gradient exists, oxygen diffuses down its pressure gradient, moving out of the alveoli and entering the blood of the capillaries, where O2 binds to hemoglobin. At the same time, alveolar PCO2 (40 mm Hg) is lower than blood PCO2 (45 mm Hg), so CO2 diffuses down its pressure gradient, moving out of the capillaries and entering the alveoli. Oxygen and carbon dioxide move independently of each other; they diffuse down their own pressure gradients.

As blood leaves the lungs through the pulmonary veins, the venous PO2 = 100 mm Hg, whereas the venous PCO2 = 40 mm Hg. As blood enters the systemic capillaries, the blood will lose oxygen and gain carbon dioxide because of the pressure difference between the tissues and the blood. In the systemic capillaries, PO2 = 100 mm Hg, but in the tissue cells, PO2 = 40 mm Hg. This pressure gradient drives the diffusion of oxygen out of the capillaries and into the tissue cells. At the same time, blood PCO2 = 40 mm Hg and systemic tissue PCO2 = 45 mm Hg. The pressure gradient drives CO2 out of the tissue cells and into the capillaries. The blood returning to the lungs through the pulmonary arteries has a venous PO2 = 40 mm Hg and a PCO2 = 45 mm Hg. The blood enters the lung capillaries, where the process of exchanging gases between the capillaries and alveoli begins again (Figure 39.13).

Visual Connection
Which of the following statements is false?
- In the tissues, PO2 drops as blood passes from the arteries to the veins, while PCO2 increases.
- Blood travels from the lungs to the heart to body tissues, then back to the heart, then the lungs.
- Blood travels from the lungs to the heart to body tissues, then back to the lungs, then the heart.
- PO2 is higher in air than in the lungs.

In short, the change in partial pressure from the alveoli to the capillaries drives the oxygen into the tissues and the carbon dioxide into the blood from the tissues. The blood is then transported to the lungs, where differences in pressure in the alveoli result in the movement of carbon dioxide out of the blood into the lungs, and oxygen into the blood.

Link to Learning: Watch this video to learn how to carry out spirometry.

39.3 Breathing

Learning Objectives
By the end of this section, you will be able to:
- Describe how the structures of the lungs and thoracic cavity control the mechanics of breathing
- Explain the importance of compliance and resistance in the lungs
- Discuss problems that may arise due to a V/Q mismatch

Mammalian lungs are located in the thoracic cavity, where they are surrounded and protected by the rib cage and intercostal muscles and bound by the chest wall. The bottom of the lungs is contained by the diaphragm, a skeletal muscle that facilitates breathing. Breathing requires the coordination of the lungs, the chest wall, and, most importantly, the diaphragm.

Types of Breathing
Amphibians have evolved multiple ways of breathing.
Young amphibians, like tadpoles, use gills to breathe, and they don't leave the water. Some amphibians retain gills for life. As the tadpole grows, the gills disappear and lungs grow. These lungs are primitive and not as evolved as mammalian lungs. Adult amphibians lack a diaphragm or have a reduced one, so breathing via lungs is forced. The other means of breathing for amphibians is diffusion across the skin. To aid this diffusion, amphibian skin must remain moist.

Birds face a unique challenge with respect to breathing: They fly. Flying consumes a great amount of energy; therefore, birds require a lot of oxygen to aid their metabolic processes. Birds have evolved a respiratory system that supplies them with the oxygen needed to enable flying. Similar to mammals, birds have lungs, which are organs specialized for gas exchange. Oxygenated air, taken in during inhalation, diffuses across the surface of the lungs into the bloodstream, and carbon dioxide diffuses from the blood into the lungs and is expelled during exhalation. The details of breathing between birds and mammals differ substantially.

In addition to lungs, birds have air sacs inside their body. Air flows in one direction from the posterior air sacs to the lungs and out of the anterior air sacs. The flow of air is in the opposite direction from blood flow, and gas exchange takes place much more efficiently. This type of breathing enables birds to obtain the requisite oxygen, even at higher altitudes where the oxygen concentration is low. This directionality of airflow requires two cycles of air intake and exhalation to completely get the air out of the lungs.

Evolution Connection
Avian Respiration
Birds have evolved a respiratory system that enables them to fly. Flying is a high-energy process and requires a lot of oxygen. Furthermore, many birds fly at high altitudes where the concentration of oxygen is low. How did birds evolve a respiratory system that is so unique? Decades of research by paleontologists have shown that birds evolved from theropods, meat-eating dinosaurs (Figure 39.14). In fact, fossil evidence shows that meat-eating dinosaurs that lived more than 100 million years ago had a similar flow-through respiratory system with lungs and air sacs. Archaeopteryx and Xiaotingia, for example, were flying dinosaurs and are believed to be early precursors of birds. Most of us consider dinosaurs to be extinct; however, modern birds are descendants of avian dinosaurs. The respiratory system of modern birds has been evolving for hundreds of millions of years.

All mammals have lungs that are the main organs for breathing. Lung capacity has evolved to support the animal's activities. During inhalation, the lungs expand with air, and oxygen diffuses across the lung's surface and enters the bloodstream. During exhalation, the lungs expel air and lung volume decreases. In the next few sections, the process of human breathing will be explained.

The Mechanics of Human Breathing
Boyle's Law is the gas law that states that in a closed space, pressure and volume are inversely related. As volume decreases, pressure increases and vice versa (Figure 39.15). The relationship between gas pressure and volume helps to explain the mechanics of breathing. There is always a slightly negative pressure within the thoracic cavity, which aids in keeping the airways of the lungs open. During inhalation, volume increases as a result of contraction of the diaphragm, and pressure decreases (according to Boyle's Law).
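Boyle's law can be made concrete with a small calculation. The sketch below is illustrative only; the volumes and pressures are hypothetical round numbers rather than physiological measurements.

```python
# Boyle's law for a fixed amount of gas at constant temperature: P1*V1 = P2*V2.

def pressure_after_volume_change(p1, v1, v2):
    """Solve P1*V1 = P2*V2 for the new pressure P2."""
    return p1 * v1 / v2

p1, v1 = 760.0, 2.0   # initial pressure (mm Hg) and volume (L), hypothetical
v2 = 2.1              # volume after the diaphragm contracts and the cavity expands

print(round(pressure_after_volume_change(p1, v1, v2), 1))
# 723.8 mm Hg: a larger volume means a lower pressure, so air flows in
# from the higher-pressure atmosphere until the pressures equalize.
```

This inverse relationship is exactly what inhalation exploits, as described next.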
This decrease in pressure in the thoracic cavity relative to the environment makes the pressure in the cavity lower than atmospheric pressure (Figure 39.16a). Because of this drop in pressure, air rushes into the respiratory passages. To increase the volume of the lungs, the chest wall expands. This results from the contraction of the intercostal muscles, the muscles that are connected to the rib cage. Lung volume expands because the diaphragm contracts and the intercostal muscles contract, thus expanding the thoracic cavity. This increase in the volume of the thoracic cavity lowers pressure compared to the atmosphere, so air rushes into the lungs, thus increasing their volume. The resulting increase in volume is largely attributed to an increase in alveolar space, because the bronchioles and bronchi are stiff structures that do not change in size.

The chest wall expands out and away from the lungs. The lungs are elastic; therefore, when air fills the lungs, the elastic recoil within the tissues of the lung exerts pressure back toward the interior of the lungs. These outward and inward forces compete to inflate and deflate the lung with every breath. Upon exhalation, the lungs recoil to force the air out of the lungs, and the intercostal muscles relax, returning the chest wall to its original position (Figure 39.16b). The diaphragm also relaxes and moves higher into the thoracic cavity. This increases the pressure within the thoracic cavity relative to the environment, and air rushes out of the lungs. The movement of air out of the lungs is a passive event. No muscles are contracting to expel the air.

Each lung is surrounded by an invaginated sac. The layer of tissue that covers the lung and dips into spaces is called the visceral pleura. A second layer of parietal pleura lines the interior of the thorax (Figure 39.17). The space between these layers, the intrapleural space, contains a small amount of fluid that protects the tissue and reduces the friction generated from rubbing the tissue layers together as the lungs contract and relax. Pleurisy results when these layers of tissue become inflamed; it is painful because the inflammation increases the pressure within the thoracic cavity and reduces the volume of the lung.

Link to Learning: View how Boyle's Law is related to breathing and watch a video on Boyle's Law.

The Work of Breathing
The number of breaths per minute is the respiratory rate. On average, under non-exertion conditions, the human respiratory rate is 12–15 breaths/minute. The respiratory rate contributes to the alveolar ventilation, or how much air moves into and out of the alveoli. Alveolar ventilation prevents carbon dioxide buildup in the alveoli. There are two ways to keep the alveolar ventilation constant: increase the respiratory rate while decreasing the tidal volume of air per breath (shallow breathing), or decrease the respiratory rate while increasing the tidal volume per breath (see the sketch below). In either case, the ventilation remains the same, but the work done and the type of work needed are quite different. Both tidal volume and respiratory rate are closely regulated when oxygen demand increases. There are two types of work conducted during respiration: flow-resistive and elastic work. Flow-resistive work refers to the work of the alveoli and tissues in the lung, whereas elastic work refers to the work of the intercostal muscles, chest wall, and diaphragm.
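The rate-versus-depth trade-off just described can be checked with simple arithmetic. The sketch below uses hypothetical round numbers; it tracks total ventilation only and ignores dead space and other refinements.

```python
# Total ventilation per minute = breaths per minute * liters per breath.

def minute_ventilation(rate_bpm, tidal_volume_l):
    return rate_bpm * tidal_volume_l

shallow = minute_ventilation(rate_bpm=24, tidal_volume_l=0.25)  # fast, shallow breaths
deep    = minute_ventilation(rate_bpm=12, tidal_volume_l=0.5)   # slow, deep breaths

print(shallow, deep)  # 6.0 6.0 -> the ventilation is the same,
                      # but the work done and the type of work needed differ
```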
Increasing the respiration rate increases the flow-resistive work of the airways and decreases the elastic work of the muscles. Decreasing the respiratory rate reverses the type of work required.

Surfactant
The air-tissue/water interface of the alveoli has a high surface tension. This surface tension is similar to the surface tension of water at the liquid-air interface of a water droplet that results in the bonding of the water molecules together. Surfactant is a complex mixture of phospholipids and lipoproteins that works to reduce the surface tension that exists between the alveoli tissue and the air found within the alveoli. By lowering the surface tension of the alveolar fluid, it reduces the tendency of alveoli to collapse. Surfactant works like a detergent to reduce the surface tension and allows for easier inflation of the airways. When a balloon is first inflated, it takes a large amount of effort to stretch the material and start to inflate the balloon. If a little bit of detergent were applied to the interior of the balloon, then the amount of effort or work needed to begin to inflate the balloon would decrease, and it would become much easier to start blowing it up. This same principle applies to the airways. A small amount of surfactant applied to the airway tissues reduces the effort or work needed to inflate those airways. Babies born prematurely sometimes do not produce enough surfactant. As a result, they suffer from respiratory distress syndrome, because it requires more effort to inflate their lungs. Surfactant is also important for preventing the collapse of small alveoli relative to large alveoli.

Lung Resistance and Compliance
Pulmonary diseases reduce the rate of gas exchange into and out of the lungs. Two main causes of decreased gas exchange are changes in compliance (how elastic the lung is) and changes in resistance (how much obstruction exists in the airways). A change in either can dramatically alter breathing and the ability to take in oxygen and release carbon dioxide.

Examples of restrictive diseases are respiratory distress syndrome and pulmonary fibrosis. In both diseases, the airways are less compliant because they are stiff or fibrotic. There is a decrease in compliance because the lung tissue cannot bend and move. In these types of restrictive diseases, the intrapleural pressure is more positive and the airways collapse upon exhalation, which traps air in the lungs. Forced vital capacity (FVC), the amount of air that can be forcibly exhaled after taking the deepest breath possible, is much lower than in normal patients, and the time it takes to exhale most of the air is greatly prolonged (Figure 39.18). A patient suffering from these diseases cannot exhale the normal amount of air.

Obstructive diseases and conditions include emphysema, asthma, and pulmonary edema. In emphysema, which mostly arises from smoking tobacco, the walls of the alveoli are destroyed, decreasing the surface area for gas exchange. The overall compliance of the lungs is increased, because as the alveolar walls are damaged, lung elastic recoil decreases due to a loss of elastic fibers, and more air is trapped in the lungs at the end of exhalation. Asthma is a disease in which inflammation is triggered by environmental factors. Inflammation obstructs the airways. The obstruction may be due to edema (fluid accumulation), smooth muscle spasms in the walls of the bronchioles, increased mucus secretion, damage to the epithelia of the airways, or a combination of these events.
Those with asthma or edema experience increased occlusion from increased inflammation of the airways. This tends to block the airways, preventing the proper movement of gases (Figure 39.18). Those with obstructive diseases have large volumes of air trapped after exhalation and breathe at a very high lung volume to compensate for the lack of airway recruitment.

Dead Space: V/Q Mismatch
Pulmonary circulation pressure is very low compared to that of the systemic circulation. It is also independent of cardiac output. This is because of a phenomenon called recruitment, which is the process of opening airways that normally remain closed when cardiac output increases. As cardiac output increases, the number of capillaries and arteries that are perfused (filled with blood) increases. These capillaries and arteries are not always in use but are ready if needed. At times, however, there is a mismatch between the amount of air (ventilation, V) and the amount of blood (perfusion, Q) in the lungs. This is referred to as ventilation/perfusion (V/Q) mismatch.

There are two types of V/Q mismatch. Both produce dead space, regions of broken down or blocked lung tissue. Dead spaces can severely impact breathing, because they reduce the surface area available for gas diffusion. As a result, the amount of oxygen in the blood decreases, whereas the carbon dioxide level increases. Dead space is created when no ventilation and/or perfusion takes place. Anatomical dead space, or anatomical shunt, arises from an anatomical failure, while physiological dead space, or physiological shunt, arises from a functional impairment of the lung or arteries.

An example of an anatomical shunt is the effect of gravity on the lungs. The lung is particularly susceptible to changes in the magnitude and direction of gravitational forces. When someone is standing or sitting upright, the pleural pressure gradient leads to increased ventilation further down in the lung. As a result, the intrapleural pressure is more negative at the base of the lung than at the top, and more air fills the bottom of the lung than the top. Likewise, it takes less energy to pump blood to the bottom of the lung than to the top when in a prone position. Perfusion of the lung is not uniform while standing or sitting. This is a result of hydrostatic forces combined with the effect of airway pressure. An anatomical shunt develops because the ventilation of the airways does not match the perfusion of the arteries surrounding those airways. As a result, the rate of gas exchange is reduced. Note that this does not occur when lying down, because in this position, gravity does not preferentially pull the bottom of the lung down.

A physiological shunt can develop if there is infection or edema in the lung that obstructs an area. This will decrease ventilation but not affect perfusion; therefore, the V/Q ratio changes and gas exchange is affected. The lung can compensate for these mismatches in ventilation and perfusion. If ventilation is greater than perfusion, the arterioles dilate and the bronchioles constrict. This increases perfusion and reduces ventilation. Likewise, if ventilation is less than perfusion, the arterioles constrict and the bronchioles dilate to correct the imbalance.

Link to Learning: View the mechanics of breathing.
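The V/Q ratio itself is easy to illustrate. In the sketch below the flow values are hypothetical, and the classification cutoffs are crude simplifications for illustration, not clinical thresholds.

```python
# Ventilation/perfusion matching: V = alveolar ventilation (L/min),
# Q = perfusion, i.e., pulmonary blood flow (L/min).

def describe_vq(ventilation, perfusion):
    """Classify a V/Q ratio using illustrative cutoffs."""
    ratio = ventilation / perfusion
    if ratio < 0.8:
        return ratio, "low V/Q: perfused but under-ventilated (e.g., obstruction or edema)"
    if ratio > 1.2:
        return ratio, "high V/Q: ventilated but under-perfused (wasted ventilation)"
    return ratio, "well matched"

print(describe_vq(4.2, 5.0))  # ratio ~0.84 -> well matched
print(describe_vq(2.0, 5.0))  # ratio 0.4  -> low V/Q, as in a physiological shunt
```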
39.4 Transport of Gases in Human Bodily Fluids

Learning Objectives
By the end of this section, you will be able to:
- Describe how oxygen is bound to hemoglobin and transported to body tissues
- Explain how carbon dioxide is transported from body tissues to the lungs

Once the oxygen diffuses across the alveoli, it enters the bloodstream and is transported to the tissues where it is unloaded, and carbon dioxide diffuses out of the blood and into the alveoli to be expelled from the body. Although gas exchange is a continuous process, the oxygen and carbon dioxide are transported by different mechanisms.

Transport of Oxygen in the Blood
Although oxygen dissolves in blood, only a small amount of oxygen is transported this way. Only 1.5 percent of oxygen in the blood is dissolved directly into the blood itself. Most oxygen (98.5 percent) is bound to a protein called hemoglobin and carried to the tissues.

Hemoglobin
Hemoglobin, or Hb, is a protein molecule found in red blood cells (erythrocytes) made of four subunits: two alpha subunits and two beta subunits (Figure 39.19). Each subunit surrounds a central heme group that contains iron and binds one oxygen molecule, allowing each hemoglobin molecule to bind four oxygen molecules. Molecules with more oxygen bound to the heme groups are brighter red. As a result, oxygenated arterial blood, where the Hb is carrying four oxygen molecules, is bright red, while venous blood that is deoxygenated is darker red.

It is easier to bind a second and third oxygen molecule to Hb than the first molecule. This is because the hemoglobin molecule changes its shape, or conformation, as oxygen binds. The fourth oxygen is then more difficult to bind. The binding of oxygen to hemoglobin can be plotted as a function of the partial pressure of oxygen in the blood (x-axis) versus the relative Hb-oxygen saturation (y-axis). The resulting graph, an oxygen dissociation curve, is sigmoidal, or S-shaped (Figure 39.20). As the partial pressure of oxygen increases, the hemoglobin becomes increasingly saturated with oxygen.

Visual Connection
The kidneys are responsible for removing excess H+ ions from the blood. If the kidneys fail, what would happen to blood pH and to hemoglobin affinity for oxygen?

Factors That Affect Oxygen Binding
The oxygen-carrying capacity of hemoglobin determines how much oxygen is carried in the blood. In addition to PO2, other environmental factors and diseases can affect oxygen-carrying capacity and delivery. Carbon dioxide levels, blood pH, and body temperature affect oxygen-carrying capacity (Figure 39.20). When carbon dioxide is in the blood, it reacts with water to form bicarbonate (HCO3−) and hydrogen ions (H+). As the level of carbon dioxide in the blood increases, more H+ is produced and the pH decreases. This increase in carbon dioxide and subsequent decrease in pH reduce the affinity of hemoglobin for oxygen. The oxygen dissociates from the Hb molecule, shifting the oxygen dissociation curve to the right. Therefore, more oxygen is needed to reach the same hemoglobin saturation level as when the pH was higher. A similar shift in the curve also results from an increase in body temperature. Increased temperature, such as from increased activity of skeletal muscle, causes the affinity of hemoglobin for oxygen to be reduced.

Diseases like sickle cell anemia and thalassemia decrease the blood's ability to deliver oxygen to tissues and its oxygen-carrying capacity.
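The sigmoidal shape of the oxygen dissociation curve is commonly modeled with the Hill equation. This model and the parameter values below (a P50 near 26 mm Hg and a Hill coefficient near 2.7 for adult hemoglobin) are standard textbook approximations, not values given in this chapter.

```python
# Hill-equation sketch of the sigmoidal Hb-O2 dissociation curve:
#   saturation = PO2^n / (P50^n + PO2^n)

def hb_saturation(po2, p50=26.0, n=2.7):
    """Fractional hemoglobin saturation as a function of PO2 (mm Hg)."""
    return po2**n / (p50**n + po2**n)

for po2 in (10, 26, 40, 100):
    print(po2, round(hb_saturation(po2), 2))
# 10 -> 0.07, 26 -> 0.5, 40 -> 0.76, 100 -> 0.97
# A drop in pH or a rise in temperature shifts the curve right,
# which in this model corresponds to a higher effective P50.
```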
In sickle cell anemia, the red blood cells are crescent-shaped, elongated, and stiffened, reducing their ability to deliver oxygen (Figure 39.21). In this form, red blood cells cannot pass through the capillaries. This is painful when it occurs. Thalassemia is a rare genetic disease caused by a defect in either the alpha or the beta subunit of Hb. Patients with thalassemia produce a high number of red blood cells, but these cells have lower-than-normal levels of hemoglobin. Therefore, the oxygen-carrying capacity is diminished.

Transport of Carbon Dioxide in the Blood
Carbon dioxide molecules are transported in the blood from body tissues to the lungs by one of three methods: dissolving directly in the blood, binding to hemoglobin, or being carried as a bicarbonate ion. Several properties of carbon dioxide in the blood affect its transport. First, carbon dioxide is more soluble in blood than oxygen. About 5 to 7 percent of all carbon dioxide is dissolved in the plasma. Second, carbon dioxide can bind to plasma proteins or can enter red blood cells and bind to hemoglobin. This form transports about 10 percent of the carbon dioxide. When carbon dioxide binds to hemoglobin, a molecule called carbaminohemoglobin is formed. Binding of carbon dioxide to hemoglobin is reversible. Therefore, when it reaches the lungs, the carbon dioxide can freely dissociate from the hemoglobin and be expelled from the body.

Third, the majority of carbon dioxide molecules (85 percent) are carried as part of the bicarbonate buffer system. In this system, carbon dioxide diffuses into the red blood cells. Carbonic anhydrase (CA) within the red blood cells quickly converts the carbon dioxide into carbonic acid (H2CO3). Carbonic acid is an unstable intermediate molecule that immediately dissociates into bicarbonate ions (HCO3−) and hydrogen ions (H+). Since carbon dioxide is quickly converted into bicarbonate ions, this reaction allows for the continued uptake of carbon dioxide into the blood down its concentration gradient. It also results in the production of H+ ions. If too much H+ is produced, it can alter blood pH. However, hemoglobin binds to the free H+ ions and thus limits shifts in pH. The newly synthesized bicarbonate ion is transported out of the red blood cell into the liquid component of the blood in exchange for a chloride ion (Cl−); this is called the chloride shift. When the blood reaches the lungs, the bicarbonate ion is transported back into the red blood cell in exchange for the chloride ion. The H+ ion dissociates from the hemoglobin and binds to the bicarbonate ion. This produces the carbonic acid intermediate, which is converted back into carbon dioxide through the enzymatic action of CA. The carbon dioxide produced is expelled through the lungs during exhalation.

CO2 + H2O ↔ H2CO3 (carbonic acid) ↔ HCO3− (bicarbonate) + H+

The benefit of the bicarbonate buffer system is that carbon dioxide is "soaked up" into the blood with little change to the pH of the system. This is important because it takes only a small change in the overall pH of the body for severe injury or death to result.
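The three transport routes and their approximate shares can be summarized in a few lines. In the sketch below, the percentages are those quoted in the text (they are approximations and do not sum to exactly 100 percent), and the total CO2 output figure is a hypothetical round number.

```python
co2_output_ml_per_min = 200.0  # hypothetical resting CO2 production

routes = {
    "dissolved in plasma":               0.07,  # text: about 5 to 7 percent
    "bound to Hb (carbaminohemoglobin)": 0.10,  # text: about 10 percent
    "as bicarbonate (buffer system)":    0.85,  # text: about 85 percent
}

for route, share in routes.items():
    print(f"{route}: ~{share * co2_output_ml_per_min:.0f} mL/min")
```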
The presence of this bicarbonate buffer system also allows people to travel and live at high altitudes: When the partial pressures of oxygen and carbon dioxide change at high altitudes, the bicarbonate buffer system adjusts to regulate carbon dioxide while maintaining the correct pH in the body.

Carbon Monoxide Poisoning
While carbon dioxide can readily associate with and dissociate from hemoglobin, other molecules such as carbon monoxide (CO) cannot. Carbon monoxide has a greater affinity for hemoglobin than oxygen. Therefore, when carbon monoxide is present, it binds to hemoglobin preferentially over oxygen. As a result, oxygen cannot bind to hemoglobin, so very little oxygen is transported through the body (Figure 39.22). Carbon monoxide is a colorless, odorless gas and is therefore difficult to detect. It is produced by gas-powered vehicles and tools. Carbon monoxide can cause headaches, confusion, and nausea; long-term exposure can cause brain damage or death. Administering 100 percent (pure) oxygen is the usual treatment for carbon monoxide poisoning, because it speeds up the separation of carbon monoxide from hemoglobin.
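How decisively CO outcompetes O2 for hemoglobin can be quantified with the Haldane relationship, a standard result that is not part of this chapter's text; the affinity ratio M used below is an approximate literature value.

```python
# Haldane relationship: [HbCO] / [HbO2] = M * (PCO / PO2),
# where M (~220 for human hemoglobin) is an approximate literature value.

M = 220.0

def hbco_to_hbo2_ratio(p_co, p_o2):
    return M * p_co / p_o2

# Even a trace of CO (0.5 mm Hg) against a typical alveolar PO2 (~100 mm Hg)
# occupies roughly as much hemoglobin as oxygen does:
print(round(hbco_to_hbo2_ratio(0.5, 100.0), 2))  # 1.1
```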
Chapter Objectives
After studying this chapter, you will be able to:
- Describe the processes involved in anabolic and catabolic reactions
- List and describe the steps necessary for carbohydrate, lipid, and protein metabolism
- Explain the processes that regulate glucose levels during the absorptive and postabsorptive states
- Explain how metabolism is essential to maintaining body temperature (thermoregulation)
- Summarize the importance of vitamins and minerals in the diet

Introduction
Eating is essential to life. Many of us look to eating not only as a necessity, but also as a pleasure. You may have been told since childhood to start the day with a good breakfast to give you the energy to get through most of the day. You most likely have heard about the importance of a balanced diet, with plenty of fruits and vegetables. But what does this all mean to your body and the physiological processes it carries out each day? You need to absorb a range of nutrients so that your cells have the building blocks for metabolic processes that release the energy for the cells to carry out their daily jobs, to manufacture new proteins, cells, and body parts, and to recycle materials in the cell.

This chapter will take you through some of the chemical reactions essential to life, the sum of which is referred to as metabolism. The focus of these discussions will be anabolic reactions and catabolic reactions. You will examine the various chemical reactions that are important to sustain life, including why you must have oxygen, how mitochondria transfer energy, and the importance of certain "metabolic" hormones and vitamins.

Metabolism varies, depending on age, gender, activity level, fuel consumption, and lean body mass. Your own metabolic rate fluctuates throughout life. By modifying your diet and exercise regimen, you can increase both lean body mass and metabolic rate. Factors affecting metabolism also play important roles in controlling muscle mass. Aging is known to decrease the metabolic rate by as much as 5 percent per year. Additionally, because men tend to have more lean muscle mass than women, their basal metabolic rate (metabolic rate at rest) is higher; therefore, men tend to burn more calories than women do. Lastly, an individual's inherent metabolic rate is a function of the proteins and enzymes derived from their genetic background. Thus, your genes play a big role in your metabolism. Nonetheless, each person's body engages in the same overall metabolic processes.
[ { "answer": { "ans_choice": 2, "ans_text": "catabolic reaction" }, "bloom": "1", "hl_context": "Of the four major macromolecular groups ( carbohydrates , lipids , proteins , and nucleic acids ) that are processed by digestion , carbohydrates are considered the most common source of energy to fuel the body . <hl> They take the form of either complex carbohydrates , polysaccharides like starch and glycogen , or simple sugars ( monosaccharides ) like glucose and fructose . <hl> <hl> Sugar catabolism breaks polysaccharides down into their individual monosaccharides . <hl> Among the monosaccharides , glucose is the most common fuel for ATP production in cells , and as such , there are a number of endocrine control mechanisms to regulate glucose concentration in the bloodstream . Excess glucose is either stored as an energy reserve in the liver and skeletal muscles as the complex polymer glycogen , or it is converted into fat ( triglyceride ) in adipose cells ( adipocytes ) . Metabolic processes are constantly taking place in the body . Metabolism is the sum of all of the chemical reactions that are involved in catabolism and anabolism . <hl> The reactions governing the breakdown of food to obtain energy are called catabolic reactions . <hl> <hl> Conversely , anabolic reactions use the energy produced by catabolic reactions to synthesize larger molecules from smaller ones , such as when the body forms proteins by stringing together amino acids . <hl> Both sets of reactions are critical to maintaining life .", "hl_sentences": "They take the form of either complex carbohydrates , polysaccharides like starch and glycogen , or simple sugars ( monosaccharides ) like glucose and fructose . Sugar catabolism breaks polysaccharides down into their individual monosaccharides . The reactions governing the breakdown of food to obtain energy are called catabolic reactions . Conversely , anabolic reactions use the energy produced by catabolic reactions to synthesize larger molecules from smaller ones , such as when the body forms proteins by stringing together amino acids .", "question": { "cloze_format": "The reaction in which a monosaccharide is formed from a polysaccharide is called a(n) ___.", "normal_format": "A monosaccharide is formed from a polysaccharide in what kind of reaction?", "question_choices": [ "oxidation–reduction reaction", "anabolic reaction", "catabolic reaction", "biosynthetic reaction" ], "question_id": "fs-id1983978", "question_text": "A monosaccharide is formed from a polysaccharide in what kind of reaction?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "reduced" }, "bloom": "1", "hl_context": "Oxidation-reduction reactions are catalyzed by enzymes that trigger the removal of hydrogen atoms . Coenzymes work with enzymes and accept hydrogen atoms . <hl> The two most common coenzymes of oxidation-reduction reactions are nicotinamide adenine dinucleotide ( NAD ) and flavin adenine dinucleotide ( FAD ) . <hl> <hl> Their respective reduced coenzymes are NADH and FADH 2 , which are energy-containing molecules used to transfer energy during the creation of ATP . <hl> 24.2 Carbohydrate Metabolism Learning Objectives By the end of this section , you will be able to :", "hl_sentences": "The two most common coenzymes of oxidation-reduction reactions are nicotinamide adenine dinucleotide ( NAD ) and flavin adenine dinucleotide ( FAD ) . 
Their respective reduced coenzymes are NADH and FADH 2 , which are energy-containing molecules used to transfer energy during the creation of ATP .", "question": { "cloze_format": "When NAD becomes NADH, the coenzyme has been ________.", "normal_format": "When NAD becomes NADH, the coenzyme has been what?", "question_choices": [ "reduced", "oxidized", "metabolized", "hydrolyzed" ], "question_id": "fs-id2601735", "question_text": "When NAD becomes NADH, the coenzyme has been ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "removing a phosphate group from ATP" }, "bloom": "1", "hl_context": "Structurally , ATP molecules consist of an adenine , a ribose , and three phosphate groups ( Figure 24.2 ) . <hl> The chemical bond between the second and third phosphate groups , termed a high-energy bond , represents the greatest source of energy in a cell . <hl> <hl> It is the first bond that catabolic enzymes break when cells require energy to do work . <hl> The products of this reaction are a molecule of adenosine diphosphate ( ADP ) and a lone phosphate group ( P i ) . ATP , ADP , and P i are constantly being cycled through reactions that build ATP and store energy , and reactions that break down ATP and release energy .", "hl_sentences": "The chemical bond between the second and third phosphate groups , termed a high-energy bond , represents the greatest source of energy in a cell . It is the first bond that catabolic enzymes break when cells require energy to do work .", "question": { "cloze_format": "Anabolic reactions use energy by ________.", "normal_format": "What do anabolic reactions use energy by?", "question_choices": [ "turning ADP into ATP", "removing a phosphate group from ATP", "producing heat", "breaking down molecules into smaller parts" ], "question_id": "fs-id1374818", "question_text": "Anabolic reactions use energy by ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "pyruvate, oxygen, lactate" }, "bloom": "1", "hl_context": "<hl> Glycolysis begins with the phosphorylation of glucose by hexokinase to form glucose - 6 - phosphate . <hl> This step uses one ATP , which is the donor of the phosphate group . Under the action of phosphofructokinase , glucose - 6 - phosphate is converted into fructose - 6 - phosphate . At this point , a second ATP donates its phosphate group , forming fructose -1,6- bisphosphate . This six-carbon sugar is split to form two phosphorylated three-carbon molecules , glyceraldehyde - 3 - phosphate and dihydroxyacetone phosphate , which are both converted into glyceraldehyde - 3 - phosphate . The glyceraldehyde - 3 - phosphate is further phosphorylated with groups donated by dihydrogen phosphate present in the cell to form the three-carbon molecule 1,3- bisphosphoglycerate . The energy of this reaction comes from the oxidation of ( removal of electrons from ) glyceraldehyde - 3 - phosphate . <hl> In a series of reactions leading to pyruvate , the two phosphate groups are then transferred to two ADPs to form two ATPs . <hl> <hl> Thus , glycolysis uses two ATPs but generates four ATPs , yielding a net gain of two ATPs and two molecules of pyruvate . <hl> <hl> In the presence of oxygen , pyruvate continues on to the Krebs cycle ( also called the citric acid cycle or tricarboxylic acid cycle ( TCA ) , where additional energy is extracted and passed on . <hl>", "hl_sentences": "Glycolysis begins with the phosphorylation of glucose by hexokinase to form glucose - 6 - phosphate . 
In a series of reactions leading to pyruvate , the two phosphate groups are then transferred to two ADPs to form two ATPs . Thus , glycolysis uses two ATPs but generates four ATPs , yielding a net gain of two ATPs and two molecules of pyruvate . In the presence of oxygen , pyruvate continues on to the Krebs cycle ( also called the citric acid cycle or tricarboxylic acid cycle ( TCA ) , where additional energy is extracted and passed on .", "question": { "cloze_format": "Glycolysis results in the production of two ________ molecules from a single molecule of glucose. In the absence of ________, the end product of glycolysis is ________.", "normal_format": "Glycolysis results in the production of which two molecules from a single molecule of glucose? In which absence, what is the end product of glycolysis?", "question_choices": [ "acetyl CoA, pyruvate, lactate", "ATP, carbon, pyruvate", "pyruvate, oxygen, lactate", "pyruvate, carbon, acetyl CoA" ], "question_id": "fs-id2511765", "question_text": "Glycolysis results in the production of two ________ molecules from a single molecule of glucose. In the absence of ________, the end product of glycolysis is ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "acetyl CoA; FADH2; NADH" }, "bloom": null, "hl_context": "<hl> To start the Krebs cycle , citrate synthase combines acetyl CoA and oxaloacetate to form a six-carbon citrate molecule ; CoA is subsequently released and can combine with another pyruvate molecule to begin the cycle again . <hl> The aconitase enzyme converts citrate into isocitrate . In two successive steps of oxidative decarboxylation , two molecules of CO 2 and two NADH molecules are produced when isocitrate dehydrogenase converts isocitrate into the five-carbon α-ketoglutarate , which is then catalyzed and converted into the four-carbon succinyl CoA by α-ketoglutarate dehydrogenase . The enzyme succinyl CoA dehydrogenase then converts succinyl CoA into succinate and forms the high-energy molecule GTP , which transfers its energy to ADP to produce ATP . Succinate dehydrogenase then converts succinate into fumarate , forming a molecule of FADH 2 . Fumarase then converts fumarate into malate , which malate dehydrogenase then converts back into oxaloacetate while reducing NAD + to NADH . Oxaloacetate is then ready to combine with the next acetyl CoA to start the Krebs cycle again ( see Figure 24.7 ) . <hl> For each turn of the cycle , three NADH , one ATP ( through GTP ) , and one FADH 2 are created . <hl> Each carbon of pyruvate is converted into CO 2 , which is released as a byproduct of oxidative ( aerobic ) respiration .", "hl_sentences": "To start the Krebs cycle , citrate synthase combines acetyl CoA and oxaloacetate to form a six-carbon citrate molecule ; CoA is subsequently released and can combine with another pyruvate molecule to begin the cycle again . For each turn of the cycle , three NADH , one ATP ( through GTP ) , and one FADH 2 are created .", "question": { "cloze_format": "The Krebs cycle converts ________ through a cycle of reactions. In the process, ATP, ________, and ________ are produced.", "normal_format": "What Kerbs cycle converts through a cycle of reactions?", "question_choices": [ "acetyl CoA; FAD, NAD", "acetyl CoA; FADH2; NADH", "pyruvate; NAD; FADH2", "pyruvate; oxygen; oxaloacetate" ], "question_id": "fs-id2889034", "question_text": "The Krebs cycle converts ________ through a cycle of reactions. In the process, ATP, ________, and ________ are produced." 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "all of the above" }, "bloom": "1", "hl_context": "<hl> Fats ( or triglycerides ) within the body are ingested as food or synthesized by adipocytes or hepatocytes from carbohydrate precursors ( Figure 24.11 ) . <hl> <hl> Lipid metabolism entails the oxidation of fatty acids to either generate energy or synthesize new lipids from smaller constituent molecules . <hl> <hl> Lipid metabolism is associated with carbohydrate metabolism , as products of glucose ( such as acetyl CoA ) can be converted into lipids . <hl>", "hl_sentences": "Fats ( or triglycerides ) within the body are ingested as food or synthesized by adipocytes or hepatocytes from carbohydrate precursors ( Figure 24.11 ) . Lipid metabolism entails the oxidation of fatty acids to either generate energy or synthesize new lipids from smaller constituent molecules . Lipid metabolism is associated with carbohydrate metabolism , as products of glucose ( such as acetyl CoA ) can be converted into lipids .", "question": { "cloze_format": "Lipids in the diet can be ________.", "normal_format": "What can lipids in the diet be?", "question_choices": [ "broken down into energy for the body", "stored as triglycerides for later use", "converted into acetyl CoA", "all of the above" ], "question_id": "fs-id1548558", "question_text": "Lipids in the diet can be ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "bile salts" }, "bloom": null, "hl_context": "<hl> Together , the pancreatic lipases and bile salts break down triglycerides into free fatty acids . <hl> <hl> These fatty acids can be transported across the intestinal membrane . <hl> However , once they cross the membrane , they are recombined to again form triglyceride molecules . Within the intestinal cells , these triglycerides are packaged along with cholesterol molecules in phospholipid vesicles called chylomicrons ( Figure 24.12 ) . The chylomicrons enable fats and cholesterol to move within the aqueous environment of your lymphatic and circulatory systems . Chylomicrons leave the enterocytes by exocytosis and enter the lymphatic system via lacteals in the villi of the intestine . From the lymphatic system , the chylomicrons are transported to the circulatory system . Once in the circulation , they can either go to the liver or be stored in fat cells ( adipocytes ) that comprise adipose ( fat ) tissue found throughout the body . Lipid metabolism begins in the intestine where ingested triglycerides are broken down into smaller chain fatty acids and subsequently into monoglyceride molecules ( see Figure 24.11 b ) by pancreatic lipases , enzymes that break down fats after they are emulsified by bile salts . When food reaches the small intestine in the form of chyme , a digestive hormone called cholecystokinin ( CCK ) is released by intestinal cells in the intestinal mucosa . <hl> CCK stimulates the release of pancreatic lipase from the pancreas and stimulates the contraction of the gallbladder to release stored bile salts into the intestine . <hl> CCK also travels to the brain , where it can act as a hunger suppressant .", "hl_sentences": "Together , the pancreatic lipases and bile salts break down triglycerides into free fatty acids . These fatty acids can be transported across the intestinal membrane . 
CCK stimulates the release of pancreatic lipase from the pancreas and stimulates the contraction of the gallbladder to release stored bile salts into the intestine .", "question": { "cloze_format": "The gallbladder provides ________ that aid(s) in transport of lipids across the intestinal membrane.", "normal_format": "What does the gallbladder provide that aid(s) in transport of lipids across the intestinal membrane?", "question_choices": [ "lipases", "cholesterol", "proteins", "bile salts" ], "question_id": "fs-id918427", "question_text": "The gallbladder provides ________ that aid(s) in transport of lipids across the intestinal membrane." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "they cannot move easily in the blood stream because they are fat based, while the blood is water based" }, "bloom": "3", "hl_context": "Together , the pancreatic lipases and bile salts break down triglycerides into free fatty acids . These fatty acids can be transported across the intestinal membrane . However , once they cross the membrane , they are recombined to again form triglyceride molecules . <hl> Within the intestinal cells , these triglycerides are packaged along with cholesterol molecules in phospholipid vesicles called chylomicrons ( Figure 24.12 ) . <hl> <hl> The chylomicrons enable fats and cholesterol to move within the aqueous environment of your lymphatic and circulatory systems . <hl> Chylomicrons leave the enterocytes by exocytosis and enter the lymphatic system via lacteals in the villi of the intestine . From the lymphatic system , the chylomicrons are transported to the circulatory system . Once in the circulation , they can either go to the liver or be stored in fat cells ( adipocytes ) that comprise adipose ( fat ) tissue found throughout the body .", "hl_sentences": "Within the intestinal cells , these triglycerides are packaged along with cholesterol molecules in phospholipid vesicles called chylomicrons ( Figure 24.12 ) . The chylomicrons enable fats and cholesterol to move within the aqueous environment of your lymphatic and circulatory systems .", "question": { "cloze_format": "Triglycerides are transported by chylomicrons because ________.", "normal_format": "Why are triglycerides transported by chylomicrons?", "question_choices": [ "they cannot move easily in the blood stream because they are fat based, while the blood is water based", "they are too small to move by themselves", "the chylomicrons contain enzymes they need for anabolism", "they cannot fit across the intestinal membrane" ], "question_id": "fs-id1468981", "question_text": "Triglycerides are transported by chylomicrons because ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "acetyl CoA" }, "bloom": null, "hl_context": "The breakdown of fatty acids , called fatty acid oxidation or beta ( β ) - oxidation , begins in the cytoplasm , where fatty acids are converted into fatty acyl CoA molecules . This fatty acyl CoA combines with carnitine to create a fatty acyl carnitine molecule , which helps to transport the fatty acid across the mitochondrial membrane . Once inside the mitochondrial matrix , the fatty acyl carnitine molecule is converted back into fatty acyl CoA and then into acetyl CoA ( Figure 24.13 ) . <hl> The newly formed acetyl CoA enters the Krebs cycle and is used to produce ATP in the same way as acetyl CoA derived from pyruvate . 
<hl> The three-carbon pyruvate molecule generated during glycolysis moves from the cytoplasm into the mitochondrial matrix , where it is converted by the enzyme pyruvate dehydrogenase into a two-carbon acetyl coenzyme A ( acetyl CoA ) molecule . This reaction is an oxidative decarboxylation reaction . It converts the three-carbon pyruvate into a two-carbon acetyl CoA molecule , releasing carbon dioxide and transferring two electrons that combine with NAD + to form NADH . <hl> Acetyl CoA enters the Krebs cycle by combining with a four-carbon molecule , oxaloacetate , to form the six-carbon molecule citrate , or citric acid , at the same time releasing the coenzyme A molecule . <hl>", "hl_sentences": "The newly formed acetyl CoA enters the Krebs cycle and is used to produce ATP in the same way as acetyl CoA derived from pyruvate . Acetyl CoA enters the Krebs cycle by combining with a four-carbon molecule , oxaloacetate , to form the six-carbon molecule citrate , or citric acid , at the same time releasing the coenzyme A molecule .", "question": { "cloze_format": "The molecules that can enter the Krebs cycle are ___.", "normal_format": "Which molecules can enter the Krebs cycle?", "question_choices": [ "chylomicrons", "acetyl CoA", "monoglycerides", "ketone bodies" ], "question_id": "fs-id2158787", "question_text": "Which molecules can enter the Krebs cycle?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "polysaccharides" }, "bloom": null, "hl_context": "<hl> When glucose levels are plentiful , the excess acetyl CoA generated by glycolysis can be converted into fatty acids , triglycerides , cholesterol , steroids , and bile salts . <hl> This process , called lipogenesis , creates lipids ( fat ) from the acetyl CoA and takes place in the cytoplasm of adipocytes ( fat cells ) and hepatocytes ( liver cells ) . When you eat more glucose or carbohydrates than your body needs , your system uses acetyl CoA to turn the excess into fat . Although there are several metabolic sources of acetyl CoA , it is most commonly derived from glycolysis . Acetyl CoA availability is significant , because it initiates lipogenesis . Lipogenesis begins with acetyl CoA and advances by the subsequent addition of two carbon atoms from another acetyl CoA ; this process is repeated until fatty acids are the appropriate length . Because this is a bond-creating anabolic process , ATP is consumed . However , the creation of triglycerides and lipids is an efficient way of storing the energy available in carbohydrates . Triglycerides and lipids , high-energy molecules , are stored in adipose tissue until they are needed . <hl> If excessive acetyl CoA is created from the oxidation of fatty acids and the Krebs cycle is overloaded and cannot handle it , the acetyl CoA is diverted to create ketone bodies . <hl> These ketone bodies can serve as a fuel source if glucose levels are too low in the body . Ketones serve as fuel in times of prolonged starvation or when patients suffer from uncontrolled diabetes and cannot utilize most of the circulating glucose . In both cases , fat stores are liberated to generate energy through the Krebs cycle and will generate ketone bodies when too much acetyl CoA accumulates .", "hl_sentences": "When glucose levels are plentiful , the excess acetyl CoA generated by glycolysis can be converted into fatty acids , triglycerides , cholesterol , steroids , and bile salts . 
If excessive acetyl CoA is created from the oxidation of fatty acids and the Krebs cycle is overloaded and cannot handle it , the acetyl CoA is diverted to create ketone bodies .", "question": { "cloze_format": "Acetyl CoA can be converted to all of the following except ________.", "normal_format": "Acetyl CoA cannot be converted to which of the following?", "question_choices": [ "ketone bodies", "fatty acids", "polysaccharides", "triglycerides" ], "question_id": "fs-id1297521", "question_text": "Acetyl CoA can be converted to all of the following except ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "stomach; pepsin; HCl; amino acids" }, "bloom": "1", "hl_context": "<hl> The digestion of proteins begins in the stomach . <hl> <hl> When protein-rich foods enter the stomach , they are greeted by a mixture of the enzyme pepsin and hydrochloric acid ( HCl ; 0.5 percent ) . <hl> The latter produces an environmental pH of 1.5 – 3.5 that denatures proteins within food . <hl> Pepsin cuts proteins into smaller polypeptides and their constituent amino acids . <hl> When the food-gastric juice mixture ( chyme ) enters the small intestine , the pancreas releases sodium bicarbonate to neutralize the HCl . This helps to protect the lining of the intestine . The small intestine also releases digestive hormones , including secretin and CCK , which stimulate digestive processes to break down the proteins further . Secretin also stimulates the pancreas to release sodium bicarbonate . The pancreas releases most of the digestive enzymes , including the proteases trypsin , chymotrypsin , and elastase , which aid protein digestion . Together , all of these enzymes break complex proteins into smaller individual amino acids ( Figure 24.17 ) , which are then transported across the intestinal mucosa to be used to create new proteins , or to be converted into fats or acetyl CoA and used in the Krebs cycle .", "hl_sentences": "The digestion of proteins begins in the stomach . When protein-rich foods enter the stomach , they are greeted by a mixture of the enzyme pepsin and hydrochloric acid ( HCl ; 0.5 percent ) . Pepsin cuts proteins into smaller polypeptides and their constituent amino acids .", "question": { "cloze_format": "Digestion of proteins begins in the ________ where ________ and ________ mix with food to break down protein into ________.", "normal_format": "Where does digestion of proteins begin? What mix with food to break down proteins? What do proteins break down into?", "question_choices": [ "stomach; amylase; HCl; amino acids", "mouth; pepsin; HCl; fatty acids", "stomach; lipase; HCl; amino acids", "stomach; pepsin; HCl; amino acids" ], "question_id": "fs-id2041938", "question_text": "Digestion of proteins begins in the ________ where ________ and ________ mix with food to break down protein into ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "build new proteins" }, "bloom": "1", "hl_context": "<hl> Freely available amino acids are used to create proteins . <hl> If amino acids exist in excess , the body has no capacity or mechanism for their storage ; thus , they are converted into glucose or ketones , or they are decomposed . Amino acid decomposition results in hydrocarbons and nitrogenous waste . However , high concentrations of nitrogenous byproducts are toxic . The urea cycle processes nitrogen and facilitates its excretion from the body . 
Much of the body is made of protein , and these proteins take on a myriad of forms . They represent cell signaling receptors , signaling molecules , structural members , enzymes , intracellular trafficking components , extracellular matrix scaffolds , ion pumps , ion channels , oxygen and CO 2 transporters ( hemoglobin ) . That is not even the complete list ! There is protein in bones ( collagen ) , muscles , and tendons ; the hemoglobin that transports oxygen ; and enzymes that catalyze all biochemical reactions . Protein is also used for growth and repair . Amid all these necessary functions , proteins also hold the potential to serve as a metabolic fuel source . Proteins are not stored for later use , so excess proteins must be converted into glucose or triglycerides , and used to supply energy or build energy reserves . <hl> Although the body can synthesize proteins from amino acids , food is an important source of those amino acids , especially because humans cannot synthesize all of the 20 amino acids used to build proteins . <hl> Proteins , which are polymers , can be broken down into their monomers , individual amino acids . <hl> Amino acids can be used as building blocks of new proteins or broken down further for the production of ATP . <hl> When one is chronically starving , this use of amino acids for energy production can lead to a wasting away of the body , as more and more proteins are broken down .", "hl_sentences": "Freely available amino acids are used to create proteins . Although the body can synthesize proteins from amino acids , food is an important source of those amino acids , especially because humans cannot synthesize all of the 20 amino acids used to build proteins . Amino acids can be used as building blocks of new proteins or broken down further for the production of ATP .", "question": { "cloze_format": "Amino acids are needed to ________.", "normal_format": "Why are amino acids needed?", "question_choices": [ "build new proteins", "serve as fat stores", "supply energy for the cell", "create red blood cells" ], "question_id": "fs-id2058342", "question_text": "Amino acids are needed to ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "converted to glucose or ketones" }, "bloom": null, "hl_context": "<hl> Freely available amino acids are used to create proteins . <hl> <hl> If amino acids exist in excess , the body has no capacity or mechanism for their storage ; thus , they are converted into glucose or ketones , or they are decomposed . <hl> Amino acid decomposition results in hydrocarbons and nitrogenous waste . However , high concentrations of nitrogenous byproducts are toxic . The urea cycle processes nitrogen and facilitates its excretion from the body .", "hl_sentences": "Freely available amino acids are used to create proteins . If amino acids exist in excess , the body has no capacity or mechanism for their storage ; thus , they are converted into glucose or ketones , or they are decomposed .", "question": { "cloze_format": "If an amino acid is not used to create new proteins, it can be ________.", "normal_format": "If an amino acid is not used to create new proteins, what can it be?", "question_choices": [ "converted to acetyl CoA", "converted to glucose or ketones", "converted to nitrogen", "stored to be used later" ], "question_id": "fs-id1863134", "question_text": "If an amino acid is not used to create new proteins, it can be ________." 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "high; high; are low" }, "bloom": null, "hl_context": "<hl> The absorptive state , or the fed state , occurs after a meal when your body is digesting the food and absorbing the nutrients ( anabolism exceeds catabolism ) . <hl> Digestion begins the moment you put food into your mouth , as the food is broken down into its constituent parts to be absorbed through the intestine . The digestion of carbohydrates begins in the mouth , whereas the digestion of proteins and fats begins in the stomach and small intestine . The constituent parts of these carbohydrates , fats , and proteins are transported across the intestinal wall and enter the bloodstream ( sugars and amino acids ) or the lymphatic system ( fats ) . From the intestines , these systems transport them to the liver , adipose tissue , or muscle cells that will process and use , or store , the energy . Depending on the amounts and types of nutrients ingested , the absorptive state can linger for up to 4 hours . <hl> The ingestion of food and the rise of glucose concentrations in the bloodstream stimulate pancreatic beta cells to release insulin into the bloodstream , where it initiates the absorption of blood glucose by liver hepatocytes , and by adipose and muscle cells . <hl> <hl> Once inside these cells , glucose is immediately converted into glucose - 6 - phosphate . <hl> By doing this , a concentration gradient is established where glucose levels are higher in the blood than in the cells . This allows for glucose to continue moving from the blood to the cells where it is needed . Insulin also stimulates the storage of glucose as glycogen in the liver and muscle cells where it can be used for later energy needs of the body . Insulin also promotes the synthesis of protein in muscle . As you will see , muscle protein can be catabolized and used as fuel in times of starvation .", "hl_sentences": "The absorptive state , or the fed state , occurs after a meal when your body is digesting the food and absorbing the nutrients ( anabolism exceeds catabolism ) . The ingestion of food and the rise of glucose concentrations in the bloodstream stimulate pancreatic beta cells to release insulin into the bloodstream , where it initiates the absorption of blood glucose by liver hepatocytes , and by adipose and muscle cells . Once inside these cells , glucose is immediately converted into glucose - 6 - phosphate .", "question": { "cloze_format": "During the absorptive state, glucose levels are ________, insulin levels are ________, and glucagon levels ________.", "normal_format": "What are the glucose, insulin, and glucagon levels during the absorptive state, respectively?", "question_choices": [ "high; low; stay the same", "low; low; stay the same", "high; high; are high", "high; high; are low" ], "question_id": "fs-id1365095", "question_text": "During the absorptive state, glucose levels are ________, insulin levels are ________, and glucagon levels ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "glycogen; liver" }, "bloom": "1", "hl_context": "<hl> The postabsorptive state , or the fasting state , occurs when the food has been digested , absorbed , and stored . <hl> You commonly fast overnight , but skipping meals during the day puts your body in the postabsorptive state as well . <hl> During this state , the body must rely initially on stored glycogen . 
<hl> Glucose levels in the blood begin to drop as it is absorbed and used by the cells . In response to the decrease in glucose , insulin levels also drop . Glycogen and triglyceride storage slows . However , due to the demands of the tissues and organs , blood glucose levels must be maintained in the normal range of 80 – 120 mg / dL . In response to a drop in blood glucose concentration , the hormone glucagon is released from the alpha cells of the pancreas . <hl> Glucagon acts upon the liver cells , where it inhibits the synthesis of glycogen and stimulates the breakdown of stored glycogen back into glucose . <hl> This glucose is released from the liver to be used by the peripheral tissues and the brain . As a result , blood glucose levels begin to rise . Gluconeogenesis will also begin in the liver to replace the glucose that has been used by the peripheral tissues .", "hl_sentences": "The postabsorptive state , or the fasting state , occurs when the food has been digested , absorbed , and stored . During this state , the body must rely initially on stored glycogen . Glucagon acts upon the liver cells , where it inhibits the synthesis of glycogen and stimulates the breakdown of stored glycogen back into glucose .", "question": { "cloze_format": "The postabsorptive state relies on stores of ________ in the ________.", "normal_format": "The postabsorptive state relies on stores of what? in where?", "question_choices": [ "insulin; pancreas", "glucagon; pancreas", "glycogen; liver", "glucose; liver" ], "question_id": "fs-id1496229", "question_text": "The postabsorptive state relies on stores of ________ in the ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "hypothalamus; 97.7–99.5 °F" }, "bloom": null, "hl_context": "<hl> The hypothalamus in the brain is the master switch that works as a thermostat to regulate the body ’ s core temperature ( Figure 24.23 ) . <hl> If the temperature is too high , the hypothalamus can initiate several processes to lower it . These include increasing the circulation of the blood to the surface of the body to allow for the dissipation of heat through the skin and initiation of sweating to allow evaporation of water on the skin to cool its surface . Conversely , if the temperature falls below the set core temperature , the hypothalamus can initiate shivering to generate heat . The body uses more energy and generates more heat . In addition , thyroid hormone will stimulate more energy use and heat production by cells throughout the body . An environment is said to be thermoneutral when the body does not expend or release energy to maintain its core temperature . For a naked human , this is an ambient air temperature of around 84 ° F . If the temperature is higher , for example , when wearing clothes , the body compensates with cooling mechanisms . The body loses heat through the mechanisms of heat exchange . The body tightly regulates the body temperature through a process called thermoregulation , in which the body can maintain its temperature within certain boundaries , even when the surrounding temperature is very different . <hl> The core temperature of the body remains steady at around 36.5 – 37.5 ° C ( or 97.7 – 99.5 ° F ) . <hl> In the process of ATP production by cells throughout the body , approximately 60 percent of the energy produced is in the form of heat used to maintain body temperature . 
Thermoregulation is an example of negative feedback .", "hl_sentences": "The hypothalamus in the brain is the master switch that works as a thermostat to regulate the body ’ s core temperature ( Figure 24.23 ) . The core temperature of the body remains steady at around 36.5 – 37.5 ° C ( or 97.7 – 99.5 ° F ) .", "question": { "cloze_format": "The body’s temperature is controlled by the ________. This temperature is always kept between ________.", "normal_format": "What controls the body’s temperature? What is this temperature always kept between?", "question_choices": [ "pituitary; 36.5–37.5 °C", "hypothalamus; 97.7–99.5 °F", "hypothalamus; 36.5–37.5 °F", "pituitary; 97.7–99.5 °F" ], "question_id": "fs-id1720962", "question_text": "The body’s temperature is controlled by the ________. This temperature is always kept between ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "all of the above" }, "bloom": null, "hl_context": "The hypothalamus in the brain is the master switch that works as a thermostat to regulate the body ’ s core temperature ( Figure 24.23 ) . If the temperature is too high , the hypothalamus can initiate several processes to lower it . <hl> These include increasing the circulation of the blood to the surface of the body to allow for the dissipation of heat through the skin and initiation of sweating to allow evaporation of water on the skin to cool its surface . <hl> <hl> Conversely , if the temperature falls below the set core temperature , the hypothalamus can initiate shivering to generate heat . <hl> The body uses more energy and generates more heat . In addition , thyroid hormone will stimulate more energy use and heat production by cells throughout the body . An environment is said to be thermoneutral when the body does not expend or release energy to maintain its core temperature . For a naked human , this is an ambient air temperature of around 84 ° F . If the temperature is higher , for example , when wearing clothes , the body compensates with cooling mechanisms . The body loses heat through the mechanisms of heat exchange .", "hl_sentences": "These include increasing the circulation of the blood to the surface of the body to allow for the dissipation of heat through the skin and initiation of sweating to allow evaporation of water on the skin to cool its surface . Conversely , if the temperature falls below the set core temperature , the hypothalamus can initiate shivering to generate heat .", "question": { "cloze_format": "Fever increases the body temperature and can induce chills to help cool the temperature back down. The other mechanism that is in place to regulate the body temperature is ___.", "normal_format": "Fever increases the body temperature and can induce chills to help cool the temperature back down. What other mechanisms are in place to regulate the body temperature?", "question_choices": [ "shivering", "sweating", "erection of the hairs on the arms and legs", "all of the above" ], "question_id": "fs-id2105111", "question_text": "Fever increases the body temperature and can induce chills to help cool the temperature back down. What other mechanisms are in place to regulate the body temperature?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "conduction" }, "bloom": "3", "hl_context": "<hl> Conduction is the transfer of heat by two objects that are in direct contact with one another . <hl> It occurs when the skin comes in contact with a cold or warm object . 
For example , when holding a glass of ice water , the heat from your skin will warm the glass and in turn melt the ice . Alternatively , on a cold day , you might warm up by wrapping your cold hands around a hot mug of coffee . Only about 3 percent of the body ’ s heat is lost through conduction .", "hl_sentences": "Conduction is the transfer of heat by two objects that are in direct contact with one another .", "question": { "cloze_format": "The heat you feel on your chair when you stand up was transferred from your skin via ________.", "normal_format": "The heat you feel on your chair when you stand up was transferred from your skin via what?", "question_choices": [ "conduction", "convection", "radiation", "evaporation" ], "question_id": "fs-id2161602", "question_text": "The heat you feel on your chair when you stand up was transferred from your skin via ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "radiation" }, "bloom": "3", "hl_context": "Radiation is the transfer of heat via infrared waves . This occurs between any two objects when their temperatures differ . A radiator can warm a room via radiant heat . On a sunny day , the radiation from the sun warms the skin . <hl> The same principle works from the body to the environment . <hl> About 60 percent of the heat lost by the body is lost through radiation .", "hl_sentences": "The same principle works from the body to the environment .", "question": { "cloze_format": "A crowded room warms up through the mechanism of ________.", "normal_format": "A crowded room warms up through which mechanism?", "question_choices": [ "conduction", "convection", "radiation", "evaporation" ], "question_id": "fs-id1620590", "question_text": "A crowded room warms up through the mechanism of ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "phosphorous" }, "bloom": null, "hl_context": "<hl> The most common minerals in the body are calcium and phosphorous , both of which are stored in the skeleton and necessary for the hardening of bones . <hl> Most minerals are ionized , and their ionic forms are used in physiological processes throughout the body . Sodium and chloride ions are electrolytes in the blood and extracellular tissues , and iron ions are critical to the formation of hemoglobin . There are additional trace minerals that are still important to the body ’ s functions , but their required quantities are much lower .", "hl_sentences": "The most common minerals in the body are calcium and phosphorous , both of which are stored in the skeleton and necessary for the hardening of bones .", "question": { "cloze_format": "___ is stored in the body.", "normal_format": "Which of the following is stored in the body?", "question_choices": [ "thiamine", "phosphorous", "folic acid", "vitamin C" ], "question_id": "fs-id2773895", "question_text": "Which of the following is stored in the body?" }, "references_are_paraphrase": null } ]
Chapter 24
24.1 Overview of Metabolic Reactions

Learning Objectives

By the end of this section, you will be able to:
- Describe the process by which polymers are broken down into monomers
- Describe the process by which monomers are combined into polymers
- Discuss the role of ATP in metabolism
- Explain oxidation-reduction reactions
- Describe the hormones that regulate anabolic and catabolic reactions

Metabolic processes are constantly taking place in the body. Metabolism is the sum of all of the chemical reactions that are involved in catabolism and anabolism. The reactions governing the breakdown of food to obtain energy are called catabolic reactions. Conversely, anabolic reactions use the energy produced by catabolic reactions to synthesize larger molecules from smaller ones, such as when the body forms proteins by stringing together amino acids. Both sets of reactions are critical to maintaining life. Because catabolic reactions produce energy and anabolic reactions use energy, ideally, energy usage would balance the energy produced. If the net energy change is positive (catabolic reactions release more energy than the anabolic reactions use), then the body stores the excess energy by building fat molecules for long-term storage. On the other hand, if the net energy change is negative (catabolic reactions release less energy than anabolic reactions use), the body uses stored energy to compensate for the deficiency of energy released by catabolism.

Catabolic Reactions

Catabolic reactions break down large organic molecules into smaller molecules, releasing the energy contained in the chemical bonds. These energy releases (conversions) are not 100 percent efficient. The amount of energy released is less than the total amount contained in the molecule. Approximately 40 percent of energy yielded from catabolic reactions is directly transferred to the high-energy molecule adenosine triphosphate (ATP). ATP, the energy currency of cells, can be used immediately to power molecular machines that support cell, tissue, and organ function. This includes building new tissue and repairing damaged tissue. ATP can also be stored to fulfill future energy demands. The remaining 60 percent of the energy released from catabolic reactions is given off as heat, which tissues and body fluids absorb.

Structurally, ATP molecules consist of an adenine, a ribose, and three phosphate groups (Figure 24.2). The chemical bond between the second and third phosphate groups, termed a high-energy bond, represents the greatest source of energy in a cell. It is the first bond that catabolic enzymes break when cells require energy to do work. The products of this reaction are a molecule of adenosine diphosphate (ADP) and a lone phosphate group (Pi). ATP, ADP, and Pi are constantly being cycled through reactions that build ATP and store energy, and reactions that break down ATP and release energy.

The energy from ATP drives all bodily functions, such as contracting muscles, maintaining the electrical potential of nerve cells, and absorbing food in the gastrointestinal tract. The metabolic reactions that produce ATP come from various sources (Figure 24.3). Of the four major macromolecular groups (carbohydrates, lipids, proteins, and nucleic acids) that are processed by digestion, carbohydrates are considered the most common source of energy to fuel the body. They take the form of either complex carbohydrates, polysaccharides like starch and glycogen, or simple sugars (monosaccharides) like glucose and fructose.
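A quick calculation shows where the "approximately 40 percent" figure above comes from. The sketch below is only an estimate: it assumes the standard textbook values of about 686 kcal per mole for the complete oxidation of glucose and about 7.3 kcal per mole for the terminal phosphate bond of ATP (neither value is stated in this section), together with the net yield of 36 ATP per glucose derived in Section 24.2.

```python
# Back-of-the-envelope check of the "~40 percent captured as ATP" figure.
# Assumed standard textbook values (not stated in this chapter):
#   complete oxidation of one mole of glucose releases ~686 kcal
#   hydrolysis of one mole of ATP's terminal phosphate bond yields ~7.3 kcal
GLUCOSE_KCAL_PER_MOL = 686.0
ATP_KCAL_PER_MOL = 7.3
ATP_PER_GLUCOSE = 36        # net yield of aerobic respiration (see Section 24.2)

captured = ATP_PER_GLUCOSE * ATP_KCAL_PER_MOL      # energy banked in ATP
efficiency = captured / GLUCOSE_KCAL_PER_MOL
print(f"ATP captures {captured:.0f} kcal of {GLUCOSE_KCAL_PER_MOL:.0f} kcal "
      f"({efficiency:.0%}); the rest is released as heat")
# -> ATP captures 263 kcal of 686 kcal (38%); the rest is released as heat
```

The roughly 38 percent that falls out of this arithmetic is consistent with the chapter's "approximately 40 percent"; the remaining energy is the heat that tissues and body fluids absorb.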
Sugar catabolism breaks polysaccharides down into their individual monosaccharides. Among the monosaccharides, glucose is the most common fuel for ATP production in cells, and as such, there are a number of endocrine control mechanisms to regulate glucose concentration in the bloodstream. Excess glucose is either stored as an energy reserve in the liver and skeletal muscles as the complex polymer glycogen, or it is converted into fat (triglyceride) in adipose cells (adipocytes).

Among the lipids (fats), triglycerides are most often used for energy via a metabolic process called β-oxidation. About one-half of excess fat is stored in adipocytes that accumulate in the subcutaneous tissue under the skin, whereas the rest is stored in adipocytes in other tissues and organs.

Proteins, which are polymers, can be broken down into their monomers, individual amino acids. Amino acids can be used as building blocks of new proteins or broken down further for the production of ATP. When one is chronically starving, this use of amino acids for energy production can lead to a wasting away of the body, as more and more proteins are broken down.

Nucleic acids are present in most of the foods you eat. During digestion, nucleic acids including DNA and various RNAs are broken down into their constituent nucleotides. These nucleotides are readily absorbed and transported throughout the body to be used by individual cells during nucleic acid metabolism.

Anabolic Reactions

In contrast to catabolic reactions, anabolic reactions involve the joining of smaller molecules into larger ones. Anabolic reactions combine monosaccharides to form polysaccharides, fatty acids to form triglycerides, amino acids to form proteins, and nucleotides to form nucleic acids. These processes require energy in the form of ATP molecules generated by catabolic reactions. Anabolic reactions, also called biosynthesis reactions, create new molecules that form new cells and tissues, and revitalize organs.

Hormonal Regulation of Metabolism

Catabolic and anabolic hormones in the body help regulate metabolic processes. Catabolic hormones stimulate the breakdown of molecules and the production of energy. These include cortisol, glucagon, adrenaline/epinephrine, and cytokines. All of these hormones are mobilized at specific times to meet the needs of the body. Anabolic hormones are required for the synthesis of molecules and include growth hormone, insulin-like growth factor, insulin, testosterone, and estrogen. Table 24.1 summarizes the function of each of the catabolic hormones and Table 24.2 summarizes the functions of the anabolic hormones.
Table 24.1 Catabolic Hormones

Cortisol: Released from the adrenal gland in response to stress; its main role is to increase blood glucose levels by gluconeogenesis (breaking down fats and proteins).

Glucagon: Released from alpha cells in the pancreas either when starving or when the body needs to generate additional energy; it stimulates the breakdown of glycogen in the liver to increase blood glucose levels; its effect is the opposite of insulin; glucagon and insulin are a part of a negative-feedback system that stabilizes blood glucose levels.

Adrenaline/epinephrine: Released in response to the activation of the sympathetic nervous system; increases heart rate and heart contractility, constricts blood vessels, is a bronchodilator that opens (dilates) the bronchi of the lungs to increase air volume in the lungs, and stimulates gluconeogenesis.

Table 24.2 Anabolic Hormones

Growth hormone (GH): Synthesized and released from the pituitary gland; stimulates the growth of cells, tissues, and bones.

Insulin-like growth factor (IGF): Stimulates the growth of muscle and bone while also inhibiting cell death (apoptosis).

Insulin: Produced by the beta cells of the pancreas; plays an essential role in carbohydrate and fat metabolism, controls blood glucose levels, and promotes the uptake of glucose into body cells; causes cells in muscle, adipose tissue, and liver to take up glucose from the blood and store it in the liver and muscle as glycogen; its effect is the opposite of glucagon; glucagon and insulin are a part of a negative-feedback system that stabilizes blood glucose levels.

Testosterone: Produced by the testes in males and the ovaries in females; stimulates an increase in muscle mass and strength as well as the growth and strengthening of bone.

Estrogen: Produced primarily by the ovaries, it is also produced by the liver and adrenal glands; its anabolic functions include increasing metabolism and fat deposition.

Disorders of the... Metabolic Processes: Cushing Syndrome and Addison's Disease

As might be expected for a fundamental physiological process like metabolism, errors or malfunctions in metabolic processing lead to a pathophysiology or—if uncorrected—a disease state. Metabolic diseases are most commonly the result of malfunctioning proteins or enzymes that are critical to one or more metabolic pathways. Protein or enzyme malfunction can be the consequence of a genetic alteration or mutation. However, normally functioning proteins and enzymes can also have deleterious effects if their availability is not appropriately matched with metabolic need. For example, excessive production of the hormone cortisol (see Table 24.1) gives rise to Cushing syndrome. Clinically, Cushing syndrome is characterized by rapid weight gain, especially in the trunk and face region, depression, and anxiety. It is worth mentioning that tumors of the pituitary that produce adrenocorticotropic hormone (ACTH), which subsequently stimulates the adrenal cortex to release excessive cortisol, produce similar effects. This indirect mechanism of cortisol overproduction is referred to as Cushing disease. Patients with Cushing syndrome can exhibit high blood glucose levels and are at an increased risk of becoming obese. They also show slow growth, accumulation of fat between the shoulders, weak muscles, bone pain (because cortisol causes proteins to be broken down to make glucose via gluconeogenesis), and fatigue.
Other symptoms include excessive sweating (hyperhidrosis), capillary dilation, and thinning of the skin, which can lead to easy bruising. The treatments for Cushing syndrome are all focused on reducing excessive cortisol levels. Depending on the cause of the excess, treatment may be as simple as discontinuing the use of cortisol ointments. In cases of tumors, surgery is often used to remove the offending tumor. Where surgery is inappropriate, radiation therapy can be used to reduce the size of a tumor or ablate portions of the adrenal cortex. Finally, medications are available that can help to regulate the amounts of cortisol.

Insufficient cortisol production is equally problematic. Adrenal insufficiency, or Addison's disease, is characterized by the reduced production of cortisol from the adrenal gland. It can result from malfunction of the adrenal glands—they do not produce enough cortisol—or it can be a consequence of decreased ACTH availability from the pituitary. Patients with Addison's disease may have low blood pressure, paleness, extreme weakness, fatigue, slow or sluggish movements, lightheadedness, and salt cravings due to the loss of sodium and high blood potassium levels (hyperkalemia). Patients also may suffer from loss of appetite, chronic diarrhea, vomiting, mouth lesions, and patchy skin color. Diagnosis typically involves blood tests and imaging tests of the adrenal and pituitary glands. Treatment involves cortisol replacement therapy, which usually must be continued for life.

Oxidation-Reduction Reactions

The chemical reactions underlying metabolism involve the transfer of electrons from one compound to another by processes catalyzed by enzymes. The electrons in these reactions commonly come from hydrogen atoms, which consist of an electron and a proton. A molecule gives up a hydrogen atom, in the form of a hydrogen ion (H+) and an electron, breaking the molecule into smaller parts. The loss of an electron, or oxidation, releases a small amount of energy; both the electron and the energy are then passed to another molecule in the process of reduction, or the gaining of an electron. These two reactions always happen together in an oxidation-reduction reaction (also called a redox reaction)—when an electron is passed between molecules, the donor is oxidized and the recipient is reduced. Oxidation-reduction reactions often happen in a series, so that a molecule that is reduced is subsequently oxidized, passing on not only the electron it just received but also the energy it received. As the series of reactions progresses, energy accumulates that is used to combine Pi and ADP to form ATP, the high-energy molecule that the body uses for fuel.

Oxidation-reduction reactions are catalyzed by enzymes that trigger the removal of hydrogen atoms. Coenzymes work with enzymes and accept hydrogen atoms. The two most common coenzymes of oxidation-reduction reactions are nicotinamide adenine dinucleotide (NAD) and flavin adenine dinucleotide (FAD). Their respective reduced coenzymes are NADH and FADH2, which are energy-containing molecules used to transfer energy during the creation of ATP.
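To tie the hormonal regulation described in Tables 24.1 and 24.2 to a concrete mechanism, the toy loop below sketches the insulin-glucagon negative-feedback logic. It is purely an illustration: the 80–120 mg/dL boundaries follow the normal blood glucose range cited elsewhere in this chapter, while the 10 mg/dL correction steps are invented for the example.

```python
# A toy negative-feedback loop in the spirit of the glucagon/insulin system
# described in Tables 24.1 and 24.2. The thresholds (80-120 mg/dL) follow the
# normal range cited elsewhere in this chapter; the correction step sizes are
# invented purely for illustration.
def regulate(glucose_mg_dl: float) -> tuple[str, float]:
    """Return the dominant hormone response and the adjusted glucose level."""
    if glucose_mg_dl > 120:            # fed state: beta cells release insulin
        return "insulin", glucose_mg_dl - 10   # cells take up/store glucose
    if glucose_mg_dl < 80:             # fasting: alpha cells release glucagon
        return "glucagon", glucose_mg_dl + 10  # liver glycogen -> glucose
    return "neither", glucose_mg_dl    # within range: no correction needed

level = 150.0                           # e.g., shortly after a meal
while True:
    hormone, level = regulate(level)
    print(f"{hormone:8s} -> glucose {level:.0f} mg/dL")
    if hormone == "neither":
        break
```

Each pass through the loop nudges the level back toward the normal range, which is the defining behavior of a negative-feedback system: the response opposes the disturbance.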
24.2 Carbohydrate Metabolism

Learning Objectives

By the end of this section, you will be able to:
- Explain the process of glycolysis
- Describe the pathway of a pyruvate molecule through the Krebs cycle
- Explain the transport of electrons through the electron transport chain
- Describe the process of ATP production through oxidative phosphorylation
- Summarize the process of gluconeogenesis

Carbohydrates are organic molecules composed of carbon, hydrogen, and oxygen atoms. The family of carbohydrates includes both simple and complex sugars. Glucose and fructose are examples of simple sugars, and starch, glycogen, and cellulose are all examples of complex sugars. The complex sugars are also called polysaccharides and are made of multiple monosaccharide molecules. Polysaccharides serve as energy storage (e.g., starch and glycogen) and as structural components (e.g., chitin in insects and cellulose in plants). During digestion, carbohydrates are broken down into simple, soluble sugars that can be transported across the intestinal wall into the circulatory system to be transported throughout the body. Carbohydrate digestion begins in the mouth with the action of salivary amylase on starches and ends with monosaccharides being absorbed across the epithelium of the small intestine.

Once the absorbed monosaccharides are transported to the tissues, the process of cellular respiration begins (Figure 24.4). This section will focus first on glycolysis, a process where the monosaccharide glucose is oxidized, releasing the energy stored in its bonds to produce ATP.

Glycolysis

Glucose is the body's most readily available source of energy. After digestive processes break polysaccharides down into monosaccharides, including glucose, the monosaccharides are transported across the wall of the small intestine and into the circulatory system, which transports them to the liver. In the liver, hepatocytes either pass the glucose on through the circulatory system or store excess glucose as glycogen. Cells in the body take up the circulating glucose in response to insulin and, through a series of reactions called glycolysis, transfer some of the energy in glucose to ADP to form ATP (Figure 24.5). The last step in glycolysis produces the product pyruvate.

Glycolysis begins with the phosphorylation of glucose by hexokinase to form glucose-6-phosphate. This step uses one ATP, which is the donor of the phosphate group. Under the action of phosphofructokinase, glucose-6-phosphate is converted into fructose-6-phosphate. At this point, a second ATP donates its phosphate group, forming fructose-1,6-bisphosphate. This six-carbon sugar is split to form two phosphorylated three-carbon molecules, glyceraldehyde-3-phosphate and dihydroxyacetone phosphate, which are both converted into glyceraldehyde-3-phosphate. The glyceraldehyde-3-phosphate is further phosphorylated with groups donated by dihydrogen phosphate present in the cell to form the three-carbon molecule 1,3-bisphosphoglycerate. The energy of this reaction comes from the oxidation of (removal of electrons from) glyceraldehyde-3-phosphate. In a series of reactions leading to pyruvate, the two phosphate groups are then transferred to two ADPs to form two ATPs. Thus, glycolysis uses two ATPs but generates four ATPs, yielding a net gain of two ATPs and two molecules of pyruvate. In the presence of oxygen, pyruvate continues on to the Krebs cycle (also called the citric acid cycle or tricarboxylic acid (TCA) cycle), where additional energy is extracted and passed on.
Interactive Link

Watch this video to learn about glycolysis.

Glycolysis can be divided into two phases: energy consuming (also called chemical priming) and energy yielding. The first phase is the energy-consuming phase, so it requires two ATP molecules to start the reaction for each molecule of glucose. However, the end of the reaction produces four ATPs, resulting in a net gain of two ATP energy molecules. Glycolysis can be expressed as the following equation:

Glucose + 2 ATP + 2 NAD+ + 4 ADP + 2 Pi → 2 Pyruvate + 4 ATP + 2 NADH + 2 H+

This equation states that glucose, in combination with ATP (the energy source), NAD+ (a coenzyme that serves as an electron acceptor), and inorganic phosphate, breaks down into two pyruvate molecules, generating four ATP molecules—for a net yield of two ATP—and two energy-containing NADH coenzymes. The NADH that is produced in this process will be used later to produce ATP in the mitochondria. Importantly, by the end of this process, one glucose molecule generates two pyruvate molecules, two high-energy ATP molecules, and two electron-carrying NADH molecules.

The following discussions of glycolysis include the enzymes responsible for the reactions. When glucose enters a cell, the enzyme hexokinase (or glucokinase, in the liver) rapidly adds a phosphate to convert it into glucose-6-phosphate. A kinase is a type of enzyme that adds a phosphate molecule to a substrate (in this case, glucose, but it can be true of other molecules also). This conversion step requires one ATP and essentially traps the glucose in the cell, preventing it from passing back through the plasma membrane, thus allowing glycolysis to proceed. It also functions to maintain a concentration gradient with higher glucose levels in the blood than in the tissues. By establishing this concentration gradient, the glucose in the blood will be able to flow from an area of high concentration (the blood) into an area of low concentration (the tissues) to be either used or stored. Hexokinase is found in nearly every tissue in the body. Glucokinase, on the other hand, is expressed in tissues that are active when blood glucose levels are high, such as the liver. Hexokinase has a higher affinity for glucose than glucokinase and therefore is able to convert glucose at a faster rate than glucokinase. This is important when levels of glucose are very low in the body, as it allows glucose to travel preferentially to those tissues that require it more.

In the next step of the first phase of glycolysis, the enzyme glucose-6-phosphate isomerase converts glucose-6-phosphate into fructose-6-phosphate. Like glucose, fructose is also a six-carbon sugar.
The enzyme phosphofructokinase-1 then adds one more phosphate to convert fructose-6-phosphate into fructose-1,6-bisphosphate, another six-carbon sugar, using another ATP molecule. Aldolase then breaks down this fructose-1,6-bisphosphate into two three-carbon molecules, glyceraldehyde-3-phosphate and dihydroxyacetone phosphate. The triosephosphate isomerase enzyme then converts dihydroxyacetone phosphate into a second glyceraldehyde-3-phosphate molecule. Therefore, by the end of this chemical-priming or energy-consuming phase, one glucose molecule is broken down into two glyceraldehyde-3-phosphate molecules.

The second phase of glycolysis, the energy-yielding phase, creates the energy that is the product of glycolysis. Glyceraldehyde-3-phosphate dehydrogenase converts each three-carbon glyceraldehyde-3-phosphate produced during the energy-consuming phase into 1,3-bisphosphoglycerate. This reaction releases an electron that is then picked up by NAD+ to create an NADH molecule. NADH is a high-energy molecule, like ATP, but unlike ATP, it is not used as energy currency by the cell. Because there are two glyceraldehyde-3-phosphate molecules, two NADH molecules are synthesized during this step. Each 1,3-bisphosphoglycerate is subsequently dephosphorylated (i.e., a phosphate is removed) by phosphoglycerate kinase into 3-phosphoglycerate. Each phosphate released in this reaction can convert one molecule of ADP into one high-energy ATP molecule, resulting in a gain of two ATP molecules. The enzyme phosphoglycerate mutase then converts the 3-phosphoglycerate molecules into 2-phosphoglycerate. The enolase enzyme then acts upon the 2-phosphoglycerate molecules to convert them into phosphoenolpyruvate molecules. The last step of glycolysis involves the dephosphorylation of the two phosphoenolpyruvate molecules by pyruvate kinase to create two pyruvate molecules and two ATP molecules.

In summary, one glucose molecule breaks down into two pyruvate molecules, and creates two net ATP molecules and two NADH molecules by glycolysis. Therefore, glycolysis generates energy for the cell and creates pyruvate molecules that can be processed further through the aerobic Krebs cycle (also called the citric acid cycle or tricarboxylic acid cycle); converted into lactic acid or alcohol (in yeast) by fermentation; or used later for the synthesis of glucose through gluconeogenesis.

Anaerobic Respiration

When oxygen is limited or absent, pyruvate enters an anaerobic pathway. In these reactions, pyruvate can be converted into lactic acid. Although this pathway generates no additional ATP by itself, it serves to keep the pyruvate concentration low so glycolysis continues, and it oxidizes NADH into the NAD+ needed by glycolysis. In this reaction, pyruvate, which is reduced to lactic acid, replaces oxygen as the final electron acceptor. Anaerobic respiration occurs in most cells of the body when oxygen is limited or mitochondria are absent or nonfunctional. For example, because erythrocytes (red blood cells) lack mitochondria, they must produce their ATP from anaerobic respiration. This is an effective pathway of ATP production for short periods of time, ranging from seconds to a few minutes. The lactic acid produced diffuses into the plasma and is carried to the liver, where it is converted back into pyruvate or glucose via the Cori cycle. Similarly, when a person exercises, muscles use ATP faster than oxygen can be delivered to them. They depend on glycolysis and lactic acid production for rapid ATP production.
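Before moving on to aerobic respiration, the glycolysis bookkeeping above can be condensed into a simple ledger. The sketch below is a minimal tally of only the ATP- and NADH-producing or ATP-consuming steps (the isomerization steps are omitted because they change neither count); the doubling flag reflects that every step after the aldolase split happens twice per glucose.

```python
# Simplified per-glucose ledger of glycolysis as walked through above.
# Steps after the aldolase split happen twice per glucose (two
# glyceraldehyde-3-phosphate molecules), so those deltas are doubled.
steps = [
    # (enzyme, ATP delta, NADH delta, happens twice per glucose?)
    ("hexokinase",                       -1, 0, False),
    ("phosphofructokinase-1",            -1, 0, False),
    ("glyceraldehyde-3-P dehydrogenase",  0, +1, True),
    ("phosphoglycerate kinase",          +1, 0, True),
    ("pyruvate kinase",                  +1, 0, True),
]
atp = nadh = 0
for enzyme, d_atp, d_nadh, doubled in steps:
    factor = 2 if doubled else 1
    atp += factor * d_atp
    nadh += factor * d_nadh
print(f"net ATP per glucose: {atp}, NADH per glucose: {nadh}")
# -> net ATP per glucose: 2, NADH per glucose: 2
```

The totals reproduce the summary in the text: two net ATP and two NADH per glucose.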
Aerobic Respiration

In the presence of oxygen, pyruvate can enter the Krebs cycle where additional energy is extracted as electrons are transferred from the pyruvate to the acceptors NAD+, GDP, and FAD, with carbon dioxide being a "waste product" (Figure 24.6). The NADH and FADH2 pass electrons on to the electron transport chain, which uses the transferred energy to produce ATP. As the terminal step in the electron transport chain, oxygen is the terminal electron acceptor and creates water inside the mitochondria.

Krebs Cycle/Citric Acid Cycle/Tricarboxylic Acid Cycle

The pyruvate molecules generated during glycolysis are transported across the mitochondrial membrane into the inner mitochondrial matrix, where they are metabolized by enzymes in a pathway called the Krebs cycle (Figure 24.7). The Krebs cycle is also commonly called the citric acid cycle or the tricarboxylic acid (TCA) cycle. During the Krebs cycle, high-energy molecules, including ATP, NADH, and FADH2, are created. NADH and FADH2 then pass electrons through the electron transport chain in the mitochondria to generate more ATP molecules.

Interactive Link

Watch this animation to observe the Krebs cycle.

The three-carbon pyruvate molecule generated during glycolysis moves from the cytoplasm into the mitochondrial matrix, where it is converted by the enzyme pyruvate dehydrogenase into a two-carbon acetyl coenzyme A (acetyl CoA) molecule. This reaction is an oxidative decarboxylation reaction. It converts the three-carbon pyruvate into a two-carbon acetyl CoA molecule, releasing carbon dioxide and transferring two electrons that combine with NAD+ to form NADH. Acetyl CoA enters the Krebs cycle by combining with a four-carbon molecule, oxaloacetate, to form the six-carbon molecule citrate, or citric acid, at the same time releasing the coenzyme A molecule.

The six-carbon citrate molecule is systematically converted to a five-carbon molecule and then a four-carbon molecule, ending with oxaloacetate, the beginning of the cycle. Along the way, each citrate molecule will produce one ATP, one FADH2, and three NADH. The FADH2 and NADH will enter the oxidative phosphorylation system located in the inner mitochondrial membrane. In addition, the Krebs cycle supplies the starting materials to process and break down proteins and fats.

To start the Krebs cycle, citrate synthase combines acetyl CoA and oxaloacetate to form a six-carbon citrate molecule; CoA is subsequently released and can combine with another pyruvate molecule to begin the cycle again. The aconitase enzyme converts citrate into isocitrate. In two successive steps of oxidative decarboxylation, two molecules of CO2 and two NADH molecules are produced when isocitrate dehydrogenase converts isocitrate into the five-carbon α-ketoglutarate, which α-ketoglutarate dehydrogenase then converts into the four-carbon succinyl CoA. The enzyme succinyl-CoA synthetase then converts succinyl CoA into succinate and forms the high-energy molecule GTP, which transfers its energy to ADP to produce ATP. Succinate dehydrogenase then converts succinate into fumarate, forming a molecule of FADH2. Fumarase then converts fumarate into malate, which malate dehydrogenase then converts back into oxaloacetate while reducing NAD+ to NADH. Oxaloacetate is then ready to combine with the next acetyl CoA to start the Krebs cycle again (see Figure 24.7). For each turn of the cycle, three NADH, one ATP (through GTP), and one FADH2 are created.
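One way to double-check the per-turn totals just quoted is to tally the products of each enzymatic step. The sketch below is only that bookkeeping; the step list mirrors the narrative above, and the per-turn counts come straight from the text.

```python
# One turn of the Krebs cycle, tallied from the enzyme steps described above.
# Products listed per acetyl CoA entering the cycle.
turn = {
    "citrate synthase":                  {},
    "aconitase":                         {},
    "isocitrate dehydrogenase":          {"NADH": 1, "CO2": 1},
    "alpha-ketoglutarate dehydrogenase": {"NADH": 1, "CO2": 1},
    "succinyl-CoA synthetase":           {"ATP": 1},   # via GTP
    "succinate dehydrogenase":           {"FADH2": 1},
    "fumarase":                          {},
    "malate dehydrogenase":              {"NADH": 1},
}
totals: dict[str, int] = {}
for products in turn.values():
    for molecule, count in products.items():
        totals[molecule] = totals.get(molecule, 0) + count
print(totals)   # -> {'NADH': 3, 'CO2': 2, 'ATP': 1, 'FADH2': 1}
```

The tally matches the text's per-turn yield of three NADH, one ATP (through GTP), and one FADH2, plus the two CO2 released by the decarboxylation steps.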
Each carbon of pyruvate is converted into CO2, which is released as a byproduct of oxidative (aerobic) respiration.

Oxidative Phosphorylation and the Electron Transport Chain

The electron transport chain (ETC) uses the NADH and FADH2 produced by the Krebs cycle to generate ATP. Electrons from NADH and FADH2 are transferred through protein complexes embedded in the inner mitochondrial membrane by a series of enzymatic reactions. The electron transport chain consists of a series of four enzyme complexes (Complex I – Complex IV) and two coenzymes (ubiquinone and Cytochrome c), which act as electron carriers and proton pumps used to transfer H+ ions into the space between the inner and outer mitochondrial membranes (Figure 24.8). The ETC couples the transfer of electrons between a donor (like NADH) and an electron acceptor (like O2) with the transfer of protons (H+ ions) across the inner mitochondrial membrane, enabling the process of oxidative phosphorylation. In the presence of oxygen, energy is passed, stepwise, through the electron carriers to gradually collect the energy needed to attach a phosphate to ADP and produce ATP. The role of molecular oxygen, O2, is as the terminal electron acceptor for the ETC. This means that once the electrons have passed through the entire ETC, they must be passed to another, separate molecule. These electrons, O2, and H+ ions from the matrix combine to form new water molecules. This is the basis for your need to breathe in oxygen. Without oxygen, electron flow through the ETC ceases.

Interactive Link

Watch this video to learn about the electron transport chain.

The electrons released from NADH and FADH2 are passed along the chain by each of the carriers, which are reduced when they receive the electron and oxidized when passing it on to the next carrier. Each of these reactions releases a small amount of energy, which is used to pump H+ ions across the inner membrane. The accumulation of these protons in the space between the membranes creates a proton gradient with respect to the mitochondrial matrix. Also embedded in the inner mitochondrial membrane is an amazing protein pore complex called ATP synthase. Effectively, it is a turbine that is powered by the flow of H+ ions across the inner membrane down a gradient and into the mitochondrial matrix. As the H+ ions traverse the complex, the shaft of the complex rotates. This rotation enables other portions of ATP synthase to encourage ADP and Pi to create ATP.

In accounting for the total number of ATP produced per glucose molecule through aerobic respiration, it is important to remember the following points (tallied in the sketch below):
- A net of two ATP are produced through glycolysis (four produced and two consumed during the energy-consuming stage). However, these two ATP are used for transporting the NADH produced during glycolysis from the cytoplasm into the mitochondria. Therefore, the net production of ATP during glycolysis is zero.
- In all phases after glycolysis, the number of ATP, NADH, and FADH2 produced must be multiplied by two to reflect how each glucose molecule produces two pyruvate molecules.
- In the ETC, about three ATP are produced for every oxidized NADH. However, only about two ATP are produced for every oxidized FADH2. The electrons from FADH2 produce less ATP, because they start at a lower point in the ETC (Complex II) compared to the electrons from NADH (Complex I) (see Figure 24.8).
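Putting these accounting points together with the per-glucose yields described earlier gives the familiar total. A minimal sketch of that arithmetic, using only numbers stated in this chapter:

```python
# Tallying the ~36 ATP per glucose using the three accounting points above
# and the textbook conversion rates (NADH -> ~3 ATP, FADH2 -> ~2 ATP).
ATP_PER_NADH, ATP_PER_FADH2 = 3, 2

glycolysis_atp = 2 - 2          # net 2 ATP, spent shuttling NADH inward -> 0
glycolysis_nadh = 2
pyruvate_to_acetyl_nadh = 2     # 1 per pyruvate x 2 pyruvate per glucose
krebs_atp = 2                   # 1 GTP->ATP per turn x 2 turns
krebs_nadh = 6                  # 3 per turn x 2 turns
krebs_fadh2 = 2                 # 1 per turn x 2 turns

total_nadh = glycolysis_nadh + pyruvate_to_acetyl_nadh + krebs_nadh
total = (glycolysis_atp + krebs_atp
         + total_nadh * ATP_PER_NADH
         + krebs_fadh2 * ATP_PER_FADH2)
print(f"approximate ATP per glucose: {total}")   # -> 36
```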
Therefore, for every glucose molecule that enters aerobic respiration, a net total of 36 ATPs are produced (Figure 24.9).

Gluconeogenesis

Gluconeogenesis is the synthesis of new glucose molecules from pyruvate, lactate, glycerol, or the amino acids alanine or glutamine. This process takes place primarily in the liver during periods of low glucose, that is, under conditions of fasting, starvation, and low-carbohydrate diets. So, why would the body create something it has just spent a fair amount of effort to break down? Certain key organs, including the brain, can use only glucose as an energy source; therefore, it is essential that the body maintain a minimum blood glucose concentration. When the blood glucose concentration falls below that certain point, new glucose is synthesized by the liver to raise the blood concentration to normal.

Gluconeogenesis is not simply the reverse of glycolysis. There are some important differences (Figure 24.10). Pyruvate is a common starting material for gluconeogenesis. First, the pyruvate is converted into oxaloacetate. Oxaloacetate then serves as a substrate for the enzyme phosphoenolpyruvate carboxykinase (PEPCK), which transforms oxaloacetate into phosphoenolpyruvate (PEP). From this step, gluconeogenesis is nearly the reverse of glycolysis. PEP is converted back into 2-phosphoglycerate, which is converted into 3-phosphoglycerate. Then, 3-phosphoglycerate is converted into 1,3-bisphosphoglycerate and then into glyceraldehyde-3-phosphate. Two molecules of glyceraldehyde-3-phosphate then combine to form fructose-1,6-bisphosphate, which is converted into fructose-6-phosphate and then into glucose-6-phosphate. Finally, a series of reactions generates glucose itself. In gluconeogenesis (as compared to glycolysis), the enzyme hexokinase is replaced by glucose-6-phosphatase, and the enzyme phosphofructokinase-1 is replaced by fructose-1,6-bisphosphatase. This helps the cell to regulate glycolysis and gluconeogenesis independently of each other.

As will be discussed as part of lipolysis, fats can be broken down into glycerol, which can be phosphorylated to form dihydroxyacetone phosphate or DHAP. DHAP can either enter the glycolytic pathway or be used by the liver as a substrate for gluconeogenesis.

Aging and the... Body's Metabolic Rate

The human body's metabolic rate decreases nearly 2 percent per decade after age 30. Changes in body composition, including reduced lean muscle mass, are mostly responsible for this decrease. The most dramatic loss of muscle mass, and consequential decline in metabolic rate, occurs between 50 and 70 years of age. Loss of muscle mass is the equivalent of reduced strength, which tends to inhibit seniors from engaging in sufficient physical activity. This results in a positive-feedback system where the reduced physical activity leads to even more muscle loss, further reducing metabolism.

There are several things that can be done to help prevent general declines in metabolism and to fight back against the cyclic nature of these declines. These include eating breakfast, eating small meals frequently, consuming plenty of lean protein, drinking water to remain hydrated, exercising (including strength training), and getting enough sleep. These measures can help keep energy levels from dropping and curb the urge for increased calorie consumption from excessive snacking. While these strategies are not guaranteed to maintain metabolism, they do help prevent muscle loss and may increase energy levels.
Some experts also suggest avoiding sugar, which can lead to excess fat storage. Spicy foods and green tea might also be beneficial. Because stress activates cortisol release, and cortisol slows metabolism, avoiding stress, or at least practicing relaxation techniques, can also help.

24.3 Lipid Metabolism

Learning Objectives

By the end of this section, you will be able to:
- Explain how energy can be derived from fat
- Explain the purpose and process of ketogenesis
- Describe the process of ketone body oxidation
- Explain the purpose and process of lipogenesis

Fats (or triglycerides) within the body are ingested as food or synthesized by adipocytes or hepatocytes from carbohydrate precursors (Figure 24.11). Lipid metabolism entails the oxidation of fatty acids to either generate energy or synthesize new lipids from smaller constituent molecules. Lipid metabolism is associated with carbohydrate metabolism, as products of glucose (such as acetyl CoA) can be converted into lipids.

Lipid metabolism begins in the intestine where ingested triglycerides are broken down into smaller-chain fatty acids and subsequently into monoglyceride molecules (see Figure 24.11b) by pancreatic lipases, enzymes that break down fats after they are emulsified by bile salts. When food reaches the small intestine in the form of chyme, a digestive hormone called cholecystokinin (CCK) is released by intestinal cells in the intestinal mucosa. CCK stimulates the release of pancreatic lipase from the pancreas and stimulates the contraction of the gallbladder to release stored bile salts into the intestine. CCK also travels to the brain, where it can act as a hunger suppressant.

Together, the pancreatic lipases and bile salts break down triglycerides into free fatty acids. These fatty acids can be transported across the intestinal membrane. However, once they cross the membrane, they are recombined to again form triglyceride molecules. Within the intestinal cells, these triglycerides are packaged along with cholesterol molecules in phospholipid vesicles called chylomicrons (Figure 24.12). The chylomicrons enable fats and cholesterol to move within the aqueous environment of your lymphatic and circulatory systems. Chylomicrons leave the enterocytes by exocytosis and enter the lymphatic system via lacteals in the villi of the intestine. From the lymphatic system, the chylomicrons are transported to the circulatory system. Once in the circulation, they can either go to the liver or be stored in fat cells (adipocytes) that comprise adipose (fat) tissue found throughout the body.

Lipolysis

To obtain energy from fat, triglycerides must first be broken down by hydrolysis into their two principal components, fatty acids and glycerol. This process, called lipolysis, takes place in the cytoplasm. The resulting fatty acids are oxidized by β-oxidation into acetyl CoA, which is used by the Krebs cycle. The glycerol that is released from triglycerides after lipolysis directly enters the glycolysis pathway as DHAP. Because one triglyceride molecule yields three fatty acid molecules, each of which may contain 16 or more carbons, fat molecules yield more energy than carbohydrates and are an important source of energy for the human body. Triglycerides yield more than twice the energy per unit mass when compared to carbohydrates and proteins. Therefore, when glucose levels are low, triglycerides can be converted into acetyl CoA molecules and used to generate ATP through aerobic respiration.
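The "more than twice the energy per unit mass" claim can be made concrete with the standard Atwater energy densities of roughly 9 kcal per gram for fat and 4 kcal per gram for carbohydrate and protein. These factors are assumed here, as the chapter does not state them; the sketch below simply applies them.

```python
# Rough comparison of energy stored per gram, using the standard Atwater
# factors (assumed here; the chapter states only "more than twice").
KCAL_PER_GRAM = {"fat": 9.0, "carbohydrate": 4.0, "protein": 4.0}

grams = 10.0
for fuel, density in KCAL_PER_GRAM.items():
    print(f"{grams:.0f} g of {fuel:12s} -> {grams * density:.0f} kcal")
ratio = KCAL_PER_GRAM["fat"] / KCAL_PER_GRAM["carbohydrate"]
print(f"fat/carbohydrate ratio: {ratio:.2f}x")
# fat stores ~2.25x the energy per gram, consistent with "more than twice"
```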
The breakdown of fatty acids, called fatty acid oxidation or beta (β)-oxidation, begins in the cytoplasm, where fatty acids are converted into fatty acyl CoA molecules. This fatty acyl CoA combines with carnitine to create a fatty acyl carnitine molecule, which helps to transport the fatty acid across the mitochondrial membrane. Once inside the mitochondrial matrix, the fatty acyl carnitine molecule is converted back into fatty acyl CoA and then into acetyl CoA (Figure 24.13). The newly formed acetyl CoA enters the Krebs cycle and is used to produce ATP in the same way as acetyl CoA derived from pyruvate.

Ketogenesis

If excessive acetyl CoA is created from the oxidation of fatty acids and the Krebs cycle is overloaded and cannot handle it, the acetyl CoA is diverted to create ketone bodies. These ketone bodies can serve as a fuel source if glucose levels are too low in the body. Ketones serve as fuel in times of prolonged starvation or when patients suffer from uncontrolled diabetes and cannot utilize most of the circulating glucose. In both cases, fat stores are liberated to generate energy through the Krebs cycle and will generate ketone bodies when too much acetyl CoA accumulates.

In this ketone synthesis reaction, excess acetyl CoA is converted into hydroxymethylglutaryl CoA (HMG CoA). HMG CoA is a precursor of cholesterol and is an intermediate that is subsequently converted into β-hydroxybutyrate, the primary ketone body in the blood (Figure 24.14).

Ketone Body Oxidation

Organs that have classically been thought to be dependent solely on glucose, such as the brain, can actually use ketones as an alternative energy source. This keeps the brain functioning when glucose is limited. When ketones are produced faster than they can be used, they can be broken down into CO2 and acetone. The acetone is removed by exhalation. One symptom of ketogenesis is that the patient's breath smells sweet like alcohol. This effect provides one way of telling if a diabetic is properly controlling the disease. Because the ketone bodies themselves are acids, their accumulation can acidify the blood, leading to diabetic ketoacidosis, a dangerous condition in diabetics.

Ketones oxidize to produce energy for the brain. Beta (β)-hydroxybutyrate is oxidized to acetoacetate and NADH is released. An HS-CoA molecule is added to acetoacetate, forming acetoacetyl CoA. The two-carbon portion of the acetoacetyl CoA that is not bonded to the CoA then detaches, splitting the molecule in two. This fragment then attaches to another free HS-CoA, resulting in two acetyl CoA molecules. These two acetyl CoA molecules are then processed through the Krebs cycle to generate energy (Figure 24.15).

Lipogenesis

When glucose levels are plentiful, the excess acetyl CoA generated by glycolysis can be converted into fatty acids, triglycerides, cholesterol, steroids, and bile salts. This process, called lipogenesis, creates lipids (fat) from the acetyl CoA and takes place in the cytoplasm of adipocytes (fat cells) and hepatocytes (liver cells). When you eat more glucose or carbohydrates than your body needs, your system uses acetyl CoA to turn the excess into fat. Although there are several metabolic sources of acetyl CoA, it is most commonly derived from glycolysis. Acetyl CoA availability is significant, because it initiates lipogenesis. Lipogenesis begins with acetyl CoA and advances by the subsequent addition of two carbon atoms from another acetyl CoA; this process is repeated until fatty acids are the appropriate length.
Because this is a bond-creating anabolic process, ATP is consumed. However, the creation of triglycerides and lipids is an efficient way of storing the energy available in carbohydrates. Triglycerides and lipids, high-energy molecules, are stored in adipose tissue until they are needed. Although lipogenesis occurs in the cytoplasm, the necessary acetyl CoA is created in the mitochondria and cannot be transported across the mitochondrial membrane. To solve this problem, pyruvate is converted into both oxaloacetate and acetyl CoA. Two different enzymes are required for these conversions. Oxaloacetate forms via the action of pyruvate carboxylase, whereas the action of pyruvate dehydrogenase creates acetyl CoA. Oxaloacetate and acetyl CoA combine to form citrate, which can cross the mitochondrial membrane and enter the cytoplasm. In the cytoplasm, citrate is converted back into oxaloacetate and acetyl CoA. Oxaloacetate is converted into malate and then into pyruvate. Pyruvate crosses back across the mitochondrial membrane to wait for the next cycle of lipogenesis. The acetyl CoA is converted into malonyl CoA that is used to synthesize fatty acids. Figure 24.16 summarizes the pathways of lipid metabolism. 24.4 Protein Metabolism Learning Objectives By the end of this section, you will be able to: Describe how the body digests proteins Explain how the urea cycle prevents toxic concentrations of nitrogen Differentiate between glucogenic and ketogenic amino acids Explain how protein can be used for energy Much of the body is made of protein, and these proteins take on a myriad of forms. They represent cell signaling receptors, signaling molecules, structural members, enzymes, intracellular trafficking components, extracellular matrix scaffolds, ion pumps, ion channels, oxygen and CO 2 transporters (hemoglobin). That is not even the complete list! There is protein in bones (collagen), muscles, and tendons; the hemoglobin that transports oxygen; and enzymes that catalyze all biochemical reactions. Protein is also used for growth and repair. Amid all these necessary functions, proteins also hold the potential to serve as a metabolic fuel source. Proteins are not stored for later use, so excess proteins must be converted into glucose or triglycerides, and used to supply energy or build energy reserves. Although the body can synthesize proteins from amino acids, food is an important source of those amino acids, especially because humans cannot synthesize all of the 20 amino acids used to build proteins. The digestion of proteins begins in the stomach. When protein-rich foods enter the stomach, they are greeted by a mixture of the enzyme pepsin and hydrochloric acid (HCl; 0.5 percent). The latter produces an environmental pH of 1.5–3.5 that denatures proteins within food. Pepsin cuts proteins into smaller polypeptides and their constituent amino acids. When the food-gastric juice mixture (chyme) enters the small intestine, the pancreas releases sodium bicarbonate to neutralize the HCl. This helps to protect the lining of the intestine. The small intestine also releases digestive hormones, including secretin and CCK, which stimulate digestive processes to break down the proteins further. Secretin also stimulates the pancreas to release sodium bicarbonate. The pancreas releases most of the digestive enzymes, including the proteases trypsin, chymotrypsin, and elastase , which aid protein digestion. 
Together, all of these enzymes break complex proteins into smaller individual amino acids ( Figure 24.17 ), which are then transported across the intestinal mucosa to be used to create new proteins, or to be converted into fats or acetyl CoA and used in the Krebs cycle. In order to avoid breaking down the proteins that make up the pancreas and small intestine, pancreatic enzymes are released as inactive proenzymes that are only activated in the small intestine. In the pancreas, vesicles store trypsin and chymotrypsin as trypsinogen and chymotrypsinogen . Once released into the small intestine, an enzyme found in the wall of the small intestine, called enterokinase , binds to trypsinogen and converts it into its active form, trypsin. Trypsin then binds to chymotrypsinogen to convert it into the active chymotrypsin. Trypsin and chymotrypsin break down large proteins into smaller peptides, a process called proteolysis . These smaller peptides are catabolized into their constituent amino acids, which are transported across the apical surface of the intestinal mucosa in a process that is mediated by sodium-amino acid transporters. These transporters bind sodium and then bind the amino acid to transport it across the membrane. At the basal surface of the mucosal cells, the sodium and amino acid are released. The sodium can be reused in the transporter, whereas the amino acids are transferred into the bloodstream to be transported to the liver and cells throughout the body for protein synthesis. Freely available amino acids are used to create proteins. If amino acids exist in excess, the body has no capacity or mechanism for their storage; thus, they are converted into glucose or ketones, or they are decomposed. Amino acid decomposition results in hydrocarbons and nitrogenous waste. However, high concentrations of nitrogenous byproducts are toxic. The urea cycle processes nitrogen and facilitates its excretion from the body. Urea Cycle The urea cycle is a set of biochemical reactions that produces urea from ammonium ions in order to prevent a toxic level of ammonium in the body. It occurs primarily in the liver and, to a lesser extent, in the kidney. Prior to the urea cycle, ammonium ions are produced from the breakdown of amino acids. In these reactions, an amine group, or ammonium ion, from the amino acid is exchanged with a keto group on another molecule. This transamination event creates a molecule that is necessary for the Krebs cycle and an ammonium ion that enters into the urea cycle to be eliminated. In the urea cycle, ammonium is combined with CO 2 , resulting in urea and water. The urea is eliminated through the kidneys in the urine ( Figure 24.18 ). Amino acids can also be used as a source of energy, especially in times of starvation. Because the processing of amino acids results in the creation of metabolic intermediates, including pyruvate, acetyl CoA, acetoacyl CoA, oxaloacetate, and α-ketoglutarate, amino acids can serve as a source of energy production through the Krebs cycle ( Figure 24.19 ). Figure 24.20 summarizes the pathways of catabolism and anabolism for carbohydrates, lipids, and proteins. Disorders of the... Metabolism: Pyruvate Dehydrogenase Complex Deficiency and Phenylketonuria Pyruvate dehydrogenase complex deficiency (PDCD) and phenylketonuria (PKU) are genetic disorders. Pyruvate dehydrogenase is the enzyme that converts pyruvate into acetyl CoA, the molecule necessary to begin the Krebs cycle to produce ATP. 
With low levels of the pyruvate dehydrogenase complex (PDC), the rate of cycling through the Krebs cycle is dramatically reduced. This results in a decrease in the total amount of energy that is produced by the cells of the body. PDC deficiency results in a neurodegenerative disease that ranges in severity, depending on the levels of the PDC enzyme. It may cause developmental defects, muscle spasms, and death. Treatments can include diet modification, vitamin supplementation, and gene therapy; however, damage to the central nervous system usually cannot be reversed. PKU affects about 1 in every 15,000 births in the United States. People afflicted with PKU lack sufficient activity of the enzyme phenylalanine hydroxylase and are therefore unable to break down phenylalanine into tyrosine adequately. Because of this, levels of phenylalanine rise to toxic levels in the body, which results in damage to the central nervous system and brain. Symptoms include delayed neurological development, hyperactivity, mental retardation, seizures, skin rash, tremors, and uncontrolled movements of the arms and legs. Pregnant women with PKU are at a high risk for exposing the fetus to too much phenylalanine, which can cross the placenta and affect fetal development. Babies exposed to excess phenylalanine in utero may present with heart defects, physical and/or mental retardation, and microcephaly. Every infant in the United States and Canada is tested at birth to determine whether PKU is present. The earlier a modified diet is begun, the less severe the symptoms will be. The person must closely follow a strict diet that is low in phenylalanine to avoid symptoms and damage. Phenylalanine is found in high concentrations in artificial sweeteners, including aspartame. Therefore, these sweeteners must be avoided. Some animal products and certain starches are also high in phenylalanine, and intake of these foods should be carefully monitored. 24.5 Metabolic States of the Body Learning Objectives By the end of this section, you will be able to: Describe what defines each of the three metabolic states Describe the processes that occur during the absorptive state of metabolism Describe the processes that occur during the postabsorptive state of metabolism Explain how the body processes glucose when the body is starved of fuel You eat periodically throughout the day; however, your organs, especially the brain, need a continuous supply of glucose. How does the body meet this constant demand for energy? Your body processes the food you eat both to use immediately and, importantly, to store as energy for later demands. If there were no method in place to store excess energy, you would need to eat constantly in order to meet energy demands. Distinct mechanisms are in place to facilitate energy storage, and to make stored energy available during times of fasting and starvation. The Absorptive State The absorptive state , or the fed state, occurs after a meal when your body is digesting the food and absorbing the nutrients (anabolism exceeds catabolism). Digestion begins the moment you put food into your mouth, as the food is broken down into its constituent parts to be absorbed through the intestine. The digestion of carbohydrates begins in the mouth, whereas the digestion of proteins and fats begins in the stomach and small intestine. The constituent parts of these carbohydrates, fats, and proteins are transported across the intestinal wall and enter the bloodstream (sugars and amino acids) or the lymphatic system (fats). 
From the intestines, these systems transport them to the liver, adipose tissue, or muscle cells that will process and use, or store, the energy. Depending on the amounts and types of nutrients ingested, the absorptive state can linger for up to 4 hours. The ingestion of food and the rise of glucose concentrations in the bloodstream stimulate pancreatic beta cells to release insulin into the bloodstream, where it initiates the absorption of blood glucose by liver hepatocytes, and by adipose and muscle cells. Once inside these cells, glucose is immediately converted into glucose-6-phosphate. By doing this, a concentration gradient is established where glucose levels are higher in the blood than in the cells. This allows for glucose to continue moving from the blood to the cells where it is needed. Insulin also stimulates the storage of glucose as glycogen in the liver and muscle cells where it can be used for later energy needs of the body. Insulin also promotes the synthesis of protein in muscle. As you will see, muscle protein can be catabolized and used as fuel in times of starvation. If energy is exerted shortly after eating, the dietary fats and sugars that were just ingested will be processed and used immediately for energy. If not, the excess glucose is stored as glycogen in the liver and muscle cells, or as fat in adipose tissue; excess dietary fat is also stored as triglycerides in adipose tissues. Figure 24.21 summarizes the metabolic processes occurring in the body during the absorptive state. The Postabsorptive State The postabsorptive state , or the fasting state, occurs when the food has been digested, absorbed, and stored. You commonly fast overnight, but skipping meals during the day puts your body in the postabsorptive state as well. During this state, the body must rely initially on stored glycogen . Glucose levels in the blood begin to drop as it is absorbed and used by the cells. In response to the decrease in glucose, insulin levels also drop. Glycogen and triglyceride storage slows. However, due to the demands of the tissues and organs, blood glucose levels must be maintained in the normal range of 80–120 mg/dL. In response to a drop in blood glucose concentration, the hormone glucagon is released from the alpha cells of the pancreas. Glucagon acts upon the liver cells, where it inhibits the synthesis of glycogen and stimulates the breakdown of stored glycogen back into glucose. This glucose is released from the liver to be used by the peripheral tissues and the brain. As a result, blood glucose levels begin to rise. Gluconeogenesis will also begin in the liver to replace the glucose that has been used by the peripheral tissues. After ingestion of food, fats and proteins are processed as described previously; however, the glucose processing changes a bit. The peripheral tissues preferentially absorb glucose. The liver, which normally absorbs and processes glucose, will not do so after a prolonged fast. The gluconeogenesis that has been ongoing in the liver will continue after fasting to replace the glycogen stores that were depleted in the liver. After these stores have been replenished, excess glucose that is absorbed by the liver will be converted into triglycerides and fatty acids for long-term storage. Figure 24.22 summarizes the metabolic processes occurring in the body during the postabsorptive state. 
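The insulin/glucagon switching described in these two sections is a classic negative-feedback loop, and the toy sketch below caricatures it: glucose above the fed threshold favors insulin (storage), glucose below the fasting threshold favors glucagon (glycogen breakdown and gluconeogenesis). Only the 80–120 mg/dL band comes from the text; the thresholds-as-a-switch logic is a deliberately crude illustration, not a physiological model.

```python
# Toy negative-feedback sketch of the fed/fasting switch described above.
# The 80-120 mg/dL band is the text's normal range; the rest is illustrative.
LOW_MG_DL, HIGH_MG_DL = 80, 120

def dominant_hormone(glucose_mg_dl: float) -> str:
    """Very crude caricature of the pancreatic response to blood glucose."""
    if glucose_mg_dl > HIGH_MG_DL:
        return "insulin (store glucose as glycogen and fat)"
    if glucose_mg_dl < LOW_MG_DL:
        return "glucagon (break down glycogen, start gluconeogenesis)"
    return "neither dominates (glucose within the normal band)"

for reading in (140, 95, 70):  # absorptive, normal, postabsorptive examples
    print(f"{reading} mg/dL -> {dominant_hormone(reading)}")
```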
Starvation When the body is deprived of nourishment for an extended period of time, it goes into “survival mode.” The first priority for survival is to provide enough glucose or fuel for the brain. The second priority is the conservation of amino acids for proteins. Therefore, the body uses ketones to satisfy the energy needs of the brain and other glucose-dependent organs, and to maintain proteins in the cells (see Figure 24.2). Because glucose levels are very low during starvation, glycolysis will shut off in cells that can use alternative fuels. For example, muscles will switch from using glucose to fatty acids as fuel. As previously explained, fatty acids can be converted into acetyl CoA and processed through the Krebs cycle to make ATP. Pyruvate, lactate, and alanine from muscle cells are not converted into acetyl CoA and used in the Krebs cycle, but are exported to the liver to be used in the synthesis of glucose. As starvation continues, and more glucose is needed, glycerol from fatty acids can be liberated and used as a source for gluconeogenesis. After several days of starvation, ketone bodies become the major source of fuel for the heart and other organs. As starvation continues, fatty acids and triglyceride stores are used to create ketones for the body. This prevents the continued breakdown of proteins that serve as carbon sources for gluconeogenesis. Once these stores are fully depleted, proteins from muscles are released and broken down for glucose synthesis. Overall survival is dependent on the amount of fat and protein stored in the body. 24.6 Energy and Heat Balance Learning Objectives By the end of this section, you will be able to: Describe how the body regulates temperature Explain the significance of the metabolic rate The body tightly regulates its temperature through a process called thermoregulation, in which it maintains the core temperature within certain boundaries even when the surrounding temperature is very different. The core temperature of the body remains steady at around 36.5–37.5 °C (or 97.7–99.5 °F). In the process of ATP production by cells throughout the body, approximately 60 percent of the energy produced is in the form of heat used to maintain body temperature. Thermoregulation is an example of negative feedback. The hypothalamus in the brain is the master switch that works as a thermostat to regulate the body's core temperature (Figure 24.23). If the temperature is too high, the hypothalamus can initiate several processes to lower it. These include increasing the circulation of the blood to the surface of the body to allow for the dissipation of heat through the skin and initiation of sweating to allow evaporation of water on the skin to cool its surface. Conversely, if the temperature falls below the set core temperature, the hypothalamus can initiate shivering to generate heat; the body uses more energy and generates more heat as a result. In addition, thyroid hormone will stimulate more energy use and heat production by cells throughout the body. An environment is said to be thermoneutral when the body does not expend or release energy to maintain its core temperature. For a naked human, this is an ambient air temperature of around 84 °F. If the temperature is higher, for example, when wearing clothes, the body compensates with cooling mechanisms. The body loses heat through the mechanisms of heat exchange.
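The paired Celsius/Fahrenheit figures quoted in this section (the 36.5–37.5 °C core range and the roughly 84 °F thermoneutral point) can be checked with the standard conversion formula; the snippet below is just that conversion applied to the numbers in the text.

```python
def c_to_f(celsius: float) -> float:
    """Standard Celsius-to-Fahrenheit conversion: F = C * 9/5 + 32."""
    return celsius * 9 / 5 + 32

def f_to_c(fahrenheit: float) -> float:
    """Standard Fahrenheit-to-Celsius conversion: C = (F - 32) * 5/9."""
    return (fahrenheit - 32) * 5 / 9

print(c_to_f(36.5), c_to_f(37.5))  # 97.7 99.5 -- matches the quoted core range
print(round(f_to_c(84.0), 1))      # ~28.9 degrees C for the thermoneutral point
```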
Mechanisms of Heat Exchange When the environment is not thermoneutral, the body uses four mechanisms of heat exchange to maintain homeostasis: conduction, convection, radiation, and evaporation. Each of these mechanisms relies on the tendency of heat to flow from a warmer region to a cooler one; therefore, each of the mechanisms of heat exchange varies in rate according to the temperature and conditions of the environment. Conduction is the transfer of heat by two objects that are in direct contact with one another. It occurs when the skin comes in contact with a cold or warm object. For example, when holding a glass of ice water, the heat from your skin will warm the glass and in turn melt the ice. Alternatively, on a cold day, you might warm up by wrapping your cold hands around a hot mug of coffee. Only about 3 percent of the body's heat is lost through conduction. Convection is the transfer of heat to the air surrounding the skin. The warmed air rises away from the body and is replaced by cooler air that is subsequently heated. Convection can also occur in water. When the water temperature is lower than the body's temperature, the body loses heat by warming the water closest to the skin, which moves away to be replaced by cooler water. The convection currents created by the temperature changes continue to draw heat away from the body more quickly than the body can replace it, resulting in hypothermia. About 15 percent of the body's heat is lost through convection. Radiation is the transfer of heat via infrared waves. This occurs between any two objects when their temperatures differ. A radiator can warm a room via radiant heat. On a sunny day, the radiation from the sun warms the skin. The same principle works from the body to the environment. About 60 percent of the heat lost by the body is lost through radiation. Evaporation is the transfer of heat by the evaporation of water. Because it takes a great deal of energy for a water molecule to change from a liquid to a gas, evaporating water (in the form of sweat) takes with it a great deal of energy from the skin. However, the rate at which evaporation occurs depends on relative humidity: more sweat evaporates in lower humidity environments. Sweating is the primary means of cooling the body during exercise, whereas at rest, about 20 percent of the heat lost by the body occurs through evaporation. Metabolic Rate The metabolic rate is the amount of energy the body expends over a given period of time. The basal metabolic rate (BMR) describes the amount of daily energy expended by humans at rest, in a neutrally temperate environment, while in the postabsorptive state. It measures how much energy the body needs for normal, basic, daily activity. About 70 percent of all daily energy expenditure comes from the basic functions of the organs in the body. Another 20 percent comes from physical activity, and the remaining 10 percent is necessary for body thermoregulation or temperature control. This rate will be higher if a person is more active or has more lean body mass. As you age, the BMR generally decreases as the proportion of lean muscle mass decreases. 24.7 Nutrition and Diet Learning Objectives By the end of this section, you will be able to: Explain how different foods can affect metabolism Describe a healthy diet, as recommended by the U.S. Department of Agriculture (USDA) List reasons why vitamins and minerals are critical to a healthy diet The carbohydrates, lipids, and proteins in the foods you eat are used for energy to power molecular, cellular, and organ system activities. Importantly, the energy is stored primarily as fats. The quantity and quality of food that is ingested, digested, and absorbed affects the amount of fat that is stored as excess calories. Diet, both what you eat and how much you eat, has a dramatic impact on your health. Eating too much or too little food can lead to serious medical issues, including cardiovascular disease, cancer, anorexia, and diabetes, among others. Combine an unhealthy diet with unhealthy environmental conditions, such as smoking, and the potential medical complications increase significantly. Food and Metabolism The amount of energy that is needed or ingested per day is measured in calories. The nutritional Calorie (C) is the amount of heat it takes to raise 1 kg (1000 g) of water by 1 °C. This is different from the calorie (c) used in the physical sciences, which is the amount of heat it takes to raise 1 g of water by 1 °C. When we refer to “calorie,” we are referring to the nutritional Calorie. On average, a person needs 1500 to 2000 calories per day to sustain (or carry out) daily activities. The total number of calories needed by one person is dependent on their body mass, age, height, gender, activity level, and the amount of exercise per day. If exercise is a regular part of one's day, more calories are required. As a rule, people underestimate the number of calories ingested and overestimate the amount they burn through exercise. This can lead to ingestion of too many calories per day. The accumulation of an extra 3500 calories adds one pound of weight. If an excess of 200 calories per day is ingested, one extra pound of body weight will be gained every 18 days. At that rate, an extra 20 pounds can be gained over the course of a year. Of course, this increase in calories could be offset by increased exercise. Running or jogging one mile burns almost 100 calories. The type of food ingested also affects the body's metabolic rate. Processing of carbohydrates requires less energy than processing of proteins. In fact, the breakdown of carbohydrates requires the least amount of energy, whereas the processing of proteins demands the most energy. In general, the number of calories ingested and the number of calories burned determine overall weight. To lose weight, the number of calories burned per day must exceed the number ingested. Calories are in almost everything you ingest, so beverages must also be counted when considering calorie intake. To help provide guidelines regarding the types and quantities of food that should be eaten every day, the USDA has updated its food guidelines from MyPyramid to MyPlate. It has put the recommended elements of a healthy meal into the context of a place setting of food. MyPlate categorizes food into the standard six food groups: fruits, vegetables, grains, protein foods, dairy, and oils. The accompanying website gives clear recommendations regarding quantity and type of each food that you should consume each day, as well as identifying which foods belong in each category. The accompanying graphic (Figure 24.24) gives a clear visual with general recommendations for a healthy and balanced meal.
The guidelines recommend that you “make half your plate fruits and vegetables.” The other half is grains and protein, with a slightly higher quantity of grains than protein. Dairy products are represented by a drink, but the quantity can be applied to other dairy products as well. ChooseMyPlate.gov provides extensive online resources for planning a healthy diet and lifestyle, including offering weight management tips and recommendations for physical activity. It also includes the SuperTracker, a web-based application to help you analyze your own diet and physical activity. Everyday Connection Metabolism and Obesity Obesity in the United States is epidemic. The rate of obesity has been steadily rising since the 1980s. In the 1990s, most states reported that less than 10 percent of their populations were obese, and the state with the highest rate reported that only 15 percent of its population was considered obese. By 2010, the U.S. Centers for Disease Control and Prevention reported that nearly 36 percent of adults over 20 years old were obese and an additional 33 percent were overweight, leaving only about 30 percent of the population at a healthy weight. These studies find the highest levels of obesity are concentrated in the southern states. They also find the level of childhood obesity is rising. Obesity is defined by the body mass index (BMI), which is a measure of a person's weight in kilograms divided by the square of their height in meters. The normal, or healthy, BMI range is between 18.5 and 24.9 kg/m². Overweight is defined as a BMI of 25 to 29.9 kg/m², and obesity is considered to be a BMI of 30 kg/m² or greater. Obesity can arise from a number of factors, including overeating, poor diet, sedentary lifestyle, limited sleep, genetic factors, and even diseases or drugs. Severe obesity (morbid obesity) or long-term obesity can result in serious medical conditions, including coronary heart disease; type 2 diabetes; endometrial, breast, or colon cancer; hypertension (high blood pressure); dyslipidemia (high cholesterol or elevated triglycerides); stroke; liver disease; gall bladder disease; sleep apnea or respiratory diseases; osteoarthritis; and infertility. Research has shown that losing weight can help reduce or reverse the complications associated with these conditions.
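BMI and the calorie arithmetic from earlier in this section are both one-line computations. The sketch below implements the BMI definition just given and the 3500-calorie-per-pound rule of thumb used above; the example height and weight are made up for illustration, and the 3500-calorie figure is the text's approximation, not a precise physiological constant.

```python
KCAL_PER_POUND = 3500  # rule-of-thumb figure used in the text

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

def days_per_pound(daily_surplus_kcal: float) -> float:
    """Days of a constant calorie surplus needed to gain one pound."""
    return KCAL_PER_POUND / daily_surplus_kcal

print(round(bmi(70, 1.75), 1))   # 22.9 -- within the healthy range (example values)
print(days_per_pound(200))       # 17.5 days, ~18 as stated in the text
print(365 / days_per_pound(200)) # ~20-21 pounds over a year, matching the estimate
```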
Vitamins Vitamins are organic compounds found in foods and are a necessary part of the biochemical reactions in the body. They are involved in a number of processes, including mineral and bone metabolism, and cell and tissue growth, and they act as cofactors for energy metabolism. The B vitamins play the largest role of any vitamins in metabolism (Table 24.3 and Table 24.4). You get most of your vitamins through your diet, although some can be formed from the precursors absorbed during digestion. For example, the body synthesizes vitamin A from the β-carotene in orange vegetables like carrots and sweet potatoes. Vitamins are either fat-soluble or water-soluble. Fat-soluble vitamins A, D, E, and K are absorbed through the intestinal tract with lipids in chylomicrons. Vitamin D is also synthesized in the skin through exposure to sunlight. Because they are carried in lipids, fat-soluble vitamins can accumulate in the lipids stored in the body. If excess vitamins are retained in the lipid stores in the body, hypervitaminosis can result. Water-soluble vitamins, including the eight B vitamins and vitamin C, are absorbed with water in the gastrointestinal tract. These vitamins move easily through bodily fluids, which are water based, so they are not stored in the body. Excess water-soluble vitamins are excreted in the urine. Therefore, hypervitaminosis of water-soluble vitamins rarely occurs, except with an excess of vitamin supplements.

Table 24.3 Fat-Soluble Vitamins
Vitamin (alternative name) | Sources | Recommended daily allowance | Function | Problems associated with deficiency
A (retinal or β-carotene) | Yellow and orange fruits and vegetables, dark green leafy vegetables, eggs, milk, liver | 700–900 µg | Eye and bone development, immune function | Night blindness, epithelial changes, immune system deficiency
D (cholecalciferol) | Dairy products, egg yolks; also synthesized in the skin from exposure to sunlight | 5–15 µg | Aids in calcium absorption, promoting bone growth | Rickets, bone pain, muscle weakness, increased risk of death from cardiovascular disease, cognitive impairment, asthma in children, cancer
E (tocopherols) | Seeds, nuts, vegetable oils, avocados, wheat germ | 15 mg | Antioxidant | Anemia
K (phylloquinone) | Dark green leafy vegetables, broccoli, Brussels sprouts, cabbage | 90–120 µg | Blood clotting, bone health | Hemorrhagic disease of newborn in infants; uncommon in adults

Table 24.4 Water-Soluble Vitamins
Vitamin (alternative name) | Sources | Recommended daily allowance | Function | Problems associated with deficiency
B1 (thiamine) | Whole grains, enriched bread and cereals, milk, meat | 1.1–1.2 mg | Carbohydrate metabolism | Beriberi, Wernicke-Korsakoff syndrome
B2 (riboflavin) | Brewer's yeast, almonds, milk, organ meats, legumes, enriched breads and cereals, broccoli, asparagus | 1.1–1.3 mg | Synthesis of FAD for metabolism, production of red blood cells | Fatigue, slowed growth, digestive problems, light sensitivity, epithelial problems like cracks in the corners of the mouth
B3 (niacin) | Meat, fish, poultry, enriched breads and cereals, peanuts | 14–16 mg | Synthesis of NAD, nerve function, cholesterol production | Cracked, scaly skin; dementia; diarrhea; also known as pellagra
B5 (pantothenic acid) | Meat, poultry, potatoes, oats, enriched breads and cereals, tomatoes | 5 mg | Synthesis of coenzyme A in fatty acid metabolism | Rare: symptoms may include fatigue, insomnia, depression, irritability
B6 (pyridoxine) | Potatoes, bananas, beans, seeds, nuts, meat, poultry, fish, eggs, dark green leafy vegetables, soy, organ meats | 1.3–1.5 mg | Sodium and potassium balance, red blood cell synthesis, protein metabolism | Confusion, irritability, depression, mouth and tongue sores
B7 (biotin) | Liver, fruits, meats | 30 µg | Cell growth, metabolism of fatty acids, production of blood cells | Rare in developed countries; symptoms include dermatitis, hair loss, loss of muscular coordination
B9 (folic acid) | Liver, legumes, dark green leafy vegetables, enriched breads and cereals, citrus fruits | 400 µg | DNA/protein synthesis | Poor growth, gingivitis, appetite loss, shortness of breath, gastrointestinal problems, mental deficits
B12 (cyanocobalamin) | Fish, meat, poultry, dairy products, eggs | 2.4 µg | Fatty acid oxidation, nerve cell function, red blood cell production | Pernicious anemia, leading to nerve cell damage
C (ascorbic acid) | Citrus fruits, red berries, peppers, tomatoes, broccoli, dark green leafy vegetables | 75–90 mg | Necessary to produce collagen for formation of connective tissue and teeth, and for wound healing | Dry hair, gingivitis, bleeding gums, dry and scaly skin, slow wound healing, easy bruising, compromised immunity; can lead to scurvy

Minerals Minerals in food are inorganic compounds that work with other nutrients to ensure the body functions properly. Minerals cannot be made in the body; they come from the diet. The amount of minerals in the body is small (only 4 percent of the total body mass), and most of that consists of the minerals that the body requires in moderate quantities: potassium, sodium, calcium, phosphorus, magnesium, and chloride. The most common minerals in the body are calcium and phosphorus, both of which are stored in the skeleton and necessary for the hardening of bones. Most minerals are ionized, and their ionic forms are used in physiological processes throughout the body. Sodium and chloride ions are electrolytes in the blood and extracellular tissues, and iron ions are critical to the formation of hemoglobin. There are additional trace minerals that are still important to the body's functions, but their required quantities are much lower. Like vitamins, minerals can be consumed in toxic quantities (although it is rare). A healthy diet includes most of the minerals your body requires, so supplements and processed foods can add potentially toxic levels of minerals. Table 24.5 and Table 24.6 provide a summary of minerals and their function in the body.

Table 24.5 Major Minerals
Mineral | Sources | Recommended daily allowance | Function | Problems associated with deficiency
Potassium | Meats, some fish, fruits, vegetables, legumes, dairy products | 4700 mg | Nerve and muscle function; acts as an electrolyte | Hypokalemia: weakness, fatigue, muscle cramping, gastrointestinal problems, cardiac problems
Sodium | Table salt, milk, beets, celery, processed foods | 2300 mg | Blood pressure, blood volume, muscle and nerve function | Rare
Calcium | Dairy products, dark green leafy vegetables, blackstrap molasses, nuts, brewer's yeast, some fish | 1000 mg | Bone structure and health; nerve and muscle functions, especially cardiac function | Slow growth, weak and brittle bones
Phosphorus | Meat, milk | 700 mg | Bone formation, metabolism, ATP production | Rare
Magnesium | Whole grains, nuts, leafy green vegetables | 310–420 mg | Enzyme activation, production of energy, regulation of other nutrients | Agitation, anxiety, sleep problems, nausea and vomiting, abnormal heart rhythms, low blood pressure, muscular problems
Chloride | Most foods, salt, vegetables, especially seaweed, tomatoes, lettuce, celery, olives | 2300 mg | Balance of body fluids, digestion | Loss of appetite, muscle cramps

Table 24.6 Trace Minerals
Mineral | Sources | Recommended daily allowance | Function | Problems associated with deficiency
Iron | Meat, poultry, fish, shellfish, legumes, nuts, seeds, whole grains, dark leafy green vegetables | 8–18 mg | Transport of oxygen in blood, production of ATP | Anemia, weakness, fatigue
Zinc | Meat, fish, poultry, cheese, shellfish | 8–11 mg | Immunity, reproduction, growth, blood clotting, insulin and thyroid function | Loss of appetite, poor growth, weight loss, skin problems, hair loss, vision problems, lack of taste or smell
Copper | Seafood, organ meats, nuts, legumes, chocolate, enriched breads and cereals, some fruits and vegetables | 900 µg | Red blood cell production, nerve and immune system function, collagen formation, acts as an antioxidant | Anemia, low body temperature, bone fractures, low white blood cell concentration, irregular heartbeat, thyroid problems
Iodine | Fish, shellfish, garlic, lima beans, sesame seeds, soybeans, dark leafy green vegetables | 150 µg | Thyroid function | Hypothyroidism: fatigue, weight gain, dry skin, temperature sensitivity
Sulfur | Eggs, meat, poultry, fish, legumes | None | Component of amino acids | Protein deficiency
Fluoride | Fluoridated water | 3–4 mg | Maintenance of bone and tooth structure | Increased cavities, weak bones and teeth
Manganese | Nuts, seeds, whole grains, legumes | 1.8–2.3 mg | Formation of connective tissue and bones, blood clotting, sex hormone development, metabolism, brain and nerve function | Infertility, bone malformation, weakness, seizures
Cobalt | Fish, nuts, leafy green vegetables, whole grains | None | Component of B12 | None
Selenium | Brewer's yeast, wheat germ, liver, butter, fish, shellfish, whole grains | 55 µg | Antioxidant, thyroid function, immune system function | Muscle pain
Chromium | Whole grains, lean meats, cheese, black pepper, thyme, brewer's yeast | 25–35 µg | Insulin function | High blood sugar, triglyceride, and cholesterol levels
Molybdenum | Legumes, whole grains, nuts | 45 µg | Cofactor for enzymes | Rare
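Reference tables like 24.3–24.6 translate naturally into a lookup structure. The fragment below encodes a handful of the recommended daily allowances from the tables above so they can be queried programmatically; the values are the textbook's figures, and the selection of nutrients and the helper function are arbitrary choices for illustration.

```python
# A few recommended daily allowances transcribed from Tables 24.3-24.6,
# stored as (low, high, unit) so single values and ranges share one shape.
RDA = {
    "vitamin C": (75, 90, "mg"),
    "vitamin B12": (2.4, 2.4, "µg"),
    "calcium": (1000, 1000, "mg"),
    "iron": (8, 18, "mg"),
    "iodine": (150, 150, "µg"),
}

def rda_text(nutrient: str) -> str:
    """Render the stored allowance as a human-readable string."""
    low, high, unit = RDA[nutrient]
    return f"{low} {unit}" if low == high else f"{low}-{high} {unit}"

for nutrient in RDA:
    print(f"{nutrient}: {rda_text(nutrient)}")
```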
biology
Chapter Outline 33.1 Animal Form and Function 33.2 Animal Primary Tissues 33.3 Homeostasis Introduction The arctic fox is an example of a complex animal that has adapted to its environment and illustrates the relationships between an animal’s form and function. The structures of animals consist of primary tissues that make up more complex organs and organ systems. Homeostasis allows an animal to maintain a balance between its internal and external environments.
[ { "answer": { "ans_choice": 0, "ans_text": "endotherm" }, "bloom": "1", "hl_context": "Endotherms and Ectotherms Animals can be divided into two groups : some maintain a constant body temperature in the face of differing environmental temperatures , while others have a body temperature that is the same as their environment and thus varies with the environment . Animals that do not control their body temperature are ectotherms . This group has been called cold-blooded , but the term may not apply to an animal in the desert with a very warm body temperature . In contrast to ectotherms , which rely on external temperatures to set their body temperatures , poikilotherms are animals with constantly varying internal temperatures . An animal that maintains a constant body temperature in the face of environmental changes is called a homeotherm . <hl> Endotherms are animals that rely on internal sources for body temperature but which can exhibit extremes in temperature . <hl> These animals are able to maintain a level of activity at cooler temperature , which an ectotherm cannot due to differing enzyme levels of activity . Heat can be exchanged between an animal and its environment through four mechanisms : radiation , evaporation , convection , and conduction ( Figure 33.22 ) . Radiation is the emission of electromagnetic “ heat ” waves . Heat comes from the sun in this manner and radiates from dry skin the same way . Heat can be removed with liquid from a surface during evaporation . This occurs when a mammal sweats . Convection currents of air remove heat from the surface of dry skin as the air passes over it . Heat will be conducted from one surface to another during direct contact with the surfaces , such as an animal resting on a warm rock . Smaller endothermic animals have a greater surface area for their mass than larger ones ( Figure 33.4 ) . <hl> Therefore , smaller animals lose heat at a faster rate than larger animals and require more energy to maintain a constant internal temperature . <hl> This results in a smaller endothermic animal having a higher BMR , per body weight , than a larger endothermic animal . The amount of energy expended by an animal over a specific time is called its metabolic rate . The rate is measured variously in joules , calories , or kilocalories ( 1000 calories ) . Carbohydrates and proteins contain about 4.5 to 5 kcal / g , and fat contains about 9 kcal / g . Metabolic rate is estimated as the basal metabolic rate ( BMR ) in endothermic animals at rest and as the standard metabolic rate ( SMR ) in ectotherms . Human males have a BMR of 1600 to 1800 kcal / day , and human females have a BMR of 1300 to 1500 kcal / day . <hl> Even with insulation , endothermal animals require extensive amounts of energy to maintain a constant body temperature . <hl> An ectotherm such as an alligator has an SMR of 60 kcal / day . Energy Requirements Related to Body Size", "hl_sentences": "Endotherms are animals that rely on internal sources for body temperature but which can exhibit extremes in temperature . Therefore , smaller animals lose heat at a faster rate than larger animals and require more energy to maintain a constant internal temperature . 
Even with insulation , endothermal animals require extensive amounts of energy to maintain a constant body temperature .", "question": { "cloze_format": "A ___ is a type of animal that maintains a constant internal body temperature.", "normal_format": "Which type of animal maintains a constant internal body temperature?", "question_choices": [ "endotherm", "ectotherm", "coelomate", "mesoderm" ], "question_id": "fs-idp64261056", "question_text": "Which type of animal maintains a constant internal body temperature?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "bilateral" }, "bloom": null, "hl_context": "<hl> Limits on Animal Size and Shape Animals with bilateral symmetry that live in water tend to have a fusiform shape : this is a tubular shaped body that is tapered at both ends . <hl> This shape decreases the drag on the body as it moves through water and allows the animal to swim at high speeds . Table 33.1 lists the maximum speed of various animals . Certain types of sharks can swim at fifty kilometers an hour and some dolphins at 32 to 40 kilometers per hour . Land animals frequently travel faster , although the tortoise and snail are significantly slower than cheetahs . Another difference in the adaptations of aquatic and land-dwelling organisms is that aquatic organisms are constrained in shape by the forces of drag in the water since water has higher viscosity than air . On the other hand , land-dwelling organisms are constrained mainly by gravity , and drag is relatively unimportant . For example , most adaptations in birds are for gravity not for drag . Animal body plans follow set patterns related to symmetry . They are asymmetrical , radial , or bilateral in form as illustrated in Figure 33.2 . Asymmetrical animals are animals with no pattern or symmetry ; an example of an asymmetrical animal is a sponge . Radial symmetry , as illustrated in Figure 33.2 , describes when an animal has an up-and-down orientation : any plane cut along its longitudinal axis through the organism produces equal halves , but not a definite right or left side . This plan is found mostly in aquatic animals , especially organisms that attach themselves to a base , like a rock or a boat , and extract their food from the surrounding water as it flows around the organism . Bilateral symmetry is illustrated in the same figure by a goat . The goat also has an upper and lower component to it , but a plane cut from front to back separates the animal into definite right and left sides . Additional terms used when describing positions in the body are anterior ( front ) , posterior ( rear ) , dorsal ( toward the back ) , and ventral ( toward the stomach ) . <hl> Bilateral symmetry is found in both land-based and aquatic animals ; it enables a high level of mobility . <hl>", "hl_sentences": "Limits on Animal Size and Shape Animals with bilateral symmetry that live in water tend to have a fusiform shape : this is a tubular shaped body that is tapered at both ends . Bilateral symmetry is found in both land-based and aquatic animals ; it enables a high level of mobility .", "question": { "cloze_format": "The symmetry found in animals that move swiftly is ________.", "normal_format": "What is the symmetry found in animals that move swiftly?", "question_choices": [ "radial", "bilateral", "sequential", "interrupted" ], "question_id": "fs-idm151811072", "question_text": "The symmetry found in animals that move swiftly is ________." 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "estivation" }, "bloom": null, "hl_context": "Animals adapt to extremes of temperature or food availability through torpor . <hl> Torpor is a process that leads to a decrease in activity and metabolism and allows animals to survive adverse conditions . <hl> Torpor can be used by animals for long periods , such as entering a state of hibernation during the winter months , in which case it enables them to maintain a reduced body temperature . During hibernation , ground squirrels can achieve an abdominal temperature of 0 ° C ( 32 ° F ) , while a bear ’ s internal temperature is maintained higher at about 37 ° C ( 99 ° F ) . <hl> If torpor occurs during the summer months with high temperatures and little water , it is called estivation . <hl> Some desert animals use this to survive the harshest months of the year . Torpor can occur on a daily basis ; this is seen in bats and hummingbirds . While endothermy is limited in smaller animals by surface to volume ratio , some organisms can be smaller and still be endotherms because they employ daily torpor during the part of the day that is coldest . This allows them to conserve energy during the colder parts of the day , when they consume more energy to maintain their body temperature .", "hl_sentences": "Torpor is a process that leads to a decrease in activity and metabolism and allows animals to survive adverse conditions . If torpor occurs during the summer months with high temperatures and little water , it is called estivation .", "question": { "cloze_format": "___ is the term that describes the condition of a desert mouse that lowers its metabolic rate and “sleeps” during the hot day.", "normal_format": "What term describes the condition of a desert mouse that lowers its metabolic rate and “sleeps” during the hot day?", "question_choices": [ "turgid", "hibernation", "estivation", "normal sleep pattern" ], "question_id": "fs-idm68253376", "question_text": "What term describes the condition of a desert mouse that lowers its metabolic rate and “sleeps” during the hot day?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "midsagittal" }, "bloom": null, "hl_context": "A standing vertebrate animal can be divided by several planes . A sagittal plane divides the body into right and left portions . <hl> A midsagittal plane divides the body exactly in the middle , making two equal right and left halves . <hl> A frontal plane ( also called a coronal plane ) separates the front from the back . A transverse plane ( or , horizontal plane ) divides the animal into upper and lower portions . This is sometimes called a cross section , and , if the transverse cut is at an angle , it is called an oblique plane . Figure 33.5 illustrates these planes on a goat ( a four-legged animal ) and a human being .", "hl_sentences": "A midsagittal plane divides the body exactly in the middle , making two equal right and left halves .", "question": { "cloze_format": "A plane that divides an animal into equal right and left portions is ________.", "normal_format": "What is a plane that divides an animal into equal right and left portions?", "question_choices": [ "diagonal", "midsagittal", "coronal", "transverse" ], "question_id": "fs-idm131350080", "question_text": "A plane that divides an animal into equal right and left portions is ________." 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "transverse" }, "bloom": null, "hl_context": "A standing vertebrate animal can be divided by several planes . A sagittal plane divides the body into right and left portions . A midsagittal plane divides the body exactly in the middle , making two equal right and left halves . A frontal plane ( also called a coronal plane ) separates the front from the back . <hl> A transverse plane ( or , horizontal plane ) divides the animal into upper and lower portions . <hl> This is sometimes called a cross section , and , if the transverse cut is at an angle , it is called an oblique plane . Figure 33.5 illustrates these planes on a goat ( a four-legged animal ) and a human being .", "hl_sentences": "A transverse plane ( or , horizontal plane ) divides the animal into upper and lower portions .", "question": { "cloze_format": "A plane that divides an animal into dorsal and ventral portions is ________.", "normal_format": "What is a plane that divides an animal into dorsal and ventral portions?", "question_choices": [ "sagittal", "midsagittal", "coronal", "transverse" ], "question_id": "fs-idm126918432", "question_text": "A plane that divides an animal into dorsal and ventral portions is ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "thoracic cavity" }, "bloom": null, "hl_context": "Vertebrate animals have a number of defined body cavities , as illustrated in Figure 33.6 . Two of these are major cavities that contain smaller cavities within them . The dorsal cavity contains the cranial and the vertebral ( or spinal ) cavities . <hl> The ventral cavity contains the thoracic cavity , which in turn contains the pleural cavity around the lungs and the pericardial cavity , which surrounds the heart . <hl> The ventral cavity also contains the abdominopelvic cavity , which can be separated into the abdominal and the pelvic cavities .", "hl_sentences": "The ventral cavity contains the thoracic cavity , which in turn contains the pleural cavity around the lungs and the pericardial cavity , which surrounds the heart .", "question": { "cloze_format": "The pleural cavity is a part of ___.", "normal_format": "The pleural cavity is a part of which cavity?", "question_choices": [ "dorsal cavity", "thoracic cavity", "abdominal cavity", "pericardial cavity" ], "question_id": "fs-idm64477360", "question_text": "The pleural cavity is a part of which cavity?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "columnar" }, "bloom": "2", "hl_context": "Columnar epithelial cells lining the respiratory tract appear to be stratified . However , each cell is attached to the base membrane of the tissue and , therefore , they are simple tissues . The nuclei are arranged at different levels in the layer of cells , making it appear as though there is more than one layer , as seen in Figure 33.10 . This is called pseudostratified , columnar epithelia . <hl> This cellular covering has cilia at the apical , or free , surface of the cells . <hl> <hl> The cilia enhance the movement of mucous and trapped particles out of the respiratory tract , helping to protect the system from invasive microorganisms and harmful material that has been breathed into the body . <hl> Goblet cells are interspersed in some tissues ( such as the lining of the trachea ) . 
The goblet cells contain mucous that traps irritants , which in the case of the trachea keep these irritants from getting into the lungs . Columnar epithelial cells are taller than they are wide : they resemble a stack of columns in an epithelial layer , and are most commonly found in a single-layer arrangement . The nuclei of columnar epithelial cells in the digestive tract appear to be lined up at the base of the cells , as illustrated in Figure 33.9 . <hl> These cells absorb material from the lumen of the digestive tract and prepare it for entry into the body through the circulatory and lymphatic systems . <hl>", "hl_sentences": "This cellular covering has cilia at the apical , or free , surface of the cells . The cilia enhance the movement of mucous and trapped particles out of the respiratory tract , helping to protect the system from invasive microorganisms and harmful material that has been breathed into the body . These cells absorb material from the lumen of the digestive tract and prepare it for entry into the body through the circulatory and lymphatic systems .", "question": { "cloze_format": "The ___ epithelial cell that is best adapted to aid diffusion.", "normal_format": "Which type of epithelial cell is best adapted to aid diffusion?", "question_choices": [ "squamous", "cuboidal", "columnar", "transitional" ], "question_id": "fs-idp135609056", "question_text": "Which type of epithelial cell is best adapted to aid diffusion?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "cuboidal" }, "bloom": "1", "hl_context": "Cuboidal epithelial cells , shown in Figure 33.8 , are cube-shaped with a single , central nucleus . <hl> They are most commonly found in a single layer representing a simple epithelia in glandular tissues throughout the body where they prepare and secrete glandular material . <hl> They are also found in the walls of tubules and in the ducts of the kidney and liver .", "hl_sentences": "They are most commonly found in a single layer representing a simple epithelia in glandular tissues throughout the body where they prepare and secrete glandular material .", "question": { "cloze_format": "The ___ epithelial cell is found in glands.", "normal_format": "Which type of epithelial cell is found in glands?", "question_choices": [ "squamous", "cuboidal", "columnar", "transitional" ], "question_id": "fs-idp123194416", "question_text": "Which type of epithelial cell is found in glands?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "transitional" }, "bloom": null, "hl_context": "<hl> Transitional or uroepithelial cells appear only in the urinary system , primarily in the bladder and ureter . <hl> These cells are arranged in a stratified layer , but they have the capability of appearing to pile up on top of each other in a relaxed , empty bladder , as illustrated in Figure 33.11 . As the urinary bladder fills , the epithelial layer unfolds and expands to hold the volume of urine introduced into it . As the bladder fills , it expands and the lining becomes thinner . 
Review Questions

1. Which type of epithelial cell is found in the urinary bladder?
a. squamous
b. cuboidal
c. columnar
d. transitional
Answer: d. transitional

2. Which type of connective tissue has the most fibers?
a. loose connective tissue
b. fibrous connective tissue
c. cartilage
d. bone
Answer: b. fibrous connective tissue

3. Which type of connective tissue has a mineralized matrix?
a. loose connective tissue
b. fibrous connective tissue
c. cartilage
d. bone
Answer: d. bone

4. The cell found in bone that breaks it down is called an ________.
a. osteoblast
b. osteocyte
c. osteoclast
d. osteon
Answer: c. osteoclast

5. The cell found in bone that makes the bone is called an ________.
a. osteoblast
b. osteocyte
c. osteoclast
d. osteon
Answer: a. osteoblast

6. Plasma is the ________.
a. fibers in blood
b. matrix of blood
c. cell that phagocytizes bacteria
d. cell fragment found in the tissue
Answer: b. matrix of blood

7. The type of muscle cell under voluntary control is the ________.
a. smooth muscle
b. skeletal muscle
c. cardiac muscle
d. visceral muscle
Answer: b. skeletal muscle

8. The part of a neuron that contains the nucleus is the ________.
a. cell body
b. dendrite
c. axon
d. glial
Answer: a. cell body

9. When faced with a sudden drop in environmental temperature, an endothermic animal will:
a. experience a drop in its body temperature
b. wait to see if it goes lower
c. increase muscle activity to generate heat
d. add fur or fat to increase insulation
Answer: c. increase muscle activity to generate heat

10. Which is an example of negative feedback?
a. lowering of blood glucose after a meal
b. blood clotting after an injury
c. lactation during nursing
d. uterine contractions during labor
Answer: a. lowering of blood glucose after a meal

11. Which method of heat exchange occurs during direct contact between the source and animal?
a. radiation
b. evaporation
c. convection
d. conduction
Answer: d. conduction

12. The body's thermostat is located in the ________.
a. homeostatic receptor
b. hypothalamus
c. medulla
d. vasodilation center
Answer: b. hypothalamus
33
33.1 Animal Form and Function
Learning Objectives
By the end of this section, you will be able to:
Describe the various types of body plans that occur in animals
Describe limits on animal size and shape
Relate bioenergetics to body size, levels of activity, and the environment

Animals vary in form and function. From a sponge to a worm to a goat, an organism has a distinct body plan that limits its size and shape. Animals' bodies are also designed to interact with their environments, whether in the deep sea, a rainforest canopy, or the desert. Therefore, a large amount of information about the structure of an organism's body (anatomy) and the function of its cells, tissues and organs (physiology) can be learned by studying that organism's environment.

Body Plans
Animal body plans follow set patterns related to symmetry. They are asymmetrical, radial, or bilateral in form, as illustrated in Figure 33.2. Asymmetrical animals are animals with no pattern or symmetry; an example of an asymmetrical animal is a sponge. Radial symmetry, as illustrated in Figure 33.2, describes an animal with an up-and-down orientation: any plane cut along its longitudinal axis through the organism produces equal halves, but not a definite right or left side. This plan is found mostly in aquatic animals, especially organisms that attach themselves to a base, like a rock or a boat, and extract their food from the surrounding water as it flows around the organism. Bilateral symmetry is illustrated in the same figure by a goat. The goat also has an upper and lower component to it, but a plane cut from front to back separates the animal into definite right and left sides. Additional terms used when describing positions in the body are anterior (front), posterior (rear), dorsal (toward the back), and ventral (toward the stomach). Bilateral symmetry is found in both land-based and aquatic animals; it enables a high level of mobility.

Limits on Animal Size and Shape
Animals with bilateral symmetry that live in water tend to have a fusiform shape: this is a tubular-shaped body that is tapered at both ends. This shape decreases the drag on the body as it moves through water and allows the animal to swim at high speeds. Table 33.1 lists the maximum speed of various animals. Certain types of sharks can swim at 50 kilometers per hour and some dolphins at 32 to 40 kilometers per hour. Land animals frequently travel faster, although the tortoise and snail are significantly slower than cheetahs. Another difference in the adaptations of aquatic and land-dwelling organisms is that aquatic organisms are constrained in shape by the forces of drag in the water, since water has a higher viscosity than air. On the other hand, land-dwelling organisms are constrained mainly by gravity, and drag is relatively unimportant. For example, most adaptations in birds are for gravity, not for drag.

Maximum Speed of Assorted Land and Marine Animals
Animal | Speed (km/h) | Speed (mph)
Cheetah | 113 | 70
Quarter horse | 77 | 48
Fox | 68 | 42
Shortfin mako shark | 50 | 31
Domestic house cat | 48 | 30
Human | 45 | 28
Dolphin | 32–40 | 20–25
Mouse | 13 | 8
Snail | 0.05 | 0.03
Table 33.1

Most animals have an exoskeleton, including insects, spiders, scorpions, horseshoe crabs, centipedes, and crustaceans. Scientists estimate that, of insects alone, there are over 30 million species on our planet.
The exoskeleton is a hard covering or shell that provides benefits to the animal, such as protection against damage from predators and from water loss (for land animals); it also provides for the attachments of muscles. As the tough and resistant outer cover of an arthropod, the exoskeleton may be constructed of a tough polymer such as chitin and is often biomineralized with materials such as calcium carbonate. This is fused to the animal's epidermis. Ingrowths of the exoskeleton, called apodemes, function as attachment sites for muscles, similar to tendons in more advanced animals (Figure 33.3). In order to grow, the animal must first synthesize a new exoskeleton underneath the old one and then shed or molt the original covering. This limits the animal's ability to grow continually, and may limit the individual's ability to mature if molting does not occur at the proper time. The thickness of the exoskeleton must be increased significantly to accommodate any increase in weight. It is estimated that a doubling of body size increases body weight by a factor of eight. The increasing thickness of the chitin necessary to support this weight limits most animals with an exoskeleton to a relatively small size. The same principles apply to endoskeletons, but they are more efficient because muscles are attached on the outside, making it easier to compensate for increased mass. An animal with an endoskeleton has its size determined by the amount of skeletal system it needs in order to support the other tissues and the amount of muscle it needs for movement. As the body size increases, both bone and muscle mass increase. The speed achievable by the animal is a balance between its overall size and the bone and muscle that provide support and movement.

Limiting Effects of Diffusion on Size and Development
The exchange of nutrients and wastes between a cell and its watery environment occurs through the process of diffusion. All living cells are bathed in liquid, whether they are in a single-celled organism or a multicellular one. Diffusion is effective over a specific distance and limits the size that an individual cell can attain. If a cell is a single-celled microorganism, such as an amoeba, it can satisfy all of its nutrient and waste needs through diffusion. If the cell is too large, then diffusion is ineffective and the center of the cell does not receive adequate nutrients, nor is it able to effectively dispel its waste. An important concept in understanding how efficient diffusion is as a means of transport is the surface-to-volume ratio. Recall that any three-dimensional object has a surface area and volume; the ratio of these two quantities is the surface-to-volume ratio. Consider a cell shaped like a perfect sphere: it has a surface area of 4πr², and a volume of (4/3)πr³. The surface-to-volume ratio of a sphere is therefore 3/r; as the cell gets bigger, its surface-to-volume ratio decreases, making diffusion less efficient. The larger the size of the sphere, or animal, the less surface area for diffusion it possesses. The solution to producing larger organisms is for them to become multicellular. Specialization occurs in complex organisms, allowing cells to become more efficient at doing fewer tasks. For example, circulatory systems bring nutrients and remove waste, while respiratory systems provide oxygen for the cells and remove carbon dioxide from them. Other organ systems have developed further specialization of cells and tissues and efficiently control body functions.
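The sphere argument above is easy to check numerically. The short Python sketch below is our illustration, not part of the original text; the radii are arbitrary.

```python
# Illustrative sketch only (not from the source text): the sphere formulas
# above, evaluated to show how the surface-to-volume ratio shrinks as a
# cell grows.
import math

def surface_to_volume(r):
    """SA/V for a sphere of radius r: (4*pi*r**2) / ((4/3)*pi*r**3) = 3/r."""
    surface_area = 4 * math.pi * r ** 2
    volume = (4 / 3) * math.pi * r ** 3
    return surface_area / volume

for r in (1, 2, 4, 8):  # hypothetical radii, in arbitrary units
    print(f"r = {r}: SA/V = {surface_to_volume(r):.3f} (= 3/r)")

# Doubling the radius halves SA/V, so the surface serves proportionally
# less interior: the diffusion limit described above. Note also that
# doubling a linear dimension scales volume (and so roughly weight) by
# 2**3 = 8, the factor of eight cited for exoskeleton thickness.
```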
Moreover, the surface-to-volume ratio applies to other areas of animal development, such as the relationship between muscle mass and cross-sectional surface area in supporting skeletons, and the relationship between muscle mass and the generation and dissipation of heat.

Link to Learning
Visit this interactive site to see an entire animal (a zebrafish embryo) at the cellular and sub-cellular level. Use the zoom and navigation functions for a virtual nanoscopy exploration.

Animal Bioenergetics
All animals must obtain their energy from food they ingest or absorb. These nutrients are converted to adenosine triphosphate (ATP) for short-term storage and use by all cells. Some animals store energy for slightly longer times as glycogen, and others store energy for much longer times in the form of triglycerides housed in specialized adipose tissues. No energy system is one hundred percent efficient, and an animal's metabolism produces waste energy in the form of heat. If an animal can conserve that heat and maintain a relatively constant body temperature, it is classified as a warm-blooded animal and called an endotherm. The insulation used to conserve the body heat comes in the forms of fur, fat, or feathers. The absence of insulation in ectothermic animals increases their dependence on the environment for body heat. The amount of energy expended by an animal over a specific time is called its metabolic rate. The rate is measured variously in joules, calories, or kilocalories (1000 calories). Carbohydrates and proteins contain about 4.5 to 5 kcal/g, and fat contains about 9 kcal/g. Metabolic rate is estimated as the basal metabolic rate (BMR) in endothermic animals at rest and as the standard metabolic rate (SMR) in ectotherms. Human males have a BMR of 1600 to 1800 kcal/day, and human females have a BMR of 1300 to 1500 kcal/day. Even with insulation, endothermic animals require extensive amounts of energy to maintain a constant body temperature. An ectotherm such as an alligator has an SMR of 60 kcal/day.

Energy Requirements Related to Body Size
Smaller endothermic animals have a greater surface area for their mass than larger ones (Figure 33.4). Therefore, smaller animals lose heat at a faster rate than larger animals and require more energy to maintain a constant internal temperature. This results in a smaller endothermic animal having a higher BMR, per body weight, than a larger endothermic animal.

Energy Requirements Related to Levels of Activity
The more active an animal is, the more energy is needed to maintain that activity, and the higher its BMR or SMR. The average daily rate of energy consumption is about two to four times an animal's BMR or SMR. Humans are more sedentary than most animals and have an average daily rate of only 1.5 times the BMR. The diet of an endothermic animal is determined by its BMR. For example, the type of grasses, leaves, or shrubs that an herbivore eats affects the number of calories that it takes in. The relative caloric content of herbivore foods, in descending order, is tall grasses > legumes > short grasses > forbs (any broad-leaved plant, not a grass) > subshrubs > annuals/biennials.

Energy Requirements Related to Environment
Animals adapt to extremes of temperature or food availability through torpor. Torpor is a process that leads to a decrease in activity and metabolism and allows animals to survive adverse conditions.
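Before looking at torpor in detail, the energy figures above can be tied together with a small worked sketch. This is our illustration: only the quoted numbers are used, and taking midpoints of the quoted ranges is our simplification.

```python
# Illustrative sketch only: combining the energy figures quoted above --
# BMR ranges, the 2-4x (or ~1.5x for humans) activity multiplier, and the
# kcal/g energy densities of the major fuels.
def daily_energy(bmr_kcal_per_day, activity_factor):
    """Average daily energy use, approximated as activity_factor * BMR."""
    return bmr_kcal_per_day * activity_factor

human_bmr = 1700                      # kcal/day, midpoint of the 1600-1800 range cited
need = daily_energy(human_bmr, 1.5)   # ~1.5x BMR for relatively sedentary humans
print(f"Estimated daily need: {need:.0f} kcal")   # ~2550 kcal

# Rough fuel equivalents (4.75 kcal/g is our midpoint of the quoted
# 4.5-5 kcal/g for carbohydrate and protein; fat is ~9 kcal/g):
for fuel, kcal_per_g in (("carbohydrate", 4.75), ("protein", 4.75), ("fat", 9.0)):
    print(f"  = {need / kcal_per_g:.0f} g of {fuel}")
```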
Torpor can be used by animals for long periods, such as entering a state of hibernation during the winter months, in which case it enables them to maintain a reduced body temperature. During hibernation, ground squirrels can achieve an abdominal temperature of 0°C (32°F), while a bear's internal temperature is maintained higher, at about 37°C (99°F). If torpor occurs during the summer months with high temperatures and little water, it is called estivation. Some desert animals use this to survive the harshest months of the year. Torpor can occur on a daily basis; this is seen in bats and hummingbirds. While endothermy is limited in smaller animals by the surface-to-volume ratio, some organisms can be smaller and still be endotherms because they employ daily torpor during the part of the day that is coldest. This allows them to conserve energy during the colder parts of the day, when they would otherwise consume more energy to maintain their body temperature.

Animal Body Planes and Cavities
A standing vertebrate animal can be divided by several planes. A sagittal plane divides the body into right and left portions. A midsagittal plane divides the body exactly in the middle, making two equal right and left halves. A frontal plane (also called a coronal plane) separates the front from the back. A transverse plane (or horizontal plane) divides the animal into upper and lower portions. This is sometimes called a cross section, and, if the transverse cut is at an angle, it is called an oblique plane. Figure 33.5 illustrates these planes on a goat (a four-legged animal) and a human being. Vertebrate animals have a number of defined body cavities, as illustrated in Figure 33.6. Two of these are major cavities that contain smaller cavities within them. The dorsal cavity contains the cranial and the vertebral (or spinal) cavities. The ventral cavity contains the thoracic cavity, which in turn contains the pleural cavity around the lungs and the pericardial cavity, which surrounds the heart. The ventral cavity also contains the abdominopelvic cavity, which can be separated into the abdominal and the pelvic cavities.

Career Connection
Physical Anthropologist
Physical anthropologists study the adaptation, variability, and evolution of human beings, plus their living and fossil relatives. They can work in a variety of settings, although most will have an academic appointment at a university, usually in an anthropology department or a biology, genetics, or zoology department. Non-academic positions are available in the automotive and aerospace industries where the focus is on human size, shape, and anatomy. Research by these professionals might range from studies of how the human body reacts to car crashes to exploring how to make seats more comfortable. Other non-academic positions can be obtained in museums of natural history, anthropology, archaeology, or science and technology. These positions involve educating students from grade school through graduate school. Physical anthropologists serve as education coordinators, collection managers, writers for museum publications, and as administrators. Zoos employ these professionals, especially if they have an expertise in primate biology; they work in collection management and captive breeding programs for endangered species. Forensic science utilizes physical anthropology expertise in identifying human and animal remains, assisting in determining the cause of death, and for expert testimony in trials.
33.2 Animal Primary Tissues
Learning Objectives
By the end of this section, you will be able to:
Describe epithelial tissues
Discuss the different types of connective tissues in animals
Describe three types of muscle tissues
Describe nervous tissue

The tissues of multicellular, complex animals are of four primary types: epithelial, connective, muscle, and nervous. Recall that tissues are groups of similar cells carrying out related functions. These tissues combine to form organs—like the skin or kidney—that have specific, specialized functions within the body. Organs are organized into organ systems to perform functions; examples include the circulatory system, which consists of the heart and blood vessels, and the digestive system, consisting of several organs, including the stomach, intestines, liver, and pancreas. Organ systems come together to create an entire organism.

Epithelial Tissues
Epithelial tissues cover the outside of organs and structures in the body and line the lumens of organs in a single layer or multiple layers of cells. The types of epithelia are classified by the shapes of the cells present and the number of layers of cells. Epithelia composed of a single layer of cells are called simple epithelia; epithelial tissue composed of multiple layers is called stratified epithelia. Table 33.2 summarizes the different types of epithelial tissues.

Different Types of Epithelial Tissues
Cell shape | Description | Location
squamous | flat, irregular round shape | simple: lung alveoli, capillaries; stratified: skin, mouth, vagina
cuboidal | cube shaped, central nucleus | glands, renal tubules
columnar | tall, narrow, nucleus toward base (simple); tall, narrow, nucleus along cell (pseudostratified) | simple: digestive tract; pseudostratified: respiratory tract
transitional | round, simple but appear stratified | urinary bladder
Table 33.2

Squamous Epithelia
Squamous epithelial cells are generally round, flat, and have a small, centrally located nucleus. The cell outline is slightly irregular, and cells fit together to form a covering or lining. When the cells are arranged in a single layer (simple epithelia), they facilitate diffusion in tissues, such as the areas of gas exchange in the lungs and the exchange of nutrients and waste at blood capillaries. Figure 33.7a illustrates a layer of squamous cells with their membranes joined together to form an epithelium. Figure 33.7b illustrates squamous epithelial cells arranged in stratified layers, where protection is needed on the body from outside abrasion and damage. This is called a stratified squamous epithelium and occurs in the skin and in tissues lining the mouth and vagina.

Cuboidal Epithelia
Cuboidal epithelial cells, shown in Figure 33.8, are cube-shaped with a single, central nucleus. They are most commonly found in a single layer, representing a simple epithelium, in glandular tissues throughout the body where they prepare and secrete glandular material. They are also found in the walls of tubules and in the ducts of the kidney and liver.

Columnar Epithelia
Columnar epithelial cells are taller than they are wide: they resemble a stack of columns in an epithelial layer, and are most commonly found in a single-layer arrangement. The nuclei of columnar epithelial cells in the digestive tract appear to be lined up at the base of the cells, as illustrated in Figure 33.9. These cells absorb material from the lumen of the digestive tract and prepare it for entry into the body through the circulatory and lymphatic systems.
Columnar epithelial cells lining the respiratory tract appear to be stratified. However, each cell is attached to the base membrane of the tissue and, therefore, they are simple tissues. The nuclei are arranged at different levels in the layer of cells, making it appear as though there is more than one layer, as seen in Figure 33.10. This is called pseudostratified columnar epithelia. This cellular covering has cilia at the apical, or free, surface of the cells. The cilia enhance the movement of mucus and trapped particles out of the respiratory tract, helping to protect the system from invasive microorganisms and harmful material that has been breathed into the body. Goblet cells are interspersed in some tissues (such as the lining of the trachea). The goblet cells contain mucus that traps irritants, which in the case of the trachea keeps these irritants from getting into the lungs.

Transitional Epithelia
Transitional or uroepithelial cells appear only in the urinary system, primarily in the bladder and ureter. These cells are arranged in a stratified layer, but they have the capability of appearing to pile up on top of each other in a relaxed, empty bladder, as illustrated in Figure 33.11. As the urinary bladder fills, the epithelial layer unfolds and expands to hold the volume of urine introduced into it. As the bladder fills, it expands and the lining becomes thinner. In other words, the tissue transitions from thick to thin.

Visual Connection
Which of the following statements about types of epithelial cells is false?
a. Simple columnar epithelial cells line the tissue of the lung.
b. Simple cuboidal epithelial cells are involved in the filtering of blood in the kidney.
c. Pseudostratified columnar epithelia occur in a single layer, but the arrangement of nuclei makes it appear that more than one layer is present.
d. Transitional epithelia change in thickness depending on how full the bladder is.

Connective Tissues
Connective tissues are made up of a matrix consisting of living cells and a non-living substance, called the ground substance. The ground substance is made of an organic substance (usually a protein) and an inorganic substance (usually a mineral or water). The principal cell of connective tissues is the fibroblast. This cell makes the fibers found in nearly all of the connective tissues. Fibroblasts are motile, able to carry out mitosis, and can synthesize whichever connective tissue is needed. Macrophages, lymphocytes, and, occasionally, leukocytes can be found in some of the tissues. Some tissues have specialized cells that are not found in the others. The matrix in connective tissues gives the tissue its density. When a connective tissue has a high concentration of cells or fibers, it has a proportionally less dense matrix. The organic portion, or protein fibers, found in connective tissues are either collagen, elastic, or reticular fibers. Collagen fibers provide strength to the tissue, preventing it from being torn or separated from the surrounding tissues. Elastic fibers are made of the protein elastin; this fiber can stretch to one and one half times its length and return to its original size and shape. Elastic fibers provide flexibility to the tissues. Reticular fibers are the third type of protein fiber found in connective tissues. This fiber consists of thin strands of collagen that form a network of fibers to support the tissue and other organs to which it is connected.
The various types of connective tissues, the types of cells and fibers they are made of, and sample locations of the tissues are summarized in Table 33.3.

Connective Tissues
Tissue | Cells | Fibers | Location
loose/areolar | fibroblasts, macrophages, some lymphocytes, some neutrophils | few: collagen, elastic, reticular | around blood vessels; anchors epithelia
dense, fibrous connective tissue | fibroblasts, macrophages | mostly collagen | irregular: skin; regular: tendons, ligaments
cartilage | chondrocytes, chondroblasts | hyaline: few collagen; fibrocartilage: large amount of collagen | shark skeleton, fetal bones, human ears, intervertebral discs
bone | osteoblasts, osteocytes, osteoclasts | some: collagen, elastic | vertebrate skeletons
adipose | adipocytes | few | adipose (fat)
blood | red blood cells, white blood cells | none | blood
Table 33.3

Loose/Areolar Connective Tissue
Loose connective tissue, also called areolar connective tissue, has a sampling of all of the components of a connective tissue. As illustrated in Figure 33.12, loose connective tissue has some fibroblasts; macrophages are present as well. Collagen fibers are relatively wide and stain a light pink, while elastic fibers are thin and stain dark blue to black. The space between the formed elements of the tissue is filled with the matrix. The material in the connective tissue gives it a loose consistency similar to a cotton ball that has been pulled apart. Loose connective tissue is found around every blood vessel and helps to keep the vessel in place. The tissue is also found around and between most body organs. In summary, areolar tissue is tough, yet flexible, and comprises membranes.

Fibrous Connective Tissue
Fibrous connective tissues contain large amounts of collagen fibers and few cells or matrix material. The fibers can be arranged irregularly or regularly, with the strands lined up in parallel. Irregularly arranged fibrous connective tissues are found in areas of the body where stress occurs from all directions, such as the dermis of the skin. Regular fibrous connective tissue, shown in Figure 33.13, is found in tendons (which connect muscles to bones) and ligaments (which connect bones to bones).

Cartilage
Cartilage is a connective tissue with a large amount of the matrix and variable amounts of fibers. The cells, called chondrocytes, make the matrix and fibers of the tissue. Chondrocytes are found in spaces within the tissue called lacunae. A cartilage with few collagen and elastic fibers is hyaline cartilage, illustrated in Figure 33.14. The lacunae are randomly scattered throughout the tissue, and the matrix takes on a milky or scrubbed appearance with routine histological stains. Sharks have cartilaginous skeletons, as does nearly the entire human skeleton during a specific pre-birth developmental stage. A remnant of this cartilage persists in the outer portion of the human nose. Hyaline cartilage is also found at the ends of long bones, reducing friction and cushioning the articulations of these bones. Elastic cartilage has a large amount of elastic fibers, giving it tremendous flexibility. The ears of most vertebrate animals contain this cartilage, as do portions of the larynx, or voice box. Fibrocartilage contains a large amount of collagen fibers, giving the tissue tremendous strength. Fibrocartilage comprises the intervertebral discs in vertebrate animals. Hyaline cartilage found in movable joints such as the knee and shoulder becomes damaged as a result of age or trauma.
Damaged hyaline cartilage is replaced by fibrocartilage and results in the joints becoming "stiff."

Bone
Bone, or osseous tissue, is a connective tissue that has a large amount of two different types of matrix material. The organic matrix is similar to the matrix material found in other connective tissues, including some amount of collagen and elastic fibers. This gives strength and flexibility to the tissue. The inorganic matrix consists of mineral salts—mostly calcium salts—that give the tissue hardness. Without adequate organic material in the matrix, the tissue breaks; without adequate inorganic material in the matrix, the tissue bends. There are three types of cells in bone: osteoblasts, osteocytes, and osteoclasts. Osteoblasts are active in making bone for growth and remodeling. Osteoblasts deposit bone material into the matrix and, after the matrix surrounds them, they continue to live, but in a reduced metabolic state as osteocytes. Osteocytes are found in lacunae of the bone. Osteoclasts are active in breaking down bone for bone remodeling, and they provide access to calcium stored in tissues. Osteoclasts are usually found on the surface of the tissue. Bone can be divided into two types: compact and spongy. Compact bone is found in the shaft (or diaphysis) of a long bone and the surface of the flat bones, while spongy bone is found in the end (or epiphysis) of a long bone. Compact bone is organized into subunits called osteons, as illustrated in Figure 33.15. A blood vessel and a nerve are found in the center of the structure within the Haversian canal, with radiating circles of lacunae around it known as lamellae. The wavy lines seen between the lacunae are microchannels called canaliculi; they connect the lacunae to aid diffusion between the cells. Spongy bone is made of tiny plates called trabeculae; these plates serve as struts to give the spongy bone strength. Over time, these plates can break, causing the bone to become less resilient. Bone tissue forms the internal skeleton of vertebrate animals, providing structure to the animal and points of attachment for tendons.

Adipose Tissue
Adipose tissue, or fat tissue, is considered a connective tissue even though it does not have fibroblasts or a real matrix and only has a few fibers. Adipose tissue is made up of cells called adipocytes that collect and store fat in the form of triglycerides for energy metabolism. Adipose tissues additionally serve as insulation to help maintain body temperatures, allowing animals to be endothermic, and they function as cushioning against damage to body organs. Under a microscope, adipose tissue cells appear empty due to the extraction of fat during the processing of the material for viewing, as seen in Figure 33.16. The thin lines in the image are the cell membranes, and the nuclei are the small, black dots at the edges of the cells.

Blood
Blood is considered a connective tissue because it has a matrix, as shown in Figure 33.17. The living cell types are red blood cells (RBC), also called erythrocytes, and white blood cells (WBC), also called leukocytes. The fluid portion of whole blood, its matrix, is commonly called plasma. The cell found in greatest abundance in blood is the erythrocyte. Erythrocytes are counted in millions in a blood sample: the average number of red blood cells in primates is 4.7 to 5.5 million cells per microliter. Erythrocytes are consistently the same size within a species, but vary in size between species.
For example, the average diameter of a primate red blood cell is 7.5 µm; a dog's is close at 7.0 µm, but a cat's RBC diameter is 5.9 µm. Sheep erythrocytes are even smaller at 4.6 µm. Mammalian erythrocytes lose their nuclei and mitochondria when they are released from the bone marrow where they are made. Fish, amphibian, and avian red blood cells maintain their nuclei and mitochondria throughout the cell's life. The principal job of an erythrocyte is to carry and deliver oxygen to the tissues. Leukocytes are the white blood cells found in the peripheral blood. Leukocytes are counted in the thousands in the blood with measurements expressed as ranges: primate counts range from 4,800 to 10,800 cells per µl, dogs from 5,600 to 19,200 cells per µl, cats from 8,000 to 25,000 cells per µl, cattle from 4,000 to 12,000 cells per µl, and pigs from 11,000 to 22,000 cells per µl. Lymphocytes function primarily in the immune response to foreign antigens or material. Different types of lymphocytes make antibodies tailored to the foreign antigens and control the production of those antibodies. Neutrophils are phagocytic cells and they participate in one of the early lines of defense against microbial invaders, aiding in the removal of bacteria that have entered the body. Another leukocyte that is found in the peripheral blood is the monocyte. Monocytes give rise to phagocytic macrophages that clean up dead and damaged cells in the body, whether they are foreign or from the host animal. Two additional leukocytes in the blood are eosinophils and basophils—both help to facilitate the inflammatory response. The slightly granular material among the cells is a cytoplasmic fragment of a cell in the bone marrow. This is called a platelet, or thrombocyte. Platelets participate in the stages leading up to coagulation of the blood to stop bleeding through damaged blood vessels. Blood has a number of functions, but primarily it transports material through the body to bring nutrients to cells and remove waste material from them.

Muscle Tissues
There are three types of muscle in animal bodies: smooth, skeletal, and cardiac. They differ by the presence or absence of striations or bands, the number and location of nuclei, whether they are voluntarily or involuntarily controlled, and their location within the body. Table 33.4 summarizes these differences.

Types of Muscles
Type of Muscle | Striations | Nuclei | Control | Location
smooth | no | single, in center | involuntary | visceral organs
skeletal | yes | many, at periphery | voluntary | skeletal muscles
cardiac | yes | single, in center | involuntary | heart
Table 33.4

Smooth Muscle
Smooth muscle does not have striations in its cells. It has a single, centrally located nucleus, as shown in Figure 33.18. Constriction of smooth muscle occurs under involuntary, autonomic nervous control and in response to local conditions in the tissues. Smooth muscle tissue is also called non-striated, as it lacks the banded appearance of skeletal and cardiac muscle. The walls of blood vessels, the tubes of the digestive system, and the tubes of the reproductive systems are composed mostly of smooth muscle.

Skeletal Muscle
Skeletal muscle has striations across its cells caused by the arrangement of the contractile proteins actin and myosin. These muscle cells are relatively long and have multiple nuclei along the edge of the cell. Skeletal muscle is under voluntary, somatic nervous system control and is found in the muscles that move bones. Figure 33.18 illustrates the histology of skeletal muscle.
Cardiac Muscle
Cardiac muscle, shown in Figure 33.18, is found only in the heart. Like skeletal muscle, it has cross striations in its cells, but cardiac muscle has a single, centrally located nucleus. Cardiac muscle is not under voluntary control but can be influenced by the autonomic nervous system to speed up or slow down. An added feature of cardiac muscle cells is a line that extends along the end of the cell as it abuts the next cardiac cell in the row. This line is called an intercalated disc: it assists in passing electrical impulses efficiently from one cell to the next and maintains the strong connection between neighboring cardiac cells.

Nervous Tissues
Nervous tissues are made of cells specialized to receive and transmit electrical impulses from specific areas of the body and to send them to specific locations in the body. The main cell of the nervous system is the neuron, illustrated in Figure 33.19. The large structure with a central nucleus is the cell body of the neuron. Projections from the cell body are either dendrites, specialized in receiving input, or a single axon, specialized in transmitting impulses. Some glial cells are also shown. Astrocytes regulate the chemical environment of the nerve cell, and oligodendrocytes insulate the axon so the electrical nerve impulse is transferred more efficiently. Other glial cells that are not shown support the nutritional and waste requirements of the neuron. Some of the glial cells are phagocytic and remove debris or damaged cells from the tissue. A nerve consists of neurons and glial cells.

Link to Learning
Click through the interactive review to learn more about epithelial tissues.

Career Connection
Pathologist
A pathologist is a medical doctor or veterinarian who has specialized in the laboratory detection of disease in animals, including humans. These professionals complete medical school education and follow it with an extensive post-graduate residency at a medical center. A pathologist may oversee clinical laboratories for the evaluation of body tissue and blood samples for the detection of disease or infection. They examine tissue specimens through a microscope to identify cancers and other diseases. Some pathologists perform autopsies to determine the cause of death and the progression of disease.

33.3 Homeostasis
Learning Objectives
By the end of this section, you will be able to:
Define homeostasis
Describe the factors affecting homeostasis
Discuss positive and negative feedback mechanisms used in homeostasis
Describe thermoregulation of endothermic and ectothermic animals

Animal organs and organ systems constantly adjust to internal and external changes through a process called homeostasis ("steady state"). These changes might be in the level of glucose or calcium in blood or in external temperatures. Homeostasis means to maintain dynamic equilibrium in the body. It is dynamic because it is constantly adjusting to the changes that the body's systems encounter. It is equilibrium because body functions are kept within specific ranges. Even an animal that is apparently inactive is maintaining this homeostatic equilibrium.

Homeostatic Process
The goal of homeostasis is the maintenance of equilibrium around a point or value called a set point. While there are normal fluctuations from the set point, the body's systems will usually attempt to go back to this point.
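What "going back to the set point" looks like can be sketched with a toy numerical model. This is our illustration, not from the text: the set point, gain, and starting value are made-up numbers chosen for the demonstration. At each step, an effector response opposes part of the current deviation, the negative-feedback idea developed below.

```python
# Toy model only (ours): a regulated variable nudged back toward its set
# point. SET_POINT, GAIN, and the starting value are made-up numbers.
SET_POINT = 90.0   # e.g., a hypothetical blood glucose target in mg/dL
GAIN = 0.3         # fraction of the deviation corrected per step (assumed)

def feedback_step(value):
    """Oppose the deviation: above the set point the response lowers the
    value; below it, the response raises it. The stimulus is reversed
    either way, the hallmark of negative feedback."""
    return value - GAIN * (value - SET_POINT)

value = 140.0      # perturbed starting value, e.g., glucose after a meal
for step in range(8):
    value = feedback_step(value)
    print(f"step {step + 1}: {value:.1f}")
# The deviation shrinks by 30% each step, decaying toward 90; a start
# below 90 would rise instead, mirroring the glucagon branch described below.
```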
A change in the internal or external environment is called a stimulus and is detected by a receptor; the response of the system is to adjust the deviation parameter toward the set point. For instance, if the body becomes too warm, adjustments are made to cool the animal. If the blood's glucose rises after a meal, adjustments are made to lower the blood glucose level by getting the nutrient into tissues that need it or to store it for later use.

Control of Homeostasis
When a change occurs in an animal's environment, an adjustment must be made. The receptor senses the change in the environment, then sends a signal to the control center (in most cases, the brain), which in turn generates a response that is signaled to an effector. The effector is a muscle (that contracts or relaxes) or a gland that secretes. Homeostasis is maintained by negative feedback loops. Positive feedback loops actually push the organism further out of homeostasis, but may be necessary for life to occur. Homeostasis is controlled by the nervous and endocrine systems of mammals.

Negative Feedback Mechanisms
Any homeostatic process that changes the direction of the stimulus is a negative feedback loop. It may either increase or decrease the stimulus, but the stimulus is not allowed to continue as it did before the receptor sensed it. In other words, if a level is too high, the body does something to bring it down, and conversely, if a level is too low, the body does something to make it go up. Hence the term negative feedback. An example is animal maintenance of blood glucose levels. When an animal has eaten, blood glucose levels rise. This is sensed by the nervous system. Specialized cells in the pancreas sense this, and the hormone insulin is released by the endocrine system. Insulin causes blood glucose levels to decrease, as would be expected in a negative feedback system, as illustrated in Figure 33.20. However, if an animal has not eaten and blood glucose levels decrease, this is sensed in another group of cells in the pancreas, and the hormone glucagon is released, causing glucose levels to increase. This is still a negative feedback loop, but not in the direction expected by the use of the term "negative." Another example of an increase as a result of the feedback loop is the control of blood calcium. If calcium levels decrease, specialized cells in the parathyroid gland sense this and release parathyroid hormone (PTH), causing an increased absorption of calcium through the intestines and kidneys and, possibly, the breakdown of bone in order to liberate calcium. The effects of PTH are to raise blood levels of the element. Negative feedback loops are the predominant mechanism used in homeostasis.

Positive Feedback Loop
A positive feedback loop maintains the direction of the stimulus, possibly accelerating it. Few examples of positive feedback loops exist in animal bodies, but one is found in the cascade of chemical reactions that result in blood clotting, or coagulation. As one clotting factor is activated, it activates the next factor in sequence until a fibrin clot is achieved. The direction is maintained, not changed, so this is positive feedback. Another example of positive feedback is uterine contractions during childbirth, as illustrated in Figure 33.21. The hormone oxytocin, made by the endocrine system, stimulates the contraction of the uterus. This produces pain sensed by the nervous system.
Instead of lowering the oxytocin and causing the pain to subside, more oxytocin is produced until the contractions are powerful enough to produce childbirth.

Visual Connection
State whether each of the following processes is regulated by a positive feedback loop or a negative feedback loop.
a. A person feels satiated after eating a large meal.
b. The blood has plenty of red blood cells. As a result, erythropoietin, a hormone that stimulates the production of new red blood cells, is no longer released from the kidney.

Set Point
It is possible to adjust a system's set point. When this happens, the feedback loop works to maintain the new setting. An example of this is blood pressure: over time, the normal or set point for blood pressure can increase as a result of continued increases in blood pressure. The body no longer recognizes the elevation as abnormal and no attempt is made to return to the lower set point. The result is the maintenance of an elevated blood pressure that can have harmful effects on the body. Medication can lower blood pressure and lower the set point in the system to a more healthy level. This process is called alteration of the set point in a feedback loop. Changes can be made in a group of body organ systems in order to maintain a set point in another system. This is called acclimatization. This occurs, for instance, when an animal migrates to a higher altitude than it is accustomed to. In order to adjust to the lower oxygen levels at the new altitude, the body increases the number of red blood cells circulating in the blood to ensure adequate oxygen delivery to the tissues. Another example of acclimatization is animals that have seasonal changes in their coats: a heavier coat in the winter ensures adequate heat retention, and a light coat in summer assists in keeping body temperature from rising to harmful levels.

Link to Learning
Feedback mechanisms can be understood in terms of driving a race car along a track: watch a short video lesson on positive and negative feedback loops.

Homeostasis: Thermoregulation
Body temperature affects body activities. Generally, as body temperature rises, enzyme activity rises as well. For every ten-degree centigrade rise in temperature, enzyme activity doubles, up to a point. Body proteins, including enzymes, begin to denature and lose their function with high heat (around 50°C for mammals). Enzyme activity will decrease by half for every ten-degree centigrade drop in temperature, to the point of freezing, with a few exceptions. Some fish can withstand freezing solid and return to normal with thawing.

Link to Learning
Watch this Discovery Channel video on thermoregulation to see illustrations of this process in a variety of animals.

Endotherms and Ectotherms
Animals can be divided into two groups: some maintain a constant body temperature in the face of differing environmental temperatures, while others have a body temperature that is the same as their environment and thus varies with the environment. Animals that do not control their body temperature are ectotherms. This group has been called cold-blooded, but the term may not apply to an animal in the desert with a very warm body temperature. Ectotherms rely on external temperatures to set their body temperatures, and ectotherms whose internal temperatures vary constantly are called poikilotherms. An animal that maintains a constant body temperature in the face of environmental changes is called a homeotherm.
Endotherms are animals that rely on internal sources for body temperature but which can exhibit extremes in temperature. These animals are able to maintain a level of activity at cooler temperatures, which an ectotherm cannot due to differing enzyme levels of activity. Heat can be exchanged between an animal and its environment through four mechanisms: radiation, evaporation, convection, and conduction ( Figure 33.22 ). Radiation is the emission of electromagnetic “heat” waves. Heat comes from the sun in this manner and radiates from dry skin the same way. Heat can be removed with liquid from a surface during evaporation. This occurs when a mammal sweats. Convection currents of air remove heat from the surface of dry skin as the air passes over it. Heat will be conducted from one surface to another during direct contact with the surfaces, such as an animal resting on a warm rock. Heat Conservation and Dissipation Animals conserve or dissipate heat in a variety of ways. In certain climates, endothermic animals have some form of insulation, such as fur, fat, feathers, or some combination thereof. Animals with thick fur or feathers create an insulating layer of air between their skin and internal organs. Polar bears and seals live and swim in a subfreezing environment and yet maintain a constant, warm body temperature. The arctic fox, for example, uses its fluffy tail as extra insulation when it curls up to sleep in cold weather. Mammals have a residual effect from shivering and increased muscle activity: arrector pili muscles cause “goose bumps,” making small hairs stand up when the individual is cold; this has the intended effect of increasing body temperature. Mammals use layers of fat to achieve the same end. Loss of significant amounts of body fat will compromise an individual’s ability to conserve heat. Endotherms use their circulatory systems to help maintain body temperature. Vasodilation brings more blood and heat to the body surface, facilitating radiation and evaporative heat loss, which helps to cool the body. Vasoconstriction reduces blood flow in peripheral blood vessels, forcing blood toward the core and the vital organs found there, and conserving heat. Some animals have adaptations to their circulatory system that enable them to transfer heat from arteries to veins, warming blood returning to the heart. This is called a countercurrent heat exchange; it prevents the cold venous blood from cooling the heart and other internal organs. This adaptation can be shut down in some animals to prevent overheating the internal organs. The countercurrent adaptation is found in many animals, including dolphins, sharks, bony fish, bees, and hummingbirds. In contrast, similar adaptations can help cool endotherms when needed, such as dolphin flukes and elephant ears. Some ectothermic animals use changes in their behavior to help regulate body temperature. For example, a desert ectotherm may simply seek cooler areas during the hottest part of the day to keep from getting too warm. The same animals may climb onto rocks to capture heat during a cold desert night. Some animals seek water to aid evaporation in cooling them, as seen with reptiles. Other ectotherms use group activity, such as that of bees warming a hive to survive winter. Many animals, especially mammals, use metabolic waste heat as a heat source. When muscles are contracted, most of the energy from the ATP used in muscle actions is wasted energy that translates into heat.
Severe cold elicits a shivering reflex that generates heat for the body. Many species also have a type of adipose tissue called brown fat that specializes in generating heat. Neural Control of Thermoregulation The nervous system is important to thermoregulation , as illustrated in Figure 33.22 . The processes of homeostasis and temperature control are centered in the hypothalamus of the advanced animal brain. Visual Connection When bacteria are destroyed by leukocytes, pyrogens are released into the blood. Pyrogens reset the body’s thermostat to a higher temperature, resulting in fever. How might pyrogens cause the body temperature to rise? The hypothalamus maintains the set point for body temperature through reflexes that cause vasodilation and sweating when the body is too warm, or vasoconstriction and shivering when the body is too cold. It responds to chemicals from the body. When a bacterium is destroyed by phagocytic leukocytes, chemicals called endogenous pyrogens are released into the blood. These pyrogens circulate to the hypothalamus and reset the thermostat. This allows the body’s temperature to increase in what is commonly called a fever. An increase in body temperature causes iron to be conserved, which reduces the availability of a nutrient that bacteria need. An increase in body heat also increases the activity of the animal’s enzymes and protective cells while inhibiting the enzymes and activity of the invading microorganisms. Finally, heat itself may also kill the pathogen. A fever, once thought to be a complication of infection, is now understood to be a normal defense mechanism.
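The hypothalamic thermostat described above is a negative feedback loop around an adjustable set point, and pyrogens work by raising the set point rather than by heating the body directly. The toy simulation below is a sketch under those assumptions; the gain, step count, and the 37 °C and 39 °C values are illustrative only.

# Toy negative-feedback thermostat: effectors always push body temperature
# back toward the set point; pyrogens "reset the thermostat" upward (fever).
def step(body_temp, set_point, gain=0.5):
    error = set_point - body_temp
    # too cold -> vasoconstriction and shivering warm the body;
    # too warm -> vasodilation and sweating cool it
    return body_temp + gain * error

body, set_point = 37.0, 37.0
set_point = 39.0               # endogenous pyrogens raise the set point
for _ in range(8):
    body = step(body, set_point)
print(round(body, 2))          # ~38.99: temperature converges on the new,
                               # higher set point -- a fever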
biology
Chapter Outline 13.1 Chromosomal Theory and Genetic Linkage 13.2 Chromosomal Basis of Inherited Disorders Introduction The gene is the physical unit of inheritance, and genes are arranged in a linear order on chromosomes. The behaviors and interactions of chromosomes during meiosis explain, at a cellular level, the patterns of inheritance that we observe in populations. Genetic disorders involving alterations in chromosome number or structure may have dramatic effects and can prevent a fertilized egg from developing altogether.
[ { "answer": { "ans_choice": 0, "ans_text": "in more males than females" }, "bloom": null, "hl_context": "Several errors in sex chromosome number have been characterized . <hl> Individuals with three X chromosomes , called triplo-X , are phenotypically female but express developmental delays and reduced fertility . <hl> <hl> The XXY genotype , corresponding to one type of Klinefelter syndrome , corresponds to phenotypically male individuals with small testes , enlarged breasts , and reduced body hair . <hl> More complex types of Klinefelter syndrome exist in which the individual has as many as five X chromosomes . In all types , every X chromosome except one undergoes inactivation to compensate for the excess genetic dosage . This can be seen as several Barr bodies in each cell nucleus . Turner syndrome , characterized as an X0 genotype ( i . e . , only a single sex chromosome ) , corresponds to a phenotypically female individual with short stature , webbed skin in the neck region , hearing and cardiac impairments , and sterility . As shown in Figure 13.4 , by using recombination frequency to predict genetic distance , the relative order of genes on chromosome 2 could be inferred . The values shown represent map distances in centimorgans ( cM ) , which correspond to recombination frequencies ( in percent ) . Therefore , the genes for body color and wing size were 65.5 − 48.5 = 17 cM apart , indicating that the maternal and paternal alleles for these genes recombine in 17 percent of offspring , on average . To construct a chromosome map , Sturtevant assumed that genes were ordered serially on threadlike chromosomes . He also assumed that the incidence of recombination between two homologous chromosomes could occur with equal likelihood anywhere along the length of the chromosome . Operating under these assumptions , Sturtevant postulated that alleles that were far apart on a chromosome were more likely to dissociate during meiosis simply because there was a larger region over which recombination could occur . Conversely , alleles that were close to each other on the chromosome were likely to be inherited together . The average number of crossovers between two alleles — that is , their recombination frequency — correlated with their genetic distance from each other , relative to the locations of other genes on that chromosome . Considering the example cross between AaBb and aabb above , the frequency of recombination could be calculated as 50/1000 = 0.05 . That is , the likelihood of a crossover between genes A / a and B / b was 0.05 , or 5 percent . Such a result would indicate that the genes were definitively linked , but that they were far enough apart for crossovers to occasionally occur . Sturtevant divided his genetic map into map units , or centimorgans ( cM ) , in which a recombination frequency of 0.01 corresponds to 1 cM . By representing alleles in a linear map , Sturtevant suggested that genes can range from being perfectly linked ( recombination frequency = 0 ) to being perfectly unlinked ( recombination frequency = 0.5 ) when genes are on different chromosomes or genes are separated very far apart on the same chromosome . Perfectly unlinked genes correspond to the frequencies predicted by Mendel to assort independently in a dihybrid cross . A recombination frequency of 0.5 indicates that 50 percent of offspring are recombinants and the other 50 percent are parental types . That is , every type of allele combination is represented with equal frequency . 
This representation allowed Sturtevant to additively calculate distances between several genes on the same chromosome . However , as the genetic distances approached 0.50 , his predictions became less accurate because it was not clear whether the genes were very far apart on the same chromosome or on different chromosomes . In 1931 , Barbara McClintock and Harriet Creighton demonstrated the crossover of homologous chromosomes in corn plants . Weeks later , homologous recombination in Drosophila was demonstrated microscopically by Curt Stern . <hl> Stern observed several X-linked phenotypes that were associated with a structurally unusual and dissimilar X chromosome pair in which one X was missing a small terminal segment , and the other X was fused to a piece of the Y chromosome . <hl> By crossing flies , observing their offspring , and then visualizing the offspring ’ s chromosomes , Stern demonstrated that every time the offspring allele combination deviated from either of the parental combinations , there was a corresponding exchange of an X chromosome segment . Using mutant flies with structurally distinct X chromosomes was the key to observing the products of recombination because DNA sequencing and other molecular tools were not yet available . It is now known that homologous chromosomes regularly exchange segments in meiosis by reciprocally breaking and rejoining their DNA at precise locations .", "hl_sentences": "Individuals with three X chromosomes , called triplo-X , are phenotypically female but express developmental delays and reduced fertility . The XXY genotype , corresponding to one type of Klinefelter syndrome , corresponds to phenotypically male individuals with small testes , enlarged breasts , and reduced body hair . Stern observed several X-linked phenotypes that were associated with a structurally unusual and dissimilar X chromosome pair in which one X was missing a small terminal segment , and the other X was fused to a piece of the Y chromosome .", "question": { "cloze_format": "X-linked recessive traits in humans (or in Drosophila) are observed ________.", "normal_format": "Where are X-linked recessive traits in humans (or in Drosophila) observed?", "question_choices": [ "in more males than females", "in more females than males", "in males and females equally", "in different distributions depending on the trait" ], "question_id": "fs-id1596619", "question_text": "X-linked recessive traits in humans (or in Drosophila) are observed ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "chiasmata" }, "bloom": null, "hl_context": "<hl> In 1909 , Frans Janssen observed chiasmata — the point at which chromatids are in contact with each other and may exchange segments — prior to the first division of meiosis . <hl> <hl> He suggested that alleles become unlinked and chromosomes physically exchange segments . <hl> As chromosomes condensed and paired with their homologs , they appeared to interact at distinct points . Janssen suggested that these points corresponded to regions in which chromosome segments were exchanged . It is now known that the pairing and interaction between homologous chromosomes , known as synapsis , does more than simply organize the homologs for migration to separate daughter cells . When synapsed , homologous chromosomes undergo reciprocal physical exchanges at their arms in a process called homologous recombination , or more simply , “ crossing over . 
”", "hl_sentences": "In 1909 , Frans Janssen observed chiasmata — the point at which chromatids are in contact with each other and may exchange segments — prior to the first division of meiosis . He suggested that alleles become unlinked and chromosomes physically exchange segments .", "question": { "cloze_format": "The first suggestion that chromosomes may physically exchange segments came from the microscopic identification of ________.", "normal_format": "The first suggestion that chromosomes may physically exchange segments came from which microscopic identification?", "question_choices": [ "synapsis", "sister chromatids", "chiasmata", "alleles" ], "question_id": "fs-id2165395", "question_text": "The first suggestion that chromosomes may physically exchange segments came from the microscopic identification of ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "0.50" }, "bloom": "2", "hl_context": "As shown in Figure 13.4 , by using recombination frequency to predict genetic distance , the relative order of genes on chromosome 2 could be inferred . The values shown represent map distances in centimorgans ( cM ) , which correspond to recombination frequencies ( in percent ) . Therefore , the genes for body color and wing size were 65.5 − 48.5 = 17 cM apart , indicating that the maternal and paternal alleles for these genes recombine in 17 percent of offspring , on average . To construct a chromosome map , Sturtevant assumed that genes were ordered serially on threadlike chromosomes . He also assumed that the incidence of recombination between two homologous chromosomes could occur with equal likelihood anywhere along the length of the chromosome . Operating under these assumptions , Sturtevant postulated that alleles that were far apart on a chromosome were more likely to dissociate during meiosis simply because there was a larger region over which recombination could occur . Conversely , alleles that were close to each other on the chromosome were likely to be inherited together . The average number of crossovers between two alleles — that is , their recombination frequency — correlated with their genetic distance from each other , relative to the locations of other genes on that chromosome . Considering the example cross between AaBb and aabb above , the frequency of recombination could be calculated as 50/1000 = 0.05 . That is , the likelihood of a crossover between genes A / a and B / b was 0.05 , or 5 percent . Such a result would indicate that the genes were definitively linked , but that they were far enough apart for crossovers to occasionally occur . Sturtevant divided his genetic map into map units , or centimorgans ( cM ) , in which a recombination frequency of 0.01 corresponds to 1 cM . <hl> By representing alleles in a linear map , Sturtevant suggested that genes can range from being perfectly linked ( recombination frequency = 0 ) to being perfectly unlinked ( recombination frequency = 0.5 ) when genes are on different chromosomes or genes are separated very far apart on the same chromosome . <hl> <hl> Perfectly unlinked genes correspond to the frequencies predicted by Mendel to assort independently in a dihybrid cross . <hl> <hl> A recombination frequency of 0.5 indicates that 50 percent of offspring are recombinants and the other 50 percent are parental types . <hl> That is , every type of allele combination is represented with equal frequency . 
This representation allowed Sturtevant to additively calculate distances between several genes on the same chromosome . However , as the genetic distances approached 0.50 , his predictions became less accurate because it was not clear whether the genes were very far apart on the same chromosome or on different chromosomes . In 1931 , Barbara McClintock and Harriet Creighton demonstrated the crossover of homologous chromosomes in corn plants . Weeks later , homologous recombination in Drosophila was demonstrated microscopically by Curt Stern . Stern observed several X-linked phenotypes that were associated with a structurally unusual and dissimilar X chromosome pair in which one X was missing a small terminal segment , and the other X was fused to a piece of the Y chromosome . By crossing flies , observing their offspring , and then visualizing the offspring ’ s chromosomes , Stern demonstrated that every time the offspring allele combination deviated from either of the parental combinations , there was a corresponding exchange of an X chromosome segment . Using mutant flies with structurally distinct X chromosomes was the key to observing the products of recombination because DNA sequencing and other molecular tools were not yet available . It is now known that homologous chromosomes regularly exchange segments in meiosis by reciprocally breaking and rejoining their DNA at precise locations .", "hl_sentences": "By representing alleles in a linear map , Sturtevant suggested that genes can range from being perfectly linked ( recombination frequency = 0 ) to being perfectly unlinked ( recombination frequency = 0.5 ) when genes are on different chromosomes or genes are separated very far apart on the same chromosome . Perfectly unlinked genes correspond to the frequencies predicted by Mendel to assort independently in a dihybrid cross . A recombination frequency of 0.5 indicates that 50 percent of offspring are recombinants and the other 50 percent are parental types .", "question": { "cloze_format": "The recombination frequency ___ corresponds to independent assortment and the absence of linkage.", "normal_format": "Which recombination frequency corresponds to independent assortment and the absence of linkage?", "question_choices": [ "0", "0.25", "0.50", "0.75" ], "question_id": "fs-id2914799", "question_text": "Which recombination frequency corresponds to independent assortment and the absence of linkage?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "0" }, "bloom": "2", "hl_context": "As shown in Figure 13.4 , by using recombination frequency to predict genetic distance , the relative order of genes on chromosome 2 could be inferred . The values shown represent map distances in centimorgans ( cM ) , which correspond to recombination frequencies ( in percent ) . Therefore , the genes for body color and wing size were 65.5 − 48.5 = 17 cM apart , indicating that the maternal and paternal alleles for these genes recombine in 17 percent of offspring , on average . To construct a chromosome map , Sturtevant assumed that genes were ordered serially on threadlike chromosomes . He also assumed that the incidence of recombination between two homologous chromosomes could occur with equal likelihood anywhere along the length of the chromosome . 
Operating under these assumptions , Sturtevant postulated that alleles that were far apart on a chromosome were more likely to dissociate during meiosis simply because there was a larger region over which recombination could occur . Conversely , alleles that were close to each other on the chromosome were likely to be inherited together . The average number of crossovers between two alleles — that is , their recombination frequency — correlated with their genetic distance from each other , relative to the locations of other genes on that chromosome . Considering the example cross between AaBb and aabb above , the frequency of recombination could be calculated as 50/1000 = 0.05 . That is , the likelihood of a crossover between genes A / a and B / b was 0.05 , or 5 percent . Such a result would indicate that the genes were definitively linked , but that they were far enough apart for crossovers to occasionally occur . Sturtevant divided his genetic map into map units , or centimorgans ( cM ) , in which a recombination frequency of 0.01 corresponds to 1 cM . <hl> By representing alleles in a linear map , Sturtevant suggested that genes can range from being perfectly linked ( recombination frequency = 0 ) to being perfectly unlinked ( recombination frequency = 0.5 ) when genes are on different chromosomes or genes are separated very far apart on the same chromosome . <hl> Perfectly unlinked genes correspond to the frequencies predicted by Mendel to assort independently in a dihybrid cross . A recombination frequency of 0.5 indicates that 50 percent of offspring are recombinants and the other 50 percent are parental types . That is , every type of allele combination is represented with equal frequency . This representation allowed Sturtevant to additively calculate distances between several genes on the same chromosome . However , as the genetic distances approached 0.50 , his predictions became less accurate because it was not clear whether the genes were very far apart on the same chromosome or on different chromosomes . In 1931 , Barbara McClintock and Harriet Creighton demonstrated the crossover of homologous chromosomes in corn plants . Weeks later , homologous recombination in Drosophila was demonstrated microscopically by Curt Stern . Stern observed several X-linked phenotypes that were associated with a structurally unusual and dissimilar X chromosome pair in which one X was missing a small terminal segment , and the other X was fused to a piece of the Y chromosome . By crossing flies , observing their offspring , and then visualizing the offspring ’ s chromosomes , Stern demonstrated that every time the offspring allele combination deviated from either of the parental combinations , there was a corresponding exchange of an X chromosome segment . Using mutant flies with structurally distinct X chromosomes was the key to observing the products of recombination because DNA sequencing and other molecular tools were not yet available . 
It is now known that homologous chromosomes regularly exchange segments in meiosis by reciprocally breaking and rejoining their DNA at precise locations .", "hl_sentences": "By representing alleles in a linear map , Sturtevant suggested that genes can range from being perfectly linked ( recombination frequency = 0 ) to being perfectly unlinked ( recombination frequency = 0.5 ) when genes are on different chromosomes or genes are separated very far apart on the same chromosome .", "question": { "cloze_format": "The recombination frequency ___ corresponds to perfect linkage and violates the law of independent assortment.", "normal_format": "Which recombination frequency corresponds to perfect linkage and violates the law of independent assortment?", "question_choices": [ "0", "0.25", "0.50", "0.75" ], "question_id": "fs-id2310144", "question_text": "Which recombination frequency corresponds to perfect linkage and violates the law of independent assortment?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "13q12" }, "bloom": "1", "hl_context": "In a given species , chromosomes can be identified by their number , size , centromere position , and banding pattern . In a human karyotype , autosomes or “ body chromosomes ” ( all of the non – sex chromosomes ) are generally organized in approximate order of size from largest ( chromosome 1 ) to smallest ( chromosome 22 ) . The X and Y chromosomes are not autosomes . However , chromosome 21 is actually shorter than chromosome 22 . This was discovered after the naming of Down syndrome as trisomy 21 , reflecting how this disease results from possessing one extra chromosome 21 ( three total ) . Not wanting to change the name of this important disease , chromosome 21 retained its numbering , despite describing the shortest set of chromosomes . <hl> The chromosome “ arms ” projecting from either end of the centromere may be designated as short or long , depending on their relative lengths . <hl> <hl> The short arm is abbreviated p ( for “ petite ” ) , whereas the long arm is abbreviated q ( because it follows “ p ” alphabetically ) . <hl> Each arm is further subdivided and denoted by a number . Using this naming system , locations on chromosomes can be described consistently in the scientific literature . Career Connection Geneticists Use Karyograms to Identify Chromosomal Aberrations", "hl_sentences": "The chromosome “ arms ” projecting from either end of the centromere may be designated as short or long , depending on their relative lengths . The short arm is abbreviated p ( for “ petite ” ) , whereas the long arm is abbreviated q ( because it follows “ p ” alphabetically ) .", "question": { "cloze_format": "The code that describes position 12 on the long arm of chromosome 13 is ___.", "normal_format": "Which of the following codes describes position 12 on the long arm of chromosome 13?", "question_choices": [ "13p12", "13q12", "12p13", "12q13" ], "question_id": "fs-id2348752", "question_text": "Which of the following codes describes position 12 on the long arm of chromosome 13?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "larger yields" }, "bloom": null, "hl_context": "<hl> An individual with more than the correct number of chromosome sets ( two for diploid species ) is called polyploid . <hl> <hl> For instance , fertilization of an abnormal diploid egg with a normal haploid sperm would yield a triploid zygote . 
<hl> Polyploid animals are extremely rare , with only a few examples among the flatworms , crustaceans , amphibians , fish , and lizards . Polyploid animals are sterile because meiosis cannot proceed normally and instead produces mostly aneuploid daughter cells that cannot yield viable zygotes . Rarely , polyploid animals can reproduce asexually by haplodiploidy , in which an unfertilized egg divides mitotically to produce offspring . <hl> In contrast , polyploidy is very common in the plant kingdom , and polyploid plants tend to be larger and more robust than euploids of their species ( Figure 13.8 ) . <hl>", "hl_sentences": "An individual with more than the correct number of chromosome sets ( two for diploid species ) is called polyploid . For instance , fertilization of an abnormal diploid egg with a normal haploid sperm would yield a triploid zygote . In contrast , polyploidy is very common in the plant kingdom , and polyploid plants tend to be larger and more robust than euploids of their species ( Figure 13.8 ) .", "question": { "cloze_format": "In agriculture, polyploid crops (like coffee, strawberries, or bananas) tend to produce ________.", "normal_format": "In agriculture,what do polyploid crops (like coffee, strawberries, or bananas) tend to produce?", "question_choices": [ "more uniformity", "more variety", "larger yields", "smaller yields" ], "question_id": "fs-id18031230", "question_text": "In agriculture, polyploid crops (like coffee, strawberries, or bananas) tend to produce ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "Klinefelter syndrome" }, "bloom": null, "hl_context": "Several errors in sex chromosome number have been characterized . Individuals with three X chromosomes , called triplo-X , are phenotypically female but express developmental delays and reduced fertility . <hl> The XXY genotype , corresponding to one type of Klinefelter syndrome , corresponds to phenotypically male individuals with small testes , enlarged breasts , and reduced body hair . <hl> More complex types of Klinefelter syndrome exist in which the individual has as many as five X chromosomes . In all types , every X chromosome except one undergoes inactivation to compensate for the excess genetic dosage . This can be seen as several Barr bodies in each cell nucleus . Turner syndrome , characterized as an X0 genotype ( i . e . , only a single sex chromosome ) , corresponds to a phenotypically female individual with short stature , webbed skin in the neck region , hearing and cardiac impairments , and sterility .", "hl_sentences": "The XXY genotype , corresponding to one type of Klinefelter syndrome , corresponds to phenotypically male individuals with small testes , enlarged breasts , and reduced body hair .", "question": { "cloze_format": "The genotype XXY corresponds to the ___.", "normal_format": "What does the genotype XXY corresponds to?", "question_choices": [ "Klinefelter syndrome", "Turner syndrome", "Triplo-X", "Jacob syndrome" ], "question_id": "fs-id2336690", "question_text": "The genotype XXY corresponds to" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "X inactivation" }, "bloom": null, "hl_context": "<hl> An individual carrying an abnormal number of X chromosomes will inactivate all but one X chromosome in each of her cells . <hl> <hl> However , even inactivated X chromosomes continue to express a few genes , and X chromosomes must reactivate for the proper maturation of female ovaries . 
<hl> <hl> As a result , X-chromosomal abnormalities are typically associated with mild mental and physical defects , as well as sterility . <hl> If the X chromosome is absent altogether , the individual will not develop in utero .", "hl_sentences": "An individual carrying an abnormal number of X chromosomes will inactivate all but one X chromosome in each of her cells . However , even inactivated X chromosomes continue to express a few genes , and X chromosomes must reactivate for the proper maturation of female ovaries . As a result , X-chromosomal abnormalities are typically associated with mild mental and physical defects , as well as sterility .", "question": { "cloze_format": "Abnormalities in the number of X chromosomes tends to have milder phenotypic effects than the same abnormalities in autosomes because of ________.", "normal_format": "Because of what abnormalities in the number of X chromosomes tend to have milder phenotypic effects than the same abnormalities in autosomes?", "question_choices": [ "deletions", "nonhomologous recombination", "synapsis", "X inactivation" ], "question_id": "fs-id1256328", "question_text": "Abnormalities in the number of X chromosomes tends to have milder phenotypic effects than the same abnormalities in autosomes because of ________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "centromere" }, "bloom": null, "hl_context": "<hl> An inversion can be pericentric and include the centromere , or paracentric and occur outside of the centromere ( Figure 13.11 ) . <hl> <hl> A pericentric inversion that is asymmetric about the centromere can change the relative lengths of the chromosome arms , making these inversions easily identifiable . <hl>", "hl_sentences": "An inversion can be pericentric and include the centromere , or paracentric and occur outside of the centromere ( Figure 13.11 ) . A pericentric inversion that is asymmetric about the centromere can change the relative lengths of the chromosome arms , making these inversions easily identifiable .", "question": { "cloze_format": "By definition, a pericentric inversion includes the ________.", "normal_format": "What is a pericentric inversion includes by definition?", "question_choices": [ "centromere", "chiasma", "telomere", "synapse" ], "question_id": "fs-id1441754", "question_text": "By definition, a pericentric inversion includes the ________." }, "references_are_paraphrase": null } ]
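Each record above bundles an answer object, the source passage (hl_context, with <hl> markers around the sentences that support the answer), the supporting sentences themselves (hl_sentences), and a question object with its choices. A minimal sketch of reading such records in Python, assuming they have been saved as a JSON array in a file named questions.json (a hypothetical filename, not given in the source):

import json

# Load the question records (assumes they were saved as a JSON array in
# "questions.json" -- an illustrative filename).
with open("questions.json") as f:
    records = json.load(f)

for rec in records:
    print(rec["question"]["question_text"])
    print("  choices:", rec["question"]["question_choices"])
    print("  answer: ", rec["answer"]["ans_text"])
    # rec["hl_context"] holds the passage; the sentences between <hl>
    # markers are the ones that justify the answer.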
13
13.1 Chromosomal Theory and Genetic Linkage Learning Objectives By the end of this section, you will be able to: Discuss Sutton’s Chromosomal Theory of Inheritance Describe genetic linkage Explain the process of homologous recombination, or crossing over Describe how chromosome maps are created Calculate the distances between three genes on a chromosome using a three-point test cross Long before chromosomes were visualized under a microscope, the father of modern genetics, Gregor Mendel, began studying heredity in 1843. With the improvement of microscopic techniques during the late 1800s, cell biologists could stain and visualize subcellular structures with dyes and observe their actions during cell division and meiosis. With each mitotic division, chromosomes replicated, condensed from an amorphous (no constant shape) nuclear mass into distinct X-shaped bodies (pairs of identical sister chromatids), and migrated to separate cellular poles. Chromosomal Theory of Inheritance The speculation that chromosomes might be the key to understanding heredity led several scientists to examine Mendel’s publications and re-evaluate his model in terms of the behavior of chromosomes during mitosis and meiosis. In 1902, Theodor Boveri observed that proper embryonic development of sea urchins does not occur unless chromosomes are present. That same year, Walter Sutton observed the separation of chromosomes into daughter cells during meiosis ( Figure 13.2 ). Together, these observations led to the development of the Chromosomal Theory of Inheritance , which identified chromosomes as the genetic material responsible for Mendelian inheritance. The Chromosomal Theory of Inheritance was consistent with Mendel’s laws and was supported by the following observations: During meiosis, homologous chromosome pairs migrate as discrete structures that are independent of other chromosome pairs. The sorting of chromosomes from each homologous pair into pre-gametes appears to be random. Each parent synthesizes gametes that contain only half of their chromosomal complement. Even though male and female gametes (sperm and egg) differ in size and morphology, they have the same number of chromosomes, suggesting equal genetic contributions from each parent. The gametic chromosomes combine during fertilization to produce offspring with the same chromosome number as their parents. Despite compelling correlations between the behavior of chromosomes during meiosis and Mendel’s abstract laws, the Chromosomal Theory of Inheritance was proposed long before there was any direct evidence that traits were carried on chromosomes. Critics pointed out that individuals had far more independently segregating traits than they had chromosomes. It was only after several years of carrying out crosses with the fruit fly, Drosophila melanogaster , that Thomas Hunt Morgan provided experimental evidence to support the Chromosomal Theory of Inheritance. Genetic Linkage and Distances Mendel’s work suggested that traits are inherited independently of each other. Morgan identified a 1:1 correspondence between a segregating trait and the X chromosome, suggesting that the random segregation of chromosomes was the physical basis of Mendel’s model. This also demonstrated that linked genes disrupt Mendel’s predicted outcomes. The fact that each chromosome can carry many linked genes explains how individuals can have many more traits than they have chromosomes. 
However, observations by researchers in Morgan’s laboratory suggested that alleles positioned on the same chromosome were not always inherited together. During meiosis, linked genes somehow became unlinked. Homologous Recombination In 1909, Frans Janssen observed chiasmata—the point at which chromatids are in contact with each other and may exchange segments—prior to the first division of meiosis. He suggested that alleles become unlinked and chromosomes physically exchange segments. As chromosomes condensed and paired with their homologs, they appeared to interact at distinct points. Janssen suggested that these points corresponded to regions in which chromosome segments were exchanged. It is now known that the pairing and interaction between homologous chromosomes, known as synapsis, does more than simply organize the homologs for migration to separate daughter cells. When synapsed, homologous chromosomes undergo reciprocal physical exchanges at their arms in a process called homologous recombination , or more simply, “crossing over.” To better understand the type of experimental results that researchers were obtaining at this time, consider a heterozygous individual that inherited dominant maternal alleles for two genes on the same chromosome (such as AB ) and two recessive paternal alleles for those same genes (such as ab ). If the genes are linked, one would expect this individual to produce gametes that are either AB or ab with a 1:1 ratio. If the genes are unlinked, the individual should produce AB , Ab , aB , and ab gametes with equal frequencies, according to the Mendelian concept of independent assortment. Because they correspond to new allele combinations, the genotypes Ab and aB are nonparental types that result from homologous recombination during meiosis. Parental types are progeny that exhibit the same allelic combination as their parents. Morgan and his colleagues, however, found that when such heterozygous individuals were test crossed to a homozygous recessive parent ( AaBb × aabb ), both parental and nonparental cases occurred. For example, 950 offspring might be recovered that were either AaBb or aabb , but 50 offspring would also be obtained that were either Aabb or aaBb . These results suggested that linkage occurred most often, but a significant minority of offspring were the products of recombination. Visual Connection In a test cross for two characteristics such as the one shown here, can the predicted frequency of recombinant offspring be 60 percent? Why or why not? Genetic Maps Janssen did not have the technology to demonstrate crossing over so it remained an abstract idea that was not widely accepted. Scientists thought chiasmata were a variation on synapsis and could not understand how chromosomes could break and rejoin. Yet, the data were clear that linkage did not always occur. Ultimately, it took a young undergraduate student and an “all-nighter” to mathematically elucidate the problem of linkage and recombination. In 1913, Alfred Sturtevant, a student in Morgan’s laboratory, gathered results from researchers in the laboratory, and took them home one night to mull them over. By the next morning, he had created the first “chromosome map,” a linear representation of gene order and relative distance on a chromosome ( Figure 13.4 ). Visual Connection Which of the following statements is true? Recombination of the body color and red/cinnabar eye alleles will occur more frequently than recombination of the alleles for wing length and aristae length. 
Recombination of the body color and aristae length alleles will occur more frequently than recombination of red/brown eye alleles and the aristae length alleles. Recombination of the gray/black body color and long/short aristae alleles will not occur. Recombination of the red/brown eye and long/short aristae alleles will occur more frequently than recombination of the alleles for wing length and body color. As shown in Figure 13.4 , by using recombination frequency to predict genetic distance, the relative order of genes on chromosome 2 could be inferred. The values shown represent map distances in centimorgans (cM), which correspond to recombination frequencies (in percent). Therefore, the genes for body color and wing size were 65.5 − 48.5 = 17 cM apart, indicating that the maternal and paternal alleles for these genes recombine in 17 percent of offspring, on average. To construct a chromosome map, Sturtevant assumed that genes were ordered serially on threadlike chromosomes. He also assumed that the incidence of recombination between two homologous chromosomes could occur with equal likelihood anywhere along the length of the chromosome. Operating under these assumptions, Sturtevant postulated that alleles that were far apart on a chromosome were more likely to dissociate during meiosis simply because there was a larger region over which recombination could occur. Conversely, alleles that were close to each other on the chromosome were likely to be inherited together. The average number of crossovers between two alleles—that is, their recombination frequency —correlated with their genetic distance from each other, relative to the locations of other genes on that chromosome. Considering the example cross between AaBb and aabb above, the frequency of recombination could be calculated as 50/1000 = 0.05. That is, the likelihood of a crossover between genes A/a and B/b was 0.05, or 5 percent. Such a result would indicate that the genes were definitively linked, but that they were far enough apart for crossovers to occasionally occur. Sturtevant divided his genetic map into map units, or centimorgans (cM) , in which a recombination frequency of 0.01 corresponds to 1 cM. By representing alleles in a linear map, Sturtevant suggested that genes can range from being perfectly linked (recombination frequency = 0) to being perfectly unlinked (recombination frequency = 0.5) when genes are on different chromosomes or genes are separated very far apart on the same chromosome. Perfectly unlinked genes correspond to the frequencies predicted by Mendel to assort independently in a dihybrid cross. A recombination frequency of 0.5 indicates that 50 percent of offspring are recombinants and the other 50 percent are parental types. That is, every type of allele combination is represented with equal frequency. This representation allowed Sturtevant to additively calculate distances between several genes on the same chromosome. However, as the genetic distances approached 0.50, his predictions became less accurate because it was not clear whether the genes were very far apart on the same chromosome or on different chromosomes. In 1931, Barbara McClintock and Harriet Creighton demonstrated the crossover of homologous chromosomes in corn plants. Weeks later, homologous recombination in Drosophila was demonstrated microscopically by Curt Stern. 
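Before turning to Stern's microscopy, note that Sturtevant's map arithmetic described above is easy to reproduce. The sketch below recomputes the 5 percent recombination frequency from the AaBb × aabb test cross and the 17 cM body color/wing size distance; only the variable names are invented.

# Recombination frequency from the AaBb x aabb test cross described earlier:
# 950 parental offspring (AaBb or aabb) and 50 recombinants (Aabb or aaBb).
recombinants, total = 50, 1000
rf = recombinants / total       # 0.05, i.e., a 5 percent crossover likelihood
map_units = rf * 100            # 1 cM per 0.01 recombination frequency -> 5 cM

# Map distance between two loci from their positions on Sturtevant's map of
# chromosome 2: body color at 48.5 cM, wing size at 65.5 cM.
distance = abs(65.5 - 48.5)     # 17.0 cM: the alleles recombine in about
print(rf, map_units, distance)  # 17 percent of offspring, on average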
Stern observed several X-linked phenotypes that were associated with a structurally unusual and dissimilar X chromosome pair in which one X was missing a small terminal segment, and the other X was fused to a piece of the Y chromosome. By crossing flies, observing their offspring, and then visualizing the offspring’s chromosomes, Stern demonstrated that every time the offspring allele combination deviated from either of the parental combinations, there was a corresponding exchange of an X chromosome segment. Using mutant flies with structurally distinct X chromosomes was the key to observing the products of recombination because DNA sequencing and other molecular tools were not yet available. It is now known that homologous chromosomes regularly exchange segments in meiosis by reciprocally breaking and rejoining their DNA at precise locations. Link to Learning Review Sturtevant’s process to create a genetic map on the basis of recombination frequencies here . Mendel’s Mapped Traits Homologous recombination is a common genetic process, yet Mendel never observed it. Had he investigated both linked and unlinked genes, it would have been much more difficult for him to create a unified model of his data on the basis of probabilistic calculations. Researchers who have since mapped the seven traits investigated by Mendel onto the seven chromosomes of the pea plant genome have confirmed that all of the genes he examined are either on separate chromosomes or are sufficiently far apart as to be statistically unlinked. Some have suggested that Mendel was enormously lucky to select only unlinked genes, whereas others question whether Mendel discarded any data suggesting linkage. In any case, Mendel consistently observed independent assortment because he examined genes that were effectively unlinked. 13.2 Chromosomal Basis of Inherited Disorders Learning Objectives By the end of this section, you will be able to: Describe how a karyogram is created Explain how nondisjunction leads to disorders in chromosome number Compare disorders caused by aneuploidy Describe how errors in chromosome structure occur through inversions and translocations Inherited disorders can arise when chromosomes behave abnormally during meiosis. Chromosome disorders can be divided into two categories: abnormalities in chromosome number and chromosomal structural rearrangements. Because even small segments of chromosomes can span many genes, chromosomal disorders are characteristically dramatic and often fatal. Identification of Chromosomes The isolation and microscopic observation of chromosomes forms the basis of cytogenetics and is the primary method by which clinicians detect chromosomal abnormalities in humans. A karyotype is the number and appearance of chromosomes, and includes their length, banding pattern, and centromere position. To obtain a view of an individual’s karyotype, cytologists photograph the chromosomes and then cut and paste each chromosome into a chart, or karyogram , also known as an ideogram ( Figure 13.5 ). In a given species, chromosomes can be identified by their number, size, centromere position, and banding pattern. In a human karyotype, autosomes or “body chromosomes” (all of the non–sex chromosomes) are generally organized in approximate order of size from largest (chromosome 1) to smallest (chromosome 22). The X and Y chromosomes are not autosomes. However, chromosome 21 is actually shorter than chromosome 22. 
This was discovered after the naming of Down syndrome as trisomy 21, reflecting how this disease results from possessing one extra chromosome 21 (three total). Not wanting to change the name of this important disease, chromosome 21 retained its numbering, despite describing the shortest set of chromosomes. The chromosome “arms” projecting from either end of the centromere may be designated as short or long, depending on their relative lengths. The short arm is abbreviated p (for “petite”), whereas the long arm is abbreviated q (because it follows “p” alphabetically). Each arm is further subdivided and denoted by a number. Using this naming system, locations on chromosomes can be described consistently in the scientific literature. Career Connection Geneticists Use Karyograms to Identify Chromosomal Aberrations Although Mendel is referred to as the “father of modern genetics,” he performed his experiments with none of the tools that the geneticists of today routinely employ. One such powerful cytological technique is karyotyping, a method in which traits characterized by chromosomal abnormalities can be identified from a single cell. To observe an individual’s karyotype, a person’s cells (like white blood cells) are first collected from a blood sample or other tissue. In the laboratory, the isolated cells are stimulated to begin actively dividing. A chemical called colchicine is then applied to cells to arrest condensed chromosomes in metaphase. Cells are then made to swell using a hypotonic solution so the chromosomes spread apart. Finally, the sample is preserved in a fixative and applied to a slide. The geneticist then stains chromosomes with one of several dyes to better visualize the distinct and reproducible banding patterns of each chromosome pair. Following staining, the chromosomes are viewed using bright-field microscopy. A common stain choice is the Giemsa stain. Giemsa staining results in approximately 400–800 bands (of tightly coiled DNA and condensed proteins) arranged along all of the 23 chromosome pairs; an experienced geneticist can identify each band. In addition to the banding patterns, chromosomes are further identified on the basis of size and centromere location. To obtain the classic depiction of the karyotype in which homologous pairs of chromosomes are aligned in numerical order from longest to shortest, the geneticist obtains a digital image, identifies each chromosome, and manually arranges the chromosomes into this pattern ( Figure 13.5 ). At its most basic, the karyogram may reveal genetic abnormalities in which an individual has too many or too few chromosomes per cell. Examples of this are Down Syndrome, which is identified by a third copy of chromosome 21, and Turner Syndrome, which is characterized by the presence of only one X chromosome in women instead of the normal two. Geneticists can also identify large deletions or insertions of DNA. For instance, Jacobsen Syndrome—which involves distinctive facial features as well as heart and bleeding defects—is identified by a deletion on chromosome 11. Finally, the karyotype can pinpoint translocations , which occur when a segment of genetic material breaks from one chromosome and reattaches to another chromosome or to a different part of the same chromosome. Translocations are implicated in certain cancers, including chronic myelogenous leukemia. During Mendel’s lifetime, inheritance was an abstract concept that could only be inferred by performing crosses and observing the traits expressed by offspring. 
By observing a karyogram, today’s geneticists can actually visualize the chromosomal composition of an individual to confirm or predict genetic abnormalities in offspring, even before birth. Disorders in Chromosome Number Of all of the chromosomal disorders, abnormalities in chromosome number are the most obviously identifiable from a karyogram. Disorders of chromosome number include the duplication or loss of entire chromosomes, as well as changes in the number of complete sets of chromosomes. They are caused by nondisjunction , which occurs when pairs of homologous chromosomes or sister chromatids fail to separate during meiosis. Misaligned or incomplete synapsis, or a dysfunction of the spindle apparatus that facilitates chromosome migration, can cause nondisjunction. The risk of nondisjunction occurring increases with the age of the parents. Nondisjunction can occur during either meiosis I or II, with differing results ( Figure 13.6 ). If homologous chromosomes fail to separate during meiosis I, the result is two gametes that lack that particular chromosome and two gametes with two copies of the chromosome. If sister chromatids fail to separate during meiosis II, the result is one gamete that lacks that chromosome, two normal gametes with one copy of the chromosome, and one gamete with two copies of the chromosome. Visual Connection Which of the following statements about nondisjunction is true? Nondisjunction only results in gametes with n+1 or n–1 chromosomes. Nondisjunction occurring during meiosis II results in 50 percent normal gametes. Nondisjunction during meiosis I results in 50 percent normal gametes. Nondisjunction always results in four different kinds of gametes. Aneuploidy An individual with the appropriate number of chromosomes for their species is called euploid ; in humans, euploidy corresponds to 22 pairs of autosomes and one pair of sex chromosomes. An individual with an error in chromosome number is described as aneuploid , a term that includes monosomy (loss of one chromosome) or trisomy (gain of an extraneous chromosome). Monosomic human zygotes missing any one copy of an autosome invariably fail to develop to birth because they lack essential genes. This underscores the importance of “gene dosage” in humans. Most autosomal trisomies also fail to develop to birth; however, duplications of some of the smaller chromosomes (13, 15, 18, 21, or 22) can result in offspring that survive for several weeks to many years. Trisomic individuals suffer from a different type of genetic imbalance: an excess in gene dose. Individuals with an extra chromosome may synthesize an abundance of the gene products encoded by that chromosome. This extra dose (150 percent) of specific genes can lead to a number of functional challenges and often precludes development. The most common trisomy among viable births is that of chromosome 21, which corresponds to Down Syndrome. Individuals with this inherited disorder are characterized by short stature and stunted digits, facial distinctions that include a broad skull and large tongue, and significant developmental delays. The incidence of Down syndrome is correlated with maternal age; older women are more likely to become pregnant with fetuses carrying the trisomy 21 genotype ( Figure 13.7 ). Link to Learning Visualize the addition of a chromosome that leads to Down syndrome in this video simulation . Polyploidy An individual with more than the correct number of chromosome sets (two for diploid species) is called polyploid . 
For instance, fertilization of an abnormal diploid egg with a normal haploid sperm would yield a triploid zygote. Polyploid animals are extremely rare, with only a few examples among the flatworms, crustaceans, amphibians, fish, and lizards. Polyploid animals are sterile because meiosis cannot proceed normally and instead produces mostly aneuploid daughter cells that cannot yield viable zygotes. Rarely, polyploid animals can reproduce asexually by haplodiploidy, in which an unfertilized egg divides mitotically to produce offspring. In contrast, polyploidy is very common in the plant kingdom, and polyploid plants tend to be larger and more robust than euploids of their species ( Figure 13.8 ). Sex Chromosome Nondisjunction in Humans Humans display dramatic deleterious effects with autosomal trisomies and monosomies. Therefore, it may seem counterintuitive that human females and males can function normally, despite carrying different numbers of the X chromosome. Rather than a gain or loss of autosomes, variations in the number of sex chromosomes are associated with relatively mild effects. In part, this occurs because of a molecular process called X inactivation . Early in development, when female mammalian embryos consist of just a few thousand cells (relative to trillions in the newborn), one X chromosome in each cell inactivates by tightly condensing into a quiescent (dormant) structure called a Barr body. The chance that an X chromosome (maternally or paternally derived) is inactivated in each cell is random, but once the inactivation occurs, all cells derived from that one will have the same inactive X chromosome or Barr body. By this process, females compensate for their double genetic dose of X chromosome. In so-called “tortoiseshell” cats, embryonic X inactivation is observed as color variegation ( Figure 13.9 ). Females that are heterozygous for an X-linked coat color gene will express one of two different coat colors over different regions of their body, corresponding to whichever X chromosome is inactivated in the embryonic cell progenitor of that region. An individual carrying an abnormal number of X chromosomes will inactivate all but one X chromosome in each of her cells. However, even inactivated X chromosomes continue to express a few genes, and X chromosomes must reactivate for the proper maturation of female ovaries. As a result, X-chromosomal abnormalities are typically associated with mild mental and physical defects, as well as sterility. If the X chromosome is absent altogether, the individual will not develop in utero. Several errors in sex chromosome number have been characterized. Individuals with three X chromosomes, called triplo-X, are phenotypically female but express developmental delays and reduced fertility. The XXY genotype, corresponding to one type of Klinefelter syndrome, corresponds to phenotypically male individuals with small testes, enlarged breasts, and reduced body hair. More complex types of Klinefelter syndrome exist in which the individual has as many as five X chromosomes. In all types, every X chromosome except one undergoes inactivation to compensate for the excess genetic dosage. This can be seen as several Barr bodies in each cell nucleus. Turner syndrome, characterized as an X0 genotype (i.e., only a single sex chromosome), corresponds to a phenotypically female individual with short stature, webbed skin in the neck region, hearing and cardiac impairments, and sterility. 
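The dosage-compensation rule just described (every X chromosome except one is inactivated) means the number of Barr bodies visible in a nucleus is simply the X count minus one. A minimal sketch, assuming sex-chromosome complements are written as short strings; the function name is illustrative.

# Barr bodies = number of X chromosomes minus one, since all but one X
# inactivate. "0" in a karyotype string marks a missing sex chromosome.
def barr_bodies(sex_chromosomes):
    x_count = sex_chromosomes.upper().count("X")
    return max(x_count - 1, 0)

for karyotype in ("XX", "XY", "XXX", "XXY", "X0"):
    print(karyotype, barr_bodies(karyotype))
# XX 1 | XY 0 | XXX 2 (triplo-X) | XXY 1 (one type of Klinefelter syndrome)
# X0 0 (Turner syndrome); absence of any X at all is lethal in utero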
Duplications and Deletions In addition to the loss or gain of an entire chromosome, a chromosomal segment may be duplicated or lost. Duplications and deletions often produce offspring that survive but exhibit physical and mental abnormalities. Duplicated chromosomal segments may fuse to existing chromosomes or may be free in the nucleus. Cri-du-chat (from the French for “cry of the cat”) is a syndrome associated with nervous system abnormalities and identifiable physical features that result from a deletion of most of 5p (the small arm of chromosome 5) ( Figure 13.10 ). Infants with this genotype emit a characteristic high-pitched cry on which the disorder’s name is based. Chromosomal Structural Rearrangements Cytologists have characterized numerous structural rearrangements in chromosomes, but chromosome inversions and translocations are the most common. Both are identified during meiosis by the adaptive pairing of rearranged chromosomes with their former homologs to maintain appropriate gene alignment. If the genes carried on two homologs are not oriented correctly, a recombination event could result in the loss of genes from one chromosome and the gain of genes on the other. This would produce aneuploid gametes. Chromosome Inversions A chromosome inversion is the detachment, 180° rotation, and reinsertion of part of a chromosome. Inversions may occur in nature as a result of mechanical shear, or from the action of transposable elements (special DNA sequences capable of facilitating the rearrangement of chromosome segments with the help of enzymes that cut and paste DNA sequences). Unless they disrupt a gene sequence, inversions only change the orientation of genes and are likely to have more mild effects than aneuploid errors. However, altered gene orientation can result in functional changes because regulators of gene expression could be moved out of position with respect to their targets, causing aberrant levels of gene products. An inversion can be pericentric and include the centromere, or paracentric and occur outside of the centromere ( Figure 13.11 ). A pericentric inversion that is asymmetric about the centromere can change the relative lengths of the chromosome arms, making these inversions easily identifiable. When one homologous chromosome undergoes an inversion but the other does not, the individual is described as an inversion heterozygote. To maintain point-for-point synapsis during meiosis, one homolog must form a loop, and the other homolog must mold around it. Although this topology can ensure that the genes are correctly aligned, it also forces the homologs to stretch and can be associated with regions of imprecise synapsis ( Figure 13.12 ). Evolution Connection The Chromosome 18 Inversion Not all structural rearrangements of chromosomes produce nonviable, impaired, or infertile individuals. In rare instances, such a change can result in the evolution of a new species. In fact, a pericentric inversion in chromosome 18 appears to have contributed to the evolution of humans. This inversion is not present in our closest genetic relatives, the chimpanzees. Humans and chimpanzees differ cytogenetically by pericentric inversions on several chromosomes and by the fusion of two separate chromosomes in chimpanzees that correspond to chromosome two in humans. The pericentric chromosome 18 inversion is believed to have occurred in early humans following their divergence from a common ancestor with chimpanzees approximately five million years ago. 
Researchers characterizing this inversion have suggested that approximately 19,000 nucleotide bases were duplicated on 18p, and the duplicated region inverted and reinserted on chromosome 18 of an ancestral human. A comparison of human and chimpanzee genes in the region of this inversion indicates that two genes— ROCK1 and USP14 —that are adjacent on chimpanzee chromosome 17 (which corresponds to human chromosome 18) are more distantly positioned on human chromosome 18. This suggests that one of the inversion breakpoints occurred between these two genes. Interestingly, humans and chimpanzees express USP14 at distinct levels in specific cell types, including cortical cells and fibroblasts. Perhaps the chromosome 18 inversion in an ancestral human repositioned specific genes and reset their expression levels in a useful way. Because both ROCK1 and USP14 encode cellular enzymes, a change in their expression could alter cellular function. It is not known how this inversion contributed to hominid evolution, but it appears to be a significant factor in the divergence of humans from other primates. 1 1 Violaine Goidts et al., “Segmental duplication associated with the human-specific inversion of chromosome 18: a further example of the impact of segmental duplications on karyotype and genome evolution in primates,” Human Genetics 115 (2004): 116–122. Translocations A translocation occurs when a segment of a chromosome dissociates and reattaches to a different, nonhomologous chromosome. Translocations can be benign or have devastating effects depending on how the positions of genes are altered with respect to regulatory sequences. Notably, specific translocations have been associated with several cancers and with schizophrenia. Reciprocal translocations result from the exchange of chromosome segments between two nonhomologous chromosomes such that there is no gain or loss of genetic information ( Figure 13.13 ).
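Because inversions and reciprocal translocations are purely structural operations on a linear sequence, their mechanics can be mimicked with simple string manipulation. The Python sketch below is a toy model only; the one-letter "genes" and the breakpoint coordinates are made-up examples, not real loci.

def invert(chrom, start, end):
    """Detach chrom[start:end], rotate it 180 degrees (reverse it), and reinsert it."""
    return chrom[:start] + chrom[start:end][::-1] + chrom[end:]

def reciprocal_translocation(chrom1, chrom2, break1, break2):
    """Exchange terminal segments between two nonhomologous chromosomes."""
    return chrom1[:break1] + chrom2[break2:], chrom2[:break2] + chrom1[break1:]

print(invert("ABCDEFGH", 2, 6))                          # ABFEDCGH: segment CDEF is reversed in place
print(reciprocal_translocation("ABCDE", "vwxyz", 3, 2))  # ('ABCxyz', 'vwDE')

Note that every letter from the inputs appears exactly once in the outputs, mirroring the text's point that an inversion changes only gene orientation and that a reciprocal translocation exchanges material with no gain or loss of genetic information.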
american_government
Summary 12.1 The Design and Evolution of the Presidency The delegates at the Constitutional Convention proposed creating the office of the president and debated many forms the role might take. The president is elected for a maximum of two four-year terms and can be impeached by Congress for wrongdoing and removed from office. The presidency and presidential power, especially war powers, have expanded greatly over the last two centuries, often with the willing assistance of the legislative branch. Executive privilege and executive orders are two of the presidency’s powerful tools. During the last several decades, historical events and new technologies such as radio, television, and the Internet have further enhanced the stature of the presidency. 12.2 The Presidential Election Process The position of president of the United States was created during the Constitutional Convention. Within a generation of Washington’s administration, powerful political parties had overtaken the nominating power of state legislatures and created their own systems for selecting candidates. At first, party leaders kept tight control over the selection of candidates via the convention process. By the start of the twentieth century, however, primary and caucus voting had brought the power to select candidates directly to the people, and the once-important conventions became rubber-stamping events. 12.3 Organizing to Govern It can be difficult for a new president to come to terms with both the powers of the office and the limitations of those powers. Successful presidents assume their role ready to make a smooth transition and to learn to work within the complex governmental system to fill vacant positions in the cabinet and courts, many of which require Senate confirmation. Governing also means efficiently laying out a political agenda and reacting appropriately to unexpected events. A new president has limited time to get things done and must take action with the political wind at his or her back. 12.4 The Public Presidency Despite the obvious fact that the president is the head of state, the U.S. Constitution actually empowers the occupant of the White House with very little authority. Apart from the president’s war powers, the office holder’s real advantage is the ability to speak to the nation with one voice. Technological changes in the twentieth century have greatly expanded the power of the presidential bully pulpit. The twentieth century also saw a string of more public first ladies. Women like Eleanor Roosevelt and Lady Bird Johnson greatly expanded the power of the first lady’s role, although first ladies who have undertaken more nontraditional roles have encountered significant criticism. 12.5 Presidential Governance: Direct Presidential Action While the power of the presidency is typically checked by the other two branches of government, presidents have the unencumbered power to pardon those convicted of federal crimes and to issue executive orders, which don’t require congressional approval but lack the permanence of laws passed by Congress. In matters concerning foreign policy, presidents have at their disposal the executive agreement, which is a much easier way for two countries to come to terms than a treaty, which requires Senate ratification, though an executive agreement is also much narrower in scope. Presidents use various means to attempt to drive public opinion and effect political change. But history has shown that they are limited in their ability to drive public opinion.
Favorable conditions can help a president move policies forward. These conditions include party control of Congress and the arrival of crises such as war or economic decline. But as some presidencies have shown, even the most favorable conditions don’t guarantee success.
Chapter Outline 12.1 The Design and Evolution of the Presidency 12.2 The Presidential Election Process 12.3 Organizing to Govern 12.4 The Public Presidency 12.5 Presidential Governance: Direct Presidential Action Introduction The presidency is the most visible position in the U.S. government ( Figure 12.1 ). During the Constitutional Convention of 1787, delegates accepted the need to empower a relatively strong and vigorous chief executive. But they also wanted this chief executive to be bound by checks from the other branches of the federal government as well as by the Constitution itself. Over time, the power of the presidency has grown in response to circumstances and challenges. However, to this day, a president must still work with the other branches to be most effective. Unilateral actions, in which the president acts alone on important and consequential matters, such as President Barack Obama’s strategy on the Iran nuclear deal, are bound to be controversial and suggest potentially serious problems within the federal government. Effective presidents, especially in peacetime, are those who work with the other branches through persuasion and compromise to achieve policy objectives. What are the powers, opportunities, and limitations of the presidency? How does the chief executive lead in our contemporary political system? What guides his or her actions, including unilateral actions? If it is most effective to work with others to get things done, how does the president do so? What can get in the way of this goal? This chapter answers these and other questions about the nation’s most visible leader.
[ { "answer": { "ans_choice": 1, "ans_text": "B" }, "bloom": null, "hl_context": "Debate and discussion continued throughout the summer . Delegates eventually settled upon a single executive , but they remained at a loss for how to select that person . Pennsylvania ’ s James Wilson , who had triumphed on the issue of a single executive , at first proposed the direct election of the president . When delegates rejected that idea , he responded with the suggestion that electors , chosen throughout the nation , should select the executive . <hl> Over time , Wilson ’ s idea gained ground with delegates who were uneasy at the idea of an election by the legislature , which presented the opportunity for intrigue and corruption . <hl> The idea of a shorter term of service combined with eligibility for reelection also became more attractive to delegates . The framers of the Constitution struggled to find the proper balance between giving the president the power to perform the job on one hand and opening the way for a president to abuse power and act like a monarch on the other .", "hl_sentences": "Over time , Wilson ’ s idea gained ground with delegates who were uneasy at the idea of an election by the legislature , which presented the opportunity for intrigue and corruption .", "question": { "cloze_format": "Many at the Continental Congress were skeptical of allowing presidents to be directly elected by the legislature because ________.", "normal_format": "Many at the Continental Congress were skeptical of allowing presidents to be directly elected by the legislature because of what?", "question_choices": [ "they were worried about giving the legislature too much power", "they feared the opportunities created for corruption", "they knew the weaknesses of an electoral college", "they worried about subjecting the commander-in-chief to public scrutiny" ], "question_id": "fs-id1172503227391", "question_text": "Many at the Continental Congress were skeptical of allowing presidents to be directly elected by the legislature because ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 1, "ans_text": "He appointed the heads of various federal departments as his own advisors." }, "bloom": null, "hl_context": "<hl> No sooner had the presidency been established than the occupants of the office , starting with George Washington , began acting in ways that expanded both its formal and informal powers . <hl> <hl> For example , Washington established a cabinet or group of advisors to help him administer his duties , consisting of the most senior appointed officers of the executive branch . <hl> <hl> Today , the heads of the fifteen executive departments serve as the president ’ s advisers . <hl> 10 And , in 1793 , when it became important for the United States to take a stand in the evolving European conflicts between France and other European powers , especially Great Britain , Washington issued a neutrality proclamation that extended his rights as diplomat-in-chief far more broadly than had at first been conceived .", "hl_sentences": "No sooner had the presidency been established than the occupants of the office , starting with George Washington , began acting in ways that expanded both its formal and informal powers . For example , Washington established a cabinet or group of advisors to help him administer his duties , consisting of the most senior appointed officers of the executive branch . 
Today , the heads of the fifteen executive departments serve as the president ’ s advisers .", "question": { "cloze_format": "A way in which George Washington expanded the power of the presidency is that ___ .", "normal_format": "Which of the following is a way George Washington expanded the power of the presidency?", "question_choices": [ "He refused to run again after serving two terms.", "He appointed the heads of various federal departments as his own advisors.", "He worked with the Senate to draft treaties with foreign countries.", "He submitted his neutrality proclamation to the Senate for approval." ], "question_id": "fs-id1172503589275", "question_text": "Which of the following is a way George Washington expanded the power of the presidency?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "D" }, "bloom": null, "hl_context": "The framers of the Constitution made no provision in the document for the establishment of political parties . Indeed , parties were not necessary to select the first president , since George Washington ran unopposed . <hl> Following the first election of Washington , the political party system gained steam and power in the electoral process , creating separate nomination and general election stages . <hl> <hl> Early on , the power to nominate presidents for office bubbled up from the party operatives in the various state legislatures and toward what was known as the king caucus or congressional caucus . <hl> The caucus or large-scale gathering was made up of legislators in the Congress who met informally to decide on nominees from their respective parties . <hl> In somewhat of a countervailing trend in the general election stage of the process , by the presidential election of 1824 , many states were using popular elections to choose their electors . <hl> <hl> This became important in that election when Andrew Jackson won the popular vote and the largest number of electors , but the presidency was given to John Quincy Adams instead . <hl> Out of the frustration of Jackson ’ s supporters emerged a powerful two-party system that took control of the selection process . 17", "hl_sentences": "Following the first election of Washington , the political party system gained steam and power in the electoral process , creating separate nomination and general election stages . Early on , the power to nominate presidents for office bubbled up from the party operatives in the various state legislatures and toward what was known as the king caucus or congressional caucus . In somewhat of a countervailing trend in the general election stage of the process , by the presidential election of 1824 , many states were using popular elections to choose their electors . This became important in that election when Andrew Jackson won the popular vote and the largest number of electors , but the presidency was given to John Quincy Adams instead .", "question": { "cloze_format": "The election of 1824 changed the way presidents were selected because ___.", "normal_format": "How did the election of 1824 change the way presidents were selected?", "question_choices": [ "Following this election, presidents were directly elected.", "Jackson’s supporters decided to create a device for challenging the Electoral College.", "The election convinced many that the parties must adopt the king caucus as the primary method for selecting presidents.", "The selection of the candidate with fewer electoral votes triggered the rise of party control over nominations." 
], "question_id": "fs-id1172504673317", "question_text": "How did the election of 1824 change the way presidents were selected?" }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "Sometimes candidates unpopular with the party leadership reach the top." }, "bloom": null, "hl_context": "<hl> Finally , the process of going straight to the people through primaries and caucuses has created some opportunities for party outsiders to rise . <hl> <hl> Neither Ronald Reagan nor Bill Clinton was especially popular with the party leadership of the Republicans or the Democrats ( respectively ) at the outset . <hl> The outsider phenomenon has been most clearly demonstrated , however , in the 2016 presidential nominating process , as those distrusted by the party establishment , such as Senator Ted Cruz and Donald Trump , who never before held political office , raced ahead of party favorites like Jeb Bush early in the primary process ( Figure 12.6 ) . The rise of the primary system during the Progressive Era came at the cost of party regulars ’ control of the process of candidate selection . Some party primaries even allow registered independents or members of the opposite party to vote . Even so , the process tends to attract the party faithful at the expense of independent voters , who often hold the key to victory in the fall contest . Thus , candidates who want to succeed in the primary contests seek to align themselves with committed partisans , who are often at the ideological extreme . Those who survive the primaries in this way have to moderate their image as they enter the general election if they hope to succeed among the rest of the party adherents and the uncommitted .", "hl_sentences": "Finally , the process of going straight to the people through primaries and caucuses has created some opportunities for party outsiders to rise . Neither Ronald Reagan nor Bill Clinton was especially popular with the party leadership of the Republicans or the Democrats ( respectively ) at the outset .", "question": { "cloze_format": "An unintended consequence of the rise of the primary and caucus system is that ___ .", "normal_format": "Which of the following is an unintended consequence of the rise of the primary and caucus system?", "question_choices": [ "Sometimes candidates unpopular with the party leadership reach the top.", "Campaigns have become shorter and more expensive.", "The conventions have become more powerful than the voters.", "Often incumbent presidents will fail to be renominated by the party." ], "question_id": "fs-id1172504636725", "question_text": "Which of the following is an unintended consequence of the rise of the primary and caucus system?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "must be confirmed by the Senate" }, "bloom": null, "hl_context": "<hl> Once the new president has been inaugurated and can officially nominate people to fill cabinet positions , the Senate confirms or rejects these nominations . <hl> At times , though rarely , cabinet nominations have failed to be confirmed or have even been withdrawn because of questions raised about the past behavior of the nominee . 23 Prominent examples of such withdrawals were Senator John Tower for defense secretary ( George H . W . 
Bush ) and Zoe Baird for attorney general ( Bill Clinton ): Senator Tower ’ s indiscretions involving alcohol and womanizing led to concerns about his fitness to head the military and his rejection by the Senate , 24 whereas Zoe Baird faced controversy and withdrew her nomination when it was revealed , through what the press dubbed “ Nannygate , ” that house staff of hers were undocumented workers . However , these cases are rare exceptions to the rule , which is to give approval to the nominees that the president wishes to have in the cabinet . Other possible candidates for cabinet posts may decline to be considered for a number of reasons , from the reduction in pay that can accompany entrance into public life to unwillingness to be subjected to the vetting process that accompanies a nomination .", "hl_sentences": "Once the new president has been inaugurated and can officially nominate people to fill cabinet positions , the Senate confirms or rejects these nominations .", "question": { "cloze_format": "The people who make up the modern president’s cabinet are the heads of the major federal departments and ________.", "normal_format": "The people who make up the modern president’s cabinet are the heads of the major federal departments and what else?", "question_choices": [ "must be confirmed by the Senate", "once in office are subject to dismissal by the Senate", "serve two-year terms", "are selected based on the rules of patronage" ], "question_id": "fs-id1172504462185", "question_text": "The people who make up the modern president’s cabinet are the heads of the major federal departments and ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "<hl> Finally , the president ’ s job included nominating federal judges , including Supreme Court justices , as well as other federal officials , and making appointments to fill military and diplomatic posts . <hl> The number of judicial appointments and nominations of other federal officials is great . In recent decades , two-term presidents have nominated well over three hundred federal judges while in office . <hl> 8 Moreover , new presidents nominate close to five hundred top officials to their Executive Office of the President , key agencies ( such as the Department of Justice ) , and regulatory commissions ( such as the Federal Reserve Board ) , whose appointments require Senate majority approval . <hl> 9", "hl_sentences": "Finally , the president ’ s job included nominating federal judges , including Supreme Court justices , as well as other federal officials , and making appointments to fill military and diplomatic posts . 8 Moreover , new presidents nominate close to five hundred top officials to their Executive Office of the President , key agencies ( such as the Department of Justice ) , and regulatory commissions ( such as the Federal Reserve Board ) , whose appointments require Senate majority approval .", "question": { "cloze_format": "A very challenging job for new presidents is to ______.", "normal_format": "Which is a very challenging job for new presidents?", "question_choices": [ "move into the White House", "prepare and deliver their first State of the Union address", "nominate and gain confirmation for their cabinet and hundreds of other officials", "prepare their first executive budget" ], "question_id": "fs-id1172504402334", "question_text": "A very challenging job for new presidents is to ______."
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Theodore Roosevelt came to the presidency in 1901 , at a time when movie newsreels were becoming popular . <hl> Roosevelt , who had always excelled at cultivating good relationships with the print media , eagerly exploited this new opportunity as he took his case to the people with the concept of the presidency as bully pulpit , a platform from which to push his agenda to the public . <hl> His successors followed suit , and they discovered and employed new ways of transmitting their message to the people in an effort to gain public support for policy initiatives . With the popularization of radio in the early twentieth century , it became possible to broadcast the president ’ s voice into many of the nation ’ s homes . Most famously , FDR used the radio to broadcast his thirty “ fireside chats ” to the nation between 1933 and 1944 .", "hl_sentences": "Roosevelt , who had always excelled at cultivating good relationships with the print media , eagerly exploited this new opportunity as he took his case to the people with the concept of the presidency as bully pulpit , a platform from which to push his agenda to the public .", "question": { "cloze_format": "President Theodore Roosevelt’s concept of the bully pulpit was the office’s ________.", "normal_format": "What did President Theodore Roosevelt’s concept of the bully pulpit mean to the offices?", "question_choices": [ "authority to use force, especially military force", "constitutional power to veto legislation", "premier position to pressure through public appeal", "ability to use technology to enhance the voice of the president" ], "question_id": "fs-id11725033653660", "question_text": "President Theodore Roosevelt’s concept of the bully pulpit was the office’s ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 0, "ans_text": "struggles for power between the president and the Congress" }, "bloom": null, "hl_context": "The president may not be able to appoint key members of his or her administration without Senate confirmation , but he or she can demand the resignation or removal of cabinet officers , high-ranking appointees ( such as ambassadors ) , and members of the presidential staff . <hl> During Reconstruction , Congress tried to curtail the president ’ s removal power with the Tenure of Office Act ( 1867 ) , which required Senate concurrence to remove presidential nominees who took office upon Senate confirmation . <hl> Andrew Johnson ’ s violation of that legislation provided the grounds for his impeachment in 1868 . Subsequent presidents secured modifications of the legislation before the Supreme Court ruled in 1926 that the Senate had no right to impair the president ’ s removal power . 
39 In the case of Senate failure to approve presidential nominations , the president is empowered to issue recess appointments ( made while the Senate is in recess ) that continue in force until the end of the next session of the Senate ( unless the Senate confirms the nominee ) .", "hl_sentences": "During Reconstruction , Congress tried to curtail the president ’ s removal power with the Tenure of Office Act ( 1867 ) , which required Senate concurrence to remove presidential nominees who took office upon Senate confirmation .", "question": { "cloze_format": "The passage of the Tenure of Office Act of 1867 was just one instance in a long line of ________.", "normal_format": "What was the passage of the Tenure of Office Act of 1867 an instance of?", "question_choices": [ "struggles for power between the president and the Congress", "unconstitutional presidential power grabbing", "impeachment trials", "arguments over presidential policy" ], "question_id": "fs-id1172503338543", "question_text": "The passage of the Tenure of Office Act of 1867 was just one instance in a long line of ________." }, "references_are_paraphrase": 0 }, { "answer": { "ans_choice": 2, "ans_text": "C" }, "bloom": null, "hl_context": "Presidents also issue executive agreements with foreign powers . <hl> Executive agreements are formal agreements negotiated between two countries but not ratified by a legislature as a treaty must be . <hl> As such , they are not treaties under U . S . law , which require two-thirds of the Senate for ratification . Treaties , presidents have found , are particularly difficult to get ratified . And with the fast pace and complex demands of modern foreign policy , concluding treaties with countries can be a tiresome and burdensome chore . That said , some executive agreements do require some legislative approval , such as those that commit the United States to make payments and thus are restrained by the congressional power of the purse . <hl> But for the most part , executive agreements signed by the president require no congressional action and are considered enforceable as long as the provisions of the executive agreement do not conflict with current domestic law . <hl>", "hl_sentences": "Executive agreements are formal agreements negotiated between two countries but not ratified by a legislature as a treaty must be . But for the most part , executive agreements signed by the president require no congressional action and are considered enforceable as long as the provisions of the executive agreement do not conflict with current domestic law .", "question": { "cloze_format": "An example of an executive agreement is ___.", "normal_format": "Which of the following is an example of an executive agreement?", "question_choices": [ "The president negotiates an agreement with China and submits it to the Senate for ratification.", "The president changes a regulation on undocumented immigrant status without congressional approval.", "The president signs legally binding nuclear arms terms with Iran without seeking congressional approval.", "The president issues recommendations to the Department of Justice on what the meaning of a new criminal statute is." ], "question_id": "fs-id1172503355046", "question_text": "Which of the following is an example of an executive agreement?" } ]
12
12.1 The Design and Evolution of the Presidency Learning Objectives By the end of this section, you will be able to: Explain the reason for the design of the executive branch and its plausible alternatives Analyze the way presidents have expanded presidential power and why Identify the limitations on a president's power Since its invention at the Constitutional Convention of 1787, the presidential office has gradually become more powerful, giving its occupants a far-greater chance to exercise leadership at home and abroad. The role of the chief executive has changed over time, as various presidents have confronted challenges in domestic and foreign policy in times of war as well as peace, and as the power of the federal government has grown. INVENTING THE PRESIDENCY The Articles of Confederation made no provision for an executive branch, although they did use the term “president” to designate the presiding officer of the Confederation Congress, who also handled other administrative duties. 1 The presidency was proposed early in the Constitutional Convention in Philadelphia by Virginia’s Edmund Randolph, as part of James Madison’s proposal for a federal government, which became known as the Virginia Plan . Madison offered a rather sketchy outline of the executive branch, leaving open whether what he termed the “national executive” would be an individual or a set of people. He proposed that Congress select the executive, whose powers and authority, and even length of term of service, were left largely undefined. He also proposed a “council of revision” consisting of the national executive and members of the national judiciary, which would review laws passed by the legislature and have the power of veto. 2 Early deliberations produced agreement that the executive would be a single person, elected for a single term of seven years by the legislature, empowered to veto legislation, and subject to impeachment and removal by the legislature. New Jersey’s William Paterson offered an alternate model as part of his proposal, typically referred to as the small-state or New Jersey Plan . This plan called for merely amending the Articles of Confederation to allow for an executive branch made up of a committee elected by a unicameral Congress for a single term. Under this proposal, the executive committee would be particularly weak because it could be removed from power at any point if a majority of state governors so desired. Far more extreme was Alexander Hamilton’s suggestion that the executive power be entrusted to a single individual. This individual would be chosen by electors, would serve for life, and would exercise broad powers, including the ability to veto legislation, the power to negotiate treaties and grant pardons in all cases except treason, and the duty to serve as commander-in-chief of the armed forces ( Figure 12.2 ). Debate and discussion continued throughout the summer. Delegates eventually settled upon a single executive, but they remained at a loss for how to select that person. Pennsylvania’s James Wilson, who had triumphed on the issue of a single executive, at first proposed the direct election of the president. When delegates rejected that idea, he responded with the suggestion that electors, chosen throughout the nation, should select the executive. Over time, Wilson’s idea gained ground with delegates who were uneasy at the idea of an election by the legislature, which presented the opportunity for intrigue and corruption. 
The idea of a shorter term of service combined with eligibility for reelection also became more attractive to delegates. The framers of the Constitution struggled to find the proper balance between giving the president the power to perform the job on one hand and opening the way for a president to abuse power and act like a monarch on the other. By early September, the Electoral College had emerged as the way to select a president for four years who was eligible for reelection. This process is discussed more fully in the chapter on elections. Today, the Electoral College consists of a body of 538 people called electors, each representing one of the fifty states or the District of Columbia, who formally cast votes for the election of the president and vice president ( Figure 12.3 ). In forty-eight states and the District of Columbia, the candidate who wins the popular vote in November receives all the state’s electoral votes. In two states, Nebraska and Maine, the electoral votes are divided: The candidate who wins the popular vote in the state gets two electoral votes, but the winner of each congressional district also receives an electoral vote. In the original design implemented for the first four presidential elections (1788–89, 1792, 1796, and 1800), the electors cast two ballots (but only one could go to a candidate from the elector’s state), and the person who received a majority won the election. The second-place finisher became vice president. Should no candidate receive a majority of the votes cast, the House of Representatives would select the president, with each state casting a single vote, while the Senate chose the vice president. While George Washington was elected president twice with this approach, the design resulted in controversy in both the 1796 and 1800 elections. In 1796, John Adams won the presidency, while his opponent and political rival Thomas Jefferson was elected vice president. In 1800, Thomas Jefferson and his running mate Aaron Burr finished tied in the Electoral College. Jefferson was elected president in the House of Representatives on the thirty-sixth ballot. These controversies led to the proposal and ratification of the Twelfth Amendment , which couples a particular presidential candidate with that candidate’s running mate in a unified ticket. 3 For the last two centuries or so, the Twelfth Amendment has worked fairly well. But this doesn’t mean the arrangement is foolproof. For example, the amendment created a separate ballot for the vice president but left the rules for electors largely intact. One of those rules states that the two votes the electors cast cannot both be for “an inhabitant of the same state with themselves.” 4 This rule means that an elector from, say, Louisiana, could not cast votes for a presidential candidate and a vice presidential candidate who were both from Louisiana; that elector could vote for only one of these people. The intent of the rule was to encourage electors from powerful states to look for a more diverse pool of candidates. But what would happen in a close election where the members of the winning ticket were both from the same state? The nation almost found out in 2000. In the presidential election of that year, the Republican ticket won the election by a very narrow electoral margin. To win the presidency or vice presidency, a candidate must get 270 electoral votes (a majority). George W. Bush and Dick Cheney won by the skin of their teeth with just 271. Both, however, were living in Texas. 
This should have meant that Texas’s 32 electoral votes could have gone to only one or the other. Cheney anticipated this problem and had earlier registered to vote in Wyoming, where he was originally from and where he had served as a representative years earlier. 5 It’s hard to imagine that the 2000 presidential election could have been even more complicated than it was, but thanks to that seemingly innocuous rule in Article II of the Constitution, that was a real possibility. Despite provisions for the election of a vice president (to serve in case of the president’s death, resignation, or removal through the impeachment process), and apart from the suggestion that the vice president should be responsible for presiding over the Senate, the framers left the vice president’s role undeveloped. As a result, the influence of the vice presidency has varied dramatically, depending on how much of a role the vice president is given by the president. Some vice presidents, such as Dan Quayle under President George H. W. Bush, serve a mostly ceremonial function, while others, like Dick Cheney under President George W. Bush, become a partner in governance and rival the White House chief of staff in terms of influence. Link to Learning Read about James Madison’s evolving views of the presidency and the Electoral College. In addition to describing the process of election for the presidency and vice presidency, the delegates to the Constitutional Convention also outlined who was eligible for election and how Congress might remove the president. Article II of the Constitution lays out the agreed-upon requirements—the chief executive must be at least thirty-five years old and a “natural born” citizen of the United States (or a citizen at the time of the Constitution’s adoption) who has been an inhabitant of the United States for at least fourteen years. 6 While Article II also states that the term of office is four years and does not expressly limit the number of times a person might be elected president, after Franklin D. Roosevelt was elected four times (from 1932 to 1944), the Twenty-Second Amendment was proposed and ratified, limiting the presidency to two four-year terms. An important means of ensuring that no president could become tyrannical was to build into the Constitution a clear process for removing the chief executive— impeachment . Impeachment is the act of charging a government official with serious wrongdoing; the Constitution calls this wrongdoing high crimes and misdemeanors. The method the framers designed required two steps and both chambers of the Congress. First, the House of Representatives could impeach the president by a simple majority vote. In the second step, the Senate could remove him or her from office by a two-thirds majority, with the chief justice of the Supreme Court presiding over the trial. Upon conviction and removal of the president, if that occurred, the vice president would become president. Three presidents have faced impeachment proceedings in the House; none has been both impeached by the House and removed by the Senate. In the wake of the Civil War, President Andrew Johnson faced congressional contempt for decisions made during Reconstruction. President Richard Nixon faced an overwhelming likelihood of impeachment in the House for his cover-up of key information relating to the 1972 break-in at the Democratic Party’s campaign headquarters at the Watergate hotel and apartment complex. 
Nixon likely would have also been removed by the Senate, since there was strong bipartisan consensus for his impeachment and removal. Instead, he resigned before the House and Senate could exercise their constitutional prerogatives. The most recent impeachment was of President Bill Clinton , brought on by his lying about an extramarital affair with a White House intern named Monica Lewinsky. House Republicans felt the affair and Clinton’s initial public denial of it rose to a level of wrongdoing worthy of impeachment. House Democrats believed it fell short of an impeachable offense and that a simple censure made better sense. Clinton's trial in the Senate went nowhere because too few Senators wanted to move forward with removing the president. The same outcome occurred in the case of Andrew Johnson in the nineteenth century, though he came closer to the threshold of votes needed for removal than did Clinton. Thus, impeachment remains a rare event indeed, and removal has never occurred. Still, the fact that a president could be impeached and removed is an important reminder of the role of the executive in the broader system of shared powers. The Constitution that emerged from the deliberations in Philadelphia treated the powers of the presidency in concise fashion. The president was to be commander-in-chief of the armed forces of the United States, negotiate treaties with the advice and consent of the Senate, and receive representatives of foreign nations ( Figure 12.4 ). Charged to “take care that the laws be faithfully executed,” the president was given broad power to pardon those convicted of federal offenses, except for officials removed through the impeachment process. 7 The chief executive would present to Congress information about the state of the union; call Congress into session when needed; veto legislation if necessary, although a two-thirds supermajority in both houses of Congress could override that veto; and make recommendations for legislation and policy as well as call on the heads of various departments to make reports and offer opinions. Finally, the president’s job included nominating federal judges, including Supreme Court justices, as well as other federal officials, and making appointments to fill military and diplomatic posts. The number of judicial appointments and nominations of other federal officials is great. In recent decades, two-term presidents have nominated well over three hundred federal judges while in office. 8 Moreover, new presidents nominate close to five hundred top officials to their Executive Office of the President, key agencies (such as the Department of Justice), and regulatory commissions (such as the Federal Reserve Board), whose appointments require Senate majority approval. 9 THE EVOLVING EXECUTIVE BRANCH No sooner had the presidency been established than the occupants of the office, starting with George Washington , began acting in ways that expanded both its formal and informal powers. For example, Washington established a cabinet or group of advisors to help him administer his duties, consisting of the most senior appointed officers of the executive branch. Today, the heads of the fifteen executive departments serve as the president’s advisers.
10 And, in 1793, when it became important for the United States to take a stand in the evolving European conflicts between France and other European powers, especially Great Britain, Washington issued a neutrality proclamation that extended his rights as diplomat-in-chief far more broadly than had at first been conceived. Later presidents built on the foundation of these powers. Some waged undeclared wars, as John Adams did against the French in the Quasi-War (1798–1800). Others agreed to negotiate for significant territorial gains, as Thomas Jefferson did when he oversaw the purchase of Louisiana from France. Concerned that he might be violating the powers of the office, Jefferson rationalized that his not facing impeachment charges constituted Congress’s tacit approval of his actions. James Monroe used his annual message in 1823 to declare that the United States would consider it an intolerable act of aggression for European powers to intervene in the affairs of the nations of the Western Hemisphere. Later dubbed the Monroe Doctrine , this declaration of principles laid the foundation for the growth of American power in the twentieth century. Andrew Jackson employed the veto as a measure of policy to block legislative initiatives with which he did not agree and acted unilaterally when it came to depositing federal funds in several local banks around the country instead of in the Bank of the United States. This move changed the way vetoes would be used in the future. Jackson’s twelve vetoes were more than those of all prior presidents combined, and he issued them due to policy disagreements (their basis today) rather than as a legal tool to protect against encroachments by Congress on the president’s powers. Of the many ways in which the chief executive’s power grew over the first several decades, the most significant was the expansion of presidential war powers. While Washington, Adams, and Jefferson led the way in waging undeclared wars, it was President James K. Polk who truly set the stage for the broad growth of this authority. In 1846, as the United States and Mexico were bickering over the messy issue of where Texas’s southern border lay, Polk purposely raised anxieties and ruffled feathers through his envoy in Mexico. He then responded to the newly heightened state of affairs by sending U.S. troops to the Rio Grande, the border Texan expansionists claimed for Texas. Mexico sent troops in response, and the Mexican-American War began soon afterward. 11 Abraham Lincoln , a member of Congress at the time, was critical of Polk’s actions. Later, however, as president himself, Lincoln used presidential war powers and the concepts of military necessity and national security to undermine the Confederate effort to seek independence for the Southern states. In suspending the privilege of the writ of habeas corpus , Lincoln blurred the boundaries between acceptable dissent and unacceptable disloyalty. He also famously used a unilateral proclamation to issue the Emancipation Proclamation , which cited the military necessity of declaring millions of slaves in Confederate-controlled territory to be free. His successor, Andrew Johnson , became so embroiled with Radical Republicans about ways to implement Reconstruction policies and programs after the Civil War that the House of Representatives impeached him, although the legislators in the Senate were unable to successfully remove him from office. 12 Over the course of the twentieth century, presidents expanded and elaborated upon these powers. 
The rather vague wording in Article II , which says that the “executive power shall be vested” in the president, has been subject to broad and sweeping interpretation in order to justify actions beyond those specifically enumerated in the document. 13 As the federal bureaucracy expanded, so too did the president’s power to grow agencies like the Secret Service and the Federal Bureau of Investigation. Presidents also further developed the concept of executive privilege , the right to withhold information from Congress, the judiciary, or the public. This right, not enumerated in the Constitution, was first asserted by George Washington to curtail inquiry into the actions of the executive branch. 14 The more general defense of its use by White House officials and attorneys ensures that the president can secure candid advice from his or her advisors and staff members. Increasingly over time, presidents have made more use of their unilateral powers, including executive orders , rules that bypass Congress but still have the force of law if the courts do not overturn them. More recently, presidents have offered their own interpretation of legislation as they sign it via signing statements (discussed later in this chapter) directed to the bureaucratic entity charged with implementation. In the realm of foreign policy, Congress permitted the widespread use of executive agreements to formalize international relations, so long as important matters still came through the Senate in the form of treaties. 15 Recent presidents have continued to rely upon an ever more expansive definition of war powers to act unilaterally at home and abroad. Finally, presidents, often with Congress's blessing through the formal delegation of authority, have taken the lead in framing budgets, negotiating budget compromises, and at times impounding funds in an effort to prevail in matters of policy. Milestone The Budget and Accounting Act of 1921 Developing a budget in the nineteenth century was a chaotic mess. Unlike the case today, in which the budgeting process is centrally controlled, Congresses in the nineteenth century developed a budget in a piecemeal process. Federal agencies independently submitted budget requests to Congress, and these requests were then considered through the congressional committee process. Because the government was relatively small in the first few decades of the republic, this approach was sufficient. However, as the size and complexity of the U.S. economy grew over the course of the nineteenth century, the traditional congressional budgeting process was unable to keep up. 16 Things finally came to a head following World War I, when federal spending and debt skyrocketed. Reformers proposed the solution of putting the executive branch in charge of developing a budget that could be scrutinized, amended, and approved by Congress. However, President Woodrow Wilson , owing to a provision tacked onto the bill regarding presidential appointments, vetoed the legislation that would have transformed the budgeting process in this way. His successor, Warren Harding , felt differently and signed the Budget and Accounting Act of 1921. The act gave the president first-mover advantage in the budget process via the first “executive budget.” It also created the first-ever budget staff at the disposal of a president, at the time called the Bureau of the Budget but decades later renamed the Office of Management and Budget ( Figure 12.5 ).
With this act, Congress willingly delegated significant authority to the executive and made the president the chief budget agenda setter. The Budget Act of 1921 effectively shifted some congressional powers to the president. Why might Congress have felt it important to centralize the budgeting process in the executive branch? What advantages could the executive branch have over the legislative branch in this regard? The growth of presidential power is also attributable to the growth of the United States and the power of the national government. As the nation has grown and developed, so has the office. Whereas most important decisions were once made at the state and local levels, the increasing complexity and size of the domestic economy have led people in the United States to look to the federal government more often for solutions. At the same time, the rising profile of the United States on the international stage has meant that the president is a far more important figure as leader of the nation, as diplomat-in-chief, and as commander-in-chief. Finally, with the rise of electronic mass media, a president who once depended on newspapers and official documents to distribute information beyond an immediate audience can now bring that message directly to the people via radio, television, and social media. Major events and crises, such as the Great Depression, two world wars, the Cold War, and the war on terrorism, have further contributed to presidential stature. 12.2 The Presidential Election Process Learning Objectives By the end of this section, you will be able to: Describe changes over time in the way the president and vice president are selected Identify the stages in the modern presidential selection process Assess the advantages and disadvantages of the Electoral College The process of electing a president every four years has evolved over time. This evolution has resulted from attempts to correct the cumbersome procedures first offered by the framers of the Constitution and as a result of political parties’ rising power to act as gatekeepers to the presidency. Over the last several decades, the manner by which parties have chosen candidates has trended away from congressional caucuses and conventions and towards a drawn-out series of state contests, called primaries and caucuses, which begin in the winter prior to the November general election. SELECTING THE CANDIDATE: THE PARTY PROCESS The framers of the Constitution made no provision in the document for the establishment of political parties. Indeed, parties were not necessary to select the first president, since George Washington ran unopposed. Following the first election of Washington, the political party system gained steam and power in the electoral process, creating separate nomination and general election stages. Early on, the power to nominate presidents for office bubbled up from the party operatives in the various state legislatures and toward what was known as the king caucus or congressional caucus. The caucus or large-scale gathering was made up of legislators in the Congress who met informally to decide on nominees from their respective parties. In somewhat of a countervailing trend in the general election stage of the process, by the presidential election of 1824, many states were using popular elections to choose their electors. This became important in that election when Andrew Jackson won the popular vote and the largest number of electors, but the presidency was given to John Quincy Adams instead. 
Out of the frustration of Jackson’s supporters emerged a powerful two-party system that took control of the selection process. 17 In the decades that followed, party organizations, party leaders, and workers met in national conventions to choose their nominees, sometimes after long struggles that took place over multiple ballots. In this way, the political parties kept tight control over the selection of a candidate. In the early twentieth century, however, some states began to hold primaries , elections in which candidates vied for the support of state delegations to the party’s nominating convention. Over the course of the century, the primaries gradually became a far more important part of the process, though the party leadership still controlled the route to nomination through the convention system. This has changed in recent decades, and now a majority of the delegates are chosen through primary elections, and the party conventions themselves are little more than a widely publicized rubber-stamping event. The rise of the presidential primary and caucus system as the main means by which presidential candidates are selected has had a number of anticipated and unanticipated consequences. For one, the campaign season has grown longer and more costly. In 1960, John F. Kennedy declared his intention to run for the presidency just eleven months before the general election. Compare this to Hillary Clinton, who announced her intention to run nearly two years before the 2008 general election. Today’s long campaign seasons are filled with a seemingly ever-increasing number of debates among contenders for the nomination. In 2016, when the number of candidates for the Republican nomination became large and unwieldy, two separate debates were held, and only the candidates polling greater support were allowed into the more important prime-time debate. The runners-up spoke in the other debate. Finally, the process of going straight to the people through primaries and caucuses has created some opportunities for party outsiders to rise. Neither Ronald Reagan nor Bill Clinton was especially popular with the party leadership of the Republicans or the Democrats (respectively) at the outset. The outsider phenomenon has been most clearly demonstrated, however, in the 2016 presidential nominating process, as those distrusted by the party establishment, such as Senator Ted Cruz and Donald Trump , who never before held political office, raced ahead of party favorites like Jeb Bush early in the primary process ( Figure 12.6 ). The rise of the primary system during the Progressive Era came at the cost of party regulars’ control of the process of candidate selection. Some party primaries even allow registered independents or members of the opposite party to vote. Even so, the process tends to attract the party faithful at the expense of independent voters, who often hold the key to victory in the fall contest. Thus, candidates who want to succeed in the primary contests seek to align themselves with committed partisans, who are often at the ideological extreme. Those who survive the primaries in this way have to moderate their image as they enter the general election if they hope to succeed among the rest of the party adherents and the uncommitted. Primaries offer tests of candidates’ popular appeal, while state caucuses testify to their ability to mobilize and organize grassroots support among committed followers.
Primaries also reward candidates in different ways, with some giving the winner all the state’s convention delegates, while others distribute delegates proportionately according to the distribution of voter support. Finally, the order in which the primary elections and caucus selections are held shapes the overall race. 18 Currently, the Iowa caucuses and the New Hampshire primary occur first. These early contests tend to shrink the field as candidates who perform poorly leave the race. At other times in the campaign process, some states will maximize their impact on the race by holding their primaries on the same day that other states do. The media has dubbed these critical groupings “Super Tuesdays,” “Super Saturdays,” and so on. They tend to occur later in the nominating process as parties try to force the voters to coalesce around a single nominee. The rise of the primary has also displaced the convention itself as the place where party regulars choose their standard bearer. Once true contests in which party leaders fought it out to elect a candidate, by the 1970s, party conventions more often than not simply served to rubber-stamp the choice of the primaries. By the 1980s, the convention drama was gone, replaced by a long, televised commercial designed to extol the party’s greatness ( Figure 12.7 ). Without the drama and uncertainty, major news outlets have steadily curtailed their coverage of the conventions, convinced that few people are interested. The 2016 elections seem to support the idea that the primary process, rather than party insiders, produces the nominee. Outsiders Donald Trump on the Republican side and Senator Bernie Sanders on the Democratic side had much success despite significant concerns about them from party elites. Whether this pattern could be reversed in the case of a closely contested selection process remains to be seen. ELECTING THE PRESIDENT: THE GENERAL ELECTION Early presidential elections, conducted along the lines of the original process outlined in the Constitution, proved unsatisfactory. So long as George Washington was a candidate, his election was a foregone conclusion. But it took some manipulation of the votes of electors to ensure that the second-place winner (and thus the vice president) did not receive the same number of votes. When Washington declined to run again after two terms, matters worsened. In 1796, political rivals John Adams and Thomas Jefferson were elected president and vice president, respectively. Yet the two men failed to work well together during Adams’s administration, much of which Jefferson spent at his Virginia residence at Monticello. As noted earlier in this chapter, the shortcomings of the system became painfully evident in 1800, when Jefferson and his running mate Aaron Burr finished tied, thus leaving it to the House of Representatives to elect Jefferson. 19 The Twelfth Amendment , ratified in 1804, provided for the separate election of president and vice president as well as setting out ways to choose a winner if no one received a majority of the electoral votes. Only once since the passage of the Twelfth Amendment, during the election of 1824, has the House selected the president under these rules, and only once, in 1836, has the Senate chosen the vice president. In several elections, such as in 1876 and 1888, a candidate who received less than a majority of the popular vote has claimed the presidency, including cases in which the losing candidate secured a majority of the popular vote.
A recent case was the 2000 election, in which Democratic nominee Al Gore won the popular vote, while Republican nominee George W. Bush won the Electoral College vote and hence the presidency. The 2016 election brought another such irregularity: Donald Trump comfortably won the Electoral College by narrowly winning the popular vote in several states, while Hillary Clinton collected nearly 2.9 million more votes nationwide. Not everyone is satisfied with how the Electoral College fundamentally shapes the election, especially in cases such as those noted above, when a candidate with a minority of the popular vote claims victory over a candidate who drew more popular support. Yet movements for electoral reform, including proposals for a straightforward nationwide direct election by popular vote, have gained little traction. Supporters of the current system defend it as a manifestation of federalism, arguing that it also guards against the chaos inherent in a multiparty environment by encouraging the current two-party system. They point out that under a system of direct election, candidates would focus their efforts on more populous regions and ignore others. 20 Critics, on the other hand, charge that the current system negates the one-person, one-vote basis of U.S. elections, subverts majority rule, works against political participation in states deemed safe for one party, and might lead to chaos should an elector desert a candidate, thus thwarting the popular will. Despite all this, the system remains in place. It appears that many people are more comfortable with the problems of a flawed system than with the uncertainty of change. 21 Get Connected! Electoral College Reform Following the 2000 presidential election, when then-governor George W. Bush won with just one electoral vote more than the 270 required and with over half a million fewer individual votes than his challenger, astonished voters called for Electoral College reform. Years later, however, nothing of any significance had been done. The absence of reform in the wake of such a problematic election is a testament to the staying power of the Electoral College. Those who insist that the Electoral College should be reformed argue that its potential benefits pale in comparison to the way it depresses voter turnout and fails to represent the popular will. In addition to favoring small states, whose individual voters count for more than those in larger states because of the mathematics involved in the distribution of electors, the Electoral College results in a significant number of “safe” states that receive no real electioneering, such that nearly 75 percent of the country is ignored in the general election. One potential solution to the problems with the Electoral College is to scrap it altogether and replace it with the popular vote. The popular vote would be the aggregated totals of the votes in the fifty states and the District of Columbia, as certified by the head election official of each state. A second solution often mentioned is to make the Electoral College proportional: as each state assigns its electoral votes, it would do so based on the popular vote percentages in that state, rather than with the winner-take-all approach almost all the states use today.
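To make the arithmetic behind these proposals more concrete, the short Python sketch below compares winner-take-all allocation with a simple proportional allocation for a single hypothetical state, and it also illustrates the small-state weighting noted earlier in this feature. All of the vote shares are invented, the population figures are only rough approximations of recent census values, and the rounding rule is deliberately simplified; this is a teaching sketch, not a description of any state's actual election law.

```python
# 1) Small-state weighting: electors per million residents.
#    Population figures are rough approximations, used only for illustration.
states = {
    "Wyoming": {"electors": 3, "population_millions": 0.58},
    "California": {"electors": 55, "population_millions": 39.0},
}
for name, s in states.items():
    weight = s["electors"] / s["population_millions"]
    print(f"{name}: about {weight:.1f} electors per million residents")
# A Wyoming vote carries several times the electoral weight of a California vote.

# 2) Winner-take-all versus proportional allocation in one hypothetical state.
electors = 10                                             # invented elector count
vote_shares = {"Candidate A": 0.52, "Candidate B": 0.48}  # invented vote shares

# Winner-take-all: the plurality winner receives every elector.
winner = max(vote_shares, key=vote_shares.get)
winner_take_all = {c: (electors if c == winner else 0) for c in vote_shares}

# Proportional: electors split by vote share (naive rounding; a real rule
# would need a method for handling remainders).
proportional = {c: round(share * electors) for c, share in vote_shares.items()}

print("Winner-take-all:", winner_take_all)  # {'Candidate A': 10, 'Candidate B': 0}
print("Proportional:   ", proportional)     # {'Candidate A': 5, 'Candidate B': 5}
```

Under these invented numbers, a narrow 52–48 statewide win yields all ten electors under winner-take-all but an even five–five split under simple proportional rounding, which is why the choice of allocation rule matters so much to reformers.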
A third alternative for Electoral College reform has been proposed by an organization called National Popular Vote. The National Popular Vote movement is an interstate compact among the states that choose to join it. Once a combination of states controlling at least 270 Electoral College votes supports the movement, each state entering the compact pledges all of its Electoral College votes to the national popular vote winner. This reform does not technically change the Electoral College structure, but it results in a mandated process that makes the Electoral College reflect the popular vote. Thus far, eleven states with a total of 165 electoral votes among them have signed onto the compact. In what ways does the current Electoral College system protect the representative power of small states and less densely populated regions? Why might it be important to preserve these protections? Follow-up activity: View the National Popular Vote website to learn more about their position. Consider reaching out to them to learn more, offer your support, or even to argue against their proposal. Link to Learning See how the Electoral College and the idea of swing states fundamentally shape elections by experimenting with the interactive Electoral College map at 270 to Win. The general election usually features a series of debates between the presidential contenders as well as a debate among vice presidential candidates. Because the stakes are high, enormous sums of money and resources are expended on all sides. Attempts to rein in the mounting costs of modern general-election campaigns have proven ineffective. Nor has public funding helped to solve the problem. Indeed, starting with Barack Obama’s 2008 decision to forfeit public funding so as to skirt the spending limitations it imposes, candidates now regularly opt to raise more money rather than take public funding. 22 In addition, political action committees (PACs), supposedly focused on issues rather than specific candidates, seek to influence the outcome of the race by supporting or opposing a candidate according to the PAC’s own interests. But after all the spending and debating is done, those who have not already voted by other means set out on the first Tuesday following the first Monday in November to cast their votes. Several weeks later, the electoral votes are counted and the president is formally elected (Figure 12.8). 12.3 Organizing to Govern Learning Objectives By the end of this section, you will be able to: Explain how incoming and outgoing presidents peacefully transfer power Describe how new presidents fill positions in the executive branch Discuss how incoming presidents use their early popularity to advance larger policy solutions It is one thing to win an election; it is quite another to govern, as many frustrated presidents have discovered. Critical to a president’s success in office is the ability to make a deft transition from the previous administration, including naming a cabinet and filling other offices. The new chief executive must also fashion an agenda, which he or she will often preview in general terms in an inaugural address. Presidents usually embark upon their presidency benefitting from their own and the nation’s renewed hope and optimism, although unrealistic expectations often set the stage for subsequent disappointment. TRANSITION AND APPOINTMENTS In the immediate aftermath of the election, the incoming and outgoing administrations work together to help facilitate the transfer of power.
While the General Services Administration oversees the logistics of the process, such as office assignments, information technology, and the assignment of keys, prudent candidates typically prepare for a possible victory by appointing members of a transition team during the lead-up to the general election. The success of the team’s work becomes apparent on inauguration day, when the transition of power takes place in what is often a seamless fashion, with people vacating their offices (and the White House) for their successors. Link to Learning Read about presidential transitions as well as explore other topics related to the transfer of power at the White House Transition Project website. Among the president-elect’s more important tasks is the selection of a cabinet. George Washington’s cabinet was made up of only four people: the attorney general and the secretaries of the Departments of War, State, and the Treasury. Currently, however, there are fifteen members of the cabinet, including the secretaries of Labor, Agriculture, and Education, among others (Figure 12.9). The most important members—the heads of the Departments of Defense, Justice, State, and the Treasury (echoing Washington’s original cabinet)—receive the most attention from the president, the Congress, and the media. These four departments have been referred to as the inner cabinet, while the others are called the outer cabinet. When selecting a cabinet, presidents consider ability, expertise, influence, and reputation. More recently, presidents have also tried to balance political and demographic representation (gender, race, religion, and other considerations) to produce a cabinet that is capable as well as descriptively representative, meaning that those in the cabinet look like the U.S. population (see the chapter on bureaucracy and the term “representative bureaucracy”). A recent president who explicitly stated this as his goal was Bill Clinton, who talked about an “E.G.G. strategy” for senior-level appointments, where the E stands for ethnicity, G for gender, and the second G for geography. Once the new president has been inaugurated and can officially nominate people to fill cabinet positions, the Senate confirms or rejects these nominations. At times, though rarely, cabinet nominations have failed to be confirmed or have even been withdrawn because of questions raised about the past behavior of the nominee. 23 Prominent examples of such withdrawals were Senator John Tower for defense secretary (George H. W. Bush) and Zoe Baird for attorney general (Bill Clinton): Senator Tower’s indiscretions involving alcohol and womanizing led to concerns about his fitness to head the military and to his rejection by the Senate, 24 whereas Zoe Baird faced controversy and withdrew her nomination when it was revealed, through what the press dubbed “Nannygate,” that members of her household staff were undocumented workers. These cases, however, are rare exceptions to the rule, which is to approve the nominees the president wishes to have in the cabinet. Other possible candidates for cabinet posts may decline to be considered for a number of reasons, from the reduction in pay that can accompany entrance into public life to unwillingness to be subjected to the vetting process that accompanies a nomination. Also subject to Senate approval are a number of non-cabinet subordinate administrators in the various departments of the executive branch, as well as the administrative heads of several agencies and commissions.
These include the heads of the Internal Revenue Service, the Central Intelligence Agency, the Office of Management and Budget, the Federal Reserve, the Social Security Administration, the Environmental Protection Agency, the National Labor Relations Board, and the Equal Employment Opportunity Commission. The Office of Management and Budget (OMB) is the president’s own budget department. In addition to preparing the executive budget proposal and overseeing budgetary implementation during the federal fiscal year, the OMB oversees the actions of the executive bureaucracy. Not all the non-cabinet positions are open at the beginning of an administration, but presidents move quickly to install their preferred choices in most roles when given the opportunity. Finally, new presidents usually take the opportunity to nominate new ambassadors, whose appointments are subject to Senate confirmation. New presidents make thousands of new appointments in their first two years in office. Appointments to all the senior cabinet agency positions and to all positions in the Executive Office of the President are made as presidents enter office or when positions become vacant during their presidency. Federal judges serve for life. Therefore, vacancies on the federal courts and the U.S. Supreme Court occur gradually, as judges retire. Throughout much of the history of the republic, the Senate has closely guarded its constitutional duty to consent to the president’s nominees, although in the end it nearly always confirms them. Still, the Senate does occasionally hold up a nominee. Benjamin Fishbourn, President George Washington’s nominee for a minor naval post, was rejected largely because he had insulted a particular senator. 25 Other rejected nominees included Clement Haynsworth and G. Harrold Carswell, nominated for the U.S. Supreme Court by President Nixon; Theodore Sorensen, nominated by President Carter for director of the Central Intelligence Agency; and John Tower, discussed earlier. At other times, the Senate has used its power to rigorously scrutinize the president’s nominees (Figure 12.10). Supreme Court nominee Clarence Thomas, who faced numerous sexual harassment charges from former employees, was forced to sit through repeated questioning of his character and past behavior during Senate hearings, something he referred to as “a high-tech lynching for uppity blacks.” 26 More recently, the Senate has attempted a new strategy, refusing to hold hearings at all, a strategy of defeat that scholars have referred to as “malign neglect.” 27 Despite the fact that one-third of U.S. presidents have appointed a Supreme Court justice in an election year, when Associate Justice Antonin Scalia died unexpectedly in early 2016, Senate majority leader Mitch McConnell declared that the Senate would not hold hearings on a nominee until after the upcoming presidential election. 28 McConnell remained adamant even after President Barack Obama, saying he was acting in fulfillment of his constitutional duty, nominated Merrick Garland, longtime chief judge of the federal Circuit Court of Appeals for the DC Circuit. Garland is highly respected by senators from both parties and won confirmation to his DC Circuit position by a 76–23 vote in the Senate. When Republican Donald Trump was elected president in the fall, this strategy appeared to pay off, with the Republican Senate and its Judiciary Committee expecting to welcome a Trump nominee in early 2017.
Other presidential selections are not subject to Senate approval, including the president’s personal staff (whose most important member is the White House chief of staff) and various advisers (most notably the national security adviser). The Executive Office of the President, created by Franklin D. Roosevelt (FDR), contains a number of advisory bodies, including the Council of Economic Advisers, the National Security Council, the OMB, and the Office of the Vice President. Presidents also choose political advisers, speechwriters, and a press secretary to manage the politics and the message of the administration. In recent years, the president’s staff has become identified by the name of the place where many of its members work: the West Wing of the White House. These people serve at the pleasure of the president, and often the president reshuffles or reforms the staff during his or her term. Just as government bureaucracy has expanded over the centuries, so has the White House staff, which under Abraham Lincoln numbered a handful of private secretaries and a few minor functionaries. A recent report pegged the number of employees working within the White House at over 450. 29 When the staff in nearby executive buildings of the Executive Office of the President are added in, that number increases fourfold. Finding a Middle Ground No Fun at Recess: Dueling Loopholes and the Limits of Presidential Appointments When Supreme Court justice Antonin Scalia died unexpectedly in early 2016, many in Washington braced for a political sandstorm of obstruction and accusations. Such was the record of Supreme Court nominations during the Obama administration and, indeed, for the last few decades. Nor is this phenomenon restricted to nominations for the highest court in the land. The Senate has been known to occasionally block or slow appointments not because the quality of the nominee was in question but rather as a general protest against the policies of the president and/or as part of the increasing partisan bickering that occurs when the presidency is controlled by one political party and the Senate by the other. This occurred, for example, when the Senate initially refused to confirm anyone to head the Consumer Financial Protection Bureau, established in 2011, because Republicans disliked the existence of the bureau itself. Such political holdups, however, tend to be the exception rather than the rule. For example, historically, nominees to the presidential cabinet are rarely rejected. And each Congress oversees the approval of around four thousand civilian and sixty-five thousand military appointments from the executive branch. 30 The overwhelming majority of these are confirmed in a routine and systematic fashion, and only rarely do holdups occur. But when they do, the Constitution allows for a small presidential loophole called the recess appointment. The relevant part of Article II, Section 2, of the Constitution reads: “The President shall have Power to fill up all Vacancies that may happen during the Recess of the Senate, by granting Commissions which shall expire at the End of their next Session.” The purpose of the provision was to give the president the power to temporarily fill vacancies during times when the Senate was not in session and could not act. But presidents have typically used this loophole to get around a Senate inclined to obstruct. Presidents Bill Clinton and George W. Bush made 139 and 171 recess appointments, respectively.
President Obama has made far fewer recess appointments; as of May 1, 2015, he had made only thirty-two. 31 One reason this number is so low is another loophole the Senate began using at the end of George W. Bush’s presidency: the pro forma session. A pro forma session is a short meeting held with the understanding that no work will be done. These sessions have the effect of keeping the Senate officially in session while functionally in recess. In 2012, President Obama decided to ignore the pro forma sessions and make four recess appointments anyway. The Republicans in the Senate were furious and contested the appointments. Eventually, the Supreme Court had the final say in a 2014 decision that declared unequivocally that “the Senate is in session when it says it is.” 32 For now at least, the Court’s ruling means that the president’s loophole and the Senate’s loophole cancel each other out. It seems they have found the middle ground, whether they like it or not. What might have been the legitimate original purpose of the recess appointment loophole? Do you believe the Senate is unfairly obstructing by effectively ending recesses altogether so as to prevent the president from making appointments without its approval? The most visible, though arguably the least powerful, member of a president’s cabinet is the vice president. Throughout most of the nineteenth and into the twentieth century, the vast majority of vice presidents took very little action in the office unless fate intervened. Few presidents consulted with their running mates. Indeed, until the twentieth century, many presidents had little to do with the naming of their running mate at the nominating convention. The office was seen as a form of political exile, which is what motivated Republicans to name Theodore Roosevelt as William McKinley’s running mate in 1900. The strategy was to get the ambitious politician out of the way while still taking advantage of his popularity. This scheme backfired, however, when McKinley was assassinated and Roosevelt became president (Figure 12.11). Vice presidents were often sent on minor missions or used as mouthpieces for the administration, often with a sharp edge; Richard Nixon’s vice president Spiro Agnew was one such example. But in the 1970s, starting with Jimmy Carter, presidents made a far more conscious effort to make their vice presidents part of the governing team, placing them in charge of increasingly important issues. Sometimes, as in the case of Bill Clinton and Al Gore, the partnership appeared to be smooth, if not always harmonious. In the case of George W. Bush and his very experienced vice president Dick Cheney, observers speculated about whether the vice president might have exercised too much influence. Barack Obama’s choice for a running mate and subsequent two-term vice president, former senator Joseph Biden, was picked for his experience, especially in foreign policy. President Obama relied on Vice President Biden for advice throughout his tenure. In any case, the vice presidency is no longer quite as weak as it once was, and a capable vice president can do much to augment the president’s capacity to govern across issues if the president so desires. 33 FORGING AN AGENDA Having secured election, the incoming president must soon decide how to deliver upon what was promised during the campaign. The chief executive must set priorities, choose what to emphasize, and formulate strategies to get the job done.
He or she labors under the shadow of a measure of presidential effectiveness known as the first hundred days in office, a concept popularized during Franklin Roosevelt’s first term in the 1930s. While one hundred days is possibly too short a time for any president to boast of any real accomplishments, most presidents do recognize that they must address their major initiatives during their first two years in office. This is the time when the president is most powerful and is given the benefit of the doubt by the public and the media (aptly called the honeymoon period), especially if he or she enters the White House with a politically aligned Congress, as Barack Obama did. However, recent history suggests that even one-party control of Congress and the presidency does not ensure efficient policymaking. This difficulty is due as much to divisions within the governing party as to obstructionist tactics skillfully practiced by the minority party in Congress. Democratic president Jimmy Carter’s battles with a Congress controlled by Democratic majorities provide a good case in point. The incoming president must also deal to some extent with the outgoing president’s last budget proposal. While some modifications can be made, it is more difficult to pursue new initiatives immediately. Most presidents are well advised to prioritize what they want to achieve during the first year in office and not lose control of their agenda. At times, however, unanticipated events can determine policy, as happened in 2001 when nineteen hijackers perpetrated the worst terrorist attack in U.S. history and transformed U.S. foreign and domestic policy in dramatic ways. Moreover, a president must be sensitive to what some scholars have termed “political time,” meaning the circumstances under which he or she assumes power. Sometimes, the nation is prepared for drastic proposals to solve deep and pressing problems that cry out for immediate solutions, as was the case following the 1932 election of FDR at the height of the Great Depression. Most times, however, the country is far less inclined to accept revolutionary change. Being an effective president means recognizing the difference. 34 The first act undertaken by the new president—the delivery of an inaugural address—can do much to set the tone for what is intended to follow. While such an address may be an exercise in rhetorical inspiration, it also allows the president to set forth priorities within the overarching vision of what he or she intends to do. Abraham Lincoln used his inaugural addresses to calm rising concerns in the South that he would act to overturn slavery. Unfortunately, this attempt at appeasement fell on deaf ears, and the country descended into civil war. Franklin Roosevelt used his first inaugural address to boldly proclaim that the country need not fear the change that would deliver it from the grip of the Great Depression, and he set to work immediately enlarging the federal government to that end. John F. Kennedy, who entered the White House at the height of the Cold War, made an appeal to talented young people around the country to help him make the world a better place. He followed up with new institutions like the Peace Corps, which sends young citizens around the world to work as secular missionaries for American values like democracy and free enterprise. Link to Learning Listen to clips of the most famous inaugural addresses in presidential history at the Washington Post website.
12.4 The Public Presidency Learning Objectives By the end of this section, you will be able to: Explain how technological innovations have empowered presidents Identify ways in which presidents appeal to the public for approval Explain how the role of first ladies changed over the course of the twentieth century With the advent of motion picture newsreels and voice recordings in the 1920s, presidents began to broadcast their messages to the general public. Franklin Roosevelt, while not the first president to use the radio, adopted this technology to great effect. Over time, as radio gave way to newer and more powerful technologies like television, the Internet, and social media, other presidents have been able to magnify their voices to an even larger degree. Presidents now have far more tools at their disposal to shape public opinion and build support for policies. However, the choice to “go public” does not always lead to political success; it is difficult to convert popularity in public opinion polls into political power. Moreover, the modern era of information and social media empowers opponents at the same time that it provides opportunities for presidents. THE SHAPING OF THE MODERN PRESIDENCY From the days of the early republic through the end of the nineteenth century, presidents were limited in the ways they could reach the public to convey their perspective and shape policy. Inaugural addresses and messages to Congress, while circulated in newspapers, proved clumsy devices for attracting support, even when a president used plain, blunt language. Some presidents undertook tours of the nation, notably George Washington and Rutherford B. Hayes. Others promoted good relationships with newspaper editors and reporters, sometimes going so far as to sanction a pro-administration newspaper. One president, Ulysses S. Grant, cultivated political cartoonist Thomas Nast to present the president’s perspective in the pages of the magazine Harper’s Weekly. 35 Abraham Lincoln experimented with public meetings recorded by newspaper reporters and with public letters that would appear in the press, sometimes after being read at public gatherings (Figure 12.12). Most presidents gave speeches, although few proved to have much immediate impact, including Lincoln’s memorable Gettysburg Address. Rather, most presidents exercised the power of patronage (appointing people who were loyal and politically helpful) and private deal-making to get what they wanted at a time when Congress usually held the upper hand in such transactions. But even that presidential power began to decline with the emergence of civil service reform in the later nineteenth century, which led to most government officials being hired on merit rather than through patronage. Only when it came to diplomacy and war were presidents able to exercise authority on their own, and even then, institutional as well as political restraints limited their independence of action. Theodore Roosevelt came to the presidency in 1901, at a time when movie newsreels were becoming popular. Roosevelt, who had always excelled at cultivating good relationships with the print media, eagerly exploited this new opportunity as he took his case to the people with the concept of the presidency as a bully pulpit, a platform from which to push his agenda to the public. His successors followed suit, and they discovered and employed new ways of transmitting their message to the people in an effort to gain public support for policy initiatives.
With the popularization of radio in the early twentieth century, it became possible to broadcast the president’s voice into many of the nation’s homes. Most famously, FDR used the radio to broadcast his thirty “fireside chats” to the nation between 1933 and 1944. In the post–World War II era, television began to replace radio as the medium through which presidents reached the public. This technology enhanced the reach of the handsome young president John F. Kennedy and the trained actor Ronald Reagan. At the turn of the twenty-first century, the new technology was the Internet. The extent to which this mass media technology can enhance the power and reach of the president has yet to be fully realized. Other presidents have used advances in transportation to take their case to the people. Woodrow Wilson traveled the country to advocate the formation of the League of Nations, but he fell short of his goal when he suffered a stroke in 1919 and cut his tour short. Both Franklin Roosevelt in the 1930s and 1940s and Harry S. Truman in the 1940s and 1950s used air travel to conduct diplomatic and military business. Under President Dwight D. Eisenhower, a specific plane, commonly called Air Force One, began carrying the president around the country and the world. This gives the president the ability to take his or her message directly to the far corners of the nation at any time. GOING PUBLIC: PROMISE AND PITFALLS The concept of going public involves the president delivering a major television address in the hope that Americans watching the address will be compelled to contact their House and Senate members and that such public pressure will result in the legislators supporting the president on a major piece of legislation. Technological advances have made it more efficient for presidents to take their messages directly to the people than was the case before mass media (Figure 12.13). Presidential visits can build support for policy initiatives or serve political purposes, helping the president reward supporters, campaign for candidates, and seek reelection. It remains an open question, however, whether choosing to go public actually enhances a president’s political position in battles with Congress. Political scientist George C. Edwards goes so far as to argue that taking a president’s position public serves to polarize political debate, increase public opposition to the president, and complicate the chances of getting something done. It replaces deliberation and compromise with confrontation and campaigning. Edwards believes the best way for presidents to achieve change is to keep issues private and negotiate resolutions that preclude partisan combat. Going public may be more effective at rallying supporters than at gaining additional support or changing minds. 36 Link to Learning Today, it is possible for the White House to take its case directly to the people via websites like White House Live, where the public can watch live press briefings and speeches. THE FIRST LADY: A SECRET WEAPON? The president is not the only member of the First Family who often attempts to advance an agenda by going public. First ladies have increasingly exploited the opportunity to gain public support for issues of deep interest to them. Before 1933, most first ladies served as private political advisers to their husbands. In the 1910s, Edith Bolling Wilson took a more active but still private role, assisting her husband, President Woodrow Wilson, after he was afflicted by a stroke in the last years of his presidency.
However, it was Eleanor Roosevelt, the niece of one president and the wife of another, who in the 1930s and 1940s opened the door for first ladies to do something more. Eleanor Roosevelt took an active role in championing civil rights, becoming in some ways a bridge between her husband and the civil rights movement. She coordinated meetings between FDR and members of the NAACP, championed antilynching legislation, openly defied segregation laws, and pushed the Army Nurse Corps to admit black women to its ranks. She also wrote a newspaper column and had a weekly radio show. Her immediate successors returned to the less visible role of her predecessors, although in the early 1960s, Jacqueline Kennedy gained attention for her efforts to refurbish the White House along historical lines, and Lady Bird Johnson in the mid- and late 1960s endorsed an effort to beautify public spaces and highways in the United States. Lady Bird Johnson also established the foundations of what came to be known as the Office of the First Lady, complete with a news reporter, Liz Carpenter, as her press secretary. Betty Ford took over as first lady in 1974 and became an avid advocate of women’s rights, proclaiming that she was pro-choice when it came to abortion and lobbying for the ratification of the Equal Rights Amendment (ERA). She shared with the public the news of her breast cancer diagnosis and subsequent mastectomy. Her successor, Rosalynn Carter, attended several cabinet meetings and pushed for the ratification of the ERA as well as for legislation addressing mental health issues (Figure 12.14). The increasing public political role of the first lady continued in the 1980s with Nancy Reagan’s “Just Say No” antidrug campaign and in the early 1990s with Barbara Bush’s efforts on behalf of literacy. The public role of the first lady reached a new level with Hillary Clinton in the 1990s when her husband put her in charge of his efforts to achieve health care reform, a controversial decision that did not meet with political success. Her successors, Laura Bush in the first decade of the twenty-first century and Michelle Obama in the second, returned to the roles played by their predecessors in advocating less controversial policies: Laura Bush advocated literacy and education, while Michelle Obama has emphasized physical fitness, healthy eating, and exercise. Nevertheless, the public and political profiles of first ladies remain high, and in the future, the president’s spouse will have the opportunity to use that unelected position to advance policies that might well be less controversial and more appealing than those pushed by the president. Insider Perspective A New Role for the First Lady? While running for the presidency for the first time in 1992, Bill Clinton frequently touted the experience and capabilities of his wife. There was a lot to brag about. Hillary Rodham Clinton was a graduate of Yale Law School, had worked as a member of the impeachment inquiry staff during the height of the Watergate scandal in Nixon’s administration, and had been a staff attorney for the Children’s Defense Fund before becoming the first lady of Arkansas. Acknowledging these qualifications, candidate Bill Clinton once suggested that by electing him, voters would get “two for the price of one.” The clear implication in this statement was that his wife would take on a far larger role than previous first ladies, and this proved to be the case.
37 Shortly after taking office, Clinton appointed the first lady to chair the Task Force on National Health Care Reform. This organization was to follow through on his campaign promise to fix the problems in the U.S. health care system. Hillary Clinton had privately requested the appointment, but she quickly realized that the complex web of business interests and political aspirations combined to make the topic of health care reform a hornet’s nest. This put the Clinton administration’s first lady directly into partisan battles few if any previous first ladies had ever faced. As a testament to both the large role the first lady had taken on and the extent to which she had become the target of political attacks, the recommendations of the task force were soon dubbed “Hillarycare” by opponents. In a particularly contentious hearing in the House, the first lady and Republican representative Dick Armey exchanged pointed jabs with each other. At one point, Armey suggested that the reports of her charm were “overstated” after the first lady likened him to Dr. Jack Kevorkian, a physician known for helping patients commit suicide (Figure 12.15). 38 The following summer, the first lady attempted to use a national bus tour to popularize the health care proposal, although distaste for her and for the program had reached such a fever pitch that she was sometimes compelled to wear a bulletproof vest. In the end, the efforts came up short, and the reform attempts were abandoned as a political failure. Nevertheless, Hillary Clinton remained a political lightning rod for the rest of the Clinton presidency. What do the challenges of First Lady Hillary Clinton’s foray into national politics suggest about the dangers of a first lady abandoning traditionally safe, nonpartisan goodwill efforts? What do the actions of the first ladies since Clinton suggest about the lessons learned or not learned? 12.5 Presidential Governance: Direct Presidential Action Learning Objectives By the end of this section, you will be able to: Identify the power presidents have to effect change without congressional cooperation Analyze how different circumstances influence the way presidents use unilateral authority Explain how presidents persuade others in the political system to support their initiatives Describe how historians and political scientists evaluate the effectiveness of a presidency A president’s powers can be divided into two categories: direct actions the chief executive can take by employing the formal institutional powers of the office, and informal powers of persuasion and negotiation essential to working with the legislative branch. When a president governs alone through direct action, it may break a policy deadlock or establish new grounds for action, but it may also spark opposition that might have been avoided through negotiation and discussion. Moreover, such decisions are subject to court challenge, legislative reversal, or revocation by a successor. What may seem to be a sign of strength is often more properly understood as independent action undertaken in the wake of a failure to achieve a solution through the legislative process, or as an admission that such an effort would prove futile. When it comes to national security, international negotiations, or war, the president has many more opportunities to act directly and in some cases must do so when circumstances require quick and decisive action.
DOMESTIC POLICY The president may not be able to appoint key members of his or her administration without Senate confirmation, but he or she can demand the resignation or removal of cabinet officers, high-ranking appointees (such as ambassadors), and members of the presidential staff. During Reconstruction, Congress tried to curtail the president’s removal power with the Tenure of Office Act (1867), which required Senate concurrence to remove presidential nominees who took office upon Senate confirmation. Andrew Johnson’s violation of that legislation provided the grounds for his impeachment in 1868. Subsequent presidents secured modifications of the legislation before the Supreme Court ruled in 1926 that the Senate had no right to impair the president’s removal power. 39 In the case of Senate failure to approve presidential nominations, the president is empowered to issue recess appointments (made while the Senate is in recess) that continue in force until the end of the next session of the Senate (unless the Senate confirms the nominee). The president also exercises the power of pardon without conditions. Once used fairly sparingly—apart from Andrew Johnson’s wholesale pardons of former Confederates during the Reconstruction period—the pardon power has become more visible in recent decades. President Harry S. Truman issued over two thousand pardons and commutations, more than any other post–World War II president. 40 President Gerald Ford has the unenviable reputation of being the only president to pardon another president (his predecessor Richard Nixon, who resigned after the Watergate scandal) (Figure 12.16). While not as generous as Truman, President Jimmy Carter also issued a great number of pardons, including several for draft dodging during the Vietnam War. President Reagan was reluctant to use the pardon as much, as was President George H. W. Bush. President Clinton pardoned few people for much of his presidency, but did make several last-minute pardons, which led to some controversy. To date, Barack Obama has seldom used his power to pardon. 41 Presidents may choose to issue executive orders or proclamations to achieve policy goals. Usually, executive orders direct government agencies to pursue a certain course in the absence of congressional action. A more subtle version pioneered by recent presidents is the executive memorandum, which tends to attract less attention. Many of the most famous executive orders have come in times of war or invoke the president’s authority as commander-in-chief, including Franklin Roosevelt’s order permitting the internment of Japanese Americans in 1942 and Harry Truman’s directive desegregating the armed forces (1948). The most famous presidential proclamation was Abraham Lincoln’s Emancipation Proclamation (1863), which declared slaves in areas under Confederate control to be free (with a few exceptions). Executive orders are subject to court rulings or changes in policy enacted by Congress. During the Korean War, the Supreme Court revoked Truman’s order seizing the steel industry. 42 These orders are also subject to reversal by presidents who come after, and recent presidents have wasted little time reversing the orders of their predecessors in cases of disagreement. Sustained executive orders, which are those not overturned in the courts, typically have some prior authority from Congress that legitimizes them. When there is no prior authority, it is much more likely that an executive order will be overturned by a later president.
For this reason, this tool has become less common in recent decades (Figure 12.17). Milestone Executive Order 9066 Following the devastating Japanese attacks on the U.S. Pacific fleet at Pearl Harbor in 1941, many in the United States feared that Japanese Americans on the West Coast had the potential and inclination to form a fifth column (a hostile group working from the inside) for the purpose of aiding a Japanese invasion. These fears mingled with existing anti-Japanese sentiment across the country and created a paranoia that washed over the West Coast like a large wave. In an attempt to calm fears and prevent any real fifth-column actions, President Franklin D. Roosevelt signed Executive Order 9066, which authorized the removal of people from military areas as necessary. When the military dubbed the entire West Coast a military area, it effectively allowed for the removal of more than 110,000 Japanese Americans from their homes. These people, many of them U.S. citizens, were moved to relocation centers in the interior of the country. They lived in the camps there for two and a half years (Figure 12.18). 43 The overwhelming majority of Japanese Americans felt shamed by the actions of the Japanese empire and willingly went along with the policy in an attempt to demonstrate their loyalty to the United States. But at least one Japanese American refused to go along. His name was Fred Korematsu, and he decided to go into hiding in California rather than be taken to the internment camps with his family. He was soon discovered, turned over to the military, and sent to the internment camp in Utah that held his family. But his challenge to the internment system and the president’s executive order continued. In 1944, Korematsu’s case was heard by the Supreme Court. In a 6–3 decision, the Court ruled against him, arguing that the administration had the constitutional power to sign the order because of the need to protect U.S. interests against the threat of espionage. 44 Forty-four years after this decision, President Reagan issued an official apology for the internment and provided some compensation to the survivors. In 2011, the Justice Department went a step further by filing a notice officially recognizing that the solicitor general of the United States acted in error by arguing to uphold the executive order. (The solicitor general is the official who argues cases for the U.S. government before the Supreme Court.) However, despite these actions, in 2014, the late Supreme Court justice Antonin Scalia was documented as saying that while he believed the decision was wrong, it could occur again. 45 What do the Korematsu case and the internment of over 100,000 Japanese Americans suggest about the extent of the president’s war powers? What does this episode in U.S. history suggest about the weaknesses of constitutional checks on executive power during times of war? Link to Learning To learn more about the relocation and confinement of Japanese Americans during World War II, visit Heart Mountain online. Finally, presidents have also used the line-item veto and signing statements to alter or influence the application of the laws they sign. A line-item veto is a type of veto that keeps the majority of a spending bill unaltered but nullifies certain lines of spending within it. While a number of states allow their governors the line-item veto (discussed in the chapter on state and local government), the president acquired this power only in 1996 after Congress passed a law permitting it.
President Clinton used the tool sparingly. However, the entities that stood to receive the federal funding he lined out brought suit. Two such groups were the City of New York and the Snake River Potato Growers in Idaho. 46 The Supreme Court heard their claims together and, just sixteen months later, declared unconstitutional the act that had permitted the line-item veto. 47 Since then, presidents have asked Congress to draft a line-item veto law that would be constitutional, although none has made it to the president’s desk. Signing statements, on the other hand, are statements issued by a president when agreeing to legislation that indicate how the chief executive will interpret and enforce the legislation in question. Signing statements are less powerful than vetoes, though congressional opponents have complained that they derail legislative intent. Signing statements have been used by presidents since at least James Monroe, but they became far more common in the twenty-first century. NATIONAL SECURITY, FOREIGN POLICY, AND WAR Presidents are more likely to justify the use of executive orders in cases of national security or as part of their war powers. In addition to mandating emancipation and the internment of Japanese Americans, presidents have issued orders to protect the homeland from internal threats. Most notably, Lincoln ordered the suspension of the privilege of the writ of habeas corpus in 1861 and 1862 before seeking congressional legislation to undertake such an act. Presidents hire and fire military commanders; they also use their power as commander-in-chief to aggressively deploy U.S. military force. Congress rarely has taken the lead in such decisions over the course of history, with the War of 1812 being the lone exception. Pearl Harbor was a salient case in which Congress did make a clear and formal declaration of war when asked by FDR. However, since World War II, it has been the president and not Congress who has taken the lead in engaging the United States in military action outside the nation’s boundaries, most notably in Korea, Vietnam, and the Persian Gulf (Figure 12.19). Presidents also issue executive agreements with foreign powers. Executive agreements are formal agreements negotiated between two countries but not ratified by a legislature as a treaty must be. As such, they are not treaties under U.S. law; treaties require the approval of two-thirds of the Senate for ratification. Treaties, presidents have found, are particularly difficult to get ratified, and with the fast pace and complex demands of modern foreign policy, concluding treaties with countries can be a tiresome and burdensome chore. That said, some executive agreements do require some legislative approval, such as those that commit the United States to make payments and thus are restrained by the congressional power of the purse. But for the most part, executive agreements signed by the president require no congressional action and are considered enforceable as long as their provisions do not conflict with current domestic law. Link to Learning The American Presidency Project has gathered data outlining presidential activity, including measures of executive orders and signing statements. THE POWER OF PERSUASION The framers of the Constitution, concerned about the excesses of British monarchical power, made sure to design the presidency within a network of checks and balances controlled by the other branches of the federal government. Such checks and balances encourage consultation, cooperation, and compromise in policymaking.
This is most evident at home, where the Constitution makes it difficult for either Congress or the chief executive to prevail unilaterally, at least when it comes to constructing policy. Although much is made of political stalemate and obstructionism in national political deliberations today, the framers did not want to make it too easy to get things done without a great deal of support for such initiatives. It is left to the president to employ a strategy of negotiation, persuasion, and compromise in order to secure policy achievements in cooperation with Congress. In 1960, political scientist Richard Neustadt put forward the thesis that presidential power is the power to persuade, a process that takes many forms and is expressed in various ways. 48 Yet the successful employment of this technique can lead to significant and durable successes. For example, legislative achievements tend to be of greater duration because they are more difficult to overturn or replace, as the case of health care reform under President Barack Obama suggests. Obamacare has faced court cases and repeated (if largely symbolic) attempts to gut it in Congress. Overturning it will take a new president who opposes it, together with a Congress that can pass the dissolving legislation. In some cases, cooperation is essential, as when the president nominates and the Senate confirms persons to fill vacancies on the Supreme Court, an increasingly contentious source of friction between the branches. While Congress cannot populate the Court on its own, it can frustrate the president’s efforts to do so. Presidents who seek to prevail through persuasion, according to Neustadt, target Congress, members of their own party, the public, the bureaucracy, and, when appropriate, the international community and foreign leaders. Of these audiences, perhaps the most obvious and challenging is Congress. Link to Learning Read “Power Lessons for Obama” at this website to learn more about applying Richard Neustadt’s framework to the leaders of today. Much depends on the balance of power within Congress: should the opposition party hold control of both houses, it will be difficult indeed for the president to realize his or her objectives, especially if the opposition is intent on frustrating all initiatives. However, even control of both houses by the president’s own party is no guarantee of success or even of productive policymaking. For example, neither Bill Clinton nor Barack Obama achieved all they desired despite having favorable conditions during the first two years of their presidencies. In times of divided government (when one party controls the presidency and the other controls one or both chambers of Congress), it is up to the president to cut deals and make compromises that will attract support from at least some members of the opposition party without excessively alienating members of his or her own party. Both Ronald Reagan and Bill Clinton proved effective in dealing with divided government—indeed, Clinton scored more successes with Republicans in control of Congress than he did with Democrats in charge. It is more difficult to persuade members of the president’s own party or the public to support a president’s policy without risking the dangers inherent in going public, since in such instances there is precious little opportunity for private persuasion, at least directly.
The way the president and his or her staff handle media coverage of the administration may afford some opportunities for indirect persuasion of these groups. It is not easy to persuade the federal bureaucracy to do the president’s bidding unless the chief executive has made careful appointments. When it comes to diplomacy, the president must relay some messages privately while offering incentives, both positive and negative, in order to elicit desired responses, although at times, people heed only the threat of force and coercion. While presidents may choose to go public in an attempt to put pressure on other groups to cooperate, most of the time they “stay private” as they attempt to make deals and reach agreements out of the public eye. The tools of negotiation have changed over time. Once chief executives played patronage politics, rewarding friends while attacking and punishing critics as they built coalitions of support. But the advent of civil service reform in the 1880s systematically reduced the scope and effectiveness of that option. Although the president may call upon various agencies for assistance in lobbying for proposals, such as the Office of Legislative Liaison with Congress, it is often left to the chief executive to offer incentives and rewards. Some of these are symbolic, like private meetings in the White House or an appearance on the campaign trail. The president must also find common ground and make compromises acceptable to all parties, thus enabling everyone to claim they secured something they wanted. Complicating Neustadt’s model, however, is the fact that many of the ways he claimed presidents could shape favorable outcomes require going public, which as we have seen can produce mixed results. Political scientist Fred Greenstein, on the other hand, touted the advantages of a “hidden hand presidency,” in which the chief executive did most of the work behind the scenes, wielding both the carrot and the stick. 49 Greenstein singled out President Dwight Eisenhower as particularly skillful in such endeavors. OPPORTUNITY AND LEGACY A president’s performance, reputation, and ultimately legacy often depend on circumstances that are largely out of his or her control. Did the president prevail in a landslide or win a closely contested election? Did he or she come to office as the result of death, assassination, or resignation? How much support does the president’s party enjoy, and is that support reflected in the composition of both houses of Congress, just one, or neither? Will the president face a Congress ready to embrace proposals or poised to oppose them? Whatever a president’s ambitions, it will be hard to realize them in the face of a hostile or divided Congress, and the options for exercising independent leadership are greater in times of crisis and war than when domestic concerns alone are at issue. Then there is what political scientist Stephen Skowronek calls “political time.” 50 Some presidents take office at times of great stability with few concerns. Unless there are radical or unexpected changes, a president’s options are limited, especially if voters hoped for a simple continuation of what had come before. Other presidents take office at a time of crisis or when the electorate is looking for significant changes. Then there is both pressure and opportunity for responding to those challenges.
Some presidents, notably Theodore Roosevelt, openly bemoaned the lack of any such crisis, which Roosevelt deemed essential for him to achieve greatness as a president. People in the United States claim they want a strong president. What does that mean? At times, scholars point to presidential independence, even defiance, as evidence of strong leadership. Thus, vigorous use of the veto power in key situations can cause observers to judge a president as strong and independent, although far from effective in shaping constructive policies. Nor is such defiance and confrontation always evidence of presidential leadership skill or greatness, as the case of Andrew Johnson should remind us. When is effectiveness a sign of strength, and when are we confusing being headstrong with being strong? Sometimes, historians and political scientists see cooperation with Congress as evidence of weakness, as in the case of Ulysses S. Grant, who was far more effective in garnering support for administration initiatives than scholars have given him credit for. These questions overlap with those concerning political time and circumstance. While domestic policymaking requires far more give-and-take and a fair share of cajoling and collaboration, national emergencies and war offer presidents far more opportunity to act vigorously and at times independently. This phenomenon often produces the rally around the flag effect, in which presidential popularity spikes during international crises. A president must always be aware that politics, according to Otto von Bismarck, is the art of the possible, even as it is his or her duty to increase what might be possible by persuading both members of Congress and the general public of what needs to be done. Finally, presidents often leave a legacy that lasts far beyond their time in office (Figure 12.20). Sometimes, this is due to the long-term implications of policy decisions. Critical to the notion of legacy is the shaping of the Supreme Court as well as other federal judges. Long after John Adams left the White House in 1801, his appointment of John Marshall as chief justice shaped American jurisprudence for over three decades. No wonder confirmation hearings have grown more contentious in the cases of highly visible nominees. Other legacies are more difficult to define, although they suggest that, at times, presidents cast a long shadow over their successors. It was a tough act to follow George Washington, and in death, Abraham Lincoln’s presidential stature grew to extreme heights. Theodore and Franklin D. Roosevelt offered models of vigorous executive leadership, while the image and style of John F. Kennedy and Ronald Reagan influenced and at times haunted or frustrated successors. Nor is this impact limited to chief executives deemed successful: Lyndon Johnson’s Vietnam and Richard Nixon’s Watergate offered cautionary tales of presidential power gone wrong, leaving behind legacies that include terms like Vietnam syndrome and the tendency to add the suffix “-gate” to scandals and controversies.
biology
Chapter Outline 9.1 Signaling Molecules and Cellular Receptors 9.2 Propagation of the Signal 9.3 Response to the Signal 9.4 Signaling in Single-Celled Organisms Introduction Imagine what life would be like if you and the people around you could not communicate. You would not be able to express your wishes to others, nor could you ask questions to find out more about your environment. Social organization is dependent on communication between the individuals that comprise that society; without communication, society would fall apart. As with people, it is vital for individual cells to be able to interact with their environment. This is true whether a cell is growing by itself in a pond or is one of many cells that form a larger organism. In order to properly respond to external stimuli, cells have developed complex mechanisms of communication that can receive a message, transfer the information across the plasma membrane, and then produce changes within the cell in response to the message. In multicellular organisms, cells send and receive chemical messages constantly to coordinate the actions of distant organs, tissues, and cells. The ability to send messages quickly and efficiently enables cells to coordinate and fine-tune their functions. While the necessity for cellular communication in larger organisms seems obvious, even single-celled organisms communicate with each other. Yeast cells signal each other to aid mating. Some forms of bacteria coordinate their actions in order to form large complexes called biofilms or to organize the production of toxins to remove competing organisms. The ability of cells to communicate through chemical signals originated in single cells and was essential for the evolution of multicellular organisms. The efficient and error-free function of communication systems is vital for all life as we know it.
[ { "answer": { "ans_choice": 1, "ans_text": "The molecules are hydrophilic and cannot penetrate the hydrophobic interior of the plasma membrane." }, "bloom": "3", "hl_context": "<hl> Ion channel-linked receptors bind a ligand and open a channel through the membrane that allows specific ions to pass through . <hl> <hl> To form a channel , this type of cell-surface receptor has an extensive membrane-spanning region . <hl> <hl> In order to interact with the phospholipid fatty acid tails that form the center of the plasma membrane , many of the amino acids in the membrane-spanning region are hydrophobic in nature . <hl> <hl> Conversely , the amino acids that line the inside of the channel are hydrophilic to allow for the passage of water or ions . <hl> When a ligand binds to the extracellular region of the channel , there is a conformational change in the proteins structure that allows ions such as sodium , calcium , magnesium , and hydrogen to pass through ( Figure 9.5 ) .", "hl_sentences": "Ion channel-linked receptors bind a ligand and open a channel through the membrane that allows specific ions to pass through . To form a channel , this type of cell-surface receptor has an extensive membrane-spanning region . In order to interact with the phospholipid fatty acid tails that form the center of the plasma membrane , many of the amino acids in the membrane-spanning region are hydrophobic in nature . Conversely , the amino acids that line the inside of the channel are hydrophilic to allow for the passage of water or ions .", "question": { "cloze_format": "The property that prevents the ligands of cell-surface receptors from entering the cell is that ___.", "normal_format": "What property prevents the ligands of cell-surface receptors from entering the cell?", "question_choices": [ "The molecules bind to the extracellular domain.", "The molecules are hydrophilic and cannot penetrate the hydrophobic interior of the plasma membrane.", "The molecules are attached to transport proteins that deliver them through the bloodstream to target cells.", "The ligands are able to penetrate the membrane and directly influence gene expression upon receptor binding." ], "question_id": "fs-id1239117", "question_text": "What property prevents the ligands of cell-surface receptors from entering the cell?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "endocrine signaling" }, "bloom": null, "hl_context": "<hl> Signals from distant cells are called endocrine signals , and they originate from endocrine cells . <hl> <hl> ( In the body , many endocrine cells are located in endocrine glands , such as the thyroid gland , the hypothalamus , and the pituitary gland . ) <hl> These types of signals usually produce a slower response but have a longer-lasting effect . The ligands released in endocrine signaling are called hormones , signaling molecules that are produced in one part of the body but affect other body regions some distance away .", "hl_sentences": "Signals from distant cells are called endocrine signals , and they originate from endocrine cells . ( In the body , many endocrine cells are located in endocrine glands , such as the thyroid gland , the hypothalamus , and the pituitary gland . 
)", "question": { "cloze_format": "The secretion of hormones by the pituitary gland is an example of _______________.", "normal_format": "What is the secretion of hormones by the pituitary gland an example of?", "question_choices": [ "autocrine signaling", "paracrine signaling", "endocrine signaling", "direct signaling across gap junctions" ], "question_id": "fs-id1975143", "question_text": "The secretion of hormones by the pituitary gland is an example of _______________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "Ions are charged particles and cannot diffuse through the hydrophobic interior of the membrane." }, "bloom": "2", "hl_context": "<hl> Ion channel-linked receptors bind a ligand and open a channel through the membrane that allows specific ions to pass through . <hl> <hl> To form a channel , this type of cell-surface receptor has an extensive membrane-spanning region . <hl> <hl> In order to interact with the phospholipid fatty acid tails that form the center of the plasma membrane , many of the amino acids in the membrane-spanning region are hydrophobic in nature . <hl> Conversely , the amino acids that line the inside of the channel are hydrophilic to allow for the passage of water or ions . When a ligand binds to the extracellular region of the channel , there is a conformational change in the proteins structure that allows ions such as sodium , calcium , magnesium , and hydrogen to pass through ( Figure 9.5 ) .", "hl_sentences": "Ion channel-linked receptors bind a ligand and open a channel through the membrane that allows specific ions to pass through . To form a channel , this type of cell-surface receptor has an extensive membrane-spanning region . In order to interact with the phospholipid fatty acid tails that form the center of the plasma membrane , many of the amino acids in the membrane-spanning region are hydrophobic in nature .", "question": { "cloze_format": "The reason ion channels are necessary to transport ions into or out of a cell is that ___ .", "normal_format": "Why are ion channels necessary to transport ions into or out of a cell?", "question_choices": [ "Ions are too large to diffuse through the membrane.", "Ions are charged particles and cannot diffuse through the hydrophobic interior of the membrane.", "Ions do not need ion channels to move through the membrane.", "Ions bind to carrier proteins in the bloodstream, which must be removed before transport into the cell." ], "question_id": "fs-id1584266", "question_text": "Why are ion channels necessary to transport ions into or out of a cell?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 0, "ans_text": "the ligands are transported through the bloodstream and travel greater distances" }, "bloom": "3", "hl_context": "<hl> Hormones travel the large distances between endocrine cells and their target cells via the bloodstream , which is a relatively slow way to move throughout the body . <hl> <hl> Because of their form of transport , hormones get diluted and are present in low concentrations when they act on their target cells . <hl> <hl> This is different from paracrine signaling , in which local concentrations of ligands can be very high . <hl>", "hl_sentences": "Hormones travel the large distances between endocrine cells and their target cells via the bloodstream , which is a relatively slow way to move throughout the body . 
Because of their form of transport , hormones get diluted and are present in low concentrations when they act on their target cells . This is different from paracrine signaling , in which local concentrations of ligands can be very high .", "question": { "cloze_format": "Endocrine signals are transmitted more slowly than paracrine signals because ___________.", "normal_format": "Why are endocrine signals transmitted more slowly than paracrine signals?", "question_choices": [ "the ligands are transported through the bloodstream and travel greater distances", "the target and signaling cells are close together", "the ligands are degraded rapidly", "the ligands don't bind to carrier proteins during transport" ], "question_id": "fs-id1804619", "question_text": "Endocrine signals are transmitted more slowly than paracrine signals because ___________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "They are the cleavage products of the inositol phospholipid, PIP2." }, "bloom": "2", "hl_context": "<hl> The enzyme phospholipase C cleaves PIP 2 to form diacylglycerol ( DAG ) and inositol triphosphate ( IP 3 ) ( Figure 9.13 ) . <hl> <hl> These products of the cleavage of PIP 2 serve as second messengers . <hl> Diacylglycerol ( DAG ) remains in the plasma membrane and activates protein kinase C ( PKC ) , which then phosphorylates serine and threonine residues in its target proteins . IP 3 diffuses into the cytoplasm and binds to ligand-gated calcium channels in the endoplasmic reticulum to release Ca 2 + that continues the signal cascade . 9.3 Response to the Signal Learning Objectives By the end of this section , you will be able to :", "hl_sentences": "The enzyme phospholipase C cleaves PIP 2 to form diacylglycerol ( DAG ) and inositol triphosphate ( IP 3 ) ( Figure 9.13 ) . These products of the cleavage of PIP 2 serve as second messengers .", "question": { "cloze_format": "The origin of DAG and IP3 is in that ___.", "normal_format": "Where do DAG and IP3 originate?", "question_choices": [ "They are formed by phosphorylation of cAMP.", "They are ligands expressed by signaling cells.", "They are hormones that diffuse through the plasma membrane to stimulate protein production.", "They are the cleavage products of the inositol phospholipid, PIP2." ], "question_id": "fs-id2896121", "question_text": "Where do DAG and IP3 originate?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "They contain a hydroxyl group." }, "bloom": "2", "hl_context": "<hl> One of the most common chemical modifications that occurs in signaling pathways is the addition of a phosphate group ( PO 4 – 3 ) to a molecule such as a protein in a process called phosphorylation . <hl> The phosphate can be added to a nucleotide such as GMP to form GDP or GTP . <hl> Phosphates are also often added to serine , threonine , and tyrosine residues of proteins , where they replace the hydroxyl group of the amino acid ( Figure 9.11 ) . <hl> The transfer of the phosphate is catalyzed by an enzyme called a kinase . Various kinases are named for the substrate they phosphorylate . Phosphorylation of serine and threonine residues often activates enzymes . Phosphorylation of tyrosine residues can either affect the activity of an enzyme or create a binding site that interacts with downstream components in the signaling cascade . 
Phosphorylation may activate or inactivate enzymes , and the reversal of phosphorylation , dephosphorylation by a phosphatase , will reverse the effect .", "hl_sentences": "One of the most common chemical modifications that occurs in signaling pathways is the addition of a phosphate group ( PO 4 – 3 ) to a molecule such as a protein in a process called phosphorylation . Phosphates are also often added to serine , threonine , and tyrosine residues of proteins , where they replace the hydroxyl group of the amino acid ( Figure 9.11 ) .", "question": { "cloze_format": "The property that enables the residues of the amino acids serine, threonine, and tyrosine to be phosphorylated is that ___ .", "normal_format": "What property enables the residues of the amino acids serine, threonine, and tyrosine to be phosphorylated?", "question_choices": [ "They are polar.", "They are non-polar.", "They contain a hydroxyl group.", "They occur more frequently in the amino acid sequence of signaling proteins." ], "question_id": "fs-id1404092", "question_text": "What property enables the residues of the amino acids serine, threonine, and tyrosine to be phosphorylated?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "A phosphatase removes the phosphate group from phosphorylated amino acid residues in a protein." }, "bloom": "2", "hl_context": "<hl> One of the most common chemical modifications that occurs in signaling pathways is the addition of a phosphate group ( PO 4 – 3 ) to a molecule such as a protein in a process called phosphorylation . <hl> <hl> The phosphate can be added to a nucleotide such as GMP to form GDP or GTP . <hl> <hl> Phosphates are also often added to serine , threonine , and tyrosine residues of proteins , where they replace the hydroxyl group of the amino acid ( Figure 9.11 ) . <hl> The transfer of the phosphate is catalyzed by an enzyme called a kinase . Various kinases are named for the substrate they phosphorylate . Phosphorylation of serine and threonine residues often activates enzymes . Phosphorylation of tyrosine residues can either affect the activity of an enzyme or create a binding site that interacts with downstream components in the signaling cascade . Phosphorylation may activate or inactivate enzymes , and the reversal of phosphorylation , dephosphorylation by a phosphatase , will reverse the effect .", "hl_sentences": "One of the most common chemical modifications that occurs in signaling pathways is the addition of a phosphate group ( PO 4 – 3 ) to a molecule such as a protein in a process called phosphorylation . The phosphate can be added to a nucleotide such as GMP to form GDP or GTP . Phosphates are also often added to serine , threonine , and tyrosine residues of proteins , where they replace the hydroxyl group of the amino acid ( Figure 9.11 ) .", "question": { "cloze_format": "The function of a phosphatase is that ___ .", "normal_format": "What is the function of a phosphatase?", "question_choices": [ "A phosphatase removes phosphorylated amino acids from proteins.", "A phosphatase removes the phosphate group from phosphorylated amino acid residues in a protein.", "A phosphatase phosphorylates serine, threonine, and tyrosine residues.", "A phosphatase degrades second messengers in the cell." ], "question_id": "fs-id1472126", "question_text": "What is the function of a phosphatase?" 
}, "references_are_paraphrase": null }, { "answer": { "ans_choice": 1, "ans_text": "Phosphorylation of the inhibitor Iκ-B dissociates the complex between it and NF-κB, and allows NF-κB to enter the nucleus and stimulate transcription." }, "bloom": "2", "hl_context": "Some signal transduction pathways regulate the transcription of RNA . Others regulate the translation of proteins from mRNA . An example of a protein that regulates translation in the nucleus is the MAP kinase ERK . ERK is activated in a phosphorylation cascade when epidermal growth factor ( EGF ) binds the EGF receptor ( see Figure 9.10 ) . Upon phosphorylation , ERK enters the nucleus and activates a protein kinase that , in turn , regulates protein translation ( Figure 9.14 ) . The second kind of protein with which PKC can interact is a protein that acts as an inhibitor . An inhibitor is a molecule that binds to a protein and prevents it from functioning or reduces its function . <hl> In this case , the inhibitor is a protein called Iκ-B , which binds to the regulatory protein NF-κB . <hl> <hl> ( The symbol κ represents the Greek letter kappa . ) <hl> <hl> When Iκ-B is bound to NF-κB , the complex cannot enter the nucleus of the cell , but when Iκ-B is phosphorylated by PKC , it can no longer bind NF-κB , and NF-κB ( a transcription factor ) can enter the nucleus and initiate RNA transcription . <hl> <hl> In this case , the effect of phosphorylation is to inactivate an inhibitor and thereby activate the process of transcription . <hl>", "hl_sentences": "In this case , the inhibitor is a protein called Iκ-B , which binds to the regulatory protein NF-κB . ( The symbol κ represents the Greek letter kappa . ) When Iκ-B is bound to NF-κB , the complex cannot enter the nucleus of the cell , but when Iκ-B is phosphorylated by PKC , it can no longer bind NF-κB , and NF-κB ( a transcription factor ) can enter the nucleus and initiate RNA transcription . In this case , the effect of phosphorylation is to inactivate an inhibitor and thereby activate the process of transcription .", "question": { "cloze_format": "NF-κB induce gene expression because ___.", "normal_format": "How does NF-κB induce gene expression?", "question_choices": [ "A small, hydrophobic ligand binds to NF-κB, activating it.", "Phosphorylation of the inhibitor Iκ-B dissociates the complex between it and NF-κB, and allows NF-κB to enter the nucleus and stimulate transcription.", "NF-κB is phosphorylated and is then free to enter the nucleus and bind DNA.", "NF-κB is a kinase that phosphorylates a transcription factor that binds DNA and promotes protein production." ], "question_id": "fs-id2348311", "question_text": "How does NF-κB induce gene expression?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "all of the above" }, "bloom": null, "hl_context": "<hl> Apoptosis is also essential for normal embryological development . <hl> In vertebrates , for example , early stages of development include the formation of web-like tissue between individual fingers and toes ( Figure 9.15 ) . <hl> During the course of normal development , these unneeded cells must be eliminated , enabling fully separated fingers and toes to form . <hl> <hl> A cell signaling mechanism triggers apoptosis , which destroys the cells between the developing digits . <hl> <hl> Another example of external signaling that leads to apoptosis occurs in T-cell development . 
<hl> <hl> T-cells are immune cells that bind to foreign macromolecules and particles , and target them for destruction by the immune system . <hl> Normally , T-cells do not target “ self ” proteins ( those of their own organism ) , a process that can lead to autoimmune diseases . In order to develop the ability to discriminate between self and non-self , immature T-cells undergo screening to determine whether they bind to so-called self proteins . If the T-cell receptor binds to self proteins , the cell initiates apoptosis to remove the potentially dangerous cell . <hl> When a cell is damaged , superfluous , or potentially dangerous to an organism , a cell can initiate a mechanism to trigger programmed cell death , or apoptosis . <hl> <hl> Apoptosis allows a cell to die in a controlled manner that prevents the release of potentially damaging molecules from inside the cell . <hl> There are many internal checkpoints that monitor a cell ’ s health ; if abnormalities are observed , a cell can spontaneously initiate the process of apoptosis . However , in some cases , such as a viral infection or uncontrolled cell division due to cancer , the cell ’ s normal checks and balances fail . External signaling can also initiate apoptosis . For example , most normal animal cells have receptors that interact with the extracellular matrix , a network of glycoproteins that provides structural support for cells in an organism . The binding of cellular receptors to the extracellular matrix initiates a signaling cascade within the cell . However , if the cell moves away from the extracellular matrix , the signaling ceases , and the cell undergoes apoptosis . This system keeps cells from traveling through the body and proliferating out of control , as happens with tumor cells that metastasize .", "hl_sentences": "Apoptosis is also essential for normal embryological development . During the course of normal development , these unneeded cells must be eliminated , enabling fully separated fingers and toes to form . A cell signaling mechanism triggers apoptosis , which destroys the cells between the developing digits . Another example of external signaling that leads to apoptosis occurs in T-cell development . T-cells are immune cells that bind to foreign macromolecules and particles , and target them for destruction by the immune system . When a cell is damaged , superfluous , or potentially dangerous to an organism , a cell can initiate a mechanism to trigger programmed cell death , or apoptosis . Apoptosis allows a cell to die in a controlled manner that prevents the release of potentially damaging molecules from inside the cell .", "question": { "cloze_format": "Apoptosis can occur in a cell when the cell is ________________.", "normal_format": "Apoptosis can occur in a cell when the cell is which of the following?", "question_choices": [ "damaged", "no longer needed", "infected by a virus", "all of the above" ], "question_id": "fs-id2018772", "question_text": "Apoptosis can occur in a cell when the cell is ________________." }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "The enzyme is inactivated." }, "bloom": null, "hl_context": "Some signal transduction pathways regulate the transcription of RNA . Others regulate the translation of proteins from mRNA . An example of a protein that regulates translation in the nucleus is the MAP kinase ERK . ERK is activated in a phosphorylation cascade when epidermal growth factor ( EGF ) binds the EGF receptor ( see Figure 9.10 ) . 
Upon phosphorylation , ERK enters the nucleus and activates a protein kinase that , in turn , regulates protein translation ( Figure 9.14 ) . <hl> The second kind of protein with which PKC can interact is a protein that acts as an inhibitor . <hl> <hl> An inhibitor is a molecule that binds to a protein and prevents it from functioning or reduces its function . <hl> <hl> In this case , the inhibitor is a protein called Iκ-B , which binds to the regulatory protein NF-κB . <hl> <hl> ( The symbol κ represents the Greek letter kappa . ) <hl> <hl> When Iκ-B is bound to NF-κB , the complex cannot enter the nucleus of the cell , but when Iκ-B is phosphorylated by PKC , it can no longer bind NF-κB , and NF-κB ( a transcription factor ) can enter the nucleus and initiate RNA transcription . <hl> <hl> In this case , the effect of phosphorylation is to inactivate an inhibitor and thereby activate the process of transcription . <hl> The enzyme phospholipase C cleaves PIP 2 to form diacylglycerol ( DAG ) and inositol triphosphate ( IP 3 ) ( Figure 9.13 ) . These products of the cleavage of PIP 2 serve as second messengers . <hl> Diacylglycerol ( DAG ) remains in the plasma membrane and activates protein kinase C ( PKC ) , which then phosphorylates serine and threonine residues in its target proteins . <hl> IP 3 diffuses into the cytoplasm and binds to ligand-gated calcium channels in the endoplasmic reticulum to release Ca 2 + that continues the signal cascade . 9.3 Response to the Signal Learning Objectives By the end of this section , you will be able to : One of the most common chemical modifications that occurs in signaling pathways is the addition of a phosphate group ( PO 4 – 3 ) to a molecule such as a protein in a process called phosphorylation . The phosphate can be added to a nucleotide such as GMP to form GDP or GTP . Phosphates are also often added to serine , threonine , and tyrosine residues of proteins , where they replace the hydroxyl group of the amino acid ( Figure 9.11 ) . <hl> The transfer of the phosphate is catalyzed by an enzyme called a kinase . <hl> Various kinases are named for the substrate they phosphorylate . Phosphorylation of serine and threonine residues often activates enzymes . Phosphorylation of tyrosine residues can either affect the activity of an enzyme or create a binding site that interacts with downstream components in the signaling cascade . <hl> Phosphorylation may activate or inactivate enzymes , and the reversal of phosphorylation , dephosphorylation by a phosphatase , will reverse the effect . <hl>", "hl_sentences": "The second kind of protein with which PKC can interact is a protein that acts as an inhibitor . An inhibitor is a molecule that binds to a protein and prevents it from functioning or reduces its function . In this case , the inhibitor is a protein called Iκ-B , which binds to the regulatory protein NF-κB . ( The symbol κ represents the Greek letter kappa . ) When Iκ-B is bound to NF-κB , the complex cannot enter the nucleus of the cell , but when Iκ-B is phosphorylated by PKC , it can no longer bind NF-κB , and NF-κB ( a transcription factor ) can enter the nucleus and initiate RNA transcription . In this case , the effect of phosphorylation is to inactivate an inhibitor and thereby activate the process of transcription . Diacylglycerol ( DAG ) remains in the plasma membrane and activates protein kinase C ( PKC ) , which then phosphorylates serine and threonine residues in its target proteins . 
The transfer of the phosphate is catalyzed by an enzyme called a kinase . Phosphorylation may activate or inactivate enzymes , and the reversal of phosphorylation , dephosphorylation by a phosphatase , will reverse the effect .", "question": { "cloze_format": "The effect of an inhibitor binding an enzyme is that ___ .", "normal_format": "What is the effect of an inhibitor binding an enzyme?", "question_choices": [ "The enzyme is degraded.", "The enzyme is activated.", "The enzyme is inactivated.", "The complex is transported out of the cell." ], "question_id": "fs-id2154549", "question_text": "What is the effect of an inhibitor binding an enzyme?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 2, "ans_text": "mating factor" }, "bloom": "1", "hl_context": "Yeasts are eukaryotes ( fungi ) , and the components and processes found in yeast signals are similar to those of cell-surface receptor signals in multicellular organisms . Budding yeasts ( Figure 9.16 ) are able to participate in a process that is similar to sexual reproduction that entails two haploid cells ( cells with one-half the normal number of chromosomes ) combining to form a diploid cell ( a cell with two sets of each chromosome , which is what normal body cells contain ) . <hl> In order to find another haploid yeast cell that is prepared to mate , budding yeasts secrete a signaling molecule called mating factor . <hl> <hl> When mating factor binds to cell-surface receptors in other yeast cells that are nearby , they stop their normal growth cycles and initiate a cell signaling cascade that includes protein kinases and GTP-binding proteins that are similar to G-proteins . <hl>", "hl_sentences": "In order to find another haploid yeast cell that is prepared to mate , budding yeasts secrete a signaling molecule called mating factor . When mating factor binds to cell-surface receptors in other yeast cells that are nearby , they stop their normal growth cycles and initiate a cell signaling cascade that includes protein kinases and GTP-binding proteins that are similar to G-proteins .", "question": { "cloze_format": "The type of molecule that acts as a signaling molecule in yeasts is ___.", "normal_format": "Which type of molecule acts as a signaling molecule in yeasts?", "question_choices": [ "steroid", "autoinducer", "mating factor", "second messenger" ], "question_id": "fs-id2169688", "question_text": "Which type of molecule acts as a signaling molecule in yeasts?" }, "references_are_paraphrase": null }, { "answer": { "ans_choice": 3, "ans_text": "a sufficient number of bacteria are present" }, "bloom": null, "hl_context": "The first evidence of bacterial communication was observed in a bacterium that has a symbiotic relationship with Hawaiian bobtail squid . <hl> When the population density of the bacteria reaches a certain level , specific gene expression is initiated , and the bacteria produce bioluminescent proteins that emit light . <hl> <hl> Because the number of cells present in the environment ( cell density ) is the determining factor for signaling , bacterial signaling was named quorum sensing . <hl> <hl> In politics and business , a quorum is the minimum number of members required to be present to vote on an issue . <hl>", "hl_sentences": "When the population density of the bacteria reaches a certain level , specific gene expression is initiated , and the bacteria produce bioluminescent proteins that emit light . 
Because the number of cells present in the environment ( cell density ) is the determining factor for signaling , bacterial signaling was named quorum sensing . In politics and business , a quorum is the minimum number of members required to be present to vote on an issue .", "question": { "cloze_format": "Quorum sensing is triggered to begin when ___________.", "normal_format": "When is quorum sensing triggered to begin?", "question_choices": [ "treatment with antibiotics occurs", "bacteria release growth hormones", "bacterial protein expression is switched on", "a sufficient number of bacteria are present" ], "question_id": "fs-id1465106", "question_text": "Quorum sensing is triggered to begin when ___________." }, "references_are_paraphrase": null } ]
9
9.1 Signaling Molecules and Cellular Receptors Learning Objectives By the end of this section, you will be able to: Describe four types of signaling found in multicellular organisms Compare internal receptors with cell-surface receptors Recognize the relationship between a ligand’s structure and its mechanism of action There are two kinds of communication in the world of living cells. Communication between cells is called intercellular signaling, and communication within a cell is called intracellular signaling. An easy way to remember the distinction is by understanding the Latin origin of the prefixes: inter- means "between" (for example, intersecting lines are those that cross each other) and intra- means "inside" (like intravenous). Chemical signals are released by signaling cells in the form of small, usually volatile or soluble molecules called ligands. A ligand is a molecule that binds another specific molecule and, in some cases, delivers a signal in the process. Ligands can thus be thought of as signaling molecules. Ligands interact with proteins in target cells, which are cells that are affected by chemical signals; these proteins are also called receptors. Ligands and receptors exist in several varieties; however, a specific ligand will have a specific receptor that typically binds only that ligand. Forms of Signaling There are four categories of chemical signaling found in multicellular organisms: paracrine signaling, endocrine signaling, autocrine signaling, and direct signaling across gap junctions (Figure 9.2). The main difference between the categories is the distance that the signal travels through the organism to reach the target cell. Not all cells are affected by the same signals. Paracrine Signaling Signals that act locally between cells that are close together are called paracrine signals. Paracrine signals move by diffusion through the extracellular matrix. These types of signals usually elicit quick responses that last only a short amount of time. In order to keep the response localized, paracrine ligand molecules are normally quickly degraded by enzymes or removed by neighboring cells. Removing the signals reestablishes the concentration gradient for the signal, allowing them to quickly diffuse through the extracellular space if released again. One example of paracrine signaling is the transfer of signals across synapses between nerve cells. A nerve cell consists of a cell body; several short, branched extensions called dendrites that receive stimuli; and a long extension called an axon, which transmits signals to other nerve cells or muscle cells. The junction between nerve cells where signal transmission occurs is called a synapse. A synaptic signal is a chemical signal that travels between nerve cells. Signals within the nerve cells are propagated by fast-moving electrical impulses. When these impulses reach the end of the axon, the signal continues on to a dendrite of the next cell by the release of chemical ligands called neurotransmitters by the presynaptic cell (the cell emitting the signal). The neurotransmitters are transported across the very small distances between nerve cells, which are called chemical synapses (Figure 9.3). The small distance between nerve cells allows the signal to travel quickly; this enables an immediate response, such as, “Take your hand off the stove!”
When the neurotransmitter binds the receptor on the surface of the postsynaptic cell, the electrochemical potential of the target cell changes, and the next electrical impulse is launched. The neurotransmitters that are released into the chemical synapse are degraded quickly or get reabsorbed by the presynaptic cell so that the recipient nerve cell can recover quickly and be prepared to respond rapidly to the next synaptic signal. Endocrine Signaling Signals from distant cells are called endocrine signals, and they originate from endocrine cells. (In the body, many endocrine cells are located in endocrine glands, such as the thyroid gland, the hypothalamus, and the pituitary gland.) These types of signals usually produce a slower response but have a longer-lasting effect. The ligands released in endocrine signaling are called hormones, signaling molecules that are produced in one part of the body but affect other body regions some distance away. Hormones travel the large distances between endocrine cells and their target cells via the bloodstream, which is a relatively slow way to move throughout the body. Because of their form of transport, hormones get diluted and are present in low concentrations when they act on their target cells. This is different from paracrine signaling, in which local concentrations of ligands can be very high. Autocrine Signaling Autocrine signals are produced by signaling cells that can also bind to the ligand that is released. This means the signaling cell and the target cell can be the same or a similar cell (the prefix auto- means self, a reminder that the signaling cell sends a signal to itself). This type of signaling often occurs during the early development of an organism to ensure that cells develop into the correct tissues and take on the proper function. Autocrine signaling also regulates pain sensation and inflammatory responses. Further, if a cell is infected with a virus, the cell can signal itself to undergo programmed cell death, killing the virus in the process. In some cases, neighboring cells of the same type are also influenced by the released ligand. In embryological development, this process of stimulating a group of neighboring cells may help to direct the differentiation of identical cells into the same cell type, thus ensuring the proper developmental outcome. Direct Signaling Across Gap Junctions Gap junctions in animals and plasmodesmata in plants are connections between the plasma membranes of neighboring cells. These water-filled channels allow small signaling molecules, called intracellular mediators, to diffuse between the two cells. Small molecules, such as calcium ions (Ca 2+), are able to move between cells, but large molecules like proteins and DNA cannot fit through the channels. The specificity of the channels ensures that the cells remain independent but can quickly and easily transmit signals. The transfer of signaling molecules communicates the current state of the cell that is directly next to the target cell; this allows a group of cells to coordinate their response to a signal that only one of them may have received. In plants, plasmodesmata are ubiquitous, making the entire plant into a giant communication network. Types of Receptors Receptors are protein molecules in the target cell or on its surface that bind ligands. There are two types of receptors: internal receptors and cell-surface receptors.
Internal receptors Internal receptors, also known as intracellular or cytoplasmic receptors, are found in the cytoplasm of the cell and respond to hydrophobic ligand molecules that are able to travel across the plasma membrane. Once inside the cell, many of these molecules bind to proteins that act as regulators of mRNA synthesis (transcription) to mediate gene expression. Gene expression is the cellular process of transforming the information in a cell's DNA into a sequence of amino acids, which ultimately forms a protein. When the ligand binds to the internal receptor, a conformational change is triggered that exposes a DNA-binding site on the protein. The ligand-receptor complex moves into the nucleus, then binds to specific regulatory regions of the chromosomal DNA and promotes the initiation of transcription (Figure 9.4). Transcription is the process of copying the information in a cell's DNA into a special form of RNA called messenger RNA (mRNA); the cell uses information in the mRNA (which moves out into the cytoplasm and associates with ribosomes) to link specific amino acids in the correct order, producing a protein. Internal receptors can directly influence gene expression without having to pass the signal on to other receptors or messengers. Cell-Surface Receptors Cell-surface receptors, also known as transmembrane receptors, are cell-surface, membrane-anchored (integral) proteins that bind to external ligand molecules. This type of receptor spans the plasma membrane and performs signal transduction, in which an extracellular signal is converted into an intracellular signal. Ligands that interact with cell-surface receptors do not have to enter the cell that they affect. Cell-surface receptors are also called cell-specific proteins or markers because they are specific to individual cell types. Because cell-surface receptor proteins are fundamental to normal cell functioning, it should come as no surprise that a malfunction in any one of these proteins could have severe consequences. Errors in the protein structures of certain receptor molecules have been shown to play a role in hypertension (high blood pressure), asthma, heart disease, and cancer. Each cell-surface receptor has three main components: an external ligand-binding domain, a hydrophobic membrane-spanning region, and an intracellular domain inside the cell. The ligand-binding domain is also called the extracellular domain. The size and extent of each of these domains vary widely, depending on the type of receptor. Evolution Connection How Viruses Recognize a Host Unlike living cells, many viruses do not have a plasma membrane or any of the structures necessary to sustain life. Some viruses are simply composed of an inert protein shell containing DNA or RNA. To reproduce, viruses must invade a living cell, which serves as a host, and then take over the host's cellular apparatus. But how does a virus recognize its host? Viruses often bind to cell-surface receptors on the host cell. For example, the virus that causes human influenza (flu) binds specifically to receptors on membranes of cells of the respiratory system. Chemical differences in the cell-surface receptors among hosts mean that a virus that infects a specific species (for example, humans) cannot infect another species (for example, chickens). However, viruses have very small amounts of DNA or RNA compared to humans, and, as a result, viral reproduction can occur rapidly.
Viral reproduction invariably produces errors that can lead to changes in newly produced viruses; these changes mean that the viral proteins that interact with cell-surface receptors may evolve in such a way that they can bind to receptors in a new host. Such changes happen randomly and quite often in the reproductive cycle of a virus, but the changes only matter if a virus with new binding properties comes into contact with a suitable host. In the case of influenza, this situation can occur in settings where animals and people are in close contact, such as poultry and swine farms. 1 Once a virus jumps to a new host, it can spread quickly. Scientists watch newly appearing viruses (called emerging viruses) closely in the hope that such monitoring can reduce the likelihood of global viral epidemics. 1 A. B. Sigalov, “The School of Nature. IV. Learning from Viruses,” Self/Nonself 1, no. 4 (2010): 282–298. Y. Cao, X. Koh, L. Dong, X. Du, A. Wu, X. Ding, H. Deng, Y. Shu, J. Chen, T. Jiang, “Rapid Estimation of Binding Activity of Influenza Virus Hemagglutinin to Human and Avian Receptors,” PLoS One 6, no. 4 (2011): e18664. Cell-surface receptors are involved in most of the signaling in multicellular organisms. There are three general categories of cell-surface receptors: ion channel-linked receptors, G-protein-linked receptors, and enzyme-linked receptors. Ion channel-linked receptors bind a ligand and open a channel through the membrane that allows specific ions to pass through. To form a channel, this type of cell-surface receptor has an extensive membrane-spanning region. In order to interact with the phospholipid fatty acid tails that form the center of the plasma membrane, many of the amino acids in the membrane-spanning region are hydrophobic in nature. Conversely, the amino acids that line the inside of the channel are hydrophilic to allow for the passage of water or ions. When a ligand binds to the extracellular region of the channel, there is a conformational change in the protein's structure that allows ions such as sodium, calcium, magnesium, and hydrogen to pass through (Figure 9.5). G-protein-linked receptors bind a ligand and activate a membrane protein called a G-protein. The activated G-protein then interacts with either an ion channel or an enzyme in the membrane (Figure 9.6). All G-protein-linked receptors have seven transmembrane domains, but each receptor has its own specific extracellular domain and G-protein-binding site. Cell signaling using G-protein-linked receptors occurs as a cyclic series of events. Before the ligand binds, the inactive G-protein can bind to a newly revealed site on the receptor specific for its binding. Once the ligand binds to the receptor, the resultant shape change activates the G-protein, which releases GDP and picks up GTP. The subunits of the G-protein then split into the α subunit and the βγ subunit. One or both of these G-protein fragments may be able to activate other proteins as a result. After a while, the GTP on the active α subunit of the G-protein is hydrolyzed to GDP, and the βγ subunit is deactivated. The subunits reassociate to form the inactive G-protein, and the cycle begins anew. G-protein-linked receptors have been extensively studied, and much has been learned about their roles in maintaining health. Bacteria that are pathogenic to humans can release poisons that interrupt specific G-protein-linked receptor function, leading to illnesses such as pertussis, botulism, and cholera.
In cholera (Figure 9.7), for example, the water-borne bacterium Vibrio cholerae produces a toxin, choleragen, that binds to cells lining the small intestine. The toxin then enters these intestinal cells, where it modifies a G-protein that controls the opening of a chloride channel and causes it to remain continuously active, resulting in large losses of fluids from the body and potentially fatal dehydration. Enzyme-linked receptors are cell-surface receptors with intracellular domains that are associated with an enzyme. In some cases, the intracellular domain of the receptor itself is an enzyme. Other enzyme-linked receptors have a small intracellular domain that interacts directly with an enzyme. The enzyme-linked receptors normally have large extracellular and intracellular domains, but the membrane-spanning region consists of a single alpha-helical region of the peptide strand. When a ligand binds to the extracellular domain, a signal is transferred through the membrane, activating the enzyme. Activation of the enzyme sets off a chain of events within the cell that eventually leads to a response. One example of this type of enzyme-linked receptor is the tyrosine kinase receptor (Figure 9.8). A kinase is an enzyme that transfers phosphate groups from ATP to another protein. The tyrosine kinase receptor transfers phosphate groups to tyrosine molecules (tyrosine residues). First, signaling molecules bind to the extracellular domain of two nearby tyrosine kinase receptors. The two neighboring receptors then bond together, or dimerize. Phosphates are then added to tyrosine residues on the intracellular domain of the receptors (phosphorylation). The phosphorylated residues can then transmit the signal to the next messenger within the cytoplasm. Visual Connection HER2 is a receptor tyrosine kinase. In 30 percent of human breast cancers, HER2 is permanently activated, resulting in unregulated cell division. Lapatinib, a drug used to treat breast cancer, inhibits HER2 receptor tyrosine kinase autophosphorylation (the process by which the receptor adds phosphates onto itself), thus reducing tumor growth by 50 percent. Besides autophosphorylation, which of the following steps would be inhibited by Lapatinib? Signaling molecule binding, dimerization, and the downstream cellular response Dimerization, and the downstream cellular response The downstream cellular response Phosphatase activity, dimerization, and the downstream cellular response Signaling Molecules Produced by signaling cells, ligands act as chemical signals that travel to target cells, where their binding to receptors coordinates the response. The types of molecules that serve as ligands are incredibly varied and range from small proteins to small ions like calcium (Ca 2+). Small Hydrophobic Ligands Small hydrophobic ligands can directly diffuse through the plasma membrane and interact with internal receptors. Important members of this class of ligands are the steroid hormones. Steroids are lipids that have a hydrocarbon skeleton with four fused rings; different steroids have different functional groups attached to the carbon skeleton. Steroid hormones include the female sex hormone, estradiol, which is a type of estrogen; the male sex hormone, testosterone; and cholesterol, which is an important structural component of biological membranes and a precursor of steroid hormones (Figure 9.9). Other hydrophobic hormones include thyroid hormones and vitamin D.
In order to be soluble in blood, hydrophobic ligands must bind to carrier proteins while they are being transported through the bloodstream. Water-Soluble Ligands Water-soluble ligands are polar and therefore cannot pass through the plasma membrane unaided; sometimes, they are too large to pass through the membrane at all. Instead, most water-soluble ligands bind to the extracellular domain of cell-surface receptors. This group of ligands is quite diverse and includes small molecules, peptides, and proteins. Other Ligands Nitric oxide (NO) is a gas that also acts as a ligand. It is able to diffuse directly across the plasma membrane, and one of its roles is to interact with receptors in smooth muscle and induce relaxation of the tissue. NO has a very short half-life and therefore only functions over short distances. Nitroglycerin, a treatment for heart disease, acts by triggering the release of NO, which causes blood vessels to dilate (expand), thus restoring blood flow to the heart. NO has become better known recently because the pathway that it affects is targeted by prescription medications for erectile dysfunction, such as Viagra (erection involves dilated blood vessels). 9.2 Propagation of the Signal Learning Objectives By the end of this section, you will be able to: Explain how the binding of a ligand initiates signal transduction throughout a cell Recognize the role of phosphorylation in the transmission of intracellular signals Evaluate the role of second messengers in signal transmission Once a ligand binds to a receptor, the signal is transmitted through the membrane and into the cytoplasm. Continuation of a signal in this manner is called signal transduction. Signal transduction only occurs with cell-surface receptors because internal receptors are able to interact directly with DNA in the nucleus to initiate protein synthesis. When a ligand binds to its receptor, conformational changes occur that affect the receptor’s intracellular domain. Conformational changes of the extracellular domain upon ligand binding can propagate through the membrane region of the receptor and lead to activation of the intracellular domain or its associated proteins. In some cases, binding of the ligand causes dimerization of the receptor, which means that two receptors bind to each other to form a stable complex called a dimer. A dimer is a chemical compound formed when two molecules (often identical) join together. The binding of the receptors in this manner enables their intracellular domains to come into close contact and activate each other. Binding Initiates a Signaling Pathway After the ligand binds to the cell-surface receptor, the activation of the receptor’s intracellular components sets off a chain of events that is called a signaling pathway or a signaling cascade. In a signaling pathway, second messengers, enzymes, and activated proteins interact with specific proteins, which are in turn activated in a chain reaction that eventually leads to a change in the cell’s environment (Figure 9.10). The events in the cascade occur in a series, much like a current flows in a river. Interactions that occur before a certain point are defined as upstream events, and events after that point are called downstream events. Visual Connection In certain cancers, the GTPase activity of the RAS G-protein is inhibited. This means that the RAS protein can no longer hydrolyze GTP into GDP. What effect would this have on downstream cellular events?
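As an editorial aside (not part of the original text), the G-protein cycle described in the previous section can be sketched as a tiny state machine in Python. The strict alternation rule and step counts below are invented simplifications, not measured biology; the sketch only shows qualitatively why a RAS-type protein that cannot hydrolyze GTP stays locked in the active state, continuously stimulating downstream events.

# Illustrative toy model (not from the textbook): the G-protein GDP/GTP cycle.
# The alternation rule and step counts are invented simplifications.
def g_protein_cycle(steps, gtpase_active=True):
    """Count how many steps the G-protein spends in the active (GTP-bound) state."""
    state = "GDP"              # inactive, GDP-bound
    active_steps = 0
    for _ in range(steps):
        if state == "GDP":
            state = "GTP"      # receptor-catalyzed exchange: GDP out, GTP in (signal on)
        elif gtpase_active:
            state = "GDP"      # intrinsic GTPase hydrolyzes GTP to GDP (signal off)
        if state == "GTP":
            active_steps += 1
    return active_steps

print(g_protein_cycle(100))                        # normal cycling: active about half the time
print(g_protein_cycle(100, gtpase_active=False))   # GTPase inhibited: stuck "on" every step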
Signaling pathways can get very complicated very quickly because most cellular proteins can affect different downstream events, depending on the conditions within the cell. A single pathway can branch off toward different endpoints based on the interplay between two or more signaling pathways, and the same ligands are often used to initiate different signals in different cell types. This variation in response is due to differences in protein expression in different cell types. Another complicating element is signal integration of the pathways, in which signals from two or more different cell-surface receptors merge to activate the same response in the cell. This process can ensure that multiple external requirements are met before a cell commits to a specific response. The effects of extracellular signals can also be amplified by enzymatic cascades. At the initiation of the signal, a single ligand binds to a single receptor. However, activation of a receptor-linked enzyme can activate many copies of a component of the signaling cascade, which amplifies the signal. Methods of Intracellular Signaling The induction of a signaling pathway depends on the modification of a cellular component by an enzyme. There are numerous enzymatic modifications that can occur, and they are recognized in turn by the next component downstream. The following are some of the more common events in intracellular signaling. Phosphorylation One of the most common chemical modifications that occurs in signaling pathways is the addition of a phosphate group (PO 4 3–) to a molecule such as a protein in a process called phosphorylation. The phosphate can be added to a nucleotide such as GMP to form GDP or GTP. Phosphates are also often added to serine, threonine, and tyrosine residues of proteins, where they replace the hydroxyl group of the amino acid (Figure 9.11). The transfer of the phosphate is catalyzed by an enzyme called a kinase. Various kinases are named for the substrate they phosphorylate. Phosphorylation of serine and threonine residues often activates enzymes. Phosphorylation of tyrosine residues can either affect the activity of an enzyme or create a binding site that interacts with downstream components in the signaling cascade. Phosphorylation may activate or inactivate enzymes, and the reversal of phosphorylation, dephosphorylation by a phosphatase, will reverse the effect.
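The cascade amplification described above is, at heart, simple multiplication: if each active enzyme activates many copies of the next component before it is switched off, the number of active molecules grows geometrically with the number of tiers. The minimal Python sketch below is illustrative only; the fan-out and tier counts are invented values, not measured quantities.

# Illustrative sketch only: multiplicative amplification in an enzymatic cascade.
# The fan-out (substrates activated per enzyme) is an invented example value.
def cascade_output(ligands, tiers, fanout):
    """Active molecules after `tiers` enzymatic steps, with each active
    enzyme activating `fanout` copies of the next cascade component."""
    active = ligands
    for _ in range(tiers):
        active *= fanout
    return active

# One ligand, three kinase tiers, each enzyme activating 100 substrates:
print(cascade_output(1, 3, 100))   # 1,000,000 active molecules from one binding event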
Second Messengers Second messengers are small molecules that propagate a signal after it has been initiated by the binding of the signaling molecule to the receptor. These molecules help to spread a signal through the cytoplasm by altering the behavior of certain cellular proteins. Calcium ion is a widely used second messenger. The free concentration of calcium ions (Ca 2+) within a cell is very low because ion pumps in the plasma membrane continuously use adenosine-5'-triphosphate (ATP) to remove it. For signaling purposes, Ca 2+ is stored in cytoplasmic compartments, such as the endoplasmic reticulum, or accessed from outside the cell. When signaling occurs, ligand-gated calcium ion channels allow the higher levels of Ca 2+ that are present outside the cell (or in intracellular storage compartments) to flow into the cytoplasm, which raises the concentration of cytoplasmic Ca 2+. The response to the increase in Ca 2+ varies, depending on the cell type involved. For example, in the β-cells of the pancreas, Ca 2+ signaling leads to the release of insulin, and in muscle cells, an increase in Ca 2+ leads to muscle contractions. Another second messenger utilized in many different cell types is cyclic AMP (cAMP). Cyclic AMP is synthesized by the enzyme adenylyl cyclase from ATP (Figure 9.12). The main role of cAMP in cells is to bind to and activate an enzyme called cAMP-dependent kinase (A-kinase). A-kinase regulates many vital metabolic pathways: It phosphorylates serine and threonine residues of its target proteins, activating them in the process. A-kinase is found in many different types of cells, and the target proteins in each kind of cell are different. These differences give rise to the variation in responses to cAMP in different cells. Present in small concentrations in the plasma membrane, inositol phospholipids are lipids that can also be converted into second messengers. Because these molecules are membrane components, they are located near membrane-bound receptors and can easily interact with them. Phosphatidylinositol (PI) is the main phospholipid that plays a role in cellular signaling. Enzymes known as kinases phosphorylate PI to form PI-phosphate (PIP) and PI-bisphosphate (PIP 2). The enzyme phospholipase C cleaves PIP 2 to form diacylglycerol (DAG) and inositol triphosphate (IP 3) (Figure 9.13). These products of the cleavage of PIP 2 serve as second messengers. Diacylglycerol (DAG) remains in the plasma membrane and activates protein kinase C (PKC), which then phosphorylates serine and threonine residues in its target proteins. IP 3 diffuses into the cytoplasm and binds to ligand-gated calcium channels in the endoplasmic reticulum to release Ca 2+ that continues the signal cascade. 9.3 Response to the Signal Learning Objectives By the end of this section, you will be able to: Describe how signaling pathways direct protein expression, cellular metabolism, and cell growth Identify the function of PKC in signal transduction pathways Recognize the role of apoptosis in the development and maintenance of a healthy organism Inside the cell, ligands bind to their internal receptors, allowing them to directly affect the cell’s DNA and protein-producing machinery. Using signal transduction pathways, receptors in the plasma membrane produce a variety of effects on the cell. The results of signaling pathways are extremely varied and depend on the type of cell involved as well as the external and internal conditions. A small sampling of responses is described below. Gene Expression Some signal transduction pathways regulate the transcription of RNA. Others regulate the translation of proteins from mRNA. An example of a protein that regulates translation in the nucleus is the MAP kinase ERK. ERK is activated in a phosphorylation cascade when epidermal growth factor (EGF) binds the EGF receptor (see Figure 9.10). Upon phosphorylation, ERK enters the nucleus and activates a protein kinase that, in turn, regulates protein translation (Figure 9.14). Another kind of protein with which PKC can interact is one that acts as an inhibitor. An inhibitor is a molecule that binds to a protein and prevents it from functioning or reduces its function. In this case, the inhibitor is a protein called Iκ-B, which binds to the regulatory protein NF-κB. (The symbol κ represents the Greek letter kappa.)
9.3 Response to the Signal

Learning Objectives
By the end of this section, you will be able to:
- Describe how signaling pathways direct protein expression, cellular metabolism, and cell growth
- Identify the function of PKC in signal transduction pathways
- Recognize the role of apoptosis in the development and maintenance of a healthy organism

Inside the cell, ligands bind to their internal receptors, allowing them to directly affect the cell’s DNA and protein-producing machinery. Using signal transduction pathways, receptors in the plasma membrane produce a variety of effects on the cell. The results of signaling pathways are extremely varied and depend on the type of cell involved as well as the external and internal conditions. A small sampling of responses is described below.

Gene Expression

Some signal transduction pathways regulate the transcription of RNA. Others regulate the translation of proteins from mRNA. An example of a protein that helps regulate translation is the MAP kinase ERK. ERK is activated in a phosphorylation cascade when epidermal growth factor (EGF) binds the EGF receptor (see Figure 9.10). Upon phosphorylation, ERK enters the nucleus and activates a protein kinase that, in turn, regulates protein translation (Figure 9.14).

Another kind of protein with which PKC can interact is a protein that acts as an inhibitor. An inhibitor is a molecule that binds to a protein and prevents it from functioning or reduces its function. In this case, the inhibitor is a protein called Iκ-B, which binds to the regulatory protein NF-κB. (The symbol κ represents the Greek letter kappa.) When Iκ-B is bound to NF-κB, the complex cannot enter the nucleus of the cell, but when Iκ-B is phosphorylated by PKC, it can no longer bind NF-κB, and NF-κB (a transcription factor) can enter the nucleus and initiate RNA transcription. In this case, the effect of phosphorylation is to inactivate an inhibitor and thereby activate the process of transcription.

Increase in Cellular Metabolism

Another signaling pathway produces its effect in muscle cells. The activation of β-adrenergic receptors in muscle cells by adrenaline leads to an increase in cyclic AMP (cAMP) inside the cell. Also known as epinephrine, adrenaline is a hormone (produced by the adrenal gland attached to the kidney) that readies the body for short-term emergencies. Cyclic AMP activates PKA (protein kinase A), which in turn phosphorylates two enzymes. The first enzyme promotes the degradation of glycogen by activating an intermediate, glycogen phosphorylase kinase (GPK), which in turn activates glycogen phosphorylase (GP), the enzyme that catabolizes glycogen into glucose. (Recall that your body converts excess glucose to glycogen for short-term storage. When energy is needed, glycogen is quickly reconverted to glucose.) Phosphorylation of the second enzyme, glycogen synthase (GS), inhibits its ability to form glycogen from glucose. In this manner, a muscle cell obtains a ready pool of glucose by activating its formation via glycogen degradation and by inhibiting the use of glucose to form glycogen, thus preventing a futile cycle of glycogen degradation and synthesis. The glucose is then available for use by the muscle cell in response to a sudden surge of adrenaline, the “fight or flight” reflex.
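The reciprocal wiring of this pathway, where one kinase turns glycogen breakdown on and glycogen synthesis off, can be captured in a few lines of code. The sketch below (Python) is a deliberate simplification; real regulation also involves phosphatases and allosteric control, and the function name is ours, not standard nomenclature.

```python
# Reciprocal regulation of glycogen metabolism by PKA (simplified).
# PKA phosphorylates both targets, but phosphorylation ACTIVATES the
# breakdown branch (via GPK -> GP) and INACTIVATES glycogen synthase,
# so the two opposing pathways are never on at the same time.

def muscle_cell_glycogen_state(adrenaline_present: bool) -> dict:
    pka_active = adrenaline_present  # adrenaline -> cAMP -> PKA
    return {
        "glycogen_breakdown": pka_active,      # phospho-GPK/GP active
        "glycogen_synthesis": not pka_active,  # phospho-GS inactive
    }

print(muscle_cell_glycogen_state(adrenaline_present=True))
# {'glycogen_breakdown': True, 'glycogen_synthesis': False}
print(muscle_cell_glycogen_state(adrenaline_present=False))
# {'glycogen_breakdown': False, 'glycogen_synthesis': True}
```

Because a single kinase flips both switches in opposite directions, the cell cannot fall into the futile cycle of simultaneously building and degrading glycogen.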
Cell Growth

Cell signaling pathways also play a major role in cell division. Cells do not normally divide unless they are stimulated by signals from other cells. The ligands that promote cell growth are called growth factors. Most growth factors bind to cell-surface receptors that are linked to tyrosine kinases; these receptors are called receptor tyrosine kinases (RTKs). Activation of RTKs initiates a signaling pathway that includes a G-protein called RAS, which activates the MAP kinase pathway described earlier. The enzyme MAP kinase then stimulates the expression of proteins that interact with other cellular components to initiate cell division.

Career Connection
Cancer Biologist

Cancer biologists study the molecular origins of cancer with the goal of developing new prevention methods and treatment strategies that will inhibit the growth of tumors without harming the normal cells of the body. As mentioned earlier, signaling pathways control cell growth. These signaling pathways are controlled by signaling proteins, which are, in turn, expressed by genes. Mutations in these genes can result in malfunctioning signaling proteins, which prevent the cell from regulating its cell cycle, triggering unrestricted cell division and cancer. The genes that regulate the signaling proteins are one type of oncogene, a gene that has the potential to cause cancer. The gene encoding RAS is an oncogene that was originally discovered when mutations in the RAS protein were linked to cancer. Further studies have indicated that 30 percent of cancer cells have a mutation in the RAS gene that leads to uncontrolled growth. If left unchecked, uncontrolled cell division can lead to tumor formation and metastasis, the growth of cancer cells in new locations in the body.

Cancer biologists have been able to identify many other oncogenes that contribute to the development of cancer. For example, HER2 is a cell-surface receptor that is present in excessive amounts in 20 percent of human breast cancers. Cancer biologists realized that gene duplication led to HER2 overexpression in 25 percent of breast cancer patients and developed a drug called Herceptin (trastuzumab), a monoclonal antibody that targets HER2 for removal by the immune system. Herceptin therapy helps to control signaling through HER2, and its use in combination with chemotherapy has helped to increase the overall survival rate of patients with metastatic breast cancer. More information on cancer biology research can be found at the National Cancer Institute website (http://www.cancer.gov/cancertopics/understandingcancer/targetedtherapies).

Cell Death

When a cell is damaged, superfluous, or potentially dangerous to an organism, it can initiate a mechanism to trigger programmed cell death, or apoptosis. Apoptosis allows a cell to die in a controlled manner that prevents the release of potentially damaging molecules from inside the cell. There are many internal checkpoints that monitor a cell’s health; if abnormalities are observed, a cell can spontaneously initiate the process of apoptosis. However, in some cases, such as a viral infection or uncontrolled cell division due to cancer, the cell’s normal checks and balances fail.

External signaling can also initiate apoptosis. For example, most normal animal cells have receptors that interact with the extracellular matrix, a network of glycoproteins that provides structural support for cells in an organism. The binding of cellular receptors to the extracellular matrix initiates a signaling cascade within the cell. However, if the cell moves away from the extracellular matrix, the signaling ceases, and the cell undergoes apoptosis. This system keeps cells from traveling through the body and proliferating out of control, as happens with tumor cells that metastasize.

Another example of external signaling that leads to apoptosis occurs in T-cell development. T-cells are immune cells that bind to foreign macromolecules and particles and target them for destruction by the immune system. Normally, T-cells do not target “self” proteins (those of their own organism); when they do, the result can be autoimmune disease. In order to develop the ability to discriminate between self and non-self, immature T-cells undergo screening to determine whether they bind to so-called self proteins. If the T-cell receptor binds to self proteins, the cell initiates apoptosis to remove the potentially dangerous cell.

Apoptosis is also essential for normal embryological development. In vertebrates, for example, early stages of development include the formation of web-like tissue between individual fingers and toes (Figure 9.15). During the course of normal development, these unneeded cells must be eliminated, enabling fully separated fingers and toes to form. A cell signaling mechanism triggers apoptosis, which destroys the cells between the developing digits.
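The anchorage rule described above, where survival requires a continuous signal from the extracellular matrix, amounts to a default-to-death policy. Here is a minimal sketch of that decision logic (Python; the function and its inputs are hypothetical simplifications, not real molecular players):

```python
# Default-to-death survival logic (simplified).
# A cell survives only while it keeps receiving survival signals
# from the extracellular matrix (ECM) and passes its internal
# health checkpoints; otherwise apoptosis is triggered.

def cell_fate(attached_to_ecm: bool, checkpoints_passed: bool) -> str:
    if not attached_to_ecm:
        return "apoptosis"  # ECM survival signal lost
    if not checkpoints_passed:
        return "apoptosis"  # internal abnormality detected
    return "survive"

print(cell_fate(attached_to_ecm=True,  checkpoints_passed=True))   # survive
print(cell_fate(attached_to_ecm=False, checkpoints_passed=True))   # apoptosis
```

Making death the default means a detached cell cannot simply wander off and proliferate elsewhere; it must continuously receive confirmation that it is where it belongs. Tumor cells that metastasize have escaped exactly this safeguard.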
Termination of the Signal Cascade

The aberrant signaling often seen in tumor cells is proof that the termination of a signal at the appropriate time can be just as important as the initiation of a signal. One method of stopping a specific signal is to degrade the ligand or remove it so that it can no longer access its receptor. One reason that hydrophobic hormones such as estrogen and testosterone trigger long-lasting events is that they bind carrier proteins. These proteins allow the insoluble molecules to be soluble in blood, but they also protect the hormones from degradation by circulating enzymes.

Inside the cell, many different enzymes reverse the cellular modifications that result from signaling cascades. For example, phosphatases remove the phosphate groups attached to proteins by kinases in a process called dephosphorylation. Cyclic AMP (cAMP) is degraded into AMP by phosphodiesterase, and the release of calcium stores is reversed by the Ca²⁺ pumps located in the plasma membrane and in the membranes of internal compartments.

9.4 Signaling in Single-Celled Organisms

Learning Objectives
By the end of this section, you will be able to:
- Describe how single-celled yeasts use cell signaling to communicate with one another
- Relate the role of quorum sensing to the ability of some bacteria to form biofilms

Within-cell signaling allows bacteria to respond to environmental cues, such as nutrient levels. In addition, some single-celled organisms release molecules to signal to each other.

Signaling in Yeast

Yeasts are eukaryotes (fungi), and the components and processes found in yeast signals are similar to those of cell-surface receptor signals in multicellular organisms. Budding yeasts (Figure 9.16) can participate in a process similar to sexual reproduction, in which two haploid cells (cells with one-half the normal number of chromosomes) combine to form a diploid cell (a cell with two sets of each chromosome, which is what normal body cells contain). To find another haploid yeast cell that is prepared to mate, budding yeasts secrete a signaling molecule called mating factor. When mating factor binds to cell-surface receptors on other yeast cells nearby, those cells stop their normal growth cycles and initiate a cell signaling cascade that includes protein kinases and GTP-binding proteins similar to G-proteins.

Signaling in Bacteria

Signaling in bacteria enables bacteria to monitor extracellular conditions, ensure that there are sufficient amounts of nutrients, and avoid hazardous situations. There are circumstances, however, when bacteria communicate with each other. The first evidence of bacterial communication was observed in a bacterium that has a symbiotic relationship with the Hawaiian bobtail squid. When the population density of the bacteria reaches a certain level, specific gene expression is initiated, and the bacteria produce bioluminescent proteins that emit light. Because the number of cells present in the environment (cell density) is the determining factor for signaling, bacterial signaling was named quorum sensing. In politics and business, a quorum is the minimum number of members required to be present to vote on an issue.

Quorum sensing uses autoinducers as signaling molecules. Autoinducers are signaling molecules secreted by bacteria to communicate with other bacteria of the same kind. The secreted autoinducers can be small, hydrophobic molecules such as acyl-homoserine lactone (AHL), or larger peptide-based molecules; each type of molecule has a different mode of action. When AHL enters target bacteria, it binds to transcription factors, which then switch gene expression on or off (Figure 9.17). The peptide autoinducers stimulate more complicated signaling pathways that include bacterial kinases.
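At its core, quorum sensing is a density-triggered switch: every cell secretes autoinducer at roughly the same rate, so autoinducer concentration tracks population density, and group behaviors switch on only above a threshold. The sketch below (Python) captures that logic; the per-cell secretion rate and threshold are arbitrary illustrative numbers.

```python
# Toy model of quorum sensing as a density-triggered switch.
# Autoinducer concentration is taken to be proportional to cell
# density; luminescence genes turn on only above a threshold.
# Both constants are illustrative, not measured values.

AUTOINDUCER_PER_CELL = 0.01  # arbitrary concentration units per cell
THRESHOLD = 5.0              # concentration needed to trigger the genes

def luminescence_on(cell_density: int) -> bool:
    autoinducer = AUTOINDUCER_PER_CELL * cell_density
    return autoinducer >= THRESHOLD

for density in (100, 400, 600, 2000):
    state = "ON" if luminescence_on(density) else "off"
    print(f"{density:>5} cells -> bioluminescence {state}")
```

Real quorum-sensing circuits sharpen this switch with positive feedback: as the Visual Connection below notes, the autoinducer also turns on transcription of the genes that produce more autoinducer.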
The changes in bacteria following exposure to autoinducers can be quite extensive. The pathogenic bacterium Pseudomonas aeruginosa has 616 different genes that respond to autoinducers.

Visual Connection
Which of the following statements about quorum sensing is false?
a. Autoinducer must bind to receptor to turn on transcription of genes responsible for the production of more autoinducer.
b. The receptor stays in the bacterial cell, but the autoinducer diffuses out.
c. Autoinducer can only act on a different cell; it cannot act on the cell in which it is made.
d. Autoinducer turns on genes that enable the bacteria to form a biofilm.

Some species of bacteria that use quorum sensing form biofilms, complex colonies of bacteria (often containing several species) that exchange chemical signals to coordinate the release of toxins that will attack the host. Bacterial biofilms (Figure 9.18) can sometimes be found on medical equipment; when biofilms invade implants such as hip or knee replacements or heart pacemakers, they can cause life-threatening infections.

Visual Connection
What advantage might biofilm production confer on the S. aureus inside the catheter?

Research on the details of quorum sensing has led to advances in growing bacteria for industrial purposes. Recent discoveries suggest that it may be possible to exploit bacterial signaling pathways to control bacterial growth; this process could replace or supplement antibiotics that are no longer effective in certain situations.

Link to Learning
Watch geneticist Bonnie Bassler discuss her discovery of quorum sensing in biofilm bacteria in squid.

Evolution Connection
Cellular Communication in Yeasts

The first life on our planet consisted of single-celled prokaryotic organisms that had limited interaction with each other. While some external signaling occurs between different species of single-celled organisms, the majority of signaling within bacteria and yeasts concerns only other members of the same species. The evolution of cellular communication is an absolute necessity for the development of multicellular organisms, and this innovation is thought to have required approximately 2.5 billion years to appear in early life forms.

Yeasts are single-celled eukaryotes and therefore have a nucleus and organelles characteristic of more complex life forms. Comparisons of the genomes of yeasts, nematode worms, fruit flies, and humans illustrate the evolution of increasingly complex signaling systems that allow for the efficient inner workings that keep humans and other complex life forms functioning correctly.

Kinases are a major component of cellular communication, and studies of these enzymes illustrate the evolutionary connectivity of different species. Yeasts have 130 types of kinases. More complex organisms such as nematode worms and fruit flies have 454 and 239 kinases, respectively. Of the 130 kinase types in yeast, 97 belong to the 55 subfamilies of kinases that are found in other eukaryotic organisms. The only obvious deficiency seen in yeasts is the complete absence of tyrosine kinases. It is hypothesized that phosphorylation of tyrosine residues is needed to control the more sophisticated functions of development, differentiation, and cellular communication used in multicellular organisms.

Because yeasts contain many of the same classes of signaling proteins as humans, these organisms are ideal for studying signaling cascades. Yeasts multiply quickly and are much simpler organisms than humans or other multicellular animals.
Therefore, the signaling cascades are also simpler and easier to study, although they contain similar counterparts to human signaling.²

2. G. Manning, G.D. Plowman, T. Hunter, and S. Sudarsanam, “Evolution of Protein Kinase Signaling from Yeast to Man,” Trends in Biochemical Sciences 27, no. 10 (2002): 514–520.

Link to Learning
Watch this collection of interview clips with biofilm researchers in “What Are Bacterial Biofilms?”